WorldWideScience

Sample records for imaging experiments based

  1. Developing students’ ideas about lens imaging: teaching experiments with an image-based approach

    Science.gov (United States)

    Grusche, Sascha

    2017-07-01

    Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists’ analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students’ ideas, teaching experiments are performed and evaluated using qualitative content analysis. Some of the students’ ideas have not been reported before, namely those related to blurry lens images, and those developed by the proposed teaching approach. To describe learning pathways systematically, a conception-versus-time coordinate system is introduced, specifying how teaching actions help students advance toward a scientific understanding.

  2. Effect of Reading Ability and Internet Experience on Keyword-Based Image Search

    Science.gov (United States)

    Lei, Pei-Lan; Lin, Sunny S. J.; Sun, Chuen-Tsai

    2013-01-01

    Image searches are now crucial for obtaining information, constructing knowledge, and building successful educational outcomes. We investigated how reading ability and Internet experience influence keyword-based image search behaviors and performance. We categorized 58 junior-high-school students into four groups of high/low reading ability and…

  3. Voxel-based Monte Carlo simulation of X-ray imaging and spectroscopy experiments

    International Nuclear Information System (INIS)

    Bottigli, U.; Brunetti, A.; Golosio, B.; Oliva, P.; Stumbo, S.; Vincze, L.; Randaccio, P.; Bleuet, P.; Simionovici, A.; Somogyi, A.

    2004-01-01

A Monte Carlo code for the simulation of X-ray imaging and spectroscopy experiments in heterogeneous samples is presented. The energy spectrum, polarization, and profile of the incident beam can be defined so that X-ray tube systems as well as synchrotron sources can be simulated. The sample is modeled as a 3D regular grid, with the chemical composition and density given at each point of the grid. Photoelectric absorption, fluorescent emission, and elastic and inelastic scattering are included in the simulation. The core of the simulation is a fast routine for calculating the path lengths of the photon trajectory's intersections with the grid voxels. The voxel representation is particularly useful for samples that cannot be well described by a small set of polyhedra, as is the case for most naturally occurring samples. In such cases, voxel-based simulations are much less expensive in terms of computational cost than simulations on a polygonal representation. The efficient scheme used for calculating the path lengths in the voxels and the use of variance reduction techniques make the code suitable for the detailed simulation of complex experiments on generic samples in a relatively short time. Examples of applications to X-ray imaging and spectroscopy experiments are discussed.
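
    The fast voxel-traversal routine referenced above is the heart of such codes. Below is a minimal Python sketch of an Amanatides–Woo-style traversal that returns the path length a ray spends in each voxel of a regular grid; it illustrates the general technique, not the authors' code, and for brevity assumes no exactly axis-parallel ray components.

    ```python
    import numpy as np

    def voxel_path_lengths(origin, direction, grid_shape, voxel_size=1.0):
        """Length of a ray's path through each voxel of a regular 3D grid,
        in the spirit of Amanatides & Woo (illustrative sketch)."""
        origin = np.asarray(origin, float)
        d = np.asarray(direction, float)
        d = d / np.linalg.norm(d)
        bounds = np.asarray(grid_shape) * voxel_size
        t0, t1 = -origin / d, (bounds - origin) / d
        t = max(np.max(np.minimum(t0, t1)), 0.0)   # entry parameter
        t_exit = np.min(np.maximum(t0, t1))        # exit parameter
        segments = []
        while t < t_exit - 1e-12:
            p = origin + (t + 1e-9) * d            # nudge inside current voxel
            idx = np.floor(p / voxel_size).astype(int)
            nxt = (idx + (d > 0)) * voxel_size     # next boundary per axis
            t_next = min(np.min((nxt - origin) / d), t_exit)
            segments.append((tuple(idx), t_next - t))
            t = t_next
        return segments  # list of (voxel index, path length) pairs
    ```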

  4. Voxel-based Monte Carlo simulation of X-ray imaging and spectroscopy experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bottigli, U. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy); Sezione INFN di Cagliari (Italy); Brunetti, A. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy); Golosio, B. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy) and Sezione INFN di Cagliari (Italy)]. E-mail: golosio@uniss.it; Oliva, P. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy); Stumbo, S. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy); Vincze, L. [Department of Chemistry, University of Antwerp (Belgium); Randaccio, P. [Dipartimento di Fisica dell' Universita di Cagliari and Sezione INFN di Cagliari (Italy); Bleuet, P. [European Synchrotron Radiation Facility, Grenoble (France); Simionovici, A. [European Synchrotron Radiation Facility, Grenoble (France); Somogyi, A. [European Synchrotron Radiation Facility, Grenoble (France)

    2004-10-08

A Monte Carlo code for the simulation of X-ray imaging and spectroscopy experiments in heterogeneous samples is presented. The energy spectrum, polarization, and profile of the incident beam can be defined so that X-ray tube systems as well as synchrotron sources can be simulated. The sample is modeled as a 3D regular grid, with the chemical composition and density given at each point of the grid. Photoelectric absorption, fluorescent emission, and elastic and inelastic scattering are included in the simulation. The core of the simulation is a fast routine for calculating the path lengths of the photon trajectory's intersections with the grid voxels. The voxel representation is particularly useful for samples that cannot be well described by a small set of polyhedra, as is the case for most naturally occurring samples. In such cases, voxel-based simulations are much less expensive in terms of computational cost than simulations on a polygonal representation. The efficient scheme used for calculating the path lengths in the voxels and the use of variance reduction techniques make the code suitable for the detailed simulation of complex experiments on generic samples in a relatively short time. Examples of applications to X-ray imaging and spectroscopy experiments are discussed.

  5. ROV Based Underwater Blurred Image Restoration

    Institute of Scientific and Technical Information of China (English)

    LIU Zhishen; DING Tianfu; WANG Gang

    2003-01-01

In this paper, we present a method of ROV-based image processing to restore blurred underwater images, based on the theory of light and image transmission in the sea. A computer is used to simulate the maximum detection range of the ROV under different water body conditions, and the irradiance received by the video camera at different detection ranges is calculated. The ROV's detection performance under different water body conditions is given by simulation. Based on the simulation, we restore the blurred underwater images using the Wiener filter, which is shown to be a simple, useful method for underwater image restoration in the ROV underwater experiments. We also present examples of restored images of an underwater standard target taken by the video camera in these experiments.
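
    For readers unfamiliar with the restoration step, frequency-domain Wiener deconvolution can be sketched in a few lines. This is a generic illustration, not the paper's ROV-specific degradation model; the point-spread function and noise-to-signal ratio are assumed inputs.

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, nsr=0.01):
        # psf: assumed blur kernel, same shape as the image and centered;
        # nsr: assumed noise-to-signal power ratio (a tuning constant).
        H = np.fft.fft2(np.fft.ifftshift(psf))    # transfer function of the blur
        G = np.fft.fft2(blurred)                  # spectrum of the degraded image
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # classical Wiener filter
        return np.real(np.fft.ifft2(W * G))
    ```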

  6. Pictures, images, and recollective experience.

    Science.gov (United States)

    Dewhurst, S A; Conway, M A

    1994-09-01

    Five experiments investigated the influence of picture processing on recollective experience in recognition memory. Subjects studied items that differed in visual or imaginal detail, such as pictures versus words and high-imageability versus low-imageability words, and performed orienting tasks that directed processing either toward a stimulus as a word or toward a stimulus as a picture or image. Standard effects of imageability (e.g., the picture superiority effect and memory advantages following imagery) were obtained only in recognition judgments that featured recollective experience and were eliminated or reversed when recognition was not accompanied by recollective experience. It is proposed that conscious recollective experience in recognition memory is cued by attributes of retrieved memories such as sensory-perceptual attributes and records of cognitive operations performed at encoding.

  7. REMOTE SENSING IMAGE QUALITY ASSESSMENT EXPERIMENT WITH POST-PROCESSING

    Directory of Open Access Journals (Sweden)

    W. Jiang

    2018-04-01

This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system with those same parameters. The gathered optical sampled images are then processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. Image quality is assessed with the just noticeable difference (JND) subjective assessment method based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be determined. The six JND subjective assessment data sets validate each other. The main conclusions are: image post-processing can improve image quality, even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with the proposed post-processing method, image quality is better when the camera MTF lies within a small range.

  8. Content Based Medical Image Retrieval for Histopathological, CT and MRI Images

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-09-01

A content-based approach is followed for medical images. The purpose of this study is to assess the stability of these methods for medical image retrieval. The methods used for color-based retrieval of histopathological images are the color co-occurrence matrix (CCM) and a histogram with meta features. For texture-based retrieval, the gray-level co-occurrence matrix (GLCM) and local binary patterns (LBP) were used. For shape-based retrieval, Canny edge detection and Otsu's method with a multivariable threshold were used. Texture- and shape-based retrieval were implemented using MRI (magnetic resonance) images. The most remarkable characteristic of the article is its content-based approach for each medical imaging modality. Our efforts focused on the initial visual search. In our experiments, the histogram with meta features in color-based retrieval of histopathological images shows a precision of 60% and recall of 30%, whereas GLCM in texture-based retrieval of MRI images shows a precision of 70% and recall of 20%. Shape-based retrieval of MRI images shows a precision of 50% and recall of 25%. The retrieval results show that this simple approach is successful.
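
    A plausible version of the texture descriptors named above (GLCM statistics plus an LBP histogram) can be assembled with scikit-image. This is one reasonable setup, not necessarily the exact distances, angles, or bin counts the authors used.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

    def texture_features(gray_img):
        """GLCM statistics plus an LBP histogram for a 2D uint8 image --
        a generic CBIR texture descriptor, not the paper's exact setup."""
        glcm = graycomatrix(gray_img, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        glcm_stats = [graycoprops(glcm, p).mean()
                      for p in ("contrast", "homogeneity", "energy", "correlation")]
        lbp = local_binary_pattern(gray_img, P=8, R=1.0, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.concatenate([glcm_stats, lbp_hist])
    ```

    Retrieval then reduces to ranking database images by (for example) Euclidean distance between such feature vectors.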

  9. NSCT BASED LOCAL ENHANCEMENT FOR ACTIVE CONTOUR BASED IMAGE SEGMENTATION APPLICATION

    Directory of Open Access Journals (Sweden)

    Hiren Mewada

    2010-08-01

Because of their cross-disciplinary nature, active contour modeling techniques have been utilized extensively for image segmentation. In traditional active contour segmentation techniques based on level set methods, the energy functions are defined in terms of the intensity gradient. This makes them highly sensitive to situations where the underlying image content is characterized by intensity non-homogeneities due to illumination and contrast conditions, which is the main obstacle to making them fully automatic image segmentation techniques. This paper introduces an approach to this problem based on image enhancement. The enhanced image is obtained using the nonsubsampled contourlet transform (NSCT), which strengthens the edges in directions where the illumination is poor; an active contour model based on the level set technique is then used to segment the object. Experimental results demonstrate that the proposed method can be used together with existing active contour segmentation methods under conditions of intensity non-homogeneity to make them fully automatic.

  10. Remote sensing image segmentation based on Hadoop cloud platform

    Science.gov (United States)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies a remote sensing image segmentation method based on the Hadoop platform. On the basis of analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method that combines OpenCV with the Hadoop cloud platform. Firstly, the MapReduce image processing model for the Hadoop cloud platform is designed, the image input and output are customized, and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment is performed on remote sensing images, and the same experiment is repeated with a MATLAB implementation of the Mean Shift algorithm for comparison. The experimental results show that, while maintaining good segmentation quality, remote sensing image segmentation based on the Hadoop cloud platform is greatly accelerated compared with the standalone MATLAB segmentation, with a great improvement in the effectiveness of image segmentation.
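
    The per-tile segmentation step of such a pipeline reduces to a single OpenCV call; the sketch below shows only that local step, with the MapReduce wiring omitted and the file names as placeholders.

    ```python
    import cv2

    # Local Mean Shift segmentation step, as it might look inside one
    # Hadoop map task ("tile.png" stands in for one image split).
    img = cv2.imread("tile.png")                       # 8-bit, 3-channel tile
    shifted = cv2.pyrMeanShiftFiltering(img, sp=21, sr=30, maxLevel=2)
    cv2.imwrite("tile_segmented.png", shifted)
    ```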

  11. Destination visual image and expectation of experiences

    DEFF Research Database (Denmark)

    Ye, H.; Tussyadiah, Iis

    2011-01-01

A unique experience is the essence of tourism sought by tourists. The most effective way to communicate the notion of a tourism experience at a destination is to provide visual cues that stimulate the imagination and connect with potential tourists in a personal way. This study aims at understanding how a visual image is relevant to the expectation of experiences by deconstructing images of a destination and interpreting visitors' perceptions of these images and the experiences associated with them. The results suggest that tourists with different understandings of desirable experiences found…

  12. Preliminary Experience with Small Animal SPECT Imaging on Clinical Gamma Cameras

    Directory of Open Access Journals (Sweden)

    P. Aguiar

    2014-01-01

The traditional lack of techniques suitable for in vivo imaging has generated great interest in molecular imaging for preclinical research. Nevertheless, its use spreads slowly due to the difficulty of justifying the high cost of current dedicated preclinical scanners. An alternative for lowering the costs is to repurpose old clinical gamma cameras for preclinical imaging. In this paper we assess the performance of a portable device working coupled to a single-head clinical gamma camera, and we present our preliminary experience in several small animal applications. Our findings, based on phantom experiments and animal studies, provided an image quality, in terms of contrast-noise trade-off, comparable to dedicated preclinical pinhole-based scanners. We feel that our portable device offers an opportunity to recycle the widespread availability of clinical gamma cameras in nuclear medicine departments for small animal SPECT imaging, and we hope that it can contribute to spreading the use of preclinical imaging within institutions on tight budgets.

  13. Holography Experiments on Optical Imaging.

    Science.gov (United States)

    Bonczak, B.; Dabrowski, J.

    1979-01-01

    Describes experiments intended to produce a better understanding of the holographic method of producing images and optical imaging by other optical systems. Application of holography to teaching physics courses is considered. (Author/SA)

  14. First Human Experience with Directly Image-able Iodinated Embolization Microbeads

    Energy Technology Data Exchange (ETDEWEB)

    Levy, Elliot B., E-mail: levyeb@cc.nih.gov; Krishnasamy, Venkatesh P. [National Institutes of Health, Center for Interventional Oncology (United States); Lewis, Andrew L.; Willis, Sean; Macfarlane, Chelsea [Biocompatibles, UK Ltd, A BTG International Group Company (United Kingdom); Anderson, Victoria [National Institutes of Health, Center for Interventional Oncology (United States); Bom, Imramsjah MJ van der [Clinical Science IGT Systems North & Latin America, Philips, Philips, Image Guided Interventions (United States); Radaelli, Alessandro [Image-Guided Therapy Systems, Philips, Philips, Image Guided Interventions (Netherlands); Dreher, Matthew R. [Biocompatibles, UK Ltd, A BTG International Group Company (United Kingdom); Sharma, Karun V. [Children’s National Medical Center (United States); Negussie, Ayele; Mikhail, Andrew S. [National Institutes of Health, Center for Interventional Oncology (United States); Geschwind, Jean-Francois H. [Department of Radiology and Biomedical Imaging (United States); Wood, Bradford J. [National Institutes of Health, Center for Interventional Oncology (United States)

    2016-08-15

Purpose: To describe the first clinical experience with a directly image-able, inherently radio-opaque microspherical embolic agent for transarterial embolization of liver tumors. Methodology: LC Bead LUMI™ is a new product based upon sulfonate-modified polyvinyl alcohol hydrogel microbeads with covalently bound iodine (~260 mg I/ml). 70–150 μm LC Bead LUMI™ iodinated microbeads were injected selectively via a 2.8 Fr microcatheter to near-complete flow stasis into hepatic arteries in three patients with hepatocellular carcinoma, carcinoid, or neuroendocrine tumor. A custom imaging platform tuned for LC LUMI™ microbead conspicuity using a cone beam CT (CBCT)/angiographic C-arm system (Allura Clarity FD20, Philips) was used along with CBCT embolization treatment planning software (EmboGuide, Philips). Results: LC Bead LUMI™ image-able microbeads were easily delivered and monitored during the procedure using fluoroscopy, single-shot radiography (SSD), digital subtraction angiography (DSA), dual-phase enhanced and unenhanced CBCT, and unenhanced conventional CT obtained 48 h after the procedure. Intra-procedural imaging demonstrated tumor at risk for potential under-treatment, defined as a paucity of image-able microbeads within a portion of the tumor, which was confirmed at 48 h CT imaging. Fusion of pre- and post-embolization CBCT identified vessels without beads that corresponded to enhancing tumor tissue in the same location on follow-up imaging (48 h post). Conclusion: LC Bead LUMI™ image-able microbeads provide real-time feedback and geographic localization of treatment during the procedure. The distribution and density of image-able beads within a tumor need further evaluation as an additional endpoint for embolization.

  15. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster-ensemble-based image segmentation algorithm that overcomes several problems of traditional methods. We make two main contributions. First, we introduce the cluster ensemble concept to effectively fuse the segmentation results from different types of visual features, which delivers a better final result and much more stable performance across broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task; this improves the final segmentation results by combining the spatial information of the image with the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional algorithms based on a single type of feature or on multiple types of features, since our algorithm fuses multiple types of features effectively for better segmentation results. Moreover, our method proves very competitive in comparison with other state-of-the-art segmentation algorithms.
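
    A common way to realize the cluster-ensemble idea is evidence accumulation over a co-association matrix. The sketch below fuses k-means clusterings computed on several feature types; it omits the paper's PageRank refinement and spatial term, and the cluster count is an arbitrary choice.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform
    from sklearn.cluster import KMeans

    def coassociation_ensemble(feature_sets, n_clusters=4):
        """Fuse clusterings from several feature types via a co-association
        matrix (evidence accumulation) -- a generic sketch only."""
        n = feature_sets[0].shape[0]
        coassoc = np.zeros((n, n))
        for X in feature_sets:                   # one clustering per feature type
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
            coassoc += (labels[:, None] == labels[None, :])
        coassoc /= len(feature_sets)
        dist = 1.0 - coassoc                     # co-association -> distance
        np.fill_diagonal(dist, 0.0)
        Z = linkage(squareform(dist, checks=False), method="average")
        return fcluster(Z, t=n_clusters, criterion="maxclust")
    ```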

  16. Image denoising based on noise detection

    Science.gov (United States)

    Jiang, Yuanxiang; Yuan, Rui; Sun, Yuqiu; Tian, Jinwen

    2018-03-01

Because of noise points in images, any denoising operation changes the original information of non-noise pixels. A noise detection algorithm based on fractional calculus is therefore proposed in this paper. First, the image is convolved to obtain directional gradient masks. Then, the mean gray level is calculated to obtain gradient detection maps. Next, a logical product is taken to acquire the noise position image. Comparing visual effects and evaluation parameters after processing, the experimental results show that the detection-based denoising algorithm outperforms traditional methods in both subjective and objective aspects.
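
    A minimal detect-then-denoise loop of this general shape is easy to sketch, with integer-order gradient masks standing in for the paper's fractional-calculus masks; the threshold is an arbitrary choice.

    ```python
    import numpy as np
    from scipy.ndimage import convolve, median_filter

    def detect_and_denoise(img, thresh=60.0):
        """Flag likely noise pixels via directional gradient masks, then
        median-filter only those pixels (illustrative stand-in for the
        fractional-calculus masks described above)."""
        kx = np.array([[-1.0, 0.0, 1.0]])           # horizontal gradient mask
        ky = kx.T                                   # vertical gradient mask
        gx = np.abs(convolve(img.astype(float), kx, mode="reflect"))
        gy = np.abs(convolve(img.astype(float), ky, mode="reflect"))
        noise_mask = (gx > thresh) & (gy > thresh)  # logical product of detections
        out = img.copy()
        out[noise_mask] = median_filter(img, size=3)[noise_mask]
        return out, noise_mask
    ```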

  17. Model-based microwave image reconstruction: simulations and experiments

    International Nuclear Information System (INIS)

    Ciocan, Razvan; Jiang Huabei

    2004-01-01

We describe an integrated microwave imaging system that can provide spatial maps of the dielectric properties of heterogeneous media from tomographically collected data. The hardware system (800–1200 MHz) was built around a lock-in amplifier with 16 fixed antennas. The reconstruction algorithm was implemented using a Newton iterative method with combined Marquardt-Tikhonov regularization. System performance was evaluated using heterogeneous media mimicking human breast tissue. The finite element method, coupled with the Bayliss and Turkel radiation boundary conditions, was applied to compute the electric field distribution in the heterogeneous media of interest. The results show that inclusions embedded in a 76 mm diameter background medium can be quantitatively reconstructed from both simulated and experimental data. Quantitative analysis of the microwave images obtained suggests that an inclusion of 14 mm in diameter is the smallest object that can presently be fully characterized using experimental data, while objects as small as 10 mm in diameter can be quantitatively resolved with simulated data.
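
    The core numerical step of such model-based reconstruction is a regularized Gauss-Newton update. A generic sketch, with the Jacobian and residual assumed to come from the forward field solver:

    ```python
    import numpy as np

    def regularized_newton_step(J, residual, lam=1e-3):
        """One Marquardt-Tikhonov regularized Gauss-Newton update for an
        ill-posed inverse problem: solve (J^T J + lam*I) dx = J^T r.
        Generic sketch of the update used in model-based reconstruction."""
        JtJ = J.T @ J
        rhs = J.T @ residual
        dx = np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), rhs)
        return dx  # property update: x_new = x + dx
    ```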

  18. Experiment Design Regularization-Based Hardware/Software Codesign for Real-Time Enhanced Imaging in Uncertain Remote Sensing Environment

    Directory of Open Access Journals (Sweden)

    Castillo Atoche A

    2010-01-01

A new aggregated hardware/software (HW/SW) codesign approach to optimizing digital signal processing techniques for enhanced imaging with real-world uncertain remote sensing (RS) data, based on the concept of descriptive experiment design regularization (DEDR), is addressed. We consider applications of the developed approach to typical single-look synthetic aperture radar (SAR) imaging systems operating in real-world uncertain RS scenarios. The software design aims at an algorithmic-level decrease of the computational load of large-scale SAR image enhancement tasks. The innovative algorithmic idea is to incorporate into the DEDR-optimized fixed-point iterative reconstruction/enhancement procedure a convex convergence enforcement regularization, constructed via proper multilevel projections onto convex sets (POCS) in the solution domain. The hardware design is performed via systolic array computing based on a Xilinx Field Programmable Gate Array (FPGA) XC4VSX35-10ff668 and aims at implementing the unified DEDR-POCS image enhancement/reconstruction procedures in a computationally efficient multi-level parallel fashion that meets (near) real-time image processing requirements. Finally, we comment on simulation results indicative of significantly increased performance efficiency, both in resolution enhancement and in computational complexity reduction metrics, gained with the proposed aggregated HW/SW codesign approach.
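
    As one concrete reference point for the POCS ingredient, a minimal fixed-point iteration with a convex-set projection can be written as below. This is a textbook Landweber/POCS sketch under a generic linear model, far simpler than the DEDR-optimized procedure described above.

    ```python
    import numpy as np

    def pocs_landweber(A, y, n_iter=200, step=1e-3, lo=0.0, hi=None):
        """Fixed-point (Landweber) iteration with projection onto a convex
        set after every update -- a minimal POCS sketch."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + step * (A.T @ (y - A @ x))   # fixed-point/gradient update
            x = np.clip(x, lo, hi)               # projection onto box constraints
        return x
    ```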

  19. Patient-directed Internet-based Medical Image Exchange: Experience from an Initial Multicenter Implementation.

    Science.gov (United States)

    Greco, Giampaolo; Patel, Anand S; Lewis, Sara C; Shi, Wei; Rasul, Rehana; Torosyan, Mary; Erickson, Bradley J; Hiremath, Atheeth; Moskowitz, Alan J; Tellis, Wyatt M; Siegel, Eliot L; Arenson, Ronald L; Mendelson, David S

    2016-02-01

Inefficient transfer of personal health records among providers negatively impacts the quality of health care and increases cost. This multicenter study evaluates the implementation of the first Internet-based image-sharing system that gives patients ownership and control of their imaging exams, including an assessment of patient satisfaction. Patients receiving any medical imaging exams in four academic centers were eligible to have images uploaded into an online, Internet-based personal health record. Satisfaction surveys were provided during recruitment, with questions on ease of use, privacy and security, and timeliness of access to images. Responses were rated on a five-point scale and compared using logistic regression and McNemar's test. A total of 2562 patients enrolled from July 2012 to August 2013. The median number of imaging exams uploaded per patient was 5. Most commonly, exams were plain X-rays (34.7%), computed tomography (25.7%), and magnetic resonance imaging (16.1%). Of 502 (19.6%) patient surveys returned, 448 indicated the method of image sharing (Internet, compact discs [CDs], both, other). Nearly all patients (96.5%) responded favorably to having direct access to images, and 78% reported viewing their medical images independently. There was no difference between Internet and CD users in satisfaction with privacy and security or timeliness of access to medical images. A greater percentage of Internet users than CD users reported access without difficulty (88.3% vs. 77.5%). An Internet-based image-sharing system is feasible and surpasses the use of CDs with respect to accessibility of imaging exams while generating similar satisfaction with respect to privacy.

  20. Image encryption based on permutation-substitution using chaotic map and Latin Square Image Cipher

    Science.gov (United States)

Panduranga, H. T.; Naveen Kumar, S. K.; Kiran

    2014-06-01

In this paper we present an image encryption method based on permutation-substitution using a chaotic map and the Latin square image cipher. The proposed method consists of a permutation process and a substitution process. In the permutation process, the plain image is permuted according to a chaotic sequence generated by the chaotic map. In the substitution process, a Latin square image cipher (LSIC) is generated from a 256-bit secret key and used as a key image, and an XOR operation is performed between the permuted image and the key image. The proposed method can be applied to any plain image, including images of unequal width and height, and resists statistical and differential attacks. Experiments were carried out on different images of different sizes. The proposed method possesses a large key space to resist brute-force attacks.
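
    The permutation-substitution structure described here is easy to prototype. The sketch below uses a logistic map for the chaotic sequence and treats the key image as a given uint8 array standing in for the Latin square image cipher; the map parameters are arbitrary choices, not the paper's.

    ```python
    import numpy as np

    def logistic_sequence(n, x0=0.3579, r=3.99):
        """Chaotic sequence from the logistic map x <- r*x*(1-x)."""
        seq, x = np.empty(n), x0
        for i in range(n):
            x = r * x * (1.0 - x)
            seq[i] = x
        return seq

    def encrypt(plain, key_image, x0=0.3579):
        """Permutation by chaotic-map ranking, then XOR substitution with a
        key image (uint8 arrays of identical shape assumed)."""
        flat = plain.ravel()
        perm = np.argsort(logistic_sequence(flat.size, x0))  # chaotic permutation
        permuted = flat[perm].reshape(plain.shape)
        return permuted ^ key_image                          # substitution stage

    def decrypt(cipher, key_image, x0=0.3579):
        perm = np.argsort(logistic_sequence(cipher.size, x0))
        inv = np.argsort(perm)                               # inverse permutation
        return (cipher ^ key_image).ravel()[inv].reshape(cipher.shape)
    ```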

  1. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    Science.gov (United States)

    Kim, Deok-Hwan

As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for a desired image. Even though new images are profitable to service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preferences by representing his purchase sequence in the visual feature space. The proposed approach represents the images purchased in the past as feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher-quality recommendations and better performance than typical collaborative filtering and content-based filtering techniques.
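
    The neighbor-selection idea can be made concrete with a small sketch: each user's purchases become k-means centroids, and users are compared by an average minimum centroid distance. This is one plausible form of the inter-cluster distance, not necessarily the paper's definition.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_centroids(purchase_features, k=3):
        """Summarize a user's purchased-image features as k cluster centroids."""
        k = min(k, len(purchase_features))
        return KMeans(n_clusters=k, n_init=10).fit(purchase_features).cluster_centers_

    def inter_cluster_distance(centroids_a, centroids_b):
        """Symmetric average minimum distance between two users' feature
        clusters; smaller values suggest candidate neighbors."""
        d = np.linalg.norm(centroids_a[:, None, :] - centroids_b[None, :, :], axis=-1)
        return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
    ```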

  2. Comparisons of three alternative breast modalities in a common phantom imaging experiment

    International Nuclear Information System (INIS)

    Li Dun; Meaney, Paul M.; Tosteson, Tor D.; Jiang Shudong; Kerner, Todd E.; McBride, Troy O.; Pogue, Brian W.; Hartov, Alexander; Paulsen, Keith D.

    2003-01-01

Four model-based imaging systems are currently being developed for breast cancer detection at Dartmouth College. A potential advantage of multimodality imaging is the prospect of combining the information collected by each system to provide a more complete diagnostic tool covering the full range of the patient and pathology spectra. In this paper it is shown, through common phantom experiments on three of these imaging systems, that different types of image information can be correlated to potentially improve the reliability of tumor detection. Imaging experiments were conducted with common phantoms that mimic both the dielectric and optical properties of the human breast. Cross-modality comparison was investigated through a statistical study based on repeated data sets of reconstructed parameters for each modality. The system standard error between all methods was generally less than 10%, and the correlation coefficient across modalities ranged from 0.68 to 0.91. Future work includes minimizing bias (artifacts) on the periphery of electrical impedance spectroscopy images to improve cross-modality correlation, and implementing multimodality diagnosis for breast cancer detection.

  3. Clinical Experiences With Onboard Imager KV Images for Linear Accelerator-Based Stereotactic Radiosurgery and Radiotherapy Setup

    International Nuclear Information System (INIS)

    Hong, Linda X.; Chen, Chin C.; Garg, Madhur; Yaparpalvi, Ravindra; Mah, Dennis

    2009-01-01

Purpose: To report our clinical experiences with on-board imager (OBI) kV image verification for cranial stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) treatments. Methods and Materials: Between January 2007 and May 2008, 42 patients (57 lesions) were treated with SRS with head frame immobilization and 13 patients (14 lesions) were treated with SRT with face mask immobilization at our institution. No margin was added to the gross tumor for SRS patients, and a 3-mm three-dimensional margin was added to the gross tumor to create the planning target volume for SRT patients. After localizing the patient with the stereotactic target positioner (TaPo), orthogonal kV images were taken using the OBI and fused to planning digitally reconstructed radiographs. Suggested couch shifts in the vertical, longitudinal, and lateral directions were recorded. kV images were also taken immediately after treatment for 21 SRS patients and on a weekly basis for 6 SRT patients to assess any intrafraction changes. Results: For SRS patients, 57 pretreatment kV image sets were evaluated, and the suggested shifts were all within 1 mm in any direction (i.e., within the accuracy of image fusion). For SRT patients, the suggested shifts were out of the 3-mm tolerance for 31 of 309 setups. Intrafraction motion was detected in 3 SRT patients. Conclusions: kV imaging provides a useful tool for SRS and SRT setup. For SRS setup with a head frame, it provides radiographic confirmation of localization with the stereotactic target positioner. For SRT with a mask, a 3-mm margin is adequate and feasible for routine setup when the TaPo is combined with kV imaging.

  4. CLASSIFICATION OF CROP-SHELTER COVERAGE BY RGB AERIAL IMAGES: A COMPENDIUM OF EXPERIENCES AND FINDINGS

    Directory of Open Access Journals (Sweden)

    Claudia Arcidiacono

    2010-09-01

Image processing is a powerful tool for selective data extraction from high-content images. In agricultural studies, image processing has been applied to different purposes; among them, the classification of crop shelters has recently been considered, especially in areas where there is a lack of public control of building activity. Applying image processing to crop-shelter feature recognition makes it possible to automatically produce thematic maps that constitute basic knowledge for local authorities coping with environmental problems and for technicians to use in their planning activity. This paper reviews the authors' experience in defining methodologies, based on the main image processing methods, for crop-shelter feature extraction from aerial digital images. Experiences with pixel-based and object-oriented methods are described and discussed. The results show that the object-oriented methodology improves crop-shelter classification and reduces computational time compared to pixel-based methodologies.

  5. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter

    Directory of Open Access Journals (Sweden)

    Qingjiao Sun

    2016-01-01

Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and the guided image filter (GIF). Firstly, preprocessing comprising stain normalization and wavelet denoising is performed on the Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light on the high-frequency part of the image and to correct its intensity inhomogeneity and detail discontinuity. Next, the HDR pathological image is generated by a least squares method from the low dynamic range (LDR) image and the H and E channel images. Finally, the fine enhanced image is acquired after a detail enhancement process. Experiments on 140 pathological images demonstrate the performance advantages of the proposed method over related work.

  6. Image based SAR product simulation for analysis

    Science.gov (United States)

    Domik, G.; Leberl, F.

    1987-01-01

SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR image as input; this can be termed 'image-based simulation'. Different methods of performing this SAR prediction are presented, and their advantages and disadvantages are discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used to verify the concept: input images from ascending orbits were converted into images from a descending orbit, and the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  7. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical image data has grown rapidly. However, commonly used compression methods cannot achieve satisfying results. Methods: In this paper, based on existing experimental results and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is considered, the contrast sensitivity function (CSF) is introduced as the main research issue for the human visual system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including its CSF characteristics, to the correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: Experiments were performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm achieves better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.
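
    The subband-weighting idea can be illustrated with PyWavelets: decompose, quantize each detail level with a step that grows where the assumed visual sensitivity is lower, and reconstruct. The weighting rule below is a toy stand-in for a real CSF model.

    ```python
    import numpy as np
    import pywt

    def csf_weighted_quantize(img, levels=3, base_step=4.0):
        # coeffs[1] holds the coarsest detail bands; the last entry holds the
        # finest (highest-frequency) bands, which get the largest quantization
        # step here -- a toy sensitivity rule, not the actual CSF.
        coeffs = pywt.wavedec2(img.astype(float), "bior4.4", level=levels)
        out = [np.round(coeffs[0] / base_step) * base_step]   # approximation band
        for lvl, details in enumerate(coeffs[1:], start=1):
            step = base_step * 2.0 ** (lvl - 1)
            out.append(tuple(np.round(d / step) * step for d in details))
        return pywt.waverec2(out, "bior4.4")
    ```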

  8. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical image data has grown rapidly. However, commonly used compression methods cannot achieve satisfying results. Methods: In this paper, based on existing experimental results and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is considered, the contrast sensitivity function (CSF) is introduced as the main research issue for the human visual system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including its CSF characteristics, to the correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: Experiments were performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm achieves better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  9. Optical colour image watermarking based on phase-truncated linear canonical transform and image decomposition

    Science.gov (United States)

    Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun

    2018-05-01

This paper presents a novel optical colour image watermarking scheme based on the phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In the proposed scheme, a PT-LCT-based asymmetric cryptosystem is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptosystem, which can be optically implemented by double random phase encoding with a quadratic phase system, provides higher security against various common cryptographic attacks. The ID-based multilevel embedding method, which can be implemented digitally on a computer, disperses the information of the colour watermark more evenly in the colour host image. The proposed colour image watermarking scheme possesses high security and achieves higher robustness while preserving the watermark's invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.
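
    Phase truncation is simple to demonstrate in the Fourier domain (the Fourier transform being a special case of the linear canonical transform). The sketch below shows only the asymmetry principle, not the paper's full LCT/decomposition scheme: the amplitude becomes the noise-like ciphertext, and the truncated phase serves as the private key.

    ```python
    import numpy as np

    def pt_encrypt(img, seed=7):
        """Phase-truncated Fourier encoding (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        rpm = np.exp(2j * np.pi * rng.random(img.shape))  # random phase mask
        spec = np.fft.fft2(img * rpm)
        cipher = np.abs(spec)                   # phase truncation -> ciphertext
        priv_key = np.exp(1j * np.angle(spec))  # truncated phase -> private key
        return cipher, priv_key

    def pt_decrypt(cipher, priv_key):
        return np.abs(np.fft.ifft2(cipher * priv_key))
    ```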

  10. High dynamic range image acquisition based on multiplex cameras

    Science.gov (United States)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

High dynamic range imaging is an important technology for photoelectric information acquisition, providing a higher dynamic range and more image detail, and better reflecting the real environment's light and color information. Current methods that synthesize a high dynamic range image from sequences of differently exposed images cannot adapt to dynamic scenes: they fail to overcome the effects of moving targets, resulting in ghosting. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. First, sequences of differently exposed images are captured with the camera array, the deviation between images is obtained using a derivative optical flow method based on color gradients, and the images are aligned. Then, a high dynamic range fusion weighting function is established by combining the inverse camera response function with the deviation between images, and is applied to generate a high dynamic range image. Experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes, achieving good results.
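
    OpenCV exposes the two ingredients named here, inverse camera response estimation and weighted merging, directly. A minimal sketch on an already-aligned exposure stack (file names and exposure times are placeholders; the paper's optical-flow alignment step is omitted):

    ```python
    import cv2
    import numpy as np

    # Merge an aligned exposure stack into an HDR radiance map via an
    # estimated inverse camera response function (CRF).
    imgs = [cv2.imread(f) for f in ("low.jpg", "mid.jpg", "high.jpg")]
    times = np.array([1 / 250.0, 1 / 60.0, 1 / 15.0], dtype=np.float32)

    crf = cv2.createCalibrateDebevec().process(imgs, times)   # inverse CRF
    hdr = cv2.createMergeDebevec().process(imgs, times, crf)  # radiance map
    cv2.imwrite("out.hdr", hdr)
    ```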

  11. Pixel extraction based integral imaging with controllable viewing direction

    International Nuclear Information System (INIS)

    Ji, Chao-Chao; Deng, Huan; Wang, Qiong-Hua

    2012-01-01

We propose pixel extraction based integral imaging with a controllable viewing direction. The proposed integral imaging can provide viewers with three-dimensional (3D) images within a very small viewing angle. The viewing angle and the viewing direction of the reconstructed 3D images are controlled by the pixels extracted from an elemental image array. Theoretical analysis and a 3D display experiment on the viewing-direction-controllable integral imaging are carried out, and the experimental results verify the correctness of the theory. A 3D display based on this integral imaging can protect the viewer's privacy and has great potential for televisions showing multiple 3D programs at the same time.

  12. Portable Imaging Polarimeter and Imaging Experiments; TOPICAL

    International Nuclear Information System (INIS)

    PHIPPS, GARY S.; KEMME, SHANALYN A.; SWEATT, WILLIAM C.; DESCOUR, M.R.; GARCIA, J.P.; DERENIAK, E.L.

    1999-01-01

Polarimetry is the method of recording the state of polarization of light. Imaging polarimetry extends this method to recording the spatially resolved state of polarization within a scene. Imaging-polarimetry data have the potential to improve the detection of manmade objects in natural backgrounds. We have constructed a midwave infrared complete imaging polarimeter consisting of a fixed wire-grid polarizer and a rotating form-birefringent retarder. The retardance and the orientation angles of the retarder were optimized to minimize the sensitivity of the instrument to noise in the measurements. The optimal retardance was found to be 132° rather than the typical 90°. The complete imaging polarimeter utilized a liquid-nitrogen-cooled PtSi camera. The fixed wire-grid polarizer was located at the cold stop inside the camera dewar. The complete imaging polarimeter was operated in the 4.42–5 μm spectral range. A series of imaging experiments was performed using as targets a surface of water, an automobile, and an aircraft. Further analysis of the polarization measurements revealed that in all three cases the magnitude of circular polarization was comparable to the noise in the calculated Stokes-vector components.

  13. A Subdivision-Based Representation for Vector Image Editing.

    Science.gov (United States)

    Liao, Zicheng; Hoppe, Hugues; Forsyth, David; Yu, Yizhou

    2012-11-01

    Vector graphics has been employed in a wide variety of applications due to its scalability and editability. Editability is a high priority for artists and designers who wish to produce vector-based graphical content with user interaction. In this paper, we introduce a new vector image representation based on piecewise smooth subdivision surfaces, which is a simple, unified and flexible framework that supports a variety of operations, including shape editing, color editing, image stylization, and vector image processing. These operations effectively create novel vector graphics by reusing and altering existing image vectorization results. Because image vectorization yields an abstraction of the original raster image, controlling the level of detail of this abstraction is highly desirable. To this end, we design a feature-oriented vector image pyramid that offers multiple levels of abstraction simultaneously. Our new vector image representation can be rasterized efficiently using GPU-accelerated subdivision. Experiments indicate that our vector image representation achieves high visual quality and better supports editing operations than existing representations.

  14. A REGION-BASED MULTI-SCALE APPROACH FOR OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    T. Kavzoglu

    2016-06-01

Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e., groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness, and band weights) to be set by the analyst, the scale parameter stands out as the most important one in the segmentation process. Estimating the optimal scale parameter is crucial for increasing classification accuracy, which depends on image resolution, image object size, and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate, and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest-neighbour classifier was applied in all segmentation experiments, and equal numbers of pixels were randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images; region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA, with the difference in classification accuracy reaching 10% in terms of overall accuracy.
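
    The LV-RoC idea can be reproduced with any segmentation algorithm exposing a scale-like parameter. In the sketch below, scikit-image's Felzenszwalb segmentation stands in for the multiresolution segmentation behind the ESP-2 tool; peaks in the rate-of-change curve suggest candidate scales.

    ```python
    import numpy as np
    from skimage.segmentation import felzenszwalb

    def lv_roc(image, scales):
        # Local variance (LV): mean per-segment standard deviation per scale;
        # rate of change (RoC): percent change of LV between successive scales.
        lv = []
        for s in scales:
            labels = felzenszwalb(image, scale=s)
            lv.append(np.mean([image[labels == l].std()
                               for l in np.unique(labels)]))
        lv = np.asarray(lv)
        roc = 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]
        return lv, roc   # peaks in roc mark candidate scale parameters
    ```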

  15. Superpixel-Based Feature for Aerial Image Scene Recognition

    Directory of Open Access Journals (Sweden)

    Hongguang Li

    2018-01-01

Image scene recognition is a core technology for many aerial remote sensing applications. Different landforms appear as different scenes in aerial imaging, and all landform information is regarded as valuable for aerial image scene recognition. However, the conventional features of the Bag-of-Words model are designed using local points or other related information and thus are unable to fully describe landform areas. This limitation cannot be ignored when the aim is accurate aerial scene recognition. A novel superpixel-based feature is proposed in this study to characterize aerial image scenes, and, based on the proposed feature, a Bag-of-Words scene recognition method for aerial imaging is designed. The proposed superpixel-based feature utilizes landform information, spanning from the top-level task of landform superpixel extraction down to the bottom-level expression of feature vectors. This characterization technique comprises the following steps: simple linear iterative clustering (SLIC) based superpixel segmentation, adaptive filter bank construction, Lie group-based feature quantification, and visual saliency model-based feature weighting. Image scene recognition experiments were carried out using real image data captured by an unmanned aerial vehicle (UAV). The recognition accuracy of the proposed superpixel-based feature is 95.1%, higher than that of scene recognition algorithms based on other local features.
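
    The first stage, superpixel segmentation, is commonly done with SLIC, which scikit-image provides. A minimal sketch, with the file name and parameter values as placeholders and the filter-bank, Lie-group, and saliency stages omitted:

    ```python
    from skimage import io
    from skimage.measure import regionprops
    from skimage.segmentation import slic

    img = io.imread("aerial.jpg")              # placeholder UAV frame
    segments = slic(img, n_segments=400, compactness=10.0, start_label=1)
    for region in regionprops(segments):
        cy, cx = region.centroid   # one superpixel ~ one candidate landform patch
    ```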

  16. Spectroscopic Needs for Imaging Dark Energy Experiments

    International Nuclear Information System (INIS)

    Newman, Jeffrey A.; Abate, Alexandra; Abdalla, Filipe B.; Allam, Sahar; Allen, Steven W.; Ansari, Reza; Bailey, Stephen; Barkhouse, Wayne A.; Beers, Timothy C.; Blanton, Michael R.; Brodwin, Mark; Brownstein, Joel R.; Brunner, Robert J.; Carrasco-Kind, Matias; Cervantes-Cota, Jorge; Chisari, Nora Elisa; Colless, Matthew; Coupon, Jean; Cunha, Carlos E.; Frye, Brenda L.; Gawiser, Eric J.; Gehrels, Neil; Grady, Kevin; Hagen, Alex; Hall, Patrick B.; Hearin, Andrew P.; Hildebrandt, Hendrik; Hirata, Christopher M.; Ho, Shirley; Huterer, Dragan; Ivezic, Zeljko; Kneib, Jean-Paul; Kruk, Jeffrey W.; Lahav, Ofer; Mandelbaum, Rachel; Matthews, Daniel J.; Miquel, Ramon; Moniez, Marc; Moos, H. W.; Moustakas, John; Papovich, Casey; Peacock, John A.; Rhodes, Jason; Ricol, Jean-Stepane; Sadeh, Iftach; Schmidt, Samuel J.; Stern, Daniel K.; Tyson, J. Anthony; Von der Linden, Anja; Wechsler, Risa H.; Wood-Vasey, W. M.; Zentner, A.

    2015-01-01

Ongoing and near-future imaging-based dark energy experiments are critically dependent upon photometric redshifts (a.k.a. photo-z's): i.e., estimates of the redshifts of objects based only on flux information obtained through broad filters. Higher-quality, lower-scatter photo-z's will result in smaller random errors on cosmological parameters, while systematic errors in photometric redshift estimates, if not constrained, may dominate all other uncertainties from these experiments. The desired optimization and calibration are dependent upon spectroscopic measurements for secure redshift information; this is the key application of galaxy spectroscopy for imaging-based dark energy experiments. Hence, to achieve their full potential, imaging-based experiments will require large sets of objects with spectroscopically determined redshifts, for two purposes. Training: objects with known redshift are needed to map out the relationship between object color and z (or, equivalently, to determine empirically calibrated templates describing the rest-frame spectra of the full range of galaxies, which may be used to predict the color-z relation). The ultimate goal of training is to minimize each moment of the distribution of differences between photometric redshift estimates and the true redshifts of objects, making the relationship between them as tight as possible. The larger and more complete our "training set" of spectroscopic redshifts is, the smaller the RMS photo-z errors should be, increasing the constraining power of imaging experiments. Requirements: spectroscopic redshift measurements for ∼30,000 objects over ≳15 widely separated regions, each at least ∼20 arcmin in diameter, and reaching the faintest objects used in a given experiment, will likely be necessary if photometric redshifts are to be trained and calibrated with conventional techniques. Larger, more complete samples (i.e., with longer exposure times) can improve photo-z training and calibration further.
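
    In miniature, the training task described here is a supervised regression from colors to redshift. The sketch below uses mock arrays and a random forest purely to illustrate the workflow of training on a spectroscopic sample and measuring photo-z scatter on held-out objects; real training sets come from actual survey catalogs.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    mags = rng.random((30000, 5))          # mock ugriz magnitudes
    colors = -np.diff(mags, axis=1)        # u-g, g-r, r-i, i-z colors
    spec_z = rng.random(30000) * 2.0       # mock spectroscopic redshifts

    train, test = slice(0, 25000), slice(25000, None)
    model = RandomForestRegressor(n_estimators=100, n_jobs=-1)
    model.fit(colors[train], spec_z[train])       # "training" in the above sense
    photo_z = model.predict(colors[test])
    scatter = np.std((photo_z - spec_z[test]) / (1 + spec_z[test]))  # photo-z RMS
    ```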

  17. Image processing analysis of traditional Gestalt vision experiments

    Science.gov (United States)

    McCann, John J.

    2002-06-01

In the late 19th century, Gestalt psychology rebelled against the popular new science of psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of its parts; color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today between proponents of physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal, and new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments by Fergus Campbell and Land, we have known that human vision has independent spatial channels and independent color channels. Color matching data from color constancy experiments agree with spatial comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's effect, the Checkerboard illusion, and the Dungeon illusion can all be understood by analyzing their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components: simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes, and no complex cognitive processes are required to calculate appearances in familiar Gestalt experiments.
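
    The low-spatial-frequency analysis invoked here amounts to comparing heavily blurred versions of the displays. A minimal sketch, with the blur width an arbitrary stand-in for a multiresolution channel:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def low_frequency_component(display, sigma=16.0):
        # A heavy Gaussian blur isolates the low-spatial-frequency content
        # that, per the text, predicts appearance in White's effect and
        # similar Gestalt figures.
        return gaussian_filter(display.astype(float), sigma=sigma)

    # Two physically identical gray patches whose blurred surrounds differ
    # are predicted to look different: compare the blurred image at both
    # patch locations.
    ```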

  18. First experience with THE AUTOLAP™ SYSTEM: an image-based robotic camera steering device.

    Science.gov (United States)

    Wijsman, Paul J M; Broeders, Ivo A M J; Brenkman, Hylke J; Szold, Amir; Forgione, Antonello; Schreuder, Henk W R; Consten, Esther C J; Draaisma, Werner A; Verheijen, Paul M; Ruurda, Jelle P; Kaufman, Yuval

    2018-05-01

Robotic camera holders for endoscopic surgery have been available for 20 years, but market penetration is low. Current camera holders are controlled by voice, joystick, eyeball tracking, or head movements; this type of steering has proven successful, but excessive disturbance of the surgical workflow has blocked widespread introduction. The AutoLap™ system (MST, Israel) uses a radically different steering concept based on image analysis, which may improve acceptance through smooth, interactive, and fast steering. These two studies were conducted to prove safe and efficient performance of the core technology. A total of 66 laparoscopic procedures were performed with the AutoLap™ by nine experienced surgeons in two multi-center studies: 41 cholecystectomies, 13 fundoplications including hiatal hernia repair, 4 endometriosis surgeries, 2 inguinal hernia repairs, and 6 (bilateral) salpingo-oophorectomies. The use of the AutoLap™ system was evaluated in terms of safety, image stability, setup and procedural time, accuracy of image-based movements, and user satisfaction. Surgical procedures were completed with the AutoLap™ system in 64 cases (97%). The mean overall setup time of the AutoLap™ system was 4 min (04:08 ± 0.10). Procedure times were not prolonged by use of the system when compared to literature averages. The reported user satisfaction was 3.85 and 3.96 on a scale of 1 to 5 in the two studies. More than 90% of the image-based movements were accurate. No system-related adverse events were recorded while using the system. Safe and efficient use of the core technology of the AutoLap™ system was demonstrated with high image stability and good surgeon satisfaction. The results support further clinical studies that will focus on usability, improved ergonomics, and additional image-based features.

  19. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    Science.gov (United States)

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was lower than that of conventional marker-based RSA but higher than in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications.

  20. Beyond where it started: a look at the "Healing Images" experience.

    Science.gov (United States)

    Goodsmith, Lauren

    2007-01-01

    In March 2004, the Baltimore-based nonprofit organization Advocates for Survivors of Torture and Trauma (ASTT) initiated a photography-based therapeutic programme for clients. Developed by a professional photographer/teacher in collaboration with a psychologist, the programme has the goal of enabling clients to engage in creative self-exploration within a supportive group setting. Since its inception, thirty survivors of conflict-related trauma and torture from five different countries have taken part in the programme, known as "Healing Images", using digital cameras to gather individually chosen images that are subsequently shared and discussed within the group. These images include depictions of the natural and man-made environments in which clients find themselves; people, places and objects that offer comfort; and self-portraits that reflect the reality of life as a refugee in the United States. This description of the "Healing Images" programme is based on comments gathered through discussion with participants and through interviews. Additional information was gathered from observation of early workshop sessions, review of numerous client photographs and captions, and pertinent organizational materials. A fundamental benefit of the programme was that it offered a mutually supportive group environment that diminished clients' feelings of psychological and physical isolation. Participants gained deep satisfaction from learning the technical skills related to use of the cameras, from the empowering experience of framing and creating specific images, and from exploring the personal significance of these images. Programme activities sparked a process of self-expression that participants valued on the level of personal discovery and growth. Some clients also welcomed opportunities to share their work publicly, as a means of raising awareness of the experience of survivors.

  1. Multi-Label Classification Based on Low Rank Representation for Image Annotation

    Directory of Open Access Journals (Sweden)

    Qiaoyu Tan

    2017-01-01

    Full Text Available Annotating remote sensing images is a challenging task because of its labor-intensive annotation process and its requirement of expert knowledge, especially when images can be annotated with multiple semantic concepts (or labels). To automatically annotate these multi-label images, we introduce an approach called Multi-Label Classification based on Low Rank Representation (MLC-LRR). MLC-LRR first utilizes low rank representation in the feature space of images to compute a low-rank-constrained coefficient matrix, then adapts this coefficient matrix to define a feature-based graph and to capture the global relationships between images. Next, it utilizes low rank representation in the label space of labeled images to construct a semantic graph. Finally, these two graphs are exploited to train a graph-based multi-label classifier. To validate the performance of MLC-LRR against other related graph-based multi-label methods in annotating images, we conduct experiments on a publicly available multi-label remote sensing image dataset (Land Cover). We perform additional experiments on five real-world multi-label image datasets to further investigate the performance of MLC-LRR. The empirical study demonstrates that MLC-LRR achieves better performance in annotating images than the competing methods across various evaluation criteria; it can also effectively exploit the global structure and label correlations of multi-label images.
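
    As a concrete illustration of the first step, the noiseless low-rank representation problem min ||Z||_* s.t. X = XZ has the known closed-form solution Z = VrVr^T, where Vr holds the top right singular vectors of X. The sketch below builds a feature-based graph from that solution; it is a simplified stand-in for the paper's formulation, and the rank parameter is an assumption.

    ```python
    # Sketch: feature-based graph from a (noiseless) low-rank representation.
    import numpy as np

    def lrr_graph(X, rank):
        """X: d x n feature matrix, one column per image; rank <= min(d, n).
        Returns an n x n symmetric affinity graph built from LRR coefficients."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Vr = Vt[:rank].T                       # top right singular vectors, n x rank
        Z = Vr @ Vr.T                          # closed-form minimizer of ||Z||_* s.t. X = XZ
        return (np.abs(Z) + np.abs(Z.T)) / 2   # symmetrized graph weights
    ```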

  2. Tag-Based Social Image Search: Toward Relevant and Diverse Results

    Science.gov (United States)

    Yang, Kuiyuan; Wang, Meng; Hua, Xian-Sheng; Zhang, Hong-Jiang

    Recent years have witnessed a great success of social media websites. Tag-based image search is an important approach for accessing the image content of interest on these websites. However, existing ranking methods for tag-based image search frequently return results that are irrelevant or lacking in diversity. This chapter presents a diverse relevance ranking scheme that simultaneously takes relevance and diversity into account by exploring the content of images and their associated tags. First, it estimates the relevance scores of images with respect to the query term based on both the visual information of images and the semantic information of associated tags. Then, semantic similarities of social images are estimated based on their tags. Based on the relevance scores and the similarities, the ranking list is generated by a greedy ordering algorithm that optimizes Average Diverse Precision (ADP), a novel measure extended from the conventional Average Precision (AP). Comprehensive experiments and user studies demonstrate the effectiveness of the approach.
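
    A simplified greedy selection in this spirit is sketched below: at each step it picks the image with the best trade-off between relevance and similarity to the results already chosen. The lam trade-off parameter is an illustrative assumption; the chapter's actual objective is the ADP measure.

    ```python
    # Greedy relevance-diversity ranking (simplified stand-in for ADP optimization).
    import numpy as np

    def diverse_rank(relevance, similarity, k, lam=0.5):
        """relevance: (n,) scores; similarity: (n, n) tag-based similarities."""
        selected, remaining = [], list(range(len(relevance)))
        while remaining and len(selected) < k:
            def gain(i):
                # penalize similarity to the most similar already-selected result
                penalty = max((similarity[i][j] for j in selected), default=0.0)
                return lam * relevance[i] - (1 - lam) * penalty
            best = max(remaining, key=gain)
            selected.append(best)
            remaining.remove(best)
        return selected
    ```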

  3. Monotonicity-based electrical impedance tomography for lung imaging

    Science.gov (United States)

    Zhou, Liangdong; Harrach, Bastian; Seo, Jin Keun

    2018-04-01

    This paper presents a monotonicity-based spatiotemporal conductivity imaging method for continuous regional lung monitoring using electrical impedance tomography (EIT). The EIT data (i.e. the boundary current-voltage data) can be decomposed into pulmonary, cardiac and other components using their different periodic natures. The time-differential current-voltage operator corresponding to lung ventilation can be viewed as either positive or negative semi-definite owing to the monotonic conductivity changes within the lung regions. We used these monotonicity constraints to improve the quality of lung EIT imaging. We tested the proposed method in numerical simulations, phantom experiments and human experiments.

  4. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of a walking human contains useful information for gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
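
    The AGDI construction itself reduces to a few array operations. A minimal sketch, assuming pre-aligned binary silhouettes:

    ```python
    # Average gait differential image: mean absolute difference of adjacent frames.
    import numpy as np

    def agdi(silhouettes):
        """silhouettes: (T, H, W) aligned binary gait silhouettes."""
        frames = silhouettes.astype(np.float32)
        diffs = np.abs(frames[1:] - frames[:-1])   # adjacent-frame differences
        return diffs.mean(axis=0)                  # accumulate and average
    ```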

  5. Measurable realistic image-based 3D mapping

    Science.gov (United States)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data are obtained for the construction of 3D visualized models. A 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also offers a virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is their limited coverage of detail, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a realistic, image-based 3D map concept that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information of the real world than 3D model-based maps. Image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive for users and also creates an interesting immersive environment. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in terms of photos; topographic and terrain attributes, such as shapes and heights, are omitted. This paper also discusses the potential for using a low-cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measurable image-based 3D map can achieve.

  6. Image annotation based on positive-negative instances learning

    Science.gov (United States)

    Zhang, Kai; Hu, Jiwei; Liu, Quan; Lou, Ping

    2017-07-01

    Automatic image annotation is now a tough task in computer vision; its main purpose is to help manage the massive number of images on the Internet and to assist intelligent retrieval. This paper designs a new image annotation model based on a visual bag of words, using low-level features such as color and texture information as well as mid-level features such as SIFT, and combining the pic2pic, label2pic and label2label correlations to measure the degree of correlation between labels and images. We aim to prune the specific features for each single label and formalize the annotation task as a learning process based on positive-negative instance learning. Experiments performed on the Corel5K dataset show quite promising results compared with other existing methods.

  7. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    Science.gov (United States)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained, with SAD (sum of absolute differences) chosen as the similarity measure function. The reference image is then layered and the parallax is calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and shifted. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its high-precision depth map requirements and complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also very impressive. On average, the method achieves satisfactory image quality: the average SSIM value of the results relative to real viewpoint images reaches 0.9525, the PSNR reaches 38.353 dB, and the image histogram similarity reaches 93.77%.
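
    For the depth step, SAD block matching is standard. The sketch below is a generic version with illustrative block size and search range, not the authors' implementation:

    ```python
    # Coarse SAD-based block matching between two rectified views.
    import numpy as np

    def sad_disparity(left, right, block=8, max_disp=32):
        """left, right: (H, W) grayscale views; returns a block-level disparity map."""
        H, W = left.shape
        disp = np.zeros((H // block, W // block), dtype=np.int32)
        for by in range(H // block):
            for bx in range(W // block):
                y, x = by * block, bx * block
                ref = left[y:y+block, x:x+block].astype(np.int32)
                costs = []
                for d in range(min(max_disp, x) + 1):
                    cand = right[y:y+block, x-d:x-d+block].astype(np.int32)
                    costs.append(np.abs(ref - cand).sum())  # sum of absolute differences
                disp[by, bx] = int(np.argmin(costs))
        return disp
    ```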

  8. Image Mosaic Method Based on SIFT Features of Line Segment

    Directory of Open Access Journals (Sweden)

    Jun Zhu

    2014-01-01

    Full Text Available This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and other differences between two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. Results from experiments based on four pairs of images show that our method is robust to changes in resolution, lighting, rotation, and scaling.
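
    A condensed OpenCV sketch of this overall pipeline is shown below. Note that it uses standard SIFT key points rather than the paper's directed-line-segment descriptors, and the ratio-test threshold and canvas size are illustrative assumptions:

    ```python
    # Feature matching + RANSAC homography mosaic (generic OpenCV pipeline).
    import cv2
    import numpy as np

    def mosaic(img1, img2):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # RANSAC eliminates wrong pairs, as in the method's final step
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = img2.shape[:2]
        # warp img1 into img2's frame; blending/compositing is omitted here
        return cv2.warpPerspective(img1, H, (w * 2, h))
    ```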

  9. Subspace-Based Holistic Registration for Low-Resolution Facial Images

    Directory of Open Access Journals (Sweden)

    Boom BJ

    2010-01-01

    Full Text Available Subspace-based holistic registration is introduced as an alternative to landmark-based face registration, which performs poorly on low-resolution images such as those obtained in camera surveillance applications. The proposed registration method finds the alignment by maximizing the similarity score between a probe and a gallery image. We use a novel probabilistic framework for both user-independent and user-specific face registration. The similarity is calculated using the probability that the face image is correctly aligned in a face subspace, but we additionally take into account the probability that the face is misaligned, based on the residual error in the dimensions perpendicular to the face subspace. We perform extensive experiments on the FRGCv2 database to evaluate the impact that face registration methods have on face recognition. Subspace-based holistic registration on low-resolution images can improve face recognition in comparison with landmark-based registration on high-resolution images. The performance of the tested face recognition methods after subspace-based holistic registration on a low-resolution version of the FRGC database is similar to that after manual registration.

  10. IMAGE DESCRIPTIONS FOR SKETCH BASED IMAGE RETRIEVAL

    OpenAIRE

    SAAVEDRA RONDO, JOSE MANUEL

    2008-01-01

    Due to the massive use of the Internet together with the proliferation of media devices, content-based image retrieval has become an active discipline in computer science. A common content-based image retrieval approach requires the user to give a regular image (e.g., a photo) as a query. However, having a regular image as the query may be a serious problem. Indeed, people commonly use an image retrieval system precisely because they do not have the desired image. An easy alternative way t...

  11. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    Science.gov (United States)

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have been brought out along with emerging 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), with its applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn a wide range of researchers' attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. However, existing assessment metrics do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using an autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method as compared with prevailing full-, reduced- and no-reference models.
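
    The AR step can be illustrated compactly: predict each pixel from a few neighbors by least squares and take the residual as a distortion map. The sketch below fits one global set of AR coefficients for brevity, whereas the published metric works locally and adds saliency weighting on top:

    ```python
    # Toy AR prediction-error map (global coefficients; the paper's model is local).
    import numpy as np

    def ar_residual(img):
        """img: (H, W) float grayscale. Returns the AR prediction-error map."""
        x = img.astype(np.float64)
        # neighbors: left, top, top-left, top-right of each interior pixel
        nb = np.stack([x[1:-1, :-2], x[:-2, 1:-1], x[:-2, :-2], x[:-2, 2:]], axis=-1)
        y = x[1:-1, 1:-1]
        A = nb.reshape(-1, 4)
        coef, *_ = np.linalg.lstsq(A, y.ravel(), rcond=None)  # least-squares AR fit
        pred = A @ coef
        return np.abs(y.ravel() - pred).reshape(y.shape)      # reconstruction error
    ```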

  12. The impact of brand image on customer experience – Company X

    OpenAIRE

    Siitonen, Hannes

    2017-01-01

    The aim of this thesis was to find out what kind of relationship there is between brand image and customer experience, and how brand image affects customer experience. The aim was also to define the company's brand image and customer experience among the target groups, and which factors affect them. In addition, this thesis aimed to produce valuable information for the company about its brand image, customer experience, customer behaviour and customer satisfaction, followed by i...

  13. Impact of point spread function correction in standardized uptake value quantitation for positron emission tomography images. A study based on phantom experiments and clinical images

    International Nuclear Information System (INIS)

    Nakamura, Akihiro; Tanizaki, Yasuo; Takeuchi, Miho

    2014-01-01

    While point spread function (PSF)-based positron emission tomography (PET) reconstruction effectively improves the spatial resolution and image quality of PET, it may damage its quantitative properties by producing edge artifacts, or Gibbs artifacts, which appear to cause overestimation of regional radioactivity concentration. In this report, we investigated how edge artifacts produce negative effects on the quantitative properties of PET. Experiments with a National Electrical Manufacturers Association (NEMA) phantom, containing radioactive spheres of a variety of sizes and background filled with cold air or water, or radioactive solutions, showed that profiles modified by edge artifacts were reproducible regardless of background μ values, and the effects of edge artifacts increased with increasing sphere-to-background radioactivity concentration ratio (S/B ratio). Profiles were also affected by edge artifacts in complex fashion in response to variable combinations of sphere sizes and S/B ratios; and central single-peak overestimation up to 50% was occasionally noted in relatively small spheres with high S/B ratios. Effects of edge artifacts were obscured in spheres with low S/B ratios. In patient images with a variety of focal lesions, areas of higher radioactivity accumulation were generally more enhanced by edge artifacts, but the effects were variable depending on the size of and accumulation in the lesion. PET images generated using PSF-based reconstruction are therefore not appropriate for the evaluation of SUV. (author)

  14. [Impact of point spread function correction in standardized uptake value quantitation for positron emission tomography images: a study based on phantom experiments and clinical images].

    Science.gov (United States)

    Nakamura, Akihiro; Tanizaki, Yasuo; Takeuchi, Miho; Ito, Shigeru; Sano, Yoshitaka; Sato, Mayumi; Kanno, Toshihiko; Okada, Hiroyuki; Torizuka, Tatsuo; Nishizawa, Sadahiko

    2014-06-01

    While point spread function (PSF)-based positron emission tomography (PET) reconstruction effectively improves the spatial resolution and image quality of PET, it may damage its quantitative properties by producing edge artifacts, or Gibbs artifacts, which appear to cause overestimation of regional radioactivity concentration. In this report, we investigated how edge artifacts produce negative effects on the quantitative properties of PET. Experiments with a National Electrical Manufacturers Association (NEMA) phantom, containing radioactive spheres of a variety of sizes and background filled with cold air or water, or radioactive solutions, showed that profiles modified by edge artifacts were reproducible regardless of background μ values, and the effects of edge artifacts increased with increasing sphere-to-background radioactivity concentration ratio (S/B ratio). Profiles were also affected by edge artifacts in complex fashion in response to variable combinations of sphere sizes and S/B ratios; and central single-peak overestimation up to 50% was occasionally noted in relatively small spheres with high S/B ratios. Effects of edge artifacts were obscured in spheres with low S/B ratios. In patient images with a variety of focal lesions, areas of higher radioactivity accumulation were generally more enhanced by edge artifacts, but the effects were variable depending on the size of and accumulation in the lesion. PET images generated using PSF-based reconstruction are therefore not appropriate for the evaluation of SUV.

  15. Sociocultural experiences, body image, and indoor tanning among young adult women.

    Science.gov (United States)

    Stapleton, Jerod L; Manne, Sharon L; Greene, Kathryn; Darabos, Katie; Carpenter, Amanda; Hudson, Shawna V; Coups, Elliot J

    2017-10-01

    The purpose of this survey study was to evaluate a model of body image influences on indoor tanning behavior. Participants were 823 young adult women recruited from a probability-based web panel in the United States. Consistent with our hypothesized model, tanning-related sociocultural experiences were indirectly associated with lifetime indoor tanning use and intentions to tan as mediated through tan surveillance and tan dissatisfaction. Findings suggest the need for targeting body image constructs as mechanisms of behavior change in indoor tanning behavioral interventions.

  16. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    Science.gov (United States)

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE). Firstly, the histogram of input image is divided into four segments based on the mean and variance of luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also well preserve brightness and details of the original image. PMID:29403529

  17. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    Directory of Open Access Journals (Sweden)

    Liyun Zhuang

    2017-01-01

    Full Text Available This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE). Firstly, the histogram of input image is divided into four segments based on the mean and variance of luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also well preserve brightness and details of the original image.

  18. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance.

    Science.gov (United States)

    Zhuang, Liyun; Guan, Yepeng

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE). Firstly, the histogram of input image is divided into four segments based on the mean and variance of luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also well preserve brightness and details of the original image.
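
    A hedged sketch of the segment-wise equalization idea common to the three records above: split the intensity range at mean − std, mean, and mean + std (one plausible reading of the four segments) and equalize each sub-histogram independently. The final normalization and image-integration steps of MVSIHE are omitted.

    ```python
    # Segment-wise histogram equalization, loosely in the spirit of MVSIHE.
    import numpy as np

    def mvsihe_like(img):
        """img: (H, W) uint8 luminance image. Returns a contrast-enhanced image."""
        m, s = img.mean(), img.std()
        bounds = np.clip([0, m - s, m, m + s, 255], 0, 255).astype(int)
        out = img.astype(np.float64)
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            if hi <= lo:
                continue
            mask = (img >= lo) & (img <= hi)
            if not mask.any():
                continue
            vals = img[mask].astype(int)
            hist = np.bincount(vals - lo, minlength=hi - lo + 1)
            cdf = hist.cumsum() / hist.sum()
            out[mask] = lo + cdf[vals - lo] * (hi - lo)  # equalize within the segment
        return out.astype(np.uint8)
    ```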

  19. An Image Matching Method Based on Fourier and LOG-Polar Transform

    Directory of Open Access Journals (Sweden)

    Zhijia Zhang

    2014-04-01

    Full Text Available Traditional template matching methods are not appropriate when there is a large rotation angle between two images, as in online detection for industrial production. To address this problem, a Fourier transform algorithm is introduced to correct the image rotation angle, exploiting its rotation invariance in the frequency domain and orienting the image under test in the same direction as the reference image; the images are then matched using a matching algorithm based on the log-polar transform. Compared with current matching algorithms, experimental results show that the proposed algorithm can not only match two images rotated by an arbitrary angle, but also possesses high matching accuracy and applicability. In addition, the validity and reliability of the algorithm were verified by a simulated matching experiment on circular images.
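
    With OpenCV, the two-stage idea can be sketched as follows: phase correlation in log-polar space recovers the rotation (a vertical shift along the angle axis), after which the test image is de-rotated for matching. The sign convention and the scale axis are simplified here and may need adjustment in practice:

    ```python
    # Rotation recovery via log-polar mapping plus phase correlation (sketch).
    import cv2
    import numpy as np

    def estimate_rotation(ref, test):
        """ref, test: (H, W) float32 grayscale images of equal size."""
        h, w = ref.shape
        center = (w / 2, h / 2)
        max_radius = min(h, w) / 2
        flags = cv2.WARP_POLAR_LOG | cv2.INTER_LINEAR
        lp_ref = cv2.warpPolar(ref, (w, h), center, max_radius, flags)
        lp_test = cv2.warpPolar(test, (w, h), center, max_radius, flags)
        # a vertical shift in log-polar space corresponds to a rotation angle
        (dx, dy), _ = cv2.phaseCorrelate(lp_ref, lp_test)
        angle = 360.0 * dy / h
        R = cv2.getRotationMatrix2D(center, angle, 1.0)
        return cv2.warpAffine(test, R, (w, h)), angle
    ```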

  20. Real-time Image Processing for Microscopy-based Label-free Imaging Flow Cytometry in a Microfluidic Chip.

    Science.gov (United States)

    Heo, Young Jin; Lee, Donghyeon; Kang, Junsu; Lee, Keondo; Chung, Wan Kyun

    2017-09-14

    Imaging flow cytometry (IFC) is an emerging technology that acquires single-cell images at high throughput for analysis of a cell population. The rich information that comes from the high sensitivity and spatial resolution of a single-cell microscopic image is beneficial for single-cell analysis in various biological applications. In this paper, we present a fast image-processing pipeline (R-MOD: Real-time Moving Object Detector) based on deep learning for high-throughput microscopy-based label-free IFC in a microfluidic chip. The R-MOD pipeline acquires all single-cell images of cells in flow and identifies the acquired images in real time, with minimal hardware consisting of a microscope and a high-speed camera. Experiments show that R-MOD is fast and reliable (500 fps, 93.3% mAP) and is expected to be used as a powerful tool for biomedical and clinical applications.

  1. A factorial experiment on image quality and radiation dose

    International Nuclear Information System (INIS)

    Norrman, E.; Persliden, J.

    2005-01-01

    To determine whether factorial experiments can be used in the optimisation of diagnostic imaging, a factorial experiment was performed to investigate some of the factors that influence image quality, kerma area product (KAP) and effective dose (E). In a factorial experiment the factors are varied together instead of one at a time, making it possible to discover interactions between the factors as well as main effects. The factors studied were tube potential, tube loading, focus size and filtration, each set to two levels (low and high). The influence of the factors on the response variables (image quality, KAP and E) was studied using a direct digital detector. The main effect of each factor on the response variables was estimated, as well as the interaction effects between factors. Image quality, KAP and E were mainly influenced by tube loading, tube potential and filtration. There were some notable interactions, for example between tube potential and filtration and between tube loading and filtration. The study shows that factorial experiments can be used to predict the influence of various parameters on image quality and radiation dose. (authors)
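
    For a two-level design like this one, main and interaction effects have a simple closed form under the conventional ±1 coding. A minimal sketch (the factor names are illustrative stand-ins for the four factors studied):

    ```python
    # Main and two-factor interaction effects from a 2^4 full factorial design.
    import itertools
    import numpy as np

    factors = ["kV", "mAs", "focus", "filtration"]
    design = np.array(list(itertools.product([-1, 1], repeat=4)))  # 16 runs

    def effects(y):
        """y: (16,) measured response (e.g., image quality score), one per run."""
        out = {}
        for i, name in enumerate(factors):
            # main effect = mean response at +1 minus mean response at -1
            out[name] = (design[:, i] * y).mean() * 2
        for i, j in itertools.combinations(range(4), 2):
            out[f"{factors[i]}x{factors[j]}"] = (design[:, i] * design[:, j] * y).mean() * 2
        return out
    ```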

  2. Multi-band Image Registration Method Based on Fourier Transform

    Institute of Scientific and Technical Information of China (English)

    庹红娅; 刘允才

    2004-01-01

    This paper presents a Fourier-transform-based registration method for multi-band images involving translation and small rotation. Although images of different bands differ considerably in intensity and features, they contain certain common information that can be exploited. A model is given in which the multi-band images have linear correlations in the least-squares sense. It is proved that the coefficients have no effect on the registration process if the two images are linearly correlated. Finally, the steps of the registration method are given. Experiments show that the model is reasonable and the results are satisfying.

  3. Image Quality Assessment of High-Resolution Satellite Images with Mtf-Based Fuzzy Comprehensive Evaluation Method

    Science.gov (United States)

    Wu, Z.; Luo, Z.; Zhang, Y.; Guo, F.; He, L.

    2018-04-01

    A Modulation Transfer Function (MTF)-based fuzzy comprehensive evaluation method is proposed in this paper for evaluating high-resolution satellite image quality. To establish the factor set, two MTF features and seven radiometric features were extracted from the knife-edge region of each image patch: Nyquist, MTF0.5, entropy, peak signal to noise ratio (PSNR), average difference, edge intensity, average gradient, contrast and ground spatial distance (GSD). After analyzing the statistical distributions of the above features, a fuzzy evaluation threshold table and fuzzy evaluation membership functions were established. Experiments on comprehensive quality assessment of different natural and artificial objects were carried out with GF2 image patches. The results showed that the calibration-field image has the highest quality score. The water image has the closest quality to the calibration field; the quality of the building image is slightly lower than that of the water image, but much higher than that of the farmland image. To test the influence of different features on quality evaluation, experiments with different weights were performed on GF2 and SPOT7 images. The results showed that different weights yield different evaluation outcomes: when the weights emphasize edge features and GSD, the image quality of GF2 is better than that of SPOT7; however, when MTF and PSNR are set as the main factors, the image quality of SPOT7 is better than that of GF2.
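
    The fuzzy composite step itself is a small matrix product: a weight vector applied to the matrix of membership degrees, followed by the maximum-membership principle. The numbers and the three-feature, three-grade layout below are illustrative, not the paper's values:

    ```python
    # Fuzzy comprehensive evaluation: B = W @ R, then maximum membership.
    import numpy as np

    # rows: features (e.g., Nyquist, PSNR, edge intensity); columns: quality grades
    R = np.array([[0.7, 0.2, 0.1],
                  [0.5, 0.3, 0.2],
                  [0.6, 0.3, 0.1]])
    W = np.array([0.5, 0.3, 0.2])    # feature weights, summing to 1

    B = W @ R                        # composite membership in each quality grade
    grade = int(np.argmax(B))        # maximum-membership principle
    print(B, grade)
    ```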

  4. The Galileo Solid-State Imaging experiment

    Science.gov (United States)

    Belton, M.J.S.; Klaasen, K.P.; Clary, M.C.; Anderson, J.L.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Greeley, R.; Anderson, D.; Bolef, L.K.; Townsend, T.E.; Greenberg, R.; Head, J. W.; Neukum, G.; Pilcher, C.B.; Veverka, J.; Gierasch, P.J.; Fanale, F.P.; Ingersoll, A.P.; Masursky, H.; Morrison, D.; Pollack, James B.

    1992-01-01

    The Solid State Imaging (SSI) experiment on the Galileo Orbiter spacecraft utilizes a high-resolution (1500 mm focal length) television camera with an 800 × 800 pixel virtual-phase, charge-coupled detector. It is designed to return images of Jupiter and its satellites characterized by a combination of sensitivity levels, spatial resolution, geometric fidelity, and spectral range unmatched by imaging data obtained previously. The spectral range extends from approximately 375 to 1100 nm, and only in the near-ultraviolet region (below about 350 nm) is the spectral coverage reduced from previous missions. The camera is approximately 100 times more sensitive than those used in the Voyager mission and, because of the nature of the satellite encounters, will produce images with approximately 100 times the ground resolution (i.e., about 50 m lp-1) on the Galilean satellites. We describe aspects of the detector including its sensitivity to energetic particle radiation and how the requirements for a large full-well capacity and long-term stability in operating voltages led to the choice of the virtual-phase chip. The F/8.5 camera system can reach point sources of V(mag) of about 11 with S/N of about 10, and extended sources with surface brightness as low as 20 kR in its highest gain state and longest exposure mode. We describe the performance of the system as determined by ground calibration, and the improvements that have been made to the telescope (the same basic catadioptric design used in the Mariner 10 and Voyager high-resolution cameras) to reduce the scattered light reaching the detector. The images are linearly digitized 8 bits deep and, after flat-fielding, are cosmetically clean. Information 'preserving' and 'non-preserving' on-board data compression capabilities are outlined. A special "summation" mode, designed for use deep in the Jovian radiation belts, near Io, is also described. The detector is 'preflashed' before each exposure to ensure photometric linearity.

  5. An Integrated Dictionary-Learning Entropy-Based Medical Image Fusion Framework

    Directory of Open Access Journals (Sweden)

    Guanqiu Qi

    2017-10-01

    Full Text Available Image fusion is widely used in different areas and can integrate complementary and relevant information of source images captured by multiple sensors into a unitary synthetic image. Medical image fusion, as an important image fusion application, can extract the details of multiple images from different imaging modalities and combine them into an image that contains complete and non-redundant information for increasing the accuracy of medical diagnosis and assessment. The quality of the fused image directly affects medical diagnosis and assessment. However, existing solutions have some drawbacks in contrast, sharpness, brightness, blur and details. This paper proposes an integrated dictionary-learning and entropy-based medical image-fusion framework that consists of three steps. First, the input image information is decomposed into low-frequency and high-frequency components by using a Gaussian filter. Second, low-frequency components are fused by weighted average algorithm and high-frequency components are fused by the dictionary-learning based algorithm. In the dictionary-learning process of high-frequency components, an entropy-based algorithm is used for informative blocks selection. Third, the fused low-frequency and high-frequency components are combined to obtain the final fusion results. The results and analyses of comparative experiments demonstrate that the proposed medical image fusion framework has better performance than existing solutions.
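
    A simplified two-scale version of this framework is sketched below: a Gaussian low-pass split, a weighted average of the low-frequency parts, and a max-absolute rule standing in for the dictionary-learning high-frequency step (sigma and the weight are illustrative assumptions):

    ```python
    # Two-scale fusion sketch: Gaussian split, weighted-average low, max-abs high.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fuse(a, b, sigma=2.0, w=0.5):
        """a, b: (H, W) float grayscale source images, assumed co-registered."""
        low_a, low_b = gaussian_filter(a, sigma), gaussian_filter(b, sigma)
        high_a, high_b = a - low_a, b - low_b
        low = w * low_a + (1 - w) * low_b                    # weighted average
        high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
        return low + high
    ```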

  6. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    Full Text Available The image segmentation method based on a two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainty in labeling the pixels around the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainty on each color channel of the color image. Then, we segment the color image according to fuzzy reasoning. The experimental results show that our proposed method obtains better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarization SAR images.

  7. Some practical considerations in finite element-based digital image correlation

    KAUST Repository

    Wang, Bo

    2015-04-20

    As an alternative to subset-based digital image correlation (DIC), the finite element-based (FE-based) DIC method has gained increasing attention in the experimental mechanics community. However, a literature survey reveals that some important issues have not been well addressed in the published literature. This work therefore aims to point out a few important considerations in the practical algorithm implementation of the FE-based DIC method, along with simple but effective solutions that can effectively tackle these issues. First, to better accommodate the intensity variations of the deformed images that occur in real experiments, a robust zero-mean normalized sum of squared difference (ZNSSD) criterion, instead of the commonly used sum of squared difference criterion, is introduced to quantify the similarity between reference and deformed elements in FE-based DIC. Second, to reduce the bias error induced by image noise and imperfect intensity interpolation, low-pass filtering of the speckle images with a 5×5 pixel Gaussian filter prior to correlation analysis is presented. Third, to ensure that the iterative calculation of FE-based DIC converges correctly and rapidly, an efficient subset-based DIC method, instead of simple integer-pixel displacement searching, is used to provide an accurate initial guess of deformation for each calculation point. Also, the effects of various convergence criteria on the efficiency and accuracy of FE-based DIC are carefully examined, and a proper convergence criterion is recommended. The efficacy of these solutions is verified by numerical and real experiments. The results reveal that the improved FE-based DIC offers evident advantages over the existing FE-based DIC method in terms of accuracy and efficiency. © 2015 Elsevier Ltd. All rights reserved.
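
    For reference, the ZNSSD criterion recommended above takes only a few lines; it is insensitive to offset and linear scale changes in intensity:

    ```python
    # Zero-mean normalized sum of squared difference (ZNSSD) between two patches.
    import numpy as np

    def znssd(f, g):
        """f: reference element intensities; g: deformed element intensities."""
        fz = f - f.mean()
        gz = g - g.mean()
        fn = fz / np.sqrt((fz ** 2).sum())   # zero-mean, unit-norm reference
        gn = gz / np.sqrt((gz ** 2).sum())   # zero-mean, unit-norm deformed
        return ((fn - gn) ** 2).sum()        # 0 for a perfect (affine-intensity) match
    ```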

  8. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed from its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.

  9. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed from its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.
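
    The reconstruction-weight idea behind the QP assignment can be illustrated with a locality-constrained least-squares stand-in (the closed form used in LLC coding); k and the regularizer lam are illustrative assumptions, not the paper's parameters:

    ```python
    # Reconstruct a descriptor from its k nearest visual words and keep the weights.
    import numpy as np

    def assignment_weights(x, vocab, k=5, lam=1e-4):
        """x: (d,) local descriptor; vocab: (K, d) visual words. Returns (K,) weights."""
        d2 = ((vocab - x) ** 2).sum(axis=1)
        nn = np.argsort(d2)[:k]                   # k nearest visual words
        Bz = vocab[nn] - x                        # codebook shifted to the descriptor
        C = Bz @ Bz.T + lam * np.eye(k)           # regularized local covariance
        w = np.linalg.solve(C, np.ones(k))
        w /= w.sum()                              # reconstruction weights sum to one
        weights = np.zeros(len(vocab))
        weights[nn] = w                           # contribution function over vocabulary
        return weights
    ```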

  10. Content-Based Image Retrieval Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

    Full Text Available Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval method based on the Hadoop distributed framework is proposed. First, a database of image features is built using the Speeded Up Robust Features (SURF) algorithm and Locality-Sensitive Hashing (LSH); the search is then performed on the Hadoop platform in a specially designed parallel way. Extensive experimental results show that the method can effectively retrieve images by content on large-scale clusters and image sets.

  11. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  12. Toward High Altitude Airship Ground-Based Boresight Calibration of Hyperspectral Pushbroom Imaging Sensors

    Directory of Open Access Journals (Sweden)

    Aiwu Zhang

    2015-12-01

    Full Text Available The complexity of single-linear-sensor hyperspectral pushbroom imaging based on a high altitude airship (HAA) without a three-axis stabilized platform is much greater than that of spaceborne and airborne platforms. Due to the effects of air pressure, temperature and airflow, large pitch and roll angles tend to appear frequently, creating pushbroom images with severe geometric distortions. Thus, the in-flight calibration procedure is not appropriate for single linear pushbroom sensors on an HAA without a three-axis stabilized platform. In order to address this problem, a new ground-based boresight calibration method is proposed. Firstly, a coordinate transformation model is developed for direct georeferencing (DG) of the linear imaging sensor, and the linear error equation is then derived from it using the Taylor expansion formula. Secondly, the boresight misalignments are worked out using an iterative least squares method with a few ground control points (GCPs) and ground-based side-scanning experiments. The proposed method is demonstrated by three sets of experiments: (i) the stability and reliability of the method is verified through simulation-based experiments; (ii) the boresight calibration is performed using ground-based experiments; and (iii) validation is done by application to the orthorectification of real hyperspectral pushbroom images from a HAA Earth observation payload system developed by our research team, "LanTianHao". The test results show that the proposed boresight calibration approach significantly improves the quality of georeferencing by reducing the geometric distortions caused by boresight misalignments to the minimum level.

  13. The SPQR experiment: detecting damage to orbiting spacecraft with ground-based telescopes

    Science.gov (United States)

    Paolozzi, Antonio; Porfilio, Manfredi; Currie, Douglas G.; Dantowitz, Ronald F.

    2007-09-01

    The objective of the Specular Point-like Quick Reference (SPQR) experiment was to evaluate the possibility of improving the resolution of ground-based telescopic imaging of manned spacecraft in orbit. The concept was to reduce image distortions due to atmospheric turbulence by evaluating the Point Spread Function (PSF) of a point-like light reference and processing the spacecraft image accordingly. The target spacecraft was the International Space Station (ISS) and the point-like reference was provided by a laser beam emitted by the ground station and reflected back to the telescope by a Cube Corner Reflector (CCR) mounted on an ISS window. The ultimate objective of the experiment was to demonstrate that it is possible to image spacecraft in Low Earth Orbit (LEO) with a resolution of 20 cm, which would have probably been sufficient to detect the damage which caused the Columbia disaster. The experiment was successfully performed from March to May 2005. The paper provides an overview of the SPQR experiment.

  14. Fully automated rodent brain MR image processing pipeline on a Midas server: from acquired images to region-based statistics.

    Science.gov (United States)

    Budin, Francois; Hoogstoel, Marion; Reynolds, Patrick; Grauer, Michael; O'Leary-Moore, Shonagh K; Oguz, Ipek

    2013-01-01

    Magnetic resonance imaging (MRI) of rodent brains enables study of the development and the integrity of the brain under certain conditions (alcohol, drugs etc.). However, these images are difficult to analyze for biomedical researchers with limited image processing experience. In this paper we present an image processing pipeline running on a Midas server, a web-based data storage system. It is composed of the following steps: rigid registration, skull-stripping, average computation, average parcellation, parcellation propagation to individual subjects, and computation of region-based statistics on each image. The pipeline is easy to configure and requires very little image processing knowledge. We present results obtained by processing a data set using this pipeline and demonstrate how this pipeline can be used to find differences between populations.

  15. Eye gazing direction inspection based on image processing technique

    Science.gov (United States)

    Hao, Qun; Song, Yong

    2005-02-01

    According to research results in neurobiology, human eyes obtain high resolution only at the center of the field of view. In our research on a Virtual Reality helmet, we aim to detect the gazing direction of human eyes in real time and feed it back to the control system to improve the resolution of the rendered image at the center of the field of view. With current display instruments, this method can balance the field of view of the virtual scene against resolution, and greatly improve the immersion of the virtual system. Therefore, detecting the gazing direction of human eyes rapidly and accurately is the basis for realizing the design of this novel VR helmet. In this paper, the conventional method of gazing-direction detection based on the Purkinje spot is introduced first. To overcome the disadvantages of the Purkinje-spot method, this paper proposes a method based on image processing to detect and determine the gazing direction. The locations of the pupils and the shapes of the eye sockets change with the gazing direction. With the aid of these changes, by analyzing the eye images captured by the cameras, the gazing direction of human eyes can be determined. In this paper, experiments have been done to validate the efficiency of this method by analyzing the images. The algorithm performs gazing-direction detection directly on normal eye images, eliminating the need for special hardware. Experimental results show that the method is easy to implement and has high precision.

  16. An atlas-based multimodal registration method for 2D images with discrepancy structures.

    Science.gov (United States)

    Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng

    2018-06-04

    An atlas-based multimodal registration method for 2D images with discrepancy structures is proposed in this paper. An atlas is utilized to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured the registration performance by the sum of squared intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: schematic diagram of the atlas-based multimodal registration method.

  17. SU-E-J-237: Image Feature Based DRR and Portal Image Registration

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X; Chang, J [NY Weill Cornell Medical Ctr, NY (United States)]

    2014-06-01

    Purpose: Two-dimensional (2D) matching of kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual-information-based methods are used for this purpose on commercial linear accelerators, but they often need manual corrections. This work demonstrates the feasibility of using a feature-based image transform to register kV and DRR images. Methods: The scale invariant feature transform (SIFT) method was implemented to detect matching image details (or key points) between the kV and DRR images. These key points represent high image intensity gradients, and thus scale-invariant features. Due to the poor contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM header, and the two groups of key points with similar positions relative to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The results provided a satisfactory overlay of the transformed kV onto the DRR image. The introduced vs. detected shifts were fit with a linear regression. In the AP image experiments, linear regression analysis showed slopes of 1.15 and 0.98 with R2 values of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The corresponding slopes were 1.2 and 1.3 with R2 values of 0.72 and 0.82 for the lateral image shifts. Conclusion: This work provides an alternative technique for kV-to-DRR alignment. Further improvements in estimation accuracy and image contrast tolerance are underway.
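
    The matching step can be sketched with OpenCV: SIFT key points on both images, a ratio test, and a RANSAC-filtered similarity transform (scale plus shifts, close to the rigid-with-scaling transform described above). The center-coordinate pairing step from the abstract is omitted, and the ratio-test threshold is an assumption:

    ```python
    # SIFT matching + robust similarity-transform estimation between kV and DRR.
    import cv2
    import numpy as np

    def kv_to_drr_transform(kv, drr):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(kv, None)
        k2, d2 = sift.detectAndCompute(drr, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        src = np.float32([k1[m.queryIdx].pt for m in good])
        dst = np.float32([k2[m.trainIdx].pt for m in good])
        # 4-DOF similarity transform (scale, rotation, tx, ty), RANSAC-filtered
        M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        return M
    ```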

  18. Vision communications based on LED array and imaging sensor

    Science.gov (United States)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme uses digital image processing and optical wireless communication simultaneously. Therefore, a cognitive communication scheme is possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal, such as visible, infrared and ultraviolet light, an increase of the data rate is possible, similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of Sync. data and information data. Sync. data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical signaling rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical testbed system, we confirm the feasibility of the proposed vision communications based on an LED array and an image sensor.

  19. Image standards in Tissue-Based Diagnosis (Diagnostic Surgical Pathology

    Directory of Open Access Journals (Sweden)

    Vollmer Ekkehard

    2008-04-01

    Full Text Available Background: Progress in automated image analysis, virtual microscopy, hospital information systems, and interdisciplinary data exchange requires image standards to be applied in tissue-based diagnosis. Aims: To describe the theoretical background, practical experiences and comparable solutions in other medical fields, in order to promote image standards applicable to diagnostic pathology. Theory and experiences: Images used in tissue-based diagnosis present pathology-specific characteristics. It seems appropriate to discuss their characteristics and potential standardization in relation to the levels of hierarchy in which they appear. All levels can be divided into legal, medical, and technological properties. Standards applied to the first level include regulations or aims to be fulfilled. In legal properties, they have to regulate features of privacy, image documentation, transmission, and presentation; in medical properties, features of disease-image combination, human diagnostics, automated information extraction, and archive retrieval and access; and in technological properties, features of image acquisition, display, formats, transfer speed, safety, and system dynamics. The next lower, second level has to implement the prescriptions of the upper one, i.e. describe how they are implemented. Legal aspects should demand secure encryption for the privacy of all patient-related data, image archives that include all images used for diagnostics for a period of at least 10 years, accurate annotations of dates and viewing, and precise hardware and software information. Medical aspects should demand standardized patient files such as DICOM 3 or HL7, including history and previous examinations, information on image display hardware and software, on image resolution and fields of view, on the relation between the sizes of biological objects and image sizes, and on access to archives and retrieval. Technological aspects should deal with image

  20. Chemistry Graduate Teaching Assistants' Experiences in Academic Laboratories and Development of a Teaching Self-image

    Science.gov (United States)

    Gatlin, Todd Adam

    Graduate teaching assistants (GTAs) play a prominent role in chemistry laboratory instruction at research-based universities. They teach almost all undergraduate chemistry laboratory courses. However, their role in laboratory instruction has often been overlooked in educational research. Interest in chemistry GTAs has focused on training and their perceived expectations, but less attention has been paid to their experiences or their potential benefits from teaching. This work was designed to investigate GTAs' experiences in and benefits from laboratory instructional environments. This dissertation includes three related studies on GTAs' experiences teaching in general chemistry laboratories. Qualitative methods were used for each study. First, phenomenological analysis was used to explore GTAs' experiences in an expository laboratory program. Post-teaching interviews were the primary data source. GTAs' experiences were described in three dimensions: doing, knowing, and transferring. Gains available to GTAs revolved around general teaching skills; however, no gains specifically related to scientific development were found in this laboratory format. Case-study methods were used to explore and illustrate the ways GTAs develop a GTA self-image, that is, the way they see themselves as instructors. Two general chemistry laboratory programs representing two very different instructional frameworks were chosen as the context of this study. The first program used a cooperative project-based approach; the second used weekly, verification-type activities. End-of-semester interviews were collected and served as the primary data source. A follow-up case study of a new cohort of GTAs in the cooperative problem-based laboratory was undertaken to investigate changes in GTAs' self-images over the course of one semester. Pre-semester and post-semester interviews served as the primary data source. Findings suggest that GTAs' construction of their self-image is shaped through the

  1. A simple method for detecting tumor in T2-weighted MRI brain images. An image-based analysis

    International Nuclear Information System (INIS)

    Lau, Phooi-Yee; Ozawa, Shinji

    2006-01-01

    The objective of this paper is to present a decision support system which uses a computer-based procedure to detect tumor blocks or lesions in digitized medical images. The authors developed a simple method with low computational effort to detect tumors in T2-weighted Magnetic Resonance Imaging (MRI) brain images, focusing on the connection between spatial pixel values and tumor properties from four different perspectives: cases having minuscule differences between two images, using a fixed block-based method; tumor shape and size, using the edge and binary images; tumor properties based on texture values, using the spatial pixel intensity distribution controlled by a global discriminant value; and the occurrence of content-specific tumor pixels in threshold images. The method was evaluated on the following medical datasets: images taken at different time intervals, and images of different brain diseases on single and multiple slices. Experimental results revealed that the proposed technique incurred a smaller overall error than other proposed methods. In particular, the proposed method reduced both false-alarm and missed-alarm errors, which demonstrates its effectiveness. In this paper, we also present a prototype system, known as PCB, to evaluate the performance of the proposed methods in actual experiments, comparing detection accuracy and system performance. (author)
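
The fixed block-based comparison at the heart of the first perspective can be illustrated with a short sketch (a minimal NumPy sketch, not the authors' implementation; the block size and threshold are illustrative assumptions):

```python
import numpy as np

def block_difference(img_a, img_b, block=16, threshold=12.0):
    """Flag blocks whose mean absolute intensity difference between two
    co-registered T2-weighted images exceeds a threshold."""
    h, w = img_a.shape
    flags = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            diff = np.abs(img_a[r:r + block, c:c + block].astype(float)
                          - img_b[r:r + block, c:c + block].astype(float))
            if diff.mean() > threshold:
                flags.append((r, c))
    return flags  # top-left corners of suspicious blocks
```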

  2. A high-throughput surface plasmon resonance biosensor based on differential interferometric imaging

    International Nuclear Information System (INIS)

    Wang, Daqian; Ding, Lili; Zhang, Wei; Zhang, Enyao; Yu, Xinglong; Luo, Zhaofeng; Ou, Huichao

    2012-01-01

    A new high-throughput surface plasmon resonance (SPR) biosensor based on differential interferometric imaging is reported. The two SPR interferograms of the sensing surface are imaged on two CCD cameras. The phase difference between the two interferograms is 180°. The refractive index related factor (RIRF) of the sensing surface is calculated from the two simultaneously acquired interferograms. The simulation results indicate that the RIRF exhibits a linear relationship with the refractive index of the sensing surface and is unaffected by the noise, drift and intensity distribution of the light source. The affinity and kinetic information can be extracted in real time from continuously acquired RIRF distributions. The results of refractometry experiments show that the dynamic detection range of the SPR differential interferometric imaging system can be over 0.015 refractive index units (RIU). High refractive index resolution is down to 0.45 RU (1 RU = 1 × 10⁻⁶ RIU). Imaging and protein microarray experiments demonstrate the ability of high-throughput detection. The aptamer experiments demonstrate that the SPR sensor based on differential interferometric imaging has a great capability to be implemented for high-throughput aptamer kinetic evaluation. These results suggest that this biosensor has the potential to be utilized in proteomics and drug discovery after further improvement. (paper)

  3. A Novel Image Tag Completion Method Based on Convolutional Neural Transformation

    KAUST Repository

    Geng, Yanyan; Zhang, Guohui; Li, Weizhi; Gu, Yi; Liang, Ru-Ze; Liang, Gaoyuan; Wang, Jingbin; Wu, Yanbin; Patil, Nitin; Wang, Jing-Yan

    2017-01-01

    In the problems of image retrieval and annotation, complete textual tag lists of images play critical roles. However, in real-world applications, image tags are usually incomplete, so it is important to learn the complete tags for images. In this paper, we study the problem of image tag completion and propose a novel method for this problem based on a popular image representation method, the convolutional neural network (CNN). The method estimates the complete tags from the convolutional filtering outputs of images with a linear predictor. The CNN parameters, the linear predictor, and the complete tags are learned jointly by our method. We build a minimization problem to encourage consistency between the complete tags and the available incomplete tags, reduce the estimation error, and reduce the model complexity. An iterative algorithm is developed to solve the minimization problem. Experiments on benchmark image datasets show its effectiveness.
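
The joint estimation can be illustrated, in simplified form, by an alternating least-squares sketch in which the image features are assumed fixed (an illustrative assumption; the paper learns the CNN parameters jointly, and the regularization weight and update scheme here are placeholders):

```python
import numpy as np

def complete_tags(F, T_obs, mask, lam=1.0, iters=20):
    """Alternate between fitting a ridge-regularized linear predictor W in
    ||T - F @ W||^2 + lam*||W||^2 and filling in the unobserved tags.
    F: (n, d) image features (e.g., CNN outputs, assumed precomputed);
    T_obs: (n, k) incomplete 0/1 tag matrix; mask: 1 where a tag is observed."""
    T = T_obs.astype(float).copy()
    for _ in range(iters):
        # Ridge regression for the linear predictor given the current tags.
        W = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ T)
        # Keep observed tags, replace unobserved entries by the prediction.
        T = mask * T_obs + (1 - mask) * (F @ W)
    return T, W
```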

  5. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has attracted more and more attention. This paper investigates a face recognition system comprising face detection, feature extraction and face recognition, mainly researching the related theory and key technology of various preprocessing methods in the face detection process; using the KPCA method, it focuses on the different recognition results obtained with different preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. We use erosion and dilation (the opening and closing operations) and an illumination compensation method to preprocess the face images, and then apply a face recognition method based on kernel principal component analysis, with experiments carried out on a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel-based PCA algorithm makes the extracted features represent the original image information better, owing to its nonlinear feature extraction, and thus achieves a higher recognition rate. In the image preprocessing stage, we found that different operations on the images may produce different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the power of the polynomial kernel function can affect the recognition result.

  6. Nanoplatform-based molecular imaging

    National Research Council Canada - National Science Library

    Chen, Xiaoyuan

    2011-01-01

    "Nanoplathform-Based Molecular Imaging provides rationale for using nanoparticle-based probes for molecular imaging, then discusses general strategies for this underutilized, yet promising, technology...

  7. Drug release into hydrogel-based subcutaneous surrogates studied by UV imaging

    DEFF Research Database (Denmark)

    Ye, Fengbin; Larsen, Susan Weng; Yaghmur, Anan

    2012-01-01

    of the performance of drug delivery systems based on in vitro experiments. The objective of this study was to evaluate a UV imaging-based method for real-time characterization of the release and transport of piroxicam in hydrogel-based subcutaneous tissue mimics/surrogates. Piroxicam partitioning from medium chain...... upon the injection of aqueous or MCT solutions into an agarose-based hydrogel were investigated by UV imaging. The spatial distribution of piroxicam around the injection site in the gel matrix was monitored in real-time. The disappearance profiles of piroxicam from the injected aqueous solution were...... obtained. This study shows that the UV imaging methodology has considerable potential for characterizing transport properties in hydrogels, including monitoring the real-time spatial concentration distribution in vitro after administration by injection....

  8. ViCAR: An Adaptive and Landmark-Free Registration of Time Lapse Image Data from Microfluidics Experiments

    Directory of Open Access Journals (Sweden)

    Georges Hattab

    2017-05-01

    Full Text Available In order to understand gene function in bacterial life cycles, time lapse bioimaging is applied in combination with different marker protocols in so-called microfluidics chambers (i.e., a multi-well plate). In one experiment, a series of T images is recorded for one visual field, with a pixel resolution of 60 nm/px. Any (semi-)automatic analysis of the data is hampered by strong image noise, low contrast and, last but not least, considerable irregular shifts during the acquisition. Image registration corrects such shifts, enabling the next steps of the analysis (e.g., feature extraction or tracking). Image alignment faces two obstacles in this microscopic context: (a) highly dynamic structural changes in the sample (i.e., colony growth) and (b) an individual data set-specific sample environment, which makes the application of landmark-based alignments almost impossible. We present a computational image registration solution, which we refer to as ViCAR (Visual Cues based Adaptive Registration), for such microfluidics experiments, consisting of (1) the detection of particular polygons (outlined and segmented ones), referred to as visual cues, (2) the adaptive retrieval of three coordinates throughout different sets of frames, and finally (3) an image registration based on the relation of these points, correcting both rotation and translation. We tested ViCAR with different data sets and have found that it provides an effective spatial alignment, thereby paving the way to extract temporal features pertinent to each resulting bacterial colony. By using ViCAR, we achieved an image registration with 99.9% of image closeness, based on an average rmsd of 4×10⁻² pixels, and superior results compared to a state-of-the-art algorithm.

  9. Remote diagnosis via a telecommunication satellite--ultrasonic tomographic image transmission experiments.

    Science.gov (United States)

    Nakajima, I; Inokuchi, S; Tajima, T; Takahashi, T

    1985-04-01

    An experiment to transmit the ultrasonic tomographic section images required for remote medical diagnosis and care was conducted using the mobile telecommunication satellite OSCAR-10. The images received showed the intestinal condition of a patient incapable of verbal communication; however, the image screen had a fairly coarse particle structure. On the basis of these experiments, the transmission of ultrasonic tomographic images was considered extremely effective for remote diagnosis.

  10. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    Science.gov (United States)

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

  11. The Track Imaging Cerenkov Experiment

    Science.gov (United States)

    Wissel, S. A.; Byrum, K.; Cunningham, J. D.; Drake, G.; Hays, E.; Horan, D.; Kieda, D.; Kovacs, E.; Macgill, S.; Nodulman, L.

    2012-01-01

    We describe a dedicated cosmic-ray telescope that explores a new method for detecting Cerenkov radiation from high-energy primary cosmic rays and the large particle air showers they induce upon entering the atmosphere. Using a camera comprising 16 multi-anode photomultiplier tubes for a total of 256 pixels, the Track Imaging Cerenkov Experiment (TrICE) resolves substructures in particle air showers with 0.086° resolution. Cerenkov radiation is imaged using a novel two-part optical system in which a Fresnel lens provides a wide-field optical trigger and a mirror system collects delayed light with four times the magnification. TrICE records well-resolved cosmic-ray air showers at rates between 0.01 and 0.1 Hz.

  12. The Native American Experience. American Historical Images on File.

    Science.gov (United States)

    Wardwell, Lelia, Ed.

    This photo-documentation reference body presents more than 275 images chronicling the experiences of the American Indian from their prehistoric migrations to the present. The volume includes information and images illustrating the life ways of various tribes. The images are accompanied by historical information providing cultural context. The book…

  13. A data base for reactor physics experiments at KUCA, 1

    International Nuclear Information System (INIS)

    Ichihara, Chihiro; Hayashi, Masatoshi; Fujine, Shigenori; Wakamatsu, Susumu.

    1986-01-01

    A data base of the experiments done at the Critical Assembly of Kyoto University (KUCA) was constructed both on personal computers and on a mainframe. A retrieval data base built from each experiment serves as the key data base. The critical experiment data, the geometries of the core configuration or fuel elements, and various numeric data can then be referred to from the retrieval results. The personal computer program for this data base is written in BASIC, and the whole system consists of the retrieval data base and the graphic data. The compilation of the critical experiment data is now in progress. The data base system can be supplied to KUCA users on floppy disks. A universal information retrieval system, FAIRS, is available at the Data Processing Center of Kyoto University. Using this system, the retrieval data base of the experiments was constructed. Image information such as core configurations and fuel elements is stored using the ELF system, which can be linked to FAIRS. The data base on FAIRS can be accessed from each university through an online network. However, ELF is at present a closed service within Kyoto University. (author)

  14. Computed Tomography Image Origin Identification Based on Original Sensor Pattern Noise and 3-D Image Reconstruction Algorithm Footprints.

    Science.gov (United States)

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2017-07-01

    In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than sensor pattern noise (SPN) based strategy proposed for general public camera devices.

  15. Enhance wound healing monitoring through a thermal imaging based smartphone app

    Science.gov (United States)

    Yi, Steven; Lu, Minta; Yee, Adam; Harmon, John; Meng, Frank; Hinduja, Saurabh

    2018-03-01

    In this paper, we present a thermal imaging based app to augment traditional appearance-based wound growth monitoring. Accurate diagnosis and tracking of wound healing enable physicians to effectively assess, document, and individualize the treatment plan given to each wound patient. Currently, wounds are primarily examined by physicians through visual appearance and wound area. However, visual information alone cannot present a complete picture of a wound's condition. In this paper, we use a smartphone-attached thermal imager and evaluate its effectiveness in augmenting visual appearance based wound diagnosis. Instead of only monitoring temperature changes on a wound, our app presents physicians with comprehensive measurements including relative temperature, a wound healing thermal index, and wound blood flow. Through rat wound experiments, monitoring the integrated thermal measurements over a 3-week time frame, our app is able to show the underlying healing process through the blood flow. The implied significance of our app design and experiment includes: (a) it is possible to use a low-cost smartphone-attached thermal imager for added value in wound assessment, tracking, and treatment; and (b) a thermal mobile app can be used for remote wound healing assessment in a mobile-health-based solution.

  16. Chaotic Image Scrambling Algorithm Based on S-DES

    International Nuclear Information System (INIS)

    Yu, X Y; Zhang, J; Ren, H E; Xu, G S; Luo, X Y

    2006-01-01

    With the increasing security requirements for images on networks, some typical image encryption methods, such as the Arnold cat map and the Hilbert transformation, cannot meet the demands of encryption. The S-DES system can encrypt the binary stream of an image, but its fixed structure and small key space still pose risks. However, the sensitivity to initial values of the Logistic chaotic map can be usefully applied to the S-DES system, giving S-DES much greater randomness and a much larger key space. A dual image encryption algorithm based on S-DES and the Logistic map is proposed. Matlab simulation experiments show that the key space reaches 10¹⁷ and that encrypting one image takes less than one second. Compared to traditional methods, the algorithm has merits such as being easy to understand, rapid encryption speed, a large key space, and sensitivity to initial values.
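
As a rough illustration of how a logistic-map sequence can drive pixel scrambling (a minimal NumPy sketch; the parameter values, key choices, and permutation scheme are illustrative assumptions, not the paper's actual S-DES construction):

```python
import numpy as np

def logistic_sequence(x0, r, n, burn_in=100):
    """Iterate the logistic map x_{k+1} = r*x_k*(1-x_k), discarding a burn-in."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def scramble(image, x0=0.3456, r=3.99):
    """Permute pixels with a key-dependent order derived from the chaotic sequence."""
    flat = image.ravel()
    perm = np.argsort(logistic_sequence(x0, r, flat.size))
    return flat[perm].reshape(image.shape), perm

def unscramble(scrambled, perm):
    """Invert the permutation to recover the original image."""
    flat = np.empty_like(scrambled.ravel())
    flat[perm] = scrambled.ravel()
    return flat.reshape(scrambled.shape)
```

Because the map is sensitive to x0 and r, slightly different keys yield entirely different permutations, which is the property the abstract exploits.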

  17. Chaotic Image Encryption Algorithm Based on Circulant Operation

    Directory of Open Access Journals (Sweden)

    Xiaoling Huang

    2013-01-01

    Full Text Available A novel chaotic image encryption scheme based on the time-delay Lorenz system, described in terms of circulant matrices, is presented in this paper. Making use of the chaotic sequence generated by the time-delay Lorenz system, pixel permutation is carried out in the diagonal and antidiagonal directions according to the first and second components. Then, a pseudorandom chaotic sequence is generated again from the time-delay Lorenz system using all components. A modular operation is further employed for diffusion by blocks, in which the control parameter is generated depending on the plain image. Numerical experiments show that the proposed scheme possesses a large key space to resist brute-force attack, sensitive dependence on secret keys, a uniform distribution of gray values in the cipher image, and zero correlation between two adjacent cipher-image pixels. Therefore, it can be adopted as an effective and fast image encryption algorithm.

  18. Graph-cut based discrete-valued image reconstruction.

    Science.gov (United States)

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañón, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.
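
For intuition, the classic graph-cut labeling setting that this work extends to linear sensing operators can be sketched with the PyMaxflow library (a minimal binary-denoising sketch following PyMaxflow's standard grid API; the smoothness weight is an illustrative assumption, and the paper's surrogate energy functional is not reproduced):

```python
import numpy as np
import maxflow  # PyMaxflow

def denoise_binary(noisy, smoothness=50.0):
    """Binary denoising via s-t min-cut: unary terms favor the observed pixel
    value (0 or 255), pairwise terms penalize label changes between
    4-connected neighbors."""
    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes(noisy.shape)
    g.add_grid_edges(nodeids, smoothness)            # pairwise smoothness
    g.add_grid_tedges(nodeids, noisy, 255 - noisy)   # data terms to source/sink
    g.maxflow()
    return np.int_(np.logical_not(g.get_grid_segments(nodeids)))
```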

  19. Improvement of Secret Image Invisibility in Circulation Image with Dyadic Wavelet Based Data Hiding with Run-Length Coded Secret Images of Which Location of Codes are Determined with Random Number

    OpenAIRE

    Kohei Arai; Yuji Yamada

    2011-01-01

    An attempt is made for improvement of secret image invisibility in circulation images with dyadic wavelet based data hiding with run-length coded secret images of which location of codes are determined by random number. Through experiments, it is confirmed that secret images are almost invisible in circulation images. Also robustness of the proposed data hiding method against data compression of circulation images is discussed. Data hiding performance in terms of invisibility of secret images...

  20. A Novel Image Encryption Algorithm Based on DNA Encoding and Spatiotemporal Chaos

    Directory of Open Access Journals (Sweden)

    Chunyan Song

    2015-10-01

    Full Text Available DNA computing based image encryption is a new, promising field. In this paper, we propose a novel image encryption scheme based on DNA encoding and spatiotemporal chaos. In particular, after the plain image is primarily diffused with the bitwise Exclusive-OR operation, the DNA mapping rule is introduced to encode the diffused image. In order to enhance the encryption, the spatiotemporal chaotic system is used to confuse the rows and columns of the DNA encoded image. The experiments demonstrate that the proposed encryption algorithm is of high key sensitivity and large key space, and it can resist brute-force attack, entropy attack, differential attack, chosen-plaintext attack, known-plaintext attack and statistical attack.
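
The DNA-encoding step can be made concrete with one of the eight standard 2-bit-to-base mapping rules (a minimal sketch; the specific rule and the omitted chaotic confusion and diffusion stages are illustrative simplifications):

```python
import numpy as np

# One of the eight valid 2-bit-to-base mappings (respecting complementarity).
BITS_TO_BASE = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def dna_encode(image):
    """Encode each 8-bit pixel as four DNA bases (2 bits per base)."""
    strands = []
    for pixel in image.ravel():
        bits = format(int(pixel), '08b')
        strands.append(''.join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, 8, 2)))
    return strands

def dna_decode(strands, shape):
    """Invert the mapping back to 8-bit pixels."""
    pixels = [int(''.join(BASE_TO_BITS[b] for b in s), 2) for s in strands]
    return np.array(pixels, dtype=np.uint8).reshape(shape)
```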

  1. A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging

    Science.gov (United States)

    Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc

    2015-06-01

    High-speed X-ray imaging applications play a crucial role in non-destructive investigations of dynamics in material science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to more efficient use of beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated into a new custom experiment control system called Concert, which provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger. It records the temporal evolution of physical events with increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform, was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.
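
An image-based trigger of the kind described can be sketched as a streaming check on frame-to-frame change (a minimal software sketch; the difference metric and threshold are illustrative assumptions, not the paper's FPGA implementation):

```python
import numpy as np

def image_trigger(frames, threshold=8.0):
    """Yield the indices of frames whose mean absolute difference from the
    previous frame exceeds a threshold, i.e., where an event begins."""
    prev = None
    for i, frame in enumerate(frames):
        f = frame.astype(np.float32)
        if prev is not None and np.mean(np.abs(f - prev)) > threshold:
            yield i
        prev = f
```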

  2. Incident Light Frequency-Based Image Defogging Algorithm

    Directory of Open Access Journals (Sweden)

    Wenbo Zhang

    2017-01-01

    Full Text Available To solve the color distortion problem produced by the dark channel prior algorithm, an improved method for calculating the transmittance of each channel separately is proposed in this paper. Based on the Beer-Lambert Law, the relationship between the frequency of the incident light and the transmittance is analyzed, and the ratios between the channels' transmittances are derived. Then, in order to increase efficiency, the input image is resized to a smaller size before acquiring the refined transmittance, which is afterwards resized back to the size of the original image. Finally, all the transmittances are obtained with the help of the proportions between the color channels, and they are then used to restore the defogged image. Experiments suggest that the improved algorithm produces a much more natural result image in comparison with the original algorithm, eliminating the problem of high color saturation. Moreover, the improved algorithm is four to nine times faster than the original one.
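
For reference, the dark channel prior baseline that this paper refines computes a single transmission map shared by all channels (a minimal sketch with the usual patch size and ω defaults; the paper's per-channel transmittance ratios derived from the Beer-Lambert law are not reproduced here):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels and a local patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, atmosphere, patch=15, omega=0.95):
    """t(x) = 1 - omega * dark_channel(I / A); one map for all channels."""
    normalized = img / atmosphere[None, None, :]
    return 1.0 - omega * dark_channel(normalized, patch)

def recover(img, atmosphere, t, t0=0.1):
    """J = (I - A) / max(t, t0) + A, applied per channel."""
    t = np.clip(t, t0, 1.0)[..., None]
    return (img - atmosphere) / t + atmosphere
```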

  3. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images are traditionally decompressed on the CPU. However, this is too slow when the images become large, for example, 2K×2K×16bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second is a task-parallelism part, and the last is a data-parallelism part including inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce the execution time, the task-parallelism part is optimized with OpenMP techniques. The data-parallelism part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT into a 1D parallel IDWT. In experiments with a 1K×1K×16bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the CPU serial method.
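
The 2D-to-1D rewriting rests on the separability of the wavelet transform: a 2D inverse DWT is two passes of a 1D inverse, one along columns and one along rows, so every row or column can be handled by an independent GPU thread. A minimal single-level sketch using the Haar wavelet (an illustrative assumption; the paper does not state which wavelet its codec uses):

```python
import numpy as np

def ihaar1d(approx, detail, axis=-1):
    """Single-level 1D inverse Haar transform along the given axis:
    x_even = (a + d)/sqrt(2), x_odd = (a - d)/sqrt(2)."""
    approx = np.moveaxis(approx, axis, -1)
    detail = np.moveaxis(detail, axis, -1)
    out = np.empty(approx.shape[:-1] + (2 * approx.shape[-1],))
    out[..., 0::2] = (approx + detail) / np.sqrt(2.0)
    out[..., 1::2] = (approx - detail) / np.sqrt(2.0)
    return np.moveaxis(out, -1, axis)

def ihaar2d(ll, lh, hl, hh):
    """2D inverse DWT as two 1D passes: columns first, then rows.
    Each column (then each row) is independent, hence trivially parallel."""
    low = ihaar1d(ll, lh, axis=0)     # column pass on the low-pass subbands
    high = ihaar1d(hl, hh, axis=0)    # column pass on the high-pass subbands
    return ihaar1d(low, high, axis=1) # row pass
```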

  4. A SPIRAL-BASED DOWNSCALING METHOD FOR GENERATING 30 M TIME SERIES IMAGE DATA

    Directory of Open Access Journals (Sweden)

    B. Liu

    2017-09-01

    high spatial resolution images image by image. A simulated experiment and a remote sensing image downscaling experiment were conducted. In the simulated experiment, the 30-meter class map dataset Globeland30 was adopted to investigate the effect of avoiding the underdetermined problem in the downscaling procedure, and a comparison between the spiral and the window was conducted. Further, MODIS NDVI and Landsat image data were adopted to generate 30 m time series NDVI in the remote sensing image downscaling experiment. The simulated experiment results showed that the proposed method has robust performance in downscaling pixels in heterogeneous regions and indicated that it is superior to traditional window-based methods. The generated high resolution time series may benefit the mapping and updating of land cover data.

  5. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. In contrast to traditional methods, the image is first processed coarsely in macroscopic regions and then analyzed thoroughly in microscopic regions. With this method, an image is divided into regions according to the different fractal characteristics of the image edges, the fuzzy regions containing image edges are detected, and the edges are then identified with the Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and the image noise is reduced, experiments have verified that the edges of the weld seam or weld pool can be recognized correctly and quickly.

  6. An FPGA-based heterogeneous image fusion system design method

    Science.gov (United States)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection are analyzed and compared. The VHDL language and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Comparative experiments with the various fusion algorithms show that preferable heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
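
The three fusion rules compared in the paper are simple pixel-wise operations on the registered visible and infrared frames (a minimal NumPy sketch; the 0.5/0.5 weighting is an illustrative assumption):

```python
import numpy as np

def fuse_weighted(visible, infrared, w=0.5):
    """Gray-scale weighted averaging of the two registered channels."""
    return w * visible + (1.0 - w) * infrared

def fuse_max(visible, infrared):
    """Maximum selection: keep the brighter pixel of the two channels."""
    return np.maximum(visible, infrared)

def fuse_min(visible, infrared):
    """Minimum selection: keep the darker pixel of the two channels."""
    return np.minimum(visible, infrared)
```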

  7. Radiometric modeling and calibration of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) ground based measurement experiment

    Science.gov (United States)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-12-01

    The ultimate remote sensing benefits of high-resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data
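
The PC-score regression underlying the refined calibration can be sketched as follows: project the instrument spectra onto their leading principal components and regress the reference radiances on the first few scores (a minimal sketch; the use of four scores follows the abstract, while the function names and array shapes are illustrative assumptions):

```python
import numpy as np

def pc_calibration(gifts_spectra, aeri_spectra, n_scores=4):
    """Fit: reference radiance ~ intercept + PC scores @ coeffs.
    gifts_spectra, aeri_spectra: (n_obs, n_channels) matched observations."""
    mean = gifts_spectra.mean(axis=0)
    centered = gifts_spectra - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:n_scores].T                 # leading PC scores
    design = np.hstack([np.ones((scores.shape[0], 1)), scores])
    coeffs, *_ = np.linalg.lstsq(design, aeri_spectra, rcond=None)

    def calibrate(spectra):
        """Apply the learned regression to new instrument spectra."""
        s = (spectra - mean) @ vt[:n_scores].T
        return np.hstack([np.ones((s.shape[0], 1)), s]) @ coeffs

    return calibrate
```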

  8. Interactive classification and content-based retrieval of tissue images

    Science.gov (United States)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues in pixel, region and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.

  9. Simulation of single grid-based phase-contrast x-ray imaging (g-PCXI)

    Energy Technology Data Exchange (ETDEWEB)

    Lim, H.W.; Lee, H.W. [Department of Radiation Convergence Engineering, iTOMO Group, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 26493 (Korea, Republic of); Cho, H.S., E-mail: hscho1@yonsei.ac.kr [Department of Radiation Convergence Engineering, iTOMO Group, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 26493 (Korea, Republic of); Je, U.K.; Park, C.K.; Kim, K.S.; Kim, G.A.; Park, S.Y.; Lee, D.Y.; Park, Y.O.; Woo, T.H. [Department of Radiation Convergence Engineering, iTOMO Group, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 26493 (Korea, Republic of); Lee, S.H.; Chung, W.H.; Kim, J.W.; Kim, J.G. [R& D Center, JPI Healthcare Co., Ltd., Ansan 425-833 (Korea, Republic of)

    2017-04-01

    Single grid-based phase-contrast x-ray imaging (g-PCXI), a technique recently proposed by Wen et al. to retrieve absorption, scattering, and phase-gradient images from the raw image of the examined object, seems a practical method for phase-contrast imaging with great simplicity and minimal requirements on the setup alignment. In this work, we developed a useful simulation platform for g-PCXI and performed a simulation to demonstrate its viability. We also established a table-top setup for g-PCXI, which consists of a focused-linear grid (200-lines/in strip density), an x-ray tube (100-μm focal spot size), and a flat-panel detector (48-μm pixel size), and performed a preliminary experiment with some samples to show the performance of the simulation platform. We successfully obtained phase-contrast x-ray images of much enhanced contrast from both the simulation and the experiment, and the simulated contrast seemed similar to the experimental contrast, which shows the performance of the developed simulation platform. We expect that the simulation platform will be useful for designing an optimal g-PCXI system. - Highlights: • A simulation platform for the single grid-based phase-contrast x-ray imaging (g-PCXI) technique is proposed. • A numerical simulation code was implemented. • A preliminary experiment with several samples was performed for comparison. • The platform is expected to be useful for designing an optimal g-PCXI system.

  10. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    Science.gov (United States)

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  11. Comparison of Subset-Based Local and Finite Element-Based Global Digital Image Correlation

    KAUST Repository

    Pan, Bing; Wang, B.; Lubineau, Gilles; Moussawi, Ali

    2015-01-01

    Digital image correlation (DIC) techniques require an image matching algorithm to register the same physical points represented in different images. Subset-based local DIC and finite element-based (FE-based) global DIC are the two primary image matching methods that have been extensively investigated and regularly used in the field of experimental mechanics. Due to its straightforward implementation and high efficiency, subset-based local DIC has been used in almost all commercial DIC packages. However, it is argued by some researchers that FE-based global DIC offers better accuracy because of the enforced continuity between element nodes. We propose a detailed performance comparison between these different DIC algorithms both in terms of measurement accuracy and computational efficiency. Then, by measuring displacements of the same calculation points using the same calculation algorithms (e.g., correlation criterion, initial guess estimation, subpixel interpolation, optimization algorithm and convergence conditions) and identical calculation parameters (e.g., subset or element size), the performances of subset-based local DIC and two FE-based global DIC approaches are carefully compared in terms of measurement error and computational efficiency using both numerical tests and real experiments. A detailed examination of the experimental results reveals that, when subset (element) size is not very small and the local deformation within a subset (element) can be well approximated by the shape function used, standard subset-based local DIC approach not only provides better results in measured displacements, but also demonstrates much higher computation efficiency. However, several special merits of FE-based global DIC approaches are indicated.
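
The core operation of subset-based local DIC, registering each reference subset in the deformed image by maximizing a correlation criterion, can be sketched with an integer-pixel ZNCC search (a minimal sketch; production DIC codes add shape functions, subpixel interpolation and optimization, which are omitted here):

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_subset(ref, deformed, center, half=10, search=5):
    """Integer-pixel displacement of one subset by exhaustive ZNCC search.
    Assumes the subset and search window lie fully inside both images."""
    r, c = center
    subset = ref[r - half:r + half + 1, c - half:c + half + 1]
    best, best_uv = -2.0, (0, 0)
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            cand = deformed[r + du - half:r + du + half + 1,
                            c + dv - half:c + dv + half + 1]
            score = zncc(subset, cand)
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv, best
```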

  13. Self-training-based spectral image reconstruction for art paintings with multispectral imaging.

    Science.gov (United States)

    Xu, Peng; Xu, Haisong; Diao, Changyu; Ye, Zhengnan

    2017-10-20

    A self-training-based spectral reflectance recovery method was developed to accurately reconstruct the spectral images of art paintings with multispectral imaging. By partitioning the multispectral images with the k-means clustering algorithm, the training samples are directly extracted from the art painting itself to restrain the deterioration of spectral estimation caused by the material inconsistency between the training samples and the art painting. Coordinate paper is used to locate the extracted training samples. The spectral reflectances of the extracted training samples are acquired indirectly with a spectroradiometer, and the circle Hough transform is adopted to detect the circle measuring area of the spectroradiometer. Through simulation and a practical experiment, the implementation of the proposed method is explained in detail, and it is verified to have better reflectance recovery performance than that using the commercial target and is comparable to the approach using a painted color target.
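
Spectral reflectance recovery of this kind is commonly posed as a linear mapping from camera responses to reflectance, learned from the training samples by least squares (a minimal pseudoinverse sketch; the paper's k-means-based self-training sample extraction is not reproduced, and the array shapes are illustrative assumptions):

```python
import numpy as np

def train_recovery(train_responses, train_reflectances):
    """Least-squares operator W mapping camera responses to reflectance.
    train_responses: (n_samples, n_channels);
    train_reflectances: (n_samples, n_wavelengths)."""
    W, *_ = np.linalg.lstsq(train_responses, train_reflectances, rcond=None)
    return W

def recover_spectral_image(multispectral, W):
    """Apply W pixel-wise: (H, W_px, n_channels) -> (H, W_px, n_wavelengths)."""
    h, w, c = multispectral.shape
    return (multispectral.reshape(-1, c) @ W).reshape(h, w, -1)
```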

  14. A model-based radiography restoration method based on simple scatter-degradation scheme for improving image visibility

    Science.gov (United States)

    Kim, K.; Kang, S.; Cho, H.; Kang, W.; Seo, C.; Park, C.; Lee, D.; Lim, H.; Lee, H.; Kim, G.; Park, S.; Park, J.; Kim, W.; Jeon, D.; Woo, T.; Oh, J.

    2018-02-01

    In conventional planar radiography, image visibility is often limited, mainly due to the superimposition of the object structure under investigation and the artifacts caused by scattered x-rays and noise. Several methods, including computed tomography (CT) as a multiplanar imaging modality, air-gap and grid techniques for the reduction of scatter, and phase-contrast imaging as another image-contrast modality, have been extensively investigated in an attempt to overcome these difficulties. However, those methods typically require higher x-ray doses or special equipment. In this work, as another approach, we propose a new model-based radiography restoration method based on a simple scatter-degradation scheme in which the intensity of the scattered x-rays and the transmission function of a given object are estimated from a single x-ray image to restore the original degraded image. We implemented the proposed algorithm and performed an experiment to demonstrate its viability. Our results indicate that the degradation of image characteristics by scattered x-rays and noise was effectively recovered using the proposed method, which considerably improves image visibility in radiography.
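
A simple scatter-degradation model of the kind described treats the measured image as primary transmission plus a smooth scatter term, I = I0·T + S, so the transmission can be recovered once S is estimated (a minimal sketch; estimating S as a blurred fraction of the measured image is an illustrative assumption, not the paper's estimator):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def restore(measured, i0, scatter_fraction=0.3, sigma=50.0, eps=1e-6):
    """Recover the transmission T from I = I0*T + S.
    S is approximated as a strongly low-pass-filtered fraction of the image."""
    scatter = scatter_fraction * gaussian_filter(measured, sigma)
    primary = np.clip(measured - scatter, eps, None)
    return primary / float(i0)  # estimated object transmission function
```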

  15. Supervised learning of tools for content-based search of image databases

    Science.gov (United States)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge- based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically- constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.

  16. Image processing system design for microcantilever-based optical readout infrared arrays

    Science.gov (United States)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple production. In addition, theory predicts the technology's high thermal detection sensitivity. It therefore has very broad application prospects in the field of high-performance infrared detection. This paper mainly focuses on an image capturing and processing system for this new type of optical-readout uncooled infrared imaging technology based on MEMS. The image capturing and processing system consists of software and hardware. We build our image processing core hardware platform on TI's high-performance DSP chip, the TMS320DM642, and design our image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS sensor. Finally, we use Intel's network transceiver device, the LXT971A, to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design our video capture driver program based on TI's class mini-driver and our network output program based on the NDK kit for image capturing, processing and transmitting. Experiments show that the system has the advantages of high capture resolution and fast processing speed.

  17. AUTOMATIC MULTILEVEL IMAGE SEGMENTATION BASED ON FUZZY REASONING

    Directory of Open Access Journals (Sweden)

    Liang Tang

    2011-05-01

    Full Text Available An automatic multilevel image segmentation method based on sup-star fuzzy reasoning (SSFR) is presented. Using the well-known sup-star fuzzy reasoning technique, the proposed algorithm combines the global statistical information implied in the histogram with the local information represented by the fuzzy sets of gray levels, and aggregates all the gray levels into several classes characterized by the local maximum values of the histogram. The presented method has the merits of determining the number of segmentation classes automatically and avoiding the calculation of segmentation thresholds. Simulated and real image segmentation experiments demonstrate that the SSFR is effective.
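
The aggregation of gray levels into classes anchored at local histogram maxima can be sketched with a peak finder and nearest-peak assignment (a minimal sketch; the sup-star fuzzy reasoning itself is not reproduced, and the smoothing and prominence parameters are illustrative assumptions):

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import gaussian_filter1d

def multilevel_segment(image, smooth_sigma=3.0, min_prominence=0.001):
    """Label each pixel of an 8-bit grayscale image with the index of the
    nearest local maximum of its smoothed, normalized histogram."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    hist = gaussian_filter1d(hist.astype(float) / hist.sum(), smooth_sigma)
    peaks, _ = find_peaks(hist, prominence=min_prominence)
    if len(peaks) == 0:
        return np.zeros_like(image, dtype=np.int32)
    # Build a 256-entry lookup table: gray level -> index of nearest peak.
    levels = np.arange(256)
    labels = np.abs(levels[:, None] - peaks[None, :]).argmin(axis=1)
    return labels[image.astype(np.uint8)]
```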

  18. CMOS image sensor-based implantable glucose sensor using glucose-responsive fluorescent hydrogel.

    Science.gov (United States)

    Tokuda, Takashi; Takahashi, Masayuki; Uejima, Kazuhiro; Masuda, Keita; Kawamura, Toshikazu; Ohta, Yasumi; Motoyama, Mayumi; Noda, Toshihiko; Sasagawa, Kiyotaka; Okitsu, Teru; Takeuchi, Shoji; Ohta, Jun

    2014-11-01

    A CMOS image sensor-based implantable glucose sensor based on an optical-sensing scheme is proposed and experimentally verified. A glucose-responsive fluorescent hydrogel is used as the mediator in the measurement scheme. The wired implantable glucose sensor was realized by integrating a CMOS image sensor, hydrogel, UV light emitting diodes, and an optical filter on a flexible polyimide substrate. Feasibility of the glucose sensor was verified by both in vitro and in vivo experiments.

  19. Advanced image based methods for structural integrity monitoring: Review and prospects

    Science.gov (United States)

    Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.

    2018-02-01

    There is a growing trend in engineering to develop methods for structural integrity monitoring and characterization of in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics, brought about a paradigm change in phenomena sensing. Hence, several widely applicable optical approaches are playing a significant role in support of experiment. The current review manuscript describes advanced image based methods for structural integrity monitoring, and focuses on methods such as Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI) and Speckle Pattern Shearing Interferometry (Shearography). These non-contact full-field techniques rely on intensive image processing methods to measure mechanical behaviour, and evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.

  20. Evidence-based cancer imaging

    Energy Technology Data Exchange (ETDEWEB)

    Shinagare, Atul B.; Khorasani, Ramin [Dept. of Radiology, Brigham and Women's Hospital, Boston (United States)]

    2017-01-15

    With the advances in the field of oncology, imaging is increasingly used in the follow-up of cancer patients, leading to concerns about over-utilization. Therefore, it has become imperative to make imaging more evidence-based, efficient, cost-effective and equitable. This review explores the strategies and tools to make diagnostic imaging more evidence-based, mainly in the context of follow-up of cancer patients.

  1. PCANet-Based Structural Representation for Nonrigid Multimodal Medical Image Registration

    Directory of Open Access Journals (Sweden)

    Xingxing Zhu

    2018-05-01

    Full Text Available Nonrigid multimodal image registration remains a challenging task in medical image processing and analysis. Structural representation (SR)-based registration methods have attracted much attention recently. However, the existing SR methods cannot provide satisfactory registration accuracy due to the utilization of hand-designed features for structural representation. To address this problem, a structural representation method based on an improved version of the simple deep learning network named PCANet is proposed for medical image registration. In the proposed method, PCANet is first trained on numerous medical images to learn convolution kernels for the network. Then, a pair of input medical images to be registered is processed by the learned PCANet. The features extracted by various layers of the PCANet are fused to produce multilevel features. Structural representation images are constructed for the two input images based on a nonlinear transformation of these multilevel features. The Euclidean distance between the structural representation images is calculated and used as the similarity metric. The objective function defined by the similarity metric is optimized by the L-BFGS method to obtain the parameters of the free-form deformation (FFD) model. Extensive experiments on simulated and real multimodal image datasets show that, compared with state-of-the-art registration methods such as the modality-independent neighborhood descriptor (MIND), normalized mutual information (NMI), the Weber local descriptor (WLD), and the sum of squared differences on entropy images (ESSD), the proposed method provides better registration performance in terms of target registration error (TRE) and subjective human vision.
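
The first stage of PCANet learns its convolution kernels by PCA over mean-removed image patches, which can be sketched as follows (a minimal sketch of patch-PCA filter learning; the patch size and filter count are illustrative assumptions, and the network's later stages and the registration pipeline are omitted):

```python
import numpy as np

def learn_pcanet_filters(images, patch=7, n_filters=8):
    """Collect mean-removed patches from all training images and take the
    leading PCA eigenvectors as convolution kernels."""
    patches = []
    for img in images:
        h, w = img.shape
        for r in range(0, h - patch + 1, patch):
            for c in range(0, w - patch + 1, patch):
                p = img[r:r + patch, c:c + patch].astype(float).ravel()
                patches.append(p - p.mean())   # remove the patch mean
    X = np.array(patches)                      # (n_patches, patch*patch)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, patch, patch)
```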

  2. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback

    Directory of Open Access Journals (Sweden)

    Haoting Liu

    2017-02-01

    Full Text Available An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, an imaging sensor can achieve a finer perception of the environmental light and thus can guide more precise lighting control. Before the system works, a large amount of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets. Then the cluster benchmarks of the objective LEEMs can be obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize a kind of optimal luminance tuning. When the system works, it first captures the lighting image using a wearable camera. Then it computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control can be applied to obtain a kind of optimal lighting effect. Many experimental results have shown that the proposed system can tune the LED lamp automatically according to environmental luminance changes.
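
The feedback loop, measuring a luminance metric from the captured image, comparing it with a benchmark, and adjusting the lamp, can be sketched as a simple proportional controller (a minimal sketch; the metric, benchmark, gain, and duty-cycle interface are illustrative assumptions, not the paper's LEEM definitions):

```python
import numpy as np

def luminance_metric(image):
    """A single objective metric: mean luminance of the captured image."""
    return float(np.mean(image))

def control_step(image, benchmark, duty, gain=0.002):
    """Proportional update of the LED duty cycle toward the benchmark."""
    error = benchmark - luminance_metric(image)
    return float(np.clip(duty + gain * error, 0.0, 1.0))
```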

  3. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based on computer vision techniques. SketchUp, CityEngine, Photomodeler, and Agisoft Photoscan are the main software packages representing these approaches, respectively, and each offers different methods suitable for image-based 3D city modeling. The literature shows that, to date, no complete comparative study is available on creating a full 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, focusing on data acquisition methods, data processing techniques, and the output 3D model products. The study area for this work is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains the various governing parameters, factors, and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and offers some personal comments on what can and cannot be done with each software package. In conclusion, each software package has advantages and limitations, and the choice of software depends on the user's requirements for the 3D project. For normal visualization projects, SketchUp is a good option; for 3D documentation records, Photomodeler gives good results.

  4. Combined Kernel-Based BDT-SMO Classification of Hyperspectral Fused Images

    Directory of Open Access Journals (Sweden)

    Fenghua Huang

    2014-01-01

    To solve the poor generalization and flexibility of single-kernel SVM classifiers when classifying combined spectral and spatial features, this paper proposes a solution to improve the classification accuracy and efficiency of hyperspectral fused images: (1) different radial basis kernel functions (RBFs) are employed for the spectral and textural features, and a new combined radial basis kernel function (CRBF) is proposed by combining them in a weighted manner; (2) a binary decision tree-based multiclass SMO (BDT-SMO) is used to classify hyperspectral fused images; (3) experiments are carried out in which a single radial basis function (SRBF)-based BDT-SMO classifier and a CRBF-based BDT-SMO classifier are used, respectively, to classify the land usage of hyperspectral fused images, with genetic algorithms (GA) used to optimize the kernel parameters of the classifiers. The results show that, compared with SRBF, CRBF-based BDT-SMO classifiers display greater classification accuracy and efficiency.
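
    The weighted kernel combination can be sketched as follows (NumPy; the weight w and the two gammas would be tuned, e.g. by GA as in the paper, and the resulting matrix could be fed to an SVM that accepts a precomputed kernel). A convex combination of valid kernels is itself a valid (positive semi-definite) kernel:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_rbf(X_spec, Y_spec, X_tex, Y_tex, g_spec, g_tex, w):
    # CRBF sketch: weighted combination of a spectral-feature RBF
    # and a textural-feature RBF.
    return (w * rbf_kernel(X_spec, Y_spec, g_spec)
            + (1 - w) * rbf_kernel(X_tex, Y_tex, g_tex))
```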

  5. Combining the Pixel-based and Object-based Methods for Building Change Detection Using High-resolution Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    ZHANG Zhiqiang

    2018-01-01

    Timely and accurate change detection of buildings provides important information for urban planning and management. With the rapid development of satellite remote sensing technology, detecting building changes from high-resolution remote sensing images has received wide attention. Given that pixel-based change detection methods often lead to low accuracy while object-based methods are complicated to use, this research proposes a method that combines pixel-based and object-based approaches for detecting building changes from high-resolution remote sensing images. First, based on multiple features extracted from the high-resolution images, a random forest classifier is applied to detect building changes at the pixel level. Then, a segmentation method is applied to segment the post-phase remote sensing image into image objects. Finally, the pixel-level building changes and the post-phase image objects are fused to recognize the changed building objects. Multi-temporal QuickBird images are used as experimental data. The results indicate that the proposed method can reduce the influence of environmental differences, such as light intensity and view angle, on building change detection, and effectively improve its accuracy.
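
    A minimal sketch of the pixel/object fusion step (Python with scikit-learn; feature extraction and the segmentation itself are assumed to come from earlier steps, and the majority-vote rule is an illustrative choice, not necessarily the paper's exact fusion criterion):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def detect_changed_objects(pixel_features, features_train, labels_train,
                           segments):
    # Pixel-level change detection with a random forest (1 = changed),
    # then fusion with post-phase segments by per-object majority vote.
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(features_train, labels_train)
    changed = rf.predict(pixel_features.reshape(-1, pixel_features.shape[-1]))
    changed = changed.reshape(segments.shape)
    changed_objects = np.zeros_like(segments, dtype=bool)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        if changed[mask].mean() > 0.5:   # majority of the object's pixels changed
            changed_objects[mask] = True
    return changed_objects
```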

  6. Edge-based correlation image registration for multispectral imaging

    Science.gov (United States)

    Nandy, Prabal [Albuquerque, NM

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
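
    The two steps can be sketched as follows (NumPy/SciPy; a Sobel gradient magnitude stands in for whatever edge filter the patent specifies, and only a translational offset is recovered):

```python
import numpy as np
from scipy.ndimage import sobel

def edge_phase_correlation(img_a, img_b):
    # Edge-filter two spectral bands, then phase-correlate them; the peak
    # of the correlation surface gives the translational registration offset.
    ea = np.hypot(sobel(img_a, 0), sobel(img_a, 1))
    eb = np.hypot(sobel(img_b, 0), sobel(img_b, 1))
    Fa, Fb = np.fft.fft2(ea), np.fft.fft2(eb)
    cross = Fa * np.conj(Fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above the midpoint around to negative shifts.
    shifts = tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
    return shifts, corr
```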

  7. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain

    Science.gov (United States)

    Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin

    2018-04-01

    Multiresolution-based methods, such as wavelets and Contourlets, are widely used for image fusion. This work presents a new image fusion framework that utilizes the area-based standard deviation in the dual-tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual-tree Contourlet transform to obtain low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on the area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and performs better in both subjective and objective evaluation.
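
    A sketch of the two fusion rules (Python with PyWavelets; an ordinary single-level DWT stands in for the dual-tree Contourlet transform, which is not available in standard libraries):

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_std(x, size=7):
    # Area-based (windowed) standard deviation.
    m = uniform_filter(x, size)
    return np.sqrt(np.maximum(uniform_filter(x * x, size) - m * m, 0.0))

def fuse_multifocus(img1, img2, wavelet='db2'):
    # Low-pass bands: weighted average driven by the area standard
    # deviation; high-pass bands: max-absolute rule.
    cA1, d1 = pywt.dwt2(img1, wavelet)
    cA2, d2 = pywt.dwt2(img2, wavelet)
    s1, s2 = local_std(cA1), local_std(cA2)
    w = s1 / (s1 + s2 + 1e-12)
    cA = w * cA1 + (1 - w) * cA2
    details = tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                    for a, b in zip(d1, d2))
    return pywt.idwt2((cA, details), wavelet)
```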

  8. Multiscale Distance Coherence Vector Algorithm for Content-Based Image Retrieval

    Science.gov (United States)

    Jiexian, Zeng; Xiupeng, Liu

    2014-01-01

    A multiscale distance coherence vector algorithm for content-based image retrieval (CBIR) is proposed to address two shortcomings of the distance coherence vector algorithm: different shapes can produce the same descriptor, and its noise robustness is poor. In this algorithm, the image contour curve is first evolved by a Gaussian function, and the distance coherence vector is then extracted from the contours of the original image and of the evolved images. The multiscale distance coherence vector is obtained by a reasonable weight distribution over the distance coherence vectors of the evolved image contours. The algorithm is not only invariant to translation, rotation, and scaling transformations but also has good noise robustness. Experimental results show that the algorithm achieves a higher recall rate and precision rate when retrieving images polluted by noise. PMID:24883416

  9. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    Science.gov (United States)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  11. New segmentation-based tone mapping algorithm for high dynamic range image

    Science.gov (United States)

    Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong

    2017-07-01

    Traditional tone mapping algorithms for displaying high dynamic range (HDR) images have the drawback of losing the impression of brightness, contrast, and color information. To overcome this, we propose a new tone mapping algorithm based on dividing the image into different exposure regions. First, the over-exposure region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray levels of the histogram, the under-exposure and normal-exposure regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to obtain the final result. Experimental results show that the proposed algorithm achieves better performance than other algorithms in both visual quality and an objective contrast criterion.

  12. Mechanical Damage Detection of Indonesia Local Citrus Based on Fluorescence Imaging

    Science.gov (United States)

    Siregar, T. H.; Ahmad, U.; Sutrisno; Maddu, A.

    2018-05-01

    Citrus that has experienced physical damage to its peel produces essential oils that contain polymethoxylated flavones. Polymethoxylated flavones are fluorescent substances and can thus be detected by fluorescence imaging. This study aims to characterize the fluorescence spectra and to determine the damaged regions of citrus peel based on fluorescence images. Pulung citrus from Batu district, East Java, a famous citrus production area in Indonesia, was used in the experiment. It was observed that the image processing could detect the mechanically damaged regions. Fluorescence imaging can thus be used to classify citrus into two categories, sound and defective.

  13. Block-Based Compressed Sensing for Neutron Radiation Image Using WDFB

    Directory of Open Access Journals (Sweden)

    Wei Jin

    2015-01-01

    An ideal compression method for neutron radiation images should have a high compression ratio while keeping more of the original image's details. Compressed sensing (CS), which can break through the restrictions of the sampling theorem, is likely to offer an efficient compression scheme for neutron radiation images. Combining the wavelet transform with directional filter banks, a novel nonredundant multiscale geometry analysis transform named Wavelet Directional Filter Banks (WDFB) is constructed and applied to represent neutron radiation images sparsely. Then, the block-based CS technique is introduced and a high-performance CS scheme for neutron radiation images is proposed. By performing a two-step iterative shrinkage algorithm, the L1-norm minimization problem is solved to reconstruct the neutron radiation image from random measurements. Experimental results demonstrate that the scheme not only obviously improves the quality of the reconstructed image but also retains more details of the original image.
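
    A minimal sketch of the per-block recovery step (NumPy), using basic iterative shrinkage-thresholding (ISTA), a simpler relative of the two-step variant in the paper, and operating directly on a sparse coefficient vector rather than on a WDFB-transformed block:

```python
import numpy as np

def ista(y, Phi, lam=0.05, n_iter=200):
    # Solve min_x 0.5*||y - Phi x||^2 + lam*||x||_1 for one block's
    # sparse coefficients from random measurements y.
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Usage: random Gaussian measurement of a sparse block, then recovery.
rng = np.random.default_rng(1)
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = rng.normal(size=10)
Phi = rng.normal(size=(96, 256)) / np.sqrt(96)
x_hat = ista(Phi @ x_true, Phi)
```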

  14. iScreen: Image-Based High-Content RNAi Screening Analysis Tools.

    Science.gov (United States)

    Zhong, Rui; Dong, Xiaonan; Levine, Beth; Xie, Yang; Xiao, Guanghua

    2015-09-01

    High-throughput RNA interference (RNAi) screening has opened up a path to investigating functional genomics in a genome-wide pattern. However, such studies are often restricted to assays that have a single readout format. Recently, advanced image technologies have been coupled with high-throughput RNAi screening to develop high-content screening, in which one or more cell image(s), instead of a single readout, were generated from each well. This image-based high-content screening technology has led to genome-wide functional annotation in a wider spectrum of biological research studies, as well as in drug and target discovery, so that complex cellular phenotypes can be measured in a multiparametric format. Despite these advances, data analysis and visualization tools are still largely lacking for these types of experiments. Therefore, we developed iScreen (image-Based High-content RNAi Screening Analysis Tool), an R package for the statistical modeling and visualization of image-based high-content RNAi screening. Two case studies were used to demonstrate the capability and efficiency of the iScreen package. iScreen is available for download on CRAN (http://cran.cnr.berkeley.edu/web/packages/iScreen/index.html). The user manual is also available as a supplementary document. © 2014 Society for Laboratory Automation and Screening.

  15. The Role of Consumer Experiences in Building the image of brands: A Study in Airlines

    Directory of Open Access Journals (Sweden)

    Ana Iris Tomás Vasconcelos

    2015-04-01

    Studies on brands and consumer experience have gained emphasis since the twentieth century; however, the relationship between these themes still has gaps. Therefore, this study examines the role of consumer experiences in building brand image through the identification of thoughts, feelings, and actions arising from consumer experiences with airlines, and the types of associations that consumers make with such brands. A variation of the qualitative critical incident technique was used, considering those remembered experiences that stood out in consumer perception. Ten users of air services were interviewed, based on a two-part semi-structured form: a description of experiences with airlines, and information about the image of airline brands. The analyzed data revealed that thoughts, feelings, and actions arising from consumer experiences become important elements in shaping the perception of airline brands. Through the consumption experience, consumers mainly use service attributes to build their perception of airline brands; these attributes are used either directly or to support other types of associations, such as those related to company size.

  16. The impact of brand experience on attitudes and brand image : A quantitative study

    OpenAIRE

    Isotalo, Anni; Watanen, Samu

    2015-01-01

    Research questions: How to create an engaging brand experience in marketing context? How does an engaging brand experience affect consumer attitudes and brand image? Purpose of the study: The authors propose that the relationship between brand experience and formation of brand loyalty can be mediated by brand affect: positive attitude and brand image. The study discovers the components of an engaging brand experience and indicates their effect on consumer attitudes and brand image. Conclusion...

  17. Fuzzy Matching Based on Gray-scale Difference for Quantum Images

    Science.gov (United States)

    Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia

    2018-05-01

    Quantum image processing has recently emerged as an essential topic for practical tasks, e.g. real-time image matching. Previous studies have shown that quantum superposition and entanglement can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image that is most similar to the template image. First, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images by thresholding. If all of the obtained gray-scale differences are no greater than the threshold value, the fuzzy matching of the quantum images is successful. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at a low cost and also enables exponentially significant speedup via quantum parallel computation.

  18. Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer.

    Science.gov (United States)

    Gutman, David A; Dunn, William D; Cobb, Jake; Stoner, Richard M; Kalpathy-Cramer, Jayashree; Erickson, Bradley

    2014-01-01

    Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a light framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework to wrap around the REST application programming interface (API) and query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance.

  19. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    Directory of Open Access Journals (Sweden)

    Shaoming Pan

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data from their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on an analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm achieves a total parallel access probability approximately 10-15% higher than those of other algorithms, and that performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
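
    A sketch of the first step (NumPy; the session-based co-access definition is an assumption for illustration, since the paper's exact correlation measure is not reproduced here). Highly correlated tiles would then be placed on different storage nodes so that one query can fetch them in parallel:

```python
import numpy as np

def access_correlation_matrix(log, n_tiles):
    # Build an access correlation matrix from a historical access log:
    # log is a list of sessions, each a list of tile ids; tiles requested
    # together in one session get their co-access count incremented.
    C = np.zeros((n_tiles, n_tiles), dtype=np.int64)
    for session in log:
        tiles = sorted(set(session))
        for i, a in enumerate(tiles):
            for b in tiles[i + 1:]:
                C[a, b] += 1
                C[b, a] += 1
    return C

# Usage on a toy log of three sessions over four tiles:
C = access_correlation_matrix([[0, 1, 2], [1, 2, 3], [0, 2]], n_tiles=4)
```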

  1. Point cloud-based survey for cultural heritage – An experience of integrated use of range-based and image-based technology for the San Francesco convent in Monterubbiano

    Directory of Open Access Journals (Sweden)

    A. Meschini

    2014-06-01

    This paper presents some results of a point cloud-based survey carried out through integrated methodologies combining active and passive 3D acquisition techniques for processing 3D models. The experiment is part of a research project still in progress, conducted by an interdisciplinary team from the School of Architecture and Design of Ascoli Piceno and funded by the University of Camerino. We describe an experiment conducted on the convent of San Francesco, located in the town center of Monterubbiano (Marche, Italy). The whole complex has undergone a number of substantial changes since its foundation in 1247. The survey blended range-based 3D data acquired with a TOF laser scanner and image-based 3D data acquired with a UAV equipped with a digital camera, in order to survey some external parts difficult to reach with TLS. The integration of the two acquisition methods aimed to define a workflow suitable for processing dense 3D models, from which to generate high-poly and low-poly 3D models useful for describing complex architectures for different purposes, such as photorealistic representations, historical documentation, and risk assessment analyses based on Finite Element Methods (FEM).

  2. The CORONAS-Photon/TESIS experiment on EUV imaging spectroscopy of the Sun

    Science.gov (United States)

    Kuzin, S.; Zhitnik, I.; Bogachev, S.; Bugaenko, O.; Ignat'ev, A.; Mitrofanov, A.; Perzov, A.; Shestov, S.; Slemzin, V.; Suhodrev, N.

    The new TESIS experiment is being developed for the Russian CORONAS-Photon mission, whose launch is planned for the end of 2007. The experiment is aimed at studying the activity of the Sun in the minimum, rise, and maximum phases of the 24th cycle of solar activity by the method of XUV imaging spectroscopy, which is based on registering monochromatic full-Sun images with high spatial and temporal resolution. The scientific tasks of the experiment are: (i) investigation of dynamic processes in the corona (flares, CMEs, etc.) with high spatial (up to 1 arcsec) and temporal (up to 1 second) resolution; (ii) determination of the main plasma parameters, such as plasma electron and ion density and temperature, differential emission measure, etc.; (iii) study of the appearance and development of large-scale, long-lived magnetic structures in the solar corona, and of the influence of these structures on the global activity of the corona; (iv) study of the mechanisms of energy accumulation and release in solar flares, and of the transformation of this energy into plasma heating and kinetic energy. To obtain the information for these studies, TESIS will register full-Sun images in narrow spectral intervals and in the monochromatic lines of He II, Si XI, Fe XXI-Fe XXIII, and Mg XII ions. The instrument includes 5 independent channels: 2 telescopes for 304 and 132 A, a wide-field (2.5 degree) coronagraph, and 280-330 A and 8.42 A spectroheliographs. A detailed description of the TESIS experiment and instrument is presented.

  3. Task-based statistical image reconstruction for high-quality cone-beam CT

    Science.gov (United States)

    Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-11-01

    Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a

  4. Unsupervised segmentation of medical image based on difference of mutual information

    Institute of Scientific and Technical Information of China (English)

    LÜ Qingwen; CHEN Wufan

    2006-01-01

    In the scope of medical image processing, segmentation is important and difficult. Two problems still trouble this field: how to determine the number of clusters in an image, and how to segment medical images containing lesions. A new segmentation method called DDC, based on the difference of mutual information (dMI) and pixons, is proposed in this paper. Experiments demonstrate that dMI captures an intrinsic relationship between the segmented image and the original one, and so it can be used to determine the number of clusters well. Furthermore, multi-modality medical images with lesions can be automatically and successfully segmented by the DDC method.

  5. An Image Encryption Algorithm Based on Balanced Pixel and Chaotic Map

    Directory of Open Access Journals (Sweden)

    Jian Zhang

    2014-01-01

    Image encryption technology has been applied in many fields and is becoming the main way of protecting image information security. There are many methods of image encryption; however, to obtain a better encryption effect, existing algorithms often need to encrypt an image several times, and there is no effective method to decide the number of encryption rounds, which is generally judged by the human eye. This paper proposes an image encryption algorithm based on chaos and simultaneously proposes a balanced-pixel algorithm to determine the number of encryption rounds. Many simulation experiments have been done, including encryption effect and security analyses. Experimental results show that the proposed method is feasible and effective.
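
    A minimal single-round sketch of chaos-based encryption (NumPy; the logistic map is a standard choice for such ciphers, but the paper's balanced-pixel criterion for choosing the number of rounds is not reproduced here):

```python
import numpy as np

def logistic_keystream(n, x0=0.3567, r=3.99, burn_in=1000):
    # Keystream from the chaotic logistic map x <- r*x*(1-x);
    # x0 and r act as the secret key (r near 4 keeps the map chaotic).
    x, out = x0, np.empty(n, dtype=np.uint8)
    for _ in range(burn_in):            # discard the transient
        x = r * x * (1.0 - x)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_encrypt(img, key=(0.3567, 3.99)):
    # Encrypt or decrypt a uint8 image (XOR is its own inverse).
    ks = logistic_keystream(img.size, *key)
    return (img.ravel() ^ ks).reshape(img.shape)
```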

  6. Detail Enhancement for Infrared Images Based on Propagated Image Filter

    Directory of Open Access Journals (Sweden)

    Yishu Peng

    2016-01-01

    To display high-dynamic-range images acquired by thermal camera systems, 14-bit raw infrared data must be mapped into 8-bit gray values. This paper presents a new detail enhancement method for infrared images that displays the image with satisfactory contrast and brightness, rich detail information, and no artifacts caused by the processing. We first adopt a propagated image filter to smooth the input image and separate it into a base layer and a detail layer. Then, we refine the base layer by using modified histogram projection for compression, while the adaptive weights derived from the layer decomposition are used as strict gain control for the detail layer. The final display result is obtained by recombining the two modified layers. Experimental results on both cooled and uncooled infrared data verify that the proposed method outperforms methods based on log-power histogram modification and bilateral filter-based detail enhancement in both detail enhancement and visual effect.

  7. An improved feature extraction algorithm based on KAZE for multi-spectral image

    Science.gov (United States)

    Yang, Jianping; Li, Jun

    2018-02-01

    Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation, and modern military applications. Image preprocessing, such as feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although feature matching algorithms based on a linear scale space, such as SIFT and SURF, are strongly robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using an adjusted-cosine vector. The experimental results show that the improved KAZE yields remarkably more features and a higher matching rate than the original KAZE algorithm.
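
    A baseline sketch of KAZE feature matching with OpenCV (the paper's adjusted-cosine distance is replaced here by the standard L2 ratio test available in OpenCV; inputs are assumed to be uint8 grayscale images):

```python
import cv2

def kaze_match(img1, img2, ratio=0.8):
    # Detect KAZE features (nonlinear scale space) in both images and
    # keep matches that pass Lowe's ratio test.
    kaze = cv2.KAZE_create()
    k1, d1 = kaze.detectAndCompute(img1, None)
    k2, d2 = kaze.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < ratio * n.distance]
    return k1, k2, good
```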

  8. Regularized image denoising based on spectral gradient optimization

    International Nuclear Information System (INIS)

    Lukić, Tibor; Lindblad, Joakim; Sladoje, Nataša

    2011-01-01

    Image restoration methods, such as denoising, deblurring, inpainting, etc., are often based on the minimization of an appropriately defined energy function. We consider energy functions for image denoising which combine a quadratic data-fidelity term and a regularization term, where the properties of the latter are determined by the potential function used. Many potential functions have been suggested for different purposes in the literature. We compare the denoising performance achieved by ten different potential functions. Several methods for efficient minimization of regularized energy functions exist; most, however, are only applicable to particular choices of potential functions. To enable a comparison of all the observed potential functions, we propose to minimize the objective function using a spectral gradient approach; spectral gradient methods put very weak restrictions on the potential function used. We present and evaluate the performance of one spectral conjugate gradient and one cyclic spectral gradient algorithm, and conclude from experiments that both are well suited for the task. We compare the performance with three total variation-based state-of-the-art methods for image denoising. From the empirical evaluation, we conclude that denoising using the Huber potential (for images degraded by higher levels of noise; signal-to-noise ratio below 10 dB) and the Geman and McClure potential (for less noisy images), in combination with the spectral conjugate gradient minimization algorithm, shows the overall best performance.
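
    A minimal sketch of the energy set-up (NumPy), assuming the Huber potential, forward differences for the image gradient, and plain Barzilai-Borwein (BB1) steps as a simple stand-in for the spectral conjugate gradient variants evaluated in the paper:

```python
import numpy as np

def huber_grad(t, delta):
    # Derivative of the Huber potential: quadratic near 0, linear beyond delta.
    return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

def energy_grad(u, f, lam, delta):
    # Gradient of E(u) = 0.5*||u - f||^2 + lam * sum phi_delta(grad u);
    # the regularizer's gradient is the adjoint (negative divergence)
    # applied to the thresholded finite differences.
    dx = np.diff(u, axis=1, append=u[:, -1:])
    dy = np.diff(u, axis=0, append=u[-1:, :])
    px, py = huber_grad(dx, delta), huber_grad(dy, delta)
    div = np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0)
    return (u - f) - lam * div

def denoise_bb(f, lam=0.1, delta=0.05, n_iter=100):
    # Spectral gradient descent with BB1 step lengths.
    u = f.copy()
    g = energy_grad(u, f, lam, delta)
    step = -1e-2 * g                      # small fixed first step
    for _ in range(n_iter):
        u_new = u + step
        g_new = energy_grad(u_new, f, lam, delta)
        s, y = step, g_new - g
        alpha = (s * s).sum() / ((s * y).sum() + 1e-12)  # BB1 step length
        u, g = u_new, g_new
        step = -alpha * g
    return u
```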

  9. Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors.

    Science.gov (United States)

    Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung

    2017-06-06

    Conventional finger-vein recognition systems perform recognition based on finger-vein lines extracted from the input images, or on image enhancement and texture feature extraction from the finger-vein images. In the former case, however, inaccurate detection of the finger-vein lines lowers the recognition accuracy; in the case of texture feature extraction, the developer must experimentally decide on the form of the optimal extraction filter considering the characteristics of the image database. To address these problems, this research proposes a finger-vein recognition method based on a convolutional neural network (CNN) that is robust to various database types and environmental changes. In experiments using the two finger-vein databases constructed in this research and the open SDUMLA-HMT finger-vein database, the proposed method showed better performance than conventional methods.

  10. Pleasant/Unpleasant Filtering for Affective Image Retrieval Based on Cross-Correlation of EEG Features

    Directory of Open Access Journals (Sweden)

    Keranmu Xielifuguli

    2014-01-01

    People often make decisions based on sensitivity rather than rationality. In the field of biological information processing, methods are available for analyzing biological information directly from electroencephalograms (EEG) to determine the pleasant/unpleasant reactions of users. In this study, we propose a sensitivity filtering technique for discriminating image preferences (pleasant/unpleasant) using a sensitivity image filtering system based on EEG. Using a set of images retrieved by similarity retrieval, we perform sensitivity-based pleasant/unpleasant classification of images based on affective features extracted from the images with the maximum entropy method (MEM). In the present study, the affective features comprised cross-correlation features obtained from EEGs recorded while an individual observed an image. However, it is difficult to measure the EEG when a subject views an unknown image; thus, we propose a solution in which a linear regression method based on canonical correlation is used to estimate the cross-correlation features from image features. Experiments were conducted to evaluate the validity of sensitivity filtering compared with image similarity retrieval methods based on image features. We found that sensitivity filtering using color correlograms was suitable for classifying preferred images, while sensitivity filtering using local binary patterns was suitable for classifying unpleasant images; moreover, sensitivity filtering using local binary patterns for unpleasant images had a 90% success rate. Thus, we conclude that the proposed method is efficient for filtering unpleasant images.
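
    A minimal sketch of the canonical-correlation regression step (Python with scikit-learn; the data here are synthetic stand-ins for image features such as color correlograms or local binary patterns on one side and EEG cross-correlation features on the other):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Synthetic training data: 100 images with 32 image features each, and
# 8 EEG-derived cross-correlation features linearly related to them.
rng = np.random.default_rng(0)
X_img = rng.normal(size=(100, 32))
Y_eeg = X_img @ rng.normal(size=(32, 8)) + 0.1 * rng.normal(size=(100, 8))

# Fit the canonical-correlation mapping, then estimate EEG-side features
# for unseen images (for which no EEG can be measured).
cca = CCA(n_components=4)
cca.fit(X_img, Y_eeg)
Y_hat = cca.predict(X_img[:5])
```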

  11. GPU-based relative fuzzy connectedness image segmentation

    International Nuclear Information System (INIS)

    Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above-mentioned CPU-based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match the IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  15. [Development of RF coil of permanent magnet mini-magnetic resonance imager and mouse imaging experiments].

    Science.gov (United States)

    Hou, Shulian; Xie, Huantong; Chen, Wei; Wang, Guangxin; Zhao, Qiang; Li, Shiyu

    2014-10-01

    To develop radio frequency (RF) coils that improve the quality of a mini-type permanent-magnet magnetic resonance imager for small-animal imaging, various types of RF coils were analyzed; the solenoid RF coil has a special advantage for permanent-magnet systems. However, imaging is not satisfactory if such coils are used directly. Through theoretical analysis of the magnetic field produced by the solenoid coil, the research direction was determined: to further raise the uniformity of the coil's magnetic field, the receiving coil's sensitivity to signals, and the signal-to-noise ratio (SNR). By using alloy materials (from our own patent), the method has certain advantages and avoids some shortcomings of other coil types, such as the birdcage coil, saddle-shaped coil, and phased-array coil. A multi-coil combination-type, single-channel overall RF receiving coil applicable to the permanent magnet-type magnetic resonance imager was designed, developed, and submitted for a patent. Mounted on three instruments (25 mm aperture with a main magnetic field strength of 0.5 T or 1.5 T, and 50 mm aperture with a main magnetic field strength of 0.48 T), the coils were used in experiments with mice, rats, and nude mice bearing tumors. The experimental results indicate that the RF receiving coil is fully applicable to the permanent magnet-type imaging system.

  16. Integrated optical 3D digital imaging based on DSP scheme

    Science.gov (United States)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme for integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme uses a parallel hardware structure built on a DSP and a field programmable gate array (FPGA) to realize 3-D imaging, and adopts phase measurement profilometry. To realize pipelined processing of fringe projection, image acquisition, and fringe pattern analysis, we developed a multi-threaded application program under the DSP/BIOS RTOS (real-time operating system); since the RTOS provides a preemptive kernel and a powerful configuration tool, we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and can implement fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.
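
    The core computation in phase measurement profilometry can be sketched in a few lines (NumPy; this shows the standard four-step phase-shifting formula rather than the paper's DSP implementation):

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    # Wrapped phase from four fringe images with pi/2 phase shifts,
    # I_n = A + B*cos(phi + (n-1)*pi/2), so that
    # I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi).
    return np.arctan2(I4 - I2, I1 - I3)

# The wrapped phase would then be unwrapped, e.g. row by row:
# unwrapped = np.unwrap(four_step_phase(I1, I2, I3, I4), axis=1)
```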

  17. [Digitalization, archival storage and use of image documentation in the GastroBase-II system].

    Science.gov (United States)

    Kocna, P

    1997-05-14

    "GastroBase-II" is a module of the clinical information system "KIS-ComSyD"; The main part is represented by structured data-text with an expert system including on-line image digitalization in gastroenterology (incl. endoscopic, X-ray and endosonography pictures). The hardware and software of the GastroBase are described as well as six-years experiences with application of digitalized image data. An integration of a picture into text, reports, slides for a lecture or an electronic atlas is documented with examples. Briefly are reported out experiences with graphic editors (PhotoStyler), text editor (WordPerfect) and slide preparation for lecturing with the presentation software PowerPoint. The multimedia applications on the CD-ROM illustrate a modern trend using digitalized image documentation for pregradual and postgradual education.

  18. Adaptive polarization image fusion based on regional energy dynamic weighted average

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yong-qiang; PAN Quan; ZHANG Hong-cai

    2005-01-01

    According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, polarized images contain much redundant and complementary information. Since man-made and natural objects can easily be distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detailed information about the scene, combining these images can remove clutter efficiently while maintaining the detailed information. An adaptive polarization image fusion algorithm based on a regional-energy dynamic weighted average is proposed in this paper to combine these images. In an experiment and simulations, most clutter is removed by this algorithm. The fusion method is applied under different lighting conditions in simulation, and the influence of lighting conditions on the fusion results is analyzed.
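
    A minimal sketch of regional-energy weighted fusion (NumPy/SciPy; a fixed square window and a simple energy ratio are illustrative simplifications of the paper's dynamic weighting rule):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def regional_energy_fusion(img_a, img_b, size=9):
    # Local energy = windowed mean of squared intensities; each pixel's
    # weight follows the relative regional energy of the two sources.
    Ea = uniform_filter(img_a.astype(float) ** 2, size)
    Eb = uniform_filter(img_b.astype(float) ** 2, size)
    w = Ea / (Ea + Eb + 1e-12)
    return w * img_a + (1 - w) * img_b
```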

  19. Three-Dimensional Neutral Transport Simulations of Gas Puff Imaging Experiments

    International Nuclear Information System (INIS)

    Stotler, D.P.; DIppolito, D.A.; LeBlanc, B.; Maqueda, R.J.; Myra, J.R.; Sabbagh, S.A.; Zweben, S.J.

    2003-01-01

    Gas Puff Imaging (GPI) experiments are designed to isolate the structure of plasma turbulence in the plane perpendicular to the magnetic field. Three-dimensional aspects of this diagnostic technique as used on the National Spherical Torus eXperiment (NSTX) are examined via Monte Carlo neutral transport simulations. The radial width of the simulated GPI images is in rough agreement with observations. However, the simulated emission clouds are angled approximately 15 degrees with respect to the experimental images. The simulations indicate that the finite extent of the gas puff along the viewing direction does not significantly degrade the radial resolution of the diagnostic. These simulations also yield effective neutral density data that can be used in an approximate attempt to infer two-dimensional electron density and temperature profiles from the experimental images.

  20. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured and non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture, we use an implicit level sets

  1. A NEW FRAMEWORK FOR OBJECT-BASED IMAGE ANALYSIS BASED ON SEGMENTATION SCALE SPACE AND RANDOM FOREST CLASSIFIER

    Directory of Open Access Journals (Sweden)

    A. Hadavand

    2015-12-01

    In this paper a new object-based framework is developed to automate scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value of this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super-object is used to obtain the best scale in local regions of the image scene. The optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and a digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method for estimating the segmentation scale, and marginally improved the overall accuracy of classification from 79% to 80%.

  2. Coffee Bean Grade Determination Based on Image Parameter

    Directory of Open Access Journals (Sweden)

    F. Ferdiansjah

    2011-12-01

    The quality standard for coffee as an agricultural commodity in Indonesia uses a defect system regulated in Standar Nasional Indonesia (SNI) for coffee beans, No. 01-2907-1999. In the defect system standard, coffee beans are classified into six grades, from grade I to grade VI, depending on the number of defects found in the beans. The accuracy of this method heavily depends on the experience and expertise of the human operators. The objective of this research is to develop a system to determine coffee bean grades based on SNI No. 01-2907-1999. A visual sensor, a webcam connected to a computer, was used for image acquisition of coffee bean samples, which were placed under uniform illumination of 414.5±2.9 lux. The computer extracts features from the coffee bean images in terms of texture (energy, entropy, contrast, homogeneity) and color (R mean, G mean, and B mean) and determines the grade of the coffee beans from these image parameters by implementing a neural network algorithm. In system testing, the coffee beans of grades I, II, III, IVA, IVB, V, and VI were classified with accuracies of 100, 80, 60, 40, 100, 40, and 100%, respectively.
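
    A minimal sketch of the feature extraction step (NumPy), computing histogram-based energy and entropy plus channel means; contrast and homogeneity would normally come from a gray-level co-occurrence matrix, omitted here for brevity. The resulting vector would feed the neural network classifier:

```python
import numpy as np

def image_features(rgb):
    # Color features: per-channel means of the RGB image.
    r_mean, g_mean, b_mean = rgb.reshape(-1, 3).mean(axis=0)
    # Texture features from the gray-level histogram.
    gray = rgb.mean(axis=2).astype(np.uint8)
    p = np.bincount(gray.ravel(), minlength=256) / gray.size
    p = p[p > 0]
    energy = float((p ** 2).sum())
    entropy = float(-(p * np.log2(p)).sum())
    return np.array([energy, entropy, r_mean, g_mean, b_mean])
```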

  3. Clustering Batik Images using Fuzzy C-Means Algorithm Based on Log-Average Luminance

    Directory of Open Access Journals (Sweden)

    Ahmad Sanmorino

    2012-06-01

    Batik is a fabric or clothing made with a special staining technique called wax-resist dyeing and is a piece of cultural heritage with high artistic value. In order to improve efficiency and give better semantics to the images, some researchers apply clustering algorithms to manage images before they can be retrieved. Image clustering is the process of grouping images based on their similarity. In this paper we provide an alternative method for grouping batik images using the fuzzy c-means (FCM) algorithm based on the log-average luminance of the batik. The FCM clustering algorithm works using fuzzy models that allow all data points to belong to all clusters with different degrees of membership between 0 and 1. The log-average luminance (LAL) is the average value of the lighting in an image; we can compare the lighting of different images using LAL. From the experiments that have been made, it can be concluded that the fuzzy c-means algorithm can be used for batik image clustering based on the log-average luminance of each image.
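
    A minimal sketch of the two ingredients (NumPy): the log-average luminance LAL = exp(mean(log(eps + L))), i.e. a stabilized geometric mean of the luminance, and a tiny fuzzy c-means on the scalar LAL values (the epsilon and luma weights are conventional choices, not taken from the paper):

```python
import numpy as np

def log_average_luminance(img, eps=1e-4):
    # LAL = exp(mean(log(eps + L))) over the image's luminance L.
    L = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    return float(np.exp(np.log(eps + L).mean()))

def fcm_1d(x, k=3, m=2.0, n_iter=100):
    # Fuzzy c-means on scalar values: memberships in [0, 1] that sum
    # to 1 per sample, fuzzifier m > 1.
    rng = np.random.default_rng(0)
    u = rng.random((k, len(x)))
    u /= u.sum(axis=0)
    for _ in range(n_iter):
        um = u ** m
        c = (um @ x) / um.sum(axis=1)            # weighted cluster centers
        d = np.abs(x[None, :] - c[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1))                # standard FCM update
        u /= u.sum(axis=0)
    return c, u

# Usage: cluster a batch of images by their LAL values.
lals = np.array([0.05, 0.06, 0.30, 0.32, 0.80, 0.78])
centers, memberships = fcm_1d(lals, k=3)
```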

  4. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    Science.gov (United States)

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long computation time still limits its application in image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single-instruction multiple-thread (SIMT) execution model of CUDA, optimizations such as coalesced memory access and shared memory are adopted. To eliminate edge distortion, a natural suture algorithm is applied in the overlapping regions that arise when 2D images are separated into blocks or 3D images are divided into sub-volumes. While keeping a high interpolation precision, the 2D and 3D GRBF interpolation achieved substantial acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU computation. The present method is of considerable reference value for applications of image interpolation.
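For reference, a plain-numpy sketch of 2D Gaussian-RBF interpolation itself (the computation the paper moves to CUDA); the kernel width and the 2x upsampling factor are assumed values. The dense N×N system makes this practical only for small tiles, which is exactly why the paper splits images into blocks and sub-volumes.

```python
import numpy as np

def grbf_interpolate(img, factor=2, sigma=0.7):
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
    # Fit weights so the RBF expansion reproduces the known pixel values.
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma ** 2))
    wgt = np.linalg.solve(A, img.ravel().astype(float))
    # Evaluate the same expansion on the finer target grid.
    yt, xt = np.mgrid[0:h:1 / factor, 0:w:1 / factor]
    tgt = np.column_stack([yt.ravel(), xt.ravel()])
    d2t = ((tgt[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    out = np.exp(-d2t / (2 * sigma ** 2)) @ wgt
    return out.reshape(yt.shape)
```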

  5. SALIENCY BASED SEGMENTATION OF SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sharma

    2015-03-01

    Full Text Available Saliency describes the way humans see an image, and saliency-based segmentation can ultimately be helpful in psychovisual image interpretation. With this in view, a few saliency models are used together with a segmentation algorithm, and only the salient segments of each image are extracted. The work is carried out for terrestrial images as well as satellite images. The methodology extracts those segments of a segmented image whose saliency value is greater than or equal to a threshold. Salient and non-salient regions of the image become foreground and background respectively, and the image is thereby separated, as shown in the sketch below. A dataset of terrestrial images and WorldView-2 satellite images (sample data) is used. Results show that saliency models that work well for terrestrial images are not good enough for satellite images in terms of foreground and background separation: in terrestrial images the separation is based on salient objects visible in the image, whereas in satellite images it is based on salient areas rather than salient objects.
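A minimal sketch of the extraction step, assuming a label image `segments` from any segmentation algorithm and a saliency map `sal` normalized to [0, 1]; the threshold value is illustrative.

```python
import numpy as np

def salient_segments(segments, sal, thresh=0.5):
    fg = np.zeros_like(sal, dtype=bool)
    for label in np.unique(segments):
        mask = segments == label
        if sal[mask].mean() >= thresh:     # keep segments at/above threshold
            fg |= mask
    return fg                              # True = salient foreground
```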

  6. Ultrasoft x-ray imaging system for the National Spherical Torus Experiment

    Science.gov (United States)

    Stutman, D.; Finkenthal, M.; Soukhanovskii, V.; May, M. J.; Moos, H. W.; Kaita, R.

    1999-01-01

    A spectrally resolved ultrasoft x-ray imaging system, consisting of arrays of high resolution [...] the National Spherical Torus Experiment. Initially, three poloidal arrays of diodes filtered for C 1s-np emission will be implemented for fast tomographic imaging of the colder start-up plasmas. Later on, mirrors tuned to the C Lyα emission will be added in order to enable the arrays to "see" the periphery through the hot core and to study magnetohydrodynamic activity and impurity transport in this region. We also discuss possible core diagnostics, based on tomographic imaging of the Lyα emission from the plume of recombined, low-Z impurity ions left by neutral beams or fueling pellets. The arrays can also be used for radiated power measurements and to map the distribution of high-Z impurities injected for transport studies. The performance of the proposed system is illustrated with results from test channels on the CDX-U spherical torus at Princeton Plasma Physics Laboratory.

  7. EVOLUTION OF SOUTHERN AFRICAN CRATONS BASED ON SEISMIC IMAGING

    DEFF Research Database (Denmark)

    Thybo, Hans; Soliman, Mohammad Youssof Ahmad; Artemieva, Irina

    2014-01-01

    We present a new seismic model for the structure of the crust and lithospheric mantle of the Kalahari Craton, constrained by seismic receiver functions and finite-frequency tomography based on the seismological data from the South Africa Seismic Experiment (SASE). The combination of these two methods...... since formation of the craton, and (3) seismically fast lithospheric keels are imaged in the Kaapvaal and Zimbabwe cratons to depths of 300-350 km. Relatively low velocity anomalies are imaged beneath both the paleo-orogenic Limpopo Belt and the Bushveld Complex down to depths of ~250 km and ~150 km...

  8. Color Image Encryption Algorithm Based on TD-ERCS System and Wavelet Neural Network

    Directory of Open Access Journals (Sweden)

    Kun Zhang

    2015-01-01

    Full Text Available In order to solve the security problem of transmitting images across public networks, a new image encryption algorithm based on the TD-ERCS system and a wavelet neural network is proposed in this paper. The image is encrypted through a permutation process and a binary XOR operation driven by the chaotic series produced by the TD-ERCS system and the wavelet neural network. The encryption algorithm is reversible: the original image is recovered by running the inverse of the encryption process. Finally, computer simulation results show that the new chaotic encryption algorithm based on the TD-ERCS system and wavelet neural network is valid and has high security.
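A minimal sketch of the permute-then-XOR structure described above, with a logistic map standing in for the TD-ERCS/wavelet-network keystream (an assumption: the paper's actual chaotic generator is not reproduced here). In practice the receiver would regenerate `perm` and `mask` from the shared key rather than receive them.

```python
import numpy as np

def keystream(n, x0=0.3141, r=3.99):
    xs = np.empty(n); x = x0
    for i in range(n):
        x = r * x * (1 - x)               # logistic-map iteration (stand-in)
        xs[i] = x
    return xs

def encrypt(img, x0=0.3141):              # img: uint8 array
    flat = img.ravel()
    ks = keystream(flat.size, x0)
    perm = np.argsort(ks)                  # chaotic permutation of pixels
    mask = (ks * 255).astype(np.uint8)     # chaotic XOR mask
    return (flat[perm] ^ mask).reshape(img.shape), perm, mask

def decrypt(ct, perm, mask):
    flat = ct.ravel() ^ mask               # invert XOR, then the permutation
    out = np.empty_like(flat); out[perm] = flat
    return out.reshape(ct.shape)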

  9. MR imaging of prostate. Preliminary experience with calculated imaging in 28 cases

    International Nuclear Information System (INIS)

    Gevenois, P.A.; Van Regemorter, G.; Ghysels, M.; Delepaut, A.; Van Gansbeke, D.; Struyven, J.

    1988-01-01

    The majority of MR imaging studies of prostate disease are based on semiology obtained from T1- and T2-weighted images. A study was carried out to evaluate calculated T1 and T2 images obtained at 0.5 T. This preliminary study covers 28 prostate examinations acquired with spin-echo and inversion-recovery parameters, providing calculated T1 images as well as weighted and calculated T2 images. The images allowed detection and characterization of prostate lesions. However, although the calculated images increase the discriminating power of the method, the weighted images keep their place because of their better spatial resolution [fr]

  10. Image compression of bone images

    International Nuclear Information System (INIS)

    Hayrapetian, A.; Kangarloo, H.; Chan, K.K.; Ho, B.; Huang, H.K.

    1989-01-01

    This paper reports a receiver operating characteristic (ROC) experiment conducted to compare the diagnostic performance of compressed bone images with the originals. The compression was done on custom hardware that implements an algorithm based on the full-frame cosine transform. The compression ratio in this study is approximately 10:1, which was decided after a pilot experiment. The image set consisted of 45 hand images, including normal images and images containing osteomalacia and osteitis fibrosa. Each image was digitized with a laser film scanner to 2,048 x 2,048 x 8 bits. Six observers, all board-certified radiologists, participated in the experiment. For each ROC session, an independent ROC curve was constructed and the area under that curve calculated. The image set was randomized for each session, as was the order for viewing the original and reconstructed images. Analysis of variance was used to analyze the data and derive statistically significant results. The preliminary results indicate that the diagnostic quality of the reconstructed images is comparable to that of the originals.
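A minimal sketch of full-frame cosine-transform compression at roughly 10:1, keeping only the highest-magnitude coefficients; the keep-fraction thresholding is an illustrative stand-in for the study's hardware coder.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_fullframe(img, keep=0.1):
    coef = dctn(img.astype(float), norm="ortho")     # full-frame 2D DCT
    thresh = np.quantile(np.abs(coef), 1 - keep)     # keep top `keep` fraction
    coef[np.abs(coef) < thresh] = 0.0                # discard the rest
    return idctn(coef, norm="ortho")                 # reconstructed image
```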

  11. Automatic medical image annotation and keyword-based image retrieval using relevance feedback.

    Science.gov (United States)

    Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal

    2012-08-01

    This paper presents a novel multiple-keyword annotation method for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center-symmetric local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses a confidence score that is assigned to each annotated keyword by combining the probabilities of the random forests with a predefined body-relation graph. To overcome the limitations of keyword-based image retrieval, we combine our retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.

  12. PET CT imaging: the Philippine experience

    International Nuclear Information System (INIS)

    Santiago, Jonas Y.

    2011-01-01

    Currently, the most discussed fusion imaging is PET CT. Fusion technology has tremendous potential in diagnostic imaging to detect numerous conditions such as tumors, Alzheimer's disease, dementia and neural disorders. The fusion of PET with CT helps in localizing molecular abnormalities, thereby increasing diagnostic accuracy and differentiating benign or artefactual lesions from malignant disease. It uses a radiotracer called fluorodeoxyglucose that gives a clear distinction between pathological and physiological uptake. Interest in this technology is increasing, and additional clinical validation is likely to induce more health care providers to invest in combined scanners. It is hoped that, in time, its advantages over conventional and traditional imaging modalities will be better appreciated. The first PET CT facility in the country was established at the St. Luke's Medical Center in Quezon City in 2008 and has since provided a state-of-the-art imaging modality to patients from the Philippines and other countries. The paper presents the experience so far gained from its operation, including the measures and steps currently taken by the facility to ensure optimum worker and patient safety. Plans and programs to further enhance the awareness of the Filipino public of this advanced imaging modality for an improved health care delivery system are also discussed briefly. (author)

  13. Image inpainting based on stacked autoencoders

    International Nuclear Information System (INIS)

    Shcherbakov, O; Batishcheva, V

    2014-01-01

    Recently we proposed an algorithm for the problem of image inpainting (filling in occluded or damaged parts of images). That algorithm was based on a spectrum entropy criterion and showed promising results despite using a hand-crafted representation of images. In this paper, we present a method for solving the image inpainting task based on learning an image representation with stacked autoencoders. Some results are shown to illustrate the quality of the image reconstruction.

  14. Transfer and conversion of images based on EIT in atom vapor.

    Science.gov (United States)

    Cao, Mingtao; Zhang, Liyun; Yu, Ya; Ye, Fengjuan; Wei, Dong; Guo, Wenge; Zhang, Shougang; Gao, Hong; Li, Fuli

    2014-05-01

    Transfer and conversion of images between different wavelengths or polarizations has significant applications in optical communication and quantum information processing. We demonstrate the transfer of images based on electromagnetically induced transparency (EIT) in a rubidium vapor cell. In the experiments, a 2D image generated by a spatial light modulator is used as the coupling field, and a plane wave serves as the signal field. We found that the image carried by the coupling field could be transferred to the signal field, and that the spatial pattern of the transferred image is much better than that of the initial image. It can also be much smaller than the size determined by the diffraction limit of the optical system. We also studied the subdiffraction propagation of the transferred image. Our results may have applications in quantum interference lithography and coherent Raman spectroscopy.

  15. Evidence based medical imaging (EBMI)

    International Nuclear Information System (INIS)

    Smith, Tony

    2008-01-01

    Background: The evidence based paradigm was first described about a decade ago. Previous authors have described a framework for the application of evidence based medicine which can be readily adapted to medical imaging practice. Purpose: This paper promotes the application of the evidence based framework to both the justification of the choice of examination type and the optimisation of the imaging technique used. Methods: The framework includes five integrated steps: framing a concise clinical question; searching for evidence to answer that question; critically appraising the evidence; applying the evidence in clinical practice; and evaluating the use of revised practices. Results: This paper illustrates the use of the evidence based framework in medical imaging (that is, evidence based medical imaging) using two clinically relevant case studies. In doing so, a range of information technology and other resources available to medical imaging practitioners are identified, with the intention of encouraging the application of the evidence based paradigm in radiography and radiology. Conclusion: There is a perceived need for radiographers and radiologists to make greater use of valid research evidence from the literature to inform their clinical practice and thus provide better quality services.

  16. Psychoanalytic Bases for One's Image of God: Fact or Artifact?

    Science.gov (United States)

    Buri, John R.

    As a result of Freud's seminal postulations of the psychoanalytic bases for one's God-concept, it is a frequently accepted hypothesis that an individual's image of God is largely a reflection of experiences with and feelings toward one's own father. While such speculations as to an individual's phenomenological conceptions of God have an…

  17. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    Science.gov (United States)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research we build a CBIR system based on learning a distance/similarity function using linear discriminant analysis (LDA) and the histogram of oriented gradients (HoG) feature. Our method is invariant to the depiction of an image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA decreases execution time compared with the state-of-the-art method, but it still needs improvement in terms of accuracy. The inaccuracy in our experiments arises because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.
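A minimal sketch of such a retrieval pipeline, assuming hypothetical arrays `train_imgs`/`train_labels` of equally sized grayscale images and a grayscale `query`; the HoG cell size and the 1-nearest-neighbour ranking in the LDA-projected space are illustrative choices.

```python
import numpy as np
from skimage.feature import hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# HoG features for the gallery; all images must share one size so vectors match.
feats = np.stack([hog(im, pixels_per_cell=(16, 16)) for im in train_imgs])
lda = LinearDiscriminantAnalysis().fit(feats, train_labels)
gallery = lda.transform(feats)                      # depiction-tolerant space

q = lda.transform(hog(query, pixels_per_cell=(16, 16))[None, :])
ranking = np.argsort(np.linalg.norm(gallery - q, axis=1))  # best matches first
```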

  18. MR-based full-body preventative cardiovascular and tumor imaging: technique and preliminary experience

    International Nuclear Information System (INIS)

    Goyen, Mathias; Goehde, Susanne C.; Herborn, Christoph U.; Hunold, Peter; Vogt, Florian M.; Gizewski, Elke R.; Lauenstein, Thomas C.; Ajaj, Waleed; Forsting, Michael; Debatin, Joerg F.; Ruehm, Stefan G.

    2004-01-01

    Recent improvements in hardware and software, lack of side effects, as well as diagnostic accuracy make magnetic resonance imaging a natural candidate for preventative imaging. Thus, the purpose of the study was to evaluate the feasibility of a comprehensive 60-min MR-based screening examination in healthy volunteers and a limited number of patients with known target disease. In ten healthy volunteers (7 men, 3 women; mean age, 32.4 years) and five patients (4 men, 1 woman; mean age, 56.2 years) with proven target disease we evaluated the performance of a comprehensive MR screening strategy by combining well-established organ-based MR examination components encompassing the brain, the arterial system, the heart, the lungs, and the colon. All ten volunteers and five patients tolerated the comprehensive MR examination well. The mean in-room time was 63 min. In one volunteer, insufficient colonic cleansing on the part of the volunteer diminished the diagnostic reliability of MR colonography. All remaining components of the comprehensive MR examination were considered diagnostic in all volunteers and patients. In the five patients, the examination revealed the known pathologies [aneurysm of the anterior communicating artery (n=1), renal artery stenosis (n=1), myocardial infarct (n=1), and colonic polyp (n=2)]. The outlined MR screening strategy encompassing the brain, the arterial system, the heart, the lung, and the colon is feasible. Further studies have to show that MR-based screening programs are cost-effective in terms of the life-years saved. (orig.)

  19. Image based Monument Recognition using Graph based Visual Saliency

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Triantafyllidis, Georgios

    2013-01-01

    This article presents an image-based application aiming at simple image classification of well-known monuments in the area of Heraklion, Crete, Greece. The classification utilizes Graph Based Visual Saliency (GBVS) and employs the Scale Invariant Feature Transform (SIFT) or Speeded-Up Robust Features (SURF)...... the images have been previously processed according to the Graph Based Visual Saliency model in order to keep either the SIFT or SURF features corresponding to the actual monuments while the background "noise" is minimized. The application is then able to classify these images, helping the user to better...

  20. Predicting Visible Image Degradation by Colour Image Difference Formulae

    Institute of Scientific and Technical Information of China (English)

    Eriko Bando; Jon Y. Hardeberg; David Connah; Ivar Farup

    2004-01-01

    A CRT-monitor-based psychophysical experiment was carried out to investigate the quality of three colour image difference metrics: the CIE ΔE*ab equation, iCAM, and S-CIELAB. Six original images were reproduced through six gamut mapping algorithms for the observer experiment. The results indicate that the colour image difference calculated by each metric does not directly relate to the perceived image difference.

  1. Auto-Scaling of Geo-Based Image Processing in an OpenStack Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Sanggoo Kang

    2016-08-01

    Full Text Available Cloud computing is a base platform for the distribution of large volumes of data and high-performance image processing on the Web. Despite wide application in Web-based services and their many benefits, geospatial applications based on cloud computing technology are still developing. Auto-scaling realizes automatic scalability, i.e., the scale-out and scale-in processing of virtual servers in a cloud computing environment. This study investigates the applicability of auto-scaling to geo-based image processing algorithms by comparing the performance of a single virtual server and multiple auto-scaled virtual servers under identical experimental conditions. In this study, the cloud computing environment is built with OpenStack, and four algorithms from the Orfeo toolbox are used for practical geo-based image processing experiments. The results of all performance tests demonstrate the applicability of auto-scaling, with clear benefits in response time. Auto-scaling thus contributes to the development of Web-based satellite image application services using cloud-based technologies.

  2. Regularized Fractional Power Parameters for Image Denoising Based on Convex Solution of Fractional Heat Equation

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2014-01-01

    Full Text Available Interest in fractional mask operators based on fractional calculus has grown for image denoising. Denoising is one of the most fundamental image restoration problems in computer vision and image processing. This paper proposes an image denoising algorithm based on the convex solution of a fractional heat equation with regularized fractional power parameters. The performance of the proposed algorithm was evaluated by computing the PSNR on different types of images. Experiments based on visual perception and on peak signal-to-noise ratio values show that the improvements in the denoising process are competitive with the standard Gaussian filter and the Wiener filter.
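For reference, a minimal sketch of the PSNR figure used in such evaluations, assuming 8-bit images so the peak value is 255.

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    mse = np.mean((clean.astype(float) - denoised.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)   # higher is better
```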

  3. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    Science.gov (United States)

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials.

  4. Whole slide images and digital media in pathology education, testing, and practice: the Oklahoma experience.

    Science.gov (United States)

    Fung, Kar-Ming; Hassell, Lewis A; Talbert, Michael L; Wiechmann, Allan F; Chaser, Brad E; Ramey, Joel

    2012-01-01

    Examination of glass slides is of paramount importance in pathology training. Until the introduction of digitized whole slide images that can be accessed through computer networks, the sharing of pathology slides was a major logistic issue in pathology education and practice. With the help of whole slide images, our department has developed several online pathology education websites. Based on a modular architecture, this program provides online access to whole slide images, still images, case studies, quizzes and didactic text at different levels. Together with traditional lectures and hands-on experience, it forms the backbone of our histology and pathology education system for residents and medical students. The use of digitized whole slide images has also greatly improved the communication between clinicians and pathologists in our institute.

  5. Content-Based Image Retrieval Based on Electromagnetism-Like Mechanism

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2013-01-01

    Full Text Available Recently, many researchers in the field of automatic content-based image retrieval have devoted a remarkable amount of research to methods for retrieving the images most relevant to a query image. This paper presents a novel algorithm for increasing the precision of content-based image retrieval based on the electromagnetism optimization technique. Electromagnetism optimization is a nature-inspired technique that follows a collective attraction-repulsion mechanism, considering each image as an electrical charge. The algorithm is composed of two phases: fitness function measurement and electromagnetism optimization. It is implemented on a database of 8,000 images spread across 80 classes, with 100 images in each class. Eight thousand queries are fired on the database, and the overall average precision is computed. Experimental results of the proposed approach have shown significant improvement in retrieval precision.

  6. An improved three-dimensional non-scanning laser imaging system based on digital micromirror device

    Science.gov (United States)

    Xia, Wenze; Han, Shaokun; Lei, Jieyu; Zhai, Yu; Timofeev, Alexander N.

    2018-01-01

    Nowadays, there are two main methods for three-dimensional non-scanning laser imaging detection: detection based on APDs and detection based on a streak tube. However, APD-based detection has some disadvantages, such as a small number of pixels, a large pixel pitch, and complex supporting circuitry, while streak-tube-based detection suffers from large volume, poor reliability, and high cost. To resolve these issues, this paper proposes an improved three-dimensional non-scanning laser imaging system based on a digital micromirror device (DMD). In this imaging system, accurate control of the laser beams and a compact imaging structure are achieved with several quarter-wave plates and a polarizing beam splitter. Remapping fiber optics are used to sample the image plane of the receiving optical lens and transform the image into a line light source, which realizes the non-scanning imaging principle. The DMD is used to convert laser pulses from the temporal domain to the spatial domain, and a CCD with high sensitivity detects the final reflected laser pulses. We also describe an algorithm used to simulate this improved laser imaging system. Finally, a simulated imaging experiment demonstrates that the system can realize three-dimensional non-scanning laser imaging detection.

  7. Understanding God images and God concepts: Towards a pastoral hermeneutics of the God attachment experience

    Directory of Open Access Journals (Sweden)

    Victor Counted

    2015-03-01

    Full Text Available The author looks at the God image experience as an attachment relationship experience with God, arguing that the God image experience is borne originally out of a parent-child attachment contagion, in such a way that God is often represented in either secure or insecure attachment patterns. The article points out that insecure God images often develop head-to-head with God concepts in a believer's emotional experience of God. On the other hand, the author describes God concepts as indicators of a religious faith and metaphorical standards for regulating insecure attachment patterns. The goals of this article are to highlight the relationship between God images and God concepts, and to provide a hermeneutical process for interpreting and surviving the God image experience. Intradisciplinary and/or interdisciplinary implications: Given that most scholars within the discipline of Practical Theology discuss the subject of God images from cultural and theological perspectives, this article has discussed God images from an attachment perspective, which is a popular framework in the psychology of religion. This is rare. The study is therefore interdisciplinary in this regard. The article further helps the reader to understand the intrapsychic process of the God image experience, and thus provides us with hermeneutical answers for dealing with the God image experience through methodologies grounded in Practical Theology and pastoral care.

  8. DENOTATIVE ORIGINS OF ABSTRACT IMAGES IN LINGUISTIC EXPERIMENT

    Directory of Open Access Journals (Sweden)

    Elina, E.

    2017-03-01

    Full Text Available The article discusses the refusal of denotation (the subject) as the basic principle of abstract images, and the semiotic problems arising in connection with this principle: how can the contradiction between the non-objectivity and the iconic nature of the image be resolved? Is it correct, in the absence of denotation, to treat an abstract representation as a single-level entity? It is proposed to settle these questions with the help of a psycholinguistic experiment in which verbal interpretations of abstract images made by both experienced and "naive" recipients demonstrate the objectivity of the perception of denotative "traces" and the presence of a denotative invariant in an abstract form.

  9. Star tracking method based on multiexposure imaging for intensified star trackers.

    Science.gov (United States)

    Yu, Wenbo; Jiang, Jie; Zhang, Guangjun

    2017-07-20

    The requirements for the dynamic performance of star trackers are rapidly increasing with the development of space exploration technologies. However, insufficient knowledge of the angular acceleration has largely decreased the performance of existing star tracking methods, and star trackers may even fail to track under highly dynamic conditions. This study proposes a star tracking method based on multiexposure imaging for intensified star trackers. An accurate estimation model of the complete motion parameters, including the angular velocity and angular acceleration, is established according to the working characteristics of multiexposure imaging. The estimated motion parameters are used to generate an accurate predictive star image, so that matching and tracking between stars in the real and predictive star images can be reliably accomplished under highly dynamic conditions. Simulations with specific dynamic conditions are conducted to verify the feasibility and effectiveness of the proposed method, and experiments with real night-sky observations provide further verification. Both demonstrate that the proposed method is effective and shows excellent performance under highly dynamic conditions.

  10. Optical image hiding based on chaotic vibration of deformable moiré grating

    Science.gov (United States)

    Lu, Guangqing; Saunoriene, Loreta; Aleksiene, Sandra; Ragulskis, Minvydas

    2018-03-01

    An image hiding technique based on the chaotic vibration of a deformable moiré grating is presented in this paper. The embedded secret digital image is leaked in the form of a pattern of time-averaged moiré fringes when the deformable cover grating vibrates according to a chaotic law of motion with a predefined set of parameters. Computational experiments are used to demonstrate the features and applicability of the proposed scheme.

  11. Color Texture Image Retrieval Based on Local Extrema Features and Riemannian Distance

    Directory of Open Access Journals (Sweden)

    Minh-Tan Pham

    2017-10-01

    Full Text Available A novel efficient method for content-based image retrieval (CBIR) is developed in this paper using both texture and color features. Our motivation is to represent and characterize an input image by a set of local descriptors extracted from characteristic points (i.e., keypoints) within the image. Dissimilarity between images is then calculated from the geometric distance between the topological feature spaces (i.e., manifolds) formed by the sets of local descriptors generated from each image in the database. In this work, we propose to extract and use the local extrema pixels as our feature points. A so-called local extrema-based descriptor (LED) is then generated for each keypoint by integrating all the color, spatial and gradient information captured by its nearest local extrema. Hence, each image is encoded by an LED feature point cloud, and Riemannian distances between these point clouds enable us to tackle CBIR. Experiments performed on several color texture databases, including Vistex, STex, Colored Brodatz, USPtex and Outex TC-00013, show that the proposed approach provides very efficient and competitive results compared with state-of-the-art methods.

  12. Fast single image dehazing based on image fusion

    Science.gov (United States)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often have faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts the assumption that the degradation level caused by haze is the same within each region, which is similar to Retinex theory, and uses a simple Gaussian filter to obtain a coarse medium transmission. Then, pixel-level fusion is performed between the initial and coarse medium transmissions. The proposed method can recover a high-quality haze-free image based on the physical model, and its complexity is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity than some state-of-the-art methods.
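A minimal sketch of the dark-channel-prior step that produces the initial transmission estimate; the patch size, omega, and the atmospheric-light heuristic are conventional choices from the dark-channel literature, not necessarily the paper's.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Minimum over RGB channels, then a spatial minimum filter.
    return minimum_filter(img.min(axis=2), size=patch)

def initial_transmission(img, omega=0.95, patch=15):
    dark = dark_channel(img, patch)
    # Atmospheric light: mean color of the brightest dark-channel pixels.
    idx = np.unravel_index(np.argsort(dark, axis=None)[-100:], dark.shape)
    A = img[idx].mean(axis=0)
    return 1.0 - omega * dark_channel(img / A, patch)
```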

  13. Color-Based Image Retrieval from High-Similarity Image Databases

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg; Carstensen, Jens Michael

    2003-01-01

    Many image classification problems can fruitfully be thought of as image retrieval in a "high similarity image database" (HSID) characterized by being tuned towards a specific application and having a high degree of visual similarity between entries that should be distinguished. We introduce...... a method for HSID retrieval using a similarity measure based on a linear combination of Jeffreys-Matusita (JM) distances between distributions of color (and color derivatives) estimated from a set of automatically extracted image regions. The weight coefficients are estimated based on optimal retrieval...... performance. Experimental results on the difficult task of visually identifying clones of fungal colonies grown in a petri dish and categorization of pelts show a high retrieval accuracy of the method when combined with standardized sample preparation and image acquisition....
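A minimal sketch of the Jeffreys-Matusita distance between two region feature distributions, under the common simplifying assumption that each distribution is Gaussian with the given mean and covariance (computed via the Bhattacharyya distance).

```python
import numpy as np

def jm_distance(m1, c1, m2, c2):
    cm = (c1 + c2) / 2
    diff = m1 - m2
    # Bhattacharyya distance between two Gaussians.
    b = 0.125 * diff @ np.linalg.solve(cm, diff) + 0.5 * np.log(
        np.linalg.det(cm) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return np.sqrt(2.0 * (1.0 - np.exp(-b)))   # JM in [0, sqrt(2)]
```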

  14. Low-dose CT image reconstruction using gain intervention-based dictionary learning

    Science.gov (United States)

    Pathak, Yadunath; Arya, K. V.; Tiwari, Shailendra

    2018-05-01

    Computed tomography (CT) is extensively utilized in clinical diagnosis, but the X-ray dose absorbed by the human body may cause somatic damage such as cancer. Owing to this radiation risk, research has focused on the radiation exposure delivered to patients through CT investigations, and low-dose CT has become a significant research area. Many low-dose CT reconstruction techniques have been proposed, but they suffer from various issues such as over-smoothing, artifacts, and noise. Therefore, in this paper, we propose a novel integrated low-dose CT reconstruction technique. The proposed technique utilizes global dictionary-based statistical iterative reconstruction (GDSIR) and adaptive dictionary-based statistical iterative reconstruction (ADSIR): if the dictionary D is predetermined, GDSIR can be used, and if D is defined adaptively, ADSIR is the appropriate choice. A gain intervention-based filter is also used as a post-processing step for removing artifacts from the reconstructed low-dose CT images. Extensive experiments on well-known benchmark CT images have shown that the proposed technique outperforms existing approaches.

  15. An acceleration system for Laplacian image fusion based on SoC

    Science.gov (United States)

    Gao, Liwen; Zhao, Hongtu; Qu, Xiujie; Wei, Tianbo; Du, Peng

    2018-04-01

    Based on an analysis of the Laplacian image fusion algorithm, this paper proposes a partially pipelined, modular processing architecture, and a SoC-based acceleration system is implemented accordingly. Full pipelining is used in the design of each module, and modules connected in series form the partial pipeline with a unified data format, which is easy to manage and reuse. Integrated with an ARM processor, DMA, and an embedded bare-metal program, the system implements a four-level Laplacian pyramid on a Zynq-7000 board. Experiments show that, with a small resource consumption, a pair of 256×256 images can be fused within 1 ms while maintaining a good fusion result.
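A minimal software sketch of four-level Laplacian fusion for two equally sized grayscale images (sides divisible by 2^4), selecting the larger-magnitude coefficient per pixel; the level count and the max-abs rule are common choices rather than the paper's hardware design.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[::-1])
           for i in range(levels)]
    return lap + [gauss[-1]]                         # details + coarse residual

def fuse(a, b, levels=4):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb) for la, lb in zip(pa, pb)]
    out = fused[-1]
    for lap in reversed(fused[:-1]):                 # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=lap.shape[::-1]) + lap
    return out
```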

  16. Content-based image retrieval using a signature graph and a self-organizing map

    Directory of Open Access Journals (Sweden)

    Van Thanh The

    2016-06-01

    Full Text Available In order to effectively retrieve images from a large database, a content-based image retrieval (CBIR) system is built around a binary index that describes the features of the image objects of interest. This index, called the binary signature, forms the input data for the problem of matching similar images. To extract the objects of interest, we propose an image segmentation method based on low-level visual features, namely the color and texture of the image. These features are extracted for each block of the image by the discrete wavelet frame transform in an appropriate color space. On the basis of the segmented image, we create a binary signature describing the location, color and shape of the objects of interest. To match similar images, we define a similarity measure between images based on their binary signatures. We then present a CBIR model that combines a signature graph and a self-organizing map to cluster and store similar images. To illustrate the proposed method, experiments on the COREL, Wang and MSRDI image databases are reported.

  17. Young adult women's experiences of body image after bariatric surgery

    DEFF Research Database (Denmark)

    Jensen, Janet F; Hoegh-Petersen, Mette; Larsen, Tine B

    2014-01-01

    AIM: To understand the lived experience of body image in young women after obesity surgery. BACKGROUND: Quantitative studies have documented that health-related quality of life and body image are improved after bariatric surgery, probably due to significant weight loss. Female obesity surgery...... candidates are likely to be motivated by dissatisfaction regarding physical appearance. However, little is known about the experience of the individual woman, leaving little understanding of the association between bariatric surgery and changes in health-related quality of life and body image. DESIGN...... analysed by systematic text condensation influenced by Giorgi's phenomenological method and supplemented by elements from narrative analysis. FINDINGS: The analysis revealed three concepts: solution to an unbearable problem, learning new boundaries and hopes of normalization. These revelatory concepts were...

  18. X-ray crystal imagers for inertial confinement fusion experiments (invited)

    International Nuclear Information System (INIS)

    Aglitskiy, Y.; Lehecka, T.; Obenschain, S.; Pawley, C.; Brown, C.M.; Seely, J.

    1999-01-01

    We report on our continued development of a high resolution monochromatic x-ray imaging system based on spherically curved crystals. This system can be used extensively in experiments of the inertial confinement fusion (ICF) program. The system is currently used for, but not limited to, diagnostics of targets ablatively accelerated by the Nike KrF laser. A spherically curved quartz crystal (2d=6.68703 Angstrom, R=200 mm) has been used to produce monochromatic backlit images with the He-like Si resonance line (1865 eV) as the source of radiation. Another quartz crystal (2d=8.5099 Angstrom, R=200 mm) with the H-like Mg resonance line (1473 eV) has been used for backlit imaging with higher contrast. The spatial resolution of the x-ray optical system is 1.7 μm in selected places and 2-3 μm over a larger area. A second crystal with a separate backlighter was added to the imaging system, making it possible to use all four strips of the framing camera. Time-resolved, 20x magnified, backlit monochromatic images of CH planar targets driven by the Nike facility have been obtained with a spatial resolution of 2.5 μm in selected places and 5 μm over the focal spot of the Nike laser. We are exploring the extension of this technique to higher and lower backlighter energies.

  19. Color Image Segmentation Based on Statistics of Location and Feature Similarity

    Science.gov (United States)

    Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi

    The process of segmenting an image and extracting its remarkable regions is an important research subject for image understanding, yet algorithms based on global features are rarely found. The requirement for such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm using a multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to statistics of the region, such as the mean value, standard deviation, maximum value and minimum value of pixel location, brightness and color elements, with the statistics updated as the regions grow. We also introduce a new concept of conspicuity degree and apply the algorithm to 21 various images to examine its effectiveness. The remarkable object regions extracted by the presented system coincided closely with those identified by the sixty-four subjects who participated in the psychological experiment.

  20. Dictionary-Based Image Denoising by Fused-Lasso Atom Selection

    Directory of Open Access Journals (Sweden)

    Ao Li

    2014-01-01

    Full Text Available We propose an efficient image denoising scheme based on the fused lasso with dictionary learning. The scheme has two important contributions. First, we learn a patch-based adaptive dictionary by principal component analysis (PCA) after clustering the image into many subsets, which better preserves the local geometric structure. Second, we code the patches in each subset by the fused lasso with the cluster-wise learned dictionary, and propose an iterative Split Bregman method to solve it rapidly. We demonstrate the capabilities of the scheme with several experiments. The results show that the proposed scheme is competitive with some excellent denoising algorithms.

  1. Hepatic trauma: CT findings and considerations based on our experience in emergency diagnostic imaging

    International Nuclear Information System (INIS)

    Romano, Luigia; Giovine, Sabrina; Guidi, Guido; Tortora, Giovanni; Cinque, Teresa; Romano, Stefania

    2004-01-01

    [...] and peritoneal fluid evaluation may be used to make a first differentiation of the severity of lesions, but haemodynamic parameters may help the clinician to prefer a conservative treatment. In emergency-based hospitals, and also in our experience, positive benefits spring from diagnostic accuracy and the consequent correct therapeutic management

  2. Design of a space-based infrared imaging interferometer

    Science.gov (United States)

    Hart, Michael; Hope, Douglas; Romeo, Robert

    2017-07-01

    Present space-based optical imaging sensors are expensive. Launch costs are dictated by weight and size, and system design must take into account the low fault tolerance of a system that cannot be readily accessed once deployed. We describe the design and first prototype of the space-based infrared imaging interferometer (SIRII) that aims to mitigate several aspects of the cost challenge. SIRII is a six-element Fizeau interferometer intended to operate in the short-wave and midwave IR spectral regions over a 6×6 mrad field of view. The volume is smaller by a factor of three than a filled-aperture telescope with equivalent resolving power. The structure and primary optics are fabricated from light-weight space-qualified carbon fiber reinforced polymer; they are easy to replicate and inexpensive. The design is intended to permit one-time alignment during assembly, with no need for further adjustment once on orbit. A three-element prototype of the SIRII imager has been constructed with a unit telescope primary mirror diameter of 165 mm and edge-to-edge baseline of 540 mm. The optics, structure, and interferometric signal processing principles draw on experience developed in ground-based astronomical applications designed to yield the highest sensitivity and resolution with cost-effective optical solutions. The initial motivation for the development of SIRII was the long-term collection of technical intelligence from geosynchronous orbit, but the scalable nature of the design will likely make it suitable for a range of IR imaging scenarios.

  3. Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.

    Science.gov (United States)

    Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B

    2018-02-01

    Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study was carried out for utilizing simultaneous PET/MR imaging for five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared

  4. Hierarchical graph-based segmentation for extracting road networks from high-resolution satellite images

    Science.gov (United States)

    Alshehhi, Rasha; Marpu, Prashanth Reddy

    2017-04-01

    Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.

  5. Experience with CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.; Cannon, M.

    1994-10-01

    This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.

  6. Individual Building Extraction from TerraSAR-X Images Based on Ontological Semantic Analysis

    Directory of Open Access Journals (Sweden)

    Rong Gui

    2016-08-01

    Full Text Available Accurate building information plays a crucial role for urban planning, human settlements and environmental management. Synthetic aperture radar (SAR images, which deliver images with metric resolution, allow for analyzing and extracting detailed information on urban areas. In this paper, we consider the problem of extracting individual buildings from SAR images based on domain ontology. By analyzing a building scattering model with different orientations and structures, the building ontology model is set up to express multiple characteristics of individual buildings. Under this semantic expression framework, an object-based SAR image segmentation method is adopted to provide homogeneous image objects, and three categories of image object features are extracted. Semantic rules are implemented by organizing image object features, and the individual building objects expression based on an ontological semantic description is formed. Finally, the building primitives are used to detect buildings among the available image objects. Experiments on TerraSAR-X images of Foshan city, China, with a spatial resolution of 1.25 m × 1.25 m, have shown the total extraction rates are above 84%. The results indicate the ontological semantic method can exactly extract flat-roof and gable-roof buildings larger than 250 pixels with different orientations.

  7. elastix: a toolbox for intensity-based medical image registration.

    Science.gov (United States)

    Klein, Stefan; Staring, Marius; Murphy, Keelin; Viergever, Max A; Pluim, Josien P W

    2010-01-01

    Medical image registration is an important task in medical image processing. It refers to the process of aligning data sets, possibly from different modalities (e.g., magnetic resonance and computed tomography), different time points (e.g., follow-up scans), and/or different subjects (in case of population studies). A large number of methods for image registration are described in the literature. Unfortunately, there is not one method that works for all applications. We have therefore developed elastix, a publicly available computer program for intensity-based medical image registration. The software consists of a collection of algorithms that are commonly used to solve medical image registration problems. The modular design of elastix allows the user to quickly configure, test, and compare different registration methods for a specific application. The command-line interface enables automated processing of large numbers of data sets, by means of scripting. The usage of elastix for comparing different registration methods is illustrated with three example experiments, in which individual components of the registration method are varied.

  8. Image matching navigation based on fuzzy information

    Institute of Scientific and Technical Information of China (English)

    田玉龙; 吴伟仁; 田金文; 柳健

    2003-01-01

    In conventional image matching methods, the matching process is mostly based on image statistics. One aspect neglected by all these methods is the large amount of fuzzy information contained in images. A new fuzzy matching algorithm for navigation, based on fuzzy similarity, is presented in this paper. Because fuzzy theory can describe well the fuzzy information contained in images, an image matching method based on fuzzy similarity can be expected to perform well. Experimental results using the fuzzy-information-based matching algorithm demonstrate its reliability and practicability.

  9. Image-based Fuzzy Parking Control of a Car-like Mobile Robot

    Directory of Open Access Journals (Sweden)

    Yin Yin Aye

    2017-03-01

    Full Text Available This paper develops a novel automatic parking system using an image-based fuzzy controller, in which the slope and intercept of the desired target line are the inputs to the reasoning and the steering angle of the robot is the output. The objective of this study is for a robot equipped with a camera to detect a rectangular parking frame drawn on the floor by means of image processing. The desired target line to be followed by the robot is generated from a captured image using the Hough transform. The fuzzy controller is designed according to experiments with skilled drivers, the fuzzy rules are tuned, and the fuzzy membership functions are optimized experimentally. The effectiveness of the proposed method is demonstrated through experimental results with an actual mobile robot
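A minimal sketch of extracting the target line's slope and intercept, the two fuzzy-controller inputs, with OpenCV's Hough transform; the Canny and Hough thresholds are illustrative.

```python
import cv2
import numpy as np

def target_line(frame_gray):
    edges = cv2.Canny(frame_gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)  # (rho, theta) lines
    if lines is None:
        return None
    rho, theta = lines[0][0]               # strongest line first
    s = np.sin(theta)
    if abs(s) < 1e-6:                      # vertical line: slope undefined
        return None
    # Convert rho/theta to slope-intercept form y = m*x + b.
    return -np.cos(theta) / s, rho / s
```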

  10. Correction method and software for image distortion and nonuniform response in charge-coupled device-based x-ray detectors utilizing x-ray image intensifier

    International Nuclear Information System (INIS)

    Ito, Kazuki; Kamikubo, Hironari; Yagi, Naoto; Amemiya, Yoshiyuki

    2005-01-01

    An on-site method for correcting the image distortion and nonuniform response of a charge-coupled device (CCD)-based X-ray detector was developed using the response of an imaging plate as a reference. The CCD-based X-ray detector consists of a beryllium-windowed X-ray image intensifier (Be-XRII) and a CCD as the image sensor. An image distortion of 29% was reduced to less than 1% after the correction. In the correction of nonuniform response due to image distortion, subpixel approximation was performed for the redistribution of pixel values, and the optimal number of subpixels is discussed. In an experiment with polystyrene (PS) latex, it was verified that the corrections of both image distortion and nonuniform response worked properly. The correction of the "contrast reduction" problem was also demonstrated on an isotropic X-ray scattering pattern from the PS latex. (author)

  11. Voxel-based clustered imaging by multiparameter diffusion tensor images for glioma grading.

    Science.gov (United States)

    Inano, Rika; Oishi, Naoya; Kunieda, Takeharu; Arakawa, Yoshiki; Yamao, Yukihiro; Shibata, Sumiya; Kikuchi, Takayuki; Fukuyama, Hidenao; Miyamoto, Susumu

    2014-01-01

    Gliomas are the most common intra-axial primary brain tumour; therefore, predicting glioma grade would influence therapeutic strategies. Although several methods based on single or multiple parameters from diagnostic images exist, a definitive method for pre-operatively determining glioma grade remains unknown. We aimed to develop an unsupervised method using multiple parameters from pre-operative diffusion tensor images for obtaining a clustered image that could enable visual grading of gliomas. Fourteen patients with low-grade gliomas and 19 with high-grade gliomas underwent diffusion tensor imaging and three-dimensional T1-weighted magnetic resonance imaging before tumour resection. Seven features including diffusion-weighted imaging, fractional anisotropy, first eigenvalue, second eigenvalue, third eigenvalue, mean diffusivity and raw T2 signal with no diffusion weighting, were extracted as multiple parameters from diffusion tensor imaging. We developed a two-level clustering approach for a self-organizing map followed by the K-means algorithm to enable unsupervised clustering of a large number of input vectors with the seven features for the whole brain. The vectors were grouped by the self-organizing map as protoclusters, which were classified into the smaller number of clusters by K-means to make a voxel-based diffusion tensor-based clustered image. Furthermore, we also determined if the diffusion tensor-based clustered image was really helpful for predicting pre-operative glioma grade in a supervised manner. The ratio of each class in the diffusion tensor-based clustered images was calculated from the regions of interest manually traced on the diffusion tensor imaging space, and the common logarithmic ratio scales were calculated. We then applied support vector machine as a classifier for distinguishing between low- and high-grade gliomas. Consequently, the sensitivity, specificity, accuracy and area under the curve of receiver operating characteristic
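A minimal sketch of the two-level clustering (SOM protoclusters, then K-means), assuming a hypothetical voxel-by-feature matrix `X` of the seven diffusion-derived parameters; MiniSom and the 10x10 map with five final classes are illustrative stand-ins for the paper's implementation.

```python
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans

# `X` is a hypothetical (n_voxels, 7) array of the DTI-derived features.
som = MiniSom(10, 10, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 10000)                       # level 1: 100 protoclusters
protos = som.get_weights().reshape(-1, X.shape[1])

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(protos)  # level 2
# Assign each voxel to the class of its best-matching SOM unit.
bmu = np.array([np.ravel_multi_index(som.winner(v), (10, 10)) for v in X])
labels = km.labels_[bmu]                         # voxel-wise clustered image
```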

  12. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation

    Institute of Scientific and Technical Information of China (English)

    Tian Dongping

    2017-01-01

    In recent years, the multimedia annotation problem has been attracting significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Then, different image features with different magnitudes will result in different performance for automatic image annotation. To this end, a Gaussian normalization method is utilized to normalize different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.

  13. Deep Learning- and Transfer Learning-Based Super Resolution Reconstruction from Single Medical Image

    Directory of Open Access Journals (Sweden)

    YiNan Zhang

    2017-01-01

    Full Text Available Medical images play an important role in medical diagnosis and research. In this paper, a transfer learning- and deep learning-based super resolution reconstruction method is introduced. The proposed method contains one bicubic interpolation template layer and two convolutional layers. The bicubic interpolation template layer is prefixed by mathematical deduction, while the two convolutional layers learn from training samples. To reduce the number of medical images needed for training, a SIFT feature-based transfer learning method is proposed: not only medical images but also other types of images can be selectively added to the training dataset. In empirical experiments, results on eight distinct medical images show improved image quality and reduced runtime. Further, the proposed method produces slightly sharper edges than other deep learning approaches in less time, and it is projected that the hybrid architecture of a prefixed template layer and unfixed hidden layers has potential in other applications.
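
    The described architecture (a parameter-free bicubic upsampling layer followed by two trainable convolutional layers) can be sketched in a few lines of PyTorch. Filter counts and kernel sizes below are assumptions for illustration; the paper's exact hyperparameters and the SIFT-based transfer learning step are not reproduced.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class BicubicSRNet(nn.Module):
            """A fixed bicubic 'template' layer followed by two learnable
            convolutional layers, as in the described architecture."""
            def __init__(self, scale=2):
                super().__init__()
                self.scale = scale
                self.conv1 = nn.Conv2d(1, 64, kernel_size=9, padding=4)
                self.conv2 = nn.Conv2d(64, 1, kernel_size=5, padding=2)

            def forward(self, x):
                # Prefixed interpolation layer: no trainable parameters.
                x = F.interpolate(x, scale_factor=self.scale, mode='bicubic',
                                  align_corners=False)
                x = F.relu(self.conv1(x))
                return self.conv2(x)

        # A (4, 1, 32, 32) low-resolution batch becomes (4, 1, 64, 64):
        sr = BicubicSRNet(scale=2)(torch.randn(4, 1, 32, 32))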

  14. Computer-aided diagnostics of screening mammography using content-based image retrieval

    Science.gov (United States)

    Deserno, Thomas M.; Soiron, Michael; de Oliveira, Júlia E. E.; de A. Araújo, Arnaldo

    2012-03-01

    Breast cancer is one of the main causes of death among women in occidental countries. In recent years, screening mammography has been established worldwide for early detection of breast cancer, and computer-aided diagnostics (CAD) is being developed to assist physicians reading mammograms. A promising method for CAD is content-based image retrieval (CBIR). Recently, we have developed a classification scheme of suspicious tissue patterns based on the support vector machine (SVM). In this paper, we continue moving towards automatic CAD of screening mammography. The experiments are based on a total of 10,509 radiographs collected from different sources. Of these, 3,375 images are provided with one chain code annotation of cancerous regions and 430 radiographs with more than one. In different experiments, these data are divided into 12 and 20 classes, distinguishing between four categories of tissue density, three categories of pathology and, in the 20-class problem, two categories of lesion type. Balancing the number of images in each class leaves 233 and 45 images in each of the 12 and 20 classes, respectively. Using a two-dimensional principal component analysis, features are extracted from small patches of 128 x 128 pixels and classified by means of an SVM. Overall, the accuracy of the raw classification was 61.6 % and 52.1 % for the 12- and 20-class problems, respectively. The confusion matrices are assessed for detailed analysis. Furthermore, an implementation of an SVM-based CBIR system for CADx in screening mammography is presented. In conclusion, with smarter patch extraction, the CBIR approach might reach precision rates that are helpful for physicians. This, however, needs more comprehensive evaluation on clinical data.

  15. Hierarchical and successive approximate registration of the non-rigid medical image based on thin-plate splines

    Science.gov (United States)

    Hu, Jinyan; Li, Li; Yang, Yunfeng

    2017-06-01

    A hierarchical and successive approximate registration method for non-rigid medical images, based on thin-plate splines, is proposed in this paper. There are two major novelties in the proposed method. First, hierarchical registration based on the wavelet transform is used: the approximation image of the wavelet transform is selected as the registration object. Second, a successive approximation method is used to accomplish the non-rigid registration, i.e., local regions of the image pair are registered roughly based on thin-plate splines, and the current rough registration result is then selected as the object to be registered in the following registration step. Experiments show that the proposed method is effective for registering non-rigid medical images.
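
    The core operation, warping one image onto another from a set of corresponding landmarks with a thin-plate spline, can be sketched with SciPy's RBFInterpolator (whose default kernel is the thin-plate spline). This is a generic TPS warp, not the paper's hierarchical wavelet scheme; function and variable names are illustrative.

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.ndimage import map_coordinates

        def tps_warp(moving, src_pts, dst_pts):
            """Warp `moving` so that landmarks src_pts map onto dst_pts.
            src_pts, dst_pts: (n, 2) arrays of (row, col) pairs."""
            # Fit the inverse mapping (target -> source) so that every output
            # pixel can look up where it comes from in the moving image.
            tps = RBFInterpolator(dst_pts, src_pts, kernel='thin_plate_spline')
            rows, cols = np.mgrid[0:moving.shape[0], 0:moving.shape[1]]
            grid = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
            src_coords = tps(grid).T.reshape(2, *moving.shape)
            return map_coordinates(moving, src_coords, order=1)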

  16. Detection rates in pediatric diagnostic imaging: a picture archive and communication system compared with a web-based imaging system

    International Nuclear Information System (INIS)

    McDonald, L.; Cramer, B.; Barrett, B.

    2006-01-01

    This prospective study assesses whether there are differences in accuracy of interpretation of diagnostic images among users of a picture archive and communication system (PACS) diagnostic workstation, compared with a less costly Web-based imaging system on a personal computer (PC) with a high resolution monitor. One hundred consecutive pediatric chest or abdomen and skeletal X-rays were selected from hospital inpatient and outpatient studies over a 5-month interval. They were classified as normal (n = 32), obviously abnormal (n = 33), or having subtle abnormal findings (n = 35) by 2 senior radiologists who reached a consensus for each individual case. Subsequently, 5 raters with varying degrees of experience independently viewed and interpreted the cases as normal or abnormal. Raters viewed each image 1 month apart on a PACS and on the Web-based PC imaging system. There was no relation between accuracy of detection and the system used to evaluate X-ray images (P = 0.92). The total percentage of incorrect interpretations on the Web-based PC imaging system was 23.2%, compared with 23.6% on the PACS (P = 0.92). For all raters combined, the overall difference in proportion assessed incorrectly on the PACS, compared with the PC system, was not significant at 0.4% (95%CI, -3.5% to 4.3%). The high-resolution Web-based imaging system via PC is an adequate alternative to a PACS clinical workstation. Accordingly, the provision of a more extensive network of workstations throughout the hospital setting could have potentially significant cost savings. (author)

  17. Bidirectional composition on lie groups for gradient-based image alignment.

    Science.gov (United States)

    Mégret, Rémi; Authesserre, Jean-Baptiste; Berthoumieu, Yannick

    2010-09-01

    In this paper, a new formulation based on bidirectional composition on Lie groups (BCL) for parametric gradient-based image alignment is presented. Contrary to conventional approaches, the BCL method takes advantage of the gradients of both the template and the current image without combining them a priori. Based on this bidirectional formulation, two methods are proposed and their relationship with state-of-the-art gradient-based approaches is fully discussed. The first one, the BCL method, relies on the compositional framework to provide the minimization of the compensated error with respect to an augmented parameter vector. The second one, the projected BCL (PBCL), corresponds to a close approximation of the BCL approach. A comparative study is carried out dealing with computational complexity, convergence rate and frequency of convergence. Numerical experiments using a conventional benchmark show the performance improvement, especially for asymmetric levels of noise, which is also discussed from a theoretical point of view.

  18. A SVD Based Image Complexity Measure

    DEFF Research Database (Denmark)

    Gustafsson, David Karl John; Pedersen, Kim Steenstrup; Nielsen, Mads

    2009-01-01

    Images are composed of geometric structures and texture, and different image processing tools - such as denoising, segmentation and registration - are suitable for different types of image content. Characterization of the image content in terms of geometric structure and texture is an important problem that one is often faced with. We propose a patch-based complexity measure, based on how well a patch can be approximated using singular value decomposition. As such, the image complexity is determined by the complexity of its patches. The concept is demonstrated on sequences from the newly collected DIKU Multi-Scale image database.
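
    A minimal interpretation of such a measure: score each patch by the energy a rank-k SVD approximation fails to capture, so near-flat or edge-like patches score low and texture scores high. The rank k and patch size below are assumptions.

        import numpy as np

        def patch_complexity(patch, k=3):
            """Share of patch energy NOT captured by a rank-k SVD approximation:
            close to 0 for simple geometric structure, larger for texture."""
            s = np.linalg.svd(patch.astype(float), compute_uv=False)
            energy = np.sum(s ** 2) + 1e-12
            return 1.0 - np.sum(s[:k] ** 2) / energy

        def complexity_map(image, patch=16, k=3):
            """Score non-overlapping patches across the whole image."""
            h, w = image.shape[0] // patch, image.shape[1] // patch
            return np.array([[patch_complexity(
                image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch], k)
                for j in range(w)] for i in range(h)])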

  19. Evaluation of an image-based tracking workflow with Kalman filtering for automatic image plane alignment in interventional MRI.

    Science.gov (United States)

    Neumann, M; Cuvillon, L; Breton, E; de Mathelin, M

    2013-01-01

    Recently, a workflow for magnetic resonance (MR) image plane alignment based on tracking in real-time MR images was introduced. The workflow is based on a tracking device composed of 2 resonant micro-coils and a passive marker, and allows for tracking of the passive marker in clinical real-time images and automatic (re-)initialization using the microcoils. As the Kalman filter has proven its benefit as an estimator and predictor, it is well suited for use in tracking applications. In this paper, a Kalman filter is integrated in the previously developed workflow in order to predict position and orientation of the tracking device. Measurement noise covariances of the Kalman filter are dynamically changed in order to take into account that, according to the image plane orientation, only a subset of the 3D pose components is available. The improved tracking performance of the Kalman extended workflow could be quantified in simulation results. Also, a first experiment in the MRI scanner was performed but without quantitative results yet.
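
    The idea of inflating the measurement noise covariance for pose components that the current image plane cannot observe can be sketched as follows (plain NumPy, constant-velocity model). The state layout, noise levels and the 'unobserved' variance are illustrative assumptions, not the paper's tuned filter.

        import numpy as np

        class PoseKalman:
            """Constant-velocity Kalman filter whose measurement covariance R
            is inflated for pose components the current plane cannot observe."""
            def __init__(self, dim=3, q=1e-3, r=1e-2):
                self.x = np.zeros(2 * dim)               # [position, velocity]
                self.P = np.eye(2 * dim)
                self.F = np.eye(2 * dim)
                self.F[:dim, dim:] = np.eye(dim)         # pos += vel (dt = 1)
                self.H = np.hstack([np.eye(dim), np.zeros((dim, dim))])
                self.Q = q * np.eye(2 * dim)
                self.r = r

            def step(self, z, observed):
                """z: measured position; observed: boolean visibility mask."""
                self.x = self.F @ self.x                 # predict
                self.P = self.F @ self.P @ self.F.T + self.Q
                R = np.diag(np.where(observed, self.r, 1e6))
                S = self.H @ self.P @ self.H.T + R       # update
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ (z - self.H @ self.x)
                self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
                return self.x[:len(z)]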

  20. Compartmental analysis, imaging techniques and population pharmacokinetic. Experiences at CENTIS

    International Nuclear Information System (INIS)

    Hernández, Ignacio; León, Mariela; Leyva, Rene; Castro, Yusniel; Ayra, Fernando E.

    2016-01-01

    Introduction: In pharmacokinetic evaluation, small rodents are used to a large extent. Traditional pharmacokinetic evaluation by the two-step approach can be replaced by a sparse-data design, which may, however, represent a complicated situation to evaluate satisfactorily from the statistical point of view. In this presentation, different situations of sparse data sampling are analyzed based on practical considerations. A nonlinear mixed-effects model was selected in order to estimate pharmacokinetic parameters from data simulated from real experimental results using blood sampling and imaging procedures. Materials and methods: Different scenarios representing several experimental designs with incomplete individual profiles were evaluated. Data sets were simulated based on real data from previous experiments. In all cases three to five blood samples were considered per time point. A combination of compartmental analysis with tumor uptake obtained by gammagraphy of radiolabeled drugs is also evaluated. All pharmacokinetic profiles were analyzed by means of the MONOLIX software, version 4.2.3. Results: All sampling schedules yielded the same results when computed using the MONOLIX software and the SAEM algorithm. Population and individual pharmacokinetic parameters were accurately estimated with three to five determinations per sampling point. Given the methodology and software tool used, this is an expected result, but demonstrating the method's performance in such situations allows us to select a more flexible design using a very small number of animals in preclinical research. The combination with imaging procedures also allows us to construct a completely structured compartmental analysis. Results of real experiments are presented, demonstrating the versatility of the methodology in different evaluations. The same sampling approach can be considered in phase I or II clinical trials. (author)

  1. Image segmentation-based robust feature extraction for color image watermarking

    Science.gov (United States)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed. Our method can adaptively extract feature regions from the blocks segmented by SLIC, selecting the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency and a high-frequency domain by the Discrete Cosine Transform (DCT), and the watermark images are embedded into the coefficients of the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks and achieves a trade-off between high robustness and good image quality.

  2. Catheter-based time-gated near-infrared fluorescence/OCT imaging system

    Science.gov (United States)

    Lu, Yuankang; Abran, Maxime; Cloutier, Guy; Lesage, Frédéric

    2018-02-01

    We developed a new dual-modality intravascular imaging system based on fast time-gated fluorescence intensity imaging and spectral domain optical coherence tomography (SD-OCT) for the interventional detection of atherosclerosis. A pulsed supercontinuum laser was used for both fluorescence and OCT imaging. A double-clad fiber (DCF)-based side-firing catheter was designed and fabricated to have a 23 μm spot size at a 2.2 mm working distance for OCT imaging. Its single-mode core is used for OCT, while its inner cladding transports fluorescence excitation light and collects fluorescent photons. The combination of OCT and fluorescence imaging was achieved by using a DCF coupler. For fluorescence detection, we used a time-gated technique with a novel single-photon avalanche diode (SPAD) working in an ultra-fast gating mode. A custom-made delay chip was integrated in the system to adjust the delay between the excitation laser pulse and the SPAD gate-ON window. This technique allowed the detection of fluorescent photons of interest while rejecting most of the background photons, thus leading to a significantly improved signal-to-noise ratio (SNR). Experiments were carried out in turbid media mimicking tissue with an indocyanine green (ICG) inclusion (1 mM and 100 μM) to compare the time-gated technique and the conventional continuous detection technique. The gating technique increased depth sensitivity twofold and SNR tenfold at large distances. The dual-modality imaging capacity of our system was also validated with a silicone-based tissue-mimicking phantom.

  3. A camac based data acquisition system for flat-panel image array readout

    International Nuclear Information System (INIS)

    Morton, E.J.; Antonuk, L.E.; Berry, J.E.; Huang, W.; Mody, P.; Yorkston, J.; Longo, M.J.

    1993-01-01

    A readout system has been developed to facilitate the digitization and subsequent display of image data from two-dimensional, pixellated, flat-panel, amorphous silicon imaging arrays. These arrays have been designed specifically for medical x-ray imaging applications. The readout system is based on hardware and software developed for various experiments at CERN and Fermi National Accelerator Laboratory. Additional analog signal processing and digital control electronics were constructed specifically for this application. The authors report on the form of the resulting data acquisition system, discuss aspects of its performance, and consider the compromises which were involved in its design

  4. The patient experience of high technology medical imaging: A systematic review of the qualitative evidence

    International Nuclear Information System (INIS)

    Munn, Zachary; Jordan, Zoe

    2011-01-01

    Background: When presenting to an imaging department, the person who is to be imaged is often in a vulnerable state, and can experience the scan in a number of ways. It is the role of the radiographer to produce a high quality image and facilitate patient care throughout the imaging process. A qualitative systematic review was performed to synthesise the existent evidence on the patient experience of high technology medical imaging. Only papers relating to Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) were identified. Inclusion criteria: Studies that were of a qualitative design that explored the phenomenon of interest, the patient experience of high technology medical imaging. Participants included anyone who had undergone one of these procedures. Methods: A systematic search of medical and allied health databases was conducted. Articles identified during the search process that met the inclusion criteria were then critically appraised for methodological quality independently by two reviewers. Results: During the search and inclusion process, 15 studies were found that were deemed of suitable quality to be included in the review. From the 15 studies, 127 findings were extracted from the included studies. These were analysed in more detail to observe common themes, and then grouped into 33 categories. From these 33 categories, 11 synthesised findings were produced. The 11 synthesised findings highlight the diverse, unique and challenging ways in which people experience imaging with MRI and CT scanners. Conclusion: The results of the review demonstrate the diverse ways in which people experience medical imaging. All health professionals involved in imaging need to be aware of the different ways each patient may experience imaging.

  5. Tie Points Extraction for SAR Images Based on Differential Constraints

    Science.gov (United States)

    Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.

    2018-04-01

    Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TPs extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in azimuth direction and large in range direction. Image pyramids are built firstly, and then corresponding layers of pyramids are matched from the top to the bottom. In the process, the similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with the long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which appends strong constraints in azimuth direction and weak constraints in range direction. Matching points in the lower pyramid images are predicted with the local bilinear transformation model in range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.

  6. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  7. A kind of color image segmentation algorithm based on super-pixel and PCNN

    Science.gov (United States)

    Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. The Pulse Coupled Neural Network (PCNN) has a biological background; when applied to image segmentation it can be viewed as a region-based method, but due to the dynamic properties of PCNN, many unconnected neurons will pulse at the same time, so it is necessary to identify different regions for further processing. The existing PCNN image segmentation algorithm based on region growing is designed for grayscale images and cannot be directly used for color image segmentation. In addition, super-pixels can better preserve the edges of images and, at the same time, reduce the influence of individual pixel differences on segmentation. Therefore, on the basis of super-pixels, the original region-growing PCNN algorithm is improved in this paper. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. It is then determined whether to stop growing by comparing the averages of each color channel over all pixels in the corresponding regions of the color super-pixel image. Experiment results show that the proposed algorithm for color image segmentation is fast, effective, and achieves satisfactory accuracy.

  8. Low-dose multiple-information retrieval algorithm for X-ray grating-based imaging

    International Nuclear Information System (INIS)

    Wang Zhentian; Huang Zhifeng; Chen Zhiqiang; Zhang Li; Jiang Xiaolei; Kang Kejun; Yin Hongxia; Wang Zhenchang; Stampanoni, Marco

    2011-01-01

    The present work proposes a low-dose information retrieval algorithm for the X-ray grating-based multiple-information imaging (GB-MII) method, which can retrieve the attenuation, refraction and scattering information of samples from only three images. The algorithm aims at reducing the exposure time and the dose delivered to the sample. The multiple-information retrieval problem in GB-MII is solved by transforming a set of nonlinear equations into linear equations, exploiting the properties of the trigonometric functions. The proposed algorithm is validated by experiments on both a conventional X-ray source and a synchrotron X-ray source, and compared with the traditional multiple-image-based retrieval algorithm. The experimental results show that our algorithm is comparable with the traditional retrieval algorithm and is especially suitable for high signal-to-noise systems.
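
    For context, a generic three-step phase-stepping retrieval, which exploits exactly this trigonometric structure, is sketched below; it recovers the mean (attenuation), phase (refraction) and relative modulation (scattering) per pixel. This is the textbook retrieval under the stated cosine model, not necessarily the paper's specific low-dose algorithm.

        import numpy as np

        def retrieve(i0, i1, i2):
            """Per-pixel retrieval from three phase steps over one period,
            under the model I_k = a + b*cos(phi + 2*pi*k/3), k = 0, 1, 2."""
            stack = np.stack([i0, i1, i2]).astype(float)
            phases = np.exp(-2j * np.pi * np.arange(3) / 3)
            c = np.sum(stack * phases[:, None, None], axis=0)  # c = 1.5*b*e^{i*phi}
            a = stack.mean(axis=0)          # mean intensity -> attenuation
            b = 2.0 * np.abs(c) / 3.0       # modulation amplitude
            phi = np.angle(c)               # stepping-curve phase -> refraction
            return a, phi, b / a            # attenuation, refraction, visibility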

  9. Image-Based Single Cell Profiling: High-Throughput Processing of Mother Machine Experiments.

    Directory of Open Access Journals (Sweden)

    Christian Carsten Sachs

    Full Text Available Microfluidic lab-on-chip technology combined with live-cell imaging has enabled the observation of single cells in their spatio-temporal context. The mother machine (MM) cultivation system is particularly attractive for the long-term investigation of rod-shaped bacteria, since it facilitates continuous cultivation and observation of individual cells over many generations in a highly parallelized manner. To date, the lack of fully automated image analysis software limits the practical applicability of the MM as a phenotypic screening tool. We present an image analysis pipeline for the automated processing of MM time-lapse image stacks. The pipeline supports all analysis steps, i.e., image registration, orientation correction, channel/cell detection, cell tracking, and result visualization. Tailored algorithms account for the specialized MM layout to enable a robust automated analysis. Image data generated in a two-day growth study (≈ 90 GB) is analyzed in ≈ 30 min, with negligible differences in growth rate between automated and manual evaluation. The proposed methods are implemented in the software molyso (MOther machine AnaLYsis SOftware), which provides a new profiling tool for the unbiased analysis of hitherto inaccessible large-scale MM image stacks. Presented is molyso, ready-to-use open-source software (BSD-licensed) for the unsupervised analysis of MM time-lapse image stacks; the molyso source code and user manual are available at https://github.com/modsim/molyso.

  10. Fast method of constructing image correlations to build a free network based on image multivocabulary trees

    Science.gov (United States)

    Zhan, Zongqian; Wang, Xin; Wei, Minglu

    2015-05-01

    In image-based three-dimensional (3-D) reconstruction, one topic of growing importance is how to quickly obtain a 3-D model from a large number of images. The retrieval of the correct and relevant images for the model poses a considerable technological challenge. The "image vocabulary tree" has been proposed as a method to search for similar images. However, significant drawbacks of this approach are its low time efficiency and barely satisfactory classification results. The method proposed here is inspired by, and improves upon, some recent methods. Specifically, vocabulary quality is considered and multivocabulary trees are designed to improve the classification result. A marked improvement was, indeed, observed in our evaluation of the proposed method. To improve time efficiency, graphics processing unit (GPU) compute unified device architecture (CUDA) parallel computation is applied in the multivocabulary trees. The results of the experiments showed that the GPU was three to four times more efficient than the enumeration matching and CPU methods when the number of images is large. This paper presents a reliable reference method for the rapid construction of a free network to be used for computing 3-D information.

  11. Hybrid statistics-simulations based method for atom-counting from ADF STEM images

    Energy Technology Data Exchange (ETDEWEB)

    De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)

    2017-06-15

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. - Highlights: • A hybrid method for atom-counting from ADF STEM images is introduced. • Image simulations are incorporated into a statistical framework in a reliable manner. • Limits of the existing methods for atom-counting are far exceeded. • Reliable counting results from an experimental low dose image are obtained. • Progress towards reliable quantitative analysis of beam-sensitive materials is made.

  12. Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility

    Science.gov (United States)

    Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.

    2017-12-01

    The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. The simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models derived from dense image matching. The final textured 4D model allows one to revisit a completed experiment at any time in a dynamic and interactive mode. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the running experiment, the DIC analysis outputs quantitative information, such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.

  13. IMAGE-BASED RECONSTRUCTION AND ANALYSIS OF DYNAMIC SCENES IN A LANDSLIDE SIMULATION FACILITY

    Directory of Open Access Journals (Sweden)

    M. Scaioni

    2017-12-01

    Full Text Available The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. The simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models derived from dense image matching. The final textured 4D model allows one to revisit a completed experiment at any time in a dynamic and interactive mode. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the running experiment, the DIC analysis outputs quantitative information, such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.

  14. Photon-counting-based diffraction phase microscopy combined with single-pixel imaging

    Science.gov (United States)

    Shibuya, Kyuki; Araki, Hiroyuki; Iwata, Tetsuo

    2018-04-01

    We propose a photon-counting (PC)-based quantitative-phase imaging (QPI) method for use in diffraction phase microscopy (DPM) that is combined with a single-pixel imaging (SPI) scheme (PC-SPI-DPM). This combination of DPM with the SPI scheme overcomes a low optical throughput problem that has occasionally prevented us from obtaining quantitative-phase images in DPM through use of a high-sensitivity single-channel photodetector such as a photomultiplier tube (PMT). The introduction of a PMT allowed us to perform PC with ease and thus solved a dynamic range problem that was inherent to SPI. As a proof-of-principle experiment, we performed a comparison study of analogue-based SPI-DPM and PC-SPI-DPM for a 125-nm-thick indium tin oxide (ITO) layer coated on a silica glass substrate. We discuss the basic performance of the method and potential future modifications of the proposed system.

  15. Improving abdomen tumor low-dose CT images using a fast dictionary learning based processing

    International Nuclear Information System (INIS)

    Chen Yang; Shi Luyao; Shu Huazhong; Luo Limin; Coatrieux, Jean-Louis; Yin Xindao; Toumoulin, Christine

    2013-01-01

    In abdomen computed tomography (CT), repeated radiation exposures are often inevitable for cancer patients who receive surgery or radiotherapy guided by CT images. Low-dose scans should thus be considered in order to avoid the harm of accumulative x-ray radiation. This work is aimed at improving abdomen tumor CT images from low-dose scans by using a fast dictionary learning (DL) based processing. Stemming from sparse representation theory, the proposed patch-based DL approach allows effective suppression of both mottled noise and streak artifacts. The experiments carried out on clinical data show that the proposed method brings encouraging improvements in abdomen low-dose CT images with tumors. (paper)
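
    A patch-based dictionary-learning denoiser in the same spirit can be sketched with scikit-learn; patch size, dictionary size and sparsity level below are illustrative assumptions, and the authors' fast DL method is not reproduced.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import (extract_patches_2d,
                                                      reconstruct_from_patches_2d)

        def dl_denoise(noisy, patch_size=(7, 7), n_atoms=128):
            """Learn a patch dictionary from the noisy CT slice, then rebuild
            the image from sparse codes to suppress mottled noise and streaks."""
            patches = extract_patches_2d(noisy, patch_size)
            X = patches.reshape(len(patches), -1)
            means = X.mean(axis=1, keepdims=True)   # code offsets separately
            dico = MiniBatchDictionaryLearning(
                n_components=n_atoms, alpha=1.0, batch_size=256,
                transform_algorithm='omp', transform_n_nonzero_coefs=4,
                random_state=0).fit(X - means)
            code = dico.transform(X - means)        # sparse coding of each patch
            denoised = code @ dico.components_ + means
            return reconstruct_from_patches_2d(
                denoised.reshape(patches.shape), noisy.shape)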

  16. Retinal image quality assessment based on image clarity and content

    Science.gov (United States)

    Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim

    2016-09-01

    Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.

  17. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Cai, W; Rottmann, J; Williams, C; Wagar, M; Berbeco, R; Lewis, J H; Mishra, P; Li, R; Ionascu, D

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. (paper)

  18. Fractal Image Coding Based on a Fitting Surface

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2014-01-01

    Full Text Available A no-search fractal image coding method based on a fitting surface is proposed. In our research, an improved gray-level transform with a fitting surface is introduced. One advantage of this method is that the fitting surface is used for both the range and domain blocks, so one set of parameters can be saved. Another advantage is that the fitting surface can approximate the range and domain blocks better than the previously used fitting planes; this results in smaller block-matching errors and better decoded image quality. Since the no-search and quadtree techniques are adopted, smaller matching errors also imply fewer block matches, which results in a faster encoding process. Moreover, by combining all the fitting surfaces, a fitting surface image (FSI) is also proposed to speed up the fractal decoding. Experiments show that our proposed method yields superior performance over the other three methods considered. Relative to the range-averaged image, the FSI provides a faster fractal decoding process. Finally, by combining the proposed fractal coding method with JPEG, a hybrid coding method is designed which provides higher PSNR than JPEG while maintaining the same bpp.

  19. A novel lobster-eye imaging system based on Schmidt-type objective for X-ray-backscattering inspection

    International Nuclear Information System (INIS)

    Xu, Jie; Wang, Xin; Zhan, Qi; Huang, Shengling; Chen, Yifan; Mu, Baozhong

    2016-01-01

    This paper presents a novel lobster-eye imaging system for X-ray-backscattering inspection. The system was designed by modifying the Schmidt geometry into a treble-lens structure in order to reduce the resolution difference between the vertical and horizontal directions, as indicated by ray-tracing simulations. The lobster-eye X-ray imaging system is capable of operating over a wide range of photon energies up to 100 keV. In addition, the optics of the lobster-eye X-ray imaging system was tested to verify that they meet the requirements. X-ray-backscattering imaging experiments were performed in which T-shaped polymethyl-methacrylate objects were imaged by the lobster-eye X-ray imaging system based on both the double-lens and treble-lens Schmidt objectives. The results show similar resolution of the treble-lens Schmidt objective in both the vertical and horizontal directions. Moreover, imaging experiments were performed using a second treble-lens Schmidt objective with higher resolution. The results show that for a field of view of over 200 mm and with a 500 mm object distance, this lobster-eye X-ray imaging system based on a treble-lens Schmidt objective offers a spatial resolution of approximately 3 mm.

  20. Evaluation of an image-based tracking workflow using a passive marker and resonant micro-coil fiducials for automatic image plane alignment in interventional MRI.

    Science.gov (United States)

    Neumann, M; Breton, E; Cuvillon, L; Pan, L; Lorenz, C H; de Mathelin, M

    2012-01-01

    In this paper, an original workflow is presented for MR image plane alignment based on tracking in real-time MR images. A test device consisting of two resonant micro-coils and a passive marker is proposed for detection using image-based algorithms. Micro-coils allow for automated initialization of the object detection in dedicated low flip angle projection images; then the passive marker is tracked in clinical real-time MR images, with alternation between two oblique orthogonal image planes along the test device axis; in case the passive marker is lost in real-time images, the workflow is reinitialized. The proposed workflow was designed to minimize dedicated acquisition time to a single dedicated acquisition in the ideal case (no reinitialization required). First experiments have shown promising results for test-device tracking precision, with a mean position error of 0.79 mm and a mean orientation error of 0.24°.

  1. A MEMS-based heating holder for the direct imaging of simultaneous in-situ heating and biasing experiments in scanning/transmission electron microscopes.

    Science.gov (United States)

    Mele, Luigi; Konings, Stan; Dona, Pleun; Evertz, Francis; Mitterbauer, Christoph; Faber, Pybe; Schampers, Ruud; Jinschek, Joerg R

    2016-04-01

    The introduction of scanning/transmission electron microscopes (S/TEM) with sub-Angstrom resolution, as well as fast and sensitive detection solutions, supports the direct observation of dynamic phenomena in-situ at the atomic scale. Thereby, in-situ specimen holders play a crucial role: accurate control of the applied in-situ stimulus on the nanostructure, combined with overall system stability to assure atomic resolution, is paramount for a successful in-situ S/TEM experiment. For these reasons, MEMS-based TEM sample holders are becoming one of the preferred choices, also enabling high precision in measurements of the in-situ parameter for more reproducible data. A newly developed MEMS-based microheater is presented in combination with the new NanoEx™-i/v TEM sample holder. The concept is built on a four-point-probe temperature measurement approach allowing active, accurate local temperature control as well as calorimetry. In this paper, it is shown that the holder provides high temperature stability up to 1,300°C with a peak temperature of 1,500°C (also working accurately in gaseous environments) and high temperature measurement accuracy, enabling not only in-situ S/TEM imaging experiments but also elemental mapping at elevated temperatures using energy-dispersive X-ray spectroscopy (EDS). Moreover, it has the unique capability to enable simultaneous heating and biasing experiments. © 2016 Wiley Periodicals, Inc.

  2. SPMK AND GRABCUT BASED TARGET EXTRACTION FROM HIGH RESOLUTION REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    W. Cui

    2016-06-01

    Full Text Available Target detection and extraction from high resolution remote sensing images is a basic and widely needed application. In this paper, to improve the efficiency of image interpretation, we propose a combined detection and segmentation method to realize semi-automatic target extraction. We introduce the dense transform color scale invariant feature transform (TC-SIFT) descriptor and the histogram of oriented gradients (HOG) & HSV descriptor to characterize the spatial structure and color information of the targets. With the k-means cluster method, we get the bag of visual words, and then we adopt a three-level spatial pyramid (SP) to represent the target patch. After gathering many different kinds of target image patches from many high resolution UAV images, and using the TC-SIFT-SP and the multi-scale HOG & HSV features, we construct an SVM classifier to detect the target. In this paper, we take buildings as the targets. Experiment results show that the target detection accuracy for buildings can reach above 90%. The detection results are a series of rectangular regions of the targets; we select these rectangular regions as candidates for the foreground and adopt the GrabCut-based, boundary-regularized, semi-automatic interactive segmentation algorithm to get the accurate boundary of the target. Experiment results show its accuracy and efficiency. It can be an effective way to extract some special targets.

  3. Spmk and Grabcut Based Target Extraction from High Resolution Remote Sensing Images

    Science.gov (United States)

    Cui, Weihong; Wang, Guofeng; Feng, Chenyi; Zheng, Yiwei; Li, Jonathan; Zhang, Yi

    2016-06-01

    Target detection and extraction from high resolution remote sensing images is a basic and widely needed application. In this paper, to improve the efficiency of image interpretation, we propose a combined detection and segmentation method to realize semi-automatic target extraction. We introduce the dense transform color scale invariant feature transform (TC-SIFT) descriptor and the histogram of oriented gradients (HOG) & HSV descriptor to characterize the spatial structure and color information of the targets. With the k-means cluster method, we get the bag of visual words, and then we adopt a three-level spatial pyramid (SP) to represent the target patch. After gathering many different kinds of target image patches from many high resolution UAV images, and using the TC-SIFT-SP and the multi-scale HOG & HSV features, we construct an SVM classifier to detect the target. In this paper, we take buildings as the targets. Experiment results show that the target detection accuracy for buildings can reach above 90%. The detection results are a series of rectangular regions of the targets; we select these rectangular regions as candidates for the foreground and adopt the GrabCut-based, boundary-regularized, semi-automatic interactive segmentation algorithm to get the accurate boundary of the target. Experiment results show its accuracy and efficiency. It can be an effective way to extract some special targets.
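
    The segmentation stage maps naturally onto OpenCV's GrabCut, initialized with a detected bounding box; the sketch below omits the boundary-regularization step described in the paper, and the function name and iteration count are illustrative.

        import cv2
        import numpy as np

        def refine_detection(image_bgr, rect, iters=5):
            """Refine a detected box (x, y, w, h) into a pixel-accurate mask
            with GrabCut initialized from the rectangle."""
            mask = np.zeros(image_bgr.shape[:2], np.uint8)
            bgd = np.zeros((1, 65), np.float64)   # background GMM buffer
            fgd = np.zeros((1, 65), np.float64)   # foreground GMM buffer
            cv2.grabCut(image_bgr, mask, rect, bgd, fgd, iters,
                        cv2.GC_INIT_WITH_RECT)
            # Certain + probable foreground pixels form the target region.
            return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                            1, 0).astype(np.uint8)

        # Each SVM detection box becomes a candidate region:
        # building_mask = refine_detection(img, (x, y, w, h))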

  4. Multiple Constraints Based Robust Matching of Poor-Texture Close-Range Images for Monitoring a Simulated Landslide

    Directory of Open Access Journals (Sweden)

    Gang Qiao

    2016-05-01

    Full Text Available Landslides are one of the most destructive geo-hazards and can pose great threats to both human lives and infrastructure. Landslide monitoring has always been a research hotspot. In particular, landslide simulation experimentation is an effective tool in landslide research for obtaining critical parameters that help understand the mechanism and evaluate the triggering and controlling factors of slope failure. Compared with other traditional geotechnical monitoring approaches, the close-range photogrammetry technique shows potential in tracking and recording the 3D surface deformation and failure processes. In such cases, image matching usually plays a critical role in stereo image processing for 3D geometric reconstruction. However, complex imaging conditions such as rainfall, mass movement, illumination, and ponding reduce the texture quality of the stereo images, causing difficulties in the image matching process and resulting in very sparse matches. To address this problem, this paper presents a multiple-constraints based robust image matching approach for poor-texture close-range images, particularly useful for monitoring a simulated landslide. The Scale Invariant Feature Transform (SIFT) algorithm was first applied to the stereo images to generate scale-invariant feature points, followed by a two-step matching process: feature-based image matching and area-based image matching. In the first, feature-based matching step, a triangulation process was performed on the SIFT matches filtered by the fundamental matrix (FM) and a robust checking procedure, to serve as the basic constraints for feature-based iterated matching of all the unmatched SIFT-derived feature points inside each triangle. In the following area-based image-matching step, the corresponding points of the unmatched features in each triangle of the master image were predicted in the homologous triangle of the searching image by using geometric constraints.

  5. Secure biometric image sensor and authentication scheme based on compressed sensing.

    Science.gov (United States)

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.

  6. Preliminary energy-filtering neutron imaging with time-of-flight method on PKUNIFTY: A compact accelerator based neutron imaging facility at Peking University

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hu; Zou, Yubin, E-mail: zouyubin@pku.edu.cn; Wen, Weiwei; Lu, Yuanrong; Guo, Zhiyu

    2016-07-01

    Peking University Neutron Imaging Facility (PKUNIFTY) operates on an accelerator-based neutron source with a repetition period of 10 ms and a pulse duration of 0.4 ms, which has a rather low Cd ratio. To improve the effective Cd ratio, and thus the detection capability of the facility, energy-filtering neutron imaging was realized with an intensified CCD camera and the time-of-flight (TOF) method. The time structure of the pulsed neutron source was first simulated with Geant4, and the simulation result was evaluated experimentally. Both simulation and experiment indicated that fast and epithermal neutrons are concentrated in the first 0.8 ms of each pulse period, while in the interval 0.8-2.0 ms only thermal neutrons remain. Based on this result, neutron images with and without energy filtering were acquired; setting the exposure interval to 0.8-2.0 ms improved the detection capability of PKUNIFTY, especially for materials with strong moderating capability.
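
    The energy band selected by such a TOF gate follows from the classical relation E = ½m(L/t)²; the short script below evaluates it for an assumed flight path (the actual source-to-detector distance of PKUNIFTY is not stated in this record).

        # Classical TOF relation: a neutron arriving at time t over flight
        # path L has speed v = L / t and kinetic energy E = m * v**2 / 2.
        M_N = 1.674927e-27          # neutron mass in kg
        EV = 1.602177e-19           # joules per electronvolt

        def energy_ev(flight_path_m, tof_s):
            v = flight_path_m / tof_s
            return 0.5 * M_N * v * v / EV

        L_PATH = 5.0                            # hypothetical flight path in metres
        for t_ms in (0.8, 2.0):                 # gate boundaries from the experiment
            e_mev = energy_ev(L_PATH, t_ms * 1e-3) * 1e3
            print(f"t = {t_ms:.1f} ms  ->  E = {e_mev:.1f} meV")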

  7. TESIS experiment on EUV imaging spectroscopy of the Sun

    Science.gov (United States)

    Kuzin, S. V.; Bogachev, S. A.; Zhitnik, I. A.; Pertsov, A. A.; Ignatiev, A. P.; Mitrofanov, A. M.; Slemzin, V. A.; Shestov, S. V.; Sukhodrev, N. K.; Bugaenko, O. I.

    2009-03-01

    TESIS is a set of solar imaging instruments developed by the Lebedev Physical Institute of the Russian Academy of Sciences, to be launched aboard the Russian spacecraft CORONAS-PHOTON in December 2008. The main goal of TESIS is to provide complex observations of solar active phenomena from the transition region to the inner and outer solar corona with high spatial, spectral and temporal resolution in the EUV and soft X-ray spectral bands. TESIS includes five unique space instruments: the MgXII Imaging Spectroheliometer (MISH) with a spherical bent crystal mirror, for observations of the Sun in the monochromatic MgXII 8.42 Å line; the EUV Spectroheliometer (EUSH) with a grazing-incidence diffraction grating, for the registration of the full solar disc in monochromatic lines of the spectral band 280-330 Å; two Full-disk EUV Telescopes (FET) with multilayer mirrors covering the bands 130-136 and 290-320 Å; and the Solar EUV Coronagraph (SEC), based on the Ritchey-Chretien scheme, to observe the inner and outer solar corona from 0.2 to 4 solar radii in the spectral band 290-320 Å. The TESIS experiment will start at the rising phase of the 24th cycle of solar activity. With the advanced capabilities of its instruments, TESIS will help better understand the physics of solar flares and high-energy phenomena and provide new data on the parameters of solar plasma in the temperature range 10⁴-10⁷ K. This paper gives a brief description of the experiment, its equipment, and its scientific objectives.

  8. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    Science.gov (United States)

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for the documentation and communication of specimen data, because the morphological information of a transparent specimen can be captured as a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. Additionally, the multi-focal images within a stack are fused along 3 orthogonal directions, and the multiple features extracted from the fused images along the different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrate that the deep CNN image fusion based multilinear classifier reaches a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed approach shows great potential in building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
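
    The CCA-based combination of features from differently fused stacks can be sketched with scikit-learn; the number of components and the idea of simply concatenating the projected views are illustrative assumptions.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def cca_combine(view_a, view_b, n_components=10):
            """Project two feature views (n_samples, n_features arrays extracted
            from stacks fused along different directions) into a shared,
            maximally correlated space and concatenate the projections."""
            cca = CCA(n_components=n_components)
            a_c, b_c = cca.fit_transform(view_a, view_b)
            return np.hstack([a_c, b_c])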

  9. Remote Sensing Image Fusion Based on the Combination Grey Absolute Correlation Degree and IHS Transform

    Directory of Open Access Journals (Sweden)

    Hui LIN

    2014-12-01

    An improved fusion algorithm for multi-source remote sensing images with high spatial resolution and multi-spectral capacity is proposed, based on traditional IHS fusion and grey correlation analysis. Firstly, the grey absolute correlation degree is used to discriminate non-edge pixels from edge pixels in the high-spatial-resolution image, by which the weight of the intensity component is identified in order to combine it with the high-spatial-resolution image. Image fusion is then achieved using the inverse IHS transform. The proposed method is applied to ETM+ multi-spectral and panchromatic images, and to QuickBird multi-spectral and panchromatic images. The experiments show that the proposed fusion method efficiently preserves the spectral information of the original multi-spectral images while greatly enhancing spatial resolution. By comparison and analysis, the proposed fusion algorithm is better than both traditional IHS fusion and the fusion method based on grey correlation analysis and the IHS transform.
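
    For readers unfamiliar with IHS pan-sharpening, the sketch below shows the basic intensity-substitution step; the paper's edge-dependent weighting via the grey absolute correlation degree is reduced to a single scalar weight w, which is purely an assumption for illustration.

```python
# Minimal IHS-style pan-sharpening sketch (ratio form of intensity substitution).
import numpy as np

def ihs_fuse(ms, pan, w=1.0):
    """ms: HxWx3 float multispectral (upsampled to the pan grid); pan: HxW float."""
    intensity = ms.mean(axis=2)                      # simple I component
    new_i = w * pan + (1.0 - w) * intensity          # weighted intensity substitution
    ratio = new_i / np.maximum(intensity, 1e-6)      # scale each band by I'/I
    return np.clip(ms * ratio[..., None], 0.0, 1.0)  # equivalent to inverse transform

# Example on a random scene
ms = np.random.rand(4, 4, 3)
pan = np.random.rand(4, 4)
print(ihs_fuse(ms, pan, w=0.8).shape)
```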

  10. Research of image retrieval technology based on color feature

    Science.gov (United States)

    Fu, Yanjun; Jiang, Guangyu; Chen, Fengying

    2009-10-01

    The extracted color feature is invariant to image rotation and translation. The HSV color space is used to represent the color characteristics of the image, as it matches human visual perception. Taking advantage of human color perception, the color sectors are quantized with unequal intervals to obtain a characteristic vector. Finally, image similarity is matched using the histogram intersection algorithm and the partition-overall histogram. Users can choose a demonstration image to express the visual query, and can also adjust several weights through relevance feedback to obtain the best search result. An image retrieval system based on these approaches is presented. The experimental results show that image retrieval based on the partition-overall histogram preserves spatial distribution information while extracting the color feature efficiently, and it is superior to normal color histograms in retrieval precision. The query precision rate is more than 95%. In addition, the efficient block representation lowers the complexity of the images to be searched, thus increasing search efficiency. The image retrieval algorithm based on the partition-overall histogram proposed in the paper is efficient and effective.
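
    A minimal sketch of the color-histogram matching step, assuming a uniform 8x3x3 HSV quantization in place of the paper's unequal-interval quantization, and omitting the partition-overall scheme:

```python
# HSV color histogram + histogram intersection similarity (simplified).
import numpy as np
from skimage.color import rgb2hsv

def hsv_hist(rgb):
    hsv = rgb2hsv(rgb)                                         # values in [0, 1]
    bins = np.minimum((hsv * [8, 3, 3]).astype(int), [7, 2, 2])
    codes = bins[..., 0] * 9 + bins[..., 1] * 3 + bins[..., 2]  # 72 color cells
    h = np.bincount(codes.ravel(), minlength=72).astype(float)
    return h / h.sum()

def intersection(h1, h2):
    return np.minimum(h1, h2).sum()   # 1.0 means identical histograms

a = np.random.rand(32, 32, 3)
b = np.random.rand(32, 32, 3)
print(intersection(hsv_hist(a), hsv_hist(b)))
```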

  11. A Spherical Model Based Keypoint Descriptor and Matching Algorithm for Omnidirectional Images

    Directory of Open Access Journals (Sweden)

    Guofeng Tong

    2014-04-01

    Omnidirectional images generally have nonlinear distortion in the radial direction. Unfortunately, traditional algorithms such as the scale-invariant feature transform (SIFT) and Descriptor-Nets (D-Nets) do not work well in matching omnidirectional images, because they are incapable of dealing with this distortion. To solve this problem, a new voting algorithm is proposed based on a spherical model and the D-Nets algorithm. Because the spherical keypoint descriptor contains the distortion information of omnidirectional images, the proposed matching algorithm is invariant to distortion. Keypoint matching experiments were performed on three pairs of omnidirectional images, comparing the proposed algorithm with SIFT and D-Nets. The results show that the proposed algorithm is more robust and more precise than SIFT and D-Nets in matching omnidirectional images. Compared with SIFT and D-Nets, the proposed algorithm has two main advantages: (a) there are more real matching keypoints; (b) the coverage range of the matching keypoints is wider, including the seriously distorted areas.

  12. Morphological observation and analysis using automated image cytometry for the comparison of trypan blue and fluorescence-based viability detection method.

    Science.gov (United States)

    Chan, Leo Li-Ying; Kuksin, Dmitry; Laverty, Daniel J; Saldi, Stephanie; Qiu, Jean

    2015-05-01

    The ability to accurately determine cell viability is essential to performing a well-controlled biological experiment. Typical experiments range from standard cell culturing to advanced cell-based assays that may require cell viability measurement for downstream experiments. The traditional cell viability measurement method has been the trypan blue (TB) exclusion assay. However, since the introduction of fluorescence-based dyes for cell viability measurement using flow or image-based cytometry systems, there have been numerous publications comparing the two detection methods. Although previous studies have shown discrepancies between TB exclusion and fluorescence-based viability measurements, image-based morphological analysis was not performed to examine the viability discrepancies. In this work, we compared TB exclusion and fluorescence-based viability detection methods using image cytometry to observe morphological changes due to the effect of TB on dead cells. Imaging results showed that as the viability of a naturally dying Jurkat cell sample decreased below 70%, many TB-stained cells began to exhibit non-uniform morphological characteristics. Dead cells with these characteristics may be difficult to count under light microscopy, thus generating an artificially higher viability measurement compared to the fluorescence-based method. These morphological observations can potentially explain the differences in viability measurement between the two methods.

  13. Beam-hardening correction in CT based on basis image and TV model

    International Nuclear Information System (INIS)

    Li Qingliang; Yan Bin; Li Lei; Sun Hongsheng; Zhang Feng

    2012-01-01

    In X-ray computed tomography, beam hardening leads to artifacts and reduces image quality. This paper analyzes how beam hardening influences the original projections. Accordingly, a new beam-hardening correction method based on basis images and a TV model is put forward. Firstly, according to the physical characteristics of beam hardening, a preliminary correction model with adjustable parameters is set up. Secondly, the original projections are processed by the correction model using different parameters. Thirdly, the corrected projections are reconstructed to obtain a series of basis images. Finally, the final reconstruction image is a linear combination of the basis images. Here, with the total variation of the final reconstruction image as the cost function, the linear combination coefficients for the basis images are determined iteratively. To verify the effectiveness of the proposed method, experiments are carried out on a real phantom and an industrial part. The results show that the algorithm significantly inhibits cupping and streak artifacts in CT images. (authors)
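
    As a rough illustration of the final step, the hedged sketch below picks linear-combination coefficients for precomputed basis images by minimizing the total variation of the combined image; the random basis images, the sum-to-one constraint, and the Nelder-Mead optimizer are all assumptions, not the authors' iterative scheme.

```python
# TV-minimizing combination of basis images (illustrative only).
import numpy as np
from scipy.optimize import minimize

def total_variation(img):
    gx = np.diff(img, axis=0)
    gy = np.diff(img, axis=1)
    return np.abs(gx).sum() + np.abs(gy).sum()   # anisotropic TV

basis = np.random.rand(3, 64, 64)                # 3 hypothetical basis images

def cost(c2):
    c = np.append(c2, 1.0 - c2.sum())            # constrain coefficients to sum to 1
    combo = np.tensordot(c, basis, axes=1)       # sum_k c_k * basis_k
    return total_variation(combo)

res = minimize(cost, x0=np.array([1 / 3, 1 / 3]), method="Nelder-Mead")
coeffs = np.append(res.x, 1.0 - res.x.sum())
final_image = np.tensordot(coeffs, basis, axes=1)
print(coeffs)
```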

  14. The fast iris image clarity evaluation based on Tenengrad and ROI selection

    Science.gov (United States)

    Gao, Shuqin; Han, Min; Cheng, Xu

    2018-04-01

    In an iris recognition system, the clarity of the iris image is an important factor that influences recognition performance. During recognition, a blurred image may be rejected by the automatic iris recognition system, leading to identification failure; it is therefore necessary to evaluate iris image definition before recognition. Having considered the existing evaluation methods for iris image definition, we propose a fast algorithm to evaluate the definition of iris images in this paper. In our algorithm, the ROI (Region of Interest) is first extracted based on a reference point determined from the light spots within the pupil; the Tenengrad operator is then used to evaluate the definition of the iris image. Experimental results show that the proposed iris image definition algorithm can accurately distinguish iris images of different clarity, with the merits of low computational complexity and high effectiveness.
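
    A minimal sketch of the Tenengrad score itself (a common formulation: mean squared Sobel gradient magnitude over the ROI), with the iris ROI stood in by a random array; ROI extraction from the pupil light spots is out of scope here.

```python
# Tenengrad sharpness score: higher = sharper.
import numpy as np
from scipy import ndimage

def tenengrad(gray, threshold=0.0):
    gx = ndimage.sobel(gray, axis=1)           # horizontal Sobel gradient
    gy = ndimage.sobel(gray, axis=0)           # vertical Sobel gradient
    g2 = gx**2 + gy**2                         # squared gradient magnitude
    return g2[g2 > threshold**2].mean()

iris_roi = np.random.rand(120, 160)            # stand-in for the extracted iris ROI
print(tenengrad(iris_roi))
```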

  15. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    Science.gov (United States)

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function for the Wiener filter is employed to efficiently remove blur in the sparse component, while a Wiener filter with a Gaussian PSF is used to recover the average image of the low-rank component. The restored CT image sequence is then obtained by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with little noise, whereas the GoDec model was the best for heavily noisy CT images. PMID:24023764
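
    The sketch below illustrates the generic sparse-plus-low-rank idea with a naive alternating thresholding loop; it is not RPCA, LADMAP, or GoDec as used in the paper, and the data layout (one vectorized frame per column) and thresholds are assumptions.

```python
# Naive sparse + low-rank decomposition by alternating thresholding.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_lowrank(D, lam=0.05, tau=1.0, iters=50):
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * soft(s, tau)) @ Vt          # singular-value thresholding -> low rank
        S = soft(D - L, lam)                 # elementwise soft threshold -> sparse
    return L, S

# Assumed layout: each column is one vectorized frame of the CT sequence.
D = np.random.rand(4096, 20)
L, S = sparse_lowrank(D)
print(np.linalg.matrix_rank(L, tol=1e-6), np.count_nonzero(S))
```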

  16. Image-based occupancy sensor

    Science.gov (United States)

    Polese, Luigi Gentile; Brackney, Larry

    2015-05-19

    An image-based occupancy sensor includes a motion detection module that receives and processes an image signal to generate a motion detection signal; a people detection module that receives and processes the same image signal to generate a people detection signal; and a face detection module that receives and processes the image signal to generate a face detection signal. A sensor integration module receives the motion, people, and face detection signals and generates an occupancy signal from them, with the occupancy signal indicating vacancy or occupancy, and with an occupancy indication specifying that one or more people are detected within the monitored volume.

  17. Studying a free fall experiment using short sequences of images

    International Nuclear Information System (INIS)

    Vera, Francisco; Romanque, Cristian

    2008-01-01

    We discuss a new alternative for obtaining position and time coordinates from a video of a free fall experiment. In our approach, after converting the video to a short sequence of images, the images are analyzed using a web page application developed by the authors. The main advantages of the setup explained in this work are that it is simple to use, requires no software license fees, and can be scaled up for use by a large number of students in introductory physics courses. The steps involved in the full analysis of a falling object are as follows: we grab a short digital video of the experiment and convert it to a sequence of images; then, using a web page that includes all the necessary JavaScript, the student can easily click on the object of interest to obtain the (x, y, t) coordinates; finally, the student analyzes the motion using a spreadsheet.
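
    The final spreadsheet analysis amounts to a quadratic fit of the clicked coordinates; a minimal equivalent in Python, with illustrative numbers, is:

```python
# Recovering g from clicked (y, t) data via a quadratic fit.
import numpy as np

t = np.array([0.00, 0.04, 0.08, 0.12, 0.16, 0.20])        # s (from the frame rate)
y = np.array([0.000, 0.008, 0.031, 0.071, 0.125, 0.196])  # m (from the pixel scale)

a2, a1, a0 = np.polyfit(t, y, 2)    # fit y = a2*t^2 + a1*t + a0
g = 2 * a2                          # since y = (g/2) t^2 + v0 t + y0
print(f"g ≈ {g:.2f} m/s^2")
```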

  18. Color image definition evaluation method based on deep learning method

    Science.gov (United States)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blur levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. Firstly, the VGG16 network is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and the labeled images are then used to train a BP neural network, which finally performs the color image definition evaluation. The method is tested using images from the CSIQ database, blurred at different levels, giving 4,000 images after processing. The 4,000 images are divided into three categories, each representing a blur level. 300 out of every 400 high-dimensional feature samples are used for training in the VGG16 network and BP neural network, and the remaining 100 samples are used for testing. The experimental results show that the method can take full advantage of the learning and characterization capability of deep learning. In contrast to the major existing image clarity evaluation methods, which manually design and extract features, the method in this paper extracts image features automatically and achieves excellent image quality classification accuracy on the test data set: the accuracy rate is 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
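
    A hedged sketch of such a pipeline follows, using the "fc2" layer of Keras' VGG16 for 4,096-dimensional features and scikit-learn's MLP as a stand-in for the BP classifier; the input size, layer choice, and all hyperparameters are assumptions, not the paper's exact configuration.

```python
# VGG16 feature extraction + small BP-style classifier (illustrative sketch).
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.models import Model
from sklearn.neural_network import MLPClassifier

base = VGG16(weights="imagenet", include_top=True)
extractor = Model(inputs=base.input, outputs=base.get_layer("fc2").output)

def features(batch_rgb_uint8):
    x = preprocess_input(batch_rgb_uint8.astype("float32"))
    return extractor.predict(x, verbose=0)        # shape (N, 4096)

X = features(np.random.randint(0, 255, (12, 224, 224, 3)))  # stand-in images
y = np.repeat([0, 1, 2], 4)                       # three blur-level labels
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500).fit(X, y)
print(clf.predict(X[:3]))
```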

  19. Optical image hiding based on interference

    Science.gov (United States)

    Zhang, Yan; Wang, Bo

    2009-11-01

    Optical image processing has received a lot of attention recently due to its large capacity and fast speed. Many image encryption and hiding technologies have been proposed based on optical technology. In conventional image encryption methods, random phase masks are usually used as encryption keys to encode images into random white-noise distributions. However, this kind of method requires interference technology, such as holography, to record complex amplitude, and it is vulnerable to attack techniques. Image hiding methods employ phase retrieval algorithms to encode the images into two or more phase masks; the hiding process is carried out within a computer and the images are reconstructed optically, but the iterative algorithms need a lot of time to hide the image into the masks. All the methods mentioned above are based on optical diffraction through the phase masks. In this presentation, we propose a new optical image hiding method based on interference. Coherent light passes through two phase masks and is combined by a beam splitter. The two beams interfere with each other and the desired image appears at the pre-designed plane. The two phase distribution masks are designed analytically; therefore, the hiding speed can be significantly improved. Simulation results demonstrate the validity of the proposed method.
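
    A minimal numerical sketch of the analytic two-mask idea follows, ignoring the free-space propagation between the masks and the image plane that the real scheme would include; the normalization constant is an assumption to keep the amplitude representable.

```python
# Analytic phase-only masks whose interference reproduces a target amplitude.
import numpy as np

target = np.random.rand(64, 64)                             # desired intensity pattern
amp = np.sqrt(target)
field = amp * np.exp(2j * np.pi * np.random.rand(64, 64))   # attach a random phase
field /= np.abs(field).max() / 1.9                          # keep |field| < 2

half = np.arccos(np.clip(np.abs(field) / 2.0, -1, 1))
phi1 = np.angle(field) - half                     # mask 1 (phase only)
phi2 = np.angle(field) + half                     # mask 2 (phase only)

# e^{i(phi-h)} + e^{i(phi+h)} = 2 cos(h) e^{i phi} = |field| e^{i phi}
recon = np.exp(1j * phi1) + np.exp(1j * phi2)     # coherent superposition
print(np.allclose(np.abs(recon), np.abs(field)))  # True: amplitude recovered
```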

  20. Detailed analysis of latencies in image-based dynamic MLC tracking

    International Nuclear Information System (INIS)

    Poulsen, Per Rugaard; Cho, Byungchul; Sawant, Amit; Ruan, Dan; Keall, Paul J.

    2010-01-01

    Purpose: Previous measurements of the accuracy of image-based real-time dynamic multileaf collimator (DMLC) tracking show that the major contributor to errors is latency, i.e., the delay between target motion and MLC response. Therefore the purpose of this work was to develop a method for detailed analysis of latency contributions during image-based DMLC tracking. Methods: A prototype DMLC tracking system integrated with a linear accelerator was used for tracking a phantom with an embedded fiducial marker during treatment delivery. The phantom performed a sinusoidal motion. Real-time target localization was based on x-ray images acquired either with a portal imager or a kV imager mounted orthogonal to the treatment beam. Each image was stored in a file on the imaging workstation. A marker segmentation program opened the image file, determined the marker position in the image, and transferred it to the DMLC tracking program. This program estimated the three-dimensional target position by a single-imager method and adjusted the MLC aperture to the target position. Imaging intervals ΔT_image from 150 to 1000 ms were investigated for both kV and MV imaging. After the experiments, the recorded images were synchronized with MLC log files generated by the MLC controller and tracking log files generated by the tracking program. This synchronization allowed temporal analysis of the information flow for each individual image from acquisition to completed MLC adjustment. The synchronization also allowed investigation of the MLC adjustment dynamics on a considerably finer time scale than the 50 ms time resolution of the MLC log files. Results: For ΔT_image = 150 ms, the total time from image acquisition to completed MLC adjustment was 380±9 ms for MV and 420±12 ms for kV images. The main part of this time was from image acquisition to completed image file writing (272 ms for MV and 309 ms for kV). Image file opening (38 ms), marker segmentation (4 ms), MLC position

  1. Detailed analysis of latencies in image-based dynamic MLC tracking

    Energy Technology Data Exchange (ETDEWEB)

    Poulsen, Per Rugaard; Cho, Byungchul; Sawant, Amit; Ruan, Dan; Keall, Paul J. [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Oncology and Department of Medical Physics, Aarhus University Hospital, 8000 Aarhus (Denmark); Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Radiation Oncology, Asan Medical Center, Seoul 138-736 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States)

    2010-09-15

    Purpose: Previous measurements of the accuracy of image-based real-time dynamic multileaf collimator (DMLC) tracking show that the major contributor to errors is latency, i.e., the delay between target motion and MLC response. Therefore the purpose of this work was to develop a method for detailed analysis of latency contributions during image-based DMLC tracking. Methods: A prototype DMLC tracking system integrated with a linear accelerator was used for tracking a phantom with an embedded fiducial marker during treatment delivery. The phantom performed a sinusoidal motion. Real-time target localization was based on x-ray images acquired either with a portal imager or a kV imager mounted orthogonal to the treatment beam. Each image was stored in a file on the imaging workstation. A marker segmentation program opened the image file, determined the marker position in the image, and transferred it to the DMLC tracking program. This program estimated the three-dimensional target position by a single-imager method and adjusted the MLC aperture to the target position. Imaging intervals ΔT_image from 150 to 1000 ms were investigated for both kV and MV imaging. After the experiments, the recorded images were synchronized with MLC log files generated by the MLC controller and tracking log files generated by the tracking program. This synchronization allowed temporal analysis of the information flow for each individual image from acquisition to completed MLC adjustment. The synchronization also allowed investigation of the MLC adjustment dynamics on a considerably finer time scale than the 50 ms time resolution of the MLC log files. Results: For ΔT_image = 150 ms, the total time from image acquisition to completed MLC adjustment was 380±9 ms for MV and 420±12 ms for kV images. The main part of this time was from image acquisition to completed image file writing (272 ms for MV and 309 ms for kV). Image file opening (38 ms), marker segmentation (4 ms

  2. Hepatic trauma: CT findings and considerations based on our experience in emergency diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Luigia; Giovine, Sabrina; Guidi, Guido; Tortora, Giovanni; Cinque, Teresa; Romano, Stefania E-mail: stefromano@libero.it

    2004-04-01

    findings and peritoneal fluid evaluation may be used to make a first differentiation of the severity of lesions, but haemodynamic parameters may help the clinician to prefer a conservative treatment. In emergency-based hospitals, and also in our experience, positive benefits spring from diagnostic accuracy and the consequent correct therapeutic management.

  3. Images from the Mind: BCI image reconstruction based on Rapid Serial Visual Presentations of polygon primitives

    Directory of Open Access Journals (Sweden)

    Luís F Seoane

    2015-04-01

    We provide a proof of concept for an EEG-based reconstruction of a visual image that is on a user's mind. Our approach is based on the Rapid Serial Visual Presentation (RSVP) of polygon primitives and Brain-Computer Interface (BCI) technology. In an experimental setup, subjects were presented with bursts of polygons: some of them contributed to building a target image (because they matched the shape and/or color of the target) while some of them did not. The presentation of the contributing polygons triggered attention-related EEG patterns. These Event Related Potentials (ERPs) could be determined using BCI classification and could be matched to the stimuli that elicited them. These stimuli (i.e., the ERP-correlated polygons) were accumulated in the display until a satisfactory reconstruction of the target image was reached. As more polygons were accumulated, finer visual details were attained, resulting in more challenging classification tasks. In our experiments, we observe an average classification accuracy of around 75%. An in-depth investigation suggests that many of the misclassifications were not misinterpretations by the BCI of the users' intent, but rather were caused by ambiguous polygons that could contribute to reconstructing several different images. When we put our BCI image reconstruction in perspective with other RSVP BCI paradigms, there is large room for improvement in both speed and accuracy. These results invite us to be optimistic. They open a plethora of possibilities for exploring non-invasive BCIs for image reconstruction in both healthy and impaired subjects and, accordingly, suggest interesting recreational and clinical applications.

  4. Brain Atlas Fusion from High-Thickness Diagnostic Magnetic Resonance Images by Learning-Based Super-Resolution.

    Science.gov (United States)

    Zhang, Jinpeng; Zhang, Lichi; Xiang, Lei; Shao, Yeqin; Wu, Guorong; Zhou, Xiaodong; Shen, Dinggang; Wang, Qian

    2017-03-01

    It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not yet been addressed. In this paper, we intend to fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent in clinical routine. The main idea of our work is to extend conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through a random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments have shown that the proposed framework can effectively solve the problem of atlas fusion from low-quality brain MR images.

  5. Hierarchical layered and semantic-based image segmentation using ergodicity map

    Science.gov (United States)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

    Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) by utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space-filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogeneous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment, where the segmented layered semantic objects include basic-level objects (i.e. sky/land/water) and deeper-level objects on the sky/land/water surfaces. Experimental results demonstrate that the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects.

  6. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes the model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.

  7. Privacy-Aware Image Encryption Based on Logistic Map and Data Hiding

    Science.gov (United States)

    Sun, Jianglin; Liao, Xiaofeng; Chen, Xin; Guo, Shangwei

    The increasing need for image communication and storage has created a great necessity for securely transmitting and storing images over a network. Whereas traditional image encryption algorithms usually consider the security of the whole plain image, region of interest (ROI) encryption schemes, which are of great importance in practical applications, protect the privacy regions of plain images. Existing ROI encryption schemes usually adopt approximate techniques to detect the privacy region and measure the quality of encrypted images; however, their performance is usually inconsistent with the human visual system (HVS) and is sensitive to statistical attacks. In this paper, we propose a novel privacy-aware ROI image encryption (PRIE) scheme based on the logistic map and data hiding. The proposed scheme utilizes salient object detection to automatically, adaptively and accurately detect the privacy region of a given plain image. After the private pixels have been encrypted using chaotic cryptography, the significant bits are embedded into the non-privacy region of the plain image using data hiding. Extensive experiments are conducted to illustrate the consistency between our automatic ROI detection and the HVS. Our experimental results also demonstrate that the proposed scheme exhibits satisfactory security performance.
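
    A minimal sketch of the chaotic-encryption ingredient only: generate a logistic-map keystream and XOR it with (hypothetical) flattened ROI pixels. Salient-object ROI detection and the data-hiding step are out of scope here, and the key parameters are illustrative.

```python
# Logistic-map keystream for XOR encryption of ROI pixels.
import numpy as np

def logistic_keystream(n, x0=0.3141, r=3.99):
    ks = np.empty(n, dtype=np.uint8)
    x = x0                              # (x0, r) play the role of the secret key
    for i in range(n):
        x = r * x * (1.0 - x)           # chaotic iteration in (0, 1)
        ks[i] = int(x * 256) & 0xFF     # quantize to one keystream byte
    return ks

roi = np.random.randint(0, 256, 1000, dtype=np.uint8)   # stand-in ROI pixels
ks = logistic_keystream(roi.size)
cipher = roi ^ ks
assert np.array_equal(cipher ^ ks, roi)   # XOR is its own inverse
```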

  8. Deformation Measurements of Gabion Walls Using Image Based Modeling

    Directory of Open Access Journals (Sweden)

    Marek Fraštia

    2014-06-01

    Image-based modeling finds use in applications where it is necessary to reconstruct the 3D surface of the observed object with a high level of detail. Previous experiments show relatively high variability of the results depending on the camera type used, the processing software, or the evaluation process. The authors tested the SfM (Structure from Motion) method to determine the stability of gabion walls. The results of the photogrammetric measurements were compared to precise geodetic point measurements.

  9. A midas plugin to enable construction of reproducible web-based image processing pipelines.

    Science.gov (United States)

    Grauer, Michael; Reynolds, Patrick; Hoogstoel, Marion; Budin, Francois; Styner, Martin A; Oguz, Ipek

    2013-01-01

    Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based User Interface, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline.

  10. A Midas Plugin to Enable Construction of Reproducible Web-based Image Processing Pipelines

    Directory of Open Access Journals (Sweden)

    Michael eGrauer

    2013-12-01

    Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based UI, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline.

  11. Color Image Quality Assessment Based on CIEDE2000

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2012-01-01

    Combining the CIEDE2000 color difference formula with the printing industry standard for visual verification, we present an objective color image quality assessment method correlated with subjective visual perception. An objective score conformed to subjective perception (OSCSP) Q was proposed to directly reflect subjective visual perception. In addition, we present a general method to calibrate the correction factors of the color difference formula under real experimental conditions. Our experimental results show that the proposed DE2000-based metric is consistent with the human visual system in general application environments.
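
    For reference, CIEDE2000 differences are directly available in scikit-image; the sketch below computes a mean ΔE00 between a reference and a test image (the mapping from ΔE00 to the paper's OSCSP Q score is not reproduced here).

```python
# Mean CIEDE2000 color difference between two RGB images.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

ref = np.random.rand(32, 32, 3)        # stand-in reference image, float RGB in [0, 1]
test = np.clip(ref + 0.02 * np.random.randn(32, 32, 3), 0, 1)

dE = deltaE_ciede2000(rgb2lab(ref), rgb2lab(test))
print(dE.mean())                       # lower mean ΔE00 = better color fidelity
```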

  12. Half-Fan-Based Intensity-Weighted Region-of-Interest Imaging for Low-Dose Cone-Beam CT in Image-Guided Radiation Therapy.

    Science.gov (United States)

    Yoo, Boyeol; Son, Kihong; Pua, Rizza; Kim, Jinsung; Solodov, Alexander; Cho, Seungryong

    2016-10-01

    With the increased use of computed tomography (CT) in clinics, dose reduction is the most important feature people seek when considering new CT techniques or applications. We developed an intensity-weighted region-of-interest (IWROI) imaging method in an exact half-fan geometry to reduce the imaging radiation dose to patients in cone-beam CT (CBCT) for image-guided radiation therapy (IGRT). While dose reduction is highly desirable, preserving the high-quality images of the ROI is also important for target localization in IGRT. An intensity-weighting (IW) filter made of copper was mounted in place of a bowtie filter on the X-ray tube unit of an on-board imager (OBI) system such that the filter can substantially reduce radiation exposure to the outer ROI. In addition to mounting the IW filter, the lead-blade collimation of the OBI was adjusted to produce an exact half-fan scanning geometry for a further reduction of the radiation dose. The chord-based rebinned backprojection-filtration (BPF) algorithm in circular CBCT was implemented for image reconstruction, and a humanoid pelvis phantom was used for the IWROI imaging experiment. The IWROI image of the phantom was successfully reconstructed after beam-quality correction, and it was registered to the reference image within an acceptable level of tolerance. Dosimetric measurements revealed that the dose is reduced by approximately 61% in the inner ROI and by 73% in the outer ROI compared to the conventional bowtie filter-based half-fan scan. The IWROI method substantially reduces the imaging radiation dose and provides reconstructed images with an acceptable level of quality for patient setup and target localization. The proposed half-fan-based IWROI imaging technique can add a valuable option to CBCT in IGRT applications.

  13. Computer assisted treatments for image pattern data of laser plasma experiments

    International Nuclear Information System (INIS)

    Yaoita, Akira; Matsushima, Isao

    1987-01-01

    An image data processing system for laser-plasma experiments has been constructed. The image data are two-dimensional images taken by X-ray, UV, infrared and visible-light television cameras, as well as by streak cameras. They are digitized by frame memories, and the digitized image data are stored in disk memories with the aid of a microcomputer. The data are processed by a host computer and stored in files on the host computer and on magnetic tapes. In this paper, an overview of the image data processing system and some software for data handling on the host computer are reported. (author)

  14. Image degradation characteristics and restoration based on regularization for diffractive imaging

    Science.gov (United States)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture and lightweight space cameras. However, physics-based investigations of diffractive imaging degradation characteristics and the corresponding image restoration methods remain limited. In this paper, the image quality degradation model of the diffractive imaging system is first deduced mathematically based on diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, a solving approach for the equation with coexisting multiple norms and multiple regularization (prior) parameters is presented. Subsequently, a space-variant-PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a block decomposition into isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, and produces satisfactory visual quality. This provides a scientific basis for applications and has potential application prospects in future space uses of diffractive membrane imaging technology.
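
    As a simplified stand-in for the restoration step, the sketch below applies scikit-image's regularized Wiener deconvolution to one synthetically blurred block; the paper's multi-prior, space-variant-PSF model is considerably richer, and the PSF and balance parameter here are assumptions.

```python
# Regularized Wiener deblurring of a single (isoplanatic) image block.
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import wiener

psf = np.ones((5, 5)) / 25.0                       # assumed local blur kernel
sharp = np.random.rand(128, 128)                   # stand-in scene block
blurred = fftconvolve(sharp, psf, mode="same") + 0.01 * np.random.randn(128, 128)

restored = wiener(blurred, psf, balance=0.1)       # balance = regularization weight
print(restored.shape)
```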

  15. Speckle imaging algorithms for planetary imaging

    Energy Technology Data Exchange (ETDEWEB)

    Johansson, E. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.

  16. Photonics-Based Microwave Image-Reject Mixer

    Directory of Open Access Journals (Sweden)

    Dan Zhu

    2018-03-01

    Recent developments in photonics-based microwave image-reject mixers (IRMs) are reviewed, with an emphasis on the pre-filtering method, which applies an optical or electrical filter to remove the undesired image, and the phase cancellation method, which is realized by introducing an additional phase to the converted image and cancelling it through coherent combination without phase shift. Applications of photonics-based microwave IRMs in electronic warfare, radar systems and satellite payloads are described. The inherent challenges of implementing photonics-based microwave IRMs to meet the specific requirements of the radio frequency (RF) system are discussed. Developmental trends of the photonics-based microwave IRM are also discussed.

  17. SAR Image Classification Based on Its Texture Features

    Institute of Scientific and Technical Information of China (English)

    LI Pingxiang; FANG Shenghui

    2003-01-01

    SAR images not only have all-day, all-weather imaging capability, but also provide object information that differs from that of visible and infrared sensors. However, SAR images have some drawbacks, such as more speckle and fewer bands. The authors conducted texture statistics analysis experiments on SAR image features in order to improve the accuracy of SAR image interpretation. It is found that texture analysis is an effective method for improving the accuracy of SAR image interpretation.
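
    Texture statistics for this kind of experiment are commonly derived from grey-level co-occurrence matrices; a minimal scikit-image sketch follows (the paper's exact statistics are not specified, so the properties below are just common choices).

```python
# GLCM texture features for a SAR image patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

sar = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in SAR patch
glcm = graycomatrix(sar, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```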

  18. Effective Sequential Classifier Training for SVM-Based Multitemporal Remote Sensing Image Classification

    Science.gov (United States)

    Guo, Yiqing; Jia, Xiuping; Paull, David

    2018-06-01

    The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, a SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the required number of training samples for the classifier training of an incoming image. For each incoming image, a rough classifier is firstly predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies compared with two state-of-the-art model transfer algorithms. When training data are insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with those obtained without the assistance from previous images. These results demonstrate that the leverage of a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.

  19. Remote Sensing Image Classification Based on Stacked Denoising Autoencoder

    Directory of Open Access Journals (Sweden)

    Peng Liang

    2017-12-01

    Focused on the issue that conventional remote sensing image classification methods have run into an accuracy bottleneck, a new remote sensing image classification method inspired by deep learning is proposed, based on the Stacked Denoising Autoencoder. First, the deep network model is built by stacking layers of denoising autoencoders. Then, with noised input, the unsupervised greedy layer-wise training algorithm is used to train each layer in turn for more robust feature expression; characteristics are then obtained by supervised learning with a Back Propagation (BP) neural network, and the whole network is optimized by error back propagation. Finally, Gaofen-1 satellite (GF-1) remote sensing data are used for evaluation; the total accuracy and kappa accuracy reach 95.7% and 0.955, respectively, which are higher than those of the Support Vector Machine and the BP neural network. The experimental results show that the proposed method can effectively improve the accuracy of remote sensing image classification.
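
    A minimal single-layer denoising autoencoder in Keras, of the kind that would be stacked layer by layer and then fine-tuned with a BP network; the dimensions, noise level, and training settings are illustrative assumptions.

```python
# One denoising-autoencoder layer: reconstruct clean inputs from noisy ones.
import numpy as np
from tensorflow.keras import layers, models

dim_in, dim_hidden = 64, 32
x = np.random.rand(1000, dim_in).astype("float32")        # stand-in clean samples
x_noisy = x + 0.1 * np.random.randn(*x.shape).astype("float32")

inp = layers.Input(shape=(dim_in,))
code = layers.Dense(dim_hidden, activation="relu")(inp)
out = layers.Dense(dim_in, activation="sigmoid")(code)
dae = models.Model(inp, out)
dae.compile(optimizer="adam", loss="mse")
dae.fit(x_noisy, x, epochs=5, batch_size=64, verbose=0)   # denoising objective

encoder = models.Model(inp, code)       # robust features feed the next layer
print(encoder.predict(x[:2], verbose=0).shape)
```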

  20. MR Imaging with Gadolinium-DTPA in skull-base tumors

    International Nuclear Information System (INIS)

    Bartolozzi, C.; Olmastroni, M.; Dal Pozzo, G.; Petacchi, D.

    1988-01-01

    Twenty-five patients were investigated by MR imaging in order to evaluate the diagnostic value of Gadolinium (Gd)-DTPA in skull-base tumors. The patients were studied with standard acquisition techniques (T1-weighted, mixed and T2-weighted images) without contrast medium, and images were then obtained after intravenous injection of Gd-DTPA. The contrastographic results for the different types of acquisition were evaluated. Thanks to the extraordinary increase in contrast resolution it provides, Gd-DTPA allowed precise evaluation of the lesion and of its spatial definition in all cases. Our experience demonstrated that Gd-DTPA considerably increases the sensitivity of the technique in this anatomical region. Regarding the nature of the lesion, on the contrary, the signal did not vary significantly after the IV injection of Gd-DTPA across the various kinds of lesion. In addition to the important diagnostic advantages of Gd-DTPA, its excellent tolerability and the absence of side effects must be stressed.

  1. TIE POINTS EXTRACTION FOR SAR IMAGES BASED ON DIFFERENTIAL CONSTRAINTS

    Directory of Open Access Journals (Sweden)

    X. Xiong

    2018-04-01

    Automatically extracting tie points (TPs) from large-size synthetic aperture radar (SAR) images is still challenging, because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TP extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then the corresponding pyramid layers are matched from top to bottom. In this process, similarity is measured by the normalized cross-correlation (NCC) algorithm, calculated over a rectangular window whose long side is parallel to the azimuth direction. False matches are removed by the differential-constrained random sample consensus (DC-RANSAC) algorithm, which applies strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with a local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio, and accuracy of the proposed method.

  2. Understanding God images and God concepts : Towards a pastoral hermeneutics of the God attachment experience

    NARCIS (Netherlands)

    Counted, Agina Victor

    2015-01-01

    The author looks at the God image experience as an attachment relationship experience with God, arguing that the God image experience is borne originally out of a parent–child attachment contagion, in such a way that God is often represented in either secure or insecure attachment patterns.

  3. Visual wetness perception based on image color statistics.

    Science.gov (United States)

    Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya

    2017-05-01

    Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
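
    A hedged sketch of a transformation in the spirit of the wetness enhancing transformation: boost chromatic saturation and darken the tone curve. The paper's exact operator and parameters are not reproduced; sat_gain and gamma below are illustrative placeholders.

```python
# Saturation boost + darkening tone curve, loosely mimicking surface wetting.
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def enhance_wetness(rgb, sat_gain=1.5, gamma=1.6):
    hsv = rgb2hsv(rgb)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 1)  # more saturated colors
    hsv[..., 2] = hsv[..., 2] ** gamma                   # darker, glossier tones
    return hsv2rgb(hsv)

dry = np.random.rand(64, 64, 3)        # stand-in for a dry-scene image
wet_looking = enhance_wetness(dry)
print(wet_looking.min(), wet_looking.max())
```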

  4. General filtering method for electronic speckle pattern interferometry fringe images with various densities based on variational image decomposition.

    Science.gov (United States)

    Li, Biyuan; Tang, Chen; Gao, Guannan; Chen, Mingming; Tang, Shuwei; Lei, Zhenkun

    2017-06-01

    Filtering off speckle noise from a fringe image is one of the key tasks in electronic speckle pattern interferometry (ESPI). In general, ESPI fringe images can be divided into three categories: low-density fringe images, high-density fringe images, and variable-density fringe images. In this paper, we first present a general filtering method based on variational image decomposition that can filter speckle noise from ESPI fringe images of all these densities. In our method, a variable-density ESPI fringe image is decomposed into low-density fringes, high-density fringes, and noise; a low-density fringe image is decomposed into low-density fringes and noise; and a high-density fringe image is decomposed into high-density fringes and noise. We give suitable function spaces to describe low-density fringes, high-density fringes, and noise, respectively. We then construct several models and numerical algorithms for ESPI fringe images with various densities, and investigate the performance of these models via extensive experiments. Finally, we compare our proposed models with the windowed Fourier transform method and the coherence-enhancing diffusion partial differential equation filter, which may be the most effective filtering methods at present. Furthermore, we use the proposed method to filter a collection of experimentally obtained ESPI fringe images of poor quality. The experimental results demonstrate the performance of our proposed method.

  5. PIXEL PATTERN BASED STEGANOGRAPHY ON IMAGES

    Directory of Open Access Journals (Sweden)

    R. Rejani

    2015-02-01

    One of the drawbacks of most existing steganography methods is that they alter the bits used for storing color information; examples include LSB- or MSB-based steganography. There are also various existing methods, such as the Dynamic RGB Intensity Based Steganography Scheme and Secure RGB Image Steganography from Pixel Indicator to Triple Algorithm, that can be used to find out which steganography method was used and break it. Another drawback of the existing methods is that they add noise to the image, which makes the image look dull or grainy and makes a person suspicious about the existence of a hidden message within the image. To overcome these shortcomings we have come up with a pixel-pattern-based steganography, which involves hiding the message within an image by using the existing RGB values at the pixel level whenever possible, or with minimum changes. Along with the image, a key is also used to decrypt the message stored at the pixel level. For further protection, both the stored message and the key file are in encrypted format, which can have the same or different keys for decryption. Hence we call it an RGB pixel-pattern-based steganography.
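
    For contrast, here is a generic LSB embed/extract sketch of the bit-altering kind the paper argues against; the proposed pixel-pattern method itself works differently and is not reproduced here.

```python
# Generic LSB steganography baseline (not the paper's pixel-pattern method).
import numpy as np

def embed_lsb(img, message: bytes):
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = img.ravel().copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(img.shape)

def extract_lsb(img, n_bytes):
    bits = img.ravel()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
stego = embed_lsb(cover, b"secret")
print(extract_lsb(stego, 6))   # b'secret'
```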

  6. Experiments with a novel content-based image retrieval software: can we eliminate classification systems in adolescent idiopathic scoliosis?

    Science.gov (United States)

    Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma

    2014-02-01

    Study Design: Preliminary evaluation of a new tool. Objective: To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods: Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases of different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results: Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series; the other four retrieved several images that matched the query. No matching case was left out in the series. The postoperative images were then analyzed to check for surgical strategies, and broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion: The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be made from the postoperative images of the existing cases without adhering to any classification scheme.

  7. Patient positioning method based on binary image correlation between two edge images for proton-beam radiation therapy

    International Nuclear Information System (INIS)

    Sawada, Akira; Yoda, Kiyoshi; Numano, Masumi; Futami, Yasuyuki; Yamashita, Haruo; Murayama, Shigeyuki; Tsugami, Hironobu

    2005-01-01

    A new technique based on normalized binary image correlation between two edge images has been proposed for positioning proton-beam radiotherapy patients. A Canny edge detector was used to extract two edge images from a reference x-ray image and a test x-ray image of a patient before positioning. While translating and rotating the edged test image, the absolute value of the normalized binary image correlation between the two edge images is iteratively maximized. Each time before rotation, dilation is applied to the edged test image to avoid a steep reduction of the image correlation. To evaluate robustness of the proposed method, a simulation has been carried out using 240 simulated edged head front-view images extracted from a reference image by varying parameters of the Canny algorithm with a given range of rotation angles and translation amounts in x and y directions. It was shown that resulting registration errors have an accuracy of one pixel in x and y directions and zero degrees in rotation, even when the number of edge pixels significantly differs between the edged reference image and the edged simulation image. Subsequently, positioning experiments using several sets of head, lung, and hip data have been performed. We have observed that the differences of translation and rotation between manual positioning and the proposed method were within one pixel in translation and one degree in rotation. From the results of the validation study, it can be concluded that a significant reduction in workload for the physicians and technicians can be achieved with this method
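
    A hedged OpenCV sketch of the registration loop described above - Canny edges from both images, dilation of the test edges, and a small search over rotation and translation maximizing normalized correlation - using synthetic stand-in images; the angle range, dilation kernel, and central-crop template are assumptions, not the paper's exact settings.

```python
# Edge-based registration by maximizing normalized binary correlation.
import cv2
import numpy as np

# Synthetic stand-ins for the reference and pre-positioning x-ray images.
ref = (np.random.rand(256, 256) * 255).astype(np.uint8)
test = np.roll(ref, (4, -3), axis=(0, 1))             # shifted copy of the reference

ref_e = cv2.Canny(ref, 50, 150).astype(np.float32)
test_e = cv2.Canny(test, 50, 150)
kernel = np.ones((3, 3), np.uint8)

best = (-1.0, 0.0, None)
for angle in np.arange(-3.0, 3.5, 0.5):
    M = cv2.getRotationMatrix2D((128, 128), angle, 1.0)
    rot = cv2.warpAffine(test_e, M, (256, 256))
    rot = cv2.dilate(rot, kernel).astype(np.float32)  # dilation avoids a steep
    tmpl = rot[32:-32, 32:-32]                        # correlation drop; crop allows shifts
    res = cv2.matchTemplate(ref_e, tmpl, cv2.TM_CCORR_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    if score > best[0]:
        best = (score, angle, loc)

score, angle, loc = best
print(f"best correlation {score:.3f} at {angle:.1f} deg, template at {loc}")
```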

  8. Pc-Based Floating Point Imaging Workstation

    Science.gov (United States)

    Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin

    1989-07-01

    The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and to analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems, and as these systems become available, more and more applications are discovered and more demands are made. The challenge of meeting these demands forces the designer to attack the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unforeseen uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal of providing powerful and flexible floating-point processing capabilities, along with graphics functions, in an affordable package suitable for diverse environments and many applications.

  9. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental task in image processing, and its design depends on the user's application. This paper proposes an original and simple segmentation strategy based on the EM approach that resolves many informatics problems for hyperspectral images observed by airborne sensors. In a first step, the input color textured image is simplified into a color image without texture. The final segmentation is then achieved by a spatial color segmentation, using as feature vector the set of color values contained around the pixel to be classified. The spatial constraint takes into account the inherent spatial relationships of any image and its colors. This approach provides an effective PSNR for the segmented image. The segmented images show better performance when compared with the Watershed and Region Growing algorithms, and the method provides effective segmentation for spectral and medical images.
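
    A minimal sketch of the core EM clustering step, assuming scikit-learn, SciPy, scikit-image, and NumPy; the file name, neighborhood size, and number of components are illustrative, and this is not the authors' full texture-removal pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage import io
from sklearn.mixture import GaussianMixture

img = io.imread("scene.png")[..., :3].astype(np.float64) / 255.0
h, w, _ = img.shape

# Crude spatial constraint: augment each pixel's color with the mean color
# of its 5x5 neighborhood.
neigh = uniform_filter(img, size=(5, 5, 1))
features = np.hstack([img.reshape(-1, 3), neigh.reshape(-1, 3)])

# EM fits a Gaussian mixture to the feature vectors; each pixel is then
# labeled with its most likely mixture component.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
labels = gmm.fit_predict(features).reshape(h, w)
```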

  10. Image Fusion-Based Land Cover Change Detection Using Multi-Temporal High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Biao Wang

    2017-08-01

    Full Text Available Change detection is usually treated as a problem of explicitly detecting land cover transitions in satellite images obtained at different times, and helps with emergency response and government management. This study presents an unsupervised change detection method based on the image fusion of multi-temporal images. The main objective of this study is to improve the accuracy of unsupervised change detection from high-resolution multi-temporal images. Our method effectively reduces change detection errors, since spatial displacement and spectral differences between multi-temporal images are evaluated. To this end, a total of four cross-fused images are generated with multi-temporal images, and the iteratively reweighted multivariate alteration detection (IR-MAD method—a measure for the spectral distortion of change information—is applied to the fused images. In this experiment, the land cover change maps were extracted using multi-temporal IKONOS-2, WorldView-3, and GF-1 satellite images. The effectiveness of the proposed method compared with other unsupervised change detection methods is demonstrated through experimentation. The proposed method achieved an overall accuracy of 80.51% and 97.87% for cases 1 and 2, respectively. Moreover, the proposed method performed better when differentiating the water area from the vegetation area compared to the existing change detection methods. Although the water area beneath moderate and sparse vegetation canopy was captured, vegetation cover and paved regions of the water body were the main sources of omission error, and commission errors occurred primarily in pixels of mixed land use and along the water body edge. Nevertheless, the proposed method, in conjunction with high-resolution satellite imagery, offers a robust and flexible approach to land cover change mapping that requires no ancillary data for rapid implementation.

  11. Histogram-based normalization technique on human brain magnetic resonance images from different acquisitions.

    Science.gov (United States)

    Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng

    2015-07-28

    Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used to scan different subjects, or the same subject at a different time, which can result in large intensity variations. This intensity variation greatly undermines the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we propose a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was identified and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image also lies between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that a brain template built with normalization preprocessing is of higher quality than a template built without normalization.
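
    The HN step is essentially classical histogram matching. A minimal NumPy sketch of that general idea (quantile mapping) follows; the authors' IS rescaling and LIR/HIR handling are omitted.

```python
import numpy as np

def match_histogram(low_quality, reference):
    """Map the intensities of low_quality so its histogram matches reference."""
    src, ref = low_quality.ravel(), reference.ravel()
    src_vals, src_idx, src_counts = np.unique(
        src, return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(ref, return_counts=True)
    # Empirical CDFs of both images.
    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / ref.size
    # For each source quantile, look up the reference intensity at the
    # same quantile.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(low_quality.shape)
```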

  12. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    Science.gov (United States)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    With the increase of resolution, remote sensing images have the characteristics of increased information load, increased noise, and more complex geometric and textural information, which makes the extraction of building information more difficult. To solve this problem, this paper presents a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction is realized, from the building area down to the individual building. Experiments show that this method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadow, vegetation, and other pseudo-building information; compared with traditional pixel-level information extraction, it performs better in building extraction precision, accuracy, and completeness.

  13. Results from neutron imaging of ICF experiments at NIF

    Science.gov (United States)

    Merrill, F. E.; Danly, C. R.; Fittinghoff, D. N.; Grim, G. P.; Guler, N.; Volegov, P. L.; Wilde, C. H.

    2016-03-01

    In 2011 a neutron imaging diagnostic was commissioned at the National Ignition Facility (NIF). This new system has been used to collect neutron images that measure the size and shape of the burning DT plasma and the surrounding fuel assembly. The imaging technique uses a pinhole neutron aperture placed between the neutron source and a neutron detector. The detection system measures the two-dimensional distribution of neutrons passing through the pinhole. This diagnostic collects two images at two times. The long flight path of this diagnostic, 28 m, results in a chromatic separation of the neutrons, allowing the independently timed images to measure the source distribution for two neutron energies. Typically one image measures the distribution of the 14 MeV neutrons, and the other measures the distribution of the 6-12 MeV neutrons. The combination of these two images has provided data on the size and shape of the burning plasma within the compressed capsule, as well as a measure of the quantity and spatial distribution of the cold fuel surrounding this core. Images have been collected for the majority of the experiments performed as part of the ignition campaign. Results from these data have been used to estimate a burn-averaged fuel assembly as well as to provide performance metrics that gauge progress toward ignition. This data set and our interpretation are presented.

  14. Image quality of the cat eye measured during retinal ganglion cell experiments.

    Science.gov (United States)

    Bonds, A B; Enroth-Cugell, C; Pinto, L H

    1972-01-01

    1. The modulation transfer function (MTF) of the dioptrics of fifteen cat eyes was determined. The aerial image, formed by the eye of a standard object (a 0.5-1.0 degrees annulus), was photographed. The transmission of the film negative was measured with a scanning microdensitometer to yield the light distribution within the aerial image. Correcting for the double passage, this experimentally determined light distribution and the known object light distribution were used to obtain the MTF, applying Fourier methods. Each MTF was used to calculate the light distribution within the retinal image of stimuli of various geometry used in experiments on retinal ganglion cells in the same eye.2. When the eye was equipped with an artificial pupil of the same size as that used in the neurophysiological experiments (4.0-4.8 mm diam.) the MTF had fallen to 0.5 at 2.43 c/deg. When the pupil was removed the MTF had fallen to 0.5 at a much lower spatial frequency (1.0 c/deg). This shows that even when one uses an artificial pupil too large to provide optimal image quality there is a vast improvement over using no pupil.3. These image quality measurements were prompted by the need to know the actual stimulus image in experiments on the functional organization of the receptive field, a need exemplified in this paper by a few specific physiological results. The full neurophysiological results appear in the next two papers.
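
    The Fourier step in (1) can be sketched briefly. A minimal NumPy illustration, assuming a hypothetical 1-D line-spread profile `lsf` sampled from the measured light distribution, and using the common square-root correction for double passage through the same optics:

```python
import numpy as np

def mtf_from_lsf(lsf, dx_deg):
    """MTF as the normalized modulus of the Fourier transform of the LSF.

    lsf: sampled line-spread function; dx_deg: sample spacing in degrees.
    """
    spectrum = np.abs(np.fft.rfft(lsf))
    measured = spectrum / spectrum[0]   # normalize to 1 at zero frequency
    mtf = np.sqrt(measured)             # double-pass correction (same optics twice)
    freqs = np.fft.rfftfreq(len(lsf), d=dx_deg)  # cycles per degree
    return freqs, mtf
```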

  15. A Digital Image Denoising Algorithm Based on Gaussian Filtering and Bilateral Filtering

    Directory of Open Access Journals (Sweden)

    Piao Weiying

    2018-01-01

    Full Text Available Bilateral filtering has been widely applied in digital image processing, but in high-gradient regions of an image it may generate a staircase effect. Bilateral filtering can be regarded as one particular form of local mode filtering. Based on this analysis, a mixed image de-noising algorithm is proposed that combines Gaussian filtering and bilateral filtering. First, a Gaussian filter is applied to the noisy image to obtain a reference image; then both the reference image and the noisy image are taken as inputs to the range kernel of the bilateral filter. The reference image provides the image's low-frequency information, and the noisy image provides its high-frequency information. Comparative experiments between the proposed method and traditional bilateral filtering showed that the mixed de-noising algorithm effectively overcomes the staircase effect; the filtered image is smoother, its textural features are closer to those of the original image, and it achieves a higher PSNR value, while the computational cost of the two algorithms is basically the same.
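
    A minimal sketch of this two-step scheme, assuming opencv-contrib-python (for cv2.ximgproc) and an 8-bit grayscale input; the file name and filter parameters are illustrative, not the authors' settings.

```python
import cv2

noisy = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)

# Step 1: Gaussian pre-filtering gives the low-frequency reference image.
reference = cv2.GaussianBlur(noisy, (5, 5), sigmaX=1.5)

# Step 2: bilateral filtering whose range kernel is evaluated on the
# reference (a joint/cross bilateral filter), so edges are judged on the
# pre-smoothed image while detail comes from the noisy input.
denoised = cv2.ximgproc.jointBilateralFilter(reference, noisy, d=9,
                                             sigmaColor=25, sigmaSpace=7)
```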

  16. Image-Based Dietary Assessment Ability of Dietetics Students and Interns

    Directory of Open Access Journals (Sweden)

    Erica Howes

    2017-02-01

    Full Text Available Image-based dietary assessment (IBDA may improve the accuracy of dietary assessments, but no formalized training currently exists for skills relating to IBDA. This study investigated nutrition and dietetics students’ and interns’ IBDA abilities, the training and experience factors that may contribute to food identification and quantification accuracy, and the perceived challenges to performing IBDA. An online survey containing images of known foods and serving sizes representing common American foods was used to assess the ability to identify foods and serving sizes. Nutrition and dietetics students and interns from the United States and Australia (n = 114 accurately identified foods 79.5% of the time. Quantification accuracy was lower, with only 38% of estimates within ±10% of the actual weight. Foods of amorphous shape or higher energy density had the highest percent error. Students expressed general difficulty with perceiving serving sizes, making IBDA food quantification more difficult. Experience cooking at home from a recipe, frequent measuring of portions, and having a food preparation or cooking laboratory class were associated with enhanced accuracy in IBDA. Future training of dietetics students should incorporate more food-based serving size training to improve quantification accuracy while performing IBDA, while advances in IBDA technology are also needed.

  17. ImageSURF: An ImageJ Plugin for Batch Pixel-Based Image Segmentation Using Random Forests

    Directory of Open Access Journals (Sweden)

    Aidan O'Mara

    2017-11-01

    Full Text Available Image segmentation is a necessary step in automated quantitative imaging. ImageSURF is a macro-compatible ImageJ2/FIJI plugin for pixel-based image segmentation that considers a range of image derivatives to train pixel classifiers which are then applied to image sets of any size to produce segmentations without bias in a consistent, transparent and reproducible manner. The plugin is available from ImageJ update site http://sites.imagej.net/ImageSURF/ and source code from https://github.com/omaraa/ImageSURF. Funding statement: This research was supported by an Australian Government Research Training Program Scholarship.
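
    The same pixel-classification idea can be sketched outside ImageJ. A minimal Python illustration with scikit-learn and scikit-image; the file names, feature set, and forest size are illustrative, and this is not the ImageSURF implementation itself.

```python
import numpy as np
from skimage import io, filters
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Stack the raw image with a few derivatives as per-pixel features."""
    feats = [img,
             filters.gaussian(img, sigma=1),
             filters.gaussian(img, sigma=4),
             filters.sobel(img)]
    return np.stack([f.ravel() for f in feats], axis=1)

train_img = io.imread("train.png", as_gray=True)
labels = io.imread("labels.png")            # per-pixel class annotations
X, y = pixel_features(train_img), labels.ravel()

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Apply the trained pixel classifier to a new image of any size.
test_img = io.imread("test.png", as_gray=True)
segmentation = clf.predict(pixel_features(test_img)).reshape(test_img.shape)
```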

  18. Slice image pretreatment for cone-beam computed tomography based on adaptive filter

    International Nuclear Information System (INIS)

    Huang Kuidong; Zhang Dinghua; Jin Yanfang

    2009-01-01

    According to the noise properties and the serial slice image characteristics of a Cone-Beam Computed Tomography (CBCT) system, a slice image pretreatment for CBCT based on adaptive filtering is proposed. A judging criterion for the noise type is established first, and all pixels are classified into two classes: an adaptive center-weighted modified trimmed mean (ACWMTM) filter is used for pixels corrupted by Gaussian noise, and an adaptive median (AM) filter is used for pixels corrupted by impulse noise. In the ACWMTM filtering algorithm, the Gaussian noise standard deviation estimated in the current slice image with an offset window is replaced by the standard deviation estimated in the adjacent slice image with the corresponding window, which improves the filtering accuracy for serial images. Pretreatment experiments on CBCT slice images of a hollow turbine blade wax model show that the method performs well both in eliminating noise and in preserving details. (authors)
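
    The AM branch follows the classical adaptive median scheme. A minimal NumPy sketch is given below; the ACWMTM branch and the Gaussian/impulse classifier are omitted, and the window limit is illustrative.

```python
import numpy as np

def adaptive_median(img, max_win=7):
    """Grow the window until the local median is not an impulse, then
    replace the center pixel only if it is itself an impulse."""
    out = img.astype(np.float64).copy()
    h, w = img.shape
    pad = max_win // 2
    padded = np.pad(img.astype(np.float64), pad, mode="reflect")
    for y in range(h):
        for x in range(w):
            for win in range(3, max_win + 1, 2):
                r = win // 2
                block = padded[y + pad - r:y + pad + r + 1,
                               x + pad - r:x + pad + r + 1]
                lo, med, hi = block.min(), np.median(block), block.max()
                if lo < med < hi:                    # median is not an impulse
                    if not (lo < img[y, x] < hi):    # center pixel is an impulse
                        out[y, x] = med
                    break                            # otherwise keep the pixel
    return out
```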

  19. An overview of medical image data base

    International Nuclear Information System (INIS)

    Nishihara, Eitaro

    1992-01-01

    Recently, computerization in medical institutions has advanced, and the introduction of hospital information systems has been almost completed in large hospitals with more than 500 beds. However, hospital information systems manage text information and do not cover the enormous quantity of images. With the progress of diagnostic imaging equipment, the digitization of medical images has advanced, but image management in hospitals does not yet exploit the merits of digital images. To solve these problems, the picture archiving and communication system (PACS) was proposed about ten years ago; it organizes medical images into a database and enables on-line access to images from various places in a hospital, and studies to realize it have continued. The features of medical image data, the present status of utilizing medical image data, the outline of PACS, the image database for PACS, the problems in realizing such a database and the technical trends, and the state of actual construction of PACS are reported. (K.I.)

  20. Liposomes - experiment of magnetic resonance imaging application

    International Nuclear Information System (INIS)

    Mathieu, S.

    1987-01-01

    Most pharmaceutical research effort with liposomes has involved the investigation of their use as drug carriers to particular target organs. Recently there has been growing interest in liposomes not only as carriers of drugs but as a tool for the introduction of various substances into the human body. In this study, liposome delivery of nitroxyl radicals as an NMR contrast agent for improved tissue imaging is tested in rats.

  1. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    Science.gov (United States)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principles of quantum states, and a whole quantum image is divided into a series of sub-images. These sub-images are stored in a complete binary tree array constructed previously and then randomly subjected to one of the operations of the quantum random-phase gate, the quantum rotation gate, and the Hadamard transform. The encrypted image is obtained by superimposing the resulting sub-images using the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of the random-phase gate, the rotation angle, the binary sequence, and the orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  2. Characterization of porcine eyes based on autofluorescence lifetime imaging

    Science.gov (United States)

    Batista, Ana; Breunig, Hans Georg; Uchugonova, Aisada; Morgado, António Miguel; König, Karsten

    2015-03-01

    Multiphoton microscopy is a non-invasive imaging technique with ideal characteristics for biological applications. In this study, we characterize three major structures of the porcine eye, the cornea, crystalline lens, and retina, using two-photon excitation fluorescence lifetime imaging microscopy (2PE-FLIM). Samples were imaged using a laser-scanning microscope with a broadband sub-15 femtosecond (fs) near-infrared laser. Signal detection was performed using a 16-channel photomultiplier tube (PMT) detector (PML-16PMT), making spectral analysis of the fluorescence lifetime data possible. To ensure a correct spectral analysis of the autofluorescence lifetime data, the spectra of the individual endogenous fluorophores were acquired with the 16-channel PMT and with a spectrometer. All experiments were performed within 12 h of porcine eye enucleation. We were able to image the cornea, crystalline lens, and retina at multiple depths. Discrimination of each structure based on its autofluorescence intensity and lifetimes was possible, as was discrimination between different layers of the same structure. To the best of our knowledge, this is the first time that 2PE-FLIM has been used for porcine lens imaging and layer discrimination. With this study we further demonstrate the feasibility of 2PE-FLIM for imaging and differentiating three of the main components of the eye and its potential as an ophthalmologic technique.

  3. A fast color image enhancement algorithm based on Max Intensity Channel

    Science.gov (United States)

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-01

    In this paper, we extend image enhancement techniques based on the retinex theory, which imitates human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering, which has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named the Max Intensity Channel (MIC) is introduced, assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates an illumination component that is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial- and transform-domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better for images with high illumination variations than other methods. Further comparisons on images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown the high performance of the new method, with better color restoration and preservation of image details.
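
    A minimal sketch of the illumination estimate described above, assuming OpenCV and NumPy; the structuring-element size, filter parameters, and file name are illustrative, and a plain bilateral filter stands in for the paper's fast cross-bilateral filter.

```python
import cv2
import numpy as np

img = cv2.imread("scene.png").astype(np.float64) / 255.0

# Max Intensity Channel: per-pixel maximum over the color channels.
mic = img.max(axis=2).astype(np.float32)

# Gray-scale closing followed by edge-preserving smoothing yields the
# illumination estimate.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
closed = cv2.morphologyEx(mic, cv2.MORPH_CLOSE, kernel)
illumination = cv2.bilateralFilter(closed, d=9, sigmaColor=0.1, sigmaSpace=15)

# Reflectance per channel from the illumination-reflection model I = L * R.
reflectance = img / (illumination[..., None] + 1e-6)
```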

  4. Implicit prosody mining based on the human eye image capture technology

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2013-08-01

    …disabled-assisted speech interaction. Experiments show that implicit prosody mining based on human eye image capture technology gives the synthesized speech more flexible expression.

  5. A High Precision Laser-Based Autofocus Method Using Biased Image Plane for Microscopy

    Directory of Open Access Journals (Sweden)

    Chao-Chen Gu

    2018-01-01

    Full Text Available This study designs and implements a high-precision, robust laser-based autofocusing system in which a biased image plane is applied. In accordance with the designed optics, a cluster-based circle-fitting algorithm is proposed to calculate the radius of the detected spot from the reflected laser beam, an essential factor in obtaining the defocus value. Experiments conducted on the prototype device achieved high precision and robustness. Furthermore, the low demand on assembly accuracy makes the proposed method a low-cost and practical solution for autofocusing.
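
    The spot-radius estimate reduces to fitting a circle to detected edge points. A minimal NumPy sketch of an algebraic least-squares (Kåsa) fit follows; the cluster-based point selection of the paper is omitted.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares fit of a circle to 2-D points (x, y)."""
    # Model: x^2 + y^2 = 2*cx*x + 2*cy*y + c, with c = r^2 - cx^2 - cy^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, radius
```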

  6. No-reference image quality assessment based on statistics of convolution feature maps

    Science.gov (United States)

    Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo

    2018-04-01

    We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that Natural Scene Statistics (NSS) features computed on convolutional feature maps are significantly sensitive to the degree of distortion in an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer effectively describe the distortion type and the degree of distortion an image has suffered. Finally, a Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict the subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.

  7. Bone surface enhancement in ultrasound images using a new Doppler-based acquisition/processing method

    Science.gov (United States)

    Yang, Xu; Tang, Songyuan; Tasciotti, Ennio; Righetti, Raffaella

    2018-01-01

    Ultrasound (US) imaging has long been considered as a potential aid in orthopedic surgeries. US technologies are safe, portable and do not use radiations. This would make them a desirable tool for real-time assessment of fractures and to monitor fracture healing. However, image quality of US imaging methods in bone applications is limited by speckle, attenuation, shadow, multiple reflections and other imaging artifacts. While bone surfaces typically appear in US images as somewhat ‘brighter’ than soft tissue, they are often not easily distinguishable from the surrounding tissue. Therefore, US imaging methods aimed at segmenting bone surfaces need enhancement in image contrast prior to segmentation to improve the quality of the detected bone surface. In this paper, we present a novel acquisition/processing technique for bone surface enhancement in US images. Inspired by elastography and Doppler imaging methods, this technique takes advantage of the difference between the mechanical and acoustic properties of bones and those of soft tissues to make the bone surface more easily distinguishable in US images. The objective of this technique is to facilitate US-based bone segmentation methods and improve the accuracy of their outcomes. The newly proposed technique is tested both in in vitro and in vivo experiments. The results of these preliminary experiments suggest that the use of the proposed technique has the potential to significantly enhance the detectability of bone surfaces in noisy ultrasound images.

  9. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    Science.gov (United States)

    A. AL-Salhi, Yahya E.; Lu, Songfeng

    2016-08-01

    Quantum steganography can solve some problems that are considered inefficient in image information concealing, and research on quantum image information concealing has been widely pursued in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity, and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and image uniform block clustering, a concrete least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Finally, the Con-Steg algorithm is used to conceal the clustered image blocks. Since information concealing located in the Fourier domain of an image can achieve security of the image information, we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on quantum Fourier transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
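
    For reference, the classical LSB embedding that the LSQu-block scheme generalizes can be sketched in a few lines of NumPy; this illustrates the classical idea only, not the quantum algorithm.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Hide a 0/1 uint8 bit array in the LSBs of a uint8 grayscale cover."""
    flat = cover.ravel().copy()
    # Clear each carrier pixel's least significant bit, then set it to the
    # corresponding secret bit.
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Recover the first n_bits hidden bits from the stego image."""
    return stego.ravel()[:n_bits] & 1
```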

  10. Development of a compact scintillator-based high-resolution Compton camera for molecular imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kishimoto, A., E-mail: daphne3h-aya@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Ohsuka, S. [Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu, Shizuoka (Japan)

    2017-02-11

    The Compton camera, which images a gamma-ray distribution by utilizing the kinematics of Compton scattering, is a promising detector capable of imaging across a wide range of energies. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd{sub 3}Al{sub 2}Ga{sub 3}O{sub 12} (Ce:GAGG) scintillator and a multi-pixel photon counter (MPPC). Basic performance tests confirmed that at 662 keV the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D image reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that for a {sup 137}Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of different simultaneous energy sources ({sup 22}Na [511 keV], {sup 137}Cs [662 keV], and {sup 54}Mn [834 keV]).
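
    The kinematics underlying such a camera fix the opening angle of each event's Compton cone from the two energy deposits. A minimal NumPy sketch, assuming full-energy events with energy e1 deposited in the scatterer and e2 in the absorber (both in keV):

```python
import numpy as np

ME_C2 = 511.0  # electron rest energy in keV

def compton_angle(e1, e2):
    """Scattering angle (degrees) from the two energy deposits.

    Uses 1/E' - 1/E = (1 - cos theta) / (m_e c^2) with incident energy
    E = e1 + e2 and scattered-photon energy E' = e2.
    """
    cos_theta = 1.0 - ME_C2 * (1.0 / e2 - 1.0 / (e1 + e2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```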

  11. Object recognition based on Google's reverse image search and image similarity

    Science.gov (United States)

    Horváth, András.

    2015-12-01

    Image classification is one of the most challenging tasks in computer vision, and a general multiclass classifier could solve many different tasks in image processing. Classification is usually done by shallow learning for predefined objects, which is a difficult task and very different from human vision, which is based on continuous learning of object classes; a human requires years to learn a large taxonomy of objects, which are neither disjoint nor independent. In this paper I present a system based on Google's image similarity algorithm and the Google image database, which can classify a large set of different objects in a human-like manner, identifying related classes and taxonomies.

  12. Automatic Depth Extraction from 2D Images Using a Cluster-Based Learning Framework.

    Science.gov (United States)

    Herrera, Jose L; Del-Blanco, Carlos R; Garcia, Narciso

    2018-07-01

    There has been a significant increase in the availability of 3D players and displays in recent years. Nonetheless, the amount of 3D content has not experienced an increase of the same magnitude. To alleviate this problem, many algorithms for converting images and videos from 2D to 3D have been proposed. Here, we present an automatic learning-based 2D-3D image conversion approach, based on the key hypothesis that color images with similar structure likely present a similar depth structure. The presented algorithm estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images. The algorithm clusters this database according to structural similarity, and then creates a representative of each color-depth image cluster to be used as a prior depth map. The selection of the appropriate prior depth map for a given color query image is accomplished by comparing the structural similarity in the color domain between the query image and the database. The comparison is based on a K-Nearest Neighbor framework that uses a learning procedure to build an adaptive combination of image feature descriptors. The best correspondences determine the cluster, and in turn the associated prior depth map. Finally, this prior estimate is enhanced through segmentation-guided filtering to obtain the final depth map estimate. This approach has been tested on two publicly available databases and compared with several state-of-the-art algorithms to demonstrate its efficiency.

  13. Light-leaking region segmentation of FOG fiber based on quality evaluation of infrared image

    Science.gov (United States)

    Liu, Haoting; Wang, Wei; Gao, Feng; Shan, Lianjie; Ma, Yuzhou; Ge, Wenqian

    2014-07-01

    To improve the assembly reliability of the Fiber Optic Gyroscope (FOG), a light-leakage detection system and method are developed. First, an agile movement control platform is designed to implement pose control of the FOG optical path component in 6 Degrees of Freedom (DOF). Second, an infrared camera is employed to capture working-state images of the corresponding fibers in the optical path component after manual assembly of the FOG, so the entire light transmission process of key sections in the light path can be recorded. Third, an image-quality-evaluation-based region segmentation method is developed for the light leakage images. In contrast to traditional methods, image quality metrics, including region contrast, edge blur, and image noise level, are first considered to distinguish the characteristics of the infrared image; then robust segmentation algorithms, including graph cut and flood fill, are developed for region segmentation according to the specific image quality. Finally, after segmentation of the light leakage region, typical light-leaking types, such as point defects, wedge defects, and surface defects, can be identified. By using the image-quality-based method, the applicability of the proposed system is improved dramatically. Extensive experimental results have proved the validity and effectiveness of this method.

  14. Confocal pore size measurement based on super-resolution image restoration.

    Science.gov (United States)

    Liu, Dali; Wang, Yun; Qiu, Lirong; Mao, Xinyue; Zhao, Weiqian

    2014-09-01

    A confocal pore size measurement based on super-resolution image restoration is proposed to obtain a fast and accurate measurement for submicrometer pore size of nuclear track-etched membranes (NTEMs). This method facilitates the online inspection of the pore size evolution during etching. Combining confocal microscopy with super-resolution image restoration significantly improves the lateral resolution of the NTEM image, yields a reasonable circle edge-setting criterion of 0.2408, and achieves precise pore edge detection. Theoretical analysis shows that the minimum measuring diameter can reach 0.19 μm, and the root mean square of the residuals is only 1.4 nm. Edge response simulation and experiment reveal that the edge response of the proposed method is better than 80 nm. The NTEM pore size measurement results obtained by the proposed method agree well with that obtained by scanning electron microscopy.

  15. GOTCHA experience report: three-dimensional SAR imaging with complete circular apertures

    Science.gov (United States)

    Ertin, Emre; Austin, Christian D.; Sharma, Samir; Moses, Randolph L.; Potter, Lee C.

    2007-04-01

    We study circular synthetic aperture radar (CSAR) systems collecting radar backscatter measurements over a complete circular aperture of 360 degrees. This study is motivated by the GOTCHA CSAR data collection experiment conducted by the Air Force Research Laboratory (AFRL). Circular SAR provides wide-angle information about the anisotropic reflectivity of the scattering centers in the scene, and also provides three-dimensional information about the location of the scattering centers due to a non-planar collection geometry. Three-dimensional imaging results with single-pass circular SAR data reveal that the 3D resolution of the system is poor due to the limited persistence of the reflectors in the scene. We present results on polarimetric processing of CSAR data and illustrate reasoning about three-dimensional shape from multi-view layover using prior information about target scattering mechanisms. Next, we discuss processing of multipass CSAR data and present volumetric imaging results with IFSAR and three-dimensional backprojection techniques on the GOTCHA data set. We observe that the volumetric imaging with GOTCHA data is degraded by aliasing and high sidelobes due to nonlinear flight paths and sparse, unequal sampling in elevation. We conclude with a model-based technique that resolves target features and enhances the volumetric imagery by extrapolating the phase history data using the estimated model.

  16. Impurity analysis of NSTX using a transmission grating-based imaging spectrometer

    International Nuclear Information System (INIS)

    Kumar, Deepak; Finkenthal, Michael; Stutman, Dan; Clayton, Daniel J; Tritz, Kevin; Bell, Ronald E; Diallo, Ahmed; LeBlanc, Ben P; Podesta, Mario

    2012-01-01

    A transmission grating-based imaging spectrometer has recently been installed and operated on the National Spherical Torus Experiment (NSTX) at PPPL. This paper describes the spectral and spatial characteristics of impurity emission under different operating conditions of the experiment: neutral-beam-heated, ohmically heated, and RF-heated plasmas. A typical spectrum from each scenario is analyzed to provide quantitative estimates of impurity fractions in the plasma. (paper)

  17. Determination of Surface Tension of Surfactant Solutions through Capillary Rise Measurements: An Image-Processing Undergraduate Laboratory Experiment

    Science.gov (United States)

    Huck-Iriart, Cristia´n; De-Candia, Ariel; Rodriguez, Javier; Rinaldi, Carlos

    2016-01-01

    In this work, we describe an image-processing procedure for the measurement of the surface tension of the air-liquid interface using isothermal capillary action. The experiment, designed for an undergraduate course, is based on the analysis of a series of solutions with diverse surfactant concentrations at different ionic strengths. The objective of…
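
    The underlying relation is Jurin's law, gamma = rho * g * h * r / (2 * cos(theta)). A minimal Python sketch follows; the zero-contact-angle default is a common simplification and the example numbers are illustrative.

```python
import numpy as np

def surface_tension(height_m, radius_m, density_kg_m3=1000.0,
                    contact_angle_deg=0.0, g=9.81):
    """Jurin's law: gamma = rho * g * h * r / (2 * cos(theta))."""
    theta = np.radians(contact_angle_deg)
    return density_kg_m3 * g * height_m * radius_m / (2.0 * np.cos(theta))

# Example: a 29.7 mm rise in a capillary of 0.5 mm radius gives
# surface_tension(0.0297, 0.0005) ~= 0.073 N/m, close to pure water.
```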

  18. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    Science.gov (United States)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

    We propose a hybrid method for stereo disparity estimation that combines block- and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of the image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected-components analysis, and then determining the boundaries' disparities using a sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundaries' disparities. We consider an application of our method to depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus the user's non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross-correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
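
    The SAD matching step can be sketched per boundary pixel. A minimal NumPy illustration; the window size and disparity range are illustrative, and the K-Means segmentation and map-reconstruction stages are omitted.

```python
import numpy as np

def sad_disparity(left, right, y, x, win=5, max_disp=64):
    """Disparity of pixel (y, x) in the left image by SAD block matching.

    Assumes rectified grayscale images and a pixel far enough from the
    image border for the window and search range to fit.
    """
    r = win // 2
    patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    best_d, best_cost = 0, np.inf
    for d in range(min(max_disp, x - r) + 1):
        cand = right[y - r:y + r + 1,
                     x - d - r:x - d + r + 1].astype(np.float64)
        cost = np.abs(patch - cand).sum()   # sum of absolute differences
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```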

  19. Image Re-Ranking Based on Topic Diversity.

    Science.gov (United States)

    Qian, Xueming; Lu, Dan; Wang, Yaxiong; Zhu, Li; Tang, Yuan Yan; Wang, Meng

    2017-08-01

    Social media sharing websites allow users to annotate images with free tags, which significantly contributes to the development of web image retrieval. Tag-based image search is an important method for finding images shared by users in social networks. However, making the top-ranked results both relevant and diverse is challenging. In this paper, we propose a topic-diverse ranking approach for tag-based image retrieval with the consideration of promoting topic coverage performance. First, we construct a tag graph based on the similarity between each pair of tags. Then, a community detection method is conducted to mine the topic community of each tag. After that, inter-community and intra-community ranking are introduced to obtain the final retrieval results. In the inter-community ranking process, an adaptive random walk model is employed to rank the communities based on the multi-information of each topic community. In addition, we build an inverted index structure for images to accelerate the searching process. Experimental results on the Flickr and NUS-WIDE data sets show the effectiveness of the proposed approach.

  20. Artistic image analysis using graph-based learning approaches.

    Science.gov (United States)

    Carneiro, Gustavo

    2013-08-01

    We introduce a new methodology for the problem of artistic image analysis, which among other tasks involves the automatic identification of visual classes present in an art work. In this paper, we advocate the idea that artistic image analysis must explore a graph that captures the network of artistic influences by computing similarities in terms of appearance and manual annotation. One of the novelties of our methodology is the proposed formulation, which is a principled way of combining these two similarities in a single graph. Using this graph, we show that an efficient random walk algorithm based on an inverted label propagation formulation produces more accurate annotation and retrieval results compared with the following baseline algorithms: bag of visual words, label propagation, matrix completion, and structural learning. We also show that the proposed approach leads to more efficient inference and training procedures. The experiments are run on a database containing 988 artistic images (with 49 visual classification problems divided into a multiclass problem with 27 classes and 48 binary problems), where we report the inference and training running times and quantitative comparisons with respect to several retrieval and annotation performance measures.

  1. Understanding images using knowledge based approach

    International Nuclear Information System (INIS)

    Tascini, G.

    1985-01-01

    This paper presents an approach to image understanding focusing on low-level image processing, and proposes a rule-based approach as part of a larger knowledge-based system. The general system has a hierarchical structure that comprises several knowledge-based layers. The main idea is to confine the domain-independent knowledge to the lower level and to reserve the higher levels for the domain-dependent knowledge, that is, for the interpretation.

  2. Image Based Rendering and Virtual Reality

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    The presentation gives an overview of Image-Based Rendering approaches and their use in Virtual Reality, including Virtual Photography and Cinematography, and Mobile Robot Navigation.

  3. Improved Ordinary Measure and Image Entropy Theory based intelligent Copy Detection Method

    Directory of Open Access Journals (Sweden)

    Dengpan Ye

    2011-10-01

    Full Text Available Nowadays, more and more multimedia websites appear in social networks. This brings some security problems, such as privacy, piracy, and disclosure of sensitive contents. Aiming at copyright protection, the copy detection technology of multimedia contents has become a hot topic. In our previous work, a new computer-based copyright control system used to detect the media was proposed. Based on this system, this paper proposes an improved media feature matching measure and an entropy-based copy detection method. The Levenshtein distance is used to enhance the matching degree in feature matching for copy detection. For entropy-based copy detection, we fuse two features of the entropy matrix that we extract. First, we extract the entropy matrix of the image and normalize it. Then, we fuse the eigenvalue feature and the transfer-matrix feature of the entropy matrix. The fused features are used for image copy detection. The experiments show that, compared with using these two kinds of features singly, the feature-fusion matching method has apparent robustness and effectiveness. The fused feature achieves a high detection rate for copy images that have undergone attacks such as noise, compression, zooming, and rotation. Compared with the referenced methods, the proposed method is more intelligent and achieves good performance.
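
    The Levenshtein distance used to score feature-sequence matches is the classic dynamic-programming edit distance. A minimal plain-Python sketch (the feature sequences themselves are whatever the system extracts; here any two sequences work):

```python
def levenshtein(a, b):
    """Edit distance between two sequences via dynamic programming."""
    prev = list(range(len(b) + 1))   # distances from the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Example: levenshtein("kitten", "sitting") == 3.
```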

  4. Exploring an optimal wavelet-based filter for cryo-ET imaging.

    Science.gov (United States)

    Huang, Xinrui; Li, Sha; Gao, Song

    2018-02-07

    Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines, producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages, low dose and low image contrast, which result in high-resolution information being obscured by noise and image quality being degraded, causing errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulated data and design a filter using the optimal selected wavelet parameters (three-level decomposition, level 1 zeroed out, subband-dependent thresholds, soft thresholding, and a spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing on real cryo-ET experimental data, higher-quality images and more accurate measures of a biological structure can be obtained with the modified wavelet shrinkage filter compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.
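
    The shrinkage recipe can be sketched with PyWavelets. In this illustration a decimated biorthogonal transform stands in for the paper's spline-based dyadic transform, and the level-dependent universal threshold is an assumption, not the authors' exact rule.

```python
import numpy as np
import pywt

def wavelet_shrink(img, sigma):
    """Three-level shrinkage: zero the finest level, soft-threshold the rest."""
    coeffs = pywt.wavedec2(img, "bior4.4", level=3)
    # coeffs = [approx, level-3 details, level-2 details, level-1 details]
    out = [coeffs[0]]
    for i, bands in enumerate(coeffs[1:]):
        if i == len(coeffs) - 2:            # last entry = finest (level-1) subbands
            out.append(tuple(np.zeros_like(b) for b in bands))
        else:
            # Level-dependent soft threshold (universal-threshold style).
            thr = sigma * np.sqrt(2 * np.log(bands[0].size))
            out.append(tuple(pywt.threshold(b, thr, mode="soft")
                             for b in bands))
    return pywt.waverec2(out, "bior4.4")
```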

  5. Real-time single image dehazing based on dark channel prior theory and guided filtering

    Science.gov (United States)

    Zhang, Zan

    2017-10-01

    Images and videos taken outdoors on foggy days are seriously degraded. In order to restore degraded images taken on foggy days and to overcome the problem of remnant fog at edges in traditional dark channel prior algorithms, we propose a new dehazing method. We first find the fog area in the dark channel map to obtain the estimated value of the transmittance, using a quadtree. Then we regard the gray-scale image after guided filtering as the atmospheric light map and remove haze based on it. Box filtering and image down-sampling techniques are also used to improve processing speed. Finally, the atmospheric light scattering model is used to restore the image. Extensive experiments show that the algorithm is effective, efficient, and has a wide range of applications.
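
    For orientation, the baseline dark-channel-prior pipeline that the paper refines can be sketched as follows, assuming OpenCV and NumPy; the patch size, atmospheric-light estimate, and transmission floor are illustrative, and the paper's quadtree search and guided-filter refinement are omitted.

```python
import cv2
import numpy as np

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    I = img.astype(np.float64) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(I.min(axis=2), kernel)          # dark channel
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = I[idx].mean(axis=0)
    # Transmission estimate and scene recovery from I = J*t + A*(1 - t).
    t = 1.0 - omega * cv2.erode((I / A).min(axis=2), kernel)
    J = (I - A) / np.maximum(t, t0)[..., None] + A
    return np.clip(J, 0, 1)
```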

  6. A new method for information retrieval in two-dimensional grating-based X-ray phase contrast imaging

    International Nuclear Information System (INIS)

    Wang Zhi-Li; Gao Kun; Chen Jian; Ge Xin; Tian Yang-Chao; Wu Zi-Yu; Zhu Pei-Ping

    2012-01-01

    Grating-based X-ray phase contrast imaging has been demonstrated to be an extremely powerful phase-sensitive imaging technique. By using two-dimensional (2D) gratings, the observable contrast is extended to two refraction directions. Recently, we developed a novel reverse-projection (RP) method, which is capable of retrieving the object information efficiently with one-dimensional (1D) grating-based phase contrast imaging. In this contribution, we present its extension to 2D grating-based X-ray phase contrast imaging, named the two-dimensional reverse-projection (2D-RP) method, for information retrieval. The method takes into account the nonlinear contributions of the two refraction directions and allows the retrieval of the absorption image and the horizontal and vertical refraction images. The obtained information can be used for the reconstruction of the three-dimensional phase gradient field, and for improved phase map retrieval and reconstruction. Numerical experiments are carried out, and the results confirm the validity of the 2D-RP method.

  7. An Integrative Object-Based Image Analysis Workflow for Uav Images

    Science.gov (United States)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of the geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs a fast feature extraction and matching by combining the local difference binary descriptor and the local sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts by the definition of an initial partition obtained by an over-segmentation algorithm, i.e., the simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to some criterions, such as the uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.

  9. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Full Text Available Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object. Three composite-technique-based color image compression schemes are implemented to achieve high compression with no loss in the original image, better performance, and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W), and the composite multi-wavelet technique (M). For the high-energy subband of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T), and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.

  10. Hyperspectral Imaging of Forest Resources: The Malaysian Experience

    Science.gov (United States)

    Mohd Hasmadi, I.; Kamaruzaman, J.

    2008-08-01

    Remote sensing using satellite and aircraft images is a well-established technology. The application of hyperspectral imaging, however, is relatively new to Malaysian forestry. Through a wide range of wavelengths, hyperspectral data can precisely capture narrow spectral bands. Airborne sensors typically offer greatly enhanced spatial and spectral resolution over their satellite counterparts and allow close control of the experimental design during image acquisition. The first study using hyperspectral imaging for forest inventory in Malaysia was conducted by Professor Hj. Kamaruzaman from the Faculty of Forestry, Universiti Putra Malaysia, in 2002, using the AISA sensor manufactured by Specim Ltd, Finland. The main objective has been to develop methods that are directly suited to practical tropical forestry applications at a high level of accuracy. Forest inventory and tree classification, including the development of single spectral signatures, have been the most important interests in current practice. Experience from these studies shows that retrieval of timber volume and tree discrimination using this system performs well, and in some respects better than other remote sensing methods. This article reviews the research and application of airborne hyperspectral remote sensing for forest survey and assessment in Malaysia.

  11. Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images.

    Science.gov (United States)

    Lee, Yeongjun; Choi, Jinwoo; Ko, Nak Yong; Choi, Hyun-Taek

    2017-08-24

    This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of the sonar image such as an unstable acoustic source, heavy speckle noise, low resolution, single-channel imagery, and so on. However, using consecutive sonar images, if the status (i.e., the existence and identity, or name, of an object) is continuously evaluated by a stochastic method, the result of the recognition method can be used to calculate the uncertainty, and it is more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probabilistic methods, particle filtering and Bayesian feature estimation, are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark with increased detectability by an imaging sonar, exploiting the characteristics of acoustic waves such as instability and reflection depending on the roughness of the reflector surface. The proposed method is verified in basin experiments, and the results are presented.
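
    A minimal sketch of the "continuity evaluation" idea: recursively updating the probability that a landmark exists as detection results arrive from consecutive sonar frames. The hit and false-alarm rates below are illustrative assumptions, not values from the paper.

    ```python
    # Recursive Bayesian update of P(landmark present) from binary detections.
    def update_existence(prior, detected,
                         p_det_given_present=0.8,   # assumed detector hit rate
                         p_det_given_absent=0.1):   # assumed false-alarm rate
        """One Bayesian update of P(landmark present) given one frame's result."""
        if detected:
            num = p_det_given_present * prior
            den = num + p_det_given_absent * (1.0 - prior)
        else:
            num = (1.0 - p_det_given_present) * prior
            den = num + (1.0 - p_det_given_absent) * (1.0 - prior)
        return num / den

    p = 0.5                                   # uninformative prior
    for hit in [True, True, False, True]:     # detections over consecutive frames
        p = update_existence(p, hit)
    print(f"P(landmark present) = {p:.3f}")
    ```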

  12. Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lijuan Duan

    2017-01-01

    Full Text Available Hashing has been widely deployed to perform the Approximate Nearest Neighbor (ANN) search for large-scale image retrieval, to solve the problem of storage and retrieval efficiency. Recently, deep hashing methods have been proposed that perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the compact hash code learned by a single deep hashing network may not provide a full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code with stronger expression and discrimination ability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes generated by the two subnetworks to represent images in a unified way. Experiments on two real datasets show that our method outperforms state-of-the-art image retrieval applications.
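
    A minimal sketch of the fusion idea under a simplifying assumption: the two subnetworks' binary codes are simply concatenated and retrieval ranks database images by Hamming distance. The random arrays stand in for codes a trained network would emit.

    ```python
    # Fusing binary codes from two hashing subnetworks and ranking by Hamming distance.
    import numpy as np

    rng = np.random.default_rng(0)
    codes_a = rng.integers(0, 2, (1000, 48))   # stand-ins for subnetwork A codes
    codes_b = rng.integers(0, 2, (1000, 48))   # stand-ins for subnetwork B codes
    db = np.hstack([codes_a, codes_b])         # fused 96-bit codes

    query = np.hstack([codes_a[0], codes_b[0]])
    hamming = (db != query).sum(axis=1)        # distance to every database code
    top10 = np.argsort(hamming)[:10]           # indices of the 10 nearest images
    ```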

  13. Magnetic resonance spectroscopy imaging in the diagnosis of prostate cancer: initial experience

    International Nuclear Information System (INIS)

    Melo, Homero Jose de Farias e; Abdala, Nitamar; Goldman, Suzan Menasce; Szejnfeld, Jacob

    2009-01-01

    Objective: to report an experiment on the introduction of a protocol utilizing a commercially available three-dimensional 1H magnetic resonance spectroscopy imaging (3D 1H MRSI) method in patients with suspected or biopsy-proven prostate cancer. Materials and methods: forty-one patients in the age range between 51 and 80 years (mean, 67 years) were prospectively evaluated. The patients were divided into two groups: patients with one or more biopsies negative for cancer and high prostate-specific antigen levels (group A), and patients with cancer confirmed by biopsy (group B). The determination of the target area (group A) or the known cancer extent (group B) was based on magnetic resonance imaging and MRSI studies. Results: the specificity of MRSI in the diagnosis of prostate cancer was lower than that reported in the literature (about 47%). For tumor staging, on the other hand, it matched the specificity reported in the literature. Conclusion: the introduction and standardization of 3D 1H MRSI has allowed a presumptive diagnosis of prostate cancer to be obtained through a combined analysis of magnetic resonance imaging and metabolic data from 3D 1H MRSI. (author)

  14. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi; Pambudi, I. R.; Woran, M.; Naa, C. F.; Srigutomo, W. [Department of Physics, FMIPA, Institut Teknologi Bandung, Jl. Ganesha No. 10, Bandung 40132, Indonesia; supri@fi.itb.ac.id (Indonesia)

    2015-04-16

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have advanced rapidly with the increasing performance of hardware and microprocessors. Many fields of science and technology have used these methods, especially medicine and instrumentation. New stereovision techniques that produce a 3-dimensional image or movie are very interesting, but they have found few applications in control systems. A stereo image carries pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The results show that the robot moves automatically based on stereovision captures.
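
    A minimal sketch of extracting the pixel disparity the entry relies on, using OpenCV block matching on a rectified stereo pair; file names are placeholders, and the simple steering rule at the end is an illustrative assumption rather than the paper's control law.

    ```python
    # Disparity from a rectified stereo pair (images must be 8-bit grayscale).
    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point -> px

    # Larger disparity = closer obstacle: a toy steering rule could compare
    # mean disparity in the left and right halves of the field of view.
    h, w = disp.shape
    steer_left = disp[:, w // 2:].mean() > disp[:, : w // 2].mean()
    ```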

  15. Radiotherapy supporting system based on the image database using IS&C magneto-optical disk

    Science.gov (United States)

    Ando, Yutaka; Tsukamoto, Nobuhiro; Kunieda, Etsuo; Kubo, Atsushi

    1994-05-01

    Since radiation oncologists make treatment plans based on prior experience, information about previous cases is helpful in planning radiation treatment. We have developed a supporting system for radiation therapy. A case-based reasoning method was implemented in order to search the treatments and images of past cases. This system evaluates similarities between the current case and all stored cases (the case base). The portal images of similar cases can be retrieved as reference images, as well as treatment records which show examples of the radiation treatment. With this system, radiotherapists can easily make suitable radiation therapy plans. The system is useful for preventing inaccurate planning due to preconceptions and/or lack of knowledge. Images are stored on magneto-optical disks, and the demographic data are recorded on the hard disk of the personal computer. Images can be displayed quickly on the radiotherapist's demand. The radiation oncologist can refer to past cases recorded in the case base and decide on the radiation treatment of the current case. The file and data format of the magneto-optical disk is the IS&C format. This format provides interchangeability and reproducibility of medical information, including images and other demographic data.

  16. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis of textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders, to reduce the number of features extracted from large images. First, we randomly collect image patches from an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between the weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and feed these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in cross-validation experiments over eight emotional categories and performs better than conventional methods. Feature selection reduces the computational cost of global feature extraction by about 50% while improving classification performance.
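
    A minimal sketch of one plausible form of the correlation-based selection step: greedily keep a hidden unit only if its weight vector is not too correlated with any unit already kept. The greedy strategy and threshold are assumptions for illustration, not the paper's exact procedure.

    ```python
    # Correlation-based selection over learned autoencoder weight vectors.
    import numpy as np

    def select_uncorrelated(weights, threshold=0.9):
        """weights: (n_hidden, n_inputs) array; returns indices of kept units."""
        kept = []
        for i, w in enumerate(weights):
            ok = True
            for j in kept:
                c = np.corrcoef(w, weights[j])[0, 1]
                if abs(c) > threshold:   # too similar to an already-kept filter
                    ok = False
                    break
            if ok:
                kept.append(i)
        return kept

    w = np.random.randn(200, 64)               # stand-in for learned weights
    print(len(select_uncorrelated(w)), "of 200 units kept")
    ```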

  17. Investigating the effects of brand experience, trust, perception image and satisfaction on creating customer loyalty: A case study of laptop market

    Directory of Open Access Journals (Sweden)

    Samaneh Khalili

    2013-09-01

    Full Text Available This paper presents an empirical investigation of the effects of brand experience, trust, perceived image and brand satisfaction on creating customer loyalty in the Iranian laptop market. The proposed study uses a Likert-scale questionnaire distributed among university students in the province of Qazvin, Iran. Structural equation modeling for the proposed study was carried out using LISREL software. Cronbach alphas for experience, satisfaction, loyalty, trust and brand perception are 0.71, 0.83, 0.76, 0.69 and 0.86, respectively, which validates the overall questionnaire. The results of testing the various hypotheses indicate that brand experience has a positive and meaningful relationship with brand satisfaction, trust, perceived image and loyalty. In addition, satisfaction, perceived image and trust have a positive and meaningful relationship with brand loyalty.

  18. Content Based Retrieval System for Magnetic Resonance Images

    International Nuclear Information System (INIS)

    Trojachanets, Katarina

    2010-01-01

    The amount of medical images is continuously increasing as a consequence of the constant growth and development of techniques for digital image acquisition. Manual annotation and description of each image is an impractical, expensive and time-consuming approach. Moreover, it is an imprecise and insufficient way of describing all the information stored in medical images. This induces the need to develop efficient image storage, annotation and retrieval systems. Content based image retrieval (CBIR) emerges as an efficient approach for digital image retrieval from large databases. It includes two phases. In the first phase, the visual content of the image is analyzed and the feature extraction process is performed. An appropriate descriptor, namely a feature vector, is then associated with each image. These descriptors are used in the second phase, i.e. the retrieval process. With the aim of improving the efficiency and precision of content based image retrieval systems, feature extraction and automatic image annotation techniques are the subject of continuous research and development. Including classification techniques in the retrieval process enables automatic image annotation in an existing CBIR system. It contributes to more efficient and easier image organization in the system. Applying content based retrieval in the field of magnetic resonance is a big challenge. Magnetic resonance imaging is an image-based diagnostic technique which is widely used in medical environments. Accordingly, the number of magnetic resonance images is growing enormously. Magnetic resonance images provide plentiful medical information, high resolution and a specific nature. Thus, the capability of CBIR systems for image retrieval from large databases is of great importance for efficient analysis of this kind of images. The aim of this thesis is to propose a content based retrieval system architecture for magnetic resonance images. To provide the system efficiency, feature

  19. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, graphics processing unit multithreading or an increased spacing of control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.

  20. Medical Image Tamper Detection Based on Passive Image Authentication.

    Science.gov (United States)

    Ulutas, Guzin; Ustubioglu, Arda; Ustubioglu, Beste; Nabiyev, Vasif V.; Ulutas, Mustafa

    2017-12-01

    Telemedicine has gained popularity in recent years. Medical images can be transferred over the Internet to enable telediagnosis between medical staff and to make the patient's history accessible to medical staff from anywhere. Therefore, integrity protection of the medical image is a serious concern due to the broadcast nature of the Internet. Some watermarking techniques have been proposed to control the integrity of medical images. However, they require embedding extra information (a watermark) into the image before transmission, which decreases the visual quality of the medical image and can cause false diagnosis. The proposed method uses a passive image authentication mechanism to detect tampered regions on medical images. Structural texture information is obtained from the medical image using rotation-invariant local binary patterns (LBPROT) to make the keypoint extraction techniques more successful. Keypoints on the texture image are obtained with the scale invariant feature transform (SIFT). The method detects tampered regions by matching the keypoints. It improves keypoint-based passive image authentication mechanisms (which fail to detect tampering when a smooth region is used to cover an object) by applying LBPROT before keypoint extraction, because smooth regions also carry texture information. Experimental results show that the method detects tampered regions on medical images even if the forged image has undergone attacks (Gaussian blurring/additive white Gaussian noise) or the forged regions are scaled/rotated before pasting.
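
    A minimal sketch of the front half of this pipeline: a rotation-invariant LBP texture image is computed first, and SIFT keypoints are then extracted from it. The file name is a placeholder, and it assumes scikit-image plus OpenCV with SIFT available (opencv-contrib-python or OpenCV >= 4.4); the actual matching and tamper-localization logic is not reproduced here.

    ```python
    # Rotation-invariant LBP texture image, then SIFT keypoints on that image.
    import cv2
    import numpy as np
    from skimage.feature import local_binary_pattern

    gray = cv2.imread("medical_image.png", cv2.IMREAD_GRAYSCALE)
    lbp = local_binary_pattern(gray, P=8, R=1, method="ror")   # rotation invariant
    lbp8 = cv2.normalize(lbp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(lbp8, None)
    # Matching descriptors within the same image (excluding self-matches) would
    # then expose duplicated regions typical of copy-move tampering.
    ```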

  1. Strong reflector-based beamforming in ultrasound medical imaging.

    Science.gov (United States)

    Szasz, Teodora; Basarab, Adrian; Kouamé, Denis

    2016-03-01

    This paper investigates the use of sparse priors in creating original two-dimensional beamforming methods for ultrasound imaging. The proposed approaches detect the strong reflectors in the scanned medium based on the well-known Bayesian Information Criterion used in statistical modeling. Moreover, they allow a parametric selection of the level of speckle in the final beamformed image. These methods are applied to simulated data and to recorded experimental data. Their performance is evaluated using the standard image quality metrics: contrast ratio (CR), contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR). A comparison is made with the classical delay-and-sum and minimum variance beamforming methods to confirm the ability of the proposed methods to precisely detect the number and position of the strong reflectors in a sparse medium and to accurately reduce the speckle and greatly enhance the contrast in a non-sparse medium. We confirm that our methods improve the contrast of the final image for both simulated and experimental data. In all experiments, the proposed approaches tend to preserve the speckle, which can be of major interest in clinical examinations, as it can contain useful information. In sparse media we achieve a large improvement in contrast compared with the classical methods.

  2. GPR Imaging for Deeply Buried Objects: A Comparative Study Based on FDTD Models and Field Experiments

    Science.gov (United States)

    Tilley, Roger; Dowla, Farid; Nekoogar, Faranak; Sadjadpour, Hamid

    2012-01-01

    Conventional use of Ground Penetrating Radar (GPR) is hampered by variations in background environmental conditions, such as the water content of soil, resulting in poor repeatability of results over long periods of time when the radar pulse characteristics are kept the same. Target object types might include voids, tunnels, unexploded ordnance, etc. The long-term objective of this work is to develop methods that extend the use of GPR under various environmental and soil conditions, provided an optimal set of radar parameters (such as frequency, bandwidth, and sensor configuration) is adaptively employed based on the ground conditions. Towards that objective, developing Finite Difference Time Domain (FDTD) GPR models, verified by experimental results, would allow us to develop analytical and experimental techniques for controlling radar parameters to obtain consistent GPR images under changing ground conditions. Reported here is an attempt at developing 2D and 3D FDTD models of buried targets, verified by two different radar systems capable of operating over different soil conditions. The experimental radar data came from a custom-designed high-frequency (200 MHz) multi-static sensor platform capable of producing 3-D images, and a longer-wavelength (25 MHz) COTS radar (Pulse EKKO 100) capable of producing 2-D images. Our results indicate that different types of radar can produce consistent images.

  3. Molecular–Genetic Imaging: A Nuclear Medicine–Based Perspective

    Directory of Open Access Journals (Sweden)

    Ronald G. Blasberg

    2002-07-01

    Full Text Available Molecular imaging is a relatively new discipline, which developed over the past decade, initially driven by in situ reporter imaging technology. Noninvasive in vivo molecular–genetic imaging developed more recently and is based on nuclear (positron emission tomography [PET], gamma camera, autoradiography) imaging as well as magnetic resonance (MR) and in vivo optical imaging. Molecular–genetic imaging has its roots in both molecular biology and cell biology, as well as in new imaging technologies. The focus of this presentation will be nuclear-based molecular–genetic imaging, but it will comment on the value and utility of combining different imaging modalities. Nuclear-based molecular imaging can be viewed in terms of three different imaging strategies: (1) "indirect" reporter gene imaging; (2) "direct" imaging of endogenous molecules; or (3) "surrogate" or "bio-marker" imaging. Examples of each imaging strategy will be presented and discussed. The rapid growth of in vivo molecular imaging is due to the established base of in vivo imaging technologies, the established programs in molecular and cell biology, and the convergence of these disciplines. The development of versatile and sensitive assays that do not require tissue samples will be of considerable value for monitoring molecular–genetic and cellular processes in animal models of human disease, as well as for studies in human subjects in the future. Noninvasive imaging of molecular–genetic and cellular processes will complement established ex vivo molecular–biological assays that require tissue sampling, and will provide a spatial as well as a temporal dimension to our understanding of various diseases and disease processes.

  4. Edge enhancement and noise suppression for infrared image based on feature analysis

    Science.gov (United States)

    Jiang, Meng

    2018-06-01

    Infrared images often suffer from background noise, blurred edges, few details and low signal-to-noise ratios. To improve infrared image quality, it is essential to suppress noise and enhance edges simultaneously. To achieve this, we propose a novel algorithm based on feature analysis in the shearlet domain. Firstly, we introduce the theory and advantages of the shearlet transform, a multi-scale geometric analysis (MGA) method. Secondly, after analyzing the shortcomings of the traditional thresholding technique for noise suppression, we propose a novel feature extraction that distinguishes image structures from noise well and use it to improve the traditional thresholding technique. Thirdly, by computing the correlations between neighboring shearlet coefficients, feature attribute maps identifying weak details and strong edges are built to improve generalized unsharp masking (GUM). Finally, experimental results with infrared images captured in different scenes demonstrate that the proposed algorithm suppresses noise efficiently and enhances image edges adaptively.

  5. A debugging method of the Quadrotor UAV based on infrared thermal imaging

    Science.gov (United States)

    Cui, Guangjie; Hao, Qian; Yang, Jianguo; Chen, Lizhi; Hu, Hongkang; Zhang, Lijun

    2018-01-01

    High-performance UAVs have been popular and in great demand in recent years. This paper introduces a new method for debugging quadrotor UAVs. Based on infrared thermal technology and heat transfer theory, a UAV is debugged above a hot-wire grid composed of 14 heated nichrome wires. The air flow propelled by the rotating rotors influences the temperature distribution of the hot-wire grid. An infrared thermal imager below observes this distribution and acquires thermal images of the hot-wire grid. With the assistance of a mathematical model and several experiments, the paper discusses the relationship between the thermal images and the rotor speeds. By putting debugged UAVs through the test, reference data and thermal images can be acquired. The paper demonstrates that, by comparison with these reference thermal images, a UAV being debugged in the same test can yield critical data directly or after interpolation. The results are shown in the paper and the advantages are discussed.

  6. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    Science.gov (United States)

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer, which achieves fusion by preserving the intensity of the infrared image and transferring the gradients of the corresponding visible image to the result. Gradient transfer suffers from low dynamic range and detail loss because it ignores the intensity of the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared image and the visible one. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem into a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that the new algorithm achieves better fusion results, with high computational efficiency, in both qualitative and quantitative tests than gradient transfer and most state-of-the-art methods.
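
    A minimal sketch of gradient-transfer-style fusion. Instead of the paper's augmented-Lagrangian/ADMM solver, this uses plain gradient descent on a Charbonnier-smoothed surrogate of the l1-l1-TV objective ||x - ir||_1 + lam * ||grad(x) - grad(vis)||_1, so the step size, lam and eps are illustrative assumptions; inputs are assumed normalized to [0, 1].

    ```python
    # Smoothed-surrogate gradient descent for gradient-transfer fusion.
    import numpy as np

    def grad(u):
        gx = np.zeros_like(u); gy = np.zeros_like(u)
        gx[:, :-1] = u[:, 1:] - u[:, :-1]      # forward differences, Neumann edges
        gy[:-1, :] = u[1:, :] - u[:-1, :]
        return gx, gy

    def div(px, py):                            # negative adjoint of grad
        dx = np.zeros_like(px); dy = np.zeros_like(py)
        dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
        dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
        return dx + dy

    def fuse(ir, vis, lam=1.0, eps=1e-2, lr=0.01, steps=500):
        ir, vis = ir.astype(float), vis.astype(float)   # expected in [0, 1]
        x = ir.copy()
        vx, vy = grad(vis)
        for _ in range(steps):
            fid = (x - ir) / np.sqrt((x - ir) ** 2 + eps)      # d/dx of smoothed |x-ir|
            gx, gy = grad(x)
            nx, ny = gx - vx, gy - vy
            mag = np.sqrt(nx ** 2 + ny ** 2 + eps)
            x -= lr * (fid - lam * div(nx / mag, ny / mag))    # descend the surrogate
        return np.clip(x, 0.0, 1.0)
    ```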

  7. A Feature Subtraction Method for Image Based Kinship Verification under Uncontrolled Environments

    DEFF Research Database (Denmark)

    Duan, Xiaodong; Tan, Zheng-Hua

    2015-01-01

    The most fundamental problem of local feature based kinship verification methods is that a local feature can capture the variations of environmental conditions and the differences between two persons having a kin relation, which can significantly decrease the performance. To address this problem, the proposed method minimizes the feature distance between face image pairs with kinship and maximizes the distance between non-kinship pairs. Based on the subtracted feature, the verification is realized through a simple Gaussian-based distance comparison method. Experiments on two public databases show that the feature subtraction method...

  8. Text extraction method for historical Tibetan document images based on block projections

    Science.gov (United States)

    Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian

    2017-11-01

    Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is treated as a text area detection and location problem. The images are divided equally into blocks, and the blocks are filtered using the categories of connected components and corner point density. By analyzing the projections of the filtered blocks, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
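
    A minimal sketch of the projection idea in its simplest form: binarize the page and use horizontal projections to locate text rows. The file name, ink-density threshold and gap size are illustrative assumptions; the paper's block filtering by connected components and corner density is omitted.

    ```python
    # Locating text bands with horizontal projections on a binarized page.
    import numpy as np
    from skimage import io
    from skimage.filters import threshold_otsu

    page = io.imread("tibetan_page.png", as_gray=True)
    ink = page < threshold_otsu(page)            # True where there is ink

    profile = ink.sum(axis=1)                    # horizontal projection
    is_text_row = profile > 0.02 * ink.shape[1]  # rows with enough ink pixels

    # Group consecutive text rows into candidate text areas (start, end).
    rows = np.flatnonzero(is_text_row)
    breaks = np.flatnonzero(np.diff(rows) > 5)   # gaps of >5 rows split areas
    areas = np.split(rows, breaks + 1)
    bands = [(a[0], a[-1]) for a in areas if len(a) > 3]
    print(bands)
    ```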

  9. Shape-based grey-level image interpolation

    International Nuclear Information System (INIS)

    Keh-Shih Chuang; Chun-Yuan Chen; Ching-Kai Yeh

    1999-01-01

    The three-dimensional (3D) object data obtained from a CT scanner usually have unequal sampling frequencies in the x-, y- and z-directions. Generally, the 3D data are first interpolated between slices to obtain isotropic resolution, reconstructed, then operated on using object extraction and display algorithms. The traditional grey-level interpolation introduces a layer of intermediate substance and is not suitable for objects that are very different from the opposite background. The shape-based interpolation method transfers a pixel location to a parameter related to the object shape and the interpolation is performed on that parameter. This process is able to achieve a better interpolation but its application is limited to binary images only. In this paper, we present an improved shape-based interpolation method for grey-level images. The new method uses a polygon to approximate the object shape and performs the interpolation using polygon vertices as references. The binary images representing the shape of the object were first generated via image segmentation on the source images. The target object binary image was then created using regular shape-based interpolation. The polygon enclosing the object for each slice can be generated from the shape of that slice. We determined the relative location in the source slices of each pixel inside the target polygon using the vertices of a polygon as the reference. The target slice grey-level was interpolated from the corresponding source image pixels. The image quality of this interpolation method is better and the mean squared difference is smaller than with traditional grey-level interpolation. (author)
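
    A minimal sketch of the classical binary shape-based interpolation this entry builds on: convert each segmented slice to a signed distance map, interpolate the maps, and threshold at zero. This is the standard distance-transform formulation, not the polygon-vertex variant proposed above; the toy masks are placeholders.

    ```python
    # Shape-based interpolation between two binary slices via signed distances.
    import numpy as np
    from scipy.ndimage import distance_transform_edt as edt

    def signed_distance(mask):
        """Positive inside the object, negative outside."""
        return edt(mask) - edt(~mask)

    def interpolate_slice(mask_a, mask_b, t=0.5):
        """Intermediate binary slice at fraction t between two segmented slices."""
        d = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
        return d > 0.0

    a = np.zeros((64, 64), bool); a[20:40, 20:40] = True   # toy object, slice i
    b = np.zeros((64, 64), bool); b[24:44, 26:46] = True   # toy object, slice i+1
    mid = interpolate_slice(a, b)
    ```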

  10. VLSI-based video event triggering for image data compression

    Science.gov (United States)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may add up to a total mission video data storage need exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High-capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long-term (DC-like) or short-term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  11. Image content authentication based on channel coding

    Science.gov (United States)

    Zhang, Fan; Xu, Lei

    2008-03-01

    Content authentication determines whether an image has been tampered with and, if necessary, locates malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to identify the source of the document reliably. Several techniques and concepts based on data hiding or steganography have been designed as a means of image authentication. This paper presents a color image authentication algorithm based on convolutional coding. The high bits of the color digital image are encoded with convolutional codes for tamper detection and localization, and the authentication messages are hidden in the low bits of the image in order to keep the authentication invisible. All communication channels are subject to errors introduced by additive Gaussian noise in their environment. Data perturbations cannot be eliminated, but their effect can be minimized by the use of Forward Error Correction (FEC) techniques in the transmitted data stream and decoders in the receiving system that detect and correct bits in error. The message of each pixel is convolutionally encoded, and after parity checking and block interleaving, the redundant bits are embedded in the image offset. Tampering can thus be detected and restored without accessing the original image.
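
    A minimal sketch of the two ingredients described above: a rate-1/2 convolutional encoder (generators 7 and 5 in octal, a common textbook choice assumed here) applied to the high bits, with the resulting redundant bits written into image LSBs. Pixel selection and interleaving are simplified relative to the paper.

    ```python
    # Rate-1/2 convolutional encoding of high bits, embedded into LSBs.
    import numpy as np

    def conv_encode(bits, g1=0b111, g2=0b101):       # constraint length 3
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & 0b111       # shift in the next message bit
            out.append(bin(state & g1).count("1") % 2)   # parity of tap set g1
            out.append(bin(state & g2).count("1") % 2)   # parity of tap set g2
        return np.array(out, dtype=np.uint8)

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
    msg = (img[:32].flatten() >> 7) & 1              # high bits of the top half
    code = conv_encode(msg.tolist())                 # 2x redundancy -> 4096 bits

    flat = img.flatten()
    flat = (flat & 0xFE) | code                      # LSBs carry the parity stream
    watermarked = flat.reshape(img.shape)
    ```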

  12. Content-based image retrieval with ontological ranking

    Science.gov (United States)

    Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

    2010-02-01

    Images are a much more powerful medium of expression than text, as the adage says: "One picture is worth a thousand words." Compared with text, which consists of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, this less constrained structure presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when a limited number of learning examples and little background knowledge are given. The advance of internet and web technology in the past decade has changed the way humans acquire knowledge. People can exchange knowledge with others by discussing and contributing information on the web. As a result, the web pages of the internet have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from this web knowledge base as well. Indeed, it is possible to make computers learn from the internet and provide humans with more meaningful knowledge. In this work, we explore this novel possibility of image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords and a semantic ontology constituting humans' general knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts, allowing machines to understand to what extent and in what sense an image is close to the image search query. With the aid of these two tools, the resulting image search system is content-based and, moreover, organized. The returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on their semantic closeness to the input query. The novelty of the system is twofold: first, images are retrieved based not only on text cues but on their actual contents as well; second, the grouping

  13. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    International Nuclear Information System (INIS)

    Chao Ming; Xie Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing Lei

    2010-01-01

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in the quantitative assessment of therapeutics and the realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant changes in tumor tissue content, similarity-based models are not suitable for describing the process of tumor volume change. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) autodetection of homologous tissue features shared by two input images using the scale-invariant feature transform (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, in which SIFT features were mapped from template to target images and in reverse. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the "ground truth" with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for volume changes as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy.
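
    A minimal sketch of the bidirectional feature association step: SIFT keypoints matched with a brute-force matcher whose crossCheck flag keeps only pairs that are mutual nearest neighbours in both directions, mirroring the template-to-target-and-back procedure. File names are placeholders; it assumes OpenCV >= 4.4, and the thin plate spline warp itself is not reproduced.

    ```python
    # Bidirectional SIFT matching: control points for a thin plate spline warp.
    import cv2

    template = cv2.imread("planning_ct.png", cv2.IMREAD_GRAYSCALE)
    target = cv2.imread("treatment_ct.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(template, None)
    kp2, des2 = sift.detectAndCompute(target, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)  # mutual-NN check
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Corresponding point pairs between the two images.
    pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]
    ```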

  14. Professionals' experiences of imaging in the radiography process – A phenomenological approach

    International Nuclear Information System (INIS)

    Lundvall, Lise-Lott; Dahlgren, Madeleine Abrandt; Wirell, Staffan

    2014-01-01

    Introduction: Previous studies of radiographers' professional work have shown that this practice covers both technology and patient care. How these two areas of competence blend together in practice needs to be investigated. The professionals' experiences of their work have not been studied in depth, and there is a need to focus on their experiences of the main features of their practice. Aim: To explore, from the perspective of the radiographer, the general tasks and responsibilities of their work. Method: Data were generated through a combination of open interviews with radiographers and observations of their work with Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). The interviews and observations were analysed using an interpretative phenomenological method. Results: Radiographers' professional work with diagnostic imaging, in a Swedish context, can be viewed as a problem-solving process involving judgments and responsibility for obtaining images that can be used for diagnosis. The examination process comprises three phases: planning, producing the images, and evaluation. In the first phase the radiographer makes judgments about adapting the method to the individual patient; the second phase involves responsibilities and practical skills for image production. In the third phase, the quality of the images is judged in relation to the actual patient and the imaging process itself. Conclusions: Radiographers consider the main features of their professional work to be patient safety and their knowledge and skills in producing images of optimal quality under the actual circumstances of each examination.

  15. Restoration of motion-blurred image based on border deformation detection: a traffic sign restoration model.

    Directory of Open Access Journals (Sweden)

    Yiliang Zeng

    Full Text Available Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, degradation of traffic sign images acquired by computer vision is unavoidable while the vehicle is moving. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and then the width of the border is measured in all directions. From the measured widths and the corresponding directions, both the motion direction and the scale of the blur can be determined, and this information is used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the restoration quality. Compared to traditional restoration approaches based on the blind deconvolution method and the Lucy-Richardson method, our method can effectively restore motion-blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently.
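
    A minimal sketch of the surrounding machinery: build a linear motion PSF from an estimated blur length and angle (as the border widths would provide), deblur with Richardson-Lucy, and compare mean gradient magnitudes. The GMG formula here is one plausible reading of "gray mean grads", not necessarily the paper's exact definition, and the file name and blur parameters are placeholders.

    ```python
    # Motion PSF, Richardson-Lucy restoration, and a mean-gradient quality ratio.
    import numpy as np
    from scipy.ndimage import rotate
    from skimage import io, restoration

    def motion_psf(length, angle_deg):
        psf = np.zeros((length, length))
        psf[length // 2, :] = 1.0                  # horizontal line of motion
        psf = rotate(psf, angle_deg, reshape=False)
        psf = np.clip(psf, 0, None)                # interpolation can ring negative
        return psf / psf.sum()

    def mean_grad(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy).mean()

    blurred = io.imread("traffic_sign.png", as_gray=True)
    psf = motion_psf(15, 30.0)                     # estimated scale and direction
    restored = restoration.richardson_lucy(blurred, psf, 30)
    print("GMG ratio:", mean_grad(restored) / mean_grad(blurred))
    ```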

  16. Design and Implementation of a FPGA and DSP Based MIMO Radar Imaging System

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2015-06-01

    Full Text Available The work presented in this paper is aimed at the implementation of a real-time multiple-input multiple-output (MIMO) imaging radar used for area surveillance. In this radar, the equivalent virtual array method and a time-division technique are applied so that 16 virtual elements are synthesized from the MIMO antenna array. The chirp signal generator is based on a combination of a direct digital synthesizer (DDS) and a phase-locked loop (PLL). A signal conditioning circuit is used to deal with the coupling effect within the array. The signal processing platform is based on an efficient field programmable gate array (FPGA) and digital signal processor (DSP) pipeline on which a robust beamforming imaging algorithm runs. The radar system was evaluated in a real field experiment. The imaging capability and real-time performance shown in the results demonstrate the practical feasibility of the implementation.

  17. 3D Reconstruction from UAV-Based Hyperspectral Images

    Science.gov (United States)

    Liu, L.; Xu, L.; Peng, J.

    2018-04-01

    Reconstructing a 3D profile from a set of UAV-based images yields hyperspectral information as well as the 3D coordinates of any point on the profile. Our images are captured with the Cubert UHD185 (UHD) hyperspectral camera, a new type of high-speed onboard imaging spectrometer that acquires a hyperspectral image and a panchromatic image simultaneously. The panchromatic image has a higher spatial resolution than the hyperspectral image, while each hyperspectral image provides considerable information on the spatial spectral distribution of the object. There is thus an opportunity to derive a high-quality 3D point cloud from the panchromatic images and rich spectral information from the hyperspectral images. The purpose of this paper is to introduce our processing chain, which derives a database providing hyperspectral information and the 3D position of each point. First, we adopt a free and open-source software package, VisualSFM, which is based on the structure from motion (SFM) algorithm, to recover a 3D point cloud from the panchromatic images. We then obtain the spectral information of each point from the hyperspectral images with a self-developed program written in MATLAB. The product can be used to support further research and applications.

  18. Image-based environmental monitoring sensor application using an embedded wireless sensor network.

    Science.gov (United States)

    Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh

    2014-08-28

    This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, its scalability and its flexible scripting language, which enables mote-side image compression and ease of deployment. Our first deployment, a pitfall trap monitoring application at the James San Jacinto Mountain Reserve, provided us with insights and lessons learned about the deployment of, and compression schemes for, these embedded wireless imaging systems. Our three-month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images useful for obtaining data to answer their biological questions.

  19. Image-Based Environmental Monitoring Sensor Application Using an Embedded Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Jeongyeup Paek

    2014-08-01

    Full Text Available This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, its scalability and its flexible scripting language, which enables mote-side image compression and ease of deployment. Our first deployment, a pitfall trap monitoring application at the James San Jacinto Mountain Reserve, provided us with insights and lessons learned about the deployment of, and compression schemes for, these embedded wireless imaging systems. Our three-month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images useful for obtaining data to answer their biological questions.

  20. ON THE PATH OF THE FILM: Image as Experience and Experience as Image

    Directory of Open Access Journals (Sweden)

    André Reyes Novaes

    2013-12-01

    Full Text Available This article aims to show the relationships between the description of images and the conduct of fieldwork, based on a practical pedagogical experience carried out with students from the Colégio de Aplicação da UFRJ. Taking as a starting point the research conducted to shoot a documentary called Vulgo Sacopã, which portrayed the transformation of the landscape of a hill situated at Lagoa Rodrigo de Freitas in the city of Rio de Janeiro, the pedagogical practice was divided into two stages. First, students were presented with a series of historical images, seeking to stimulate the reading of the landscape as a "text" and to identify variations in its interpretation. Later, we went into the field in order to traverse the landscape and meet the main character of the film. By walking the paths of the film, it was possible to identify a series of exchanges that problematize simplistic divisions between the landscape "in visu", shown in the classroom, and the landscape "in situ", observed in the field.

  1. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance

    Science.gov (United States)

    Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang

    2017-12-01

    In recent years, Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperforms the state-of-the-art methods by about 13, 15, and 15%, respectively.
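
    A minimal sketch of a robust (partial) directed Hausdorff distance between two point sets: the max over points is replaced by a quantile so that cluttered-background outliers do not dominate. The quantile value is an illustrative assumption, and the random points stand in for image feature locations.

    ```python
    # Partial (quantile-based) Hausdorff distance between two point sets.
    import numpy as np

    def partial_hausdorff(A, B, q=0.7):
        """Directed robust distance from point set A (n,d) to B (m,d)."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise sq. dist
        nearest = np.sqrt(d2.min(axis=1))                     # each a's nearest b
        return np.quantile(nearest, q)                        # robust "max"

    A = np.random.rand(200, 2)
    B = np.random.rand(300, 2)
    d = max(partial_hausdorff(A, B), partial_hausdorff(B, A))  # symmetric version
    ```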

  2. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    Science.gov (United States)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Spatial non-uniformity means that different locations in an image are of different importance for the perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations, known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or by objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign different importance to each location in the image. Still, none of these objective metrics utilizes an analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROIs were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed, reconstructing the ROI at fine quality while the rest of the image is reconstructed at low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
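
    A minimal sketch of scoring a ROI-based reconstruction with one of the HVS-inspired metrics named above (SSIM from scikit-image); computing it separately inside and outside a ROI mask illustrates why a single global score can hide the effect of ROI-aware processing. File names and the mask coordinates are placeholders.

    ```python
    # Global SSIM versus SSIM split across a region-of-interest mask.
    import numpy as np
    from skimage import io
    from skimage.metrics import structural_similarity as ssim

    ref = io.imread("reference.png", as_gray=True)
    test = io.imread("roi_demosaiced.png", as_gray=True)
    roi = np.zeros(ref.shape, bool); roi[100:300, 150:400] = True  # assumed ROI

    score, smap = ssim(ref, test, data_range=1.0, full=True)  # global + local map
    print("global SSIM:", score)
    print("SSIM inside ROI:", smap[roi].mean(), "outside:", smap[~roi].mean())
    ```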

  3. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that sometimes make alternative methods more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.

  4. AN AERIAL-IMAGE DENSE MATCHING APPROACH BASED ON OPTICAL FLOW FIELD

    Directory of Open Access Journals (Sweden)

    W. Yuan

    2016-06-01

    Full Text Available Dense matching plays an important role in many fields, such as DEM (digital elevation model) production, robot navigation and 3D environment reconstruction. Traditional approaches may meet the demands of accuracy, but their computation time and output density are hardly acceptable. Focusing on matching efficiency and the feasibility of matching complex terrain surfaces, an aerial image dense matching method based on an optical flow field is proposed in this paper. First, highly accurate and uniformly distributed control points are extracted using a feature-based matching method. The optical flow is then calculated from these control points so as to determine the similar region between two images. Second, the optical flow field is interpolated with multi-level B-spline interpolation in the similar region, accomplishing pixel-by-pixel coarse matching. Finally, the coarse matching results are refined based on combined constraints, which identify corresponding points between the images. The experimental results show that our method achieves per-pixel dense matching, the matching accuracy reaches sub-pixel level, and it fully meets the requirements of three-dimensional reconstruction and automatic generation of dense DSMs. Comparison experiments demonstrate that our approach's matching efficiency is higher than that of semi-global matching (SGM) and patch-based multi-view stereo matching (PMVS), which verifies the feasibility and effectiveness of the algorithm.
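
    A minimal sketch of producing a dense flow field between two aerial images. OpenCV's Farneback method stands in here for the control-point-seeded, B-spline-interpolated flow field of the paper; it likewise yields a per-pixel displacement that could seed coarse matching. File names and parameter values are placeholders.

    ```python
    # Dense optical flow between two overlapping aerial images.
    import cv2

    img1 = cv2.imread("aerial_left.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("aerial_right.png", cv2.IMREAD_GRAYSCALE)
    flow = cv2.calcOpticalFlowFarneback(img1, img2, None,
                                        pyr_scale=0.5, levels=4, winsize=21,
                                        iterations=3, poly_n=5, poly_sigma=1.1,
                                        flags=0)
    # flow[y, x] = (dx, dy): pixel (x, y) in img1 matches (x + dx, y + dy) in img2.
    ```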

  5. Intra-operative nuclear imaging based on positron-emitting radiotracers

    International Nuclear Information System (INIS)

    Shakir, Dzhoshkun Ismail

    2014-01-01

    Positron-emitting radiotracers are an important part of nuclear medical imaging. Besides the very well-known glucose analog [18F]FDG, many less familiar ones exist, among them the particularly interesting amino acid-based tracers like [18F]FET. Although peri-operative imaging with positron-emitting radiotracers has become state-of-the-art for many types of cancer, its capability is not yet fully exploited in the operating room. In this thesis we explore two intra-operative nuclear imaging modalities exploiting different aspects of positron radiation towards quality assurance in challenging surgical treatment scenarios. The first modality, freehand PET, provides a tomographic image of a volume of interest and aims at minimizing invasiveness by assisting the surgeon in pinpointing target structures marked with a radiotracer. The second imaging modality, epiphanography, generates an image of the radiotracer distribution on a surface of interest and aims at providing a means for improving the control of tumor resection margins. The word epiphanography is a compound of the Greek words επιφανεια (epiphaneia) for surface and γραφια (graphia) for image, and hence means the image of the surface, similar to the compound of τομος (tomos) for slice/volume in tomography, the image of the volume. To our knowledge this is the first use of the word epiphanography in the context of nuclear medical imaging. In this thesis we present our approach to modeling, developing and calibrating these two novel imaging modalities. In addition, we present our work towards their clinical integration. In the case of freehand PET, we have already acquired the first intra-operative datasets of a patient. We present this first experience in the operating room together with our phantom studies. In the case of epiphanography, we present our phantom studies with neurosurgeons towards the integration of this

  6. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in one's own image database. As the computational performance and memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model and state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database and highlighting the visual information it has in common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.

  7. Models for Patch-Based Image Restoration

    Directory of Open Access Journals (Sweden)

    Petrovic Nemanja

    2009-01-01

    Full Text Available We present a supervised learning approach for object-category specific restoration, recognition, and segmentation of images which are blurred using an unknown kernel. The novelty of this work is a multilayer graphical model which unifies the low-level vision task of restoration and the high-level vision task of recognition in a cooperative framework. The graphical model is an interconnected two-layer Markov random field. The restoration layer accounts for the compatibility between sharp and blurred images and models the association between adjacent patches in the sharp image. The recognition layer encodes the entity class and its location in the underlying scene. The potentials are represented using nonparametric kernel densities and are learnt from training data. Inference is performed using nonparametric belief propagation. Experiments demonstrate the effectiveness of our model for the restoration and recognition of blurred license plates as well as face images.

  9. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    Science.gov (United States)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.
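
    To make the pyramid-fusion idea concrete, here is a minimal Python sketch in which a Laplacian pyramid stands in for the contrast pyramid and fixed per-level weights stand in for the BCSA-optimised fusion coefficients; the input file names and weight values are illustrative assumptions.

      # Simplified pyramid fusion sketch (fixed weights instead of BCSA).
      import cv2
      import numpy as np

      def laplacian_pyramid(img, levels=4):
          g = [img.astype(np.float32)]
          for _ in range(levels):
              g.append(cv2.pyrDown(g[-1]))
          lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[::-1])
                 for i in range(levels)]
          return lap + [g[-1]]                        # detail levels + coarse base

      def reconstruct(pyr):
          img = pyr[-1]
          for lap in reversed(pyr[:-1]):
              img = cv2.pyrUp(img, dstsize=lap.shape[::-1]) + lap
          return img

      a = cv2.imread("band_a.png", cv2.IMREAD_GRAYSCALE)   # placeholder inputs
      b = cv2.imread("band_b.png", cv2.IMREAD_GRAYSCALE)
      pa, pb = laplacian_pyramid(a), laplacian_pyramid(b)
      w = [0.6, 0.6, 0.5, 0.5, 0.5]                   # would be optimised by BCSA
      fused = reconstruct([wi * la + (1 - wi) * lb
                           for wi, la, lb in zip(w, pa, pb)])
      cv2.imwrite("fused.png", np.clip(fused, 0, 255).astype(np.uint8))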

  10. Change detection for synthetic aperture radar images based on pattern and intensity distinctiveness analysis

    Science.gov (United States)

    Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang

    2018-04-01

    Synthetic aperture radar (SAR) imaging is independent of atmospheric conditions, which makes it an ideal image source for change detection. Existing methods directly analyze all regions in the speckle-noise-contaminated difference image, so their performance is easily degraded by small noisy regions. In this paper, we propose a novel saliency-guided change detection framework based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first utilized to obtain a difference image (DI). Then, saliency detection based on pattern and intensity distinctiveness analysis is utilized to obtain the changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyze the pixels in the changed-region candidates, and the final change map is obtained by classifying these pixels into the changed or unchanged class. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
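
    The log-ratio, PCA and k-means steps named above can be sketched in a few lines of Python; the saliency-guided candidate selection that distinguishes the proposed framework is omitted, and random arrays stand in for co-registered SAR images.

      # Log-ratio + patch PCA + k-means change detection sketch.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      def change_map(img1, img2, patch=5, n_components=3):
          di = np.abs(np.log((img1 + 1.0) / (img2 + 1.0)))    # log-ratio DI
          h, w = di.shape
          pad = patch // 2
          dip = np.pad(di, pad, mode="reflect")
          vecs = np.array([dip[i:i + patch, j:j + patch].ravel()
                           for i in range(h) for j in range(w)])
          feats = PCA(n_components=n_components).fit_transform(vecs)
          labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats).reshape(h, w)
          # call the cluster with the larger mean log-ratio "changed"
          changed_cluster = np.argmax([di[labels == c].mean() for c in (0, 1)])
          return labels == changed_cluster

      t1 = np.random.rand(64, 64) * 255 + 1           # stand-in SAR intensities
      t2 = np.random.rand(64, 64) * 255 + 1
      print(change_map(t1, t2).sum(), "pixels flagged as changed")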

  11. Image-Based Fine-Scale Infrastructure Monitoring

    Science.gov (United States)

    Detchev, Ivan Denislavov

    Monitoring the physical health of civil infrastructure systems is an important task that must be performed frequently in order to ensure their serviceability and sustainability. Additionally, laboratory experiments where individual system components are tested on the fine-scale level provide essential information during the structural design process. This type of inspection, i.e., measurements of deflections and/or cracks, has traditionally been performed with instrumentation that requires access to, or contact with, the structural element being tested; performs deformation measurements in only one dimension or direction; and/or provides no permanent visual record. To avoid the downsides of such instrumentation, this dissertation proposes a remote sensing approach based on a photogrammetric system capable of three-dimensional reconstruction. The proposed system is low-cost, consists of off-the-shelf components, and is capable of reconstructing objects or surfaces with homogeneous texture. The scientific contributions of this research work address the drawbacks in currently existing literature. Methods for in-situ multi-camera system calibration and system stability analysis are proposed in addition to methods for deflection/displacement monitoring, and crack detection and characterization in three dimensions. The mathematical model for the system calibration is based on a single or multiple reference camera(s) and built-in relative orientation constraints where the interior orientation and the mounting parameters for all cameras are explicitly estimated. The methods for system stability analysis can be used to comprehensively check for the cumulative impact of any changes in the system parameters. They also provide a quantitative measure of this impact on the reconstruction process in terms of image space units. Deflection/displacement monitoring of dynamic surfaces in three dimensions is achieved with the system by performing an innovative sinusoidal fitting
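
    The abstract breaks off at the sinusoidal fitting step; as a generic illustration of fitting a sinusoid to a measured displacement series (not the dissertation's actual procedure), a least-squares fit with SciPy might look like this:

      # Least-squares sinusoid fit to a displacement time series (illustrative).
      import numpy as np
      from scipy.optimize import curve_fit

      def sinusoid(t, amp, freq, phase, offset):
          return amp * np.sin(2 * np.pi * freq * t + phase) + offset

      t = np.linspace(0, 2, 200)                      # seconds
      obs = 1.5 * np.sin(2 * np.pi * 3 * t + 0.4) + 0.1 * np.random.randn(t.size)
      p0 = [1.0, 3.0, 0.0, 0.0]                       # amplitude, Hz, phase, offset
      params, _ = curve_fit(sinusoid, t, obs, p0=p0)
      print("amplitude %.3f, frequency %.3f Hz" % (params[0], params[1]))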

  12. A Novel Optimization-Based Approach for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Manyu Xiao

    2013-01-01

    Full Text Available Content-based image retrieval is nowadays one of the possible and promising solutions to manage image databases effectively. However, with the large number of images, there still exists a great discrepancy between users' expectations (accuracy and efficiency) and the real performance of image retrieval. In this work, new optimization strategies are proposed for vocabulary tree building, retrieval, and matching. More precisely, a new clustering strategy combining classification and the conventional K-means method is first defined. Then a new matching technique is built to eliminate the error caused by large-scale scale-invariant feature transform (SIFT) matching. Additionally, a new unit mechanism is proposed to reduce indexing time. Finally, the numerical results show that excellent performance is obtained in both accuracy and efficiency with the proposed improvements for image retrieval.

  13. Parallel algorithm of real-time infrared image restoration based on total variation theory

    Science.gov (United States)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove the noise but penalize edge gradients too heavily. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional, converting restoration into the optimization of a functional with a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model makes full use of the acquired remote sensing data and preserves information at edges caused by clouds. The numerical implementation is presented in detail, and analysis indicates that the structure of the algorithm is easily parallelized. Therefore, a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems, in which massive image data computations are performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm, and a quantitative analysis of restored image quality relative to the input image is presented. Experimental results show that the TV-L1 filter restores the varying background reasonably and that its performance meets the requirements of real-time image processing.
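
    A minimal serial sketch of the variational idea, assuming a smoothed total-variation term with an L2 fidelity term (the paper uses an L1 fidelity and a multithreaded implementation); the periodic boundary handling via np.roll is a simplification.

      # Gradient descent on lam*TV_eps(u) + 0.5*||u - f||^2 (ROF-style sketch).
      import numpy as np

      def tv_denoise(f, lam=0.1, step=0.2, iters=200, eps=1e-6):
          u = f.copy()
          for _ in range(iters):
              ux = np.roll(u, -1, axis=1) - u         # forward differences
              uy = np.roll(u, -1, axis=0) - u
              mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
              px, py = ux / mag, uy / mag
              div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
              u += step * (lam * div - (u - f))       # descend on the functional
          return u

      noisy = np.random.rand(128, 128)                # stand-in infrared frame
      restored = tv_denoise(noisy)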

  14. Fluid region segmentation in OCT images based on convolution neural network

    Science.gov (United States)

    Liu, Dong; Liu, Xiaoming; Fu, Tianyu; Yang, Zhou

    2017-07-01

    In the retinal image, characteristics of fluid have great significance for the diagnosis of eye disease. In clinical practice, fluid segmentation is usually performed manually, which is time-consuming, and its accuracy depends heavily on the expert's experience. In this paper, we propose a segmentation method based on a convolutional neural network (CNN) for segmenting fluid from retinal OCT images. The OCT B-scans are segmented into layers, and annotated patches from specific regions are used for training. After dividing the data set into training and test sets, the network is trained and achieves good segmentation results, with a significant advantage over traditional methods such as thresholding.
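
    As an illustration of patch-based CNN classification (the paper does not spell out its architecture), a minimal PyTorch patch classifier might look as follows; the 32x32 patch size and two-class setup are assumptions.

      # Toy patch classifier: fluid vs. background (architecture is assumed).
      import torch
      import torch.nn as nn

      class PatchNet(nn.Module):
          def __init__(self, n_classes=2):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # 32x32 input

          def forward(self, x):
              return self.classifier(self.features(x).flatten(1))

      model = PatchNet()
      patches = torch.randn(4, 1, 32, 32)             # four dummy OCT patches
      logits = model(patches)
      loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
      loss.backward()                                 # gradients for one step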

  15. Augmented reality based real-time subcutaneous vein imaging system.

    Science.gov (United States)

    Ai, Danni; Yang, Jian; Fan, Jingfan; Zhao, Yitian; Song, Xianzheng; Shen, Jianbing; Shao, Ling; Wang, Yongtian

    2016-07-01

    A novel 3D reconstruction and fast imaging system for subcutaneous veins based on augmented reality is presented. The study was performed to reduce the failure rate and time required for intravenous injection by providing augmented vein structures that back-project superimposed veins onto the skin surface of the hand. Images of the subcutaneous veins are captured by two industrial cameras with additional reflective near-infrared lights. The veins are then segmented by a multiple-feature clustering method. Vein structures captured by the two cameras are matched and reconstructed based on the epipolar constraint and homographic property. The skin surface is reconstructed by active structured light with spatial encoding values and fusion-displayed with the reconstructed veins. Both the veins and the skin surface are reconstructed in 3D space. Results show that the structures can be precisely back-projected to the back of the hand for further augmented display and visualization. The overall system performance is evaluated in terms of vein segmentation, vein matching accuracy, feature point distance error, run time, skin reconstruction accuracy, and augmented display. All experiments are validated with sets of real vein data, and the system produces good imaging and augmented reality results at high speed.

  16. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. First, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means of this objective function carry a multiplicative factor that estimates the bias field in the transformed domain, and the bias field prior is fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  17. Image-guided linear accelerator-based spinal radiosurgery for hemangioblastoma.

    Science.gov (United States)

    Selch, Michael T; Tenn, Steve; Agazaryan, Nzhde; Lee, Steve P; Gorgulho, Alessandra; De Salles, Antonio A F

    2012-01-01

    To retrospectively review the efficacy and safety of image-guided linear accelerator-based radiosurgery for spinal hemangioblastomas. Between August 2004 and September 2010, nine patients with 20 hemangioblastomas underwent spinal radiosurgery. Five patients had von Hippel-Lindau disease. Four patients had multiple tumors. Ten tumors were located in the thoracic spine, eight in the cervical spine, and two in the lumbar spine. Tumor volume varied from 0.08 to 14.4 cc (median 0.72 cc). Maximum tumor dimension varied from 2.5 to 24 mm (median 10.5 mm). Radiosurgery was performed with a dedicated 6 MV linear accelerator equipped with a micro-multileaf collimator. Median peripheral tumor dose and prescription isodose were 12 Gy and 90%, respectively. Image guidance was performed by optical tracking of infrared reflectors, fusion of oblique radiographs with dynamically reconstructed digital radiographs, and automatic patient positioning. Follow-up varied from 14 to 86 months (median 51 months). Kaplan-Meier estimated 4-year overall and solid tumor local control rates were 90% and 95%, respectively. One tumor progressed 12 months after treatment and a new cyst developed 10 months after treatment in another tumor. There has been no clinical or imaging evidence of spinal cord injury. Results of this limited experience indicate linear accelerator-based radiosurgery is safe and effective for spinal hemangioblastomas. Longer follow-up is necessary to confirm the durability of tumor control, but these initial results imply linear accelerator-based radiosurgery may represent a therapeutic alternative to surgery for selected patients with spinal hemangioblastomas.

  18. Image processing based detection of lung cancer on CT scan images

    Science.gov (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze an image processing method for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a lung cancer detection method based on image segmentation, an intermediate level of image processing. Marker-controlled watershed and region-growing approaches are used to segment the CT scan images. The detection pipeline consists of image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results show the effectiveness of our approach: the best approach for main feature detection is the watershed-with-masking method, which has high accuracy and is robust.
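
    The marker-controlled watershed step can be illustrated with the standard OpenCV recipe below; the input path and thresholds are placeholders, not the paper's tuned settings.

      # Standard OpenCV marker-controlled watershed recipe (illustrative).
      import cv2
      import numpy as np

      img = cv2.imread("ct_slice.png")                # placeholder CT slice (BGR)
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

      kernel = np.ones((3, 3), np.uint8)
      sure_bg = cv2.dilate(binary, kernel, iterations=3)
      dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
      _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
      sure_fg = sure_fg.astype(np.uint8)
      unknown = cv2.subtract(sure_bg, sure_fg)

      _, markers = cv2.connectedComponents(sure_fg)
      markers = markers + 1                           # background label becomes 1
      markers[unknown == 255] = 0                     # 0 marks the unknown region
      markers = cv2.watershed(img, markers)           # boundaries labelled -1
      print("regions found:", markers.max())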

  19. A multicore based parallel image registration method.

    Science.gov (United States)

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm was shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points, which usually requires an extensive and computationally expensive search. We introduce a nonregular data partition algorithm that uses K-means clustering to group the landmarks according to the number of available processing cores; this step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform.

  20. Predicting Electron Population Characteristics in 2-D Using Multispectral Ground-Based Imaging

    Science.gov (United States)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Jahn, Jorg-Micha

    2018-01-01

    Ground-based imaging and in situ sounding rocket data are compared to electron transport modeling for an active inverted-V type auroral event. The Ground-to-Rocket Electrodynamics-Electrons Correlative Experiment (GREECE) mission successfully launched from Poker Flat, Alaska, on 3 March 2014 at 11:09:50 UT and reached an apogee of approximately 335 km over the aurora. Multiple ground-based electron-multiplying charge-coupled device (EMCCD) imagers were positioned at Venetie, Alaska, and aimed toward magnetic zenith. The imagers observed the intensity of different auroral emission lines (427.8, 557.7, and 844.6 nm) at the magnetic foot point of the rocket payload. Emission line intensity data are correlated with electron characteristics measured by the GREECE onboard electron spectrometer. A modified version of the GLobal airglOW (GLOW) model is used to estimate precipitating electron characteristics based on optical emissions. GLOW predicted the electron population characteristics with 20% error given the observed spectral intensities within 10° of magnetic zenith. Predictions are within 30% of the actual values within 20° of magnetic zenith for inverted-V-type aurora. Therefore, it is argued that this technique can be used, at least in certain types of aurora, such as the inverted-V type presented here, to derive 2-D maps of electron characteristics. These can then be used to further derive 2-D maps of ionospheric parameters as a function of time, based solely on multispectral optical imaging data.

  1. Image quality assessment based on multiscale geometric analysis.

    Science.gov (United States)

    Gao, Xinbo; Lu, Wen; Tao, Dacheng; Li, Xuelong

    2009-07-01

    Reduced-reference (RR) image quality assessment (IQA) has been recognized as an effective and efficient way to predict the visual quality of distorted images. The current standard is the wavelet-domain natural image statistics model (WNISM), which applies the Kullback-Leibler divergence between the marginal distributions of wavelet coefficients of the reference and distorted images to measure the image distortion. However, WNISM fails to consider the statistical correlations of wavelet coefficients in different subbands and the visual response characteristics of the mammalian cortical simple cells. In addition, wavelet transforms are optimal greedy approximations to extract singularity structures, so they fail to explicitly extract the image geometric information, e.g., lines and curves. Finally, wavelet coefficients are dense for smooth image edge contours. In this paper, to target the aforementioned problems in IQA, we develop a novel framework for IQA that mimics the human visual system (HVS) by incorporating the merits of multiscale geometric analysis (MGA), the contrast sensitivity function (CSF), and Weber's law of just noticeable difference (JND). In the proposed framework, MGA is utilized to decompose images and then extract features to mimic the multichannel structure of HVS. Additionally, MGA offers a series of transforms including wavelet, curvelet, bandelet, contourlet, wavelet-based contourlet transform (WBCT), and hybrid wavelets and directional filter banks (HWD), and different transforms capture different types of image geometric information. CSF is applied to weight the coefficients obtained by MGA to simulate the appearance of images to observers by taking into account many of the nonlinearities inherent in HVS. JND is finally introduced to produce a noticeable variation in sensory experience. Thorough empirical studies are carried out upon the LIVE database against subjective mean opinion score (MOS) and demonstrate that 1) the proposed framework has

  2. Comparing orbiter and rover image-based mapping of an ancient sedimentary environment, Aeolis Palus, Gale crater, Mars

    Science.gov (United States)

    Stack, Kathryn M.; Edwards, Christopher; Grotzinger, J. P.; Gupta, S.; Sumner, D.; Edgar, Lauren; Fraeman, A.; Jacob, S.; LeDeit, L.; Lewis, K.W.; Rice, M.S.; Rubin, D.; Calef, F.; Edgett, K.; Williams, R.M.E.; Williford, K.H.

    2016-01-01

    This study provides the first systematic comparison of orbital facies maps with detailed ground-based geology observations from the Mars Science Laboratory (MSL) Curiosity rover to examine the validity of geologic interpretations derived from orbital image data. Orbital facies maps were constructed for the Darwin, Cooperstown, and Kimberley waypoints visited by the Curiosity rover using High Resolution Imaging Science Experiment (HiRISE) images. These maps, which represent the most detailed orbital analysis of these areas to date, were compared with rover image-based geologic maps and stratigraphic columns derived from Curiosity’s Mast Camera (Mastcam) and Mars Hand Lens Imager (MAHLI). Results show that bedrock outcrops can generally be distinguished from unconsolidated surficial deposits in high-resolution orbital images and that orbital facies mapping can be used to recognize geologic contacts between well-exposed bedrock units. However, process-based interpretations derived from orbital image mapping are difficult to infer without known regional context or observable paleogeomorphic indicators, and layer-cake models of stratigraphy derived from orbital maps oversimplify depositional relationships as revealed from a rover perspective. This study also shows that fine-scale orbital image-based mapping of current and future Mars landing sites is essential for optimizing the efficiency and science return of rover surface operations.

  3. Feature-based Alignment of Volumetric Multi-modal Images

    Science.gov (United States)

    Toews, Matthew; Zöllei, Lilla; Wells, William M.

    2014-01-01

    This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to the poor repeatability of feature detection in different image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from feature data and realigning feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955

  4. Biometric image enhancement using decision rule based image fusion techniques

    Science.gov (United States)

    Sagayee, G. Mary Amirtha; Arumugam, S.

    2010-02-01

    Introducing biometrics into information systems may result in considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing a proper sensor is a critical issue. The proposed work addresses how image quality can be improved by introducing an image fusion technique at the sensor level. The images produced by the decision-rule-based image fusion technique are evaluated and analyzed in terms of their entropy levels and root mean square error.

  5. Image-based reflectance conversion of ASTER and IKONOS ...

    African Journals Online (AJOL)

    Spectral signatures derived from different image-based models for ASTER and IKONOS were inspected visually as a first departure. This was followed by a comparison of the total accuracy and Kappa index computed from supervised classification of images derived from different image-based atmospheric correction ...

  6. Image compression software for the SOHO LASCO and EIT experiments

    Science.gov (United States)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle and Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for SOHO investigators who need to understand the results of SOHO compression in order to better apportion the transmission bits allocated to them.

  7. Hurricane Imaging Radiometer Wind Speed and Rain Rate Retrievals during the 2010 GRIP Flight Experiment

    Science.gov (United States)

    Sahawneh, Saleem; Farrar, Spencer; Johnson, James; Jones, W. Linwood; Roberts, Jason; Biswas, Sayak; Cecil, Daniel

    2014-01-01

    Microwave remote sensing observations of hurricanes from NOAA and USAF hurricane surveillance aircraft provide vital data for hurricane research and operations, for forecasting the intensity and track of tropical storms. The current operational standard for hurricane wind speed and rain rate measurements is the Stepped Frequency Microwave Radiometer (SFMR), a nadir-viewing passive microwave airborne remote sensor. The Hurricane Imaging Radiometer, HIRAD, will extend the nadir-viewing SFMR capability to provide wide-swath images of wind speed and rain rate while flying on a high-altitude aircraft. HIRAD was first flown in the Genesis and Rapid Intensification Processes (GRIP) NASA hurricane field experiment in 2010. This paper reports on geophysical retrieval results and provides hurricane images from GRIP flights. An overview of the HIRAD instrument and the radiative-transfer-based wind speed/rain rate retrieval algorithm is included. Results are presented for hurricane wind speed and rain rate for Earl and Karl, with comparison to collocated SFMR retrievals and WP3D fuselage radar images for validation purposes.

  8. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    Science.gov (United States)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  9. Pedestrian detection from thermal images: A sparse representation based approach

    Science.gov (United States)

    Qi, Bin; John, Vijay; Liu, Zheng; Mita, Seiichi

    2016-05-01

    Pedestrian detection, a key technology in computer vision, plays a paramount role in advanced driver assistance systems (ADASs) and autonomous vehicles. The objective of pedestrian detection is to identify and locate people in a dynamic environment so that accidents can be avoided. With significant variations introduced by illumination, occlusion, articulated pose, and complex backgrounds, pedestrian detection is a challenging task for visual perception. Unlike visible images, thermal images are captured and presented as intensity maps based on objects' emissivity, and thus have an enhanced spectral range that makes human beings perceptible against the cool background. In this study, a sparse representation based approach is proposed for pedestrian detection from thermal images. We first adopt the histogram of sparse codes to represent image features and then detect pedestrians with the extracted features in an unimodal and a multimodal framework, respectively. In the unimodal framework, two types of dictionaries, i.e., a joint dictionary and an individual dictionary, are built by learning from prepared training samples. In the multimodal framework, a weighted fusion scheme is proposed to further highlight the contributions from features with higher separability. To validate the proposed approach, experiments were conducted comparing against three widely used features, Haar wavelets (HWs), histogram of oriented gradients (HOG), and histogram of phase congruency (HPC), as well as two classification methods, AdaBoost and support vector machine (SVM). Experimental results on a publicly available data set demonstrate the superiority of the proposed approach.
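
    A sketch of the histogram-of-sparse-codes feature using scikit-learn dictionary learning; the dictionary size, patch size and OMP sparsity level are illustrative assumptions rather than the paper's settings.

      # Histogram of sparse codes over image patches (illustrative settings).
      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.feature_extraction.image import extract_patches_2d

      thermal = np.random.rand(64, 48)                # stand-in thermal image
      patches = extract_patches_2d(thermal, (8, 8), max_patches=500)
      X = patches.reshape(len(patches), -1)
      X -= X.mean(axis=1, keepdims=True)              # remove patch DC component

      dico = MiniBatchDictionaryLearning(n_components=32,
                                         transform_algorithm="omp",
                                         transform_n_nonzero_coefs=5).fit(X)
      codes = dico.transform(X)                       # sparse codes, (500, 32)
      hist = (codes != 0).sum(axis=0).astype(np.float64)
      hist /= hist.sum()                              # image-level descriptor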

  10. Automated pathologies detection in retina digital images based on complex continuous wavelet transform phase angles.

    Science.gov (United States)

    Lahmiri, Salim; Gargour, Christian S; Gabrea, Marcel

    2014-10-01

    An automated diagnosis system that uses the complex continuous wavelet transform (CWT) to process retina digital images and support vector machines (SVMs) for classification purposes is presented. In particular, each retina image is transformed into two one-dimensional signals by concatenating image rows and columns separately. The mathematical norm of the phase angles found in each one-dimensional signal at each level of CWT decomposition is relied on to characterise the texture of normal images against abnormal images affected by exudates, drusen and microaneurysms. The leave-one-out cross-validation method was adopted to conduct experiments, and the results from the SVM show that the proposed approach gives better results than those obtained by other methods in terms of correct classification rate, sensitivity and specificity.
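
    The phase-angle feature can be sketched with PyWavelets: rows (and, analogously, columns) are concatenated into a one-dimensional signal, a complex Morlet CWT is computed, and the norm of the phase angles at each scale forms the feature vector handed to an SVM; the wavelet name and scales are assumptions.

      # Norm of CWT phase angles per scale as a texture feature (illustrative).
      import numpy as np
      import pywt

      def phase_features(image, scales=(1, 2, 4, 8, 16)):
          signal = image.ravel().astype(np.float64)   # concatenated rows
          coeffs, _ = pywt.cwt(signal, scales, "cmor1.5-1.0")
          phases = np.angle(coeffs)                   # angles, one row per scale
          return np.linalg.norm(phases, axis=1)       # feature vector

      retina = np.random.rand(32, 32)                 # stand-in retina image
      print(phase_features(retina))                   # 5-dimensional feature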

  11. Efficient OCT Image Enhancement Based on Collaborative Shock Filtering.

    Science.gov (United States)

    Liu, Guohua; Wang, Ziyu; Mu, Guoying; Li, Peijin

    2018-01-01

    Efficient enhancement of noisy optical coherence tomography (OCT) images is a key task for interpreting them correctly. In this paper, to better enhance the details and layered structures of a human retina image, we propose collaborative shock filtering for OCT image denoising and enhancement. The noisy OCT image is first denoised by a collaborative filtering method with a new similarity measure, and the denoised image is then sharpened by shock-type filtering for edge and detail enhancement. For dim OCT images, a gamma transformation is first used to bring the images into a proper grey-level range, improving contrast for the detection of tiny lesions. The proposed method, which integrates image smoothing and sharpening, obtains better visual results in experiments.

  12. Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks.

    Science.gov (United States)

    Zhong, Jiandan; Lei, Tao; Yao, Guangle

    2017-11-24

    Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on the sliding-window paradigm were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient on aerial image data because existing CNN-based models struggle with small-object detection and precise localization. To improve detection accuracy without sacrificing speed, we propose a detection model combining two independent convolutional neural networks. The first network generates a set of vehicle-like regions from multi-feature maps of different hierarchies and scales; because the multi-feature maps combine the advantages of deep and shallow convolutional layers, this network performs well at locating small targets in aerial image data. The generated candidate regions are then fed into the second network for feature extraction and decision making. Comprehensive experiments on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and the Munich vehicle dataset show that the proposed cascaded detection model yields high performance in both detection accuracy and detection speed.

  13. A homeostatic, chip-based platform for zebrafish larvae immobilization and long-term imaging

    Science.gov (United States)

    Friedrich, Timo; Zhu, Feng; Wlodkowic, Donald; Kaslin, Jan

    2015-12-01

    Zebrafish larvae are ideal for toxicology and drug screens due to their transparency, small size and genetic similarity to humans. Using modern imaging techniques, cells and tissues can be dynamically visualised and followed over days in multiple zebrafish. Yet continued imaging experiments require specialized conditions such as moisture and heat control to maintain specimen homeostasis. Chambers that control the environment are generally very expensive and are not available for all imaging platforms. A highly customizable mounting configuration with built-in control of temperature and media flow would therefore be a valuable tool for long-term imaging experiments. Rapid prototyping using 3D printing is particularly suitable as a production method, as it offers high design flexibility, is widely available and allows a high degree of customization. We study neural regeneration in zebrafish: regeneration is limited in humans, but zebrafish recover from neural damage within days, and the underlying regenerative mechanisms remain unclear. We developed an agarose-based mounting system that holds the embryos in defined positions along removable strips. Homeostasis and temperature control are ensured by channels circulating buffer and heated water. This allows up to 120 larvae to be imaged simultaneously for more than two days. Its flexibility and its low volume relative to the number of larvae will allow screening of small compound libraries. Taken together, we offer a low-cost, highly adaptable solution for long-term in vivo imaging.

  14. Image-based corrosion recognition for ship steel structures

    Science.gov (United States)

    Ma, Yucong; Yang, Yang; Yao, Yuan; Li, Shengyuan; Zhao, Xuefeng

    2018-03-01

    Ship structures are inevitably subjected to corrosion in service. Existing image-based methods are influenced by image noise because they recognize corrosion by extracting features. In this paper, a novel method of image-based corrosion recognition for ship steel structures is proposed. The method utilizes convolutional neural networks (CNNs) and is not affected by image noise. A CNN for corrosion recognition was designed by fine-tuning an existing CNN architecture and trained on datasets built from a large number of images. By combining the trained CNN classifier with a sliding-window technique, the corrosion zone in an image can be recognized.

  15. New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods.

    Science.gov (United States)

    Boushey, C J; Spoden, M; Zhu, F M; Delp, E J; Kerr, D A

    2017-08-01

    For nutrition practitioners and researchers, assessing dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods, their benefits and challenges; followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment, which are easier to incorporate into daily routines. The presented studies illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with results with traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regards to age of user, degree of error and cost.

  16. A New 4D Imaging Method for Three-Phase Analogue Experiments in Volcanology (and Other Three-Phase Systems)

    Science.gov (United States)

    Oppenheimer, J.; Patel, K. B.; Lev, E.; Hillman, E. M. C.

    2017-12-01

    Bubbles and crystals suspended in magmas interact with each other on a small scale, which affects large-scale volcanic processes. Studying these interactions on relevant scales of time and space is a long-standing challenge, so the fundamental explanations for the behavior of bubble- and crystal-rich magmas are still largely speculative. Recent application of X-ray tomography to experiments with synthetic magmas has improved our understanding of small-scale 4D (3D + time) phenomena, but this technique has low imaging rates. Here we use Swept Confocally Aligned Planar Excitation (SCAPE) microscopy, a laser-fluorescence method that has been used to image live biological processes at high speed and in 3D. It allows imaging rates of up to several hundred volumes per second and image volumes up to 1 x 1 x 0.5 mm3, with a trade-off between speed and spatial resolution. We ran two sets of experiments with silicone oil and soda-lime glass beads of <50 µm diameter, contained within a vertical glass casing of 50 x 5 x 4 mm3, using two different bubble generation methods. In the first set of experiments, small air bubbles (<1 mm) were introduced through a hole at the bottom of the sample and allowed to rise through a suspension with low-viscosity oil; we successfully imaged bubble rise and particle movements around the bubble. In the second set, bubbles were generated by mixing acetone into the suspension and decreasing the surface pressure to cause a phase change to gaseous acetone. This bubble generation method compared favorably with previous gum rosin-acetone experiments: it provided similar degassing behaviors, along with more control over suspension viscosity and optimal optical properties for laser transmission. Large volumes of suspended bubbles, however, interfered with the laser path. In this set, we were able to track bubble nucleation sites and nucleation rates in 4D. This promising technique allows the study of small-scale interactions in two- and three-phase systems.

  17. Particle swarm optimization-based local entropy weighted histogram equalization for infrared image enhancement

    Science.gov (United States)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier

    2018-06-01

    Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only exaggerate global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization which enhances both local details and fore- and background contrast. First, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance, in order to improve the contrast of foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are formulated by means of the particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments on real infrared images prove that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantitative evaluations.
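
    A stripped-down sketch of entropy-weighted equalization with scikit-image, leaving out the PSO-derived double plateau thresholds and the foreground/background split:

      # Local-entropy-weighted histogram equalization (no PSO, no plateaus).
      import numpy as np
      from skimage.filters.rank import entropy
      from skimage.morphology import disk

      def entropy_weighted_equalize(img):             # img: uint8 infrared frame
          w = entropy(img, disk(5))                   # local entropy per pixel
          hist = np.bincount(img.ravel(), weights=w.ravel(), minlength=256)
          cdf = np.cumsum(hist)
          lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
          return lut[img]

      frame = (np.random.rand(120, 160) * 255).astype(np.uint8)
      enhanced = entropy_weighted_equalize(frame)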

  18. Binary-space-partitioned images for resolving image-based visibility.

    Science.gov (United States)

    Fu, Chi-Wing; Wong, Tien-Tsin; Tong, Wai-Shun; Tang, Chi-Keung; Hanson, Andrew J

    2004-01-01

    We propose a novel 2D representation for 3D visibility sorting, the Binary-Space-Partitioned Image (BSPI), to accelerate real-time image-based rendering. BSPI is an efficient 2D realization of a 3D BSP tree, which is commonly used in computer graphics for time-critical visibility sorting. Since the overall structure of a BSP tree is encoded in a BSPI, traversing a BSPI is comparable to traversing the corresponding BSP tree. BSPI performs visibility sorting efficiently and accurately in the 2D image space by warping the reference image triangle-by-triangle instead of pixel-by-pixel. Multiple BSPIs can be combined to solve "disocclusion," when an occluded portion of the scene becomes visible at a novel viewpoint. Our method is highly automatic, including a tensor voting preprocessing step that generates candidate image partition lines for BSPIs, filters the noisy input data by rejecting outliers, and interpolates missing information. Our system has been applied to a variety of real data, including stereo, motion, and range images.

  19. Overview of 3-year experience with large-scale electronic portal imaging device-based 3-dimensional transit dosimetry

    NARCIS (Netherlands)

    Mijnheer, Ben J.; González, Patrick; Olaciregui-Ruiz, Igor; Rozendaal, Roel A.; van Herk, Marcel; Mans, Anton

    2015-01-01

    To assess the usefulness of electronic portal imaging device (EPID)-based 3-dimensional (3D) transit dosimetry in a radiation therapy department by analyzing a large set of dose verification results. In our institution, routine in vivo dose verification of all treatments is performed by means of 3D

  20. Description-based reappraisal regulate the emotion induced by erotic and neutral images in a Chinese population.

    Science.gov (United States)

    Peng, Jiaxin; Qu, Chen; Gu, Ruolei; Luo, Yue-Jia

    2012-01-01

    Previous emotion-regulation research has shown that the late positive potential (LPP) is sensitive to the down-regulation of emotion; however, whether LPP is also sensitive to the up-regulation of emotion remains unclear. The present study examined the description-based reappraisal effects on the up-regulation of positive emotions induced by erotic and neutral images in a Chinese population. Self-reported ratings and event-related potential (ERP) were recorded when subjects viewed pleasant and neutral images, which were shown after either a neutral or positive description. Self-reported results showed that images following positive descriptions were rated as more pleasant compared to images following neutral descriptions. ERP results revealed that the P2, P3, and slow wave (SW) components were larger for erotic pictures than for neutral pictures, while the positive description condition yielded attenuated erotic image-induced P2, P3 and SW and increased SW induced by neutral images. The results demonstrated that description-based reappraisal, as a method of reappraisal, significantly modulates the emotional experience and ERP responses to erotic and neutral images.

  2. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Chao Ming; Xie Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing Lei [Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305-5847 and Department of Radiation Oncology, University of Arkansas for Medical Sciences, 4301 W. Markham Street, Little Rock, Arkansas 72205-1799 (United States); Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, University of Arkansas for Medical Sciences, 4301 W. Markham Street, Little Rock, Arkansas 72205-1799 (United States); Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305-5847 (United States)

    2010-05-15

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in the quantitative assessment of therapeutics and the realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Because of significant changes in tumor tissue content, similarity-based models are not suitable for describing the process of tumor volume change. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) autodetection of homologous tissue features shared by two input images using the scale-invariant feature transform (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, in which SIFT features were mapped from template to target images and in reverse. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique faithfully identified the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the "ground truth" with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy.
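
    A rough analogue of the feature-guided workflow in Python: SIFT matches between two slices (OpenCV's cross-check standing in loosely for the bidirectional association) drive a thin plate spline interpolant for the remaining points; SciPy >= 1.7 and placeholder image paths are assumed.

      # SIFT correspondences + thin plate spline interpolation (illustrative).
      import cv2
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      a = cv2.imread("planning_ct.png", cv2.IMREAD_GRAYSCALE)   # placeholders
      b = cv2.imread("followup_ct.png", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      ka, da = sift.detectAndCompute(a, None)
      kb, db = sift.detectAndCompute(b, None)
      matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(da, db)

      src = np.float64([ka[m.queryIdx].pt for m in matches])
      dst = np.float64([kb[m.trainIdx].pt for m in matches])
      tps = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=1.0)

      ys, xs = np.mgrid[0:a.shape[0], 0:a.shape[1]]
      grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(np.float64)
      mapped = tps(grid)                              # dense point correspondence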

  3. Optimization of reference library used in content-based medical image retrieval scheme

    International Nuclear Information System (INIS)

    Park, Sang Cheol; Sukthankar, Rahul; Mummert, Lily; Satyanarayanan, Mahadev; Zheng Bin

    2007-01-01

    Building an optimal image reference library is a critical step in developing the interactive computer-aided detection and diagnosis (I-CAD) systems of medical images using content-based image retrieval (CBIR) schemes. In this study, the authors conducted two experiments to investigate (1) the relationship between I-CAD performance and size of reference library and (2) a new reference selection strategy to optimize the library and improve I-CAD performance. The authors assembled a reference library that includes 3153 regions of interest (ROI) depicting either malignant masses (1592) or CAD-cued false-positive regions (1561) and an independent testing data set including 200 masses and 200 false-positive regions. A CBIR scheme using a distance-weighted K-nearest neighbor algorithm is applied to retrieve references that are considered similar to the testing sample from the library. The area under the receiver operating characteristic curve (Az) is used as an index to evaluate the I-CAD performance. In the first experiment, the authors systematically increased reference library size and tested I-CAD performance. The result indicates that scheme performance improves initially from Az = 0.715 to 0.874 and then plateaus when the library size reaches approximately half of its maximum capacity. In the second experiment, based on the hypothesis that a ROI should be removed if it performs poorly compared to a group of similar ROIs in a large and diverse reference library, the authors applied a new strategy to identify 'poorly effective' references. By removing 174 identified ROIs from the reference library, I-CAD performance significantly increases to Az = 0.914 (p<0.01). The study demonstrates that increasing reference library size and removing poorly effective references can significantly improve I-CAD performance.
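
    The distance-weighted KNN decision scheme can be sketched as follows; the feature dimensionality and the inverse-distance weighting are illustrative assumptions, while the library size of 3153 ROIs matches the abstract.

      # Distance-weighted KNN malignancy score over a reference library.
      import numpy as np

      def malignancy_score(query, feats, labels, k=15):
          d = np.linalg.norm(feats - query, axis=1)
          idx = np.argsort(d)[:k]                     # k most similar references
          w = 1.0 / (d[idx] + 1e-6)                   # closer references weigh more
          return np.sum(w * labels[idx]) / np.sum(w)

      rng = np.random.default_rng(0)
      feats = rng.random((3153, 14))                  # one vector per library ROI
      labels = rng.integers(0, 2, 3153)               # 1 = mass, 0 = false positive
      print("score:", malignancy_score(rng.random(14), feats, labels))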

  4. Radiation therapists' perceptions of the minimum level of experience required to perform portal image analysis

    International Nuclear Information System (INIS)

    Rybovic, Michala; Halkett, Georgia K.; Banati, Richard B.; Cox, Jennifer

    2008-01-01

    Background and purpose: Our aim was to explore radiation therapists' views on the level of experience necessary to undertake portal image analysis and clinical decision making. Materials and methods: A questionnaire was developed to determine the availability of portal imaging equipment in Australia and New Zealand. We analysed radiation therapists' responses to a specific question regarding their opinion on the minimum level of experience required for health professionals to analyse portal images. We used grounded theory and a constant comparative method of data analysis to derive the main themes. Results: Forty-six radiation oncology facilities were represented in our survey, with 40 questionnaires being returned (87%). Thirty-seven radiation therapists answered our free-text question. Radiation therapists indicated three main themes which they felt were important in determining the minimum level of experience: 'gaining on-the-job experience', 'receiving training' and 'working as a team'. Conclusions: Radiation therapists indicated that competence in portal image review occurs via various learning mechanisms. Further research is warranted to determine perspectives of other health professionals, such as radiation oncologists, on portal image review becoming part of radiation therapists' extended role. Suitable training programs and steps for implementation should be developed to facilitate this endeavour

  5. Preliminary experiment of fast neutron imaging with direct-film method

    International Nuclear Information System (INIS)

    Pei Yuyang; Tang Guoyou; Guo Zhiyu; Zhang Guohui

    2005-01-01

    A preliminary experiment on fast neutron imaging with the direct-film method is conducted, with fast neutrons below 7 MeV generated by the ⁹Be(d, n) reaction on the Beijing University 4.5 MV Van de Graaff accelerator. The basic characteristics of the direct-film neutron radiography system are investigated with the help of samples of different materials and thicknesses and holes of different diameters. The fast neutron converter, which is vital for fast neutron imaging, is produced with materials made in China. The results indicate that the fast neutron converter can meet the requirements of fast neutron imaging, and that further research on fast neutron imaging can be conducted on accelerators and neutron generators in China. (authors)

  6. Simulation of Specular Surface Imaging Based on Computer Graphics: Application on a Vision Inspection System

    Directory of Open Access Journals (Sweden)

    Seulin Ralph

    2002-01-01

    Full Text Available This work aims at detecting surface defects on reflective industrial parts. A machine vision system performing the detection of geometric-aspect surface defects is completely described. Defects are revealed by a particular lighting device, carefully designed to ensure that defects are imaged. The lighting system greatly simplifies the image processing for defect segmentation, so real-time inspection of reflective products is possible. To assist in the design of imaging conditions, a complete simulation is proposed. The simulation, based on computer graphics, enables the rendering of realistic images and provides a very efficient way to perform tests compared with numerous manual experiments.

  7. Comparison on Integer Wavelet Transforms in Spherical Wavelet Based Image Based Relighting

    Institute of Scientific and Technical Information of China (English)

    WANG Ze; LEE Yin; LEUNG Chising; WONG Tientsin; ZHU Yisheng

    2003-01-01

    To provide good-quality rendering in an image-based relighting (IBL) system, a tremendous number of reference images under various illumination conditions is needed; data compression is therefore essential to enable interactive use, and rendering speed is another crucial consideration for real applications. Based on the spherical wavelet transform (SWT), this paper presents a fast representation method using the integer wavelet transform (IWT) for the IBL system. It focuses on comparing different IWTs combined with the embedded zerotree wavelet (EZW) coder used in the IBL system. The whole compression procedure contains two major steps. First, SWT is applied to exploit the correlation among different reference images. Second, the SW-transformed images are compressed with an IWT-based image compression approach. Two IWTs are used, and good results are shown in the simulations.
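
    As a toy illustration of a reversible integer wavelet transform (one lifting level of the integer Haar, or S-, transform), not the paper's specific SWT/EZW pipeline:

      # Integer Haar (S-) transform via lifting: exact integer reconstruction.
      import numpy as np

      def int_haar_forward(x):                        # x: 1-D ints, even length
          a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
          d = a - b                                   # detail coefficients
          s = b + (d >> 1)                            # approximation coefficients
          return s, d

      def int_haar_inverse(s, d):
          b = s - (d >> 1)
          a = d + b
          out = np.empty(2 * len(s), dtype=np.int64)
          out[0::2], out[1::2] = a, b
          return out

      x = np.array([12, 200, 31, 47, 99, 3, 64, 64])
      s, d = int_haar_forward(x)
      assert np.array_equal(int_haar_inverse(s, d), x)  # perfect reconstruction

    Because each lifting step is individually invertible in integer arithmetic, the transform is lossless, which is what makes IWTs attractive for compressing large reference-image sets.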

  8. Distinguishing Computer-Generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning

    Directory of Open Access Journals (Sweden)

    Ye Yao

    2018-04-01

    Full Text Available Computer-generated graphics (CGs) are images generated by computer software. The rapid development of computer graphics technologies has made it easier to generate photorealistic computer graphics, and these graphics are quite difficult to distinguish from natural images (NIs) with the naked eye. In this paper, we propose a method based on sensor pattern noise (SPN) and deep learning to distinguish CGs from NIs. Before being fed into our convolutional neural network (CNN)-based model, these images (CGs and NIs) are clipped into image patches. Furthermore, three high-pass filters (HPFs) are used to remove low-frequency signals, which represent the image content. These filters are also used to reveal the residual signal as well as the SPN introduced by the digital camera device. Different from traditional methods of distinguishing CGs from NIs, the proposed method utilizes a five-layer CNN to classify the input image patches. Based on the classification results of the image patches, we deploy a majority vote scheme to obtain the classification results for the full-size images. The experiments have demonstrated that (1) the proposed method with three HPFs can achieve better results than that with only one HPF or no HPF and that (2) the proposed method with three HPFs achieves 100% accuracy, even when the NIs undergo JPEG compression with a quality factor of 75.

  9. 3D fingerprint imaging system based on full-field fringe projection profilometry

    Science.gov (United States)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the fingerprint. However, a fingerprint is a 3D biological characteristic: the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. It is therefore becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system based on the fringe projection technique is presented to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers is projected onto the finger surface. Viewed from another direction, the fringe patterns deformed by the finger surface are captured by a CCD camera, and 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies a prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments acquiring several 3D fingerprint datasets were carried out, and the results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
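
    A minimal sketch of the fringe-analysis core, assuming standard three-step phase shifting with 120-degree shifts; the paper's optimum three-fringe-number selection and system calibration are not reproduced.

        import numpy as np

        def wrapped_phase(i1, i2, i3):
            # Wrapped phase in (-pi, pi] from fringes shifted by -120, 0 and +120 degrees.
            return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

        x = np.linspace(0.0, 4.0 * np.pi, 512)           # toy phase ramp standing in for finger depth
        shifts = (-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0)
        i1, i2, i3 = (128.0 + 100.0 * np.cos(x + s) for s in shifts)
        phi = wrapped_phase(i1, i2, i3)                  # equals x wrapped into (-pi, pi]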

  10. Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD

    Science.gov (United States)

    Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun

    2017-12-01

    This paper focuses on spotlight synthetic aperture radar (SAR) imaging of point-scattering targets based on tensor modeling. In real-world scenarios, scatterers usually follow a block-sparse distribution, a structural feature that previous studies of SAR imaging have scarcely utilized. Our work takes advantage of this property of the target scene by constructing a multi-linear sparse reconstruction algorithm for SAR imaging, introducing multi-linear block sparsity into the higher-order singular value decomposition (SVD) together with a dictionary-construction procedure. Simulation experiments on ideal point targets show the robustness of the proposed algorithm to the noise and sidelobe disturbance that always degrade the imaging quality of conventional methods. The computational resource requirements are further investigated in this paper: the complexity analysis shows that the present method consumes fewer resources than the classic matching pursuit method. Imaging results on practical measured data also demonstrate the effectiveness of the algorithm developed in this paper.
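
    A minimal NumPy sketch of the higher-order SVD underlying the method (a Tucker core plus one factor matrix per mode); the block-sparse dictionary construction and the SAR-specific processing are not reproduced here.

        import numpy as np

        def unfold(t, mode):
            # Mode-n unfolding: move the mode axis first, flatten the rest.
            return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

        def hosvd(t):
            # Factor matrices from the SVD of each unfolding, then the Tucker core.
            factors = [np.linalg.svd(unfold(t, m), full_matrices=False)[0]
                       for m in range(t.ndim)]
            core = t
            for m, u in enumerate(factors):
                core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, m, 0), axes=1), 0, m)
            return core, factors

        t = np.random.rand(8, 9, 10)
        core, factors = hosvd(t)
        rec = core
        for m, u in enumerate(factors):                  # multiply the core back out
            rec = np.moveaxis(np.tensordot(u, np.moveaxis(rec, m, 0), axes=1), 0, m)
        assert np.allclose(rec, t)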

  11. High-speed MRF-based segmentation algorithm using pixonal images

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Hassanpour, H.; Naimi, H. M.

    2013-01-01

    Segmentation is one of the most complicated procedures in image processing and plays an important role in image analysis. In this paper, an improved pixon-based method for image segmentation is proposed, in which complex partial differential equations (PDEs) are used as a kernel function to build the pixonal image. Using this kernel function reduces image noise and prevents over-segmentation when the pixon-based method is applied. The PDE-based step eliminates unnecessary detail, resulting in a smaller number of pixons, faster performance, and more robustness against unwanted environmental noise. As the next step, the appropriate pixons are extracted and, eventually, the image is segmented using a Markov random field. The experimental results indicate that the proposed pixon-based approach has a reduced computational load...

  12. Edge-Based Image Compression with Homogeneous Diffusion

    Science.gov (United States)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
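
    A minimal sketch of the decoding-side reconstruction, assuming Jacobi iterations toward the steady state of homogeneous diffusion, i.e. a discrete Laplace equation with the stored edge data held fixed; boundaries are treated as periodic here for brevity.

        import numpy as np

        def diffusion_inpaint(img, known_mask, iters=2000):
            # Jacobi iteration toward the Laplace steady state; known pixels stay fixed.
            u = img.astype(float).copy()
            u[~known_mask] = u[known_mask].mean()        # neutral start for the unknowns
            for _ in range(iters):
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                              + np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u[~known_mask] = avg[~known_mask]        # update only the unknown pixels
            return u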

  13. Multi-clues image retrieval based on improved color invariants

    Science.gov (United States)

    Liu, Liu; Li, Jian-Xun

    2012-05-01

    At present, image retrieval has made great progress in indexing efficiency and memory usage, mainly thanks to text-retrieval technology such as the bag-of-features (BOF) model and the inverted-file structure. Because robust local feature invariants are selected to build the BOF, its retrieval precision is high, especially when applied to a large-scale database. However, these local feature invariants mainly capture the geometric variation of the objects in the images, so the color information of the objects goes unused. With the development of information technology and the Internet, the majority of retrieval targets are color images, so retrieval performance can be further improved through proper use of the color information. We propose an improved method based on an analysis of the flaws of the shadow-shading quasi-invariant, enhancing its response at object edges under varying lighting. The color descriptors of the invariant regions are extracted and integrated into the BOF built on local features. The final experiments verify the robustness of the algorithm and the improvement in performance.

  14. Bodily Deviations and Body Image in Adolescence

    Science.gov (United States)

    Vilhjalmsson, Runar; Kristjansdottir, Gudrun; Ward, Dianne S.

    2012-01-01

    Adolescents with unusually sized or shaped bodies may experience ridicule, rejection, or exclusion based on their negatively valued bodily characteristics. Such experiences can have negative consequences for a person's image and evaluation of self. This study focuses on the relationship between bodily deviations and body image and is based on a…

  15. Optical image encryption system using nonlinear approach based on biometric authentication

    Science.gov (United States)

    Verma, Gaurav; Sinha, Aloka

    2017-07-01

    A nonlinear image encryption scheme using a phase-truncated Fourier transform (PTFT) and natural logarithms is proposed in this paper. With the help of the PTFT, the input image is truncated into phase and amplitude parts at the Fourier plane. The phase-only information is kept as the secret key for decryption, and the amplitude distribution is modulated by adding an undercover amplitude random mask in the encryption process. Furthermore, the encrypted data is hidden inside the face-biometric-based phase mask key, generated through principal component analysis, using the base-changing rule of logarithms for secure transmission. Numerical experiments show the feasibility and validity of the proposed nonlinear scheme. The performance of the proposed scheme has been studied against brute-force attacks and the amplitude-phase retrieval attack. Simulation results illustrate the enhanced system performance and its advantages over the linear cryptosystem.
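
    A minimal sketch of the phase-truncation step at the Fourier plane, which splits the spectrum into an amplitude part and a phase-only key; the random amplitude mask and the logarithm-based hiding in the biometric phase mask are not reproduced.

        import numpy as np

        def phase_truncate(img):
            # Split the Fourier spectrum into amplitude (transmitted) and phase (secret key).
            F = np.fft.fft2(img)
            return np.abs(F), np.angle(F)

        img = np.random.rand(64, 64)
        amplitude, phase_key = phase_truncate(img)
        decrypted = np.fft.ifft2(amplitude * np.exp(1j * phase_key)).real
        assert np.allclose(decrypted, img)               # the phase key restores the image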

  16. Mapping Fire Severity Using Imaging Spectroscopy and Kernel Based Image Analysis

    Science.gov (United States)

    Prasad, S.; Cui, M.; Zhang, Y.; Veraverbeke, S.

    2014-12-01

    Improved spatial representation of within-burn heterogeneity after wildfires is paramount to effective land management decisions and more accurate fire emissions estimates. In this work, we demonstrate the feasibility and efficacy of airborne imaging spectroscopy (hyperspectral imagery) for quantifying wildfire burn severity using kernel-based image analysis techniques. Two airborne hyperspectral datasets, acquired over the 2011 Canyon and 2013 Rim fires in California using the Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) sensor, were used in this study. The Rim Fire, covering parts of Yosemite National Park, started on August 17, 2013, and was the third largest fire in California's history. The Canyon Fire occurred in the Tehachapi mountains and started on September 4, 2011. In addition to post-fire data for both fires, half of the Rim fire was also covered with pre-fire images. Fire severity was measured in the field using the Geo Composite Burn Index (GeoCBI). The field data were used to train and validate our models, and the trained models, in conjunction with the imaging spectroscopy data, were used for GeoCBI estimation over wide geographical regions. This work presents an approach that combines remotely sensed imagery with GeoCBI field data to map fire scars using a non-linear (kernel-based) epsilon-Support Vector Regression (e-SVR), which learns the relationship between spectra and GeoCBI in a kernel-induced feature space. Classification of healthy vegetation versus fire-affected areas based on morphological multi-attribute profiles was also studied. The availability of pre- and post-fire imaging spectroscopy data over the Rim Fire provided a unique opportunity to evaluate the performance of bi-temporal imaging spectroscopy for assessing post-fire effects. This type of data is currently constrained because of limited airborne acquisitions before a fire, but will become widespread with future spaceborne sensors such as those on
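
    A minimal sketch of the kernel regression step using scikit-learn, with random arrays standing in for plot spectra and GeoCBI scores; the band choices, hyperparameters and validation protocol are assumptions.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.random((200, 50))                        # stand-in for 50-band plot spectra
        y = rng.random(200) * 3.0                        # stand-in for GeoCBI scores in [0, 3]

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
        model.fit(X, y)
        geocbi_pred = model.predict(X)                   # in practice: predict over the full scene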

  17. Novel active contour model based on multi-variate local Gaussian distribution for local segmentation of MR brain images

    Science.gov (United States)

    Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong

    2017-12-01

    The active contour model (ACM) has been one of the most widely used methods in magnetic resonance (MR) brain image segmentation because of its ability to capture topology changes. However, most existing ACMs consider only single-slice information in MR brain image data; that is, the information used in ACM-based segmentation is extracted from only one slice of the MR brain image, which cannot take full advantage of the information in adjacent slices and cannot satisfy the local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve this problem; it is based on a multi-variate local Gaussian distribution and combines the information of adjacent slices in the MR brain image data. The segmentation is finally achieved by maximizing the likelihood estimation. Experiments demonstrate the advantages of the proposed ACM over the single-slice ACM in local segmentation of MR brain image series.

  18. Content-based image retrieval applied to bone age assessment

    Science.gov (United States)

    Fischer, Benedikt; Brosig, André; Welter, Petra; Grouls, Christoph; Günther, Rolf W.; Deserno, Thomas M.

    2010-03-01

    Radiological bone age assessment is based on local image regions of interest (ROI), such as the epiphysis or the area of carpal bones. These are compared to a standardized reference, and scores determining the skeletal maturity are calculated. For computer-aided diagnosis, automatic ROI extraction and analysis has so far been done mainly by heuristic approaches. Due to high variation in the imaged biological material and differences in age, gender and ethnic origin, automatic analysis is difficult and frequently requires manual interaction. Alternatively, epiphyseal regions (eROIs) can be compared to previous cases of known age by content-based image retrieval (CBIR). This requires a sufficient number of cases with reliably positioned eROI centers. In this first approach to bone age assessment by CBIR, we conduct leave-one-out experiments on 1,102 left-hand radiographs and 15,428 metacarpal and phalangeal eROIs from the USC hand atlas. The similarity of the eROIs is assessed by cross-correlation of 16x16 scaled eROIs. The effects of the number of eROIs, two age computation methods, and the number of considered CBIR references are analyzed. The best results yield an error rate of 1.16 years and a standard deviation of 0.85 years. As the appearance of the hand naturally varies by up to two years, these results clearly demonstrate the applicability of the CBIR approach to bone age estimation.
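
    A minimal sketch of the retrieval step, assuming normalised cross-correlation of 16x16-rescaled eROIs and an age estimate averaged over the k best-matching references; the helper names and the averaging rule are illustrative.

        import numpy as np

        def ncc(a, b):
            # Normalised cross-correlation of two equally sized eROI arrays.
            a = (a - a.mean()) / (a.std() + 1e-9)
            b = (b - b.mean()) / (b.std() + 1e-9)
            return float((a * b).mean())

        def estimate_age(query_eroi, ref_erois, ref_ages, k=5):
            # Age as the mean over the k references most similar to the query eROI.
            scores = np.array([ncc(query_eroi, r) for r in ref_erois])
            top = np.argsort(scores)[-k:]
            return float(np.mean(np.asarray(ref_ages)[top]))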

  19. Video-based noncooperative iris image segmentation.

    Science.gov (United States)

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.

  20. Image magnification based on similarity analogy

    International Nuclear Information System (INIS)

    Chen Zuoping; Ye Zhenglin; Wang Shuxun; Peng Guohua

    2009-01-01

    To address the high time complexity of the decoding phase in traditional image enlargement methods based on fractal coding, a novel image magnification algorithm is proposed in this paper. It has the advantage of iteration-free decoding, achieved by exploiting the similarity analogy between an image and its zoomed-out and zoomed-in versions. A new pixel selection technique is also presented to further improve the performance of the proposed method. Furthermore, by combining existing fractal zooming techniques, an efficient image magnification algorithm is obtained that provides image quality as good as the state of the art while greatly decreasing the time complexity of the decoding phase.

  1. Complex adaptation-based LDR image rendering for 3D image reconstruction

    Science.gov (United States)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming the brightness dimming of the 3D display mode. The 3D surround imposes varying conditions on image quality: illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization perform well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost in bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround, combining global and local adaptation. Evaluating local image rendering in terms of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  2. A cyanobenzo[a]phenoxazine-based near infrared lysosome-tracker for in cellulo imaging.

    Science.gov (United States)

    Sun, Ru; Liu, Wu; Xu, Yu-Jie; Lu, Jian-Mei; Ge, Jian-Feng; Ihara, Masataka

    2013-11-25

    A cyanobenzo[a]phenoxazine-based pH probe with pKa = 5.0 exhibits OFF-ON emission at 625-850 nm upon excitation at 600 nm in aqueous buffers. The in cellulo imaging experiments with HeLa cells indicate that the probe can serve as a lysosome-specific probe under red light excitation (633 nm) with near infrared emission (650-790 nm).

  3. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    Science.gov (United States)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, an important direction in remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missed points, which limits the final classification accuracy. In this paper, we select Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. Three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform, and Brovey transform) are compared, and the best fused image is selected for the classification experiment. In the classification process, four classification algorithms (Minimum distance, Mahalanobis distance, Support Vector Machine, and ISODATA) are compared. Overall classification precision and the Kappa coefficient are used as the classification accuracy criteria, and the four classification results on the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classifiers, the fused image is best suited to Support Vector Machine classification, with an overall classification precision of 94.01% and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method improves the accuracy and stability of remote sensing image classification.
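
    For reference, the two reported accuracy criteria can be computed from a confusion matrix as follows (a plain NumPy sketch; the matrix values are made up).

        import numpy as np

        def overall_accuracy_and_kappa(cm):
            # cm[i, j]: number of samples of true class i assigned to class j.
            cm = np.asarray(cm, dtype=float)
            n = cm.sum()
            po = np.trace(cm) / n                                # observed agreement
            pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
            return po, (po - pe) / (1.0 - pe)

        cm = np.array([[50, 2, 1], [3, 45, 4], [0, 5, 40]])      # made-up confusion matrix
        accuracy, kappa = overall_accuracy_and_kappa(cm)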

  4. A Variable Order Fractional Differential-Based Texture Enhancement Algorithm with Application in Medical Imaging.

    Directory of Open Access Journals (Sweden)

    Qiang Yu

    Texture enhancement is one of the most important techniques in digital image processing and plays an essential role in medical imaging, since texture carries discriminative information. Most image texture enhancement techniques use classical integer-order differential mask operators or fractional differential mask operators with a fixed fractional order. These masks can produce excessive enhancement of low-spatial-frequency content, insufficient enhancement of high-spatial-frequency content, and retention of high-spatial-frequency noise. To improve upon existing approaches to texture enhancement, we derive an improved Variable Order Fractional Centered Difference (VOFCD) scheme, which dynamically adjusts the fractional differential order instead of fixing it. The new VOFCD technique is based on the second-order Riesz fractional differential operator with a Lagrange 3-point interpolation formula, for both grey-scale and colour image enhancement. We then use this method to enhance photographs and a set of medical images from patients with stroke and Parkinson's disease. The experiments show that the improved fractional differential mask achieves a higher signal-to-noise ratio than the other fractional differential mask operators. Based on the corresponding quantitative analysis, we conclude that the new method offers superior texture enhancement over existing methods.
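
    As a related illustration only: fractional differential masks are typically built from binomial-type weights. The sketch below computes Grünwald-Letnikov weights for a given order v; the paper's Riesz fractional centred difference with Lagrange 3-point interpolation is a different construction and is not reproduced here.

        import numpy as np

        def gl_weights(v, n_terms):
            # c_k = (-1)^k * binomial(v, k), via the recurrence c_k = c_{k-1} * (k - 1 - v) / k.
            c = np.empty(n_terms)
            c[0] = 1.0
            for k in range(1, n_terms):
                c[k] = c[k - 1] * (k - 1.0 - v) / k
            return c

        print(gl_weights(0.5, 5))   # order v = 0.5: [1.0, -0.5, -0.125, -0.0625, ...]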

  5. Fast image matching algorithm based on projection characteristics

    Science.gov (United States)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies targets through one-dimensional correlation; moreover, because normalization is performed, correct matching is preserved when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while maintaining matching accuracy.
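
    A minimal sketch of the projection idea, assuming row/column sums followed by normalised 1D correlation so that proportional brightness changes do not affect the score; the function names are illustrative.

        import numpy as np

        def projections(img):
            # Collapse the 2D image into row-sum and column-sum profiles.
            return img.sum(axis=1).astype(float), img.sum(axis=0).astype(float)

        def norm_corr(a, b):
            # Normalised correlation: invariant to proportional amplitude changes.
            a = a - a.mean()
            b = b - b.mean()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def match_score(window, template):
            wr, wc = projections(window)
            tr, tc = projections(template)
            return 0.5 * (norm_corr(wr, tr) + norm_corr(wc, tc))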

  6. TSV last for hybrid pixel detectors: Application to particle physics and imaging experiments

    CERN Document Server

    Henry, D; Berthelot, A; Cuchet, R; Chantre, C; Campbell, M

    Hybrid pixel detectors are now widely used in particle physics experiments and at synchrotron light sources. They have also stimulated growing interest in other fields, in particular medical imaging. Through the continuous pursuit of miniaturization in CMOS, it has been possible to increase the functionality per pixel while maintaining or even shrinking pixel dimensions. The main constraints on more extensive use of the technology in all fields are the cost of module building and the difficulty of covering large areas seamlessly [1]. On the other hand, in the field of electronic component integration, a new approach called 3D integration has been developed in recent years. This concept, based on using the vertical axis for component integration, improves the global performance of complex systems. Thanks to this technology, the cost and form factor of components can be decreased and the performance of the global system enhanced. In the field of radiation imaging detectors the a...

  7. ℓ0 Gradient Minimization Based Image Reconstruction for Limited-Angle Computed Tomography.

    Directory of Open Access Journals (Sweden)

    Wei Yu

    In medical and industrial applications of computed tomography (CT) imaging, limited by the scanning environment and the risk of excessive X-ray radiation exposure to patients, reconstructing high-quality CT images from limited projection data has become a hot topic. X-ray imaging over a limited scanning angular range is an effective imaging modality for reducing the radiation dose to patients. Because the projection data available in this modality are incomplete, limited-angle CT image reconstruction is an ill-posed inverse problem. Images reconstructed by the conventional filtered back projection (FBP) algorithm frequently exhibit conspicuous streak artifacts and gradually changing artifacts near edges. Image reconstruction based on total variation minimization (TVM) can significantly reduce streak artifacts in few-view CT, but it still suffers from the gradually changing artifacts near edges in limited-angle CT. To suppress this kind of artifact, we develop an image reconstruction algorithm based on ℓ0 gradient minimization for limited-angle CT in this paper. The ℓ0-norm of the image gradient is taken as the regularization function in the framework of the developed reconstruction model. We transform the optimization problem into several optimization sub-problems and then solve these sub-problems by alternating iteration. Numerical experiments are performed to validate the efficiency and feasibility of the developed algorithm. Statistical analysis of the performance measures, peak signal-to-noise ratio (PSNR) and normalized root mean square distance (NRMSD), shows significant statistical differences between different algorithms across scanning angular ranges (p<0.0001). The experimental results also indicate that the developed algorithm outperforms classical reconstruction algorithms in suppressing the streak artifacts and the gradually changing

  8. Jet-Based Local Image Descriptors

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo; Darkner, Sune; Dahl, Anders Lindbjerg

    2012-01-01

    We present a general novel image descriptor based on higher-order differential geometry and investigate the effect of common descriptor choices. Our investigation is twofold: we develop a jet-based descriptor and perform a comparative evaluation with current state-of-the-art descriptors on ...

  9. Extracting flat-field images from scene-based image sequences using phase correlation

    Energy Technology Data Exchange (ETDEWEB)

    Caron, James N., E-mail: Caron@RSImd.com [Research Support Instruments, 4325-B Forbes Boulevard, Lanham, Maryland 20706 (United States)]; Montes, Marcos J. [Naval Research Laboratory, Code 7231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States)]; Obermark, Jerome L. [Naval Research Laboratory, Code 8231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States)]

    2016-06-15

    Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of focal plane array electronics and for unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface; the flat-field image is then normalized and removed from the images. There are circumstances, such as in remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase correlation image registration to align the sequence and estimate the static scene. The scene is then removed from the sequence, producing a sequence of misaligned flat-field images, and an average flat-field image is derived from the realigned flat-field sequence.
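
    A minimal sketch of the registration core: phase correlation via the normalised cross-power spectrum, whose inverse FFT peaks at the integer displacement; the sub-pixel refinement used in the method is not reproduced.

        import numpy as np

        def phase_correlation_shift(img_a, img_b):
            # Peak of the inverse FFT of the normalised cross-power spectrum.
            cross = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
            cross /= np.abs(cross) + 1e-12
            corr = np.fft.ifft2(cross).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            if dy > img_a.shape[0] // 2:                 # map peak position to signed shifts
                dy -= img_a.shape[0]
            if dx > img_a.shape[1] // 2:
                dx -= img_a.shape[1]
            return dy, dx

        a = np.random.rand(128, 128)
        b = np.roll(a, (5, -7), axis=(0, 1))
        print(phase_correlation_shift(b, a))             # -> (5, -7)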

  10. Target Identification Using Harmonic Wavelet Based ISAR Imaging

    Science.gov (United States)

    Shreyamsha Kumar, B. K.; Prabhakar, B.; Suryanarayana, K.; Thilagavathi, V.; Rajagopal, R.

    2006-12-01

    A new approach is proposed to reduce the computations involved in ISAR imaging by using a harmonic wavelet (HW)-based time-frequency representation (TFR). Since the HW-based TFR is a nonparametric time-frequency (T-F) analysis tool, it is computationally efficient compared with parametric T-F analysis tools such as the adaptive joint time-frequency transform (AJTFT), the adaptive wavelet transform (AWT), and the evolutionary AWT (EAWT). Further, the performance of the proposed method of ISAR imaging is compared with ISAR imaging by other nonparametric T-F analysis tools such as the short-time Fourier transform (STFT) and the Choi-Williams distribution (CWD). In ISAR imaging, the use of the HW-based TFR provides similar or better results with a significant (92%) computational advantage over the CWD. The ISAR images thus obtained are identified using a neural-network-based classification scheme with a feature set invariant to translation, rotation, and scaling.

  11. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications

    Directory of Open Access Journals (Sweden)

    Byeong Hak Kim

    2017-12-01

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC.

  12. Stokes vector based interpolation method to improve the efficiency of bio-inspired polarization-difference imaging in turbid media

    Science.gov (United States)

    Guan, Jinge; Ren, Wei; Cheng, Yaoyu

    2018-04-01

    We demonstrate an efficient polarization-difference imaging system for turbid conditions by using the Stokes vector of light. The interaction of scattered light with the polarizer is analyzed using the Stokes-Mueller formalism. An interpolation method is proposed to replace the mechanical rotation of the polarization axis of the analyzer, and its performance is verified experimentally at different turbidity levels. We show that, compared with direct imaging, the Stokes-vector-based imaging method can effectively reduce the effect of light scattering and enhance the image contrast.
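
    A minimal sketch of the idea, assuming an ideal linear analyser: the linear Stokes components are measured at a few analyser angles, after which the intensity at any analyser angle, and hence the polarization-difference image, follows algebraically instead of by mechanical rotation.

        import numpy as np

        def linear_stokes(i0, i45, i90, i135):
            # Linear Stokes components from four analyser angles (degrees).
            return i0 + i90, i0 - i90, i45 - i135        # S0, S1, S2

        def intensity_at(theta, s0, s1, s2):
            # Ideal linear analyser at angle theta (radians): Malus-law algebra.
            return 0.5 * (s0 + s1 * np.cos(2 * theta) + s2 * np.sin(2 * theta))

        def polarization_difference(theta, s0, s1, s2):
            # Co-polarised minus cross-polarised intensity, for any analyser angle.
            return intensity_at(theta, s0, s1, s2) - intensity_at(theta + np.pi / 2, s0, s1, s2)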

  13. Dynamic imaging through turbid media based on digital holography.

    Science.gov (United States)

    Li, Shiping; Zhong, Jingang

    2014-03-01

    Imaging through turbid media using visible or IR light, instead of harmful X-rays, is still a challenging problem, especially for dynamic imaging. A method of dynamic imaging through turbid media using digital holography is presented. To match the coherence length between the dynamic object wave and the reference wave, a CW laser is used. To overcome the difficulty of focusing when imaging through turbid media, an autofocus technology is applied, and to further enhance the image contrast, a spatial filtering technique is used. A description of digital holography and experiments on imaging objects hidden in turbid media are presented. The experimental results show that dynamic images of the objects can be obtained by the use of digital holography.

  14. Image Registration-Based Bolt Loosening Detection of Steel Joints.

    Science.gov (United States)

    Kong, Xiangxiong; Li, Jian

    2018-03-28

    Self-loosening of bolts caused by repetitive loads and vibrations is one of the common defects that can weaken the structural integrity of bolted steel joints in civil structures. Many existing approaches for detecting loosening bolts are based on physical sensors and, hence, require extensive sensor deployment, which limit their abilities to cost-effectively detect loosened bolts in a large number of steel joints. Recently, computer vision-based structural health monitoring (SHM) technologies have demonstrated great potential for damage detection due to the benefits of being low cost, easy to deploy, and contactless. In this study, we propose a vision-based non-contact bolt loosening detection method that uses a consumer-grade digital camera. Two images of the monitored steel joint are first collected during different inspection periods and then aligned through two image registration processes. If the bolt experiences rotation between inspections, it will introduce differential features in the registration errors, serving as a good indicator for bolt loosening detection. The performance and robustness of this approach have been validated through a series of experimental investigations using three laboratory setups including a gusset plate on a cross frame, a column flange, and a girder web. The bolt loosening detection results are presented for easy interpretation such that informed decisions can be made about the detected loosened bolts.

  15. Single image super-resolution based on convolutional neural networks

    Science.gov (United States)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

    We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) and high-resolution (HR) images, represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses five convolution layers with kernel sizes of 5×5, 3×3 and 1×1. In the proposed network, we use residual learning and combine convolution kernels of different sizes in the same layer. The experimental results show that the proposed method outperforms existing methods on benchmark images, both in reconstruction quality indices and in human visual assessment.
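
    A hedged PyTorch sketch of the described network shape: five convolution layers mixing 5x5, 3x3 and 1x1 kernels with global residual learning on a bicubic-upscaled input; the channel counts and the single-channel input are assumptions.

        import torch
        import torch.nn as nn

        class SISRNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.body = nn.Sequential(               # five layers, mixed kernel sizes
                    nn.Conv2d(1, 64, 5, padding=2), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(32, 16, 1), nn.ReLU(inplace=True),
                    nn.Conv2d(16, 1, 3, padding=1),
                )

            def forward(self, x):                        # x: bicubic-upscaled LR image
                return x + self.body(x)                  # global residual learning

        out = SISRNet()(torch.randn(1, 1, 48, 48))       # -> torch.Size([1, 1, 48, 48])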

  16. Sociocultural Experiences of Bulimic and Non-Bulimic Adolescents in a School-Based Chinese Sample

    Science.gov (United States)

    Jackson, Todd; Chen, Hong

    2010-01-01

    From a large school-based sample (N = 3,084), 49 Mainland Chinese adolescents (31 girls, 18 boys) who endorsed all DSM-IV criteria for bulimia nervosa (BN) or sub-threshold BN and 49 matched controls (31 girls, 18 boys) completed measures of demographics and sociocultural experiences related to body image. Compared to less symptomatic peers, those…

  17. Improved image retrieval based on fuzzy colour feature vector

    Science.gov (United States)

    Ben-Ahmeida, Ahlam M.; Ben Sasi, Ahmed Y.

    2013-03-01

    One image indexing technique is content-based image retrieval (CBIR), an efficient way of retrieving images from an image database automatically, based on visual content such as colour, texture, and shape. This paper discusses a CBIR method built on colour feature extraction and similarity checking: the query image and all images in the database are divided into pieces, the features of each part are extracted separately, and the corresponding portions are compared in order to increase retrieval accuracy. The proposed approach is based on fuzzy sets, to overcome the curse of dimensionality. The colour contribution of each pixel is associated with all the bins in the histogram using fuzzy-set membership functions. As a result, the Fuzzy Colour Histogram (FCH) outperformed the Conventional Colour Histogram (CCH) in image retrieval, returning results quickly and representing images as signatures that take less memory, depending on the number of divisions. The results also showed that the FCH is less sensitive and more robust to brightness changes than the CCH, with better retrieval recall values.
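
    A minimal sketch of the fuzzy colour histogram idea for one channel, assuming triangular membership functions centred on evenly spaced bins so that each pixel contributes to its two nearest bins; the bin layout is an illustrative assumption.

        import numpy as np

        def fuzzy_histogram(channel, n_bins=16):
            # Triangular memberships: each pixel contributes to its two nearest bins.
            centers = np.linspace(0, 255, n_bins)
            width = centers[1] - centers[0]
            memb = np.clip(1.0 - np.abs(channel.reshape(-1, 1) - centers) / width, 0.0, 1.0)
            hist = memb.sum(axis=0)
            return hist / hist.sum()                     # normalised fuzzy colour histogram

        img = np.random.randint(0, 256, size=(64, 64))
        print(fuzzy_histogram(img).round(3))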

  1. Canny edge-based deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Huang, Yihui; Mao, Weihua; Yuan, Baohong; Tang, Liping

    2017-02-07

    This work focuses on developing a 2D Canny edge-based deformable image registration (Canny DIR) algorithm to register in vivo white light images taken at various time points. This method uses a sparse interpolation deformation algorithm to sparsely register regions of the image with strong edge information. A stability criterion is enforced which removes regions of edges that do not deform in a smooth uniform manner. Using a synthetic mouse surface ground truth model, the accuracy of the Canny DIR algorithm was evaluated under axial rotation in the presence of deformation. The accuracy was also tested using fluorescent dye injections, which were then used for gamma analysis to establish a second ground truth. The results indicate that the Canny DIR algorithm performs better than rigid registration, intensity corrected Demons, and distinctive features for all evaluation matrices and ground truth scenarios. In conclusion Canny DIR performs well in the presence of the unique lighting and shading variations associated with white-light-based image registration.

  2. Microprocessor based image processing system

    International Nuclear Information System (INIS)

    Mirza, M.I.; Siddiqui, M.N.; Rangoonwala, A.

    1987-01-01

    Rapid developments in the production of integrated circuits and the introduction of sophisticated 8-, 16- and now 32-bit microprocessor-based computers have set new trends in computer applications. Nowadays, by investing much less money, users can make optimal use of smaller systems custom-tailored to their requirements. During the past decade there have been great advances in the field of computer graphics and, consequently, 'image processing' has emerged as a separate, independent field. Image processing is used in a number of disciplines. In the medical sciences, it is used to construct pseudo-colour images from computer-aided tomography (CAT) or positron emission tomography (PET) scanners. Art, advertising and publishing people use pseudo colours in pursuit of more effective graphics. Structural engineers use image processing to examine weld X-rays in the search for imperfections. Photographers use image processing for various enhancements that are difficult to achieve in a conventional darkroom.

  3. LSB Based Quantum Image Steganography Algorithm

    Science.gov (United States)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

    Quantum steganography is the technique of hiding a secret message in quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed, based on the novel enhanced quantum representation (NEQR) of quantum images. One algorithm is plain LSB, which uses the message bits to substitute for the pixels' least significant bits directly. The other is block LSB, which embeds a message bit into a number of pixels belonging to one image block. The extraction circuits can recover the secret message from the stego cover alone. Analysis and simulation-based experimental results demonstrate good invisibility, and the balance between capacity and robustness can be adjusted according to the needs of applications.
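
    A classical (non-quantum) sketch of the plain-LSB rule that the paper realises as quantum circuits on NEQR images: each message bit replaces one pixel's least significant bit, and extraction reads the bits back from the stego image alone.

        import numpy as np

        def lsb_embed(cover, bits):
            # Replace the least significant bit of the first len(bits) pixels.
            stego = cover.copy()
            flat = stego.reshape(-1)
            flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
            return stego

        def lsb_extract(stego, n_bits):
            return stego.reshape(-1)[:n_bits] & 1

        cover = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
        message = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
        stego = lsb_embed(cover, message)
        assert np.array_equal(lsb_extract(stego, message.size), message)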

  4. Large-scale image-based profiling of single-cell phenotypes in arrayed CRISPR-Cas9 gene perturbation screens.

    Science.gov (United States)

    de Groot, Reinoud; Lüthi, Joel; Lindsay, Helen; Holtackers, René; Pelkmans, Lucas

    2018-01-23

    High-content imaging using automated microscopy and computer vision allows multivariate profiling of single-cell phenotypes. Here, we present methods for the application of the CRISPR-Cas9 system in large-scale, image-based, gene perturbation experiments. We show that CRISPR-Cas9-mediated gene perturbation can be achieved in human tissue culture cells in a timeframe that is compatible with image-based phenotyping. We developed a pipeline to construct a large-scale arrayed library of 2,281 sequence-verified CRISPR-Cas9 targeting plasmids and profiled this library for genes affecting cellular morphology and the subcellular localization of components of the nuclear pore complex (NPC). We conceived a machine-learning method that harnesses genetic heterogeneity to score gene perturbations and identify phenotypically perturbed cells for in-depth characterization of gene perturbation effects. This approach enables genome-scale image-based multivariate gene perturbation profiling using CRISPR-Cas9.

  5. Hierarchical image segmentation for learning object priors

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory]; Yang, Xingwei [TEMPLE UNIV.]; Latecki, Longin J [TEMPLE UNIV.]; Li, Nan [TEMPLE UNIV.]

    2010-11-10

    The proposed segmentation approach naturally combines experience-based and image-based information. The experience-based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take the region context into account by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to state-of-the-art supervised image segmentation algorithms.

  6. Rethinking and Redesigning an Image Processing Course from a Problem-Based Learning Perspective

    DEFF Research Database (Denmark)

    Reng, Lars; Triantafyllou, Evangelia; Triantafyllidis, George

    2015-01-01

    Our experience at the Media Technology department, Aalborg University Copenhagen, has shown that learning core concepts and techniques in image processing is a challenge for undergraduate students. One possible cause for this is the gap between understanding the mathematical formalism of such concepts and being able to use them for solving real-world problems. The Problem-Based Learning (PBL) pedagogy is an approach which favours learning by applying knowledge to solve such problems. However, formulating an appropriate project for image processing courses presents challenges on how to appropriately present relevant concepts and techniques to students. This article presents our redesign of an image processing course at the Media Technology department, which focused on relevant concept and technique presentation and design projects, and employed a game engine (Unity) in order to present...

  7. The AlpArray Seismic Network: A Large-Scale European Experiment to Image the Alpine Orogen

    Science.gov (United States)

    Hetényi, György; Molinari, Irene; Clinton, John; Bokelmann, Götz; Bondár, István; Crawford, Wayne C.; Dessa, Jean-Xavier; Doubre, Cécile; Friederich, Wolfgang; Fuchs, Florian; Giardini, Domenico; Gráczer, Zoltán; Handy, Mark R.; Herak, Marijan; Jia, Yan; Kissling, Edi; Kopp, Heidrun; Korn, Michael; Margheriti, Lucia; Meier, Thomas; Mucciarelli, Marco; Paul, Anne; Pesaresi, Damiano; Piromallo, Claudia; Plenefisch, Thomas; Plomerová, Jaroslava; Ritter, Joachim; Rümpker, Georg; Šipka, Vesna; Spallarossa, Daniele; Thomas, Christine; Tilmann, Frederik; Wassermann, Joachim; Weber, Michael; Wéber, Zoltán; Wesztergom, Viktor; Živčić, Mladen

    2018-04-01

    The AlpArray programme is a multinational, European consortium to advance our understanding of orogenesis and its relationship to mantle dynamics, plate reorganizations, surface processes and seismic hazard in the Alps-Apennines-Carpathians-Dinarides orogenic system. The AlpArray Seismic Network has been deployed with contributions from 36 institutions from 11 countries to map physical properties of the lithosphere and asthenosphere in 3D and thus to obtain new, high-resolution geophysical images of structures from the surface down to the base of the mantle transition zone. With over 600 broadband stations operated for 2 years, this seismic experiment is one of the largest simultaneously operated seismological networks in the academic domain, employing hexagonal coverage with station spacing at less than 52 km. This dense and regularly spaced experiment is made possible by the coordinated coeval deployment of temporary stations from numerous national pools, including ocean-bottom seismometers, which were funded by different national agencies. They combine with permanent networks, which also required the cooperation of many different operators. Together these stations ultimately fill coverage gaps. Following a short overview of previous large-scale seismological experiments in the Alpine region, we here present the goals, construction, deployment, characteristics and data management of the AlpArray Seismic Network, which will provide data that is expected to be unprecedented in quality to image the complex Alpine mountains at depth.

  8. A framework for optimal kernel-based manifold embedding of medical image data.

    Science.gov (United States)

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim to generate the most optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images.

  9. A distribution-based parametrization for improved tomographic imaging of solute plumes

    Science.gov (United States)

    Pidlisecky, Adam; Singha, K.; Day-Lewis, F. D.

    2011-01-01

    Difference geophysical tomography (e.g. radar, resistivity and seismic) is used increasingly for imaging fluid flow and mass transport associated with natural and engineered hydrologic phenomena, including tracer experiments, in situ remediation and aquifer storage and recovery. Tomographic data are collected over time, inverted and differenced against a background image to produce 'snapshots' revealing changes to the system; these snapshots readily provide qualitative information on the location and morphology of plumes of injected tracer, remedial amendment or stored water. In principle, geometric moments (i.e. total mass, centres of mass, spread, etc.) calculated from difference tomograms can provide further quantitative insight into the rates of advection, dispersion and mass transfer; however, recent work has shown that moments calculated from tomograms are commonly biased, as they are strongly affected by the subjective choice of regularization criteria. Conventional approaches to regularization (Tikhonov) and parametrization (image pixels) result in tomograms which are subject to artefacts such as smearing or pixel estimates taking on the sign opposite to that expected for the plume under study. Here, we demonstrate a novel parametrization for imaging plumes associated with hydrologic phenomena. Capitalizing on the mathematical analogy between moment-based descriptors of plumes and the moment-based parameters of probability distributions, we design an inverse problem that (1) is overdetermined and computationally efficient because the image is described by only a few parameters, (2) produces tomograms consistent with expected plume behaviour (e.g. changes of one sign relative to the background image), (3) yields parameter estimates that are readily interpreted for plume morphology and offer direct insight into hydrologic processes and (4) requires comparatively few data to achieve reasonable model estimates. We demonstrate the approach in a series of

  10. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  11. An improved contrast enhancement algorithm for infrared images based on adaptive double plateaus histogram equalization

    Science.gov (United States)

    Li, Shuo; Jin, Weiqi; Li, Li; Li, Yiyang

    2018-05-01

    Infrared thermal images can reflect the thermal-radiation distribution of a particular scene. However, the contrast of the infrared images is usually low. Hence, it is generally necessary to enhance the contrast of infrared images in advance to facilitate subsequent recognition and analysis. Based on the adaptive double plateaus histogram equalization, this paper presents an improved contrast enhancement algorithm for infrared thermal images. In the proposed algorithm, the normalized coefficient of variation of the histogram, which characterizes the level of contrast enhancement, is introduced as feedback information to adjust the upper and lower plateau thresholds. The experiments on actual infrared images show that compared to the three typical contrast-enhancement algorithms, the proposed algorithm has better scene adaptability and yields better contrast-enhancement results for infrared images with more dark areas or a higher dynamic range. Hence, it has high application value in contrast enhancement, dynamic range compression, and digital detail enhancement for infrared thermal images.
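
    A minimal sketch of double-plateau histogram equalisation with fixed thresholds; the adaptive threshold selection via the normalised coefficient of variation, which is the contribution of the method above, is not reproduced.

        import numpy as np

        def double_plateau_equalize(img, t_up=500.0, t_low=10.0):
            # Clip the histogram between a lower and an upper plateau, then equalise.
            counts = np.bincount(img.ravel(), minlength=256).astype(float)
            clipped = np.clip(counts, t_low, t_up)
            clipped[counts == 0] = 0.0                   # empty grey levels stay empty
            cdf = np.cumsum(clipped)
            lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
            return lut[img]

        img = np.random.randint(80, 120, size=(128, 128)).astype(np.uint8)
        enhanced = double_plateau_equalize(img)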

  12. Clinical multi-colour fluorescence imaging of malignant tumours - initial experience

    International Nuclear Information System (INIS)

    Svanberg, K.; Wang, I.; Montan, S.; Andersson-Engels, S.; Svanberg, S.; Lund Inst. of Technology

    1998-01-01

    The purpose of this study was to present a new technique for non-invasive tumour detection based on tissue fluorescence imaging. A clinically adapted multi-colour fluorescence system was employed in the real-time imaging of malignant tumours of the skin, breast, head and neck region, and urinary bladder. Tumour detection was based on the contrast displayed in fluorescence between normal and malignant tissue, related to the selective uptake of tumour-marking agents and natural chromophore differences between various tissues. In order to demarcate basal cell carcinomas of the skin, ALA was applied topically 4-6 h before the fluorescence investigation. For urinary bladder tumour visualisation, ALA was instilled into the bladder 1-2 h prior to the study. Malignant and premalignant lesions in the head and neck region were imaged after i.v. injection of HPD (Photofrin). The tumour imaging system was coupled to an endoscope. Fluorescence emission from the tissue surface was induced with 100-ns-long optical pulses at 390 nm, generated by a frequency-doubled alexandrite laser. With the use of special image-splitting optics, the tumour fluorescence, intensified in a micro-channel plate, was imaged in 3 selected wavelength bands. These 3 images were processed together to form a new optimised-contrast image of the tumour. This image, updated at a rate of about 3 frames/s, was mixed with a normal colour video image of the tissue. A clear demarcation from normal surrounding tissue was found during in vivo measurements of superficial bladder carcinoma, basal cell carcinoma of the skin, and leukoplakia with dysplasia of the lip, and in vitro investigations of resected breast cancer. (orig./MG)

  13. Dialog-based Interactive Image Retrieval

    OpenAIRE

    Guo, Xiaoxiao; Wu, Hui; Cheng, Yu; Rennie, Steven; Feris, Rogerio Schmidt

    2018-01-01

    Existing methods for interactive image retrieval have demonstrated the merit of integrating user feedback, improving retrieval results. However, most current systems rely on restricted forms of user feedback, such as binary relevance responses, or feedback based on a fixed set of relative attributes, which limits their impact. In this paper, we introduce a new approach to interactive image search that enables users to provide feedback via natural language, allowing for more natural and effect...

  14. Background suppression of infrared small target image based on inter-frame registration

    Science.gov (United States)

    Ye, Xiubo; Xue, Bindang

    2018-04-01

    We propose a multi-frame background suppression method for remote infrared small-target detection. Inter-frame information is necessary when heavy background clutter makes it difficult to distinguish real targets from false alarms. A registration procedure based on point matching in image patches is used to compensate for the local deformation of the background. The target can then be separated by background subtraction. Experiments show that our method serves as an effective preliminary stage for target detection.

  15. Skull base tumours part I: Imaging technique, anatomy and anterior skull base tumours

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Alexandra [Instituto Portugues de Oncologia Francisco Gentil, Centro de Lisboa, Servico de Radiologia, Rua Professor Lima Basto, 1093 Lisboa Codex (Portugal)], E-mail: borgesalexandra@clix.pt

    2008-06-15

    Advances in cross-sectional imaging, surgical technique and adjuvant treatment have contributed largely to improving the prognosis and lessening the morbidity and mortality of patients with skull base tumours, and to the growing medical investment in the management of these patients. Because clinical assessment of the skull base is limited, cross-sectional imaging has become indispensable in the diagnosis, treatment planning and follow-up of patients with suspected skull base pathology, and the radiologist is increasingly responsible for the fate of these patients. This review will focus on the advances in imaging technique, their contribution to patient management, and the imaging features of the most common tumours affecting the anterior skull base. Emphasis is given to a systematic approach to skull base pathology based upon an anatomic division that takes into account the major tissue constituents in each skull base compartment. The most relevant information that should be conveyed to the surgeons and radiation oncologists involved in patient management will be discussed.

  16. Skull base tumours part I: Imaging technique, anatomy and anterior skull base tumours

    International Nuclear Information System (INIS)

    Borges, Alexandra

    2008-01-01

    Advances in cross-sectional imaging, surgical technique and adjuvant treatment have contributed largely to improving the prognosis and lessening the morbidity and mortality of patients with skull base tumours, and to the growing medical investment in the management of these patients. Because clinical assessment of the skull base is limited, cross-sectional imaging has become indispensable in the diagnosis, treatment planning and follow-up of patients with suspected skull base pathology, and the radiologist is increasingly responsible for the fate of these patients. This review will focus on the advances in imaging technique, their contribution to patient management, and the imaging features of the most common tumours affecting the anterior skull base. Emphasis is given to a systematic approach to skull base pathology based upon an anatomic division that takes into account the major tissue constituents in each skull base compartment. The most relevant information that should be conveyed to the surgeons and radiation oncologists involved in patient management will be discussed.

  17. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    Science.gov (United States)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

    An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing, together with morphological erosion and dilation operations, is employed to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, while the CGI not only reduces the data amount of the ciphertext, but also enhances the security of the DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.
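
    For orientation, a minimal sketch of the classic 4f double random-phase encoding that underlies the scheme is shown below, assuming NumPy. It is the textbook DRPE building block only: the paper's phase-retrieval variant and the CGI encoding stage are not reproduced, and the seed/mask handling is purely illustrative.

```python
import numpy as np

def drpe_encrypt(img, rng):
    # Classic DRPE: random phase masks in the input and Fourier planes.
    m1 = np.exp(2j * np.pi * rng.random(img.shape))
    m2 = np.exp(2j * np.pi * rng.random(img.shape))
    cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)
    return cipher, m1, m2

def drpe_decrypt(cipher, m1, m2):
    # Unit-modulus masks, so conjugation inverts each multiplication.
    return np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1))

rng = np.random.default_rng(42)      # the seed plays the role of the key
secret = rng.random((64, 64))        # stand-in secret image
c, m1, m2 = drpe_encrypt(secret, rng)
assert np.allclose(drpe_decrypt(c, m1, m2), secret)
```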

  18. Two Phase Non-Rigid Multi-Modal Image Registration Using Weber Local Descriptor-Based Similarity Metrics and Normalized Mutual Information

    Directory of Open Access Journals (Sweden)

    Feng Yang

    2013-06-01

    Non-rigid multi-modal image registration plays an important role in medical image processing and analysis. Existing image registration methods based on similarity metrics such as mutual information (MI) and the sum of squared differences (SSD) cannot achieve both high registration accuracy and high registration efficiency. To address this problem, we propose a novel two-phase non-rigid multi-modal image registration method that combines Weber local descriptor (WLD) based similarity metrics with normalized mutual information (NMI), using the diffeomorphic free-form deformation (FFD) model. The first phase aims at recovering the large deformation component using the WLD-based non-local SSD (wldNSSD) or weighted structural similarity (wldWSSIM). Based on the output of the former phase, the second phase focuses on obtaining accurate transformation parameters related to the small deformation using the NMI. Extensive experiments on T1, T2 and PD weighted MR images demonstrate that the proposed wldNSSD-NMI or wldWSSIM-NMI method outperforms registration methods based on the NMI, the conditional mutual information (CMI), the SSD on entropy images (ESSD) and the ESSD-NMI in terms of registration accuracy and computation efficiency.

  19. Distortion correction algorithm for UAV remote sensing image based on CUDA

    International Nuclear Information System (INIS)

    Wenhao, Zhang; Yingcheng, Li; Delong, Li; Changsheng, Teng; Jin, Liu

    2014-01-01

    In China, natural disasters are characterized by wide distribution, severe destruction and high impact range, and they cause significant property damage and casualties every year. Following a disaster, timely and accurate acquisition of geospatial information can provide an important basis for disaster assessment, emergency relief, and reconstruction. In recent years, Unmanned Aerial Vehicle (UAV) remote sensing systems have played an important role in major natural disasters, with UAVs becoming an important technique for obtaining disaster information. UAVs are equipped with non-metric digital cameras whose lens distortion results in large geometric deformation of the acquired images, affecting the accuracy of subsequent processing. The slow speed of the traditional CPU-based distortion correction algorithm cannot meet the requirements of disaster emergencies. Therefore, we propose a Compute Unified Device Architecture (CUDA)-based image distortion correction algorithm for UAV remote sensing, which takes advantage of the powerful parallel processing capability of the GPU, greatly improving the efficiency of distortion correction. Our experiments show that, compared with the traditional CPU algorithm and disregarding image loading and saving times, the maximum speedup of the proposed algorithm reaches a factor of 58. Data processing time can thus be reduced by one to two hours, considerably improving disaster emergency response capability.

  20. A flexible new method for 3D measurement based on multi-view image sequences

    Science.gov (United States)

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is a fundamental part of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm. The Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes the matching robust to weakly textured images; then a new three-principle filter for the essential-matrix calculation is designed, and the essential matrix is computed using an improved a contrario RANSAC method. A single-view point cloud is constructed accurately from two view images; after this, the overlapping features are used to eliminate the accumulated errors caused by added view images, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for 3D tooth measurement.
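
    The Hellinger-kernel comparison of SIFT histograms can be implemented with the well-known "RootSIFT" mapping, under which Euclidean distance between the mapped descriptors corresponds to the Hellinger kernel on the originals. The sketch below shows the mapping; the commented OpenCV matching lines are an illustrative assumption, not the authors' exact pipeline.

```python
import numpy as np

def root_sift(desc, eps=1e-7):
    # L1-normalize each SIFT descriptor, then take the element-wise square
    # root; Euclidean distance on the result equals Hellinger on the input.
    desc = desc / (np.abs(desc).sum(axis=1, keepdims=True) + eps)
    return np.sqrt(desc)

# Hypothetical OpenCV usage:
# sift = cv2.SIFT_create()
# _, d1 = sift.detectAndCompute(img1, None)
# _, d2 = sift.detectAndCompute(img2, None)
# matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(root_sift(d1), root_sift(d2), k=2)
```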

  1. Image dissimilarity-based quantification of lung disease from CT

    DEFF Research Database (Denmark)

    Sørensen, Lauge; Loog, Marco; Lo, Pechin

    2010-01-01

    In this paper, we propose to classify medical images using dissimilarities computed between collections of regions of interest. The images are mapped into a dissimilarity space using an image dissimilarity measure, and a standard vector space-based classifier is applied in this space.

  2. A poly(dimethylsiloxane)-based device enabling time-lapse imaging with high spatial resolution

    International Nuclear Information System (INIS)

    Hirano, Masahiko; Hoshida, Tetsushi; Sakaue-Sawano, Asako; Miyawaki, Atsushi

    2010-01-01

    We have developed a regulator-free device that enables long-term incubation of mammalian cells for epi-fluorescence imaging, based on the concept that the volume of sample to be gassed and heated is reduced to the scale of observation. A poly(dimethylsiloxane) block stamped on a coverslip works as a long-lasting supplier of CO2-rich gas to keep bicarbonate-containing medium in a tiny chamber at physiological pH, and an oil-immersion objective warms the cells across the coverslip. A time-lapse imaging experiment using HeLa cells stably expressing fluorescent cell-cycle indicators showed that the cells in the chamber proliferated with a normal cell-cycle period over 2 days.

  3. BEE FORAGE MAPPING BASED ON MULTISPECTRAL IMAGES LANDSAT

    Directory of Open Access Journals (Sweden)

    A. Moskalenko

    2016-10-01

    This research demonstrates the possibilities of identifying and mapping bee forage from multispectral satellite images. The spectral brightness of bee forage was determined from satellite images, and the effectiveness of several image-classification methods for mapping bee forage is shown. Keywords: bee forage, mapping, multispectral images, image classification.

  4. Preoperative magnetic resonance imaging protocol for endoscopic cranial base image-guided surgery.

    Science.gov (United States)

    Grindle, Christopher R; Curry, Joseph M; Kang, Melissa D; Evans, James J; Rosen, Marc R

    2011-01-01

    Despite the increasing utilization of image-guided surgery, no radiology protocols for obtaining magnetic resonance (MR) imaging of adequate quality are available in the current literature. At our institution, more than 300 endonasal cranial base procedures, including pituitary, extended pituitary, and other anterior skull base procedures, have been performed in the past 3 years. To facilitate and optimize preoperative evaluation and assessment, there was a need to develop a magnetic resonance protocol. A retrospective technical assessment was performed. Through a collaborative effort between the otolaryngology, neurosurgery, and neuroradiology departments at our institution, a skull base MR image-guided surgery (IGS) protocol was developed with several goals in mind. First, it was necessary to generate diagnostic images useful for the more frequently seen pathologies, to improve workflow and limit the expense and inefficiency of case-specific MR studies. Second, it was necessary to generate sequences useful for IGS, preferably sequences that best highlight the lesion. Currently, at our institution, all MR images used for IGS are obtained using this protocol as part of preoperative planning. The protocol obtains thin precontrast and postcontrast axial cuts that can be used to plan intraoperative image guidance. It also obtains a thin-cut T2 axial series that can be compiled separately for intraoperative imaging, or fused with computed tomographic images for combined-modality guidance. The outlined protocol yields image sequences effective for diagnostic and operative purposes in image-guided surgery using both T1 and T2 sequences. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. A data grid for imaging-based clinical trials

    Science.gov (United States)

    Zhou, Zheng; Chao, Sander S.; Lee, Jasper; Liu, Brent; Documet, Jorge; Huang, H. K.

    2007-03-01

    Clinical trials play a crucial role in testing new drugs or devices in modern medicine. Medical imaging has also become an important tool in clinical trials because images provide a unique and fast diagnosis with visual observation and quantitative assessment. A typical imaging-based clinical trial consists of: 1) a well-defined, rigorous clinical trial protocol; 2) a radiology core that has a quality control mechanism, a biostatistics component, and a server for storing and distributing data and analysis results; and 3) many field sites that generate and send image studies to the radiology core. As the number of clinical trials increases, it becomes a challenge for a radiology core servicing multiple trials to have a server robust enough to administer and quickly distribute information to participating radiologists/clinicians worldwide. A Data Grid can satisfy these requirements of imaging-based clinical trials. In this paper, we present a Data Grid architecture for imaging-based clinical trials. A Data Grid prototype has been implemented in the Image Processing and Informatics (IPI) Laboratory at the University of Southern California to test and evaluate performance in storing trial images and analysis results for a clinical trial. The implementation methodology and evaluation protocol of the Data Grid are presented.

  6. Image fusion between whole body FDG PET images and whole body MRI images using a full-automatic mutual information-based multimodality image registration software

    International Nuclear Information System (INIS)

    Uchida, Yoshitaka; Nakano, Yoshitada; Fujibuchi, Toshiou; Isobe, Tomoko; Kazama, Toshiki; Ito, Hisao

    2006-01-01

    We attempted image fusion between whole-body PET and whole-body MRI in thirty patients using fully automatic mutual information (MI)-based multimodality image registration software, and evaluated the accuracy of this method and the impact of the coregistered imaging on diagnostic accuracy. For 25 of the 30 fused images in the body area, translational gaps were within 6 mm along all axes and rotational gaps were within 2 degrees around all axes. In the head and neck area, considerable gaps caused by differences in head inclination during imaging occurred in 16 patients; however, these gaps could be reduced by fusing this region separately. In 6 patients, diagnostic accuracy using the fused PET/MRI images was superior to that of PET images alone. This work shows that whole-body FDG PET images and whole-body MRI images can be fused automatically and accurately using MI-based multimodality image registration software, and that this technique can add useful information when evaluating FDG PET images. (author)
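
    The similarity measure at the heart of such software can be sketched in a few lines: one common form of normalized mutual information computed from a joint intensity histogram, shown below with NumPy. The bin count is an arbitrary choice, and a full registration would wrap this metric in an optimizer over the six rigid-body parameters.

```python
import numpy as np

def nmi(a, b, bins=64):
    # NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = joint / joint.sum()
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)
    ha = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
    hb = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
    hab = -np.sum(pab[pab > 0] * np.log(pab[pab > 0]))
    return (ha + hb) / hab

# nmi(pet_slice, mri_slice) rises as the two volumes come into alignment.
```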

  7. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    Science.gov (United States)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to address the high computational complexity and poor real-time performance of blind image restoration, the midfrequency-based algorithm for blind image restoration is analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm's data intensiveness and data-parallel structure with the GPU's single-instruction multiple-thread execution model, a new parallel midfrequency-based algorithm for blind image restoration is proposed, which is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data so that the transmission rate gets around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  8. Cultural based preconceptions in aesthetic experience of architecture

    Directory of Open Access Journals (Sweden)

    Stevanović Vladimir

    2011-01-01

    On a broader scale, the aim of this paper is to examine theoretically the effects a cultural context has on the aesthetic experience of images existing in perceived reality. Minimalism in architecture, as the direct subject of research, is a field of particularities in which we observe the functioning of this correlation. Through an experiment with the similarity phenomenon, the paper follows specific manifestations of general formal principles and the variability of meaning of minimalism in architecture within the limited cultural backgrounds of Serbia and Japan. The goal of the comparative analysis of the examples presented is to indicate the conditions that may lead to a possibly different aesthetic experience in two different cultural contexts. Attribution of different meanings to a similar formal visual language of architecture raises questions concerning the system of values which produces these meanings in their cultural and historical perspectives. The establishment of values can also be affected by preconceptions resulting from the association of perceived similarities. Are the preconceptions in the aesthetic reception of architecture conditioned by pragmatic needs, symbolic archetypes, tradition-based cultural metaphors or ideologically constructed dogmas? Confronting the philosophical postulates of the Western and Eastern traditions with Wolfgang Welsch's theory of transculturality, the answers may become more available.

  9. The challenge of on-tissue digestion for MALDI MSI- a comparison of different protocols to improve imaging experiments.

    Science.gov (United States)

    Diehl, Hanna C; Beine, Birte; Elm, Julian; Trede, Dennis; Ahrens, Maike; Eisenacher, Martin; Marcus, Katrin; Meyer, Helmut E; Henkel, Corinna

    2015-03-01

    Mass spectrometry imaging (MSI) has become a powerful and successful tool for biomarker detection, especially in recent years. This emerging technique is based on the combination of histological information from a tissue with its corresponding spatially resolved mass spectrometric information. The identification of differentially expressed protein peaks between samples is still the method's bottleneck. Peptide MSI, compared to protein MSI, is therefore closer to the final goal of identification, since peptides are easier to measure than proteins. Nevertheless, the processing of peptide imaging samples is challenging due to experimental complexity. To address this issue, a method development study for peptide MSI using cryoconserved and formalin-fixed paraffin-embedded (FFPE) rat brain tissue is provided. Different digestion times, matrices, and proteases were tested to define an optimal workflow for peptide MSI. All practical experiments were done in triplicate and analyzed with the SCiLS Lab software, using structures derived from myelin basic protein (MBP) peaks, principal component analysis (PCA) and probabilistic latent semantic analysis (pLSA) to rate the quality of the experiments. Blinded evaluation of the countable structures in the datasets was performed by three individuals. Such an extensive method development for peptide matrix-assisted laser desorption/ionization (MALDI) imaging experiments has not been performed so far, and the resulting problems and consequences are analyzed and discussed.

  10. Hyperspectral Image Classification Based on the Combination of Spatial-spectral Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    YANG Zhaoxia

    2015-07-01

    In order to avoid the over-dependence on high-dimensional spectral features in traditional hyperspectral image classification, a novel approach based on the combination of spatial-spectral features and sparse representation is proposed in this paper. First, we extract the spatial-spectral feature by reorganizing the local image patch with the first d principal components (PCs) into a vector representation, followed by a sorting scheme that makes the vector invariant to local image rotation. Second, we learn the dictionary through a supervised method, and use it to code the features from test samples afterwards. Finally, we embed the resulting sparse feature coding into a support vector machine (SVM) for hyperspectral image classification. Experiments using three hyperspectral data sets show that the proposed method can effectively improve the classification accuracy compared with traditional classification methods.
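
    A loose NumPy sketch of the feature-extraction step is given below. The PCA projection follows the abstract, but the norm-based ordering is only one plausible reading of the unspecified "sorting scheme", and the dictionary-learning and SVM stages are omitted.

```python
import numpy as np

def spatial_spectral_features(cube, d=4, win=5):
    # Project each pixel's spectrum onto the first d principal components.
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = (X @ Vt[:d].T).reshape(H, W, d)

    # Collect each win x win patch of PC values and sort its rows so the
    # final vector does not depend on the local orientation of the patch.
    r = win // 2
    padded = np.pad(pcs, ((r, r), (r, r), (0, 0)), mode="reflect")
    feats = np.zeros((H, W, d * win * win))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + win, j:j + win].reshape(-1, d)
            order = np.argsort(np.linalg.norm(patch, axis=1))
            feats[i, j] = patch[order].ravel()
    return feats
```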

  11. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    Science.gov (United States)

    Chao, Ming; Xie, Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing, Lei

    2010-01-01

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in the quantitative assessment of therapeutics and the realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant changes in tumor tissue content, similarity-based models are not suitable for describing the process of tumor volume change. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) autodetection of homologous tissue features shared by two input images using the scale-invariant feature transform (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, in which SIFT features were mapped from template to target images and back. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin-plate-spline calculation reproduced the "ground truth" with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy. PMID:20527569
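
    Step (2) above, interpolating a dense correspondence from sparse matched features, can be sketched with a thin-plate-spline interpolant. The snippet below uses SciPy's RBFInterpolator as a stand-in for the paper's feature-guided calculation; the landmark coordinates and shift are made-up test data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_map(src_pts, dst_pts, query_pts):
    # Thin-plate-spline mapping fitted to matched landmark pairs; the
    # remaining spatial points (query_pts) are interpolated through it.
    tps = RBFInterpolator(src_pts, dst_pts, kernel="thin_plate_spline")
    return tps(query_pts)

# Hypothetical 2-D landmarks from a template/target image pair.
src = np.array([[0, 0], [0, 50], [50, 0], [50, 50], [25, 25]], float)
dst = src + np.array([1.0, -0.5])  # ground-truth shift for this toy case
grid = np.stack(np.meshgrid(np.arange(50.0), np.arange(50.0)), -1).reshape(-1, 2)
warped = tps_map(src, dst, grid)   # every grid point moved by ~(1.0, -0.5)
```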

  12. Fast DCNN based on FWT, intelligent dropout and layer skipping for image retrieval.

    Science.gov (United States)

    ElAdel, Asma; Zaied, Mourad; Amar, Chokri Ben

    2017-11-01

    Deep Convolutional Neural Networks (DCNNs) are a powerful tool for object and image classification and retrieval. However, the training stage of such networks is highly demanding in terms of storage space and time, and their optimization remains a challenging subject. In this paper, we propose a fast DCNN based on the Fast Wavelet Transform (FWT), intelligent dropout and layer skipping. The proposed approach improves image-retrieval accuracy as well as search time, thanks to three key advantages: first, features are computed rapidly using the FWT; second, the proposed intelligent dropout method selects units based on whether they are efficient, rather than randomly; third, an image can be classified using efficient units of earlier layer(s), skipping all subsequent hidden layers and going directly to the output layer. Our experiments were performed on the CIFAR-10 and MNIST datasets, and the obtained results are very promising. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

    The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG) model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT) domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, an optimal linear interpolation thresholding algorithm (OLI-Shrink) is employed to guarantee a gentler thresholding effect. Comparative experiments indicate that the denoising performance of the proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM) index values that are comparable to those of the block-matching 3D transformation (BM3D) method.

  14. Radiation therapists' perceptions of the minimum level of experience required to perform portal image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rybovic, Michala [Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, PO Box 170, Lidcombe, NSW 1825 (Australia)], E-mail: mryb6983@mail.usyd.edu.au; Halkett, Georgia K. [Western Australia Centre for Cancer and Palliative Care, Curtin University of Technology, Health Research Campus, GPO Box U1987, Perth, WA 6845 (Australia)], E-mail: g.halkett@curtin.edu.au; Banati, Richard B. [Faculty of Health Sciences, Brain and Mind Research Institute - Ramaciotti Centre for Brain Imaging, University of Sydney, PO Box 170, Lidcombe, NSW 1825 (Australia)], E-mail: r.banati@usyd.edu.au; Cox, Jennifer [Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, PO Box 170, Lidcombe, NSW 1825 (Australia)], E-mail: jenny.cox@usyd.edu.au

    2008-11-15

    Background and purpose: Our aim was to explore radiation therapists' views on the level of experience necessary to undertake portal image analysis and clinical decision making. Materials and methods: A questionnaire was developed to determine the availability of portal imaging equipment in Australia and New Zealand. We analysed radiation therapists' responses to a specific question regarding their opinion on the minimum level of experience required for health professionals to analyse portal images. We used grounded theory and a constant comparative method of data analysis to derive the main themes. Results: Forty-six radiation oncology facilities were represented in our survey, with 40 questionnaires being returned (87%). Thirty-seven radiation therapists answered our free-text question. Radiation therapists indicated three main themes which they felt were important in determining the minimum level of experience: 'gaining on-the-job experience', 'receiving training' and 'working as a team'. Conclusions: Radiation therapists indicated that competence in portal image review occurs via various learning mechanisms. Further research is warranted to determine perspectives of other health professionals, such as radiation oncologists, on portal image review becoming part of radiation therapists' extended role. Suitable training programs and steps for implementation should be developed to facilitate this endeavour.

  15. Optical image encryption method based on incoherent imaging and polarized light encoding

    Science.gov (United States)

    Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.

    2018-05-01

    We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with incoherent imaging. Incoherent imaging is the core component of this proposal: the incoherent point-spread function (PSF) of the imaging system serves as the main key, encoding the input intensity distribution through a convolution operation. An array of retarders and polarizers placed in the input plane of the imaging structure encrypts the polarization state of the light based on Mueller polarization calculus. The proposal makes full use of the randomness of the polarization parameters and of the incoherent PSF, so that a multidimensional key space is generated to deal with illegal attacks. Mueller polarization calculus and the incoherent illumination of the imaging structure ensure that only intensity information is manipulated, so the complicated processing and recording associated with complex-valued signals are avoided. The encoded information is simply an intensity distribution, which is advantageous for data storage and transmission, and the information expansion that accompanies conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
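
    The PSF-as-key idea can be sketched in a few lines, assuming NumPy: encode by circular convolution with a random intensity PSF, decode by a regularized inverse filter with the known PSF. The polarization (Mueller-calculus) stage is omitted entirely, and the PSF size and regularization constant are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
psf = rng.random((17, 17))          # random positive intensity PSF (the key)
psf /= psf.sum()

def encode(intensity):
    # Circular convolution with the zero-padded PSF, done via the FFT.
    H = np.fft.rfft2(psf, intensity.shape)
    return np.fft.irfft2(np.fft.rfft2(intensity) * H, intensity.shape)

def decode(cipher, eps=1e-3):
    # Regularized (Wiener-like) inverse filter using the PSF key.
    H = np.fft.rfft2(psf, cipher.shape)
    C = np.fft.rfft2(cipher)
    return np.fft.irfft2(C * np.conj(H) / (np.abs(H) ** 2 + eps), cipher.shape)
```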

  16. Application of an Image Tracking Algorithm in Fire Ant Motion Experiment

    Directory of Open Access Journals (Sweden)

    Lichuan Gui

    2009-04-01

    An image tracking algorithm, originally used with particle image velocimetry (PIV) to determine the velocities of buoyant solid particles in water, is modified and applied in the presented work to detect the motion of fire ants on a planar surface. A group of fire ant workers is placed at the bottom of a tub and excited with vibration of selected frequency and intensity. The moving fire ants are captured with an imaging system that successively acquires frames of high digital resolution. The background noise in the image recordings is extracted by averaging hundreds of frames and removed from each frame. The individual fire ant images are identified with a recursive digital filter, and then tracked between frames according to the size, brightness, shape, and orientation angle of the ant image. The speed of an individual ant is determined from the displacement of its images and the time interval between frames. The trail of each individual fire ant is determined from the image tracking results, and a statistical analysis is conducted for all the fire ants in the group. The purpose of the experiment is to investigate the response of fire ants to substrate vibration. Test results indicate that the fire ants move faster after being excited, but the number of active ones is not increased even after a strong excitation.

  17. Contrast-based sensorless adaptive optics for retinal imaging.

    Science.gov (United States)

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew

    2015-09-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.
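
    The metric used in the study is not given here, but a common contrast measure for sensorless AO is the normalized intensity variance, sketched below with NumPy; a sensorless loop perturbs the deformable-mirror modes and keeps whatever raises the metric.

```python
import numpy as np

def normalized_variance(img):
    # Rises as aberrations are corrected and image structure sharpens.
    m = img.mean()
    return ((img - m) ** 2).mean() / (m ** 2)
```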

  18. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze

    2017-04-24

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, until now no similarity learning method has been proposed to optimize the top precision measure. To fill this gap, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem whose objective function combines the losses of the relevant images ranked behind the top-ranked irrelevant image with the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. Experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.

  19. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze; Shi, Lihui; Wang, Haoxiang; Meng, Jiandong; Wang, Jim Jing-Yan; Sun, Qingquan; Gu, Yi

    2017-01-01

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, until now no similarity learning method has been proposed to optimize the top precision measure. To fill this gap, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem whose objective function combines the losses of the relevant images ranked behind the top-ranked irrelevant image with the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. Experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.

  20. Fingerprint Image Enhancement Based on Second Directional Derivative of the Digital Image

    Directory of Open Access Journals (Sweden)

    Onnia Vesa

    2002-01-01

    This paper presents a novel approach to fingerprint image enhancement that relies on detecting the fingerprint ridges as image regions where the second directional derivative of the digital image is positive. A facet model is used to approximate the derivatives at each image pixel based on the intensity values of pixels located in a certain neighborhood. We note that the size of this neighborhood plays a critical role in achieving accurate enhancement results. Using neighborhoods of various sizes, the proposed algorithm determines several candidate binary representations of the input fingerprint pattern. Subsequently, an output binary ridge-map image is created by selecting image zones from the available binary image candidates according to a MAP selection rule. Two public-domain collections of fingerprint images are used to objectively assess the performance of the proposed fingerprint image enhancement approach.

  1. Stochastic parallel gradient descent based adaptive optics used for a high contrast imaging coronagraph

    International Nuclear Information System (INIS)

    Dong Bing; Ren Deqing; Zhang Xi

    2011-01-01

    An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce the speckle noise in the optical system of a stellar coronagraph in order to further improve the contrast. The principle of the SPGD algorithm is described briefly and a metric suitable for point-source imaging optimization is given. The feasibility and good performance of the SPGD algorithm are demonstrated with an experimental system featuring a 140-actuator deformable mirror and a Hartmann-Shack wavefront sensor. The SPGD-based AO is then applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment, and the image contrast improves from 10^-3 to 10^-4.5 at an angular distance of 2λ/D after correction by the SPGD-based AO.
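
    A toy sketch of the SPGD update itself is given below; the gain, perturbation amplitude and closed-form quadratic metric are arbitrary illustrative choices, whereas a real system would evaluate the metric from camera frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def spgd_step(u, metric, gain=5.0, delta=0.05):
    # Perturb all actuators in parallel with random +/- delta, measure the
    # metric difference, and step along the estimated gradient direction.
    du = delta * rng.choice([-1.0, 1.0], size=u.shape)
    dj = metric(u + du) - metric(u - du)
    return u + gain * dj * du  # positive gain maximizes the metric

u_star = np.array([0.3, -0.2, 0.7])            # hidden optimum of the toy metric
metric = lambda v: -np.sum((v - u_star) ** 2)  # stand-in for image contrast
u = np.zeros(3)
for _ in range(300):
    u = spgd_step(u, metric)
print(u)  # converges towards u_star
```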

  2. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    Science.gov (United States)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect in reaching good performance is the way the scene space is sampled when creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving a higher sampling density in areas close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with fewer sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative
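
    The practical effect of the cross-ratio argument is that plane-sweep depths should be spaced uniformly in inverse depth rather than in depth, as in this small NumPy sketch (the near/far range and plane count are arbitrary):

```python
import numpy as np

def inverse_depth_planes(z_near, z_far, n):
    # Uniform steps in 1/z: roughly constant pixel displacement between
    # planes, hence dense sampling near the camera and sparse far away.
    return 1.0 / np.linspace(1.0 / z_near, 1.0 / z_far, n)

print(inverse_depth_planes(2.0, 100.0, 8))  # spacing grows with distance
```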

  3. DETERMINING PLANE-SWEEP SAMPLING POINTS IN IMAGE SPACE USING THE CROSS-RATIO FOR IMAGE-BASED DEPTH ESTIMATION

    Directory of Open Access Journals (Sweden)

    B. Ruf

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect in reaching good performance is the way the scene space is sampled when creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving a higher sampling density in areas close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with fewer sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that

  4. Bin Ratio-Based Histogram Distances and Their Application to Image Classification.

    Science.gov (United States)

    Hu, Weiming; Xie, Nianhua; Hu, Ruiguang; Ling, Haibin; Chen, Qiang; Yan, Shuicheng; Maybank, Stephen

    2014-12-01

    Large variations in image background may cause partial matching and normalization problems for histogram-based representations; i.e., the histograms of the same category may have bins that are significantly different, and normalization may produce large changes in the differences between corresponding bins. In this paper, we deal with this problem by using the ratios between the bin values of histograms, rather than the differences between bin values that are used in traditional histogram distances. We propose a bin ratio-based histogram distance (BRD), which is an intra-cross-bin distance, in contrast with previous bin-to-bin distances and cross-bin distances. The BRD is robust to partial matching and histogram normalization, and captures correlations between bins with only a linear computational complexity. We combine the BRD with the ℓ1 histogram distance and the χ² histogram distance to generate the ℓ1 BRD and the χ² BRD, respectively. These combinations exploit and benefit from the robustness of the BRD under partial matching and the robustness of the ℓ1 and χ² distances to small noise. We propose a method for assessing the robustness of histogram distances to partial matching. The BRDs and logistic regression-based histogram fusion are applied to image classification. The experimental results on synthetic data sets show the robustness of the BRDs to partial matching, and the experiments on seven benchmark data sets demonstrate promising results of the BRDs for image classification.
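
    To make the ratio idea concrete, the sketch below compares the matrices of within-histogram bin ratios, which are invariant to any global rescaling of either histogram. This is a simplified quadratic-cost illustration of the principle, not the paper's exact BRD formula (which runs in linear time).

```python
import numpy as np

def bin_ratio_distance(h, g, eps=1e-12):
    # Ratio matrices r[i, j] = h_i / h_j are unchanged when h is rescaled,
    # which is what makes ratio-based comparison robust to normalization.
    rh = (h[:, None] + eps) / (h[None, :] + eps)
    rg = (g[:, None] + eps) / (g[None, :] + eps)
    return np.mean(np.abs(rh - rg) / (1.0 + rh + rg))  # bounded comparison

h = np.array([4.0, 1.0, 5.0, 0.5])
print(bin_ratio_distance(h, 3.0 * h))  # ~0: ratios ignore the global scale
```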

  5. Toward a Low-Cost System for High-Throughput Image-Based Phenotyping of Root System Architecture

    Science.gov (United States)

    Davis, T. W.; Schneider, D. J.; Cheng, H.; Shaw, N.; Kochian, L. V.; Shaff, J. E.

    2015-12-01

    Root system architecture is being studied ever more closely for improved nutrient acquisition, stress tolerance and carbon sequestration, by relating preferential physical features to the corresponding genetic material. This information can help direct plant breeders in addressing the growing concerns regarding the global demand on crops and fossil fuels. Supporting this incentive requires making high-throughput image-based phenotyping of plant roots, at the individual plant scale, simpler and more affordable. Our goal is to create an affordable and portable product for simple image collection, processing and management that will extend root phenotyping to institutions with limited funding (e.g., in developing countries). Thus, a new integrated system has been developed using the Raspberry Pi single-board computer. Similar to other 3D-based imaging platforms, the system utilizes a stationary camera to photograph a rotating crop root system (e.g., rice, maize or sorghum) that is suspended either in a gel or on a mesh (for hydroponics). In contrast, the new design takes advantage of powerful open-source hardware and software to reduce the system costs, simplify the imaging process, and manage the large datasets produced by the high-resolution photographs. A newly designed graphical user interface (GUI) unifies the system controls (e.g., adjusting camera and motor settings and orchestrating the motor motion with image capture), making it easier to accommodate a variety of experiments. During each imaging session, the integral metadata necessary for reproducing experiment results are collected (e.g., plant type and age, growing conditions and treatments, camera settings) using hierarchical data format files. These metadata are searchable within the GUI and can be selected and extracted for further analysis. The GUI also supports an image previewer that performs limited image processing (e.g., thresholding and cropping). Root skeletonization, 3D reconstruction and

  6. Matrix-based image reconstruction methods for tomography

    International Nuclear Information System (INIS)

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been widely used because of the large size of practical matrices, their ill-conditioning upon inversion and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator (MLE) methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to produce good preliminary images. A parallel-processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
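
    The flavour of a matrix-based, inversion-free maximum-likelihood reconstruction can be conveyed with the standard MLEM iteration, sketched below on a deliberately tiny synthetic system; the update rule is the classic one, while the data are made up.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    # x <- x / (A^T 1) * A^T (y / (A x)): only forward and back
    # projections with the system matrix A, never its inverse.
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)
        x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 detectors, 2 voxels
x_true = np.array([2.0, 5.0])
print(mlem(A, A @ x_true))  # recovers ~[2, 5] from noiseless data
```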

  7. Parallel CT image reconstruction based on GPUs

    International Nuclear Information System (INIS)

    Flores, Liubov A.; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2014-01-01

    In X-ray computed tomography (CT), iterative methods are more suitable than analytic ones for reconstructing images with high contrast and precision under noisy conditions from a small number of projections. In practice, however, these methods are not widely used due to the high computational cost of their implementation. Current technology makes it possible to reduce this drawback effectively. The goal of this work is to develop a fast GPU-based algorithm to reconstruct high-quality images from undersampled and noisy projection data. - Highlights: • We developed a GPU-based iterative algorithm to reconstruct images. • Iterative algorithms are capable of reconstructing images from undersampled sets of projections. • The computational cost of the implementation of the developed algorithm is low. • The efficiency of the algorithm increases for large-scale problems

  8. Infrared Imaging for Inquiry-Based Learning

    Science.gov (United States)

    Xie, Charles; Hazzard, Edmund

    2011-01-01

    Based on detecting long-wavelength infrared (IR) radiation emitted by the subject, IR imaging shows temperature distribution instantaneously and heat flow dynamically. As a picture is worth a thousand words, an IR camera has great potential in teaching heat transfer, which is otherwise invisible. The idea of using IR imaging in teaching was first…

  9. A region-based segmentation method for ultrasound images in HIFU therapy

    International Nuclear Information System (INIS)

    Zhang, Dong; Liu, Yu; Yang, Yan; Xu, Menglong; Yan, Yu; Qin, Qianqing

    2016-01-01

    information about the tumor position, shape, and size. Additionally, an appropriate cluster number for spectral clustering can be determined by the same algorithm, thus the automatic segmentation of the tumor region is achieved. Results: To evaluate the performance of the proposed method, 50 uterine fibroid ultrasound images from different patients receiving HIFU therapy were segmented, and the obtained tumor contours were compared with those delineated by an experienced radiologist. For area-based evaluation results, the mean values of the true positive ratio, the false positive ratio, and the similarity were 94.42%, 4.71%, and 90.21%, respectively, and the corresponding standard deviations were 2.54%, 3.12%, and 3.50%, respectively. For distance-based evaluation results, the mean values of the normalized Hausdorff distance and the normalized mean absolute distance were 4.93% and 0.90%, respectively, and the corresponding standard deviations were 2.22% and 0.34%, respectively. The running time of the segmentation process was 12.9 s for a 318 × 333 (pixels) image. Conclusions: Experiments show that the proposed method can segment the tumor region accurately and efficiently with less manual intervention, which provides for the possibility of automatic segmentation and real-time guidance in HIFU therapy.

  10. A region-based segmentation method for ultrasound images in HIFU therapy

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Dong, E-mail: dongz@whu.edu.cn; Liu, Yu; Yang, Yan; Xu, Menglong; Yan, Yu [School of Physics and Technology, Wuhan University, Wuhan 430072 (China); Qin, Qianqing [State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430072 (China)

    2016-06-15

    information about the tumor position, shape, and size. Additionally, an appropriate cluster number for spectral clustering can be determined by the same algorithm, thus the automatic segmentation of the tumor region is achieved. Results: To evaluate the performance of the proposed method, 50 uterine fibroid ultrasound images from different patients receiving HIFU therapy were segmented, and the obtained tumor contours were compared with those delineated by an experienced radiologist. For area-based evaluation results, the mean values of the true positive ratio, the false positive ratio, and the similarity were 94.42%, 4.71%, and 90.21%, respectively, and the corresponding standard deviations were 2.54%, 3.12%, and 3.50%, respectively. For distance-based evaluation results, the mean values of the normalized Hausdorff distance and the normalized mean absolute distance were 4.93% and 0.90%, respectively, and the corresponding standard deviations were 2.22% and 0.34%, respectively. The running time of the segmentation process was 12.9 s for a 318 × 333 (pixels) image. Conclusions: Experiments show that the proposed method can segment the tumor region accurately and efficiently with less manual intervention, which provides for the possibility of automatic segmentation and real-time guidance in HIFU therapy.

  11. A novel image encryption algorithm based on chaos maps with Markov properties

    Science.gov (United States)

    Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang

    2015-02-01

    In order to construct a high-complexity, secure and low-cost image encryption algorithm, a class of chaotic maps with Markov properties was studied and such an algorithm is proposed here. This kind of chaos has higher complexity than the logistic and tent maps, while retaining uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is also employed to cover the phase space of the chaos and enlarge the key space; it performs better than the original lattice. A novel image encryption algorithm is constructed on the new coupled map lattice, which is used as a key stream generator. A true random number is used to disturb the key, which dynamically changes the permutation matrix and the key stream. Experiments show that the key stream passes the SP800-22 test. The novel image encryption scheme resists chosen-plaintext, chosen-ciphertext and differential attacks. The algorithm is sensitive to the initial key and changes the distribution of the pixel values of the image; the correlation of adjacent pixels is also eliminated. Compared with an algorithm based on the logistic map, it has higher complexity and better uniformity, and is nearer to a true random number generator. It is also efficient to implement, which shows its value for common use.
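
    For orientation, the sketch below shows the generic permutation-plus-diffusion structure such ciphers share, using the plain logistic map as the keystream generator (explicitly not the paper's Markov-property chaos or its coupled map lattice); the key value is arbitrary and the image is assumed to be 8-bit.

```python
import numpy as np

def logistic_stream(x0, n, mu=3.99, burn=1000):
    # Keystream from the plain logistic map; burn-in discards transients.
    x, out = x0, np.empty(n)
    for i in range(burn + n):
        x = mu * x * (1.0 - x)
        if i >= burn:
            out[i - burn] = x
    return out

def encrypt(img, key=0.618034):        # img: uint8 array; key in (0, 1)
    flat = img.ravel()
    ks = logistic_stream(key, flat.size)
    perm = np.argsort(ks)              # chaotic permutation stage
    stream = (ks * 256).astype(np.uint8)
    return (flat[perm] ^ stream).reshape(img.shape), perm

def decrypt(cipher, perm, key=0.618034):
    flat = cipher.ravel()
    ks = logistic_stream(key, flat.size)
    stream = (ks * 256).astype(np.uint8)
    plain = np.empty_like(flat)
    plain[perm] = flat ^ stream        # undo diffusion, then permutation
    return plain.reshape(cipher.shape)
```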

  12. Multispectral image pansharpening based on the contourlet transform

    Energy Technology Data Exchange (ETDEWEB)

    Amro, Israa; Mateos, Javier, E-mail: iamro@correo.ugr.e, E-mail: jmd@decsai.ugr.e [Departamento de Ciencias de la Computacion e I.A., Universidad de Granada, 18071 Granada (Spain)

    2010-02-01

    Pansharpening is a technique that fuses the information of a low resolution multispectral image (MS) and a high resolution panchromatic image (PAN), usually remote sensing images, to provide a high resolution multispectral image. In the literature, this task has been addressed from different points of view, wavelet-based algorithms being among the most popular. Recently, the contourlet transform has been proposed; it combines the advantages of the wavelet transform with a more efficient representation of directional information. In this paper we propose a new pansharpening method based on contourlets, compare it with its wavelet counterpart, and assess its performance numerically and visually.
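
    Contourlets have no mainstream Python implementation, so the sketch below shows the wavelet counterpart that the paper compares against: keep the approximation subband of the upsampled MS band and inject the PAN detail subbands. Function names and parameters are illustrative assumptions.

    ```python
    # Wavelet detail-injection pansharpening for one MS band (contourlet analogue).
    import numpy as np
    import pywt

    def wavelet_pansharpen_band(ms_band_up, pan, wavelet="db2", levels=2):
        """ms_band_up: one MS band already resampled to the PAN grid."""
        ms_coeffs = pywt.wavedec2(ms_band_up, wavelet, level=levels)
        pan_coeffs = pywt.wavedec2(pan, wavelet, level=levels)
        # MS approximation keeps the spectral content; PAN details add spatial detail.
        fused = [ms_coeffs[0]] + list(pan_coeffs[1:])
        return pywt.waverec2(fused, wavelet)
    ```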

  13. THE IMAGE REGISTRATION OF FOURIER-MELLIN BASED ON THE COMBINATION OF PROJECTION AND GRADIENT PREPROCESSING

    Directory of Open Access Journals (Sweden)

    D. Gao

    2017-09-01

    Image registration is one of the most important applications in the field of image processing. The Fourier-Mellin transform method is widely used, since it has the advantages of high precision and good robustness to changes in light and shade, partial occlusion, noise, and so on. However, this method cannot obtain a unique cross-power-spectrum pulse function for non-parallel image pairs, and for some image pairs no pulse can be obtained at all. In this paper, an image registration method based on the Fourier-Mellin transform with combined projection and gradient preprocessing is proposed. According to the projection conformation equation, the method calculates the image projection transformation matrix to correct the tilted image; then gradient preprocessing and the Fourier-Mellin transform are performed on the corrected image to obtain the registration parameters. The experimental results show that the method makes Fourier-Mellin registration applicable not only to parallel image pairs but also to non-parallel image pairs, and that it achieves a better registration effect.
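
    The cross-power-spectrum pulse at the heart of Fourier-Mellin registration comes from phase correlation; a minimal sketch of that core step (translation only, with the paper's preprocessing omitted) follows.

    ```python
    # Phase correlation: the normalized cross-power spectrum of two images yields
    # an impulse whose position is the translation between them.
    import numpy as np

    def phase_correlation(a, b):
        Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
        cross = Fa * np.conj(Fb)
        cross /= np.abs(cross) + 1e-12     # normalize to the cross-power spectrum
        corr = np.fft.ifft2(cross).real    # pulse at the translation offset
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        return dy, dx                      # offsets wrap around modulo image size
    ```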

  14. Wiener discrete cosine transform-based image filtering

    Science.gov (United States)

    Pogrebnyak, Oleksiy; Lukin, Vladimir V.

    2012-10-01

    A classical problem of additive white (spatially uncorrelated) Gaussian noise suppression in grayscale images is considered. The main attention is paid to discrete cosine transform (DCT)-based denoising, in particular, to image processing in blocks of limited size. The efficiency of DCT-based image filtering with hard thresholding is studied for different sizes of overlapped blocks. A multiscale approach that aggregates the outputs of DCT filters with different overlapped block sizes is proposed. Then, a two-stage denoising procedure is proposed and tested, applying multiscale DCT-based filtering with hard thresholding at the first stage and multiscale Wiener DCT-based filtering at the second stage. The efficiency of the proposed multiscale DCT-based filtering is compared to the state-of-the-art block-matching and 3D (BM3D) filter. Next, the potentially reachable multiscale filtering efficiency in terms of output mean square error (MSE) is studied. The obtained results are of the same order as those obtained by Chatterjee's approach based on nonlocal patch processing. It is shown that the potential of the ideal Wiener DCT-based filter is usually higher when the noise variance is high.
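
    A minimal sketch of the hard-thresholding step on one block follows; the multiscale aggregation and the Wiener second stage are omitted, and the threshold factor (about 2.7 sigma, a common choice in the DCT-denoising literature) is an assumption, not taken from the paper.

    ```python
    # DCT hard thresholding of a single image block.
    import numpy as np
    from scipy.fft import dctn, idctn

    def denoise_block(block, sigma):
        coeffs = dctn(block, norm="ortho")
        coeffs[np.abs(coeffs) < 2.7 * sigma] = 0.0  # zero small (noise) coefficients
        return idctn(coeffs, norm="ortho")
    ```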

  15. [PACS-based endoscope image acquisition workstation].

    Science.gov (United States)

    Liu, J B; Zhuang, T G

    2001-01-01

    A practical PACS-based Endoscope Image Acquisition Workstation is introduced here. Using a multimedia video card, the endoscope video is digitized and captured into the computer, either dynamically or as static frames. The workstation provides a variety of functions, such as acquisition and display of the endoscope video, as well as editing, processing, management, storage, printing, and communication of the related information. Together with other medical image workstations, it can serve as one of the image sources of a hospital PACS. In addition, it can also act as an independent endoscopy diagnostic system.
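
    As a rough stand-in for the workstation's capture step, the sketch below grabs one static frame from a video source with OpenCV; the original system used a dedicated multimedia video card, so this is only an analogy, and the device index is assumed.

    ```python
    # Grab and store a single frame from an attached video source (sketch only).
    import cv2

    cap = cv2.VideoCapture(0)              # endoscope video source (device 0 assumed)
    ok, frame = cap.read()                 # static capture of one frame
    if ok:
        cv2.imwrite("endoscope_frame.png", frame)
    cap.release()
    ```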

  16. Alignment of CT images of skull dysmorphology using anatomy-based perpendicular axes

    International Nuclear Information System (INIS)

    Yoo, Sun K; Kim, Yong O; Kim, Hee-Joung; Kim, Nam H; Jang, Young Beom; Kim, Kee-Deog; Lee, Hye-Yeon

    2003-01-01

    Rigid body registration of 3D CT scans, based on manual identification of homologous landmarks, is useful for the visual analysis of skull dysmorphology. In this paper, a robust and simple alignment method is proposed to allow the comparison of skull morphologies, within and between individuals with craniofacial anomalies, based on 3D CT scans and a minimum number of anatomical landmarks, under rigidity and uniqueness constraints. Three perpendicular axes, extracted from the anatomical landmarks, define an absolute coordinate system through a rigid body transformation, aligning multiple CT images across patients and acquisition times. The accuracy of the alignment method depends on the accuracy of the localized landmarks and target points; a numerical simulation establishes the accuracy requirements of the method. Experiments using a dried human skull specimen and ten sets of skull CT images (the pre- and post-operative scans of four plagiocephaly patients and one fibrous dysplasia patient) demonstrated the feasibility of the technique in clinical practice
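
    One plausible construction of three perpendicular axes from landmarks (the paper's specific landmark set is not given in this record) is orthogonalization via cross products, sketched below with illustrative names.

    ```python
    # Build an orthonormal frame from three landmarks and map points into it.
    import numpy as np

    def landmark_frame(p0, p1, p2):
        """p0: origin landmark; p1, p2: landmarks spanning the reference plane."""
        x = (p1 - p0) / np.linalg.norm(p1 - p0)
        z = np.cross(x, p2 - p0)
        z /= np.linalg.norm(z)             # normal to the landmark plane
        y = np.cross(z, x)                 # completes a right-handed frame
        return np.stack([x, y, z]), p0     # rows of R are the new axes

    def to_frame(points, R, origin):
        return (points - origin) @ R.T     # rigid body transform into the frame
    ```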

  17. Region-Based Color Image Indexing and Retrieval

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper a region-based color image indexing and retrieval algorithm is presented. As a basis for the indexing, a novel K-Means segmentation algorithm is used, modified so as to take into account the coherence of the regions. A new color distance is also defined for this algorithm. Based on ....... Experimental results demonstrate the performance of the algorithm. The development of an intelligent image content-based search engine for the World Wide Web is also presented, as a direct application of the presented algorithm....
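
    A sketch of K-means segmentation in a joint (color, position) feature space follows; appending weighted pixel coordinates is one common way to bias clusters toward spatially coherent regions, though it is not necessarily the authors' modification, and the weight is an assumption.

    ```python
    # K-means over color plus weighted position, encouraging coherent regions.
    import numpy as np
    from sklearn.cluster import KMeans

    def segment(img, k, spatial_weight=0.5):
        h, w, _ = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        feats = np.column_stack([
            img.reshape(-1, 3).astype(float),      # per-pixel color
            spatial_weight * yy.ravel()[:, None],  # weighted row coordinate
            spatial_weight * xx.ravel()[:, None],  # weighted column coordinate
        ])
        return KMeans(n_clusters=k, n_init=10).fit_predict(feats).reshape(h, w)
    ```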

  18. Tumor recognition in wireless capsule endoscopy images using textural features and SVM-based feature selection.

    Science.gov (United States)

    Li, Baopu; Meng, Max Q-H

    2012-05-01

    Tumors in the digestive tract are a common disease, and wireless capsule endoscopy (WCE) is a relatively new technology for examining diseases of the digestive tract, especially the small intestine. This paper addresses the problem of automatic tumor recognition in WCE images. A candidate color texture feature that integrates the uniform local binary pattern and the wavelet transform is proposed to characterize WCE images. The proposed features are invariant to illumination change and describe the multiresolution characteristics of WCE images. Two feature selection approaches based on the support vector machine, sequential forward floating selection and recursive feature elimination, are further employed to refine the proposed features and improve the detection accuracy. Extensive experiments validate that the proposed computer-aided diagnosis system achieves a promising tumor recognition accuracy of 92.4% on our collected WCE data.
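
    A sketch of the texture-feature and feature-selection pipeline follows: uniform LBP histograms refined by recursive feature elimination with a linear SVM. The wavelet component is omitted, and all parameters are assumptions, not taken from the paper.

    ```python
    # Uniform LBP histogram features, refined by SVM-based recursive elimination.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC
    from sklearn.feature_selection import RFE

    def lbp_histogram(gray, P=8, R=1.0):
        lbp = local_binary_pattern(gray, P, R, method="uniform")  # P + 2 codes
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    # X: one histogram per image, y: tumor / normal labels (hypothetical data)
    # selected = RFE(SVC(kernel="linear"), n_features_to_select=5).fit(X, y)
    ```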

  19. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

    Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images contain complicated information, methods that extract roads using spectral, texture, and linear features have certain limitations. Moreover, many methods need human intervention to obtain the road seeds (semi-automatic extraction), which makes them heavily human-dependent and inefficient. This paper proposes a road-extraction method that uses image segmentation based on the principle of local gray consistency, integrated with shape features. First, the image is segmented, and both linear and curved roads are obtained using several object shape features, rectifying methods that extract linear roads only. Second, road extraction is carried out based on region growing: the road seeds are selected automatically and the road network is extracted. Finally, the extracted roads are refined by combining edge information. In the experiments, images containing both roads with good gray uniformity and roads with poorly illuminated surfaces were chosen, and the results prove that the method of this study is promising.
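
    The region-growing step can be sketched as a breadth-first flood under a gray-consistency tolerance; the tolerance value and 4-connectivity below are assumptions, not the paper's settings.

    ```python
    # Seeded region growing under a local gray-consistency criterion.
    import numpy as np
    from collections import deque

    def region_grow(gray, seed, tol=10.0):
        h, w = gray.shape
        grown = np.zeros((h, w), dtype=bool)
        grown[seed] = True
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-neighbourhood
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and not grown[ny, nx]
                        and abs(float(gray[ny, nx]) - float(gray[y, x])) <= tol):
                    grown[ny, nx] = True
                    queue.append((ny, nx))
        return grown
    ```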

  20. Terrain reconstruction based on descent images for the Chang’e III landing area

    Directory of Open Access Journals (Sweden)

    Xu Xinchao

    2015-10-01

    A new method that combines image matching and shape from shading is proposed for terrain reconstruction, to address the lack of terrain data for the Chang'e III landing area. First, the reflection equation is established based on the Lommel-Seeliger reflection model. After edge extraction, the gradients of the points on the edge are solved, and the normal vectors of adjacent points are obtained using a smoothness constraint. The gradients of the remaining points in the image are then determined through evolution. The under-determination of the reflection equation is eliminated by taking the gradient as a constraint, so the normal vector of each point can be obtained by solving the reflection equation. The terrain, without coordinate information, is reconstructed by integrating the vector field. After using the scale-invariant feature transform (SIFT) to extract matching points in the descent images, the terrain is converted to a lander-centroid coordinate system. Experiments were carried out with MATLAB-simulated images, laboratory images, and descent images of Chang'e III. The results show that the proposed method performs better than the classical SFS algorithm and can provide a reference for other deep-space exploration activities.
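
    The SIFT matching step that ties the reconstruction to the descent image sequence can be sketched with OpenCV; the ratio-test threshold is a conventional choice, not necessarily the paper's.

    ```python
    # SIFT keypoint matching between two descent images with Lowe's ratio test.
    import cv2

    def match_descent_pair(img1, img2, ratio=0.75):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        pairs = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        # keep only distinctive correspondences
        return [m for m, n in pairs if m.distance < ratio * n.distance]
    ```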