WorldWideScience

Sample records for image fusion predicts

  1. Detecting Weather Radar Clutter by Information Fusion With Satellite Images and Numerical Weather Prediction Model Output

    DEFF Research Database (Denmark)

    Bøvith, Thomas; Nielsen, Allan Aasbjerg; Hansen, Lars Kai

    2006-01-01

    A method for detecting clutter in weather radar images by information fusion is presented. Radar data, satellite images, and output from a numerical weather prediction model are combined and the radar echoes are classified using supervised classification. The presented method uses indirect ... information on precipitation in the atmosphere from Meteosat-8 multispectral images and near-surface temperature estimates from the DMI-HIRLAM-S05 numerical weather prediction model. Alternatively, an operational nowcasting product called 'Precipitating Clouds' based on Meteosat-8 input is used. A scale...

  2. Remote sensing image fusion

    CERN Document Server

    Alparone, Luciano; Baronti, Stefano; Garzelli, Andrea

    2015-01-01

    A synthesis of more than ten years of experience, Remote Sensing Image Fusion covers methods specifically designed for remote sensing imagery. The authors supply a comprehensive classification system and rigorous mathematical description of advanced and state-of-the-art methods for pansharpening of multispectral images, fusion of hyperspectral and panchromatic images, and fusion of data from heterogeneous sensors such as optical and synthetic aperture radar (SAR) images and integration of thermal and visible/near-infrared images. They also explore new trends of signal/image processing, such as ...

  3. Medical Image Fusion

    Directory of Open Access Journals (Sweden)

    Mitra Rafizadeh

    2007-08-01

    Full Text Available Technological advances in medical imaging in the past two decades have enabled radiologists to create images of the human body with unprecedented resolution. MRI, PET,... imaging devices can quickly acquire 3D images. Image fusion establishes an anatomical correlation between corresponding images derived from different examinations. This fusion is applied either to combine images of different modalities (CT, MRI) or of a single modality (PET-PET). Image fusion is performed in two steps: 1) Registration: spatial modification (e.g., translation) of the model image relative to the reference image in order to arrive at an ideal matching of both images. Registration methods comprise feature-based and intensity-based approaches. 2) Visualization: the goal is to depict the spatial relationship between the model image and the reference image. A clinical application is found in nuclear medicine (PET/CT).
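
    As a concrete illustration of the intensity-based registration step described above, the following minimal Python sketch aligns a model image to a reference image by exhaustively searching integer translations that minimize the sum of squared differences. The function name, the search range and the translation-only motion model are illustrative assumptions, not the record's actual implementation.

        # Minimal intensity-based rigid registration sketch (translation only).
        import numpy as np

        def register_translation(reference, model, max_shift=10):
            """Exhaustive search over integer shifts minimizing the SSD."""
            best_shift, best_ssd = (0, 0), np.inf
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    # np.roll wraps around the borders; acceptable for a sketch.
                    shifted = np.roll(np.roll(model, dy, axis=0), dx, axis=1)
                    ssd = np.sum((reference.astype(float) - shifted.astype(float)) ** 2)
                    if ssd < best_ssd:
                        best_ssd, best_shift = ssd, (dy, dx)
            return best_shift  # apply with np.roll to align the model image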

  4. Iterative guided image fusion

    Directory of Open Access Journals (Sweden)

    Alexander Toet

    2016-08-01

    Full Text Available We propose a multi-scale image fusion scheme based on guided filtering. Guided filtering can effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small scale details while restoring larger scale edges. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at the decomposition and at the recombination stage of the multi-scale fusion process. First, size-selective iterative guided filtering is applied to decompose the source images into approximation and residual layers at multiple spatial scales. Then, frequency-tuned filtering is used to compute saliency maps at successive spatial scales. Next, at each spatial scale binary weighting maps are obtained as the pixelwise maximum of corresponding source saliency maps. Guided filtering of the binary weighting maps with their corresponding source images as guidance images serves to reduce noise and to restore spatial consistency. The final fused image is obtained as the weighted recombination of the individual residual layers and the mean of the approximation layers at the coarsest spatial scale. Application to multiband visual (intensified) and thermal infrared imagery demonstrates that the proposed method obtains state-of-the-art performance for the fusion of multispectral nightvision images. The method has a simple implementation and is computationally efficient.
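
    The role the guided filter plays in this scheme is easy to sketch. The Python fragment below is a single-scale simplification, not the authors' full iterative multi-scale method: a local-contrast saliency proxy yields a binary weighting map, which is smoothed by a classic guided filter (He et al.) before recombining two pre-registered grayscale sources in [0, 1]. The parameter values, and the use of a single guidance image for the weight map, are illustrative simplifications.

        # Single-scale sketch of guided-filter-based fusion.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def guided_filter(guide, src, radius=8, eps=1e-3):
            """Classic guided filter: edge-preserving smoothing of src steered by guide."""
            size = 2 * radius + 1
            mean_I = uniform_filter(guide, size)
            mean_p = uniform_filter(src, size)
            var_I = uniform_filter(guide * guide, size) - mean_I ** 2
            cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
            a = cov_Ip / (var_I + eps)
            b = mean_p - a * mean_I
            return uniform_filter(a, size) * guide + uniform_filter(b, size)

        def fuse(img_a, img_b, radius=8, eps=1e-3):
            # Saliency proxy: absolute deviation from a local mean (local contrast).
            sal_a = np.abs(img_a - uniform_filter(img_a, 31))
            sal_b = np.abs(img_b - uniform_filter(img_b, 31))
            w = (sal_a >= sal_b).astype(float)        # binary weighting map
            w = np.clip(guided_filter(img_a, w, radius, eps), 0.0, 1.0)
            return w * img_a + (1.0 - w) * img_b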

  5. A relaxed fusion of information from real and synthetic images to predict complex behavior

    Science.gov (United States)

    Lyons, Damian M.; Benjamin, D. Paul

    2011-05-01

    An important component of cognitive robotics is the ability to mentally simulate physical processes and to compare the expected results with the information reported by a robot's sensors. In previous work, we have proposed an approach that integrates a 3D game-engine simulation into the robot control architecture. A key part of that architecture is the Match-Mediated Difference (MMD) operation, an approach to fusing sensory data and synthetic predictions at the image level. The MMD operation insists that simulated and predicted scenes are similar in terms of the appearance of the objects in the scene. This is an overly restrictive constraint on the simulation, since parts of the predicted scene may not have been previously viewed by the robot. In this paper we propose an extended MMD operation that relaxes the constraint and allows the real and synthetic scenes to differ in some features but not in (selected) other features. We introduce image difference operations that allow a real image and a synthetic image generated from an arbitrarily colored graphical model of a scene to be compared: scenes with the same content show a zero difference, and for scenes with varying foreground objects the comparison can be restricted to the color, size and shape of the foreground.

  6. The role of imaging based prostate biopsy morphology in a data fusion paradigm for transducing prognostic predictions

    Science.gov (United States)

    Khan, Faisal M.; Kulikowski, Casimir A.

    2016-03-01

    A major focus area for precision medicine is in managing the treatment of newly diagnosed prostate cancer patients. For patients with a positive biopsy, clinicians aim to develop an individualized treatment plan based on a mechanistic understanding of the disease factors unique to each patient. Recently, there has been a movement towards a multi-modal view of the cancer through the fusion of quantitative information from multiple sources, imaging and otherwise. Simultaneously, there have been significant advances in machine learning methods for medical prognostics which integrate a multitude of predictive factors to develop an individualized risk assessment and prognosis for patients. An emerging area of research is in semi-supervised approaches which transduce the appropriate survival time for censored patients. In this work, we apply a novel semi-supervised approach for support vector regression to predict the prognosis for newly diagnosed prostate cancer patients. We integrate clinical characteristics of a patient's disease with imaging derived metrics for biomarker expression as well as glandular and nuclear morphology. In particular, our goal was to explore the performance of nuclear and glandular architecture within the transduction algorithm and assess their predictive power when compared with the Gleason score manually assigned by a pathologist. Our analysis in a multi-institutional cohort of 1027 patients indicates that not only do glandular and morphometric characteristics improve the predictive power of the semi-supervised transduction algorithm; they perform better when the pathological Gleason is absent. This work represents one of the first assessments of quantitative prostate biopsy architecture versus the Gleason grade in the context of a data fusion paradigm which leverages a semi-supervised approach for risk prognosis.

  7. COLOUR IMAGE REPRESENTATION OF MULTISPECTRAL IMAGE FUSION

    Directory of Open Access Journals (Sweden)

    Preema Mole

    2016-07-01

    Full Text Available The availability of imaging sensors operating in multiple spectral bands has led to the requirement of image fusion algorithms that combine the images from these sensors in an efficient way, to give an image that is more perceptible to the human eye. Multispectral image fusion is the process of combining images optically acquired in more than one spectral band. In this paper, we present a pixel-level image fusion that combines four images from four different spectral bands, namely near infrared (0.76-0.90 µm), mid infrared (1.55-1.75 µm), thermal infrared (10.4-12.5 µm) and mid infrared (2.08-2.35 µm), to give a composite colour image. The work coalesces a fusion technique involving a linear transformation, based on the Cholesky decomposition of the covariance matrix of the source data, that converts the grayscale multispectral source images into a colour image. The work is composed of different segments that include estimation of the covariance matrix of the images, Cholesky decomposition, and the transformation itself. Finally, the fused colour image is compared with the fused image obtained by the PCA transformation.
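
    A hedged Python sketch of the linear-algebra pipeline this record describes: estimate the band covariance, decorrelate the bands with the inverse Cholesky factor, and map three decorrelated components to RGB planes. The exact band-to-colour assignment of the paper is not reproduced here.

        # Cholesky-based colour fusion sketch for n >= 3 grayscale bands.
        import numpy as np

        def cholesky_colour_fusion(bands):
            """bands: (n_bands, H, W) array of co-registered grayscale images."""
            n, h, w = bands.shape
            X = bands.reshape(n, -1).astype(float)
            X -= X.mean(axis=1, keepdims=True)
            C = np.cov(X)                     # n x n covariance of the source bands
            L = np.linalg.cholesky(C)         # C = L @ L.T
            Y = np.linalg.solve(L, X)         # decorrelated components
            rgb = Y[:3].reshape(3, h, w)      # take three components as colour planes
            rgb -= rgb.min(axis=(1, 2), keepdims=True)
            rgb /= rgb.max(axis=(1, 2), keepdims=True) + 1e-9
            return np.transpose(rgb, (1, 2, 0))   # H x W x 3 colour image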

  8. Image fusion theories, techniques and applications

    CERN Document Server

    Mitchell, HB

    2010-01-01

    This text provides a comprehensive introduction to the theories, techniques and applications of image fusion. It examines in detail many real-life examples of image fusion, including panchromatic sharpening and ensemble color image segmentation.

  9. Hybrid ultrasound imaging techniques (fusion imaging).

    Science.gov (United States)

    Sandulescu, Daniela Larisa; Dumitrescu, Daniela; Rogoveanu, Ion; Saftoiu, Adrian

    2011-01-07

    Visualization of tumor angiogenesis can facilitate non-invasive evaluation of tumor vascular characteristics to supplement the conventional diagnostic imaging goals of depicting tumor location, size, and morphology. Hybrid imaging techniques combine anatomic [ultrasound, computed tomography (CT), and/or magnetic resonance imaging (MRI)] and molecular (single photon emission CT and positron emission tomography) imaging modalities. One example is real-time virtual sonography, which combines ultrasound (grayscale, colour Doppler, or dynamic contrast harmonic imaging) with contrast-enhanced CT/MRI. The benefits of fusion imaging include an increased diagnostic confidence, direct comparison of the lesions using different imaging modalities, more precise monitoring of interventional procedures, and reduced radiation exposure.

  10. An Ultrasound Image-Based Dynamic Fusion Modeling Method for Predicting the Quantitative Impact of In Vivo Liver Motion on Intraoperative HIFU Therapies: Investigations in a Porcine Model.

    Directory of Open Access Journals (Sweden)

    W Apoutou N'Djin

    Full Text Available Organ motion is a key component in the treatment of abdominal tumors by High Intensity Focused Ultrasound (HIFU), since it may influence the safety, efficacy and treatment time. Here we report the development in a porcine model of an Ultrasound (US) image-based dynamic fusion modeling method for predicting the effect of in vivo motion on intraoperative HIFU treatments performed in the liver in conjunction with surgery. A speckle tracking method was used on US images to quantify in vivo liver motions occurring intraoperatively during breathing and apnea. A fusion modeling of HIFU treatments was implemented by merging dynamic in vivo motion data into a numerical modeling of HIFU treatments. Two HIFU strategies were studied: a spherical focusing delivering 49 juxtapositions of 5-second HIFU exposures and a toroidal focusing using 1 single 40-second HIFU exposure. Liver motions during breathing were spatially homogeneous and could be approximated as a rigid motion mainly encountered in the cranial-caudal direction (f = 0.20 Hz, magnitude > 13 mm). Elastic liver motions due to cardiovascular activity, although negligible, were detectable near millimeter-wide suprahepatic veins (f = 0.96 Hz, magnitude ... 75%). Fusion modeling predictions were preliminarily validated in vivo and showed the potential of using a long-duration toroidal HIFU exposure to accelerate the ablation process during breathing (from 0.5 to 6 cm³·min⁻¹). To improve HIFU treatment control, dynamic fusion modeling may be interesting for numerically assessing focusing strategies and motion compensation techniques in more realistic conditions.

  11. Region-Based Image-Fusion Framework for Compressive Imaging

    Directory of Open Access Journals (Sweden)

    Yang Chen

    2014-01-01

    Full Text Available A novel region-based image-fusion framework for compressive imaging (CI) and its implementation scheme are proposed. Unlike previous works on conventional image fusion, we consider both the compression capability on the sensor side and intelligent understanding of the image contents in the image fusion. Firstly, the compressed sensing theory and normalized cut theory are introduced. Then the region-based image-fusion framework for compressive imaging is proposed and its corresponding fusion scheme is constructed. Experimental results demonstrate that the proposed scheme delivers superior performance over traditional compressive image-fusion schemes in terms of both objective metrics and visual quality.

  12. Hybrid ultrasound imaging techniques(fusion imaging)

    Institute of Scientific and Technical Information of China (English)

    Daniela Larisa Sandulescu; Daniela Dumitrescu; Ion Rogoveanu; Adrian Saftoiu

    2011-01-01

    Visualization of tumor angiogenesis can facilitate noninvasive evaluation of tumor vascular characteristics to supplement the conventional diagnostic imaging goals of depicting tumor location, size, and morphology. Hybrid imaging techniques combine anatomic [ultrasound, computed tomography (CT), and/or magnetic resonance imaging (MRI)] and molecular (single photon emission CT and positron emission tomography) imaging modalities. One example is real-time virtual sonography, which combines ultrasound (grayscale, colour Doppler, or dynamic contrast harmonic imaging) with contrast-enhanced CT/MRI. The benefits of fusion imaging include an increased diagnostic confidence, direct comparison of the lesions using different imaging modalities, more precise monitoring of interventional procedures, and reduced radiation exposure.

  13. IMAGE ENHANCEMENT USING IMAGE FUSION AND IMAGE PROCESSING TECHNIQUES

    OpenAIRE

    Arjun Nelikanti

    2015-01-01

    The principal objective of image enhancement is to process an image so that the result is more suitable than the original image for a specific application. Digital image enhancement techniques provide a multitude of choices for improving the visual quality of images. The appropriate choice of such techniques is greatly influenced by the imaging modality, the task at hand and the viewing conditions. This paper provides a combination of two concepts: image fusion by DWT and digital image processing techniques. The e...

  14. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to get the fused image. The simulations demonstrate that the combined sparsifying transforms achieve better results, in terms of both subjective visual effect and objective evaluation indexes, than a single sparsifying transform for compressive image fusion.

  15. Gradient-based compressive image fusion

    Institute of Scientific and Technical Information of China (English)

    Yang CHEN; Zheng QIN

    2015-01-01

    We present a novel image fusion scheme based on gradient and scrambled block Hadamard ensemble (SBHE) sampling for compressive sensing imaging. First, source images are compressed by compressive sensing, to facilitate the transmission of the sensor. In the fusion phase, the image gradient is calculated to reflect the abundance of its contour information. By compositing the gradient of each image, gradient-based weights are obtained, with which compressive sensing coefficients are achieved. Finally, inverse transformation is applied to the coefficients derived from fusion, and the fused image is obtained. Information entropy (IE), Xydeas's and Piella's metrics are applied as non-reference objective metrics to evaluate the fusion quality in line with different fusion schemes. In addition, different image fusion application scenarios are applied to explore the scenario adaptability of the proposed scheme. Simulation results demonstrate that the gradient-based scheme has the best performance, in terms of both subjective judgment and objective metrics. Furthermore, the gradient-based fusion scheme proposed in this paper can be applied in different fusion scenarios.
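
    The gradient-based weighting itself reduces to a few lines. The Python sketch below assumes each source image is available for the gradient computation and that y_a, y_b are its compressive measurements; the SBHE sampling operator and the final inverse transform are omitted, and all names are illustrative.

        # Gradient-based weighting of compressive measurements (sketch).
        import numpy as np

        def total_gradient(img):
            gy, gx = np.gradient(img.astype(float))
            return np.sum(np.hypot(gx, gy))   # total gradient magnitude

        def fuse_measurements(y_a, y_b, img_a, img_b):
            wa, wb = total_gradient(img_a), total_gradient(img_b)
            wa, wb = wa / (wa + wb), wb / (wa + wb)
            return wa * y_a + wb * y_b        # fused coefficients, to be inverted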

  16. Multisensor image fusion guidelines in remote sensing

    Science.gov (United States)

    Pohl, C.

    2016-04-01

    Remote sensing delivers multimodal and -temporal data from the Earth's surface. In order to cope with these multidimensional data sources and to make the most of them, image fusion is a valuable tool. It has developed over the past few decades into a usable image processing technique for extracting information of higher quality and reliability. As more sensors and advanced image fusion techniques have become available, researchers have conducted a vast amount of successful studies using image fusion. However, the definition of an appropriate workflow prior to processing the imagery requires knowledge in all related fields - i.e. remote sensing, image fusion and the desired image exploitation processing. From the findings of this research it can be seen that the choice of the appropriate technique, as well as the fine-tuning of the individual parameters of this technique, is crucial. There is still a lack of strategic guidelines due to the complexity and variability of data selection, processing techniques and applications. This paper gives an overview on the state-of-the-art in remote sensing image fusion including sensors and applications. Putting research results in image fusion from the past 15 years into a context provides a new view on the subject and helps other researchers to build their innovation on these findings. Recommendations of experts help to understand further needs to achieve feasible strategies in remote sensing image fusion.

  17. Image Pixel Fusion for Human Face Recognition

    CERN Document Server

    Bhowmik, Mrinal Kanti; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    In this paper we present a technique for fusion of optical and thermal face images based on an image pixel fusion approach. Out of several factors which affect face recognition performance in the case of visual images, illumination changes are a significant factor that needs to be addressed. Thermal images are better at handling illumination conditions but not very consistent in capturing texture details of the faces. Other factors like sunglasses, beard, moustache etc. also play an active role in adding complications to the recognition process. Fusion of thermal and visual images is a solution to overcome the drawbacks present in the individual thermal and visual face images. Here fused images are projected into an eigenspace and the projected images are classified using a radial basis function (RBF) neural network and also by a multi-layer perceptron (MLP). In the experiments the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database benchmark for thermal and visual face images has been used. Compar...

  18. Image fusion algorithm using nonsubsampled contourlet transform

    Science.gov (United States)

    Xiao, Yang; Cao, Zhiguo; Wang, Kai; Xu, Zhengxiang

    2007-11-01

    In this paper, a pixel-level image fusion algorithm based on the Nonsubsampled Contourlet Transform (NSCT) is proposed. Compared with the Contourlet Transform, NSCT is redundant, shift-invariant and more suitable for image fusion. Each image from the different sensors is decomposed into a low frequency image and a series of high frequency images of different directions by multi-scale NSCT. The low and high frequency images are fused based on local-contrast enhancement and definition, respectively. Finally, the fused image is reconstructed from the low and high frequency fused images. Experiments demonstrate that NSCT preserves edges significantly and that the fusion rule based on region segmentation performs well in local-contrast enhancement.

  19. Image Fusion Using PCA in CS Domain

    Directory of Open Access Journals (Sweden)

    M. T. Sadeghi

    2012-09-01

    Full Text Available Compressive sampling (CS), also called compressed sensing, has generated a tremendous amount of excitement in the image processing community. It provides an alternative to Shannon/Nyquist sampling when the signal under acquisition is known to be sparse or compressible. In this paper, we propose a new efficient image fusion method for compressed sensing imaging. In this method, we calculate the two-dimensional discrete cosine transform of multiple input images; the resulting measurements are multiplied with a sampling filter, so compressed images are obtained, and we take the inverse discrete cosine transform of them. Finally, the fused image is obtained from these results by using the PCA fusion method. This approach is also implemented for multi-focus and noisy images. Simulation results show that our method provides promising fusion performance in both visual comparison and comparison using objective measures. Moreover, because this method does not need the recovery process, the computational time is decreased very much.
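
    The PCA fusion rule used here is the classic covariance-eigenvector weighting. The Python sketch below applies it directly to two images for clarity (the paper applies it to DCT-domain compressed data) and assumes the principal eigenvector has same-sign components, as is typical for correlated sources.

        # Classic PCA fusion rule (sketch).
        import numpy as np

        def pca_fuse(img_a, img_b):
            X = np.stack([img_a.ravel(), img_b.ravel()]).astype(float)
            eigvals, eigvecs = np.linalg.eigh(np.cov(X))
            v = eigvecs[:, np.argmax(eigvals)]    # principal eigenvector
            w = v / v.sum()                       # normalise weights to sum to 1
            return w[0] * img_a + w[1] * img_b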

  20. Region-based multisensor image fusion method

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Image fusion should consider the prior knowledge of the source images to be fused, such as the characteristics of the images and the goal of image fusion; that is to say, knowledge about the input data and the application plays a crucial role. This paper is concerned with multiresolution (MR) image fusion. Considering the characteristics of the multisensor images (SAR, FLIR, etc.) and the goal of fusion, which is to achieve one image possessing both the contour features and the target region features, it seems more meaningful to combine features rather than pixels. A multisensor image fusion scheme based on K-means clustering and the steerable pyramid is presented. K-means clustering is used to segment out objects in FLIR images. The steerable pyramid is a multiresolution analysis method, which has a good property for extracting contour information at different scales. Comparisons are made with the relevant existing techniques in the literature. The paper concludes with some examples to illustrate the efficiency of the proposed scheme.

  1. Bayesian Fusion of Multi-Band Images

    CERN Document Server

    Wei, Qi; Tourneret, Jean-Yves

    2013-01-01

    In this paper, a Bayesian fusion technique for remotely sensed multi-band images is presented. The observed images are related to the high spectral and high spatial resolution image to be recovered through physical degradations, e.g., spatial and spectral blurring and/or subsampling defined by the sensor characteristics. The fusion problem is formulated within a Bayesian estimation framework. An appropriate prior distribution exploiting geometrical consideration is introduced. To compute the Bayesian estimator of the scene of interest from its posterior distribution, a Markov chain Monte Carlo algorithm is designed to generate samples asymptotically distributed according to the target distribution. To efficiently sample from this high-dimension distribution, a Hamiltonian Monte Carlo step is introduced in the Gibbs sampling strategy. The efficiency of the proposed fusion method is evaluated with respect to several state-of-the-art fusion techniques. In particular, low spatial resolution hyperspectral and mult...

  2. Image Fusion Techniques for Multispectral Palm Image Enhancement

    OpenAIRE

    Rajashree Bhokare; Deepali Sale; Dr. (Mrs.) M. A. Joshi; Dr. M. S. Gaikwad

    2013-01-01

    We propose multispectral image enhancement through image fusion, combining data from multiple spectral bands to address the problem of accuracy, make the system robust against spoofing, and improve recognition accuracy by using more discriminating palm images. Palm line features are clearer in the blue and green bands, while the red band can reveal some palm vein structure. The NIR band can show the palm vein structure as well as partial line information. Image fusion improve...

  3. Slantlet Transform for Multispectral Image Fusion

    Directory of Open Access Journals (Sweden)

    Adnan H.M. Al-Helali

    2009-01-01

    Full Text Available Problem statement: Image fusion is a process by which multispectral and panchromatic images, or some of their features, are combined together to form a high spatial/high spectral resolution image. The successful fusion of images acquired from different modalities or instruments is an issue of great importance in remote sensing applications. Approach: A new method of image fusion was introduced. It was based on a hybrid transform, which is an extension of the Ridgelet transform: it used the slantlet transform instead of the wavelet transform in the final steps of the Ridgelet transform. The slantlet transform is an orthogonal discrete wavelet transform with two zero moments and with improved time localization. Results: Since edges and noise play a fundamental role in image understanding, this hybrid transform proved to be a good way to enhance the edges and reduce the noise. Conclusion: The proposed method of fusion presented richer information in the spatial and spectral domains simultaneously and reached an optimal fusion result.

  4. Image fusion techniques in permanent seed implantation

    Directory of Open Access Journals (Sweden)

    Alfredo Polo

    2010-10-01

    Full Text Available Over the last twenty years major software and hardware developments in brachytherapy treatment planning, intraoperative navigation and dose delivery have been made. Image-guided brachytherapy has emerged as the ultimate conformal radiation therapy, allowing precise dose deposition on small volumes under direct image visualization. In this process imaging plays a central role and novel imaging techniques are being developed (PET, MRI-MRS and power Doppler US imaging are among them), creating a new paradigm (dose-guided brachytherapy), where imaging is used to map the exact coordinates of the tumour cells and to guide applicator insertion to the correct position. Each of these modalities has limitations in providing all of the physical and geometric information required for the brachytherapy workflow. Therefore, image fusion can be used as a solution in order to take full advantage of the information from each modality in treatment planning, intraoperative navigation, dose delivery, verification and follow-up of interstitial irradiation. Image fusion, understood as the visualization of any morphological volume (i.e. US, CT, MRI) together with an additional second morphological volume (i.e. CT, MRI) or functional dataset (functional MRI, SPECT, PET), is a well known method for treatment planning, verification and follow-up of interstitial irradiation. The term image fusion is used when multiple patient image datasets are registered and overlaid or merged to provide additional information. Fused images may be created from multiple images from the same imaging modality taken at different moments (multi-temporal approach), or by combining information from multiple modalities. Quality means that the fused images should provide additional information to the brachytherapy process (diagnosis and staging, treatment planning, intraoperative imaging, treatment delivery and follow-up) that cannot be obtained in other ways. In this review I will focus on the role of

  5. Model-based satellite image fusion

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Sveinsson, J. R.; Nielsen, Allan Aasbjerg

    2008-01-01

    A method is proposed for pixel-level satellite image fusion derived directly from a model of the imaging sensor. By design, the proposed method is spectrally consistent. It is argued that the proposed method needs regularization, as is the case for any method for this problem. A framework for pixel...

  6. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    Science.gov (United States)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, since the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of corresponding poles of each subband. Then, an iterative algorithm which aims to minimize the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method gets better fusion results at low SNR.

  7. Multispectral Image Enhancement Through Adaptive Wavelet Fusion

    Science.gov (United States)

    2017-02-08

    ...decompose the source images into base and detail layers at multiple levels of resolution. Then, frequency-tuned filtering is used to compute saliency... obtains state-of-the-art performance for the fusion of multispectral night vision images. The method has a simple implementation and is computationally...

  8. Image Fusion for Travel Time Tomography Inversion

    Directory of Open Access Journals (Sweden)

    Liu Linan

    2015-09-01

    Full Text Available Travel-time tomography has achieved wide application; the hinge of tomography is the inversion algorithm, and the ray-path tracing technology has a great impact on the inversion results. In order to improve the SNR of the inversion image, inversion results obtained with different ray-tracing methods can be used together. We present an image fusion method based on an improved Wilkinson iteration method. Firstly, the shortest path method and linear travel-time interpolation are used for the forward calculation; then the improved Wilkinson iteration method is combined with a super-relaxation preconditioning method to reduce the condition number of the matrix and accelerate the iteration, and the precise integration method is used to solve the inverse matrix more precisely in the tomography inversion process; finally, wavelet transform is used for image fusion to obtain the final image. Therefore, the ill-conditioned linear equations are changed into an iterative normal system through these two treatments, and fusing images from different forward algorithms reduces the effect of measurement error on imaging. Simulation results show that this method can effectively eliminate artifacts in the images, and it has extensive practical significance.

  9. Color image fusion for concealed weapon detection

    Science.gov (United States)

    Toet, Alexander

    2003-09-01

    Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the non-literal nature of these images. Especially for dynamic crowd surveillance purposes it may be impossible to rapidly assess with certainty which individual in the crowd is the one carrying the observed weapon. Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery can be fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of this image. The result is a natural looking color image that fluently combines all details from both input sources. When an observer who performs a dynamic crowd surveillance task detects a weapon in the scene, he will also be able to quickly determine which person in the crowd is actually carrying the observed weapon (e.g. "the man with the red T-shirt and blue jeans"). The method is illustrated by the fusion of thermal 8-12 μm imagery with standard RGB color images.
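
    A simplified way to transfer contrast details without altering the colour distribution is to inject high-pass detail into the luminance channel only, leaving chrominance untouched. The Python sketch below illustrates this idea in YCrCb space; it is a rough stand-in for the paper's colour-transfer method, and the gain and blur parameters are illustrative assumptions.

        # Luminance-only detail injection sketch (uint8 RGB + registered IR image).
        import cv2
        import numpy as np

        def inject_ir_detail(rgb, ir, blur_sigma=5.0, gain=0.5):
            ycrcb = cv2.cvtColor(rgb, cv2.COLOR_RGB2YCrCb).astype(np.float32)
            ir = ir.astype(np.float32)
            detail = ir - cv2.GaussianBlur(ir, (0, 0), blur_sigma)  # high-pass IR
            ycrcb[..., 0] = np.clip(ycrcb[..., 0] + gain * detail, 0, 255)
            return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2RGB)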

  10. Histology image search using multimodal fusion.

    Science.gov (United States)

    Caicedo, Juan C; Vanegas, Jorge A; Páez, Fabian; González, Fabio A

    2014-10-01

    This work proposes a histology image indexing strategy based on multimodal representations obtained from the combination of visual features and associated semantic annotations. Both data modalities are complementary information sources for an image retrieval system, since visual features lack explicit semantic information and semantic terms do not usually describe the visual appearance of images. The paper proposes a novel strategy to build a fused image representation using matrix factorization algorithms and data reconstruction principles to generate a set of multimodal features. The methodology can seamlessly recover the multimodal representation of images without semantic annotations, allowing us to index new images using visual features only, and also accepting single example images as queries. Experimental evaluations on three different histology image data sets show that our strategy is a simple, yet effective approach to building multimodal representations for histology image search, and outperforms the response of the popular late fusion approach to combine information.

  11. Application of image fusion techniques in DSA

    Science.gov (United States)

    Ye, Feng; Wu, Jian; Cui, Zhiming; Xu, Jing

    2007-12-01

    Digital subtraction angiography (DSA) is an important technology in both medical diagnosis and interventional therapy, which can eliminate the interfering background and give prominence to blood vessels by computer processing. After contrast material is injected into an artery or vein, a physician produces fluoroscopic images. Using these digitized images, a computer subtracts the image made with contrast material from a series of post-injection images made without background information. By analyzing the characteristics of DSA medical images, this paper provides an image fusion solution tailored to the application of DSA subtraction. We fuse the angiogram and subtraction images in order to obtain a new image which carries more information. The image fused by wavelet transform can display the blood vessels and background information clearly, and medical experts gave its effect a high score.

  12. Multispectral image fusion based on fractal features

    Science.gov (United States)

    Tian, Jie; Chen, Jie; Zhang, Chunhua

    2004-01-01

    Imaging sensors have become an indispensable part of detection and recognition systems. They are widely used in the fields of surveillance, navigation, control and guidance, etc. However, different imaging sensors depend on diverse imaging mechanisms and work within diverse ranges of the spectrum. They also perform diverse functions and have diverse environmental requirements. So it is impractical to accomplish the task of detection or recognition with a single imaging sensor under the conditions of different circumstances, different backgrounds and different targets. Fortunately, the multi-sensor image fusion technique has emerged as an important route to solve this problem, and image fusion has become one of the main technical routines used to detect and recognize objects from images. However, loss of information is unavoidable during the fusion process, so a very important issue in image fusion is how to preserve the useful information to the utmost; that is to say, before designing a fusion scheme it should be considered how to avoid the loss of useful information or how to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems actually amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aiming at the recognition of battlefield targets in complicated backgrounds. According to this algorithm, source images are first orthogonally decomposed according to wavelet transform theory, and then fractal-based detection is applied to each decomposed image. At this step, natural background and man-made targets are distinguished by use of fractal models that can well imitate natural objects. Special fusion operators are employed during the fusion of areas that contain man-made targets, so that useful information is preserved and target features are emphasized. The final fused image is reconstructed from the

  13. The IHS Transformations Based Image Fusion

    CERN Document Server

    Al-Wassai, Firouz Abdullah; Al-Zuky, Ali A

    2011-01-01

    The IHS sharpening technique is one of the most commonly used techniques for sharpening. Different transformations have been developed to transfer a color image from the RGB space to the IHS space. Through the literature, it appears that various scientists have proposed alternative IHS transformations; many papers report good results whereas others show bad ones, often without stating which formula of the IHS transformation was used. In addition, many papers show different formulas for the transformation matrix of the IHS transformation. This leads to confusion: what is the exact formula of the IHS transformation? Therefore, the main purpose of this work is to explore different IHS transformation techniques and to evaluate them for IHS-based image fusion. The image fusion performance was evaluated, in this study, using various methods to quantitatively estimate the quality and degree of information improvement of a fused image.
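
    As a concrete point of reference, the sketch below implements one common variant, the additive "fast" IHS fusion with I = (R+G+B)/3, in Python; it is only one of the several transformations the paper compares, not its preferred formula.

        # Fast additive IHS pansharpening sketch (float images in [0, 1]).
        import numpy as np

        def fast_ihs_pansharpen(rgb, pan):
            """rgb: H x W x 3 multispectral image upsampled to the PAN grid."""
            intensity = rgb.mean(axis=2)          # I = (R + G + B) / 3
            delta = pan - intensity               # spatial detail missing from MS
            return np.clip(rgb + delta[..., None], 0.0, 1.0)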

  14. Spectrally Consistent Satellite Image Fusion with Improved Image Priors

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Aanæs, Henrik; Jensen, Thomas B.S.;

    2006-01-01

    Here an improvement to our previous framework for satellite image fusion is presented: a framework purely based on the sensor physics and on prior assumptions on the fused image. The contributions of this paper are twofold. Firstly, a method for ensuring 100% spectral consistency is proposed, even when more sophisticated image priors are applied. Secondly, a better image prior is introduced, via data-dependent image smoothing.

  15. Investigation of Image Fusion Between High-Resolution Image and Multi-spectral Image

    Institute of Scientific and Technical Information of China (English)

    LI Pingxiang; WANG Zhijun

    2003-01-01

    On the basis of a thorough understanding of the physical characteristics of remote sensing images, this paper employs the theories of wavelet transform and signal sampling to develop a new image fusion algorithm. The algorithm has been successfully applied to the fusion of SPOT PAN and TM images of Guangdong province, China. The experimental results show that a perfect image fusion can be built up by using the image analytical solution and reconstruction in the image frequency domain, based on the physical characteristics of the image formation. The method has demonstrated that the results of the image fusion do not change the spectral characteristics of the original image.

  16. Image fusion for dynamic contrast enhanced magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Leach Martin O

    2004-10-01

    Full Text Available Abstract Background Multivariate imaging techniques such as dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI have been shown to provide valuable information for medical diagnosis. Even though these techniques provide new information, integrating and evaluating the much wider range of information is a challenging task for the human observer. This task may be assisted with the use of image fusion algorithms. Methods In this paper, image fusion based on Kernel Principal Component Analysis (KPCA is proposed for the first time. It is demonstrated that a priori knowledge about the data domain can be easily incorporated into the parametrisation of the KPCA, leading to task-oriented visualisations of the multivariate data. The results of the fusion process are compared with those of the well-known and established standard linear Principal Component Analysis (PCA by means of temporal sequences of 3D MRI volumes from six patients who took part in a breast cancer screening study. Results The PCA and KPCA algorithms are able to integrate information from a sequence of MRI volumes into informative gray value or colour images. By incorporating a priori knowledge, the fusion process can be automated and optimised in order to visualise suspicious lesions with high contrast to normal tissue. Conclusion Our machine learning based image fusion approach maps the full signal space of a temporal DCE-MRI sequence to a single meaningful visualisation with good tissue/lesion contrast and thus supports the radiologist during manual image evaluation.
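
    A minimal sketch of the KPCA fusion step, assuming a registered (T, H, W) temporal series: each voxel's T-point signal curve is one sample, and the first kernel principal component becomes the fused gray-value image. Fitting on a random voxel subset is a practical shortcut, since the kernel matrix grows quadratically with sample count; the RBF kernel and gamma here are illustrative, not the paper's task-oriented parametrisation.

        # KPCA fusion of a temporal DCE-MRI series (sketch).
        import numpy as np
        from sklearn.decomposition import KernelPCA

        def kpca_fuse(series, n_fit=2000, gamma=1.0, seed=0):
            t, h, w = series.shape
            X = series.reshape(t, -1).T                 # voxels x time points
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(X), size=min(n_fit, len(X)), replace=False)
            kpca = KernelPCA(n_components=1, kernel="rbf", gamma=gamma).fit(X[idx])
            return kpca.transform(X)[:, 0].reshape(h, w)  # 1st kernel component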

  17. Performance Evaluation of Image Fusion Based on Discrete Cosine Transform

    Directory of Open Access Journals (Sweden)

    Ramkrishna Patil

    2013-05-01

    Full Text Available The discrete cosine transform (DCT) is used for fusion of two different images and for image compression. Image fusion deals with creating an image by combining portions from other images, to obtain an image in which all of the objects are in focus. Two multi-focus images are used for image fusion. Different fusion algorithms are used and their performance is evaluated using metrics such as PSNR, SSIM, spatial frequency, quality index, structural content and mean absolute error. Fusion performance is not good when using block sizes smaller than 64x64, nor with a block size of 512x512. Contrast-, amplitude- and energy-based image fusion algorithms performed well, and the fused images are comparable with the reference image. Only the image size is considered; the blurring percentage is not considered. These algorithms are very simple and might be suitable for real-time applications.
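
    A typical energy-based block-DCT fusion rule can be sketched in a few lines: for each block, keep the version whose AC energy (a sharpness proxy) is larger. The 64x64 block size follows the observation above; the specific rule is an illustrative reconstruction, not one of the exact evaluated algorithms.

        # Block-DCT AC-energy fusion sketch for two registered multi-focus images.
        import numpy as np
        from scipy.fft import dctn

        def dct_ac_energy(block):
            coeffs = dctn(block.astype(float), norm="ortho")
            coeffs[0, 0] = 0.0                    # drop the DC term
            return np.sum(coeffs ** 2)

        def dct_block_fuse(img_a, img_b, bs=64):
            out = np.empty_like(img_a)
            for i in range(0, img_a.shape[0], bs):
                for j in range(0, img_a.shape[1], bs):
                    a = img_a[i:i + bs, j:j + bs]
                    b = img_b[i:i + bs, j:j + bs]
                    out[i:i + bs, j:j + bs] = a if dct_ac_energy(a) >= dct_ac_energy(b) else b
            return out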

  18. Multiresolution image fusion scheme based on fuzzy region feature

    Institute of Scientific and Technical Information of China (English)

    LIU Gang; JING Zhong-liang; SUN Shao-yuan

    2006-01-01

    This paper proposes a novel region-based image fusion scheme based on multiresolution analysis. The low frequency band of the image multiresolution representation is segmented into important regions, sub-important regions and background regions. Each region's features are used to determine its degree of membership in the multiresolution representation, and then to achieve the multiresolution representation of the fusion result. The final image fusion result is obtained by using the inverse multiresolution transform. Experiments showed that the proposed image fusion method achieves better performance than existing image fusion methods.

  19. Fusion Method for Remote Sensing Image Based on Fuzzy Integral

    Directory of Open Access Journals (Sweden)

    Hui Zhou

    2014-01-01

    Full Text Available This paper presents an image fusion method based on the fuzzy integral, integrating spectral information and two single-factor indexes of spatial resolution, in order to retain spectral information and spatial resolution information in the fusion of multispectral and high-resolution remote sensing images. Firstly, wavelet decomposition is carried out on the two images to obtain their wavelet decomposition coefficients, keeping the low frequency coefficients of the multispectral image; then optimized fusion is carried out on the high frequency parts of the two images based on weighting coefficients to generate the new fused image. Finally, the fused image is evaluated with indexes including the correlation coefficient, image mean, standard deviation, distortion degree, information entropy, and so forth. The test results show that this method integrates multispectral information and high spatial resolution information in a better way, and that it is an effective fusion method for remote sensing images.

  20. Multimodal image fusion with SIMS: Preprocessing with image registration.

    Science.gov (United States)

    Tarolli, Jay Gage; Bloom, Anna; Winograd, Nicholas

    2016-06-14

    In order to utilize complementary imaging techniques to supply higher resolution data for fusion with secondary ion mass spectrometry (SIMS) chemical images, there are a number of aspects that, if not given proper consideration, could produce results which are easy to misinterpret. One of the most critical aspects is that the two input images must be of the same exact analysis area. With the desire to explore new higher resolution data sources that exists outside of the mass spectrometer, this requirement becomes even more important. To ensure that two input images are of the same region, an implementation of the insight segmentation and registration toolkit (ITK) was developed to act as a preprocessing step before performing image fusion. This implementation of ITK allows for several degrees of movement between two input images to be accounted for, including translation, rotation, and scale transforms. First, the implementation was confirmed to accurately register two multimodal images by supplying a known transform. Once validated, two model systems, a copper mesh grid and a group of RAW 264.7 cells, were used to demonstrate the use of the ITK implementation to register a SIMS image with a microscopy image for the purpose of performing image fusion.

  1. Comparative Analysis of Various Image Fusion Techniques For Biomedical Images: A Review

    Directory of Open Access Journals (Sweden)

    Nayera Nahvi

    2014-05-01

    Full Text Available Image fusion is a process of combining the relevant information from a set of images into a single image, wherein the resultant fused image is more informative and complete than any of the input images. This paper discusses the implementation of the DWT technique on different images to make a fused image having more information content. As DWT is a more recent technique for image fusion than simple image fusion and pyramid-based image fusion, we implement DWT as the image fusion technique in this paper. Other methods such as Principal Component Analysis (PCA) based fusion, Intensity Hue Saturation (IHS) transform based fusion and high pass filtering methods are also discussed. A new algorithm is proposed using the discrete wavelet transform and different fusion techniques, including pixel averaging, min-max and max-min methods, for medical image fusion.
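
    The fusion rules listed above are straightforward to sketch with PyWavelets: approximation coefficients are averaged (pixel averaging) and detail coefficients are selected by maximum absolute value. The wavelet choice and decomposition level here are illustrative assumptions.

        # DWT fusion sketch: average approximations, max-abs details.
        import numpy as np
        import pywt

        def dwt_fuse(img_a, img_b, wavelet="db2", level=2):
            ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
            cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
            fused = [(ca[0] + cb[0]) / 2.0]            # average approximations
            for da, db in zip(ca[1:], cb[1:]):         # per-level detail tuples
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in zip(da, db)))
            return pywt.waverec2(fused, wavelet)       # may be 1 px larger; crop if needed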

  2. Fuzzy Methods and Image Fusion in a Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Jaroslav Vlach

    2012-01-01

    Full Text Available Although the basics of image processing were laid down more than 50 years ago, significant development occurred mainly in the last 25 years with the arrival of personal computers, and today's problems are already very sophisticated and fast-moving. This article is a contribution to the study of the use of fuzzy logic methods and image fusion for image processing using LabVIEW tools for quality management, in this case especially in the jewelry industry.

  3. Multispectral and panchromatic image fusion based on nonsubsampled contourlet transform

    Science.gov (United States)

    Liu, Hui; Yuan, Yan; Su, Lijuan; Hu, Liang; Zhang, Siyuan

    2013-12-01

    In order to achieve a high-resolution multispectral image, we propose an algorithm for MS and PAN image fusion based on the NSCT and an improved fusion rule. This method takes into account two aspects: the spectral similarity between the fused image and the original MS image, and enhancement of the spatial resolution of the fused image. The local spectral similarity between the MS and PAN images helps to select the high frequency detail coefficients from the PAN image, which are then injected into the MS image. Thus, spectral distortion is limited and the spatial resolution is enhanced. The experimental results demonstrate that the proposed fusion algorithm yields improvements in integrating MS and PAN images.

  4. An analysis of fusion algorithms for LWIR and visual images

    CSIR Research Space (South Africa)

    De Villiers, J

    2013-12-01

    Full Text Available ...fusion, he modified the red channel of the input image with the corresponding pixel value from the LWIR image. Li et al. [6] used a channel-based fusion method similar to Zheng's; however, they also changed the colour space of the image to YCbCr... and weighted those values using the LWIR value, and then modified the fused image to look similar to a separate sample image. This work uses pixel-level fusion, since the fastest possible fusion method was required. Since the input visual images are colour each...

  5. Multisensor image fusion techniques in remote sensing

    Science.gov (United States)

    Ehlers, Manfred

    Current and future remote sensing programs such as Landsat, SPOT, MOS, ERS, JERS, and the space platform's Earth Observing System (Eos) are based on a variety of imaging sensors that will provide timely and repetitive multisensor earth observation data on a global scale. Visible, infrared and microwave images of high spatial and spectral resolution will eventually be available for all parts of the earth. It is essential that efficient processing techniques be developed to cope with the large multisensor data volumes. This paper discusses data fusion techniques that have proved successful for synergistic merging of SPOT HRV, Landsat TM and SIR-B images. It is demonstrated that these techniques can be used to improve rectification accuracies, to depict greater cartographic detail, and to enhance spatial resolution in multisensor image data sets.

  6. Background Extraction Using Random Walk Image Fusion.

    Science.gov (United States)

    Hua, Kai-Lung; Wang, Hong-Cyuan; Yeh, Chih-Hsiang; Cheng, Wen-Huang; Lai, Yu-Chi

    2016-12-23

    It is important to extract a clear background for computer vision and augmented reality. Generally, background extraction assumes the existence of a clean background shot throughout the input sequence, but realistically, situations such as highway traffic videos may violate this assumption. Therefore, our probabilistic model-based method formulates fusion of candidate background patches of the input sequence as a random walk problem and seeks a globally optimal solution based on their temporal and spatial relationship. Furthermore, we also design two quality measures that consider spatial and temporal coherence and contrast distinctness among pixels as the basis for background selection. A static background should have high temporal coherence among frames, and thus we improve our fusion precision with a temporal contrast filter and an optical-flow-based motionless patch extractor. Experiments demonstrate that our algorithm can successfully extract artifact-free background images at low computational cost when compared to state-of-the-art algorithms.

  7. FUSION OF MULTI FOCUSED IMAGES USING HDWT FOR MACHINE VISION

    Directory of Open Access Journals (Sweden)

    S. Arumuga Perumal

    2011-10-01

    Full Text Available During image acquisition in machine vision, due to the limited depth of field of the lens, it is possible to take a clear image only of the objects in the scene which are in focus; the remaining objects in the scene will be out of focus. A possible solution for obtaining clear images of all objects in the scene is image fusion. Image fusion is a process of combining multiple images to form a composite image with extended information content. This paper uses the three-band expansive higher density discrete wavelet transform to fuse two images focusing on different objects in the same scene, and also proposes three methods for image fusion. Experimental results on multi-focus image fusion are presented in terms of root mean square error, peak signal-to-noise ratio and quality index to illustrate the proposed fusion methods.

  8. A survey of infrared and visual image fusion methods

    Science.gov (United States)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Hai, Jinjin; He, Kangjian

    2017-09-01

    Infrared (IR) and visual (VI) image fusion is designed to fuse multiple source images into a comprehensive image, to boost imaging quality and reduce redundant information, and is widely used in various imaging equipment to improve the visual ability of humans and robots. The accurate, reliable and complementary descriptions of the scene in fused images make these techniques widely used in various fields. In recent years, a large number of fusion methods for IR and VI images have been proposed due to ever-growing demands and the progress of image representation methods; however, no integrated survey of this field has been published in the last several years. Therefore, we survey the algorithmic developments of IR and VI image fusion. In this paper, we first characterize IR and VI image fusion based applications to give an overview of the research status. Then we present a synthesized survey of the state of the art. Thirdly, the frequently used image fusion quality measures are introduced. Fourthly, we perform experiments on typical methods and make the corresponding analysis. At last, we summarize the corresponding tendencies and challenges in IR and VI image fusion. This survey concludes that although various IR and VI image fusion methods have been proposed, there still exist further improvements or potential research directions in different applications of IR and VI image fusion.

  9. Multispectral image filtering method based on image fusion

    Science.gov (United States)

    Zhang, Wei; Chen, Wei

    2015-12-01

    This paper proposes a novel filter scheme based on image fusion using the Nonsubsampled Contourlet Transform (NSCT) for multispectral images. Firstly, an adaptive median filter is proposed which shows great advantages in speed and weak-edge preservation. Secondly, the algorithm applies the bilateral filter and the adaptive median filter to the image respectively and gets two denoised images. NSCT multi-scale decomposition is then performed on the denoised images to obtain detail sub-bands and approximation sub-bands. Thirdly, the detail sub-bands and approximation sub-bands are fused respectively. Finally, the object image is obtained by the inverse NSCT. Simulation results show that the method has strong adaptability in dealing with textural images, and that it can suppress noise effectively and preserve image details. This algorithm has better filtering performance than the standard bilateral filter and median filter and their improved variants at different noise ratios.

  10. ISAR imaging based on sparse subbands fusion

    Science.gov (United States)

    Li, Gang; Tian, Biao; Xu, Shiyou; Chen, Zengping

    2015-12-01

    Data fusion using subbands, which can obtain a higher range resolution without altering the bandwidth, hardware, and sampling rate of the radar system, has attracted more and more attention in recent years. A method of ISAR imaging based on subbands fusion and high precision parameter estimation of geometrical theory of diffraction (GTD) model is presented in this paper. To resolve the incoherence problem in subbands data, a coherent processing method is adopted. Based on an all-pole model, the phase difference of pole and scattering coefficient between each sub-band is used to effectively estimate the incoherent components. After coherent processing, the high and low frequency sub-band data can be expressed as a uniform all-pole model. The gapped-data amplitude and phase estimation (GAPES) algorithm is used to fill up the gapped band. Finally, fusion data is gained by high precision parameter estimation of GTD-all-pole model with full-band data, such as scattering center number, scattering center type and amplitude. The experimental results of simulated data show the validity of the algorithm.

  11. Color Multifocus Image Fusion Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    S. Savić

    2013-11-01

    Full Text Available In this paper, a recently proposed grayscale multifocus image fusion method based on the first level of empirical mode decomposition (EMD) is extended to color images. In addition, the paper addresses low-contrast multifocus image fusion. The major advantages of the proposed methods are simplicity, absence of artifacts, and control of contrast, properties that other pyramidal multifocus fusion methods lack. The efficiency of the proposed method is tested subjectively and with a vector-gradient-based objective measure proposed in this paper for multifocus color image fusion. Subjective analysis on a multifocus image dataset shows its superiority over the existing EMD- and DWT-based methods, and the objective measures of grayscale and color image fusion score significantly better for this method than for the classic complex EMD fusion method.

  12. Comparative performance of image fusion methodologies in eddy current testing

    Directory of Open Access Journals (Sweden)

    S. Thirunavukkarasu

    2012-12-01

    Full Text Available Image fusion methodologies have been studied for improving defect detectability in eddy current nondestructive testing (NDT). Pixel-level image fusion was performed on C-scan eddy current images of a sub-surface defect acquired at two different frequencies. Multi-resolution Laplacian pyramid and wavelet fusion methodologies, statistical-inference-based Bayesian fusion, and principal component analysis (PCA) based fusion were studied with a view to improving defect detectability, and their performance was compared using image metrics such as SNR and entropy. The Bayesian methodology showed the best performance, with a 33.75 dB improvement in SNR and an improvement of 3.22 in entropy.
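
    For the PCA branch of the comparison, a minimal sketch of pixel-level PCA fusion follows, assuming two registered grayscale C-scan images as NumPy arrays; the textbook weighting shown here may differ from the paper's exact implementation.

```python
import numpy as np

def pca_fuse(img1, img2):
    """Pixel-level PCA fusion of two registered grayscale images.

    The leading eigenvector of the 2x2 covariance of the two sources
    gives each image a weight proportional to its variance contribution.
    """
    x = np.stack([img1.ravel(), img2.ravel()]).astype(np.float64)
    eigvals, eigvecs = np.linalg.eigh(np.cov(x))
    pc = np.abs(eigvecs[:, np.argmax(eigvals)])   # leading eigenvector
    w = pc / pc.sum()                             # normalised fusion weights
    return w[0] * img1 + w[1] * img2
```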

  13. PCNN-Based Image Fusion in Compressed Domain

    Directory of Open Access Journals (Sweden)

    Yang Chen

    2015-01-01

    Full Text Available This paper addresses a novel image fusion method for different application scenarios, employing compressive sensing (CS) as the image sparse representation method and a pulse-coupled neural network (PCNN) as the fusion rule. First, the source images are compressed through the scrambled block Hadamard ensemble (SBHE), chosen for its compression capability and computational simplicity on the sensor side. The local standard deviation is used as the stimulus of the PCNN, and coefficients whose neurons fire most often are selected as the fusion coefficients in the compressed domain. The fusion coefficients are smoothed by a sliding window to avoid blocking effects. Experimental results demonstrate that the proposed method outperforms other fusion methods in the compressed domain and is effective and adaptive in different image fusion applications.
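
    The sketch below shows a deliberately simplified PCNN of the kind described above: the local standard deviation of the coefficients drives the neurons, and the source whose neuron fires more often contributes the fused coefficient. All constants (linking strength, threshold decay, linking kernel) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, generic_filter

def pcnn_firing_times(stimulus, iterations=30, beta=0.2,
                      alpha_theta=0.2, v_theta=20.0):
    """Firing counts of a simplified PCNN driven by `stimulus`.

    Linking input comes from the 3x3 neighbourhood of the previous output;
    the dynamic threshold decays each step and jumps where a neuron fires.
    """
    w = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    y = np.zeros_like(stimulus)
    theta = np.ones_like(stimulus)
    fires = np.zeros_like(stimulus)
    for _ in range(iterations):
        link = convolve(y, w, mode='constant')
        u = stimulus * (1.0 + beta * link)     # internal activity
        y = (u > theta).astype(float)          # spike where activity wins
        theta = np.exp(-alpha_theta) * theta + v_theta * y
        fires += y
    return fires

def pcnn_select(c1, c2):
    # Local standard deviation of each coefficient map drives the PCNN;
    # the source whose neuron fires more often supplies the coefficient.
    s1 = generic_filter(c1.astype(float), np.std, size=3)
    s2 = generic_filter(c2.astype(float), np.std, size=3)
    return np.where(pcnn_firing_times(s1) >= pcnn_firing_times(s2), c1, c2)
```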

  14. Efficient x-ray image enhancement algorithm using image fusion.

    Science.gov (United States)

    Shen, Kuan; Wen, Yumei; Cai, Yufang

    2009-01-01

    Multiresolution analysis (MRA) plays an important role in image and signal processing because it can extract information at different scales. Image fusion combines two or more images into one, extracting features from the source images and providing more information than any single image. The research presented in this article aims at an automated enhancement system for digital radiography (DR) images that can clearly display all defects in one image without introducing blocking artifacts. In the proposed scheme, according to the characteristics of the collected radiographic signals, subsections of the signal range are mapped to the 0-255 gray scale to form several gray images, which are then fused into a new enhanced image. The article focuses on comparing the discriminating power of several multiresolution decomposition methods, using the contrast pyramid, wavelet, and ridgelet, respectively. The algorithms are extensively tested and the results are compared with standard image enhancement algorithms. Tests indicate that the fused images present a more detailed representation of the x-ray image, so detection, recognition, and search tasks may benefit from this approach.
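
    A minimal sketch of the subsection-to-gray-scale mapping step, assuming a 16-bit radiographic signal; the window boundaries shown are hypothetical, and the subsequent multiresolution fusion stage is omitted.

```python
import numpy as np

def window_to_gray(signal16, lo, hi):
    """Map the intensity subsection [lo, hi] of a 16-bit radiograph
    linearly onto the 0-255 gray scale, saturating outside the window."""
    x = np.clip(signal16.astype(np.float64), lo, hi)
    return ((x - lo) / float(hi - lo) * 255.0).astype(np.uint8)

# Hypothetical overlapping windows covering the dynamic range; the
# resulting gray images would then feed the multiresolution fusion stage.
# grays = [window_to_gray(raw, lo, hi)
#          for lo, hi in [(0, 16000), (12000, 40000), (36000, 65535)]]
```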

  15. A NOVEL REGION FEATURE USED IN MULTISENSOR IMAGE FUSION

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A new region feature that emphasizes the salience of a target region and its neighbors is proposed. In a region-segmentation-based multisensor image fusion scheme, the feature can be extracted from each segmented region to determine its fusion weight. Experimental results demonstrate that the proposed feature has a wide application scope and provides more information for each region; it can be used not only in image fusion but also in other image processing applications.

  16. Query Specific Rank Fusion for Image Retrieval.

    Science.gov (United States)

    Zhang, Shaoting; Yang, Ming; Cour, Timothee; Yu, Kai; Metaxas, Dimitris N

    2015-04-01

    Recently, two lines of image retrieval algorithms have demonstrated excellent scalability: 1) local features indexed by a vocabulary tree, and 2) holistic features indexed by compact hashing codes. Although both can find visually similar images effectively, their retrieval precision may vary dramatically among queries, so combining the two types of methods is expected to further enhance retrieval precision. However, their feature characteristics and algorithmic procedures are dramatically different, which makes feature-level fusion very challenging. This motivates us to investigate how to fuse the ordered retrieval sets, i.e., the ranks of images, given by multiple retrieval methods to boost retrieval precision without sacrificing scalability. In this paper, we model retrieval ranks as graphs of candidate images and propose a graph-based query-specific fusion approach in which multiple graphs are merged and reranked by conducting link analysis on the fused graph. The retrieval quality of an individual method is measured on the fly by assessing the consistency of the top candidates' nearest neighborhoods; the approach is thus capable of adaptively integrating the strengths of retrieval methods using local or holistic features for different query images. The proposed method needs no supervision, has few parameters, and is easy to implement. Extensive and thorough experiments have been conducted on four public datasets: UKbench, Corel-5K, Holidays, and the large-scale San Francisco Landmarks dataset. Our method achieves very competitive performance, including state-of-the-art results on several datasets, e.g., an N-S score of 3.83 on UKbench.
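
    A crude sketch of the graph-based rank fusion idea, assuming each retrieval method returns an ordered list of image identifiers; PageRank via networkx stands in for the paper's link analysis, and the edge weighting and the omitted consistency-based quality measure are simplified assumptions.

```python
import networkx as nx

def fuse_ranks(rank_lists, k=10):
    """Merge several ranked candidate lists into one graph and rerank.

    Each list links the query to its top-k candidates and consecutive
    candidates to each other; PageRank on the merged graph plays the
    role of the link analysis producing the fused ranking.
    """
    g = nx.Graph()
    for ranks in rank_lists:
        top = ranks[:k]
        for i, img in enumerate(top):
            g.add_edge('QUERY', img, weight=1.0 / (i + 1))
            if i + 1 < len(top):
                g.add_edge(img, top[i + 1], weight=1.0 / (i + 1))
    scores = nx.pagerank(g, weight='weight')
    scores.pop('QUERY', None)
    return sorted(scores, key=scores.get, reverse=True)
```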

  17. Adaptive fusion of infrared and visible images in dynamic scene

    Science.gov (United States)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multiple-modality sensor fusion has been widely employed in surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet, and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges in visible and infrared image fusion is automatically determining an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast, adaptive, feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are treated as potential target locations; the area surrounding the targets is then segmented as the background region. Image fusion is applied locally to the selected target and background regions by computing different linear combinations of the color components of the registered visible and infrared images. Histogram distributions are computed on these local fusion images as the fusion feature set, and the variance ratio, a measure based on linear discriminant analysis (LDA), is employed to sort the feature set and select the most discriminative feature for fusing the whole image. Because the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments were conducted on the OSU Color-Thermal database and the TNO Human Factors dataset; the fusion results indicate that our proposed method achieves performance competitive with other fusion algorithms at a relatively low computational cost.

  18. Joint Multi-Focus Fusion and Bayer Image Restoration

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    In this paper, a joint multifocus image fusion and Bayer pattern image restoration algorithm for the raw images of single-sensor color imaging devices is proposed. Unlike traditional fusion schemes, the raw Bayer pattern images are fused before color restoration, so the Bayer image restoration operation is performed only once and the proposed algorithm is more efficient than traditional schemes. In detail, a clarity measurement is defined for raw Bayer pattern images, and the fusion operator works on superpixels, which provide powerful grouping cues for local image features. The raw images are merged with a refined weight map to obtain the fused Bayer pattern image, which is then restored by a demosaicing algorithm to produce the full-resolution color image. Experimental results demonstrate that the proposed algorithm obtains better fused results, with a more natural appearance and fewer artifacts, than the traditional algorithms.

  19. Optimal image-fusion method based on nonsubsampled contourlet transform

    Science.gov (United States)

    Dou, Jianfang; Li, Jianxun

    2012-10-01

    The optimization of image fusion is researched. Based on the properties of the nonsubsampled contourlet transform (NSCT), shift invariance and multiscale, multidirectional expansion, the fusion parameters of the multiscale decomposition scheme are optimized. To meet the requirement of feedback optimization, a new image fusion quality metric, the image quality index normalized edge association (IQI-NEA), is built, and a polynomial model is adopted to establish the relationship between the IQI-NEA metric and the number of decomposition levels. The optimal fusion includes four steps. First, the source images are decomposed in the NSCT domain for several given levels. Second, principal component analysis is adopted to fuse the low-frequency coefficients and the maximum rule is used to fuse the high-frequency coefficients, and the fused result is reconstructed from the obtained fused coefficients. Third, the IQI-NEA metric is calculated for the source and fused images. Finally, the optimal fused image and optimal level are obtained from the extremum of the polynomial function. Visual and statistical results show that the proposed method improves fusion performance compared with existing fusion schemes, in terms of both visual effect and quantitative fusion evaluation indexes.

  20. Image fusion via nonlocal sparse K-SVD dictionary learning.

    Science.gov (United States)

    Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang

    2016-03-01

    Image fusion aims to merge two or more images of the same scene, captured by various sensors, to construct a more informative image by integrating their details. Generally, such integration is achieved by manipulating the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, this paper proposes an approach to image fusion based on a novel dictionary learning scheme. The nonlocal self-similarity of the images is exploited, not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the nonlocal self-similarity property is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K-times singular value decomposition commonly used in the literature), abbreviated NL_SK_SVD. The NL_SK_SVD dictionary is then applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated on different types of images and compared with a number of alternative image fusion techniques; the superior fused images it produces demonstrate the efficacy of the NL_SK_SVD dictionary for sparse image representation.

  1. Classifier Fusion for Sonar Image Classification (Fusion de classifieurs pour la classification d'images sonar)

    CERN Document Server

    Martin, Arnaud

    2008-01-01

    We present in this paper high-level information fusion approaches applicable to numeric and symbolic data, and we analyse the interest of such methods, particularly for classifier fusion. A comparative study is presented for seabed characterization from sonar images. Recognizing the kind of sediment in sonar images is a hard problem because of the complexity of the data. We compare high-level information fusion approaches and show the benefit obtained.

  2. Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation

    Science.gov (United States)

    Song, Huihui

    Remote sensing provides good measurements for monitoring and analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technology and the growing requirements for remote sensing data in a vast range of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, and swath width, due to hardware limitations and budget constraints. To increase the spatial resolution of data while keeping its other good properties, one cost-effective solution is to explore data integration methods that fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of the available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and with spectral resolution, respectively, based on sparse representation theory. Taking the study case of Landsat ETM+ (spatial resolution 30 m, temporal resolution 16 days) and MODIS (spatial resolution 250 m to 1 km, daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods that combine the fine spatial information of the Landsat image with the daily temporal resolution of the MODIS image. Motivated by the fact that images from these two sensors are comparable in corresponding bands, we propose to link their spatial information through an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from its MODIS counterpart on the prediction date. To learn the spatial details from the prior images, a redundant dictionary is used to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat

  3. Geophysical data fusion for subsurface imaging

    Science.gov (United States)

    Hoekstra, P.; Vandergraft, J.; Blohm, M.; Porter, D.

    1993-08-01

    A geophysical data fusion methodology is under development to combine data from complementary geophysical sensors and incorporate geophysical understanding to obtain three-dimensional images of the subsurface. The research reported here is the first phase of a three-phase project. The project focuses on the characterization of thin clay lenses (aquitards) in a highly stratified sand-and-clay coastal geology to depths of up to 300 feet. The sensor suite used in this work includes time-domain electromagnetic induction (TDEM) and near-surface seismic techniques. During this first phase of the project, enhancements to the acquisition and processing of TDEM data were studied, using simulated data, to assess improvements in the detection of thin clay layers. Second, studies were made of the use of compressional-wave and shear-wave seismic reflection data acquired with state-of-the-art high-frequency vibrator technology. Finally, a newly developed processing technique, called 'data fusion', was implemented to process the geophysical data and to incorporate a mathematical model of the subsurface strata. Examples are given of the results when applied to real seismic data collected at Hanford, WA, and to simulated data based on the geology of the Savannah River Site.

  4. A framework of region-based dynamic image fusion

    Institute of Scientific and Technical Information of China (English)

    WANG Zhong-hua; QIN Zheng; LIU Yu

    2007-01-01

    A new framework for region-based dynamic image fusion is proposed. First, target detection is applied to the dynamic images (image sequences) to segment them into target and background regions. Different fusion rules are then employed in the different regions so that target information is preserved as much as possible. In addition, a steerable non-separable wavelet frame transform is used for the multi-resolution analysis, so the system achieves the favorable properties of orientation selectivity and shift invariance. Experimental results show that, compared with other image fusion methods, the proposed method has better target recognition capability while preserving clear background information.

  5. Multi-sensor image fusion and its applications

    CERN Document Server

    Blum, Rick S

    2005-01-01

    Taking another lesson from nature, the latest advances in image processing technology seek to combine image data from several diverse types of sensors in order to obtain a more accurate view of the scene: very much the same as we rely on our five senses. Multi-Sensor Image Fusion and Its Applications is the first text dedicated to the theory and practice of the registration and fusion of image data, covering such approaches as statistical methods, color-related techniques, model-based methods, and visual information display strategies.After a review of state-of-the-art image fusion techniques,

  6. Real-time image fusion involving diagnostic ultrasound

    DEFF Research Database (Denmark)

    Ewertsen, Caroline; Săftoiu, Adrian; Gruionu, Lucian G;

    2013-01-01

    The aim of our article is to give an overview of the current and future possibilities of real-time image fusion involving ultrasound. We present a review of the existing English-language peer-reviewed literature assessing this technique, which covers technical solutions (for ultrasound ... and endoscopic ultrasound), image fusion in several anatomic regions, and electromagnetic needle tracking...

  7. Hyperspectral remote sensing image classification based on decision level fusion

    Institute of Scientific and Technical Information of China (English)

    Peijun Du; Wei Zhang; Junshi Xia

    2011-01-01

    To apply decision level fusion to hyperspectral remote sensing (HRS) image classification, three decision level fusion strategies are experimented on and compared, namely, the linear consensus algorithm, improved evidence theory, and the proposed support vector machine (SVM) combiner. To evaluate the effects of the input features on classification performance, four schemes are used to organize input features for the member classifiers. In the experiment, using the operational modular imaging spectrometer (OMIS) II HRS image, decision level fusion is shown to be an effective way of improving the classification accuracy of the HRS image, and the proposed SVM combiner is especially suitable for decision level fusion. The results also indicate that optimization of the input features can improve the classification performance.
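
    A minimal sketch of an SVM combiner for decision-level fusion, assuming each member classifier outputs per-class probabilities; the stacking design below is a generic one, not necessarily the paper's exact architecture.

```python
import numpy as np
from sklearn.svm import SVC

def train_svm_combiner(member_probas, labels):
    """member_probas: list of (n_samples, n_classes) probability arrays,
    one per member classifier; the combiner learns from them jointly."""
    combiner = SVC(kernel='rbf', C=1.0)
    combiner.fit(np.hstack(member_probas), labels)
    return combiner

def combine(combiner, member_probas):
    # Decision-level fusion: predict from the stacked member outputs.
    return combiner.predict(np.hstack(member_probas))
```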

  8. A Novel Image Fusion Method Based on FRFT-NSCT

    Directory of Open Access Journals (Sweden)

    Peiguang Wang

    2013-01-01

    The fused image is obtained by performing the inverse NSCT and inverse FRFT on the combined coefficients. Three image modalities and three fusion rules are demonstrated in the test of the proposed algorithm. Simulation results show that the proposed fusion approach outperforms the NSCT-based methods with the same parameters.

  9. Infrared and visible images fusion based on RPCA and NSCT

    Science.gov (United States)

    Fu, Zhizhong; Wang, Xue; Xu, Jin; Zhou, Ning; Zhao, Yufei

    2016-07-01

    Current infrared and visible image fusion algorithms cannot efficiently extract object information from the infrared image while retaining background information from the visible image. To address this issue, we propose a new infrared and visible image fusion algorithm that takes advantage of robust principal component analysis (RPCA) and the non-subsampled contourlet transform (NSCT). First, RPCA decomposition is performed on the infrared and visible images to obtain their sparse matrices, which represent the sparse features of the images well. Second, the infrared and visible images are decomposed into low-frequency and high-frequency sub-band coefficients using the NSCT. The sparse matrices are then used to guide the fusion rules for the low-frequency and high-frequency sub-band coefficients. Experimental results demonstrate that our fusion algorithm highlights the infrared objects while retaining the background information of the visible image.

  10. Performance measure for image fusion considering region information

    Institute of Scientific and Technical Information of China (English)

    LIU Gang; LÜ Xue-qin

    2007-01-01

    An objective performance measure for image fusion considering region information is proposed. The measure not only reflects how much pixel-level information the fused image takes from the source images, but also considers the region information shared between the source images and the fused image. The measure is explicit and easy to interpret. Several simulations show that it accords well with subjective evaluations.

  11. A novel statistical fusion rule for image fusion and its comparison in non subsampled contourlet transform domain and wavelet domain

    OpenAIRE

    Manu V T; Philomina Simon

    2012-01-01

    Image fusion produces a single fused image from a set of input images. A new method for image fusion is proposed based on the weighted average merging method (WAMM) in the nonsubsampled contourlet transform (NSCT) domain. The performance of various statistical fusion rules is also analysed in both the NSCT and wavelet domains. The analysis covers medical images, remote sensing images, and multifocus images. Experimental results show that the proposed WAMM method obtained better resu...

  12. A new assessment method for image fusion quality

    Science.gov (United States)

    Li, Liu; Jiang, Wanying; Li, Jing; Yuchi, Ming; Ding, Mingyue; Zhang, Xuming

    2013-03-01

    Image fusion quality assessment plays a critically important role in medical imaging. Many assessment methods have been proposed to evaluate fusion quality effectively, including mutual information (MI), root mean square error (RMSE), and the universal image quality index (UIQI); however, these methods do not reflect human visual inspection well. To address this problem, we propose a novel image fusion assessment method that combines the nonsubsampled contourlet transform (NSCT) with regional mutual information. In the proposed method, the source medical images are first decomposed into different levels by the NSCT. The maximum NSCT coefficients of the decomposed directional images at each level are then used to compute the regional mutual information (RMI), and a multi-channel RMI is computed as the weighted sum of the RMI values at the various NSCT levels. The advantage of the proposed method lies in the fact that the NSCT represents image information at multiple directions and scales and therefore conforms to the multi-channel characteristic of the human visual system, leading to outstanding assessment performance. Experimental results using CT and MRI images demonstrate that the proposed method outperforms MI- and UIQI-based measures in evaluating image fusion quality and provides results consistent with human visual assessment.
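
    The regional mutual information at the heart of the measure reduces to the standard MI computed on coefficient regions; a minimal NumPy sketch follows, with the bin count an arbitrary assumption.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information between two coefficient regions (flattened)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of the first region
    py = pxy.sum(axis=0, keepdims=True)   # marginal of the second region
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```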

  13. A novel image fusion method using WBCT and PCA

    Institute of Scientific and Technical Information of China (English)

    Qiguang Miao; Baoshu Wang

    2008-01-01

    A novel image fusion algorithm based on the wavelet-based contourlet transform (WBCT) and principal component analysis (PCA) is proposed. The PCA method is adopted for the low-frequency components. For the high-frequency components, the algorithm chooses the larger of the activity measures and performs a region consistency test. Experiments show that the proposed method preserves edge and texture information better than the wavelet transform and Laplacian pyramid (LP) methods in image fusion. Four indicators of the fused image are given to compare the proposed method with other methods.

  14. Multifocus Image Fusion in Q-Shift DTCWT Domain Using Various Fusion Rules

    Directory of Open Access Journals (Sweden)

    Yingzhong Tian

    2016-01-01

    Full Text Available Multifocus image fusion integrates a partially focused image sequence into one image that is in focus everywhere; multiple methods have been proposed in past decades. The dual-tree complex wavelet transform (DTCWT) is one of the most precise, eliminating two main defects of the discrete wavelet transform (DWT); the Q-shift DTCWT was proposed afterwards to simplify the construction of the DTCWT filters, producing better fusion results. A different image fusion strategy based on the Q-shift DTCWT is presented in this work. In this strategy, each image is first decomposed into low- and high-frequency coefficients, which are fused using different rules; various fusion rules, such as the neighborhood variant maximum selectivity (NVMS) and the sum-modified-Laplacian (SML), are innovatively combined in the Q-shift DTCWT. Finally, the fused coefficients are extracted from the source images and reconstructed to produce one fully focused image. The strategy is verified visually and quantitatively against several existing fusion methods in extensive experiments and yields good results on both standard and microscopic images. We therefore conclude that the NVMS rule performs better than the others under the Q-shift DTCWT.
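
    The sum-modified-Laplacian rule mentioned above has a compact closed form; the sketch below implements the standard SML focus measure that such schemes apply to subband coefficients, with the window size an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, window=3):
    """Sum-modified-Laplacian focus measure:
    ML = |2I - I_left - I_right| + |2I - I_up - I_down|,
    summed over a small window around each position."""
    i = img.astype(np.float64)
    ml = (np.abs(2 * i - np.roll(i, 1, axis=1) - np.roll(i, -1, axis=1)) +
          np.abs(2 * i - np.roll(i, 1, axis=0) - np.roll(i, -1, axis=0)))
    return uniform_filter(ml, size=window) * window * window  # box sum
```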

  15. Polar Fusion Technique Analysis for Evaluating the Performances of Image Fusion of Thermal and Visual Images for Human Face Recognition

    CERN Document Server

    Bhowmik, Mrinal Kanti; Basu, Dipak Kumar; Nasipuri, Mita

    2011-01-01

    This paper presents a comparative study of two methods based on fusion and polar transformation of visual and thermal images. The investigation addresses the challenges of face recognition, which include pose variations, changes in facial expression, partial occlusions, variations in illumination, rotation through different angles, and changes in scale. To overcome these obstacles, two fusion techniques were implemented and thoroughly examined through rigorous experimentation. In the first method, the log-polar transformation is applied to the fused images obtained after fusing the visual and thermal images, whereas in the second method fusion is applied to the log-polar-transformed individual visual and thermal images. After this step, principal component analysis (PCA) is applied to the fused images, obtained in one form or the other, to reduce their dimensionality. Log-polar-transformed images can handle the complications introduced by scaling and rotation. The main objec...

  16. Current trends in medical image registration and fusion

    Directory of Open Access Journals (Sweden)

    Fatma El-Zahraa Ahmed El-Gamal

    2016-03-01

    Full Text Available Recently, medical image registration and fusion have come to be regarded as valuable assistants for medical experts. Their role arises from their ability to help experts in diagnosis, in following the evolution of diseases, and in deciding the necessary therapies for the patient's condition. The aim of this paper is therefore to review medical image registration as well as medical image fusion. The paper presents a description of the common diagnostic imaging modalities along with the main characteristics of each, and it illustrates the best-known toolkits developed to support work with the registration and fusion processes. Finally, the paper presents the current challenges in medical image registration and fusion, illustrating the recent diseases and disorders that have been addressed through such analysis.

  17. Multimodality Image Fusion-Guided Procedures: Technique, Accuracy, and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Abi-Jaoudeh, Nadine, E-mail: naj@mail.nih.gov [National Institutes of Health, Radiology and Imaging Sciences (United States); Kruecker, Jochen, E-mail: jochen.kruecker@philips.com [Philips Research North America (United States); Kadoury, Samuel, E-mail: samuel.kadoury@polymtl.ca [Ecole Polytechnique de Montreal, Department of Computer and Software Engineering, Institute of Biomedical Engineering (Canada); Kobeiter, Hicham, E-mail: hicham.kobeiter@gmail.com [CHU Henri Mondor, UPEC, Departments of Radiology and Imagerie Médicale (France); Venkatesan, Aradhana M., E-mail: VenkatesanA@cc.nih.gov; Levy, Elliot, E-mail: levyeb@cc.nih.gov; Wood, Bradford J., E-mail: bwood@cc.nih.gov [National Institutes of Health, Radiology and Imaging Sciences (United States)

    2012-10-15

    Personalized therapies play an increasingly critical role in cancer care: Image guidance with multimodality image fusion facilitates the targeting of specific tissue for tissue characterization and plays a role in drug discovery and optimization of tailored therapies. Positron-emission tomography (PET), magnetic resonance imaging (MRI), and contrast-enhanced computed tomography (CT) may offer additional information not otherwise available to the operator during minimally invasive image-guided procedures, such as biopsy and ablation. With use of multimodality image fusion for image-guided interventions, navigation with advanced modalities does not require the physical presence of the PET, MRI, or CT imaging system. Several commercially available methods of image-fusion and device navigation are reviewed along with an explanation of common tracking hardware and software. An overview of current clinical applications for multimodality navigation is provided.

  18. The optimal algorithm for Multi-source RS image fusion.

    Science.gov (United States)

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to solve the problem that the fusion rules of available methods cannot be self-adaptively adjusted to the subsequent processing requirements of remote sensing (RS) images, this paper puts forward GSDA (genetic iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with those of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. It then designs the objective function as a weighted sum of evaluation indices and optimizes it with GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows. • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion. • The article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules. • The text proposes the model operator and the observation operator as the GSDA-based fusion scheme for RS images. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  19. Fusion of colour and monochromatic images with edge emphasis

    Directory of Open Access Journals (Sweden)

    Rade M. Pavlović

    2014-02-01

    Full Text Available We propose a novel method to fuse true colour images with monochromatic non-visible-range images that seeks to encode important structural information from the monochromatic images efficiently while preserving the natural appearance of the available true chromaticity information. We utilise the β colour-opponency channel of the lαβ colour space as the domain in which information from the monochromatic input is fused into the colour input by means of robust grayscale fusion. This is followed by an effective gradient-structure visualisation step that enhances the visibility of monochromatic information in the final colour-fused image. Images fused using this method preserve their natural appearance and chromaticity better than those produced by conventional methods while at the same time clearly encoding structural information from the monochromatic input. This is demonstrated on a number of well-known true-colour fusion examples and confirmed by the results of subjective trials on data from several colour fusion scenarios. Introduction: The goal of image fusion can be broadly defined as the representation of the visual information contained in a number of input images in a single fused image without distortion or loss of information. In practice, however, representing all available information from multiple inputs in a single image is almost impossible, and fusion is generally a data-reduction task. One of the sensors usually provides a true colour image that by definition has all of its data dimensions already populated by spatial and chromatic information. Fusing such images with information from monochromatic inputs in a conventional manner can severely affect the natural appearance of the fused image. This is a difficult problem and partly the reason why colour fusion has received only a fraction of the attention given to the better-behaved grayscale fusion, even long after colour sensors became widespread. Fusion method: Humans tend to see colours as contrasts between opponent
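
    A sketch of the RGB-to-lαβ conversion that the method relies on, using the matrices popularized by Ruderman et al. and Reinhard et al.; the paper's exact constants and the subsequent grayscale fusion into the β channel are not reproduced here.

```python
import numpy as np

# RGB -> LMS matrix popularised by Ruderman et al. / Reinhard et al.
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])

def rgb_to_lalphabeta(rgb):
    """rgb: float array (h, w, 3) with values in (0, 1].

    Returns the achromatic l plane and the two opponency planes; beta is
    the blue-yellow channel into which monochromatic detail is fused."""
    lms = np.log10(np.maximum(np.einsum('ij,hwj->hwi', RGB2LMS, rgb), 1e-6))
    L, M, S = lms[..., 0], lms[..., 1], lms[..., 2]
    l = (L + M + S) / np.sqrt(3.0)
    alpha = (L + M - 2.0 * S) / np.sqrt(6.0)
    beta = (L - M) / np.sqrt(2.0)
    return l, alpha, beta
```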

  20. Multi-sensor image fusion using discrete wavelet frame transform

    Institute of Scientific and Technical Information of China (English)

    Zhenhua Li(李振华); Zhongliang Jing(敬忠良); Shaoyuan Sun(孙韶媛)

    2004-01-01

    An algorithm is presented for multi-sensor image fusion using the discrete wavelet frame transform (DWFT). The source images to be fused are first decomposed by the DWFT; the fusion process then combines the source coefficients. Before fusion, image segmentation is performed on each source image to obtain its region representation, and the salience of each region is calculated. By overlapping the region representations of all the source images, a shared region representation is produced that labels all the input images, and the fusion process is guided by it. A region match measure between the source images is calculated for each region in the shared representation: similar regions are fused by weighted averaging, while dissimilar regions are fused by selection. Experimental results using real data show that the proposed algorithm outperforms the traditional pyramid-transform-based and discrete wavelet transform (DWT) based algorithms in multi-sensor image fusion.

  1. Image fusion with nonsubsampled contourlet transform and sparse representation

    Science.gov (United States)

    Wang, Jun; Peng, Jinye; Feng, Xiaoyi; He, Guiqing; Wu, Jun; Yan, Kun

    2013-10-01

    Image fusion combines several images of the same scene into a fused image that contains all the important information. Multiscale transforms and sparse representation can solve this problem effectively. However, due to the limited number of dictionary atoms, it is difficult for sparse-representation-based fusion methods to describe image details accurately, and they require a great deal of computation. In addition, in multiscale-transform-based methods the low-pass subband coefficients are hard to represent sparsely, so significant features cannot be extracted from them. In this paper, a nonsubsampled contourlet transform (NSCT) and sparse-representation-based image fusion method (NSCTSR) is proposed. The NSCT performs a multiscale decomposition of the source images to express their details, and a dictionary learning scheme in the NSCT domain is presented with which the low-frequency information of the image can be represented sparsely in order to extract the salient features of the images. Furthermore, the computational cost of the sparse-representation stage is reduced by non-overlapping blocking. Experimental results show that the proposed method outperforms both fusion based on a single sparse representation and fusion based on multiscale decomposition alone.

  2. A novel image fusion approach based on compressive sensing

    Science.gov (United States)

    Yin, Hongpeng; Liu, Zhaodong; Fang, Bin; Li, Yanxia

    2015-11-01

    Image fusion can integrate complementary and relevant information from source images captured by multiple sensors into a unitary synthetic image. Compressive sensing (CS) based fusion can greatly reduce processing time and guarantee the quality of the fused image by integrating fewer non-zero coefficients. However, conventional CS-based fusion has two main limitations: directly fusing sensing measurements may yield uncertain results with high reconstruction error, and using a single fusion rule can cause blocking artifacts and poor fidelity. In this paper, a novel CS-based image fusion approach is proposed to solve these problems. The nonsubsampled contourlet transform (NSCT) is used to decompose the source images; a dual-layer pulse-coupled neural network (PCNN) model integrates the low-pass subbands, while an edge-retention-based fusion rule fuses the high-pass subbands. The sparse coefficients are fused before being measured by a Gaussian matrix, and the fused image is accurately reconstructed by the compressive sampling matched pursuit algorithm (CoSaMP). Experimental results demonstrate that the fused image contains abundant detail and preserves the salient structure, and that the proposed method achieves better visual quality than the current state-of-the-art methods.

  3. An adaptive fusion strategy of polarization image based on NSCT

    Science.gov (United States)

    Zhao, Chang-xia; Duan, Jin; Mo, Chun-he; Chen, Guang-qiu; Fu, Qiang

    2015-03-01

    An improved image fusion algorithm based on the NSCT is proposed in this paper. After multi-scale, multi-directional NSCT decomposition, the polarization image is divided into two parts: a low-frequency sub-band and high-frequency band-pass images. A fusion strategy combining local regional energy and gradient structure similarity is used for the low-frequency coefficients, while a fusion strategy based on local spatial frequency as the correlation coefficient is used for the high-frequency band-pass coefficients. The intensity image and the degree-of-polarization image are fused to improve the sharpness and contrast of the image. Experiments show that the algorithm effectively improves imaging quality in turbid media.

  4. Image fusion based on expectation maximization algorithm and steerable pyramid

    Institute of Scientific and Technical Information of China (English)

    Gang Liu(刘刚); Zhongliang Jing(敬忠良); Shaoyuan Sun(孙韶媛); Jianxun Li(李建勋); Zhenhua Li(李振华); Henry Leung

    2004-01-01

    In this paper, a novel image fusion method based on the expectation maximization (EM) algorithm and the steerable pyramid is proposed. The registered images are first decomposed using the steerable pyramid. The EM algorithm is used to fuse the image components in the low-frequency band, while a selection method based on an informative importance measure is applied to those in the high-frequency bands. The final fused image is then computed by taking the inverse transform of the composite coefficient representation. Experimental results show that the proposed method outperforms conventional image fusion methods.

  5. Algorithm for image fusion via gradient correlation and difference statistics

    Science.gov (United States)

    Han, Jing; Wang, Li-juan; Zhang, Yi; Bai, Lian-fa; Mao, Ningjie

    2016-10-01

    To overcome the shortcomings of traditional image fusion based on the discrete wavelet transform (DWT), a novel image fusion algorithm based on gradient correlation and difference statistics is proposed in this paper. The source images are decomposed into low-frequency and high-frequency coefficients by the DWT: the former are fused by a scheme based on local gradient correlation to extract local feature information from the source images, and the latter are fused by a scheme based on neighborhood difference statistics to preserve conspicuous edge information. Finally, the fused image is reconstructed by the inverse DWT. Experimental results show that the proposed method preserves details better than other methods.

  6. Research on compressive fusion for remote sensing images

    Science.gov (United States)

    Yang, Senlin; Wan, Guobin; Li, Yuanyuan; Zhao, Xiaoxia; Chong, Xin

    2014-02-01

    A compressive fusion method for remote sensing images is presented based on block compressed sensing (BCS) and the nonsubsampled contourlet transform (NSCT). Since BCS requires little memory and enables fast computation, images with large amounts of data are first compressively sampled block by block with a structured random matrix. The compressive measurements are then decomposed with the NSCT and their coefficients are fused by a linear weighting rule. Finally, the fused image is reconstructed by the gradient projection sparse reconstruction algorithm, with blocking artifacts taken into account. A field test on remote sensing images shows the validity of the proposed method.

  7. A MICRO-IMAGE FUSION ALGORITHM BASED ON REGION GROWING

    Institute of Scientific and Technical Information of China (English)

    Bai Cuixia; Jiang Gangyi; Yu Mei; Wang Yigang; Shao Feng; Peng Zongju

    2013-01-01

    Due to the limited depth of field (DOF) of a microscope, regions that are not within the DOF are blurred after imaging. For micro-image fusion, the most important step is therefore to identify the blurred regions within each micro-image so as to remove their undesirable impact on the fused image. In this paper, a fusion algorithm based on a novel region-growing method is proposed for micro-image fusion. The local sharpness of the micro-image is judged block by block; blocks whose sharpness is lower than an adaptive threshold are used as seeds, and the sharpness of the neighbors of each seed is evaluated again during region growing until the blurred regions are completely identified. As the block size decreases, the obtained region segmentation becomes increasingly accurate. Finally, the micro-images are fused with pixel-wise fusion rules. Experimental results show that the proposed algorithm benefits from the novel region segmentation and obtains fused micro-images with higher sharpness than some popular image fusion methods.

  8. Fusion Core Imaging Experiment Based on the Shenguang Ⅱ Facility

    Institute of Scientific and Technical Information of China (English)

    郑志坚; 曹磊峰; 滕浩; 成金秀

    2002-01-01

    A laser fusion experiment was performed on the Shenguang II facility. An image of the thermonuclear burning region was obtained with a Fresnel zone-plate coded imaging technique, in which the laser-driven target served as an α-particle source; the coded image obtained in the experiment was reconstructed numerically.

  9. Multifocus Image Fusion with PCNN in Shearlet Domain

    Directory of Open Access Journals (Sweden)

    Peng Geng

    2012-08-01

    Full Text Available Shearlets form a tight frame at various scales and directions and are optimally sparse in representing images with edges. In this study, an image fusion method based on the shearlet transform is proposed. First, images A and B are decomposed by the shearlet transform. Second, a PCNN is applied to every frequency subband, and the number of output pulses from the PCNN's neurons is used to select the fusion coefficients. Finally, the inverse shearlet transform is applied to the new fused coefficients to reconstruct the fused image. Experiments comparing the new algorithm with PCNN-based DWT, contourlet, and NSCT methods show that the proposed fusion rule is effective and that the new algorithm provides better fusion performance.

  10. Image enhancement using thermal-visible fusion for human detection

    Science.gov (United States)

    Zaihidee, Ezrinda Mohd; Hawari Ghazali, Kamarul; Zuki Saleh, Mohd

    2017-09-01

    An increased interest in detecting human beings in video surveillance systems has emerged in recent years. Multisensor image fusion deserves more research attention due to its capability to improve the visual interpretability of an image. This study proposes fusion techniques for human detection based on a multiscale transform, using grayscale visible-light and infrared images; the samples were taken from an online dataset. The images captured by the two sensors were decomposed into high- and low-frequency coefficients using the stationary wavelet transform (SWT), appropriate fusion rules were used to merge the coefficients, and the final fused image was obtained by the inverse SWT. Qualitative and quantitative results show that the proposed method is superior to the two other methods in enhancing the target region and preserving the detail information of the image.

  11. RGB-NIR color image fusion: metric and psychophysical experiments

    Science.gov (United States)

    Hayes, Alex E.; Finlayson, Graham D.; Montagna, Roberto

    2015-01-01

    In this paper, we compare four methods of fusing visible RGB and near-infrared (NIR) images to produce a color output image, using a psychophysical experiment and image fusion quality metrics. The results of the psychophysical experiment show that two methods are significantly preferred to the original RGB image; RGB-NIR image fusion may therefore be useful for photographic enhancement in those cases. The Spectral Edge method is the most preferred, followed by the dehazing method of Schaul et al. We then investigate image fusion metrics whose results correlate with the psychophysical experiment. We extend several existing metrics from 2-to-1 channel fusion to M-to-N channel image fusion, and we introduce new metrics based on output image colorfulness and contrast, testing them on our experimental data. While no individual metric ranks the algorithms exactly as the psychophysical experiment does, a combination of two metrics accurately ranks the two leading fusion methods.
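
    As one plausible reading of the colorfulness-based metric, the sketch below implements the widely used Hasler-Süsstrunk colorfulness measure; whether the paper uses exactly this formulation is an assumption.

```python
import numpy as np

def colorfulness(rgb):
    """Hasler-Suesstrunk colorfulness of an (h, w, 3) RGB array."""
    r, g, b = (rgb[..., c].astype(np.float64) for c in range(3))
    rg = r - g                     # red-green opponent plane
    yb = 0.5 * (r + g) - b         # yellow-blue opponent plane
    return (np.sqrt(rg.std() ** 2 + yb.std() ** 2) +
            0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))
```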

  12. Oil exploration oriented multi-sensor image fusion algorithm

    Science.gov (United States)

    Xiaobing, Zhang; Wei, Zhou; Mengfei, Song

    2017-04-01

    To accurately forecast fractures and their dominant direction in oil exploration, this paper proposes a novel multi-sensor image fusion algorithm. The main innovations are the introduction of the dual-tree complex wavelet transform (DTCWT) into the data fusion and the division of an image into several regions before fusion. The DTCWT is a wavelet transform designed to solve the signal decomposition and reconstruction problem using two parallel real-wavelet transforms. We use the DTCWT to segment the features of the input images and generate a region map, and then exploit the normalized Shannon entropy of each region to design the priority function. To test the effectiveness of the proposed algorithm, four standard pairs of images are used to construct the dataset. Experimental results demonstrate that the proposed algorithm achieves high accuracy in multi-sensor image fusion, especially for oil exploration images.

  13. Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images

    Science.gov (United States)

    Awumah, Anna; Mahanti, Prasun; Robinson, Mark

    2016-10-01

    Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion to planetary images are rare, although image fusion is well known for its applications in Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performance was verified with images from the Lunar Reconnaissance Orbiter Camera (LROC). The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm yields a product of high spatial quality while the wavelet-based algorithm best preserves spectral quality. In this work we show the results of a hybrid IHS-wavelet image fusion algorithm applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and J. L. Van Genderen. "Review article: multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Y. "Understanding image fusion." Photogrammetric Engineering and Remote Sensing 70.6 (2004): 657-661. [3] Mahanti, P., et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
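
    For reference, the IHS component of such hybrids is often realized as "fast IHS" detail injection; the sketch below shows that generic form (the hybrid's wavelet stage is omitted), assuming the MS image has already been resampled to the Pan grid.

```python
import numpy as np

def fast_ihs_fusion(ms, pan):
    """Fast IHS pansharpening by detail injection.

    ms:  (h, w, 3) multispectral image resampled to the Pan grid.
    pan: (h, w) panchromatic image on the same intensity scale.
    """
    intensity = ms.mean(axis=2)           # I component of the IHS model
    detail = pan - intensity              # spatial detail missing from MS
    return ms + detail[..., np.newaxis]   # inject detail into every band
```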

  14. Simultaneous Fusion and Denoising of Panchromatic and Multispectral Satellite Images

    Science.gov (United States)

    Ragheb, Amr M.; Osman, Heba; Abbas, Alaa M.; Elkaffas, Saleh M.; El-Tobely, Tarek A.; Khamis, S.; Elhalawany, Mohamed E.; Nasr, Mohamed E.; Dessouky, Moawad I.; Al-Nuaimy, Waleed; Abd El-Samie, Fathi E.

    2012-12-01

    To identify objects in satellite images, multispectral (MS) images with high spectral but low spatial resolution must be fused with panchromatic (Pan) images with high spatial but low spectral resolution. Several fusion methods, such as intensity-hue-saturation (IHS), the discrete wavelet transform, the discrete wavelet frame transform (DWFT), and principal component analysis, have been proposed in recent years to obtain images with both high spectral and high spatial resolution. In this paper, a hybrid fusion method for satellite images comprising both the IHS transform and the DWFT is proposed. The method aims to achieve the highest possible spectral and spatial resolution with as little distortion in the fused image as possible. A comparison between the proposed hybrid method and the traditional methods is presented, using MS and Pan images from the Landsat-5, Spot, Landsat-7, and IKONOS satellites. The effect of noise on the proposed hybrid method as well as on the traditional methods is studied. Experimental results show the superiority of the proposed hybrid method; they also show that a wavelet denoising step is required when fusion is performed at low signal-to-noise ratios.

  15. Multi-focus image fusion using a guided-filter-based difference image.

    Science.gov (United States)

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu

    2016-03-20

    The aim of multi-focus image fusion technology is to integrate differently focused partial images into one all-in-focus image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed, together with an efficient salient-feature extraction method; feature extraction is the main objective of the present work. Based on salient-feature extraction, the guided filter is first used to acquire a smoothed image containing the sharpest regions. To obtain the initial fusion map, a mixed focus measure is composed by combining the variance of the image intensities with the energy of the image gradient. The initial fusion map is then processed by a morphological filter to obtain a refined fusion map. Finally, the final fusion map is determined from the refined map and optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared with previous methods and is competitive with, or even outperforms, state-of-the-art fusion methods in terms of both subjective visual effect and objective quality metrics.
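
    A minimal NumPy implementation of the box-filter guided filter (He et al.) that underlies the smoothing and map-optimization steps described above; the radius and eps values are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Box-filter guided filter: edge-preserving smoothing of `src`
    steered by `guide` (both float arrays scaled to [0, 1])."""
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_ip = uniform_filter(guide * src, size) - mean_i * mean_p
    var_i = uniform_filter(guide * guide, size) - mean_i * mean_i
    a = cov_ip / (var_i + eps)            # local linear coefficient
    b = mean_p - a * mean_i               # local linear offset
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```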

  16. Multifocus image fusion scheme based on nonsubsampled contourlet transform

    Science.gov (United States)

    Zhou, Xinxing; Wang, Dianhong; Duan, Zhijuan; Li, Dongming

    2011-06-01

    This paper proposes a novel multifocus image fusion scheme based on the nonsubsampled contourlet transform (NSCT). The selection principles for the different subband coefficients in the NSCT domain are discussed in detail. To stay consistent with the characteristics of the human visual system and improve the robustness of the fusion algorithm to noise, an NSCT-DCT energy is first developed. Based on it, a clarity measure and a bandpass energy contrast are defined and employed to drive pulse-coupled neural networks (PCNN) for the fusion of the lowpass and bandpass subbands, respectively. The performance of the proposed fusion scheme is assessed by experiments, and the results demonstrate that it compares favorably to wavelet-based, contourlet-based, and NSCT-based fusion algorithms in terms of visual appearance and objective criteria.

  17. Distributed MIMO-ISAR Sub-image Fusion Method

    Directory of Open Access Journals (Sweden)

    Gu Wenkun

    2017-02-01

    Full Text Available The fast fluctuation of a maneuvering target's radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, we propose an imaging method based on the fusion of sub-images from frequency-diversity-distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the images acquired by the different channels. By compensating for the distortion of the ISAR images, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method.

  18. Morphology-based fusion method of hyperspectral image

    Science.gov (United States)

    Yue, Song; Zhang, Zhijie; Ren, Tingting; Wang, Chensheng; Yu, Hui

    2014-11-01

    Hyperspectral image analysis is widely used in applications such as agricultural identification, forest investigation, and atmospheric pollution monitoring. To analyze hyperspectral images accurately and robustly, the spectral and spatial information provided by the data must be considered together. Hyperspectral images are characterized by a large number of wave bands and a large volume of information. Matching these characteristics, a fast image fusion method that can fuse hyperspectral images with high fidelity is studied and proposed in this paper. First, the hyperspectral image is preprocessed before a morphological close operation. The close operation extracts band characteristics to reduce the dimensionality of the hyperspectral image; at the same time, the spectral data are smoothed to avoid discontinuities by combining spatial and spectral information. On this basis, the mean-shift method is adopted to register key frames. Finally, the selected key frames are fused into a single image by pyramid fusion. The experimental results show that this method can fuse hyperspectral images with high quality: the fused image's attributes are better than those of the original spectral images, achieving the objective of fusion.

  19. MRI and PET images fusion based on human retina model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The diagnostic potential of brain positron emission tomography (PET) imaging is limited by its low spatial resolution. To address this problem, we propose a technique for the fusion of PET and MRI images. This fusion is a trade-off between the spectral information extracted from the PET images and the spatial information extracted from the high-spatial-resolution MRI, and the proposed method can control this trade-off. To achieve this goal, a multiscale fusion model is built, based on a model of retinal photoreceptor cells. This paper introduces the general principles of this model and its application to multispectral medical image fusion. Results showed that the proposed method preserves more spectral features with less spatial distortion; whereas transform-based methods trade spectral quality against spatial quality, the best spectral and spatial quality is achieved simultaneously only with the proposed feature-based data fusion method. The method does not require resampling of the images, which is an advantage over the other methods, and it can operate with any aspect ratio between the pixels of the MRI and PET images.

  20. Data and image fusion for geometrical cloud characterization

    Energy Technology Data Exchange (ETDEWEB)

    Thorne, L.R.; Buch, K.A.; Sun, Chen-Hui; Diegert, C.

    1997-04-01

    Clouds have a strong influence on the Earth's climate and therefore on climate change. An important step in improving the accuracy of models that predict global climate change, general circulation models, is improving the parameterization of clouds and cloud-radiation interactions. Improvements in the next generation models will likely include the effect of cloud geometry on the cloud-radiation parameterizations. We have developed and report here methods for characterizing the geometrical features and three-dimensional properties of clouds that could be of significant value in developing these new parameterizations. We developed and report here a means of generating and imaging synthetic clouds which we used to test our characterization algorithms; a method for using Taylor's hypotheses to infer spatial averages from temporal averages of cloud properties; a computer method for automatically classifying cloud types in an image; and a method for producing numerical three-dimensional renderings of cloud fields based on the fusion of ground-based and satellite images together with meteorological data.

  1. The Effect of Multispectral Image Fusion Enhancement on Human Efficiency

    Science.gov (United States)

    2017-03-20

    Additionally, we test this on a simple stimulus and task experimental structure to understand the basic impacts of fusion on the visual system. Ideal observer... information heatmap help us tackle the problem space of image fusion in relation to human testing? As we have seen even within our own basic experiment... strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing

  2. An Improved Infrared/Visible Fusion for Astronomical Images

    Directory of Open Access Journals (Sweden)

    Attiq Ahmad

    2015-01-01

    Full Text Available An undecimated dual-tree complex wavelet transform (UDTCWT)-based fusion scheme for astronomical visible/IR images is developed. The UDTCWT reduces noise effects and improves object classification due to its inherent shift-invariance property. Local standard deviation and distance transforms are used to extract useful information, especially for small objects. Simulation results, compared with state-of-the-art fusion techniques, illustrate the superiority of the proposed scheme in terms of accuracy in most cases.

  3. A multifocus image fusion in nonsubsampled contourlet domain with variational fusion strategy

    Science.gov (United States)

    Ma, Ning; Luo, Limin; Zhou, Zeming; Liang, Miaoyuan

    2011-11-01

    Based on a variational formulation, we propose a new fusion strategy for the nonsubsampled contourlet transform (NSCT). For the NSCT bandpass subband coefficients of the input images, we take the principal component of the coefficients as the target and formulate an extremum problem for an energy functional, seeking the coefficient closest to the target as the fused coefficient. We apply gradient descent flow to minimize the functional and give the numerical scheme. The experimental results show that the proposed strategy outperforms state-of-the-art image fusion strategies for the NSCT in terms of both visual quality and objective evaluation criteria.

  4. A novel statistical fusion rule for image fusion and its comparison in non subsampled contourlet transform domain and wavelet domain

    CERN Document Server

    T, Manu V

    2012-01-01

    Image fusion produces a single fused image from a set of input images. A new method for image fusion is proposed based on a Weighted Average Merging Method (WAMM) in the NonSubsampled Contourlet Transform (NSCT) domain. Various statistical fusion rules are also analysed, in both the NSCT and wavelet domains, on medical images, remote sensing images, and multi-focus images. Experimental results show that the proposed WAMM obtains better results in the NSCT domain than in the wavelet domain, as it preserves more edges and keeps the visual quality of the fused image intact.

  5. Adaptive image fusion based on nonsubsampled contourlet transform

    Science.gov (United States)

    Zhang, Xiongmei; Li, Junshan; Yi, Zhaoxiang; Yang, Wei

    2007-11-01

    Multiresolution-based image fusion has been the focus of considerable research attention in recent years, with a number of algorithms proposed. In most of these algorithms, however, the parameter configuration is based on experience. This paper proposes an adaptive image fusion algorithm based on the nonsubsampled contourlet transform (NSCT), which adjusts parameters automatically and removes the adverse effects of manual tuning. The algorithm incorporates the structural similarity (SSIM) quality metric into the NSCT fusion framework: the SSIM value is calculated to assess the fused image quality and is then fed back to the fusion algorithm to achieve a better fusion by directing the adjustment of parameters (the level of decomposition and the flag of decomposition direction). Based on cross entropy, a local cross entropy (LCE) is constructed and used to determine the optimal information source for the fused coefficients at each scale and direction. Experimental results show that the proposed method achieves the best fusion of the methods compared, judged on both objective metrics and visual inspection, and exhibits robustness against varying noise.
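
    The feedback idea, reduced to its simplest form: fuse at several decomposition depths and keep the depth whose result scores best against the sources. The sketch below uses a wavelet decomposition (PyWavelets) as a stand-in for the NSCT and average SSIM as the feedback metric; inputs are assumed to be co-registered grayscale floats in [0, 1], and all names are illustrative.

      import numpy as np
      import pywt
      from skimage.metrics import structural_similarity as ssim

      def fuse_at_level(a, b, level):
          """Average the approximations, keep the max-abs detail coefficients."""
          ca = pywt.wavedec2(a, 'db2', level=level)
          cb = pywt.wavedec2(b, 'db2', level=level)
          fused = [(ca[0] + cb[0]) / 2]
          for da, db in zip(ca[1:], cb[1:]):
              fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                                 for x, y in zip(da, db)))
          return pywt.waverec2(fused, 'db2')[:a.shape[0], :a.shape[1]]

      def fuse_adaptive(a, b, levels=(1, 2, 3, 4)):
          """Feedback loop: keep the decomposition depth with the best SSIM."""
          def score(f):
              return ssim(f, a, data_range=1.0) + ssim(f, b, data_range=1.0)
          return max((fuse_at_level(a, b, L) for L in levels), key=score)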

  6. A Review of Various Transform Domain Digital Image Fusion for Multifocus Colored Images

    Directory of Open Access Journals (Sweden)

    Arun Begill

    2015-11-01

    Full Text Available Image fusion aims to enhance image content by fusing two or more images obtained from a visual sensor network. The main goal of image fusion is to eliminate redundant information and to merge the most useful information from the source images. Various transform-domain image fusion methods, such as DWT, SIDWT, DCT, and ACMax DCT, have been developed in recent years, each with its own advantages and disadvantages. The ACMax discrete cosine transform (DCT) is a very efficient approach to image fusion because of its energy compaction property and improved image quality; however, it also has some disadvantages, such as color artifacts, noise, and degraded edge sharpness. In this paper, the ACMax DCT method is integrated with saturation weighting and a joint trilateral filter to obtain a high-quality image, and it is compared with traditional methods. The results show that the ACMax DCT method with saturation weighting and a joint trilateral filter outperforms the state-of-the-art techniques.

  7. Designing Image Operators for MRI-PET Image Fusion of the Brain

    Science.gov (United States)

    Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.

    2006-09-01

    Our goal is to obtain images that combine, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities combining anatomy (Magnetic Resonance Imaging, MRI) and functional information (Positron Emission Tomography, PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of the two modalities and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We adopt an approach to image fusion that takes advantage mainly of the HSL (hue, saturation, luminosity) color space in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
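
    A minimal sketch of the color-space idea: map functional PET activity to hue and saturation and anatomical MRI to the lightness channel. skimage provides HSV rather than HSL conversions, so HSV is used here as a close stand-in; inputs are assumed co-registered floats in [0, 1], and the hue mapping is illustrative.

      import numpy as np
      from skimage.color import hsv2rgb

      def fuse_mri_pet(mri, pet):
          """PET drives hue/saturation; MRI drives the value (luminance)."""
          hsv = np.empty(mri.shape + (3,))
          hsv[..., 0] = (1.0 - pet) * 0.66   # hue: blue (cold) -> red (high uptake)
          hsv[..., 1] = pet                  # saturation: color only where PET is active
          hsv[..., 2] = mri                  # value: anatomical luminance from MRI
          return hsv2rgb(hsv)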

  8. Analysis of Image Fusion Techniques for fingerprint Palmprint Multimodal Biometric System

    Directory of Open Access Journals (Sweden)

    S. Anu H Naira

    2015-01-01

    Full Text Available Multimodal biometric systems using multiple sources of information have been widely recognized, but computational models for multimodal biometric recognition have only recently received attention. In this paper, fingerprint and palmprint images are chosen and fused using image fusion methods. The biometric features are subjected to modality extraction, and different fusion methods (average fusion, minimum fusion, maximum fusion, discrete wavelet transform fusion, and stationary wavelet transform fusion) are implemented to fuse the extracted modalities. The best fused template is identified by applying various fusion metrics. Here, the DWT-fused image provided the best results.

  9. Alternate method to realize image fusion; Metodo alterno para realizar fusion de imagenes

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, L.; Hernandez, F.; Fernandez, R. [Departamento de Medicina Nuclear, Imagenologia Diagnostica. Centro Medico de Xalapa, Veracruz (Mexico)

    2005-07-01

    At present, imaging departments need to fuse images obtained from diverse apparatuses. Conventionally, X-ray CT or MR images are fused with functional images such as gammagrams and PET images. Fusion technology is sold with modern imaging equipment, and not all nuclear medicine departments have access to it. For this reason we analyzed and studied the problem and found a solution so that every nuclear medicine department can benefit from image fusion. The first indispensable requirement is a personal computer with the capacity to host image digitizer cards. Alternatively, if one has a gamma camera that can export images in JPG, GIF, TIFF, or BMP format, the digitizer card can be dispensed with and the images written to disk for use on the personal computer. One of the following commercially available graphic design programs is required: Corel Draw, Photo Shop, FreeHand, Illustrator, or Macromedia Flash; these are the programs we evaluated that allow image fusion. Any of them works well, and only short training is required to use them. A digital photographic camera with a resolution of at least 3.0 megapixels is also necessary. The procedure consists of photographing the radiological studies the patient already has, selecting the images that demonstrate the pathology under study and that are concordant with the images created in the gammagraphic studies, whether planar or tomographic. The images are transferred to the personal computer and opened with the graphic design program, along with the gammagraphic images. The program's digital tools are used to make the images transparent, to crop them, to adjust their sizes, and to create the fused images. The process is manual, and skill and experience are required to choose the images, the cuts, the sizes, and the degree of transparency. (Author)

  10. Image Fusion for Radiosurgery, Neurosurgery and Hypofractionated Radiotherapy.

    Science.gov (United States)

    Inoue, Hiroshi K; Nakajima, Atsushi; Sato, Hiro; Noda, Shin-Ei; Saitoh, Jun-Ichi; Suzuki, Yoshiyuki

    2015-03-01

    Precise target detection is essential for radiosurgery, neurosurgery, and hypofractionated radiotherapy because treatment results and complication rates are related to the accuracy of target definition. In skull base tumors and tumors around the optic pathways, exact anatomical evaluation of the cranial nerves is important to avoid adverse effects on these structures close to the lesions. Three-dimensional analyses of structures obtained with heavily T2-weighted MR images, and image fusion with thin-sliced CT sections, are desirable for evaluating fine structures during radiosurgery and microsurgery. In vascular lesions, angiography is most important for evaluating the whole structure from feeders to drainers, the shunt, blood flow, and risk factors for bleeding; however, the exact sites and surrounding structures in the brain are not shown on angiography. True image fusion of angiography, MR images, and CT on axial planes is ideal for precise target definition. In malignant tumors, especially recurrent head and neck tumors, the biologically active areas of recurrence are the main targets of radiosurgery. PET is useful for quantitative evaluation of recurrences, but the examination is not always available at the time of radiosurgery. Image fusion of MR diffusion images with CT is always available during radiosurgery and is useful for detecting recurrent lesions. All images are fused and registered on thin-sliced CT sections, and exactly demarcated targets are planned for treatment. Follow-up images can also be registered to this CT, so exact assessment of target changes, including volume, is possible in this fusion system. The purpose of this review is to describe the usefulness of image fusion for 1) skull base, 2) vascular, and 3) recurrent target detection, and 4) follow-up analyses in radiosurgery, neurosurgery and hypofractionated radiotherapy.

  11. A Novel Image Fusion Algorithm for Visible and PMMW Images based on Clustering and NSCT

    Directory of Open Access Journals (Sweden)

    Xiong Jintao

    2016-01-01

    Full Text Available Aiming at the fusion of visible and Passive Millimeter Wave (PMMW) images, a novel algorithm based on clustering and the NSCT (Nonsubsampled Contourlet Transform) is proposed. It takes advantage of the particular ability of PMMW images to reveal metal targets and uses a clustering algorithm on the PMMW image to extract potential target regions. In the fusion process, the NSCT is applied to both input images, and the decomposition coefficients at different scales are combined using different rules. Finally, the fused image is obtained by taking the inverse NSCT of the fused coefficients. Several methodologies are used to evaluate the fusion results. Experiments demonstrate the superiority of the proposed algorithm for metal target detection compared to wavelet-transform and Laplace-transform methods.

  13. Sensor Data Fusion for Accurate Cloud Presence Prediction Using Dempster-Shafer Evidence Theory

    Directory of Open Access Journals (Sweden)

    Jesse S. Jin

    2010-10-01

    Full Text Available Sensor data fusion technology can be used to best extract useful information from multiple sensor observations, and it has been widely applied in applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach for a multiple-radiation-sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors, with different radiation data used for the cloud prediction. Potential application areas of the algorithm include renewable power and virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with corresponding sunshine occurrence data recorded as the benchmark. Our experiments indicate that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent.
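
    At the core of such a scheme is Dempster's rule of combination. A minimal sketch for a two-hypothesis frame {cloud, clear}, where each sensor also assigns some mass to the whole frame ('theta', its ignorance); the mass values in the example are illustrative.

      def combine_ds(m1, m2):
          """Dempster's rule for masses over {'cloud', 'clear', 'theta'}."""
          k = m1['cloud'] * m2['clear'] + m1['clear'] * m2['cloud']  # conflict
          norm = 1.0 - k
          return {
              'cloud': (m1['cloud'] * m2['cloud'] + m1['cloud'] * m2['theta']
                        + m1['theta'] * m2['cloud']) / norm,
              'clear': (m1['clear'] * m2['clear'] + m1['clear'] * m2['theta']
                        + m1['theta'] * m2['clear']) / norm,
              'theta': m1['theta'] * m2['theta'] / norm,
          }

      # Two radiation sensors, each leaving some mass on "unknown":
      print(combine_ds({'cloud': 0.6, 'clear': 0.1, 'theta': 0.3},
                       {'cloud': 0.5, 'clear': 0.2, 'theta': 0.3}))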

  15. New false color mapping for image fusion

    NARCIS (Netherlands)

    Toet, A.; Walraven, J.

    1996-01-01

    A pixel-based colour mapping algorithm is presented that produces a fused false colour rendering of two gray level images representing different sensor modalities. The resulting fused false colour images have a higher information content than each of the original images and retain sensor-specific information

  17. SAR Data Fusion Imaging Method Oriented to Target Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yang Wei

    2015-02-01

    Full Text Available To address the difficulty of precisely extracting target outlines when variation in the target scattering characteristics is neglected during the processing of high-resolution space-borne SAR data, a novel fusion imaging method oriented to target feature extraction is proposed. First, several important factors that affect target feature extraction and SAR image quality are analyzed, including the curved orbit, the stop-and-go approximation, atmospheric delay, and high-order residual phase error, and the corresponding compensation methods are addressed. Based on this analysis, a mathematical model of the SAR echo combined with the target space-time spectrum is established to explain the space-time-frequency variation of the target scattering characteristics. Moreover, a fusion imaging strategy and method for high-resolution, ultra-large observation angle conditions are put forward to improve SAR quality by fusion processing in the range-Doppler and image domains. Finally, simulations based on typical military targets are used to verify the effectiveness of the fusion imaging method.

  18. A Fusion Model for CPU Load Prediction in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dayu Xu

    2013-11-01

    Full Text Available Load prediction plays a key role in cost-optimal resource allocation and datacenter energy saving. In this paper, we use real-world traces from a Cloud platform and propose a fusion model to forecast future CPU loads. First, long CPU load time series are divided into short sequences of equal length based on the cloud control cycle. Then the kernel fuzzy c-means clustering algorithm is used to group the subsequences into clusters. For each cluster, a genetic-algorithm-optimized wavelet Elman neural network prediction model is used with the current load sequence to predict the CPU load in the next time interval. Finally, the optimal CPU load prediction is obtained from the cluster and its corresponding predictor with the minimum forecasting error. Experimental results show that our algorithm performs better than other models reported in previous works.

  19. MRI-PET image fusion based on NSCT transform using local energy and local variance fusion rules.

    Science.gov (United States)

    Amini, Nasrin; Fatemizadeh, E; Behnam, Hamid

    2014-05-01

    Image fusion integrates information from several images into one. According to their nature, medical images are divided into structural (such as CT and MRI) and functional (such as SPECT and PET). This article fuses MRI and PET images, with the purpose of adding structural information from MRI to the functional information of PET. The images are decomposed with the nonsubsampled contourlet transform and then fused by applying fusion rules: the coefficients of the low-frequency band are combined by a maximal-energy rule, and the coefficients of the high-frequency bands are combined by a maximal-variance rule. Finally, visual and quantitative criteria are used to evaluate the fusion result: the visual evaluation uses the opinion of two radiologists, and the quantitative evaluation compares the proposed fusion method with six existing methods using entropy, mutual information, discrepancy, and overall performance as criteria.
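
    The two selection rules are easy to state on arrays of subband coefficients. A minimal sketch, assuming the NSCT (or any multiscale) decomposition has already produced matching low- and high-frequency subbands for the two sources; the window size and names are illustrative.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def fuse_lowpass(la, lb, win=5):
          """Low-frequency rule: keep the coefficient with larger local energy."""
          ea = uniform_filter(la * la, win)
          eb = uniform_filter(lb * lb, win)
          return np.where(ea >= eb, la, lb)

      def fuse_highpass(ha, hb, win=5):
          """High-frequency rule: keep the coefficient with larger local variance."""
          va = uniform_filter(ha * ha, win) - uniform_filter(ha, win) ** 2
          vb = uniform_filter(hb * hb, win) - uniform_filter(hb, win) ** 2
          return np.where(va >= vb, ha, hb)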

  20. Digital image fusion systems: color imaging and low-light targets

    Science.gov (United States)

    Estrera, Joseph P.

    2009-05-01

    This paper presents digital image fusion (enhanced A+B) systems for color imaging and low-light target applications. It first discusses the digital sensors utilized in these image fusion applications: a 1900x1086 (high-definition format) CMOS imager coupled to a Generation III image intensifier as the visible/near-infrared (NIR) digital sensor, and a 320x240 or 640x480 uncooled microbolometer thermal imager as the long-wavelength infrared (LWIR) digital sensor. Performance metrics for these digital imaging sensors are presented. The digital image fusion (enhanced A+B) process is presented in the context of early fused night vision systems, such as the digital image fused system (DIFS) and the digital enhanced night vision goggle, and of the later long-range digitally fused night vision sighting system. Next, the paper discusses the effects of user display color in a dual-color digital image fusion system; dual-color image fusion schemes such as Green/Red, Cyan/Yellow, and White/Blue for the image intensifier and thermal infrared sensor, respectively, are discussed. Finally, the paper presents digitally fused imagery and image analysis of long-distance targets in low light from these digitally fused systems. The analysis shows that maximum contrast and spatial resolution are achieved in the digital fusion mode, compared to the individual sensor modalities, in low-light, long-distance imaging applications. The paper has been cleared by DoD/OSR for public release under Ref: 08-S-2183 on August 8, 2008.

  1. Image fusion using non-separable wavelet frame

    Institute of Scientific and Technical Information of China (English)

    Hong Wang(王宏); Zhongliang Jing(敬忠良); Jianxun Li(李建勋)

    2003-01-01

    In this paper, an image fusion method is proposed based on the non-separable wavelet frame (NWF) for merging a high-resolution panchromatic image and a low-resolution multispectral image. The low-frequency part of the panchromatic image is directly substituted by the multispectral image, so the multispectral information can be preserved effectively in the fused image. Because a multiscale method is used to enhance the high-frequency parts of the panchromatic image, the spatial information of the fused image is also improved. Experimental results indicate that the proposed method outperforms the intensity-hue-saturation (IHS) transform, the discrete wavelet transform, and the separable wavelet frame in preserving spectral and spatial information.

  2. Explosive Field Visualization Based on Image Fusion

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wen-yao; JIANG Ling-shuang

    2009-01-01

    Experimental results show that the new images integrate the advantages of the sources, effectively improve the visualization, and disclose more information about the explosive field.

  3. Image fusion using sparse overcomplete feature dictionaries

    Energy Technology Data Exchange (ETDEWEB)

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.
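
    A minimal sketch of the pipeline the record describes — learn a sparse overcomplete dictionary on image patches, sparse-code the patches, then max-pool neighbouring codes for translation tolerance — using scikit-learn. The patch, atom, and pooling sizes are illustrative, and the pooled codes would then feed a supervised classifier or clustering step as in the record.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.feature_extraction.image import extract_patches_2d

      def translation_tolerant_codes(image, patch=8, atoms=64, pool=4):
          """Sparse-code image patches, then max-pool the codes locally."""
          X = extract_patches_2d(image, (patch, patch), max_patches=2000,
                                 random_state=0).reshape(-1, patch * patch)
          X = X - X.mean(axis=1, keepdims=True)        # remove per-patch DC
          dico = MiniBatchDictionaryLearning(n_components=atoms, alpha=1.0,
                                             random_state=0).fit(X)
          codes = dico.transform(X)                    # sparse coefficients
          n = (len(codes) // pool) * pool              # max-pool groups of codes
          return codes[:n].reshape(-1, pool, atoms).max(axis=1)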

  4. AERIAL IMAGES AND LIDAR DATA FUSION FOR DISASTER CHANGE DETECTION

    Directory of Open Access Journals (Sweden)

    J. C. Trinder

    2012-07-01

    Full Text Available Potential applications of airborne LiDAR for disaster monitoring include flood prediction and assessment, monitoring of the growth of volcanoes and assistance in the prediction of eruptions, assessment of crustal elevation changes due to earthquakes, and monitoring of structural damage after earthquakes. Change detection in buildings is an important task in the context of disaster monitoring, especially after earthquakes. Traditionally, change detection is done using multi-temporal images through spectral analyses, which provides two-dimensional spectral information without heights. This paper describes the capability of aerial image and LiDAR data fusion for rapid detection of changes in elevation, and methods for assessing damage in man-made structures. To detect and evaluate changes in buildings, LiDAR-derived DEMs and aerial images from two epochs were used, showing changes in urban buildings due to construction and demolition. The proposed modelling scheme comprises three steps: data pre-processing, change detection, and validation. In the first step, data registration was carried out on the multi-source data. In the second step, changes were detected by combining change detection techniques such as image differencing (ID), principal components analysis (PCA), minimum noise fraction (MNF), and post-classification comparison (P-C) based on support vector machines (SVM), each of which performs differently, via a simple majority vote. In the third step, the detected changes were compared against manually generated reference data, using two criteria: overall accuracy, and commission and omission errors. The results showed that the average detection accuracies were 78.9%, 81.4%, 82.7%, and 82.8% for post-classification, image differencing, PCA, and MNF, respectively. On the other hand, the commission and omission errors of

  5. Underwater color image segmentation method via RGB channel fusion

    Science.gov (United States)

    Xuan, Li; Mingjun, Zhang

    2017-02-01

    To address the low segmentation accuracy and high computation time of existing segmentation methods for underwater color images, this paper proposes an underwater color image segmentation method via RGB color channel fusion. Building on thresholding methods for fast segmentation, the proposed method dynamically estimates the optimal weights for RGB channel fusion to obtain a grayscale image with high foreground-background contrast, and thereby reaches high segmentation accuracy. To verify the segmentation accuracy of the proposed method, the authors conducted various underwater comparative experiments. The experimental results demonstrate that the proposed method is robust to illumination and superior to existing methods in terms of both segmentation accuracy and computation time. Moreover, a segmentation technique for image sequences is proposed for real-time autonomous underwater vehicle operations.
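
    One straightforward way to realize such a weight search, sketched under simple assumptions: scan normalized RGB weights on a grid and keep the combination that maximizes Otsu's between-class variance of the resulting grayscale image. The grid resolution and names are illustrative, not the paper's estimator.

      import numpy as np
      from skimage.filters import threshold_otsu

      def best_channel_weights(rgb, steps=10):
          """Grid-search (wr, wg, wb), wr+wg+wb=1, maximizing class separation."""
          best, best_sep = None, -1.0
          for wr in np.linspace(0.0, 1.0, steps + 1):
              for wg in np.linspace(0.0, 1.0 - wr, steps + 1):
                  wb = 1.0 - wr - wg
                  gray = wr * rgb[..., 0] + wg * rgb[..., 1] + wb * rgb[..., 2]
                  t = threshold_otsu(gray)
                  fg, bg = gray[gray > t], gray[gray <= t]
                  if fg.size and bg.size:
                      # Otsu between-class variance (up to normalization)
                      sep = fg.size * bg.size * (fg.mean() - bg.mean()) ** 2
                      if sep > best_sep:
                          best_sep, best = sep, (wr, wg, wb)
          return best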

  6. Perceptual evaluation of different image fusion schemes

    NARCIS (Netherlands)

    Toet, A.; Franken, E.M.

    2003-01-01

    Human scene recognition performance was tested with images of night-time outdoor scenes. The scenes were registered both with a dual band (visual and near infrared) image intensified low-light CCD camera (DII) and with a thermal middle wavelength band (3–5 μm) infrared (IR) camera. Fused imagery was

  7. Perceptual evaluation of different image fusion schemes

    NARCIS (Netherlands)

    Toet, A.; IJspeert, J.K.

    2001-01-01

    Human perceptual performance was tested with images of nighttime outdoor scenes. The scenes were registered both with a dual band (visual and near infrared) image intensified low-light CCD camera (DII) and with a thermal middle wavelength band (3-5 μm) infrared (IR) camera. Fused imagery was

  9. Multimodality imaging of reporter gene expression using a novel fusion vector in living cells and animals

    Science.gov (United States)

    Gambhir, Sanjiv; Ray, Pritha

    2009-04-28

    Novel double and triple fusion reporter gene constructs harboring distinct imageable reporter genes are provided, as well as applications for the use of such double and triple fusion constructs in living cells and in living animals using distinct imaging technologies.

  10. Improved Image Fusion Method Based on NSCT and Accelerated NMF

    OpenAIRE

    Mingdong Li; Siyu Lai; Juan Wang

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. First, the registered source images are decomposed at multiple scales and in multiple directions using the NSCT. Then, the ANMF algorithm is executed on the low-frequency sub-images to get the low-pass coefficients. The low frequency fused image can ...

  11. Medical image fusion using the convolution of Meridian distributions.

    Science.gov (United States)

    Agrawal, Mayank; Tsakalides, Panagiotis; Achim, Alin

    2010-01-01

    The aim of this paper is to introduce a novel non-Gaussian statistical model-based approach for medical image fusion based on the Meridian distribution. The paper also includes a new approach to estimate the parameters of generalized Cauchy distribution. The input images are first decomposed using the Dual-Tree Complex Wavelet Transform (DT-CWT) with the subband coefficients modelled as Meridian random variables. Then, the convolution of Meridian distributions is applied as a probabilistic prior to model the fused coefficients, and the weights used to combine the source images are optimised via Maximum Likelihood (ML) estimation. The superior performance of the proposed method is demonstrated using medical images.

  12. Fusion of Night Vision and Thermal Images

    Science.gov (United States)

    2006-12-01

    [A, mapA] = imread(fullfile(path_nv, img_nv));  % read image and store it in matrix A
    [a1, a2, a3] = size(A);
    % If the image is in RGB format, convert it to grayscale
    if isequal(a3, 3)
        G1 = rgb2gray(A);
        [A, mapA] = gray2ind(G1, 256);
    end

  13. Performance comparison of different graylevel image fusion schemes through a universal image quality index

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.

    2003-01-01

    We applied a recently introduced universal image quality index Q that quantifies the distortion of a processed image relative to its original version, to assess the performance of different graylevel image fusion schemes. The method is as follows. First, we adopt an original test image as the refere
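
    The index Q itself is compact enough to state in a few lines. A sketch of the global form; the published index is averaged over sliding windows (typically 8x8), so the implementation below is a simplified illustration.

      import numpy as np

      def uiqi(x, y):
          """Universal image quality index Q (Wang & Bovik), global form.

          Q = 4*cov(x,y)*mean(x)*mean(y)
              / ((var(x)+var(y)) * (mean(x)^2 + mean(y)^2)),
          with Q in [-1, 1] and Q = 1 only when y is identical to x.
          """
          x = np.asarray(x, dtype=float).ravel()
          y = np.asarray(y, dtype=float).ravel()
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          cxy = np.mean((x - mx) * (y - my))
          return 4 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2) + 1e-12)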

  14. Remote sensing image fusion based on Bayesian linear estimation

    Institute of Scientific and Technical Information of China (English)

    GE ZhiRong; WANG Bin; ZHANG LiMing

    2007-01-01

    A new remote sensing image fusion method based on statistical parameter estimation is proposed in this paper. More specifically, Bayesian linear estimation (BLE) is applied to observation models between remote sensing images with different spatial and spectral resolutions. The proposed method only estimates the mean vector and covariance matrix of the high-resolution multispectral (MS) images, instead of assuming a joint distribution between the panchromatic (PAN) image and the low-resolution multispectral image. Furthermore, the proposed method can enhance the spatial resolution of several principal components of the MS images, while the traditional Principal Component Analysis (PCA) method is limited to enhancing only the first principal component. Experimental results with real MS images and a PAN image from Landsat ETM+ demonstrate that the proposed method performs better than traditional methods based on statistical parameter estimation, the PCA-based method, and the wavelet-based method.

  15. ETS Gene Fusions as Predictive Biomarkers of Resistance to Radiation Therapy for Prostate Cancer

    Science.gov (United States)

    2015-10-01

    Award Number: W81XWH-10-1-0582. TITLE: ETS Gene Fusions as Predictive Biomarkers of Resistance to Radiation Therapy for Prostate Cancer. PRINCIPAL... ETS gene fusion status associated with clinical outcomes following radiation therapy, by analyzing both the collected biomarker and clinical data... denotes absence of an ERG fusion). ETS gene fusion status did not predict outcomes following radiation therapy, as demonstrated by Kaplan-Meier

  16. Remote Sensing Image Fusion Using Ica and Optimized Wavelet Transform

    Science.gov (United States)

    Hnatushenko, V. V.; Vasyliev, V. V.

    2016-06-01

    In remote-sensing image processing, fusion (pan-sharpening) is the process of merging high-resolution panchromatic and lower-resolution multispectral (MS) imagery to create a single high-resolution color image. Many methods exist to produce data fusion results with the best possible spatial and spectral characteristics, and a number have been commercially implemented; however, the pan-sharpened images produced by these methods suffer from high color distortion of the spectral information. In this paper, to minimize spectral distortion, we propose a remote sensing image fusion method that combines Independent Component Analysis (ICA) with an optimized wavelet transform. The proposed method selects multiscale components obtained after ICA of the images on the basis of their wavelet decomposition, and forms linear combinations of the detail coefficients of the wavelet decomposition of the brightness distributions in the spectral channels, with iteratively adjusted weights. These weights are determined by solving an optimization problem that maximizes the information entropy of the synthesized images formed by wavelet reconstruction. The spectral-channel images are then reconstructed by the inverse wavelet transform, and the resulting image is formed by superposition of the obtained images. To verify its validity, the proposed method is compared with several techniques on WorldView-2 satellite data in both subjective and objective terms, using spectral and spatial quality metrics in terms of RASE, RMSE, CC, ERGAS, and SSIM. In our experiments the scheme provides good spectral quality and efficiency, and the synthesized MS images show better contrast and clarity on the boundaries between the object of interest and the background. The results show that the proposed approach performs better than the compared methods according to these performance metrics.
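
    For contrast with the optimized scheme above, the plain wavelet substitution baseline it refines can be written in a few lines: decompose each upsampled MS band and the Pan image, keep the band's approximation, and take the detail planes from Pan. A sketch with PyWavelets, assuming co-registered floats in [0, 1]; the wavelet and level are illustrative.

      import numpy as np
      import pywt

      def wavelet_pansharpen(ms, pan, level=2, wavelet='db2'):
          """Substitute Pan detail coefficients into each MS band."""
          cp = pywt.wavedec2(pan, wavelet, level=level)
          out = np.empty_like(ms)
          for b in range(ms.shape[2]):
              cm = pywt.wavedec2(ms[..., b], wavelet, level=level)
              fused = [cm[0]] + cp[1:]     # band approximation + Pan details
              rec = pywt.waverec2(fused, wavelet)
              out[..., b] = rec[:ms.shape[0], :ms.shape[1]]
          return np.clip(out, 0.0, 1.0)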

  17. A New Image Fusion Technique to Improve the Quality of Remote Sensing images

    Directory of Open Access Journals (Sweden)

    Aboubaker Milad Ahmed

    2013-01-01

    Full Text Available Image fusion is the process of producing a single fused image from a set of input images. In this paper, a new fusion technique based on independent component analysis (ICA) and the IHS transformation is proposed, and a comparison of this new technique with PCA-, IHS-, and ICA-based fusion techniques is given. QuickBird data are used to test these techniques, and the output is evaluated using subjective comparison, statistical correlation, information entropy, mean square error, and standard deviation. The results of the proposed technique show higher performance compared to the other techniques.

  18. Multimodality image registration and fusion using neural network

    Institute of Scientific and Technical Information of China (English)

    Mostafa G Mostafa; Aly A Farag; Edward Essock

    2003-01-01

    Multimodality image registration and fusion are essential steps in building 3-D models from remote sensing data. We present in this paper a neural network technique for the registration and fusion of multimodality remote sensing data for the reconstruction of 3-D models of terrain regions. A feedforward neural network is used to fuse the intensity data sets with the spatial data set after learning its geometry. Results on real data are presented. Human performance is assessed on several perceptual tests in order to evaluate the fusion results.

  19. Detecting Changes Between Optical Images of Different Spatial and Spectral Resolutions: a Fusion-Based Approach

    CERN Document Server

    Ferraris, Vinicius; Wei, Qi; Chabert, Marie

    2016-01-01

    Change detection is one of the most challenging issues in the analysis of remotely sensed images. Comparing several multi-date images acquired with the same kind of sensor is the most common scenario; conversely, designing robust, flexible, and scalable change detection algorithms becomes even more challenging when the images have been acquired by two different kinds of sensors. This situation arises in emergencies under critical constraints. This paper presents, to the best of the authors' knowledge, the first strategy for dealing with optical images characterized by dissimilar spatial and spectral resolutions. Typical scenarios considered include change detection between panchromatic or multispectral and hyperspectral images. The proposed strategy consists of a 3-step procedure: i) inferring a high spatial and spectral resolution image by fusion of the two observed images, characterized one by a low spatial resolution and the other by a low spectral resolution, ii) predicting two images with respectively the

  20. 3D Image Fusion to Localise Intercostal Arteries During TEVAR.

    Science.gov (United States)

    Koutouzi, G; Sandström, C; Skoog, P; Roos, H; Falkenberg, M

    2017-01-01

    Preservation of intercostal arteries during thoracic aortic procedures reduces the risk of post-operative paraparesis. The origins of the intercostal arteries are visible on pre-operative computed tomography angiography (CTA), but rarely on intra-operative angiography. The purpose of this report is to suggest an image fusion technique for intra-operative localisation of the intercostal arteries during thoracic endovascular repair (TEVAR). The ostia of the intercostal arteries are identified and manually marked with rings on the pre-operative CTA. The optimal distal landing site in the descending aorta is determined and marked, allowing enough length for an adequate seal and attachment without covering more intercostal arteries than necessary. After 3D/3D fusion of the pre-operative CTA with an intra-operative cone-beam CT (CBCT), the markings are overlaid on the live fluoroscopy screen for guidance. The accuracy of the overlay is confirmed with digital subtraction angiography (DSA) and the overlay is adjusted when needed. Stent graft deployment is guided by the markings. The initial experience of this technique in seven patients is presented. 3D image fusion was feasible in all cases. Follow-up CTA after 1 month revealed that all intercostal arteries planned for preservation were patent. None of the patients developed signs of spinal cord ischaemia. 3D image fusion can be used to localise the intercostal arteries during TEVAR. This may preserve some intercostal arteries and reduce the risk of post-operative spinal cord ischaemia.

  1. PERFORMANCE EVALUATION OF SEVERAL FUSION APPROACHES FOR CCD/SAR IMAGES

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Several image fusion approaches for CCD/SAR images are studied and their performance is evaluated in this paper. First, the CCD/SAR images are preprocessed before fusion. Then, image fusion methods including linear superposition, a nonlinear operator method, and multiresolution methods (Laplacian pyramid, ratio pyramid, contrast pyramid, gradient pyramid, morphological pyramid, and the discrete wavelet transform) are adopted to fuse the two types of images. Lastly, four performance measures (standard deviation, entropy, cross entropy, and spatial frequency) are calculated to compare the fusion results of the different approaches. Experimental results show that, among the multiresolution approaches, the contrast pyramid, morphological pyramid, and discrete wavelet transform are more suitable for CCD/SAR image fusion than the others, and that the objective performance evaluation of CCD/SAR image fusion approaches is effective.
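
    Two of the four measures are worth making concrete. A sketch of grey-level entropy and spatial frequency for a grayscale float image in [0, 1]; the bin count and names are illustrative.

      import numpy as np

      def entropy(img, bins=256):
          """Shannon entropy of the grey-level histogram, in bits."""
          counts, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
          p = counts / counts.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      def spatial_frequency(img):
          """SF = sqrt(RF^2 + CF^2) over first differences of rows/columns."""
          rf2 = np.mean(np.diff(img, axis=1) ** 2)   # row (horizontal) activity
          cf2 = np.mean(np.diff(img, axis=0) ** 2)   # column (vertical) activity
          return float(np.sqrt(rf2 + cf2))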

  2. An Improved Medical Image Fusion Algorithm for Anatomical and Functional Medical Images

    Institute of Scientific and Technical Information of China (English)

    CHEN Mei-ling; TAO Ling; QIAN Zhi-yu

    2009-01-01

    In recent years, many medical image fusion methods have been developed to derive useful information from multimodality medical image data, but no fully appropriate fusion algorithm exists for anatomical and functional medical images. In this paper, the traditional wavelet fusion method is improved and a new fusion algorithm for anatomical and functional medical images is proposed, in which the high-frequency and low-frequency coefficients are treated separately. When choosing the high-frequency coefficients, the global gradient of each sub-image is calculated to realize adaptive fusion, so that the fused image preserves the functional information; the low-frequency coefficients are chosen based on analysis of the neighborhood region energy, so that the fused image preserves the anatomical image's edge and texture features. Experimental results and the quality evaluation parameters show that the improved fusion algorithm can enhance edge and texture features and effectively retain the functional and anatomical information.

  3. MRI-ultrasound fusion biopsy for prediction of final prostate pathology

    Science.gov (United States)

    Le, Jesse D.; Stephenson, Samuel; Brugger, Michelle; Lu, David Y.; Lieu, Patricia; Sonn, Geoffrey A.; Natarajan, Shyam; Dorey, Frederick J.; Huang, Jiaoti; Margolis, Daniel J.A.; Reiter, Robert E.; Marks, Leonard S.

    2014-01-01

    PURPOSE To explore the impact of MRI-ultrasound (MRI-US) fusion prostate biopsy on prediction of final surgical pathology. MATERIALS AND METHODS 54 consecutive men undergoing radical prostatectomy at UCLA after Artemis fusion biopsy (Eigen, Grass Valley, CA) were included in this prospective IRB-approved pilot study. Using MRI-US fusion, tissue was obtained from a 12-point systematic grid (mapping biopsy, MBx) and from regions of interest detected by multi-parametric MRI (targeted biopsy, TBx). A single radiologist read all MRIs, and a single pathologist independently re-reviewed all biopsy and whole-mount pathology, blinded to prior interpretation and matched specimen. Gleason score (GS) concordance between biopsy and prostatectomy was the primary endpoint. RESULTS Mean age was 62 years, with median PSA 6.2 ng/ml. Final GS at prostatectomy was 6 (13%), 7 (70%), and 8–9 (17%). A tertiary pattern was detected in 17 (31%) men. 32/45 (71%) high-suspicion (image grade 4–5) MRI targets contained prostate cancer (CaP). The per-core cancer detection rate was 20% by MBx and 42% by TBx. The highest Gleason pattern at prostatectomy was detected by MBx in 54%, TBx in 54%, and the combination in 81% of cases. 17% were upgraded from fusion biopsy to final pathology; one case (2%) was downgraded. The combination of TBx and MBx was needed to obtain the best predictive accuracy. CONCLUSIONS In this pilot study, MR-US fusion biopsy allowed for prediction of final prostate pathology with greater accuracy than that reported previously using conventional methods (81% versus 40–65%). If confirmed, these results would have important clinical implications. PMID:24793118

  4. Multispectral image fusion for detecting land mines

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G.A.; Sengupta, S.K.; Aimonetti, W.D.; Roeske, F.; Donetti, J.G.; Fields, D.J.; Sherwood, R.J.; Schaich, P.C.

    1995-04-01

    This report details a system which fuses information contained in registered images from multiple sensors to reduce the effects of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six bands (400nm, 500nm, 600nm, 700nm, 800nm and 900nm). Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite of sensors detects a variety of physical properties that are more separable in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, etc.) and some artifacts.

  5. Content Based Image Recognition by Information Fusion with Multiview Features

    Directory of Open Access Journals (Sweden)

    Rik Das

    2015-09-01

    Full Text Available Substantial research interest has been observed in the field of object recognition as a vital component of modern intelligent systems. Content-based image classification and retrieval have been considered two popular techniques for identifying objects of interest, and feature extraction plays the pivotal role in their successful implementation. This paper presents two novel techniques for feature extraction from diverse image categories, in both the spatial domain and the frequency domain. The multiview features from the image categories were evaluated for classification and retrieval performance by means of a fusion-based recognition architecture. The experiments were carried out with four different popular public datasets. The proposed fusion framework exhibited an average increase of 24.71% and 20.78% in precision rates for classification and retrieval, respectively, when compared to state-of-the-art techniques. The experimental findings were validated with a paired t-test for statistical significance.

  6. Global optimization for multisensor fusion in seismic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Protopopescu, V.; Reister, D. [Oak Ridge National Lab., TN (United States). Center for Engineering Systems Advanced Research

    1997-06-01

    The accurate imaging of subsurface structures requires the fusion of data collected from large arrays of seismic sensors. The fusion process is formulated as an optimization problem and yields an extremely complex energy surface. Due to the very large number of local minima to be explored and escaped from, the seismic imaging problem has typically been tackled with stochastic optimization methods based on Monte Carlo techniques. Unfortunately, these algorithms are very cumbersome and computationally intensive. Here, the authors present TRUST--a novel deterministic algorithm for global optimization that they apply to seismic imaging. The excellent results demonstrate that TRUST may provide the necessary breakthrough to address major scientific and technological challenges in fields as diverse as seismic modeling, process optimization, and protein engineering.

  7. Color image fusion for concealed weapon detection

    NARCIS (Netherlands)

    Toet, A.

    2003-01-01

    Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the

  9. Large field-of-view range-gated laser imaging based on image fusion

    Science.gov (United States)

    Ren, Pengdao; Wang, Xinwei; Sun, Liang; You, Ruirong; Lei, Pingshun; Zhou, Yan

    2016-11-01

    Laser range-gated imaging has great potential for remote night surveillance, offering long detection distances and high resolution even under bad weather conditions such as fog, snow, and rain. However, the field of view (FOV) is smaller than large objects such as buildings, towers, and mountains, so only parts of a target are observed in a single frame, which makes target identification difficult. A larger FOV would help, but the detection range is then lost because the illumination density is low when the illumination field is widened to match the FOV. Therefore, large field-of-view range-gated laser imaging based on image fusion is proposed in this paper, and in particular an image fusion algorithm is developed for low-contrast images. First, an infrared laser range-gated system is established to acquire gate images with a small FOV for three different scenarios at night. The proposed image fusion algorithm is then used to generate panoramas for the three groups of images. Compared with the raw images directly obtained by the imaging system, the fused images have a larger FOV with more detailed target information. The experimental results demonstrate that the proposed image fusion algorithm is effective in expanding the FOV of range-gated imaging.
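
    Where a purpose-built low-contrast fusion algorithm is unavailable, the overall idea — merging overlapping small-FOV gate images into one panorama — can be prototyped with OpenCV's general-purpose stitcher. A sketch: the file names are illustrative, and the stitcher is a stand-in, not the paper's algorithm.

      import cv2

      # Overlapping gate images from the range-gated sensor (paths illustrative)
      frames = [cv2.imread(p) for p in ('gate_01.png', 'gate_02.png', 'gate_03.png')]

      stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # planar/scan mode
      status, panorama = stitcher.stitch(frames)
      if status == cv2.Stitcher_OK:
          cv2.imwrite('panorama.png', panorama)   # the expanded-FOV result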

  10. Feature Fusion Based SVM Classifier for Protein Subcellular Localization Prediction.

    Science.gov (United States)

    Rahman, Julia; Mondal, Md Nazrul Islam; Islam, Md Khaled Ben; Hasan, Md Al Mehedi

    2016-12-18

    Given the importance of protein subcellular localization in different branches of life science and drug discovery, researchers have focused their attention on its prediction. Effective representation of features from protein sequences plays a vital role in protein subcellular localization prediction, especially for machine learning techniques. Single feature representations like pseudo amino acid composition (PseAAC), physiochemical property models (PPM), and amino acid index distribution (AAID) contain insufficient information from protein sequences. To deal with this problem, we have proposed two feature fusion representations, AAIDPAAC and PPMPAAC, to work with Support Vector Machine classifiers; they fuse PseAAC with AAID and PPM, respectively. We have evaluated the performance of both the single and fused feature representations on a Gram-negative bacterial dataset, obtaining at least 3% higher actual accuracy with AAIDPAAC and 2% higher locative accuracy with PPMPAAC than with single feature representations.
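
    Feature-level fusion of this kind is simply concatenation of per-sequence feature vectors before classification. A self-contained sketch with scikit-learn; the PseAAC/PPM arrays and labels below are synthetic placeholders, not real protein data.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n = 200
      X_pseaac = rng.normal(size=(n, 50))     # placeholder PseAAC features
      X_ppm = rng.normal(size=(n, 30))        # placeholder physiochemical features
      y = rng.integers(0, 4, size=n)          # placeholder localization labels

      X_fused = np.hstack([X_pseaac, X_ppm])  # PPMPAAC-style feature fusion
      print(cross_val_score(SVC(kernel='rbf'), X_fused, y, cv=5).mean())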

  11. Land mine detection using multispectral image fusion

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G.A.; Sengupta, S.K.; Aimonetti, W.D.; Roeske, F.; Donetti, J.G.; Fields, D.J.; Sherwood, R.J.; Schaich, P.C.

    1995-03-29

    Our system fuses information contained in registered images from multiple sensors to reduce the effects of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six bands (400nm, 500nm, 600nm, 700nm, 800nm and 900nm). Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite of sensors detects a variety of physical properties that are more separable in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, etc.) and some artifacts. We use a supervised learning pattern recognition approach to detect the metal and plastic land mines. The overall process consists of four main parts: preprocessing, feature extraction, feature selection, and classification. These parts are used in a two-step process to classify a subimage. We extract features from the images, and use feature selection algorithms to select only the most important features according to their contribution to correct detections. This allows us to save computational complexity and determine which of the spectral bands add value to the detection system. The most important features from the various sensors are fused using a supervised learning pattern classifier (the probabilistic neural network). We present results of experiments to detect land mines from real data collected from an airborne platform, and evaluate the usefulness of fusing feature information from multiple spectral bands.
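
    A hedged sketch of the two-step pipeline: univariate feature selection over the spectral-band features, followed by a Parzen-window classifier, which is the density-estimation idea behind a probabilistic neural network. The data, dimensions, and bandwidth below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.neighbors import KernelDensity

    def pnn_predict(X_train, y_train, X_test, bandwidth=0.5):
        """Parzen-window classifier: score each class by kernel density, take argmax."""
        classes = np.unique(y_train)
        scores = np.stack([KernelDensity(bandwidth=bandwidth)
                           .fit(X_train[y_train == c])
                           .score_samples(X_test) for c in classes], axis=1)
        return classes[np.argmax(scores, axis=1)]

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 6))     # six spectral-band features per subimage (toy)
    y = rng.integers(0, 2, size=200)  # mine / background labels (toy)
    selector = SelectKBest(f_classif, k=4).fit(X[:150], y[:150])
    pred = pnn_predict(selector.transform(X[:150]), y[:150],
                       selector.transform(X[150:]))
    ```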

  12. Multimodal Medical Image Fusion by Adaptive Manifold Filter

    Directory of Open Access Journals (Sweden)

    Peng Geng

    2015-01-01

    Full Text Available Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Modified local contrast information is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is introduced to filter the source images, producing the low-frequency part of the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values obtained by the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and the edge-based similarity measure values are on average 13%, 33%, and 14% higher than those of the three methods for the six pairs of source images.
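
    A minimal sketch of the contrast-driven selection rule, with a Gaussian blur standing in for the adaptive manifold filter and a local detail-energy measure standing in for the modified spatial frequency (both substitutions are assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def modified_local_contrast(img, sigma=2.0, win=3):
        """Local contrast = local detail energy over the smoothed base layer."""
        base = gaussian_filter(img, sigma)   # stands in for adaptive manifold filter
        detail = img - base
        energy = np.sqrt(uniform_filter(detail ** 2, win))  # spatial-frequency proxy
        return energy / (np.abs(base) + 1e-6)

    def fuse_pair(a, b):
        """Keep, per pixel, the source with the larger modified local contrast."""
        mask = modified_local_contrast(a) >= modified_local_contrast(b)
        return np.where(mask, a, b)
    ```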

  13. Performance Evaluation of Image Fusion Algorithms for Underwater Images-A study based on PCA and DWT

    Directory of Open Access Journals (Sweden)

    Ansar MK

    2014-11-01

    Full Text Available In this paper, a comparative study between two image fusion algorithms based on PCA and DWT is carried out in the underwater image domain. Underwater image fusion has emerged as one of the main image fusion areas; two or more images are fused while retaining the most desirable characteristics of each underwater image. The DWT technique is used to decompose the input image into four frequency sub-bands, and the low-low sub-band images are considered in fusion processing. In the PCA method, significant eigenvalues are considered in the fusion process to retain the important characteristics of the input images. The results acquired from both experiments are tabulated and compared using statistical measures such as Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and entropy. The results show that underwater image fusion based on DWT outperforms the PCA-based method.
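
    A rough sketch of the two fusion rules being compared, assuming the pywt package for the DWT. Averaging the LL sub-bands and taking max-magnitude detail coefficients is one plausible reading of the DWT rule, not the paper's exact definition; the PCA rule weights the inputs by the dominant eigenvector:

    ```python
    import numpy as np
    import pywt

    def fuse_dwt(a, b, wavelet="db2"):
        """Average the LL sub-bands; keep max-magnitude detail coefficients."""
        ca, (cha, cva, cda) = pywt.dwt2(a, wavelet)
        cb, (chb, cvb, cdb) = pywt.dwt2(b, wavelet)
        low = 0.5 * (ca + cb)
        high = tuple(np.where(np.abs(x) > np.abs(y), x, y)
                     for x, y in [(cha, chb), (cva, cvb), (cda, cdb)])
        return pywt.idwt2((low, high), wavelet)

    def fuse_pca(a, b):
        """Weight the two sources by the dominant eigenvector of their covariance."""
        cov = np.cov(np.stack([a.ravel(), b.ravel()]))
        w = np.abs(np.linalg.eigh(cov)[1][:, -1])
        w = w / w.sum()
        return w[0] * a + w[1] * b

    def psnr(ref, img):
        """Peak signal-to-noise ratio against a reference image."""
        mse = np.mean((ref - img) ** 2)
        return 10.0 * np.log10(ref.max() ** 2 / mse)
    ```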

  14. An efficient multiple exposure image fusion in JPEG domain

    Science.gov (United States)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds its application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, digital cameras, etc. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposed images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
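
    The sigmoidal boosting step can be illustrated in a few lines. This grayscale, spatial-domain sketch ignores the JPEG macroblock machinery, and the gain, midpoint, and blend weight are made-up values:

    ```python
    import numpy as np

    def sigmoid_boost(img, gain=10.0, midpoint=0.3):
        """Brighten a short-exposure image ([0,1] grayscale) with a sigmoid curve."""
        return 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))

    def fuse_exposures(short_exp, long_exp, w_short=0.4):
        """Blend the boosted short exposure with the high-SNR long exposure."""
        return np.clip(w_short * sigmoid_boost(short_exp)
                       + (1.0 - w_short) * long_exp, 0.0, 1.0)
    ```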

  15. A Novel Image Fusion Method Based on FRFT-NSCT

    OpenAIRE

    Peiguang Wang; Hua Tian; Wei Zheng

    2013-01-01

    The Nonsubsampled Contourlet transform (NSCT) has properties such as multiscale analysis, localization, multidirectionality, and shift invariance, but it limits the signal analysis to the time-frequency domain. The Fractional Fourier transform (FRFT) extends signal analysis to the fractional domain and has many superior properties, but it cannot characterize local signal features. A novel image fusion algorithm based on FRFT and NSCT is proposed and demonstrated in this paper. Firstly, take FRFT on t...

  16. Advanced Scintillator Detectors for Neutron Imaging in Inertial Confinement Fusion

    Science.gov (United States)

    Geppert-Kleinrath, Verena; Danly, Christopher; Merrill, Frank; Simpson, Raspberry; Volegov, Petr; Wilde, Carl

    2016-10-01

    The neutron imaging team at Los Alamos National Laboratory (LANL) has been providing two-dimensional neutron imaging of the inertial confinement fusion process at the National Ignition Facility (NIF) for over five years. Neutron imaging is a powerful tool in which position-sensitive detectors register neutrons emitted in the fusion reactions, producing a picture of the burning fuel. Recent images have revealed possible multi-dimensional asymmetries, calling for additional views to facilitate three-dimensional imaging. These will be along shorter lines of sight to stay within the existing facility at NIF. In order to field imaging capabilities equivalent to the existing system several technological challenges have to be met: high spatial resolution, high light output, and fast scintillator response to capture lower-energy neutrons, which have scattered from non-burning regions of fuel. Deuterated scintillators are a promising candidate to achieve the timing and resolution required; a systematic study of deuterated and non-deuterated polystyrene and liquid samples is currently ongoing. A test stand has been implemented to measure the response function, and preliminary data on resolution and light output have been obtained at the LANL Weapons Neutrons Research facility.

  17. Fourier domain image fusion for differential X-ray phase-contrast breast imaging.

    Science.gov (United States)

    Coello, Eduardo; Sperl, Jonathan I; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne

    2017-04-01

    X-Ray Phase-Contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase-shift and scattering information retrieved in XPC imaging, using a Fourier-domain fusion algorithm. The method presents complementary information from the three acquired signals in a single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment validated that all the relevant diagnostic features contained in the XPC images were present in the fused image as well.
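
    A hedged sketch of a Fourier-domain fusion of the three signals: retain the attenuation image's low frequencies and blend high-frequency content from the phase and scattering channels. The cutoff and blend weights are illustrative assumptions, not the published algorithm:

    ```python
    import numpy as np

    def fourier_fuse(attenuation, phase, scatter, cutoff=0.1):
        """Low frequencies from attenuation; high frequencies blended from the rest."""
        h, w = attenuation.shape
        fy, fx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
        lowpass = (np.hypot(fy, fx) < cutoff).astype(float)
        F = (np.fft.fft2(attenuation) * lowpass
             + np.fft.fft2(phase) * (1.0 - lowpass) * 0.6
             + np.fft.fft2(scatter) * (1.0 - lowpass) * 0.4)
        return np.real(np.fft.ifft2(F))
    ```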

  18. Trigeminal neuralgia: Assessment with T2 VISTA and FLAIR VISTA fusion imaging

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Jihoon; Kim, Sung Tae; Kim, Hyung-Jin; Choi, Jin Wook; Kim, Hye Jeong; Jeon, Pyoung; Kim, Keon Ha; Byun, Hong Sik [Sungkyunkwan University School of Medicine, Samsung Medical Center, Department of Radiology and Center for Imaging Science, Seoul (Korea, Republic of); Park, Kwan [Sungkyunkwan University School of Medicine, Samsung Medical Center, Department of Neurosurgery, Seoul (Korea, Republic of)

    2011-12-15

    To evaluate neurovascular compression (NVC) in patients with trigeminal neuralgia (TN) using T2 VISTA and FLAIR VISTA fusion imaging. Sixty-six consecutive patients with TN who underwent MR imaging at 3 T between April 2008 and December 2010 were retrospectively reviewed. Multiplanar reconstructions (MPR) of T2 VISTA and FLAIR VISTA fusion imaging were used for image interpretation. The frequency of vascular contact, the segment of compression, and the type of vessel were compared between the ipsilateral symptomatic side and the contralateral asymptomatic side. The frequency of vascular contact on the ipsilateral side and the contralateral side was 95.5% (63/66) and 74.2% (49/66), respectively. The frequency of indentation on the ipsilateral side and contralateral side was 74.2% (49/66) and 21.2% (14/66), a statistically significant difference (p < 0.05). The sensitivity, specificity and odds ratio were 77.8%, 71.4% and 10.7, respectively. There were no significant differences in the involved segment or type of vessel between the ipsilateral side and contralateral side. MPR of T2 VISTA and FLAIR VISTA fusion imaging is useful in the detection of NVC in patients with TN. Vascular indentation can predict the presence of symptoms in patients with TN. (orig.)

  19. Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering.

    Science.gov (United States)

    Gong, Maoguo; Zhou, Zhiqiang; Ma, Jingjing

    2012-04-01

    This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for the low-frequency band and the high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates information about the spatial context in a novel fuzzy way for the purpose of enhancing the changed information and reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than the preexisting approaches.
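
    The two difference operators feeding the fusion step can be sketched as follows; the mean-ratio operator here uses plain local means, and the window size is an assumption:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def log_ratio(x1, x2, eps=1e-6):
        """Log-ratio difference image; emphasizes relative change."""
        return np.abs(np.log((x2 + eps) / (x1 + eps)))

    def mean_ratio(x1, x2, win=3, eps=1e-6):
        """Mean-ratio difference image computed from local means."""
        m1 = uniform_filter(x1.astype(float), win) + eps
        m2 = uniform_filter(x2.astype(float), win) + eps
        return 1.0 - np.minimum(m1 / m2, m2 / m1)
    ```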

  20. Fusion for Evaluation of Image Classification in Uncertain Environments

    CERN Document Server

    Martin, Arnaud

    2008-01-01

    We present in this article a new evaluation method for the classification and segmentation of textured images in uncertain environments, where the real classes and boundaries are known with only partial certainty given by the experts. In many published papers, only classification or only segmentation is considered and evaluated. Here, we propose to take into account both the classification and segmentation results according to the certainty given by the experts. We present the results of this method on a fusion of classifiers of sonar images for seabed characterization.

  1. Infrared and visible image fusion using NSCT and GGD

    Science.gov (United States)

    Zhang, Xiuqiong; Liu, Cuiyin; Men, Tao; Qin, Hongyin; Wang, Mingrong

    2011-06-01

    In order to fuse visible and infrared images captured in low-visibility conditions, a method based on the nonsubsampled contourlet transform (NSCT) and the generalized Gaussian distribution (GGD) is proposed in this paper. The statistical character of the directional coefficients produced by the NSCT decomposition follows a GGD, so the coefficients are modeled with an absolute-moment estimator applied over local neighborhoods of the directional coefficients. The estimated scale parameter is used to measure saliency and compute the fusion weights. The fused coefficients are obtained by weighted averaging and then reconstructed into the final fused image. Compared to the DWT and SIDWT, the proposed method shows superior fusion performance.
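
    A minimal sketch of the absolute-moment estimate of the GGD scale parameter and the resulting weighted-average rule, assuming a fixed, known shape parameter beta; a plain moving window stands in for the paper's local-neighborhood estimation over each NSCT direction:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter
    from scipy.special import gamma

    def ggd_scale_map(coeffs, beta=1.0, win=5):
        """Local GGD scale alpha from E|x| = alpha * Gamma(2/beta) / Gamma(1/beta)."""
        abs_moment = uniform_filter(np.abs(coeffs), win)
        return abs_moment * gamma(1.0 / beta) / gamma(2.0 / beta)

    def fuse_coeffs(c1, c2, eps=1e-12):
        """Weighted average of directional coefficients by estimated saliency."""
        s1, s2 = ggd_scale_map(c1), ggd_scale_map(c2)
        w1 = s1 / (s1 + s2 + eps)
        return w1 * c1 + (1.0 - w1) * c2
    ```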

  2. Improving Accuracy for Image Fusion in Abdominal Ultrasonography

    Directory of Open Access Journals (Sweden)

    Caroline Ewertsen

    2012-08-01

    Full Text Available Image fusion involving real-time ultrasound (US) is a technique where previously recorded computed tomography (CT) or magnetic resonance images (MRI) are reformatted in a projection to fit the real-time US images after an initial co-registration. The co-registration aligns the images by means of common planes or points. We evaluated the accuracy of the alignment when varying parameters such as patient position, respiratory phase, and distance from the co-registration points/planes. We performed a total of 80 co-registrations and obtained the highest accuracy when the respiratory phase for the co-registration procedure was the same as when the CT or MRI was obtained. Furthermore, choosing co-registration points/planes close to the area of interest also improved the accuracy. With all settings optimized, a mean error of 3.2 mm was obtained. We conclude that image fusion involving real-time US is an accurate method for abdominal examinations and that the accuracy is influenced by various adjustable factors that should be kept in mind.

  3. Digital three-dimensional image fusion processes for planning and evaluating orthodontics and orthognathic surgery. A systematic review.

    Science.gov (United States)

    Plooij, Joanneke M; Maal, Thomas J J; Haers, Piet; Borstlap, Wilfred A; Kuijpers-Jagtman, Anne Marie; Bergé, Stefaan J

    2011-04-01

    The three important tissue groups in orthognathic surgery (facial soft tissues, facial skeleton and dentition) can be referred to as a triad. This triad plays a decisive role in planning orthognathic surgery. Technological developments have led to the development of different three-dimensional (3D) technologies such as multiplanar CT and MRI scanning, 3D photography modalities and surface scanning. An objective method to predict surgical and orthodontic outcome should be established based on the integration of structural (soft tissue envelope, facial skeleton and dentition) and photographic 3D images. None of the craniofacial imaging techniques can capture the complete triad with optimal quality. This can only be achieved by 'image fusion' of different imaging techniques to create a 3D virtual head that can display all triad elements. A systematic search of current literature on image fusion in the craniofacial area was performed. 15 articles were found describing 3D digital image fusion models of two or more different imaging techniques for orthodontics and orthognathic surgery. From these articles it is concluded, that image fusion and especially the 3D virtual head are accurate and realistic tools for documentation, analysis, treatment planning and long term follow up. This may provide an accurate and realistic prediction model.

  4. Multi-Focus Image Fusion Based on NSCT and NSST

    Science.gov (United States)

    Moonon, Altan-Ulzii; Hu, Jianwen

    2015-12-01

    In this paper, a multi-focus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) is proposed. The source images are first decomposed by the NSCT and NSST into low frequency coefficients and high frequency coefficients. Then, the average method is used to fuse low frequency coefficient of the NSCT. To obtain more accurate salience measurement, the high frequency coefficients of the NSST and NSCT are combined to measure salience. The high frequency coefficients of the NSCT with larger salience are selected as fused high frequency coefficients. Finally, the fused image is reconstructed by the inverse NSCT. We adopt three metrics (QAB/F, Qe and Qw) to evaluate the quality of fused images. The experimental results demonstrate that the proposed method outperforms other methods. It retains highly detailed edges and contours.

  5. MRI and PET image fusion using fuzzy logic and image local features.

    Science.gov (United States)

    Javed, Umer; Riaz, Muhammad Mohsin; Ghafoor, Abdul; Ali, Syed Sohaib; Cheema, Tanveer Ahmed

    2014-01-01

    An image fusion technique for magnetic resonance imaging (MRI) and positron emission tomography (PET) using local features and fuzzy logic is presented. The aim of the proposed technique is to maximally combine the useful information present in MRI and PET images. Image local features are extracted and combined with fuzzy logic to compute weights for each pixel. Simulation results show that the proposed scheme produces significantly better results compared to state-of-the-art schemes.

  6. IMPROVING THE QUALITY OF NEAR-INFRARED IMAGING OF IN VIVO BLOOD VESSELS USING IMAGE FUSION METHODS

    DEFF Research Database (Denmark)

    Jensen, Andreas Kryger; Savarimuthu, Thiusius Rajeeth; Sørensen, Anders Stengaard

    2009-01-01

    We investigate methods for improving the visual quality of in vivo images of blood vessels in the human forearm. Using a near-infrared light source and a dual CCD chip camera system capable of capturing images in the visual and near-infrared spectra, we evaluate three fusion methods in terms of their ...

  7. HALO: a reconfigurable image enhancement and multisensor fusion system

    Science.gov (United States)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.

  8. Synthetically Evaluation System for Multi-source Image Fusion and Experimental Analysis

    Institute of Scientific and Technical Information of China (English)

    XIAO Gang; JING Zhong-liang; WU Jian-min; LIU Cong-yi

    2006-01-01

    Study of the evaluation system for multi-source image fusion is an important and necessary part of image fusion. Qualitative evaluation indexes and quantitative evaluation indexes were studied. A series of new concepts, such as the independent single evaluation index, the union single evaluation index, and the synthetic evaluation index, were proposed. Based on these concepts, a synthetic evaluation system for digital image fusion was formed. Experiments with the wavelet fusion method, applied to fuse multi-spectral and panchromatic remote sensing images, IR and visible images, CT and MRI images, and multi-focus images, show that it is an objective, uniform and effective quantitative method for image fusion evaluation.
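
    Two of the standard quantitative indexes used in such evaluation systems, image entropy and fused-to-source mutual information, can be computed as follows (a generic sketch, not the paper's exact index definitions):

    ```python
    import numpy as np

    def entropy(img, bins=256):
        """Shannon entropy of the image histogram, in bits."""
        p, _ = np.histogram(img, bins=bins)
        p = p / p.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def mutual_information(a, b, bins=64):
        """Mutual information between a source image and the fused image."""
        pab, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pab = pab / pab.sum()
        pa, pb = pab.sum(axis=1), pab.sum(axis=0)
        nz = pab > 0
        return np.sum(pab[nz] * np.log2(pab[nz] / np.outer(pa, pb)[nz]))
    ```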

  9. Thought–shape fusion and body image in eating disorders

    Directory of Open Access Journals (Sweden)

    Jáuregui-Lobera I

    2012-10-01

    Full Text Available Ignacio Jáuregui-Lobera,1 Patricia Bolaños-Ríos,2 Inmaculada Ruiz-Prieto2; 1Department of Nutrition and Bromatology, Pablo de Olavide University, Seville, Spain; 2Behavioral Sciences Institute, Seville, Spain. Purpose: The aim of this study was to analyze the relationships among thought–shape fusion (TSF), specific instruments to assess body image disturbances, and body image quality of life in eating disorder patients in order to improve the understanding of the links between body image concerns and a specific bias consisting of beliefs about the consequences of thinking about forbidden foods. Patients and methods: The final sample included 76 eating disorder patients (mean age 20.13 ± 2.28 years; 59 women and seven men). After having obtained informed consent, the following questionnaires were administered: Body Appreciation Scale (BAS), Body Image Quality of Life Inventory (BIQLI-SP), Body Shape Questionnaire (BSQ), Eating Disorders Inventory-2 (EDI-2), State-Trait Anxiety Inventory (STAI), Symptom Checklist-90-Revised (SCL-90-R) and Thought-Shape Fusion Questionnaire (TSF-Q). Results: Significant correlations were found between the TSF-Q and body image-related variables. Those with higher scores in TSF showed higher scores on the BSQ (P < 0.0001), Eating Disorder Inventory – Drive for Thinness (EDI-DT) (P < 0.0001), and Eating Disorder Inventory – Body Dissatisfaction (EDI-BD) (P < 0.0001). The same patients showed lower scores on the BAS (P < 0.0001). With respect to the psychopathological variables, patients with high TSF obtained higher scores on all SCL-90-R subscales as well as on the STAI. Conclusion: The current study shows the interrelations among different body image-related variables, TSF, and body image quality of life. Keywords: cognitive distortions, quality of life, body appreciation, psychopathology, anorexia nervosa, bulimia nervosa

  10. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    Directory of Open Access Journals (Sweden)

    Yuanshen Zhao

    2016-01-01

    Full Text Available Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances in the image background. The bottleneck for robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to obtain the optimal threshold. The final segmentation result was processed by a morphology operation to remove a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized among 200 overall samples. This indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in an uncontrolled environment at low cost.
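
    A condensed sketch of the channel extraction and thresholding steps, with a plain average standing in for the pixel-level wavelet fusion (an assumption) and skimage supplying the colour conversions and the Otsu threshold:

    ```python
    import numpy as np
    from skimage.color import rgb2lab, rgb2yiq
    from skimage.filters import threshold_otsu

    def tomato_mask(rgb):
        """Fuse the a* (L*a*b*) and I (YIQ) channels, then Otsu-threshold."""
        a_star = rgb2lab(rgb)[..., 1]        # red-green opponent channel
        i_comp = rgb2yiq(rgb)[..., 1]        # in-phase chrominance channel
        def norm(x):
            return (x - x.min()) / (np.ptp(x) + 1e-12)
        fused = 0.5 * (norm(a_star) + norm(i_comp))  # mean replaces wavelet fusion
        return fused > threshold_otsu(fused)
    ```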

  11. Prediction of off-target drug effects through data fusion.

    Science.gov (United States)

    Yera, Emmanuel R; Cleves, Ann E; Jain, Ajay N

    2014-01-01

    We present a probabilistic data fusion framework that combines multiple computational approaches for drawing relationships between drugs and targets. The approach has special relevance to identifying surprising unintended biological targets of drugs. Comparisons between molecules are made based on 2D topological structural considerations, based on 3D surface characteristics, and based on English descriptions of clinical effects. Similarity computations within each modality were transformed into probability scores. Given a new molecule along with a set of molecules sharing some biological effect, a single score based on comparison to the known set is produced, reflecting either 2D similarity, 3D similarity, clinical effects similarity or their combination. The methods were validated within a curated structural pharmacology database (SPDB) and further tested by blind application to data derived from the ChEMBL database. For prediction of off-target effects, 3D similarity performed best as a single modality, but combining all methods produced performance gains. Striking examples of structurally surprising off-target predictions are presented.

  12. Image fusion and navigation platforms for percutaneous image-guided interventions.

    Science.gov (United States)

    Rajagopal, Manoj; Venkatesan, Aradhana M

    2016-04-01

    Image-guided interventional procedures, particularly image guided biopsy and ablation, serve an important role in the care of the oncology patient. The need for tumor genomic and proteomic profiling, early tumor response assessment and confirmation of early recurrence are common scenarios that may necessitate successful biopsies of targets, including those that are small, anatomically unfavorable or inconspicuous. As image-guided ablation is increasingly incorporated into interventional oncology practice, similar obstacles are posed for the ablation of technically challenging tumor targets. Navigation tools, including image fusion and device tracking, can enable abdominal interventionalists to more accurately target challenging biopsy and ablation targets. Image fusion technologies enable multimodality fusion and real-time co-displays of US, CT, MRI, and PET/CT data, with navigational technologies including electromagnetic tracking, robotic, cone beam CT, optical, and laser guidance of interventional devices. Image fusion and navigational platform technology is reviewed in this article, including the results of studies implementing their use for interventional procedures. Pre-clinical and clinical experiences to date suggest these technologies have the potential to reduce procedure risk, time, and radiation dose to both the patient and the operator, with a valuable role to play for complex image-guided interventions.

  13. Fusion of PET and MRI for Hybrid Imaging

    Science.gov (United States)

    Cho, Zang-Hee; Son, Young-Don; Kim, Young-Bo; Yoo, Seung-Schik

    Recently, the development of the fusion PET-MRI system has been actively studied to meet the increasing demand for integrated molecular and anatomical imaging. MRI can provide detailed anatomical information on the brain, such as the locations of gray and white matter, blood vessels, and axonal tracts, with high resolution, while PET can measure molecular and genetic information, such as glucose metabolism, neurotransmitter-neuroreceptor binding and affinity, protein-protein interactions, and gene trafficking among biological tissues. State-of-the-art MRI systems, such as the 7.0 T whole-body MRI, can now visualize super-fine structures including neuronal bundles in the pons, fine blood vessels (such as lenticulostriate arteries) without invasive contrast agents, in vivo hippocampal substructures, and the substantia nigra with excellent image contrast. High-resolution PET, known as the High-Resolution Research Tomograph (HRRT), is a brain-dedicated system capable of imaging minute changes of chemicals, such as neurotransmitters and neuroreceptors, with high spatial resolution and sensitivity. The synergistic power of the two, i.e., ultra-high-resolution anatomical information offered by a 7.0 T MRI system combined with the high-sensitivity molecular information offered by HRRT-PET, will significantly elevate the level of our current understanding of the human brain, one of the most delicate, complex, and mysterious biological organs. This chapter introduces MRI, PET, and the PET-MRI fusion system, and its algorithms are discussed in detail.

  14. Automatic Defect Detection in X-Ray Images Using Image Data Fusion

    Institute of Scientific and Technical Information of China (English)

    TIAN Yuan; DU Dong; CAI Guorui; WANG Li; ZHANG Hua

    2006-01-01

    Automatic defect detection in X-ray images is currently a focus of much research at home and abroad. The technology requires computerized image processing, image analysis, and pattern recognition. This paper describes an image processing method for automatic defect detection using image data fusion which synthesizes several methods including edge extraction, wave profile analyses, segmentation with dynamic threshold, and weld district extraction. Test results show that defects that induce an abrupt change over a predefined extent of the image intensity can be segmented regardless of the number, location, shape, or size. Thus, the method is more robust and practical than the current methods using only one method.

  15. Optical asymmetric watermarking using modified wavelet fusion and diffractive imaging

    Science.gov (United States)

    Mehra, Isha; Nishchal, Naveen K.

    2015-05-01

    In most of the existing image encryption algorithms, the generated keys take the form of a noise-like distribution with a uniform histogram. However, the noise-like distribution is an apparent sign indicating the presence of the keys. If the keys are to be transferred through some communication channel, this may lead to a security problem, because the noise-like features may easily catch people's attention and invite attacks. To address this problem, the keys should be transferred into other meaningful images to disguise the attackers. Watermarking schemes are complementary to image encryption schemes. In most iterative encryption schemes, support constraints play the important role of the keys needed to decrypt the meaningful data. In this article, we have transferred the support constraints, which are generated by axial translation of a CCD camera using an amplitude- and phase-truncation approach, into different meaningful images. This has been done by developing a modified fusion technique in the wavelet transform domain. The second issue is, in case the meaningful images are caught by the attacker, how to resolve copyright protection. To resolve this issue, watermark detection plays a crucial role, and it is necessary to recover the original image using the retrieved watermarks/support constraints. To address this issue, four asymmetric keys have been generated corresponding to each watermarked image to retrieve the watermarks. For decryption, an iterative phase retrieval algorithm is applied to extract the plain-texts from the corresponding retrieved watermarks.

  16. Adaptive Fusion of Stochastic Information for Imaging Fractured Vadose Zones

    Science.gov (United States)

    Daniels, J.; Yeh, J.; Illman, W.; Harri, S.; Kruger, A.; Parashar, M.

    2004-12-01

    A stochastic information fusion methodology is developed to assimilate electrical resistivity tomography, high-frequency ground penetrating radar, mid-range-frequency radar, pneumatic/gas tracer tomography, and hydraulic/tracer tomography to image fractures, characterize hydrogeophysical properties, and monitor natural processes in the vadose zone. The information technology research will develop: 1) mechanisms and algorithms for fusion of large data volumes; 2) parallel adaptive computational engines supporting parallel adaptive algorithms and multi-physics/multi-model computations; 3) adaptive runtime mechanisms for proactive and reactive runtime adaptation and optimization of geophysical and hydrological models of the subsurface; and 4) technologies and infrastructure for remote (pervasive) and collaborative access to computational capabilities for monitoring subsurface processes through interactive visualization tools. The combination of the stochastic fusion approach and information technology can lead to a new level of capability for both hydrologists and geophysicists, enabling them to "see" into the earth at greater depths and resolutions than is possible today. Furthermore, the new computing strategies will make high-resolution and large-scale hydrological and geophysical modeling feasible for the private sector, scientists, and engineers who are unable to access supercomputers, i.e., an effective paradigm for technology transfer.

  17. Implementation of multispectral image fusion system based on SoPC

    Science.gov (United States)

    Meng, Lingfei; Wang, Zhihui

    2013-10-01

    Combining the theory of wavelet-transform-based image fusion with the SoPC design method, the authors use a SoPC as the core device to design and implement an image fusion system. The fusion system adopts the Verilog hardware description language, DSP Builder, and the Quartus II development platform, together with macro modules, to complete the logic design and timing control of each module. The system achieves simple pixel-level image fusion of two registered images. This design not only builds an image fusion system based on SoPC, but also provides a SoPC hardware design principle for the future design and implementation of image processing with more comprehensive functions.

  18. Fusion

    CERN Document Server

    Mahaffey, James A

    2012-01-01

    As energy problems of the world grow, work toward fusion power continues at a greater pace than ever before. The topic of fusion is one that is often met with the most recognition and interest in the nuclear power arena. Written in clear and jargon-free prose, Fusion explores the big bang of creation to the blackout death of worn-out stars. A brief history of fusion research, beginning with the first tentative theories in the early 20th century, is also discussed, as well as the race for fusion power. This brand-new, full-color resource examines the various programs currently being funded or p

  19. A method based on IHS cylindrical transform model for quality assessment of image fusion

    Science.gov (United States)

    Zhu, Xiaokun; Jia, Yonghong

    2005-10-01

    Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for the quality assessment of image fusion in remote sensing have also become a research issue at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects: on one hand, most indexes lack theoretical support for comparing different fusion methods; on the other hand, there is no uniform preference among the quantitative assessment indexes when they are applied to estimate fusion effects. That is, the spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method to unify spatial and spectral feature assessment. So in this paper, on the basis of an approximate general model of four traditional fusion methods, including Intensity Hue Saturation (IHS) triangle transform fusion, High Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only evaluate spatial and spectral features under a uniform preference, but can also compare fusion image sources with fused images and reveal differences among fusion methods. Compared with the traditional assessment methods, the new method is more intuitive and accords better with subjective estimation.

  20. Tissue identification with micro-magnetic resonance imaging in a caprine spinal fusion model

    NARCIS (Netherlands)

    M.P. Uffen; M.R. Krijnen; R.J. Hoogendoorn; G.J. Strijkers; V. Everts; P.I. Wuisman; T.H. Smit

    2008-01-01

    Nonunion is a major complication of spinal interbody fusion. Currently X-ray and computed tomography (CT) are used for evaluating the spinal fusion process. However, both imaging modalities have limitations in judgment of the early stages of this fusion process, as they only visualize mineralized bo

  1. Improved image fusion method based on NSCT and accelerated NMF.

    Science.gov (United States)

    Wang, Juan; Lai, Siyu; Li, Mingdong

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on the low-frequency sub-images to get the low-pass coefficients. The low-frequency fused image can be generated faster because the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is applied to the high-frequency part to obtain the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. The simulated experiments prove that our method indeed improves performance when compared to PCA, NSCT-based, NMF-based and weighted-NMF-based algorithms.
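
    For reference, the classic multiplicative update rules that an accelerated NMF optimizes look like this (a plain Lee-Seung sketch applied to a low-frequency sub-image; the paper's accelerated updates for W and H are not reproduced here):

    ```python
    import numpy as np

    def nmf(V, rank=4, iters=200, eps=1e-9):
        """Factor a non-negative matrix V into W @ H by multiplicative updates."""
        rng = np.random.default_rng(0)
        W = rng.random((V.shape[0], rank))
        H = rng.random((rank, V.shape[1]))
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
            W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
        return W, H
    ```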

  2. Improved Image Fusion Method Based on NSCT and Accelerated NMF

    Directory of Open Access Journals (Sweden)

    Mingdong Li

    2012-05-01

    Full Text Available In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on the low-frequency sub-images to get the low-pass coefficients. The low-frequency fused image can be generated faster because the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is applied to the high-frequency part to obtain the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. The simulated experiments prove that our method indeed improves performance when compared to PCA, NSCT-based, NMF-based and weighted-NMF-based algorithms.

  3. Analysis and Evaluation of IKONOS Image Fusion Algorithm Based on Land Cover Classification

    Institute of Scientific and Technical Information of China (English)

    Xia JING; Yan BAO

    2015-01-01

    Different fusion algorithms have their own advantages and limitations, so it is very difficult to simply rate the good and bad points of each fusion algorithm. Whether an algorithm is selected to fuse object images also depends on the sensor types and the specific research purposes. Firstly, five fusion methods, i.e. IHS, Brovey, PCA, SFIM and Gram-Schmidt, are briefly described in the paper. Then visual judgment and quantitative statistical parameters are used to assess the five algorithms. Finally, in order to determine which is the most suitable fusion method for land cover classification of IKONOS images, the maximum likelihood classification (MLC) was applied to the above five fusion images. The results showed that the fusion effects of the SFIM and Gram-Schmidt transforms were better than those of the other three image fusion methods in spatial detail improvement and spectral information fidelity, and the Gram-Schmidt technique was superior to the SFIM transform in expressing image details. The classification accuracy of the images fused using the Gram-Schmidt and SFIM algorithms was higher than that of the other three image fusion methods, and the overall accuracy was greater than 98%. The IHS-fused image classification accuracy was the lowest; the overall accuracy and kappa coefficient were 83.14% and 0.76, respectively. Thus the IKONOS fusion images obtained by Gram-Schmidt and SFIM were better for improving land cover classification accuracy.

  4. Superresolution image reconstruction using panchromatic and multispectral image fusion

    Science.gov (United States)

    Elbakary, M. I.; Alam, M. S.

    2008-08-01

    Hyperspectral imagery is used for a wide variety of applications, including target detection, tracking, agricultural monitoring and natural resources exploration. The main reason for using hyperspectral imagery is that these images reveal spectral information about the scene that is not available in a single band. Unfortunately, many factors such as sensor noise and atmospheric scattering degrade the spatial quality of these images. Recently, many algorithms have been introduced in the literature to improve the resolution of hyperspectral images using co-registered high spatial-resolution imagery such as panchromatic imagery. In this paper, we propose a new algorithm to enhance the spatial resolution of low-resolution hyperspectral bands using strongly correlated and co-registered high spatial-resolution panchromatic imagery. The proposed algorithm constructs the superresolution bands corresponding to the low-resolution bands using a global correlation enhancement technique. The global enhancement is based on least squares regression and histogram matching to improve the estimated interpolation of the spatial resolution. The introduced algorithm can be considered an improvement of Price's algorithm, which uses only the global correlation for spatial resolution enhancement. Numerous studies were conducted to investigate the enhancement achieved by the proposed algorithm compared to the traditional superresolution enhancement algorithm. Experimental results obtained using hyperspectral data derived from an airborne imaging sensor are presented to verify the superiority of the proposed algorithm.
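
    A hedged sketch of the global enhancement idea: least-squares regression of each upsampled band against the panchromatic image, followed by histogram matching. This is a simplified reading of the approach, not the published algorithm:

    ```python
    import numpy as np

    def regress_band(band_up, pan):
        """Least-squares fit of the upsampled band against the panchromatic image."""
        A = np.stack([pan.ravel(), np.ones(pan.size)], axis=1)
        a, b = np.linalg.lstsq(A, band_up.ravel(), rcond=None)[0]
        return a * pan + b

    def hist_match(src, ref):
        """Map src's intensity distribution onto ref's via their CDFs."""
        s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True,
                                         return_counts=True)
        r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
        s_cdf = np.cumsum(s_cnt) / src.size
        r_cdf = np.cumsum(r_cnt) / ref.size
        return np.interp(s_cdf, r_cdf, r_vals)[s_idx].reshape(src.shape)
    ```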

  5. Infrared and multi-type images fusion algorithm based on contrast pyramid transform

    Science.gov (United States)

    Xu, Hua; Wang, Yan; Wu, Yujing; Qian, Yunsheng

    2016-09-01

    A fusion algorithm for infrared and multi-type images based on the contrast pyramid transform (CPT) combined with the Otsu method and morphology is proposed in this paper. Firstly, two sharpened images are combined into a first fused image using an information-entropy-weighted scheme. Afterwards, the two enhanced images and the first fused one are decomposed into a series of images with different dimensions and spatial frequencies. For the low-frequency layer, the Otsu method is applied to calculate the optimal segmentation threshold of the first fused image, which is subsequently used to determine the pixel values of the top-layer fused image. For the high-frequency layers, top-hat and bottom-hat morphological transforms are applied to each layer before the maximum selection criterion. Finally, the series of decomposed images is reconstructed and then, as a second fusion, superposed with the enhanced image processed by a morphological gradient operation to obtain the final fused image. Infrared and visible image fusion, infrared and low-light-level (LLL) image fusion, infrared intensity and infrared polarization image fusion, and multi-focus image fusion are discussed in this paper. Both experimental results and objective metrics demonstrate the effectiveness and superiority of the proposed algorithm over the conventional ones used for comparison.

  6. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    Science.gov (United States)

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, speckle noise in ultrasound images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared, and the performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, the image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original images without the algorithm. As a result, applying the DWT and filtering techniques alone caused information loss and residual noise, and did not give the most significant noise reduction performance. Conversely, an image fusion method using the SRAD-original input condition preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input condition showed the best denoising performance for the ultrasound images. From this study, the proposed denoising technique was confirmed to have high potential for clinical application.

  7. A Simple Fusion Method for Image Time Series Based on the Estimation of Image Temporal Validity

    Directory of Open Access Journals (Sweden)

    Mar Bisquert

    2015-01-01

    Full Text Available High-spatial-resolution satellites usually have the constraint of a low temporal frequency, which leads to long periods without information in cloudy areas, whereas low-spatial-resolution satellites revisit more frequently. Combining information from high- and low-spatial-resolution satellites is thought to be a key factor for studies that require dense time series of high-resolution images, e.g., crop monitoring. There are several fusion methods in the literature, but they are time-consuming and complicated to implement. Moreover, the local evaluation of the fused images is rarely analyzed. In this paper, we present a simple and fast fusion method based on a weighted average of two input images (H and L), which are weighted by their temporal validity relative to the image to be fused. The method was applied to two years (2009-2010) of Landsat and MODIS (MODerate resolution Imaging Spectroradiometer) images acquired over a cropped area in Brazil. The fusion method was evaluated at global and local scales. The results show that the fused images reproduced reliable crop temporal profiles and correctly delineated the boundaries between two neighboring fields. The greatest advantages of the proposed method are its execution time and ease of use, which allow a fused image to be obtained in less than five minutes.
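
    The core of the method is a two-image weighted average, with weights given by temporal validity. A minimal sketch, where the inverse-distance weighting in days is an assumed form of the validity function:

    ```python
    import numpy as np

    def temporal_fuse(img_h, t_h, img_l, t_l, t_target):
        """Weighted average of the H and L images by temporal validity."""
        w_h = 1.0 / (abs(t_target - t_h) + 1.0)   # +1 day avoids division by zero
        w_l = 1.0 / (abs(t_target - t_l) + 1.0)
        return (w_h * img_h + w_l * img_l) / (w_h + w_l)
    ```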

  8. A novel fusion imaging system for endoscopic ultrasound

    DEFF Research Database (Denmark)

    Gruionu, Lucian Gheorghe; Saftoiu, Adrian; Gruionu, Gabriel

    2016-01-01

    BACKGROUND AND OBJECTIVE: Navigation of a flexible endoscopic ultrasound (EUS) probe inside the gastrointestinal (GI) tract is problematic due to the small window size and complex anatomy. The goal of the present study was to test the feasibility of a novel fusion imaging (FI) system which uses electromagnetic (EM) sensors to co-register the live EUS images with the pre-procedure computed tomography (CT) data, with a novel navigation algorithm and catheter. METHODS: An experienced gastroenterologist and a novice EUS operator tested the FI system on a GI tract bench-top model. Also, the experienced ... time was 24.6 ± 6.6 min, while the time to reach the clinical target was 8.7 ± 4.2 min. CONCLUSIONS: The FI system is feasible for clinical use, and can reduce the learning curve for EUS procedures and improve navigation and targeting in difficult anatomic locations.

  9. Fusion imaging of real-time ultrasonography with CT or MRI for hepatic intervention

    Directory of Open Access Journals (Sweden)

    Min Woo Lee

    2014-10-01

    Full Text Available With the technical development of ultrasonography (US), electromagnetic tracking-based fusion imaging of real-time US and computed tomography/magnetic resonance (CT/MR) images has been used for percutaneous hepatic interventions such as biopsy and radiofrequency ablation (RFA). With the fusion imaging technique, the fused CT or MR images show the same plane and move synchronously while real-time US is performed. With this information, fusion imaging can enhance lesion detectability and reduce the false-positive detection of focal hepatic lesions with poor sonographic conspicuity. Three-dimensional US can also be fused with real-time US for the percutaneous RFA of liver tumors requiring overlapping ablation. When fusion imaging is not sufficient for identifying small focal hepatic lesions, contrast-enhanced US can be added to fusion imaging.

  10. Fusion imaging of real-time ultrasonography with CT or MRI for hepatic intervention

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Min Woo [Dept. of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2014-12-15

    With the technical development of ultrasonography (US), electromagnetic tracking-based fusion imaging of real-time US and computed tomography/magnetic resonance (CT/MR) images has been used for percutaneous hepatic interventions such as biopsy and radiofrequency ablation (RFA). With the fusion imaging technique, the fused CT or MR images show the same plane and move synchronously while real-time US is performed. With this information, fusion imaging can enhance lesion detectability and reduce the false-positive detection of focal hepatic lesions with poor sonographic conspicuity. Three-dimensional US can also be fused with real-time US for the percutaneous RFA of liver tumors requiring overlapping ablation. When fusion imaging is not sufficient for identifying small focal hepatic lesions, contrast-enhanced US can be added to fusion imaging.

  11. Research and Realization of Medical Image Fusion Based on Three-Dimensional Reconstruction

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new medical image fusion technique is presented. The method is based on three-dimensional reconstruction. After reconstruction, the three-dimensional volume data are normalized by three-dimensional coordinate conversion in the same way and intercepted by setting up a cutting plane including the anatomical structure; as a result, two images in complete spatial and geometric registration are obtained, and the images are finally fused. Compared with the traditional two-dimensional fusion technique, the three-dimensional fusion technique not only resolves the distinct problems existing in the two kinds of images, but also avoids the registration error that arises when the two kinds of images have different scan and imaging parameters. The research proves this fusion technique is more exact and requires no registration, so it is better suited to arbitrary medical image fusion with different equipment.

  12. Medical Image Fusion Based on Rolling Guidance Filter and Spiking Cortical Model.

    Science.gov (United States)

    Shuaiqi, Liu; Jie, Zhao; Mingzhu, Shi

    2015-01-01

    Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Although numerous medical image fusion methods have been proposed, most of these approaches are sensitive to noise and usually lead to fused-image distortion and image information loss. Furthermore, they lack universality when dealing with different kinds of medical images. In this paper, we propose a new medical image fusion method to overcome the aforementioned issues of the existing methods. It is achieved by combining the rolling guidance filter (RGF) and the spiking cortical model (SCM). Firstly, the saliency of the medical images is captured by the RGF. Secondly, a self-adaptive threshold for the SCM is obtained from the mean and variance of the source images. Finally, the fused image is obtained from the SCM driven by the RGF coefficients. Experimental results show that the proposed method is superior to other current popular ones in both subjective visual performance and objective criteria.

  13. Medical Image Fusion Based on Rolling Guidance Filter and Spiking Cortical Model

    Directory of Open Access Journals (Sweden)

    Liu Shuaiqi

    2015-01-01

    Full Text Available Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Although numerous medical image fusion methods have been proposed, most of these approaches are sensitive to noise and usually lead to fused-image distortion and image information loss. Furthermore, they lack universality when dealing with different kinds of medical images. In this paper, we propose a new medical image fusion method to overcome the aforementioned issues of the existing methods. It is achieved by combining the rolling guidance filter (RGF) and the spiking cortical model (SCM). Firstly, the saliency of the medical images is captured by the RGF. Secondly, a self-adaptive threshold for the SCM is obtained from the mean and variance of the source images. Finally, the fused image is obtained from the SCM driven by the RGF coefficients. Experimental results show that the proposed method is superior to other current popular ones in both subjective visual performance and objective criteria.

  14. Multi-modal Color Medical Image Fusion Using Quaternion Discrete Fourier Transform

    Science.gov (United States)

    Nawaz, Qamar; Xiao, Bin; Hamid, Isma; Jiao, Du

    2016-12-01

    Multimodal image fusion is the process of combining multiple images, generated by identical or diverse imaging modalities, to obtain precise information about the same body organ. In recent years, various multimodal image fusion algorithms have been proposed to fuse medical images. However, most of them focus on fusing grayscale images. This paper proposes a novel algorithm for the fusion of multimodal color medical images. The proposed algorithm divides the source images into blocks, converts each RGB block into a quaternion representation, and transforms the blocks from the spatial domain to the frequency domain by applying the quaternion discrete Fourier transform. The fused coefficients are obtained by calculating and comparing the contrast values of corresponding coefficients in the transformed blocks. The resultant fused image is reconstructed by merging all the blocks after applying the inverse quaternion discrete Fourier transform to each block. Experimental evaluation demonstrates that the proposed algorithm qualitatively outperforms many existing state-of-the-art multimodal image fusion algorithms.

  15. Oncofuse: a computational framework for the prediction of the oncogenic potential of gene fusions.

    Science.gov (United States)

    Shugay, Mikhail; Ortiz de Mendíbil, Iñigo; Vizmanos, José L; Novo, Francisco J

    2013-10-15

    Gene fusions resulting from chromosomal aberrations are an important cause of cancer. The complexity of genomic changes in certain cancer types has hampered the identification of gene fusions by molecular cytogenetic methods, especially in carcinomas. This is changing with the advent of next-generation sequencing, which is detecting a substantial number of new fusion transcripts in individual cancer genomes. However, this poses the challenge of identifying those fusions with greater oncogenic potential amid a background of 'passenger' fusion sequences. In the present work, we have used some recently identified genomic hallmarks of oncogenic fusion genes to develop a pipeline for the classification of fusion sequences, namely, Oncofuse. The pipeline predicts the oncogenic potential of novel fusion genes, calculating the probability that a fusion sequence behaves as a 'driver' of the oncogenic process based on features present in known oncogenic fusions. Cross-validation and extensive validation tests on independent datasets suggest robust behavior with good precision and recall rates. We believe that Oncofuse could become a useful tool to guide experimental validation studies of novel fusion sequences found during next-generation sequencing analysis of cancer transcriptomes. Oncofuse is a naive Bayes network classifier trained and tested using the Weka machine learning package. The pipeline is executed by running a Java/Groovy script, available for download at www.unav.es/genetica/oncofuse.html.
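
    Oncofuse itself is a Java/Groovy pipeline built on Weka; purely to illustrate the underlying idea of a naive Bayes driver-versus-passenger classifier, here is a small Python stand-in on synthetic features (the feature set and data are hypothetical, not Oncofuse's).

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical feature matrix: one row per fusion sequence, with columns
# such as expression level, retained-domain counts, network degree, etc.
rng = np.random.default_rng(0)
X_known = rng.normal(size=(200, 5))        # features of known fusions
y_known = rng.integers(0, 2, size=200)     # 1 = driver, 0 = passenger

clf = GaussianNB().fit(X_known, y_known)
X_novel = rng.normal(size=(10, 5))         # features of novel fusions
driver_prob = clf.predict_proba(X_novel)[:, 1]
print(driver_prob)                         # P(driver) per novel fusion
```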

  16. Single image dehazing using multiple transmission layer fusion

    Science.gov (United States)

    Yu, Shunyuan; Zhu, Hong; Fu, Zhengfang; Wang, Jing

    2016-03-01

    Methods for single image dehazing have been widely studied based on the atmospheric scattering model and the dark channel prior (DCP); they usually adopt an additional refinement procedure, such as guided filtering, to restrain halo artefacts, but this easily induces undesirable textures in the final transmission map and further leads to an overall contrast reduction and detail blur. In this paper, an efficient approach is proposed to enhance single hazy images without any refined post-processing, based on the strategy of fusing multiple transmission layers. In order to estimate a final transmission map that adapts reasonably to different scenes, multiple transmission layers are derived based on the DCP with different kinds of adaptive local watch windows. To make sure the atmospheric light is estimated in the most haze-opaque region, the corresponding region is searched hierarchically with the quadtree subdivision method in the top part of the minimal channel of the input image. Finally, the hazy image is restored by solving the scattering model. Comparison experiments verify that the proposed method is straightforward and efficient; it reduces halo artefacts significantly, yielding satisfactory contrast and colour for varied hazy images.
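
    The transmission layers above build on the dark channel prior. The sketch below shows the standard fixed-window DCP transmission estimate with a simple brightest-pixels estimate of the atmospheric light, in place of the paper's adaptive windows and quadtree search; the window size and omega are the usual illustrative defaults.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def transmission_map(img, window=15, omega=0.95):
    """Dark-channel-prior transmission estimate for an H x W x 3 image in [0, 1]."""
    # Dark channel: per-pixel channel minimum, then a local minimum filter.
    dark = minimum_filter(img.min(axis=2), size=window)
    # Atmospheric light A: mean colour of the brightest dark-channel pixels.
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission from the scattering model: t = 1 - omega * dark(I / A).
    norm_dark = minimum_filter((img / np.maximum(A, 1e-6)).min(axis=2), size=window)
    return 1.0 - omega * norm_dark
```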

  17. Imaging multiple intermediates of single-virus membrane fusion mediated by distinct fusion proteins.

    Science.gov (United States)

    Joo, Kye-Il; Tai, April; Lee, Chi-Lin; Wong, Clement; Wang, Pin

    2010-09-01

    Membrane fusion plays an essential role in the entry of enveloped viruses into target cells. The merging of viral and target cell membranes is catalyzed by viral fusion proteins and involves multiple sequential steps. However, the fusion mechanisms mediated by different fusion proteins involve multiple transient intermediates that have not been well characterized. Here, we report a synthetic virus platform that allows us to better understand the different fusion mechanisms driven by diverse types of fusion proteins. The platform consists of lentiviral particles coenveloped with a surface antibody, which serves as the binding protein, along with a fusion protein derived from either influenza virus (HAmu) or Sindbis virus (SINmu). Using a single-virus tracking technique, we demonstrated that both HAmu- and SINmu-bearing viruses enter cells through clathrin-dependent endocytosis, but they require different endosomal trafficking routes to initiate viral fusion. Direct observation of single viral fusion events clearly showed that hemifusion mediated by SINmu upon exposure to low pH occurs faster than that mediated by HAmu. Monitoring sequential fusion processes by dual labeling of the outer and inner leaflets of the viral membranes also revealed that the SINmu-mediated hemifusion intermediate is relatively long-lived compared with that mediated by HAmu. Taken together, we have demonstrated that the combination of this versatile viral platform with single-virus tracking techniques can be a powerful tool for revealing molecular details of fusion mediated by various fusion proteins.

  18. Performance Evaluation of Color Models in the Fusion of Functional and Anatomical Images.

    Science.gov (United States)

    Ganasala, Padma; Kumar, Vinod; Prasad, A D

    2016-05-01

    Fusion of a functional image with an anatomical image provides additional diagnostic information. It is widely used in diagnosis, treatment planning, and follow-up in oncology. The functional image is a low-resolution pseudo-color image representing the uptake of a radioactive tracer, which conveys the important metabolic information. In contrast, the anatomical image is a high-resolution grayscale image that gives structural details. The fused image should contain all the anatomical details without any changes to the functional content. This is achieved through fusion in a de-correlated color model, and the choice of color model has a large impact on the fusion outcome. In the present work, the suitability of different color models for functional and anatomical image fusion is studied. After converting the functional image into a de-correlated color model, the achromatic component of the functional image is fused with the anatomical image using the proposed nonsubsampled shearlet transform (NSST) based image fusion algorithm to obtain a new achromatic component with all the anatomical details. This new achromatic component and the original chromatic channels of the functional image are converted to RGB format to obtain the fused functional and anatomical image. Fusion is performed in different color models, and different cases of SPECT-MRI images are used for this color model study. Based on visual and quantitative analysis of the fused images, the best color model for the stated purpose is determined.
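
    To make the achromatic/chromatic split concrete, the sketch below fuses in YIQ (one candidate de-correlated space) and uses a plain weighted blend of the Y channel in place of the paper's NSST-based rule; the blend weight is an assumption for illustration.

```python
import numpy as np

RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def fuse_functional_anatomical(func_rgb, anat_gray, alpha=0.5):
    """Blend the achromatic Y channel of the functional image with the
    anatomical image; the chroma (I, Q) is left untouched so the
    metabolic colour coding is preserved. Inputs are floats in [0, 1].
    """
    yiq = func_rgb @ RGB2YIQ.T
    yiq[..., 0] = alpha * yiq[..., 0] + (1.0 - alpha) * anat_gray
    return np.clip(yiq @ YIQ2RGB.T, 0.0, 1.0)
```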

  19. Enhancing Health Risk Prediction with Deep Learning on Big Data and Revised Fusion Node Paradigm

    Directory of Open Access Journals (Sweden)

    Hongye Zhong

    2017-01-01

    Full Text Available With recent advances in health systems, the amount of health data is expanding rapidly in various formats. This data originates from many new sources, including digital records, mobile devices, and wearable health devices. Big health data offers more opportunities for health data analysis and enhancement of health services via innovative approaches. The objective of this research is to develop a framework to enhance health prediction with the revised fusion node and deep learning paradigms. The fusion node is an information fusion model for constructing prediction systems. Deep learning involves the complex application of machine-learning algorithms, such as Bayesian fusion and neural networks, for data extraction and logical inference. Deep learning, combined with information fusion paradigms, can be utilized to provide more comprehensive and reliable predictions from big health data. Based on the proposed framework, an experimental system is developed as an illustration of the framework implementation.

  20. CT and MR Image Fusion Scheme in Nonsubsampled Contourlet Transform Domain

    OpenAIRE

    Ganasala, Padma; Kumar, Vinod

    2014-01-01

    Fusion of CT and MR images allows simultaneous visualization of details of bony anatomy provided by CT image and details of soft tissue anatomy provided by MR image. This helps the radiologist for the precise diagnosis of disease and for more effective interventional treatment procedures. This paper aims at designing an effective CT and MR image fusion method. In the proposed method, first source images are decomposed by using nonsubsampled contourlet transform (NSCT) which is a shift-invaria...

  1. Optimal multi-focus contourlet-based image fusion algorithm selection

    Science.gov (United States)

    Lutz, Adam; Giansiracusa, Michael; Messer, Neal; Ezekiel, Soundararajan; Blasch, Erik; Alford, Mark

    2016-05-01

    Multi-focus image fusion is becoming increasingly prevalent, as there is a strong initiative to maximize visual information in a single image by fusing the salient data from multiple images for visualization. This allows an analyst to make decisions based on a larger amount of information in a more efficient manner, because multiple images need not be cross-referenced. The contourlet transform has proven to be an effective multi-resolution transform for both denoising and image fusion through its ability to pick up directional and anisotropic properties while being designed to decompose the discrete two-dimensional domain. Many studies have been done to develop and validate algorithms for wavelet image fusion, but the contourlet has not been as thoroughly studied. When contourlet coefficients are substituted for the wavelet coefficients in image fusion algorithms, the result is contourlet image fusion. There is a multitude of methods for fusing these coefficients together, and the results demonstrate that there is an opportunity for fusing coefficients in the contourlet domain for multi-focus images. This paper compares the algorithms using a variety of no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based assessments, to select the best image fusion method.
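
    Two of the simplest no-reference measures used in such comparisons, histogram entropy (information-theory-based) and spatial frequency (image-feature-based), can be computed as below; the paper's full metric suite is broader than these two.

```python
import numpy as np

def entropy(img, bins=256):
    # Shannon entropy of the grey-level histogram; img assumed in [0, 1].
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()

def spatial_frequency(img):
    # Row and column gradient energy; higher means more detail retained.
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

fused = np.random.default_rng(0).random((64, 64))   # stand-in fused image
print(entropy(fused), spatial_frequency(fused))
```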

  2. Investigation on reduced thermal models for simulating infrared images in fusion devices

    Science.gov (United States)

    Gerardin, J.; Aumeunier, M.-H.; Firdaouss, M.; Gardarein, J.-L.; Rigollet, F.

    2016-09-01

    In fusion facilities, the in-vessel wall receives heat flux densities of up to 20 MW/m2. The monitoring of in-vessel components is usually ensured by infrared (IR) thermography, but with all-metallic walls, disturbance phenomena such as reflections may lead to inaccurate temperature estimates, potentially endangering machine safety. A fully predictive photonic simulation is then used to accurately assess the IR measurements. This paper investigates some reduced thermal models (semi-infinite wall, thermal quadrupole) for predicting the surface temperature from the particle loads on components for a given plasma scenario. The results are compared with a reference 3D Finite Element Method (Ansys Mechanical) and used as input for simulating IR images. The performance of the reduced thermal models is analysed by comparing the resulting IR images.
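
    The simplest of the reduced models, the semi-infinite wall, gives the surface temperature under a constant heat flux in closed form: dT = 2*q*sqrt(t/pi)/e, with effusivity e = sqrt(k*rho*c). A quick check with illustrative tungsten-like material values (not machine data):

```python
import math

def surface_temperature(q, t, k, rho, c, T0=300.0):
    """Semi-infinite solid under constant surface heat flux q [W/m^2]
    applied for t seconds; standard 1-D conduction result."""
    effusivity = math.sqrt(k * rho * c)
    return T0 + 2.0 * q * math.sqrt(t / math.pi) / effusivity

# 20 MW/m^2 on a tungsten-like wall for 1 s (k in W/m/K, rho in kg/m^3,
# c in J/kg/K; values are illustrative):
print(surface_temperature(q=20e6, t=1.0, k=170.0, rho=19300.0, c=134.0))
```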

  3. MULTI-SOURCE REMOTE SENSING IMAGE FUSION BASED ON SUPPORT VECTOR MACHINE

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Remote sensing image fusion is an effective way to use the large volume of data from multi-source images. This paper introduces a new method of remote sensing image fusion based on the support vector machine (SVM), using high spatial resolution SPIN-2 data and multi-spectral SPOT-4 remote sensing data. Firstly, the new method is established by building a model of remote sensing image fusion based on the SVM. Then, image classification fusion is tested using the SPIN-2 and SPOT-4 data. Finally, an evaluation of the fusion result is made in two ways. 1) In the subjective assessment, the spatial resolution of the fused image is improved compared to the SPOT-4 image, and the texture of the fused image is clearly distinctive. 2) In the quantitative analysis, the effect of classification fusion is better. As a whole, the results show that the accuracy of image fusion based on the SVM is high, and the SVM algorithm can be recommended for application in remote sensing image fusion processes.
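
    A minimal sketch of SVM classification fusion, assuming a per-pixel feature vector that stacks the panchromatic intensity with the multispectral band values; the feature layout and data here are synthetic stand-ins, not the SPIN-2/SPOT-4 data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training pixels: [pan, band1, band2, band3, band4] rows.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = rng.integers(0, 4, size=500)     # land-cover class labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
model.fit(X, y)
fused_classes = model.predict(rng.normal(size=(8, 5)))
print(fused_classes)                 # classification-fusion output per pixel
```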

  4. MULTI—SOURCE REMOTE SENSING IMAGE FUSION BASED ON SUPPORT VECTOR MACHINE

    Institute of Scientific and Technical Information of China (English)

    ZHAO Shu-he; FENG Xue-zhi; et al.

    2002-01-01

    Remote sensing image fusion is an effective way to use the large volume of data from multi-source images. This paper introduces a new method of remote sensing image fusion based on the support vector machine (SVM), using high spatial resolution SPIN-2 data and multi-spectral SPOT-4 remote sensing data. Firstly, the new method is established by building a model of remote sensing image fusion based on the SVM. Then, image classification fusion is tested using the SPIN-2 and SPOT-4 data. Finally, an evaluation of the fusion result is made in two ways. 1) In the subjective assessment, the spatial resolution of the fused image is improved compared to the SPOT-4 image, and the texture of the fused image is clearly distinctive. 2) In the quantitative analysis, the effect of classification fusion is better. As a whole, the results show that the accuracy of image fusion based on the SVM is high, and the SVM algorithm can be recommended for application in remote sensing image fusion processes.

  5. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators

    Science.gov (United States)

    Bai, Xiangzhi

    2015-01-01

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details, and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening- and closing-based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weighting strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229

  6. An Image Fusion Method Based on NSCT and Dual-channel PCNN Model

    Directory of Open Access Journals (Sweden)

    Nianyi Wang

    2014-02-01

    Full Text Available NSCT is a useful multiscale geometric analysis tool that takes full advantage of the geometric regularity of intrinsic image structures. The dual-channel PCNN is a simplified PCNN model that can process multiple images with a single PCNN; this saves time in the image fusion process and cuts down computational complexity. In this paper, we present a new image fusion scheme based on NSCT and the dual-channel PCNN. Firstly, the fusion rules for the subband coefficients of NSCT are discussed. For the fusion rule of the low-frequency coefficients, the maximum selection rule (MSR) is used. Then, for the fusion rule of the high-frequency coefficients, the spatial frequency (SF) of each high-frequency subband is taken as the gradient feature of the images to motivate the dual-channel PCNN networks and generate neuron pulses. Finally, the fused image is obtained by applying the inverse NSCT transform. To show that the proposed method can deal with image fusion, we used two pairs of images as our experimental subjects. The proposed method is compared with five other methods, and the performance of the various methods is mathematically evaluated using four image quality evaluation criteria. Experimental comparisons conducted on the different fusion methods prove the effectiveness of the proposed fusion method.

  7. The impact of body image-related cognitive fusion on eating psychopathology.

    Science.gov (United States)

    Trindade, Inês A; Ferreira, Cláudia

    2014-01-01

    Recent research has shown that cognitive fusion underlies psychological inflexibility and, in consequence, various forms of psychopathology. However, the role of cognitive fusion specifically related to body image in eating psychopathology remained to be examined. The current study explores the impact of cognitive fusion concerning body image on the relation between acknowledged risk factors and eating psychopathology in a sample of 342 female students. The impact of body dissatisfaction and social comparison through physical appearance on eating psychopathology was partially mediated by body image-related cognitive fusion. The results highlight the importance of cognitive defusion in the treatment of eating disorders.

  8. Medical Image Fusion Based on Feature Extraction and Sparse Representation.

    Science.gov (United States)

    Fei, Yin; Wei, Gao; Zongxi, Song

    2017-01-01

    As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation does not take intrinsic structure or time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and a decision map is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM), an energy information map (EM), and a combined structure and energy map (SEM), so that the results preserve more energy and edge information. The SM contains the local structure feature captured by the Laplacian of a Gaussian (LOG), and the EM contains the energy and energy-distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. Experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods.
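
    As an illustration of the decision-map idea, the sketch below computes a LoG-based structure map and a local mean-square-deviation energy map and combines them into a binary choice between two sources; the combination rule and window sizes are assumptions, not the paper's exact SEM definition.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def structure_energy_maps(img, sigma=1.0, win=7):
    """Per-pixel features behind the decision maps: SM ~ Laplacian-of-
    Gaussian response (local structure), EM ~ local mean square
    deviation (energy). Window sizes are illustrative."""
    sm = np.abs(gaussian_laplace(img, sigma))
    local_mean = uniform_filter(img, win)
    em = uniform_filter((img - local_mean) ** 2, win)
    return sm, em

def decision_map(img_a, img_b):
    # Choose source A wherever its combined structure+energy score wins.
    sa, ea = structure_energy_maps(img_a)
    sb, eb = structure_energy_maps(img_b)
    return (sa + ea) > (sb + eb)
```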

  9. Advances and challenges in deformable image registration: From image fusion to complex motion modelling.

    Science.gov (United States)

    Schnabel, Julia A; Heinrich, Mattias P; Papież, Bartłomiej W; Brady, Sir J Michael

    2016-10-01

    Over the past 20 years, the field of medical image registration has significantly advanced from multi-modal image fusion to highly non-linear, deformable image registration for a wide range of medical applications and imaging modalities, involving the compensation and analysis of physiological organ motion or of tissue changes due to growth or disease patterns. While the original focus of image registration has predominantly been on correcting for rigid-body motion of brain image volumes acquired at different scanning sessions, often with different modalities, the advent of dedicated longitudinal and cross-sectional brain studies soon necessitated the development of more sophisticated methods that are able to detect and measure local structural or functional changes, or group differences. Moving outside of the brain, cine imaging and dynamic imaging required the development of deformable image registration to directly measure or compensate for local tissue motion. Since then, deformable image registration has become a general enabling technology. In this work we will present our own contributions to the state-of-the-art in deformable multi-modal fusion and complex motion modelling, and then discuss remaining challenges and provide future perspectives to the field.

  10. Geophysical data fusion for subsurface imaging. Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Hoekstra, P.; Vandergraft, J.; Blohm, M.; Porter, D.

    1993-08-01

    A geophysical data fusion methodology is under development to combine data from complementary geophysical sensors and incorporate geophysical understanding to obtain three dimensional images of the subsurface. The research reported here is the first phase of a three phase project. The project focuses on the characterization of thin clay lenses (aquitards) in a highly stratified sand and clay coastal geology to depths of up to 300 feet. The sensor suite used in this work includes time-domain electromagnetic induction (TDEM) and near surface seismic techniques. During this first phase of the project, enhancements to the acquisition and processing of TDEM data were studied, by use of simulated data, to assess improvements for the detection of thin clay layers. Secondly, studies were made of the use of compressional wave and shear wave seismic reflection data by using state-of-the-art high frequency vibrator technology. Finally, a newly developed processing technique, called "data fusion," was implemented to process the geophysical data, and to incorporate a mathematical model of the subsurface strata. Examples are given of the results when applied to real seismic data collected at Hanford, WA, and for simulated data based on the geology of the Savannah River Site.

  11. Lipid tail protrusion in simulations predicts fusogenic activity of influenza fusion peptide mutants and conformational models.

    Directory of Open Access Journals (Sweden)

    Per Larsson

    Full Text Available Fusion peptides from influenza hemagglutinin act on membranes to promote membrane fusion, but the mechanism by which they do so remains unknown. Recent theoretical work has suggested that contact of protruding lipid tails may be an important feature of the transition state for membrane fusion. If this is so, then influenza fusion peptides would be expected to promote tail protrusion in proportion to the ability of the corresponding full-length hemagglutinin to drive lipid mixing in fusion assays. We have performed molecular dynamics simulations of influenza fusion peptides in lipid bilayers, comparing the X-31 influenza strain against a series of N-terminal mutants. As hypothesized, the probability of lipid tail protrusion correlates well with the lipid mixing rate induced by each mutant. This supports the conclusion that tail protrusion is important to the transition state for fusion. Furthermore, it suggests that tail protrusion can be used to examine how fusion peptides might interact with membranes to promote fusion. Previous models for native influenza fusion peptide structure in membranes include a kinked helix, a straight helix, and a helical hairpin. Our simulations visit each of these conformations. Thus, the free energy differences between each are likely low enough that specifics of the membrane environment and peptide construct may be sufficient to modulate the equilibrium between them. However, the kinked helix promotes lipid tail protrusion in our simulations much more strongly than the other two structures. We therefore predict that the kinked helix is the most fusogenic of these three conformations.

  12. Effective Multifocus Image Fusion Based on HVS and BP Neural Network

    Directory of Open Access Journals (Sweden)

    Yong Yang

    2014-01-01

    Full Text Available The aim of multifocus image fusion is to fuse images taken of the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back propagation (BP) neural network is presented. First, three features that reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct an initial fused image. Third, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by applying a fusion rule to those focused regions. Experimental results show that the proposed method provides better performance and outperforms several existing popular fusion methods in terms of both objective and subjective evaluations.
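
    A small stand-in for the clarity classifier: three illustrative focus measures per neighbourhood feed a multilayer perceptron trained with back-propagation (scikit-learn's MLPClassifier plays the role of the BP network; the feature set and data are synthetic, not the paper's).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def clarity_features(patch):
    """Three focus measures for a small neighbourhood: variance,
    spatial frequency, and mean gradient magnitude (illustrative set)."""
    var = patch.var()
    sf = np.sqrt(np.mean(np.diff(patch, axis=1) ** 2)
                 + np.mean(np.diff(patch, axis=0) ** 2))
    gy, gx = np.gradient(patch)
    grad = np.hypot(gx, gy).mean()
    return [var, sf, grad]

# Train on patches whose sharper source is known (synthetic stand-in).
rng = np.random.default_rng(2)
patches = rng.random((300, 9, 9))
X = np.array([clarity_features(p) for p in patches])
y = rng.integers(0, 2, size=300)        # 1 = source A is sharper here

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print(net.predict(X[:5]))
```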

  13. Weighted feature fusion for content-based image retrieval

    Science.gov (United States)

    Soysal, Omurhan A.; Sumer, Emre

    2016-07-01

    Feature descriptors such as SIFT (Scale Invariant Feature Transform), SURF (Speeded-Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF) are among the most commonly used solutions to content-based image retrieval problems. In this paper, a novel approach called "Weighted Feature Fusion" is proposed as a generic solution, instead of applying problem-specific descriptors alone. Experiments were performed on two standard Inria data sets in order to improve the precision of the retrieval results. It was found that the proposed approach yielded 10-30% more accurate results than ORB alone. Besides, it yielded 9-22% and 12-29% fewer false positives compared to SIFT alone and SURF alone, respectively.

  14. MULTI-SPECTRAL AND HYPERSPECTRAL IMAGE FUSION USING 3-D WAVELET TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    Zhang Yifan; He Mingyi

    2007-01-01

    Image fusion is performed between one band of a multi-spectral image and two bands of a hyperspectral image to produce a fused image with the same spatial resolution as the source multi-spectral image and the same spectral resolution as the source hyperspectral image. According to the characteristics and 3-Dimensional (3-D) feature analysis of the multi-spectral and hyperspectral image data volumes, a new fusion approach using a 3-D wavelet based method is proposed. This approach is composed of four major procedures: spatial and spectral resampling, 3-D wavelet transform, wavelet coefficient integration, and 3-D inverse wavelet transform. In particular, a novel method, the Ratio Image Based Spectral Resampling (RIBSR) method, is proposed to accomplish data resampling in the spectral domain by utilizing the property of the ratio image, and a new fusion rule, the Average and Substitution (A&S) rule, is employed to accomplish wavelet coefficient integration. Experimental results illustrate that the fusion approach using the 3-D wavelet transform can utilize both the spatial and spectral characteristics of the source images more adequately and produce a fused image with higher quality and fewer artifacts than a fusion approach using the 2-D wavelet transform. It is also revealed that the RIBSR method is capable of interpolating the missing data more effectively and correctly, and that the A&S rule can integrate the coefficients of the source images in the 3-D wavelet domain to preserve both the spatial and spectral features of the source images more properly.

  15. Image Sequence Fusion and Denoising Based on 3D Shearlet Transform

    Directory of Open Access Journals (Sweden)

    Liang Xu

    2014-01-01

    Full Text Available We propose a novel algorithm for simultaneous image sequence fusion and denoising in the 3D shearlet transform domain. In general, most existing image fusion methods only consider combining the important information of the source images and do not deal with artifacts. If the source images contain noise, the noise may be transferred into the fused image together with the useful pixels. In the 3D shearlet transform domain, we propose that a recursive filter is first applied to the high-pass subbands to obtain denoised high-pass coefficients. The high-pass subbands are then fused using a maximum-selection rule based on a 3D pulse coupled neural network (PCNN), and the low-pass subband is fused using a weighted-sum rule. Experimental results demonstrate that the proposed algorithm yields encouraging results.

  16. Multi-focus image fusion based on spatial frequency and morphological operators

    Institute of Scientific and Technical Information of China (English)

    Bin Yang; Shutao Li

    2007-01-01

    A new multi-focus image fusion method using spatial frequency (SF) and morphological operators is proposed. Firstly, the focus regions are detected using SF criteria. Then the morphological operators are used to smooth the regions. Finally the fused image is constructed by cutting and pasting the focused regions of the source images. Experimental results show that the proposed algorithm performs well for multi-focus image fusion.
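
    The three steps above lend themselves to a compact sketch: local spatial frequency as the focus measure, morphological opening and closing to clean the decision map, and cut-and-paste composition. Window and structuring-element sizes below are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter, binary_opening, binary_closing

def local_spatial_frequency(img, size=9):
    # Local mean-square row/column gradients, averaged over a window.
    rf = uniform_filter(np.gradient(img, axis=1) ** 2, size)
    cf = uniform_filter(np.gradient(img, axis=0) ** 2, size)
    return np.sqrt(rf + cf)

def sf_morph_fusion(img_a, img_b):
    # 1) Detect focused regions by comparing local spatial frequency.
    mask = local_spatial_frequency(img_a) > local_spatial_frequency(img_b)
    # 2) Smooth the decision map: opening removes isolated misclassified
    #    pixels, closing fills small holes.
    mask = binary_closing(binary_opening(mask, np.ones((5, 5))), np.ones((5, 5)))
    # 3) Cut and paste the focused regions from the source images.
    return np.where(mask, img_a, img_b)
```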

  17. Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain

    OpenAIRE

    Yong Yang; Song Tong; Shuying Huang; Pan Lin

    2014-01-01

    Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT followed by fusing low- and high-frequency components. ...

  18. Multilevel depth and image fusion for human activity detection.

    Science.gov (United States)

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods only rely on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. In the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. In the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for an activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods.

  19. THERMAL AND VISIBLE SATELLITE IMAGE FUSION USING WAVELET IN REMOTE SENSING AND SATELLITE IMAGE PROCESSING

    Directory of Open Access Journals (Sweden)

    A. H. Ahrari

    2017-09-01

    Full Text Available The multimodal remote sensing approach is based on merging different data from different portions of the electromagnetic spectrum, which improves the accuracy of satellite image processing and interpretation. Visible and thermal infrared bands independently contain valuable spatial and spectral information. Visible bands provide rich spatial information, while thermal bands provide radiometric and spectral information that differs from the visible bands. However, low spatial resolution is the most important limitation of thermal infrared bands. Using satellite image fusion, it is possible to merge them into a single thermal image that contains high spectral and spatial information at the same time. The aim of this study is a quantitative and qualitative performance assessment of thermal and visible image fusion with the wavelet transform and different filters. In this research, the wavelet algorithm (Haar) and different decomposition filters (mean, linear, ma, min, and rand) were applied to the thermal and panchromatic bands of the Landsat 8 satellite as a shortwave and longwave fusion method. Finally, quality assessment was carried out with quantitative and qualitative approaches. Quantitative parameters such as entropy, standard deviation, cross correlation, Q factor, and mutual information were used. For thermal and visible image fusion accuracy assessment, all parameters (quantitative and qualitative) must be analysed with respect to each other. Among all the relevant statistical factors, correlation gives the most meaningful result and the closest agreement with the qualitative assessment. The results showed that the mean and linear filters produce better fused images than the other filters in the Haar algorithm. The linear and mean filters have the same performance, and there is no notable difference between their qualitative and quantitative results.

  20. Image Fusion of CT/MRI using DWT , PCA Methods and Analog DSP Processor

    Directory of Open Access Journals (Sweden)

    Sonali Mane

    2014-02-01

    Full Text Available Medical image fusion is a technique in which useful information from two or more recorded medical images is integrated into a new image to offer as much detail as possible for diagnosis. Images of two different modalities, computed tomography (CT) and magnetic resonance imaging (MRI), are fused by integrating the DWT and PCA methods. The decomposition coefficients of the discrete wavelet transform (DWT) are processed with principal component analysis (PCA) to obtain the fused image information. The decomposed coefficients are selected by a fusion rule, and the inverse DWT is applied to obtain the fused image of the two modalities, CT and MRI. RMSE and PSNR analysis shows a clear improvement in the results. The proposed fusion enhancement technique is to be implemented on a DSP processor based kit to demonstrate hardware support.
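
    The classic PCA fusion rule referred to above derives blend weights from the principal eigenvector of the two images' covariance. The sketch below applies that rule directly in the spatial domain for brevity; the record applies the same rule to DWT coefficients.

```python
import numpy as np

def pca_weights(img_a, img_b):
    """Principal eigenvector of the 2 x 2 covariance of the two
    (flattened) images, normalised to sum to one, gives the weights."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)   # eigh: eigenvalues in ascending order
    v = np.abs(vecs[:, -1])            # principal component
    return v / v.sum()

def pca_fusion(img_a, img_b):
    w_a, w_b = pca_weights(img_a, img_b)
    return w_a * img_a + w_b * img_b
```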

  1. Computed tomography and magnetic resonance fusion imaging in cholesteatoma preoperative assessment.

    Science.gov (United States)

    Campos, Agustín; Mata, Federico; Reboll, Rosa; Peris, María Luisa; Basterra, Jorge

    2017-03-01

    The purpose of this study is to describe a method for developing fusion imaging for the preoperative evaluation of cholesteatoma. In 33 patients diagnosed with cholesteatoma, a high-resolution temporal bone computed tomography (CT) scan without intravenous contrast and propeller diffusion-weighted magnetic resonance imaging (MRI) were performed. Both studies were then sent to the BrainLAB workstation, where the images were fused to obtain a morphological and color map. Intraoperative findings coincided with the fused CT-MRI imaging in all but two patients: one false positive and one false negative case were observed. CT and diffusion-weighted MRI are complementary techniques that should be employed to assess a cholesteatoma prior to surgery in many cases. Hence, to combine the advantages of each technique, we developed a fusion image technique similar to those routinely employed for radiotherapy planning and positron emission tomography-CT imaging. Fusion images can prove useful in selected cases.

  2. A new method of medical image fusion based on nonsubsampled contourlet transform

    Science.gov (United States)

    Xu, Xuebin; Zhang, Xinman; Zhang, Deyun

    2008-12-01

    To improve on standard medical image fusion algorithms and avoid the loss of detailed information during fusion, a multiscale medical image fusion method based on the nonsubsampled contourlet transform (NSCT) is proposed in this paper. First, the source images (MRI and CT images) are decomposed using the nonsubsampled contourlet transform. Then, the detail contourlet coefficients are fused at each corresponding level with a vision-feature fusion operator. Finally, the fused image is obtained by taking the inverse nonsubsampled contourlet transform. The experimental results show that the effect of the nonsubsampled contourlet-based method is clearly improved, and the proposed method can effectively preserve the detailed information of the source images.

  3. A homological multi-information fusion method for processing gastric tumor tissue pathological images

    Institute of Scientific and Technical Information of China (English)

    LI Tian-gang; WANG Su-pin; QIN Chen

    2005-01-01

    A homological multi-information image fusion method was introduced for recognition of gastric tumor pathological tissue images. The main purpose is to use fewer procedures to provide more information, with result images that are easier to understand than those of other methods. First, a multi-scale wavelet transform was used to extract edge features, and then watershed morphology was used to form multi-threshold grayscale contours. The research emphasized homological tissue image fusion based on an extended Bayesian algorithm; the fusion result images of a linear weighted algorithm were used for comparison against those of the extended Bayesian algorithm. The final image evaluation was made with information entropy, information correlativity, and statistical methods. The results indicate that this method has advantages for clinical application.

  4. Fusion

    Science.gov (United States)

    Herman, Robin

    1990-10-01

    The book abounds with fascinating anecdotes about fusion's rocky path: the spurious claim by Argentine dictator Juan Peron in 1951 that his country had built a working fusion reactor, the rush by the United States to drop secrecy and publicize its fusion work as a propaganda offensive after the Russian success with Sputnik; the fortune Penthouse magazine publisher Bob Guccione sank into an unconventional fusion device, the skepticism that met an assertion by two University of Utah chemists in 1989 that they had created "cold fusion" in a bottle. Aimed at a general audience, the book describes the scientific basis of controlled fusion--the fusing of atomic nuclei, under conditions hotter than the sun, to release energy. Using personal recollections of scientists involved, it traces the history of this little-known international race that began during the Cold War in secret laboratories in the United States, Great Britain and the Soviet Union, and evolved into an astonishingly open collaboration between East and West.

  5. Hyoid bone fusion and bone density across the lifespan: prediction of age and sex.

    Science.gov (United States)

    Fisher, Ellie; Austin, Diane; Werner, Helen M; Chuang, Ying Ji; Bersu, Edward; Vorperian, Houri K

    2016-06-01

    The hyoid bone supports the important functions of swallowing and speech. At birth, the hyoid bone consists of a central body and pairs of right and left lesser and greater cornua. Fusion of the greater cornua with the body normally occurs in adulthood, but may not occur at all in some individuals. The aim of this study was to quantify hyoid bone fusion across the lifespan, as well as assess developmental changes in hyoid bone density. Using a computed tomography imaging studies database, 136 hyoid bones (66 male, 70 female, ages 1-to-94) were examined. Fusion was ranked on each side and hyoid bones were classified into one of four fusion categories based on their bilateral ranks: bilateral distant non-fusion, bilateral non-fusion, partial or unilateral fusion, and bilateral fusion. Three-dimensional hyoid bone models were created and used to calculate bone density in Hounsfield units. Results showed a wide range of variability in the timing and degree of hyoid bone fusion, with a trend for bilateral non-fusion to decrease after age 20. Hyoid bone density was significantly lower in adult female scans than adult male scans and decreased with age in adulthood. In sex and age estimation models, bone density was a significant predictor of sex. Both fusion category and bone density were significant predictors of age group for adult females. This study provides a developmental baseline for understanding hyoid bone fusion and bone density in typically developing individuals. Findings have implications for the disciplines of forensics, anatomy, speech pathology, and anthropology.

  6. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Directory of Open Access Journals (Sweden)

    Kan Ren

    2014-01-01

    Full Text Available We propose a novel super-resolution multisource image fusion scheme based on compressive sensing and dictionary learning theory. Under the sparsity prior on image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from compressive measurements. A set of multiscale dictionaries is then learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear weighted fusion rule is proposed to obtain the high-resolution image. Experiments were conducted to investigate the performance of the proposed method, and the results prove its superiority over its counterparts.

  7. An Image Fusion Method Based on NSCT and Dual-channel PCNN Model

    OpenAIRE

    Nianyi Wang; Yide Ma; Weilan Wang; Shijie Zhou

    2014-01-01

    NSCT is a useful multiscale geometric analysis tool that takes full advantage of the geometric regularity of intrinsic image structures. The dual-channel PCNN is a simplified PCNN model that can process multiple images with a single PCNN; this saves time in the image fusion process and cuts down computational complexity. In this paper, we present a new image fusion scheme based on NSCT and the dual-channel PCNN. Firstly, the fusion rules of subband coefficients of NSCT are discussed. For t...

  8. Multiscale image fusion using the undecimated wavelet transform with spectral factorization and nonorthogonal filter banks.

    Science.gov (United States)

    Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B

    2013-03-01

    Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size leads to the minimization of the unwanted spreading of coefficient values around overlapping image singularities. This usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we will show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework, which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
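
    For contrast with the decimated case, a baseline undecimated fusion can be written with PyWavelets' stationary wavelet transform, fusing detail bands by maximum absolute value and approximations by averaging. This is a generic shift-invariant baseline, not the paper's spectrally factorized, nonorthogonal filter-bank scheme; the wavelet and level are illustrative.

```python
import numpy as np
import pywt

def uwt_fusion(img_a, img_b, wavelet="db2", level=2):
    """Undecimated (stationary) wavelet fusion sketch. The SWT's shift
    invariance avoids artefacts around misaligned features. Image sides
    must be divisible by 2**level."""
    ca = pywt.swt2(img_a, wavelet, level=level)
    cb = pywt.swt2(img_b, wavelet, level=level)
    fused = []
    for (aA, aD), (bA, bD) in zip(ca, cb):
        fA = 0.5 * (aA + bA)                      # average approximations
        fD = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                   for d1, d2 in zip(aD, bD))     # max-abs details
        fused.append((fA, fD))
    return pywt.iswt2(fused, wavelet)

rng = np.random.default_rng(0)
a, b = rng.random((2, 64, 64))
print(uwt_fusion(a, b).shape)
```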

  9. Sensor data fusion to predict multiple soil properties

    NARCIS (Netherlands)

    Mahmood, H.S.; Hoogmoed, W.B.; Henten, van E.J.

    2012-01-01

    The accuracy of a single sensor is often low because all proximal soil sensors respond to more than one soil property of interest. Sensor data fusion can potentially overcome this inability of a single sensor and can best extract useful and complementary information from multiple sensors or sources.

  10. Medical Image Fusion Algorithm Based on Nonlinear Approximation of Contourlet Transform and Regional Features

    Directory of Open Access Journals (Sweden)

    Hui Huang

    2017-01-01

    Full Text Available Considering the pros and cons of the contourlet transform and the characteristics of multimodal medical imaging, we propose a novel image fusion algorithm that combines the nonlinear approximation of the contourlet transform with regional image features. The most important coefficient bands of the contourlet sparse matrix are retained by nonlinear approximation. Low-frequency and high-frequency regional features are also elaborated to fuse the medical images. The results strongly suggest that the proposed algorithm can improve the visual effect and quality of medical image fusion, as well as image denoising and enhancement.

  11. Multi-sensor radiation detection, imaging, and fusion

    Energy Technology Data Exchange (ETDEWEB)

    Vetter, Kai [Department of Nuclear Engineering, University of California, Berkeley, CA 94720 (United States); Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-01-01

    Glenn Knoll was one of the leaders in the field of radiation detection and measurements and shaped this field through his outstanding scientific and technical contributions, as a teacher, his personality, and his textbook. His Radiation Detection and Measurement book guided me in my studies and is now the textbook in my classes in the Department of Nuclear Engineering at UC Berkeley. In the spirit of Glenn, I will provide an overview of our activities at the Berkeley Applied Nuclear Physics program reflecting some of the breadth of radiation detection technologies and their applications ranging from fundamental studies in physics to biomedical imaging and to nuclear security. I will conclude with a discussion of our Berkeley Radwatch and Resilient Communities activities as a result of the events at the Dai-ichi nuclear power plant in Fukushima, Japan more than 4 years ago. - Highlights: • Electron-tracking based gamma-ray momentum reconstruction. • 3D volumetric and 3D scene fusion gamma-ray imaging. • Nuclear Street View integrates and associates nuclear radiation features with specific objects in the environment. • Institute for Resilient Communities combines science, education, and communities to minimize impact of disastrous events.

  12. A review of multivariate methods in brain imaging data fusion

    Science.gov (United States)

    Sui, Jing; Adali, Tülay; Li, Yi-Ou; Yang, Honghui; Calhoun, Vince D.

    2010-03-01

    On joint analysis of multi-task brain imaging data sets, a variety of multivariate methods have shown their strengths and been applied to achieve different purposes based on their respective assumptions. In this paper, we provide a comprehensive review on optimization assumptions of six data fusion models, including 1) four blind methods: joint independent component analysis (jICA), multimodal canonical correlation analysis (mCCA), CCA on blind source separation (sCCA) and partial least squares (PLS); 2) two semi-blind methods: parallel ICA and coefficient-constrained ICA (CC-ICA). We also propose a novel model for joint blind source separation (BSS) of two datasets using a combination of sCCA and jICA, i.e., 'CCA+ICA', which, compared with other joint BSS methods, can achieve higher decomposition accuracy as well as the correct automatic source link. Applications of the proposed model to real multitask fMRI data are compared to joint ICA and mCCA; CCA+ICA further shows its advantages in capturing both shared and distinct information, differentiating groups, and interpreting duration of illness in schizophrenia patients, hence promising applicability to a wide variety of medical imaging problems.
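
    A rough stand-in for the two-stage "CCA+ICA" idea using scikit-learn: canonical variates first link the two datasets, and an ICA rotation of the concatenated variates then pushes them toward independence. The dimensions and data are synthetic, and this simplified pipeline is an illustration, not the authors' exact formulation.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import FastICA

# Two synthetic "modality" matrices: subjects x voxels (stand-in data).
rng = np.random.default_rng(3)
X = rng.normal(size=(40, 120))
Y = rng.normal(size=(40, 120))

# Stage 1 (CCA): find linked canonical variates across the datasets.
cca = CCA(n_components=5)
Xc, Yc = cca.fit_transform(X, Y)

# Stage 2 (ICA): rotate the concatenated variates toward independence.
ica = FastICA(n_components=5, random_state=0)
sources = ica.fit_transform(np.hstack([Xc, Yc]))
print(sources.shape)   # joint sources linking the two modalities
```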

  13. Investigation of Bias in Continuous Medical Image Label Fusion.

    Science.gov (United States)

    Xing, Fangxu; Prince, Jerry L; Landman, Bennett A

    2016-01-01

    Image labeling is essential for analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms, both of which suffer from errors. The Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm for both discrete-valued and continuous-valued labels has been proposed to find the consensus fusion while simultaneously estimating rater performance. In this paper, we first show that the previously reported continuous STAPLE in which bias and variance are used to represent rater performance yields a maximum likelihood solution in which bias is indeterminate. We then analyze the major cause of the deficiency and evaluate two classes of auxiliary bias estimation processes, one that estimates the bias as part of the algorithm initialization and the other that uses a maximum a posteriori criterion with a priori probabilities on the rater bias. We compare the efficacy of six methods, three variants from each class, in simulations and through empirical human rater experiments. We comment on their properties, identify deficient methods, and propose effective methods as solution.
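
    The bias indeterminacy can be seen directly in a toy Gaussian rater model: adding a constant to the truth and subtracting it from every rater bias leaves the likelihood unchanged. The sketch below runs a simple iterative consensus and resolves the ambiguity by re-centring the biases to zero mean, one simple instance of the classes of auxiliary constraints the paper evaluates; the model and update rules are an illustration, not the paper's exact algorithm.

```python
import numpy as np

def fuse_continuous_labels(D, iters=20):
    """Iterative consensus for continuous labels; D is raters x voxels.

    Rater model: D_j = T + b_j + eps, eps ~ N(0, s_j^2). Alternates a
    precision-weighted consensus update with bias/variance updates.
    """
    J, N = D.shape
    b = np.zeros(J)
    s2 = np.ones(J)
    for _ in range(iters):
        w = 1.0 / s2
        T = (w[:, None] * (D - b[:, None])).sum(axis=0) / w.sum()
        b = (D - T).mean(axis=1)
        b -= b.mean()                    # resolve the bias ambiguity
        s2 = ((D - T - b[:, None]) ** 2).mean(axis=1) + 1e-12
    return T, b, np.sqrt(s2)

rng = np.random.default_rng(4)
truth = rng.normal(size=1000)
bias = np.array([0.5, -0.3, 0.1])[:, None]
noise = rng.normal(scale=np.array([0.1, 0.3, 0.2])[:, None], size=(3, 1000))
T_hat, b_hat, s_hat = fuse_continuous_labels(truth + bias + noise)
print(b_hat, s_hat)
```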

  14. Technological value of SPECT/CT fusion imaging for the diagnosis of lower gastrointestinal bleeding.

    Science.gov (United States)

    Wang, Z G; Zhang, G X; Hao, S H; Zhang, W W; Zhang, T; Zhang, Z P; Wu, R X

    2015-11-24

    The aim of this study was to assess the clinical value of diagnosing and locating lower gastrointestinal (GI) bleeding using single photon emission computed tomography (SPECT)/computed tomography (CT) fusion imaging with 99mTc-labeled red blood cells ((99m)Tc-RBC). Fifty-six patients with suspected lower GI bleeding received a preoperative intravenous injection of (99m)Tc-RBC, and each underwent planar and SPECT/CT imaging of the lower abdominal region. The location and path of lower GI bleeding were diagnosed by comparative analysis of planar and SPECT/CT fusion imaging. Among the 56 patients selected, abnormalities in concentrated radionuclide activity were seen with planar imaging in 50 patients and with SPECT/CT fusion imaging in 52 patients. Moreover, bleeding points that were coincident with the surgical results were evident with planar imaging in 31 patients and with SPECT/CT fusion imaging in 48 patients. The diagnostic sensitivities of planar imaging and SPECT/CT fusion imaging were 89.3% (50/56) and 92.9% (52/56), respectively, and the difference was not statistically significant (χ2 = 0.11, P > 0.05). The corresponding positional accuracy values were 73.8% (31/42) and 92.3% (48/52), and the difference was statistically significant (χ2 = 4.63, P < 0.05). SPECT/CT fusion imaging is an effective, simple, and accurate method that can be used for diagnosing and locating lower GI bleeding.
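
    The reported test statistics are consistent with a Yates-corrected chi-square test on the underlying 2×2 tables, which can be checked with SciPy (scipy.stats.chi2_contingency applies the continuity correction to 2×2 tables by default):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Sensitivity: 50/56 (planar) vs 52/56 (SPECT/CT fusion).
sens = np.array([[50, 6], [52, 4]])
# Positional accuracy: 31/42 (planar) vs 48/52 (SPECT/CT fusion).
acc = np.array([[31, 11], [48, 4]])

for name, table in [("sensitivity", sens), ("positional accuracy", acc)]:
    chi2, p, _, _ = chi2_contingency(table)  # Yates correction by default
    print(f"{name}: chi2 = {chi2:.2f}, p = {p:.3f}")
# Reproduces the reported chi2 = 0.11 (P > 0.05) and 4.63 (P < 0.05).
```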

  15. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer's disease

    Science.gov (United States)

    Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong

    2016-07-01

    Computer based diagnosis of Alzheimer's disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  16. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer’s disease

    Energy Technology Data Exchange (ETDEWEB)

    Bhateja, Vikrant, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn; Moin, Aisha; Srivastava, Anuja [Shri Ramswaroop Memorial Group of Professional Colleges (SRMGPC), Lucknow, Uttar Pradesh 226028 (India); Bao, Le Nguyen [Duytan University, Danang 550000 (Viet Nam); Lay-Ekuakille, Aimé [Department of Innovation Engineering, University of Salento, Lecce 73100 (Italy); Le, Dac-Nhuong, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn [Duytan University, Danang 550000 (Viet Nam); Haiphong University, Haiphong 180000 (Viet Nam)

    2016-07-15

    Computer based diagnosis of Alzheimer’s disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  17. A Bidimensional Empirical Mode Decomposition Method for Fusion of Multispectral and Panchromatic Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Weihua Dong

    2014-09-01

    Full Text Available This article focuses on the fusion of high-resolution panchromatic and multispectral images. We propose a new image fusion method based on a Hue-Saturation-Value (HSV) color space model and bidimensional empirical mode decomposition (BEMD); it integrates the high-frequency component of the panchromatic image into the multispectral image and optimizes BEMD by decreasing the sifting time, simplifying extrema point location, and using more efficient interpolation. The new method has been tested with a panchromatic image (SPOT, 10-m resolution) and a multispectral image (TM, 28-m resolution). Visual and quantitative assessment methods were applied to evaluate the quality of the fused images. The experimental results show that the proposed method provides superior performance over conventional fusion algorithms in improving the quality of the fused images in terms of visual effectiveness, standard deviation, correlation coefficient, bias index, and degree of distortion. Both WorldView-II images of five different land cover types and three different sensor combinations (TM/SPOT; WorldView-II, 0.5-m/1-m resolution; and IKONOS, 1-m/4-m resolution) validated the robustness of the BEMD fusion performance. Both of these results prove the capability of the proposed BEMD method as a robust image fusion method that prevents color distortion and enhances image detail.

  18. Two-Dimensional Image Fusion of Planar Bone Scintigraphy and Radiographs in Patients with Clinical Scaphoid Fracture: An Imaging Study

    DEFF Research Database (Denmark)

    Henriksen, O.M.; Lonsdale, M.N.; Jensen, T.D.

    2008-01-01

    Bone scintigraphy is highly sensitive for the detection of fractures, but exact localization of scintigraphic lesions may be difficult and can negatively affect diagnostic accuracy. Purpose: To investigate the influence of image fusion of planar bone scintigraphy and radiographs on image interpretation in patients with suspected scaphoid fracture. Material and Methods: In 24 consecutive patients with suspected scaphoid fracture, a standard planar bone scintigraphy of both hands was supplemented with fusion imaging of the injured wrist. Standard and fusion images were evaluated independently by three experienced nuclear medicine physicians. In addition to the diagnosis, the degree of diagnostic confidence was scored in each case. Results: The addition of fusion images changed the interpretation of each of the three observers in seven, four, and two cases, respectively, reducing the number of positive...

  19. Two-dimensional fusion imaging of planar bone scintigraphy and radiographs in patients with clinical scaphoid fracture: an imaging study

    DEFF Research Database (Denmark)

    Henriksen, Otto Mølby; Lonsdale, Markus Georg; Jensen, T D

    2009-01-01

    Bone scintigraphy is highly sensitive for the detection of fractures, but exact localization of scintigraphic lesions may be difficult and can negatively affect diagnostic accuracy. PURPOSE: To investigate the influence of image fusion of planar bone scintigraphy and radiographs on image interpretation in patients with suspected scaphoid fracture. MATERIAL AND METHODS: In 24 consecutive patients with suspected scaphoid fracture, a standard planar bone scintigraphy of both hands was supplemented with fusion imaging of the injured wrist. Standard and fusion images were evaluated independently by three experienced nuclear medicine physicians. In addition to the diagnosis, the degree of diagnostic confidence was scored in each case. RESULTS: The addition of fusion images changed the interpretation of each of the three observers in seven, four, and two cases, respectively, reducing the number of positive...

  1. Feature-based fusion of infrared and visible dynamic images using target detection

    Institute of Scientific and Technical Information of China (English)

    Congyi Liu; Zhongliang Jing; Gang Xiao; Bo Yang

    2007-01-01

    We employ target detection to improve the performance of feature-based fusion of infrared and visible dynamic images, which forms a novel fusion scheme. First, target detection is used to segment the source image sequences into target and background regions. Then, the dual-tree complex wavelet transform (DT-CWT) is used to decompose all the source image sequences. Different fusion rules are applied in the target and background regions to preserve the target information as much as possible. Real-world infrared and visible image sequences are used to validate the performance of the proposed scheme. Compared with previous fusion approaches for image sequences, improvements in shift invariance, temporal stability and consistency, and computational cost are all achieved.

  2. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    Science.gov (United States)

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-01-23

    Multimodal medical image fusion combines information from one or more images in order to improve the diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.

  3. Improved Multidimensional Color Image Fusion Based on the Multi-Wavelets

    Directory of Open Access Journals (Sweden)

    T.S. Anand

    2013-06-01

    Image fusion refers to the process of combining the visual information present in two or more images into a single high-information-content image. This study proposes fusing multi-dimensional images using the YCbCr color model based on the Multi-Wavelet Transform (MWT). Initially the source images, namely the visible, Infra-Red (IR) and Ultra-Violet (UV) images, are transformed from the RGB color model to the YCbCr color space, and then the MWT is applied to the Y, Cb and Cr components of the respective images. Finally the transform coefficients obtained are fused using different fusion techniques. The performance of the color image fusion process is analyzed using the performance measures Entropy (H), Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE) and Correlation Coefficient (CC).
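    These four measures are standard and straightforward to compute; a minimal numpy sketch follows, assuming grayscale images normalized to [0, 1].

```python
# Minimal numpy versions of the four reported fusion quality measures.
import numpy as np

def entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before the log
    return -np.sum(p * np.log2(p))

def rmse(ref, img):
    return np.sqrt(np.mean((ref - img) ** 2))

def psnr(ref, img, peak=1.0):
    return 20.0 * np.log10(peak / rmse(ref, img))

def corr_coef(ref, img):
    return np.corrcoef(ref.ravel(), img.ravel())[0, 1]
```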

  4. Adaptive polarization image fusion based on regional energy dynamic weighted average

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yong-qiang; PAN Quan; ZHANG Hong-cai

    2005-01-01

    According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, there is much redundant and complementary information in polarized images. Since man-made and natural objects can be easily distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detailed information about the scene, clutter can be removed efficiently while the detailed information is maintained by combining these images. An adaptive polarization image fusion algorithm based on regional energy dynamic weighted averaging is proposed in this paper to combine these images. Through an experiment and simulations, most clutter is removed by this algorithm. The fusion method is applied to different lighting conditions in simulation, and the influence of lighting conditions on the fusion results is analyzed.
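    The regional-energy weighting idea reduces to a per-pixel convex combination whose weights follow the local energy of each source. A minimal sketch, in which the window size and epsilon are illustrative choices rather than the paper's parameters:

```python
# Regional-energy weighted averaging in its simplest form.
import numpy as np
from scipy.ndimage import uniform_filter

def regional_energy_fusion(a, b, size=7, eps=1e-12):
    ea = uniform_filter(a * a, size)    # local energy of image a
    eb = uniform_filter(b * b, size)    # local energy of image b
    w = ea / (ea + eb + eps)            # dynamic per-pixel weight
    return w * a + (1.0 - w) * b
```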

  5. Multiple Input to Multiple Output Images Fusion Based on Turbo Iteration

    Directory of Open Access Journals (Sweden)

    Chu He

    2010-01-01

    This paper addresses the fusion of multipolar Synthetic Aperture Radar (SAR) and color optical images by regarding them as multichannel images. Building on traditional wavelet-based and model-based fusion algorithms, the paper proposes a multichannel image fusion algorithm based on a multiple-input multiple-output turbo-iterative method. The multiple-output representation expresses the original image information from better information-separating viewpoints, and the turbo iteration balances wavelet-based and model-based fusion. The approach is designed as follows. First, the Intensity-Hue-Saturation (IHS) transformation is applied to the SAR and optical images. Then, different fusion processes are used on the corresponding components: fusion based on the multiple-output representation and turbo iteration is applied to the Intensity component, whereas weighted fusion is applied to the Hue and Saturation components. To get the final result, the inverse IHS transformation is applied. Experimental results show that the proposed algorithm is effective in preserving useful complementary information between optical and SAR images.
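    The component-wise structure is generic to IHS pan-sharpening. The sketch below uses one common linear IHS variant (the abstract does not say which variant the authors use) and a plain intensity blend as a placeholder for the turbo-iterative fusion:

```python
# One common linear IHS transform; the alpha blend is a stand-in for the
# paper's turbo-iterative intensity fusion.
import numpy as np

M = np.array([[1/3, 1/3, 1/3],
              [-np.sqrt(2)/6, -np.sqrt(2)/6, 2*np.sqrt(2)/6],
              [1/np.sqrt(2), -1/np.sqrt(2), 0.0]])

def ihs_fuse(optical_rgb, sar, alpha=0.5):
    """optical_rgb: (H, W, 3); sar: (H, W); both roughly in [0, 1]."""
    ihs = optical_rgb @ M.T                       # forward IHS transform
    ihs[..., 0] = alpha * ihs[..., 0] + (1.0 - alpha) * sar
    return ihs @ np.linalg.inv(M).T               # inverse IHS transform
```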

  7. Prediction of Gas Emission Based on Information Fusion and Chaotic Time Series

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In order to make more exact predictions of gas emissions, information fusion and chaotic time series analysis are combined to predict the amount of gas emission in pits. First, a multi-sensor information fusion frame is established; the frame includes a data level, a feature level and a decision level, and the functions at every level are interpreted in detail in this paper. Then, the process of information fusion for gas emission is introduced. On the basis of the data processed at the data and feature levels, chaotic time series analysis and a neural network are combined to predict the amount of gas emission at the decision level. The weights of the neural network are obtained by training rather than by manual setting, in order to avoid the subjectivity introduced by human intervention. Finally, the experimental results were analyzed in Matlab 6.0 and prove that the method is more accurate in predicting the amount of gas emission than the traditional method.

  8. Three-Dimensional Image Fusion of SPECT and CT Scans for Locating Sentinel Lymph Nodes in Malignant Melanomas

    Directory of Open Access Journals (Sweden)

    Michiko Akiyama

    2011-03-01

    Image fusion software can derive a fused image from single photon emission computed tomography and computed tomography scans. We applied three-dimensional fusion imaging to detect sentinel lymph nodes (SLNs) in 3 patients with malignant melanomas of the lumbar, vulvar and head regions, respectively. During each operation, we detected SLNs at the expected site, as indicated by the fusion images. Three-dimensional image fusion could thus be confirmed as a simple and helpful method for precisely localizing SLNs in these patients.

  9. SAR and Oblique Aerial Optical Image Fusion for Urban Area Image Segmentation

    Science.gov (United States)

    Fagir, J.; Schubert, A.; Frioud, M.; Henke, D.

    2017-05-01

    The fusion of synthetic aperture radar (SAR) and optical data is a dynamic research area, but image segmentation is rarely treated. While a few studies use low-resolution nadir-view optical images, we approached the segmentation of SAR and optical images acquired from the same airborne platform, leading to an oblique view with high resolution and thus increased complexity. To overcome the geometric differences, we generated a digital surface model (DSM) from adjacent optical images and used it to project both the DSM and SAR data into the optical camera frame, followed by segmentation with each channel. The fused segmentation algorithm was found to outperform the single-channel version.

  10. Modality prediction of biomedical literature images using multimodal feature representation

    Directory of Open Access Journals (Sweden)

    Pelka, Obioma

    2016-08-01

    This paper presents the modelling approaches performed to automatically predict the modality of images found in biomedical literature. Various state-of-the-art visual features, such as Bag-of-Keypoints computed with dense SIFT descriptors, texture features and Joint Composite Descriptors, were used for visual image representation. Text representation was obtained by vector quantisation on a Bag-of-Words dictionary generated using attribute importance derived from a χ²-test. By computing the principal components separately on each feature, dimension reduction as well as computational load reduction was achieved. Various multiple-feature fusions were adopted to supplement visual image information with the corresponding text information. The improvement obtained when using multimodal features vs. visual or text features alone was detected, analysed and evaluated. Random Forest models with 100 to 500 deep trees grown by resampling, a multi-class linear-kernel SVM with C=0.05, and a late fusion of the two classifiers were used for modality prediction. A Random Forest classifier achieved the higher accuracy, and Bag-of-Keypoints computed with dense SIFT descriptors proved to be a better approach than with Lowe's SIFT.

  11. Log-Gabor energy based multimodal medical image fusion in NSCT domain.

    Science.gov (United States)

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT, followed by fusing the low- and high-frequency components. The phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently determine the frequency coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual-tree complex wavelet transform (DTCWT) based image fusion methods and other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results for multimodal medical images than other algorithms. Further, the applicability of the proposed method has been demonstrated by carrying out a clinical example on a woman affected by a recurrent tumor.
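    The Log-Gabor energy used for the high-frequency rule derives from the response of log-Gabor filters. A minimal radial (orientation-free) version in numpy might look like the sketch below; the parameter values are illustrative, and real implementations use banks of scales and orientations.

```python
# Energy response of a single radial log-Gabor filter (illustrative only).
import numpy as np

def log_gabor_energy(img, f0=0.1, sigma_ratio=0.55):
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.hypot(fx, fy)                 # radial frequency grid
    f[0, 0] = 1.0                        # placeholder to avoid log(0)
    lg = np.exp(-np.log(f / f0) ** 2 / (2.0 * np.log(sigma_ratio) ** 2))
    lg[0, 0] = 0.0                       # zero DC response
    resp = np.fft.ifft2(np.fft.fft2(img) * lg)
    return np.abs(resp)
```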

  12. Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain

    Directory of Open Access Journals (Sweden)

    Yong Yang

    2014-01-01

    Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT, followed by fusing the low- and high-frequency components. The phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently determine the frequency coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual-tree complex wavelet transform (DTCWT) based image fusion methods and other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results for multimodal medical images than other algorithms. Further, the applicability of the proposed method has been demonstrated by carrying out a clinical example on a woman affected by a recurrent tumor.

  13. Image fusion using wavelet transform and its application to asymmetric cryptosystem and hiding.

    Science.gov (United States)

    Mehra, Isha; Nishchal, Naveen K

    2014-03-10

    Image fusion is a popular method which provides a better-quality fused image for interpreting the image data. In this paper, color image fusion using the wavelet transform is applied for securing data through an asymmetric encryption scheme and image hiding. The components of a color image corresponding to different wavelengths (red, green, and blue) are fused together using the discrete wavelet transform for obtaining a better-quality retrieved color image. The fused color components are encrypted using an amplitude- and phase-truncation approach in the Fresnel transform domain. Also, the individual color components are transformed into different cover images in order to disguise the information of the input image from an attacker. Asymmetric keys, Fresnel propagation parameters, the weighing factor, and three cover images provide an enlarged key space and hence enhanced security. Computer simulation results support the idea of the proposed fused color image encryption scheme.

  14. Feature-Based Image Fusion with a Uniform Discrete Curvelet Transform

    Directory of Open Access Journals (Sweden)

    Liang Xu

    2013-05-01

    The uniform discrete curvelet transform (UDCT) is a novel tool for multiscale representations with several desirable properties compared to previous representation methods. A novel algorithm based on the UDCT is proposed for the fusion of multi-source images, and a novel fusion rule for the different subband coefficients obtained by UDCT decomposition is discussed in detail. Low-pass subband coefficients are merged with a fusion rule based on a feature similarity (FSIM) index; high-pass directional subband coefficients are merged with a fusion rule based on a complex-coefficients feature similarity (CCFSIM) index. Experimental results demonstrate that the proposed algorithm fuses all of the useful information from the source images without introducing artefacts. Compared with several state-of-the-art fusion methods, it yields better performance and achieves higher efficiency.

  15. 3D mapping of buried underworld infrastructure using dynamic Bayesian network based multi-sensory image data fusion

    Science.gov (United States)

    Dutta, Ritaban; Cohn, Anthony G.; Muggleton, Jen M.

    2013-05-01

    The successful operation of buried infrastructure within urban environments is fundamental to the conservation of modern living standards. In this paper a novel multi-sensor image fusion framework based on a dynamic Bayesian network is proposed and investigated for the automatic detection of buried underworld infrastructure. Experimental multi-sensor images were acquired for a known buried plastic water pipe using vibro-acoustic sensor-based location methods and a ground-penetrating radar imaging system. Conventional image-processing techniques were used to process the three types of sensory images. Depth and location information regarding the target pipe, extracted independently from the different images, was fused using the dynamic Bayesian network to predict the most probable location and depth of the pipe. The outcome of this study was very encouraging, as the approach was able to detect the target pipe with high accuracy compared with the existing pipe survey map. The approach was also applied successfully to produce a most probable 3D buried-asset map.

  16. The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    OpenAIRE

    Guocheng Yang; Meiling Li; Leiting Chen; Jie Yu

    2015-01-01

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD), and the similarity of two subbands is accurately measured by the Jensen-Shannon divergence of two GGDs. To preserve more useful information from the source images, the new fusion rules are devel...

  17. Research on MR-SVD based visual and infrared image fusion

    Science.gov (United States)

    Song, Yajun; Xiao, Junbo; Yang, Jinbao; Chai, Zhi; Wu, Yuanliang

    2016-10-01

    Transform-domain fusion of visible and infrared images is an important research direction. Natural images cannot all be represented effectively by a wavelet transform with a single wavelet basis, owing to the high redundancy of its representation of line and curve singularities. Multi-resolution singular value decomposition (MR-SVD) computes the transformation matrix from the original image; with this matrix, the original image is decomposed into uncorrelated "smooth" and "detail" components. On each layer of the smooth components, the singular value decomposition (SVD) is used in place of the wavelet filter, realizing a multi-level decomposition. A novel visible and infrared image fusion algorithm is presented that exploits the better sparsity and adaptability of MR-SVD, which resolves the difficult problem of selecting a wavelet basis for different kinds of visible and infrared images. The same transformation matrices computed from the original visible or infrared imagery are used to decompose the original images with MR-SVD, which reduces the blurring in the fused image that averaged transformation matrices would cause. Then, cycle spinning is employed to remove artifacts in the fused image. Experimental results according to both subjective and objective criteria, including the average, standard deviation and average mutual information, indicate that the proposed method obtains better fusion results than methods such as the wavelet transform.

  18. Lossless predictive coding for images with Bayesian treatment.

    Science.gov (United States)

    Liu, Jing; Zhai, Guangtao; Yang, Xiaokang; Chen, Li

    2014-12-01

    Adaptive predictors have long been used for the lossless predictive coding of images. Most existing lossless predictive coding techniques mainly focus on the suitability of the prediction model for the training set, under the underlying assumption of local consistency, which may not hold well on object boundaries and can cause large prediction errors. In this paper, we propose a novel approach based on the assumption that local consistency and patch redundancy exist simultaneously in natural images. We derive a family of linear models and design a new algorithm to automatically select one suitable model for prediction. From the Bayesian perspective, the model with maximum posterior probability is considered the best. Two types of model evidence are included in our algorithm. One is traditional training evidence, which represents a model's suitability for the current pixel under the assumption of local consistency. The other is target evidence, which is proposed to express the preference for different models from the perspective of patch redundancy. It is shown that the fusion of training evidence and target evidence jointly exploits the benefits of local consistency and patch redundancy. As a result, our proposed predictor is more suitable for natural images with textures and object boundaries. Comprehensive experiments demonstrate that the proposed predictor achieves higher efficiency compared with state-of-the-art lossless predictors.
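    Per-pixel predictor selection is the mechanical core of such schemes. The toy sketch below scores each candidate causal predictor on a small window of already-decoded pixels, loosely analogous to the training evidence above; the Bayesian posterior computation and target evidence of the paper are omitted, and all names and parameters are illustrative.

```python
# Toy causal predictor selection for lossless coding (illustrative only).
import numpy as np

PREDICTORS = {
    "west":  lambda img, r, c: img[r, c - 1],
    "north": lambda img, r, c: img[r - 1, c],
    "mean":  lambda img, r, c: 0.5 * (img[r, c - 1] + img[r - 1, c]),
}

def predict_pixel(img, r, c, win=4):
    """Predict img[r, c] (requires r, c >= 1) with the locally best model."""
    best_f, best_err = None, np.inf
    for f in PREDICTORS.values():
        err = 0.0
        # score the candidate on a causal training window
        for rr in range(max(1, r - win), r + 1):
            for cc in range(max(1, c - win), c):
                err += abs(img[rr, cc] - f(img, rr, cc))
        if err < best_err:
            best_f, best_err = f, err
    return best_f(img, r, c)
```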

  19. Image fusion using MIM software via picture archiving and communication system

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Preliminary studies of multimodality image registration and fusion were performed using image fusion software and a picture archiving and communication system (PACS) to explore the methodology. Original image volume data were acquired with a CT scanner, MR and dual-head coincidence SPECT, respectively. The data sets from all imaging devices were queried, retrieved, transferred and accessed via DICOM PACS. The image fusion was performed at the SPECT ICON workstation, where the MIM (Medical Image Merge) fusion software was installed. The images were created by reslicing the original volume on the fly. The image volumes were aligned by translation and rotation of the view ports with respect to the original volume orientation. The transparency factor and contrast were adjusted so that both volumes could be visualized in the merged images. The image volume data of CT, MR and nuclear medicine were transferred, accessed and loaded via PACS successfully. Well-fused images of chest CT/18F-FDG and brain MR/SPECT were obtained. These results showed that the image fusion technique using PACS is feasible and practical. Further experimentation and larger validation studies are needed to explore its full clinical potential.

  20. A curvelet transform approach for the fusion of MR and CT images

    Science.gov (United States)

    Ali, F. E.; El-Dokany, I. M.; Saad, A. A.; Abd El-Samie, F. E.

    2010-02-01

    There are several medical imaging techniques, such as magnetic resonance (MR) and computed tomography (CT). Both techniques give sophisticated characteristics of the region to be imaged. This paper proposes a curvelet-based approach for fusing MR and CT images to obtain images with as much detail as possible, for the sake of medical diagnosis. The approach is based on the application of the additive wavelet transform (AWT) to both images and the segmentation of their detail planes into small overlapping tiles. The ridgelet transform is then applied to each of these tiles, and the fusion process is performed on the ridgelet transforms of the tiles. Simulation results show the superiority of the proposed curvelet fusion approach to traditional fusion techniques such as the multiresolution discrete wavelet transform (DWT) technique and the principal component analysis (PCA) technique. The fusion of MR and CT images in the presence of noise is also studied, and the results reveal that, unlike the DWT fusion technique, the proposed curvelet fusion approach does not require denoising.

  1. MR Brain Real Images Segmentation Based Modalities Fusion and Estimation Et Maximization Approach

    Directory of Open Access Journals (Sweden)

    ASSAS Ouarda

    2016-01-01

    With the development of image acquisition techniques, more data from different image sources have become available. Multi-modality image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single modality. The main aim of this work is to improve the segmentation of real cerebral MR images by fusing modalities (T1, T2 and PD) using an Expectation-Maximization (EM) approach. The adopted approaches were evaluated and compared using four criteria: standard deviation (STD), information entropy (IE), correlation coefficient (CC) and spatial frequency (SF). The experimental results on real MR brain images prove that the adopted fusion scenarios are more accurate and robust than the standard EM approach.

  2. Multi-window visual saliency extraction for fusion of visible and infrared images

    Science.gov (United States)

    Zhao, Jufeng; Gao, Xiumin; Chen, Yueting; Feng, Huajun; Wang, Daodang

    2016-05-01

    Fusion of visible and infrared images aims to combine source images of the same scene into a single image with more feature information and better visual performance. In this paper, the authors propose a fusion method for visible and infrared images based on multi-window visual saliency extraction. To extract feature information from infrared and visible images, we design a local-window-based frequency-tuned method. With this idea, visual saliency maps are calculated for the feature information under different local windows. These maps give the attention weight of each pixel and region of the images. The enhanced fusion is then done using a simple weighted combination. Compared with classical and state-of-the-art approaches, the experimental results demonstrate that the proposed approach runs efficiently and performs better than other methods, especially in visual performance and detail enhancement.
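    Frequency-tuned saliency compares a mean image against a lightly blurred one; restricting the mean to a local window gives the multi-window flavour. A minimal sketch follows, with illustrative names and parameters that are not taken from the paper:

```python
# Frequency-tuned saliency with an optional local window, followed by a
# saliency-weighted blend of the two source images.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def ft_saliency(img, window=None):
    blurred = gaussian_filter(img, 1.0)      # suppress fine noise
    mean = img.mean() if window is None else uniform_filter(img, window)
    return np.abs(mean - blurred)

def saliency_weighted_fusion(vis, ir, window=31, eps=1e-12):
    sv, si = ft_saliency(vis, window), ft_saliency(ir, window)
    w = sv / (sv + si + eps)                 # per-pixel attention weight
    return w * vis + (1.0 - w) * ir
```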

  3. Enhanced Singular Value Decomposition based Fusion for Super Resolution Image Reconstruction

    Directory of Open Access Journals (Sweden)

    K. Joseph Abraham Sundar

    2015-11-01

    The singular value decomposition (SVD) plays a very important role in the field of image processing for applications such as feature extraction, image compression, etc. The main objective is to enhance the resolution of the image based on singular value decomposition. The original image and the subsequent sub-pixel-shifted image, subjected to image registration, are transferred to the SVD domain. An enhanced method of choosing the singular values from the SVD-domain images to reconstruct a high-resolution image using fusion techniques is proposed; this technique is called enhanced SVD-based fusion. Significant improvement in performance is observed by applying the enhanced SVD method before the various interpolation methods that are incorporated. The technique is highly advantageous and computationally fast, which is most needed for satellite imaging, high-definition television broadcasting, medical imaging diagnosis, military surveillance, remote sensing, etc.
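    The basic SVD-domain fusion step can be sketched in a few lines; the paper's enhanced selection is more elaborate, so the following shows only the core idea of keeping the dominant singular values, under the assumption of registered, equal-sized frames.

```python
# Basic SVD-domain fusion of two registered frames: keep the larger singular
# value at each index and rebuild in one frame's subspaces (illustrative).
import numpy as np

def svd_fuse(a, b):
    ua, sa, vta = np.linalg.svd(a, full_matrices=False)
    _, sb, _ = np.linalg.svd(b, full_matrices=False)
    s = np.maximum(sa, sb)       # dominant singular values from either frame
    return (ua * s) @ vta
```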

  4. Opto-acoustic image fusion technology for diagnostic breast imaging in a feasibility study

    Science.gov (United States)

    Zalev, Jason; Clingman, Bryan; Herzog, Don; Miller, Tom; Ulissey, Michael; Stavros, A. T.; Oraevsky, Alexander; Lavin, Philip; Kist, Kenneth; Dornbluth, N. C.; Otto, Pamela

    2015-03-01

    Functional opto-acoustic (OA) imaging was fused with gray-scale ultrasound acquired using a specialized duplex handheld probe. Feasibility Study findings indicated the potential to characterize breast masses for cancer more accurately than conventional diagnostic ultrasound (CDU). The Feasibility Study included OA imagery of 74 breast masses collected using the investigational Imagio® breast imaging system. Superior specificity and equal sensitivity to CDU were demonstrated, suggesting that OA fusion imaging may potentially obviate the need for negative biopsies, without missing cancers, in a certain percentage of breast masses. Preliminary results from a 100-subject Pilot Study are also discussed. A larger Pivotal Study (n=2,097 subjects) is underway to confirm the Feasibility Study and Pilot Study findings.

  5. Interferometer predictions with triangulated images

    DEFF Research Database (Denmark)

    Brinch, Christian; Dullemond, C. P.

    2014-01-01

    Interferometers play an increasingly important role for spatially resolved observations. If employed at full potential, interferometry can probe an enormous dynamic range in spatial scale. Interpretation of the observed visibilities requires the numerical computation of Fourier integrals over the synthetic model images. To get the correct values of these integrals, the model images must have the right size and resolution. Insufficient care in these choices can lead to wrong results. We present a new general-purpose scheme for the computation of visibilities of radiative transfer images. Our method requires a model image that is a list of intensities at arbitrarily placed positions on the image-plane. It creates a triangulated grid from these vertices, and assumes that the intensity inside each triangle of the grid is a linear function. The Fourier integral over each triangle is then evaluated...

  6. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor.

    Science.gov (United States)

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-09-15

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation.

  7. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor

    Directory of Open Access Journals (Sweden)

    Xuming Zhang

    2016-09-01

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using a weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of the pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. Extensive experiments on multimodal medical images show that, compared with numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural-similarity-based metric, fusion quality index, fusion similarity metric and standard deviation.

  8. SAR and Infrared Image Fusion in Complex Contourlet Domain Based on Joint Sparse Representation

    Directory of Open Access Journals (Sweden)

    Wu Yiquan

    2017-08-01

    To address the problems of the large grayscale difference between infrared and Synthetic Aperture Radar (SAR) images and of their fusion image not being fit for human visual perception, we propose a fusion method for SAR and infrared images in the complex contourlet domain based on joint sparse representation. First, we perform complex contourlet decomposition of the infrared and SAR images. Then, we employ the K-Singular Value Decomposition (K-SVD) method to obtain an over-complete dictionary of the low-frequency components of the two source images. Using a joint sparse representation model, we then generate a joint dictionary. We obtain the sparse representation coefficients of the low-frequency components of the source images in the joint dictionary by the Orthogonal Matching Pursuit (OMP) method and select them using the selection-maximization strategy. We then reconstruct these components to obtain the fused low-frequency components, and fuse the high-frequency components using two criteria: the coefficient of visual sensitivity and the degree of energy matching. Finally, we obtain the fused image by the inverse complex contourlet transform. Compared with three classical fusion methods and recently presented fusion methods, e.g., one based on the Non-Subsampled Contourlet Transform (NSCT) and another based on sparse representation, the method proposed in this paper can effectively highlight the salient features of the two source images and inherit their information to the greatest extent.

  9. A Framework for Satellite Image Enhancement Using Quantum Genetic and Weighted IHS+Wavelet Fusion Method

    Directory of Open Access Journals (Sweden)

    Amal A. HAMED

    2016-04-01

    This paper examines the applicability of quantum genetic algorithms (QGAs) to optimization problems posed by satellite image enhancement techniques, particularly super-resolution and fusion. We introduce a framework that starts by reconstructing a higher-resolution panchromatic image using the subpixel shifts between a set of lower-resolution images (registration), followed by interpolation and restoration, and finally uses the higher-resolution image to pan-sharpen a multispectral image with a weighted IHS+wavelet fusion technique. For successful super-resolution, accurate image registration should be achieved by optimal estimation of the subpixel shifts, and blind restoration and interpolation with optimal parameters should be performed for an optimal-quality higher-resolution image. There is a trade-off between spatial and spectral enhancement in image fusion, and it is difficult for existing methods to do their best in both aspects. The objective here is to achieve all the combined requirements with optimal fusion weights, and to use parameter constraints to direct the optimization process. The QGA is used to estimate the optimal parameters needed by each mathematical model in this super-resolution and fusion framework. The simulation results show that the QGA-based method can automatically estimate the parameters that require maximal accuracy, achieving higher quality and a more efficient convergence rate than the corresponding conventional GA-based and classical computational methods.

  10. Adaptive high-frequency information fusion algorithm of radar and optical images

    Science.gov (United States)

    Wang, Yiding; Qin, Shuai

    2011-12-01

    An adaptive high-frequency information fusion algorithm for radar and optical images is proposed in this paper, in order to improve the resolution of the radar image and preserve more radar information. Firstly, the Hough transform is adopted in the registration of the low-resolution radar image and the high-resolution optical image; the implicit linear information is extracted from the two heterogeneous images for a better result. Then the NSCT transform is used for decomposition and fusion. In different decomposition layers, or in the same layer with different directions, the fusion rules adapt to the high-frequency information of the images: the ratio values of high-frequency information entropy, variance, gradient and edge strength are calculated after NSCT decomposition, and the measure with the smallest ratio value is selected as the optimal rule for regional fusion. The high-frequency information of the radar image is thus better retained, while the low-frequency information of the optical image is also preserved. Experimental results showed that our approach performs better than methods with a single fusion rule.

  11. Wavelet-Based Digital Image Fusion on Reconfigurable FPGA Using Handel-C Language

    Directory of Open Access Journals (Sweden)

    Dr. G. Mohan

    2013-07-01

    Field Programmable Gate Array (FPGA) technology has become a viable target for the implementation of real-time image fusion algorithms; different fusion methods have been proposed, mainly in the fields of remote sensing and computer vision. Image fusion is basically a process where multiple images (more than one) are combined to form a single resultant fused image; this fused image is more informative than its original input images. In most papers, image fusion algorithms were implemented only at the simulation level. In this paper a wavelet-based image fusion algorithm is employed and implemented on an FPGA-based hardware system, using Xilinx Platform Studio EDK 11.1 on a Spartan-3E FPGA. FPGA technology offers basic digital blocks with flexible interconnections to achieve high-speed digital hardware realization. The FPGA consists of a system of logic blocks, such as look-up tables, gates, or flip-flops, and some amount of memory. The algorithm is transferred from the computer to the FPGA board using a JTAG cable. In this proposed work, the algorithm performing the wavelet-based image fusion is developed in the Handel-C language. The result is transferred back to the host system to analyze the hardware resources used by the FPGA.

  12. Medical Image Fusion Algorithm based on Local Average Energy-Motivated PCNN in NSCT Domain

    OpenAIRE

    Huda Ahmed; Emad N. Hassan; Amr A. Badr

    2016-01-01

    Medical Image Fusion (MIF) can significantly improve the performance of medical diagnosis, treatment planning and image-guided surgery by providing high-quality, information-rich medical images. Traditional MIF techniques suffer from common drawbacks such as contrast reduction, edge blurring and image degradation. Pulse-Coupled Neural Network (PCNN) based MIF techniques outperform the traditional methods in providing high-quality fused images due to their global coupling and pulse sync...

  13. Live-cell imaging of conidial fusion in the bean pathogen, Colletotrichum lindemuthianum.

    Science.gov (United States)

    Ishikawa, Francine H; Souza, Elaine A; Read, Nick D; Roca, M Gabriela

    2010-01-01

    Fusion of conidia and conidial germlings by means of conidial anastomosis tubes (CATs) is a common phenomenon in filamentous fungi, including many plant pathogens. It has a number of different roles, and has been speculated to facilitate parasexual recombination and horizontal gene transfer between species. The bean pathogen Colletotrichum lindemuthianum naturally undergoes CAT fusion on the host surface and within asexual fruiting bodies in anthracnose lesions on its host. It has not been previously possible to analyze the whole process of CAT fusion in this or any other pathogen using live-cell imaging techniques. Here we report the development of a robust protocol for doing this with C. lindemuthianum in vitro. The percentage of conidial germination and CAT fusion was found to be dependent on culture age, media and the fungal strain used. Increased CAT fusion was correlated with reduced germ tube formation. We show time-lapse imaging of the whole process of CAT fusion in C. lindemuthianum for the first time and monitored nuclear migration through fused CATs using nuclei labelled with GFP. CAT fusion in this pathogen was found to exhibit significant differences to that in the model system Neurospora crassa. In contrast to N. crassa, CAT fusion in C. lindemuthianum is inhibited by nutrients (it only occurs in water) and the process takes considerably longer. Copyright © 2009 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  14. Region-based fusion of infrared and visible images using nonsubsampled contourlet transform

    Institute of Scientific and Technical Information of China (English)

    Baolong Guo; Qiang Zhang; Ye Hou

    2008-01-01

    With the nonsubsampled contourlet transform (NSCT), a novel region-segmentation-based fusion algorithm for infrared (IR) and visible images is presented. The IR image is segmented according to the physical features of the target. The source images are decomposed by the NSCT, and different fusion rules for the target regions and the background regions are then employed to merge the NSCT coefficients respectively. Finally, the fused image is obtained by applying the inverse NSCT. Experimental results show that the proposed algorithm outperforms pixel-based methods, including the traditional wavelet-based method and the NSCT-based method.
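    The region-wise combination reduces to masked selection within each subband. A minimal sketch, in which both rules are simplified stand-ins for the paper's actual NSCT fusion rules:

```python
# Region-based combination in one subband: IR coefficients win inside the
# segmented target mask; elsewhere the larger-magnitude coefficient is kept.
import numpy as np

def region_fuse(ir_coeff, vis_coeff, target_mask):
    background = np.where(np.abs(ir_coeff) >= np.abs(vis_coeff),
                          ir_coeff, vis_coeff)
    return np.where(target_mask, ir_coeff, background)
```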

  15. Fusion between Satellite and Geophysical images in the study of Archaeological Sites

    Science.gov (United States)

    Karamitrou, A. A.; Tsokas, G. N.; Petrou, M.; Maggidis, C.

    2012-12-01

    In this work various image fusion techniques are applied to one satellite (Quickbird) and one geophysical (electric resistivity) image to create combinations with higher information content than either of the two original images independently. The resulting images provide more information about possible buried archaeological relics. The examined archaeological area is located in mainland Greece, in Boeotia, at the acropolis of Gla. The acropolis was built on a flat-topped bedrock outcrop at the north-eastern edge of the Kopais basin; when Kopais was filled with water, Gla emerged as an island. At the end of the 14th century BC, the two palaces of Thebes and Orchomenos jointly undertook a large-scale engineering project to transform the Kopais basin into a fertile plain; they used the acropolis to monitor the project and as a warehouse to store the harvest. To examine the acropolis for potential archaeological remains we use a Quickbird satellite image that covers the surrounding area of Gla. The satellite data include one panchromatic (8532x8528 pixels) and one multispectral (2133x2132 pixels) image, collected on 30 August 2011, covering an area of 20 square kilometers. In addition, geophysical measurements were performed using the electric resistivity method on the south-west part of the acropolis. To combine these images we investigate mean-value fusion, wavelet fusion, and curvelet fusion; in the wavelet and curvelet fusions we apply the maximum-frequency rule as the fusion criterion. Furthermore, the two original images, and excavations near the area, suggest that the dominant orientations of the buried features are north-south and east-west; therefore, in the curvelet fusion we additionally enhance the image details along these specific orientations in the curvelet domain. The resulting fused images succeed in mapping linear and rectangular features that were not easily visible in the original images.

  16. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Naveed ur Rehman

    2015-05-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding the input data, whereas standard univariate empirical mode decomposition (EMD) based fusion techniques suffer from inherent mode-mixing and mode-misalignment issues, characterized respectively by a single intrinsic mode function (IMF) containing multiple scales, or by identically indexed IMFs from multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NSCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis-testing approach on our large image dataset to identify statistically significant performance differences.

  17. Infrared and visible image fusion scheme based on NSCT and low-level visual features

    Science.gov (United States)

    Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei

    2016-05-01

    Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high-frequency subbands and one low-frequency subband. To improve the fusion performance we designed two new activity measures for the fusion of the lowpass and highpass subbands. These measures are based on the fact that the human visual system (HVS) perceives image quality mainly according to some of an image's low-level features. The selection principles for the different subbands are then presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.

  18. Dictionary learning method for joint sparse representation-based image fusion

    Science.gov (United States)

    Zhang, Qiheng; Fu, Yuli; Li, Haifeng; Zou, Jian

    2013-05-01

    Recently, sparse representation (SR) and joint sparse representation (JSR) have attracted a lot of interest in image fusion. SR models signals by sparse linear combinations of prototype signal atoms that make up a dictionary. JSR holds that the different signals from the various sensors of the same scene form an ensemble: these signals share a common sparse component, and each individual signal owns an innovation sparse component. JSR offers lower computational complexity compared with SR. First, for JSR-based image fusion, we give a new fusion rule. Then, motivated by the method of optimal directions (MOD), we propose a novel dictionary learning method for JSR (MODJSR) whose dictionary-updating procedure is derived by employing the JSR structure once with singular value decomposition (SVD). MODJSR has lower complexity than the K-SVD algorithm, which is often used in previous JSR-based fusion algorithms. To capture image details more efficiently, we propose the generalized JSR, in which the signal ensemble depends on two dictionaries; MODJSR is extended to MODGJSR in this case. MODJSR/MODGJSR can simultaneously carry out dictionary learning, denoising, and fusion of noisy source images. Experiments are given to demonstrate the validity of MODJSR/MODGJSR for image fusion.

  19. ETS Gene Fusions as Predictive Biomarkers of Resistance to Radiation Therapy for Prostate Cancer

    Science.gov (United States)

    2016-05-01

    Award Number: W81XWH-10-1-0582. Title: ETS Gene Fusions as Predictive Biomarkers of Resistance to Radiation Therapy for Prostate Cancer. Abstract: The research goals of this grant proposal are to: 1) investigate the effect of ETS gene fusions on radiation...

  20. Sensor fusion and nonlinear prediction for anomalous event detection

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, J.V.; Moore, K.R.; Elphic, R.C.

    1995-03-07

    The authors consider the problem of using the information from various time series, each one characterizing a different physical quantity, to predict the future state of the system and, based on that information, to detect and classify anomalous events. They stress the application of principal components analysis (PCA) to analyze and combine data from different sensors. They construct both linear and nonlinear predictors. In particular, for linear prediction the authors use the least-mean-square (LMS) algorithm and for nonlinear prediction they use both backpropagation (BP) networks and fuzzy predictors (FP). As an application, they consider the prediction of gamma counts from past values of electron and gamma counts recorded by the instruments of a high altitude satellite.
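    For the linear branch, the LMS predictor is simple enough to show in full. The following is a minimal one-step-ahead sketch in which the filter order, step size and anomaly threshold are illustrative choices, not values taken from the record:

```python
# A minimal LMS one-step-ahead predictor; thresholding the prediction error
# flags candidate anomalous events.
import numpy as np

def lms_predict(series, order=4, mu=0.01):
    w = np.zeros(order)
    preds = np.zeros(len(series))
    for t in range(order, len(series)):
        x = series[t - order:t][::-1]   # most recent samples first
        preds[t] = w @ x
        e = series[t] - preds[t]        # prediction error
        w += 2.0 * mu * e * x           # LMS weight update
    return preds

# Example: predict a noisy sinusoid and flag large residuals as anomalies.
t = np.linspace(0.0, 20.0, 500)
counts = np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
residual = np.abs(counts - lms_predict(counts))
anomalies = residual > 0.5              # crude anomaly flag
```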

  1. Enhancement display of veins distribution based on binocular vision and image fusion technology

    Science.gov (United States)

    Liu, Peng; Di, Si; Jin, Jian; Bai, Liping

    2014-11-01

    The capture and display of vein distribution is important for applications such as medical diagnosis and identification, and has therefore become a popular topic in the field of biomedical imaging. Usually, the vein distribution is captured by infrared imaging, but the display result is similar to a gray-scale picture, and the color and details of the skin cannot be retained; to some degree, it looks unreal to doctors. In this paper, we develop a binocular vision system to carry out enhanced display of veins while keeping the actual skin color. The binocular system consists of two adjacent cameras; a visible-band filter and an infrared-band filter are placed in front of the two lenses, respectively, so that visible-band and infrared-band pictures can be captured simultaneously. After that, a new fusion process is applied to the two pictures, involving histogram mapping, principal component analysis (PCA) and modified bilateral-filter fusion. The final results show that both the vein distribution and the actual skin color of the back of the hand can be clearly displayed. In addition, the correlation coefficient, average gradient and average distortion are selected as the parameters to evaluate image quality. By comparing these parameters, it is evident that our fusion method is superior to some popular fusion methods such as Gaussian-filter fusion, intensity-hue-saturation (IHS) fusion and bilateral-filter fusion.

  2. Dual Channel Pulse Coupled Neural Network Algorithm for Fusion of Multimodality Brain Images with Quality Analysis

    Directory of Open Access Journals (Sweden)

    Kavitha SRINIVASAN

    2014-09-01

    Background: A review of medical imaging techniques shows that radiologists and physicians still need high-resolution medical images with complementary information from different modalities to ensure efficient analysis. This requirement is addressed by fusion techniques, with the fused image being used in image-guided surgery, image-guided radiotherapy and non-invasive diagnosis. Aim: This paper focuses on a Dual-Channel Pulse-Coupled Neural Network (PCNN) algorithm for the fusion of multimodality brain images; the fused image is further analyzed using subjective (human perception) and objective (statistical) measures for quality analysis. Material and Methods: The modalities used in fusion are CT, MRI with subtypes T1/T2/PD/GAD, PET and SPECT, since the information from each modality is complementary to the others. The objective measures selected for evaluation of the fused image were Information Entropy (IE, image quality), Mutual Information (MI, deviation of the fused image from the source images) and Signal-to-Noise Ratio (SNR, noise level). Eight sets of brain images with different modalities (T2 with T1, T2 with CT, PD with T2, PD with GAD, T2 with GAD, T2 with SPECT-Tc, T2 with SPECT-Ti, T2 with PET) were chosen for experimental purposes, and the proposed technique is compared with existing fusion methods such as the average method, the contrast pyramid, the Shift-Invariant Discrete Wavelet Transform (SIDWT) with Haar, and the morphological pyramid, using the selected measures to ascertain relative performance. Results: The IE and SNR values of the fused image derived from the dual-channel PCNN are higher than those of the other fusion methods, showing that the quality is better with less noise. Conclusion: The fused image resulting from the proposed method retains the contrast, shape and texture of the source images without false information or information loss.
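    Of the three measures, mutual information is the least obvious to implement; a compact histogram-based version follows, assuming images normalized to [0, 1].

```python
# Mutual information between a fused image and one source image, estimated
# from a joint histogram.
import numpy as np

def mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[0, 1], [0, 1]])
    pxy = joint / joint.sum()                  # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    nz = pxy > 0                               # avoid log(0)
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))
```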

  3. Analyzer-based imaging of spinal fusion in an animal model

    Science.gov (United States)

    Kelly, M. E.; Beavis, R. C.; Fiorella, David; Schültke, E.; Allen, L. A.; Juurlink, B. H.; Zhong, Z.; Chapman, L. D.

    2008-05-01

    Analyzer-based imaging (ABI) utilizes synchrotron radiation sources to create collimated monochromatic x-rays. In addition to x-ray absorption, this technique uses refraction and scatter rejection to create images. ABI provides dramatically improved contrast over standard imaging techniques. Twenty-one adult male Wistar rats were divided into four experimental groups to undergo the following interventions: (1) non-injured control, (2) decortication alone, (3) decortication with iliac crest bone grafting and (4) decortication with iliac crest bone grafting and interspinous wiring. Surgical procedures were performed at the L5-6 level. Animals were killed at 2, 4 and 6 weeks after the intervention and the spine muscle blocks were excised. Specimens were assessed for the presence of fusion by (1) manual testing, (2) conventional absorption radiography and (3) ABI. ABI showed no evidence of bone fusion in groups 1 and 2 and showed solid or possibly solid fusion in subjects from groups 3 and 4 at 6 weeks. Metal artifacts were not present in any of the ABI images. Conventional absorption radiographs did not provide diagnostic quality imaging of either the graft material or fusion masses in any of the specimens in any of the groups. Synchrotron-based ABI represents a novel imaging technique which can be used to assess spinal fusion in a small animal model. ABI produces superior image quality when compared to conventional radiographs.

  4. Image Fusion Based on Nonsubsampled Contourlet Transform and Saliency-Motivated Pulse Coupled Neural Networks

    Directory of Open Access Journals (Sweden)

    Liang Xu

    2013-01-01

    Full Text Available In the nonsubsampled contourlet transform (NSCT) domain, a novel image fusion algorithm based on the visual attention model and pulse coupled neural networks (PCNNs) is proposed. For the fusion of high-pass subbands in the NSCT domain, a saliency-motivated PCNN model is proposed. The main idea is that high-pass subband coefficients are combined with their visual saliency maps as input to motivate the PCNN. Coefficients with large firing times are employed as the fused high-pass subband coefficients. Low-pass subband coefficients are merged using a weighted fusion rule based on the firing times of the PCNN. The fused image contains abundant detail from the source images and effectively preserves the saliency structure while enhancing image contrast. The algorithm preserves the completeness and the sharpness of object regions. The fused image is more natural and can satisfy the requirements of the human visual system (HVS). Experiments demonstrate that the proposed algorithm yields better performance.
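    The PCNN itself reduces to a few coupled update equations. The following is a heavily simplified single-channel sketch that counts firing times as a fusion activity measure; the parameters and linking kernel are illustrative defaults, not the values used by the authors.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_times(stim: np.ndarray, iterations: int = 30,
                      alpha_theta: float = 0.2, v_theta: float = 20.0,
                      beta: float = 0.1) -> np.ndarray:
    """Simplified pulse coupled neural network: returns per-pixel
    firing counts, usable as an activity measure for fusion rules."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    F = stim.astype(np.float64)          # feeding input = external stimulus
    Y = np.zeros_like(F)                 # pulse output of the previous step
    theta = np.ones_like(F)              # dynamic threshold
    fire_count = np.zeros_like(F)
    for _ in range(iterations):
        L = convolve(Y, kernel, mode='constant')  # linking from neighbors
        U = F * (1.0 + beta * L)                  # internal activity
        Y = (U > theta).astype(np.float64)        # neurons that fire now
        theta = np.exp(-alpha_theta) * theta + v_theta * Y  # threshold decay/reset
        fire_count += Y
    return fire_count
```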

  5. Color fusion of SAR and FLIR images using a natural color transfer technique

    Institute of Scientific and Technical Information of China (English)

    Shaoyuan Sun; Zhongliang Jing; Zhenhua Li; Gang Liu

    2005-01-01

    Fusion of synthetic aperture radar (SAR) and forward looking infrared (FLIR) images is an important subject for aerospace and sensor surveillance. This paper presents a scheme to achieve a natural color image based on the contours feature of SAR and the target region feature of FLIR so that the overall scene recognition and situational awareness can be improved. The SAR and FLIR images are first decomposed into steerable pyramids, and the contour maps in the SAR image and the region maps in the FLIR image are calculated. The contour and region features are fused at each level of the steerable pyramids. A color image is then formed by transferring daytime color to the monochromic image by using the natural color transfer technique. Experimental results show that the proposed method is effective in providing a color fusion of SAR and FLIR images.
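    The color transfer step is typically a per-channel statistics match against a daytime reference image. A minimal sketch, assuming float RGB images in [0, 1], is given below; Reinhard-style transfer normally operates in a decorrelated color space such as l-alpha-beta, which is skipped here for brevity.

```python
import numpy as np

def transfer_color(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift each channel of `source` to the mean/std of `reference`.
    Plain RGB is used here only to keep the sketch short; the original
    technique works in a decorrelated color space."""
    out = np.empty_like(source, dtype=np.float64)
    for c in range(3):
        src = source[..., c].astype(np.float64)
        ref = reference[..., c].astype(np.float64)
        out[..., c] = (src - src.mean()) / (src.std() + 1e-8) * ref.std() + ref.mean()
    return np.clip(out, 0.0, 1.0)
```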

  6. Multi-modal image fusion based on ROI and Laplacian Pyramid

    Science.gov (United States)

    Gao, Xiong; Zhang, Hong; Chen, Hao; Li, Jiafeng

    2015-03-01

    In this paper, we propose a region-of-interest-based fusion algorithm for infrared and visible images using the Laplacian pyramid method. Firstly, we estimate the saliency map of the infrared image and, by normalizing the saliency map, divide the infrared image into two parts: the regions of interest (RoI) and the regions of non-interest (nRoI). The visible image is also segmented into two parts using a Gaussian high-pass filter: the regions of high frequency (RoH) and the regions of low frequency (RoL). Secondly, we down-sample both the nRoI of the infrared image and the RoL of the visible image as the input of the next pyramid level. Finally, we use the normalized saliency map of the infrared image as the weighting coefficient to get the base image at the top level, and choose the maximum gray value of the RoI of the infrared image and the RoH of the visible image to get the detail image. In this way, our method keeps the target features of the infrared image and the texture detail of the visible image at the same time. Experimental results show that this fusion scheme performs better than other fusion algorithms in terms of both human visual perception and quantitative metrics.
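    A bare-bones version of the Laplacian pyramid machinery, with a max-absolute detail rule standing in for the paper's saliency-weighted rules, might look like this (scipy-based and illustrative only):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img, levels=4):
    """Decompose an image into band-pass (detail) levels plus a coarse residual."""
    pyramid, current = [], img.astype(np.float64)
    for _ in range(levels):
        low = gaussian_filter(current, sigma=1.0)
        down = low[::2, ::2]
        up = zoom(down, 2.0, order=1)[:current.shape[0], :current.shape[1]]
        pyramid.append(current - up)     # detail at this scale
        current = down
    pyramid.append(current)              # coarse residual
    return pyramid

def fuse_pyramids(pyr_a, pyr_b):
    """Max-absolute rule for detail levels, average for the residual."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pyr_a[:-1], pyr_b[:-1])]
    fused.append(0.5 * (pyr_a[-1] + pyr_b[-1]))
    return fused

def reconstruct(pyramid):
    img = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        img = zoom(img, 2.0, order=1)[:detail.shape[0], :detail.shape[1]] + detail
    return img
```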

  7. Gallium-67 scintigraphy in lymphoma: is there a benefit of image fusion with computed tomography?

    Energy Technology Data Exchange (ETDEWEB)

Chajari, M'Hammed; Chesnay, Eric; Batalla, Alain; Bardet, Stephane [Service de Medecine Nucleaire, Centre Francois Baclesse, Caen (France); Lacroix, Joelle [Service de Radiologie, Centre Francois Baclesse, Caen (France); Peny, Anne-Marie; Delcambre, Corinne; Genot, Jean-Yves; Fruchard, C. [Service d'Hematologie-Cancerologie, Centre Francois Baclesse, Caen (France); Henry-Amar, Michel [Service de Recherche Clinique, Centre Francois Baclesse, Caen (France)

    2002-03-01

    We investigated whether use of CT/⁶⁷Ga SPET fusion imaging could help in the interpretation of ⁶⁷Ga scintigraphy. From November 1999 to May 2001, 52 consecutive fusion studies were performed in 38 patients [22 patients with Hodgkin's disease (HD) and 16 patients with non-Hodgkin's lymphoma (NHL)] as part of pre-treatment staging (n=13), treatment evaluation (n=20) or evaluation of suspected recurrence (n=19). ⁶⁷Ga scintigraphy was carried out 2 and 6 days following the injection of 185-220 MBq ⁶⁷Ga citrate. On day 2, ⁶⁷Ga SPET and CT were performed, focussing on the chest and/or the abdomen/pelvis. Data from each imaging method were co-registered using external markers. ⁶⁷Ga scintigraphy and CT were initially interpreted independently by nuclear medicine physicians and radiologists. CT/⁶⁷Ga SPET fusion studies were then jointly interpreted and both practitioners indicated when fusion provided additional information in comparison with CT and ⁶⁷Ga SPET alone. Image fusion was considered to be of benefit in 12/52 (23%) studies, which were performed for initial staging (n=4), treatment evaluation (n=4) or evaluation of suspected recurrence (n=4). In these cases, image fusion allowed either confirmation and/or localisation of pathological gallium uptake (n=10) or detection of lesions not visible on CT scan (n=2). Fusion was relevant for discrimination between osseous lesions and lymph node involvement adjacent to bone, especially in the thoracic and lumbar spine and pelvis. In the abdomen and pelvis, fusion helped to differentiate physiological bowel elimination from abnormal uptake, and assisted in precisely locating uptake in neighbouring viscera of the left hypochondrium, including the spleen, left liver lobe, coeliac area, stomach wall and even the splenic flexure. At the thoracic level, fusion also proved useful for demonstrating clearly the relationships of abnormal foci to the pleura and hepatic dome.

  8. Solving the problem of imaging resolution: stochastic multi-scale image fusion

    Science.gov (United States)

    Karsanina, Marina; Mallants, Dirk; Gilyazetdinova, Dina; Gerke, Kiril

    2016-04-01

    rocks) and RFBR grant 15-34-20989 (data fusion). References:
    1. Karsanina, M.V., Gerke, K.M., Skvortsova, E.B., Mallants, D. Universal spatial correlation functions for describing and reconstructing soil microstructure. PLoS ONE 10(5): e0126515 (2015).
    2. Gerke, K.M., Karsanina, M.V., Mallants, D. Universal stochastic multiscale image fusion: an example application for shale rock. Scientific Reports 5: 15880 (2015).
    3. Gerke, K.M., Karsanina, M.V., Vasilyev, R.V., Mallants, D. Improving pattern reconstruction using correlation functions computed in directions. Europhys. Lett. 106(6), 66002 (2014).
    4. Gerke, K.M., Karsanina, M.V. Improving stochastic reconstructions by weighting correlation functions in an objective function. Europhys. Lett. 111, 56002 (2015).

  9. Helicobacter Pylori infection detection from gastric X-ray images based on feature fusion and decision fusion.

    Science.gov (United States)

    Ishihara, Kenta; Ogawa, Takahiro; Haseyama, Miki

    2017-05-01

    In this paper, a fully automatic method for detection of Helicobacter pylori (H. pylori) infection is presented with the aim of constructing a computer-aided diagnosis (CAD) system. In order to realize a CAD system with good performance for detection of H. pylori infection, we focus on the following characteristic of stomach X-ray examination. The accuracy of X-ray examination differs depending on the symptom of H. pylori infection that is focused on and the position from which X-ray images are taken. Therefore, doctors have to comprehensively assess the symptoms and positions. In order to introduce the idea of doctors' assessment into the CAD system, we newly propose a method for detection of H. pylori infection based on the combined use of feature fusion and decision fusion. As a feature fusion scheme, we adopt Multiple Kernel Learning (MKL). Since MKL can combine several features with determination of their weights, it can represent the differences in symptoms. By constructing an MKL classifier for each position, we can obtain several detection results. Furthermore, we introduce confidence-based decision fusion, which can consider the relationship between the classifier's performance and the detection results. Consequently, accurate detection of H. pylori infection becomes possible by the proposed method. Experimental results obtained by applying the proposed method to real X-ray images show that our method has good performance, close to the results of detection by specialists, and indicate that the realization of a CAD system for determining the risk of H. pylori infection is possible. Copyright © 2017 Elsevier Ltd. All rights reserved.
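    The feature-fusion idea of combining several kernels with weights can be sketched with scikit-learn's precomputed-kernel SVM. In genuine MKL the weights are learned jointly with the classifier; here they are fixed by hand, and all data and parameters are synthetic stand-ins.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_train, y_train = rng.random((80, 16)), rng.integers(0, 2, 80)
X_test = rng.random((20, 16))

# Fixed kernel weights for illustration; real MKL optimizes these.
weights = [0.6, 0.4]
K_train = (weights[0] * rbf_kernel(X_train, X_train, gamma=0.5)
           + weights[1] * polynomial_kernel(X_train, X_train, degree=2))
K_test = (weights[0] * rbf_kernel(X_test, X_train, gamma=0.5)
          + weights[1] * polynomial_kernel(X_test, X_train, degree=2))

clf = SVC(kernel='precomputed').fit(K_train, y_train)
scores = clf.decision_function(K_test)  # per-position scores for decision fusion
```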

  10. Image Restoration Using Functional and Anatomical Information Fusion with Application to SPECT-MRI Images

    Directory of Open Access Journals (Sweden)

    S. Benameur

    2009-01-01

    Full Text Available Image restoration is usually viewed as an ill-posed problem in image processing, since there is no unique solution associated with it. The quality of the restored image closely depends on the constraints imposed on the characteristics of the solution. In this paper, we propose an original extension of the NAS-RIF restoration technique that uses information fusion as prior information, with application to SPECT medical imaging. That extension allows the restoration process to be constrained by efficiently incorporating, within the NAS-RIF method, a regularization term which stabilizes the inverse solution. Our restoration method is constrained by anatomical information extracted from a high-resolution anatomical procedure such as magnetic resonance imaging (MRI). This structural anatomy-based regularization term uses the result of an unsupervised Markovian segmentation obtained after a preliminary registration step between the MRI and SPECT data volumes of each patient. The method was successfully tested on 30 pairs of brain MRI and SPECT acquisitions from different subjects and on Hoffman and Jaszczak SPECT phantoms. The experiments demonstrated that the method performs better, in terms of signal-to-noise ratio, than a classical supervised restoration approach using a Metz filter.

  11. Magnetic Resonance Imaging-Ultrasound Fusion-Guided Prostate Biopsy: Review of Technology, Techniques, and Outcomes.

    Science.gov (United States)

    Kongnyuy, Michael; George, Arvin K; Rastinehad, Ardeshir R; Pinto, Peter A

    2016-04-01

    Transrectal ultrasound (TRUS)-guided (12-14 core) systematic biopsy of the prostate is the recommended standard for patients with suspicion of prostate cancer (PCa). Advances in imaging have led to the application of magnetic resonance imaging (MRI) for the detection of PCa, with subsequent development of software-based co-registration allowing for the integration of MRI with real-time TRUS during prostate biopsy. A number of fusion-guided methods and platforms are now commercially available, with common elements in image analysis and planning. Implementation of fusion-guided prostate biopsy has now been proven to improve the detection of clinically significant PCa in appropriately selected patients.

  12. Processing and fusion of passively acquired, millimeter and terahertz images of the human body

    Science.gov (United States)

    Tian, Li; Shen, Yanchun; Jin, Weiqi; Zhao, Guozhong; Cai, Yi

    2017-04-01

    A passive, millimeter wave (MMW) and terahertz (THz) dual-band imaging system composed of 94 and 250 GHz single-element detectors was used to investigate preprocessing and fusion algorithms for dual-band images. Subsequently, an MMW and THz image preprocessing and fusion integrated algorithm (MMW-THz IPFIA) was developed. In the algorithm, a block-matching and three-dimensional filtering denoising algorithm is employed to filter noise, an adaptive histogram equalization algorithm to enhance images, an intensity-based registration algorithm to register images, and a wavelet-based image fusion algorithm to fuse the preprocessed images. The performance of the algorithm was analyzed by calculating the SNR and information entropy of the actual images. This algorithm effectively reduces the image noise and improves the level of detail in the images. Since the algorithm improves the performance of the investigated imaging system, it should support practical technological applications. Because the system responds to blackbody radiation, its improvement is quantified herein using the static performance parameter commonly employed for thermal imaging systems, namely, the minimum detectable temperature difference (MDTD). An experiment was conducted in which the system's MDTD was measured before and after applying the MMW-THz IPFIA, verifying the improved performance that can be realized through its application.
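    The wavelet fusion stage of such a pipeline is the most self-contained part. A plausible sketch using PyWavelets, with an average rule for the approximation band and a max-absolute rule for the detail subbands (the specific rules of the MMW-THz IPFIA are not spelled out in the abstract), is:

```python
import numpy as np
import pywt

def wavelet_fuse(img_a: np.ndarray, img_b: np.ndarray,
                 wavelet: str = 'db2', level: int = 3) -> np.ndarray:
    """Fuse two registered images: average the approximation band,
    keep the max-absolute detail coefficient in every subband."""
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]               # approximation band
    for da, db in zip(ca[1:], cb[1:]):            # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

# Toy usage with synthetic co-registered captures.
rng = np.random.default_rng(2)
out = wavelet_fuse(rng.random((128, 128)), rng.random((128, 128)))
```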

  13. A new image fusion technology based on object extraction and NSCT

    Science.gov (United States)

    Xing, Suxia; Liu, Peng

    In this effort, we propose a new image fusion technique, utilizing Renyi-entropy-based object extraction and the Non-Subsampled Contourlet Transform (NSCT), to improve the visual quality of the fused image. NSCT is a multiscale, shift-invariant, linear-phase, "true" two-dimensional transform that can decompose an image into directional sub-images so as to capture its intrinsic geometrical structure. In this paper we decompose the visible image into 2¹, 2² and 2³ directional sub-images at three decomposition levels, respectively. Image enhancement is performed on the decomposition coefficients before fusion. Renyi entropy is a generalized information entropy; the infrared image is divided into object and background by maximizing the Renyi entropy. Image fusion is then performed on the NSCT coefficients and the extracted object. The fused image has significantly improved brightness and higher contrast than the source images. In order to evaluate the proposed method, information entropy (IE), standard deviation (STD), spatial frequency (SF) and mutual information (MI) are adopted for comparison with Laplacian-, wavelet- and NSCT-based methods. Results show that all evaluation values of the proposed method are higher than those of the other methods, indicating a better image fusion method.
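    The Renyi-entropy thresholding used for object extraction can be sketched directly from the definition H_alpha = (1/(1-alpha)) ln(sum p_i^alpha): pick the gray level that maximizes the summed entropies of the background and object distributions. The alpha value below is an illustrative choice, not the paper's.

```python
import numpy as np

def renyi_threshold(img: np.ndarray, alpha: float = 2.0) -> int:
    """Gray level maximizing the sum of the Renyi entropies of the
    background and object distributions (Sahoo-style criterion)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = np.log(((p[:t] / w0) ** alpha).sum()) / (1.0 - alpha)
        h1 = np.log(((p[t:] / w1) ** alpha).sum()) / (1.0 - alpha)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# Toy usage: split a synthetic infrared frame into object and background.
rng = np.random.default_rng(0)
ir = rng.integers(0, 256, (64, 64), dtype=np.uint8)
mask = ir >= renyi_threshold(ir)
```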

  14. Medical Image Fusion Algorithm based on Local Average Energy-Motivated PCNN in NSCT Domain

    Directory of Open Access Journals (Sweden)

    Huda Ahmed

    2016-10-01

    Full Text Available Medical Image Fusion (MIF) can significantly improve the performance of medical diagnosis, treatment planning and image-guided surgery by providing high-quality, information-rich medical images. Traditional MIF techniques suffer from common drawbacks such as contrast reduction, edge blurring and image degradation. Pulse-Coupled Neural Network (PCNN)-based MIF techniques outperform the traditional methods in providing high-quality fused images owing to the network's global coupling and pulse synchronization properties; however, the selection of significant features to motivate the PCNN is still an open problem and plays a major role in measuring the contribution of each source image to the fused image. In this paper, a medical image fusion algorithm is proposed based on the Non-subsampled Contourlet Transform (NSCT) and the Pulse-Coupled Neural Network (PCNN) to fuse images from different modalities. Local average energy is used to motivate the PCNN because of its ability to capture salient features of the image such as edges, contours and textures. The proposed approach produces a high-quality fused image with high contrast and improved content in comparison with other image fusion techniques, without loss of significant detail at either the visual or the quantitative level.

  15. Analysis of Spectral Characteristics Based on Optical Remote Sensing and SAR Image Fusion

    Institute of Scientific and Technical Information of China (English)

    Weiguo LI; Nan JIANG; Guangxiu GE

    2014-01-01

    Because of cloudy and rainy weather in south China, optical remote sensing images often cannot be obtained easily. Using the regional trial results in Baoying, Jiangsu province, this paper explored the fusion model and effect of ENVISAT/SAR and HJ-1A satellite multispectral remote sensing images. Based on the ARSIS strategy, using the wavelet transform and the Interaction between the Band Structure Model (IBSM), the research performed wavelet decomposition and low/high-frequency coefficient reconstruction of the ENVISAT satellite SAR and the HJ-1A satellite CCD images, and obtained the fused images through the inverse wavelet transform. Since the low- and high-frequency images have different characteristics in different areas, different self-adaptive fusion rules that enhance the integration process were adopted, with comparisons against the PCA transform, IHS transform and other traditional methods by subjective and corresponding quantitative evaluation. Furthermore, the research extracted the bands and NDVI values around the fusion with GPS samples, and analyzed and explained the fusion effect. The results showed that the spectral distortion of the wavelet fusion, IHS transform and PCA transform images was 0.1016, 0.3261 and 1.2772, respectively, and the entropy was 14.7015, 11.8993 and 13.2293, respectively; the wavelet fusion achieved the lowest distortion and the highest entropy. The wavelet method maintained good spectral fidelity and visual quality while improving the spatial resolution, and its information interpretation effect was much better than that of the other two methods.

  16. Uni-modal versus joint segmentation for region-based image fusion

    NARCIS (Netherlands)

    Lewis, J.J.; Nikolov, S.G.; Canagarajah, C.N.; Bull, D.R.; Toet, A.

    2006-01-01

    A number of segmentation techniques are compared with regard to their usefulness for region-based image and video fusion. In order to achieve this, a new multi-sensor data set is introduced containing a variety of infra-red, visible and pixel-fused images together with manually produced 'ground truth' segmentations.

  17. Objective color harmony assessment for visible and infrared color fusion images of typical scenes

    Science.gov (United States)

    Gao, Shaoshu; Jin, Weiqi; Wang, Lingxue

    2012-11-01

    For visible and infrared color fusion images of three typical scenes, color harmony computational models are proposed to evaluate the color quality of fusion images without reference images. The models are established based on the color-combination harmony model and focus on the influence of the color characteristics of typical scenes and of the color region sizes in the fusion image. To account for the color characteristics of typical scenes, color harmony adjusting factors for natural scene images (green plants, sea, and sky) are defined by measuring the similarity between image colors and the corresponding memory colors, and those for town and building images are based on the optimum colorfulness range suited to human observers. Simultaneously, considering the influence of color region sizes, weight coefficients based on the areas of the color regions are established to optimize the color harmony model. Experimental results show that the proposed harmony models are consistent with human perception and are suitable for evaluating the color harmony of color fusion images of typical scenes.

  19. Information fusion in signal and image processing major probabilistic and non-probabilistic numerical approaches

    CERN Document Server

    Bloch, Isabelle

    2010-01-01

    The area of information fusion has grown considerably during the last few years, leading to a rapid and impressive evolution. In such fast-moving times, it is important to take stock of the changes that have occurred. As such, this book offers an overview of the general principles and specificities of information fusion in signal and image processing, as well as covering the main numerical methods (probabilistic approaches, fuzzy sets and possibility theory, and belief functions).

  20. Image Fusion Based on Nonsubsampled Contourlet Transform and Saliency-Motivated Pulse Coupled Neural Networks

    OpenAIRE

    Liang Xu; Junping Du; Qingping Li

    2013-01-01

    In the nonsubsampled contourlet transform (NSCT) domain, a novel image fusion algorithm based on the visual attention model and pulse coupled neural networks (PCNNs) is proposed. For the fusion of high-pass subbands in NSCT domain, a saliency-motivated PCNN model is proposed. The main idea is that high-pass subband coefficients are combined with their visual saliency maps as input to motivate PCNN. Coefficients with large firing times are employed as the fused high-pass subband coefficients. ...

  1. Predictive depth coding of wavelet transformed images

    Science.gov (United States)

    Lehtinen, Joonas

    1999-10-01

    In this paper, a new prediction-based method, predictive depth coding, for lossy wavelet image compression is presented. It compresses a wavelet pyramid decomposition by predicting the number of significant bits in each wavelet coefficient quantized by universal scalar quantization and then coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers spatial neighbors of the coefficient to be predicted and the corresponding coefficients at the lower scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested with a standard set of images and the results are compared with SFQ, SPIHT, EZW and context-based algorithms. Even though the algorithm is very simple and does not require any extra memory, the compression results are relatively good.

  2. Multiple image sensor data fusion through artificial neural networks

    Science.gov (United States)

    With multisensor data fusion technology, the data from multiple sensors are fused in order to make a more accurate estimation of the environment through measurement, processing and analysis. Artificial neural networks are the computational models that mimic biological neural networks. With high per...

  3. The 'Lumbar Fusion Outcome Score' (LUFOS): a new practical and surgically oriented grading system for preoperative prediction of surgical outcomes after lumbar spinal fusion in patients with degenerative disc disease and refractory chronic axial low back pain.

    Science.gov (United States)

    Mattei, Tobias A; Rehman, Azeem A; Teles, Alisson R; Aldag, Jean C; Dinh, Dzung H; McCall, Todd D

    2017-01-01

    In order to evaluate the predictive effect of non-invasive preoperative imaging methods on surgical outcomes of lumbar fusion for patients with degenerative disc disease (DDD) and refractory chronic axial low back pain (LBP), the authors conducted a retrospective review of 45 patients with DDD and refractory LBP submitted to anterior lumbar interbody fusion (ALIF) at a single center from 2007 to 2010. Surgical outcomes - as measured by Visual Analog Scale (VAS/back pain) and Oswestry Disability Index (ODI) - were evaluated pre-operatively and at 6 weeks, 3 months, 6 months, and 1 year post-operatively. Linear mixed-effects models were generated in order to identify possible preoperative imaging characteristics (including bone scan/99mTc scintigraphy increased endplate uptake, Modic endplate changes, and disc degeneration graded according to the Pfirrmann classification) which may be predictive of long-term surgical outcomes. After controlling for confounders, a combined score, the Lumbar Fusion Outcome Score (LUFOS), was developed. The LUFOS grading system was able to stratify patients into two general groups (Non-surgical: LUFOS 0 and 1; Surgical: LUFOS 2 and 3) that presented significantly different surgical outcomes in terms of estimated marginal means of VAS/back pain (p = 0.001) and ODI (p = 0.006) beginning at 3 months and continuing up to 1 year of follow-up. In conclusion, LUFOS has been devised as a new practical and surgically oriented grading system based on simple key parameters from non-invasive preoperative imaging exams (magnetic resonance imaging/MRI and bone scan/99mTc scintigraphy) which has been shown to be highly predictive of surgical outcomes of patients undergoing lumbar fusion for treatment of refractory chronic axial LBP.

  4. Precision Imaging: more descriptive, predictive and integrative imaging.

    Science.gov (United States)

    Frangi, Alejandro F; Taylor, Zeike A; Gooya, Ali

    2016-10-01

    Medical image analysis has grown into a matured field challenged by progress made across all medical imaging technologies and more recent breakthroughs in biological imaging. The cross-fertilisation between medical image analysis, biomedical imaging physics and technology, and domain knowledge from medicine and biology has spurred a truly interdisciplinary effort that stretched outside the original boundaries of the disciplines that gave birth to this field and created stimulating and enriching synergies. Consideration of how the field has evolved, together with the experience of the work carried out over the last 15 years in our centre, has led us to envision a future emphasis of medical imaging in Precision Imaging. Precision Imaging is not a new discipline but rather a distinct emphasis in medical imaging born at the crossroads between, and unifying the efforts behind, mechanistic and phenomenological model-based imaging. It captures three main directions in the effort to deal with the information deluge in imaging sciences, and thus achieve wisdom from data, information, and knowledge. Precision Imaging is finally characterised by being descriptive, predictive and integrative about the imaged object. This paper provides a brief and personal perspective on how the field has evolved, summarises and formalises our vision of Precision Imaging for Precision Medicine, and highlights some connections with past research and current trends in the field.

  5. MR and CT image fusion of the cervical spine: a noninvasive alternative to CT-myelography

    Science.gov (United States)

    Hu, Yangqiu; Mirza, Sohail K.; Jarvik, Jeffrey G.; Heagerty, Patrick J.; Haynor, David R.

    2005-04-01

    CT-Myelography (CTM) is routinely used for planning surgery for degenerative disease of the spine, but its invasive nature, significant potential morbidity, and high costs make a noninvasive substitute desirable. We report our work on evaluating CT and MR image fusion as an alternative to CTM. Because the spine is only piecewise rigid, a multi-rigid approach to the registration of spinal CT and MR images was developed (SPIE 2004), in which the spine on CT images is first segmented into separate vertebrae, each of which is then rigidly registered with the corresponding vertebra on MR images. The results are then blended to obtain fusion images. Since they contain information from both modalities, we hypothesized that fusion images would be equivalent to CTM. To test this we selected 34 patients who had undergone MRI and CTM for degenerative disease of the cervical spine, and used the multi-rigid approach to produce fused images. A clinical vignette for each patient was created and presented along with either CT/MR fusion images or CTM images. A group of spine surgeons was asked to formulate detailed surgical plans based on each set of images, and the surgical plans were compared. A similar study assessing diagnostic agreement is being performed with neuroradiologists, who also assess the accuracy of registration. Our work to date has demonstrated the feasibility of segmentation and multi-rigid fusion in clinical cases and the acceptability of the questionnaire to physicians. Preliminary analysis of one surgeon's and one neuroradiologist's evaluations has been performed.

  6. An infrared-visible image fusion scheme based on NSCT and compressed sensing

    Science.gov (United States)

    Zhang, Qiong; Maldague, Xavier

    2015-05-01

    Image fusion, currently a research hotspot in the field of infrared computer vision, has been developed using a variety of methods. Traditional image fusion algorithms tend to bring problems such as data storage shortage and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and still reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, the adaptive regional energy weighting rule is utilized, so only the high-frequency coefficients are specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Eventually, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of the experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets and extracts more useful information.

  7. Application of Multimodality Imaging Fusion Technology in Diagnosis and Treatment of Malignant Tumors under the Precision Medicine Plan.

    Science.gov (United States)

    Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying

    2016-12-20

    The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different imaging modalities can be integrated and comprehensively analyzed by an imaging fusion system. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduced several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords of "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice, reviews, and other relevant literature published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected. Duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, such as accurate localization, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems can provide more imaging information on tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.

  8. Multi-spectral image fusion method based on two channels non-separable wavelets

    Institute of Scientific and Technical Information of China (English)

    LIU Bin; PENG JiaXiong

    2008-01-01

    A construction method for a two-channel non-separable wavelet filter bank whose dilation matrix is [1, 1; 1, -1], and its application to the fusion of multi-spectral images, are presented. Many 4×4 filter banks are designed. A multi-spectral image fusion algorithm based on this kind of wavelet is proposed. Using this filter bank, multi-resolution wavelet decomposition of the intensity of the multi-spectral image and of the panchromatic image is performed, and the two low-frequency components of the intensity and the panchromatic image are merged using a tradeoff parameter. The experimental results show that this method preserves spectral quality and high spatial resolution information well; its performance in these respects is better than that of the fusion method based on DWFT and IHS. When the parameter t is close to 1, the fused image obtains rich spectral information from the original MS image. The amount of computation is reduced to only half that of the fusion method based on a four-channel wavelet transform.

  9. Remote sensing image fusion based on Gaussian mixture model and multiresolution analysis

    Science.gov (United States)

    Xiao, Moyan; He, Zhibiao

    2013-10-01

    A novel image fusion algorithm based on region segmentation and multiresolution analysis (MRA) is proposed to make full use of the advantages of different multiscale transforms. The nonsubsampled contourlet transform (NSCT) processes edges better than the wavelet transform does, while the wavelet transform handles smooth areas and singularities better than NSCT does. As an image often includes more than one feature, the proposed method is conducted on the basis of Gaussian mixture model (GMM) based region segmentation. Firstly, the multispectral (MS) image is transformed into intensity, hue and saturation components. Secondly, the intensity component is segmented into dense-contour and smooth regions according to the GMM and NSCT. A new intensity component is then gained by fusing the intensity component and the high-resolution image, with à trous wavelet transform (ATWT) fusion in smooth areas and NSCT fusion in dense-contour areas. Finally, the new intensity, together with the hue and saturation components, is transformed back into RGB space to obtain the fused image. Multisource remote sensing images are tested to assess the proposed algorithm. Visual evaluation and statistical analysis are employed to evaluate the quality of the fused images of the different methods. The proposed algorithm demonstrates excellent spectral information and high resolution. Experimental results show that the new fusion algorithm, incorporating region segmentation based on an improved GMM and MRA, outperforms algorithms based on a single multiscale transform.

  10. Predicting coal ash fusion temperature based on its chemical composition using ACO-BP neural network

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y.P.; Wu, M.G.; Qian, J.X. [Institute of Industrial Control Technology, College of Info Science and Engineering, Zhejiang University, Hangzhou 310027 (China)

    2007-02-15

    Coal ash fusion temperature is important to boiler designers and operators of power plants. Fusion temperature is determined by the chemical composition of the coal ash; however, their relationship is not precisely known. A novel neural network, the ACO-BP neural network, is used to model coal ash fusion temperature based on its chemical composition. Ant colony optimization (ACO) is an ecological system algorithm, which draws its inspiration from the foraging behavior of real ants. A three-layer network is designed with 10 hidden nodes. The oxide contents constitute the inputs of the network and the fusion temperature is the output. Data on 80 typical Chinese coal ash samples were used for training and testing. Results show that the ACO-BP neural network can obtain better performance compared with empirical formulas and a BP neural network. The well-trained neural network can be used as a tool to predict coal ash fusion temperature from the oxide contents of the coal ash. (author)
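    Leaving aside the ACO weight search, the underlying model (a three-layer network with 10 hidden nodes mapping oxide contents to fusion temperature) can be sketched with scikit-learn. Standard backpropagation (Adam) stands in for ACO here, and the data are synthetic placeholders for the 80 coal ash samples.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 7 oxide contents (e.g. SiO2, Al2O3, Fe2O3, CaO,
# MgO, K2O, Na2O) mapped to a softening temperature in deg C.
rng = np.random.default_rng(42)
X = rng.random((80, 7))
y = 1200.0 + 300.0 * X[:, 0] - 200.0 * X[:, 3] + 20.0 * rng.standard_normal(80)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]))  # predicted fusion temperatures for 3 samples
```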

  11. Function and Phenotype prediction through Data and Knowledge Fusion

    KAUST Repository

    Vespoor, Karen

    2016-01-27

    The biomedical literature captures the most current biomedical knowledge and is a tremendously rich resource for research. With over 24 million publications currently indexed in the US National Library of Medicine’s PubMed index, however, it is becoming increasingly challenging for biomedical researchers to keep up with this literature. Automated strategies for extracting information from it are required. Large-scale processing of the literature enables direct biomedical knowledge discovery. In this presentation, I will introduce the use of text mining techniques to support analysis of biological data sets, and will specifically discuss applications in protein function and phenotype prediction, as well as analysis of genetic variants that are supported by analysis of the literature and integration with complementary structured resources.

  12. Reconstruction of quasi-monochromatic images from a multiple monochromatic x-ray imaging diagnostic for inertial confinement fusion

    Energy Technology Data Exchange (ETDEWEB)

    Izumi, N; Turner, R; Barbee, T; Koch, J; Welser, L; Mansini, R

    2004-04-15

    We have developed a software package for image reconstruction for a multiple monochromatic x-ray imaging diagnostic (MMI) used to diagnose inertial confinement fusion capsules. The MMI consists of a pinhole array, a multi-layer Bragg mirror, and a charge injection device (CID) image detector. The pinhole array projects ~500 sub-images onto the CID after reflection off the multi-layer Bragg mirror. The obtained raw images have continuum spectral dispersion along their vertical axis. For systematic analysis, a computer-aided reconstruction of the quasi-monochromatic image is essential.

  13. Anatomical-functional image fusion by information of interest in local Laplacian filtering domain.

    Science.gov (United States)

    Du, Jiao; Li, Weisheng; Xiao, Bin

    2017-08-25

    A novel method for performing anatomical (MRI)-functional (PET or SPECT) image fusion is presented. The method merges specific feature information from input image signals of a single or multiple medical imaging modalities into a single fused image while preserving more information and generating less distortion. The proposed method uses a local Laplacian filtering based technique realized through a novel multi-scale system architecture. Firstly, the input images are generated in a multi-scale image representation and are processed using local Laplacian filtering. Secondly, at each scale, the decomposed images are combined to produce fused approximate images using a local energy maximum scheme and produce the fused residual images using an information of interest-based scheme. Finally, a fused image is obtained using a reconstruction process that is analogous to that of conventional Laplacian pyramid transform. Experimental results computed using individual multi-scale analysis-based decomposition schemes or fusion rules clearly demonstrate the superiority of the proposed method through subjective observation as well as objective metrics. Furthermore, the proposed method can obtain better performance, compared to the state-of-the-art fusion methods.

  14. Two-Dimensional Image Fusion of Planar Bone Scintigraphy and Radiographs in Patients with Clinical Scaphoid Fracture: An Imaging Study

    Energy Technology Data Exchange (ETDEWEB)

    Henriksen, O.M.; Lonsdale, M.N.; Jensen, T.D.; Weikop, K.L.; Holm, O.; Duus, B.; Friberg, L. (Dept. of Clinical Physiology/Nuclear Medicine, Glostrup Hospital, Glostrup (Denmark))

    2009-01-15

    Background: Although magnetic resonance imaging (MRI) is now considered the gold standard in second-line imaging of patients with suspected scaphoid fracture and negative radiographs, bone scintigraphy can be used in patients with pacemakers, metallic implants, or other contraindications to MRI. Bone scintigraphy is highly sensitive for the detection of fractures, but exact localization of scintigraphic lesions may be difficult and can negatively affect diagnostic accuracy. Purpose: To investigate the influence of image fusion of planar bone scintigraphy and radiographs on image interpretation in patients with suspected scaphoid fracture. Material and Methods: In 24 consecutive patients with suspected scaphoid fracture, a standard planar bone scintigraphy of both hands was supplemented with fusion imaging of the injured wrist. Standard and fusion images were evaluated independently by three experienced nuclear medicine physicians. In addition to the diagnosis, the degree of diagnostic confidence was scored in each case. Results: The addition of fusion images changed the interpretation of each of the three observers in seven, four, and two cases, respectively, reducing the number of positive interpretations of two of the observers from 11 and nine cases to six and seven cases, respectively. The degree of diagnostic confidence increased significantly in two observers, and interobserver agreement increased in all three pairs of observers from 0.83, 0.57, and 0.73 to 0.89, 0.8, and 0.9, respectively. Conclusion: Image fusion of planar bone scintigrams and radiographs has a significant influence on image interpretation and increases both diagnostic confidence and interobserver agreement.

  15. Artificial intelligence (AI)-based relational matching and multimodal medical image fusion: generalized 3D approaches

    Science.gov (United States)

    Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.

    1994-09-01

    A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms--in particular, knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a `goodness' of matching function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to regional centers of gravity, images are ready for fusion and visualization into a single 3D image of higher clarity.

  16. Multi-focus image fusion and robust encryption algorithm based on compressive sensing

    Science.gov (United States)

    Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong

    2017-06-01

    Multi-focus image fusion schemes have been studied in recent years. However, little work has been done in multi-focus image transmission security. This paper proposes a scheme that can reduce data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition can generate complete scene images and optimize the perception of the human eye. The fused images are sparsely represented with DCT and sampled with structurally random matrix (SRM), which reduces the data volume and realizes the initial encryption. Then the obtained measurements are further encrypted to resist noise and crop attack through combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.

  17. First downscattered neutron images from Inertial Confinement Fusion experiments at the National Ignition Facility

    Directory of Open Access Journals (Sweden)

    Guler Nevzat

    2013-11-01

    Full Text Available Inertial Confinement Fusion experiments at the National Ignition Facility (NIF) are designed to understand and test the basic principles of self-sustaining fusion reactions by laser-driven compression of deuterium-tritium (DT) filled cryogenic plastic (CH) capsules. The experimental campaign is ongoing to tune the implosions and characterize the burning plasma conditions. Nuclear diagnostics play an important role in measuring the characteristics of these burning plasmas, providing feedback to improve the implosion dynamics. The Neutron Imaging (NI) diagnostic provides information on the distribution of the central fusion reaction region and the surrounding DT fuel by collecting images in two different energy bands, for primary (13-15 MeV) and downscattered (10-12 MeV) neutrons. From these distributions, the final shape and size of the compressed capsule can be estimated and the symmetry of the compression can be inferred. The first downscattered neutron images from imploding ICF capsules are shown in this paper.

  18. The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density.

    Science.gov (United States)

    Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie

    2015-01-01

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD), and the similarity of two subbands is accurately computed by the Jensen-Shannon divergence of the two GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine the subbands at the various frequencies: the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, and the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices.
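    The two statistical ingredients, GGD fitting and the Jensen-Shannon divergence between two fitted GGDs, can be sketched with scipy's gennorm distribution (one common parameterization of the generalized Gaussian); the numerical grid integration below is an illustrative shortcut, not the authors' exact computation.

```python
import numpy as np
from scipy.stats import gennorm

def js_divergence_ggd(x: np.ndarray, y: np.ndarray, grid_pts: int = 2001) -> float:
    """Fit a zero-mean GGD to each coefficient set, then compute the
    Jensen-Shannon divergence between the fitted densities numerically."""
    bx, locx, sx = gennorm.fit(x, floc=0.0)   # shape, loc, scale
    by, locy, sy = gennorm.fit(y, floc=0.0)
    t = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_pts)
    p = gennorm.pdf(t, bx, locx, sx)
    q = gennorm.pdf(t, by, locy, sy)
    p, q = p / np.trapz(p, t), q / np.trapz(q, t)  # renormalize on the grid
    m = 0.5 * (p + q)

    def kl(a, b):
        nz = (a > 0) & (b > 0)
        return np.trapz(a[nz] * np.log(a[nz] / b[nz]), t[nz])

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy usage on two synthetic coefficient populations.
rng = np.random.default_rng(0)
print(js_divergence_ggd(rng.standard_normal(5000), rng.laplace(size=5000)))
```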

  19. The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    Directory of Open Access Journals (Sweden)

    Guocheng Yang

    2015-01-01

    Full Text Available We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD), and the similarity of two subbands is accurately computed by the Jensen-Shannon divergence of the two GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine the subbands at the various frequencies: the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, and the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices.

  20. Fast Fusion of Multi-Band Images Based on Solving a Sylvester Equation.

    Science.gov (United States)

    Wei, Qi; Dobigeon, Nicolas; Tourneret, Jean-Yves

    2015-11-01

    This paper proposes a fast multi-band image fusion algorithm, which combines a high-spatial, low-spectral resolution image and a low-spatial, high-spectral resolution image. The widely accepted forward model is used to form the likelihoods of the observations. Maximizing the likelihoods leads to solving a Sylvester equation. By exploiting the properties of the circulant and downsampling matrices associated with the fusion problem, a closed-form solution of the corresponding Sylvester equation is obtained explicitly, avoiding any iterative update step. Coupled with the alternating direction method of multipliers and the block coordinate descent method, the proposed algorithm can be easily generalized to incorporate prior information for the fusion problem, allowing a Bayesian estimator. Simulation results show that the proposed algorithm achieves the same performance as existing algorithms while significantly decreasing their computational complexity.
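    The computational core, solving a Sylvester equation AX + XB = Q in closed form rather than iteratively, is available off the shelf; the matrices below are random stand-ins for the blurring/downsampling operators of the actual forward model.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Illustrative stand-in matrices; in the paper, A and B are derived
# from the blurring and downsampling operators of the observation model.
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((8, 8))
Q = rng.standard_normal((6, 8))

X = solve_sylvester(A, B, Q)          # solves A X + X B = Q
print(np.allclose(A @ X + X @ B, Q))  # residual check -> True
```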

  1. Assessment of ion kinetic effects in shock-driven inertial confinement fusion implosions using fusion burn imaging

    Energy Technology Data Exchange (ETDEWEB)

    Rosenberg, M. J., E-mail: mros@lle.rochester.edu; Séguin, F. H.; Rinderknecht, H. G.; Zylstra, A. B.; Li, C. K.; Sio, H.; Johnson, M. Gatu; Frenje, J. A.; Petrasso, R. D. [Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Amendt, P. A.; Wilks, S. C.; Pino, J. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Atzeni, S. [Dipartimento SBAI, Università di Roma “La Sapienza” and CNISM, Via A. Scarpa 14-16, I-00161 Roma (Italy); Hoffman, N. M.; Kagan, G.; Molvig, K. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Glebov, V. Yu.; Stoeckl, C.; Seka, W.; Marshall, F. J. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States); and others

    2015-06-15

    The significance and nature of ion kinetic effects in D³He-filled, shock-driven inertial confinement fusion implosions are assessed through measurements of fusion burn profiles. Over this series of experiments, the ratio of ion-ion mean free path to minimum shell radius (the Knudsen number, N_K) was varied from 0.3 to 9 in order to probe hydrodynamic-like to strongly kinetic plasma conditions; as the Knudsen number increased, hydrodynamic models increasingly failed to match measured yields, while an empirically-tuned, first-step model of ion kinetic effects better captured the observed yield trends [Rosenberg et al., Phys. Rev. Lett. 112, 185001 (2014)]. Here, spatially resolved measurements of the fusion burn are used to examine kinetic ion transport effects in greater detail, adding an additional dimension of understanding that goes beyond zero-dimensional integrated quantities to one-dimensional profiles. In agreement with the previous findings, a comparison of measured and simulated burn profiles shows that models including ion transport effects are able to better match the experimental results. In implosions characterized by large Knudsen numbers (N_K ∼ 3), the fusion burn profiles predicted by hydrodynamics simulations that exclude ion mean free path effects are peaked far from the origin, in stark disagreement with the experimentally observed profiles, which are centrally peaked. In contrast, a hydrodynamics simulation that includes a model of ion diffusion is able to qualitatively match the measured profile shapes. Therefore, ion diffusion or diffusion-like processes are identified as a plausible explanation of the observed trends, though further refinement of the models is needed for a more complete and quantitative understanding of ion kinetic effects.

  2. Two Levels Fusion Decision for Multispectral Image Pattern Recognition

    Science.gov (United States)

    Elmannai, H.; Loghmari, M. A.; Naceur, M. S.

    2015-10-01

    A major goal of multispectral data analysis is land cover classification and related applications. The dimensionality drawback leads to a small ratio of remote sensing training data to the number of features, so robust methods are needed to overcome the curse of dimensionality. The presented work proposes a pattern recognition approach in which source separation, feature extraction and decisional fusion are the main stages of an automatic pattern recognizer. The first stage is pre-processing, based on nonlinear source separation; the mixing process is considered nonlinear with Gaussian distributions. The second stage performs feature extraction with the Gabor, wavelet and curvelet transforms, providing an efficient feature description for machine vision tasks. The third stage is decisional fusion, performed in two steps: the first step assigns the best feature to each source/pattern using the accuracy matrix obtained from the learning data set, and the second step is a source majority vote. Classification is performed by a Support Vector Machine. Experimental results show that the proposed fusion method enhances classification accuracy and provides a powerful tool for pattern recognition.

  3. Predicting operative blood loss during spinal fusion for adolescent idiopathic scoliosis.

    Science.gov (United States)

    Ialenti, Marc N; Lonner, Baron S; Verma, Kushagra; Dean, Laura; Valdevit, Antonio; Errico, Thomas

    2013-06-01

    Patient and surgical factors are known to influence operative blood loss in spinal fusion for adolescent idiopathic scoliosis (AIS), but have only been loosely identified. To date, there are no established recommendations to guide decisions to predonate autologous blood, and current practice is based primarily on surgeon preference. This study is designed to determine which patient and surgical factors are correlated with, and predictive of, blood loss during spinal fusion for AIS. Retrospective analysis of 340 (81 males, 259 females; mean age, 15.2 y) consecutive AIS patients treated by a single surgeon from 2000 to 2008. Demographic (sex, age, height, weight, and associated comorbidities), laboratory (hematocrit, platelet, PT/PTT/INR), standard radiographic, and perioperative data including complications were analyzed with a linear stepwise regression to develop a predictive model of blood loss. Estimated blood loss was 907±775 mL for posterior spinal fusion (PSF, n=188), 323±171 mL for anterior spinal fusion (ASF, n=124), and 1277±821 mL for combined procedures (n=28). For patients undergoing PSF, stepwise analysis identified sex, preoperative kyphosis, and operative time as the most important predictors of increased blood loss, yielding the following model of blood loss in PSF: blood loss (mL) = C + 6.4 × operative time (min) − 8.7 × preoperative T2-T12 kyphosis (degrees), where C = 233 if male and −270 if female. We find sex, operative time, and preoperative kyphosis to be the most important predictors of increased blood loss in PSF for AIS. Mean arterial pressure and operative time were predictive of estimated blood loss in ASF. For posterior fusions, we also present a model that estimates blood loss preoperatively and can be used to guide decisions regarding predonation of blood and the use of antifibrinolytic agents. Retrospective study: Level II.
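    As a worked example of the quoted regression, the following helper evaluates the PSF model exactly as stated in the abstract; it is a transcription for illustration, not a validated clinical tool.

```python
def predicted_blood_loss_psf(op_time_min: float, kyphosis_deg: float,
                             male: bool) -> float:
    """Regression model quoted in the abstract for posterior spinal fusion:
    EBL (mL) = C + 6.4 * operative time (min) - 8.7 * T2-T12 kyphosis (deg),
    with C = 233 for males and -270 for females."""
    c = 233.0 if male else -270.0
    return c + 6.4 * op_time_min - 8.7 * kyphosis_deg

# e.g. a 240-minute posterior fusion in a male with 30 deg of kyphosis:
print(predicted_blood_loss_psf(240, 30, male=True))  # -> 1508.0 mL
```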

  4. Enhanced Object Detection via Fusion With Prior Beliefs from Image Classification

    OpenAIRE

    Cao, Yilun; Lee, Hyungtae; Kwon, Heesung

    2016-01-01

    In this paper, we introduce a novel fusion method that can enhance object detection performance by fusing decisions from two different types of computer vision tasks: object detection and image classification. In the proposed work, the class label of an image obtained from image classification is viewed as prior knowledge about existence or non-existence of certain objects. The prior knowledge is then fused with the decisions of object detection to improve detection accuracy by mitigating fal...
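
    One simple way to realize this idea (a hedged sketch, not necessarily the authors' exact fusion rule) is to scale each candidate box's per-class detection score by the image-level classifier's belief that the class is present:

        import numpy as np

        def fuse_with_prior(det_scores, class_prob):
            """det_scores: (n_boxes, n_classes) detector confidences;
            class_prob: (n_classes,) image-level classification posterior
            used as a prior on object existence."""
            return det_scores * class_prob[None, :]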

  5. An Automatic Registration-Fusion Scheme Based on Similarity Measures: An Application to Dental Imaging

    Science.gov (United States)

    2007-11-02

    This report presents an automatic registration-fusion scheme based on the calculation of similarity measures between two dental radiographic images to be registered. A fusion process has been developed to combine the registered images so that the specialist can perform quantitative comparisons concerning the evolution of abnormalities (cysts, tooth decay, etc.) or healing processes, as well as assess the progression of pathological conditions.

  6. Environmental impact prediction using remote sensing images

    Institute of Scientific and Technical Information of China (English)

    Pezhman ROUDGARMI; Masoud MONAVARI; Jahangir FEGHHI; Jafar NOURI; Nematollah KHORASANI

    2008-01-01

    Environmental impact prediction is an important step in many environmental studies, and a wide variety of methods have been developed for it. In this study, remote sensing images were used for environmental impact prediction in the Robatkarim area, Iran, during the years 2005-2007, on the assumption that environmental impact can be predicted from time series of satellite imagery. Natural vegetation cover was chosen as the main environmental element and case study. Environmental impacts of regional development on the natural vegetation of the area were investigated by considering changes in the extent of natural vegetation cover and the amount of biomass. Vegetation data and land use/land cover classes (as activity factors) for several years were prepared from satellite images. The amount of biomass was estimated with the Soil-Adjusted Vegetation Index (SAVI) and the Normalized Difference Vegetation Index (NDVI), computed as below. The resulting biomass estimates were tested with the paired-samples t-test; no significant difference was observed between the average estimated biomass and the control samples at the 5% significance level. Finally, regression models were used for environmental impact prediction. All regression models obtained for predicting impacts on natural vegetation cover show values over 0.9 for both the correlation coefficient and R-squared. Following this methodology, prediction models for the impacts of projects and plans can also be developed for other environmental elements derived from time series remote sensing images.
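
    The two vegetation indices used above are standard. A minimal computation (Python/NumPy; the bands are assumed to be co-registered float reflectance arrays):

        import numpy as np

        def ndvi(nir, red, eps=1e-9):
            return (nir - red) / (nir + red + eps)

        def savi(nir, red, L=0.5, eps=1e-9):
            # L is the canopy background adjustment factor (0.5 is typical)
            return (nir - red) * (1.0 + L) / (nir + red + L + eps)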

  7. A novel super-resolution image fusion algorithm based on improved PCNN and wavelet transform

    Science.gov (United States)

    Liu, Na; Gao, Kun; Song, Yajun; Ni, Guoqiang

    2009-10-01

    Super-resolution reconstruction explores new information among a series of under-sampled images of the same scene to produce a high-resolution picture through sub-pixel-level image fusion. Traditional super-resolution fusion methods for sub-sampled images require motion estimation and motion interpolation and construct a multi-resolution pyramid to obtain high resolution, yet the role of human visual characteristics is ignored. In this paper, a novel resolution reconstruction for under-sampled images of a static scene based on the human vision model is considered by introducing the PCNN (Pulse Coupled Neural Network) model, simplifying and improving its input model, internal behavior, and control parameter selection. The proposed PCNN-wavelet super-resolution image fusion algorithm is aimed at down-sampled image series of a static scene. While keeping the original features, we introduce a Relief Filter (RF) into the control and judgment stage to effectively overcome the effect of random factors (such as noise) and to highlight the object of interest through the fusion. Numerical simulations show that the new algorithm performs better in retaining details and keeping high resolution.
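
    For readers unfamiliar with the PCNN, the following is a minimal simplified iteration (Python/NumPy; the parameter values are illustrative assumptions, and the paper's improved input model and RF-based judgment stage are not reproduced):

        import numpy as np
        from scipy.ndimage import convolve

        def pcnn_firing_times(img, beta=0.2, alpha_theta=0.2,
                              v_theta=20.0, iters=30):
            """Return the first firing time of each neuron for a
            float image normalized to [0, 1]."""
            w = np.array([[0.5, 1.0, 0.5],
                          [1.0, 0.0, 1.0],
                          [0.5, 1.0, 0.5]])
            F = img                                # feeding input = stimulus
            Y = np.zeros_like(img)                 # pulse output
            theta = np.ones_like(img)              # dynamic threshold
            fire_time = np.full(img.shape, np.inf)
            for n in range(1, iters + 1):
                L = convolve(Y, w, mode='constant')   # linking from neighbours
                U = F * (1.0 + beta * L)              # internal activity
                Y = (U > theta).astype(float)
                theta = theta * np.exp(-alpha_theta) + v_theta * Y
                fire_time = np.where((Y > 0) & np.isinf(fire_time),
                                     n, fire_time)
            return fire_time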

  8. Assessment of SPOT-6 optical remote sensing data against GF-1 using NNDiffuse image fusion algorithm

    Science.gov (United States)

    Zhao, Jinling; Guo, Junjie; Cheng, Wenjie; Xu, Chao; Huang, Linsheng

    2017-07-01

    A cross-comparison method was used to assess SPOT-6 optical satellite imagery against Chinese GF-1 imagery using three types of indicators: spectral and color quality, fusion effect, and identification potential. Spectral response function (SRF) curves were used to compare the two types of imagery, showing that the SRF curve shape of SPOT-6 is closer to a rectangle than that of GF-1 in the blue, green, red, and near-infrared bands. The NNDiffuse image fusion algorithm was used to evaluate the capability of information conservation in comparison with the wavelet transform (WT) and principal component (PC) algorithms. The results show that the NNDiffuse-fused image has an entropy extremely similar to that of the original image (1.849 versus 1.852) and better color quality. In addition, the object-oriented classification toolset (ENVI EX) was used to identify greenlands, comparing the self-fused SPOT-6 image with the SPOT-6/GF-1 inter-fused image, both based on the NNDiffuse algorithm. The overall accuracies are 97.27% and 76.88%, respectively, showing that the self-fused SPOT-6 image has better identification capability.
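
    The entropy figure quoted above is the usual Shannon entropy of the grey-level histogram; a minimal version (Python/NumPy, assuming an 8-bit image; different binning choices will change the absolute values):

        import numpy as np

        def image_entropy(img, bins=256):
            hist, _ = np.histogram(img, bins=bins, range=(0, bins))
            p = hist / hist.sum()
            p = p[p > 0]
            return -np.sum(p * np.log2(p))        # bits per pixel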

  9. Multi-focus image fusion based on the non-subsampled contourlet transform

    Science.gov (United States)

    Adu, Jianhua; Wang, Minghui; Wu, Zhenya; Zhou, Zhongli

    2012-09-01

    In this paper, a new image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed for the fusion of multi-focus images. The selection of the different subband coefficients obtained by the NSCT decomposition is critical to image fusion. First, the original images are decomposed into different frequency subband coefficients by the NSCT. Second, the selection of the low-frequency subband coefficients and the bandpass directional subband coefficients is discussed in detail: for the low-frequency subband coefficients, the non-negative matrix factorization (NMF) method is adopted; for the bandpass directional subband coefficients, a regional cross-gradient method that selects coefficients according to the minimum of the regional cross-gradient is proposed. Finally, the fused image is obtained by performing the inverse NSCT on the combined coefficients. Experimental results show that the proposed fusion algorithm can produce a fused image in which all regions are sharp.

  10. Infrared and visible image fusion based on visual saliency map and weighted least square optimization

    Science.gov (United States)

    Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua

    2017-05-01

    The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on a visual saliency map (VSM) and weighted least squares (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. First, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and a Gaussian filter to decompose the input images into base and detail layers. Compared with conventional MSDs, this MSD preserves the information of specific scales and reduces halos near edges. Second, we argue that the base layers obtained by most MSDs contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and that the conventional "averaging" fusion scheme cannot achieve the desired effect. To address this problem, an improved VSM-based technique is proposed to fuse the base layers (a simplified weighting sketch is given below). Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers, aiming to transfer more visual details and less irrelevant IR detail or noise into the fused image, so that the fused details appear more natural and suited to human visual perception. Experimental results demonstrate that our method achieves superior performance compared with other fusion methods in both subjective and objective assessments.
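
    The following sketch shows the saliency-weighted alternative to plain averaging for the base layers (Python/NumPy; `simple_saliency` is a crude stand-in for the paper's VSM, not the authors' implementation):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def simple_saliency(img):
            # crude saliency proxy: deviation of a blurred image from the mean
            return np.abs(gaussian_filter(img, 3) - img.mean())

        def fuse_base_layers(base_ir, base_vis, sal_ir, sal_vis, eps=1e-9):
            w_ir = sal_ir / (sal_ir + sal_vis + eps)
            return w_ir * base_ir + (1.0 - w_ir) * base_vis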

  11. Fusion of visible and infrared images using global entropy and gradient constrained regularization

    Science.gov (United States)

    Zhao, Jufeng; Cui, Guangmang; Gong, Xiaoli; Zang, Yue; Tao, Shuyin; Wang, Daodang

    2017-03-01

    Infrared and visible image fusion is an important and popular topic in imaging science. Dual-band image fusion aims to extract both the target regions of the infrared image and the abundant detail information of the visible image into the fused result, preserving or even enhancing the information inherited from the source images. In this study, we propose an optimization-based fusion method combining global entropy and gradient-constrained regularization. We design a cost function that takes global maximum entropy as the first term and a gradient constraint as the regularization term. Global maximum entropy makes the fused result inherit as much information as possible from the sources, while the gradient constraint gives the fused result clear details and edges with noise suppression. The fusion is achieved by minimizing the cost function with a weight matrix. Experimental results indicate that the proposed method performs well and has clear advantages over other typical algorithms in both subjective visual performance and objective criteria.

  12. A learning-based similarity fusion and filtering approach for biomedical image retrieval using SVM classification and relevance feedback.

    Science.gov (United States)

    Rahman, Md Mahmudur; Antani, Sameer K; Thoma, George R

    2011-07-01

    This paper presents a classification-driven biomedical image retrieval framework based on image filtering and similarity fusion, employing supervised learning techniques. In this framework, the probabilistic outputs of a multiclass support vector machine (SVM) classifier, as category predictions for query and database images, are first exploited to filter out irrelevant images, thereby reducing the search space for similarity matching. Images are classified at a global level according to their modalities based on different low-level, concept, and keypoint-based features. Because it is difficult to find a unique feature that compares images effectively for all types of queries, a query-specific adaptive linear combination of similarity matching is proposed, relying on the image classification and on feedback from users. Based on the predicted category of a query image, individual precomputed weights of the different features are adjusted online. Since the classifier's prediction may be inaccurate in some cases and a user might have a different semantic interpretation of the retrieved images, the weights are finally determined from both the precision and the rank order of each individual feature representation, taking the top retrieved images judged relevant by the users. As a result, the system adapts itself to individual searches and produces query-specific results. Experiments are performed on a diverse collection of 5,000 biomedical images of different modalities, body parts, and orientations, demonstrating the efficiency (about half the computation time compared with searching the entire collection) and effectiveness (about 10%-15% improvement in precision at each recall level) of the retrieval approach.

  13. Coronary CT angiography: IVUS image fusion for quantitative plaque and stenosis analyses

    Science.gov (United States)

    Marquering, Henk A.; Dijkstra, Jouke; Besnehard, Quentin J. A.; Duthé, Julien P. M.; Schuijf, Joanne D.; Bax, Jeroen J.; Reiber, Johan H. C.

    2008-03-01

    Rationale and objective: Due to limited temporal and spatial resolution, coronary CT angiographic image quality is not optimal for robust and accurate stenosis quantification or for plaque differentiation and quantification. By combining high-resolution IVUS images with CT images, a detailed representation of the coronary arteries can be provided in the CT images. Methods: The two vessel data sets are matched in three steps. First, vessel segments are matched using anatomical landmarks. Second, the landmarks are aligned in cross-sectional vessel images. Third, the semi-automatically detected IVUS lumen contours are matched to the CTA data using manual interaction and automatic registration methods. Results: The IVUS-CTA fusion tool provides a unique combined view of the high-resolution IVUS segmentation of the outer vessel wall and lumen-intima transitions on the CT images. The cylindrical projection of the CMPR image decreases the analysis time by 50 percent, and the automatic registration of the cross-vessel views decreases it by 85 percent. Conclusions: The fusion of IVUS images and their segmentation results with coronary CT angiographic images provides a detailed view of the lumen and vessel wall of the coronary arteries. The automatic fusion tool makes such registration feasible for the development and validation of analysis tools.

  14. Generalized TV and sparse decomposition of the ultrasound image deconvolution model based on fusion technology.

    Science.gov (United States)

    Wen, Qiaonong; Wan, Suiren

    2013-01-01

    Ultrasound image deconvolution involves noise reduction and image feature enhancement: denoising is essentially low-pass filtering, while feature enhancement strengthens the high-frequency parts. These two requirements are contradictory and must be reasonably balanced. Partial-differential-equation-based deconvolution is a method grounded in diffusion theory, whereas sparse-decomposition deconvolution is an image-representation-based method; the mechanisms of the two differ and each has its own characteristics. In the contourlet transform domain, we combine the strengths of the two deconvolution methods through image fusion, introducing the entropy of the local orientation energy ratio into the fusion decision and treating the low-frequency and high-frequency coefficients differently according to the actual situation. Because the deconvolution process inevitably blurs image edge information, we fuse edge gray-scale information into the deconvolution result to compensate for the missing edges. Experiments show that our method performs better than either deconvolution method used separately and restores part of the image edge information.

  15. CT and MR image fusion scheme in nonsubsampled contourlet transform domain.

    Science.gov (United States)

    Ganasala, Padma; Kumar, Vinod

    2014-06-01

    Fusion of CT and MR images allows simultaneous visualization of the details of bony anatomy provided by the CT image and the details of soft tissue anatomy provided by the MR image. This helps the radiologist make a precise diagnosis and plan more effective interventional treatment procedures. This paper aims at designing an effective CT and MR image fusion method. In the proposed method, the source images are first decomposed using the nonsubsampled contourlet transform (NSCT), a shift-invariant, multiresolution, and multidirection image decomposition. The maximum entropy of the square of the coefficients within a local window is used for low-frequency subband coefficient selection, and the maximum weighted sum-modified Laplacian is used for high-frequency subband coefficient selection (a sketch of this focus measure follows below). Finally, the fused image is obtained through the inverse NSCT. CT and MR images of different cases have been used to test the proposed method, and the results are compared with those of other conventional image fusion methods. Both visual analysis and quantitative evaluation of the experimental results show the superiority of the proposed method.
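
    As a sketch of the high-frequency selection rule, the (unweighted) sum-modified Laplacian and a per-coefficient selection look as follows (Python/NumPy; the paper uses a weighted variant, which this sketch omits):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def sum_modified_laplacian(img, win=3):
            ml = np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0)) \
               + np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1))
            return uniform_filter(ml, size=win)   # sum over a local window

        def select_highpass(c1, c2):
            keep_1 = sum_modified_laplacian(c1) >= sum_modified_laplacian(c2)
            return np.where(keep_1, c1, c2)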

  16. Multifocus image fusion scheme based on the multiscale curvature in nonsubsampled contourlet transform domain

    Science.gov (United States)

    Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun

    2015-07-01

    An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and investigate the connection between the low-frequency image and the defocused image. The NSCT decomposes the detail image information residing at different scales and in different directions into the bandpass subband coefficients. To correctly select the pre-fused bandpass directional coefficients, we introduce multiscale curvature, which inherits the advantages of windows of different sizes and correctly recognizes the focused pixels in the source images, and we then develop a new fusion scheme for the bandpass subband coefficients. The fused image is obtained by the inverse NSCT of the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme; the experimental results clearly indicate its validity and superiority in terms of both visual quality and quantitative evaluation.

  17. Research on image fusion of missile team based on multi-agent cooperative blackboard model

    Science.gov (United States)

    Sen, Guo; Munan, Li

    The aim of cooperative engagement of missile teams is to maximally improve the target hit rate through communication and cooperation among missiles. In this paper, the problems of image fusion between missile teams in a complex combat environment are analyzed, a multi-agent cooperative blackboard model is presented, and a public information platform for the missile team is built on this model. With these, the fusion of images taken by the missiles' multiple sensors can be realized and the hit rate against the attacked target improved. Finally, a simulation experiment was performed, demonstrating the feasibility of the method.

  18. Ziyuan-3 Multi-Spectral and Panchromatic Images Fusion Quality Assessment: Applied to Jiangsu Coastal Area, China

    Science.gov (United States)

    Wu, Ruijuan; He, Xiufeng

    2014-11-01

    A comprehensive fusion quality assessment based on cross entropy and weighted structural similarity is proposed and used to evaluate the fusion effect for Chinese Ziyuan-3 multi-spectral and panchromatic images of coastal areas of Jiangsu province, China. Four fusion algorithms were used: Hue-Intensity-Saturation (HIS), à trous Wavelet Transformation (AWT), Nonsubsampled Contourlet Transform (NSCT), and NSCT combined with HIS. According to visual interpretation, the quality of the fused image based on NSCT combined with HIS is better than that of the other fusion methods; the results of the proposed fusion quality assessment also show that the NSCT-with-HIS fused image is the best, consistent with subjective human interpretation.

  20. IMAGING SPECTROSCOPY AND LIGHT DETECTION AND RANGING DATA FUSION FOR URBAN FEATURES EXTRACTION

    Directory of Open Access Journals (Sweden)

    Mohammed Idrees

    2013-01-01

    Full Text Available This study presents our findings on the fusion of imaging spectroscopy (IS) and LiDAR data for urban feature extraction. We carried out the necessary preprocessing of the hyperspectral image, using the Minimum Noise Fraction (MNF) transform to order the hyperspectral bands according to their noise and the Optimum Index Factor (OIF) to statistically select the three most appropriate bands from the MNF result. The composite image was classified using unsupervised classification (the k-means algorithm) and the accuracy of the classification assessed. A Digital Surface Model (DSM) and LiDAR intensity were generated from the LiDAR point cloud, and the intensity was filtered to remove noise. The Hue-Saturation-Intensity (HSI) fusion algorithm was used to fuse the imaging spectroscopy with the DSM, and with the filtered intensity; the fusion of imaging spectroscopy and DSM was quantitatively better than that of imaging spectroscopy and LiDAR intensity. The three data sets (imaging spectroscopy, DSM-fused, and intensity-fused data) were classified into four classes (building, pavement, trees, and grass) using unsupervised classification, and the accuracy of each classification was assessed. The results show that the fusion of imaging spectroscopy and LiDAR data improved the visual identification of surface features. The classification accuracy improved from an overall accuracy of 84.6% for the imaging spectroscopy data to 90.2% for the DSM-fused data, and the kappa coefficient increased from 0.71 to 0.82. On the other hand, classification of the fused LiDAR intensity and imaging spectroscopy data performed poorly, with an overall accuracy of 27.8% and a kappa coefficient of 0.0988.

  1. Multi-focus image fusion algorithm based on adaptive PCNN and wavelet transform

    Science.gov (United States)

    Wu, Zhi-guo; Wang, Ming-jia; Han, Guang-liang

    2011-08-01

    As an efficient method of information fusion, image fusion has been used in many fields such as machine vision, medical diagnosis, military applications, and remote sensing. In this paper, the Pulse Coupled Neural Network (PCNN) is introduced to this field for its interesting properties in image processing, including segmentation and target recognition, and a novel multi-focus image fusion algorithm based on the PCNN and the wavelet transform is proposed. First, the two original images are decomposed by the wavelet transform. Then a PCNN-based fusion rule in the wavelet domain is given: the wavelet coefficient in each frequency band serves as the linking strength, so that its value is chosen adaptively. The wavelet coefficients are mapped to the image gray-scale range, and the output threshold function attenuates toward the minimum gray level over time until all pixels fire; the output of the PCNN at each iteration is thus the set of wavelet coefficients exceeding the threshold at that time, and the sequence of firings represents the ignition timing of each neuron. The firing times are mapped to the corresponding gray-scale range to give a firing-time map, from which it can be judged whether the features at a neuron are salient. The fusion coefficients are decided by a compare-selection operator on the firing-time gradient maps, and the fused image is reconstructed by the inverse wavelet transform. Furthermore, to sufficiently reflect the order of the firing times, the threshold-adjusting constant αΘ is estimated from the appointed iteration number, so that after the iterations are completed every wavelet coefficient has been activated. To verify the effectiveness of the proposed rules, experiments on multi-focus images were performed. Moreover ...

  2. ADVANCES IN HYPERSPECTRAL AND MULTISPECTRAL IMAGE FUSION AND SPECTRAL UNMIXING

    OpenAIRE

    C. Lanaras; E. Baltsavias; K. Schindler

    2015-01-01

    In this work, we jointly process high spectral and high geometric resolution images and exploit their synergies to (a) generate a fused image of high spectral and geometric resolution; and (b) improve (linear) spectral unmixing of hyperspectral endmembers at subpixel level w.r.t. the pixel size of the hyperspectral image. We assume that the two images are radiometrically corrected and geometrically co-registered. The scientific contributions of this work are (a) a simultaneous approa...

  3. Imaging single retrovirus entry through alternative receptor isoforms and intermediates of virus-endosome fusion.

    Directory of Open Access Journals (Sweden)

    Naveen K Jha

    Full Text Available A large group of viruses rely on low pH to activate their fusion proteins, which merge the viral envelope with an endosomal membrane and release the viral nucleocapsid. A critical barrier to understanding these events has been the lack of approaches to study virus-cell membrane fusion within acidic endosomes, the natural sites of nucleocapsid entry into the cytosol. Here we have investigated these events using the highly tractable subgroup A avian sarcoma and leukosis virus envelope glycoprotein (EnvA)-TVA receptor system. By labeling EnvA-pseudotyped viruses with a pH-sensitive fluorescent marker, we imaged their entry into mildly acidic compartments. We found that cells expressing the transmembrane receptor (TVA950) internalized the virus much faster than those expressing the GPI-anchored receptor isoform (TVA800). Surprisingly, TVA800 did not accelerate virus uptake compared to cells lacking the receptor. Subsequent steps of virus entry were visualized by incorporating a small viral content marker that was released into the cytosol upon fusion. EnvA-dependent fusion with TVA800-expressing cells occurred shortly after endocytosis and delivery into acidic endosomes, whereas fusion of viruses internalized through TVA950 was delayed. In the latter case, a relatively stable hemifusion-like intermediate preceded fusion pore opening. The apparent size and stability of nascent fusion pores depended on the TVA isoform and its expression level, with TVA950 supporting more robust pores and a higher efficiency of infection than TVA800. These results demonstrate that surface receptor density and the intracellular trafficking pathway used are important determinants of efficient EnvA-mediated membrane fusion, and suggest that early fusion intermediates play a critical role in establishing low pH-dependent virus entry from within acidic endosomes.

  4. [Application of data fusion of microscopic spectral imaging in reservoir characterization].

    Science.gov (United States)

    Li, Jing; Zha, Ming; Guo, Yuan-Ling; Chen, Yong

    2011-10-01

    In recent years, spectral imaging has been applied widely in mineralogy and petrology. The technique combines spectral and imaging techniques, so samples can be analyzed and recognized both spectrally and spatially. The problem, however, is how to extract the needed information from the large volume of spectral imaging data and how to enhance it. In the present paper, experimental data were processed using data fusion of microscopic spectral imaging to obtain spatial distribution maps of the chemical composition and physical parameters of the samples. The results show that the distribution of different hydrocarbons in the reservoirs, pore connectivity, and related properties are revealed well. Data fusion of microscopic spectral imaging provides a new method for reservoir characterization.

  5. Fusion of structural and functional cardiac magnetic resonance imaging data for studying ventricular fibrillation.

    Science.gov (United States)

    Magtibay, K; Beheshti, M; Foomany, F H; Balasundaram, K; Masse, S; Lai, P; Asta, J; Zamiri, N; Jaffray, D A; Nanthakumar, K; Krishnan, S; Umapathy, K

    2014-01-01

    Magnetic Resonance Imaging (MRI) techniques such as Current Density Imaging (CDI) and Diffusion Tensor Imaging (DTI) provide a complementary set of imaging data that can describe both the functional and structural states of biological tissues. This paper presents a Joint Independent Component Analysis (jICA) based fusion approach that fuses CDI and DTI data to quantify the differences between two cardiac states, Ventricular Fibrillation (VF) and Asystole/Normal (AS/NM), which could lead to better insight into the mechanism of VF. Fusing CDI and DTI data from 8 data sets from 6 beating porcine hearts detects the differences between the two cardiac states both qualitatively and quantitatively. This initial study demonstrates the applicability of MRI-based imaging techniques and the jICA-based fusion approach to the study of cardiac arrhythmias.
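
    A minimal stand-in for the jICA fusion step (Python with scikit-learn; the data layout and component count are assumptions): the two modalities are concatenated feature-wise so that each estimated component has linked CDI and DTI parts.

        import numpy as np
        from sklearn.decomposition import FastICA

        def joint_ica(cdi_features, dti_features, n_components=4):
            """cdi_features, dti_features: (n_observations, n_features_i)."""
            X = np.hstack([cdi_features, dti_features])
            ica = FastICA(n_components=n_components, random_state=0)
            scores = ica.fit_transform(X)      # per-observation loadings
            sources = ica.components_          # joint CDI+DTI sources
            return scores, sources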

  6. Night vision image fusion for target detection with improved 2D maximum entropy segmentation

    Science.gov (United States)

    Bai, Lian-fa; Liu, Ying-bin; Yue, Jiang; Zhang, Yi

    2013-08-01

    Infrared and low-light-level (LLL) images are used for night vision target detection. Given the characteristics of night vision imaging and the inability of traditional detection algorithms to segment and extract targets, we propose a method of infrared and LLL image fusion for target detection with improved 2D maximum entropy segmentation. First, the two-dimensional histogram is built from the gray level and the weighted-area maximum gray level, and weights are selected to compute the maximum-entropy threshold for segmenting the infrared and LLL images. Compared with traditional maximum entropy segmentation, the algorithm has a significant effect on target detection, suppressing the background and extracting targets. The validity of a multi-dimensional-feature AND operation on the infrared and LLL image feature-level fusion for target detection is then verified. Experimental results show that the detection algorithm performs well for single and multiple target detection in complex backgrounds.
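
    For reference, the classical one-dimensional maximum-entropy (Kapur) threshold, of which the paper's weighted 2D-histogram method is an extension, can be sketched as follows (Python/NumPy, assuming an 8-bit image):

        import numpy as np

        def max_entropy_threshold(img, bins=256):
            hist, _ = np.histogram(img, bins=bins, range=(0, bins))
            p = hist / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, bins - 1):
                w0, w1 = p[:t].sum(), p[t:].sum()
                if w0 <= 0 or w1 <= 0:
                    continue
                p0, p1 = p[:t] / w0, p[t:] / w1     # class distributions
                h = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0])) \
                    - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
                if h > best_h:                      # maximize total entropy
                    best_t, best_h = t, h
            return best_t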

  7. A novel SAR fusion image segmentation method based on triplet Markov field

    Science.gov (United States)

    Wang, Jiajing; Jiao, Shuhong; Sun, Zhenyu

    2015-03-01

    The Markov random field (MRF) has been widely used in SAR image segmentation because it directly models the posterior distribution and suppresses the influence of speckle on the segmentation result. However, when real SAR images are nonstationary, unsupervised segmentation by MRF can be poor. The recently proposed triplet Markov field (TMF) model is well suited to nonstationary SAR image processing due to the introduction of an auxiliary field that reflects the nonstationarity. In addition, on account of the texture features of SAR images, a fusion segmentation method is proposed that fuses the gray-level image and the texture-feature image. The effectiveness of the proposed method is demonstrated by segmentation experiments on a synthetic SAR image and real SAR images, where it outperforms state-of-the-art methods.

  8. Hybrid Prediction and Fractal Hyperspectral Image Compression

    Directory of Open Access Journals (Sweden)

    Shiping Zhu

    2015-01-01

    Full Text Available The data size of hyperspectral images is too large for storage and transmission and has become a bottleneck restricting their applications, so a highly efficient compression method for hyperspectral images is needed. Predictive encoding is easy to implement and has been studied widely in the hyperspectral image compression field. Fractal coding has the advantages of a high compression ratio, resolution independence, and fast decoding, but its application to hyperspectral image compression is not yet common. In this paper, we propose a novel algorithm for hyperspectral image compression based on hybrid prediction and fractal coding: intraband prediction is applied to the first band, and all remaining bands are encoded by a modified fractal coding algorithm. The proposed algorithm effectively exploits the spectral correlation in the hyperspectral image, since each range block is approximated by a domain block of the same size in the adjacent band. Experimental results indicate that the proposed algorithm provides very promising performance at low bitrates; compared with other algorithms, the encoding complexity is lower, the decoding quality is greatly enhanced, and the PSNR can be increased by about 5 dB to 10 dB.
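
    The spectral-correlation idea behind the interband step can be illustrated with a simple least-squares predictor of one band from its neighbour (Python/NumPy; the paper actually approximates range blocks with same-size domain blocks from the adjacent band, which this sketch does not reproduce):

        import numpy as np

        def interband_residual(band_prev, band_cur):
            """Predict the current band from the previous one; the residual
            is what a subsequent coder would have to represent."""
            x = band_prev.ravel().astype(float)
            y = band_cur.ravel().astype(float)
            a, b = np.polyfit(x, y, 1)             # y ~ a*x + b
            residual = band_cur - (a * band_prev + b)
            return residual, (a, b)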

  9. Millimeter-wave imaging of magnetic fusion plasmas: technology innovations advancing physics understanding

    Science.gov (United States)

    Wang, Y.; Tobias, B.; Chang, Y.-T.; Yu, J.-H.; Li, M.; Hu, F.; Chen, M.; Mamidanna, M.; Phan, T.; Pham, A.-V.; Gu, J.; Liu, X.; Zhu, Y.; Domier, C. W.; Shi, L.; Valeo, E.; Kramer, G. J.; Kuwahara, D.; Nagayama, Y.; Mase, A.; Luhmann, N. C., Jr.

    2017-07-01

    Electron cyclotron emission (ECE) imaging is a passive radiometric technique that measures electron temperature fluctuations; and microwave imaging reflectometry (MIR) is an active radar imaging technique that measures electron density fluctuations. Microwave imaging diagnostic instruments employing these techniques have made important contributions to fusion science and have been adopted at major fusion facilities worldwide including DIII-D, EAST, ASDEX Upgrade, HL-2A, KSTAR, LHD, and J-TEXT. In this paper, we describe the development status of three major technological advancements: custom mm-wave integrated circuits (ICs), digital beamforming (DBF), and synthetic diagnostic modeling (SDM). These have the potential to greatly advance microwave fusion plasma imaging, enabling compact and low-noise transceiver systems with real-time, fast tracking ability to address critical fusion physics issues, including ELM suppression and disruptions in the ITER baseline scenario, naturally ELM-free states such as QH-mode, and energetic particle confinement (i.e. Alfvén eigenmode stability) in high-performance regimes that include steady-state and advanced tokamak scenarios. Furthermore, these systems are fully compatible with today’s most challenging non-inductive heating and current drive systems and capable of operating in harsh environments, making them the ideal approach for diagnosing long-pulse and steady-state tokamaks.

  10. A NOVEL ALGORITHM OF MULTI-SENSOR IMAGE FUSION BASED ON WAVELET PACKET TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In order to enhance the image information from multiple sensors and to improve the capability of information analysis and feature extraction, this letter proposes a new pixel-level fusion approach by means of the Wavelet Packet Transform (WPT). The WPT decomposes an image into low-frequency and high-frequency bands at higher scales and offers a more precise method for image analysis than the Wavelet Transform (WT). First, the proposed approach employs the HIS (Hue, Intensity, Saturation) transform to obtain the intensity component of a CBERS (China-Brazil Earth Resources Satellite) multi-spectral image. The WPT is then employed to decompose the intensity component and a SPOT (Système Pour l'Observation de la Terre) image into low-frequency and high-frequency bands over three levels. Next, the high-frequency coefficients and low-frequency coefficients of the images are combined by linear weighting strategies. Finally, the fused image is obtained with the inverse WPT and inverse HIS. The results show that the new approach fuses the details of the input images successfully and obtains a more satisfactory result than the HM (Histogram Matched)-based and WT-based fusion approaches.

  11. Color Sensitivity Multiple Exposure Fusion using High Dynamic Range Image

    Directory of Open Access Journals (Sweden)

    Varsha Borole

    2014-02-01

    Full Text Available In this paper, we present a high dynamic range imaging (HDRI) method that emulates normal-, over-, and under-exposure captures from a single camera image. We generate three differently exposed images from the input image using local histogram stretching. Because the proposed method generates the three histogram-stretched images from a single input image, ghost artifacts, which result from relative motion between the camera and objects during the exposure time, are inherently removed. The proposed method can therefore be applied in consumer compact cameras to provide ghost-artifact-free HDRI. Experiments with several sets of test images with different exposures show that the proposed method outperforms existing methods in terms of visual results and computation time.
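
    A toy version of the idea (Python/NumPy; global rather than local stretching, and a Mertens-style well-exposedness weight, so only a loose approximation of the paper's method):

        import numpy as np

        def three_exposures(img):
            """Emulate under/normal/over exposures from one 8-bit image."""
            f = img.astype(float) / 255.0
            return np.clip(f * 0.5, 0, 1), f, np.clip(f * 2.0, 0, 1)

        def fuse(images):
            # weights favour well-exposed (mid-grey) pixels in each version
            ws = [np.exp(-((im - 0.5) ** 2) / 0.08) + 1e-9 for im in images]
            total = sum(ws)
            return sum(w * im for w, im in zip(ws, images)) / total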

  12. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  13. Multimodal Registration and Fusion for 3D Thermal Imaging

    Directory of Open Access Journals (Sweden)

    Moulay A. Akhloufi

    2015-01-01

    Full Text Available 3D vision is an area of computer vision that has attracted substantial research interest and has been widely studied. In recent years we have witnessed increasing interest from the industrial community, driven by advances in 3D technologies that enable high-precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced inspections of manufactured parts and metrology analysis, but we cannot detect subsurface defects; that kind of detection is achieved by other techniques, such as infrared thermography. In this work, we present a new registration framework for 3D and thermal infrared multimodal fusion. The resulting fused data can be used for advanced 3D inspection in Nondestructive Testing and Evaluation (NDT&E) applications, permitting simultaneous inspection of the visible surface and the subsurface in the same process. Experimental tests were conducted with different materials; the obtained results are promising and show how these new techniques can be used efficiently in a combined NDT&E-metrology analysis of manufactured parts, in areas such as aerospace and automotive.

  14. Application of remote-sensing-image fusion to the monitoring of mining induced subsidence

    Institute of Scientific and Technical Information of China (English)

    LI Liang-jun; WU Yan-bin

    2008-01-01

    We discuss remote-sensing-image fusion based on a multi-band wavelet and an RGB feature fusion method; the fused data can be used to monitor the dynamic evolution of mining-induced subsidence. High-resolution panchromatic image data and multi-spectral image data were first decomposed with a multi-ary wavelet method. The high-frequency components of the high-resolution image were then fused with the features from the R, G, and B bands of the multi-spectral image to form new high-frequency components, and the newly formed high-frequency and low-frequency components were inversely transformed by the multi-ary wavelet method. Finally, color images were formed from the newly formed R, G, and B bands. In our experiment we used SPOT images with a resolution of 10 m and TM images (30 m) of the Huainan mining area, fused with a trinary wavelet method. In addition, we used four indexes (entropy, average gradient, wavelet energy, and spectral distortion) to assess the new method. The results indicate that the method improves the clarity and resolution of the images while preserving the information of the original images; using the fused images to monitor mining-induced subsidence achieves a good effect.
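
    The detail-injection step can be sketched with a plain dyadic wavelet (Python with PyWavelets; the paper uses a multi-ary/trinary wavelet and RGB feature fusion, so this is only the basic idea, and both inputs are assumed resampled to the same grid with even dimensions):

        import pywt

        def wavelet_pansharpen(ms_band, pan, wavelet='db2'):
            """Keep the multispectral approximation, take the detail
            sub-bands from the high-resolution panchromatic image."""
            cA_ms, _ = pywt.dwt2(ms_band, wavelet)
            _, details_pan = pywt.dwt2(pan, wavelet)
            return pywt.idwt2((cA_ms, details_pan), wavelet)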

  15. Prospective Study for Semantic Inter-Media Fusion in Content-Based Medical Image Retrieval

    CERN Document Server

    Teodorescu, Roxana; Leow, Wee-Kheng; Cretu, Vladimir

    2008-01-01

    One important challenge in modern Content-Based Medical Image Retrieval (CBMIR) is the semantic gap, which stems from the complexity of medical knowledge. Among the methods able to close this gap in CBMIR, the use of medical thesauri/ontologies offers interesting perspectives, given the possibility of accessing relevant, continuously updated web services and extracting structured medical semantic information in real time. The CBMIR approach proposed in this paper uses the Unified Medical Language System (UMLS) Metathesaurus to perform semantic indexing and fusion of medical media. This fusion operates before query processing (retrieval) and works at a UMLS-compliant conceptual indexing level. Our purpose is to study various techniques for semantic data alignment, preprocessing, fusion, clustering, and retrieval, evaluating them and highlighting future research directions. The alignment and preprocessing are based on partial text/image retrieval feedb...

  16. Improving the recognition of fingerprint biometric system using enhanced image fusion

    Science.gov (United States)

    Alsharif, Salim; El-Saba, Aed; Stripathi, Reshma

    2010-04-01

    Fingerprint recognition systems have been widely used by financial institutions, law enforcement, border control, and visa issuing, to mention just a few. Biometric identifiers can be counterfeited, but they are considered more reliable and secure than traditional ID cards or personal password methods. Fingerprint pattern fusion improves the performance of a fingerprint recognition system in terms of accuracy and security. This paper presents digital enhancement and fusion approaches that improve the biometric performance of the fingerprint recognition system in two steps: in the first step, raw fingerprint images are enhanced using high-frequency-emphasis filtering (HFEF); the second step is a simple linear fusion between the raw images and the HFEF ones. It is shown that the proposed approach improves the verification and identification performance of the fingerprint biometric recognition system, with the improvement justified using the correlation performance metrics of the matching algorithm.
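
    A minimal frequency-domain HFEF followed by the linear fusion step (Python/NumPy; the filter form H = a + b·(1 − Gaussian low-pass) is a common choice, and a, b, sigma, and the fusion weight are illustrative assumptions):

        import numpy as np

        def hfef(img, a=0.5, b=1.5, sigma=30.0):
            rows, cols = img.shape
            u = np.fft.fftfreq(rows)[:, None]
            v = np.fft.fftfreq(cols)[None, :]
            h_lp = np.exp(-(u ** 2 + v ** 2) * sigma ** 2)  # Gaussian low-pass
            H = a + b * (1.0 - h_lp)                        # emphasize high freq.
            return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

        def enhance_and_fuse(raw, w=0.5):
            # step 2: simple linear fusion of the raw and HFEF images
            return w * raw + (1.0 - w) * hfef(raw)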

  17. Implicit beliefs about ideal body image predict body image dissatisfaction

    OpenAIRE

    Heider, Niclas; Spruyt, Adriaan; De Houwer, Jan

    2015-01-01

    We examined whether implicit measures of actual and ideal body image can be used to predict body dissatisfaction in young female adults. Participants completed two Implicit Relational Assessment Procedures (IRAPs) to examine their implicit beliefs concerning actual (e.g., I am thin) and desired ideal body image (e.g., I want to be thin). Body dissatisfaction was examined via self-report questionnaires and rating scales. As expected, differences in body dissatisfaction exerted a differential i...

  19. Quotient Based Multiresolution Image Fusion of Thermal and Visual Images Using Daubechies Wavelet Transform for Human Face Recognition

    CERN Document Server

    Bhowmik, Mrinal Kanti; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    This paper investigates multiresolution level-1 and level-2 quotient-based fusion of thermal and visual images. In the proposed system, method-1, "Decompose then Quotient Fuse Level-1", and method-2, "Decompose-Reconstruct then Quotient Fuse Level-2", both work on wavelet transformations of the visual and thermal face images. The wavelet transform is well suited to handling different image resolutions and allows image decomposition into different kinds of coefficients while preserving the image information without loss. The approach is based on the definition of an illumination-invariant signature image, which enables analytic generation of the image space under varying illumination. The quotient-fused images are passed through Principal Component Analysis (PCA) for dimension reduction and then classified using a multi-layer perceptron (MLP). The performance of both methods has been evaluated using the OTCBVS and IRIS databases. All the different classes have been ...

  20. DSA Image Fusion Based on Dynamic Fuzzy Logic and Curvelet Entropy

    Directory of Open Access Journals (Sweden)

    Guangming Zhang

    2009-06-01

    Full Text Available The curvelet transform is a multiscale transform with directional parameters at all scales, locations, and orientations, and it is superior to the wavelet transform in the image processing domain. This paper analyzes the characteristics of DSA medical images and proposes a novel approach for DSA medical image fusion using curvelet information entropy and dynamic fuzzy logic. First, the image is decomposed by the curvelet transform to obtain information at different levels. Then the entropy of each level of the DSA medical image is calculated, and a membership function based on dynamic fuzzy logic is constructed to adjust the weights of the image subband coefficients via the entropy. Finally, an inverse curvelet transform is applied to reconstruct a single DSA medical image that contains more complete and accurate detail information of the blood vessels than any of the individual source images. By comparison, our method is more effective than the weighted average, Laplacian pyramid, and traditional wavelet transform methods.

  1. CT-MR image data fusion for computer-assisted navigated surgery of orbital tumors

    Energy Technology Data Exchange (ETDEWEB)

    Nemec, Stefan Franz [Department of Radiology/Division of Neuroradiology and Musculoskeletal Radiology, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria)], E-mail: stefan.nemec@meduniwien.ac.at; Peloschek, Philipp; Schmook, Maria Theresa; Krestan, Christian Robert [Department of Radiology/Division of Neuroradiology and Musculoskeletal Radiology, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria); Hauff, Wolfgang [Department of Ophthalmology, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria); Matula, Christian [Department of Neurosurgery, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria); Czerny, Christian [Department of Radiology/Division of Neuroradiology and Musculoskeletal Radiology, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria)

    2010-02-15

    Purpose: To demonstrate the value of multidetector computed tomography (MDCT) and magnetic resonance imaging (MRI) in the preoperative assessment of orbital tumors, and to present, in particular, CT and MR image data fusion for surgical planning and performance in computer-assisted navigated surgery of orbital tumors. Materials and methods: In this retrospective case series, 10 patients with orbital tumors and associated complaints underwent MDCT and MRI of the orbit. MDCT was performed at high resolution, with a bone window level setting in the axial plane. MRI was performed with an axial 3D T1-weighted (w) gradient-echo (GE) contrast-enhanced sequence, in addition to a standard MRI protocol. First, MDCT and MR images were used to diagnose tumorous lesions, with histology as the standard of reference. Then, the image data sets from CT and 3D T1-w GE sequences were merged on a workstation to create CT-MR fusion images that were used for interventional planning and intraoperative image guidance. The intraoperative accuracy of the navigation unit was measured, defined as the deviation between the same landmark in the navigation image and on the patient. Furthermore, the clinical preoperative status was compared to the patients' postoperative outcome. Results: The radiological and histological diagnoses, which revealed 7 benign and 3 malignant tumors, were concordant in 7 of 10 cases (70%). The CT-MR fusion images supported the surgeon in the preoperative planning and improved the surgical performance. The mean intraoperative accuracy of the navigation unit was 1.35 mm. Postoperatively, orbital complaints showed complete regression in 6 cases, were ameliorated notably in 3 cases, and remained unchanged in 1 case. Conclusion: CT and MRI are essential for the preoperative assessment of orbital tumors. CT-MR image data fusion is an accurate tool for planning the correct surgical procedure, and can improve surgical results in computer-assisted navigated surgery of orbital tumors.

  2. Fusion Imaging in the Diagnosis of Cancer; La imagen del fusion en el diagnostico del cancer

    Energy Technology Data Exchange (ETDEWEB)

    Maldonado, A.; Gonzalez Alenda, J.

    2007-07-01

    Early diagnosis is one of the most important aids in the fight against cancer. Of the tests available in medicine, anatomic imaging techniques such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) were for many years the ones used. The emergence of Positron Emission Tomography (PET) more than a decade ago was a major breakthrough in the early diagnosis of malignant lesions, as it is based on tumor metabolism rather than anatomy. The merger of both techniques into one, thanks to PET-CT cameras, has made this technology the most important tool in the management of cancer patients. (Author)

  3. Processing and fusion for human body terahertz dual-band passive image

    Science.gov (United States)

    Tian, Li; Shen, Yanchun; Jin, Weiqi; Zhao, Guozhong; Cai, Yi

    2016-11-01

    Compared with microwaves, THz radiation offers higher resolution, and compared with infrared, it has better penetrability. The human body also radiates in the THz band, and since the photon energy is low, THz imaging is harmless to the human body, so it has great potential in body-search systems. Dual-band images may contain different information for the same scene, so THz dual-band imaging has become a significant research subject in THz technology. Based on a dual-band passive THz imaging system composed of 94 GHz and 250 GHz cell detectors, this paper investigates preprocessing and fusion algorithms for THz dual-band images. First, THz images suffer from heavy noise, low SNR, low contrast, and little detail. Second, the stability problems of the opto-mechanical scanning system make the images poorly repeatable, with obvious stripes and low definition. To address these issues, we use the BM3D denoising algorithm to filter noise and correct the scanning problems. Furthermore, translation, rotation, and scaling exist between the two images; after registration by an intensity-based algorithm and enhancement by adaptive histogram equalization, the images are fused by a wavelet-based image fusion algorithm. This effectively reduces image noise, scan distortion, and matching error, improves detail, and enhances contrast, which also helps improve the detection efficiency for hidden objects. The method substantially improves the performance of the dual-band passive THz imaging system and promotes its practical use.

  4. Enhancement of out-of-focus images using fusion-based PSF estimation and restoration

    Science.gov (United States)

    Yoon, Joonshik; Shin, Jeong-Ho; Paik, Joon-Ki

    2000-12-01

    In this paper, we propose an enhancement algorithm for out-of-focus images using fusion-based point-spread-function (PSF) estimation and restoration. The proposed algorithm can produce an in-focus image using only digital image processing techniques; it requires neither infrared light/ultrasound nor a focusing lens assembly operated by electrically powered movement. To increase the accuracy of estimating the PSF of the defocused image, the algorithm finds true, linear edges using the Canny edge detector, which is optimal and has good localization; estimates the step response across the edge for each edge pixel; computes a one-dimensional step response by averaging the individual step responses; estimates the two-dimensional PSF from the averaged step response; and then produces an in-focus image with a restoration filter based on the estimated PSF. Finally, a fusion process enhances the quality of the result by fusing the restored images. There is a limit to the amount of defocus that can be recovered, and the algorithm assumes that the input image contains at least one piecewise-linear boundary between an object and the background. Despite these limitations, the proposed algorithm produces focused images of acceptable quality using only digital image processing.
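
    If the blur is modelled as Gaussian, the PSF width can be recovered from the averaged step response by differentiating it into a line spread function (a minimal sketch under that Gaussian assumption; the paper's 2D PSF construction is not reproduced):

        import numpy as np

        def psf_sigma_from_step(step_response):
            """Estimate a Gaussian-blur sigma from an averaged 1D step
            response: the derivative is the line spread function (LSF),
            whose standard deviation is the blur sigma."""
            lsf = np.clip(np.gradient(step_response.astype(float)), 0, None)
            x = np.arange(lsf.size)
            mu = (x * lsf).sum() / lsf.sum()
            return np.sqrt(((x - mu) ** 2 * lsf).sum() / lsf.sum())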

  5. Multiangle Bistatic SAR Imaging and Fusion Based on BeiDou-2 Navigation Satellite System

    Directory of Open Access Journals (Sweden)

    Zeng Tao

    2015-01-01

    Full Text Available Bistatic Synthetic Aperture Radar (BSAR) based on a Global Navigation Satellite System (GNSS-BSAR) uses navigation satellites as radar transmitters, which is low in cost; however, GNSS-BSAR images have poor resolution and low signal-to-noise ratios (SNR). In this paper, a multiangle observation and data processing strategy is presented based on BeiDou-2 navigation satellite imagery, from which twenty-six BSAR images in different configurations are obtained. A region-based fusion algorithm using region-of-interest segmentation is proposed, and a high-quality fused image is obtained. The results show that the multiangle imaging method can extend the applications of GNSS-BSAR.

  6. The addition of a sagittal image fusion improves the prostate cancer detection in a sensor-based MRI/ultrasound fusion guided targeted biopsy.

    Science.gov (United States)

    Günzel, Karsten; Cash, Hannes; Buckendahl, John; Königbauer, Maximilian; Asbach, Patrick; Haas, Matthias; Neymeyer, Jörg; Hinz, Stefan; Miller, Kurt; Kempkensteffen, Carsten

    2017-01-13

    To explore the diagnostic benefit of adding sagittal-plane image fusion to the standard axial image fusion on a sensor-based MRI/US fusion platform. Between July 2013 and September 2015, 251 patients with at least one suspicious lesion on mpMRI (rated by PI-RADS) were included in the analysis. All patients underwent MRI/US targeted biopsy (TB) in combination with a 10-core systematic prostate biopsy (SB). All biopsies were performed on a sensor-based fusion system. Group A included 162 men who received TB with axial MRI/US image fusion; group B comprised 89 men in whom the TB was performed with an additional sagittal image fusion. The median age was 67 years (IQR 61-72) in group A and 68 years (IQR 60-71) in group B. The median PSA level was 8.10 ng/ml (IQR 6.05-14) in group A and 8.59 ng/ml (IQR 5.65-12.32) in group B. In group A, the proportion of patients with a suspicious digital rectal examination (DRE) (14 vs. 29%, p = 0.007) and the proportion of primary biopsies (33 vs. 46%, p = 0.046) were significantly lower. PI-RADS 3 lesions were overrepresented in group A compared with group B (19 vs. 9%; p = 0.044). Classified according to PI-RADS 3, 4 and 5, the detection rates of TB were 42, 48 and 75% in group A and 25, 74 and 90% in group B. The rate of PCa with a Gleason score ≥7 missed by TB was 33% (18 cases) in group A and 9% (5 cases) in group B (p = 0.072). An explorative multivariate binary logistic regression analysis revealed that PI-RADS, a suspicious DRE and the addition of a sagittal image fusion were significant predictors of PCa detection in TB. Nine PCa were detected only by TB with sagittal fusion (sTB), and sTB identified 10 additional clinically significant PCa (Gleason ≥7). Performing an additional sagittal image fusion alongside the standard axial fusion appears to improve the accuracy of the sensor-based MRI/US fusion platform.

  7. Geophysical data fusion for subsurface imaging. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-10-01

    This report contains the results of a three-year, three-phase project whose long-range goal has been to create a means for more detailed and accurate definition of the near-surface (0-300 ft) geology beneath a site that has been subjected to environmental pollution. The two major areas of research and development have been improved geophysical field-data acquisition techniques, and analytical tools for the total integration (fusion) of all site data. The long-range goal of this project has been to mathematically integrate the geophysical data derived from multiple sensors with site geologic information and any other available site data, to provide a detailed characterization of thin clay layers and geological discontinuities at hazardous waste sites.

  8. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Ming; Wang, Yanli, E-mail: ywang@ncbi.nlm.nih.gov; Bryant, Stephen H., E-mail: bryant@ncbi.nlm.nih.gov

    2016-02-25

    Identification of drug-target interactions (DTI) is a central task in drug discovery. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed for DTI prediction. On benchmark DTI datasets, the proposed algorithm achieves state-of-the-art results, with areas under the precision-recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR) under 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with an AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, and other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning and polypharmacology, and may help accelerate drug discovery by identifying novel drug targets. - Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions. - Highlights: • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions. • Performance can further be improved by using the recalculated kernel. • Top predictions can be validated by experimental data.
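
    A minimal NumPy sketch of regularized least squares over fused kernels, assuming precomputed drug-drug and target-target kernel matrices; the weighted-sum fusion and per-side score averaging below follow the common linear RLS-avg formulation and are stand-ins for the paper's nonlinear kernel fusion:

      import numpy as np

      def rls_predict(K, Y, sigma=1.0):
          """Closed-form RLS: F = K (K + sigma I)^-1 Y."""
          n = K.shape[0]
          return K @ np.linalg.solve(K + sigma * np.eye(n), Y)

      def rls_kernel_fusion(Kds, Kts, Y, w_d=None, w_t=None, sigma=1.0):
          """Kds/Kts: lists of drug-drug / target-target kernels;
          Y: binary drug-target interaction matrix. Kernels are fused by
          a weighted sum (a linear stand-in for the paper's nonlinear
          fusion), then drug-side and target-side scores are averaged."""
          w_d = w_d or [1.0 / len(Kds)] * len(Kds)
          w_t = w_t or [1.0 / len(Kts)] * len(Kts)
          Kd = sum(w * K for w, K in zip(w_d, Kds))
          Kt = sum(w * K for w, K in zip(w_t, Kts))
          return (rls_predict(Kd, Y, sigma) +
                  rls_predict(Kt, Y.T, sigma).T) / 2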

  9. Operational life prediction on gating image intensifier

    Science.gov (United States)

    Dong, Yu-hui; Shen, Zhi-guo; Li, Zhong-li

    2009-07-01

    Operational life is one of the important parameters for evaluating second- and super-second-generation image intensifiers. It can be used not only to monitor manufacturing technique on the product line, so that photocathode processing, MCP degassing and MCP production technology can be adjusted promptly, but also to eliminate, as early as possible, image intensifiers with hidden operational-life risks. As gated image intensifiers are now widely used, a method for estimating their operational life that reflects their practical operating mode and working conditions is urgently needed. This paper introduces the least-squares method for analyzing operational-life test data from the product line. The data can now be analyzed with the convenient statistical functions of Excel: using its worksheet functions, chart wizard and data-analysis tools, spreadsheets were established to perform the complex least-squares calculations. On this basis, formulas for monitoring the technology parameters were derived, and it was concluded that the operational life is related only to the decay slope of the exponential fit to photocathode sensitivity. The decay slope of the photocathode sensitivity exponential fit curve, and the percentage decrease of the fitted photocathode sensitivity, can therefore be used to assess operational-life qualification rapidly. Mathematical models for operational life prediction of image intensifiers and gated image intensifiers are established from the acceptable percentage decrease of the fitted photocathode sensitivity and the expected signal-to-noise ratio. Equations predicting the operational life as a function of duty cycle and input light level for gated image intensifiers are derived, and the relationships between them are discussed. The theoretical foundation is laid herein, so that the user can select a proper gating image
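
    The screening statistic reduces to a least-squares line fit on log-sensitivity. A small sketch with hypothetical life-test readings:

      import numpy as np

      def photocathode_decay_slope(hours, sensitivity):
          """Least-squares exponential fit S(t) = S0 * exp(k t) of
          photocathode sensitivity versus operating time; the slope k
          (and the implied percentage decrease) is the screening
          quantity described above."""
          k, log_s0 = np.polyfit(hours, np.log(sensitivity), 1)
          return k, np.exp(log_s0)

      # Hypothetical life-test readings (µA/lm); a slope below an
      # acceptance threshold would flag a tube with insufficient life.
      hours = np.array([0.0, 100, 200, 400, 800])
      sens = np.array([520.0, 505, 492, 468, 430])
      k, s0 = photocathode_decay_slope(hours, sens)
      life_to_half = np.log(0.5) / k     # hours until sensitivity halves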

  10. Fusion: ultra-high-speed and IR image sensors

    Science.gov (United States)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an air bag in a car accident and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure high-speed motion and heat simultaneously. Ultra-high frame rates are achieved with an in-situ storage image sensor: each pixel is equipped with multiple memory elements so that a series of image signals can be recorded simultaneously at all pixels, and the stored signals are read out after the capture. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater freedom in wiring on the front side 3). The BSI structure has the further advantage that additional layers, such as scintillators, can be attached to the backside with less difficulty. This paper proposes the development of an ultra-high-speed IR image sensor that combines advanced nanotechnologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, and discusses integration issues.

  11. A label field fusion bayesian model and its penalized maximum rand estimator for image segmentation.

    Science.gov (United States)

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each of the segmentation results to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows the definition of a penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results becomes an interesting alternative to the complex segmentation models in the literature. This fusion framework has been successfully applied to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures, and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.

  12. High resolution isotopic analysis of U-bearing particles via fusion of SIMS and EDS images

    Energy Technology Data Exchange (ETDEWEB)

    Tarolli, Jay G.; Naes, Benjamin E.; Garcia, Benjamin J.; Fischer, Ashley E.; Willingham, David

    2016-01-01

    Image fusion of secondary ion mass spectrometry (SIMS) images and X-ray elemental maps from energy-dispersive spectroscopy (EDS) was performed to facilitate the isolation and re-analysis of isotopically unique U-bearing particles where the highest-precision SIMS measurements are required. Image registration, image fusion and particle micromanipulation were performed on a subset of SIMS images obtained from a large-area pre-screen of a particle distribution from a sample containing several certified reference materials (CRM) U129A, U015, U150, U500 and U850, as well as a standard reference material (SRM) 8704 (Buffalo River Sediment), to simulate particles collected on swipes during routine inspections of declared uranium enrichment facilities by the International Atomic Energy Agency (IAEA). In total, fourteen particles, ranging in size from 5 to 15 µm, were isolated and re-analyzed by SIMS in multi-collector mode, identifying nine particles of CRM U129A, one of U150, one of U500 and three of U850. These identifications lay within a few percent of the National Institute of Standards and Technology (NIST) certified atom percent values for 234U, 235U and 238U for the corresponding CRMs. This work represents the first use of image fusion to enhance the accuracy and precision of isotope ratio measurements for isotopically unique U-bearing particles for nuclear safeguards applications. Implementation of image fusion is essential for the identification of particles of interest that fall below the spatial resolution of the SIMS images.

  13. CT-MR image data fusion for computer assisted navigated neurosurgery of temporal bone tumors

    Energy Technology Data Exchange (ETDEWEB)

    Nemec, Stefan Franz [Department of Radiology/Osteology, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria)]. E-mail: stefan.nemec@meduniwien.ac.at; Donat, Markus Alexander [Department of Neurosurgery, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria); Mehrain, Sheida [Department of Radiology/Osteology, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria); Friedrich, Klaus [Department of Radiology/Osteology, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria); Krestan, Christian [Department of Radiology/Osteology, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria); Matula, Christian [Department of Neurosurgery, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria); Imhof, Herwig [Department of Radiology/Osteology, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria); Czerny, Christian [Department of Radiology/Osteology, Medical University Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria)

    2007-05-15

    Purpose: To demonstrate the value of multidetector computed tomography (MDCT) and magnetic resonance imaging (MRI) in the preoperative work-up of temporal bone tumors and, especially, to present CT and MR image fusion for surgical planning and performance in computer-assisted navigated neurosurgery of temporal bone tumors. Materials and methods: Fifteen patients with temporal bone tumors underwent MDCT and MRI. MDCT was performed in a high-resolution bone window level setting in the axial plane; the reconstructed MDCT slice thickness was 0.8 mm. MRI was performed in the axial and coronal planes with T2-weighted fast spin-echo (FSE) sequences, un-enhanced and contrast-enhanced T1-weighted spin-echo (SE) sequences, coronal T1-weighted SE sequences with fat suppression, and 3D T1-weighted gradient-echo (GE) contrast-enhanced sequences in the axial plane. The 3D T1-weighted GE sequence had a slice thickness of 1 mm. Image data sets of the CT and 3D T1-weighted GE sequences were merged on a workstation to create CT-MR fusion images. MDCT and MR images were used separately to depict and characterize lesions. The fusion images were utilized for interventional planning and intraoperative image guidance. The intraoperative accuracy of the navigation unit was measured, defined as the deviation between the same landmark in the navigation image and the patient. Results: Tumorous lesions of bone and soft tissue were well delineated and characterized by the CT and MR images. The images played a crucial role in differentiating the benign and malignant pathologies, which comprised 13 benign and 2 malignant tumors. The CT-MR fusion images supported the surgeon in preoperative planning and improved surgical performance. The mean intraoperative accuracy of the navigation system was 1.25 mm. Conclusion: CT and MRI are essential in the preoperative work-up of temporal bone tumors. CT-MR image data fusion presents an accurate tool for planning the correct surgical procedure and is a

  14. A PRELIMINARY STUDY ON COMPARISON AND FUSION OF METABOLIC IMAGES OF PET WITH ANATOMIC IMAGES OF CT AND MRI

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Objective. To compare and match metabolic images of PET with anatomic images of CT and MRI. Methods. The CT or MRI images of the patients were obtained with a photo scanner and then transferred to the remote workstation of the PET scanner on a floppy disk. A fusion method was developed to match the 2-dimensional CT or MRI slices with the corresponding slices of the 3-dimensional volume PET images. Results. Twenty-nine metabolically changed foci were accurately localized in the MRI images of 21 epilepsy patients, while MRI alone had only 6 true positive findings. In 53 cancer or suspected-cancer patients, 53 positive lesions detected by PET were compared and matched with the corresponding lesions in CT or MRI images, of which 10 lesions had been missed. On the other hand, 23 lesions detected in the patients' CT or MRI images were negative or showed low uptake in the PET images, and were finally proven benign. Conclusions. Comparing and matching metabolic images with anatomic images helped obtain a full understanding of the lesion and its peripheral structures. The fusion method is simple, practical and useful for localizing metabolically changed lesions.

  15. An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing

    Science.gov (United States)

    Zhang, Qiong; Maldague, Xavier

    2016-01-01

    A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, a compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion of infrared and visible images. Compared with wavelet, contourlet, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale and multi-direction representation and translation invariance. A fuzzy set is characterized by its membership function (MF), and the well-known Gaussian fuzzy membership degree can be introduced to establish adaptive control of the fusion processing. The compressed sensing technique sparsely samples the image information at a certain sampling rate, and the sparse signal can be recovered by solving a convex problem with a gradient descent based iterative algorithm. In the proposed fusion process, the pre-enhanced infrared image and the visible image are first decomposed into low-frequency and high-frequency subbands via the NSCT method. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the remaining high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered with the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. The efficiency and robustness are also analyzed and discussed through different evaluation methods, such as standard deviation, Shannon entropy, root-mean-square error, mutual information and the edge-based similarity index.

  16. Hyperspectral Image Classification Based on the Weighted Probabilistic Fusion of Multiple Spectral-spatial Features

    Directory of Open Access Journals (Sweden)

    ZHANG Chunsen

    2015-08-01

    Full Text Available A hyperspectral image classification method based on the weighted probabilistic fusion of multiple spectral-spatial features is proposed in this paper. First, the minimum noise fraction (MNF) approach is employed to reduce the dimension of the hyperspectral image and extract its spectral feature; this spectral feature is then combined with the texture feature extracted via the gray-level co-occurrence matrix (GLCM), the multi-scale morphological feature extracted with the OFC operator, and the endmember feature extracted by the sequential maximum angle convex cone (SMACC) method to form three spectral-spatial features. A support vector machine (SVM) classifier is then applied to each spectral-spatial feature separately. Finally, we establish a weighted probabilistic fusion model and apply it to fuse the SVM outputs into the final classification result. The proposed method was verified on ROSIS and AVIRIS images, reaching overall accuracies of 97.65% and 96.62% respectively. The results indicate that the proposed method not only overcomes the limitations of traditional single-feature hyperspectral image classification, but is also superior to the conventional VS-SVM method and plain probabilistic fusion, effectively improving the classification accuracy of hyperspectral images.
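
    A minimal sketch of the final fusion step, assuming per-feature SVM posterior maps are already computed (e.g. with scikit-learn's SVC(probability=True)); the paper derives the weights from per-feature reliability, whereas here they are user-supplied:

      import numpy as np

      def weighted_probabilistic_fusion(prob_maps, weights):
          """prob_maps: list of (n_pixels, n_classes) posterior arrays,
          one per spectral-spatial feature; weights: per-feature
          reliabilities (illustrative stand-ins for the paper's
          accuracy-derived weighting)."""
          weights = np.asarray(weights, dtype=float)
          weights /= weights.sum()
          fused = sum(w * p for w, p in zip(weights, prob_maps))
          return fused.argmax(axis=1)      # final class label per pixel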

  17. Multi-focus Image Fusion Algorithms

    Institute of Scientific and Technical Information of China (English)

    张攀

    2012-01-01

    Multi-focus image fusion combines two or more images of the same scene, each focused on different targets, into a single clear image in which all targets are in focus. Representative multi-focus image fusion methods are based on swarm intelligence algorithms, such as the genetic algorithm (GA) and particle swarm optimization (PSO), which have achieved good results. A major current research direction is the optimization and improvement of these swarm intelligence algorithms to accelerate image fusion.

  18. Examplers based image fusion features for face recognition

    CERN Document Server

    James, Alex Pappachen

    2012-01-01

    Examplers of a face are formed from multiple gallery images of a person and are used in the classification of a test image. We incorporate such examplers into a biologically inspired face recognition method based on local binary decisions on similarity. As opposed to single-model approaches such as face averages, the exampler-based approach yields higher recognition accuracies and better stability. Using multiple training samples per person, the method shows the following recognition accuracies: 99.0% on AR, 99.5% on FERET, 99.5% on ORL, 99.3% on EYALE, 100.0% on YALE and 100.0% on CALTECH face databases. In addition to face recognition, the method also detects the natural variability in face images, which can find application in automatic tagging of face images.

  19. A data-distributed parallel algorithm for wavelet-based fusion of remote sensing images

    Institute of Scientific and Technical Information of China (English)

    YANG Xuejun; WANG Panfeng; DU Yunfei; ZHOU Haifang

    2007-01-01

    With the increasing importance of multiplatform remote sensing missions, the fast integration or fusion of digital images from disparate sources has become critical to the success of these endeavors. In this paper, to speed up the fusion process, a Data-distributed Parallel Algorithm for wavelet-based Fusion (DPAF for short) of remote sensing images that are not geo-registered is presented for the first time. To overcome the limitations on memory space as well as the computing capability of a single processor, data distribution, data-parallel processing and load balancing techniques are integrated into DPAF. To avoid the inherent communication overhead of a wavelet-based fusion method, a special design called redundant partitioning is used, inspired by the characteristics of the wavelet transform. Finally, DPAF is evaluated in theory and tested on a 32-CPU cluster of workstations. The experimental results show that our algorithm has good parallel performance and scalability.

  20. Multispectral image feature fusion for detecting land mines

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G.A.; Fields, D.J.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1994-11-15

    Our system fuses information contained in registered images from multiple sensors to reduce the effect of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six visible wavelength bands, dual-band infrared (5 micron and 10 micron) and ground-penetrating radar. Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite detects a variety of physical properties that are more separable in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, holes made by animals and natural processes, etc.) and some artifacts.

  1. Product Image Classification Based on Fusion Features

    Institute of Scientific and Technical Information of China (English)

    YANG Xiao-hui; LIU Jing-jing; YANG Li-jun

    2015-01-01

    Two key challenges raised by a product image classification system are classification precision and classification time. In some categories, the classification precision of the latest techniques in product image classification systems is still low. In this paper, we propose a local texture descriptor termed the fan refined local binary pattern, which captures more detailed information by integrating the spatial distribution into the local binary pattern feature. We compare our approach with different methods on a subset of product images from Amazon/eBay and parts of PI100, and experimental results demonstrate that our proposed approach is superior to existing methods. The highest classification precision is increased by 21% and the average classification time is reduced by 2/3.
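
    As an illustration of refining LBP with spatial distribution information, the sketch below computes uniform-LBP histograms per angular sector about the image center and concatenates them; this sector scheme is only a guess at the "fan" construction, not the paper's exact descriptor:

      import numpy as np
      from skimage.feature import local_binary_pattern

      def sector_refined_lbp(img, P=8, R=1, sectors=4):
          """Uniform-LBP histograms per angular sector, concatenated so
          the descriptor retains spatial distribution information."""
          lbp = local_binary_pattern(img, P, R, method="uniform")
          h, w = img.shape
          yy, xx = np.mgrid[0:h, 0:w]
          ang = np.arctan2(yy - h / 2.0, xx - w / 2.0)
          edges = np.linspace(-np.pi, np.pi, sectors + 1)
          feats = []
          for i in range(sectors):
              mask = (ang >= edges[i]) & (ang < edges[i + 1])
              hist, _ = np.histogram(lbp[mask], bins=np.arange(P + 3))
              feats.append(hist / max(mask.sum(), 1))   # normalized histogram
          return np.concatenate(feats)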

  2. Infrared and Microwave Image Fusion for Rainfall Detection over Northern Algeria

    Directory of Open Access Journals (Sweden)

    Fethi Ouallouche

    2014-05-01

    Full Text Available The rain-area delineation proposed in this paper is based on the fusion of images from the geostationary Meteosat Second Generation (MSG) satellite and the low-earth-orbiting passive Tropical Rainfall Measuring Mission (TRMM) satellite. The fusion technique described in this work uses an artificial neural network (ANN) developed to detect instantaneous rainfall from the IR images of the MSG satellite and from the TRMM Microwave Imager (TMI). The study covers northern Algeria. Seven spectral parameters are used as ANN inputs to identify raining and non-raining pixels; the corresponding raining/non-raining reference labels are taken from the TRMM precipitation radar (PR). Results from the developed scheme are compared with those of the scattering index (SI) method, taken as the reference method. The results show that the developed model performs very well and overcomes the deficiencies of using a single satellite.

  3. A Two-Step Decision Fusion Strategy: Application to Hyperspectral and Multispectral Images for Urban Classification

    Science.gov (United States)

    Ouerghemmi, W.; Le Bris, A.; Chehata, N.; Mallet, C.

    2017-05-01

    Very high spatial resolution multispectral images and lower spatial resolution hyperspectral images are complementary sources for urban object classification. The first enables a fine delineation of objects, while the second can better discriminate classes and capture richer land-cover semantics. This paper presents a decision fusion scheme that takes advantage of the classification maps of both sources to produce a better classification map. The proposed method addresses both semantic and spatial uncertainties and consists of two steps. First, class membership maps are merged at the pixel level; several fusion rules are considered and compared in this study. Second, the classification is obtained by global regularization of a graphical model involving a fit-to-data term related to the class membership measures and an image-based, contrast-sensitive regularization term. Results are presented on three datasets. The classification accuracy is improved by up to 5% compared with the best single-source classification accuracy.
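
    A sketch of the first step (pixel-level merging of class-membership maps) under illustrative rule names; the second, graph-regularization step is not shown:

      import numpy as np

      def fuse_membership_maps(P_hs, P_ms, rule="product"):
          """Fuse class-membership maps from the hyperspectral and
          multispectral classifiers (each (n_pixels, n_classes)); the
          rule names are stand-ins for the rules compared in the paper."""
          if rule == "product":            # Bayesian product rule
              fused = P_hs * P_ms
          elif rule == "min":              # conservative minimum rule
              fused = np.minimum(P_hs, P_ms)
          else:                            # mean rule
              fused = 0.5 * (P_hs + P_ms)
          return fused / (fused.sum(axis=1, keepdims=True) + 1e-12)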

  4. Multi-focus Image Fusion Using Epifluorescence Microscopy for Robust Vascular Segmentation

    Science.gov (United States)

    Pelapur, Rengarajan; Prasath, V. B. Surya; Bunyak, Filiz; Glinskii, Olga V.; Glinsky, Vladislav V.; Huxley, Virginia H.; Palaniappan, Kannappan

    2015-01-01

    Automatic segmentation of three-dimensional microvascular structures is needed for quantifying morphological changes to blood vessels during development, disease and treatment processes. Single-focus two-dimensional epifluorescent imagery leads to unsatisfactory segmentations due to multiple out-of-focus vessel regions that have blurred edge structures and lack detail. Additional segmentation challenges include varying contrast levels due to diffusivity of the lectin stain, leakage out of vessels, and fine morphological vessel structure. We propose an approach for vessel segmentation that combines multi-focus image fusion with robust adaptive filtering. The robust adaptive filtering scheme handles noise without destroying small structures, while multi-focus image fusion considerably improves segmentation quality by deblurring out-of-focus regions through incorporating 3D structure information from multiple focus steps. Experiments using epifluorescence images of mouse dura mater show an average improvement of 30.4% over single-focus microvasculature segmentation. PMID:25571050

  5. 3-D MRI/CT fusion imaging of the lumbar spine

    Energy Technology Data Exchange (ETDEWEB)

    Yamanaka, Yuki; Kamogawa, Junji; Misaki, Hiroshi; Kamada, Kazuo; Okuda, Shunsuke; Morino, Tadao; Ogata, Tadanori; Yamamoto, Haruyasu [Ehime University, Department of Bone and Joint Surgery, Toon-shi, Ehime (Japan); Katagi, Ryosuke; Kodama, Kazuaki [Katagi Neurological Surgery, Imabari-shi, Ehime (Japan)

    2010-03-15

    The objective was to demonstrate the feasibility of MRI/CT fusion in depicting lumbar nerve root compromise. We combined 3-dimensional (3-D) computed tomography (CT) imaging of bone with 3-D magnetic resonance imaging (MRI) of neural architecture (cauda equina and nerve roots) for two patients using VirtualPlace software. Although the pathological condition of the nerve roots could not be assessed using MRI, myelography or CT myelography, 3-D MRI/CT fusion imaging enabled unambiguous, 3-D confirmation of the pathological state and courses of the nerve roots, both inside and outside the foraminal arch, as well as thickening of the ligamentum flavum and the locations, forms and numbers of dorsal root ganglia. Positional relationships between intervertebral discs or bony spurs and nerve roots could also be depicted. Use of 3-D MRI/CT fusion imaging of the lumbar vertebral region successfully revealed the relationship between bone construction (bones, intervertebral joints, and intervertebral disks) and neural architecture (cauda equina and nerve roots) on a single film, three-dimensionally and in color. Such images may be useful in elucidating complex neurological conditions such as degenerative lumbar scoliosis (DLS), as well as in diagnosis and the planning of minimally invasive surgery. (orig.)

  6. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    Science.gov (United States)

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure whereby the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as graph regularization. Then, combining group sparsity and graph regularization, DL-GSGR is presented and solved by alternating group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.

  7. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Angel D. Sappa

    2016-06-01

    Full Text Available This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define criteria for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).

  8. Image Fusion Based on the Self-Organizing Feature Map Neural Networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhaoli; SUN Shenghe

    2001-01-01

    This paper presents a new image data fusion scheme based on self-organizing feature map (SOFM) neural networks. The scheme consists of three steps: (1) pre-processing of the images, where weighted median filtering removes part of the noise components corrupting the image; (2) pixel clustering for each image using two-dimensional self-organizing feature map neural networks; and (3) fusion of the images obtained in Step (2) utilizing fuzzy logic, which suppresses the residual noise components and thus further improves the image quality. Such a three-step combination offers an impressive effectiveness and performance improvement, which is confirmed by simulations involving three image sensors (each of which has a different noise structure).

  9. An efficient registration and fusion algorithm for large misalignment remote sensing images

    Science.gov (United States)

    Li, Lingling; Li, Cuihua; Zeng, Xiaoming; Li, Bao

    2007-11-01

    In this paper, an efficient technique for automatic registration and fusion of remote sensing images with large misalignment is proposed. It complements SIFT features with Harris-affine features, uses the ratio of the first to the second nearest-neighbor distance to set up initial correspondences, and then uses the affine invariance of the Mahalanobis distance to remove mismatched feature points. From these point correspondences, the affine matrix between the two images can be determined. All points in the sensed image are mapped to the reference image using the estimated transformation matrix, and the corresponding gray levels are assigned by re-sampling the sensed image. Finally, we extend Burt's match and saliency metrics and use neighborhood spatial frequency to fuse the registered reference and sensed remote sensing images in the NSCT domain. Experiments on remote sensing images with large misalignment demonstrate the superb performance of the algorithm.
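
    A condensed version of the registration stage using OpenCV, assuming grayscale inputs; the Harris-affine features and the Mahalanobis-distance mismatch filter of the paper are replaced here by a plain RANSAC affine estimate:

      import cv2
      import numpy as np

      def register_affine_sift(ref, mov, ratio=0.8):
          """SIFT matching with the nearest/second-nearest distance ratio
          test, followed by a robust affine estimate and re-sampling of
          the sensed image into the reference frame."""
          sift = cv2.SIFT_create()
          k1, d1 = sift.detectAndCompute(ref, None)
          k2, d2 = sift.detectAndCompute(mov, None)
          pairs = cv2.BFMatcher().knnMatch(d2, d1, k=2)
          good = [m for m, n in pairs if m.distance < ratio * n.distance]
          src = np.float32([k2[m.queryIdx].pt for m in good])
          dst = np.float32([k1[m.trainIdx].pt for m in good])
          A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
          return cv2.warpAffine(mov, A, (ref.shape[1], ref.shape[0]))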

  10. Multimodal Medical Image Fusion Framework Based on Simplified PCNN in Nonsubsampled Contourlet Transform Domain

    Directory of Open Access Journals (Sweden)

    Nianyi Wang

    2013-06-01

    Full Text Available In this paper, we present a new medical image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the spiking cortical model (SCM). The flexible multi-resolution, anisotropy, and directional expansion characteristics of NSCT are combined with the global coupling and pulse synchronization features of SCM. Considering the characteristics of the human visual system, two different fusion rules are used to fuse the low- and high-frequency sub-bands respectively. First, the maximum selection rule (MSR) is used to fuse the low-frequency coefficients. Second, spatial frequency (SF) is applied to motivate the SCM network rather than using coefficient values directly, and the time matrix of SCM is then set as the criterion for selecting coefficients of the high-frequency subbands. The effectiveness of the proposed algorithm is demonstrated by comparison with existing fusion methods.

  11. A spectral-spatial fusion model for robust blood pulse waveform extraction in photoplethysmographic imaging

    CERN Document Server

    Amelard, Robert; Wong, Alexander

    2016-01-01

    Photoplethysmographic imaging is a camera-based solution for non-contact cardiovascular monitoring from a distance. This technology enables monitoring in situations where contact-based devices may be problematic or infeasible, such as ambulatory, sleep, and multi-individual monitoring. However, extracting the blood pulse waveform signal is challenging due to the unknown mixture of relevant (pulsatile) and irrelevant pixels in the scene. Here, we design and implement a signal fusion framework, FusionPPG, for extracting a blood pulse waveform signal with strong temporal fidelity from a scene without requiring anatomical priors (e.g., facial tracking). The extraction problem is posed as a Bayesian least squares fusion problem and solved using a novel probabilistic pulsatility model that incorporates both physiologically derived spectral and spatial waveform priors to identify pulsatility characteristics in the scene. Experimental results show statistically significant improvements compared to the FaceMeanPPG ...

  12. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    Science.gov (United States)

    V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing, mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme without the projected Landweber (PL) step and with other existing CS-based fusion approaches, the proposed method outperforms them even with fewer samples.
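
    A bare-bones sketch of projected-Landweber recovery with wavelet soft-thresholding, assuming a dense sampling matrix Phi with unit-norm rows (so a step size of 1 is stable); the block structure, Wiener smoothing step and adaptive threshold of the full SPL algorithm are omitted:

      import numpy as np
      import pywt

      def spl_recover(y, Phi, shape, wavelet="db4", lam=0.05, iters=100):
          """Recover an image of the given shape from CS measurements
          y = Phi @ x by alternating a Landweber step toward the
          measurements with wavelet-domain soft-thresholding."""
          x = Phi.T @ y                               # back-projection init
          for _ in range(iters):
              x = x + Phi.T @ (y - Phi @ x)           # Landweber step
              coeffs = pywt.wavedec2(x.reshape(shape), wavelet, level=3)
              arr, sl = pywt.coeffs_to_array(coeffs)
              arr = np.sign(arr) * np.maximum(np.abs(arr) - lam, 0.0)
              rec = pywt.waverec2(
                  pywt.array_to_coeffs(arr, sl, output_format="wavedec2"),
                  wavelet)
              x = rec[:shape[0], :shape[1]].ravel()   # crop padding, flatten
          return x.reshape(shape)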

  13. A tri-modality image fusion method for target delineation of brain tumors in radiotherapy.

    Directory of Open Access Journals (Sweden)

    Lu Guo

    Full Text Available To develop a tri-modality image fusion method for better target delineation in image-guided radiotherapy for patients with brain tumors. A new method of tri-modality image fusion was developed which can fuse and display all image sets in one panel and one operation, and a feasibility study in gross tumor volume (GTV) delineation was conducted using data from three patients with brain tumors, comprising simulation CT, MRI, and 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) examinations before radiotherapy. Tri-modality image fusion was implemented after image registrations of CT+PET and CT+MRI, and the transparency weight of each modality could be adjusted and set by users. Three radiation oncologists delineated GTVs for all patients using dual-modality (MRI/CT) and tri-modality (MRI/CT/PET) image fusion respectively. Inter-observer variation was assessed by the coefficient of variation (COV), the average distance between surface and centroid (ADSC), and the local standard deviation (SDlocal). Analysis of COV was also performed to evaluate intra-observer volume variation. The inter-observer variation analysis showed that the mean COV was 0.14 (±0.09) and 0.07 (±0.01) for dual-modality and tri-modality respectively; the standard deviation of ADSC was significantly reduced (p<0.05) with tri-modality; SDlocal averaged over the median GTV surface was reduced in patient 2 (from 0.57 cm to 0.39 cm) and patient 3 (from 0.42 cm to 0.36 cm) with the new method. The intra-observer volume variation was also significantly reduced (p = 0.00) with the tri-modality method as compared with the dual-modality method. With the new tri-modality image fusion method, smaller inter- and intra-observer variation in GTV definition for brain tumors can be achieved, which improves the consistency and accuracy of target delineation in individualized radiotherapy.

  14. A tri-modality image fusion method for target delineation of brain tumors in radiotherapy.

    Science.gov (United States)

    Guo, Lu; Shen, Shuming; Harris, Eleanor; Wang, Zheng; Jiang, Wei; Guo, Yu; Feng, Yuanming

    2014-01-01

    To develop a tri-modality image fusion method for better target delineation in image-guided radiotherapy for patients with brain tumors. A new method of tri-modality image fusion was developed which can fuse and display all image sets in one panel and one operation, and a feasibility study in gross tumor volume (GTV) delineation was conducted using data from three patients with brain tumors, comprising simulation CT, MRI, and 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) examinations before radiotherapy. Tri-modality image fusion was implemented after image registrations of CT+PET and CT+MRI, and the transparency weight of each modality could be adjusted and set by users. Three radiation oncologists delineated GTVs for all patients using dual-modality (MRI/CT) and tri-modality (MRI/CT/PET) image fusion respectively. Inter-observer variation was assessed by the coefficient of variation (COV), the average distance between surface and centroid (ADSC), and the local standard deviation (SDlocal). Analysis of COV was also performed to evaluate intra-observer volume variation. The inter-observer variation analysis showed that the mean COV was 0.14 (±0.09) and 0.07 (±0.01) for dual-modality and tri-modality respectively; the standard deviation of ADSC was significantly reduced (p<0.05) with tri-modality; SDlocal averaged over the median GTV surface was reduced in patient 2 (from 0.57 cm to 0.39 cm) and patient 3 (from 0.42 cm to 0.36 cm) with the new method. The intra-observer volume variation was also significantly reduced (p = 0.00) with the tri-modality method as compared with the dual-modality method. With the new tri-modality image fusion method, smaller inter- and intra-observer variation in GTV definition for brain tumors can be achieved, which improves the consistency and accuracy of target delineation in individualized radiotherapy.

  15. Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents

    Science.gov (United States)

    Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam

    2017-01-01

    The main issue for vision-based automatic harvesting manipulators is the difficulty of correct fruit identification in images under natural lighting conditions. Most solutions have been based on a linear combination of color components in multispectral images, but the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method that augments the original color image with a synchronized near-infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components with the homogeneity of the near-infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F-measure in comparison with existing methods that use linear combinations of color components. The results show that fusing information from different spectral components enhances image quality and therefore improves classification accuracy for citrus fruit identification under natural lighting conditions. PMID:28098797

  16. Evaluation of electrode position in deep brain stimulation by image fusion (MRI and CT)

    Energy Technology Data Exchange (ETDEWEB)

    Barnaure, I.; Lovblad, K.O.; Vargas, M.I. [Geneva University Hospital, Department of Neuroradiology, Geneva 14 (Switzerland); Pollak, P.; Horvath, J.; Boex, C.; Burkhard, P. [Geneva University Hospital, Department of Neurology, Geneva (Switzerland); Momjian, S. [Geneva University Hospital, Department of Neurosurgery, Geneva (Switzerland); Remuinan, J. [Geneva University Hospital, Department of Radiology, Geneva (Switzerland)

    2015-09-15

    Imaging has an essential role in evaluating the correct positioning of electrodes implanted for deep brain stimulation (DBS). Although MRI offers superior anatomic visualization of target sites, there are safety concerns in patients with implanted material, and imaging guidelines are inconsistent. The fusion of postoperative CT with preoperative MRI images can be an alternative for the assessment of electrode positioning. The purpose of this study was to assess the accuracy of measurements made on fused images (acquired without a stereotactic frame) using manufacturer-provided software. Data from 23 Parkinson's disease patients who underwent bilateral electrode placement for subthalamic nucleus (STN) DBS were acquired. Preoperative high-resolution T2-weighted sequences at 3 T and postoperative CT series were fused using commercially available software. The electrode tip position was measured on the fused images in three directions (relative to the midline, the AC-PC line, and a line orthogonal to the AC-PC line, respectively) and compared with measurements made on postoperative 3D T1 images acquired at 1.5 T. Mean differences between measurements on fused images and on postoperative MRI lay between 0.17 and 0.97 mm. Fusion of CT and MRI images provides a safe and fast technique for postoperative assessment of electrode position in DBS. (orig.)

  17. A New Fusion Technique of Remote Sensing Images for Land Use/Cover

    Institute of Scientific and Technical Information of China (English)

    WU Lian-Xi; SUN Bo; ZHOU Sheng-Lu; HUANG Shu-E; ZHAO Qi-Guo

    2004-01-01

    In China, accelerating industrialization and urbanization following high-speed economic development and population increases have greatly impacted land use/cover changes, making it imperative to obtain accurate and up-to-date information on these changes so as to evaluate their environmental effects. The major purpose of this study was to develop a new method to fuse lower spatial resolution multispectral satellite images with higher spatial resolution panchromatic ones to assist in land use/cover mapping. An algorithm for a new fusion method, edge-enhancement intensity modulation (EEIM), was proposed to merge two optical image data sets of different spectral ranges. The results showed that the EEIM image was quite similar in color to the lower-resolution multispectral images, and the fused product better preserved spectral information. Compared with conventional approaches, the spectral distortion of the fused images was thus markedly reduced. The EEIM fusion method can therefore be used to fuse remote sensing data from the same or different sensors, including TM images and SPOT5 panchromatic images, providing high-quality land use/cover images.
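
    The sketch below shows generic intensity-modulation pan-sharpening in the spirit of EEIM, assuming the multispectral bands are already resampled to the panchromatic grid; the paper's exact EEIM algorithm, including its edge-enhancement term, may differ:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def intensity_modulation_fuse(ms, pan, sigma=2.0):
          """Modulate each multispectral band by the ratio of the
          panchromatic image to its low-pass version, injecting PAN edge
          detail while roughly preserving band spectra."""
          pan = pan.astype(float)
          gain = pan / (gaussian_filter(pan, sigma) + 1e-6)
          return np.stack([band.astype(float) * gain for band in ms])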

  18. Fusion of infrared and visible images based on saliency scale-space in frequency domain

    Science.gov (United States)

    Chen, Yanfei; Sang, Nong; Dan, Zhiping

    2015-12-01

    A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. Human attention is directed towards salient targets, which convey the most important information in the image. For the given registered infrared and visible images, visual features are first extracted to form the input hypercomplex matrix. Second, the hypercomplex Fourier transform (HFT) is used to obtain the salient regions of the infrared and visible images respectively: the amplitude spectrum of the input hypercomplex matrix is convolved with low-pass Gaussian kernels of appropriate scales, which is equivalent to an image saliency detector, and saliency maps are obtained by reconstructing the 2D signal from the original phase and the filtered amplitude spectrum at the scale selected by minimizing saliency-map entropy. Third, the salient regions are fused with adaptive weighting fusion rules, the non-salient regions are fused with a rule based on region energy (RE) and region sharpness (RS), and the fused image is obtained. Experimental results show that the presented algorithm retains the rich spectral information of the visible image and effectively captures thermal-target information at different scales of the infrared image.
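
    A single-channel analogue of this saliency detector (the full method operates on a hypercomplex matrix of visual features), assuming NumPy and SciPy:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def scale_space_spectral_saliency(img, scales=(1, 2, 4, 8)):
          """Smooth the amplitude spectrum with Gaussian kernels of
          increasing scale, rebuild the image from the original phase and
          the filtered amplitude, and keep the map of minimum entropy."""
          F = np.fft.fft2(img.astype(float))
          amp, phase = np.abs(F), np.angle(F)
          best, best_h = None, np.inf
          for s in scales:
              amp_s = gaussian_filter(amp, s)           # low-pass spectrum
              smap = np.abs(np.fft.ifft2(amp_s * np.exp(1j * phase))) ** 2
              smap = gaussian_filter(smap, 3.0)         # smooth in image domain
              p = smap / smap.sum()
              h = -np.sum(p * np.log(p + 1e-12))        # map entropy
              if h < best_h:
                  best, best_h = smap, h
          return best / best.max()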

  19. Multiple color-image fusion and watermarking based on optical interference and wavelet transform

    Science.gov (United States)

    Abuturab, Muhammad Rafiq

    2017-02-01

    A novel method for multiple color-image fusion and watermarking using optical interference and the wavelet transform is proposed. In this method, each secret color image is encoded into three phase-only masks (POMs). One POM is constructed as the user identity key, and the other two POMs are generated as the user identity key modulated by the corresponding secret color image in the gyrator transform domain, without any time-consuming iterative computations or post-processing of the POMs to remove the inherent silhouette problem. The R, G, and B channels of the different user-identity-key POMs are then individually multiplied to obtain three multiplexed POMs, which are used as encrypted images. Similarly, the R, G, and B channels of the other two POMs are independently multiplied to obtain two further sets of three multiplexed POMs. The encrypted images are fused with a gray-level cover image to produce the final encrypted image as the watermarked image. The secret color images are shielded by the encrypted images (which carry no information about the secret images) as well as by the cover image (which reveals no information about the encrypted images). These two remarkable features drastically reduce the probability that the encrypted images can be searched and attacked. Each individual user has an identity key and two phase-only keys as three decryption keys, besides the transformation angles, which serve as additional keys. Theoretical analysis and numerical simulation results validate the feasibility of the proposed method.

  20. Prediction of olive oil sensory descriptors using instrumental data fusion and partial least squares (PLS) regression.

    Science.gov (United States)

    Borràs, Eva; Ferré, Joan; Boqué, Ricard; Mestres, Montserrat; Aceña, Laura; Calvo, Angels; Busto, Olga

    2016-08-01

    Headspace mass spectrometry (HS-MS), Fourier transform mid-infrared spectroscopy (FT-MIR) and UV-visible spectrophotometry (UV-vis) instrumental responses were combined to predict virgin olive oil sensory descriptors. 343 olive oil samples analyzed over four consecutive harvests (2010-2014) were used to build multivariate calibration models using partial least squares (PLS) regression. The reference values of the sensory attributes were provided by expert assessors from an official taste panel. The instrumental data were modeled individually and also using data fusion approaches. The use of fused data at both low and mid levels of abstraction improved PLS predictions for all olive oil descriptors. The best PLS models were obtained for two positive attributes (fruity and bitter) and two defective descriptors (fusty and musty), all of them using data fusion of the MS and MIR spectral fingerprints. Although good predictions were not obtained for some sensory descriptors, the results are encouraging, especially considering that the legal categorization of virgin olive oils requires only the determination of fruity and defective descriptors.
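
    A low-level data-fusion PLS model can be sketched with scikit-learn as below; the arrays are synthetic stand-ins for the real HS-MS, FT-MIR and UV-vis matrices and the panel's fruity scores:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.preprocessing import StandardScaler

      # Synthetic stand-ins for the three instrumental blocks and the
      # sensory reference values (343 samples, arbitrary variable counts)
      rng = np.random.default_rng(0)
      X_ms, X_mir, X_uv = (rng.normal(size=(343, p)) for p in (150, 600, 80))
      y_fruity = rng.normal(size=(343, 1))

      # Low-level fusion: autoscale each block, then concatenate
      X_fused = np.hstack([StandardScaler().fit_transform(X)
                           for X in (X_ms, X_mir, X_uv)])
      pls = PLSRegression(n_components=8).fit(X_fused, y_fruity)
      y_hat = pls.predict(X_fused)       # in practice: cross-validated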

  1. Numerical models for the prediction of failure for multilayer fusion Al-alloy sheets

    Energy Technology Data Exchange (ETDEWEB)

    Gorji, Maysam; Berisha, Bekim; Hora, Pavel [ETH Zurich, Institute of Virtual Manufacturing, Zurich (Switzerland); Timm, Jürgen [Novelis Switzerland SA, 3960 Sierre (Switzerland)

    2013-12-16

    Initiation and propagation of cracks in monolithic and multi-layer aluminum alloys, called "Fusion", are investigated. 2D plane-strain finite element simulations are performed to model deformation due to bending and to predict failure. For this purpose, fracture strains are measured from microscopic images of Nakajima specimens. In addition, the microstructure of the materials is taken into account by introducing a random grain distribution over the sheet thickness as well as a random distribution of the measured yield curve. It is shown that the performed experiments and the introduced FE model are appropriate methods to highlight the advantages of the Fusion material, especially for bending processes.

  2. Calculated Lattice Energies of Energetic Materials in a Prediction of their Heats of Fusion and Sublimation

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The paper specifies an unambiguous basic relationship between the published results of ab initio calculations of lattice energies, EL, and heats of sublimation, ΔHs, of individual energetic materials. When the ΔHs value in this relationship is replaced by the heat of fusion, ΔHm,tr, the unambiguity is lost, and the similarity of details of molecular structure becomes of decisive importance. The resulting partial relationships, together with the basic relationship, have been used to predict the ΔHs and ΔHm,tr values of technically attractive polynitro compounds.

  3. Ultrasound and fluoroscopic images fusion by autonomous ultrasound probe detection.

    Science.gov (United States)

    Mountney, Peter; Ionasec, Razvan; Kaizer, Markus; Mamaghani, Sina; Wu, Wen; Chen, Terrence; John, Matthias; Boese, Jan; Comaniciu, Dorin

    2012-01-01

    New minimally invasive interventions such as transcatheter valve procedures exploit multiple imaging modalities to guide tools (fluoroscopy) and visualize soft tissue (transesophageal echocardiography (TEE)). Currently, these complementary modalities are visualized in separate coordinate systems and on separate monitors, creating a challenging clinical workflow. This paper proposes a novel framework for fusing TEE and fluoroscopy by detecting the pose of the TEE probe in the fluoroscopic image. Probe pose detection is challenging in fluoroscopy, and conventional computer vision techniques are not well suited; current research requires manual initialization or the addition of fiducials. The main contribution of this paper is autonomous six-DoF pose detection that combines discriminative learning techniques with a fast binary template library. The pose estimation problem is reformulated to incrementally detect pose parameters by exploiting natural invariances in the image. The theoretical contribution of this paper is validated on synthetic, phantom and in vivo data. The practical application of this technique is supported by accurate results (< 5 mm in-plane error) and a computation time of 0.5 s.

  4. Adaptive Super-Spatial Prediction Approach For Lossless Image Compression

    Directory of Open Access Journals (Sweden)

    Arpita C. Raut,

    2014-04-01

    Full Text Available Existing prediction-based lossless image compression schemes predict image data from their spatial neighborhood, a technique that cannot predict high-frequency image structure components, such as edges, patterns, and textures, very well, which limits compression efficiency. To exploit these structure components, an adaptive super-spatial prediction approach is developed, able to compress high-frequency structure components in grayscale images. The motivation behind the proposed prediction approach comes from motion prediction in video coding: it attempts to find an optimal prediction of structure components within the previously encoded image regions. The approach outperforms CALIC (context-based adaptive lossless image coding) in terms of compression ratio and bit rate for image regions with significant structure components.
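
    The analogy to motion estimation can be made concrete with a toy block matcher: each block is predicted by the best-matching block inside the already-encoded region (here, everything strictly above the current block row). This is only an illustration of the search idea under an assumed block size and SAD criterion, not the paper's adaptive algorithm.

    ```python
    # Toy "super-spatial" predictor: exhaustive SAD search for the best
    # matching block in the previously encoded (upper) part of the image.
    import numpy as np

    def best_predictor(img, y, x, b=8):
        block = img[y:y + b, x:x + b].astype(np.float64)
        best, best_err = None, np.inf
        for py in range(0, y - b + 1):              # candidate rows fully above y
            for px in range(0, img.shape[1] - b + 1):
                cand = img[py:py + b, px:px + b].astype(np.float64)
                err = np.abs(block - cand).sum()    # sum of absolute differences
                if err < best_err:
                    best, best_err = (py, px), err
        # The encoder would then code the residual: block minus best candidate.
        return best, best_err
    ```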

  5. Application of Preoperative CT/MRI Image Fusion in Target Positioning for Deep Brain Stimulation

    Institute of Scientific and Technical Information of China (English)

    Yu Wang; Zi-yuan Liu; Wan-chen Dou; Wen-bin Ma; Ren-zhi Wang; Yi Guo

    2016-01-01

    Objective To explore the efficacy of target positioning by the preoperative CT/MRI image fusion technique in deep brain stimulation. Methods We retrospectively analyzed the clinical data and images of 79 cases (68 with Parkinson's disease, 11 with dystonia) who received preoperative CT/MRI image fusion for target positioning of the subthalamic nucleus in deep brain stimulation. Deviations of the implanted electrodes from the target nucleus were measured for each patient. Neurological evaluations of each patient before and after the treatment were performed and compared, and complications of the positioning and treatment were recorded. Results The mean deviations of the implanted electrodes on the X, Y, and Z axes were 0.5 mm, 0.6 mm, and 0.6 mm, respectively. Postoperative neurological evaluation scores on the unified Parkinson's disease rating scale (UPDRS) for Parkinson's disease and the Burke-Fahn-Marsden Dystonia Rating Scale (BFMDRS) for dystonia improved significantly compared with the preoperative scores (P<0.001). Complications occurred in 10.1% (8/79) of patients, the main side effects being dysarthria and diplopia. Conclusion Target positioning by the preoperative CT/MRI image fusion technique in deep brain stimulation has high accuracy and good clinical outcomes.

  6. Quantitative Characterization of Inertial Confinement Fusion Capsules Using Phase Contrast Enhanced X-Ray Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kozioziemski, B J; Koch, J A; Barty, A; Martz, H E; Lee, W; Fezzaa, K

    2004-05-07

    Current designs for inertial confinement fusion capsules for the National Ignition Facility (NIF) consist of a solid deuterium-tritium (D-T) fuel layer inside of a copper doped beryllium capsule. Phase contrast enhanced x-ray imaging is shown to render the D-T layer visible inside the Be(Cu) capsule. Phase contrast imaging is experimentally demonstrated for several surrogate capsules and validates computational models. Polyimide and low density divinyl benzene foam capsules were imaged at the Advanced Photon Source synchrotron. The surrogates demonstrate that phase contrast enhanced imaging provides a method to characterize surfaces when absorption imaging cannot be used. Our computational models demonstrate that a rough surface can be accurately reproduced in phase contrast enhanced x-ray images.

  7. Fast Image Retrieval of Textile Industrial Accessory Based on Multi-Feature Fusion

    Institute of Scientific and Technical Information of China (English)

    沈文忠; 杨杰

    2004-01-01

    A hierarchical retrieval scheme for the accessory image database is proposed, based on textile industrial accessory contour features and region features. First, the smallest enclosing rectangle feature [1] (degree of accessory coordination) is used to filter the image database and narrow the search scope. After the accessory contour information and region information are extracted, a fused multi-feature of the centroid-distance Fourier descriptor and the distance distribution histogram is adopted to complete image retrieval accurately. All the features above are invariant under translation, scaling and rotation. Results from a test on an image database of 1,000 accessory images demonstrate that the method is effective and practical, with high accuracy and fast speed.
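
    The centroid-distance Fourier descriptor used here is straightforward to compute. A minimal numpy sketch follows: subtracting the centroid removes translation, taking FFT magnitudes removes rotation and the contour start point, and normalizing by the DC term supplies scale invariance. The contour sampling and coefficient count are assumptions.

    ```python
    # Centroid-distance Fourier descriptor: a shape signature invariant to
    # translation, rotation, and (after normalization) scale.
    import numpy as np

    def centroid_distance_fd(contour, n_coeffs=16):
        centroid = contour.mean(axis=0)                  # removes translation
        r = np.linalg.norm(contour - centroid, axis=1)   # centroid-distance signature
        spectrum = np.abs(np.fft.fft(r))                 # magnitude drops rotation/start
        return spectrum[1:n_coeffs + 1] / spectrum[0]    # DC normalization drops scale

    # Example: a circle and its scaled, translated copy give the same descriptor.
    t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    c1 = np.c_[np.cos(t), np.sin(t)]
    c2 = 3.0 * c1 + 5.0
    print(np.allclose(centroid_distance_fd(c1), centroid_distance_fd(c2)))  # True
    ```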

  8. A DR-WFOI fusion system for the real-time molecular imaging in vivo

    Institute of Scientific and Technical Information of China (English)

    Kun Bi; Xiaochun Xu; Lei Xi; Shaoqun Zeng; Qingming Luo

    2008-01-01

    Digital radiography (DR) and whole-body fluorescent optical imaging (WFOI) have been widely applied in the field of molecular imaging, with advantages in tissue and functional imaging, and their integration contributes to drug development and discovery. We introduce a system that outperforms a comparable molecular imaging system manufactured by Kodak: it performs real-time small-animal imaging in vivo, at lower cost and with a shorter development cycle, on the LabVIEW platform. Finally, a demonstration experiment on a nude mouse bearing a green fluorescent protein (GFP) transgenic tumor is presented to show a real-time fused DR-WFOI image.

  9. Automatic Fusion of Hyperspectral Images and Laser Scans Using Feature Points

    Directory of Open Access Journals (Sweden)

    Xiao Zhang

    2015-01-01

    Full Text Available Automatic fusion of image datasets of different kinds is intractable because of their diverse imaging principles. This paper presents a novel method for the automatic fusion of two different kinds of images, 2D hyperspectral images acquired with a hyperspectral camera and 3D laser scans obtained with a laser scanner, without any other sensor. Only a few corresponding feature points are used, automatically extracted from a scene viewed by the two sensors. The feature-point extraction relies on the SURF algorithm and a camera model that converts a 3D laser scan into a 2D laser image, with pixel intensities defined by the attributes in the laser scan. The collinearity equations and the Direct Linear Transformation are then used to establish the initial correspondence between the two images, and an adjustment step computes corrections to eliminate errors. The experimental results show that the method is successfully validated with images collected by a hyperspectral camera and a laser scanner.
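
    The sketch below shows the feature-matching step with OpenCV. The paper uses SURF, which is patent-encumbered and absent from stock OpenCV builds, so ORB stands in here; the file names, matcher settings, and ratio threshold are placeholders.

    ```python
    # Feature-point matching between a hyperspectral band image and the 2-D
    # intensity image rendered from a laser scan (ORB standing in for SURF).
    import cv2

    hs_img = cv2.imread("hyperspectral_band.png", cv2.IMREAD_GRAYSCALE)
    laser_img = cv2.imread("laser_intensity.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(hs_img, None)
    kp2, des2 = orb.detectAndCompute(laser_img, None)

    # Hamming-distance matching with a ratio test to keep unambiguous pairs.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    print(f"{len(good)} candidate correspondences for the DLT initialization")
    ```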

  10. Remote sensing image classification based on block feature point density analysis and multiple-feature fusion

    Science.gov (United States)

    Li, Shijin; Jiang, Yaping; Zhang, Yang; Feng, Jun

    2015-10-01

    With the development of remote sensing (RS) and related technologies, the resolution of RS images is increasing. Compared with moderate- or low-resolution images, high-resolution ones can provide more detailed ground information. However, the various kinds of terrain have complex spatial distributions, and the different objects in high-resolution images exhibit a variety of features; these features are not equally effective, but some of them are complementary. Considering these characteristics, a new method is proposed to classify RS images based on hierarchical fusion of multiple features. First, RS images are pre-classified into two categories according to whether their feature points are uniformly or non-uniformly distributed. Then, the color histogram and Gabor texture features are extracted from the uniformly-distributed category, and the linear spatial pyramid matching using sparse coding (ScSPM) feature is obtained from the non-uniformly-distributed category. Finally, classification is performed by two support vector machine classifiers. Experimental results on a large RS image database of 2100 images show that the overall classification accuracy is boosted by 10.1% compared with the best single-feature classification method. Compared with other multiple-feature fusion methods, the proposed method achieves the highest classification accuracy on this dataset, 90.1%, and the time complexity of the algorithm is also greatly reduced.

  11. Documenting the location of prostate biopsies with image fusion

    Science.gov (United States)

    Turkbey, Baris; Xu, Sheng; Kruecker, Jochen; Locklin, Julia; Pang, Yuxi; Bernardo, Marcelino; Merino, Maria J.; Wood, Bradford J.; Choyke, Peter L.; Pinto, Peter A.

    2012-01-01

    OBJECTIVE To develop a system that documents the location of transrectal ultrasonography (TRUS)-guided prostate biopsies by fusing them to MRI scans obtained prior to biopsy, as the actual location of prostate biopsies is rarely known. PATIENTS AND METHODS Fifty patients (median age 61) with a median prostate-specific antigen (PSA) of 5.8 ng/ml underwent 3T endorectal coil MRI prior to biopsy. 3D TRUS images were obtained just prior to standard TRUS-guided 12-core sextant biopsies wherein an electromagnetic positioning device was attached to the needle guide and TRUS probe in order to track the position of each needle pass. The 3D-TRUS image documenting the location of each biopsy was fused electronically to the T2-weighted MRI. Each biopsy needle track was marked on the TRUS images and these were then transposed onto the MRI. Each biopsy site was classified pathologically as positive or negative for cancer and the Gleason score was determined. RESULTS The location of all (n = 605) needle biopsy tracks was successfully documented on the T2-weighted (T2W) MRI. Among 50 patients, 20 had 56 positive cores. At the sites of biopsy, T2W signal was considered ‘positive’ for cancer (i.e. low in signal intensity) in 34 of 56 sites. CONCLUSION It is feasible to document the location of TRUS-guided prostate biopsies on pre-procedure MRI by fusing the pre-procedure TRUS to an endorectal coil MRI using electromagnetic needle tracking. This procedure may be useful in documenting the location of prior biopsies, improving quality control and thereby avoiding under-sampling of the prostate as well as directing subsequent biopsies to regions of the prostate not previously sampled. PMID:20590543

  12. Geometric calibration of multi-sensor image fusion system with thermal infrared and low-light camera

    Science.gov (United States)

    Peric, Dragana; Lukic, Vojislav; Spanovic, Milana; Sekulic, Radmila; Kocic, Jelena

    2014-10-01

    A calibration platform for the geometric calibration of a multi-sensor image fusion system is presented in this paper. Accurate geometric calibration of the cameras' extrinsic parameters using a planar calibration pattern is applied, and dedicated software was developed for the calibration procedure. The calibration patterns are prepared to obtain maximum contrast in both the visible and infrared spectral ranges, using chessboards whose fields are made of materials with different emissivities. Experiments were held in both indoor and outdoor scenarios. The key results of the geometric calibration are the extrinsic parameters in the form of homography matrices, which map the object plane to the image plane; a corresponding homography matrix is calculated for each camera. These matrices can be used for registering the images from the thermal and low-light cameras, and we implemented such an image registration algorithm to confirm the accuracy of the calibration procedure. Results are given for the selected patterns, chessboards with fields of different emissivity materials. For the final image registration in the object-tracking surveillance system, we chose a multi-resolution registration algorithm that combines naturally with a pyramidal fusion scheme: the image pyramids generated at each time step of the registration algorithm can be reused at the fusion stage, greatly reducing the overall number of calculations.
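
    A minimal OpenCV sketch of the homography-based registration step follows: chessboard corners detected in both modalities give the correspondences, findHomography estimates the mapping, and warpPerspective registers the thermal image. The file names and the 9x6 inner-corner board size are assumptions.

    ```python
    # Register the thermal image onto the low-light image via a chessboard
    # homography, then blend the pair as a simple fusion stand-in.
    import cv2

    thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)
    lowlight = cv2.imread("lowlight.png", cv2.IMREAD_GRAYSCALE)

    pattern = (9, 6)  # inner corners of the dual-emissivity chessboard (assumed)
    ok1, pts_t = cv2.findChessboardCorners(thermal, pattern)
    ok2, pts_l = cv2.findChessboardCorners(lowlight, pattern)
    assert ok1 and ok2, "chessboard not detected in one of the images"

    # RANSAC rejects occasional corner mis-detections.
    H, _ = cv2.findHomography(pts_t, pts_l, cv2.RANSAC, 3.0)
    registered = cv2.warpPerspective(thermal, H, lowlight.shape[::-1])

    fused = cv2.addWeighted(registered, 0.5, lowlight, 0.5, 0)
    cv2.imwrite("fused.png", fused)
    ```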

  13. A novel image fusion algorithm based on 2D scale-mixing complex wavelet transform and Bayesian MAP estimation for multimodal medical images

    Directory of Open Access Journals (Sweden)

    Abdallah Bengueddoudj

    2017-05-01

    Full Text Available In this paper, we propose a new image fusion algorithm based on the two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT). The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian Maximum a Posteriori (MAP) approach by considering a trivariate statistical model for the local neighborhood of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on Principal Component Analysis (PCA) is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results demonstrate the superiority of the proposed method over state-of-the-art fusion methods in terms of visual quality and several commonly used metrics. Robustness of the proposed method is further tested against different types of noise, and the plots of the fusion metrics establish the accuracy of the proposed fusion method.

  14. In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    G. R. Odette; G. E. Lucas

    2005-11-15

    This final report on "In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation" (DE-FG03-01ER54632) consists of a series of summaries of work that has been published, or presented at meetings, or both. It briefly describes results on the following topics: 1) A Transport and Fate Model for Helium and Helium Management; 2) Atomistic Studies of Point Defect Energetics, Dynamics and Interactions; 3) Multiscale Modeling of Fracture consisting of: 3a) A Micromechanical Model of the Master Curve (MC) Universal Fracture Toughness-Temperature Curve Relation, KJc(T - To), 3b) An Embrittlement ΔTo Prediction Model for the Irradiation Hardening Dominated Regime, 3c) Non-hardening Irradiation Assisted Thermal and Helium Embrittlement of 8Cr Tempered Martensitic Steels: Compilation and Analysis of Existing Data, 3d) A Model for the KJc(T) of a High Strength NFA MA957, 3e) Cracked Body Size and Geometry Effects of Measured and Effective Fracture Toughness-Model Based MC and To Evaluations of F82H and Eurofer 97, 3f) Size and Geometry Effects on the Effective Toughness of Cracked Fusion Structures; 4) Modeling the Multiscale Mechanics of Flow Localization-Ductility Loss in Irradiation Damaged BCC Alloys; and 5) A Universal Relation Between Indentation Hardness and True Stress-Strain Constitutive Behavior. Further details can be found in the cited references or presentations that generally can be accessed on the internet, or provided upon request to the authors. Finally, it is noted that this effort was integrated with our base program in fusion materials, also funded by the DOE OFES.

  15. Advanced data visualization and sensor fusion: Conversion of techniques from medical imaging to Earth science

    Science.gov (United States)

    Savage, Richard C.; Chen, Chin-Tu; Pelizzari, Charles; Ramanathan, Veerabhadran

    1993-01-01

    Hughes Aircraft Company and the University of Chicago propose to transfer existing medical imaging registration algorithms to the area of multi-sensor data fusion. The University of Chicago's algorithms have been successfully demonstrated to provide pixel-by-pixel comparison capability for medical sensors with different characteristics. The research will attempt to fuse GOES (Geostationary Operational Environmental Satellite), AVHRR (Advanced Very High Resolution Radiometer), and SSM/I (Special Sensor Microwave Imager) sensor data, which will benefit a wide range of researchers. The algorithms will utilize data visualization and algorithm development tools created by Hughes in its EOSDIS (Earth Observing System Data and Information System) prototyping. This will maximize the work on the fusion algorithms, since support software (e.g. input/output routines) will already exist. The research will produce a portable software library with documentation for use by other researchers.

  16. Soft sensor design by multivariate fusion of image features and process measurements

    DEFF Research Database (Denmark)

    Lin, Bao; Jørgensen, Sten Bay

    2011-01-01

    This paper presents a multivariate data fusion procedure for design of dynamic soft sensors where suitably selected image features are combined with traditional process measurements to enhance the performance of data-driven soft sensors. A key issue of fusing multiple sensor data, i.e. to determine...... oxides (NOx) emission of cement kilns. On-site tests demonstrate improved performance over soft sensors based on conventional process measurements only....

  17. Visible and NIR image fusion using weight-map-guided Laplacian–Gaussian pyramid for improving scene visibility

    Indian Academy of Sciences (India)

    ASHISH V VANMALI; VIKRAM M GADRE

    2017-07-01

    Image visibility is affected by the presence of haze, fog, smoke, aerosol, etc. Image dehazing using either a single visible image or a visible and near-infrared (NIR) image pair is often considered as a solution to improve the visual quality of such scenes. In this paper, we address this problem from a visible-NIR image fusion perspective, instead of the conventional haze imaging model. The proposed algorithm uses a Laplacian-Gaussian pyramid based multi-resolution fusion process, guided by weight maps generated using local entropy, local contrast and visibility as the metrics that control the fusion result. The proposed algorithm is free from any human intervention, and produces results that outperform the existing image-dehazing algorithms both visually and quantitatively. The algorithm proves to be efficient not only for outdoor scenes with or without haze, but also for indoor scenes, in improving scene visibility.
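
    The general weight-map-guided Laplacian-Gaussian pyramid scheme can be reproduced with OpenCV as sketched below, using a simple local-contrast weight in place of the paper's entropy/contrast/visibility combination; the level count and weight definition are assumptions.

    ```python
    # Pyramid fusion of same-size grayscale visible and NIR images: blend
    # the Laplacian pyramids under a Gaussian pyramid of the weight map.
    import cv2
    import numpy as np

    def gaussian_pyr(img, levels):
        pyr = [img]
        for _ in range(levels):
            pyr.append(cv2.pyrDown(pyr[-1]))
        return pyr

    def laplacian_pyr(img, levels):
        g = gaussian_pyr(img, levels)
        return [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
                for i in range(levels)] + [g[levels]]

    def fuse(vis, nir, levels=4):
        vis, nir = vis.astype(np.float32), nir.astype(np.float32)
        # Local-contrast weight: normalized absolute Laplacian response.
        w = np.abs(cv2.Laplacian(vis, cv2.CV_32F)) + 1e-6
        w = w / (w + np.abs(cv2.Laplacian(nir, cv2.CV_32F)) + 1e-6)
        wp = gaussian_pyr(w, levels)
        lv, ln = laplacian_pyr(vis, levels), laplacian_pyr(nir, levels)
        fused = [wp[i] * lv[i] + (1 - wp[i]) * ln[i] for i in range(levels + 1)]
        out = fused[-1]
        for lap in reversed(fused[:-1]):   # collapse the pyramid
            out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
        return np.clip(out, 0, 255).astype(np.uint8)
    ```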

  18. Fusion of multi-voltage digital radiography images based on nonsubsampled contourlet transform.

    Science.gov (United States)

    Yanjie, Qi; Liming, Wang

    2016-01-01

    To increase the information in a single digital radiography (DR) image of an industrial composite component, DR images are first captured at different voltages, obtaining structural information for regions of different thickness. The original DR images are then decomposed by the nonsubsampled contourlet transform (NSCT); the low-frequency subbands are fused by a principal component analysis (PCA) rule, and a modified central-energy rule is used to fuse the high-frequency directional subbands. False edges are extracted, and the high-frequency subband coefficients at the false edges are set to a small value so as to suppress them in the fused image. Finally, the output image is obtained by the inverse nonsubsampled contourlet transform. The experimental results show that the fused DR image carries more detailed information and the structure of the component can be seen clearly, which supports fast and accurate quality judgments of the component.

  19. MR cone-beam CT fusion image overlay for fluoroscopically guided percutaneous biopsies in pediatric patients.

    Science.gov (United States)

    Thakor, Avnesh S; Patel, Premal A; Gu, Richard; Rea, Vanessa; Amaral, Joao; Connolly, Bairbre L

    2016-03-01

    Lesions only visible on magnetic resonance (MR) imaging cannot easily be targeted for image-guided biopsy using ultrasound or X-rays; instead they require MR guidance, with MR-compatible needles and long procedure times (acquisition of multiple MR sequences). We developed an alternative method for performing these difficult biopsies in a standard interventional suite, by fusing MR with cone-beam CT images. The MR cone-beam CT fusion image is then used as an overlay to guide a biopsy needle to the target area under live fluoroscopic guidance. Advantages of this technique include (i) it can be performed in a conventional interventional suite, (ii) three-dimensional planning of the needle trajectory using cross-sectional imaging, (iii) real-time fluoroscopic guidance for needle trajectory correction and (iv) targeting within heterogeneous lesions based on MR signal characteristics to maximize the potential biopsy yield.

  20. A MEDICAL MULTI-MODALITY IMAGE FUSION OF CT/PET WITH PCA, DWT METHODS

    Directory of Open Access Journals (Sweden)

    S. Guruprasad

    2013-11-01

    Full Text Available This paper presents the fusion of images from different modalities, PET and CT (Positron Emission Tomography and Computed Tomography), by two methods: PCA, a spatial-domain method, and DWT, a transform-domain method. In the DWT method, the source images are decomposed by the discrete wavelet transform; the detail coefficients are combined by maximum selection, the approximation coefficients are averaged, and the inverse DWT (IDWT) is applied to obtain the fused image. In the PCA method, eigenvalues and eigenvectors are computed, the components with the largest eigenvalues are taken as principal components, and the fused image of the two modalities CT and PET is reconstructed from them. The fusion thus combines the complementary anatomical, physiological and metabolic information in one image, providing better visual information about the patient in a single image for the medical field. Analytic parameters such as MSE, PSNR and entropy are used to compare the two methods.
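
    The DWT fusion rule (average the approximation subbands, keep the larger-magnitude detail coefficients, invert) can be sketched with PyWavelets as follows; the wavelet choice and level count are assumptions, and the CT and PET inputs are assumed to be pre-registered, same-size grayscale arrays.

    ```python
    # DWT fusion: mean of approximations, max-magnitude selection of details.
    import numpy as np
    import pywt

    def dwt_fuse(ct, pet, wavelet="db2", level=2):
        c1 = pywt.wavedec2(ct.astype(np.float64), wavelet, level=level)
        c2 = pywt.wavedec2(pet.astype(np.float64), wavelet, level=level)
        fused = [(c1[0] + c2[0]) / 2.0]              # average approximations
        for d1, d2 in zip(c1[1:], c2[1:]):           # (cH, cV, cD) per level
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(d1, d2)))
        return pywt.waverec2(fused, wavelet)         # inverse DWT
    ```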

  1. Automated image-based assay for evaluation of HIV neutralization and cell-to-cell fusion inhibition

    National Research Council Canada - National Science Library

    Sheik-Khalil, Enas; Bray, Mark-Anthony; Özkaya Şahin, Gülsen; Scarlatti, Gabriella; Jansson, Marianne; Carpenter, Anne E; Fenyö, Eva Maria

    2014-01-01

    .... Here, we present a high-throughput, high-content automated plaque reduction (APR) assay based on automated microscopy and image analysis that allows evaluation of neutralization and inhibition of cell-cell fusion within the same assay...

  2. Application of Multimodality Imaging Fusion Technology in Diagnosis and Treatment of Malignant Tumors under the Precision Medicine Plan

    Directory of Open Access Journals (Sweden)

    Shun-Yi Wang

    2016-01-01

    Conclusion: Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.

  3. A Novel Multi-Focus Image Fusion Method Based on Stochastic Coordinate Coding and Local Density Peaks Clustering

    Directory of Open Access Journals (Sweden)

    Zhiqin Zhu

    2016-11-01

    Full Text Available The multi-focus image fusion method is used in image processing to generate all-in-focus images with a large depth of field (DOF) from the original multi-focus images. Different approaches have been used in the spatial and transform domains to fuse multi-focus images. As one of the most popular image processing methods, dictionary-learning-based sparse representation achieves great performance in multi-focus image fusion. Most existing dictionary-learning-based multi-focus image fusion methods directly use the whole source images for dictionary learning; however, this incurs a high error rate and high computation cost in the dictionary learning process. This paper proposes a novel stochastic coordinate coding-based image fusion framework integrated with local density peaks clustering. The proposed multi-focus image fusion method consists of three steps. First, the source images are split into small image patches, which are then classified into a few groups by local density peaks clustering. Next, the grouped image patches are used for sub-dictionary learning by stochastic coordinate coding, and the trained sub-dictionaries are combined into a dictionary for sparse representation. Finally, the simultaneous orthogonal matching pursuit (SOMP) algorithm is used to carry out sparse representation, the obtained sparse coefficients are fused following the max L1-norm rule, and the fused coefficients are inversely transformed to an image by using the learned dictionary. The results and analyses of comparison experiments demonstrate that the fused images of the proposed method have higher quality than those of existing state-of-the-art methods.

  4. Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.

    Science.gov (United States)

    Nath, Abhigyan; Subbiah, Karthikeyan

    2015-12-01

    Lipocalins are short in sequence length and perform several important biological functions. These proteins have less than 20% sequence similarity among paralogs, and experimentally identifying them is an expensive and time-consuming process. Computational methods based on sequence similarity for assigning putative members to this family are also elusive owing to the low sequence similarity among family members. Consequently, machine learning methods become a viable alternative for their prediction, using the underlying sequence/structurally derived features as input. Ideally, any machine-learning-based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning; near-perfect learning can be achieved by training the model with diverse types of input instances belonging to different regions of the entire input space. Furthermore, prediction performance can be improved by balancing the training set, since imbalanced data sets tend to bias the prediction towards the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability without any classification bias, through diversified and balanced training sets, and (ii) enhanced prediction accuracy, by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we first used the unsupervised K-means clustering algorithm to create diversified clusters of input patterns, and created the diversified and balanced training set by selecting an equal number of patterns from each of these clusters. Finally, a probability-based classifier fusion scheme was applied to a boosted random forest algorithm (which produced greater sensitivity) and a K-nearest-neighbour algorithm (which produced greater specificity) to achieve the enhanced predictive performance.
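
    The two ingredients, a cluster-balanced training set and probability-level fusion of a random forest and a k-NN classifier, can be sketched with scikit-learn as below. The cluster count, per-class sample size, and hyperparameters are placeholders, not the paper's tuned values, and X/y stand in for the sequence-derived features and labels.

    ```python
    # (i) Balanced, diversified training set via per-class k-means clusters;
    # (ii) decision fusion by averaging RF and k-NN class probabilities.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier

    def balanced_indices(X, y, per_class=100, k=5, seed=0):
        rng = np.random.default_rng(seed)
        idx = []
        for label in np.unique(y):
            members = np.flatnonzero(y == label)
            clusters = KMeans(n_clusters=k, n_init=10,
                              random_state=seed).fit_predict(X[members])
            for c in range(k):                  # equal share from each cluster
                pool = members[clusters == c]
                idx.extend(rng.choice(pool, per_class // k,
                                      replace=len(pool) < per_class // k))
        return np.array(idx)

    def fused_predict(X_tr, y_tr, X_te):
        rf = RandomForestClassifier(n_estimators=300).fit(X_tr, y_tr)
        knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
        proba = (rf.predict_proba(X_te) + knn.predict_proba(X_te)) / 2.0
        return rf.classes_[np.argmax(proba, axis=1)]
    ```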

  5. Clinical outcomes following spinal fusion using an intraoperative computed tomographic 3D imaging system.

    Science.gov (United States)

    Xiao, Roy; Miller, Jacob A; Sabharwal, Navin C; Lubelski, Daniel; Alentado, Vincent J; Healy, Andrew T; Mroz, Thomas E; Benzel, Edward C

    2017-03-03

    OBJECTIVE Improvements in imaging technology have steadily advanced surgical approaches. Within the field of spine surgery, assistance from the O-arm Multidimensional Surgical Imaging System has been established to yield superior accuracy of pedicle screw insertion compared with freehand and fluoroscopic approaches. Despite this evidence, no studies have investigated the clinical relevance associated with increased accuracy. Accordingly, the objective of this study was to investigate the clinical outcomes following thoracolumbar spinal fusion associated with O-arm-assisted navigation. The authors hypothesized that the increased accuracy achieved with O-arm-assisted navigation decreases the rate of reoperation, secondary to reduced hardware failure and screw misplacement. METHODS A consecutive retrospective review of all patients who underwent open thoracolumbar spinal fusion at a single tertiary-care institution between December 2012 and December 2014 was conducted. Outcomes assessed included operative time, length of hospital stay, and rates of readmission and reoperation. Mixed-effects Cox proportional hazards modeling, with surgeon as a random effect, was used to investigate the association between O-arm-assisted navigation and postoperative outcomes. RESULTS Among 1208 procedures, 614 were performed with O-arm-assisted navigation, 356 using freehand techniques, and 238 using fluoroscopic guidance. The most common indication for surgery was spondylolisthesis (56.2%), and most patients underwent a posterolateral fusion only (59.4%). Although O-arm procedures involved more vertebral levels than the combined freehand/fluoroscopy cohort (4.79 vs 4.26 vertebral levels), O-arm-assisted navigation was associated with a significantly decreased hazard of reoperation among patients undergoing posterolateral fusion only (HR 0.39) and one interbody fusion subgroup (HR 0.22; p = 0.03), but not posterior/transforaminal lumbar interbody fusion. CONCLUSIONS To the authors' knowledge, the present study is the first to investigate clinical outcomes associated with O-arm-assisted navigation following thoracolumbar spinal fusion.

  6. Technique for gray-scale visual light and infrared image fusion based on non-subsampled shearlet transform

    Science.gov (United States)

    Kong, Weiwei

    2014-03-01

    A novel image fusion technique based on the NSST (non-subsampled shearlet transform) is presented, aimed at the fusion of gray-scale visible-light and infrared images. NSST, as a new member of the MGA (multi-scale geometric analysis) family, possesses not only flexible directional features and shift-invariance, but also better fusion performance and lower computational cost than several currently popular MGA tools such as the NSCT (non-subsampled contourlet transform). We propose new rules for fusing the low- and high-frequency sub-band coefficients of the source images within the NSST-based image fusion algorithm. First, the source images are decomposed into different scales and directions using the NSST. Then, a region average energy (RAE) model is proposed and adopted to fuse the low-frequency sub-band coefficients of the gray-scale visible-light and infrared images. Third, a local directional contrast (LDC) model is given and utilized to fuse the corresponding high-frequency sub-band coefficients. Finally, the fused image is obtained by applying the inverse NSST to all fused sub-images. To verify the effectiveness of the proposed technique, several currently popular techniques are compared over three different publicly available image sets using four evaluation metrics; the experimental results demonstrate that the proposed technique performs better in both subjective and objective quality.

  7. Test technology on divergence angle of laser range finder based on CCD imaging fusion

    Science.gov (United States)

    Shi, Sheng-bing; Chen, Zhen-xing; Lv, Yao

    2016-09-01

    Laser range finders are fitted to all kinds of weapon platforms, such as tanks, ships and aircraft, and are an important component of fire control systems. The divergence angle is a key performance parameter, embodying the horizontal resolving power of a laser range finder, and is a mandatory item in appraisal tests. In this paper, aiming at high-accuracy measurement of the divergence angle of laser range finders, a divergence angle test system is designed based on CCD imaging; the divergence angle is acquired by fusing images taken at different attenuation levels, which solves the problem that the CCD characteristics influence the divergence angle measurement.

  8. Implicit Beliefs about Ideal Body Image Predict Body Image Dissatisfaction

    Directory of Open Access Journals (Sweden)

    Niclas Heider

    2015-10-01

    Full Text Available We examined whether implicit measures of actual and ideal body image can be used to predict body dissatisfaction in young female adults. Participants completed two Implicit Relational Assessment Procedures (IRAPs) to examine their implicit beliefs concerning actual (e.g., I am thin) and desired ideal body image (e.g., I want to be thin). Body dissatisfaction was examined via self-report questionnaires and rating scales. As expected, differences in body dissatisfaction exerted a differential influence on the two IRAP scores. Specifically, the implicit belief that one is thin was lower in participants who exhibited a high degree of body dissatisfaction than in participants who exhibited a low degree of body dissatisfaction. In contrast, the implicit desire to be thin (i.e., thin ideal body image) was stronger in participants who exhibited a high level of body dissatisfaction than in participants who were less dissatisfied with their body. Adding further weight to the idea that both IRAP measures captured different underlying constructs, we also observed that they correlated differently with body mass index, explicit body dissatisfaction, and explicit measures of actual and ideal body image. More generally, these findings underscore the advantage of using implicit measures that incorporate relational information relative to implicit measures that allow for an assessment of associative relations only.

  9. Synthetic aperture microwave imaging with active probing for fusion plasma diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Shevchenko, Vladimir F.; Freethy, Simon J.; Huang, Billy K. [EURATOM/CCFE Fusion Association, Culham, Abingdon, Oxon, 0X14 3DB (United Kingdom); Vann, Roddy G. L. [York Plasma Institute, Dept. of Physics, University of York, York YO10 5DD (United Kingdom)

    2014-08-21

    A Synthetic Aperture Microwave Imaging (SAMI) system has been designed and built to obtain 2-D images at several frequencies from fusion plasmas. SAMI uses a phased array of linearly polarised antennas. The array configuration has been optimised to achieve maximum synthetic aperture beam efficiency. The signals received by the antennas are down-converted to the intermediate frequency range and then recorded in full vector form. Full vector signals allow beam focusing and image reconstruction in both real time and a post-processing mode. SAMI can scan over 16 pre-programmed frequencies in the range of 10-35 GHz with a switching time of 300 ns. The system operates in 2 different modes simultaneously: both a 'passive' imaging of plasma emission and also an 'active' imaging of the back-scattered signal of the radiation launched by one of the antennas from the same array. This second mode is similar to so-called Doppler backscattering (DBS) reflectometry with 2-D resolution of the propagation velocity of turbulent structures. Both modes of operation show good performance in fusion plasma experiments on the Mega Amp Spherical Tokamak (MAST). We have obtained the first ever 2-D images of BXO mode conversion windows. With active probing, the first ever turbulence velocity maps have been obtained. We present an overview of the diagnostic and discuss recent results. In contrast to quasi-optical microwave imaging systems, SAMI requires neither big-aperture viewing ports nor large 2-D detector arrays to achieve the desired imaging resolution. The number of effective 'pixels' of the synthesized image is proportional to the number of receiving antennas squared, so only a small number of optimised antennas is sufficient for the majority of applications. Possible implementation of SAMI on ITER and DEMO is discussed.

  10. Synthetic aperture microwave imaging with active probing for fusion plasma diagnostics

    Science.gov (United States)

    Shevchenko, Vladimir F.; Freethy, Simon J.; Huang, Billy K.; Vann, Roddy G. L.

    2014-08-01

    A Synthetic Aperture Microwave Imaging (SAMI) system has been designed and built to obtain 2-D images at several frequencies from fusion plasmas. SAMI uses a phased array of linearly polarised antennas. The array configuration has been optimised to achieve maximum synthetic aperture beam efficiency. The signals received by the antennas are down-converted to the intermediate frequency range and then recorded in full vector form. Full vector signals allow beam focusing and image reconstruction in both real time and a post-processing mode. SAMI can scan over 16 pre-programmed frequencies in the range of 10-35 GHz with a switching time of 300 ns. The system operates in 2 different modes simultaneously: both a 'passive' imaging of plasma emission and also an 'active' imaging of the back-scattered signal of the radiation launched by one of the antennas from the same array. This second mode is similar to so-called Doppler backscattering (DBS) reflectometry with 2-D resolution of the propagation velocity of turbulent structures. Both modes of operation show good performance in fusion plasma experiments on the Mega Amp Spherical Tokamak (MAST). We have obtained the first ever 2-D images of BXO mode conversion windows. With active probing, the first ever turbulence velocity maps have been obtained. We present an overview of the diagnostic and discuss recent results. In contrast to quasi-optical microwave imaging systems, SAMI requires neither big-aperture viewing ports nor large 2-D detector arrays to achieve the desired imaging resolution. The number of effective 'pixels' of the synthesized image is proportional to the number of receiving antennas squared, so only a small number of optimised antennas is sufficient for the majority of applications. Possible implementation of SAMI on ITER and DEMO is discussed.

  11. Observation of Interspecies Ion Separation in Inertial-Confinement-Fusion Implosions via Imaging X-Ray Spectroscopy

    CERN Document Server

    Hsu, S C; Hakel, P; Vold, E L; Schmitt, M J; Hoffman, N M; Rauenzahn, R M; Kagan, G; Tang, X -Z; Mancini, R C; Kim, Y; Herrmann, H W

    2016-01-01

    We report direct experimental evidence of interspecies ion separation in direct-drive, inertial-confinement-fusion experiments on the OMEGA laser facility. These experiments, which used plastic capsules with D$_2$/Ar gas fill (1% Ar by atom), were designed specifically to reveal interspecies ion separation by exploiting the predicted, strong ion thermo-diffusion between ion species of large mass and charge difference. Via detailed analyses of imaging x-ray-spectroscopy data, we extract Ar-atom-fraction radial profiles at different times, and observe both enhancement and depletion compared to the initial 1%-Ar gas fill. The experimental results are interpreted with radiation-hydrodynamic simulations that include recently implemented, first-principles models of interspecies ion diffusion. The experimentally inferred Ar-atom-fraction profiles agree reasonably, but not exactly, with calculated profiles associated with the incoming and rebounding first shock.

  12. Feasibility of Extracted-Overlay Fusion Imaging for Intraoperative Treatment Evaluation of Radiofrequency Ablation for Hepatocellular Carcinoma.

    Science.gov (United States)

    Makino, Yuki; Imai, Yasuharu; Igura, Takumi; Kogita, Sachiyo; Sawai, Yoshiyuki; Fukuda, Kazuto; Iwamoto, Takayuki; Okabe, Junya; Takamura, Manabu; Fujita, Norihiko; Hori, Masatoshi; Takehara, Tetsuo; Kudo, Masatoshi; Murakami, Takamichi

    2016-10-01

    Extracted-overlay fusion imaging is a novel computed tomography/magnetic resonance-ultrasonography (CT/MR-US) imaging technique in which a target tumor with a virtual ablative margin is extracted from CT/MR volume data and synchronously overlaid on US images. We investigated the applicability of the technique to intraoperative evaluation of radiofrequency ablation (RFA) for hepatocellular carcinoma (HCC). This retrospective study analyzed 85 HCCs treated with RFA using extracted-overlay fusion imaging for guidance and evaluation. To perform RFA, an electrode was inserted targeting the tumor and a virtual 5-mm ablative margin overlaid on the US image. Following ablation, contrast-enhanced US (CEUS) was performed to assess the ablative margin, and the minimal ablative margins were categorized into three groups. The margin categorizations by extracted-overlay fusion imaging and by CT-CT/MR-MR fusion imaging were in agreement for 72 tumors (91.1%; Cohen's quadratic-weighted kappa coefficient 0.66, good agreement). Extracted-overlay fusion imaging combined with CEUS is feasible for the evaluation of RFA and enables intraoperative treatment evaluation without the need to perform contrast-enhanced CT.

  13. An object tracking method based on guided filter for night fusion image

    Science.gov (United States)

    Qian, Xiaoyan; Wang, Yuedong; Han, Lei

    2016-01-01

    Online object tracking is a challenging problem, as it entails learning an effective model to account for appearance changes caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracking method with a guided image filter for accurate and robust night fusion image tracking. First, frame differencing is applied to produce a coarse target, which helps to generate the observation models. Under the constraints of these models and the local source image, the guided filter then generates a sufficiently accurate foreground target, from which accurate boundaries of the target can be extracted. Finally, timely updating of the observation models helps to avoid tracking drift. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
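
    For reference, the guided filter at the heart of this tracker admits a compact numpy implementation (He et al.'s box-filter formulation), with the source image as the guide I and a coarse target map as the input p. The radius and regularization eps below are typical defaults, not the paper's values.

    ```python
    # Guided filter: edge-preserving smoothing of p steered by guide I.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(I, p, radius=8, eps=1e-3):
        I, p = I.astype(np.float64), p.astype(np.float64)
        size = 2 * radius + 1
        mean = lambda x: uniform_filter(x, size)   # box-filter local mean
        mI, mp = mean(I), mean(p)
        cov_Ip = mean(I * p) - mI * mp
        var_I = mean(I * I) - mI * mI
        a = cov_Ip / (var_I + eps)                 # per-pixel linear coefficients
        b = mp - a * mI
        return mean(a) * I + mean(b)               # averaged model applied to guide
    ```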

  14. Image Processing on Geological Data in Vector Format and Multi-Source Spatial Data Fusion

    Institute of Scientific and Technical Information of China (English)

    Liu Xing; Hu Guangdao; Qiu Yubao

    2003-01-01

    Geological data are constructed in vector format in a geographical information system (GIS), while other data such as remote sensing images, geographical data and geochemical data are saved in raster format. This paper converts the vector data into 8-bit raster images by programming, weighting each layer according to its importance to mineralization, so that the raster images carry the geological meaning. The paper also fuses geographical and geochemical data with the programmed strata data. The result shows that image fusion can express different intensities effectively and visualize the structural characteristics in two dimensions. Furthermore, it can produce optimized information from multi-source data and express it more directly.

  15. Multi-focus Image Fusion by SML in the Shearlet Subbands

    Directory of Open Access Journals (Sweden)

    Jianguo Yang

    2013-07-01

    Full Text Available It is now widely acknowledged that traditional wavelets are not very effective in dealing with multidimensional signals containing distributed discontinuities. The shearlet transform is a new discrete multiscale directional representation, which combines the power of multiscale methods with a unique ability to capture the geometry of multidimensional data and is optimally efficient in representing images containing edges. In this work, when the high-frequency and low-frequency shearlet subbands of the source images are compared, the coefficients with the greater Sum-Modified-Laplacian (SML) are selected to combine the images. Numerical experiments demonstrate that the method based on the shearlet transform and the Sum-Modified-Laplacian is very competitive and better than other multi-scale geometric analysis tools in multi-focus image fusion, in terms of both subjective performance and objective criteria.
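
    The Sum-Modified-Laplacian focus measure and the per-coefficient selection rule are easy to express directly; a numpy/scipy sketch follows, with the window size an assumption.

    ```python
    # SML focus measure and max-SML selection between two subbands.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sml(band, window=3):
        # Modified Laplacian: absolute second differences along each axis.
        ml = (np.abs(2 * band
                     - np.roll(band, 1, axis=0) - np.roll(band, -1, axis=0))
              + np.abs(2 * band
                       - np.roll(band, 1, axis=1) - np.roll(band, -1, axis=1)))
        # Sum over a small neighborhood (uniform_filter gives the mean).
        return uniform_filter(ml, window) * window * window

    def fuse_subband(band_a, band_b):
        # Keep, at each position, the coefficient with the larger SML.
        return np.where(sml(band_a) >= sml(band_b), band_a, band_b)
    ```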

  16. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, with the least-squares estimation method employed to estimate the parameters. The given input query image is segmented into various regions according to the structure of the image. Color and texture features are extracted from each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is fitted to the features, form a vector called the feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images, and the F-measure is applied to evaluate the performance of the proposed technique. The obtained results show that the proposed technique is comparable to other existing techniques.
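
    The two computational steps, least-squares estimation of the regression parameters that serve as the feature vector and Canberra-distance comparison, can be sketched as below; the feature construction is a simplified random placeholder for the paper's region color/texture features.

    ```python
    # Feature vector = least-squares regression parameters; retrieval ranks
    # target images by Canberra distance to the query's vector.
    import numpy as np

    def feature_vector(F, r):
        beta, *_ = np.linalg.lstsq(F, r, rcond=None)   # least-squares estimate
        return beta

    def canberra(u, v, eps=1e-12):
        return float(np.sum(np.abs(u - v) / (np.abs(u) + np.abs(v) + eps)))

    rng = np.random.default_rng(0)
    q = feature_vector(rng.normal(size=(30, 6)), rng.normal(size=30))
    targets = [feature_vector(rng.normal(size=(30, 6)), rng.normal(size=30))
               for _ in range(5)]
    ranking = sorted(range(5), key=lambda i: canberra(q, targets[i]))
    print(ranking)   # indices of target images, closest first
    ```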

  17. Multisensor fusion in gastroenterology domain through video and echo endoscopic image combination: a challenge

    Science.gov (United States)

    Debon, Renaud; Le Guillou, Clara; Cauvin, Jean-Michel; Solaiman, Basel; Roux, Christian

    2001-08-01

    The medical domain makes intensive use of information fusion. In particular, gastroenterology is a discipline where physicians can choose between several imaging modalities that offer complementary advantages. Among existing systems, videoendoscopy (based on a CCD sensor) and echoendoscopy (based on an ultrasound sensor) are the most efficient, and the use of each corresponds to a given step in the physician's diagnostic elaboration. Nowadays, several works aim to achieve automatic interpretation of videoendoscopic sequences. These systems can quantify the color and superficial textures of the digestive tube; unfortunately the relief information, which is important for diagnosis, is very difficult to retrieve. On the other hand, some studies have proved that 3D information can easily be quantified using echoendoscopic image sequences. That is why combining this information, acquired from two very different points of view, can be considered a real challenge for medical image fusion. In this paper, after a review of current work on the numerical exploitation of videoendoscopy and echoendoscopy, the following question is discussed: how can the complementary aspects of the different systems ease the automatic exploitation of videoendoscopy? We then evaluate the feasibility of a realistic 3D reconstruction based on information given by both echoendoscopy (relief) and videoendoscopy (texture). An enumeration of potential applications of such a fusion system follows, and further discussion and perspectives conclude this first study.

  18. Research on radar image fusion based on the NSCT-PCNN algorithm

    Institute of Scientific and Technical Information of China (English)

    傅圣雪; 林忠宇

    2012-01-01

    To improve the quality of radar image fusion, the non-subsampled contourlet transform (NSCT) is combined with a pulse-coupled neural network (PCNN) and applied to the fusion of visible and infrared radar images. The two source images are first decomposed by the NSCT; the resulting low-frequency subband coefficients are used to trigger the PCNN neurons, and the required new image is finally reconstructed with the inverse NSCT. The results indicate that, compared with traditional fusion methods, this method achieves higher information content, clarity and recognition rate, and the fused image is more useful for predicting when convective clouds form.

  19. Modeling and Prediction of Coal Ash Fusion Temperature based on BP Neural Network

    Directory of Open Access Journals (Sweden)

    Miao Suzhen

    2016-01-01

    Full Text Available Coal ash is the residue generated from the combustion of coal. The ash fusion temperature (AFT) of coal gives detailed information on the suitability of a coal source for gasification procedures, and specifically on the extent to which ash agglomeration or clinkering is likely to occur within the gasifier. To investigate the contribution of the oxides in coal ash to the AFT, data on coal ash chemical compositions and softening temperature (ST) from different regions of China were collected in this work, and a BP neural network model was established on the XD-APC platform. In the BP model, the inputs were the ash compositions and the output was the ST. In addition, the prediction model was built from industrial data and its generality checked against different industrial data. Compared with empirical formulas, the BP neural network obtained better results. Through different tests, the best result and configuration for the model were obtained: the number of hidden-layer nodes of the BP network was set to three, the component contents (SiO2, Al2O3, Fe2O3, CaO, MgO) were used as inputs, and the ST was used as the output of the model.
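
    A scikit-learn stand-in for the BP network (the XD-APC platform is proprietary) might look like the following: five oxide contents in, ST out, three hidden nodes as stated in the abstract. The data here are random placeholders for the collected ash analyses.

    ```python
    # MLP regression of softening temperature from ash oxide composition.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 60, size=(200, 5))    # SiO2, Al2O3, Fe2O3, CaO, MgO (wt%)
    st = rng.uniform(1100, 1500, size=200)   # softening temperature (deg C)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000,
                                       random_state=1))
    model.fit(X, st)
    print(model.predict(X[:3]))              # predicted ST for three samples
    ```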

  20. Quotient Based Multiresolution Image Fusion of Thermal and Visual Images Using Daubechies Wavelet Transform for Human Face Recognition

    Directory of Open Access Journals (Sweden)

    Mrinal Kanti Bhowmik

    2010-05-01

    Full Text Available This paper investigates quotient-based fusion of thermal and visual images which were individually passed through level-1 and level-2 multiresolution analyses. In the proposed system, method-1, "Decompose then Quotient Fuse Level-1", and method-2, "Decompose-Reconstruct in Level-2 and then Fuse Quotients", both work on wavelet transformations of the visual and thermal face images. The wavelet transform is well suited to managing different image resolutions and allows image decomposition into different kinds of coefficients while preserving the image information without any loss. This approach is based on the definition of an illumination-invariant signature image, which enables an analytic generation of the image space under varying illumination. The quotient fused images are passed through Principal Component Analysis (PCA) for dimension reduction and are then classified using a multi-layer perceptron (MLP). The performance of both methods has been evaluated using the OTCBVS and IRIS databases. All the different classes have been tested separately; among them, the maximum recognition result for a class is 100% and the minimum is 73%.

  1. Fusion of cone-beam CT and 3D photographic images for soft tissue simulation in maxillofacial surgery

    Science.gov (United States)

    Chung, Soyoung; Kim, Joojin; Hong, Helen

    2016-03-01

    During maxillofacial surgery, prediction of the facial outcome after surgery is a main concern for both surgeons and patients. However, registration of facial CBCT images and 3D photographic images presents some difficulties: regions around the eyes and mouth are affected by facial expressions, and registration is slow because of the dense point clouds on the surfaces. We therefore propose a framework for the fusion of facial CBCT images and 3D photos based on skin segmentation and two-stage surface registration. Our method is composed of three major steps. First, to obtain a CBCT skin surface for registration with the 3D photographic surface, the skin is automatically segmented from the CBCT images and the skin surface is generated by surface modeling. Second, to roughly align the scale and orientation of the CBCT skin surface and the 3D photographic surface, point-based registration with four corresponding landmarks located around the mouth is performed. Finally, to merge the CBCT skin surface and the 3D photographic surface, Gaussian-weighted surface registration is performed within a narrow band of the 3D photographic surface.
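
    The point-based stage with four landmarks amounts to estimating a similarity transform (scale, rotation, translation). A numpy sketch using the Umeyama/Kabsch construction follows, with synthetic landmark coordinates as placeholders.

    ```python
    # Similarity alignment of landmark sets: dst ≈ s * R @ src + t.
    import numpy as np

    def similarity_align(src, dst):
        mu_s, mu_d = src.mean(0), dst.mean(0)
        A, B = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(B.T @ A)          # cross-covariance SVD
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t

    # Smoke test: recover a known scale/rotation/translation exactly.
    src = np.random.rand(4, 3)                     # four CBCT landmarks (stand-in)
    th = np.deg2rad(30)
    R_true = np.array([[np.cos(th), -np.sin(th), 0],
                       [np.sin(th),  np.cos(th), 0],
                       [0, 0, 1.0]])
    dst = 1.2 * src @ R_true.T + np.array([1.0, 2.0, 3.0])
    s, R, t = similarity_align(src, dst)
    print(np.allclose(s * src @ R.T + t, dst))     # True
    ```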

  2. Scene data fusion: Real-time standoff volumetric gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Haefner, Andrew; Mihailescu, Lucian [Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States)

    2015-11-11

    An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. A 3D model of the scene, provided in real time by a simultaneous localization and mapping (SLAM) algorithm, is incorporated into the image reconstruction, reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and a cart-based Compton imaging platform comprised of two 3D position-sensitive high purity germanium (HPGe) detectors. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real time.

  3. Automatically Identifying Fusion Events between GLUT4 Storage Vesicles and the Plasma Membrane in TIRF Microscopy Image Sequences

    Directory of Open Access Journals (Sweden)

    Jian Wu

    2015-01-01

    Full Text Available Quantitative analysis of the dynamic behavior of membrane-bound secretory vesicles has proven to be important in biological research. This paper proposes a novel approach to automatically identify the elusive fusion events between VAMP2-pHluorin labeled GLUT4 storage vesicles (GSVs) and the plasma membrane. The initiation of fusion events is detected by modified forward subtraction of consecutive frames in the TIRFM image sequence. Spatially connected pixels in the difference images brighter than a specified adaptive threshold are grouped into distinct fusion spots, and the vesicles are located at the intensity-weighted centroids of their fusion spots. To reveal the true in vivo nature of a fusion event, 2D Gaussian fitting of the fusion spot is used to derive the intensity-weighted centroid and the spot size during the fusion process; the fusion event and its termination can then be determined according to the change of spot size. The method is evaluated on real experimental data with ground truth annotated by expert cell biologists. The evaluation shows that it achieves relatively high accuracy, comparing favorably to manual analysis, yet at a small fraction of the time.
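
    The detection core (forward frame subtraction, adaptive thresholding, connected-component grouping, intensity-weighted centroids) can be sketched with scipy as below; the threshold rule and minimum spot size are assumptions, and `frames` is a (T, H, W) TIRFM stack.

    ```python
    # Candidate fusion-spot detection by frame differencing and labeling.
    import numpy as np
    from scipy import ndimage

    def detect_fusion_spots(frames, k=4.0, min_pixels=4):
        events = []
        for t in range(1, len(frames)):
            diff = frames[t].astype(np.float64) - frames[t - 1]
            diff[diff < 0] = 0                     # keep brightening pixels only
            thr = diff.mean() + k * diff.std()     # adaptive per-frame threshold
            labels, n = ndimage.label(diff > thr)  # spatially connected spots
            for spot in range(1, n + 1):
                mask = labels == spot
                if mask.sum() < min_pixels:
                    continue
                cy, cx = ndimage.center_of_mass(diff * mask)  # weighted centroid
                events.append((t, cy, cx, int(mask.sum())))
        return events
    ```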

  4. The Potential Use of Ultrasound-Magnetic Resonance Imaging Fusion Applications in Musculoskeletal Intervention.

    Science.gov (United States)

    Burke, Christopher J; Bencardino, Jenny; Adler, Ronald

    2017-01-01

We sought to assess the potential use of an application allowing real-time ultrasound spatial registration with previously acquired magnetic resonance imaging in musculoskeletal procedures. The ultrasound fusion application was used to perform a range of outpatient procedures, including piriformis, sacroiliac joint, pudendal, and intercostal nerve perineurial injections, hamstring-origin calcific tendinopathy barbotage, and 2 soft tissue biopsies at our institution in 2015. The application was used in a total of 7 procedures in 7 patients, all of which were technically successful. The ages of the patients ranged from 19 to 86 years. Compared to sonography alone, the fusion application proved particularly useful in the biopsy of certain soft tissue lesions and in perineurial therapeutic injections. © 2016 by the American Institute of Ultrasound in Medicine.

  5. Three-dimensional reconstruction of subject-specific knee joint using computed tomography and magnetic resonance imaging image data fusions.

    Science.gov (United States)

    Dong, Yuefu; Mou, Zhifang; Huang, Zhenyu; Hu, Guanghong; Dong, Yinghai; Xu, Qingrong

    2013-10-01

Three-dimensional reconstruction of the human body from a living subject can be considered the first step toward promoting the virtual human project as a tool in clinical applications. This study proposes a detailed protocol for building a subject-specific three-dimensional model of the knee joint from a living subject. The computed tomography and magnetic resonance imaging image data of the knee joint were used to reconstruct knee structures, including bones, skin, muscles, cartilages, menisci, and ligaments. They were fused to assemble the complete three-dimensional knee joint. The procedure was repeated three times using three different sets of reference landmarks. The accuracy of image fusion for each set of landmarks was evaluated and the results were compared. The complete three-dimensional knee joint, which included 21 knee structures, was accurately developed. The choice of external or anatomical landmarks was not crucial to improving image fusion accuracy for three-dimensional reconstruction. Further work needs to be done to explore the value of the reconstructed three-dimensional knee joint for its biomechanics and kinematics.

  6. Image fusion of microwave and optical remote sensing data for topographic map updating in the tropics

    Science.gov (United States)

    Pohl, Christine; van Genderen, John L.

    1995-11-01

Temporal monitoring using remote sensing for topographic mapping requires continuous acquisition of image data. In many countries, but especially in the humid Tropics, heavy cloud cover is a major drawback for visible and infrared remote sensing. The research project presented in this paper uses the idea of integrating data from optical and microwave sensors using digital image fusion techniques to overcome the cloud cover problem. Additionally, the combination of radar with optical data increases the interpretation capabilities and the reliability of the results due to the complementary nature of microwave and optical images. While optical data represent the reflectance of ground cover in the visible and near-infrared, radar is very sensitive to the shape, orientation, roughness, and moisture content of the illuminated ground objects. This research investigates the geometric aspect of image fusion for topographic map updating. The paper describes experiences gained from an area in the north of The Netherlands (`Friesland') as calibration test site, in comparison with first results from the research test site (`Bengkulu'), located on the south-west coast of Sumatra in Indonesia. The data used for this investigation were acquired by SPOT, Landsat, ERS-1 and JERS-1.

  7. A fusion algorithm for remote sensing images based on nonsubsampled pyramids and bidimensional empirical decomposition

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

In order to improve the quality of remote sensing image fusion, a new method combining the nonsubsampled Laplacian pyramid (NLP) and bidimensional empirical mode decomposition (BEMD) is proposed. First, the high resolution panchromatic image (PAN) is decomposed using NLP until the approximation component and the low resolution multispectral image (MS) contain features with a similar scale. Then, the approximation component and the MS are decomposed by BEMD, resulting in a number of bidimensional intrinsic mode functions (BIMFs) and a residue, respectively. The instantaneous frequency is computed in 4 directions of the BIMFs. Considering the positive or negative coefficients in the corresponding position, a weighted algorithm is designed for fusing the high frequency details, using the instantaneous frequency and the absolute value of the BIMF coefficients as fusion features. The fused image is then obtained through inverse BEMD and NLP. Experimental results have illustrated the advantage of this method over the IHS, DWT and à-trous wavelet in both spectral and spatial detail quality.
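
    To convey the decompose-fuse-reconstruct pattern that the method follows, here is a deliberately simplified sketch: a plain Laplacian pyramid with max-absolute-coefficient selection, assuming OpenCV and NumPy. The paper's actual NLP/BEMD scheme with instantaneous-frequency weighting is considerably more elaborate.

```python
# Simplified pyramid fusion: decompose both images, pick the detail
# coefficient with the larger magnitude at each level, average the
# coarsest approximations, then reconstruct. A stand-in for NLP/BEMD.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        size = (gauss[i].shape[1], gauss[i].shape[0])
        lap.append(gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=size))
    lap.append(gauss[-1])                    # coarsest approximation
    return lap

def fuse_pyramids(img_a, img_b, levels=4):
    lp_a = laplacian_pyramid(img_a, levels)
    lp_b = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(lp_a[:-1], lp_b[:-1])]
    fused.append(0.5 * (lp_a[-1] + lp_b[-1]))
    out = fused[-1]
    for detail in reversed(fused[:-1]):      # reconstruct, coarse to fine
        size = (detail.shape[1], detail.shape[0])
        out = cv2.pyrUp(out, dstsize=size) + detail
    return out
```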

  8. Noise temperature improvement for magnetic fusion plasma millimeter wave imaging systems

    Energy Technology Data Exchange (ETDEWEB)

    Lai, J.; Domier, C. W.; Luhmann, N. C. [Department of Electrical and Computer Engineering, University of California at Davis, Davis, California 95616 (United States)

    2014-03-15

    Significant progress has been made in the imaging and visualization of magnetohydrodynamic and microturbulence phenomena in magnetic fusion plasmas [B. Tobias et al., Plasma Fusion Res. 6, 2106042 (2011)]. Of particular importance have been microwave electron cyclotron emission imaging and microwave imaging reflectometry systems for imaging T{sub e} and n{sub e} fluctuations. These instruments have employed heterodyne receiver arrays with Schottky diode mixer elements directly connected to individual antennas. Consequently, the noise temperature has been strongly determined by the conversion loss with typical noise temperatures of ∼60 000 K. However, this can be significantly improved by making use of recent advances in Monolithic Microwave Integrated Circuit chip low noise amplifiers to insert a pre-amplifier in front of the Schottky diode mixer element. In a proof-of-principle design at V-Band (50–75 GHz), significant improvement of noise temperature from the current 60 000 K to measured 4000 K has been obtained.

  9. Orthogonal Rings, Fiducial Markers, and Overlay Accuracy When Image Fusion is Used for EVAR Guidance.

    Science.gov (United States)

    Koutouzi, G; Sandström, C; Roos, H; Henrikson, O; Leonhardt, H; Falkenberg, M

    2016-11-01

    Evaluation of orthogonal rings, fiducial markers, and overlay accuracy when image fusion is used for endovascular aortic repair (EVAR). This was a prospective single centre study. In 19 patients undergoing standard EVAR, 3D image fusion was used for intra-operative guidance. Renal arteries and targeted stent graft positions were marked with rings orthogonal to the respective centre lines from pre-operative computed tomography (CT). Radiopaque reference objects attached to the back of the patient were used as fiducial markers to detect patient movement intra-operatively. Automatic 3D-3D registration of the pre-operative CT with an intra-operative cone beam computed tomography (CBCT) as well as 3D-3D registration after manual alignment of nearby vertebrae were evaluated. Registration was defined as being sufficient for EVAR guidance if the deviation of the origin of the lower renal artery was less than 3 mm. For final overlay registration, the renal arteries were manually aligned using aortic calcification and vessel outlines. The accuracy of the overlay before stent graft deployment was evaluated using digital subtraction angiography (DSA) as direct comparison. Fiducial markers helped in detecting misalignment caused by patient movement during the procedure. Use of automatic intensity based registration alone was insufficient for EVAR guidance. Manual registration based on vertebrae L1-L2 was sufficient in 7/19 patients (37%). Using the final adjusted registration as overlay, the median alignment error of the lower renal artery marking at pre-deployment DSA was 2 mm (0-5) sideways and 2 mm (0-9) longitudinally, mostly in a caudal direction. 3D image fusion can facilitate intra-operative guidance during EVAR. Orthogonal rings and fiducial markers are useful for visualization and overlay correction. However, the accuracy of the overlaid 3D image is not always ideal and further technical development is needed. Copyright © 2016 European Society for Vascular Surgery

  10. A fusion method for visible and infrared images based on contrast pyramid with teaching learning based optimization

    Science.gov (United States)

    Jin, Haiyan; Wang, Yanyan

    2014-05-01

This paper proposes a novel image fusion scheme based on the contrast pyramid (CP) with teaching-learning-based optimization (TLBO) for visible and infrared images of complicated scenes in different spectral bands. First, CP decomposition is applied to every level of each original image. Then, TLBO is introduced to optimize the fusion coefficients, which are updated during the teacher phase and learner phase of TLBO, so that the weighted coefficients can be automatically adjusted according to a fitness function, namely the evaluation standards of image quality. Finally, the fusion result is obtained by the inverse CP transformation. Compared with existing methods, experimental results show that our method is effective and that the fused images are more suitable for further human visual or machine perception.
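
    TLBO itself is a simple population-based optimizer with a teacher phase and a learner phase. The sketch below shows a generic maximizing variant, on the assumption that `fitness` wraps an image-quality measure of the fused result; the population size, bounds, and iteration counts are illustrative, not the paper's settings.

```python
# Generic TLBO sketch (maximization). The fitness callable is assumed to
# score a candidate vector of fusion coefficients.
import numpy as np

def tlbo(fitness, dim, pop_size=20, iters=100, lo=0.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        teacher, mean = pop[fit.argmax()], pop.mean(axis=0)
        for i in range(pop_size):
            tf = rng.integers(1, 3)          # teaching factor: 1 or 2
            # Teacher phase: move toward the best learner.
            cand = np.clip(pop[i] + rng.random(dim) * (teacher - tf * mean), lo, hi)
            if (f := fitness(cand)) > fit[i]:
                pop[i], fit[i] = cand, f
            # Learner phase: interact with a random classmate.
            j = rng.integers(pop_size)
            step = pop[i] - pop[j] if fit[i] > fit[j] else pop[j] - pop[i]
            cand = np.clip(pop[i] + rng.random(dim) * step, lo, hi)
            if (f := fitness(cand)) > fit[i]:
                pop[i], fit[i] = cand, f
    return pop[fit.argmax()], fit.max()
```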

  11. Image fusion in open-architecture quality-oriented nuclear medicine and radiology departments

    Energy Technology Data Exchange (ETDEWEB)

    Pohjonen, H

    1997-12-31

Imaging examinations of patients belong to the most widely used diagnostic procedures in hospitals. Multimodal digital imaging is becoming increasingly common in many fields of diagnosis and therapy planning. Patients are frequently examined with magnetic resonance imaging (MRI), X-ray computed tomography (CT) or ultrasound imaging (US) in addition to single photon (SPET) or positron emission tomography (PET). The aim of the study was to provide means for improving the quality of the whole imaging and viewing chain in nuclear medicine and radiology. The specific aims were: (1) to construct and test a model for a quality assurance system in radiology based on ISO standards, (2) to plan a Dicom based image network for fusion purposes using ATM and Ethernet technologies, (3) to test different segmentation methods in quantitative SPET, (4) to study and implement a registration and visualisation method for multimodal imaging, (5) to apply the developed method in selected clinical brain and abdominal images, and (6) to investigate the accuracy of the registration procedure for brain SPET and MRI. 90 refs. The thesis also includes six previous publications by the author.

  12. Automatic image fusion of real-time ultrasound with computed tomography images: a prospective comparison between two auto-registration methods.

    Science.gov (United States)

    Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-01-01

Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]) and complete image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). The number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.

  13. Bayesian data fusion for spatial prediction of categorical variables in environmental sciences

    Energy Technology Data Exchange (ETDEWEB)

    Gengler, Sarah, E-mail: sarahgengler@gmail.com; Bogaert, Patrick, E-mail: sarahgengler@gmail.com [Earth and Life Institute, Environmental Sciences. Université catholique de Louvain, Croix du Sud 2/L7.05.16, B-1348 Louvain-la-Neuve (Belgium)

    2014-12-05

First developed to predict continuous variables, Bayesian Maximum Entropy (BME) has become a complete framework in the context of space-time prediction since it has been extended to predict categorical variables and mixed random fields. This method proposes solutions to combine several sources of data whatever the nature of the information. However, the various attempts that were made to adapt the BME methodology to categorical variables and mixed random fields faced some limitations, such as a high computational burden. The main objective of this paper is to overcome this limitation by generalizing the Bayesian Data Fusion (BDF) theoretical framework to categorical variables; BDF is in essence a simplification of the BME method through the convenient conditional independence hypothesis. The BDF methodology for categorical variables is first described and then applied to a practical case study: the estimation of soil drainage classes using a soil map and point observations in the sandy area of Flanders around the city of Mechelen (Belgium). The BDF approach is compared to BME along with more classical approaches, such as Indicator CoKriging (ICK) and logistic regression. Estimators are compared using various indicators, namely the Percentage of Correctly Classified locations (PCC) and the Average Highest Probability (AHP). Although the BDF methodology for categorical variables is a simplification of the BME approach, both methods lead to similar results and have strong advantages compared to ICK and logistic regression.
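
    The conditional independence hypothesis that makes BDF tractable has a very compact computational form: each data source contributes a per-class likelihood, and the sources are combined multiplicatively. A toy sketch follows; the class counts and probability values are purely illustrative.

```python
# Toy sketch of conditional-independence fusion as used in BDF: posterior
# proportional to prior times the product of per-source likelihoods.
import numpy as np

def bdf_posterior(prior, likelihoods):
    """prior: (K,) class probabilities; likelihoods: list of (K,) arrays,
    one per source, each giving p(data_i | class)."""
    post = prior.astype(float).copy()
    for lik in likelihoods:
        post *= lik             # conditional independence given the class
    return post / post.sum()    # renormalize

# Illustrative example: 3 drainage classes, a soil-map source and a
# point-observation source (numbers are made up).
prior = np.array([0.5, 0.3, 0.2])
soil_map = np.array([0.6, 0.3, 0.1])
point_obs = np.array([0.2, 0.5, 0.3])
print(bdf_posterior(prior, [soil_map, point_obs]))
```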

  14. Data fusion for planning target volume and isodose prediction in prostate brachytherapy

    Science.gov (United States)

    Nouranian, Saman; Ramezani, Mahdi; Mahdavi, S. Sara; Spadinger, Ingrid; Morris, William J.; Salcudean, Septimiu E.; Abolmaesumi, Purang

    2015-03-01

In low-dose prostate brachytherapy treatment, a large number of radioactive seeds is implanted in and adjacent to the prostate gland. Planning of this treatment involves the determination of a Planning Target Volume (PTV), followed by defining the optimal number of seeds, needles and their coordinates for implantation. The two major planning tasks, i.e. PTV determination and seed definition, are associated with inter- and intra-expert variability. Moreover, since these two steps are performed in sequence, the variability is accumulated in the overall treatment plan. In this paper, we introduce a model based on a data fusion technique that enables joint determination of PTV and the minimum Prescribed Isodose (mPD) map. The model captures the correlation between different information modalities consisting of transrectal ultrasound (TRUS) volumes, PTV and isodose contours. We take advantage of joint Independent Component Analysis (jICA) as a linear decomposition technique to obtain a set of joint components that optimally describe such correlation. We perform a component stability analysis to generate a model with stable parameters that predicts the PTV and isodose contours solely based on a new patient TRUS volume. We propose a framework for both the modeling and prediction processes and evaluate it on a dataset of 60 brachytherapy treatment records. We show a PTV prediction error of 10.02 +/- 4.5% and a V100 isodose overlap of 97 +/- 3.55% with respect to the clinical gold standard.

  15. Prediction of Chinese coal ash fusion temperatures in Ar and H{sub 2} atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Wen J. Song; Li H. Tang; Xue D. Zhu; Yong Q. Wu; Zi B. Zhu; Shuntarou Koyama [East China University of Science and Technology, Shanghai (China)

    2009-04-15

The ash fusion temperatures (AFTs) of 21 typical Chinese coal ash samples and 60 synthetic ash samples were measured in Ar and H{sub 2} atmospheres. The computer software package FactSage was used to calculate the temperatures corresponding to different proportions of the liquid phase and to predict the phase equilibria of the synthetic ash samples. Empirical liquidus models were derived to correlate the AFTs of the 60 synthetic ash samples under both Ar and H{sub 2} atmospheres with their liquidus temperatures calculated by FactSage. These models were used to predict the AFTs of the 21 Chinese coal ash samples in Ar and H{sub 2} atmospheres, and the AFT differences between the atmospheres were then analyzed. The results show that, for both atmospheres, there was an apparent linear correlation and good agreement between the AFTs of the synthetic ash samples and the liquidus temperatures calculated by FactSage (R > 0.89, and {sigma} < 30{sup o}C). These models predict the AFTs of coal ash samples with a high level of accuracy (SE < 30{sup o}C). Because the iron oxides in coal ash samples fused under an H{sub 2} atmosphere are reduced to metallic iron, leading to changes in mineral species and micromorphology, the AFTs in an H{sub 2} atmosphere are always higher than those in an Ar atmosphere. 34 refs., 9 figs., 7 tabs.

  16. Prediction of Signal Peptide Cleavage Sites with Subsite-Coupled and Template Matching Fusion Algorithm.

    Science.gov (United States)

    Zhang, Shao-Wu; Zhang, Ting-He; Zhang, Jun-Nan; Huang, Yufei

    2014-03-01

Fast and effective prediction of signal peptides (SP) and their cleavage sites is of great importance in computational biology. The approaches developed to predict signal peptides can be roughly divided into machine learning based and sliding window based. In order to further increase the prediction accuracy and the coverage of organisms for SP cleavage sites, we propose a novel method for predicting SP cleavage sites called Signal-CTF that utilizes machine learning and sliding windows, and is designed for N-terminal secretory proteins in a large variety of organisms including human, animal, plant, virus, bacteria, fungi and archaea. Signal-CTF consists of three distinct elements: (1) a subsite-coupled and regularization function with a scaled window of fixed width that selects a set of candidates of possible secretion-cleavable segments for a query secretory protein; (2) a sum fusion system that integrates the outcomes from aligning the cleavage site template sequence with each of the aforementioned candidates in a scaled window of fixed width to determine the best candidate cleavage sites for the query secretory protein; (3) a voting system that identifies the ultimate signal peptide cleavage site among all possible results derived from using scaled windows of different widths. When compared with the Signal-3L and SignalP 4.0 predictors, the prediction accuracy of Signal-CTF is 4-12% higher than that of Signal-3L for human, animal and eukaryote data, and 10-25% higher than that of SignalP 4.0 for eukaryota, Gram-positive bacteria and Gram-negative bacteria. Compared with the PRED-SIGNAL and SignalP 4.0 predictors on the 32 archaeal secretory proteins used in Bagos's paper, the prediction accuracy of Signal-CTF is 12.5% and 25% higher than that of PRED-SIGNAL and SignalP 4.0, respectively. The prediction results for several long signal peptides show that Signal-CTF can better predict cleavage sites for long signal peptides than SignalP, Phobius, Philius, SPOCTOPUS, Signal

  17. Paradoxical fusion of two images and depth perception with a squinting eye.

    Science.gov (United States)

    Rychkova, S I; Ninio, J

    2009-03-01

    Some strabismic patients with inconstant squint can fuse two images in a single eye, and experience lustre and depth. One of these images is foveal and the other extrafoveal. Depth perception was tested on 30 such subjects. Relief was perceived mostly on the fixated image. Camouflaged continuous surfaces (hemispheres, cylinders) were perceived as bumps or hollows, without detail. Camouflaged rectangles could not be separated in depth from the background, while their explicit counterparts could. Slanted bars were mostly interpreted as frontoparallel near or remote bars. Depth responses were more frequent with stimuli involving inward rather than outward disparities, and were then heavily biased towards "near" judgements. All monocular fusion effects were markedly reduced after the recovery of normal stereoscopic vision following an orthoptic treatment. The depth effects reported here may provide clues on what stereoscopic pathways may or may not accomplish with incomplete retinal and misleading vergence information.

  18. Airborne Infrared and Visible Image Fusion for Target Perception Based on Target Region Segmentation and Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Yifeng Niu

    2012-01-01

Full Text Available Infrared and visible image fusion is an important precondition for realizing target perception for unmanned aerial vehicles (UAVs), so that UAVs can perform various given missions. Texture and color information is abundant in visible images, while target information is more salient in infrared images. The conventional fusion methods are mostly based on region segmentation; as a result, a fused image suitable for target recognition could not actually be acquired. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which can gain more target information and preserve more background information. The fusion experiments cover three cases: the target is stationary and observable in both visible and infrared images; targets are moving and observable in both visible and infrared images; and the target is observable only in the infrared image. Experimental results show that the proposed method can generate better fused images for airborne target perception.

  19. New image fusion method applied in two-wavelength detection of biochip spots

    Science.gov (United States)

    Chang, Rang-Seng; Sheu, Jin-Yi; Lin, Ching-Huang

    2001-09-01

In biological systems, genetic information is read, stored, modified, transcribed and translated using the rules of molecular recognition. Every nucleic acid strand carries the capacity to recognize complementary sequences through base pairing. Molecular biologists commonly use DNA probes with known sequences to identify unknown sequences through hybridization. There are many different detection methods for the hybridization results on a genechip; fluorescent detection is a conventional one. Data analysis based on the fluorescent images, together with database establishment, is necessary to handle the large amount of data obtained from a genechip. The unknown sequence is labeled with fluorescent material. Since the excitation and emission bands are not theoretically narrow, the emission windows differ between microscopes, and so do the resulting data readings. We therefore combine two narrow-band emission readings and treat them as two wavelengths from one fluorophore. After reading the fluorescent intensity distributions at the two microscope wavelengths, under the corresponding UV excitation, for the same hybridized DNA sequence spot, we use image fusion technology to obtain the best result. We introduce a contrast and aberration correction image fusion method using the discrete wavelet transform for two-wavelength identification on microarray biochips. This method includes two parts. First, the multiresolution analyses of the two input images are obtained by the discrete wavelet transform; from the ratio of the high frequencies to the low frequency on the corresponding spatial resolution level, the directive contrast can be estimated by selecting the suitable subband signals of each input image. The fused image is then reconstructed using the inverse wavelet transform.

  20. CLASSIFICATION OF DIGITAL IMAGES USING FUSION ELEVATED ORDER CLASSIFIER IN WAVELET NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. Arulmurugan

    2014-01-01

Full Text Available Wavelet neural networks have found extensive use in digital image processing. Shape representation, classification and detection play a very important role in image analysis. Boosted Greedy Sparse Linear Discriminant Analysis (BGSLDA) trains the detection cascade efficiently; by applying a reweighting concept and a class-separability criterion, less search is spent on finding efficient weak classifiers. The Multi-Scale Histogram of Oriented Gradients (MS-HOG) method, in turn, removes the confined portions of images and handles advanced recognition scenarios such as rotations and translations of multiple objects, but it does not perform effective feature classification. To overcome the drawbacks in the classification of higher-order units, the Fusion Elevated Order Classifier (FEOC) method is introduced. FEOC fuses higher-order units of different orders to deal with diverse datasets, changing the order of the units under parametric considerations. FEOC uses a prominent value of input neurons for better fitting properties, resulting in a higher level of learning parameters (i.e., weights), and its feature set is reduced using a feature subset selection method. Elevation mechanisms are applied to the neurons, the neuron activation function type and, finally, the higher-order types of neural network with adaptive functions. FEOC evaluates a sigma-pi network representing both the Elevated-order Processing Unit (EPU) and a pi-sigma network. The experimental performance of the Fusion Elevated Order Classifier in the wavelet neural network is evaluated against BGSLDA and MS-HOG using the Statlog (Landsat Satellite) Data Set from the UCI repository. FEOC is implemented in MATLAB and assessed on factors such as classification accuracy rate, false positive error, computational cost, memory consumption, response time and higher-order classifier rate.

  1. Hardware acceleration of lucky-region fusion (LRF) algorithm for imaging

    Science.gov (United States)

    Jackson, Christopher R.; Ejzak, Garrett A.; Aubailly, Mathieu; Carhart, Gary W.; Liu, J. J.; Kiamilev, Fouad

    2014-06-01

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames, and fuses the sharp regions into a final, improved image. In our previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient processing power to handle real-time extraction, processing and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors rather than single picture images. This document describes a hardware implementation of the LRF algorithm on a VIRTEX-7 field programmable gate array (FPGA) to achieve real-time image processing. The novelty in our approach is the creation of a "black box" LRF video processing system with a general camera link input, a user controller interface, and a camera link or DVI video output. We also describe a custom hardware simulation environment we have built to test our LRF implementation.

  2. A computer tool for the fusion and visualization of thermal and magnetic resonance images.

    Science.gov (United States)

    Bichinho, Gerson Linck; Gariba, Munir Antonio; Sanches, Ionildo José; Gamba, Humberto Remigio; Cruz, Felipe Pardal Franco; Nohama, Percy

    2009-10-01

The measurement of temperature variation along the surface of the body, provided by digital infrared thermal imaging (DITI), is becoming a valuable auxiliary tool for the early detection of many diseases in medicine. However, DITI is essentially a 2-D technique and its images carry no useful anatomical information. Multimodal image registration and fusion may overcome this difficulty and provide additional information for diagnostic purposes. In this paper, a new method for registering and merging 2-D DITI and 3-D MRI is presented. Registration of the images acquired from the two modalities is necessary as they are acquired with different imaging systems. Firstly, the body volume of interest is scanned by an MRI system and a set of 2-D DITI of it, at orthogonal angles, is acquired. Next, these two different sets of images are registered. This is done by creating 2-D MRI projections from the reconstructed 3-D MRI volume and registering them with the DITI. Once registered, the DITI is projected over the 3-D MRI. The program developed to assess the proposed method of combining MRI and DITI resulted in a new tool for fusing two different image modalities that can assist medical doctors.

  3. Spectral-spatial fusion model for robust blood pulse waveform extraction in photoplethysmographic imaging.

    Science.gov (United States)

    Amelard, Robert; Clausi, David A; Wong, Alexander

    2016-12-01

Photoplethysmographic imaging is an optical solution for non-contact cardiovascular monitoring from a distance. This camera-based technology enables physiological monitoring in situations where contact-based devices may be problematic or infeasible, such as ambulatory, sleep, and multi-individual monitoring. However, automatically extracting the blood pulse waveform signal is challenging due to the unknown mixture of relevant (pulsatile) and irrelevant pixels in the scene. Here, we propose a signal fusion framework, FusionPPG, for extracting a blood pulse waveform signal with strong temporal fidelity from a scene without requiring anatomical priors. The extraction problem is posed as a Bayesian least squares fusion problem, and solved using a novel probabilistic pulsatility model that incorporates both physiologically derived spectral and spatial waveform priors to identify pulsatility characteristics in the scene. Evaluation was performed on a 24-participant sample with various ages (9-60 years) and body compositions (fat% 30.0 ± 7.9, muscle% 40.4 ± 5.3, BMI 25.5 ± 5.2 kg·m(-2)). Experimental results show stronger matching to the ground-truth blood pulse waveform signal compared to the FaceMeanPPG method, with the signal extracted directly from the blood pulse waveform via temporal analysis.

  4. Classification decision tree algorithm assisting in diagnosing solitary pulmonary nodule by SPECT/CT fusion imaging

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

Objective To develop a classification tree algorithm to improve the diagnostic performance of 99mTc-MIBI SPECT/CT fusion imaging in differentiating solitary pulmonary nodules (SPNs). Methods Forty-four SPNs, including 30 malignant cases and 14 benign ones that were eventually pathologically identified, were included in this prospective study. All patients received 99mTc-MIBI SPECT/CT scanning at an early stage and a delayed stage before operation. Thirty predictor variables, including 11 clinical variables, 4 variable...

  5. Prompt gamma ray imaging for verification of proton boron fusion therapy: A Monte Carlo study.

    Science.gov (United States)

    Shin, Han-Back; Yoon, Do-Kun; Jung, Joo-Young; Kim, Moo-Sub; Suh, Tae Suk

    2016-10-01

The purpose of this study was to verify the feasibility of acquiring a single photon emission computed tomography image using prompt gamma rays for proton boron fusion therapy (PBFT), and to confirm the enhanced therapeutic effect of PBFT by comparison with conventional proton therapy without the use of boron. A Monte Carlo simulation was performed to acquire the reconstructed image during PBFT. We acquired the percentage depth dose (PDD) of the proton beams in a water phantom, the energy spectrum of the prompt gamma rays, and tomographic images including the boron uptake region (BUR; target). The prompt gamma ray image was reconstructed using maximum likelihood expectation maximisation (MLEM) with 64-projection raw data. To verify the reconstructed image, both an image profile and a contrast analysis according to the iteration number were conducted. In addition, the physical distance between the two BURs in the region of interest of each BUR was measured. The PDD of the proton beam in the water phantom including the BURs shows a more efficient dose deposition in the tumour region than conventional proton therapy. A 719 keV prompt gamma ray peak was clearly observed in the prompt gamma ray energy spectrum. The prompt gamma ray image was reconstructed successfully using 64 projections. Image profiles including the two BURs were acquired from the reconstructed image according to the iteration number. We confirmed successful acquisition of a prompt gamma ray image during PBFT. In addition, the quantitative image analysis results showed relatively good performance for further study. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
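
    MLEM, named above as the reconstruction scheme, has a compact multiplicative update. A minimal sketch for a generic linear model y ≈ Ax follows; the system matrix A (mapping source voxels to detected projections) and the measurement vector y are placeholder inputs the caller must supply, not quantities derived from the paper.

```python
# Minimal MLEM sketch for y ≈ A x. A is the (projections x voxels) system
# matrix, y the measured prompt-gamma projection data; both are assumed inputs.
import numpy as np

def mlem(A, y, iters=50, eps=1e-12):
    x = np.ones(A.shape[1])              # flat initial estimate
    sens = A.sum(axis=0) + eps           # per-voxel sensitivity
    for _ in range(iters):
        proj = A @ x + eps               # forward projection
        x *= (A.T @ (y / proj)) / sens   # multiplicative EM update
    return x
```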

  6. Interpretation of remotely sensed images in a context of multisensor fusion using a multi-specialist architecture

    OpenAIRE

    Clement, Veronique; Giraudon, Gerard; Houzelle, Stéphane; Sandakly, Fadi

    1992-01-01

This report presents a scene interpretation system in a context of multisensor fusion; it has been applied to the interpretation of remotely sensed images. First we present a typology of the multisensor fusion concepts involved, and we derive the consequences of modeling problems for objects, scene and strategy. The proposed multi-specialist architecture generalizes the ideas of our previous work by taking into account the knowledge about sensors, the multiple viewing notion (shot), and the...

  7. Added value of contrast-enhanced ultrasound on biopsies of focal hepatic lesions invisible on fusion imaging guidance

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Tae Wook; Lee, Min Woo; Song, Kyoung Doo; Kim, Mimi; Kim, Seung Soo; Kim, Seong Hyun; Ha, Sang Yun [Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2017-01-15

To assess whether contrast-enhanced ultrasonography (CEUS) with Sonazoid can improve the lesion conspicuity and feasibility of percutaneous biopsies for focal hepatic lesions invisible on fusion imaging of real-time ultrasonography (US) with computed tomography/magnetic resonance images, and evaluate its impact on clinical decision making. The Institutional Review Board approved this retrospective study. Between June 2013 and January 2015, 711 US-guided percutaneous biopsies were performed for focal hepatic lesions. Biopsies were performed using CEUS for guidance if lesions were invisible on fusion imaging. We retrospectively evaluated the number of target lesions initially invisible on fusion imaging that became visible after applying CEUS, using a 4-point scale. Technical success rates of biopsies were evaluated based on histopathological results. In addition, the occurrence of changes in clinical decision making was assessed. Among 711 patients, 16 patients (2.3%) were included in the study. The median size of target lesions was 1.1 cm (range, 0.5–1.9 cm) in pre-procedural imaging. After CEUS, 15 of 16 (93.8%) focal hepatic lesions were visualized. The conspicuity score was significantly increased after adding CEUS, as compared to that on fusion imaging (p < 0.001). The technical success rate of biopsy was 87.5% (14/16). After biopsy, there were changes in clinical decision making for 11 of 16 patients (68.8%). The addition of CEUS could improve the conspicuity of focal hepatic lesions invisible on fusion imaging. This dual guidance using CEUS and fusion imaging may affect patient management via changes in clinical decision-making.

  8. The ties that bind: genetic relatedness predicts the fission and fusion of social groups in wild African elephants

    OpenAIRE

    Archie, Elizabeth A.; Moss, Cynthia J; Alberts, Susan C.

    2005-01-01

Many social animals live in stable groups. In contrast, African savannah elephants (Loxodonta africana) live in unusually fluid, fission–fusion societies. That is, ‘core’ social groups are composed of predictable sets of individuals; however, over the course of hours or days, these groups may temporarily divide and reunite, or they may fuse with other social groups to form much larger social units. Here, we test the hypothesis that genetic relatedness predicts patterns of group fission and fusion...

  9. Facing "the Curse of Dimensionality": Image Fusion and Nonlinear Dimensionality Reduction for Advanced Data Mining and Visualization of Astronomical Images

    Science.gov (United States)

    Pesenson, Meyer; Pesenson, I. Z.; McCollum, B.

    2009-05-01

The complexity of multitemporal/multispectral astronomical data sets together with the approaching petascale of such datasets and large astronomical surveys require automated or semi-automated methods for knowledge discovery. Traditional statistical methods of analysis may break down not only because of the amount of data, but mostly because of the increase of the dimensionality of data. Image fusion (combining information from multiple sensors in order to create a composite enhanced image) and dimension reduction (finding lower-dimensional representation of high-dimensional data) are effective approaches to "the curse of dimensionality", thus facilitating automated feature selection, classification and data segmentation. Dimension reduction methods greatly increase computational efficiency of machine learning algorithms, improve statistical inference and together with image fusion enable effective scientific visualization (as opposed to mere illustrative visualization). The main approach of this work utilizes recent advances in multidimensional image processing, as well as representation of essential structure of a data set in terms of its fundamental eigenfunctions, which are used as an orthonormal basis for the data visualization and analysis. We consider multidimensional data sets and images as manifolds or combinatorial graphs and construct variational splines that minimize certain Sobolev norms. These splines allow us to reconstruct the eigenfunctions of the combinatorial Laplace operator by using only a small portion of the graph. We use the first two or three eigenfunctions for embedding large data sets into two- or three-dimensional Euclidean space. Such reduced data sets allow efficient data organization, retrieval, analysis and visualization. We demonstrate applications of the algorithms to test cases from the Spitzer Space Telescope. This work was carried out with funding from the National Geospatial-Intelligence Agency University Research Initiative.
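
    The embedding step described above (eigenfunctions of the combinatorial Laplacian used as coordinates) can be sketched compactly; the spline-based recovery from a small subgraph is omitted. The k-NN graph construction and normalization choices below are assumptions for illustration.

```python
# Sketch of Laplacian-eigenfunction embedding: build a k-NN graph, form the
# (normalized) graph Laplacian, and use its first nontrivial eigenvectors
# as low-dimensional coordinates for visualization.
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.spatial import distance_matrix

def laplacian_embedding(X, k=10, dim=2):
    D = distance_matrix(X, X)
    idx = np.argsort(D, axis=1)[:, 1:k + 1]   # k nearest neighbours
    W = np.zeros_like(D)
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                    # symmetrize the graph
    L = laplacian(W, normed=True)
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]                 # skip the constant eigenvector
```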

  10. Identifying and tracking pedestrians based on sensor fusion and motion stability predictions.

    Science.gov (United States)

    Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Maria; de la Escalera, Arturo

    2010-01-01

    The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle.

  11. Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions

    Directory of Open Access Journals (Sweden)

    Arturo de la Escalera

    2010-08-01

    Full Text Available The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem and dense disparity maps and u-v disparity (vision subsystem. Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle.

  12. A Locomotion Intent Prediction System Based on Multi-Sensor Fusion

    Directory of Open Access Journals (Sweden)

    Baojun Chen

    2014-07-01

Full Text Available Locomotion intent prediction is essential for the control of powered lower-limb prostheses to realize smooth locomotion transitions. In this research, we develop a multi-sensor fusion based locomotion intent prediction system, which can recognize the current locomotion mode and detect locomotion transitions in advance. Seven able-bodied subjects were recruited for this research. Signals from two foot pressure insoles and three inertial measurement units (one on the thigh, one on the shank and the other on the foot) are measured. A two-level recognition strategy is used with a linear discriminant classifier. Six locomotion modes and ten locomotion transitions are tested in this study. Recognition accuracy during steady locomotion periods (i.e., no locomotion transitions) is 99.71% ± 0.05% for the seven able-bodied subjects. During locomotion transition periods, all transitions are correctly detected and most can be detected before transitioning to the new locomotion mode. No significant deterioration in recognition performance is observed in the five hours after the system is trained, and only a small number of experimental trials is required to train reliable classifiers.

  13. Angular radiation temperature simulation for time-dependent capsule drive prediction in inertial confinement fusion

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Longfei; Yang, Dong; Li, Hang; Zhang, Lu; Lin, Zhiwei; Li, Liling; Kuang, Longyu [Research Center of Laser Fusion, China Academy of Engineering Physics, Mianyang 621900 (China); Jiang, Shaoen, E-mail: jiangshn@vip.sina.com; Ding, Yongkun [Research Center of Laser Fusion, China Academy of Engineering Physics, Mianyang 621900 (China); Center for Applied Physics and Technology, Peking University, Beijing 100871 (China); Huang, Yunbao, E-mail: huangyblhy@gmail.com [Mechatronics School of Guangdong University of Technology, Guangzhou 510080 (China)

    2015-02-15

    The x-ray drive on a capsule in an inertial confinement fusion setup is crucial for ignition. Unfortunately, a direct measurement has not been possible so far. We propose an angular radiation temperature simulation to predict the time-dependent drive on the capsule. A simple model, based on the view-factor method for the simulation of the radiation temperature, is presented and compared with the experimental data obtained using the OMEGA laser facility and the simulation results acquired with VISRAD code. We found a good agreement between the time-dependent measurements and the simulation results obtained using this model. The validated model was then used to analyze the experimental results from the Shenguang-III prototype laser facility. More specifically, the variations of the peak radiation temperatures at different view angles with the albedo of the hohlraum, the motion of the laser spots, the closure of the laser entrance holes, and the deviation of the laser power were investigated. Furthermore, the time-dependent radiation temperature at different orientations and the drive history on the capsule were calculated. The results indicate that the radiation temperature from “U20W112” (named according to the diagnostic hole ID on the target chamber) can be used to approximately predict the drive temperature on the capsule. In addition, the influence of the capsule on the peak radiation temperature is also presented.

  14. Lossless compression of hyperspectral images using hybrid context prediction.

    Science.gov (United States)

    Liang, Yuan; Li, Jianping; Guo, Ke

    2012-03-26

    In this letter a new algorithm for lossless compression of hyperspectral images using hybrid context prediction is proposed. Lossless compression algorithms are typically divided into two stages, a decorrelation stage and a coding stage. The decorrelation stage supports both intraband and interband predictions. The intraband (spatial) prediction uses the median prediction model, since the median predictor is fast and efficient. The interband prediction uses hybrid context prediction. The hybrid context prediction is the combination of a linear prediction (LP) and a context prediction. Finally, the residual image of hybrid context prediction is coded by the arithmetic coding. We compare the proposed lossless compression algorithm with some of the existing algorithms for hyperspectral images such as 3D-CALIC, M-CALIC, LUT, LAIS-LUT, LUT-NN, DPCM (C-DPCM), JPEG-LS. The performance of the proposed lossless compression algorithm is evaluated. Simulation results show that our algorithm achieves high compression ratios with low complexity and computational cost.
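
    The median prediction model mentioned for the intraband stage is the MED predictor familiar from LOCO-I/JPEG-LS; its entire logic fits in a few lines. The neighbour naming below (a = west, b = north, c = north-west) follows the usual convention.

```python
# MED (median) spatial predictor, as used in LOCO-I/JPEG-LS: predict a
# pixel from its west (a), north (b), and north-west (c) neighbours.
def med_predict(a, b, c):
    if c >= max(a, b):
        return min(a, b)   # edge detected above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # planar prediction in smooth regions
```

    The interband stage would then model each band's residual from previously coded bands, for instance with a least-squares linear predictor over a causal context, before the remaining residuals are arithmetic coded.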

  15. Feature Fusion Based Road Extraction for HJ-1-C SAR Image

    Directory of Open Access Journals (Sweden)

    Lu Ping-ping

    2014-06-01

Full Text Available Road network extraction from SAR images is a key task for military and civilian applications. To address road extraction from HJ-1-C SAR images, an algorithm based on the integration of ratio and directional information is proposed. Given the characteristically narrow dynamic range and low signal-to-noise ratio of HJ-1-C SAR images, a nonlinear quantization and an image filtering method based on a multi-scale autoregressive model are proposed here. A road extraction algorithm based on information fusion, which considers ratio and direction information, is also proposed. By applying the Radon transform, the main road directions can be extracted. Cross interference can be suppressed, and road continuity can then be improved, by the main direction alignment and secondary road extraction. An HJ-1-C SAR image acquired over Wuhan, China was used to evaluate the proposed method. The experimental results show good performance, with a correctness of 80.5% and a quality of 70.1%, when applied to a SAR image with complex content.

  16. Fusion of Hyperspectral and Vhr Multispectral Image Classifications in Urban Areas

    Science.gov (United States)

    Hervieu, Alexandre; Le Bris, Arnaud; Mallet, Clément

    2016-06-01

An energy-based approach is proposed for classification decision fusion in urban areas using multispectral and hyperspectral imagery at distinct spatial resolutions. Hyperspectral data provide a great ability to discriminate land-cover classes, while multispectral data, usually at higher spatial resolution, make possible a more accurate spatial delineation of the classes. Hence, the aim here is to achieve the most accurate classification maps by taking advantage of both data sources at the decision level: the spectral properties of the hyperspectral data and the geometrical resolution of the multispectral images. More specifically, the proposed method takes into account probability class membership maps in order to improve the classification fusion process. Such probability maps are available using standard classification techniques such as Random Forests or Support Vector Machines. Classification probability maps are integrated into an energy framework where minimization of a given energy leads to better classification maps. The energy is minimized using a graph-cut method called quadratic pseudo-boolean optimization (QPBO) with α-expansion. A first model is proposed that gives satisfactory results in terms of classification quality and visual interpretation. This model is compared to a standard Potts model adapted to the considered problem. Finally, the model is enhanced by integrating the spatial contrast observed in the data source of higher spatial resolution (i.e., the multispectral image). Results obtained using the proposed energy-based decision fusion process are shown on two urban multispectral/hyperspectral datasets. An improvement of 2-3% is observed with respect to a Potts formulation, and of 3-8% compared to a single hyperspectral-based classification.

  17. Quicksilver: Fast predictive image registration - A deep learning approach.

    Science.gov (United States)

    Yang, Xiao; Kwitt, Roland; Styner, Martin; Niethammer, Marc

    2017-07-11

    This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image-pairs works by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network which can be sampled during the testing time to calculate uncertainties in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as an open-source software. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Nonsubsampled rotated complex wavelet transform (NSRCxWT) for medical image fusion related to clinical aspects in neurocysticercosis.

    Science.gov (United States)

    Chavan, Satishkumar S; Mahajan, Abhishek; Talbar, Sanjay N; Desai, Subhash; Thakur, Meenakshi; D'cruz, Anil

    2017-02-01

Neurocysticercosis (NCC) is a parasitic infection caused by the tapeworm Taenia solium in its larval stage, which affects the central nervous system of the human body (a definitive host). It results in the formation of multiple lesions in the brain at different locations during its various stages. During the diagnosis of symptomatic patients, these lesions can be better visualized using a feature based fusion of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This paper presents a novel approach to Multimodality Medical Image Fusion (MMIF) used for the analysis of the lesions for diagnostic purposes and post treatment review of NCC. The MMIF presented here is a technique of combining CT and MRI data of the same patient into a new slice using a Nonsubsampled Rotated Complex Wavelet Transform (NSRCxWT). The forward NSRCxWT is applied to both source modalities separately to extract the complementary and the edge related features. These features are then combined to form a composite spectral plane using average and maximum value selection fusion rules. The inverse transformation of this composite plane results in a new, visually better, and enriched fused image. The proposed technique is tested on pilot study data sets of patients infected with NCC. The quality of these fused images is measured using objective and subjective evaluation metrics. Objective evaluation is performed by estimating fusion parameters like entropy, fusion factor, image quality index, edge quality measure, mean structural similarity index measure, etc. The fused images are also evaluated for their visual quality using subjective analysis with the help of three expert radiologists. The experimental results on 43 image data sets of 17 patients are promising and superior when compared with state of the art wavelet based fusion algorithms. The proposed algorithm can be a part of a computer-aided detection and diagnosis (CADD) system which assists the radiologists in

  19. Development of an MRI fiducial marker prototype for automated MR-US fusion of abdominal images

    Science.gov (United States)

    Favazza, C. P.; Gorny, K. R.; Washburn, M. J.; Hangiandreou, N. J.

    2014-03-01

External MRI fiducial marker devices are expected to facilitate robust, accurate, and efficient image fusion between MRI and other modalities. Automation of this process requires careful selection of a suitable marker size and a material visible across a variety of pulse sequences, design of an appropriate fiducial device, and a robust segmentation algorithm. A set of routine clinical abdominal MRI pulse sequences was used to image a variety of marker materials and a range of marker sizes. The most successfully detected marker was a 12.7 mm diameter cylindrical reservoir filled with a 1 g/L copper sulfate solution. A fiducial device was designed and fabricated from four such markers arranged in a tetrahedral orientation. MRI examinations were performed with the device attached to a phantom and to a volunteer, and a custom developed algorithm was used to detect and segment the individual markers. The individual markers were accurately segmented in all sequences for both the phantom and the volunteer. The measured intra-marker spacings matched well with the dimensions of the fiducial device. The average deviations from the actual physical spacings were 0.45 +/- 0.40 mm and 0.52 +/- 0.36 mm for the phantom and the volunteer data, respectively. These preliminary results suggest that this general fiducial design and detection algorithm could be used for MRI multimodality fusion applications.

  20. A Review of Image Fusion Algorithms Based on the Super-Resolution Paradigm

    Directory of Open Access Journals (Sweden)

    Andrea Garzelli

    2016-09-01

    Full Text Available A critical analysis of remote sensing image fusion methods based on the super-resolution (SR) paradigm is presented in this paper. Very recent algorithms have been selected from among the pioneering studies adopting this new methodology and the most promising solutions. After introducing the concept of super-resolution and modeling the approach as a constrained optimization problem, different SR solutions for spatio-temporal fusion and pan-sharpening are reviewed and critically discussed. Concerning pan-sharpening, the well-known, simple, yet effective proportional additive wavelet in the luminance component (AWLP) method is adopted as a benchmark to assess the performance of the new SR-based pan-sharpening methods. Results in terms of the widespread quality indexes computed at degraded resolution, with the original multispectral image used as the reference, i.e., SAM (Spectral Angle Mapper) and ERGAS (Erreur Relative Globale Adimensionnelle de Synthèse), are finally presented. Considering these results, sparse representation and Bayesian approaches seem far from mature enough to be adopted in operational pan-sharpening scenarios.
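    For reference, the two quality indexes used in the comparison can be computed as below; this is a minimal sketch assuming reference and fused images as (bands, rows, cols) float arrays and, for ERGAS, a pan/MS scale ratio of 4.

```python
# Minimal reference implementations of SAM and ERGAS for reduced-
# resolution assessment against a reference multispectral image.
import numpy as np

def sam_degrees(ref, fused, eps=1e-12):
    """Mean spectral angle (degrees) between reference and fused pixels."""
    dot = (ref * fused).sum(axis=0)
    norms = np.linalg.norm(ref, axis=0) * np.linalg.norm(fused, axis=0)
    angles = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return float(np.degrees(angles.mean()))

def ergas(ref, fused, ratio=4):
    """Erreur Relative Globale Adimensionnelle de Synthese."""
    mse_per_band = ((ref - fused) ** 2).mean(axis=(1, 2))
    mean2_per_band = ref.mean(axis=(1, 2)) ** 2
    return float(100.0 / ratio
                 * np.sqrt((mse_per_band / mean2_per_band).mean()))
```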

  1. Objective evaluation of target detectability in night vision color fusion images

    Institute of Scientific and Technical Information of China (English)

    Yihui Yuan; Junju Zhang; Benkang Chang; Hui Xu; Yiyong Han

    2011-01-01

    An evaluation for objectively assessing the target detectability in night vision color fusion images is proposed. On the assumption that target detectability could be modeled as the perceptual color variation between the target and its optimal sensitive background region, we propose an objective target detectability metric in CIELAB color space defined by four color information features: target luminance, region perceptual luminance variation in the human vision system, region hue difference, and region chroma difference. Experimental results show that the proposed metric is perceptually meaningful because it corresponds well with subjective evaluation.
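    The core quantity behind such a metric is a perceptual color difference measured in CIELAB. A hedged sketch, assuming scikit-image's rgb2lab for the color space conversion and boolean masks for the target and background regions (the paper's full metric additionally weighs luminance, hue, and chroma terms separately):

```python
# CIELAB color difference between the mean target color and the mean
# background color; a CIE76 Delta-E, not the paper's full metric.
import numpy as np
from skimage.color import rgb2lab

def target_background_delta_e(img_rgb, target_mask, background_mask):
    lab = rgb2lab(img_rgb)                  # (H, W, 3) L*a*b* image
    target_mean = lab[target_mask].mean(axis=0)
    background_mean = lab[background_mask].mean(axis=0)
    return float(np.linalg.norm(target_mean - background_mean))
```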

  2. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2016-02-01

    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.
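    The surrogate-based selection step can be pictured as ranking registered atlases by a cheap intensity similarity and keeping the top k. A minimal sketch, with normalized cross-correlation standing in for whichever surrogate metric is chosen:

```python
# Rank registered atlases against the target image by a surrogate
# intensity similarity (here NCC) and keep the k most relevant ones.
import numpy as np

def ncc(a, b, eps=1e-12):
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float((a * b).mean())

def select_fusion_set(target, atlases, k):
    scores = np.array([ncc(target, atlas) for atlas in atlases])
    order = np.argsort(scores)[::-1]        # most relevant first
    return order[:k], scores[order[:k]]
```

    The paper's contribution is precisely how to choose k: large enough to include the most relevant atlases, small enough to exclude irrelevant ones whose labels would dilute the fusion.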

  3. Change Detection of High Resolution SAR Images by the Fusion of Coherent/Incoherent Information

    Directory of Open Access Journals (Sweden)

    Yang Xiang-li

    2015-10-01

    Full Text Available Aiming at detecting the change regions of high resolution Synthetic Aperture Radar (SAR) images, we propose to use the Dempster-Shafer (D-S) evidence theory to fuse coherent and incoherent features. First, we use the Simple Linear Iterative Clustering (SLIC) segmentation algorithm to implement multi-scale joint segmentation of the multi-temporal SAR images. Second, we extract multiple intensity and coherence difference features with a ratio operator, and apply a mean operator over each SLIC segment at every scale to fuse the multi-scale features into multi-feature difference maps. Finally, we fuse the multi-feature difference maps with the D-S evidence theory to obtain the final change detection result. The experimental results in our study demonstrate the effectiveness of the proposed algorithm.
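    The final fusion step rests on Dempster's rule of combination. A toy instance on the two-hypothesis frame {change, no-change}, with one mass function per feature source and illustrative values not taken from the paper:

```python
# Dempster's rule on the frame {change, no-change}. Each mass function
# assigns belief to 'c' (change), 'n' (no change) and 'cn' (ignorance).
def dempster_combine(m1, m2):
    conflict = m1["c"] * m2["n"] + m1["n"] * m2["c"]
    norm = 1.0 - conflict                   # discard conflicting mass
    return {
        "c": (m1["c"] * m2["c"] + m1["c"] * m2["cn"] + m1["cn"] * m2["c"]) / norm,
        "n": (m1["n"] * m2["n"] + m1["n"] * m2["cn"] + m1["cn"] * m2["n"]) / norm,
        "cn": m1["cn"] * m2["cn"] / norm,
    }

# e.g. intensity evidence weakly for change, coherence strongly for change:
fused = dempster_combine({"c": 0.5, "n": 0.3, "cn": 0.2},
                         {"c": 0.7, "n": 0.1, "cn": 0.2})
# fused["c"] is about 0.80, so this pixel would be labeled as changed.
```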

  4. Image mining for investigative pathology using optimized feature extraction and data fusion.

    Science.gov (United States)

    Chen, Wenjin; Meer, Peter; Georgescu, Bogdan; He, Wei; Goodell, Lauri A; Foran, David J

    2005-07-01

    In many subspecialties of pathology, the intrinsic complexity of rendering accurate diagnostic decisions is compounded by a lack of definitive criteria for detecting and characterizing diseases and their corresponding histological features. In some cases, there exists a striking disparity between the diagnoses rendered by recognized authorities and those provided by non-experts. We previously reported the development of an Image Guided Decision Support (IGDS) system, which was shown to reliably discriminate among malignant lymphomas and leukemias that are sometimes confused with one another during routine microscopic evaluation. As an extension of those efforts, we report here a web-based intelligent archiving subsystem that can automatically detect, image, and index new cells into distributed ground-truth databases. Systematic experiments showed that, through the use of robust texture descriptors and density-estimation-based fusion, the reliability and performance of the system's classifications were improved significantly while the dimensionality of the feature space was simultaneously reduced.
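    The density-estimation-based fusion idea can be sketched as fitting per-class, per-channel kernel density estimates and summing log-likelihoods, so heterogeneous descriptors vote on a common probabilistic scale. This is a loose illustration using SciPy's gaussian_kde, not the authors' estimator:

```python
# Fit one univariate KDE per class and feature channel; classify a new
# feature vector by the class with the highest summed log-likelihood.
import numpy as np
from scipy.stats import gaussian_kde

def fit_class_kdes(features_by_class):
    """features_by_class: {label: (n_samples, n_channels) array}."""
    return {label: [gaussian_kde(f[:, j]) for j in range(f.shape[1])]
            for label, f in features_by_class.items()}

def classify(x, kdes):
    """x: (n_channels,) feature vector for one cell image."""
    def log_likelihood(label):
        return sum(np.log(k(x[j])[0] + 1e-300)
                   for j, k in enumerate(kdes[label]))
    return max(kdes, key=log_likelihood)
```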

  5. Tools for Predicting Optical Damage on Inertial Confinement Fusion-Class Laser Systems

    Energy Technology Data Exchange (ETDEWEB)

    Nostrand, M C; Carr, C W; Liao, Z M; Honig, J; Spaeth, M L; Manes, K R; Johnson, M A; Adams, J J; Cross, D A; Negres, R A; Widmayer, C C; Williams, W H; Matthews, M J; Jancaitis, K S; Kegelmeyer, L M

    2010-12-20

    Operating a fusion-class laser to its full potential requires a balance of operating constraints. On the one hand, the total laser energy delivered must be high enough to give an acceptable probability of ignition success. On the other hand, the laser-induced optical damage levels must be low enough to be acceptably handled with the available infrastructure and budget for optics recycling. Our research goal was to develop the models, database structures, and algorithmic tools (which we collectively refer to as "Loop Tools") needed to successfully maintain this balance. Predictive models are needed to plan for and manage the impact of shot campaigns from proposal, to shot, and beyond, covering a time span of years. The cost of a proposed shot campaign must be determined from these models, and governance boards must decide, based on predictions, whether to incorporate a given campaign into the facility shot plan given the available resources. Predictive models are often built on damage "rules" derived from small-beam damage tests on small optics. These off-line studies vary the energy, pulse shape, and wavelength in order to understand how these variables influence the initiation of damage sites and how initiated damage sites can grow upon further exposure to UV light. It is essential to test these damage "rules" on full-scale optics exposed to the complex conditions of an integrated ICF-class laser system. Furthermore, monitoring damage of optics on an ICF-class laser system can help refine damage rules and aid in the development of new rules. Finally, we need to develop the algorithms and database management tools for implementing these rules in the Loop Tools. The following highlights progress in the development of the Loop Tools and their implementation.

  6. Fusion of contrast-enhanced breast MR and mammographic imaging data.

    Science.gov (United States)

    Behrenbruch, Christian P; Marias, Kostas; Armitage, Paul A; Yam, Margaret; Moore, Niall; English, Ruth E; Clarke, Jane; Brady, Michael

    2003-09-01

    Increasing use is being made of Gd-DTPA contrast-enhanced magnetic resonance imaging for breast cancer assessment since it provides 3D functional information via pharmacokinetic interaction between contrast agent and tumour vascularity, and because it is applicable to women of all ages as well as patients with post-operative scarring. Contrast-enhanced MRI (CE-MRI) is complementary to conventional X-ray mammography, since it is a relatively low-resolution functional counterpart of a comparatively high-resolution 2D structural representation. However, despite the additional information provided by MRI, mammography is still an extremely important diagnostic imaging modality, particularly for several common conditions such as ductal carcinoma in situ (DCIS), where it has been shown that there is a strong correlation between microcalcification clusters and malignancy. Pathological indicators such as calcifications and fine spiculations are not visible in CE-MRI, and therefore there is clinical and diagnostic value in fusing the high-resolution structural information available from mammography with the functional data acquired from MR imaging. This paper presents a novel data fusion technique whereby medial-lateral oblique (MLO) and cranial-caudal (CC) mammograms (2D data) are registered to 3D contrast-enhanced MRI volumes. We utilise a combination of pharmacokinetic modelling, projection geometry, wavelet-based landmark detection and thin-plate spline non-rigid 'warping' to transform the coordinates of regions of interest (ROIs) from the 2D mammograms to the spatial reference frame of the contrast-enhanced MRI volume. Of key importance is the use of a flexible wavelet-based feature extraction technique that enables feature correspondences to be robustly determined between the very different image characteristics of X-ray mammography and MRI. The fusion framework is evaluated on a series of clinical cases comprising a total of 14 patient examples.
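    The final warping step described above maps 2D ROI coordinates through a thin-plate spline fitted to matched landmarks. A minimal sketch, assuming SciPy's RBFInterpolator with a thin-plate-spline kernel in place of the authors' implementation:

```python
# Map 2-D ROI coordinates from the mammogram into the MRI reference
# frame via a thin-plate spline fitted to matched landmark pairs.
import numpy as np
from scipy.interpolate import RBFInterpolator

def warp_rois(src_landmarks, dst_landmarks, roi_points):
    """src/dst_landmarks: (N, 2) matched coordinates; roi_points:
    (M, 2). At least 3 non-collinear landmark pairs are required."""
    tps = RBFInterpolator(src_landmarks, dst_landmarks,
                          kernel="thin_plate_spline")
    return tps(np.asarray(roi_points, dtype=float))
```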

  7. Technique for infrared and visible image fusion based on non-subsampled shearlet transform and spiking cortical model

    Science.gov (United States)

    Kong, Weiwei; Wang, Binghe; Lei, Yang

    2015-07-01

    Fusion of infrared and visible images is an active research area in image processing, and a variety of relevant algorithms have been developed. However, existing techniques commonly cannot achieve good fusion performance and acceptable computational complexity simultaneously. This paper proposes a novel image fusion approach that integrates the non-subsampled shearlet transform (NSST) with the spiking cortical model (SCM) to overcome the above drawbacks. On the one hand, using NSST for decomposition and reconstruction is not only consistent with human vision characteristics but also effectively decreases the computational complexity compared with currently popular multi-resolution analysis tools such as the non-subsampled contourlet transform (NSCT). On the other hand, the SCM, which has recently been considered an optimal neural network model, is responsible for the fusion of sub-images from different scales and directions. Experimental results indicate that the proposed method is promising: it significantly improves fusion quality in terms of both subjective visual performance and objective comparisons with other currently popular methods.

  8. Accuracy of postoperative computed tomography and magnetic resonance image fusion for assessing deep brain stimulation electrodes.

    Science.gov (United States)

    Thani, Nova B; Bala, Arul; Swann, Gary B; Lind, Christopher R P

    2011-07-01

    Knowledge of the anatomic location of the deep brain stimulation (DBS) electrode in the brain is essential for quality control and judicious selection of stimulation parameters. Postoperative computed tomography (CT) imaging coregistered with preoperative magnetic resonance imaging (MRI) is commonly used to document the electrode location safely. The accuracy of this method, however, depends on many factors, including the quality of the source images, the area of signal artifact created by the DBS lead, and the fusion algorithm. The objective of this study was to calculate the accuracy of determining the location of the active contacts of the DBS electrode by coregistering postoperative CT images to intraoperative MRI. Intraoperative MRI with a surrogate marker (carbothane stylette) was digitally coregistered with postoperative CT showing the DBS electrodes in 8 consecutive patients. The location of the active contact of the DBS electrode was calculated in stereotactic frame space, and the discrepancy between the 2 images was assessed. The carbothane stylette significantly reduces the signal void on the MRI, to a mean diameter of 1.4 ± 0.1 mm. The discrepancy between the CT and MRI coregistration in assessing the active contact location of the DBS lead is 1.6 ± 0.2 mm using image fusion (Medtronic, Minneapolis, Minnesota) software. CT/MRI coregistration is an acceptable method of identifying the anatomic location of the DBS electrode and active contacts.
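    The reported accuracy figure is simply the mean Euclidean discrepancy between corresponding contact coordinates in the two coregistered images. A trivial sketch, with illustrative names, assuming contact positions are already expressed in frame space in millimetres:

```python
# Mean +/- SD Euclidean discrepancy between corresponding active-
# contact coordinates from the two coregistered images (mm).
import numpy as np

def contact_discrepancy(ct_contacts_mm, mri_contacts_mm):
    d = np.linalg.norm(np.asarray(ct_contacts_mm)
                       - np.asarray(mri_contacts_mm), axis=1)
    return d.mean(), d.std()
```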

  9. Synthetic aperture microwave imaging with active probing for fusion plasma diagnostics

    CERN Document Server

    Shevchenko, Vladimir F; Freethy, Simon J; Huang, Billy K

    2012-01-01

    A Synthetic Aperture Microwave Imaging (SAMI) system has been designed and built to obtain 2-D images at several frequencies from fusion plasmas. SAMI uses a phased array of linearly polarised antennas. The array configuration has been optimised to achieve maximum synthetic aperture beam efficiency. The signals received by the antennas are down-converted to the intermediate frequency range and then recorded in full vector form. Full vector signals allow beam focusing and image reconstruction both in real time and in a post-processing mode. SAMI can scan over 16 preprogrammed frequencies in the range of 10-35 GHz with a switching time of 300 ns. The system operates in two modes simultaneously: passive imaging of plasma emission, and active imaging of the back-scattered signal from radiation launched by one of the antennas of the same array. This second mode is similar to so-called Doppler backscattering (DBS) reflectometry, with 2-D resolution of the propagation velocity of turbulent structures.
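    The post-processing beam focusing enabled by full vector signals can be sketched, for a single narrowband frequency, as phase-aligning each antenna's complex sample toward a chosen look direction and summing. This is a simplified far-field model of the principle, not the SAMI reconstruction code:

```python
# Delay-and-sum style focusing for one frequency: compensate the
# geometric phase of each antenna toward a look direction, then sum.
import numpy as np

def steer(samples, positions_m, wavelength_m, look_dir):
    """samples: (n_ant,) complex I/Q values; positions_m: (n_ant, 2)
    antenna positions; look_dir: unit 2-vector toward the image pixel."""
    k = 2.0 * np.pi / wavelength_m
    phase = k * (np.asarray(positions_m) @ np.asarray(look_dir))
    return np.sum(samples * np.exp(-1j * phase))  # focused amplitude
```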

  10. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    Science.gov (United States)

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective qua