WorldWideScience

Sample records for image fusion predicts

  1. Radiomic biomarkers from PET/CT multi-modality fusion images for the prediction of immunotherapy response in advanced non-small cell lung cancer patients

    Science.gov (United States)

    Mu, Wei; Qi, Jin; Lu, Hong; Schabath, Matthew; Balagurunathan, Yoganand; Tunali, Ilke; Gillies, Robert James

    2018-02-01

    Purpose: To investigate the ability of the complementary information provided by the fusion of PET/CT images to predict immunotherapy response in non-small cell lung cancer (NSCLC) patients. Materials and methods: We collected 64 patients diagnosed with primary NSCLC and treated with anti-PD-1 checkpoint blockade. From the PET/CT images, fused images were created following multiple methodologies, resulting in up to 7 different images for the tumor region. Quantitative image features were extracted from the primary PET/CT images and the fused images: 195 features from the primary images and 1235 features from the fusion images. Three clinical characteristics were also analyzed. We then used support vector machine (SVM) classification models to identify discriminant features that predict immunotherapy response at baseline. Results: On the validation dataset, an SVM built with 87 fusion features and 13 primary PET/CT features had an accuracy of 87.5% and an area under the ROC curve (AUROC) of 0.82, compared with 78.12% and 0.68 for a model built with 113 original PET/CT features. Conclusion: The fusion features show a better ability to predict immunotherapy response than the individual image features.
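    The classification step described above can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the authors' pipeline; the patient and feature counts merely echo the abstract, and the response label is generated at random:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_patients, n_features = 64, 100                # sizes loosely echo the abstract
X = rng.normal(size=(n_patients, n_features))   # stand-in radiomic features
y = (X[:, :5].sum(axis=1) > 0).astype(int)      # synthetic "response" label

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])  # validation AUROC
```

In a real radiomics study the feature matrix would come from the fused tumor images, and feature selection would precede the SVM fit.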

  2. Detecting Weather Radar Clutter by Information Fusion With Satellite Images and Numerical Weather Prediction Model Output

    DEFF Research Database (Denmark)

    Bøvith, Thomas; Nielsen, Allan Aasbjerg; Hansen, Lars Kai

    2006-01-01

    A method for detecting clutter in weather radar images by information fusion is presented. Radar data, satellite images, and output from a numerical weather prediction model are combined, and the radar echoes are classified using supervised classification. The presented method uses indirect information on precipitation in the atmosphere from Meteosat-8 multispectral images and near-surface temperature estimates from the DMI-HIRLAM-S05 numerical weather prediction model. Alternatively, an operational nowcasting product called 'Precipitating Clouds' based on Meteosat-8 input is used. A scale…

  3. Spatio-Temporal Series Remote Sensing Image Prediction Based on Multi-Dictionary Bayesian Fusion

    Directory of Open Access Journals (Sweden)

    Chu He

    2017-11-01

    Full Text Available Contradictions in spatial resolution and temporal coverage emerge from earth observation remote sensing images due to limitations in technology and cost. Therefore, how to combine remote sensing images with low spatial yet high temporal resolution and those with high spatial yet low temporal resolution to construct images with both high spatial resolution and high temporal coverage has become an important problem, called the spatio-temporal fusion problem, in both research and practice. A Multi-Dictionary Bayesian Spatio-Temporal Reflectance Fusion Model (MDBFM) is proposed in this paper. First, multiple dictionaries are trained from regions of different classes. Second, a Bayesian framework is constructed to solve the dictionary selection problem; a pixel-dictionary likelihood function and a dictionary-dictionary prior function are constructed within this framework. Third, remote sensing images from before and after the middle moment are combined to predict the images at the middle moment. Diverse shape and texture information is learned from different landscapes in multi-dictionary learning, which helps the dictionaries capture the distinctions between regions. The Bayesian framework makes full use of the prior information while the input image is classified. Experiments with one simulated dataset and two satellite datasets validate that the MDBFM is highly effective on both subjective and objective evaluation indexes. The results of the MDBFM show more precise details and a higher similarity to real images when dealing with both type changes and phenology changes.

  4. Investigations of image fusion

    Science.gov (United States)

    Zhang, Zhong

    1999-12-01

    The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced, and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D
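    A minimal sketch of multiscale-decomposition-based fusion in the spirit described above, assuming a single-level Haar transform with an average rule for the approximation band and a max-absolute rule for the detail bands (the thesis's region-based algorithm is considerably more elaborate):

```python
import numpy as np

def haar2d(x):
    """One level of a 2-D Haar transform (input with even height/width)."""
    a, d = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2
    return ((a[:, 0::2] + a[:, 1::2]) / 2,   # LL: approximation band
            (a[:, 0::2] - a[:, 1::2]) / 2,   # LH detail
            (d[:, 0::2] + d[:, 1::2]) / 2,   # HL detail
            (d[:, 0::2] - d[:, 1::2]) / 2)   # HH detail

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img_a, img_b):
    """Fuse two registered images in the wavelet domain."""
    A, B = haar2d(img_a), haar2d(img_b)
    ll = (A[0] + B[0]) / 2                        # average approximations
    details = [np.where(np.abs(ca) >= np.abs(cb), ca, cb)  # max-abs rule
               for ca, cb in zip(A[1:], B[1:])]
    return ihaar2d(ll, *details)
```

Real systems decompose over several levels and use neighborhood- or region-based activity measures rather than per-coefficient magnitude.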

  5. Remote sensing image fusion

    CERN Document Server

    Alparone, Luciano; Baronti, Stefano; Garzelli, Andrea

    2015-01-01

    A synthesis of more than ten years of experience, Remote Sensing Image Fusion covers methods specifically designed for remote sensing imagery. The authors supply a comprehensive classification system and rigorous mathematical description of advanced and state-of-the-art methods for pansharpening of multispectral images, fusion of hyperspectral and panchromatic images, and fusion of data from heterogeneous sensors such as optical and synthetic aperture radar (SAR) images and integration of thermal and visible/near-infrared images. They also explore new trends of signal/image processing, such as…

  6. Radar image and data fusion for natural hazards characterisation

    Science.gov (United States)

    Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Jixian; Zhang, Yonghong

    2010-01-01

    Fusion of synthetic aperture radar (SAR) images through interferometric, polarimetric and tomographic processing provides an all-weather imaging capability to characterise and monitor various natural hazards. This article outlines interferometric synthetic aperture radar (InSAR) processing and products and their utility for natural hazards characterisation, provides an overview of the techniques and applications related to fusion of SAR/InSAR images with optical and other images, and highlights the emerging SAR fusion technologies. In addition to providing precise land-surface digital elevation maps, SAR-derived imaging products can map millimetre-scale elevation changes driven by volcanic, seismic and hydrogeologic processes, by landslides, wildfires and other natural hazards. With products derived from the fusion of SAR and other images, scientists can monitor the progress of flooding, estimate water storage changes in wetlands for improved hydrological modelling predictions and assessments of future flood impacts, and map vegetation structure on a global scale and monitor its changes due to such processes as fire, volcanic eruption and deforestation. With the availability of SAR images in near real-time from multiple satellites in the near future, the fusion of SAR images with other images and data is playing an increasingly important role in understanding and forecasting natural hazards.

  7. [Image fusion in medical radiology].

    Science.gov (United States)

    Burger, C

    1996-07-20

    Image fusion supports the correlation between images of two or more studies of the same organ. First, the effect of differing geometries during image acquisitions, such as a head tilt, is compensated for. As a consequence, congruent images can easily be obtained. Instead of merely putting them side by side in a static manner and burdening the radiologist with the whole correlation task, image fusion supports him with interactive visualization techniques. This is especially worthwhile for small lesions as they can be more precisely located. Image fusion is feasible today. Easy and robust techniques are readily available, and furthermore DICOM, a rapidly evolving data exchange standard, diminishes the once severe compatibility problems for image data originating from systems of different manufacturers. However, the current solutions for image fusion are not yet established enough for a high throughput of fusion studies. Thus, for the time being image fusion is most appropriately confined to clinical research studies.

  8. Fusion Imaging for Procedural Guidance.

    Science.gov (United States)

    Wiley, Brandon M; Eleid, Mackram F; Thaden, Jeremy J

    2018-05-01

    The field of percutaneous structural heart interventions has grown tremendously in recent years. This growth has fueled the development of new imaging protocols and technologies in parallel to help facilitate these minimally-invasive procedures. Fusion imaging is an exciting new technology that combines the strength of 2 imaging modalities and has the potential to improve procedural planning and the safety of many commonly performed transcatheter procedures. In this review we discuss the basic concepts of fusion imaging along with the relative strengths and weaknesses of static vs dynamic fusion imaging modalities. This review will focus primarily on echocardiographic-fluoroscopic fusion imaging and its application in commonly performed transcatheter structural heart procedures. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  9. Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images

    Science.gov (United States)

    Awumah, Anna; Mahanti, Prasun; Robinson, Mark

    2016-10-01

    Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm results in a high spatial quality product, while the Wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm when applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and John L. Van Genderen. "Review article: Multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Yun. "Understanding image fusion." Photogramm. Eng. Remote Sens. 70.6 (2004): 657-661. [3] Mahanti, Prasun, et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
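    The IHS-style component of such schemes can be sketched as follows. This is the generic fast-IHS formulation (inject the difference between the pan band and the MS intensity into every band), not the authors' hybrid IHS-Wavelet implementation; `ihs_pansharpen` is an illustrative helper name:

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Fast IHS-style pansharpening.
    ms:  (H, W, B) multispectral image upsampled to the pan grid
    pan: (H, W) panchromatic image
    Returns ms with the pan/intensity difference added to each band,
    so the fused image's intensity matches the pan image exactly."""
    intensity = ms.mean(axis=2)
    return ms + (pan - intensity)[..., None]
```

Because the spatial detail is injected equally into all bands, spectral distortion appears wherever the pan band is not well modeled by the band average; that is the weakness the wavelet half of a hybrid scheme is meant to offset.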

  10. Assessment of Spatiotemporal Fusion Algorithms for Planet and Worldview Images.

    Science.gov (United States)

    Kwan, Chiman; Zhu, Xiaolin; Gao, Feng; Chou, Bryan; Perez, Daniel; Li, Jiang; Shen, Yuzhong; Koperski, Krzysztof; Marchisio, Giovanni

    2018-03-31

    Although Worldview-2 (WV) images (non-pansharpened) have 2-m resolution, the re-visit times for the same areas may be seven days or more. In contrast, Planet images are collected using small satellites that can cover the whole Earth almost daily. However, the resolution of Planet images is 3.125 m. It would be ideal to fuse images from these two satellites to generate high spatial resolution (2 m) and high temporal resolution (1 or 2 days) images for applications such as damage assessment, border monitoring, etc., that require quick decisions. In this paper, we evaluate three approaches to fusing Worldview (WV) and Planet images. These approaches are known as the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), and have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and Worldview images demonstrated that the three aforementioned approaches have comparable performance and can all generate high-quality prediction images.
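    Of the three methods, HCM is the simplest to sketch: fit a linear mapping between the two sensors' bands on a date when both observed the scene, then apply it to later acquisitions. The toy below uses synthetic band vectors and an exact linear relation, so it only illustrates the fitting step, not the full spatiotemporal pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 1000
X = rng.normal(size=(n_pixels, 4))   # coarse-sensor band vectors, one per pixel
true_map = rng.normal(size=(4, 3))   # hidden linear relation (for the toy only)
Y = X @ true_map                     # fine-sensor bands on the paired date

# Fit the per-pixel linear color mapping by least squares ...
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
# ... then apply it to predict fine-sensor bands on another date
Y_pred = X @ coef
```

Real HCM variants fit the mapping locally (per tile or per cluster) and add a bias term; a single global matrix is the crudest version.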

  11. Prediction of the microsurgical window for skull-base tumors by advanced three-dimensional multi-fusion volumetric imaging

    International Nuclear Information System (INIS)

    Oishi, Makoto; Fukuda, Masafumi; Saito, Akihiko; Hiraishi, Tetsuya; Fujii, Yukihiko; Ishida, Go

    2011-01-01

    The surgery of skull base tumors (SBTs) is difficult due to the complex and narrow surgical window, which is restricted by the cranium and important structures. The utility of three-dimensional multi-fusion volumetric imaging (3-D MFVI) for visualizing the predicted window for SBTs was evaluated. Presurgical simulation using 3-D MFVI was performed in 32 patients with SBTs. Imaging data were collected from computed tomography, magnetic resonance imaging, and digital subtraction angiography. Skull data were processed to imitate the actual bone resection and integrated with various structures extracted from the appropriate imaging modalities by image-analysis software. The simulated views were compared with the views obtained during surgery. All craniotomies and bone resections, except opening of the acoustic canal in 2 patients, were performed as simulated. The simulated window allowed observation of the expected microsurgical anatomies, including tumors, vasculature, and cranial nerves, through the predicted operative window. The planned tumor removal could not be achieved in only 3 patients. 3-D MFVI afforded high-quality images of the relevant microsurgical anatomies during the surgery of SBTs. The intraoperative deja-vu effect of the simulation increased the confidence of the surgeon in the planned surgical procedures. (author)

  12. Quantitative image fusion in infrared radiometry

    Science.gov (United States)

    Romm, Iliya; Cukurel, Beni

    2018-05-01

    Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
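    The per-pixel estimation underlying such fusion can be illustrated with a toy linear exposure model, frame_i ≈ q · t_i, solved by weighted least squares. The model, the uniform weights, and the noiseless data below are simplifying assumptions, not the paper's full 'fuse-then-subtract' pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
q_true = rng.uniform(0.5, 2.0, size=(8, 8))   # per-pixel photoquantity
t = np.array([0.5, 1.0, 2.0, 4.0])            # exposure times of the frames
# Simulated dark-subtracted frames: signal = q * t (noise omitted here)
frames = q_true[None] * t[:, None, None]
w = np.ones_like(frames)                      # per-sample weights (uniform toy)

# Per-pixel weighted least squares for q in the model frame_i ≈ q * t_i:
#   q = sum_i(w_i * frame_i * t_i) / sum_i(w_i * t_i^2)
num = (w * frames * t[:, None, None]).sum(axis=0)
den = (w * t[:, None, None] ** 2).sum(axis=0)
q_est = num / den
```

In practice the weights would downweight saturated or noise-dominated samples, and the bias (dark) frame would be handled as the paper describes, either before or after fusion.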

  13. Fusion of spectra and texture data of hyperspectral imaging for the prediction of the water-holding capacity of fresh chicken breast filets

    Science.gov (United States)

    This study investigated the fusion of spectra and texture data of hyperspectral imaging (HSI, 1000–2500 nm) for predicting the water-holding capacity (WHC) of intact, fresh chicken breast filets. Three physical and chemical indicators (drip loss, expressible fluid, and salt-induced water gain) were me...

  14. Multifocus Image Fusion in Q-Shift DTCWT Domain Using Various Fusion Rules

    Directory of Open Access Journals (Sweden)

    Yingzhong Tian

    2016-01-01

    Full Text Available Multifocus image fusion is a process that integrates a partially focused image sequence into a fused image which is focused everywhere; multiple methods have been proposed in the past decades. The Dual Tree Complex Wavelet Transform (DTCWT) is one of the most precise, eliminating two main defects caused by the Discrete Wavelet Transform (DWT). Q-shift DTCWT was proposed afterwards to simplify the construction of filters in DTCWT, producing better fusion effects. A different image fusion strategy based on Q-shift DTCWT is presented in this work. According to the strategy, each image is first decomposed into low- and high-frequency coefficients, which are fused using different rules; various fusion rules are then innovatively combined in Q-shift DTCWT, such as the Neighborhood Variant Maximum Selectivity (NVMS) and the Sum-Modified-Laplacian (SML). Finally, the fused coefficients can be well extracted from the source images and reconstructed to produce one fully focused image. This strategy is verified visually and quantitatively against several existing fusion methods over a large number of experiments and yields good results both on standard images and on microscopic images. Hence, we can draw the conclusion that the NVMS rule performs better than the others under Q-shift DTCWT.
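    The SML rule mentioned above can be sketched in the pixel domain (the paper applies it to Q-shift DTCWT coefficients, which is more selective); `sml` and `fuse_multifocus` are illustrative helper names:

```python
import numpy as np

def sml(img):
    """Sum-Modified-Laplacian focus measure (step 1, edge-padded):
    |2*I - I_up - I_down| + |2*I - I_left - I_right| per pixel."""
    p = np.pad(img, 1, mode="edge")
    return (np.abs(2 * img - p[:-2, 1:-1] - p[2:, 1:-1]) +
            np.abs(2 * img - p[1:-1, :-2] - p[1:-1, 2:]))

def fuse_multifocus(a, b):
    """Pick, per pixel, the source image with the larger focus measure."""
    return np.where(sml(a) >= sml(b), a, b)
```

Per-pixel selection like this produces visible seams at focus boundaries; window-summed SML and consistency checks are the usual refinements.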

  15. Color Multifocus Image Fusion Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    S. Savić

    2013-11-01

    Full Text Available In this paper, a recently proposed grayscale multifocus image fusion method based on the first level of Empirical Mode Decomposition (EMD) is extended to color images. In addition, this paper deals with low-contrast multifocus image fusion. The major advantages of the proposed methods are simplicity, absence of artifacts, and control of contrast, which is not the case with other pyramidal multifocus fusion methods. The efficiency of the proposed method is tested subjectively and with a vector-gradient-based objective measure proposed in this paper for multifocus color image fusion. Subjective analysis performed on a multifocus image dataset has shown its superiority to the existing EMD- and DWT-based methods. The objective measures of grayscale and color image fusion show significantly better scores for this method than for the classic complex EMD fusion method.

  16. Image fusion for dynamic contrast enhanced magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Leach Martin O

    2004-10-01

    Full Text Available Abstract Background Multivariate imaging techniques such as dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI have been shown to provide valuable information for medical diagnosis. Even though these techniques provide new information, integrating and evaluating the much wider range of information is a challenging task for the human observer. This task may be assisted with the use of image fusion algorithms. Methods In this paper, image fusion based on Kernel Principal Component Analysis (KPCA is proposed for the first time. It is demonstrated that a priori knowledge about the data domain can be easily incorporated into the parametrisation of the KPCA, leading to task-oriented visualisations of the multivariate data. The results of the fusion process are compared with those of the well-known and established standard linear Principal Component Analysis (PCA by means of temporal sequences of 3D MRI volumes from six patients who took part in a breast cancer screening study. Results The PCA and KPCA algorithms are able to integrate information from a sequence of MRI volumes into informative gray value or colour images. By incorporating a priori knowledge, the fusion process can be automated and optimised in order to visualise suspicious lesions with high contrast to normal tissue. Conclusion Our machine learning based image fusion approach maps the full signal space of a temporal DCE-MRI sequence to a single meaningful visualisation with good tissue/lesion contrast and thus supports the radiologist during manual image evaluation.
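    The core mapping, projecting each voxel's temporal signal onto kernel principal components to yield one fused image, can be sketched with synthetic data; the kernel choice and `gamma` below are arbitrary assumptions, and real use would encode the a priori knowledge the authors describe in the kernel parametrisation:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
H, W, T = 16, 16, 5                  # tiny synthetic DCE-MRI-like series
series = rng.normal(size=(H, W, T))
X = series.reshape(-1, T)            # one T-dimensional signal per voxel

# Project every voxel's temporal signal onto the first kernel principal
# component, giving a single fused gray-value image of the sequence.
kpca = KernelPCA(n_components=1, kernel="rbf", gamma=0.5)
fused = kpca.fit_transform(X).reshape(H, W)
```

Replacing `KernelPCA` with `sklearn.decomposition.PCA` gives the linear baseline the paper compares against.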

  17. Image fusion tool: Validation by phantom measurements

    International Nuclear Information System (INIS)

    Zander, A.; Geworski, L.; Richter, M.; Ivancevic, V.; Munz, D.L.; Muehler, M.; Ditt, H.

    2002-01-01

    Aim: Validation of a new image fusion tool with regard to handling, application in a clinical environment and fusion precision under different acquisition and registration settings. Methods: The image fusion tool investigated allows fusion of imaging modalities such as PET, CT, MRI. In order to investigate fusion precision, PET and MRI measurements were performed using a cylinder and a body contour-shaped phantom. The cylinder phantom (diameter and length 20 cm each) contained spheres (10 to 40 mm in diameter) which represented 'cold' or 'hot' lesions in PET measurements. The body contour-shaped phantom was equipped with a heart model containing two 'cold' lesions. Measurements were done with and without four external markers placed on the phantoms. The markers were made of plexiglass (2 cm diameter and 1 cm thickness) and contained a Ga-Ge-68 core for PET and Vitamin E for MRI measurements. Comparison of fusion results with and without markers was done visually and by computer assistance. This algorithm was applied to the different fusion parameters and phantoms. Results: Image fusion of PET and MRI data without external markers yielded a measured error of 0 resulting in a shift at the matrix border of 1.5 mm. Conclusion: The image fusion tool investigated allows a precise fusion of PET and MRI data with a translation error acceptable for clinical use. The error is further minimized by using external markers, especially in the case of missing anatomical orientation. Using PET the registration error depends almost only on the low resolution of the data

  18. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to get the fused image. The simulations demonstrate that by using the combined sparsifying transforms better results can be achieved in terms of both the subjective visual effect and the objective evaluation indexes than using only a single sparsifying transform for compressive image fusion.

  19. Multi-sensor image fusion and its applications

    CERN Document Server

    Blum, Rick S

    2005-01-01

    Taking another lesson from nature, the latest advances in image processing technology seek to combine image data from several diverse types of sensors in order to obtain a more accurate view of the scene: very much the same as we rely on our five senses. Multi-Sensor Image Fusion and Its Applications is the first text dedicated to the theory and practice of the registration and fusion of image data, covering such approaches as statistical methods, color-related techniques, model-based methods, and visual information display strategies. After a review of state-of-the-art image fusion techniques…

  20. Joint Multi-Focus Fusion and Bayer Image Restoration

    Institute of Scientific and Technical Information of China (English)

    Ling Guo; Bin Yang; Chao Yang

    2015-01-01

    In this paper, a joint multifocus image fusion and Bayer pattern image restoration algorithm for the raw images of single-sensor color imaging devices is proposed. Different from traditional fusion schemes, the raw Bayer pattern images are fused before color restoration, so the Bayer image restoration operation is performed only once; the proposed algorithm is therefore more efficient than traditional fusion schemes. In detail, a clarity measurement is defined for raw Bayer pattern images, and the fusion operator works on superpixels, which provide powerful grouping cues for local image features. The raw images are merged with a refined weight map to get the fused Bayer pattern image, which is then restored by the demosaicing algorithm to get the full-resolution color image. Experimental results demonstrate that the proposed algorithm obtains better fused results, with a more natural appearance and fewer artifacts, than the traditional algorithms.

  1. Sensor Data Fusion for Accurate Cloud Presence Prediction Using Dempster-Shafer Evidence Theory

    Directory of Open Access Journals (Sweden)

    Jesse S. Jin

    2010-10-01

    Full Text Available Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach in a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors, with different radiation data used for the cloud prediction. Potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data recorded as the benchmark. Our experiments indicate that, compared with approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent.
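    Dempster's rule of combination, the heart of the approach, can be sketched for a two-sensor cloud/clear example; the mass values are invented for illustration:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the frame of discernment."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:                       # compatible evidence
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:                           # conflicting evidence
                conflict += ma * mb
    # Normalize by the non-conflicting mass (1 - K)
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

CLOUD, CLEAR = frozenset({"cloud"}), frozenset({"clear"})
EITHER = CLOUD | CLEAR                      # total ignorance
m1 = {CLOUD: 0.6, EITHER: 0.4}              # evidence from sensor 1
m2 = {CLOUD: 0.7, EITHER: 0.3}              # evidence from sensor 2
m = dempster_combine(m1, m2)                # belief in "cloud" rises to 0.88
```

Note how two weak, agreeing sources reinforce each other while the mass assigned to ignorance shrinks, which is exactly the mechanism that lets fusion outperform any individual radiation sensor.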

  2. Multimodality Image Fusion and Planning and Dose Delivery for Radiation Therapy

    International Nuclear Information System (INIS)

    Saw, Cheng B.; Chen Hungcheng; Beatty, Ron E.; Wagner, Henry

    2008-01-01

    Image-guided radiation therapy (IGRT) relies on the quality of fused images to yield accurate and reproducible patient setup prior to dose delivery. The registration of 2 image datasets can be characterized as hardware-based or software-based image fusion. Hardware-based image fusion is performed by hybrid scanners that combine 2 distinct medical imaging modalities, such as positron emission tomography (PET) and computed tomography (CT), into a single device. In hybrid scanners, the patient maintains the same position during both studies, making the fusion of image datasets simple. However, they cannot perform temporal image registration, where image datasets are acquired at different times. On the other hand, software-based image fusion can merge image datasets taken at different times or with different medical imaging modalities. Software-based image fusion can be performed either manually, using landmarks, or automatically. In the automatic image fusion method, the best fit is evaluated using the mutual information coefficient. Manual image fusion is typically performed at dose planning and for patient setup prior to dose delivery for IGRT. The fusion of orthogonal live radiographic images taken prior to dose delivery to digitally reconstructed radiographs will be presented. Although manual image fusion has been routinely used, the use of fiducial markers has shortened the fusion time. Automated image fusion should be possible for IGRT because the image datasets are derived from basically the same imaging modality, further shortening the fusion time. The advantages and limitations of both hardware-based and software-based image fusion methodologies are discussed.
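    The mutual information coefficient used to score the fit in automatic fusion can be sketched with a simple joint-histogram estimator; the bin count is an arbitrary choice, and clinical registration codes use more careful (e.g. normalized, interpolated) variants:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images estimated from their joint
    gray-level histogram; this is the similarity score an automatic
    registration maximizes over candidate alignments."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0) # marginals
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())
```

A perfectly aligned pair gives a high score (the joint histogram is concentrated), while misaligned or independent images spread the joint histogram out and lower the score.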

  3. Clinical assessment of SPECT/CT co-registration image fusion

    International Nuclear Information System (INIS)

    Zhou Wen; Luan Zhaosheng; Peng Yong

    2004-01-01

    Objective: To study the methodology of SPECT/CT co-registration image fusion and to assess its clinical application value. Method: 172 patients (119 men, 53 women) who underwent SPECT/CT image fusion during 2001-2003 were studied: 51 patients underwent 18F-FDG imaging + CT, 26 patients 99mTc-RBC liver blood pool imaging + CT, 43 patients 99mTc-MDP bone imaging + CT, and 18 patients 99mTc-MAA lung perfusion imaging + CT. The machine was a GE Millennium VG SPECT. All patients went through a three-step imaging protocol, X-ray survey, X-ray transmission, and nuclear emission imaging (including planar imaging and SPECT or 18F-FDG imaging on the dual-head camera), without changing position. The emission images were reconstructed with the X-ray attenuation map, using COSEM for 18F-FDG and OSEM for 99mTc, and then combined with the transmission images; different processing parameters were used for the different imaging methods. The accuracy of SPECT/CT image fusion was computed statistically and compared with that of the single nuclear emission images. Results: The nuclear images reconstructed with X-ray attenuation correction and OSEM were apparently better than before reconstruction: the post-reconstruction emission images had no scatter artifacts around the organs, the outlines between different tissues were clearer than before, and the validity of all post-reconstruction images was improved. SPECT/CT image fusion thus gives localization a sound basis. In 138 patients, the accuracy of SPECT/CT image fusion was 91.3% (126/138), whereas 60 (88.2%) were found through SPECT/CT image fusion; the difference between them was significant (P… In 99mTc-RBC-SPECT + CT image fusion, … but 21 of them were inspected by emission imaging. In 99mTc-MDP bone SPECT + CT image fusion, 4 patients' resected bone sites (1-6 months after surgery) and their junctions with normal bone showed activity; their morphology and density on CT differed from normal bone. 11 of 20 patients who could…

  4. Model-based satellite image fusion

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Sveinsson, J. R.; Nielsen, Allan Aasbjerg

    2008-01-01

A method is proposed for pixel-level satellite image fusion derived directly from a model of the imaging sensor. By design, the proposed method is spectrally consistent. It is argued that the proposed method needs regularization, as is the case for any method for this problem. A framework for pixel neighborhood regularization is presented. This framework enables the formulation of the regularization in a way that corresponds well with our prior assumptions about the image data. The proposed method is validated and compared with other approaches on several data sets. Lastly, the intensity-hue-saturation method is revisited in order to gain additional insight into what implications spectral consistency has for an image fusion method.

  5. Fusion of multiscale wavelet-based fractal analysis on retina image for stroke prediction.

    Science.gov (United States)

    Che Azemin, M Z; Kumar, Dinesh K; Wong, T Y; Wang, J J; Kawasaki, R; Mitchell, P; Arjunan, Sridhar P

    2010-01-01

    In this paper, we present a novel method of analyzing retinal vasculature using the Fourier fractal dimension to extract the complexity of the retinal vasculature enhanced at different wavelet scales. Logistic regression was used as a fusion method to model the classifier for 5-year stroke prediction. The efficacy of this technique has been tested using standard pattern recognition performance evaluation, receiver operating characteristic (ROC) analysis, and a medical prediction statistic, the odds ratio. A stroke prediction model was developed using the proposed system.
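As a rough sketch of the fusion step described above, a fitted logistic-regression model reduces the per-scale features to a single risk score. The feature values and coefficients below are hypothetical, not the fitted values from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse_features(features, weights, bias):
    """Fuse per-scale features into a single 5-year stroke risk score
    with a logistic-regression model (coefficients assumed pre-fitted)."""
    return sigmoid(np.dot(features, weights) + bias)

# Hypothetical Fourier fractal dimensions at four wavelet scales,
# with illustrative (not fitted) coefficients.
x = np.array([1.42, 1.55, 1.61, 1.48])
w = np.array([0.8, -0.3, 1.1, 0.2])
risk = fuse_features(x, w, bias=-2.5)
```

The score can then be thresholded or fed into ROC analysis as in the paper's evaluation.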

  6. Assessment of fusion operators for medical imaging: application to MR images fusion

    International Nuclear Information System (INIS)

    Barra, V.; Boire, J.Y.

    2000-01-01

    We propose in this article to assess the results provided by several fusion operators in the case of T1- and T2-weighted magnetic resonance image fusion of the brain. This assessment combines an expert visual inspection of the results with a numerical analysis using comparison measures found in the literature. The aim of this assessment is to find the 'best' operator for a given clinical study. The method is applied here to the quantification of brain tissue volumes on a brain phantom, and allows a fusion operator to be selected in any clinical study where several sources of information are available. (authors)

  7. Visible and NIR image fusion using weight-map-guided Laplacian ...

    Indian Academy of Sciences (India)

    Ashish V Vanmali

    fusion perspective, instead of the conventional haze imaging model. The proposed ... Keywords: image dehazing; Laplacian–Gaussian pyramid; multi-resolution fusion; visible–NIR image fusion; weight map. ... Tan's [8] work is based on two assumptions: first, images ... responding colour image, since NIR can penetrate through.

  8. An FPGA-based heterogeneous image fusion system design method

    Science.gov (United States)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection, and minimum selection are analyzed and compared. VHDL and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that preferable image quality of heterogeneous image fusion can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
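The three tailor-made fusion rules compared above (gray-scale weighted averaging, maximum selection, minimum selection) are simple pixel-wise operations; a minimal NumPy sketch with made-up pixel values:

```python
import numpy as np

def fuse(vis, ir, rule="avg", w=0.5):
    """Pixel-level fusion of co-registered visible and IR frames using the
    three rules compared in the paper: weighted average, maximum selection,
    minimum selection."""
    if rule == "avg":
        return w * vis + (1.0 - w) * ir
    if rule == "max":
        return np.maximum(vis, ir)
    if rule == "min":
        return np.minimum(vis, ir)
    raise ValueError(f"unknown rule: {rule}")

# Toy 2x2 co-registered frames (illustrative values only).
vis = np.array([[10.0, 200.0], [90.0, 40.0]])
ir = np.array([[30.0, 100.0], [50.0, 220.0]])
fused_avg = fuse(vis, ir, "avg")
```

On an FPGA these rules map naturally to per-pixel combinational logic, which is why they suit a streaming RTL implementation.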

  9. Image fusion via nonlocal sparse K-SVD dictionary learning.

    Science.gov (United States)

    Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang

    2016-03-01

    Image fusion aims to merge two or more images of the same scene, captured by various sensors, to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, an approach for image fusion based on a novel dictionary learning scheme is proposed in this paper. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K-times singular value decomposition commonly used in the literature), abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is then applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images resulting from the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
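A toy illustration of fusing in the coefficient domain with a max-L1-style selection rule. For brevity, a random dictionary and minimum-norm least-squares coding stand in for the learned NL_SK_SVD dictionary and SOMP, so this sketches only the selection rule, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))       # stand-in for a learned dictionary
D /= np.linalg.norm(D, axis=0)

def code(patch):
    # Minimum-norm least-squares coding stands in for SOMP here, for brevity.
    return np.linalg.lstsq(D, patch, rcond=None)[0]

def fuse_patches(p1, p2):
    """Fuse two co-registered patches in the coefficient domain: keep the
    coefficient vector with the larger L1 norm (max-L1 rule), then
    reconstruct through the dictionary."""
    a1, a2 = code(p1), code(p2)
    a = a1 if np.abs(a1).sum() >= np.abs(a2).sum() else a2
    return D @ a

p1 = 5.0 * rng.standard_normal(16)      # strong (informative) patch
p2 = 0.01 * p1                          # weak version of the same patch
fused = fuse_patches(p1, p2)
```

Because coding is linear here, the weak patch's coefficients have a strictly smaller L1 norm, so the fused patch reconstructs the informative one.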

  10. Image fusion in x-ray differential phase-contrast imaging

    Science.gov (United States)

    Haas, W.; Polyanskaya, M.; Bayer, F.; Gödel, K.; Hofmann, H.; Rieger, J.; Ritter, A.; Weber, T.; Wucherer, L.; Durst, J.; Michel, T.; Anton, G.; Hornegger, J.

    2012-02-01

    Phase-contrast imaging is a novel modality in the field of medical X-ray imaging. The pioneering method is grating-based interferometry, which places no special requirements on the X-ray source or object size. Furthermore, it provides three different types of information about an investigated object simultaneously: absorption, differential phase-contrast and dark-field images. Differential phase-contrast and dark-field images represent completely new information that has not yet been investigated and studied in the context of medical imaging. In order to introduce phase-contrast imaging as a new modality into the medical environment, the resulting information about the object has to be correctly interpreted. Since the three output images reflect different properties of the same object, the main challenge is to combine and visualize these data in a way that diminishes the information explosion and reduces the complexity of interpretation. This paper presents an intuitive image fusion approach for working with grating-based phase-contrast images. It combines information from the three different images and provides a single image. The approach is implemented in a fusion framework intended to support physicians in study and analysis. The framework provides the user with an intuitive graphical user interface for controlling the fusion process. The example given in this work shows the functionality of the proposed method and the great potential of phase-contrast imaging in medical practice.

  11. Multiscale infrared and visible image fusion using gradient domain guided image filtering

    Science.gov (United States)

    Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia

    2018-03-01

    For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition of the source images with guided image filtering and gradient domain guided image filtering is first applied, before the weight maps at each scale are obtained using saliency detection and filtering, with three different fusion rules at different scales. The three fusion rules apply to the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Visual effects can therefore be improved by using the proposed HMSD-GDGF method.
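A stripped-down sketch of the base/detail idea behind such multiscale fusion. A crude box filter stands in for the (gradient domain) guided filter, and only two scales with two simple rules are used, so this illustrates the decompose-fuse-recombine pattern rather than HMSD-GDGF itself:

```python
import numpy as np

def box_blur(img, r=1):
    """Crude box filter standing in for the (gradient domain) guided
    filter, just to produce a base/detail split."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def two_scale_fuse(a, b):
    """Two-scale fusion: average the base (smoothed) level, and pick the
    larger-magnitude detail coefficient (a simple saliency rule)."""
    base_a, base_b = box_blur(a), box_blur(b)
    det_a, det_b = a - base_a, b - base_b
    base = 0.5 * (base_a + base_b)
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return base + detail

img = np.arange(36, dtype=float).reshape(6, 6)
```

Fusing an image with itself recovers it exactly, a quick sanity check on the decomposition.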

  12. A framework of region-based dynamic image fusion

    Institute of Scientific and Technical Information of China (English)

    WANG Zhong-hua; QIN Zheng; LIU Yu

    2007-01-01

    A new framework of region-based dynamic image fusion is proposed. First, target detection is applied to the dynamic images (image sequences) to segment them into target and background regions. Different fusion rules are then employed in the different regions so that the target information is preserved as much as possible. In addition, a steerable non-separable wavelet frame transform is used in the multi-resolution analysis, so the system achieves the favorable properties of orientation selectivity and shift invariance. Compared with other image fusion methods, experimental results showed that the proposed method has better target recognition capability and preserves clear background information.
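The core idea of applying different rules per region can be sketched in a few lines; the mask and the per-region rules below are illustrative assumptions, not the ones from the paper:

```python
import numpy as np

def region_fuse(vis, ir, target_mask):
    """Region-based fusion: inside detected target regions keep the IR
    pixels (to preserve the target signature); elsewhere average the two
    sources as the background rule. Mask and rules are illustrative."""
    return np.where(target_mask, ir, 0.5 * (vis + ir))

# Toy frames and a hypothetical detected target region.
vis = np.full((4, 4), 100.0)
ir = np.full((4, 4), 20.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
out = region_fuse(vis, ir, mask)
```

In the paper the mask comes from target detection on the image sequence and the rules operate on wavelet coefficients rather than raw pixels.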

  13. Anato-metabolic fusion of PET, CT and MRI images

    International Nuclear Information System (INIS)

    Przetak, C.; Baum, R.P.; Niesen, A.; Slomka, P.; Proeschild, A.; Leonhardi, J.

    2000-01-01

    The fusion of cross-sectional images - especially in oncology - appears to be a very helpful tool to improve the diagnostic and therapeutic accuracy. Though many advantages exist, image fusion is applied routinely only in a few hospitals. To introduce image fusion as a common procedure, technical and logistical conditions have to be fulfilled which are related to long term archiving of digital data, data transfer and improvement of the available software in terms of usefulness and documentation. The accuracy of coregistration and the quality of image fusion has to be validated by further controlled studies. (orig.) [de

  14. Fusion of colour and monochromatic images with edge emphasis

    Directory of Open Access Journals (Sweden)

    Rade M. Pavlović

    2014-02-01

    Full Text Available We propose a novel method to fuse true colour images with monochromatic non-visible-range images that seeks to encode important structural information from the monochromatic images efficiently while preserving the natural appearance of the available true chromacity information. We utilise the β colour-opponency channel of the lαβ colour space as the domain in which to fuse information from the monochromatic input into the colour input by way of robust grayscale fusion. This is followed by an effective gradient structure visualisation step that enhances the visibility of monochromatic information in the final colour-fused image. Images fused using this method preserve their natural appearance and chromacity better than conventional methods while at the same time clearly encoding structural information from the monochromatic input. This is demonstrated on a number of well-known true colour fusion examples and confirmed by the results of subjective trials on the data from several colour fusion scenarios. Introduction: The goal of image fusion can be broadly defined as the representation of the visual information contained in a number of input images in a single fused image without distortion or loss of information. In practice, however, a representation of all available information from multiple inputs in a single image is almost impossible, and fusion is generally a data reduction task. One of the sensors usually provides a true colour image that by definition has all of its data dimensions already populated by the spatial and chromatic information. Fusing such images with information from monochromatic inputs in a conventional manner can severely affect the natural appearance of the fused image. This is a difficult problem and partly the reason why colour fusion has received only a fraction of the attention given to the better-behaved grayscale fusion, even long after colour sensors became widespread. Fusion method: Humans tend to see colours as contrasts between opponent

  15. The optimal algorithm for Multi-source RS image fusion.

    Science.gov (United States)

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to solve the issue that fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), integrating the merits of genetic algorithms with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. The algorithm then designs the objective function as a weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows. • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion. • This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules. • This text proposes the model operator and the observation operator as the fusion scheme for RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
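The objective as a weighted sum of evaluation indices can be sketched as follows; the particular indices (entropy and standard deviation) and the equal weights are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def objective(fused, weights=(0.5, 0.5)):
    """Weighted sum of evaluation indices (here: entropy and standard
    deviation) of the kind the GSDA search optimizes; the choice of
    indices and weights is illustrative."""
    hist, _ = np.histogram(fused, bins=256, range=(0.0, 256.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return weights[0] * entropy + weights[1] * fused.std()

flat = np.full((16, 16), 128.0)          # uninformative candidate
rng = np.random.default_rng(1)
textured = rng.uniform(0.0, 256.0, size=(16, 16))  # informative candidate
```

A search procedure such as GSDA would then prefer candidate fusions with the higher objective value.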

  16. Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation

    Science.gov (United States)

    Song, Huihui

    Remote sensing provides good measurements for monitoring and further analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements for remote sensing data in a vast number of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data while retaining other good properties, one cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking the case of Landsat ETM+ (with a spatial resolution of 30 m and temporal resolution of 16 days) and MODIS (with a spatial resolution of 250 m ~ 1 km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image with the daily temporal resolution of the MODIS image. Motivated by the fact that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from the MODIS counterpart on the prediction date. To learn the spatial details well from the prior images, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat

  17. Spatiotemporal Fusion of Remote Sensing Images with Structural Sparsity and Semi-Coupled Dictionary Learning

    Directory of Open Access Journals (Sweden)

    Jingbo Wei

    2016-12-01

    Full Text Available Fusion of remote sensing images with different spatial and temporal resolutions is highly needed by diverse earth observation applications. A small number of spatiotemporal fusion methods using sparse representation appear to be more promising than traditional linear mixture methods in reflecting abruptly changing terrestrial content. However, one of the main difficulties is that the results of sparse representation have reduced expressional accuracy, due in part to insufficient prior knowledge. For remote sensing images, the cluster and joint structural sparsity of the sparse coefficients can be employed as a priori knowledge. In this paper, a new optimization model is constructed with semi-coupled dictionary learning and structural sparsity to predict the unknown high-resolution image from known images. Specifically, intra-block correlation and cluster-structured sparsity are considered for single-channel reconstruction, and the inter-band similarity of joint-structured sparsity is considered for multichannel reconstruction; both are implemented with block sparse Bayesian learning. The detailed iterative optimization steps are given. In the experimental procedure, the red, green, and near-infrared bands of the Landsat-7 and Moderate Resolution Imaging Spectrometer (MODIS) satellites are fused, with root mean square errors used to check the prediction accuracy. It can be concluded from the experiments that the proposed methods can produce higher quality than state-of-the-art methods.
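The root-mean-square-error check mentioned above is straightforward; a minimal sketch with a toy band:

```python
import numpy as np

def rmse(pred, ref):
    """Per-band root mean square error, as used to check the accuracy of
    the predicted high-resolution image against the reference."""
    pred = np.asarray(pred, dtype=float)
    ref = np.asarray(ref, dtype=float)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

# Toy reference band and a prediction with a constant 3-unit error.
ref_band = np.array([[10.0, 20.0], [30.0, 40.0]])
pred_band = ref_band + 3.0
```

In practice this is computed per band (red, green, near-infrared) between the predicted and the actually observed Landsat image.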

  18. Remote sensing image fusion in the context of Digital Earth

    International Nuclear Information System (INIS)

    Pohl, C

    2014-01-01

    The increase in the number of operational Earth observation satellites gives remote sensing image fusion a new boost. As a powerful tool to integrate images from different sensors, it enables multi-scale, multi-temporal and multi-source information extraction. Image fusion aims at providing results that cannot be obtained from a single data source alone. Instead it enables feature and information mining of higher reliability and availability. The process required to prepare remote sensing images for image fusion comprises most of the steps necessary to feed the database of Digital Earth. The virtual representation of the planet uses data and information that is referenced and corrected to suit interpretation and decision-making. The same pre-requisite is valid for image fusion, the outcome of which can flow directly into a geographical information system. The assessment and description of the quality of the results remains critical. Depending on the application and the information to be extracted from multi-source images, different approaches are necessary. This paper describes the process of image fusion based on a fusion and classification experiment, explains the necessary quality measures involved, and shows with this example which criteria have to be considered if the results of image fusion are to be used in Digital Earth.

  19. Biometric image enhancement using decision rule based image fusion techniques

    Science.gov (United States)

    Sagayee, G. Mary Amirtha; Arumugam, S.

    2010-02-01

    Introducing biometrics into information systems may result in considerable benefits. Most researchers have confirmed that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing a proper sensor is a challenge. The proposed work deals with how image quality can be improved by introducing an image fusion technique at the sensor level. The images produced by the decision-rule-based image fusion technique are evaluated and analyzed in terms of their entropy levels and root mean square error.

  20. Extended depth of field integral imaging using multi-focus fusion

    Science.gov (United States)

    Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua

    2018-03-01

    In this paper, we propose a new method for depth-of-field extension in integral imaging by applying an image fusion method to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to take multi-focus elemental images by sweeping the focal plane across the scene. Simply applying an image fusion method to the elemental images, which hold rich parallax information, does not work effectively, because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. The all-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by the proposed generalization method combined with image fusion on multi-focus elemental images.
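A minimal sketch of block-based multi-focus fusion, using block variance as the focus measure (the specific measure and block size are assumptions for illustration):

```python
import numpy as np

def block_fuse(imgs, block=4):
    """Block-based multi-focus fusion: for each block position, keep the
    block from whichever source image has the highest local variance
    (variance as a simple focus measure)."""
    h, w = imgs[0].shape
    out = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            candidates = [im[y:y + block, x:x + block] for im in imgs]
            out[y:y + block, x:x + block] = max(candidates, key=np.var)
    return out

# Simulate two captures: each is sharp in one half, flat ("defocused")
# in the other half of a toy 8x8 scene.
rng = np.random.default_rng(2)
sharp = rng.uniform(0.0, 255.0, size=(8, 8))
left_focus = sharp.copy()
left_focus[:, 4:] = left_focus[:, 4:].mean()
right_focus = sharp.copy()
right_focus[:, :4] = right_focus[:, :4].mean()
fused = block_fuse([left_focus, right_focus])
```

With the halves defocused to flat regions, each block is taken from the capture where it is sharp, so the fused result recovers the full scene.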

  1. An Integrated Dictionary-Learning Entropy-Based Medical Image Fusion Framework

    Directory of Open Access Journals (Sweden)

    Guanqiu Qi

    2017-10-01

    Full Text Available Image fusion is widely used in different areas and can integrate complementary and relevant information from source images captured by multiple sensors into a unitary synthetic image. Medical image fusion, an important image fusion application, can extract the details of multiple images from different imaging modalities and combine them into an image that contains complete, non-redundant information for increasing the accuracy of medical diagnosis and assessment. The quality of the fused image directly affects medical diagnosis and assessment. However, existing solutions have some drawbacks in contrast, sharpness, brightness, blur and details. This paper proposes an integrated dictionary-learning and entropy-based medical image-fusion framework that consists of three steps. First, the input image information is decomposed into low-frequency and high-frequency components by using a Gaussian filter. Second, low-frequency components are fused by a weighted-average algorithm and high-frequency components are fused by the dictionary-learning-based algorithm. In the dictionary-learning process for the high-frequency components, an entropy-based algorithm is used for informative block selection. Third, the fused low-frequency and high-frequency components are combined to obtain the final fusion result. The results and analyses of comparative experiments demonstrate that the proposed medical image fusion framework performs better than existing solutions.

  2. Fast single image dehazing based on image fusion

    Science.gov (United States)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often have faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts the assumption that the degradation level caused by haze is the same within each region, similar to the Retinex theory, and uses a simple Gaussian filter to obtain the coarse medium transmission. Then, pixel-level fusion is performed between the initial medium transmission and the coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and its complexity is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity than some state-of-the-art methods.
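The dark-channel-prior step for the initial medium transmission can be sketched as follows (the patch size, omega, and the toy airlight value are illustrative):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: per-pixel minimum over color channels, followed by a
    local minimum filter over a small patch."""
    mins = img.min(axis=2)
    r = patch // 2
    pad = np.pad(mins, r, mode="edge")
    out = np.empty_like(mins)
    h, w = mins.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = pad[y:y + patch, x:x + patch].min()
    return out

def initial_transmission(img, airlight, omega=0.95):
    """Initial medium transmission from the dark channel prior:
    t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / airlight)

# A uniformly "hazy" toy image whose color equals the airlight (A = 1):
hazy = np.full((4, 4, 3), 1.0)
t = initial_transmission(hazy, airlight=1.0)
```

Pixels that look like pure airlight get a near-zero transmission, while pixels with a dark channel of zero (haze-free) get a transmission of one.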

  3. Image fusion techniques in permanent seed implantation

    Directory of Open Access Journals (Sweden)

    Alfredo Polo

    2010-10-01

    Full Text Available Over the last twenty years, major software and hardware developments in brachytherapy treatment planning, intraoperative navigation and dose delivery have been made. Image-guided brachytherapy has emerged as the ultimate conformal radiation therapy, allowing precise dose deposition on small volumes under direct image visualization. In this process imaging plays a central role, and novel imaging techniques are being developed (PET, MRI-MRS and power Doppler US imaging are among them), creating a new paradigm (dose-guided brachytherapy), where imaging is used to map the exact coordinates of the tumour cells and to guide applicator insertion to the correct position. Each of these modalities has limitations in providing all of the physical and geometric information required for the brachytherapy workflow. Therefore, image fusion can be used as a solution in order to take full advantage of the information from each modality in treatment planning, intraoperative navigation, dose delivery, verification and follow-up of interstitial irradiation. Image fusion, understood as the visualization of any morphological volume (i.e. US, CT, MRI) together with an additional second morphological volume (i.e. CT, MRI) or functional dataset (functional MRI, SPECT, PET), is a well-known method for treatment planning, verification and follow-up of interstitial irradiation. The term image fusion is used when multiple patient image datasets are registered and overlaid or merged to provide additional information. Fused images may be created from multiple images from the same imaging modality taken at different moments (multi-temporal approach), or by combining information from multiple modalities. Quality means that the fused images should provide additional information to the brachytherapy process (diagnosis and staging, treatment planning, intraoperative imaging, treatment delivery and follow-up) that cannot be obtained in other ways. In this review I will focus on the role of

  4. Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.

    Science.gov (United States)

    Franchi, G; Angulo, J; Moreaud, M; Sorbier, L

    2018-01-01

    The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging backscattered electron images, which usually have a high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but a lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion with an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
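As an example of the classical pansharpening techniques the paper compares against, here is a sketch of the Brovey transform, the simplest ratio-based method (not necessarily the exact variant used in the paper):

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-9):
    """Classical Brovey pansharpening: scale each multispectral band by
    the ratio of the high-resolution pan band to the MS intensity.
    `ms` is assumed already upsampled to the pan grid (bands along the
    last axis); `eps` guards against division by zero."""
    intensity = ms.mean(axis=2)
    ratio = pan / (intensity + eps)
    return ms * ratio[..., None]

# Toy upsampled multispectral image and a pan band consistent with it.
rng = np.random.default_rng(3)
ms = rng.uniform(0.1, 1.0, size=(4, 4, 3))
pan = ms.mean(axis=2)
sharpened = brovey_pansharpen(ms, pan)
```

In the SEM setting, the high-resolution backscattered electron image would play the role of the pan band and the EDX bands the role of the multispectral image.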

  5. [Research progress of multi-model medical image fusion and recognition].

    Science.gov (United States)

    Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian

    2013-10-01

    Medical image fusion and recognition has a wide range of applications, such as lesion localization, cancer staging and treatment effect assessment. Multi-model medical image fusion and recognition are analyzed and summarized in this paper. Firstly, the problem of multi-model medical image fusion and recognition is introduced, and its advantages and key steps are discussed. Secondly, three fusion strategies are reviewed from the algorithmic point of view, and four fusion recognition structures are discussed. Thirdly, the difficulties, challenges and possible future research directions are discussed.

  6. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    Science.gov (United States)

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for the documentation and communication of specimen data, because the morphological information of a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image-fusion-based multilinear approach for the taxonomy of multi-focal image stacks. A deep-CNN-based image fusion technique is used to combine the relevant information of the multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. Besides, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep-CNN-based image fusion method within a multilinear framework to propose an image-fusion-based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear-based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential for building an automated nematode taxonomy system for nematologists and is effective in classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Pulmonary function-morphologic relationships assessed by SPECT-CT fusion images

    International Nuclear Information System (INIS)

    Suga, Kazuyoshi

    2012-01-01

    Pulmonary single photon emission computed tomography-computed tomography (SPECT-CT) fusion images provide an objective and comprehensive assessment of pulmonary function-morphology relationships in cross-sectional lungs. This article reviews the noteworthy findings of lung pathophysiology in wide-spectrum lung disorders that have been revealed on SPECT-CT fusion images over 8 years of experience. The fusion images confirmed the fundamental pathophysiologic appearance of low lung CT attenuation caused by airway-obstruction-induced hypoxic vasoconstriction and that caused by direct pulmonary arterial obstruction, as in acute pulmonary thromboembolism (PTE). The fusion images showed a better correlation of lung perfusion distribution with lung CT attenuation changes in lung mosaic CT attenuation (MCA) than regional ventilation did in the wide-spectrum lung disorders, indicating that heterogeneous lung perfusion distribution may be the dominant mechanism of MCA on CT. SPECT-CT angiography fusion images revealed occasional dissociation between lung perfusion defects and intravascular clots in acute PTE, indicating the importance of assessing the actual effect of intravascular clots on peripheral lung perfusion. Perfusion SPECT-CT fusion images revealed the characteristic and preferential location of pulmonary infarction in acute PTE. The fusion images showed occasional unexpected perfusion defects in lung areas that appear normal on CT in chronic obstructive pulmonary diseases and interstitial lung diseases, indicating that perfusion SPECT is superior to CT for the detection of mild lesions in these disorders. The fusion images showed frequent ''steal phenomenon''-induced perfusion defects extending into the lung surrounding arteriovenous fistulas, and defects in lungs appearing normal on CT in hepatopulmonary syndrome. Comprehensive assessment of lung function-CT morphology on fusion images will lead to a more profound understanding of lung pathophysiology in wide-spectral lung

  8. Three-dimensional imaging of lumbar spinal fusions

    International Nuclear Information System (INIS)

    Chafetz, N.; Hunter, J.C.; Cann, C.E.; Morris, J.M.; Ax, L.; Catterling, K.F.

    1986-01-01

Using a Cemax 1000 three-dimensional (3D) imaging computer/workstation, the authors evaluated 15 patients with lumbar spinal fusions (four with pseudarthrosis). Both axial images with sagittal and coronal reformations and 3D images were obtained. With the addition of the 3D images, the diagnoses (spinal stenosis and pseudarthrosis) were changed in four patients, confirmed in six patients, and unchanged in five patients. The ''cut-away'' 3D images proved particularly helpful for evaluating central and lateral spinal stenosis, whereas the ''external'' 3D images were most useful for evaluating the integrity of the fusion. Additionally, orthopedic surgeons found the 3D images superior both for surgical planning and for explaining pathology to patients

  9. A Geometric Dictionary Learning Based Approach for Fluorescence Spectroscopy Image Fusion

    Directory of Open Access Journals (Sweden)

    Zhiqin Zhu

    2017-02-01

Full Text Available In recent years, sparse representation approaches have been integrated into multi-focus image fusion methods, and the fused images they produce show great performance. Constructing an informative dictionary is a key step for sparsity-based image fusion. To ensure a sufficient number of useful bases for sparse representation during dictionary construction, image patches from all source images are classified into different groups based on geometric similarities. The key information of each image-patch group is extracted by principal component analysis (PCA) to build the dictionary. Using the constructed dictionary, image patches are converted into sparse coefficients by the simultaneous orthogonal matching pursuit (SOMP) algorithm to represent the source multi-focus images. Finally, the sparse coefficients are fused by the Max-L1 fusion rule and inverse-transformed into the fused image. Due to the limitations of the microscope, fluorescence images cannot be fully focused, so the proposed multi-focus image fusion solution is applied to fluorescence imaging to generate all-in-focus images. Comparative experimental results confirm the feasibility and effectiveness of the proposed solution.
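The Max-L1 rule in the final step can be sketched in a few lines: for each patch, the sparse coefficient vector with the larger l1-norm is kept. A minimal numpy sketch, assuming the dictionary learning and SOMP coding steps have already produced the coefficient matrices (one column per patch):

```python
import numpy as np

def max_l1_fuse(coeffs_a, coeffs_b):
    """Fuse two sets of sparse coefficient vectors (one column per patch)
    by keeping, for each patch, the vector with the larger l1-norm."""
    l1_a = np.abs(coeffs_a).sum(axis=0)
    l1_b = np.abs(coeffs_b).sum(axis=0)
    # broadcast the per-patch choice over all dictionary atoms
    return np.where(l1_a >= l1_b, coeffs_a, coeffs_b)

# Two patches, three dictionary atoms each (toy coefficients)
a = np.array([[0.9, 0.0],
              [0.0, 0.1],
              [0.0, 0.0]])
b = np.array([[0.1, 0.0],
              [0.0, 0.8],
              [0.0, 0.2]])
fused = max_l1_fuse(a, b)   # column 0 comes from a, column 1 from b
```

The larger l1-norm is used as a proxy for activity level, i.e. which source patch is in focus.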

  10. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    H. Shen

    2012-08-01

Full Text Available Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method can integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and the maximum a posteriori (MAP) framework is then used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
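A toy version of the MAP fusion step can illustrate the idea. This sketch assumes identity observation models and a quadratic smoothness prior (the paper's general temporal-spatial-spectral observation models are much richer) and solves for the fused image by gradient descent:

```python
import numpy as np

def map_fuse(obs, lam=0.1, lr=0.2, iters=200):
    """Toy MAP fusion: find x minimizing sum_k ||x - y_k||^2 plus a
    quadratic smoothness prior, by gradient descent. Identity
    observation models stand in for the paper's general models."""
    x = np.mean(obs, axis=0)              # initialize at the pixel-wise mean
    for _ in range(iters):
        data_grad = sum(x - y for y in obs)
        # finite-difference smoothness term (discrete Laplacian)
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x = x - lr * (data_grad - lam * lap)
    return x

obs = [np.ones((8, 8)), 3 * np.ones((8, 8))]
fused = map_fuse(obs)                     # converges to the constant image 2
```

With real observation models (blur, downsampling, spectral response), only `data_grad` changes; the optimization loop stays the same.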

  11. Alternative method to realize image fusion

    International Nuclear Information System (INIS)

    Vargas, L.; Hernandez, F.; Fernandez, R.

    2005-01-01

At present, imaging departments need to fuse images obtained from diverse devices. Conventionally, magnetic resonance or X-ray tomography images are fused with functional images such as gammagrams and PET images. Fusion technology is sold with modern imaging equipment, and not all nuclear medicine departments have access to it. For this reason we analyzed, studied and found a solution so that any nuclear medicine department can benefit from image fusion. The first indispensable requirement is a personal computer capable of hosting image-digitizer cards. Alternatively, if the gamma camera can export images in JPG, GIF, TIFF or BMP formats, the digitizer card can be dispensed with and the images recorded to disk for use on the personal computer. One of the following commercially available graphic design programs is required: Corel Draw, Photo Shop, FreeHand, Illustrator or Macromedia Flash; these are the ones we evaluated, and all allow the image fusion to be performed. Any of them works well, and only short training is needed to use them. A digital photographic camera with a resolution of at least 3.0 megapixels is also necessary. The procedure consists of photographing the radiological studies the patient already has, selecting images that demonstrate the pathology under study and that are concordant with the images created in the gammagraphic studies, whether planar or tomographic. The images are transferred to the personal computer and opened with the graphic design program, along with the gammagraphic images. The program's digital tools are used to make the images transparent, crop them, adjust their sizes and create the fused images. The process is manual and requires skill and experience in choosing the images, the cuts, the sizes and the degree of transparency. (Author)

  12. Added Value of 3D Cardiac SPECT/CTA Fusion Imaging in Patients with Reversible Perfusion Defect on Myocardial Perfusion SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Kong, Eun Jung; Cho, Ihn Ho [Yeungnam University Hospital, Daegu (Korea, Republic of); Kang, Won Jun [Yonsei University Hospital, Seoul (Korea, Republic of); Kim, Seong Min [Chungnam National University Medical School and Hospital, Daejeon (Korea, Republic of); Won, Kyoung Sook [Keomyung University Dongsan Hospital, Daegu (Korea, Republic of); Lim, Seok Tae [Chonbuk National University Medical School and Hospital, Jeonju (Korea, Republic of); Hwang, Kyung Hoon [Gachon University Gil Hospital, Incheon (Korea, Republic of); Lee, Byeong Il; Bom, Hee Seung [Chonnam National University Medical School and Hospital, Gwangju (Korea, Republic of)

    2009-12-15

Integration of the functional information of myocardial perfusion SPECT (MPS) and the morphoanatomical information of coronary CT angiography (CTA) may provide useful additional diagnostic information on the spatial relationship between perfusion defects and coronary stenosis. We studied the added value of three-dimensional cardiac SPECT/CTA fusion imaging (fusion imaging) by comparing fusion imaging with MPS. Forty-eight patients (M:F=26:22, age: 63.3±10.4 years) with a reversible perfusion defect on MPS (adenosine stress/rest SPECT with Tc-99m sestamibi or tetrofosmin) and CTA were included. Fusion images were generated and compared with the findings from MPS. Invasive coronary angiography served as the reference standard for both fusion imaging and MPS. A total of 144 coronary arteries in 48 patients were analyzed. For the detection of hemodynamically significant stenosis per coronary artery, fusion imaging yielded a sensitivity, specificity, negative and positive predictive value of 82.5%, 79.3%, 76.7% and 84.6%, respectively; respective values for MPS were 68.8%, 70.7%, 62.1% and 76.4%. Fusion imaging could also detect more multi-vessel disease. Fused three-dimensional volume-rendered SPECT/CTA imaging provides intuitively convincing information about hemodynamically relevant lesions and could improve diagnostic accuracy.
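The per-artery accuracy figures reported above follow from the standard 2x2 confusion-matrix definitions; a quick sketch with made-up counts (illustrative only, not the study's raw data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among diseased
        "specificity": tn / (tn + fp),   # true negatives among healthy
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for 200 arteries
m = diagnostic_metrics(tp=80, fp=10, tn=90, fn=20)
```

Each metric conditions on a different margin of the table, which is why a method can raise sensitivity and NPV together while specificity moves independently.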

  13. Distributed MIMO-ISAR Sub-image Fusion Method

    Directory of Open Access Journals (Sweden)

    Gu Wenkun

    2017-02-01

Full Text Available The fast fluctuation of a maneuvering target's radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, we propose an imaging method based on the fusion of sub-images from frequency-diversity distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the images acquired by the different channels. By compensating for the distortion of the ISAR images, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method.

  14. A Geometric Dictionary Learning Based Approach for Fluorescence Spectroscopy Image Fusion

    OpenAIRE

    Zhiqin Zhu; Guanqiu Qi; Yi Chai; Penghua Li

    2017-01-01

    In recent years, sparse representation approaches have been integrated into multi-focus image fusion methods. The fused images of sparse-representation-based image fusion methods show great performance. Constructing an informative dictionary is a key step for sparsity-based image fusion method. In order to ensure sufficient number of useful bases for sparse representation in the process of informative dictionary construction, image patches from all source images are classified into different ...

  15. Spectrally Consistent Satellite Image Fusion with Improved Image Priors

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Aanæs, Henrik; Jensen, Thomas B.S.

    2006-01-01

Here an improvement to our previous framework for satellite image fusion is presented: a framework based purely on the sensor physics and on prior assumptions about the fused image. The contributions of this paper are twofold. First, a method for ensuring 100% spectral consistency is proposed, even when more sophisticated image priors are applied. Second, a better image prior is introduced, via data-dependent image smoothing.

  16. Multi-focus image fusion with the all convolutional neural network

    Science.gov (United States)

    Du, Chao-ben; Gao, She-sheng

    2018-01-01

A decision map contains complete and clear information about the images to be fused and is crucial to various image fusion problems, especially multi-focus image fusion. However, obtaining a decision map good enough for satisfactory fusion is usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling layers of the CNN are replaced by convolution layers, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-CNN (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm obtains state-of-the-art fusion performance in both qualitative and quantitative evaluations.
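Once a decision map is available, the spatial-domain fusion itself is a per-pixel selection. A minimal sketch with a given binary decision map (in the paper the map is produced by the ACNN; here it is supplied by hand):

```python
import numpy as np

def fuse_with_decision_map(img_a, img_b, decision):
    """Spatial-domain multi-focus fusion: the binary decision map
    selects, per pixel, which source image is considered in focus."""
    return np.where(decision.astype(bool), img_a, img_b)

# Toy example: left half in focus in A, right half in B
a = np.full((4, 4), 10.0)
b = np.full((4, 4), 20.0)
d = np.zeros((4, 4))
d[:, :2] = 1
fused = fuse_with_decision_map(a, b, d)
```

In practice the raw network output is usually smoothed or consistency-checked before this selection step, so that focus boundaries do not produce isolated mislabeled pixels.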

  17. APPLICATION OF SENSOR FUSION TO IMPROVE UAV IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Jabari

    2017-08-01

Full Text Available Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality and thereby increase the accuracy of image classification. We tested two sensor fusion configurations, using a panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. The Pan camera is used for its higher sensitivity and the colour or MS camera for its spectral properties. The resulting images are then compared to those acquired by a high-resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations achieve higher accuracies than the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher-quality images and, accordingly, more accurate classification results.
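The abstract does not name the fusion algorithm used. As an illustration of how a Pan band can sharpen colour or MS bands, here is one common pan-sharpening baseline, the Brovey transform; the variable names and the assumption that the MS bands are already resampled to the Pan grid are mine:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Brovey-transform pan-sharpening: scale each multispectral band
    by pan / intensity. `ms` has shape (bands, H, W) and is assumed
    already resampled to the Pan grid."""
    intensity = ms.mean(axis=0)
    return ms * (pan / (intensity + eps))

rng = np.random.default_rng(0)
ms = rng.random((3, 8, 8)) + 0.5       # synthetic colour bands
pan = ms.mean(axis=0) * 1.1            # synthetic pan band, 10% brighter
sharp = brovey_pansharpen(ms, pan)
```

By construction, the intensity of the sharpened result matches the Pan band, which is how the Pan camera's spatial detail is injected into the spectral bands.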

  18. Application of Sensor Fusion to Improve Uav Image Classification

    Science.gov (United States)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality and thereby increase the accuracy of image classification. We tested two sensor fusion configurations, using a panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. The Pan camera is used for its higher sensitivity and the colour or MS camera for its spectral properties. The resulting images are then compared to those acquired by a high-resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations achieve higher accuracies than the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher-quality images and, accordingly, more accurate classification results.

  19. Feature-Fusion Guidelines for Image-Based Multi-Modal Biometric Fusion

    Directory of Open Access Journals (Sweden)

    Dane Brown

    2017-07-01

Full Text Available The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a new approach for improved image-based biometric feature-fusion. The approach extracts and combines the face, fingerprint and palmprint at the feature level for improved human identification accuracy. Feature-fusion guidelines proposed in our recent work are extended by adding a new face segmentation method and the support vector machine classifier. The new face segmentation method improves the face identification equal error rate (EER) by 10%. The support vector machine classifier, combined with the new feature selection approach proposed in our recent work, outperforms other classifiers when using a single training sample. Feature-fusion guidelines take the form of strengths and weaknesses observed in the applied feature processing modules during preliminary experiments. The guidelines are used to implement an effective biometric fusion system at the feature level, using a novel feature-fusion methodology, reducing the EER of two groups of three datasets, namely: SDUMLA face, SDUMLA fingerprint and IITD palmprint; MUCT Face, MCYT Fingerprint and CASIA Palmprint.

  20. Multispectral analytical image fusion

    International Nuclear Information System (INIS)

    Stubbings, T.C.

    2000-04-01

With new and advanced analytical imaging methods emerging, the limits of physical analysis capabilities, and with them the quantities of acquired data, are constantly being pushed, placing high demands on the field of scientific data processing and visualisation. Physical analysis methods like Secondary Ion Mass Spectrometry (SIMS) or Auger Electron Spectroscopy (AES) are capable of delivering high-resolution multispectral two-dimensional and three-dimensional image data; usually this multispectral data is available in the form of n separate image files, each showing one element or another singular aspect of the sample. There is a strong need for digital image processing methods that enable the analytical scientist, routinely confronted with such amounts of data, to gain rapid insight into the composition of the sample examined, to filter the relevant data, and to integrate the information of numerous separate multispectral images to get the complete picture. Sophisticated image processing methods like classification and fusion provide possible approaches to this challenge. Classification is a treatment by multivariate statistical means in order to extract analytical information. Image fusion, on the other hand, denotes a process where images obtained from various sensors or at different moments in time are combined to provide a more complete picture of a scene or object under investigation. Both techniques are important for the task of information extraction and integration, and often one technique depends on the other. The overall aim of this thesis is therefore to evaluate the possibilities of both techniques for analytical image processing and to find solutions for the integration and condensation of multispectral analytical image data, in order to facilitate the interpretation of the enormous amounts of data routinely acquired by modern physical analysis instruments. (author)

  1. Fusion of infrared and visible images based on BEMD and NSDFB

    Science.gov (United States)

    Zhu, Pan; Huang, Zhanhua; Lei, Hai

    2016-07-01

This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it more suitable for decomposing and fusing non-linear signals. NSDFB provides directional filtering at the decomposition levels to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. Two fusion rules are applied, one to the low-frequency sub-bands and one to the high-frequency directional sub-bands. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed algorithm achieves state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, remarkable target information and rich detail information, making them well suited to human visual characteristics or machine perception.
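The entropy comparison used above to decide which source image's residue to extract can be sketched directly (a hypothetical helper; the BEMD/NSDFB decomposition itself is far more involved):

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy (in bits) of an image's intensity histogram,
    used here to pick which source image's residue to extract."""
    hist, _ = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

flat = np.zeros((16, 16))              # uniform image: zero entropy
rng = np.random.default_rng(1)
noisy = rng.random((16, 16))           # varied intensities: high entropy
```

The image with the larger entropy carries more intensity variation, which is why its residue is the one aligned against the other source before decomposition.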

  2. Imaging fusion (SPECT/CT) in degenerative disease of spine

    International Nuclear Information System (INIS)

    Bernal, P.; Ucros, G.; Bermudez, S.; Ocampo, M.

    2007-01-01

Full text: Objective: To determine the utility of fusion imaging (SPECT/CT) in degenerative pathology of the spine and to establish the impact of fusion imaging in spinal pain due to degenerative changes of the spine. Materials and methods: 44 patients (M=21, F=23), average age 63 years, with degenerative pathology of the spine were referred to the Diagnostic Imaging department at FSFB. Bone scintigraphy (SPECT), CT of the spine (cervical: 30%, lumbar: 70%) and fusion imaging were performed in all of them. Bone scintigraphy was carried out on a Siemens Diacam double-head gamma camera attached to an ESOFT computer. The images were acquired in a 128 x 128 matrix, 20 s/image, 64 images. CT of the spine was performed the same day or two days later on a helical Siemens Somatom Emotion CT. The fusion was done on a DICOM workstation with sagittal, axial and coronal reconstruction. The findings were evaluated independently by 2 nuclear medicine physicians and 2 radiologists of the FSFB staff. Results: Bone scans (SPECT) and CT of 44 patients were evaluated. CT showed facet joint osteoarthritis in 27 (61.3%) patients, uncovertebral joint arthrosis in 7 (15.9%), bulging disc in 9 (20.4%), spinal nucleus lesion in 7 (15.9%), osteophytes in 9 (20.4%), spinal foraminal stenosis in 7 (15.9%), and spondylolysis/spondylolisthesis in 4 (9%). Bone scan showed facet joint osteoarthritis in 29 (65.9%), uncovertebral joint arthrosis in 4 (9%), osteophytes in 9 (20.4%), and was normal in 3 (6.8%). The fusion imaging showed coincident findings (main lesion on CT with high uptake on scintigraphy) in 34 patients (77.2%) and no coincidence in 10 (22.8%). In 15 (34.09%) patients the fusion provided additional information.
Analysis of the CT and SPECT findings showed similar results in most cases; there, the fusion did not provide additional information but allowed the findings to be confirmed. The findings did not match where the CT showed several findings and SPECT only one area with high uptake

  3. Image fusion using MIM software via picture archiving and communication system

    International Nuclear Information System (INIS)

    Gu Zhaoxiang; Jiang Maosong

    2001-01-01

Preliminary studies of multimodality image registration and fusion were performed using image fusion software and a picture archiving and communication system (PACS) to explore the methodology. Original image volume data were acquired with a CT scanner, MR and dual-head coincidence SPECT, respectively. The data sets from all imaging devices were queried, retrieved, transferred and accessed via DICOM PACS. The image fusion was performed at the SPECT ICON workstation, where the MIM (Medical Image Merge) fusion software was installed. The images were created by re-slicing the original volume on the fly. The image volumes were aligned by translation and rotation of the view ports with respect to the original volume orientation. The transparency factor and contrast were adjusted so that both volumes could be visualized in the merged images. The image volume data of CT, MR and nuclear medicine were transferred, accessed and loaded via PACS successfully. Well-fused images of chest CT/18F-FDG and brain MR/SPECT were obtained. These results show that image fusion using PACS is feasible and practical. Further experimentation and larger validation studies are needed to explore the full potential of clinical use

  4. Alternative method to realize image fusion; Metodo alterno para realizar fusion de imagenes

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, L; Hernandez, F; Fernandez, R [Departamento de Medicina Nuclear, Imagenologia Diagnostica. Centro Medico de Xalapa, Veracruz (Mexico)

    2005-07-01

At present, imaging departments need to fuse images obtained from diverse devices. Conventionally, magnetic resonance or X-ray tomography images are fused with functional images such as gammagrams and PET images. Fusion technology is sold with modern imaging equipment, and not all nuclear medicine departments have access to it. For this reason we analyzed, studied and found a solution so that any nuclear medicine department can benefit from image fusion. The first indispensable requirement is a personal computer capable of hosting image-digitizer cards. Alternatively, if the gamma camera can export images in JPG, GIF, TIFF or BMP formats, the digitizer card can be dispensed with and the images recorded to disk for use on the personal computer. One of the following commercially available graphic design programs is required: Corel Draw, Photo Shop, FreeHand, Illustrator or Macromedia Flash; these are the ones we evaluated, and all allow the image fusion to be performed. Any of them works well, and only short training is needed to use them. A digital photographic camera with a resolution of at least 3.0 megapixels is also necessary. The procedure consists of photographing the radiological studies the patient already has, selecting images that demonstrate the pathology under study and that are concordant with the images created in the gammagraphic studies, whether planar or tomographic. The images are transferred to the personal computer and opened with the graphic design program, along with the gammagraphic images. The program's digital tools are used to make the images transparent, crop them, adjust their sizes and create the fused images. The process is manual and requires skill and experience in choosing the images, the cuts, the sizes and the degree of transparency. (Author)

  5. Image enhancement using thermal-visible fusion for human detection

    Science.gov (United States)

    Zaihidee, Ezrinda Mohd; Hawari Ghazali, Kamarul; Zuki Saleh, Mohd

    2017-09-01

An increased interest in detecting human beings in video surveillance systems has emerged in recent years. Multisensor image fusion deserves more research attention due to its capability to improve the visual interpretability of an image. This study proposes fusion techniques for human detection based on a multiscale transform, using grayscale visible-light and infrared images. The samples for this study were taken from an online dataset. The images captured by the two sensors were decomposed into high- and low-frequency coefficients using the Stationary Wavelet Transform (SWT). Appropriate fusion rules were then used to merge the coefficients, and the final fused image was obtained using the inverse SWT. The qualitative and quantitative results show that the proposed method is superior to the two other methods in enhancing the target region and preserving the detail information of the image.
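The decompose-fuse-reconstruct scheme can be illustrated with a simplified two-scale stand-in for the SWT: a box-blur base layer plays the role of the low-frequency coefficients and the residual plays the role of the high-frequency coefficients, with averaging and max-absolute fusion rules. This is an illustrative sketch, not the paper's exact SWT pipeline:

```python
import numpy as np

def two_scale_fuse(visible, infrared, k=3):
    """Two-scale visible-infrared fusion: average the low-frequency
    bases, take the per-pixel max-absolute detail, and recombine."""
    def split(img):
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        base = np.zeros_like(img, dtype=float)
        for i in range(k):                     # k x k box blur
            for j in range(k):
                base += padded[i:i + img.shape[0], j:j + img.shape[1]]
        base /= k * k
        return base, img - base                # base + detail == img
    base_v, det_v = split(visible.astype(float))
    base_i, det_i = split(infrared.astype(float))
    fused_base = (base_v + base_i) / 2
    fused_det = np.where(np.abs(det_v) >= np.abs(det_i), det_v, det_i)
    return fused_base + fused_det

x = np.arange(64, dtype=float).reshape(8, 8)
```

The max-absolute rule keeps the stronger edge or hot-spot response from either sensor, which is what preserves warm targets from the infrared channel inside the visible-light context.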

  6. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
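The on-board half of the scheme, spatial and spectral degradation, can be sketched with block averaging and band averaging; the factor choices are illustrative and are what fix the compression ratio in advance:

```python
import numpy as np

def degrade(cube, spatial_factor=4, band_groups=4):
    """On-board step of the proposed scheme: produce a spatially
    degraded hyperspectral cube (block averaging) and a spectrally
    degraded multispectral cube (band-group averaging). `cube` has
    shape (bands, H, W); sizes are assumed divisible by the factors."""
    b, h, w = cube.shape
    low_hs = cube.reshape(b, h // spatial_factor, spatial_factor,
                          w // spatial_factor, spatial_factor).mean(axis=(2, 4))
    high_ms = cube.reshape(band_groups, b // band_groups, h, w).mean(axis=1)
    return low_hs, high_ms

cube = np.random.rand(16, 32, 32)
low_hs, high_ms = degrade(cube)    # shapes (16, 8, 8) and (4, 32, 32)
```

Here the downlinked data shrink from 16·32·32 = 16384 values to 16·8·8 + 4·32·32 = 5120, a ratio fixed entirely by the two factors; the costly hyperspectral-multispectral fusion then runs on the ground.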

  7. Evaluation of multimodality imaging using image fusion with ultrasound tissue elasticity imaging in an experimental animal model.

    Science.gov (United States)

    Paprottka, P M; Zengel, P; Cyran, C C; Ingrisch, M; Nikolaou, K; Reiser, M F; Clevert, D A

    2014-01-01

To evaluate ultrasound tissue elasticity imaging by comparison with multimodality imaging using image fusion with magnetic resonance imaging (MRI), and conventional grey-scale imaging with additional elasticity ultrasound, in an experimental small-animal squamous-cell carcinoma model for the assessment of tissue morphology. Human hypopharynx carcinoma cells were subcutaneously injected into the left flank of 12 female athymic nude rats. After 10 days (SD ± 2) of subcutaneous tumor growth, sonographic grey-scale and elasticity imaging and MRI measurements were performed using a high-end ultrasound system and a 3T MR scanner. For image fusion, the contrast-enhanced MRI DICOM data set was uploaded to the ultrasound device (GE Logic E9), which has a magnetic field generator, a linear array transducer (6-15 MHz) and a dedicated software package that can detect transducers by means of a positioning system. Conventional grey-scale and elasticity imaging were integrated into the image fusion examination. After successful registration and image fusion, the registered MR images were shown simultaneously with the respective ultrasound sectional plane. Data evaluation was performed on the digitally stored video sequence data sets by two experienced radiologists using a modified Tsukuba elasticity score, in which the colors red and green are assigned to areas of soft tissue and blue indicates hard tissue. In all cases successful image fusion and plane registration with MRI and ultrasound imaging, including grey-scale and elasticity imaging, was possible. The mean tumor volume based on caliper measurements in 3 dimensions was ~323 mm3. 4/12 rats were evaluated with score I, 5/12 rats with score II, and 3/12 rats with score III. There was a close correlation in the fused MRI with existing small necrosis in the tumor. None of the score II or III lesions was visible on conventional grey scale. The comparison of ultrasound tissue elasticity imaging enables a

  8. Neutron penumbral imaging of laser-fusion targets

    International Nuclear Information System (INIS)

    Lerche, R.A.; Ress, D.B.

    1988-01-01

    Using a new technique, penumbral coded-aperture imaging, the first neutron images of laser-driven, inertial-confinement fusion targets were obtained. With these images the deuterium-tritium burn region within a compressed target can be measured directly. 4 references, 11 figures

  9. T2*-weighted image/T2-weighted image fusion in postimplant dosimetry of prostate brachytherapy

    International Nuclear Information System (INIS)

    Katayama, Norihisa; Takemoto, Mitsuhiro; Yoshio, Kotaro

    2011-01-01

Computed tomography (CT)/magnetic resonance imaging (MRI) fusion is considered the best method for postimplant dosimetry of permanent prostate brachytherapy; however, it is inconvenient and costly. In T2*-weighted images (T2*-WI), seeds can be easily detected without the use of an intravenous contrast material. We present a novel method for postimplant dosimetry using T2*-WI/T2-weighted image (T2-WI) fusion, and compare the outcomes of T2*-WI/T2-WI fusion-based and CT/T2-WI fusion-based postimplant dosimetry. Between April 2008 and July 2009, 50 consecutive prostate cancer patients underwent brachytherapy. All patients were treated with 144 Gy of brachytherapy alone. Dose-volume histogram (DVH) parameters (prostate D90, prostate V100, prostate V150, urethral D10, and rectal D2cc) were prospectively compared between T2*-WI/T2-WI fusion-based and CT/T2-WI fusion-based dosimetry. All DVH parameters estimated by T2*-WI/T2-WI fusion-based dosimetry correlated strongly with those estimated by CT/T2-WI fusion-based dosimetry (0.77 ≤ R ≤ 0.91). No significant difference was observed in these parameters between the two methods, except for prostate V150 (p=0.04). These results show that T2*-WI/T2-WI fusion-based dosimetry is comparable or superior to MRI-based dosimetry as previously reported, since no intravenous contrast material is required. For some patients, rather large differences were observed between the two methods; we attribute these to seed miscounts in T2*-WI and shifts in the fusion. Improving the image quality of T2*-WI and the acquisition speed of T2*-WI and T2-WI may reduce seed miscounts and fusion shifts. In the future, therefore, T2*-WI/T2-WI fusion may become even more useful for postimplant dosimetry of prostate brachytherapy. (author)
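The DVH parameters compared in this study have simple definitions on a flat array of voxel doses: D90 is the 10th percentile of dose (the minimum dose covering the hottest 90% of the volume) and V100 is the fraction of voxels at or above the 144 Gy prescription. A sketch on synthetic doses (the dose values are illustrative):

```python
import numpy as np

def dvh_params(dose, prescription=144.0):
    """D90 (minimum dose received by the hottest 90% of voxels, i.e.
    the 10th percentile) and V100 (fraction of voxels at or above
    the prescription dose), from a flat array of voxel doses in Gy."""
    d90 = np.percentile(dose, 10)
    v100 = np.mean(dose >= prescription)
    return d90, v100

dose = np.linspace(100, 200, 101)   # synthetic voxel doses, 100..200 Gy
d90, v100 = dvh_params(dose)        # d90 = 110.0
```

V150, D10 and D2cc follow the same pattern with different thresholds, percentiles, or absolute-volume cutoffs.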

  10. Research and Realization of Medical Image Fusion Based on Three-Dimensional Reconstruction

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

A new medical image fusion technique based on three-dimensional reconstruction is presented. After reconstruction, the three-dimensional volume data are normalized by three-dimensional coordinate conversion in the same way and intercepted by setting up a cutting plane that includes the anatomical structure of interest; as a result, two images in complete spatial and geometric registration are obtained, and these images are finally fused. Compared with traditional two-dimensional fusion, three-dimensional fusion not only resolves the differences between the two kinds of images but also avoids the registration error that arises when the two image sets have different scan and imaging parameters. The research shows that this fusion technique is more accurate and requires no separate registration step, so it is better suited to fusing medical images from arbitrary, different equipment.

  11. Medical images fusion for application in treatment planning systems in radiotherapy

    International Nuclear Information System (INIS)

    Ros, Renato Assenci

    2006-01-01

Software for medical image fusion was developed for use in the CAT3D radiotherapy and MNPS radiosurgery treatment planning systems. A mutual-information-maximization methodology was used to register images of different modalities by measuring the statistical dependence between voxel pairs. Alignment by reference points provides an initial approximation for the nonlinear optimization process, which uses the downhill simplex method to estimate the joint histogram. The coordinate transformation function uses trilinear interpolation and searches for the global maximum in a six-dimensional space, with three degrees of freedom for translation and three for rotation, under the rigid-body model. The method was evaluated with CT, MR, and PET images from the Vanderbilt University database, comparing the transformation coordinates of each image fusion with gold-standard values. The median alignment error was 1.6 mm for CT-MR fusion and 3.5 mm for PET-MR fusion, with gold-standard accuracy estimated as 0.4 mm for CT-MR and 1.7 mm for PET-MR; the maximum errors were 5.3 mm and 7.4 mm, respectively, and 99.1% of alignment errors were subvoxel. The mean computing time was 24 s. The software was completed and deployed in 59 routine radiotherapy services, 42 of them in Brazil and 17 elsewhere in Latin America. The method imposes no limitations regarding differing image resolutions, pixel sizes, or slice thicknesses, and alignment may be performed on axial, coronal, or sagittal images. (author)
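The registration metric at the heart of this record can be sketched in a few lines: mutual information estimated from a joint histogram rises when two images are aligned. This is a minimal illustration only; the `mutual_information` name, bin count, and toy images are our choices, and the simplex search, trilinear interpolation, and rigid-body transform are omitted:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two images via their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()          # joint probability of intensity pairs
    px = pxy.sum(axis=1)             # marginal of image a
    py = pxy.sum(axis=0)             # marginal of image b
    nz = pxy > 0                     # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_aligned = mutual_information(img, img)                 # perfectly registered pair
mi_unrelated = mutual_information(img, rng.random((64, 64)))  # independent images
```

An optimizer such as downhill simplex would vary the six rigid-body parameters to maximize this quantity.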

  12. Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain

    Directory of Open Access Journals (Sweden)

    Yong Yang

    2014-01-01

Full Text Available Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress pseudo-Gibbs phenomena. The source medical images are first transformed by NSCT, and the low- and high-frequency components are then fused. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas Log-Gabor energy, which efficiently distinguishes coefficients belonging to clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed method has been compared with fusion methods based on the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual-tree complex wavelet transform (DTCWT), as well as other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed method obtains more effective and accurate fusion results for multimodal medical images than the other algorithms. Furthermore, the applicability of the proposed method has been verified with a clinical example involving images of a woman with a recurrent tumor.

  13. The establishment of the method of three dimension volumetric fusion of emission and transmission images for PET imaging

    International Nuclear Information System (INIS)

    Zhang Xiangsong; He Zuoxiang

    2004-01-01

Objective: To establish a method for three-dimensional volumetric fusion of emission and transmission images in PET imaging. Methods: The volume data of emission and transmission images acquired with a Siemens ECAT HR+ PET scanner were transferred to a PC over the local area network. The PET volume data were converted to 8-bit bytes and scaled to the range 0-255. The data coordinates of the emission and transmission images were normalized by three-dimensional coordinate conversion in the same way, and the images were fused by alpha blending. The accuracy of the image fusion was confirmed by clinical application in 13 cases. Results: The three-dimensional volumetric fusion of emission and transmission images clearly displayed the silhouette and anatomic configuration of the chest, including the chest wall, lungs, heart, and mediastinum. Forty-eight chest lesions in the 13 cases were accurately located by the image fusion. Conclusions: The volume data of emission and transmission images acquired with the Siemens ECAT HR+ PET scanner share the same data coordinates. The three-dimensional fusion software can be conveniently used for volumetric fusion of emission and transmission images and can correctly locate lesions in the chest.
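The alpha-blending step described above reduces to a weighted sum of the two byte-scaled volumes. A minimal sketch with toy volumes; the `to_byte` and `alpha_blend` helper names are ours, not the paper's:

```python
import numpy as np

def to_byte(v):
    """Scale a volume linearly to the 0-255 range, as the record describes."""
    v = v.astype(float)
    return 255.0 * (v - v.min()) / (v.max() - v.min())

def alpha_blend(emission, transmission, alpha=0.5):
    """Blend two co-registered volumes; alpha weights the emission image."""
    return alpha * to_byte(emission) + (1.0 - alpha) * to_byte(transmission)

rng = np.random.default_rng(1)
em = rng.random((8, 8, 8))      # toy emission volume
tr = rng.random((8, 8, 8))      # toy transmission volume
fused = alpha_blend(em, tr, alpha=0.6)
```

Because both inputs share the same data coordinates, no resampling step is needed before blending.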

  14. Effective Multifocus Image Fusion Based on HVS and BP Neural Network

    Directory of Open Access Journals (Sweden)

    Yong Yang

    2014-01-01

Full Text Available The aim of multifocus image fusion is to fuse images taken of the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back-propagation (BP) neural network is presented. First, three features that reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct an initial fused image. Third, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by applying a fusion rule to those focused regions. Experimental results show that the proposed method provides better performance and outperforms several existing popular fusion methods in terms of both objective and subjective evaluations.
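As a rough structural illustration of the clarity-driven decision described above, the sketch below replaces the trained BP network with a single hand-crafted cue (local variance) and fuses block-wise; the function name, block size, and the variance cue are our simplifications, not the paper's method:

```python
import numpy as np

def block_focus_fuse(a, b, bs=8):
    """Block-wise fusion: keep whichever source image has the higher local
    variance (a simple clarity cue standing in for the learned decision)."""
    out = np.empty_like(a, dtype=float)
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            pa, pb = a[i:i+bs, j:j+bs], b[i:i+bs, j:j+bs]
            out[i:i+bs, j:j+bs] = pa if pa.var() >= pb.var() else pb
    return out

rng = np.random.default_rng(2)
sharp = rng.normal(0.0, 1.0, (16, 16))     # high variance = "in focus"
blurred = np.full((16, 16), 0.5)           # flat = "out of focus"
left_focus = np.hstack([sharp[:, :8], blurred[:, 8:]])   # left half in focus
right_focus = np.hstack([blurred[:, :8], sharp[:, 8:]])  # right half in focus
fused = block_focus_fuse(left_focus, right_focus)        # recovers the sharp scene
```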

  15. SAR Data Fusion Imaging Method Oriented to Target Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yang Wei

    2015-02-01

Full Text Available To address the difficulty of extracting target outlines precisely when target scattering characteristic variation is neglected during the processing of high-resolution spaceborne SAR data, a novel fusion imaging method oriented to target feature extraction is proposed. First, several important factors that affect target feature extraction and SAR image quality are analyzed, including the curved orbit, the stop-and-go approximation, atmospheric delay, and high-order residual phase error, and the corresponding compensation methods are addressed. Based on this analysis, a mathematical model of the SAR echo combined with the target space-time spectrum is established to explain the space-time-frequency variation of the target scattering characteristic. Moreover, a fusion imaging strategy and method for high-resolution, ultra-large observation-angle conditions are put forward to improve SAR image quality by fusion processing in the range-Doppler and image domains. Finally, simulations based on typical military targets are used to verify the effectiveness of the fusion imaging method.

  16. Analyzer-based imaging of spinal fusion in an animal model

    International Nuclear Information System (INIS)

    Kelly, M E; Beavis, R C; Allen, L A; Fiorella, David; Schueltke, E; Juurlink, B H; Chapman, L D; Zhong, Z

    2008-01-01

    Analyzer-based imaging (ABI) utilizes synchrotron radiation sources to create collimated monochromatic x-rays. In addition to x-ray absorption, this technique uses refraction and scatter rejection to create images. ABI provides dramatically improved contrast over standard imaging techniques. Twenty-one adult male Wistar rats were divided into four experimental groups to undergo the following interventions: (1) non-injured control, (2) decortication alone, (3) decortication with iliac crest bone grafting and (4) decortication with iliac crest bone grafting and interspinous wiring. Surgical procedures were performed at the L5-6 level. Animals were killed at 2, 4 and 6 weeks after the intervention and the spine muscle blocks were excised. Specimens were assessed for the presence of fusion by (1) manual testing, (2) conventional absorption radiography and (3) ABI. ABI showed no evidence of bone fusion in groups 1 and 2 and showed solid or possibly solid fusion in subjects from groups 3 and 4 at 6 weeks. Metal artifacts were not present in any of the ABI images. Conventional absorption radiographs did not provide diagnostic quality imaging of either the graft material or fusion masses in any of the specimens in any of the groups. Synchrotron-based ABI represents a novel imaging technique which can be used to assess spinal fusion in a small animal model. ABI produces superior image quality when compared to conventional radiographs

  17. Analyzer-based imaging of spinal fusion in an animal model

    Science.gov (United States)

    Kelly, M. E.; Beavis, R. C.; Fiorella, David; Schültke, E.; Allen, L. A.; Juurlink, B. H.; Zhong, Z.; Chapman, L. D.

    2008-05-01

    Analyzer-based imaging (ABI) utilizes synchrotron radiation sources to create collimated monochromatic x-rays. In addition to x-ray absorption, this technique uses refraction and scatter rejection to create images. ABI provides dramatically improved contrast over standard imaging techniques. Twenty-one adult male Wistar rats were divided into four experimental groups to undergo the following interventions: (1) non-injured control, (2) decortication alone, (3) decortication with iliac crest bone grafting and (4) decortication with iliac crest bone grafting and interspinous wiring. Surgical procedures were performed at the L5-6 level. Animals were killed at 2, 4 and 6 weeks after the intervention and the spine muscle blocks were excised. Specimens were assessed for the presence of fusion by (1) manual testing, (2) conventional absorption radiography and (3) ABI. ABI showed no evidence of bone fusion in groups 1 and 2 and showed solid or possibly solid fusion in subjects from groups 3 and 4 at 6 weeks. Metal artifacts were not present in any of the ABI images. Conventional absorption radiographs did not provide diagnostic quality imaging of either the graft material or fusion masses in any of the specimens in any of the groups. Synchrotron-based ABI represents a novel imaging technique which can be used to assess spinal fusion in a small animal model. ABI produces superior image quality when compared to conventional radiographs.

  18. Data and image fusion for geometrical cloud characterization

    Energy Technology Data Exchange (ETDEWEB)

    Thorne, L.R.; Buch, K.A.; Sun, Chen-Hui; Diegert, C.

    1997-04-01

Clouds have a strong influence on the Earth's climate and therefore on climate change. An important step in improving the accuracy of models that predict global climate change (general circulation models) is improving the parameterization of clouds and cloud-radiation interactions. Improvements in the next generation of models will likely include the effect of cloud geometry on cloud-radiation parameterizations. We have developed, and report here, methods for characterizing the geometrical features and three-dimensional properties of clouds that could be of significant value in developing these new parameterizations: a means of generating and imaging synthetic clouds, which we used to test our characterization algorithms; a method for using Taylor's hypothesis to infer spatial averages from temporal averages of cloud properties; a computer method for automatically classifying cloud types in an image; and a method for producing numerical three-dimensional renderings of cloud fields based on the fusion of ground-based and satellite images together with meteorological data.

  19. Fusion of Geophysical Images in the Study of Archaeological Sites

    Science.gov (United States)

    Karamitrou, A. A.; Petrou, M.; Tsokas, G. N.

    2011-12-01

This paper presents results from different fusion techniques applied to geophysical images from different modalities, combining them into one image with higher information content than either of the two originals independently. The resulting image is useful for detecting and mapping buried archaeological relics. The examined archaeological area is situated at the Kampana site (NE Greece), near the ancient theater of the city of Maronia. Archaeological excavations there revealed an ancient theater, an aristocratic house, and a temple of the ancient Greek god Dionysus. Numerous ceramic objects found in the broader area indicate the probable existence of buried urban structures. To accurately locate and map these, geophysical measurements were performed with the magnetic method (vertical gradient of the magnetic field) and the electrical method (apparent resistivity). We performed a semi-stochastic pixel-based registration between the geophysical images to fine-register them, correcting the local spatial offsets produced by the use of hand-held devices. We then applied three different fusion approaches to the registered images. Image fusion is a relatively new technique that not only allows the integration of different information sources but also takes advantage of the spatial and spectral resolution and the orientation characteristics of each image. We used three techniques: fusion with mean values, fusion with wavelets enhancing selected frequency bands, and fusion with curvelets giving emphasis to specific bands and angles (according to the expected orientation of the relics). In all three cases the fused images gave significantly better results than either of the original geophysical images separately. Comparison of the three approaches showed that curvelet-based fusion, which emphasizes the features' orientation, appears to give the best fused image.

  20. Research on fusion algorithm of polarization image in tetrolet domain

    Science.gov (United States)

    Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing

    2015-12-01

Tetrolets are Haar-type wavelets whose supports are tetrominoes, i.e. shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. First, the magnitude-of-polarization image and the angle-of-polarization image are decomposed into low-frequency and high-frequency coefficients at multiple scales and in multiple directions using the tetrolet transform. The low-frequency coefficients are fused by averaging. For the directional high-frequency coefficients, the better coefficients are selected for fusion by a region spectrum entropy algorithm, according to differences in edge distribution among the high-frequency sub-band images. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method detects image features more effectively and that the fused image has a better subjective visual effect.

  1. Polarimetric SAR Image Classification Using Multiple-feature Fusion and Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Sun Xun

    2016-12-01

Full Text Available In this paper, we propose a supervised classification algorithm for Polarimetric Synthetic Aperture Radar (PolSAR) images using multiple-feature fusion and ensemble learning. First, we extract different polarimetric features, including the extended polarimetric feature space, Hoekman, Huynen, H/alpha/A, and four-component scattering features of PolSAR images. Next, we randomly select two types of features each time from all feature sets, to guarantee the reliability and diversity of the later ensembles, and use a support vector machine as the base classifier for predicting classification results. Finally, we concatenate all prediction probabilities of the base classifiers as the final feature representation and employ the random forest method to obtain the final classification results. Experimental results at the pixel and region levels show the effectiveness of the proposed algorithm.
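The ensemble pipeline described above (base SVMs on subsets of feature sets, concatenated class probabilities, random forest meta-classifier) can be sketched with scikit-learn on synthetic data. The toy features, labels, and the particular feature-set combinations are ours, not PolSAR data or the paper's exact sampling scheme:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy stand-ins for two polarimetric feature sets over the same 200 pixels.
X1 = rng.normal(size=(200, 4))
X2 = rng.normal(size=(200, 6))
y = (X1[:, 0] + X2[:, 0] > 0).astype(int)   # synthetic class labels

# Base SVMs, each trained on a different combination of the feature sets.
probas = []
for X in (np.hstack([X1, X2]), X1, X2):
    svm = SVC(probability=True, random_state=0).fit(X, y)
    probas.append(svm.predict_proba(X))

# Concatenated base-classifier probabilities form the meta-level features.
meta = np.hstack(probas)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(meta, y)
train_acc = rf.score(meta, y)
```

In practice the meta-classifier would be evaluated on held-out pixels; here the score is on the training set only, to keep the sketch short.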

  2. External marker-based fusion of functional and morphological images

    International Nuclear Information System (INIS)

    Kremp, S.; Schaefer, A.; Alexander, C.; Kirsch, C.M.

    1999-01-01

The fusion of image data from morphologically oriented methods such as CT and MRI with functional information from nuclear medicine (SPECT, PET) is frequently applied to allow a better association between functional findings and anatomical structures. A new software package was developed to provide image fusion of PET, SPECT, MRI, and CT data within a short processing time for brain as well as whole-body examinations, in particular of the thorax and abdomen. The software uses external markers (brain) or anatomical landmarks (thorax) for correlation. The fusion takes approximately 15 min. The examples shown emphasize the high gain in diagnostic information obtained by fusing image data from anatomical and functional methods. (orig.) [de

  3. Fusion of SPECT/CT images: Usefulness and benefits in degenerative spinal pathology

    International Nuclear Information System (INIS)

    Ocampo, Monica; Ucros, Gonzalo; Bermudez, Sonia; Morillo, Anibal; Rodriguez, Andres

    2005-01-01

The objectives were to compare CT and SPECT bone scintigraphy, evaluated independently, with SPECT-CT fusion images in patients with known degenerative spinal pathology, and to demonstrate the clinical usefulness of CT and SPECT fusion images. Materials and methods: Thirty-one patients with suspected degenerative spinal disease were evaluated with thin-slice, non-angled helical CT and bone scintigraphy with single photon emission computed tomography (SPECT), both with multiplanar reconstructions, within a 24-hour period. After independent evaluation by a nuclear medicine specialist and a radiologist, multimodality image fusion software was used to merge the CT and SPECT studies, and a final consensus interpretation of the combined images was obtained. Results: Thirty-two SPECT bone scintigraphy images, helical CT studies, and SPECT-CT fusion images were obtained for the 31 patients. The results of the bone scintigraphy and CT scans agreed in 17 pairs of studies (53.12%); in these studies, image fusion did not provide additional information on the location or extension of the lesions. In 11 study pairs (34.2%), the information from scintigraphy and CT did not agree: CT images demonstrated several abnormalities whereas the SPECT images showed only one dominant lesion, or the SPECT images did not provide enough information for anatomical localization. In these cases image fusion helped establish the precise localization of the most clinically significant lesion, which matched the lesion with the greatest uptake. In 4 study pairs (12.5%) the CT and SPECT images disagreed outright (normal scintigraphy, abnormal CT), leading to inconclusive fusion images. Conclusion: The use of CT-SPECT fusion images in degenerative spinal disease allows for the integration of anatomic detail with physiologic and functional information. CT-SPECT fusion improves the

  4. Analysis and Evaluation of IKONOS Image Fusion Algorithm Based on Land Cover Classification

    Institute of Scientific and Technical Information of China (English)

    Xia JING; Yan BAO

    2015-01-01

Each fusion algorithm has its own advantages and limitations, so it is difficult to simply rank fusion algorithms as good or bad; the choice of algorithm for fusing a given set of images also depends on the sensor types and the specific research purpose. First, five fusion methods, i.e. IHS, Brovey, PCA, SFIM, and Gram-Schmidt, are briefly described in the paper. Then visual judgment and quantitative statistical parameters are used to assess the five algorithms. Finally, to determine which is the most suitable fusion method for land cover classification of IKONOS images, maximum likelihood classification (MLC) was applied to the five fused images. The results showed that the fusion effects of the SFIM and Gram-Schmidt transforms were better than those of the other three methods in improving spatial detail and preserving spectral information, and that the Gram-Schmidt technique was superior to the SFIM transform in expressing image detail. The classification accuracy of the images fused with the Gram-Schmidt and SFIM algorithms was higher than that of the other three methods, with overall accuracy greater than 98%. The IHS-fused image had the lowest classification accuracy, with an overall accuracy of 83.14% and a kappa coefficient of 0.76. Thus the IKONOS fusion images obtained by Gram-Schmidt and SFIM were better for improving land cover classification accuracy.

  5. Image Fusion of CT and MR with Sparse Representation in NSST Domain

    Directory of Open Access Journals (Sweden)

    Chenhui Qiu

    2017-01-01

Full Text Available Multimodal image fusion techniques can integrate the information from different medical images to produce an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. Fusing CT images with images of different MR modalities is studied in this paper. First, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are then merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation (SR) based approach; a dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed method provides better fusion results in terms of subjective quality and objective evaluation.
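The absolute-maximum rule for the high-frequency components is straightforward to sketch. Below, a crude box filter stands in for the NSST decomposition and a plain average replaces the SR/DGSR low-frequency rule, so this shows only the structure of the method, not the paper's transforms:

```python
import numpy as np

def box_blur(img, k=5):
    """Crude low-pass filter standing in for the NSST low-frequency band."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def fuse_ct_mr(ct, mr):
    low_ct, low_mr = box_blur(ct), box_blur(mr)
    high_ct, high_mr = ct - low_ct, mr - low_mr
    low = 0.5 * (low_ct + low_mr)   # plain average in place of the SR/DGSR rule
    # absolute-maximum rule: keep the stronger detail coefficient per position
    high = np.where(np.abs(high_ct) >= np.abs(high_mr), high_ct, high_mr)
    return low + high               # inverse of the base/detail split

rng = np.random.default_rng(3)
ct = rng.random((32, 32))
mr = rng.random((32, 32))
fused = fuse_ct_mr(ct, mr)
identical = fuse_ct_mr(ct, ct)      # fusing an image with itself returns it
```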

  6. Fusion method of SAR and optical images for urban object extraction

    Science.gov (United States)

    Jia, Yonghong; Blum, Rick S.; Li, Fangfang

    2007-11-01

A new method for fusing SAR, panchromatic (Pan), and multispectral (MS) data is proposed. First, SAR texture is extracted by ratioing the despeckled SAR image to its low-pass approximation and is used to modulate the high-pass details extracted from the available Pan image by means of the à trous wavelet decomposition. The texture-modulated high-pass details are then injected to obtain the fusion product via the HPFM (high-pass filter-based modulation) fusion method. A set of co-registered Landsat TM, ENVISAT SAR, and SPOT Pan images is used for the experiment. The results demonstrate accurate spectral preservation on vegetated regions, bare soil, and also on textured areas (buildings and road networks), where the SAR texture information enhances the fusion product; the proposed approach is effective for image interpretation and classification.
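The texture-modulated detail injection described above can be sketched as follows; a box filter stands in for both the low-pass approximation of the SAR image and the à trous decomposition of the Pan image, so this shows only the structure of HPFM, not the paper's actual filters:

```python
import numpy as np

def box_blur(img, k=5):
    """Simple low-pass filter used here in place of the paper's filters."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def hpfm_fuse(ms_band, pan, sar):
    """SAR texture (image over its low-pass) modulates the Pan high-pass
    detail injected into one multispectral band."""
    texture = sar / np.maximum(box_blur(sar), 1e-6)   # ratio image = SAR texture
    detail = pan - box_blur(pan)                      # Pan high-frequency detail
    return ms_band + texture * detail

rng = np.random.default_rng(4)
ms = rng.random((32, 32))
pan = rng.random((32, 32))
flat_sar = np.ones((32, 32))     # textureless SAR: ratio image is exactly 1
fused = hpfm_fuse(ms, pan, flat_sar)
```

With a textureless SAR input the scheme degenerates to plain high-pass-filter injection of Pan detail into the MS band, which is what the test below checks.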

  7. Fourier domain image fusion for differential X-ray phase-contrast breast imaging

    International Nuclear Information System (INIS)

    Coello, Eduardo; Sperl, Jonathan I.; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne

    2017-01-01

X-Ray Phase-Contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase-shift, and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method presents complementary information from the three acquired signals in one single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details, and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and of the fused image. This assessment validated that all the relevant diagnostic features contained in the XPC images were present in the fused image as well.

  8. Fourier domain image fusion for differential X-ray phase-contrast breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Coello, Eduardo, E-mail: eduardo.coello@tum.de [GE Global Research, Garching (Germany); Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality, Institut für Informatik, Technische Universität München, Garching (Germany); Sperl, Jonathan I.; Bequé, Dirk [GE Global Research, Garching (Germany); Benz, Tobias [Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality, Institut für Informatik, Technische Universität München, Garching (Germany); Scherer, Kai; Herzen, Julia [Lehrstuhl für Biomedizinische Physik, Physik-Department & Institut für Medizintechnik, Technische Universität München, Garching (Germany); Sztrókay-Gaul, Anikó; Hellerhoff, Karin [Institute for Clinical Radiology, Ludwig-Maximilians-University Hospital, Munich (Germany); Pfeiffer, Franz [Lehrstuhl für Biomedizinische Physik, Physik-Department & Institut für Medizintechnik, Technische Universität München, Garching (Germany); Cozzini, Cristina [GE Global Research, Garching (Germany); Grandl, Susanne [Institute for Clinical Radiology, Ludwig-Maximilians-University Hospital, Munich (Germany)

    2017-04-15

X-Ray Phase-Contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase-shift, and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method presents complementary information from the three acquired signals in one single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details, and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and of the fused image. This assessment validated that all the relevant diagnostic features contained in the XPC images were present in the fused image as well.

  9. Automatic image fusion of real-time ultrasound with computed tomography images: a prospective comparison between two auto-registration methods.

    Science.gov (United States)

    Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-11-01

Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques, called Positioning and Sweeping auto-registration, have been developed. Purpose To compare the accuracy and the time required for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy of focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). The number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
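The comparison statistic used throughout this study is the Wilcoxon signed rank test on paired per-patient measurements. A sketch with scipy; the timing values below are illustrative, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative paired fusion times in seconds for the same ten patients
# (fabricated for demonstration, not the study's measurements).
positioning = np.array([11, 9, 12, 8, 13, 10, 11, 12, 9, 14])
sweeping = np.array([32, 30, 28, 35, 31, 33, 29, 34, 30, 26])

# Paired, non-parametric test on the per-patient differences.
stat, p = wilcoxon(positioning, sweeping)
```

A small p-value indicates a systematic difference between the paired methods without assuming normally distributed timings.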

  10. The impact of early shame memories in Binge Eating Disorder: The mediator effect of current body image shame and cognitive fusion.

    Science.gov (United States)

    Duarte, Cristiana; Pinto-Gouveia, José

    2017-12-01

This study examined the phenomenology of shame experiences from childhood and adolescence in a sample of women with Binge Eating Disorder. Moreover, a path analysis was investigated testing whether the association between shame-related memories which are traumatic and central to identity, and binge eating symptoms' severity, is mediated by current external shame, body image shame and body image cognitive fusion. Participants in this study were 114 patients, who were assessed through the Eating Disorder Examination and the Shame Experiences Interview, and through self-report measures of external shame, body image shame, body image cognitive fusion and binge eating symptoms. Shame experiences where physical appearance was negatively commented or criticized by others were the most frequently recalled. A path analysis showed a good fit between the hypothesised mediational model and the data. The traumatic and centrality qualities of shame-related memories predicted current external shame, especially body image shame. Current shame feelings were associated with body image cognitive fusion, which, in turn, predicted levels of binge eating symptomatology. Findings support the relevance of addressing early shame-related memories and negative affective and self-evaluative experiences, namely related to body image, in the understanding and management of binge eating.

  11. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Naveed ur Rehman

    2015-05-01

    Full Text Available A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.

  12. [A preliminary research on multi-source medical image fusion].

    Science.gov (United States)

    Kang, Yuanyuan; Li, Bin; Tian, Lianfang; Mao, Zongyuan

    2009-04-01

    Multi-modal medical image fusion has important value in clinical diagnosis and treatment. In this paper, the multi-resolution analysis of the Daubechies 9/7 Biorthogonal Wavelet Transform is introduced for anatomical and functional image fusion; then a new fusion algorithm combining local standard deviation and energy as the texture measure is presented. Finally, a set of quantitative evaluation criteria is given. Experiments show that both anatomical and metabolic information can be obtained effectively, and both edge and texture features are preserved successfully. The presented algorithm is more effective than the traditional algorithms.
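The coefficient-combination step in wavelet-domain fusion of this kind can be sketched compactly. The toy example below uses a single-level Haar transform in place of the Daubechies 9/7 biorthogonal wavelet, averages the approximation band, and selects detail coefficients by magnitude (a crude stand-in for the local standard deviation/energy texture measure); all function names are illustrative, not from the paper.

```python
import numpy as np

def haar2d(x):
    """Single-level 2-D Haar decomposition (rows, then columns)."""
    lo = (x[0::2] + x[1::2]) / 2.0
    hi = (x[0::2] - x[1::2]) / 2.0
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2d(ll, details):
    """Exact inverse of haar2d."""
    lh, hl, hh = details
    lo = np.empty((ll.shape[0], ll.shape[1] * 2))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    x = np.empty((lo.shape[0] * 2, lo.shape[1]))
    x[0::2], x[1::2] = lo + hi, lo - hi
    return x

def fuse_wavelet(a, b):
    """Average the approximation band; keep, per position, the detail
    coefficient with the larger magnitude (a crude energy measure)."""
    ll_a, det_a = haar2d(a)
    ll_b, det_b = haar2d(b)
    ll = (ll_a + ll_b) / 2.0
    det = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                for da, db in zip(det_a, det_b))
    return ihaar2d(ll, det)
```

Selecting the larger-magnitude detail coefficient tends to preserve edges from whichever source is locally sharper, which is the intuition behind energy-based texture measures.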

  13. Development of technology for medical image fusion

    International Nuclear Information System (INIS)

    Yamaguchi, Takashi; Amano, Daizou

    2012-01-01

    With entry into the field of medical diagnosis in mind, we have developed the positron emission tomography (PET) system ''MIP-100'', whose spatial resolution is far higher than that of conventional systems, using semiconductor detectors for preclinical imaging of small animals. In response to the recently increasing market demand to fuse functional images by PET with anatomical ones by CT or MRI, we have been developing software to implement an image fusion function that enhances marketability of the PET camera. This paper describes the method of fusing with high accuracy the PET images and anatomical ones from a CT system. It also explains that a computer simulation proved the image overlay accuracy to be ±0.3 mm, and that the effectiveness of the developed software was confirmed in experiments with measured data. Achieving such high accuracy as ±0.3 mm in software allows us to present fusion images with high resolution (<0.6 mm) without degrading the spatial resolution (<0.5 mm) of the PET system using semiconductor detectors. (author)

  14. Multi-focus Image Fusion Using Epifluorescence Microscopy for Robust Vascular Segmentation

    OpenAIRE

    Pelapur, Rengarajan; Prasath, Surya; Palaniappan, Kannappan

    2014-01-01

    We are building a computerized image analysis system for the Dura Mater vascular network from fluorescence microscopy images. We propose a system that couples a multi-focus image fusion module with a robust adaptive filtering based segmentation. The robust adaptive filtering scheme handles noise without destroying small structures, and the multi-focus image fusion considerably improves the overall segmentation quality by integrating information from multiple images. Based on the segmenta...

  15. Adaptive polarization image fusion based on regional energy dynamic weighted average

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yong-qiang; PAN Quan; ZHANG Hong-cai

    2005-01-01

    According to the principle of polarization imaging and the relation between Stokes parameters and the degree of linear polarization, there is much redundant and complementary information in polarized images. Since man-made objects and natural objects can be easily distinguished in images of the degree of linear polarization, and images of Stokes parameters contain rich detailed information of the scene, the clutter in the images can be removed efficiently while the detailed information is maintained by combining these images. An algorithm of adaptive polarization image fusion based on regional energy dynamic weighted average is proposed in this paper to combine these images. Through an experiment and simulations, most clutter is removed by this algorithm. The fusion method is applied under different light conditions in simulation, and the influence of lighting conditions on the fusion results is analyzed.
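As a rough sketch of the quantities involved, the code below derives the linear Stokes parameters and degree of linear polarization (DoLP) from four polarizer orientations, then fuses two images with a regional-energy-driven weighted average. The window radius and function names are our assumptions, and the paper's adaptive weighting logic is simplified.

```python
import numpy as np

def stokes_dolp(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarization
    from intensities behind polarizers at 0/45/90/135 degrees."""
    I = 0.5 * (i0 + i45 + i90 + i135)
    Q = i0 - i90
    U = i45 - i135
    dolp = np.sqrt(Q**2 + U**2) / np.maximum(I, 1e-12)
    return I, Q, U, dolp

def regional_energy(img, r=1):
    """Sum of squared intensities over a (2r+1)^2 neighbourhood."""
    p = np.pad(img**2, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def fuse_energy_weighted(a, b, r=1):
    """Dynamic weighted average driven by regional energy."""
    ea, eb = regional_energy(a, r), regional_energy(b, r)
    w = np.where(ea + eb > 0, ea / np.maximum(ea + eb, 1e-12), 0.5)
    return w * a + (1.0 - w) * b
```

For fully linearly polarized light at 0 degrees (all intensity passes the 0-degree polarizer, none the 90-degree one), DoLP evaluates to 1; for unpolarized light it evaluates to 0.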

  16. Modality prediction of biomedical literature images using multimodal feature representation

    Directory of Open Access Journals (Sweden)

    Pelka, Obioma

    2016-08-01

    Full Text Available This paper presents the modelling approaches performed to automatically predict the modality of images found in biomedical literature. Various state-of-the-art visual features such as Bag-of-Keypoints computed with dense SIFT descriptors, texture features and Joint Composite Descriptors were used for visual image representation. Text representation was obtained by vector quantisation on a Bag-of-Words dictionary generated using attribute importance derived from a χ²-test. By computing the principal components separately on each feature, dimension reduction as well as computational load reduction was achieved. Various multiple feature fusions were adopted to supplement visual image information with corresponding text information. The improvement obtained when using multimodal features vs. visual or text features alone was detected, analysed and evaluated. Random Forest models with 100 to 500 deep trees grown by resampling, a multi-class linear-kernel SVM with C=0.05, and a late fusion of the two classifiers were used for modality prediction. The Random Forest classifier achieved higher accuracy, and Bag-of-Keypoints computed with dense SIFT descriptors proved a better approach than with Lowe's SIFT.

  17. Remote Sensing Image Fusion Based on the Combination Grey Absolute Correlation Degree and IHS Transform

    Directory of Open Access Journals (Sweden)

    Hui LIN

    2014-12-01

    Full Text Available An improved fusion algorithm for multi-source remote sensing images with high spatial resolution and multi-spectral capacity is proposed, based on traditional IHS fusion and grey correlation analysis. Firstly, the grey absolute correlation degree is used to discriminate non-edge pixels and edge pixels in high-spatial-resolution images, by which the weight of the intensity component is identified in order to combine it with the high-spatial-resolution image. Image fusion is then achieved using the IHS inverse transform. The proposed method is applied to ETM+ multi-spectral images and a panchromatic image, and to Quickbird’s multi-spectral images and panchromatic image, respectively. The experiments show that the fusion method proposed in the paper can efficiently preserve the spectral information of the original multi-spectral images while greatly enhancing spatial resolution. By comparison and analysis, the proposed fusion algorithm is better than traditional IHS fusion and the fusion method based on grey correlation analysis and IHS transform.
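The IHS substitution at the core of such methods can be illustrated in a few lines. The sketch below implements plain fast-IHS pansharpening, with intensity taken as the band mean and replaced by the panchromatic band; the grey-absolute-correlation edge weighting proposed in the paper is omitted, and the array layout is our assumption.

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Fast IHS-style pansharpening: substitute the panchromatic band
    for the intensity component I = (R + G + B) / 3, which reduces to
    adding (pan - I) to every band.  `ms` has shape (bands, H, W)."""
    intensity = ms.mean(axis=0)
    return ms + (pan - intensity)[None, :, :]
```

By construction, the band mean of the fused result equals the panchromatic image, which is why this substitution injects spatial detail while leaving band differences (the spectral content) untouched.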

  18. Research on Methods of Infrared and Color Image Fusion Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Zhao Rentao

    2014-06-01

    Full Text Available There are significant differences in the imaging features of infrared and color images, but the two also carry highly complementary information. In this paper, based on the characteristics of infrared and color images, wavelet transform is first applied to the luminance component of the infrared image and the color image. At each resolution level, the regional variance is used as the activity measure and the regional variance ratio as the matching measure, and the fusion image is enhanced in the process of integration; the fused image is then obtained by the final synthesis module and the multi-resolution inverse transform. The experimental results show that the fusion image obtained by the proposed method is better than the other methods at keeping the useful information of the original infrared image and the color information of the original color image. In addition, the fusion image has stronger adaptability and better visual effect.

  19. A color fusion method of infrared and low-light-level images based on visual perception

    Science.gov (United States)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    The color fusion images can be obtained through the fusion of infrared and low-light-level images, and contain the information of both. Fusion images can help observers understand multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets, but also retain rich natural information of the scenes.

  20. Three dimensional image alignment, registration and fusion

    International Nuclear Information System (INIS)

    Treves, S.T.; Mitchell, K.D.; Habboush, I.H.

    1998-01-01

    Combined assessment of three dimensional anatomical and functional images (SPECT, PET, MRI, CT) is useful to determine the nature and extent of lesions in many parts of the body. Physicians principally rely on their spatial sense to mentally re-orient and overlap images obtained with different imaging modalities. Objective methods that enable easy and intuitive image registration can help the physician arrive at more optimal diagnoses and better treatment decisions. This review describes a simple, intuitive and robust image registration approach developed in our laboratory. It differs from most other registration techniques in that it allows the user to incorporate all of the available information within the images in the registration process. This method takes full advantage of the ability of knowledgeable operators to achieve image registration and fusion using an intuitive interactive visual approach. It can register images accurately and quickly without the use of elaborate mathematical modeling or optimization techniques. The method provides the operator with tools to manipulate images in three dimensions, including visual feedback techniques to assess the accuracy of registration (grids, overlays, masks, and fusion of images in different colors). Its application is not limited to brain imaging and can be applied to images from any region in the body. The overall effect is a registration algorithm that is easy to implement and can achieve accuracy on the order of one pixel.

  1. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Directory of Open Access Journals (Sweden)

    Kan Ren

    2014-01-01

    Full Text Available We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from the compressive measurements. Then, a set of multiscale dictionaries is learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear weighted fusion rule is proposed to obtain the high-resolution image. Experiments are conducted to investigate the performance of our proposed method, and the results prove its superiority to its counterparts.

  2. Ultrasound and PET-CT image fusion for prostate brachytherapy image guidance

    International Nuclear Information System (INIS)

    Hasford, F.

    2015-01-01

    Fusion of medical images between different cross-sectional modalities is widely used, mostly where functional images are fused with anatomical data. Ultrasound has for some time now been the standard imaging technique used for treatment planning of prostate cancer cases. While this approach is laudable and has yielded some positive results, the latest developments have been the integration of images from ultrasound and other modalities such as PET-CT to complement missing properties of ultrasound images. This study has sought to enhance diagnosis and treatment of prostate cancers by developing MATLAB algorithms to fuse ultrasound and PET-CT images. The fused ultrasound-PET-CT image has been shown to contain improved quality of information compared to the individual input images. The fused image has the properties of reduced uncertainty, increased reliability, robust system performance, and compact representation of information. The objective of co-registering the ultrasound and PET-CT images was achieved by conducting performance evaluation of the ultrasound and PET-CT imaging systems, developing an image contrast enhancement algorithm, developing a MATLAB image fusion algorithm, and assessing the accuracy of the fusion algorithm. Performance evaluation of the ultrasound brachytherapy system produced satisfactory results in accordance with set tolerances as recommended by AAPM TG 128. Using an ultrasound brachytherapy quality assurance phantom, an average axial distance measurement of 10.11 ± 0.11 mm was estimated. Average lateral distance measurements of 10.08 ± 0.07 mm, 20.01 ± 0.06 mm, 29.89 ± 0.03 mm and 39.84 ± 0.37 mm were estimated for the inter-target distances corresponding to 10 mm, 20 mm, 30 mm and 40 mm respectively. Volume accuracy assessment produced measurements of 3.97 cm³, 8.86 cm³ and 20.11 cm³ for known standard volumes of 4 cm³, 9 cm³ and 20 cm³ respectively. Depth of penetration assessment of the ultrasound system produced an estimate of 5.37 ± 0.02 cm

  3. Electrical characterization of bolus material as phantom for use in electrical impedance and computed tomography fusion imaging

    Directory of Open Access Journals (Sweden)

    Parvind Kaur Grewal

    2014-04-01

    Full Text Available Phantoms are widely used in medical imaging to predict image quality prior to clinical imaging. This paper discusses the possible use of bolus material, as a conductivity phantom, for validation and interpretation of electrical impedance tomography (EIT) images. Bolus is commonly used in radiation therapy to mimic tissue. When irradiated, it has radiological characteristics similar to tissue. With increased research interest in CT/EIT fusion imaging there is a need to find a material which has both the absorption coefficient and electrical conductivity similar to biological tissues. In the present study the electrical properties, specifically resistivity, of various commercially available bolus materials were characterized by comparing their frequency response with that of in-vivo connective adipose tissue. It was determined that the resistivity of Gelatin Bolus is similar to in-vivo tissue in the frequency range 10 kHz to 1 MHz and therefore has potential to be used in EIT/CT fusion imaging studies.

  4. A method based on IHS cylindrical transform model for quality assessment of image fusion

    Science.gov (United States)

    Zhu, Xiaokun; Jia, Yonghong

    2005-10-01

    Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for quality assessment of image fusion in remote sensing have also become a research issue at home and abroad. Traditional assessment methods combine calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects: on the one hand, most indexes lack theoretical support for comparing different fusion methods; on the other hand, there is no uniform preference among most of the quantitative assessment indexes when they are applied to estimate fusion effects. That is, spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method to unify the spatial and spectral feature assessment. So in this paper, on the basis of the approximate general model of four traditional fusion methods, namely Intensity Hue Saturation (IHS) triangle transform fusion, High Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only obtain evaluation results for spatial and spectral features on the basis of a uniform preference, but can also provide comparisons between fusion image sources and fused images, and reveal differences among fusion methods. Compared with the traditional assessment methods, the new method is more intuitive and more consistent with subjective evaluation.
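A correlation-coefficient index of the kind used here is straightforward to compute. The sketch below is a generic Pearson correlation between two images, without the IHS cylindrical transform step specific to the paper.

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between two same-size images, a common
    spectral-fidelity index comparing a fused image with a source."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

Values near 1 indicate that the fused image preserves the source's structure up to an affine intensity change; the index is invariant to brightness and contrast shifts.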

  5. Fusion of multispectral and panchromatic images using multirate filter banks

    Institute of Scientific and Technical Information of China (English)

    Wang Hong; Jing Zhongliang; Li Jianxun

    2005-01-01

    In this paper, an image fusion method based on filter banks is proposed for merging a high-resolution panchromatic image and a low-resolution multispectral image. Firstly, the filter banks are designed to merge different signals with minimum distortion by using cosine modulation. Then, the filter banks-based image fusion is adopted to obtain a high-resolution multispectral image that combines the spectral characteristics of the low-resolution data with the spatial resolution of the panchromatic image. Finally, two different experiments and corresponding performance analysis are presented. Experimental results indicate that the proposed approach outperforms the IHS transform, discrete wavelet transform and discrete wavelet frame.

  6. Advances in fusion of PET, SPECT, CT and MRI images

    International Nuclear Information System (INIS)

    Pietrzyk, U.

    2003-01-01

    Image fusion as part of the correlative analysis of medical images has gained ever more interest, and the fact that combined systems for PET and CT are commercially available demonstrates its importance for medical diagnostics, therapy and research-oriented applications. In this work the basics of image registration, its different strategies and the mathematical and physical background are described. Successful image registration is an essential prerequisite for the next step, namely correlative medical image analysis. Means to verify image registration and the different modes of integrated display are presented and their usefulness is discussed. Possible limitations of image fusion are pointed out in order to avoid misinterpretation. (orig.)

  7. Data fusion of Landsat TM and IRS images in forest classification

    Science.gov (United States)

    Guangxing Wang; Markus Holopainen; Eero Lukkarinen

    2000-01-01

    Data fusion of Landsat TM images and Indian Remote Sensing satellite panchromatic image (IRS-1C PAN) was studied and compared to the use of TM or IRS image only. The aim was to combine the high spatial resolution of IRS-1C PAN to the high spectral resolution of Landsat TM images using a data fusion algorithm. The ground truth of the study was based on a sample of 1,020...

  8. Improved detection probability of low level light and infrared image fusion system

    Science.gov (United States)

    Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang

    2018-02-01

    Low level light (LLL) images contain rich information on environment details, but are easily affected by the weather. In the case of smoke, rain, cloud or fog, much target information is lost. Infrared imaging, which relies on the radiation produced by the object itself, can "actively" obtain target information in the scene. However, image contrast and resolution are poor, the ability to acquire target details is very limited, and the imaging mode does not conform to human visual habits. The fusion of LLL and infrared images can make up for the deficiencies of each sensor and exploit the advantages of each. First, we show the hardware design of the fusion circuit. Then, through recognition probability calculations for a target (one person) and the background (trees), we find that the tree detection probability of the LLL image is higher than that of the infrared image, and the person detection probability of the infrared image is clearly higher than that of the LLL image. The detection probability of the fusion image for both the person and the trees is higher than that of either single detector. Therefore, image fusion can significantly increase recognition probability and improve detection efficiency.

  9. Color image guided depth image super resolution using fusion filter

    Science.gov (United States)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images. Color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide image is an efficient way to get a HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm, which uses a HR color image as a guide image and a LR depth image as input. We use a fusion filter combining a guided filter and an edge-based joint bilateral filter to get the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better quality in HR depth images both numerically and visually.
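The joint bilateral half of such a fusion filter can be sketched as follows: a deliberately naive O(HW·w²) implementation in which the range weights come from the guide (color-derived) image while the values being smoothed come from the depth image. Parameter values and names are illustrative, and the guided-filter component of the paper's fusion filter is omitted.

```python
import numpy as np

def joint_bilateral(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Joint bilateral filter: smooth `depth`, but weight neighbours by
    similarity in the single-channel `guide` image, so guide edges are
    preserved in the filtered depth."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad_d = np.pad(depth, radius, mode='edge')
    pad_g = np.pad(guide, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            dwin = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng_w = np.exp(-(gwin - guide[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[y, x] = (wgt * dwin).sum() / wgt.sum()
    return out
```

In a super-resolution pipeline the LR depth map would first be upsampled to the guide's resolution (e.g. bicubically) and then filtered this way, so depth discontinuities snap to color edges.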

  10. Percutaneous Thermal Ablation with Ultrasound Guidance. Fusion Imaging Guidance to Improve Conspicuity of Liver Metastasis

    Energy Technology Data Exchange (ETDEWEB)

    Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros; Deschamps, Frederic [Gustave Roussy - Cancer Campus, Interventional Radiology Department (France); Petrover, David [Imagerie Médicale Paris Centre, IMPC (France); Baere, Thierry De [Gustave Roussy - Cancer Campus, Interventional Radiology Department (France)

    2017-05-15

    Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve visibility and targeting of liver metastases that were deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged conspicuous enough on US was performed under CT/US fusion imaging guidance. The conspicuity before and after the fusion imaging was graded on a five-point scale, and significance was assessed by the Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5–14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0–2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1–5, p < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US. Fusion imaging is an effective tool to increase the conspicuity of liver metastases that were initially deemed not visualizable on conventional US imaging.

  11. Fusion of Images from Dissimilar Sensor Systems

    National Research Council Canada - National Science Library

    Chow, Khin

    2004-01-01

    Different sensors exploit different regions of the electromagnetic spectrum; therefore a multi-sensor image fusion system can take full advantage of the complementary capabilities of individual sensors in the suit...

  12. Mosaicing of single plane illumination microscopy images using groupwise registration and fast content-based image fusion

    Science.gov (United States)

    Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel

    2008-03-01

    Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four and eight angle acquisitions.
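Gaussian-filter-based content weighting of this general kind can be sketched as follows: each input is weighted by Gaussian-smoothed local contrast, which is far cheaper to compute than local-entropy weights. The sigma value and all function names are our assumptions, not the paper's.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur with edge padding (pure NumPy)."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    p = np.pad(img, ((r, r), (0, 0)), mode='edge')           # vertical pass
    img = sum(k[i] * p[i:i + img.shape[0]] for i in range(2 * r + 1))
    p = np.pad(img, ((0, 0), (r, r)), mode='edge')           # horizontal pass
    return sum(k[i] * p[:, i:i + img.shape[1]] for i in range(2 * r + 1))

def fuse_by_contrast(a, b, sigma=2.0):
    """Blend two registered images, weighting each pixel by the
    Gaussian-smoothed local contrast (squared deviation from a
    Gaussian-blurred local mean) of its source."""
    ca = gaussian_blur((a - gaussian_blur(a, sigma)) ** 2, sigma)
    cb = gaussian_blur((b - gaussian_blur(b, sigma)) ** 2, sigma)
    w = np.where(ca + cb > 0, ca / np.maximum(ca + cb, 1e-12), 0.5)
    return w * a + (1.0 - w) * b
```

Regions that are blurred in one input have low local contrast there, so the sharper input dominates the blend, which is exactly the behaviour wanted when fusing views whose depth of sharp focus differs.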

  13. A Remote Sensing Image Fusion Method based on adaptive dictionary learning

    Science.gov (United States)

    He, Tongdi; Che, Zongxi

    2018-01-01

    This paper discusses a remote sensing fusion method based on 'adaptive sparse representation (ASP)', intended to provide improved spectral information, reduce data redundancy and decrease system complexity. First, the training sample set is formed by taking random blocks from the images to be fused; the dictionary is then constructed using the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterated processing at each step. Second, the self-adaptive weighted coefficient rule of regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and an average is taken to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.

  14. Fusion of imaging and nonimaging data for surveillance aircraft

    Science.gov (United States)

    Shahbazian, Elisa; Gagnon, Langis; Duquet, Jean Remi; Macieszczak, Maciej; Valin, Pierre

    1997-06-01

    This paper describes a phased incremental integration approach for application of image analysis and data fusion technologies to provide automated intelligent target tracking and identification for airborne surveillance on board an Aurora Maritime Patrol Aircraft. The sensor suite of the Aurora consists of a radar, an identification friend or foe (IFF) system, an electronic support measures (ESM) system, a spotlight synthetic aperture radar (SSAR), a forward looking infra-red (FLIR) sensor and a link-11 tactical datalink system. Lockheed Martin Canada (LMCan) is developing a testbed, which will be used to analyze and evaluate approaches for combining the data provided by the existing sensors, which were initially not designed to feed a fusion system. Three concurrent research proof-of-concept activities contribute techniques, algorithms and methodology to three sequential phases of integration of this testbed. These activities are: (1) analysis of the fusion architecture (track/contact/hybrid) most appropriate for the type of data available, (2) extraction and fusion of simple features from the imaging data into the fusion system performing automatic target identification, and (3) development of a unique software architecture which will permit integration and independent evolution, enhancement and optimization of various decision aid capabilities, such as multi-sensor data fusion (MSDF), situation and threat assessment (STA) and resource management (RM).

  15. Adaptive structured dictionary learning for image fusion based on group-sparse-representation

    Science.gov (United States)

    Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei

    2018-04-01

    Dictionary learning is the key process of sparse representation which is one of the most widely used image representation theories in image fusion. The existing dictionary learning method does not use the group structure information and the sparse coefficients well. In this paper, we propose a new adaptive structured dictionary learning algorithm and a l1-norm maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. In the dictionary learning algorithm, we do not need prior knowledge about any group structure of the dictionary. By using the characteristics of the dictionary in expressing the signal, our algorithm can automatically find the desired potential structure information that hidden in the dictionary. The fusion rule takes the physical meaning of the group structure dictionary, and makes activity-level judgement on the structure information when the images are being merged. Therefore, the fused image can retain more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that, the dictionary learning algorithm and the fusion rule both outperform others in terms of several objective evaluation metrics.
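An l1-norm maximum fusion rule of this general kind reduces, for ungrouped coefficients, to a per-patch activity comparison. The sketch below operates on plain sparse-coefficient matrices over a shared dictionary and ignores the group structure that is the paper's actual contribution; shapes and names are our assumptions.

```python
import numpy as np

def l1_max_fuse(c1, c2):
    """Per-patch activity measure: for each patch (column), keep the
    coefficient vector with the larger l1 norm.  c1 and c2 have shape
    (n_atoms, n_patches), coding the same patches of two sources."""
    keep_first = np.abs(c1).sum(axis=0) >= np.abs(c2).sum(axis=0)
    return np.where(keep_first[None, :], c1, c2)
```

The fused image would then be reconstructed as `D @ l1_max_fuse(c1, c2)` for a shared dictionary `D`, with overlapping patches averaged back into place.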

  16. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    Science.gov (United States)

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer that achieves fusion by preserving the intensity of the infrared image and then transferring gradients in the corresponding visible one to the result. Gradient transfer suffers from the problems of low dynamic range and detail loss because it ignores the intensity from the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared image and the visible one. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem into a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that the new algorithm achieves better fusion results with high computational efficiency in both qualitative and quantitative tests than gradient transfer and most state-of-the-art methods.

  17. X-ray imaging in the laser-fusion program

    International Nuclear Information System (INIS)

    McCall, G.H.

    1977-01-01

    Imaging devices which are used or planned for x-ray imaging in the laser-fusion program are discussed. Resolution criteria are explained, and a suggestion is made for using the modulation transfer function as a uniform definition of resolution for these devices.

  18. Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.

    Science.gov (United States)

    Reena Benjamin, J; Jayasree, T

    2018-02-01

    In the medical field, radiologists need more informative, high-quality medical images to diagnose diseases. Image fusion plays a vital role in biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image that is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information content, preserve the edges, and enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform, operating in the complex domain with shift invariant properties, brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain in two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift invariant wavelet transform method. The proposed method is evaluated on three main key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations, demonstrating that the proposed method enhances the directional features as well as fine edge details while reducing redundant details, artifacts, and distortions.

  19. 3D Image Fusion to Localise Intercostal Arteries During TEVAR

    Directory of Open Access Journals (Sweden)

    G. Koutouzi

    Full Text Available Purpose: Preservation of intercostal arteries during thoracic aortic procedures reduces the risk of post-operative paraparesis. The origins of the intercostal arteries are visible on pre-operative computed tomography angiography (CTA), but rarely on intra-operative angiography. The purpose of this report is to suggest an image fusion technique for intra-operative localisation of the intercostal arteries during thoracic endovascular repair (TEVAR). Technique: The ostia of the intercostal arteries are identified and manually marked with rings on the pre-operative CTA. The optimal distal landing site in the descending aorta is determined and marked, allowing enough length for an adequate seal and attachment without covering more intercostal arteries than necessary. After 3D/3D fusion of the pre-operative CTA with an intra-operative cone-beam CT (CBCT), the markings are overlaid on the live fluoroscopy screen for guidance. The accuracy of the overlay is confirmed with digital subtraction angiography (DSA) and the overlay is adjusted when needed. Stent graft deployment is guided by the markings. The initial experience of this technique in seven patients is presented. Results: 3D image fusion was feasible in all cases. Follow-up CTA after 1 month revealed that all intercostal arteries planned for preservation were patent. None of the patients developed signs of spinal cord ischaemia. Conclusion: 3D image fusion can be used to localise the intercostal arteries during TEVAR. This may preserve some intercostal arteries and reduce the risk of post-operative spinal cord ischaemia. Keywords: TEVAR, Intercostal artery, Spinal cord ischaemia, 3D image fusion, Image guidance, Cone-beam CT

  20. Performance comparison of different graylevel image fusion schemes through a universal image quality index

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.

    2003-01-01

    We applied a recently introduced universal image quality index Q that quantifies the distortion of a processed image relative to its original version, to assess the performance of different graylevel image fusion schemes. The method is as follows. First, we adopt an original test image as the
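
    The universal image quality index Q of Wang and Bovik referenced above has a well-known closed form combining correlation, luminance, and contrast distortion. A minimal sketch, computed over a single window (in practice Q is computed over sliding windows and the local values are averaged):

```python
import numpy as np

def quality_index_q(x, y):
    """Universal image quality index Q of Wang & Bovik (single window).

    Q = 4*cov(x,y)*mean(x)*mean(y) /
        ((var(x)+var(y)) * (mean(x)^2 + mean(y)^2))

    Q equals 1 only when the two (non-constant) images are identical,
    and decreases with loss of correlation, luminance, or contrast.
    """
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

    Note the denominator vanishes for two constant zero-mean images, so real implementations guard the division when applying the index window by window.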

  1. Quality Assurance of Serial 3D Image Registration, Fusion, and Segmentation

    International Nuclear Information System (INIS)

    Sharpe, Michael; Brock, Kristy K.

    2008-01-01

    Radiotherapy relies on images to plan, guide, and assess treatment. Image registration, fusion, and segmentation are integral to these processes; specifically for aiding anatomic delineation, assessing organ motion, and aligning targets with treatment beams in image-guided radiation therapy (IGRT). Future developments in image registration will also improve estimations of the actual dose delivered and quantitative assessment in patient follow-up exams. This article summarizes common and emerging technologies and reviews the role of image registration, fusion, and segmentation in radiotherapy processes. The current quality assurance practices are summarized, and implications for clinical procedures are discussed.

  2. Multi-Modality Medical Image Fusion Based on Wavelet Analysis and Quality Evaluation

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Multi-modality medical image fusion has more and more important applications in medical image analysis and understanding. In this paper, we develop and apply a multi-resolution method based on a wavelet pyramid to fuse medical images from different modalities such as PET-MRI and CT-MRI. In particular, we evaluate the different fusion results when applying different selection rules and obtain the optimum combination of fusion parameters.
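
    A wavelet-pyramid fusion of the kind described can be sketched with a one-level Haar transform in plain NumPy: average the approximation sub-bands and apply a choose-max selection rule to the detail sub-bands. This is a generic illustration of such selection rules, not the paper's specific parameter combination (which would also use more decomposition levels and other wavelets).

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def wavelet_fuse(img1, img2):
    """Average the approximations; keep the larger-magnitude detail coefficients."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

    Swapping the "choose-max" rule for weighted averaging, or recursing on the LL band for a deeper pyramid, is exactly the kind of parameter combination the paper evaluates.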

  3. Improving Accuracy for Image Fusion in Abdominal Ultrasonography

    Directory of Open Access Journals (Sweden)

    Caroline Ewertsen

    2012-08-01

    Full Text Available Image fusion involving real-time ultrasound (US) is a technique where previously recorded computed tomography (CT) or magnetic resonance images (MRI) are reformatted in a projection to fit the real-time US images after an initial co-registration. The co-registration aligns the images by means of common planes or points. We evaluated the accuracy of the alignment when varying parameters such as patient position, respiratory phase, and distance from the co-registration points/planes. We performed a total of 80 co-registrations and obtained the highest accuracy when the respiratory phase for the co-registration procedure was the same as when the CT or MRI was obtained. Furthermore, choosing co-registration points/planes close to the area of interest also improved the accuracy. With all settings optimized, a mean error of 3.2 mm was obtained. We conclude that image fusion involving real-time US is an accurate method for abdominal examinations and that the accuracy is influenced by various adjustable factors that should be kept in mind.

  4. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer’s disease

    International Nuclear Information System (INIS)

    Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong

    2016-01-01

    Computer based diagnosis of Alzheimer’s disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  5. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer’s disease

    Energy Technology Data Exchange (ETDEWEB)

    Bhateja, Vikrant, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn; Moin, Aisha; Srivastava, Anuja [Shri Ramswaroop Memorial Group of Professional Colleges (SRMGPC), Lucknow, Uttar Pradesh 226028 (India); Bao, Le Nguyen [Duytan University, Danang 550000 (Viet Nam); Lay-Ekuakille, Aimé [Department of Innovation Engineering, University of Salento, Lecce 73100 (Italy); Le, Dac-Nhuong, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn [Duytan University, Danang 550000 (Viet Nam); Haiphong University, Haiphong 180000 (Viet Nam)

    2016-07-15

    Computer based diagnosis of Alzheimer’s disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  6. THERMAL AND VISIBLE SATELLITE IMAGE FUSION USING WAVELET IN REMOTE SENSING AND SATELLITE IMAGE PROCESSING

    Directory of Open Access Journals (Sweden)

    A. H. Ahrari

    2017-09-01

    Full Text Available The multimodal remote sensing approach is based on merging different data from different portions of the electromagnetic spectrum, which improves accuracy in satellite image processing and interpretation. Visible and thermal infrared bands independently contain valuable spatial and spectral information. Visible bands provide rich spatial information, while thermal bands provide different radiometric and spectral information. However, low spatial resolution is the most important limitation of thermal infrared bands. Using satellite image fusion, it is possible to merge them into a single thermal image that contains high spectral and spatial information at the same time. The aim of this study is a quantitative and qualitative performance assessment of thermal and visible image fusion with the wavelet transform and different filters. In this research, the wavelet algorithm (Haar) and different decomposition filters (mean, linear, ma, min and rand) were applied to the thermal and panchromatic bands of the Landsat 8 satellite as a shortwave and longwave fusion method. Finally, quality assessment was done with quantitative and qualitative approaches. Quantitative parameters such as Entropy, Standard Deviation, Cross Correlation, Q Factor and Mutual Information were used. For thermal and visible image fusion accuracy assessment, all parameters (quantitative and qualitative) must be analysed with respect to each other. Among all relevant statistical factors, correlation has the most meaningful result and similarity to the qualitative assessment. Results showed that the mean and linear filters produce better fused images than the other filters in the Haar algorithm. Linear and mean filters have the same performance, and there is no difference between their qualitative and quantitative results.

  7. Evaluation of Effective Parameters on Quality of Magnetic Resonance Imaging-computed Tomography Image Fusion in Head and Neck Tumors for Application in Treatment Planning

    Directory of Open Access Journals (Sweden)

    Atefeh Shirvani

    2017-01-01

    Full Text Available Background: In radiation therapy, computed tomography (CT) simulation is used in treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work tried to identify the practical issues in combining CT and MRI images in real clinical cases; the effect of various factors on image fusion quality was evaluated. Materials and Methods: In this study, the data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the possibility and quality of image fusion was evaluated. These parameters included the angle of the patient's head on the bed, slice thickness, slice gap, and height of the patient's head. Results: According to the results, the dominating factor in the quality of image fusion was the difference in slice gap between the CT and MRI images (cor = 0.86); when this difference exceeded 4 cm, image fusion quality was <25%. Conclusion: The most important problem in image fusion is that MRI images are taken without regard to their use in treatment planning. In general, parameters related to the patient's position during MRI imaging should be chosen to be consistent with the CT images of the patient in terms of location and angle.

  8. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Angel D. Sappa

    2016-06-01

    Full Text Available This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated here result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most existing approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of obtained results; these correlations can be used to define criteria for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).

  9. Image fusion for enhanced forest structural assessment

    CSIR Research Space (South Africa)

    Roberts, JW

    2011-01-01

    Full Text Available This research explores the potential benefits of fusing active and passive medium resolution satellite-borne sensor data for forest structural assessment. Image fusion was applied as a means of retaining disparate data features relevant to modeling...

  10. Region-based multifocus image fusion for the precise acquisition of Pap smear images.

    Science.gov (United States)

    Tello-Mijares, Santiago; Bescós, Jesús

    2018-05-01

    A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolaou (Pap smear) images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of original pixel information while achieving negligible visibility of fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimal modification of the value of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
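
    The region-wise focus selection step described above can be illustrated with a simple focus measure (variance of a discrete Laplacian): for each segmented region, copy the pixels from the stack image in which that region scores highest. The segmentation itself (mean-shift in the paper) and the artifact removal step are omitted; the focus measure and function names are assumptions, not the paper's implementation.

```python
import numpy as np

def focus_measure(img):
    """Variance of a 5-point discrete Laplacian: higher means better focused."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def fuse_regions(stack, labels):
    """For each labelled region, copy pixels from the stack image in which
    that region is best focused.

    stack:  list of HxW images captured at different lens positions.
    labels: HxW integer region map (e.g. from a prior segmentation).
    """
    fused = np.zeros_like(stack[0])
    for r in np.unique(labels):
        mask = labels == r
        # score each image on the masked region (zeros elsewhere)
        best = max(stack, key=lambda im: focus_measure(np.where(mask, im, 0.0)))
        fused[mask] = best[mask]
    return fused
```

    Masking with zeros introduces spurious gradients at region borders, which is one reason the paper follows region merging with an adaptive artifact removal pass.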

  11. An object-oriented framework for medical image registration, fusion, and visualization.

    Science.gov (United States)

    Zhu, Yang-Ming; Cochoff, Steven M

    2006-06-01

    An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework are illustrated to show its effectiveness: the first is for volume image grouping and re-sampling, the second for 2D registration and fusion, and the last for visualization of single images as well as registered volume images.

  12. Prediction-based Audiovisual Fusion for Classification of Non-Linguistic Vocalisations

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Prediction plays a key role in recent computational models of the brain and it has been suggested that the brain constantly makes multisensory spatiotemporal predictions. Inspired by these findings we tackle the problem of audiovisual fusion from a new perspective based on prediction. We train

  13. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    Full Text Available The image segmentation method based on a two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood, and is essentially a hard-decision method. Due to the uncertainties in labeling pixels around the threshold, hard-decision methods can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image, and then segment the color image according to fuzzy reasoning. The experimental results show that our proposed method obtains better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarization SAR images.
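
    The idea of modelling per-channel uncertainty with membership functions and combining them by fuzzy reasoning can be sketched as follows, using triangular memberships and the minimum as the fuzzy AND. The membership shape and all numeric values are illustrative assumptions, not the paper's.

```python
import numpy as np

def tri_membership(x, center, width):
    """Triangular membership: 1 at `center`, falling linearly to 0 at center +/- width."""
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, 1.0)

def fuzzy_segment(img, centers, width=60.0, cut=0.5):
    """Label a pixel as target when the fuzzy AND (minimum) of its
    per-channel memberships exceeds `cut`.

    img:     HxWx3 float array (color image).
    centers: expected target intensity per channel.
    All numbers here are illustrative.
    """
    memb = np.stack([tri_membership(img[..., c], centers[c], width)
                     for c in range(img.shape[-1])], axis=-1)
    return memb.min(axis=-1) > cut
```

    A hard threshold per channel would flip pixels near the boundary on noise alone; the graded memberships let borderline pixels be decided by the joint evidence of all three channels.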

  14. Enhanced Deforestation Mapping in North Korea using Spatial-temporal Image Fusion Method and Phenology-based Index

    Science.gov (United States)

    Jin, Y.; Lee, D.

    2017-12-01

    North Korea (the Democratic People's Republic of Korea, DPRK) is known to have some of the most degraded forest in the world. The forest landscape in North Korea is complex and heterogeneous; the major vegetation cover types are hillside farms, unstocked forest, natural forest, and plateau vegetation. Better classification of these types at high spatial resolution in deforested areas could provide essential information for decisions about forest management priorities and restoration. For mapping heterogeneous vegetation covers, phenology-based indices help overcome the confusion of reflectance values that occurs when using single-season images. Coarse spatial resolution images can be acquired at a high repetition rate, which is useful for analyzing phenological characteristics, but they may not capture the spatial detail of the land cover mosaic of the region of interest. Previous spatial-temporal fusion methods either captured only the temporal change, or addressed both temporal and spatial change but with low accuracy in heterogeneous landscapes and for small patches. In this study, a new spatial-temporal image fusion method focused on heterogeneous landscapes is proposed to produce images at both fine spatial and fine temporal resolution. We classified pixels into three types between the base image and the target image: the first type, where only reflectance changed due to phenology, supplies reflectance, shape, and texture information; the second type, where both reflectance and spectrum changed in some bands due to phenology (e.g., rice paddy or farmland), supplies only shape and texture information; the third type, where reflectance and spectrum changed because the land cover type changed, provides no information, because we cannot know how the land cover changed in the target image. Each type of pixel was handled with a different prediction method.

  15. IMPROVING THE QUALITY OF NEAR-INFRARED IMAGING OF IN VIVO BLOOD VESSELS USING IMAGE FUSION METHODS

    DEFF Research Database (Denmark)

    Jensen, Andreas Kryger; Savarimuthu, Thiusius Rajeeth; Sørensen, Anders Stengaard

    2009-01-01

    We investigate methods for improving the visual quality of in vivo images of blood vessels in the human forearm. Using a near-infrared light source and a dual CCD chip camera system capable of capturing images at visual and nearinfrared spectra, we evaluate three fusion methods in terms...... of their capability of enhancing the blood vessels while preserving the spectral signature of the original color image. Furthermore, we investigate a possibility of removing hair in the images using a fusion rule based on the "a trous" stationary wavelet decomposition. The method with the best overall performance...... with both speed and quality in mind is the Intensity Injection method. Using the developed system and the methods presented in this article, it is possible to create images of high visual quality with highly emphasized blood vessels....

  16. On the increase of predictive performance with high-level data fusion

    International Nuclear Information System (INIS)

    Doeswijk, T.G.; Smilde, A.K.; Hageman, J.A.; Westerhuis, J.A.; Eeuwijk, F.A. van

    2011-01-01

    The combination of different data sources for classification purposes, also called data fusion, can be done at different levels: low-level, i.e. concatenating data matrices; medium-level, i.e. concatenating data matrices after feature selection; and high-level, i.e. combining model outputs. In this paper the predictive performance of high-level data fusion is investigated. Partial least squares is used on each of the data sets, with dummy variables representing the classes as response variables. Based on the estimated response ŷ_j for data set j and class k, a Gaussian distribution p(g_k | ŷ_j) is fitted. A simulation study is performed that shows the theoretical performance of high-level data fusion for two classes and two data sets. Within-group correlations of the predicted responses of the two models, and differences between the predictive ability of each of the separate models and the fused models, are studied. Results show that the error rate is always less than or equal to that of the best performing subset and can theoretically approach zero. Negative within-group correlations always improve the predictive performance. However, if the data sets have a joint basis, as with metabolomics data, this is not likely to happen. For equally performing individual classifiers, the best results are expected for small within-group correlations. Fusion of a non-predictive classifier with a classifier that exhibits discriminative ability leads to increased predictive performance if the within-group correlations are strong. An example with real-life data shows the applicability of the simulation results.
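
    The high-level fusion scheme described can be sketched as follows: each data set contributes a class-conditional Gaussian density fitted to its predicted responses, and the densities are multiplied (assuming independence between data sets) to form a fused posterior. The function names and the uniform priors are assumptions for illustration.

```python
import numpy as np

def gaussian_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) at y."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def fuse_posteriors(y_hats, params, priors=(0.5, 0.5)):
    """High-level fusion of per-dataset predictions.

    y_hats:       predicted responses, one per data set j.
    params[j][k]: (mu, sigma) of the Gaussian fitted to training
                  predictions of data set j under class k.
    Class-conditional densities are multiplied across data sets
    (independence assumption) and normalised to a posterior.
    """
    post = np.array(priors, dtype=float)
    for j, y in enumerate(y_hats):
        post *= [gaussian_pdf(y, *params[j][k]) for k in range(len(post))]
    return post / post.sum()
```

    The paper's point about within-group correlations is precisely where this sketch is optimistic: when the per-dataset predictions are positively correlated (a joint basis, as in metabolomics), the product rule overcounts the shared evidence.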

  17. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    Science.gov (United States)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  18. The 'Lumbar Fusion Outcome Score' (LUFOS): a new practical and surgically oriented grading system for preoperative prediction of surgical outcomes after lumbar spinal fusion in patients with degenerative disc disease and refractory chronic axial low back pain.

    Science.gov (United States)

    Mattei, Tobias A; Rehman, Azeem A; Teles, Alisson R; Aldag, Jean C; Dinh, Dzung H; McCall, Todd D

    2017-01-01

    In order to evaluate the predictive effect of non-invasive preoperative imaging methods on surgical outcomes of lumbar fusion for patients with degenerative disc disease (DDD) and refractory chronic axial low back pain (LBP), the authors conducted a retrospective review of 45 patients with DDD and refractory LBP submitted to anterior lumbar interbody fusion (ALIF) at a single center from 2007 to 2010. Surgical outcomes - as measured by the Visual Analog Scale (VAS/back pain) and Oswestry Disability Index (ODI) - were evaluated pre-operatively and at 6 weeks, 3 months, 6 months, and 1 year post-operatively. Linear mixed-effects models were generated in order to identify possible preoperative imaging characteristics (including bone scan/99mTc scintigraphy increased endplate uptake, Modic endplate changes, and disc degeneration graded according to the Pfirrmann classification) which may be predictive of long-term surgical outcomes. After controlling for confounders, a combined score, the Lumbar Fusion Outcome Score (LUFOS), was developed. The LUFOS grading system was able to stratify patients into two general groups (non-surgical: LUFOS 0 and 1; surgical: LUFOS 2 and 3) that presented significantly different surgical outcomes in terms of estimated marginal means of VAS/back pain (p = 0.001) and ODI (p = 0.006) beginning at 3 months and continuing up to 1 year of follow-up. In conclusion, LUFOS has been devised as a new practical and surgically oriented grading system based on simple key parameters from non-invasive preoperative imaging exams (magnetic resonance imaging/MRI and bone scan/99mTc scintigraphy) which has been shown to be highly predictive of surgical outcomes of patients undergoing lumbar fusion for treatment of refractory chronic axial LBP.

  19. CT and MR image fusion using two different methods after prostate brachytherapy: impact on post-implant dosimetric assessment

    International Nuclear Information System (INIS)

    Servois, V.; El Khoury, C.; Lantoine, A.; Ollivier, L.; Neuenschwander, S.; Chauveinc, L.; Cosset, J.M.; Flam, T.; Rosenwald, J.C.

    2003-01-01

    To study different methods of CT and MR image fusion in patients treated by brachytherapy for localized prostate cancer, and to compare the results of the dosimetric study realized on CT slices and on fused images. Fourteen cases of patients treated with 125I seeds were retrospectively studied. The CT examinations were realized with continuous 5 mm thick sections, and the MR images were obtained with a surface coil with contiguous 3 mm thick sections. For the image fusion process, only the T2-weighted MR sequence was used. Two image fusion processes were realized for each patient, using as reference marks the bones of the pelvis and the implanted seeds. A quantitative and qualitative appreciation was made by the operators for each patient and both methods of image fusion. The dosimetric study, obtained with dedicated software, was realized on CT images and on all types of fused images. The usual dosimetric indexes (D90, V100 and V150) were compared for each type of image. The quantitative results given by the image fusion software showed superior accuracy compared with that obtained by the pelvic bony reference marks. Likewise, qualitative and quantitative results obtained by the operators showed better accuracy for the image fusion based on iodine seeds. For two patients out of three presenting a D90 inferior to 145 Gy on CT examination, the D90 was superior to this norm when the dosimetry was based on image fusion, whatever the method used. The image fusion method based on implanted seed matching seems to be more precise than the one using bony reference marks. The dosimetric study realized on fused images could allow the correction of possible errors, mainly due to difficulties in delimiting the prostate contour on CT images. (authors)

  20. Computer-aided global breast MR image feature analysis for prediction of tumor response to chemotherapy: performance assessment

    Science.gov (United States)

    Aghaei, Faranak; Tan, Maxine; Hollingsworth, Alan B.; Zheng, Bin; Cheng, Samuel

    2016-03-01

    Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) has been used increasingly in breast cancer diagnosis and assessment of cancer treatment efficacy. In this study, we applied a computer-aided detection (CAD) scheme to automatically segment breast regions depicted on MR images and used the kinetic image features computed from the global breast MR images acquired before neoadjuvant chemotherapy to build a new quantitative model to predict the response of breast cancer patients to the chemotherapy. To assess the performance and robustness of this new prediction model, an image dataset involving breast MR images acquired from 151 cancer patients before undergoing neoadjuvant chemotherapy was retrospectively assembled and used. Among them, 63 patients had a "complete response" (CR) to chemotherapy, in which the enhanced contrast levels inside the tumor volume (pre-treatment) were reduced to the level of the normal enhanced background parenchymal tissues (post-treatment), while 88 patients had a "partial response" (PR), in which high contrast enhancement remained in the tumor regions after treatment. We analyzed the correlation among the 22 global kinetic image features and then selected a set of 4 optimal features. Applying an artificial neural network trained with the fusion of these 4 kinetic image features, the prediction model yielded an area under the ROC curve (AUC) of 0.83 ± 0.04. This study demonstrated that, by avoiding tumor segmentation, which is often difficult and unreliable, fusion of kinetic image features computed from global breast MR images can also generate a useful clinical marker for predicting the efficacy of chemotherapy.

  1. Research on multi-source image fusion technology in haze environment

    Science.gov (United States)

    Ma, GuoDong; Piao, Yan; Li, Bing

    2017-11-01

    In a haze environment, the visible image collected by a single sensor can express the shape, color, and texture details of the target very well, but because of the haze its sharpness is low and parts of the target subject are lost. An infrared image collected by a single sensor, owing to its expression of thermal radiation and its strong penetration ability, can clearly express the target subject but loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, an improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Secondly, an improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on complementary information is used to fuse the images. Experiments show that the proposed method can improve the clarity of the visible target and highlight the occluded infrared target for target recognition.
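
    As a rough illustration of the dehazing preprocessing step, the standard dark channel prior (per-pixel channel minimum followed by a patch minimum filter) can be sketched as follows. This is a naive numpy version of the textbook prior, not the authors' improved algorithm; the patch size and the top-fraction used for atmospheric light estimation are assumed defaults:

    ```python
    import numpy as np

    def dark_channel(img, patch=15):
        """Dark channel of an H x W x 3 image: minimum over the colour
        channels, then a minimum filter over a patch x patch window.
        Haze-free outdoor regions tend toward zero; haze raises it."""
        m = img.min(axis=2)                   # channel-wise minimum
        h, w = m.shape
        r = patch // 2
        padded = np.pad(m, r, mode="edge")
        out = np.empty_like(m)
        for i in range(h):
            for j in range(w):
                out[i, j] = padded[i:i + patch, j:j + patch].min()
        return out

    def estimate_atmosphere(img, dark, top=0.001):
        """Atmospheric light: mean colour of the brightest `top`
        fraction of pixels in the dark channel."""
        n = max(1, int(top * dark.size))
        idx = np.argsort(dark.ravel())[-n:]
        return img.reshape(-1, 3)[idx].mean(axis=0)
    ```

    With the atmospheric light and a transmission map derived from the dark channel, the haze image model can then be inverted to recover the scene radiance.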

  2. Extended feature-fusion guidelines to improve image-based multi-modal biometrics

    CSIR Research Space (South Africa)

    Brown, Dane

    2016-09-01

    Full Text Available The feature-level, unlike the match score-level, lacks multi-modal fusion guidelines. This work demonstrates a practical approach for improved image-based biometric feature-fusion. The approach extracts and combines the face, fingerprint...

  3. A Comparison of Accuracy of Image- versus Hardware-based Tracking Technologies in 3D Fusion in Aortic Endografting.

    Science.gov (United States)

    Rolls, A E; Maurel, B; Davis, M; Constantinou, J; Hamilton, G; Mastracci, T M

    2016-09-01

    Fusion of three-dimensional (3D) computed tomography and intraoperative two-dimensional imaging in endovascular surgery relies on manual rigid co-registration of bony landmarks and tracking of hardware to provide a 3D overlay (hardware-based tracking, HWT). An alternative technique (image-based tracking, IMT) uses image recognition to register and place the fusion mask. We present preliminary experience with an agnostic fusion technology that uses IMT, with the aim of comparing the accuracy of overlay for this technology with HWT. Data were collected prospectively for 12 patients. All devices were deployed using both IMT and HWT fusion assistance concurrently. Postoperative analysis of both systems was performed by three blinded expert observers, from selected time-points during the procedures, using the displacement of fusion rings, the overlay of vascular markings, and the true ostia of the renal arteries. The mean overlay error and the deviation from the mean error were derived using image analysis software, the mean overlay error was compared between IMT and HWT, and the validity of the point-picking technique was assessed. IMT was successful in all of the first 12 cases, whereas technical learning-curve challenges thwarted HWT in four cases. When independent operators assessed the degree of accuracy of the overlay, the median error for IMT was 3.9 mm (IQR 2.89-6.24, max 9.5) versus 8.64 mm (IQR 6.1-16.8, max 24.5) for HWT (p = .001). Variance per observer was 0.69 mm(2) and the 95% limit of agreement ±1.63. In this preliminary study, the magnitude of displacement error from the "true anatomy" during image overlay was smaller for IMT than for HWT. This confirms that ongoing manual re-registration, as recommended by the manufacturer, should be performed for HWT systems to maintain accuracy. The error in the position of the fusion markers for IMT was consistent and thus may be considered predictable. Copyright © 2016 European Society for Vascular Surgery.

  4. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    Science.gov (United States)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.

  5. Real-time image fusion involving diagnostic ultrasound

    DEFF Research Database (Denmark)

    Ewertsen, Caroline; Săftoiu, Adrian; Gruionu, Lucian G

    2013-01-01

    The aim of our article is to give an overview of the current and future possibilities of real-time image fusion involving ultrasound. We present a review of the existing English-language peer-reviewed literature assessing this technique, which covers technical solutions (for ultrasound...

  6. Development of a novel fusion imaging technique in the diagnosis of hepatobiliary-pancreatic lesions

    International Nuclear Information System (INIS)

    Soga, Koichi; Ochiai, Jun; Miyajima, Takashi; Kassai, Kyoichi; Itani, Kenji; Yagi, Nobuaki; Naito, Yuji

    2013-01-01

    Multidetector-row computed tomography (MDCT) and magnetic resonance cholangiopancreatography (MRCP) play an important role in the imaging diagnosis of hepatobiliary-pancreatic lesions. Here we investigated whether unifying the MDCT and MRCP images onto the same screen using fusion imaging could overcome the limitations of each technique while still maintaining their benefits. Moreover, because reports of fusion imaging using MDCT and MRCP are rare, we assessed the benefits and limitations of this method for its potential application in a clinical setting. The patient group included 9 men and 11 women. Among the 20 patients, the final diagnoses were as follows: 10 intraductal papillary mucinous neoplasms, 5 biliary system carcinomas, 1 pancreatic adenocarcinoma and 5 non-neoplastic lesions. After transmitting the Digital Imaging and Communication in Medicine data of the MDCT and MRCP images to a workstation, we performed a 3-D organisation of both sets of images using volume rendering for the image fusion. Fusion imaging enabled clear identification of the spatial relationship between a hepatobiliary-pancreatic lesion and the solid viscera and/or vessels. Further, this method made it easier to determine the relationship between the anatomical position of the lesion and its surroundings than either MDCT or MRCP alone. Fusion imaging is an easy technique to perform and may be a useful tool for planning treatment strategies and for examining pathological changes in hepatobiliary-pancreatic lesions. Additionally, the ease of obtaining the 3-D images suggests the possibility of using these images to plan intervention strategies.

  7. Multimodality Tumor Delineation and Predictive Modelling via Fuzzy-Fusion Deformable Models and Biological Potential Functions

    Science.gov (United States)

    Wasserman, Richard Marc

    The radiation therapy treatment planning (RTTP) process may be subdivided into three planning stages: gross tumor delineation, clinical target delineation, and modality dependent target definition. The research presented will focus on the first two planning tasks. A gross tumor target delineation methodology is proposed which focuses on the integration of MRI, CT, and PET imaging data towards the generation of a mathematically optimal tumor boundary. The solution to this problem is formulated within a framework integrating concepts from the fields of deformable modelling, region growing, fuzzy logic, and data fusion. The resulting fuzzy fusion algorithm can integrate both edge and region information from multiple medical modalities to delineate optimal regions of pathological tissue content. The subclinical boundaries of an infiltrating neoplasm cannot be determined explicitly via traditional imaging methods and are often defined to extend a fixed distance from the gross tumor boundary. In order to improve the clinical target definition process an estimation technique is proposed via which tumor growth may be modelled and subclinical growth predicted. An in vivo, macroscopic primary brain tumor growth model is presented, which may be fit to each patient undergoing treatment, allowing for the prediction of future growth and consequently the ability to estimate subclinical local invasion. Additionally, the patient specific in vivo tumor model will be of significant utility in multiple diagnostic clinical applications.

  8. Millimeter-wave imaging of magnetic fusion plasmas: technology innovations advancing physics understanding

    Science.gov (United States)

    Wang, Y.; Tobias, B.; Chang, Y.-T.; Yu, J.-H.; Li, M.; Hu, F.; Chen, M.; Mamidanna, M.; Phan, T.; Pham, A.-V.; Gu, J.; Liu, X.; Zhu, Y.; Domier, C. W.; Shi, L.; Valeo, E.; Kramer, G. J.; Kuwahara, D.; Nagayama, Y.; Mase, A.; Luhmann, N. C., Jr.

    2017-07-01

    Electron cyclotron emission (ECE) imaging is a passive radiometric technique that measures electron temperature fluctuations; and microwave imaging reflectometry (MIR) is an active radar imaging technique that measures electron density fluctuations. Microwave imaging diagnostic instruments employing these techniques have made important contributions to fusion science and have been adopted at major fusion facilities worldwide including DIII-D, EAST, ASDEX Upgrade, HL-2A, KSTAR, LHD, and J-TEXT. In this paper, we describe the development status of three major technological advancements: custom mm-wave integrated circuits (ICs), digital beamforming (DBF), and synthetic diagnostic modeling (SDM). These have the potential to greatly advance microwave fusion plasma imaging, enabling compact and low-noise transceiver systems with real-time, fast tracking ability to address critical fusion physics issues, including ELM suppression and disruptions in the ITER baseline scenario, naturally ELM-free states such as QH-mode, and energetic particle confinement (i.e. Alfvén eigenmode stability) in high-performance regimes that include steady-state and advanced tokamak scenarios. Furthermore, these systems are fully compatible with today’s most challenging non-inductive heating and current drive systems and capable of operating in harsh environments, making them the ideal approach for diagnosing long-pulse and steady-state tokamaks.

  9. Live Imaging of Mouse Secondary Palate Fusion

    Czech Academy of Sciences Publication Activity Database

    Kim, S.; Procházka, Jan; Bush, J.O.

    jaro, č. 125 (2017), č. článku e56041. ISSN 1940-087X Institutional support: RVO:68378050 Keywords : Developmental Biology * Issue 125 * live imaging * secondary palate * tissue fusion * cleft * craniofacial Subject RIV: EB - Genetics ; Molecular Biology OBOR OECD: Developmental biology Impact factor: 1.232, year: 2016

  10. SAR and Infrared Image Fusion in Complex Contourlet Domain Based on Joint Sparse Representation

    Directory of Open Access Journals (Sweden)

    Wu Yiquan

    2017-08-01

    Full Text Available To address the large grayscale difference between infrared and Synthetic Aperture Radar (SAR) images and the poor suitability of their fused image for human visual perception, we propose a fusion method for SAR and infrared images in the complex contourlet domain based on joint sparse representation. First, we perform complex contourlet decomposition of the infrared and SAR images. Then, we employ the K-Singular Value Decomposition (K-SVD) method to obtain an over-complete dictionary of the low-frequency components of the two source images. Using a joint sparse representation model, we then generate a joint dictionary. We obtain the sparse representation coefficients of the low-frequency components of the source images in the joint dictionary by the Orthogonal Matching Pursuit (OMP) method and select them using the selection-maximization strategy. We then reconstruct these components to obtain the fused low-frequency components, and fuse the high-frequency components using two criteria: the coefficient of visual sensitivity and the degree of energy matching. Finally, we obtain the fusion image by the inverse complex contourlet transform. Compared with three classical fusion methods and recently presented fusion methods, e.g., one based on the Non-Subsampled Contourlet Transform (NSCT) and another based on sparse representation, the method proposed in this paper can effectively highlight the salient features of the two source images and inherit their information to the greatest extent.
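
    The OMP step used above to obtain the sparse coefficients can be sketched in a few lines of numpy. This is a textbook greedy implementation (pick the atom most correlated with the residual, then re-fit all selected atoms by least squares), not the authors' code; the dictionary is assumed to have unit-norm columns:

    ```python
    import numpy as np

    def omp(D, y, k):
        """Orthogonal Matching Pursuit: return a k-sparse code x with
        D @ x ≈ y. After each least-squares re-fit the residual is
        orthogonal to the selected atoms, so no atom is picked twice."""
        residual = y.astype(float).copy()
        support = []
        x = np.zeros(D.shape[1])
        for _ in range(k):
            # atom most correlated with the current residual
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coeffs
        x[support] = coeffs
        return x
    ```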

  11. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    Science.gov (United States)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix and fused by a standard deviation (SD) based fusion rule; the fused sparse component is then obtained by reconstructing the fused measurement with the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Finally, the fused image is obtained by superposing the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. Comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
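
    The RPCA decomposition in the first phase splits an image matrix into a low-rank part (background) plus a sparse part (salient targets). A generic sketch of principal component pursuit by an inexact augmented Lagrange multiplier scheme is shown below, with standard default parameters; this is not the paper's FCLALM solver, just the classical baseline the framework builds on:

    ```python
    import numpy as np

    def rpca(M, lam=None, mu=None, rho=1.05, iters=300, tol=1e-7):
        """Decompose M ≈ L + S, L low-rank and S sparse, by alternating
        singular-value thresholding (L) and soft thresholding (S)."""
        m, n = M.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        mu = mu if mu is not None else 1.25 / np.linalg.norm(M, 2)
        Y = np.zeros_like(M)        # Lagrange multiplier
        S = np.zeros_like(M)
        normM = np.linalg.norm(M)
        for _ in range(iters):
            # low-rank update: shrink singular values by 1/mu
            U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
            # sparse update: element-wise soft thresholding by lam/mu
            T = M - L + Y / mu
            S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
            Y = Y + mu * (M - L - S)
            mu *= rho
            if np.linalg.norm(M - L - S) <= tol * normM:
                break
        return L, S
    ```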

  12. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain

    Science.gov (United States)

    Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin

    2018-04-01

    Multiresolution-based methods, such as the wavelet and Contourlet transforms, are commonly used for image fusion. This work presents a new image fusion framework utilizing area-based standard deviation in the dual-tree Contourlet transform domain. Firstly, the pre-registered source images are decomposed with the dual-tree Contourlet transform to obtain low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
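
    The two fusion rules named above are simple enough to sketch directly on a pair of subbands: area-based standard-deviation weighting for the low-pass bands and the max-absolute rule for the high-pass bands. A numpy illustration under assumed parameters (the 5 x 5 window size is a guess, and the transform itself is omitted):

    ```python
    import numpy as np

    def local_std(band, win=5):
        """Standard deviation over a sliding win x win window
        (edge-padded), computed by stacking shifted copies."""
        r = win // 2
        p = np.pad(band, r, mode="edge")
        stack = np.stack([p[i:i + band.shape[0], j:j + band.shape[1]]
                          for i in range(win) for j in range(win)])
        return stack.std(axis=0)

    def fuse_lowpass(a, b, win=5, eps=1e-12):
        """Weighted average of two low-pass bands with weights
        proportional to local (area-based) standard deviation; flat
        regions where both weights vanish fall back to a plain average."""
        wa, wb = local_std(a, win), local_std(b, win)
        s = wa + wb
        return np.where(s > eps, (wa * a + wb * b) / np.maximum(s, eps),
                        0.5 * (a + b))

    def fuse_highpass(a, b):
        """Max-absolute rule: keep the coefficient with larger magnitude."""
        return np.where(np.abs(a) >= np.abs(b), a, b)
    ```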

  13. Application of Multimodality Imaging Fusion Technology in Diagnosis and Treatment of Malignant Tumors under the Precision Medicine Plan.

    Science.gov (United States)

    Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying

    2016-12-20

    The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different imaging modalities can be integrated and comprehensively analyzed by image fusion systems. This review aimed to update the application of multimodality image fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduce several multimodality image fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice reports, reviews, and other relevant literature published in English were reviewed; papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected, and duplicated papers were excluded. Multimodality image fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, including accurate localization, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality image fusion systems can provide more imaging information on tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality image fusion technology will find increasingly wide application in clinical practice.

  14. Tissue identification with micro-magnetic resonance imaging in a caprine spinal fusion model

    NARCIS (Netherlands)

    Uffen, M.; Krijnen, M.; Hoogendoorn, R.; Strijkers, G.; Everts, V.; Wuisman, P.; Smit, T.

    2008-01-01

    Nonunion is a major complication of spinal interbody fusion. Currently X-ray and computed tomography (CT) are used for evaluating the spinal fusion process. However, both imaging modalities have limitations in judgment of the early stages of this fusion process, as they only visualize mineralized

  15. Spinal fusion-hardware construct: Basic concepts and imaging review

    Science.gov (United States)

    Nouh, Mohamed Ragab

    2012-01-01

    The interpretation of spinal images fixed with metallic hardware forms an increasing bulk of daily practice in a busy imaging department. Radiologists are required to be familiar with the instrumentation and operative options used in spinal fixation and fusion procedures, especially those used in their own institution. This is critical in evaluating the position of implants and the potential complications associated with the operative approaches and spinal fixation devices used. Thus, the radiologist can play an important role in patient care and outcome. This review outlines the advantages and disadvantages of commonly used imaging methods, reports the best yield for each modality, and describes how to overcome the problematic issues associated with the presence of metallic hardware during imaging. Baseline radiographs are essential, as they are the reference point for evaluating future studies should patients develop symptoms suggesting possible complications. They may justify further imaging workup with computed tomography, magnetic resonance and/or nuclear medicine studies, as the evaluation of a patient with a spinal implant involves a multi-modality approach. This review describes the imaging features of potential complications associated with spinal fusion surgery as well as the instrumentation used. This basic knowledge aims to help radiologists approach everyday practice in clinical imaging. PMID:22761979

  16. Spectral edge: gradient-preserving spectral mapping for image fusion.

    Science.gov (United States)

    Connah, David; Drew, Mark S; Finlayson, Graham D

    2015-12-01

    This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.

  17. An enhanced approach for biomedical image restoration using image fusion techniques

    Science.gov (United States)

    Karam, Ghada Sabah; Abbas, Fatma Ismail; Abood, Ziad M.; Kadhim, Kadhim K.; Karam, Nada S.

    2018-05-01

    Biomedical images are generally noisy and slightly blurred due to the physical mechanisms of the acquisition process, so common degradations in biomedical images are noise and poor contrast. The idea of biomedical image enhancement is to improve the quality of the image for early diagnosis. In this paper we use the wavelet transform to remove Gaussian noise from biomedical images, a Positron Emission Tomography (PET) image and a radiography (Radio) image, in different color spaces (RGB, HSV, YCbCr), and we fuse the denoised images resulting from the above denoising techniques using an image-addition method. Quantitative performance metrics such as the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and mean square error (MSE) are then computed, since these statistical measurements help in the assessment of fidelity and image quality. The results showed that our approach can be applied to these types of biomedical images across color spaces.
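
    The quality metrics named above have standard definitions that are easy to sketch; a minimal numpy version (the `peak` value of 255 assumes 8-bit images, and the reference/degraded pair would be the original and denoised-fused image):

    ```python
    import numpy as np

    def mse(ref, img):
        """Mean square error between a reference and a test image."""
        return float(np.mean((ref.astype(float) - img.astype(float)) ** 2))

    def psnr(ref, img, peak=255.0):
        """Peak signal-to-noise ratio in dB; infinite for identical images."""
        m = mse(ref, img)
        return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

    def snr(ref, img):
        """Signal-to-noise ratio in dB relative to the reference energy."""
        m = mse(ref, img)
        return float("inf") if m == 0 else \
            10.0 * np.log10(np.mean(ref.astype(float) ** 2) / m)
    ```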

  18. Medical Image Fusion Algorithm Based on Nonlinear Approximation of Contourlet Transform and Regional Features

    Directory of Open Access Journals (Sweden)

    Hui Huang

    2017-01-01

    Full Text Available Considering the pros and cons of the contourlet transform and the demands of multimodality medical imaging, we propose a novel image fusion algorithm that combines nonlinear approximation of the contourlet transform with regional image features. The most important coefficient bands of the contourlet sparse matrix are retained by nonlinear approximation. Low-frequency and high-frequency regional features are then elaborated to fuse the medical images. The results strongly suggest that the proposed algorithm can improve the visual effect and quality of medical image fusion, as well as image denoising and enhancement.

  19. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    Science.gov (United States)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missed detections, so the final classification accuracy is not high. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compared three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform, and Brovey transform) and then selected the best fused image for the classification experiments. In the classification process, we chose four image classification algorithms (Minimum distance, Mahalanobis distance, Support Vector Machine, and ISODATA) for the contrast experiments. We used overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria and analysed the four classification results of the fused image. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image is best suited to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.
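
    The two accuracy criteria used above are both derived from the confusion matrix; a minimal numpy sketch of overall precision and Cohen's Kappa (the example matrix below is invented, not the paper's results):

    ```python
    import numpy as np

    def confusion_matrix(truth, pred, k):
        """k x k confusion matrix: rows are true classes, columns predicted."""
        cm = np.zeros((k, k), dtype=int)
        for t, p in zip(truth, pred):
            cm[t, p] += 1
        return cm

    def overall_accuracy(cm):
        """Fraction of correctly classified samples (trace / total)."""
        return cm.trace() / cm.sum()

    def kappa(cm):
        """Cohen's Kappa: agreement beyond what chance would give."""
        n = cm.sum()
        po = cm.trace() / n                         # observed agreement
        pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2 # chance agreement
        return (po - pe) / (1 - pe)
    ```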

  20. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized to optimize the predictive model parameters. An autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused result is smaller than that of any single model, so the prediction accuracy is improved. The simulation results are compared on two typical chaotic time series, the Lorenz and the Mackey–Glass series, and show that the proposed method achieves better prediction accuracy.
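
    For unbiased, independent predictors, the Gauss–Markov fusion step amounts to inverse-variance weighting, and the fused error variance is never larger than the best single model's variance. A minimal numpy sketch (the LSSVM/ARIMA prediction values and error variances below are invented for illustration):

    ```python
    import numpy as np

    def gauss_markov_fuse(preds, variances):
        """Fuse unbiased, independent predictions by inverse-variance
        (Gauss-Markov) weighting. Returns the fused prediction and its
        error variance 1 / sum(1/var_i), which is <= min(var_i)."""
        preds = np.asarray(preds, dtype=float)
        w = 1.0 / np.asarray(variances, dtype=float)
        fused = (w[:, None] * preds).sum(axis=0) / w.sum()
        return fused, 1.0 / w.sum()

    # hypothetical forecasts of two component models and their variances
    lssvm_pred = np.array([1.02, 0.98, 1.05])
    arima_pred = np.array([0.95, 1.01, 1.00])
    fused, var = gauss_markov_fuse([lssvm_pred, arima_pred], [0.04, 0.09])
    print(fused, var)  # var = 1/(25 + 100/9) ≈ 0.0277 < 0.04
    ```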

  1. Usefulness of CT-Based SPECT Fusion Images in Lung Disease: Preliminary Study

    International Nuclear Information System (INIS)

    Park, Hoon Hee; Lyu, Kwang Yeul; Kim, Tae Hyung; Shin, Ji Yun

    2012-01-01

    Recently, SPECT/CT systems have been applied to many diseases; however, they have not been extensively applied to pulmonary disease. In particular, when pulmonary embolism is suspected on CT images, SPECT is performed, and for an accurate diagnosis SPECT/CT examinations are subsequently undertaken. Without SPECT/CT, there are some limitations to applying these procedures; and even with SPECT/CT, most of the examinations are performed after CT. Moreover, such test procedures expose the patient to unnecessary duplicate irradiation. In this study, we evaluated the amount of unnecessary irradiation and the usefulness of fusion images of pulmonary disease acquired independently from SPECT and CT. Using a NEMA Phantom™ (NU2-2001), SPECT and CT scans were performed for the fusion images. From June 2011 to September 2010, 10 patients who had no medical history other than lung disease were selected (male: 7, female: 3, mean age: 65.3±12.7). In both the clinical patient and phantom data, the fusion images scored higher than the SPECT and CT images. The fusion images, which combine pulmonary vessel images from CT with functional images from SPECT, can increase the possibility of detecting pulmonary embolism in the lung parenchyma. Performing SPECT and CT on an integrated SPECT/CT system would certainly be better; however, we believe this protocol can provide more informative data for a more accurate diagnosis in hospitals without an integrated SPECT/CT system.

  2. Image fusion and denoising using fractional-order gradient information

    DEFF Research Database (Denmark)

    Mei, Jin-Jin; Dong, Yiqiu; Huang, Ting-Zhu

    Image fusion and denoising are significant in image processing because of the availability of multi-sensor and the presence of the noise. The first-order and second-order gradient information have been effectively applied to deal with fusing the noiseless source images. In this paper, due to the adv...... show that the proposed method outperforms the conventional total variation in methods for simultaneously fusing and denoising....

  3. a Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and Modis Data

    Science.gov (United States)

    Hazaymeh, K.; Almagbile, A.

    2018-04-01

    In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair from time-series datasets of the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated by each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. STI-FM produced more accurate reconstructions of both the Landsat-7 spectral bands and NDVI, and it produced surface reflectance images having the highest correlation with the actual Landsat-7 images. This study indicated that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration.
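
    The statistical measurements used above to score each model's synthetic image against the actual Landsat-7 image typically include correlation and error magnitude; a minimal numpy sketch of Pearson correlation and RMSE between two reflectance arrays:

    ```python
    import numpy as np

    def pearson_r(a, b):
        """Pearson correlation between two images (flattened)."""
        a = a.ravel().astype(float) - a.mean()
        b = b.ravel().astype(float) - b.mean()
        return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

    def rmse(a, b):
        """Root-mean-square error between predicted and actual images."""
        return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))
    ```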

  4. Real-time image registration and fusion in a FPGA architecture (Ad-FIRE)

    Science.gov (United States)

    Waters, T.; Swan, L.; Rickman, R.

    2011-06-01

    Real-time Image Registration is a key processing requirement of Waterfall Solutions' image fusion system, Ad-FIRE, which combines the attributes of high resolution visible imagery with the spectral response of low resolution thermal sensors in a single composite image. Implementing image fusion at video frame rates typically requires a high bandwidth video processing capability which, within a standard CPU-type processing architecture, necessitates bulky, high power components. Field Programmable Gate Arrays (FPGAs) offer the prospect of low power/heat dissipation combined with highly efficient processing architectures for use in portable, battery-powered, passively cooled applications, such as Waterfall Solutions' hand-held or helmet-mounted Ad-FIRE system.

  5. Two-Dimensional Image Fusion of Planar Bone Scintigraphy and Radiographs in Patients with Clinical Scaphoid Fracture: An Imaging Study

    DEFF Research Database (Denmark)

    Henriksen, O.M.; Lonsdale, M.N.; Jensen, T.D.

    2008-01-01

    Background: Although magnetic resonance imaging (MRI) is now considered the gold standard in second-line imaging of patients with suspected scaphoid fracture and negative radiographs, bone scintigraphy can be used in patients with pacemakers, metallic implants, or other contraindications to MRI. Bone scintigraphy is highly sensitive for the detection of fractures, but exact localization of scintigraphic lesions may be difficult and can negatively affect diagnostic accuracy. Purpose: To investigate the influence of image fusion of planar bone scintigraphy and radiographs on image interpretation. Conclusion: Image fusion of planar bone scintigrams and radiographs has a significant influence on image interpretation and increases both diagnostic confidence and interobserver agreement.

  6. Two-dimensional fusion imaging of planar bone scintigraphy and radiographs in patients with clinical scaphoid fracture: an imaging study

    DEFF Research Database (Denmark)

    Henriksen, Otto Mølby; Lonsdale, Markus Georg; Jensen, T D

    2009-01-01

    BACKGROUND: Although magnetic resonance imaging (MRI) is now considered the gold standard in second-line imaging of patients with suspected scaphoid fracture and negative radiographs, bone scintigraphy can be used in patients with pacemakers, metallic implants, or other contraindications to MRI. Bone scintigraphy is highly sensitive for the detection of fractures, but exact localization of scintigraphic lesions may be difficult and can negatively affect diagnostic accuracy. PURPOSE: To investigate the influence of image fusion of planar bone scintigraphy and radiographs on image interpretation. CONCLUSION: Image fusion of planar bone scintigrams and radiographs has a significant influence on image interpretation and increases both diagnostic confidence and interobserver agreement.

  7. Enhancing Health Risk Prediction with Deep Learning on Big Data and Revised Fusion Node Paradigm

    Directory of Open Access Journals (Sweden)

    Hongye Zhong

    2017-01-01

    Full Text Available With recent advances in health systems, the amount of health data is expanding rapidly and in various formats. These data originate from many new sources, including digital records, mobile devices, and wearable health devices. Big health data offers more opportunities for health data analysis and for enhancing health services through innovative approaches. The objective of this research is to develop a framework that enhances health prediction with revised fusion node and deep learning paradigms. The fusion node is an information fusion model for constructing prediction systems. Deep learning involves the complex application of machine-learning algorithms, such as Bayesian fusion and neural networks, for data extraction and logical inference. Deep learning, combined with information fusion paradigms, can be utilized to provide more comprehensive and reliable predictions from big health data. Based on the proposed framework, an experimental system is developed as an illustration of the framework implementation.

  8. Enabling image fusion for a CT guided needle placement robot

    Science.gov (United States)

    Seifabadi, Reza; Xu, Sheng; Aalamifar, Fereshteh; Velusamy, Gnanasekar; Puhazhendi, Kaliyappan; Wood, Bradford J.

    2017-03-01

    Purpose: This study presents the development and integration of hardware and software that enable ultrasound (US) and computed tomography (CT) fusion for an FDA-approved CT-guided needle placement robot. Having the real-time US image registered to a previously acquired intraoperative CT image provides more anatomic information during needle insertion, in order to target hard-to-see lesions, avoid critical structures invisible on CT, track target motion, and better monitor the ablation treatment zone in relation to the tumor location. Method: A passive, encoded mechanical arm was developed for the robot to hold and track an abdominal US transducer. This 4-degrees-of-freedom (DOF) arm is designed to attach to the robot end-effector. The arm is locked by default and is released at the press of a button. The arm is designed such that the needle is always in plane with the US image. The articulated arm was calibrated to improve its accuracy. Custom-designed software (OncoNav, NIH) was developed to fuse the real-time US image to the previously acquired CT. Results: The accuracy of the end effector before and after passive arm calibration was 7.07 +/- 4.14 mm and 1.74 +/- 1.60 mm, respectively. The accuracy of the US image to arm calibration was 5 mm. The feasibility of US-CT fusion using the proposed hardware and software was demonstrated in a commercial abdominal phantom. Conclusions: Calibration significantly improved the accuracy of the arm in US image tracking. Fusion of US to CT using the proposed hardware and software was feasible.

  9. A Modified Spatiotemporal Fusion Algorithm Using Phenological Information for Predicting Reflectance of Paddy Rice in Southern China

    Directory of Open Access Journals (Sweden)

    Mengxue Liu

    2018-05-01

    Full Text Available Satellite data for studying surface dynamics in heterogeneous landscapes are often missing owing to frequent cloud contamination, low temporal resolution, and technological difficulties in satellite development. A modified spatiotemporal fusion algorithm for predicting the reflectance of paddy rice is presented in this paper. The algorithm uses phenological information extracted from a Moderate Resolution Imaging Spectroradiometer enhanced vegetation index time series to improve the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM). The algorithm is tested with satellite data over Yueyang City, China. The main contribution of the modified algorithm is the selection of similar neighborhood pixels using phenological information, which improves accuracy. Results show that the modified algorithm performs better than ESTARFM in both visual inspection and quantitative metrics, especially for paddy rice. The modified algorithm provides not only new ideas for improving spatiotemporal data fusion methods but also technical support for generating remote sensing data with high spatial and temporal resolution.
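
    The similar-pixel selection step that the modification targets can be illustrated with a toy sketch; the phenological class labels, the reflectance tolerance, and the function name are hypothetical illustrations, not the paper's exact criteria:

```python
import numpy as np

def select_similar_pixels(refl, pheno, center, tol=0.02):
    """Mark neighbourhood pixels 'similar' to the window's centre pixel.

    A pixel qualifies only if (a) its reflectance is within `tol` of the
    centre pixel's and (b) it shares the centre pixel's phenological
    class -- the constraint the modified algorithm adds on top of
    ESTARFM's purely spectral test.
    """
    ci, cj = center
    spectral_ok = np.abs(refl - refl[ci, cj]) <= tol
    pheno_ok = pheno == pheno[ci, cj]
    return spectral_ok & pheno_ok

# toy 3x3 window: pixel (1, 0) is spectrally similar to the centre
# but belongs to a different phenological class, so it is rejected
refl = np.array([[0.21, 0.20, 0.35],
                 [0.20, 0.20, 0.36],
                 [0.19, 0.34, 0.35]])
pheno = np.array([[1, 1, 2],
                  [2, 1, 2],
                  [1, 2, 2]])
mask = select_similar_pixels(refl, pheno, center=(1, 1))
```

    In ESTARFM-style algorithms the retained pixels would then contribute, with distance- and similarity-based weights, to the predicted reflectance of the centre pixel.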

  10. Progressive multi-atlas label fusion by dictionary evolution.

    Science.gov (United States)

    Song, Yantao; Wu, Guorong; Bahrami, Khosro; Sun, Quansen; Shen, Dinggang

    2017-02-01

    Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework to seek suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to existing labeling methods for improving their label fusion performance, i.e., by extending their single-layer static dictionary to the multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results for the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary. Copyright © 2016 Elsevier B.V. All rights reserved.
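
    The single-layer, static-dictionary baseline that this paper improves on -- weights estimated in the image domain from patch similarity, then applied unchanged to atlas labels in the label domain -- can be sketched as follows. The Gaussian similarity weighting, binary labels, and all names are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, sigma=0.5):
    """Single-layer patch-based label fusion (static-dictionary baseline).

    Representation weights are estimated in the image domain from patch
    similarity and then applied directly to the atlas labels in the label
    domain -- the step where the image/label gap can reduce accuracy.
    """
    # squared intensity distance from the target patch to each atlas patch
    d2 = ((atlas_patches - target_patch) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian similarity weights
    w /= w.sum()
    # weighted vote over the (binary) atlas labels
    return int(round(float((w * atlas_labels).sum())))

# toy dictionary: one foreground exemplar (label 1), one background (label 0)
atlas = np.array([[1.0, 1.0, 1.0],
                  [0.0, 0.0, 0.0]])
labels = np.array([1, 0])
fg = patch_label_fusion(np.array([0.9, 1.0, 1.1]), atlas, labels)
bg = patch_label_fusion(np.array([0.1, 0.0, 0.1]), atlas, labels)
```

    The paper's contribution is to replace the single static dictionary here with a sequence of intermediate dictionaries, so the weights are gradually adapted from image-domain similarity toward label-domain structure.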

  11. Fusion of MODIS and landsat-8 surface temperature images: a new approach.

    Science.gov (United States)

    Hazaymeh, Khaled; Hassan, Quazi K

    2015-01-01

    Here, our objective was to develop a spatio-temporal image fusion model (STI-FM) for enhancing the temporal resolution of Landsat-8 land surface temperature (LST) images by fusing them with LST images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), and to implement the developed algorithm over a heterogeneous semi-arid study area in Jordan, Middle East. The STI-FM technique consists of two major components: (i) establishing a linear relationship between two consecutive MODIS 8-day composite LST images acquired at time 1 and time 2; and (ii) applying that relationship to a Landsat-8 LST image acquired at time 1 in order to predict a synthetic Landsat-8 LST image at time 2. Strong linear relationships existed between the two consecutive MODIS LST images (r2 values of 0.93-0.94, slopes of 0.94-0.99, and intercepts of 2.97-20.07). We evaluated the synthetic LST images qualitatively and found high visual agreement with the actual Landsat-8 LST images. Quantitative evaluations also showed strong agreement with the actual Landsat-8 LST images: r2, root mean square error (RMSE), and absolute average difference (AAD) values were in the ranges 0.84-0.90, 0.061-0.080, and 0.003-0.004, respectively.
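
    The two-step STI-FM procedure described above can be sketched in a few lines of numpy; the function name, the toy arrays, and the use of an ordinary least-squares fit for the gain and offset are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def stifm_predict(modis_t1, modis_t2, landsat_t1):
    """Predict a synthetic fine-resolution image at time 2 (STI-FM sketch).

    Step (i): fit a linear relationship modis_t2 ~ a * modis_t1 + b between
    the two consecutive coarse-resolution images.
    Step (ii): apply the same gain and offset to the fine-resolution image
    acquired at time 1.
    """
    a, b = np.polyfit(modis_t1.ravel(), modis_t2.ravel(), 1)
    return a * landsat_t1 + b

# toy example: a coarse scene that changes by a 1.05 gain and +2 K offset
rng = np.random.default_rng(0)
m1 = rng.uniform(290, 310, size=(6, 6))       # coarse LST at time 1
m2 = 1.05 * m1 + 2.0                          # coarse LST at time 2
l1 = rng.uniform(290, 310, size=(60, 60))     # fine LST at time 1
l2_pred = stifm_predict(m1, m2, l1)           # synthetic fine LST at time 2
```

    With noise-free inputs the fit recovers the gain and offset exactly, so the synthetic image is simply the time-1 fine image pushed through the coarse-scale temporal change.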

  12. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    Science.gov (United States)

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereolithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to be an accurate, realistic, and widespread tool, of great benefit to virtual face modeling.

  13. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter.

    Science.gov (United States)

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2018-02-13

    In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed, based on a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose the source images into low-pass layers, edge layers, and detail layers at multiple scales. To highlight detail information in the fused image, the edge layer and the detail layer at each scale are combined, with weighting, into a detail-enhanced layer. Because a directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A fusion rule based on visual saliency maps is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for fusing the directional coefficients. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with its shift invariance, directional selectivity, and detail-enhancement property, is effective in preserving and enhancing detail information in multimodality medical images. Graphical abstract: The detailed implementation of the proposed medical image fusion algorithm.
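
    A minimal two-layer analogue of this decompose-fuse-reconstruct pipeline can be sketched as follows; a plain 3x3 box filter stands in for the GMSF/GLF decomposition, and max-absolute selection stands in for the saliency-map and standard-deviation rules, so this is a simplified illustration rather than the paper's algorithm:

```python
import numpy as np

def box_blur(img):
    """3x3 mean filter (edge-padded), a simple low-pass stand-in."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def fuse_two_images(img_a, img_b):
    """Two-layer fusion sketch: decompose, fuse each layer, reconstruct.

    Low-pass (base) layers are averaged; detail layers are fused by
    max-absolute selection so the stronger local detail survives.
    """
    base_a, base_b = box_blur(img_a), box_blur(img_b)
    det_a, det_b = img_a - base_a, img_b - base_b
    fused_base = 0.5 * (base_a + base_b)
    fused_det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return fused_base + fused_det

img = np.arange(36, dtype=float).reshape(6, 6)
fused_same = fuse_two_images(img, img)   # fusing an image with itself
```

    Fusing an image with itself returns the image unchanged, a useful sanity check that the decomposition and reconstruction are consistent.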

  14. Clinical use of digital retrospective image fusion of CT, MRI, FDG-PET and SPECT - fields of indications and results

    International Nuclear Information System (INIS)

    Lemke, A.J.; Niehues, S.M.; Amthauer, H.; Felix, R.; Rohlfing, T.; Hosten, N.

    2004-01-01

    Purpose: To evaluate the feasibility and the clinical benefits of retrospective digital image fusion (PET, SPECT, CT and MRI). Materials and methods: In a prospective study, a total of 273 image fusions were performed and evaluated. The underlying image acquisitions (CT, MRI, SPECT and PET) were performed in a manner appropriate to the respective clinical question and anatomical region. Image fusion was executed with a software program developed during this study. The results of the image fusion procedure were evaluated in terms of technical feasibility, clinical objective, and therapeutic impact. Results: The most frequent combinations of modalities were CT/PET (n = 156) and MRI/PET (n = 59), followed by MRI/SPECT (n = 28), CT/SPECT (n = 22) and CT/MRI (n = 8). The clinical questions involved the following regions (more than one region per case possible): neurocranium (n = 42), neck (n = 13), lung and mediastinum (n = 24), abdomen (n = 181), and pelvis (n = 65). In 92.6% of all cases (n = 253), image fusion was technically successful. Image fusion was able to improve the sensitivity and specificity of the single modality, or to add important diagnostic information. Image fusion was problematic in cases of different body positions between the two imaging modalities or different positions of mobile organs. In 37.9% of the cases, image fusion added clinically relevant information compared with the single modality. Conclusion: For clinical questions concerning the liver, pancreas, rectum, neck, or neurocranium, image fusion is a reliable method suitable for routine clinical application. Organ motion still limits its feasibility and routine use in other areas (e.g., the thorax). (orig.)

  15. Coherence imaging spectro-polarimetry for magnetic fusion diagnostics

    International Nuclear Information System (INIS)

    Howard, J

    2010-01-01

    This paper presents an overview of developments in imaging spectro-polarimetry for magnetic fusion diagnostics. Using various multiplexing strategies, it is possible to construct optical polarization interferometers that deliver images of underlying physical parameters such as flow speed, temperature (Doppler effect) or magnetic pitch angle (motional Stark and Zeeman effects). This paper also describes and presents first results for a new spatial heterodyne interferometric system used for both Doppler and polarization spectroscopy.

  16. Three-dimensional Image Fusion Guidance for Transjugular Intrahepatic Portosystemic Shunt Placement.

    Science.gov (United States)

    Tacher, Vania; Petit, Arthur; Derbel, Haytham; Novelli, Luigi; Vitellius, Manuel; Ridouani, Fourat; Luciani, Alain; Rahmouni, Alain; Duvoux, Christophe; Salloum, Chady; Chiaradia, Mélanie; Kobeiter, Hicham

    2017-11-01

    To assess the safety, feasibility and effectiveness of image fusion guidance combining pre-procedural portal-phase computed tomography with intraprocedural fluoroscopy for transjugular intrahepatic portosystemic shunt (TIPS) placement. All consecutive cirrhotic patients presenting at our interventional unit for TIPS creation from January 2015 to January 2016 were prospectively enrolled. Procedures were performed under general anesthesia in an interventional suite equipped with a flat panel detector, cone-beam computed tomography (CBCT) and image fusion technology. All TIPSs were placed under image fusion guidance. After hepatic vein catheterization, an unenhanced CBCT acquisition was performed and co-registered with the pre-procedural portal-phase CT images. A virtual path between the hepatic vein and a portal branch was created using virtual needle path trajectory software. Subsequently, the 3D virtual path was overlaid on 2D fluoroscopy for guidance during portal branch cannulation. Safety, feasibility, effectiveness and per-procedural data were evaluated. Sixteen patients (12 males; median age 56 years) were included. Procedures were technically feasible in 15 of the 16 patients (94%). One procedure was aborted owing to hepatic vein catheterization failure related to severe liver distortion. No periprocedural complications occurred within 48 h of the procedure. The median dose-area product was 91 Gy cm2, fluoroscopy time 15 min, procedure time 40 min and contrast media consumption 65 mL. Clinical benefit of TIPS placement was observed in nine patients (56%). This study suggests that 3D image fusion guidance for TIPS is feasible, safe and effective. By identifying a virtual needle path, CBCT enables real-time multiplanar guidance and may facilitate TIPS placement.

  17. Myometrial invasion and overall staging of endometrial carcinoma: assessment using fusion of T2-weighted magnetic resonance imaging and diffusion-weighted magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Guo Y

    2017-12-01

    Full Text Available Yu Guo,1,2 Ping Wang,2 Penghui Wang,2 Wei Gao,1 Fenge Li,3 Xueling Yang,1 Hongyan Ni,2 Wen Shen,2 Zhi Guo1 1Department of Interventional Therapy, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, Tianjin’s Clinical Research Center for Cancer, Tianjin, 2Department of Radiology, Tianjin First Center Hospital, The First Central Clinical College of Tianjin Medical University, Tianjin, 3Department of Gynecology, Tianjin First Center Hospital, Tianjin, People’s Republic of China Background: The age of onset of endometrial carcinoma has been decreasing in recent years. In endometrial carcinoma, it is important to accurately assess invasion depth and preoperative staging. Fusion of T2-weighted magnetic resonance imaging (T2WI) and diffusion-weighted magnetic resonance imaging (DWI) may contribute to improved anatomical localization of lesions. Materials and methods: In our study, a total of 58 endometrial carcinoma cases were included. Based on the revised 2009 International Federation of Gynecology and Obstetrics staging system, fusion of T2WI and DWI was utilized for the evaluation of invasion depth and determination of the overall stage. Postoperative pathologic assessment was considered the reference standard. The consistency of T2WI image staging with pathologic staging, and the consistency of fused T2WI-DWI staging with pathologic staging, were analyzed using Kappa statistics. Results: Compared with the T2WI group, a significantly higher diagnostic accuracy was observed for myometrial invasion with fusion of T2WI and DWI (77.6% for T2WI; 94.8% for T2WI-DWI). For the identification of deep invasion, we calculated values for diagnostic sensitivity (69.2% for T2WI; 92.3% for T2WI-DWI), specificity (80% for T2WI; 95.6% for T2WI-DWI), positive predictive value (50% for T2WI; 85.7% for T2WI-DWI), and negative predictive value (90% for

  18. Label fusion based brain MR image segmentation via a latent selective model

    Science.gov (United States)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirements for higher accuracy, faster segmentation, and robustness remain a great challenge. In this paper, we propose a novel algorithm that combines the two branches using a global weighted fusion strategy based on a patch latent selective model, to perform segmentation of specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function as the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure, and we regard it as an isolated label to give the background the same standing as the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and an expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.

  19. Validating Inertial Confinement Fusion (ICF) predictive capability using perturbed capsules

    Science.gov (United States)

    Schmitt, Mark; Magelssen, Glenn; Tregillis, Ian; Hsu, Scott; Bradley, Paul; Dodd, Evan; Cobble, James; Flippo, Kirk; Offerman, Dustin; Obrey, Kimberly; Wang, Yi-Ming; Watt, Robert; Wilke, Mark; Wysocki, Frederick; Batha, Steven

    2009-11-01

    Achieving ignition on NIF is a monumental step on the path toward utilizing fusion as a controlled energy source. Obtaining robust ignition requires accurate ICF models to predict the degradation of ignition caused by heterogeneities in capsule construction and irradiation. LANL has embarked on a project to induce controlled defects in capsules to validate our ability to predict their effects on fusion burn. These efforts include the validation of feature-driven hydrodynamics and mix in a convergent geometry. This capability is needed to determine the performance of capsules imploded under less-than-optimum conditions on future IFE facilities. LANL's recently initiated Defect Implosion Experiments (DIME) conducted at Rochester's Omega facility are providing input for these efforts. Recent simulation and experimental results will be shown.

  20. CT, MRI and PET image fusion using the ProSoma 3D simulation software

    International Nuclear Information System (INIS)

    Dalah, E.; Bradley, D.A.; Nisbet, A.; Reise, S.

    2008-01-01

    Full text: Multi-modality imaging is involved in almost all oncology applications, focusing on the extent of disease and target volume delineation. Commercial image fusion software packages are becoming available but require comprehensive evaluation to ensure reliability of the fusion and of the underpinning registration algorithm, particularly for radiotherapy. The present work seeks to assess such accuracy for a number of registration methods provided by the commercial package ProSoma. A NEMA body phantom was used in evaluating CT, MR and PET images. In addition, discussion is provided concerning the choice and geometry of fiducial markers in phantom studies and the effect of window level on target size, in particular with regard to the application of multi-modality imaging in treatment planning. In general, fused multi-modality images agreed with actual feature diameters to within 0.5-1.5 mm and with actual volumes to within 2 ml, particularly for CT images. (author)

  1. CT-MR image data fusion for computer assisted navigated neurosurgery of temporal bone tumors

    International Nuclear Information System (INIS)

    Nemec, Stefan Franz; Donat, Markus Alexander; Mehrain, Sheida; Friedrich, Klaus; Krestan, Christian; Matula, Christian; Imhof, Herwig; Czerny, Christian

    2007-01-01

    Purpose: To demonstrate the value of multi-detector computed tomography (MDCT) and magnetic resonance imaging (MRI) in the preoperative work-up of temporal bone tumors and, in particular, to present CT-MR image fusion for surgical planning and performance in computer-assisted navigated neurosurgery of temporal bone tumors. Materials and methods: Fifteen patients with temporal bone tumors underwent MDCT and MRI. MDCT was performed in a high-resolution bone window level setting in the axial plane. The reconstructed MDCT slice thickness was 0.8 mm. MRI was performed in the axial and coronal planes with T2-weighted fast spin-echo (FSE) sequences, unenhanced and contrast-enhanced T1-weighted spin-echo (SE) sequences, coronal T1-weighted SE sequences with fat suppression, and 3D T1-weighted gradient-echo (GE) contrast-enhanced sequences in the axial plane. The 3D T1-weighted GE sequence had a slice thickness of 1 mm. The image data sets of the CT and 3D T1-weighted GE sequences were merged on a workstation to create CT-MR fusion images. The MDCT and MR images were used separately to depict and characterize lesions. The fusion images were utilized for interventional planning and intraoperative image guidance. The intraoperative accuracy of the navigation unit was measured, defined as the deviation between the same landmark in the navigation image and in the patient. Results: Tumorous lesions of bone and soft tissue were well delineated and characterized by the CT and MR images. The images played a crucial role in the differentiation of benign and malignant pathologies, which comprised 13 benign and 2 malignant tumors. The CT-MR fusion images supported the surgeon in preoperative planning and improved surgical performance. The mean intraoperative accuracy of the navigation system was 1.25 mm. Conclusion: CT and MRI are essential in the preoperative work-up of temporal bone tumors. CT-MR image data fusion presents an accurate tool for planning the correct surgical procedure and is a

  2. Echocardiographic and Fluoroscopic Fusion Imaging for Procedural Guidance: An Overview and Early Clinical Experience.

    Science.gov (United States)

    Thaden, Jeremy J; Sanon, Saurabh; Geske, Jeffrey B; Eleid, Mackram F; Nijhof, Niels; Malouf, Joseph F; Rihal, Charanjit S; Bruce, Charles J

    2016-06-01

    There has been significant growth in the volume and complexity of percutaneous structural heart procedures in the past decade. Increasing procedural complexity and accompanying reliance on multimodality imaging have fueled the development of fusion imaging to facilitate procedural guidance. The first clinically available system capable of echocardiographic and fluoroscopic fusion for real-time guidance of structural heart procedures was approved by the US Food and Drug Administration in 2012. Echocardiographic-fluoroscopic fusion imaging combines the precise catheter and device visualization of fluoroscopy with the soft tissue anatomy and color flow Doppler information afforded by echocardiography in a single image. This allows the interventionalist to perform precise catheter manipulations under fluoroscopy guidance while visualizing critical tissue anatomy provided by echocardiography. However, there are few data available addressing this technology's strengths and limitations in routine clinical practice. The authors provide a critical review of currently available echocardiographic-fluoroscopic fusion imaging for guidance of structural heart interventions to highlight its strengths, limitations, and potential clinical applications and to guide further research into value of this emerging technology. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.

  3. Leg pain and psychological variables predict outcome 2-3 years after lumbar fusion surgery.

    Science.gov (United States)

    Abbott, Allan D; Tyni-Lenné, Raija; Hedlund, Rune

    2011-10-01

    Prediction studies testing a thorough range of psychological variables in addition to demographic, work-related and clinical variables are lacking in lumbar fusion surgery research. This prospective cohort study aimed at examining predictions of functional disability, back pain and health-related quality of life (HRQOL) 2-3 years after lumbar fusion by regressing nonlinear relations in a multivariate predictive model of pre-surgical variables. Before and 2-3 years after lumbar fusion surgery, patients completed measures investigating demographics, work-related variables, clinical variables, functional self-efficacy, outcome expectancy, fear of movement/(re)injury, mental health and pain coping. Categorical regression with optimal scaling transformation, elastic net regularization and bootstrapping were used to investigate predictor variables and address predictive model validity. The most parsimonious and stable subset of pre-surgical predictor variables explained 41.6, 36.0 and 25.6% of the variance in functional disability, back pain intensity and HRQOL 2-3 years after lumbar fusion. Pre-surgical control over pain significantly predicted functional disability and HRQOL. Pre-surgical catastrophizing and leg pain intensity significantly predicted functional disability and back pain while the pre-surgical straight leg raise significantly predicted back pain. Post-operative psychomotor therapy also significantly predicted functional disability while pre-surgical outcome expectations significantly predicted HRQOL. For the median dichotomised classification of functional disability, back pain intensity and HRQOL levels 2-3 years post-surgery, the discriminative ability of the prediction models was of good quality. The results demonstrate the importance of pre-surgical psychological factors, leg pain intensity, straight leg raise and post-operative psychomotor therapy in the predictions of functional disability, back pain and HRQOL-related outcomes.
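
    As a rough illustration of the bootstrap side of this validation strategy, the sketch below resamples patients with replacement, refits a plain linear model (standing in for the study's categorical regression with elastic net regularization), and reports a percentile interval for the explained variance; the data, function name, and model form are entirely synthetic assumptions:

```python
import numpy as np

def bootstrap_r2(X, y, n_boot=100, seed=0):
    """Bootstrap percentile interval for the explained variance (R^2)
    of a linear predictive model, as a stand-in for model-validity checks."""
    rng = np.random.default_rng(seed)
    n = len(y)
    design = np.c_[np.ones(n), X]           # intercept + predictors
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)         # resample patients with replacement
        coef, *_ = np.linalg.lstsq(design[idx], y[idx], rcond=None)
        pred = design @ coef                # evaluate on the original sample
        ss_res = float(((y - pred) ** 2).sum())
        ss_tot = float(((y - y.mean()) ** 2).sum())
        scores.append(1.0 - ss_res / ss_tot)
    return np.percentile(scores, [2.5, 97.5])

# synthetic "patients": one strong pre-surgical predictor plus noise
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=80)
lo, hi = bootstrap_r2(X, y)
```

    A narrow interval suggests the reported explained-variance figures are stable under resampling, which is the kind of predictive-model validity the study addresses.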

  4. Magnetic Resonance and Ultrasound Image Fusion Supported Transperineal Prostate Biopsy Using the Ginsburg Protocol: Technique, Learning Points, and Biopsy Results.

    Science.gov (United States)

    Hansen, Nienke; Patruno, Giulio; Wadhwa, Karan; Gaziev, Gabriele; Miano, Roberto; Barrett, Tristan; Gnanapragasam, Vincent; Doble, Andrew; Warren, Anne; Bratt, Ola; Kastner, Christof

    2016-08-01

    Prostate biopsy supported by transperineal image fusion has recently been developed as a new method to improve the accuracy of prostate cancer detection. To describe the Ginsburg protocol for transperineal prostate biopsy supported by multiparametric magnetic resonance imaging (mpMRI) and transrectal ultrasound (TRUS) image fusion, provide learning points for its application, and report biopsy results. The article is supplemented by a Surgery in Motion video. This single-centre retrospective outcome study included 534 patients from March 2012 to October 2015. A total of 107 had no previous prostate biopsy, 295 had benign TRUS-guided biopsies, and 159 were on active surveillance for low-risk cancer. A Likert scale was used to report mpMRI suspicion of cancer from 1 (no suspicion) to 5 (cancer highly likely). Transperineal biopsies were obtained under general anaesthesia using BiopSee fusion software (Medcom, Darmstadt, Germany). All patients had systematic biopsies, two cores from each of 12 anatomic sectors. Likert 3-5 lesions were targeted with a further two cores per lesion. Any cancer and Gleason score 7-10 cancer on biopsy were noted. Descriptive statistics and positive predictive values (PPVs) and negative predictive values (NPVs) were calculated. The detection rate of Gleason score 7-10 cancer was similar across the clinical groups. Likert scale 3-5 MRI lesions were reported in 378 (71%) of the patients. Cancer was detected in 249 (66%) and Gleason score 7-10 cancer in 157 (42%) of these patients. The PPV for detecting Gleason 7-10 cancer was 0.15 for Likert score 3, 0.43 for score 4, and 0.63 for score 5. The NPV of Likert 1-2 findings was 0.87 for Gleason score 7-10 and 0.97 for Gleason score ≥4+3=7 cancer. Limitations include a lack of data on complications. Transperineal prostate biopsy supported by MRI/TRUS image fusion using the Ginsburg protocol yielded high detection rates of Gleason score 7-10 cancer. Because the NPV for excluding Gleason score 7-10 cancer was very
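
    The PPV and NPV figures reported here reduce to simple ratios over a 2x2 outcome table. The sketch below uses hypothetical counts for illustration, not the study's data:

```python
def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive values from a 2x2 outcome table.

    tp/fp: suspicious lesions with / without significant cancer on biopsy;
    tn/fn: non-suspicious findings without / with significant cancer.
    PPV = TP / (TP + FP);  NPV = TN / (TN + FN).
    """
    return tp / (tp + fp), tn / (tn + fn)

# hypothetical counts, chosen only to illustrate the arithmetic
ppv, npv = ppv_npv(tp=9, fp=3, tn=18, fn=2)
```

    With these toy counts the PPV is 9/12 = 0.75 and the NPV is 18/20 = 0.90; the study reports such values per Likert score stratum.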

  5. The fusion of large scale classified side-scan sonar image mosaics.

    Science.gov (United States)

    Reed, Scott; Tena, Ruiz Ioseba; Capus, Chris; Petillot, Yvan

    2006-07-01

    This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.
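
    As a rough illustration of the voting model described above, the sketch below (function names are assumed; the regularization is a simplified, ICM-like stand-in for the isotropic Markov random field prior) fuses co-registered classified maps by per-pixel voting and then smooths the result with a mode filter:

```python
import numpy as np
from scipy import ndimage

def fuse_classified_maps(maps, n_iters=3):
    """Fuse co-registered classified maps (integer class labels) by
    per-pixel majority voting, then regularize the result with a mode
    filter that mimics the smoothing of an isotropic Markov random
    field prior (simplified sketch, not the paper's full model)."""
    stack = np.stack(maps).astype(int)          # (n_maps, H, W)
    vote = lambda v: np.bincount(v).argmax()    # most frequent label
    fused = np.apply_along_axis(vote, 0, stack)
    for _ in range(n_iters):
        # 3x3 mode filter: isolated labels are absorbed by their
        # neighbourhood, comparable to the MRF regularization step.
        fused = ndimage.generic_filter(
            fused, lambda v: np.bincount(v.astype(int)).argmax(),
            size=3, mode="nearest")
    return fused
```

    The paper's second model would replace the uniform vote with per-source reliability weights.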

  6. Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.

    Science.gov (United States)

    Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn

    2016-04-20

    Nonlinear decomposition schemes constitute an alternative to classical approaches for addressing the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists of the fusion of a low-resolution multispectral image and a high-resolution panchromatic image. We design a complete pansharpening scheme based on the use of morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, Worldview-2, Ikonos and Geoeye-1 satellites are employed for the performance assessment, attesting to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
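
    A minimal sketch of detail injection with morphological half-gradients (the injection rule and function name here are illustrative assumptions, not the authors' exact decomposition scheme): spatial detail is extracted from the panchromatic band and added to each upsampled multispectral band.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def half_gradient_pansharpen(ms_up, pan, size=3, gain=1.0):
    """Toy pansharpening sketch: detail is extracted from the
    panchromatic image with morphological half-gradients and injected
    into each (already upsampled) multispectral band."""
    # Half-gradient by dilation and half-gradient by erosion of PAN.
    g_plus = grey_dilation(pan, size=(size, size)) - pan
    g_minus = pan - grey_erosion(pan, size=(size, size))
    detail = g_minus - g_plus          # signed detail layer
    return np.stack([band + gain * detail for band in ms_up])
```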

  7. The role of data fusion in predictive maintenance using digital twin

    Science.gov (United States)

    Liu, Zheng; Meyendorf, Norbert; Mrad, Nezih

    2018-04-01

    The modern aerospace industry is migrating from reactive to proactive and predictive maintenance to increase platform operational availability and efficiency, extend its useful life cycle and reduce its life cycle cost. Multiphysics modeling together with data-driven analytics generates a new paradigm called the "Digital Twin." The digital twin is a living model of the physical asset or system, which continually adapts to operational changes based on the collected online data and information, and can forecast the future of the corresponding physical counterpart. This paper reviews the overall framework for developing a digital twin coupled with industrial Internet of Things technology to advance aerospace platform autonomy. Data fusion techniques play a particularly significant role in the digital twin framework. The flow of information from raw data to high-level decision making is propelled by sensor-to-sensor, sensor-to-model, and model-to-model fusion. This paper further discusses and identifies the role of data fusion in the digital twin framework for aircraft predictive maintenance.

  8. Neutron imaging for inertial confinement fusion and molecular optic imaging

    International Nuclear Information System (INIS)

    Delage, O.

    2010-01-01

    The number of scientific domains that require imaging of micrometric/nanometric objects is increasing dramatically (plasma physics, astrophysics, biotechnology, Earth sciences...). The difficulties encountered in imaging smaller and smaller objects make this research area increasingly challenging and in constant evolution. The two scientific domains through which this study has been conducted are neutron imaging in the context of inertial confinement fusion and fluorescence molecular imaging. The work presented in this thesis has two main objectives. The first is to describe the instrumentation characteristics that such imagery requires and, for the scientific domains considered, to identify parameters likely to optimize the accuracy of the imaging system. The second is to present the data analysis and reconstruction methods developed to provide spatial resolution adapted to the size of the observed object. The similarities of the numerical algorithms used in these two scientific domains, whose goals are quite different, show how micrometric/nanometric object imaging is a research area at the border of a large number of scientific disciplines. (author)

  9. Predicting operative blood loss during spinal fusion for adolescent idiopathic scoliosis.

    Science.gov (United States)

    Ialenti, Marc N; Lonner, Baron S; Verma, Kushagra; Dean, Laura; Valdevit, Antonio; Errico, Thomas

    2013-06-01

    Patient and surgical factors are known to influence operative blood loss in spinal fusion for adolescent idiopathic scoliosis (AIS), but have only been loosely identified. To date, there are no established recommendations to guide decisions to predonate autologous blood, and the current practice is based primarily on surgeon preference. This study is designed to determine which patient and surgical factors are correlated with, and predictive of, blood loss during spinal fusion for AIS. Retrospective analysis of 340 (81 males, 259 females; mean age, 15.2 y) consecutive AIS patients treated by a single surgeon from 2000 to 2008. Demographic (sex, age, height, weight, and associated comorbidities), laboratory (hematocrit, platelet, PT/PTT/INR), standard radiographic, and perioperative data including complications were analyzed with a linear stepwise regression to develop a predictive model of blood loss. Estimated blood loss was 907±775 mL for posterior spinal fusion (PSF, n=188), 323±171 mL for anterior spinal fusion (ASF, n=124), and 1277±821 mL for combined procedures (n=28). For patients undergoing PSF, stepwise analysis identified sex, preoperative kyphosis, and operative time to be the most important predictors of increased blood loss, yielding the following model for blood loss in PSF: blood loss (mL) = C + operative time (min) × 6.4 − preoperative T2-T12 kyphosis (degrees) × 8.7, where C = 233 for males and −270 for females. We find sex, operative time, and preoperative kyphosis to be the most important predictors of increased blood loss in PSF for AIS. Mean arterial pressure and operative time were predictive of estimated blood loss in ASF. For posterior fusions, we also present a model that estimates blood loss preoperatively and can be used to guide decisions regarding predonation of blood and the use of antifibrinolytic agents. Retrospective study: Level II.
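
    The reported PSF regression can be written directly as a function (coefficients taken from the abstract; `predicted_blood_loss_psf` is a hypothetical name for illustration):

```python
def predicted_blood_loss_psf(op_time_min, kyphosis_deg, male):
    """Estimated blood loss (mL) for posterior spinal fusion, per the
    regression reported in the abstract:
    blood loss = C + 6.4 * operative time (min)
                   - 8.7 * preoperative T2-T12 kyphosis (degrees),
    with C = 233 for males and -270 for females."""
    c = 233.0 if male else -270.0
    return c + 6.4 * op_time_min - 8.7 * kyphosis_deg
```

    For example, a 300-minute posterior fusion in a male patient with 30 degrees of preoperative kyphosis predicts 233 + 1920 − 261 = 1892 mL.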

  10. Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications

    Science.gov (United States)

    Paramanandham, Nirmala; Rajendiran, Kishore

    2018-01-01

    A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver the required information that cannot be delivered by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique using the discrete cosine transform (DCT) domain is proposed for surveillance applications, which integrates the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process for obtaining the optimized weighting factors. These optimized weighting factors are used for fusing the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image. An enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
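
    The core fusion step can be sketched as a weighted combination of 2-D DCT coefficients. In the paper the weighting factor is found by PSO; this sketch fixes it to a scalar for illustration, and a PSO search would then pick the weight that maximizes a fusion metric such as entropy.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_weighted_fusion(visible, infrared, w=0.5):
    """Fuse a visible and an infrared image by a weighted combination
    of their 2-D DCT coefficients, then invert the transform. Here `w`
    is a fixed scalar; the paper optimizes the weighting with PSO."""
    fused_coeffs = (w * dctn(visible, norm="ortho")
                    + (1 - w) * dctn(infrared, norm="ortho"))
    return idctn(fused_coeffs, norm="ortho")
```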

  11. Ultrasound-guided image fusion with computed tomography and magnetic resonance imaging. Clinical utility for imaging and interventional diagnostics of hepatic lesions

    International Nuclear Information System (INIS)

    Clevert, D.A.; Helck, A.; Paprottka, P.M.; Trumm, C.; Reiser, M.F.; Zengel, P.

    2012-01-01

    Abdominal ultrasound is often the first-line imaging modality for assessing focal liver lesions. Due to various new ultrasound techniques, such as image fusion, global positioning system (GPS) tracking and needle tracking guided biopsy, abdominal ultrasound now has great potential regarding detection, characterization and treatment of focal liver lesions. Furthermore, these new techniques will help to improve the clinical management of patients before and during interventional procedures. This article presents the principle and clinical impact of recently developed techniques in the field of ultrasound, e.g. image fusion, GPS tracking and needle tracking guided biopsy and discusses the results based on a feasibility study on 20 patients with focal hepatic lesions. (orig.) [de

  12. Development and application of PET-MRI image fusion technology

    International Nuclear Information System (INIS)

    Song Jianhua; Zhao Jinhua; Qiao Wenli

    2011-01-01

    The emergence and growing popularity of the PET-CT scanner have demonstrated its advantages in the diagnosis, staging, treatment-response evaluation and prognosis of malignant tumors. PET-MRI may see a similar surge as the technology matures, because MRI examination involves no radiation exposure and offers higher soft-tissue resolution. This paper summarizes the development of image fusion technology and current research on the clinical application of PET-MRI, in order to help readers understand the functions and wide application of this upcoming instrument, focusing mainly on applications in the central nervous system and soft-tissue lesions. Before PET-MRI becomes widespread, researchers can still carry out studies of image fusion and clinical application on current equipment. (authors)

  13. Study on Efficiency of Fusion Techniques for IKONOS Images

    International Nuclear Information System (INIS)

    Liu, Yanmei; Yu, Haiyang; Guijun, Yang; Nie, Chenwei; Yang, Xiaodong; Ren, Dong

    2014-01-01

    Many image fusion techniques have been proposed to achieve optimal resolution in the spatial and spectral domains. Six different merging methods are compared in this paper and the efficiency of the fusion techniques is assessed in both qualitative and quantitative terms. Both local and global evaluation parameters were used for spectral quality, and a Laplace-filter method was used for spatial quality assessment. By simulation, the spectral quality of the images merged by Brovey was demonstrated to be the worst. In contrast, the GS and PCA algorithms, and especially Pansharpening, provided higher spectral quality than the standard Brovey, wavelet and CN methods. In the spatial quality assessment, the CN method performed best, while the Brovey algorithm was worst. The best-performing wavelet parameters achieved acceptable spectral and spatial quality compared to the others.
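
    The Laplace-filter spatial assessment mentioned above is commonly implemented as the correlation between high-pass-filtered versions of the fused band and the panchromatic image; a minimal sketch under that assumption (the exact filter used in the paper is not specified):

```python
import numpy as np
from scipy.ndimage import laplace

def spatial_quality(fused_band, pan):
    """Spatial quality index: correlation coefficient between the
    Laplacian (high-pass) of a fused band and the Laplacian of the
    panchromatic image. Values near 1 indicate well-preserved detail."""
    hp_f = laplace(fused_band.astype(float))
    hp_p = laplace(pan.astype(float))
    return np.corrcoef(hp_f.ravel(), hp_p.ravel())[0, 1]
```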

  14. Image Fusion Technologies In Commercial Remote Sensing Packages

    OpenAIRE

    Al-Wassai, Firouz Abdullah; Kalyankar, N. V.

    2013-01-01

    Several remote sensing software packages are used for the explicit purpose of analyzing and visualizing remotely sensed data, following the development of remote sensing sensor technologies over the last ten years. According to the literature, remote sensing still lacks software tools for effective information extraction from remote sensing data. So, this paper provides a state-of-the-art survey of multi-sensor image fusion technologies as well as a review on the quality evaluation of the single image or f...

  15. Added Value of Contrast-Enhanced Ultrasound on Biopsies of Focal Hepatic Lesions Invisible on Fusion Imaging Guidance.

    Science.gov (United States)

    Kang, Tae Wook; Lee, Min Woo; Song, Kyoung Doo; Kim, Mimi; Kim, Seung Soo; Kim, Seong Hyun; Ha, Sang Yun

    2017-01-01

    To assess whether contrast-enhanced ultrasonography (CEUS) with Sonazoid can improve the lesion conspicuity and feasibility of percutaneous biopsies for focal hepatic lesions invisible on fusion imaging of real-time ultrasonography (US) with computed tomography/magnetic resonance images, and to evaluate its impact on clinical decision making. The Institutional Review Board approved this retrospective study. Between June 2013 and January 2015, 711 US-guided percutaneous biopsies were performed for focal hepatic lesions. Biopsies were performed using CEUS for guidance if lesions were invisible on fusion imaging. We retrospectively evaluated the number of target lesions initially invisible on fusion imaging that became visible after applying CEUS, using a 4-point scale. Technical success rates of biopsies were evaluated based on histopathological results. In addition, the occurrence of changes in clinical decision making was assessed. Among 711 patients, 16 patients (2.3%) were included in the study. The median size of target lesions was 1.1 cm (range, 0.5-1.9 cm) in pre-procedural imaging. After CEUS, 15 of 16 (93.8%) focal hepatic lesions were visualized. The conspicuity score was significantly increased after adding CEUS, as compared to that on fusion imaging alone, and the procedure changed clinical decision making for 11 of 16 patients (68.8%). The addition of CEUS could improve the conspicuity of focal hepatic lesions invisible on fusion imaging. This dual guidance using CEUS and fusion imaging may affect patient management via changes in clinical decision making.

  16. [Accuracy of morphological simulation for orthognathic surgery. Assessment of a 3D image fusion software].

    Science.gov (United States)

    Terzic, A; Schouman, T; Scolozzi, P

    2013-08-06

    The CT/CBCT data allows for 3D reconstruction of skeletal and untextured soft tissue volume. 3D stereophotogrammetry technology has strongly improved the quality of facial soft tissue surface texture. The combination of these two technologies allows for an accurate and complete reconstruction. The 3D virtual head may be used for orthognathic surgical planning, virtual surgery, and morphological simulation obtained with software dedicated to the fusion of 3D photogrammetric and radiological images. The imaging material includes a multi-slice CT scanner or broad-field CBCT scanner, and a 3D photogrammetric camera. The operative image processing protocol includes the following steps: 1) pre- and postoperative CT/CBCT scan and 3D photogrammetric image acquisition; 2) 3D image segmentation and fusion of the untextured CT/CBCT skin with the preoperative textured facial soft tissue surface of the 3D photogrammetric scan; 3) image fusion of the pre- and postoperative CT/CBCT data sets, virtual osteotomies, and 3D photogrammetric soft tissue virtual simulation; 4) fusion of virtual simulated 3D photogrammetric and real postoperative images, and assessment of accuracy using a color-coded scale to measure the differences between the two surfaces. Copyright © 2013. Published by Elsevier Masson SAS.

  17. In Vivo Visualization of Heterogeneous Intratumoral Distribution of Hypoxia-Inducible Factor-1α Activity by the Fusion of High-Resolution SPECT and Morphological Imaging Tests

    Directory of Open Access Journals (Sweden)

    Hirofumi Fujii

    2012-01-01

    Purpose. We aimed to clearly visualize the heterogeneous distribution of hypoxia-inducible factor-1α (HIF) activity in tumor tissues in vivo. Methods. We synthesized 125I-IPOS, a 125I-labeled chimeric protein probe designed to visualize HIF activity. The biodistribution of 125I-IPOS in FM3A tumor-bearing mice was evaluated. Then, the intratumoral localization of this probe was observed by autoradiography and compared with histopathological findings. The distribution of 125I-IPOS in tumors was imaged by a small-animal SPECT/CT scanner. The obtained in vivo SPECT-CT fusion images were compared with ex vivo images of excised tumors. Fusion imaging with MRI was also examined. Results. 125I-IPOS accumulated well in FM3A tumors. The intratumoral distribution of 125I-IPOS by autoradiography was quite heterogeneous, and it partially overlapped with that of pimonidazole. High-resolution SPECT-CT fusion images successfully demonstrated the heterogeneity of 125I-IPOS distribution inside tumors. SPECT-MRI fusion images could give more detailed information about the intratumoral distribution of 125I-IPOS. Conclusion. High-resolution SPECT images successfully demonstrated the heterogeneous intratumoral distribution of 125I-IPOS. SPECT-CT fusion images, and more favorably SPECT-MRI fusion images, would be useful to understand the features of heterogeneous intratumoral expression of HIF activity in vivo.

  18. SPECT/CT image fusion with 99mTc-HYNIC-TOC in the oncological diagnostic

    International Nuclear Information System (INIS)

    Haeusler, F.

    2006-07-01

    Neuroendocrine tumours displaying somatostatin receptors have been successfully visualized with somatostatin receptor imaging. The aim of this retrospective study was to evaluate the value of anatomical-functional image fusion, i.e. the combination of transmission tomography (computed tomography, CT) and emission tomography (single-photon emission computed tomography, SPECT), analyzed in comparison with SPECT and CT alone. Fifty-three patients (30 men and 23 women; mean age 55.9 years; range: 20-82 years) with suspected or known endocrine tumours were studied. The patients were referred to image fusion for staging of newly diagnosed tumours (14), biochemically/clinically suspected neuroendocrine tumour (20), or follow-up studies after therapy (19). The patients were studied with SPECT at 2 and 4 hours after injection of 400 MBq of 99mTc-EDDA-HYNIC-Tyr3-octreotide using a dual-detector scintillation camera. The CT was performed on one of the following two days. For both investigations the patients were fixed in an individualized vacuum mattress to guarantee exactly the same position. SPECT and SPECT/CT showed an equivalent scan result in 35 patients (66%); discrepancies were found in 18 cases (34%). After image fusion the scan result was true-positive in 27 patients (50.9%) and true-negative in 25 patients (47.2%). One patient with multiple small liver metastases escaped SPECT as well as image fusion and was thus false-negative. The frequency of equivocal and probable lesion characterization was reduced by 11.6% (12 to 0) with SPECT/CT in comparison with SPECT or CT alone. The frequency of definite lesion characterization was increased by 11.6% (91 to 103). SPECT/CT affected the clinical management in 21 patients (40%). The results of this study indicate that SPECT/CT is a valuable tool for the assessment of neuroendocrine tumours. SPECT/CT is better than SPECT or CT alone and it allows a more precise staging and determination of prognosis and

  19. Self-assessed performance improves statistical fusion of image labels

    Energy Technology Data Exchange (ETDEWEB)

    Bryan, Frederick W., E-mail: frederick.w.bryan@vanderbilt.edu; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M. [Electrical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); Reich, Daniel S. [Translational Neuroradiology Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, Maryland 20892 (United States); Landman, Bennett A. [Electrical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); Biomedical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); and Radiology and Radiological Sciences, Vanderbilt University, Nashville, Tennessee 37235 (United States)

    2014-03-15

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance
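
    The comparison between simple majority voting and voting weighted by self-assessed performance can be sketched schematically (binary labels, per-rater confidence weights; the function name and the normalization are assumptions, not the study's statistical fusion algorithm):

```python
import numpy as np

def weighted_label_vote(labels, confidences):
    """Per-pixel label fusion: each rater's binary label is weighted by
    that rater's self-assessed confidence, instead of counting every
    rater equally as in simple majority voting."""
    labels = np.asarray(labels, float)           # (n_raters, ...) in {0, 1}
    w = np.asarray(confidences, float)
    w = w / w.sum()                              # normalize weights
    score = np.tensordot(w, labels, axes=1)      # weighted fraction voting 1
    return (score > 0.5).astype(int)
```

    With confidences [0.8, 0.1, 0.1], a single confident rater can overrule two unconfident ones, which an unweighted majority vote cannot do.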

  20. Self-assessed performance improves statistical fusion of image labels

    International Nuclear Information System (INIS)

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-01-01

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance

  1. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    Science.gov (United States)

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation considering the geometrical structure of space spanned by atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of learned dictionary can be enforced small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.
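
    The group-sparsity idea at the heart of group sparse coding is commonly enforced by a joint (l2,1) shrinkage of each coefficient group, so nonzeros appear in clusters rather than individually. A minimal sketch of that proximal step (not the full DL-GSGR algorithm, which adds graph regularization and dictionary updating):

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the l2,1 group-sparsity penalty: each group
    of coefficients is shrunk jointly toward zero; groups whose l2 norm
    falls below the threshold are zeroed out entirely."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:                       # g: index list of one group
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * x[g]
    return out
```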

  2. Clinical value of CT/MR-US fusion imaging for radiofrequency ablation of hepatic nodules

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Young, E-mail: leejy4u@snu.ac.kr [Department of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of); Choi, Byung Ihn [Department of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of); Chung, Yong Eun [Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul (Korea, Republic of); Kim, Min Wook; Kim, Se Hyung; Han, Joon Koo [Department of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of)

    2012-09-15

    Objective: The aim of this study was to determine the registration error of an ultrasound (US) fusion imaging system during an ex vivo study and its clinical value for percutaneous radiofrequency ablation (pRFA) during an in vivo study. Materials and methods: An ex vivo study was performed using 4 bovine livers and 66 sonographically invisible lead pellets. Real-time CT-US fusion imaging was applied to assist the targeting of pellets with needles in each liver; the 4 sessions were performed by either an experienced radiologist (R1, 3 sessions) or an inexperienced resident (R2, 1 session). The distance between the pellet target and needle was measured. An in vivo study was retrospectively performed with 51 nodules (42 HCCs and 9 metastases; mean diameter, 16 mm) of 37 patients. Fusion imaging was used to create a sufficient safety margin (>5 mm) during pRFA in 24 nodules (group 1), accurately target 21 nodules obscured in the US images (group 2) and precisely identify 6 nodules surrounded by similar looking nodules (group 3). Image fusion was achieved using MR and CT images in 16 and 21 patients, respectively. The reablation rate, 1-year local recurrence rate and complications were assessed. Results: In the ex vivo study, the mean target–needle distances were 2.7 mm ± 1.9 mm (R1) and 3.1 ± 3.3 mm (R2) (p > 0.05). In the in vivo study, the reablation rates in groups 1–3 were 13%, 19% and 0%, respectively. At 1 year, the local recurrence rate was 11.8% (6/51). In our assessment of complications, one bile duct injury was observed. Conclusion: US fusion imaging system has an acceptable registration error and can be an efficacious tool for overcoming the major limitations of US-guided pRFA.

  3. Multimodality imaging of reporter gene expression using a novel fusion vector in living cells and animals

    Science.gov (United States)

    Gambhir, Sanjiv [Portola Valley, CA]; Pritha, Ray [Mountain View, CA]

    2011-06-07

    Novel double and triple fusion reporter gene constructs harboring distinct imagable reporter genes are provided, as well as applications for the use of such double and triple fusion constructs in living cells and in living animals using distinct imaging technologies.

  4. A hybrid image fusion system for endovascular interventions of peripheral artery disease.

    Science.gov (United States)

    Lalys, Florent; Favre, Ketty; Villena, Alexandre; Durrmann, Vincent; Colleaux, Mathieu; Lucas, Antoine; Kaladji, Adrien

    2018-03-16

    Interventional endovascular treatment has become the first line of management in the treatment of peripheral artery disease (PAD). However, contrast and radiation exposure continue to limit the feasibility of these procedures. This paper presents a novel hybrid image fusion system for endovascular intervention of PAD. We present two different roadmapping methods from intra- and pre-interventional imaging that can be used either simultaneously or independently, constituting the navigation system. The navigation system is decomposed into several steps that can be entirely integrated within the procedure workflow without modifying it to benefit from the roadmapping. First, a 2D panorama of the entire peripheral artery system is automatically created based on a sequence of stepping fluoroscopic images acquired during the intra-interventional diagnosis phase. During the interventional phase, the live image can be synchronized on the panorama to form the basis of the image fusion system. Two types of augmented information are then integrated. First, an angiography panorama is proposed to avoid contrast media re-injection. Information exploiting the pre-interventional computed tomography angiography (CTA) is also brought to the surgeon by means of semiautomatic 3D/2D registration on the 2D panorama. Each step of the workflow was independently validated. Experiments for both the 2D panorama creation and the synchronization processes showed very accurate results (errors of 1.24 and [Formula: see text] mm, respectively), as did the registration on the 3D CTA (errors of [Formula: see text] mm), with minimal user interaction and very low computation time. First results of an ongoing clinical study highlighted its major clinical added value on intraoperative parameters. No image fusion system has yet been proposed for endovascular procedures of PAD in lower extremities. More broadly, such a navigation system, combining image fusion from different 2D and 3D image

  5. Fusion of an Ensemble of Augmented Image Detectors for Robust Object Detection.

    Science.gov (United States)

    Wei, Pan; Ball, John E; Anderson, Derek T

    2018-03-17

    A significant challenge in object detection is accurate identification of an object's position in image space; one algorithm with one set of parameters is usually not enough, and the fusion of multiple algorithms and/or parameters can lead to more robust results. Herein, a new computational intelligence fusion approach based on the dynamic analysis of agreement among object detection outputs is proposed. Furthermore, we propose an online versus training-only image augmentation strategy. Experiments comparing the results both with and without fusion are presented. We demonstrate that the augmented and fused combination results are the best, with respect to higher accuracy rates and reduction of outlier influences. The approach is demonstrated in the context of cone, pedestrian and box detection for Advanced Driver Assistance Systems (ADAS) applications.

  6. Fusion of an Ensemble of Augmented Image Detectors for Robust Object Detection

    Directory of Open Access Journals (Sweden)

    Pan Wei

    2018-03-01

    Full Text Available A significant challenge in object detection is accurate identification of an object’s position in image space; one algorithm with one set of parameters is usually not enough, and the fusion of multiple algorithms and/or parameters can lead to more robust results. Herein, a new computational intelligence fusion approach based on the dynamic analysis of agreement among object detection outputs is proposed. Furthermore, we propose an online versus training-only image augmentation strategy. Experiments comparing the results both with and without fusion are presented. We demonstrate that the augmented and fused combination results are the best, with respect to higher accuracy rates and reduction of outlier influences. The approach is demonstrated in the context of cone, pedestrian and box detection for Advanced Driver Assistance Systems (ADAS) applications.

  7. An acceleration system for Laplacian image fusion based on SoC

    Science.gov (United States)

    Gao, Liwen; Zhao, Hongtu; Qu, Xiujie; Wei, Tianbo; Du, Peng

    2018-04-01

    Based on an analysis of the Laplacian image fusion algorithm, this paper proposes a partially pipelined, modular processing architecture, and an SoC-based acceleration system is implemented accordingly. Full pipelining is used in the design of each module, and modules in series form the partial pipeline with a unified data format, which is easy to manage and reuse. Integrated with an ARM processor, DMA and an embedded bare-metal program, this system implements a four-level Laplacian pyramid on the Zynq-7000 board. Experiments show that, with small resource consumption, a pair of 256×256 images can be fused within 1 ms while maintaining good fusion quality.
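The record above describes pyramid-based fusion only at a high level. As a rough illustration of the underlying algorithm (not the authors' hardware design), a minimal Laplacian-pyramid fusion can be sketched in NumPy; block averaging and pixel repetition stand in for the usual Gaussian REDUCE/EXPAND filters:

```python
import numpy as np

def downsample(img):
    """2x downsample by averaging 2x2 blocks (stand-in for a Gaussian REDUCE)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """2x upsample by pixel repetition (stand-in for a Gaussian EXPAND)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """Decompose into `levels` band-pass layers plus a coarse approximation."""
    pyr, cur = [], img
    for _ in range(levels):
        down = downsample(cur)
        pyr.append(cur - upsample(down))   # detail (band-pass) layer
        cur = down
    pyr.append(cur)                        # coarsest approximation
    return pyr

def fuse(img_a, img_b, levels=4):
    """Fuse two images: max-absolute rule on details, average on the base."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    out = fused[-1]                        # reconstruct by EXPAND-and-add
    for detail in reversed(fused[:-1]):
        out = upsample(out) + detail
    return out
```

For identical inputs the reconstruction is exact, which makes a convenient sanity check; a real implementation would use proper Gaussian filtering and handle image sizes not divisible by 2^levels.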

  8. A tri-modality image fusion method for target delineation of brain tumors in radiotherapy.

    Directory of Open Access Journals (Sweden)

    Lu Guo

    Full Text Available To develop a tri-modality image fusion method for better target delineation in image-guided radiotherapy for patients with brain tumors. A new method of tri-modality image fusion was developed, which can fuse and display all image sets in one panel and one operation. And a feasibility study in gross tumor volume (GTV) delineation using data from three patients with brain tumors was conducted, which included images of simulation CT, MRI, and 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) examinations before radiotherapy. Tri-modality image fusion was implemented after image registrations of CT+PET and CT+MRI, and the transparency weight of each modality could be adjusted and set by users. Three radiation oncologists delineated GTVs for all patients using dual-modality (MRI/CT) and tri-modality (MRI/CT/PET) image fusion respectively. Inter-observer variation was assessed by the coefficient of variation (COV), the average distance between surface and centroid (ADSC), and the local standard deviation (SDlocal). Analysis of COV was also performed to evaluate intra-observer volume variation. The inter-observer variation analysis showed that the mean COV was 0.14 (± 0.09) and 0.07 (± 0.01) for dual-modality and tri-modality respectively; the standard deviation of ADSC was significantly reduced (p < 0.05) with tri-modality; SDlocal averaged over median GTV surface was reduced in patient 2 (from 0.57 cm to 0.39 cm) and patient 3 (from 0.42 cm to 0.36 cm) with the new method. The intra-observer volume variation was also significantly reduced (p = 0.00) with the tri-modality method as compared with using the dual-modality method. With the new tri-modality image fusion method smaller inter- and intra-observer variation in GTV definition for the brain tumors can be achieved, which improves the consistency and accuracy for target delineation in individualized radiotherapy.

  9. Ultrasound/Magnetic Resonance Image Fusion Guided Lumbosacral Plexus Block – A Clinical Study

    DEFF Research Database (Denmark)

    Strid, JM; Pedersen, Erik Morre; Søballe, Kjeld

    2014-01-01

    Background and aims: Ultrasound (US) guided lumbosacral plexus block (Supra Sacral Parallel Shift [SSPS]) offers an alternative to general anaesthesia and perioperative analgesia for hip surgery.1 The complex anatomy of the lumbosacral region hampers the accuracy of the block, but it may be improved by guidance of US and magnetic resonance (MR) image fusion and real-time 3D electronic needle tip tracking.2 We aim to estimate the effect and the distribution of lidocaine after SSPS guided by US/MR image fusion compared to SSPS guided by ultrasound. Methods: Twenty-four healthy volunteers will be included in a double-blinded randomized controlled trial with crossover design. MR datasets will be acquired and uploaded in an advanced US system (Epiq7, Philips, Amsterdam, Netherlands). All volunteers will receive SSPS blocks with lidocaine added gadolinium contrast guided by US/MR image fusion and by US one week...

  10. A New Fusion Technique of Remote Sensing Images for Land Use/Cover

    Institute of Scientific and Technical Information of China (English)

    WU Lian-Xi; SUN Bo; ZHOU Sheng-Lu; HUANG Shu-E; ZHAO Qi-Guo

    2004-01-01

    In China, accelerating industrialization and urbanization following high-speed economic development and population increases have greatly impacted land use/cover changes, making it imperative to obtain accurate and up-to-date information on changes so as to evaluate their environmental effects. The major purpose of this study was to develop a new method to fuse lower spatial resolution multispectral satellite images with higher spatial resolution panchromatic ones to assist in land use/cover mapping. An algorithm of a new fusion method known as edge enhancement intensity modulation (EEIM) was proposed to merge two optical image data sets of different spectral ranges. The results showed that the EEIM image was quite similar in color to lower resolution multispectral images, and the fused product was better able to preserve spectral information. Thus, compared to conventional approaches, the spectral distortion of the fused images was markedly reduced. Therefore, the EEIM fusion method could be utilized to fuse remote sensing data from the same or different sensors, including TM images and SPOT5 panchromatic images, providing high quality land use/cover images.
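The EEIM algorithm itself is not detailed in this record, but the intensity-modulation family it extends can be sketched. In this hedged NumPy sketch (the edge-enhancement step that distinguishes EEIM is omitted), each multispectral band is scaled by the ratio of the panchromatic image to the multispectral intensity, injecting spatial detail while approximately preserving band ratios:

```python
import numpy as np

def intensity_modulation_fusion(ms, pan, eps=1e-6):
    """Generic intensity-modulation pan-sharpening sketch.

    ms  : (H, W, B) low-resolution multispectral bands, already resampled
          to the panchromatic grid.
    pan : (H, W) high-resolution panchromatic image.
    """
    intensity = ms.mean(axis=2)            # simple per-pixel intensity
    ratio = pan / (intensity + eps)        # high-frequency modulation factor
    return ms * ratio[..., None]           # scale every band by the ratio
```

If the panchromatic image equals the multispectral intensity, the output reduces to the input bands, which is the sense in which spectral information is preserved.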

  11. Added value of contrast-enhanced ultrasound on biopsies of focal hepatic lesions invisible on fusion imaging guidance

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Tae Wook; Lee, Min Woo; Song, Kyoung Doo; Kim, Mimi; Kim, Seung Soo; Kim, Seong Hyun; Ha, Sang Yun [Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2017-01-15

    To assess whether contrast-enhanced ultrasonography (CEUS) with Sonazoid can improve the lesion conspicuity and feasibility of percutaneous biopsies for focal hepatic lesions invisible on fusion imaging of real-time ultrasonography (US) with computed tomography/magnetic resonance images, and evaluate its impact on clinical decision making. The Institutional Review Board approved this retrospective study. Between June 2013 and January 2015, 711 US-guided percutaneous biopsies were performed for focal hepatic lesions. Biopsies were performed using CEUS for guidance if lesions were invisible on fusion imaging. We retrospectively evaluated the number of target lesions initially invisible on fusion imaging that became visible after applying CEUS, using a 4-point scale. Technical success rates of biopsies were evaluated based on histopathological results. In addition, the occurrence of changes in clinical decision making was assessed. Among 711 patients, 16 patients (2.3%) were included in the study. The median size of target lesions was 1.1 cm (range, 0.5–1.9 cm) in pre-procedural imaging. After CEUS, 15 of 16 (93.8%) focal hepatic lesions were visualized. The conspicuity score was significantly increased after adding CEUS, as compared to that on fusion imaging (p < 0.001). The technical success rate of biopsy was 87.5% (14/16). After biopsy, there were changes in clinical decision making for 11 of 16 patients (68.8%). The addition of CEUS could improve the conspicuity of focal hepatic lesions invisible on fusion imaging. This dual guidance using CEUS and fusion imaging may affect patient management via changes in clinical decision-making.

  12. Added value of contrast-enhanced ultrasound on biopsies of focal hepatic lesions invisible on fusion imaging guidance

    International Nuclear Information System (INIS)

    Kang, Tae Wook; Lee, Min Woo; Song, Kyoung Doo; Kim, Mimi; Kim, Seung Soo; Kim, Seong Hyun; Ha, Sang Yun

    2017-01-01

    To assess whether contrast-enhanced ultrasonography (CEUS) with Sonazoid can improve the lesion conspicuity and feasibility of percutaneous biopsies for focal hepatic lesions invisible on fusion imaging of real-time ultrasonography (US) with computed tomography/magnetic resonance images, and evaluate its impact on clinical decision making. The Institutional Review Board approved this retrospective study. Between June 2013 and January 2015, 711 US-guided percutaneous biopsies were performed for focal hepatic lesions. Biopsies were performed using CEUS for guidance if lesions were invisible on fusion imaging. We retrospectively evaluated the number of target lesions initially invisible on fusion imaging that became visible after applying CEUS, using a 4-point scale. Technical success rates of biopsies were evaluated based on histopathological results. In addition, the occurrence of changes in clinical decision making was assessed. Among 711 patients, 16 patients (2.3%) were included in the study. The median size of target lesions was 1.1 cm (range, 0.5–1.9 cm) in pre-procedural imaging. After CEUS, 15 of 16 (93.8%) focal hepatic lesions were visualized. The conspicuity score was significantly increased after adding CEUS, as compared to that on fusion imaging (p < 0.001). The technical success rate of biopsy was 87.5% (14/16). After biopsy, there were changes in clinical decision making for 11 of 16 patients (68.8%). The addition of CEUS could improve the conspicuity of focal hepatic lesions invisible on fusion imaging. This dual guidance using CEUS and fusion imaging may affect patient management via changes in clinical decision-making.

  13. Noise temperature improvement for magnetic fusion plasma millimeter wave imaging systems

    Energy Technology Data Exchange (ETDEWEB)

    Lai, J.; Domier, C. W.; Luhmann, N. C. [Department of Electrical and Computer Engineering, University of California at Davis, Davis, California 95616 (United States)

    2014-03-15

    Significant progress has been made in the imaging and visualization of magnetohydrodynamic and microturbulence phenomena in magnetic fusion plasmas [B. Tobias et al., Plasma Fusion Res. 6, 2106042 (2011)]. Of particular importance have been microwave electron cyclotron emission imaging and microwave imaging reflectometry systems for imaging T{sub e} and n{sub e} fluctuations. These instruments have employed heterodyne receiver arrays with Schottky diode mixer elements directly connected to individual antennas. Consequently, the noise temperature has been strongly determined by the conversion loss with typical noise temperatures of ∼60 000 K. However, this can be significantly improved by making use of recent advances in Monolithic Microwave Integrated Circuit chip low noise amplifiers to insert a pre-amplifier in front of the Schottky diode mixer element. In a proof-of-principle design at V-Band (50–75 GHz), significant improvement of noise temperature from the current 60 000 K to measured 4000 K has been obtained.
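The improvement from inserting a low-noise pre-amplifier follows from the Friis cascade formula: each stage's noise temperature is divided by the gain preceding it, so the lossy mixer's large contribution shrinks by the pre-amplifier's gain. A small sketch with illustrative numbers (the LNA gain and noise figure below are assumptions for illustration, not values from the paper):

```python
def cascade_noise_temperature(stages):
    """Friis formula: `stages` is a list of (noise_temperature_K, linear_gain)
    pairs, first stage first. Returns the system noise temperature referred
    to the input."""
    t_sys, gain = 0.0, 1.0
    for t, g in stages:
        t_sys += t / gain   # each stage's noise is divided by preceding gain
        gain *= g
    return t_sys

# Schottky mixer alone: ~60,000 K (as in the paper).
mixer = (60_000.0, 1.0)
# Hypothetical V-band MMIC LNA: 20 dB gain, 5 dB noise figure (~627 K).
lna = (290.0 * (10 ** (5 / 10) - 1), 10 ** (20 / 10))
# With the LNA in front, the mixer contributes only 60,000 / 100 = 600 K.
t = cascade_noise_temperature([lna, mixer])
```

With these assumed figures the system noise temperature drops from 60,000 K to roughly 1,200 K, the same order of improvement the paper reports.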

  14. Noise temperature improvement for magnetic fusion plasma millimeter wave imaging systems.

    Science.gov (United States)

    Lai, J; Domier, C W; Luhmann, N C

    2014-03-01

    Significant progress has been made in the imaging and visualization of magnetohydrodynamic and microturbulence phenomena in magnetic fusion plasmas [B. Tobias et al., Plasma Fusion Res. 6, 2106042 (2011)]. Of particular importance have been microwave electron cyclotron emission imaging and microwave imaging reflectometry systems for imaging T(e) and n(e) fluctuations. These instruments have employed heterodyne receiver arrays with Schottky diode mixer elements directly connected to individual antennas. Consequently, the noise temperature has been strongly determined by the conversion loss with typical noise temperatures of ~60,000 K. However, this can be significantly improved by making use of recent advances in Monolithic Microwave Integrated Circuit chip low noise amplifiers to insert a pre-amplifier in front of the Schottky diode mixer element. In a proof-of-principle design at V-Band (50-75 GHz), significant improvement of noise temperature from the current 60,000 K to measured 4000 K has been obtained.

  15. Noise temperature improvement for magnetic fusion plasma millimeter wave imaging systems

    International Nuclear Information System (INIS)

    Lai, J.; Domier, C. W.; Luhmann, N. C.

    2014-01-01

    Significant progress has been made in the imaging and visualization of magnetohydrodynamic and microturbulence phenomena in magnetic fusion plasmas [B. Tobias et al., Plasma Fusion Res. 6, 2106042 (2011)]. Of particular importance have been microwave electron cyclotron emission imaging and microwave imaging reflectometry systems for imaging T_e and n_e fluctuations. These instruments have employed heterodyne receiver arrays with Schottky diode mixer elements directly connected to individual antennas. Consequently, the noise temperature has been strongly determined by the conversion loss with typical noise temperatures of ∼60 000 K. However, this can be significantly improved by making use of recent advances in Monolithic Microwave Integrated Circuit chip low noise amplifiers to insert a pre-amplifier in front of the Schottky diode mixer element. In a proof-of-principle design at V-Band (50–75 GHz), significant improvement of noise temperature from the current 60 000 K to measured 4000 K has been obtained.

  16. A novel image fusion algorithm based on 2D scale-mixing complex wavelet transform and Bayesian MAP estimation for multimodal medical images

    Directory of Open Access Journals (Sweden)

    Abdallah Bengueddoudj

    2017-05-01

    Full Text Available In this paper, we propose a new image fusion algorithm based on two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT). The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian Maximum a Posteriori (MAP) approach by considering a trivariate statistical model for the local neighboring of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on the Principal Component Analysis (PCA) is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results prove the superiority of the proposed method over the state of the art fusion methods in terms of visual quality and several commonly used metrics. Robustness of the proposed method is further tested against different types of noise. The plots of fusion metrics establish the accuracy of the proposed fusion method.
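The PCA fusion rule for approximation coefficients is described above only by name. One common formulation (a sketch that may differ in detail from the authors' rule) derives the blending weights from the leading eigenvector of the 2x2 covariance matrix of the two coefficient sets:

```python
import numpy as np

def pca_fusion_weights(a, b):
    """Weights from the leading principal direction of the two coefficient
    sets, normalized to sum to one (a common PCA fusion rule, not necessarily
    the paper's exact variant)."""
    x = np.stack([a.ravel(), b.ravel()])
    cov = np.cov(x)                          # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    v = np.abs(eigvecs[:, -1])               # leading eigenvector
    w = v / v.sum()
    return w[0], w[1]

def fuse_approximation(a, b):
    """Weighted average of the two approximation-coefficient images."""
    w1, w2 = pca_fusion_weights(a, b)
    return w1 * a + w2 * b
```

The source with greater variance along the principal direction receives the larger weight, so the fused approximation favors the more informative input.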

  17. Functional and morphological imaging of thyroid associated eye disease. Data evaluation by means of image fusion

    International Nuclear Information System (INIS)

    Kainz, H.

    2002-08-01

    Aim: to recognize the structures that show an uptake of a 99mTc-labeled octreotide tracer within the orbit and head in patients with thyroid associated eye disease relying on image fusion. Methods: A series of 18 patients presenting the signs and symptoms of thyroid associated eye disease were studied. Functional imaging was done with 99mTc-HYNIC-TOC, a tracer newly developed in-house. Both whole body as well as single photon emission tomographies (SPECT) of the head were obtained in each patient. Parallel to nuclear medicine imaging, morphological imaging was done using either computed tomography or magnetic resonance. Results: By means of image fusion far more information on the functional status of the patients was obtained. All areas showing an uptake could be anatomically identified, revealing a series of organs that had not yet been considered in this disease. The organs presenting tracer uptake showed characteristic forms as described below: - eye glass sign: lacrimal gland and lacrimal ducts - scissors sign: eye muscles, rectus sup. and inf. - arch on CT: muscle displacement - Omega sign: tonsils and salivary glands - W-sign: tonsils and salivary glands Conclusions: By means of image fusion it was possible to recognize that a series of organs of the neck and head express somatostatin receptors. We interpret these results as a sign of inflammation of the lacrimal glands, the lacrimal ducts, the cervical lymphatics, the anterior portions of the extra ocular eye muscles and muscles of the posterior cervical region. Somatostatin uptake in these structures reflects the presence of specific receptors, which reflects the immunoregulating function of the peptide. (author)

  18. Automatic Registration Method for Fusion of ZY-1-02C Satellite Images

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2013-12-01

    Full Text Available Automatic image registration (AIR) has been widely studied in the fields of medical imaging, computer vision, and remote sensing. In various cases, such as image fusion, high registration accuracy should be achieved to meet application requirements. For satellite images, the large image size and unstable positioning accuracy resulting from the limited manufacturing technology of charge-coupled device, focal plane distortion, and unrecorded spacecraft jitter lead to difficulty in obtaining agreeable corresponding points for registration using only area-based matching or feature-based matching. In this situation, a coarse-to-fine matching strategy integrating two types of algorithms is proven feasible and effective. In this paper, an AIR method for application to the fusion of ZY-1-02C satellite imagery is proposed. First, the images are geometrically corrected. Coarse matching, based on scale invariant feature transform, is performed for the subsampled corrected images, and a rough global estimation is made with the matching results. Harris feature points are then extracted, and the coordinates of the corresponding points are calculated according to the global estimation results. Precise matching is conducted, based on normalized cross correlation and least squares matching. As complex image distortion cannot be precisely estimated, a local estimation using the structure of triangulated irregular network is applied to eliminate the false matches. Finally, image resampling is conducted, based on local affine transformation, to achieve high-precision registration. Experiments with ZY-1-02C datasets demonstrate that the accuracy of the proposed method meets the requirements of fusion application, and its efficiency is also suitable for the commercial operation of the automatic satellite data process system.
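The precise-matching step based on normalized cross correlation can be illustrated with a small sketch (not the paper's implementation; the least-squares refinement and TIN-based outlier removal it describes are omitted). Given a coarse position estimate, a small neighborhood is searched for the offset maximizing NCC:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation of two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def refine_match(image, template, center, radius):
    """Search a (2*radius+1)^2 neighborhood around a coarse estimate `center`
    (top-left corner) for the position maximizing NCC with `template`."""
    th, tw = template.shape
    cy, cx = center
    best, best_pos = -2.0, center
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            patch = image[y:y + th, x:x + tw]
            score = ncc(patch, template)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

Because NCC normalizes out local brightness and contrast, it tolerates the radiometric differences between panchromatic and multispectral bands better than raw correlation.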

  19. Live-cell imaging of conidial anastomosis tube fusion during colony initiation in Fusarium oxysporum.

    Directory of Open Access Journals (Sweden)

    Smija M Kurian

    Full Text Available Fusarium oxysporum exhibits conidial anastomosis tube (CAT fusion during colony initiation to form networks of conidial germlings. Here we determined the optimal culture conditions for this fungus to undergo CAT fusion between microconidia in liquid medium. Extensive high resolution, confocal live-cell imaging was performed to characterise the different stages of CAT fusion, using genetically encoded fluorescent labelling and vital fluorescent organelle stains. CAT homing and fusion were found to be dependent on adhesion to the surface, in contrast to germ tube development which occurs in the absence of adhesion. Staining with fluorescently labelled concanavalin A indicated that the cell wall composition of CATs differs from that of microconidia and germ tubes. The movement of nuclei, mitochondria, vacuoles and lipid droplets through fused germlings was observed by live-cell imaging.

  20. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods.

    Directory of Open Access Journals (Sweden)

    Alexander Toet

    Full Text Available The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7 μm), near-infrared (NIR, 0.7-1.0 μm) and long-wave infrared (LWIR, 8-14 μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false color) frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs

  1. Fusion of High b-value diffusion-weighted and T2-weighted MR images improves identification of lymph nodes in the pelvis

    International Nuclear Information System (INIS)

    Mir, N.; Sohaib, S.A.; Collins, D.; Koh, D.M.

    2010-01-01

    Full text: Accurate identification of lymph nodes facilitates nodal assessment by size, morphological or MR lymphographic criteria. We compared the MR detection of lymph nodes in patients with pelvic cancers using T2-weighted imaging, and fusion of diffusion-weighted imaging (DWI) and T2-weighted imaging. Twenty patients with pelvic tumours underwent 5-mm axial T2-weighted imaging and DWI (b-values 0-750 s/mm²) on a 1.5T system. Fusion images of b = 750 s/mm² diffusion-weighted MR and T2-weighted images were created. Two radiologists evaluated in consensus the T2-weighted images and fusion images independently. For each image set, the location and diameter of pelvic nodes were recorded, and nodal visibility was scored using a 4-point scale (0-3). Nodal visualisation was compared using Relative to an Identified Distribution (RIDIT) analysis. The mean RIDIT score describes the probability that a randomly selected node will be better visualised relative to the other image set. One hundred fourteen pelvic nodes (mean 5.9 mm; 2-10 mm) were identified on T2-weighted images and 161 nodes (mean 4.3 mm; 2-10 mm) on fusion images. Using fusion images, 47 additional nodes were detected compared with T2-weighted images alone (eight external iliac, 24 inguinal, 12 obturator, two peri-rectal, one presacral). Nodes detected only on fusion images were 2-9 mm (mean 3.7 mm). Nodal visualisation was better using fusion images compared with T2-weighted images (mean RIDIT score 0.689 vs 0.302). Fusion of diffusion-weighted MR with T2-weighted images improves identification of pelvic lymph nodes compared with T2-weighted images alone. The improved nodal identification may aid treatment planning and further nodal characterisation.
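RIDIT analysis, used above to compare nodal visualisation scores, assigns each ordered category a "ridit" derived from a reference distribution; the mean ridit of a comparison group is then the probability that a randomly selected item from it ranks above a random reference item (ties split). A minimal sketch (illustrative, not the authors' exact computation):

```python
def ridits(reference_counts):
    """Ridit for each ordered category of a reference distribution:
    (count below + half the category's own count) / total."""
    total = sum(reference_counts)
    below, out = 0, []
    for c in reference_counts:
        out.append((below + 0.5 * c) / total)
        below += c
    return out

def mean_ridit(comparison_counts, reference_counts):
    """Mean ridit of the comparison group: probability that a random
    comparison item ranks above a random reference item (ties split)."""
    r = ridits(reference_counts)
    n = sum(comparison_counts)
    return sum(c * ri for c, ri in zip(comparison_counts, r)) / n
```

A distribution compared against itself always yields a mean ridit of 0.5, which is why the reported scores of 0.689 vs 0.302 indicate better visualisation on the fusion images.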

  2. Anato-metabolic fusion of PET, CT and MRI images; Anatometabolische Bildfusion von PET, CT und MRT

    Energy Technology Data Exchange (ETDEWEB)

    Przetak, C.; Baum, R.P.; Niesen, A. [Zentralklinik Bad Berka (Germany). Klinik fuer Nuklearmedizin/PET-Zentrum; Slomka, P. [University of Western Ontario, Toronto (Canada). Health Sciences Centre; Proeschild, A.; Leonhardi, J. [Zentralklinik Bad Berka (Germany). Inst. fuer bildgebende Diagnostik

    2000-12-01

    The fusion of cross-sectional images - especially in oncology - appears to be a very helpful tool to improve the diagnostic and therapeutic accuracy. Though many advantages exist, image fusion is applied routinely only in a few hospitals. To introduce image fusion as a common procedure, technical and logistical conditions have to be fulfilled which are related to long term archiving of digital data, data transfer and improvement of the available software in terms of usefulness and documentation. The accuracy of coregistration and the quality of image fusion has to be validated by further controlled studies. (orig.) [Translated from German] To increase diagnostic and therapeutic certainty, the fusion of cross-sectional images from different tomographic modalities is very helpful, especially in oncology. Despite its advantages, image fusion has so far entered routine nuclear medicine and radiological diagnostics only at individual centers. For image fusion to be used generally, certain technical and logistical prerequisites must be met. These concern the long-term archiving of digital data, the options for data transfer, and the further development of the available software, also with regard to usability and documentation. In addition, the accuracy of coregistration, and thus the quality of image fusion, must be validated by controlled studies. (orig.)

  3. Computer-aided breast MR image feature analysis for prediction of tumor response to chemotherapy

    International Nuclear Information System (INIS)

    Aghaei, Faranak; Tan, Maxine; Liu, Hong; Zheng, Bin; Hollingsworth, Alan B.; Qian, Wei

    2015-01-01

    Purpose: To identify a new clinical marker based on quantitative kinetic image features analysis and assess its feasibility to predict tumor response to neoadjuvant chemotherapy. Methods: The authors assembled a dataset involving breast MR images acquired from 68 cancer patients before undergoing neoadjuvant chemotherapy. Among them, 25 patients had complete response (CR) and 43 had partial and nonresponse (NR) to chemotherapy based on the response evaluation criteria in solid tumors. The authors developed a computer-aided detection scheme to segment breast areas and tumors depicted on the breast MR images and computed a total of 39 kinetic image features from both tumor and background parenchymal enhancement regions. The authors then applied and tested two approaches to classify between CR and NR cases. The first one analyzed each individual feature and applied a simple feature fusion method that combines classification results from multiple features. The second approach tested an attribute selected classifier that integrates an artificial neural network (ANN) with a wrapper subset evaluator, which was optimized using a leave-one-case-out validation method. Results: In the pool of 39 features, 10 yielded relatively higher classification performance with the areas under receiver operating characteristic curves (AUCs) ranging from 0.61 to 0.78 to classify between CR and NR cases. Using a feature fusion method, the maximum AUC = 0.85 ± 0.05. Using the ANN-based classifier, AUC value significantly increased to 0.96 ± 0.03 (p < 0.01). Conclusions: This study demonstrated that quantitative analysis of kinetic image features computed from breast MR images acquired prechemotherapy has potential to generate a useful clinical marker in predicting tumor response to chemotherapy.
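The AUC values quoted above have a simple nonparametric interpretation: the probability that a randomly chosen responder's score exceeds a randomly chosen non-responder's score, i.e. the normalized Mann-Whitney statistic. A minimal sketch, independent of the study's actual software:

```python
def auc(scores_pos, scores_neg):
    """Nonparametric AUC: fraction of (positive, negative) pairs in which the
    positive case scores higher, counting ties as half a win. Equivalent to
    the Mann-Whitney U statistic divided by the number of pairs."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 corresponds to chance-level discrimination between CR and NR cases, and 1.0 to perfect separation; the reported 0.96 for the ANN-based classifier sits near the latter.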

  4. Computer-aided breast MR image feature analysis for prediction of tumor response to chemotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Aghaei, Faranak; Tan, Maxine; Liu, Hong; Zheng, Bin, E-mail: Bin.Zheng-1@ou.edu [School of Electrical and Computer Engineering, University of Oklahoma, Norman, Oklahoma 73019 (United States); Hollingsworth, Alan B. [Mercy Women’s Center, Mercy Health Center, Oklahoma City, Oklahoma 73120 (United States); Qian, Wei [Department of Electrical and Computer Engineering, University of Texas, El Paso, Texas 79968 (United States)

    2015-11-15

    Purpose: To identify a new clinical marker based on quantitative kinetic image features analysis and assess its feasibility to predict tumor response to neoadjuvant chemotherapy. Methods: The authors assembled a dataset involving breast MR images acquired from 68 cancer patients before undergoing neoadjuvant chemotherapy. Among them, 25 patients had complete response (CR) and 43 had partial and nonresponse (NR) to chemotherapy based on the response evaluation criteria in solid tumors. The authors developed a computer-aided detection scheme to segment breast areas and tumors depicted on the breast MR images and computed a total of 39 kinetic image features from both tumor and background parenchymal enhancement regions. The authors then applied and tested two approaches to classify between CR and NR cases. The first one analyzed each individual feature and applied a simple feature fusion method that combines classification results from multiple features. The second approach tested an attribute selected classifier that integrates an artificial neural network (ANN) with a wrapper subset evaluator, which was optimized using a leave-one-case-out validation method. Results: In the pool of 39 features, 10 yielded relatively higher classification performance with the areas under receiver operating characteristic curves (AUCs) ranging from 0.61 to 0.78 to classify between CR and NR cases. Using a feature fusion method, the maximum AUC = 0.85 ± 0.05. Using the ANN-based classifier, AUC value significantly increased to 0.96 ± 0.03 (p < 0.01). Conclusions: This study demonstrated that quantitative analysis of kinetic image features computed from breast MR images acquired prechemotherapy has potential to generate a useful clinical marker in predicting tumor response to chemotherapy.

  5. Fusion of magnetic resonance angiography and magnetic resonance imaging for surgical planning for meningioma. Technical note

    International Nuclear Information System (INIS)

    Kashimura, Hiroshi; Ogasawara, Kuniaki; Arai, Hiroshi

    2008-01-01

    A fusion technique for magnetic resonance (MR) angiography and MR imaging was developed to help assess the peritumoral angioarchitecture during surgical planning for meningioma. Three-dimensional time-of-flight (3D-TOF) and 3D-spoiled gradient recalled (SPGR) datasets were obtained from 10 patients with intracranial meningioma, and fused using newly developed volume registration and visualization software. Maximum intensity projection (MIP) images from 3D-TOF MR angiography and axial SPGR MR imaging were displayed at the same time on the monitor. Selecting a vessel on the real-time MIP image indicated the corresponding points on the axial image automatically. Fusion images showed displacement of the anterior cerebral or middle cerebral artery in 7 patients and encasement of the anterior cerebral arteries in 1 patient, with no relationship between the main arterial trunk and tumor in 2 patients. Fusion of MR angiography and MR imaging can clarify relationships between the intracranial vasculature and meningioma, and may be helpful for surgical planning for meningioma. (author)

  6. Large area imaging of hydrogenous materials using fast neutrons from a DD fusion generator

    Energy Technology Data Exchange (ETDEWEB)

    Cremer, J.T., E-mail: ted@adelphitech.com [Adelphi Technology Inc., 2003 East Bayshore Road, Redwood City, California 94063 (United States); Williams, D.L.; Gary, C.K.; Piestrup, M.A.; Faber, D.R.; Fuller, M.J.; Vainionpaa, J.H.; Apodaca, M. [Adelphi Technology Inc., 2003 East Bayshore Road, Redwood City, California 94063 (United States); Pantell, R.H.; Feinstein, J. [Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States)

    2012-05-21

    A small-laboratory fast-neutron generator and a large-area detector were used to image hydrogen-bearing materials. The overall image resolution of 2.5 mm was determined by a knife-edge measurement. Contact images of objects were obtained in 5-50 min exposures by placing them close to a plastic scintillator at distances of 1.5 to 3.2 m from the neutron source. The generator produces 10^9 n/s from the DD fusion reaction at a small target. The combination of the DD-fusion generator and electronic camera permits both small-laboratory and field-portable imaging of hydrogen-rich materials embedded in high density materials.
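    The knife-edge resolution measurement mentioned above can be illustrated numerically: the line-spread function (LSF) is the derivative of the edge-spread profile, and its full width at half maximum gives the resolution. The synthetic error-function edge below is an assumption, not the paper's data:

```python
import numpy as np
from math import erf

def knife_edge_fwhm(edge_profile, pixel_mm):
    """Estimate resolution (FWHM of the line-spread function, in mm)
    from a 1-D edge-spread profile measured across a knife edge."""
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.abs(np.gradient(esf))            # LSF = derivative of the ESF
    half = lsf.max() / 2.0
    above = np.where(lsf >= half)[0]          # samples at or above half maximum
    return (above[-1] - above[0]) * pixel_mm  # width in samples -> mm

# Synthetic ESF: error-function edge whose Gaussian LSF has sigma = 2 px,
# so the true FWHM is 2.355 * sigma = 4.71 px.
x = np.arange(200)
sigma = 2.0
esf = np.array([0.5 * (1 + erf((xi - 100) / (sigma * np.sqrt(2)))) for xi in x])
fwhm_mm = knife_edge_fwhm(esf, pixel_mm=0.5)
```

At 0.5 mm per pixel the discrete estimate lands near the true 2.35 mm, within the one-sample quantization of the threshold crossing.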

  7. Hybrid Histogram Descriptor: A Fusion Feature Representation for Image Retrieval.

    Science.gov (United States)

    Feng, Qinghe; Hao, Qiaohong; Chen, Yuqi; Yi, Yugen; Wei, Ying; Dai, Jiangyan

    2018-06-15

    Currently, visual sensors are becoming increasingly affordable and widespread, accelerating the growth of image data. Image retrieval has attracted increasing interest due to applications in space exploration, industry, and biomedicine. Nevertheless, designing an effective feature representation is acknowledged as a hard yet fundamental issue. This paper presents a fused feature representation called the hybrid histogram descriptor (HHD) for image retrieval. The proposed descriptor combines two histograms: a perceptually uniform histogram, extracted by exploiting the color and edge orientation information in perceptually uniform regions; and a motif co-occurrence histogram, acquired by calculating the probability of a pair of motif patterns. To evaluate the performance, we benchmarked the proposed descriptor on the RSSCN7, AID, Outex-00013, Outex-00014 and ETHZ-53 datasets. Experimental results suggest that the proposed descriptor is more effective and robust than ten recent fusion-based descriptors under the content-based image retrieval framework. The computational complexity was also analyzed to give an in-depth evaluation. Furthermore, compared with state-of-the-art convolutional neural network (CNN)-based descriptors, the proposed descriptor achieves comparable performance but does not require any training process.
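    A minimal sketch of the fusion-by-concatenation idea behind the HHD follows. The paper's perceptually uniform and motif co-occurrence histograms are replaced here by simple intensity and gradient-orientation histograms, so this only illustrates the mechanics of concatenating two complementary histograms and ranking by distance:

```python
import numpy as np

def intensity_histogram(img, bins=8):
    """Global intensity histogram (stand-in for the perceptually uniform histogram)."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def orientation_histogram(img, bins=8):
    """Gradient-orientation histogram (stand-in for the motif co-occurrence histogram)."""
    gy, gx = np.gradient(img.astype(float))
    theta = np.arctan2(gy, gx)                      # orientations in [-pi, pi]
    h, _ = np.histogram(theta, bins=bins, range=(-np.pi, np.pi))
    return h / h.sum()

def hybrid_descriptor(img):
    """Concatenate the two histograms into one fused feature vector."""
    return np.concatenate([intensity_histogram(img), orientation_histogram(img)])

def retrieve(query, database):
    """Rank database images by L1 distance to the query descriptor."""
    q = hybrid_descriptor(query)
    dists = [np.abs(q - hybrid_descriptor(img)).sum() for img in database]
    return np.argsort(dists)

rng = np.random.default_rng(1)
bright = rng.integers(180, 255, (32, 32))
dark = rng.integers(0, 60, (32, 32))
query = rng.integers(180, 255, (32, 32))   # should match the bright image
ranking = retrieve(query, [dark, bright])
```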

  8. Dual Channel Pulse Coupled Neural Network Algorithm for Fusion of Multimodality Brain Images with Quality Analysis

    Directory of Open Access Journals (Sweden)

    Kavitha SRINIVASAN

    2014-09-01

    Background: A review of medical imaging techniques shows that radiologists and physicians still need high-resolution medical images with complementary information from different modalities to ensure efficient analysis. This requirement is addressed by fusion techniques, with the fused image being used in image-guided surgery, image-guided radiotherapy and non-invasive diagnosis. Aim: This paper focuses on a Dual Channel Pulse Coupled Neural Network (PCNN) algorithm for fusion of multimodality brain images; the fused image is further analyzed using subjective (human perception) and objective (statistical) measures for quality analysis. Material and Methods: The modalities used in fusion are CT, MRI with subtypes T1/T2/PD/GAD, PET and SPECT, since the information from each modality is complementary to the others. The objective measures selected for evaluation of the fused image were Information Entropy (IE, image quality), Mutual Information (MI, deviation of the fused image from the source images) and Signal to Noise Ratio (SNR, noise level). Eight sets of brain images with different modality pairs (T2 with T1, T2 with CT, PD with T2, PD with GAD, T2 with GAD, T2 with SPECT-Tc, T2 with SPECT-Tl, T2 with PET) were chosen for the experiments, and the proposed technique was compared with existing fusion methods, namely the Average method, the Contrast pyramid, the Shift Invariant Discrete Wavelet Transform (SIDWT) with Haar, and the Morphological pyramid, using the selected measures to ascertain relative performance. Results: The IE and SNR values of the fused image derived from the dual channel PCNN are higher than those of the other fusion methods, showing that the quality is better with less noise. Conclusion: The fused image resulting from the proposed method retains the contrast, shape and texture of the source images without false information or information loss.
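    The three objective measures used above (IE, MI, SNR) can be computed as follows; the bin counts and toy images are illustrative assumptions:

```python
import numpy as np

def information_entropy(img, bins=256):
    """Shannon entropy (bits) of the image grey-level distribution."""
    counts, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b, bins=64):
    """MI between a source image and the fused image, from the joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def snr_db(reference, fused):
    """Signal-to-noise ratio (dB) of the fused image against a reference."""
    ref = reference.astype(float)
    noise = ref - fused.astype(float)
    return float(10 * np.log10((ref ** 2).sum() / (noise ** 2).sum()))

rng = np.random.default_rng(2)
src = rng.integers(0, 256, (64, 64))
identical_mi = mutual_information(src, src)                    # upper bound
shuffled = rng.permutation(src.ravel()).reshape(64, 64)        # destroys dependence
```

A fused image that preserves the source should score close to `identical_mi`; an unrelated image scores near zero.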

  9. Spatial resolution enhancement of satellite image data using fusion approach

    Science.gov (United States)

    Lestiana, H.; Sukristiyanti

    2018-02-01

    Object identification using remote sensing data is problematic when the spatial resolution does not match the object. The fusion approach is one method to solve this problem, improving object recognition and enriching object information by combining data from multiple sensors. Image fusion can be applied to estimate environmental components that need monitoring from multiple views, such as evapotranspiration estimation, 3D ground-based characterisation, smart city applications, urban environments, terrestrial mapping, and water vegetation. With fusion methods, visible objects on land are easily recognized, and the increased variety of object information on land has widened the range of environmental components that can be estimated. The difficulty of recognizing invisible objects such as Submarine Groundwater Discharge (SGD), especially in tropical areas, might also be reduced by fusion; the small variation of such objects in sea surface temperature remains a challenge to be solved.

  10. Fusion of PET and MRI for Hybrid Imaging

    Science.gov (United States)

    Cho, Zang-Hee; Son, Young-Don; Kim, Young-Bo; Yoo, Seung-Schik

    Recently, the development of fusion PET-MRI systems has been actively studied to meet the increasing demand for integrated molecular and anatomical imaging. MRI can provide detailed anatomical information on the brain, such as the locations of gray and white matter, blood vessels, and axonal tracts, with high resolution, while PET can measure molecular and genetic information, such as glucose metabolism, neurotransmitter-neuroreceptor binding and affinity, protein-protein interactions, and gene trafficking among biological tissues. State-of-the-art MRI systems, such as 7.0 T whole-body MRI, can now visualize super-fine structures including neuronal bundles in the pons, fine blood vessels (such as lenticulostriate arteries) without invasive contrast agents, in vivo hippocampal substructures, and the substantia nigra with excellent image contrast. High-resolution PET, known as the High-Resolution Research Tomograph (HRRT), is a brain-dedicated system capable of imaging minute changes of chemicals, such as neurotransmitters and receptors, with high spatial resolution and sensitivity. The synergistic power of the two, i.e., the ultra-high-resolution anatomical information offered by a 7.0 T MRI system combined with the high-sensitivity molecular information offered by HRRT-PET, will significantly elevate the level of our current understanding of the human brain, one of the most delicate, complex, and mysterious biological organs. This chapter introduces MRI, PET, and the PET-MRI fusion system, and discusses its algorithms in detail.

  11. Identity fusion predicts endorsement of pro-group behaviours targeting nationality, religion, or football in Brazilian samples.

    Science.gov (United States)

    Bortolini, Tiago; Newson, Martha; Natividade, Jean Carlos; Vázquez, Alexandra; Gómez, Ángel

    2018-04-01

    A visceral feeling of oneness with a group - identity fusion - has proven to be a stronger predictor of pro-group behaviours than other measures of group bonding, such as group identification. However, the relationship between identity fusion and other group alignment measures, and their different roles in predicting pro-group behaviour, is still controversial. Here, we test whether identity fusion is related to, but distinct from, unidimensional and multidimensional measures of group identification. We also show that identity fusion explains more variance in the endorsement of pro-group behaviour than these alternative measures, and examine the structural and discriminant properties of identity fusion and group identification measures in three different contexts: nationality, religion, and football fandom. Finally, we extend the fusion literature to a new culture: Brazil. To the best of our knowledge, this is the first research explicitly addressing a comparison between these two forms of group alignment, identity fusion and identification with a group, and their role in predicting pro-group behaviours. © 2018 The British Psychological Society.

  12. Prediction of Quadcopter State through Multi-Microphone Side-Channel Fusion

    NARCIS (Netherlands)

    Koops, Hendrik Vincent; Garg, Kashish; Kim, Munsung; Li, Jonathan; Volk, Anja; Franchetti, Franz

    Improving trust in the state of Cyber-Physical Systems becomes increasingly important as more tasks become autonomous. We present a multi-microphone machine learning fusion approach to accurately predict complex states of a quadcopter drone in flight from the sound it makes using audio content

  13. Medium resolution image fusion, does it enhance forest structure assessment

    CSIR Research Space (South Africa)

    Roberts, JW

    2008-07-01

    This research explored the potential benefits of fusing optical and Synthetic Aperture Radar (SAR) medium resolution satellite-borne sensor data for forest structural assessment. Image fusion was applied as a means of retaining disparate data...

  14. HALO: a reconfigurable image enhancement and multisensor fusion system

    Science.gov (United States)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.
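    As a sketch of the image contrast enhancement (ICE) function mentioned above, here is basic global histogram equalization. HALO™'s actual ICE algorithm is not described in this abstract, so this is a generic stand-in; it assumes 8-bit imagery and a non-constant input:

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization: a basic image-contrast-enhancement step."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Map grey levels so the output CDF is approximately linear:
    # the lowest occupied level goes to 0, the highest to 255.
    cdf_min = cdf[np.nonzero(cdf)[0][0]]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(3)
low_contrast = rng.integers(100, 140, (64, 64)).astype(np.uint8)  # narrow grey range
enhanced = equalize_histogram(low_contrast)
```

Real systems typically use local/adaptive variants (e.g. CLAHE) to avoid over-amplifying noise in flat regions, but the lookup-table structure is the same.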

  15. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    International Nuclear Information System (INIS)

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-01-01

    Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, versus a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using 3D US devices has an accuracy similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used to monitor the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.

  16. Diagnostic performance of fluorodeoxyglucose positron emission tomography/magnetic resonance imaging fusion images of gynecological malignant tumors. Comparison with positron emission tomography/computed tomography

    International Nuclear Information System (INIS)

    Nakajo, Kazuya; Tatsumi, Mitsuaki; Inoue, Atsuo

    2010-01-01

    We compared the diagnostic accuracy of fluorodeoxyglucose positron emission tomography/computed tomography (FDG PET/CT) and PET/magnetic resonance imaging (MRI) fusion images for gynecological malignancies. A total of 31 patients with gynecological malignancies were enrolled. FDG-PET images were fused to CT, T1- and T2-weighted images (T1WI, T2WI). PET-MRI fusion was performed semiautomatically. We performed three types of evaluation to demonstrate the usefulness of PET/MRI fusion images in comparison with inline PET/CT: depiction of the uterus and ovarian lesions on CT or MRI mapping images (first evaluation); additional information for lesion localization with PET and mapping images (second evaluation); and the image quality of fusion on interpretation (third evaluation). In the first evaluation, the score for T2WI (4.68±0.65) was significantly higher than that for CT (3.54±1.02) or T1WI (3.71±0.97) (P<0.01). In the second evaluation, the scores for localization of FDG accumulation showed that T2WI (2.74±0.57) provided significantly more additional information for identifying anatomical sites of FDG accumulation than did CT (2.06±0.68) or T1WI (2.23±0.61) (P<0.01). In the third evaluation, the three-point rating scale for the patient group as a whole demonstrated that PET/T2WI (2.72±0.54) localized lesions significantly more convincingly than PET/CT (2.23±0.50) or PET/T1WI (2.29±0.53) (P<0.01). PET/T2WI fusion images are superior for the detection and localization of gynecological malignancies. (author)

  17. Biodistribution and tumor imaging of an anti-CEA single-chain antibody-albumin fusion protein

    International Nuclear Information System (INIS)

    Yazaki, Paul J.; Kassa, Thewodros; Cheung, Chia-wei; Crow, Desiree M.; Sherman, Mark A.; Bading, James R.; Anderson, Anne-Line J.; Colcher, David; Raubitschek, Andrew

    2008-01-01

    Albumin fusion proteins have demonstrated the ability to prolong the in vivo half-life of small therapeutic proteins/peptides in the circulation and thereby potentially increase their therapeutic efficacy. To evaluate if this format can be employed for antibody-based imaging, an anticarcinoembryonic antigen (CEA) single-chain antibody (scFv)-albumin fusion protein was designed, expressed and radiolabeled for biodistribution and imaging studies in athymic mice bearing human colorectal carcinoma LS-174T xenografts. The [125I]-T84.66 fusion protein demonstrated rapid tumor uptake of 12.3% injected dose per gram (ID/g) at 4 h that reached a plateau of 22.7% ID/g by 18 h. This was a dramatic increase in tumor uptake compared to 4.9% ID/g for the scFv alone. The radiometal [111In]-labeled version resulted in higher tumor uptake, 37.2% ID/g at 18 h, which persisted at the tumor site, with tumor:blood ratios reaching 18:1 and with normal tissues showing limited uptake. Based on these favorable imaging properties, a pilot [64Cu]-positron emission tomography imaging study was performed with promising results. The anti-CEA T84.66 scFv-albumin fusion protein demonstrates highly specific tumor uptake that is comparable to cognate recombinant antibody fragments. The radiometal-labeled version, which shows lower normal tissue accumulation than these recombinant antibodies, provides a promising and novel platform for antibody-based imaging agents.

  18. Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain

    Science.gov (United States)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian

    2018-01-01

    To improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images that combines the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. First, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Second, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Third, LSF is applied to enhance the regional features of the DCT coefficients, which is useful for image feature extraction. Several frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
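    The local spatial frequency measure used in the third step can be sketched as a block-wise row/column frequency computation, SF = sqrt(RF² + CF²). The block-wise maximum-SF fusion rule below is a simplified stand-in for the full DSWT/DCT pipeline, and the block size is an illustrative choice:

```python
import numpy as np

def local_spatial_frequency(img, block):
    """Spatial frequency SF = sqrt(RF^2 + CF^2) over non-overlapping blocks."""
    img = img.astype(float)
    h, w = img.shape
    sf = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            b = img[i*block:(i+1)*block, j*block:(j+1)*block]
            rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))   # row frequency
            cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))   # column frequency
            sf[i, j] = np.hypot(rf, cf)
    return sf

def fuse_by_spatial_frequency(a, b, block=8):
    """Per block, keep the source with the higher spatial frequency (more detail)."""
    sfa, sfb = local_spatial_frequency(a, block), local_spatial_frequency(b, block)
    fused = np.empty_like(a, dtype=float)
    for i in range(sfa.shape[0]):
        for j in range(sfa.shape[1]):
            src = a if sfa[i, j] >= sfb[i, j] else b
            fused[i*block:(i+1)*block, j*block:(j+1)*block] = \
                src[i*block:(i+1)*block, j*block:(j+1)*block]
    return fused

rng = np.random.default_rng(4)
textured = rng.normal(128, 40, (32, 32))      # high-detail source
flat = np.full((32, 32), 128.0)               # featureless source
fused = fuse_by_spatial_frequency(textured, flat)
```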

  19. Volume navigation with contrast enhanced ultrasound and image fusion for percutaneous interventions: first results.

    Directory of Open Access Journals (Sweden)

    Ernst Michael Jung

    OBJECTIVE: To assess the feasibility and efficiency of interventions using ultrasound (US) volume navigation (V Nav) with real-time needle tracking and image fusion with contrast-enhanced (ce) CT, MRI or US. METHODS: First, an in vitro study was performed on a liver phantom with CT data image fusion, involving the puncture of a 10 mm lesion at a depth of 5 cm by 15 examiners using US-guided freehand technique vs. V Nav, for the purpose of time optimization. Then 23 patients underwent ultrasound-navigated biopsies or interventions using V Nav image fusion of live ultrasound with ceCT, ceMRI or CEUS, which were acquired before the intervention. A CEUS data set was acquired in all patients. Image fusion was established for CEUS and CT or CEUS and MRI using anatomical landmarks in the area of the targeted lesion. A virtual biopsy line with navigational axes targeting the lesion was defined using a sterile trocar with a magnetic sensor embedded in its distal tip, employing dedicated navigation software for real-time needle tracking. RESULTS: The in vitro study showed significantly less time needed for the simulated interventions by all examiners when V Nav was used (p<0.05). In the patient study, histological confirmation was achieved in all 10 biopsies of suspect liver lesions. We also used V Nav for a breast biopsy (intraductal carcinoma), for a biopsy of the abdominal wall (metastasis of ovarian carcinoma) and for radiofrequency ablations (4 ablations). In 8 cases of inflammatory abdominal lesions, 9 percutaneous drainages were successfully inserted. CONCLUSION: Percutaneous biopsies and drainages, even of small lesions involving complex access pathways, can be accomplished with a high success rate by using 3D real-time image fusion together with real-time needle tracking.

  20. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance

    Science.gov (United States)

    Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang

    2017-12-01

    In recent years, Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperforms the state-of-the-art methods by about 13, 15, and 15%, respectively.
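    A robust (partial) Hausdorff distance of the kind used above can be sketched by taking a quantile of the nearest-neighbour distances instead of the maximum, which suppresses the influence of cluttered background points; the fraction parameter and point sets below are illustrative choices:

```python
import numpy as np

def directed_partial_hausdorff(A, B, frac=0.9):
    """Robust (partial) directed Hausdorff distance: the frac-quantile of the
    nearest-neighbour distances from points in A to the point set B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    nearest = d.min(axis=1)                                    # NN distance per A point
    k = max(int(np.ceil(frac * len(nearest))) - 1, 0)          # rank to report
    return float(np.sort(nearest)[k])

def robust_hausdorff(A, B, frac=0.9):
    """Symmetric robust Hausdorff: the larger of the two directed distances."""
    return max(directed_partial_hausdorff(A, B, frac),
               directed_partial_hausdorff(B, A, frac))

square = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
outlier = np.vstack([square, [[50, 50]]])   # one cluttered "background" point
d_classic = robust_hausdorff(square, outlier, frac=1.0)  # dominated by the outlier
d_robust = robust_hausdorff(square, outlier, frac=0.8)   # ignores the outlier
```

With `frac=1.0` this reduces to the classical Hausdorff distance; lowering the fraction is what makes the metric tolerate background clutter.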

  1. Thought–shape fusion and body image in eating disorders

    Directory of Open Access Journals (Sweden)

    Jáuregui-Lobera I

    2012-10-01

    Ignacio Jáuregui-Lobera,1 Patricia Bolaños-Ríos,2 Inmaculada Ruiz-Prieto2; 1Department of Nutrition and Bromatology, Pablo de Olavide University, Seville, Spain; 2Behavioral Sciences Institute, Seville, Spain. Purpose: The aim of this study was to analyze the relationships among thought–shape fusion (TSF), specific instruments to assess body image disturbances, and body image quality of life in eating disorder patients, in order to improve the understanding of the links between body image concerns and a specific bias consisting of beliefs about the consequences of thinking about forbidden foods. Patients and methods: The final sample included 76 eating disorder patients (mean age 20.13 ± 2.28 years; 59 women and seven men). After informed consent was obtained, the following questionnaires were administered: Body Appreciation Scale (BAS), Body Image Quality of Life Inventory (BIQLI-SP), Body Shape Questionnaire (BSQ), Eating Disorders Inventory-2 (EDI-2), State-Trait Anxiety Inventory (STAI), Symptom Checklist-90-Revised (SCL-90-R) and Thought-Shape Fusion Questionnaire (TSF-Q). Results: Significant correlations were found between TSF-Q and body image-related variables. Those with higher TSF scores showed higher scores on the BSQ (P < 0.0001), Eating Disorder Inventory – Drive for Thinness (EDI-DT) (P < 0.0001), and Eating Disorder Inventory – Body Dissatisfaction (EDI-BD) (P < 0.0001). The same patients showed lower scores on the BAS (P < 0.0001). With respect to the psychopathological variables, patients with high TSF obtained higher scores on all SCL-90-R subscales as well as on the STAI. Conclusion: The current study shows the interrelations among different body image-related variables, TSF, and body image quality of life. Keywords: cognitive distortions, quality of life, body appreciation, psychopathology, anorexia nervosa, bulimia nervosa

  2. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    Science.gov (United States)

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing brain vascular diseases and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. Our proposed fusion scheme contains different fusion methods for high- and low-frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In our proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients. For low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. Our proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our proposed fusion algorithm in comparison with common and basic fusion algorithms.
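    The maximum selection rule based on a local energy criterion, applied above to the low-frequency coefficients, can be sketched as follows. The curvelet transform itself and the fuzzy weighted-average rule for high frequencies are omitted; plain arrays stand in for coefficient sub-bands:

```python
import numpy as np

def local_energy(coeffs, win=3):
    """Sum of squared coefficients over a sliding win x win window."""
    c2 = coeffs.astype(float) ** 2
    pad = win // 2
    padded = np.pad(c2, pad, mode="edge")
    out = np.zeros_like(c2)
    for di in range(win):            # accumulate the shifted copies = box filter
        for dj in range(win):
            out += padded[di:di + c2.shape[0], dj:dj + c2.shape[1]]
    return out

def fuse_coefficients(low_a, low_b):
    """Maximum-selection rule: at each position keep the coefficient from the
    source whose local energy is higher."""
    pick_a = local_energy(low_a) >= local_energy(low_b)
    return np.where(pick_a, low_a, low_b)

a = np.zeros((8, 8)); a[2, 2] = 5.0     # strong structure in source A
b = np.zeros((8, 8)); b[6, 6] = 1.0     # weaker structure elsewhere in source B
fused = fuse_coefficients(a, b)
```

The fused band keeps each source's locally dominant structure, which is the intent of the energy-based maximum rule.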

  3. First downscattered neutron images from Inertial Confinement Fusion experiments at the National Ignition Facility

    Directory of Open Access Journals (Sweden)

    Guler Nevzat

    2013-11-01

    Inertial Confinement Fusion experiments at the National Ignition Facility (NIF) are designed to understand and test the basic principles of self-sustaining fusion reactions by laser driven compression of deuterium-tritium (DT) filled cryogenic plastic (CH) capsules. The experimental campaign is ongoing to tune the implosions and characterize the burning plasma conditions. Nuclear diagnostics play an important role in measuring the characteristics of these burning plasmas, providing feedback to improve the implosion dynamics. The Neutron Imaging (NI) diagnostic provides information on the distribution of the central fusion reaction region and the surrounding DT fuel by collecting images at two different energy bands for primary (13–15 MeV) and downscattered (10–12 MeV) neutrons. From these distributions, the final shape and size of the compressed capsule can be estimated and the symmetry of the compression can be inferred. The first downscattered neutron images from imploding ICF capsules are shown in this paper.

  4. First downscattered neutron images from Inertial Confinement Fusion experiments at the National Ignition Facility

    Science.gov (United States)

    Guler, Nevzat; Aragonez, Robert J.; Archuleta, Thomas N.; Batha, Steven H.; Clark, David D.; Clark, Deborah J.; Danly, Chris R.; Day, Robert D.; Fatherley, Valerie E.; Finch, Joshua P.; Gallegos, Robert A.; Garcia, Felix P.; Grim, Gary; Hsu, Albert H.; Jaramillo, Steven A.; Loomis, Eric N.; Mares, Danielle; Martinson, Drew D.; Merrill, Frank E.; Morgan, George L.; Munson, Carter; Murphy, Thomas J.; Oertel, John A.; Polk, Paul J.; Schmidt, Derek W.; Tregillis, Ian L.; Valdez, Adelaida C.; Volegov, Petr L.; Wang, Tai-Sen F.; Wilde, Carl H.; Wilke, Mark D.; Wilson, Douglas C.; Atkinson, Dennis P.; Bower, Dan E.; Drury, Owen B.; Dzenitis, John M.; Felker, Brian; Fittinghoff, David N.; Frank, Matthias; Liddick, Sean N.; Moran, Michael J.; Roberson, George P.; Weiss, Paul; Buckles, Robert A.; Cradick, Jerry R.; Kaufman, Morris I.; Lutz, Steve S.; Malone, Robert M.; Traille, Albert

    2013-11-01

    Inertial Confinement Fusion experiments at the National Ignition Facility (NIF) are designed to understand and test the basic principles of self-sustaining fusion reactions by laser driven compression of deuterium-tritium (DT) filled cryogenic plastic (CH) capsules. The experimental campaign is ongoing to tune the implosions and characterize the burning plasma conditions. Nuclear diagnostics play an important role in measuring the characteristics of these burning plasmas, providing feedback to improve the implosion dynamics. The Neutron Imaging (NI) diagnostic provides information on the distribution of the central fusion reaction region and the surrounding DT fuel by collecting images at two different energy bands for primary (13-15 MeV) and downscattered (10-12 MeV) neutrons. From these distributions, the final shape and size of the compressed capsule can be estimated and the symmetry of the compression can be inferred. The first downscattered neutron images from imploding ICF capsules are shown in this paper.

  5. Image Fusion Based on the Self-Organizing Feature Map Neural Networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhaoli; SUN Shenghe

    2001-01-01

    This paper presents a new image data fusion scheme based on self-organizing feature map (SOFM) neural networks. The scheme consists of three steps: (1) pre-processing of the images, where weighted median filtering removes part of the noise components corrupting the image; (2) pixel clustering for each image using two-dimensional self-organizing feature map neural networks; and (3) fusion of the images obtained in Step (2) utilizing fuzzy logic, which suppresses the residual noise components and thus further improves the image quality. Such a three-step combination offers an impressive effectiveness and performance improvement, which is confirmed by simulations involving three image sensors (each of which has a different noise structure).
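    Step (2), pixel clustering with a self-organizing feature map, can be sketched with a tiny one-dimensional SOFM over pixel intensities. The paper uses a two-dimensional map; the unit count, training schedule, and toy two-intensity image below are all illustrative assumptions:

```python
import numpy as np

def train_sofm(samples, n_units=4, epochs=20, lr0=0.5, seed=0):
    """Train a 1-D self-organizing feature map over scalar pixel intensities."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(samples.min(), samples.max(), n_units)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        radius = 1 if epoch < epochs // 2 else 0     # shrinking neighbourhood
        for x in rng.permutation(samples):
            bmu = int(np.argmin(np.abs(weights - x)))  # best-matching unit
            lo, hi = max(bmu - radius, 0), min(bmu + radius, n_units - 1)
            # Pull the BMU and its neighbours toward the sample.
            weights[lo:hi + 1] += lr * (x - weights[lo:hi + 1])
    return np.sort(weights)

def cluster_pixels(img, weights):
    """Assign each pixel to its nearest SOFM unit (cluster label map)."""
    return np.argmin(np.abs(img[..., None] - weights), axis=-1)

rng = np.random.default_rng(5)
img = np.where(rng.random((16, 16)) < 0.5, 40.0, 200.0)  # two-intensity test image
w = train_sofm(img.ravel(), n_units=2, epochs=10)
labels = cluster_pixels(img, w)
```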

  6. Geophysical data fusion for subsurface imaging

    International Nuclear Information System (INIS)

    Hoekstra, P.; Vandergraft, J.; Blohm, M.; Porter, D.

    1993-08-01

    A geophysical data fusion methodology is under development to combine data from complementary geophysical sensors and incorporate geophysical understanding to obtain three dimensional images of the subsurface. The research reported here is the first phase of a three phase project. The project focuses on the characterization of thin clay lenses (aquitards) in a highly stratified sand and clay coastal geology to depths of up to 300 feet. The sensor suite used in this work includes time-domain electromagnetic induction (TDEM) and near surface seismic techniques. During this first phase of the project, enhancements to the acquisition and processing of TDEM data were studied, by use of simulated data, to assess improvements for the detection of thin clay layers. Secondly, studies were made of the use of compressional wave and shear wave seismic reflection data by using state-of-the-art high frequency vibrator technology. Finally, a newly developed processing technique, called "data fusion," was implemented to process the geophysical data, and to incorporate a mathematical model of the subsurface strata. Examples are given of the results when applied to real seismic data collected at Hanford, WA, and for simulated data based on the geology of the Savannah River Site.

  7. Preoperative magnetic resonance and intraoperative ultrasound fusion imaging for real-time neuronavigation in brain tumor surgery.

    Science.gov (United States)

    Prada, F; Del Bene, M; Mattei, L; Lodigiani, L; DeBeni, S; Kolev, V; Vetrano, I; Solbiati, L; Sakas, G; DiMeco, F

    2015-04-01

    Brain shift and tissue deformation during surgery for intracranial lesions are the main current limitations of neuro-navigation (NN), which still relies mainly on preoperative imaging. Ultrasound (US), being a real-time imaging modality, is becoming progressively more widespread during neurosurgical procedures, but most neurosurgeons, trained on axial computed tomography (CT) and magnetic resonance imaging (MRI) slices, lack specific US training and have difficulty recognizing anatomic structures with the same confidence as in preoperative imaging. Therefore, real-time intraoperative fusion imaging (FI) between preoperative imaging and intraoperative ultrasound (ioUS) for virtual navigation (VN) is highly desirable. We describe our procedure for real-time navigation during surgery for different cerebral lesions. We performed fusion imaging with virtual navigation for patients undergoing surgery for brain lesion removal using an ultrasound-based real-time neuro-navigation system that fuses intraoperative cerebral ultrasound with preoperative MRI and simultaneously displays an MRI slice coplanar to an ioUS image. In total, 58 patients underwent surgery at our institution for intracranial lesion removal with image guidance using a US system equipped with fusion imaging for neuro-navigation. In all cases the initial (external) registration error obtained by the corresponding anatomical landmark procedure was below 2 mm and the craniotomy was correctly placed. The transdural window gave satisfactory US image quality and the lesion was always detectable and measurable on both axes. Brain shift/deformation correction was successfully employed in 42 cases to restore the co-registration during surgery. The accuracy of ioUS/MRI fusion/overlapping was confirmed intraoperatively under direct visualization of anatomic landmarks and the error was surgery and is less expensive and time-consuming than other intraoperative imaging techniques, offering high precision and

  8. Artificial intelligence (AI)-based relational matching and multimodal medical image fusion: generalized 3D approaches

    Science.gov (United States)

    Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.

    1994-09-01

    A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms--in particular, knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a `goodness' of matching function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to regional centers of gravity, images are ready for fusion and visualization into a single 3D image of higher clarity.
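
The iterative-thresholding segmentation mentioned above can be sketched with a Ridler-Calvard style scheme; this is an assumed variant, since the record does not specify which iterative rule is used:

```python
import numpy as np

def iterative_threshold(img, tol=0.5):
    # Ridler-Calvard style iterative thresholding: repeatedly place the
    # threshold midway between the means of the two classes it induces,
    # until the threshold stops moving.
    t = float(img.mean())
    while True:
        lo, hi = img[img <= t], img[img > t]
        if lo.size == 0 or hi.size == 0:
            return t
        t_new = (lo.mean() + hi.mean()) / 2.0
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# A toy bimodal "image": dark background plus a bright organ region.
img = np.concatenate([np.full(50, 20.0), np.full(50, 200.0)])
t = iterative_threshold(img)
```

Region attributes (area, centroid, shape moments) would then be computed on the resulting non-overlapping regions before the tree search.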

  9. Heavy-Ion Fusion Mechanism and Predictions of Super-Heavy Elements Production

    International Nuclear Information System (INIS)

    Abe, Yasuhisa; Shen Caiwan; Boilley, David; Giraud, Bertrand G.; Kosenko, Grigory

    2009-01-01

    The fusion process is shown to first form a largely deformed mono-nucleus and then to undergo diffusion in two dimensions, in the radial and mass-asymmetry degrees of freedom. Examples of predicted residue cross sections are given for the elements with Z = 117 and 118.

  10. The impact of image fusion in resolving discrepant findings between FDG-PET and MRI/CT in patients with gynaecological cancers

    International Nuclear Information System (INIS)

    Tsai, Cheng-Chien; Kao, Pan-Fu; Yen, Tzu-Chen; Tsai, Chien-Sheng; Hong, Ji-Hong; Ng, Koon-Kwan; Lai, Chyong-Huey; Chang, Ting-Chang; Hsueh, Swei

    2003-01-01

    This study was performed to prospectively investigate the impact of image fusion in resolving discrepant findings between fluorine-18 fluorodeoxyglucose positron emission tomography (FDG-PET) and magnetic resonance imaging (MRI) or X-ray computed tomography (CT) in patients with gynaecological cancers. Discrepant findings were defined as lesions where the difference between the FDG-PET and MRI/CT images was assigned a value of at least 2 on a 5-point probability scale. The FDG-PET and MRI/CT images were taken within 1 month of each other. Image fusion between FDG-PET and CT was performed by automatic registration between the two images. During an 18-month period, 34 malignant lesions and seven benign lesions from 32 patients who had undergone either surgical excision or a CT-guided histopathological investigation were included for analysis. Among these cases, image fusion was most frequently required to determine the nature and/or the extent of abdominal and pelvic lesions (28/41, 68%), especially as regards peritoneal seeding (8/41, 20%). Image fusion was most useful in providing better localisation for biopsy (16/41, 39%) and in discriminating between lesions with pathological versus physiological FDG uptake (12/41, 29%). Image fusion changed the original diagnosis based on MRI/CT alone in 9/41 lesions (22%), and the original diagnosis based on FDG-PET alone in 5/41 lesions (12%). It led to alteration of treatment planning (surgery or radiotherapy) in seven of the 32 patients (22%). In patients with gynaecological cancers, the technique of image fusion is helpful in discriminating the nature of FDG-avid lesions, in effectively localising lesions for CT-guided biopsy and in providing better surgical or radiotherapy planning. (orig.)

  11. Solving the problem of imaging resolution: stochastic multi-scale image fusion

    Science.gov (United States)

    Karsanina, Marina; Mallants, Dirk; Gilyazetdinova, Dina; Gerke, Kiril

    2016-04-01

    rocks) and RFBR grant 15-34-20989 (data fusion). References: 1. Karsanina, M.V., Gerke, K.M., Skvortsova, E.B., Mallants, D. Universal spatial correlation functions for describing and reconstructing soil microstructure. PLoS ONE 10(5): e0126515 (2015). 2. Gerke, K.M., Karsanina, M.V., Mallants, D. Universal stochastic multiscale image fusion: an example application for shale rock. Scientific Reports 5: 15880 (2015). 3. Gerke, K.M., Karsanina, M.V., Vasilyev, R.V., Mallants, D. Improving pattern reconstruction using correlation functions computed in directions. Europhys. Lett. 106(6), 66002 (2014). 4. Gerke, K.M., Karsanina, M.V. Improving stochastic reconstructions by weighting correlation functions in an objective function. Europhys. Lett. 111, 56002 (2015).

  12. The Ship Movement Trajectory Prediction Algorithm Using Navigational Data Fusion.

    Science.gov (United States)

    Borkowski, Piotr

    2017-06-20

    It is essential for the marine navigator conducting maneuvers of his ship at sea to know the future positions of his own ship and of target ships in a specific time span in order to effectively solve collision situations. This article presents an algorithm for ship movement trajectory prediction which, through data fusion, takes into account measurements of the ship's current position from a number of doubled autonomous devices. This increases the reliability and accuracy of the prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system, and is used in practice on board ships.
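
The record does not detail NAVDEC's fusion rule. As a generic illustration of why fusing measurements from doubled devices improves reliability and accuracy, an inverse-variance weighted combination of two position estimates can be sketched as:

```python
def fuse_position(p1, var1, p2, var2):
    # Inverse-variance weighted fusion of two estimates of the same
    # quantity; the fused variance is smaller than either input variance.
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * p1 + w2 * p2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)

# Two position readings with equal uncertainty (made-up numbers).
pos, var = fuse_position(10.0, 4.0, 14.0, 4.0)
```

With equal variances this reduces to the average; with unequal variances the more reliable device dominates, which is the point of carrying doubled sensors.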

  13. The Ship Movement Trajectory Prediction Algorithm Using Navigational Data Fusion

    Directory of Open Access Journals (Sweden)

    Piotr Borkowski

    2017-06-01

    Full Text Available It is essential for the marine navigator conducting maneuvers of his ship at sea to know the future positions of his own ship and of target ships in a specific time span in order to effectively solve collision situations. This article presents an algorithm for ship movement trajectory prediction which, through data fusion, takes into account measurements of the ship’s current position from a number of doubled autonomous devices. This increases the reliability and accuracy of the prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system, and is used in practice on board ships.

  14. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Science.gov (United States)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently: each has advantages and disadvantages compared with the others, and the advantages of different methods can in principle be combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector-based feature similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of the various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF), which avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent performance at fusing independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains the advantages of the individual fusion algorithms.
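
The NMF at the heart of the integration step can be sketched with Lee-Seung multiplicative updates. The rank-1 toy matrix and the all-ones initial factors below are placeholders for the feature-derived initial weights the abstract describes:

```python
import numpy as np

def nmf(V, W, H, iters=100, eps=1e-9):
    # Lee-Seung multiplicative updates; W and H stay nonnegative as long
    # as the initial factors are nonnegative.
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Columns of V could hold vectorized candidate fusion results; here a tiny
# rank-1 example. Initializing W/H from feature-derived weights, rather
# than at random, is the idea the abstract describes; ones() stand in for
# those weights here.
V = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
W0, H0 = np.ones((3, 1)), np.ones((1, 2))
W, H = nmf(V, W0, H0)
```

A deterministic initialization makes the factorization, and hence the fused result, reproducible across runs.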

  15. Color-coded Live Imaging of Heterokaryon Formation and Nuclear Fusion of Hybridizing Cancer Cells.

    Science.gov (United States)

    Suetsugu, Atsushi; Matsumoto, Takuro; Hasegawa, Kosuke; Nakamura, Miki; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M

    2016-08-01

    Fusion of cancer cells has been studied for over half a century. However, the steps involved after initial fusion between cells, such as heterokaryon formation and nuclear fusion, have been difficult to observe in real time. In order to be able to visualize these steps, we have established cancer-cell sublines from the human HT-1080 fibrosarcoma, one expressing green fluorescent protein (GFP) linked to histone H2B in the nucleus and a red fluorescent protein (RFP) in the cytoplasm, and the other expressing RFP (mCherry) linked to histone H2B in the nucleus and GFP in the cytoplasm. The two reciprocal color-coded sublines of HT-1080 cells were fused using the Sendai virus. The fused cells were cultured on plastic and observed using an Olympus FV1000 confocal microscope. Multi-nucleate (heterokaryotic) cancer cells, in addition to hybrid cancer cells with single or multiple fused nuclei, including fused mitotic nuclei, were observed among the fused cells. Heterokaryons with red, green, orange and yellow nuclei were observed by confocal imaging, even in single hybrid cells. The orange and yellow nuclei indicate nuclear fusion; red and green nuclei remained unfused. Cell fusion with heterokaryon formation and subsequent nuclear fusion resulting in hybridization may be an important natural phenomenon between cancer cells that makes them more malignant. The ability to image the complex processes following cell fusion using reciprocal color-coded cancer cells will allow greater understanding of the genetic basis of malignancy. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  16. High Level Information Fusion (HLIF) with nested fusion loops

    Science.gov (United States)

    Woodley, Robert; Gosnell, Michael; Fischer, Amber

    2013-05-01

    Situation modeling and threat prediction require higher levels of data fusion in order to provide actionable information. Beyond the sensor data and sources the analyst has access to, the use of out-sourced and re-sourced data is becoming common. Through the years, some common frameworks have emerged for dealing with information fusion—perhaps the most ubiquitous being the JDL Data Fusion Group and their initial 4-level data fusion model. Since these initial developments, numerous models of information fusion have emerged, hoping to better capture the human-centric process of data analyses within a machine-centric framework. 21st Century Systems, Inc. has developed Fusion with Uncertainty Reasoning using Nested Assessment Characterizer Elements (FURNACE) to address challenges of high level information fusion and handle bias, ambiguity, and uncertainty (BAU) for Situation Modeling, Threat Modeling, and Threat Prediction. It combines JDL fusion levels with nested fusion loops and state-of-the-art data reasoning. Initial research has shown that FURNACE is able to reduce BAU and improve the fusion process by allowing high level information fusion (HLIF) to affect lower levels without the double counting of information or other biasing issues. The initial FURNACE project was focused on the underlying algorithms to produce a fusion system able to handle BAU and repurposed data in a cohesive manner. FURNACE supports analyst's efforts to develop situation models, threat models, and threat predictions to increase situational awareness of the battlespace. FURNACE will not only revolutionize the military intelligence realm, but also benefit the larger homeland defense, law enforcement, and business intelligence markets.

  17. CBCT-based 3D MRA and angiographic image fusion and MRA image navigation for neuro interventions.

    Science.gov (United States)

    Zhang, Qiang; Zhang, Zhiqiang; Yang, Jiakang; Sun, Qi; Luo, Yongchun; Shan, Tonghui; Zhang, Hao; Han, Jingfeng; Liang, Chunyang; Pan, Wenlong; Gu, Chuanqi; Mao, Gengsheng; Xu, Ruxiang

    2016-08-01

    Digital subtraction angiography (DSA) remains the gold standard for diagnosis of cerebral vascular diseases and provides intraprocedural guidance. This practice involves extensive usage of x-ray and iodinated contrast medium, which can induce side effects. In this study, we examined the accuracy of 3-dimensional (3D) registration of magnetic resonance angiography (MRA) and DSA imaging for cerebral vessels, and tested the feasibility of using preprocedural MRA for real-time guidance during endovascular procedures. Twenty-three patients with suspected intracranial arterial lesions were enrolled. Contrast medium-enhanced 3D DSA of the target vessels was acquired in 19 patients during endovascular procedures, and the images were registered with preprocedural MRA for fusion accuracy evaluation. Low-dose noncontrasted 3D angiography of the skull was performed in the other 4 patients and registered with the MRA. The MRA was afterwards overlaid with 2D live fluoroscopy to guide endovascular procedures. The 3D registration of the MRA and angiography demonstrated a high accuracy for vessel lesion visualization in all 19 patients examined. Moreover, MRA of the intracranial vessels, registered to the noncontrasted 3D angiography in the 4 patients, provided a real-time 3D roadmap to successfully guide the endovascular procedures. Radiation dose to patients and contrast medium usage were shown to be significantly reduced. Three-dimensional MRA and angiography fusion can accurately generate cerebral vasculature images to guide endovascular procedures. The use of the fusion technology could enhance clinical workflow while minimizing contrast medium usage and radiation dose, hence lowering procedure risks and increasing treatment safety.

  18. SAR Target Recognition Based on Multi-feature Multiple Representation Classifier Fusion

    Directory of Open Access Journals (Sweden)

    Zhang Xinzheng

    2017-10-01

    Full Text Available In this paper, we present a Synthetic Aperture Radar (SAR) image target recognition algorithm based on multi-feature multiple representation learning classifier fusion. First, it extracts three features from the SAR images, namely principal component analysis, wavelet transform, and Two-Dimensional Slice Zernike Moments (2DSZM) features. Second, we harness the sparse representation classifier and the cooperative representation classifier with the above-mentioned features to get six predictive labels. Finally, we adopt classifier fusion to obtain the final recognition decision. We researched three different classifier fusion algorithms in our experiments, and the results demonstrate that using Bayesian decision fusion gives the best recognition performance. The method based on multi-feature multiple representation learning classifier fusion integrates the discrimination of multi-features and combines the sparse and cooperative representation classification performance to gain complementary advantages and to improve recognition accuracy. The experiments are based on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, and they demonstrate the effectiveness of the proposed approach.
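
As a simplified stand-in for the Bayesian decision fusion the paper found best, a plain majority vote over the six predicted-label streams can be sketched as follows (the target-class names are illustrative only):

```python
from collections import Counter

def majority_fusion(predictions):
    # predictions: one label list per (feature, classifier) pair.
    # Majority voting is the simplest decision-fusion rule; Bayesian
    # decision fusion additionally weights each vote by the classifier's
    # posterior probability.
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Six predicted-label streams (e.g. PCA/wavelet/2DSZM features x two
# representation classifiers) for three test samples.
preds = [
    ["T72", "BMP2", "BTR70"],
    ["T72", "BMP2", "BMP2"],
    ["T72", "T72",  "BTR70"],
    ["T72", "BMP2", "BTR70"],
    ["BMP2", "BMP2", "BTR70"],
    ["T72", "BMP2", "BTR70"],
]
fused = majority_fusion(preds)
```

A single mislabeled stream is outvoted, which is the complementarity argument the abstract makes.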

  19. Multi-Modality Registration And Fusion Of Medical Image Data

    International Nuclear Information System (INIS)

    Kassak, P.; Vencko, D.; Cerovsky, I.

    2008-01-01

    The digitalisation of health care facilities allows us to maximize the usage of digital data from one patient obtained by various modalities. A comprehensive view of the problem can be achieved from the side of morphology as well as functionality. Multi-modal registration and fusion of medical image data is one example that provides improved insight and allows a more precise approach and treatment. (author)

  20. Automated image-based assay for evaluation of HIV neutralization and cell-to-cell fusion inhibition.

    Science.gov (United States)

    Sheik-Khalil, Enas; Bray, Mark-Anthony; Özkaya Şahin, Gülsen; Scarlatti, Gabriella; Jansson, Marianne; Carpenter, Anne E; Fenyö, Eva Maria

    2014-08-30

    Standardized techniques to detect HIV-neutralizing antibody responses are of great importance in the search for an HIV vaccine. Here, we present a high-throughput, high-content automated plaque reduction (APR) assay based on automated microscopy and image analysis that allows evaluation of neutralization and inhibition of cell-cell fusion within the same assay. Neutralization of virus particles is measured as a reduction in the number of fluorescent plaques, and inhibition of cell-cell fusion as a reduction in plaque area. We found neutralization strength to be a significant factor in the ability of virus to form syncytia. Further, we introduce the inhibitory concentration of plaque area reduction (ICpar) as an additional measure of antiviral activity, i.e. fusion inhibition. We present an automated image based high-throughput, high-content HIV plaque reduction assay. This allows, for the first time, simultaneous evaluation of neutralization and inhibition of cell-cell fusion within the same assay, by quantifying the reduction in number of plaques and mean plaque area, respectively. Inhibition of cell-to-cell fusion requires higher quantities of inhibitory reagent than inhibition of virus neutralization.

  1. A Novel Fusion-Based Ship Detection Method from Pol-SAR Images

    Directory of Open Access Journals (Sweden)

    Wenguang Wang

    2015-09-01

    Full Text Available A novel fusion-based ship detection method from polarimetric Synthetic Aperture Radar (Pol-SAR) images is proposed in this paper. After feature extraction and constant false alarm rate (CFAR) detection, the detection results of the HH channel, diplane scattering from the Pauli decomposition, and the helical factor from the Barnes decomposition are fused together. Confirmed targets and potential target pixels can be obtained after the fusion process. Using the difference degree of the target, potential target pixels can be classified. The fusion-based ship detection method works accurately by utilizing three different features comprehensively. The result of applying the technique to measured Airborne Synthetic Aperture Radar (AIRSAR) data shows that the novel detection method achieves better performance in both ship detection and ship shape preservation compared to the results of the K-means clustering method and the Notch Filter method.
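
The CFAR stage can be illustrated with a one-dimensional cell-averaging CFAR detector on synthetic data; the guard/training window sizes and threshold scale below are assumed values, and the paper applies CFAR to the extracted polarimetric features rather than to a 1D profile:

```python
import numpy as np

def ca_cfar(x, guard=1, train=4, scale=5.0):
    # Cell-averaging CFAR: compare each cell against the mean of its
    # training cells (guard cells excluded) times a scale factor.
    n = len(x)
    det = np.zeros(n, dtype=bool)
    for i in range(n):
        left = x[max(0, i - guard - train):max(0, i - guard)]
        right = x[i + guard + 1:i + guard + 1 + train]
        cells = np.concatenate([left, right])
        if cells.size and x[i] > scale * cells.mean():
            det[i] = True
    return det

# Uniform clutter with a single strong scatterer (synthetic toy data).
power = np.ones(32)
power[10] = 50.0
hits = ca_cfar(power)
```

Running such a detector per feature channel and fusing the resulting detection maps is the overall structure the abstract describes.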

  2. Episodic aphasia associated with tumor active multiple sclerosis: a correlative SPECT study utilising image fusion

    International Nuclear Information System (INIS)

    Roff, G.; Campbell, A.; Lawn, N.; Henderson, A.; McCarthy, M.; Lenzo, N.

    2003-01-01

    Full text: Cerebral perfusion imaging is a common technique to assess cerebral perfusion and metabolism. It can complement anatomical imaging in assessing a number of neurological conditions. At times it can better define the clinical manifestations of a disease process than anatomical imaging alone. We present a clinical case whereby cerebral SPECT imaging helped define the physiological reason for intermittent aphasia in a patient with tumor active multiple sclerotic white matter plaques. Cerebral SPECT studies were performed during a period of aphasia and when the patient had recovered. We utilised subtraction analyses and image fusion techniques to better define the changes seen on SPECT. We discuss the neuroanatomical relationship of aphasia and the automatic fusion technique that allows accurate co-registration of the MRI and SPECT data. Copyright (2003) The Australian and New Zealand Society of Nuclear Medicine Inc

  3. [Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain].

    Science.gov (United States)

    Fiedler, E; Platsch, G; Schwarz, A; Schmiedehausen, K; Tomandl, B; Huk, W; Rupprecht, Th; Rahn, N; Kuwert, T

    2003-10-01

    Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated the performance of, and time needed for, fusing MRI and SPECT images using a semiautomated dedicated software. PATIENTS, MATERIAL AND METHOD: In 32 patients regional cerebral blood flow was measured using (99m)Tc ethylcystein dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata. Twelve of the MRI data sets were acquired using a 3D T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, the time for manual realignment after an automated but insufficient fusion. The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in 80% manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged in less than 15 min with the corresponding MRI data, which seems acceptable for clinical routine use.

  4. Are Fusion Transcripts in Relapsed/Metastatic Head and Neck Cancer Patients Predictive of Response to Anti-EGFR Therapies?

    Directory of Open Access Journals (Sweden)

    Paolo Bossi

    2017-01-01

    Full Text Available Prediction of benefit from chemotherapy combined with the anti-epidermal growth factor receptor (EGFR) antibody cetuximab remains an unsolved question in head and neck squamous cell carcinoma (HNSCC). In a selected series of 14 long progression-free survival (PFS) and 26 short PFS patients, by whole-gene and microRNA expression analysis, we developed a model potentially predictive of cetuximab sensitivity. To better decipher the "omics" profile of our patients, we detected transcript fusions by RNA-seq through a Pan-Cancer panel targeting 1385 cancer genes. Twenty-seven different fusion transcripts, involving mRNA and long noncoding RNA (lncRNA), were identified. The majority of fusions (81%) were intrachromosomal, and 24 patients (60%) harbor at least one of them. The presence/absence of fusions and the presence of more than one fusion were not related to outcome, while the lncRNA-containing fusions were enriched in long-PFS patients (P=0.0027). The CD274-PDCD1LG2 fusion was present in 7/14 short-PFS patients harboring fusions and was absent in long-PFS patients (P=0.0188). Among the short-PFS patients, those harboring this fusion had the worst outcome (P=0.0172) and increased K-RAS activation (P=0.00147). The associations between HNSCC patient outcome following cetuximab treatment and the lncRNA-containing fusions or the CD274-PDCD1LG2 fusion deserve validation in prospective clinical trials.

  5. PET-CT imaging fusion in the assessment of head and neck carcinoma

    International Nuclear Information System (INIS)

    Santos, Denise Takehana dos; Chojniak, Rubens; Lima, Eduardo Nobrega Pereira; Cavalcanti, Marcelo Gusmao Paraiso

    2006-01-01

    Objective: The authors have established a methodological approach to evaluate head and neck squamous cell carcinoma, aiming at identifying and distinguishing high metabolic activity inside the lesion by combining, in a single examination, functional, metabolic and morphological data simultaneously acquired by means of a non-dedicated positron emission tomography (PET)-computed tomography (CT) device. Materials and Methods: The study population included 17 patients with head and neck squamous cell carcinoma submitted to non-dedicated (18)F-FDG-PET imaging at the Department of Diagnostic Imaging of Hospital do Cancer, Sao Paulo, SP, Brazil. CT and (18)F-FDG-PET images were simultaneously acquired in a non-dedicated device. The original data were transferred to an independent workstation by means of the Entegra 2 NT software to generate the PET-CT imaging fusion. Results: The findings were defined as positive in the presence of a well-defined focal area of increased radiopharmaceutical uptake in regions not related to the normal biodistribution of the tracer. Conclusion: The fusion of simultaneously acquired images in a single examination ((18)F-FDG-PET and CT) has allowed the topographic-metabolic mapping of the lesion as well as the localization of high metabolic activity areas inside the tumor, indicating recurrence or metastasis and widening the array of alternatives for radiotherapy or surgical planning. (author)

  6. An efficient multiple exposure image fusion in JPEG domain

    Science.gov (United States)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time and aperture for low-light image capture results in noise amplification, motion blur and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposed images, image fusion, artifact removal and saturation detection. The algorithm needs no more than a single JPEG macroblock to be kept in memory, making it feasible to implement as part of a digital camera's hardware image-processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
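
The paper fuses in the JPEG (DCT) domain; as a simplified pixel-domain analogue of weighting each exposure before fusion, a well-exposedness weighting can be sketched as follows (the Gaussian weight and its parameters are assumptions, not the paper's sigmoidal booster):

```python
import numpy as np

def well_exposedness(img, mid=0.5, sigma=0.2):
    # Weight pixels by how close they are to mid-gray; over- and
    # under-exposed pixels receive small weights.
    return np.exp(-((img - mid) ** 2) / (2.0 * sigma ** 2))

def fuse_exposures(images):
    # Per-pixel weighted average of the exposure stack (intensities in [0, 1]).
    w = np.stack([well_exposedness(im) for im in images])
    w /= w.sum(axis=0) + 1e-12
    return (w * np.stack(images)).sum(axis=0)

dark = np.full((4, 4), 0.05)   # short exposure, underexposed
good = np.full((4, 4), 0.5)    # well-exposed frame
fused = fuse_exposures([dark, good])
```

The fused result is pulled toward the well-exposed frame rather than the plain average (which would be 0.275 here), mirroring the idea of favoring high-SNR pixels.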

  7. Automatically Identifying Fusion Events between GLUT4 Storage Vesicles and the Plasma Membrane in TIRF Microscopy Image Sequences

    Directory of Open Access Journals (Sweden)

    Jian Wu

    2015-01-01

    Full Text Available Quantitative analysis of the dynamic behavior of membrane-bound secretory vesicles has proven to be important in biological research. This paper proposes a novel approach to automatically identify the elusive fusion events between VAMP2-pHluorin labeled GLUT4 storage vesicles (GSVs) and the plasma membrane. The initiation of fusion events is detected by modified forward subtraction of consecutive frames in the TIRFM image sequence. Spatially connected pixels in the difference images brighter than a specified adaptive threshold are grouped into a distinct fusion spot. The vesicles are located at the intensity-weighted centroid of their fusion spots. To reveal the true in vivo nature of a fusion event, 2D Gaussian fitting of the fusion spot is used to derive the intensity-weighted centroid and the spot size during the fusion process. The fusion event and its termination can be determined according to the change of spot size. The method is evaluated on real experimental data with ground truth annotated by expert cell biologists. The evaluation results show that it can achieve relatively high accuracy, comparing favorably to manual analysis, yet at a small fraction of the time.
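
The forward-subtraction and intensity-weighted-centroid steps can be sketched as follows; this is a minimal version with a fixed threshold standing in for the adaptive one, and without the 2D Gaussian fitting:

```python
import numpy as np

def fusion_spot_centroid(prev_frame, curr_frame, thresh):
    # Forward subtraction of consecutive frames; pixels that brighten by
    # more than `thresh` form the candidate fusion spot, which is located
    # at its intensity-weighted centroid.
    diff = np.clip(curr_frame.astype(float) - prev_frame, 0.0, None)
    mask = diff > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    w = diff[mask]
    return (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()

prev = np.zeros((16, 16))
curr = np.zeros((16, 16))
curr[4:7, 7:10] = 50.0     # a symmetric bright spot appears...
curr[5, 8] = 100.0         # ...peaking at row 5, column 8
cy, cx = fusion_spot_centroid(prev, curr, thresh=10.0)
```

Tracking the spot size over subsequent frames would then signal the event's termination, as the abstract describes.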

  8. The assessment of multi-sensor image fusion using wavelet transforms for mapping the Brazilian Savanna

    NARCIS (Netherlands)

    Weimar Acerbi, F.; Clevers, J.G.P.W.; Schaepman, M.E.

    2006-01-01

    Multi-sensor image fusion using the wavelet approach provides a conceptual framework for the improvement of the spatial resolution with minimal distortion of the spectral content of the source image. This paper assesses whether images with a large ratio of spatial resolution can be fused, and
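
A common wavelet fusion rule for this kind of sharpening keeps the approximation band of the (upsampled) multispectral image and injects the detail bands of the panchromatic image. It can be sketched with a hand-rolled one-level Haar transform; the study's actual wavelet choice and fusion rule may differ:

```python
import numpy as np

def haar2(x):
    # One-level 2D Haar decomposition of an even-sized array.
    a = (x[:, ::2] + x[:, 1::2]) / 2.0
    d = (x[:, ::2] - x[:, 1::2]) / 2.0
    ll = (a[::2] + a[1::2]) / 2.0
    lh = (a[::2] - a[1::2]) / 2.0
    hl = (d[::2] + d[1::2]) / 2.0
    hh = (d[::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    # Exact inverse of haar2 (perfect reconstruction).
    a = np.empty((ll.shape[0] * 2, ll.shape[1]), float)
    d = np.empty_like(a)
    a[::2], a[1::2] = ll + lh, ll - lh
    d[::2], d[1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0], a.shape[1] * 2), float)
    x[:, ::2], x[:, 1::2] = a + d, a - d
    return x

def wavelet_fuse(ms_up, pan):
    # Keep the spectral (approximation) band of the multispectral image,
    # inject the spatial detail bands of the panchromatic image.
    ll_ms, _, _, _ = haar2(ms_up)
    _, lh_p, hl_p, hh_p = haar2(pan)
    return ihaar2(ll_ms, lh_p, hl_p, hh_p)

pan = np.arange(64, dtype=float).reshape(8, 8)   # high-res detail source
ms_up = np.full((8, 8), pan.mean())              # upsampled low-res band
fused = wavelet_fuse(ms_up, pan)
```

Because the approximation band comes from the multispectral image, the fused image preserves its mean (spectral) content while gaining the panchromatic detail, which is the minimal-distortion property the abstract refers to.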

  9. Development of a robust MRI fiducial system for automated fusion of MR-US abdominal images.

    Science.gov (United States)

    Favazza, Christopher P; Gorny, Krzysztof R; Callstrom, Matthew R; Kurup, Anil N; Washburn, Michael; Trester, Pamela S; Fowler, Charles L; Hangiandreou, Nicholas J

    2018-05-21

    We present the development of a two-component magnetic resonance (MR) fiducial system, that is, a fiducial marker device combined with an auto-segmentation algorithm, designed to be paired with existing ultrasound probe tracking and image fusion technology to automatically fuse MR and ultrasound (US) images. The fiducial device consisted of four ~6.4 mL cylindrical wells filled with 1 g/L copper sulfate solution. The algorithm was designed to automatically segment the device in clinical abdominal MR images. The algorithm's detection rate and repeatability were investigated through a phantom study and in human volunteers. The detection rate was 100% in all phantom and human images. The center-of-mass of the fiducial device was robustly identified with maximum variations of 2.9 mm in position and 0.9° in angular orientation. In volunteer images, average differences between algorithm-measured inter-marker spacings and actual separation distances were 0.53 ± 0.36 mm. "Proof-of-concept" automatic MR-US fusions were conducted with sets of images from both a phantom and volunteer using a commercial prototype system, which was built based on the above findings. Image fusion accuracy was measured to be within 5 mm for breath-hold scanning. These results demonstrate the capability of this approach to automatically fuse US and MR images acquired across a wide range of clinical abdominal pulse sequences. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  10. Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain

    Science.gov (United States)

    Huang, Yan; Bi, Duyan; Wu, Dongpeng

    2018-01-01

    Fusing infrared and visible images typically involves many manually tuned parameters, and artifacts can deprive the fused image of detail. To overcome this, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The source images are decomposed by the NSST into high-frequency and low-frequency bands. After analyzing the characteristics of these bands, the high-frequency bands are fused under a gradient constraint, so that the fused image retains more detail, while the low-frequency bands are fused under a saliency constraint, so that targets become more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method preserves details and highlights targets more effectively than other state-of-the-art methods. PMID:29641505
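    The band-splitting and fusion rules described above can be sketched in simplified form. The code below is a minimal illustration, not the paper's method: a box-filter low-pass stands in for the NSST decomposition, a max-magnitude rule approximates the gradient constraint on the high band, and a local-contrast weight stands in for the saliency constraint (the Nash-equilibrium coefficient update is omitted).

```python
import numpy as np

def decompose(img, ksize=5):
    # Crude low/high band split as a stand-in for the NSST:
    # a box-filter low-pass, with the residual as the high band.
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    low = np.zeros(img.shape, dtype=float)
    for dy in range(ksize):
        for dx in range(ksize):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= ksize * ksize
    return low, img - low

def fuse(ir, vis):
    low_a, high_a = decompose(ir)
    low_b, high_b = decompose(vis)
    # High bands: keep the coefficient with the larger magnitude
    # (a simple proxy for the gradient constraint).
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    # Low bands: weight by a saliency proxy (absolute deviation from
    # the band's mean, i.e. local contrast relative to the average).
    sal_a = np.abs(low_a - low_a.mean()) + 1e-9
    sal_b = np.abs(low_b - low_b.mean()) + 1e-9
    low = (sal_a * low_a + sal_b * low_b) / (sal_a + sal_b)
    return low + high
```

    Fusing an image with itself returns the image unchanged, which is a quick sanity check on any fusion rule of this form.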

  11. Change Detection of High-Resolution Remote Sensing Images Based on Adaptive Fusion of Multiple Features

    Science.gov (United States)

    Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.

    2018-04-01

    Traditional change detection algorithms depend mainly on the spectral information of image patches and fail to effectively mine and fuse the features of multiple images. Borrowing ideas from object-oriented analysis, this article proposes a change detection algorithm for remote sensing images based on the fusion of multiple features. First, image objects are obtained by multi-scale segmentation; then a color histogram and a linear (edge) gradient histogram are computed for each object. The Earth Mover's Distance (EMD) operator is used to measure the color distance and the edge-line feature distance between corresponding objects from different periods, and an adaptive weighting method combines the two distances to construct the object heterogeneity. Finally, curvature histogram analysis of the image patches yields the change detection result. The experimental results show that the method can fully fuse color and edge-line features, thus improving the accuracy of change detection.
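    The per-object distance computation can be sketched as follows. For normalized 1-D histograms, the EMD reduces to the L1 distance between the cumulative distributions; the adaptive weighting shown here (weights proportional to each cue's own distance) is an illustrative assumption, since the abstract does not spell out the exact scheme.

```python
import numpy as np

def emd_1d(hist_a, hist_b):
    # For normalized 1-D histograms, the Earth Mover's Distance reduces
    # to the L1 distance between cumulative distribution functions.
    a = np.asarray(hist_a, dtype=float)
    b = np.asarray(hist_b, dtype=float)
    a = a / a.sum()
    b = b / b.sum()
    return float(np.abs(np.cumsum(a) - np.cumsum(b)).sum())

def heterogeneity(color_a, color_b, edge_a, edge_b):
    # Combine the color and edge-line distances with adaptive weights
    # (here: proportional to each distance -- an assumed scheme).
    d_color = emd_1d(color_a, color_b)
    d_edge = emd_1d(edge_a, edge_b)
    total = d_color + d_edge
    if total == 0.0:
        return 0.0
    w_color = d_color / total
    return w_color * d_color + (1.0 - w_color) * d_edge
```

    Moving all the mass one bin further costs proportionally more: `emd_1d([1,0,0], [0,0,1])` is twice `emd_1d([1,0,0], [0,1,0])`, which is the behavior that makes EMD preferable to bin-wise distances for histogram comparison.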

  12. Improving lung cancer prognosis assessment by incorporating synthetic minority oversampling technique and score fusion method

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Shiju [School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China and School of Electrical and Computer Engineering, University of Oklahoma, Norman, Oklahoma 73019 (United States); Qian, Wei [Department of Electrical and Computer Engineering, University of Texas, El Paso, Texas 79968 and Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang 110819 (China); Guan, Yubao [Department of Radiology, Guangzhou Medical University, Guangzhou 510182 (China); Zheng, Bin, E-mail: Bin.Zheng-1@ou.edu [School of Electrical and Computer Engineering, University of Oklahoma, Norman, Oklahoma 73019 (United States)

    2016-06-15

    Purpose: This study aims to investigate the potential to improve lung cancer recurrence risk prediction performance for stage I NSCLC patients by integrating oversampling, feature selection, and score fusion techniques and to develop an optimal prediction model. Methods: A dataset involving 94 early stage lung cancer patients was retrospectively assembled, which included CT images, nine clinical and biological (CB) markers, and the outcome of 3-yr disease-free survival (DFS) after surgery. Among the 94 patients, 74 remained DFS and 20 had cancer recurrence. Applying a computer-aided detection scheme, tumors were segmented from the CT images and 35 quantitative image (QI) features were initially computed. Two normalized Gaussian radial basis function network (RBFN) based classifiers were built based on QI features and CB markers separately. To improve prediction performance, the authors applied a synthetic minority oversampling technique (SMOTE) and a BestFirst based feature selection method to optimize the classifiers and also tested fusion methods to combine QI and CB based prediction results. Results: Using a leave-one-case-out cross-validation (K-fold cross-validation) method, the computed areas under a receiver operating characteristic curve (AUCs) were 0.716 ± 0.071 and 0.642 ± 0.061 when using the QI and CB based classifiers, respectively. By fusion of the scores generated by the two classifiers, AUC significantly increased to 0.859 ± 0.052 (p < 0.05) with an overall prediction accuracy of 89.4%. Conclusions: This study demonstrated the feasibility of improving prediction performance by integrating SMOTE, feature selection, and score fusion techniques. Combining QI features and CB markers and performing SMOTE prior to feature selection in classifier training enabled the RBFN based classifiers to yield improved prediction accuracy.
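    The two key ingredients, SMOTE oversampling and score fusion, can be sketched briefly. This is a minimal illustration under stated assumptions: production SMOTE implementations (e.g., in imbalanced-learn) add refinements, and the abstract does not specify the exact fusion rule, so a weighted average of the two classifier scores is used here.

```python
import numpy as np

def smote(minority, n_new, k=5, rng=None):
    # Minimal SMOTE sketch: each synthetic sample is a random point on
    # the segment between a minority-class sample and one of its k
    # nearest minority-class neighbours.
    rng = np.random.default_rng(rng)
    X = np.asarray(minority, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        out.append(X[i] + rng.random() * (X[j] - X[i]))
    return np.array(out)

def fuse_scores(score_qi, score_cb, w=0.5):
    # Assumed fusion rule: weighted average of the QI-based and
    # CB-based classifier scores.
    return w * score_qi + (1.0 - w) * score_cb
```

    Because each synthetic point is a convex combination of two minority samples, the oversampled set stays inside the convex hull of the original minority class, which is what keeps SMOTE from inventing implausible cases.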

  13. Improving lung cancer prognosis assessment by incorporating synthetic minority oversampling technique and score fusion method

    International Nuclear Information System (INIS)

    Yan, Shiju; Qian, Wei; Guan, Yubao; Zheng, Bin

    2016-01-01

    Purpose: This study aims to investigate the potential to improve lung cancer recurrence risk prediction performance for stage I NSCLC patients by integrating oversampling, feature selection, and score fusion techniques and to develop an optimal prediction model. Methods: A dataset involving 94 early stage lung cancer patients was retrospectively assembled, which included CT images, nine clinical and biological (CB) markers, and the outcome of 3-yr disease-free survival (DFS) after surgery. Among the 94 patients, 74 remained DFS and 20 had cancer recurrence. Applying a computer-aided detection scheme, tumors were segmented from the CT images and 35 quantitative image (QI) features were initially computed. Two normalized Gaussian radial basis function network (RBFN) based classifiers were built based on QI features and CB markers separately. To improve prediction performance, the authors applied a synthetic minority oversampling technique (SMOTE) and a BestFirst based feature selection method to optimize the classifiers and also tested fusion methods to combine QI and CB based prediction results. Results: Using a leave-one-case-out cross-validation (K-fold cross-validation) method, the computed areas under a receiver operating characteristic curve (AUCs) were 0.716 ± 0.071 and 0.642 ± 0.061 when using the QI and CB based classifiers, respectively. By fusion of the scores generated by the two classifiers, AUC significantly increased to 0.859 ± 0.052 (p < 0.05) with an overall prediction accuracy of 89.4%. Conclusions: This study demonstrated the feasibility of improving prediction performance by integrating SMOTE, feature selection, and score fusion techniques. Combining QI features and CB markers and performing SMOTE prior to feature selection in classifier training enabled the RBFN based classifiers to yield improved prediction accuracy.

  14. Fusion of different modalities of imaging the wrist

    International Nuclear Information System (INIS)

    Verdenet, J.; Garbuio, P.; Runge, M.; Cardot, J.C.

    1997-01-01

    Standard radiographic images are not always able to reveal a fracture of one of the wrist bones. An earlier study showed that 40% of patients presenting with a suspected fracture and a normal radiograph had a fracture confirmed and quantified by MRI and scintigraphy. Since scintigraphy alone does not allow precise localization, we developed a code to fuse the radiographic and scintigraphic images fully automatically, using no external markers. The code runs on a PC in the Matlab environment. Starting from histogram processing, the contours are extracted on the interpolated radiographic and scintigraphic images. Matching has three degrees of freedom: one rotation and two translations (along the x and y axes). The internal axis of the forearm was chosen for the rotation and translation, and the forearm thickness, identical in each modality, allows the images to be matched properly. We obtain an anatomic image on which the contours and the hyper-fixating zones of the scintigraphy are superimposed. In a series of 100 examinations we observed 38 fractures; the distinction between a fracture of the scaphoid and of another wrist bone was confirmed in 93% of cases

  15. Predicted mineral melt formation by BCURA Coal Sample Bank coals: Variation with atmosphere and comparison with reported ash fusion test data

    Energy Technology Data Exchange (ETDEWEB)

    D. Thompson [University of Sheffield (United Kingdom). Department of Engineering Materials

    2010-08-15

    The thermodynamic equilibrium phases formed under ash fusion test and excess air combustion conditions by 30 coals of the BCURA Coal Sample Bank have been predicted from 1100 to 2000 K using the MTDATA computational suite and the MTOX database for silicate melts and associated phases. Predicted speciation and degree of melting varied widely from coal to coal. Melting under an ash fusion test atmosphere of CO2:H2 1:1 was essentially the same as under excess air combustion conditions for some coals, and markedly different for others. For those ashes which flowed below the fusion test maximum temperature of 1773 K, flow coincided with 75-100% melting in most cases. Flow at low predicted melt formation (46%) for one coal cannot be attributed to any one cause. The difference between predicted fusion behaviours under excess air and fusion test atmospheres becomes greater with decreasing silica and alumina, and increasing iron, calcium and alkali metal content in the coal mineral. 22 refs., 7 figs., 3 tabs.

  16. Clinical study of the image fusion between CT and FDG-PET in the head and neck region

    International Nuclear Information System (INIS)

    Shozushima, Masanori; Moriguchi, Hitoshi; Shoji, Satoru; Sakamaki, Kimio; Ishikawa, Yoshihito; Kudo, Keigo; Satoh, Masanobu

    1999-01-01

    Image fusion using PET and CT images from the head and neck region was performed with the use of external markers on 7 patients with squamous cell carcinoma. The purpose of this study was to examine the resulting error and the clinical usefulness of image fusion. Patients had primary lesions of the tongue, the maxillary gingiva or the maxillary sinus. All patients underwent PET with FDG and CT to detect tumor sites. Of these 7 patients, diagnostic imaging and clinical observation found 6 cases of regional lymph node metastasis of the neck. To ensure the anatomical detail of the PET images, small radioactive markers were placed on the philtrum and below both earlobes. The PET image and CT image were then overlapped on a computer. The image fusion of PET and CT was successfully performed on all patients. The superposition error of this method was examined between the PET and CT images: the accuracy of fit, measured as the mean distance between the PET and CT images, was in the range of 2-5 mm. The PET-CT superimposed images improved the localization of tumor FDG uptake and localized FDG uptake on the palatine tonsils. The marker system described here for the alignment of PET and CT images can be used on a routine basis without the invasive fixation of external markers, and can also improve the management and follow up of patients with head and neck carcinoma. (author)
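    Marker-based alignment of this kind reduces to a least-squares rigid registration between paired landmark coordinates. A minimal sketch using the Kabsch algorithm (a standard solution, not necessarily the authors' exact procedure):

```python
import numpy as np

def rigid_register(src, dst):
    # Least-squares rigid alignment (Kabsch): find rotation R and
    # translation t minimizing ||R @ p + t - q|| over landmark pairs.
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

    With three or more non-collinear markers this recovers the rotation and translation exactly in the noise-free case; with noisy markers it gives the least-squares optimum, whose residual corresponds to the 2-5 mm fit accuracy reported above.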

  17. Combined FDG PET/CT imaging for restaging of colorectal cancer patients: impact of image fusion on staging accuracy

    International Nuclear Information System (INIS)

    Strunk, H.; Jaeger, U.; Flacke, S.; Hortling, N.; Bucerius, J.; Joe, A.; Reinhardt, M.; Palmedo, H.

    2005-01-01

    Purpose: To evaluate the diagnostic impact of positron emission tomography (PET) with fluorine-18-labeled deoxy-D-glucose (FDG) combined with non-contrast computed tomography (CT) as a PET-CT modality in restaging colorectal cancer patients. Material and methods: In this retrospective study, 29 consecutive patients with histologically proven colorectal cancer (17 female, 12 male, aged 51-76 years) underwent whole body scans in one session on a dual modality PET-CT system (Siemens Biograph) 90 min after i.v. administration of 370 MBq of 18F-FDG. The CT imaging was performed with 40 mAs, 130 kV, slice thickness 5 mm and without i.v. contrast administration. PET and CT images were reconstructed with a slice thickness of 5 mm in coronal, sagittal and transverse planes. During a first step of analysis, PET and CT images were scored blinded and independently by a group of two nuclear medicine physicians and a group of two radiologists, respectively, using a five-point scale. The second step of data analysis consisted of a consensus reading by both groups. During the consensus reading, first a virtual (meaning mental) fusion of PET and CT images and afterwards the 'real' fusion (meaning coregistered) PET-CT images were also scored with the same scale. The imaging results were compared with histopathology findings and the course of disease during further follow-up. Results: The total number of malignant lesions detected with the combined PET/CT was 86; for FDG-PET alone it was n=68, and for CT alone n=65. Comparing PET-CT and PET, concordance was found in 81 of 104 lesions. Discrepancies predominantly occurred in the lung, where PET alone often showed true positive results in lymph nodes and soft tissue masses, where CT often was false negative. Comparing mental fusion and 'real' coregistered images, concordance was found in 94 of 104 lesions. In 13 lesions or, respectively, in 7 of 29 patients, relevant information was gathered using fused images

  18. Prediction of new tightly bound-states of H2+(d2+) and ''cold fusion''-experiments

    International Nuclear Information System (INIS)

    Barut, A.O.

    1989-06-01

    It is suggested that in the ''cold-fusion'' experiments of Fleischmann and Pons new tightly-bound molecular states of D2+ are formed with binding energies predicted to be of the order of 50 keV, accounting for the heat released without appreciable fusion. Other tests of the suggested mechanism are proposed and the derivation of the new energy levels is given. (author). 3 refs

  19. Brain Atlas Fusion from High-Thickness Diagnostic Magnetic Resonance Images by Learning-Based Super-Resolution.

    Science.gov (United States)

    Zhang, Jinpeng; Zhang, Lichi; Xiang, Lei; Shao, Yeqin; Wu, Guorong; Zhou, Xiaodong; Shen, Dinggang; Wang, Qian

    2017-03-01

    It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing the atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not been addressed yet. In this paper, we intend to fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent in clinical routines. The main idea of our work is to extend conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through a random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments have shown that the proposed framework can effectively solve the problem of atlas fusion from low-quality brain MR images.

  20. Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain

    International Nuclear Information System (INIS)

    Fiedler, E.; Platsch, G.; Schwarz, A.; Schmiedehausen, K.; Kuwert, T.; Tomandl, B.; Huk, W.; Rupprecht, Th.; Rahn, N.

    2003-01-01

    Aim: Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. We therefore evaluated the performance of, and time needed for, fusing MRI and SPECT images using semiautomated dedicated software. Patients, material and method: In 32 patients, regional cerebral blood flow was measured using 99mTc ethylcysteine dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata scanner. Twelve of the MRI data sets were acquired using a 3D T1-weighted MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software, and the fusion results were classified. We measured the time needed for the automated fusion procedure and, where the automated fusion was insufficient, the time needed for manual realignment. Results: The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach; the remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). Conclusion: The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for clinical routine use. (orig.) [de
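    Entropy-minimizing registration criteria of the kind used by such tools are closely related to maximizing the mutual information between the two images. A minimal sketch of the MI computation from a joint grey-level histogram (the optimizer over rigid transforms, which repeatedly evaluates this score, is omitted):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    # Mutual information from the joint grey-level histogram:
    # MI = sum over (a, b) of p(a,b) * log( p(a,b) / (p(a) * p(b)) ).
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = joint / joint.sum()
    pa = p.sum(axis=1, keepdims=True)   # marginal of image A
    pb = p.sum(axis=0, keepdims=True)   # marginal of image B
    nz = p > 0                          # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())
```

    MI is highest when the two images are in register (grey levels predict each other well) and drops as they are misaligned, which is why a registration engine can simply search the rigid-transform parameters that maximize this score.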

  1. Remote Sensing Image Fusion at the Segment Level Using a Spatially-Weighted Approach: Applications for Land Cover Spectral Analysis and Mapping

    Directory of Open Access Journals (Sweden)

    Brian Johnson

    2015-01-01

    Full Text Available Segment-level image fusion involves segmenting a higher spatial resolution (HSR) image to derive boundaries of land cover objects, and then extracting additional descriptors of image segments (polygons) from a lower spatial resolution (LSR) image. In past research, an unweighted segment-level fusion (USF) approach, which extracts information from a resampled LSR image, resulted in more accurate land cover classification than the use of HSR imagery alone. However, simply fusing the LSR image with segment polygons may lead to significant errors due to the high level of noise in pixels along the segment boundaries (i.e., pixels containing multiple land cover types). To mitigate this, a spatially-weighted segment-level fusion (SWSF) method was proposed for extracting descriptors (mean spectral values) of segments from LSR images. SWSF reduces the weights of LSR pixels located on or near segment boundaries to reduce errors in the fusion process. Compared to the USF approach, SWSF extracted more accurate spectral properties of land cover objects when the ratio of the LSR image resolution to the HSR image resolution was greater than 2:1, and SWSF was also shown to increase classification accuracy. SWSF can be used to fuse any type of imagery at the segment level since it is insensitive to spectral differences between the LSR and HSR images (e.g., different spectral ranges of the images or different image acquisition dates).
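    The core SWSF idea, down-weighting LSR pixels near segment boundaries when computing per-segment mean spectral values, can be sketched as follows. This is a simplified illustration: the 4-neighbour boundary test and the fixed down-weight are assumptions, not the paper's exact weighting function.

```python
import numpy as np

def segment_mean(lsr_band, labels, boundary_weight=0.2):
    # Weighted mean spectral value per segment: pixels adjacent to a
    # different label (likely mixed pixels) are down-weighted.
    w = np.ones(lsr_band.shape, dtype=float)
    boundary = np.zeros(labels.shape, dtype=bool)
    hdiff = labels[:, 1:] != labels[:, :-1]   # horizontal label changes
    vdiff = labels[1:, :] != labels[:-1, :]   # vertical label changes
    boundary[:, 1:] |= hdiff
    boundary[:, :-1] |= hdiff
    boundary[1:, :] |= vdiff
    boundary[:-1, :] |= vdiff
    w[boundary] = boundary_weight
    means = {}
    for seg in np.unique(labels):
        m = labels == seg
        means[seg] = float((lsr_band[m] * w[m]).sum() / w[m].sum())
    return means
```

    On a toy two-segment image where the boundary column of each segment is contaminated by the neighbouring cover type, the weighted mean stays much closer to each segment's interior value than the plain mean would.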

  2. Ring Fusion of Fisheye Images Based on Corner Detection Algorithm for Around View Monitoring System of Intelligent Driving

    Directory of Open Access Journals (Sweden)

    Jianhui Zhao

    2018-01-01

    Full Text Available In order to improve the visual effect of the around view monitor (AVM), we propose a novel ring fusion method to reduce the brightness difference among fisheye images and achieve a smooth transition around the stitching seam. Firstly, an integrated corner detection algorithm is proposed to automatically detect corner points for image registration. Then, we use equalization processing to reduce the brightness difference among images, and we match the colors of the images according to the ring fusion method. Finally, we use distance weights to blend the images around the stitching seam. Based on this algorithm, we have built a Matlab toolbox for image blending. 100% of the required corners are detected accurately and fully automatically, and the transition around the stitching seam is very smooth, with no obvious stitching trace.
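    The final blending step can be sketched with a simple linear distance weight around a vertical seam. This is an illustrative reduction of the idea, assuming single-channel images and a straight seam:

```python
import numpy as np

def distance_blend(img_a, img_b, seam_col, radius=8):
    # Blend two overlapping images across a vertical seam: the weight
    # of img_a falls off linearly with distance from the seam, from 1
    # at (seam_col - radius) to 0 at (seam_col + radius).
    w = img_a.shape[1]
    cols = np.arange(w, dtype=float)
    alpha = np.clip((seam_col + radius - cols) / (2.0 * radius), 0.0, 1.0)
    return alpha * img_a + (1.0 - alpha) * img_b
```

    Away from the seam each image passes through unchanged; inside the transition band the output moves linearly from one image to the other, which is what removes the visible stitching trace.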

  3. A label field fusion bayesian model and its penalized maximum rand estimator for image segmentation.

    Science.gov (United States)

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each of the segmentation results to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of an interesting penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.
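    The Gibbs energy built from pairwise label constraints can be sketched as follows: for sampled pixel pairs, a candidate segmentation is penalized whenever its same-label/different-label decision disagrees with an input segmentation, so lower energy corresponds to higher Rand agreement. This is a Monte Carlo simplification for illustration, not the paper's estimator:

```python
import numpy as np

def fusion_energy(candidate, segmentations, n_pairs=2000, rng=0):
    # Gibbs-style energy from pairwise label constraints: sample pixel
    # pairs and count, per input segmentation, the fraction of pairs
    # where the candidate's same/different-label decision disagrees.
    rng = np.random.default_rng(rng)
    cand = candidate.ravel()
    n = cand.size
    i = rng.integers(n, size=n_pairs)
    j = rng.integers(n, size=n_pairs)
    energy = 0.0
    for s in segmentations:
        s = s.ravel()
        same_s = s[i] == s[j]
        same_c = cand[i] == cand[j]
        energy += float(np.mean(same_s != same_c))  # violated constraints
    return energy / len(segmentations)
```

    A fusion procedure would then search over candidate label fields (e.g., by iterated conditional modes or Gibbs sampling) for the one minimizing this energy plus a prior term.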

  4. Computer-based image analysis in radiological diagnostics and image-guided therapy: 3D-Reconstruction, contrast medium dynamics, surface analysis, radiation therapy and multi-modal image fusion

    International Nuclear Information System (INIS)

    Beier, J.

    2001-01-01

    This book deals with substantial subjects of postprocessing and analysis of radiological image data, a particular emphasis was put on pulmonary themes. For a multitude of purposes the developed methods and procedures can directly be transferred to other non-pulmonary applications. The work presented here is structured in 14 chapters, each describing a selected complex of research. The chapter order reflects the sequence of the processing steps starting from artefact reduction, segmentation, visualization, analysis, therapy planning and image fusion up to multimedia archiving. In particular, this includes virtual endoscopy with three different scene viewers (Chap. 6), visualizations of the lung disease bronchiectasis (Chap. 7), surface structure analysis of pulmonary tumors (Chap. 8), quantification of contrast medium dynamics from temporal 2D and 3D image sequences (Chap. 9) as well as multimodality image fusion of arbitrary tomographical data using several visualization techniques (Chap. 12). Thus, the software systems presented cover the majority of image processing applications necessary in radiology and were entirely developed, implemented and validated in the clinical routine of a university medical school. (orig.) [de

  5. Moving target detection based on temporal-spatial information fusion for infrared image sequences

    Science.gov (United States)

    Toing, Wu-qin; Xiong, Jin-yu; Zeng, An-jun; Wu, Xiao-ping; Xu, Hao-peng

    2009-07-01

    Moving target detection and localization is one of the most fundamental tasks in visual surveillance. In this paper, after analyzing the advantages and disadvantages of traditional approaches to moving target detection, a novel approach based on temporal-spatial information fusion is proposed. The proposed method combines the spatial features within a single frame and the temporal properties across multiple frames of an image sequence containing a moving target. First, the method uses spatial image segmentation to separate targets from the background and uses the local temporal variance to extract targets and remove trail artifacts. Second, the logical "and" operator is used to fuse the temporal and spatial information. Finally, morphological filtering and blob analysis are applied to the fused image sequence to acquire the exact moving target. The algorithm not only requires minimal computation and memory but also adapts quickly to changes in background and environment. Compared with other methods, such as KDE and the mixture of K Gaussians, the simulation results show that the proposed method is more effective and more adaptive for moving target detection, especially in infrared image sequences with complex illumination change, noise change, and so on.
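    The temporal-spatial fusion by a logical "and" can be sketched in a few lines. The thresholded single-frame segmentation and the across-frame variance cue below are crude stand-ins for the paper's spatial segmentation and local temporal variance:

```python
import numpy as np

def detect_moving(frames, spatial_thresh=0.5, var_thresh=0.01):
    # Fuse a spatial cue (per-frame intensity segmentation of the last
    # frame) with a temporal cue (pixel variance across frames) by a
    # logical AND: a pixel is a moving target only if it is both
    # segmented as foreground and changing over time.
    stack = np.asarray(frames, dtype=float)
    spatial = stack[-1] > spatial_thresh       # crude segmentation
    temporal = stack.var(axis=0) > var_thresh  # temporally varying pixels
    return spatial & temporal
```

    The AND fusion is what suppresses a bright but static region (spatial cue fires, temporal cue does not) while keeping a blob that is both bright and moving.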

  6. Image fusion between whole body FDG PET images and whole body MRI images using a full-automatic mutual information-based multimodality image registration software

    International Nuclear Information System (INIS)

    Uchida, Yoshitaka; Nakano, Yoshitada; Fujibuchi, Toshiou; Isobe, Tomoko; Kazama, Toshiki; Ito, Hisao

    2006-01-01

    We attempted image fusion between whole body PET and whole body MRI images of thirty patients using fully automatic mutual information (MI) -based multimodality image registration software, and evaluated the accuracy of this method and the impact of the coregistered imaging on diagnostic accuracy. For 25 of the 30 fused images in the body area, translation gaps were within 6 mm along all axes and rotation gaps were within 2 degrees around all axes. In the head and neck area, considerable gaps caused by differences in head inclination during imaging occurred in 16 patients; however, these gaps could be reduced by fusing this area separately. In 6 patients, diagnostic accuracy using the fused PET/MRI images was superior to that of PET images alone. This work shows that whole body FDG PET images and whole body MRI images can be fused automatically and accurately using MI-based multimodality image registration software, and that this technique can add useful information when evaluating FDG PET images. (author)

  7. [Image fusion of gated-SPECT and CT angiography in coronary artery disease. Importance of anatomic-functional correlation].

    Science.gov (United States)

    Nazarena Pizzi, M; Aguadé Bruix, S; Cuéllar Calabria, H; Aliaga, V; Candell Riera, J

    2010-01-01

    A 77-year-old patient was admitted for acute coronary syndrome without ST elevation. His risk was stratified using myocardial perfusion gated SPECT, and mild inferior ischemia was observed. Medical therapy was therefore optimized and the patient was discharged. Because he continued to have exertional dyspnea, a coronary CT angiography was performed, which revealed severe lesions in the proximal RCA. SPECT-CT fusion images correlated the myocardial perfusion defect with a posterior descending artery arising from the RCA, in a co-dominant coronary area. Subsequently, cardiac catheterization was indicated for his treatment. The current use of image fusion studies is limited to patients in whom it is difficult to attribute a perfusion defect to a specific coronary artery. In our patient, the fusion images helped to distinguish between the RCA and the circumflex artery as the culprit artery of the ischemia. Copyright © 2010 Elsevier España, S.L. y SEMNIM. All rights reserved.

  8. Evaluation of electrode position in deep brain stimulation by image fusion (MRI and CT)

    Energy Technology Data Exchange (ETDEWEB)

    Barnaure, I.; Lovblad, K.O.; Vargas, M.I. [Geneva University Hospital, Department of Neuroradiology, Geneva 14 (Switzerland); Pollak, P.; Horvath, J.; Boex, C.; Burkhard, P. [Geneva University Hospital, Department of Neurology, Geneva (Switzerland); Momjian, S. [Geneva University Hospital, Department of Neurosurgery, Geneva (Switzerland); Remuinan, J. [Geneva University Hospital, Department of Radiology, Geneva (Switzerland)

    2015-09-15

    Imaging has an essential role in the evaluation of correct positioning of electrodes implanted for deep brain stimulation (DBS). Although MRI offers superior anatomic visualization of target sites, there are safety concerns in patients with implanted material; imaging guidelines are inconsistent and vary. The fusion of postoperative CT with preoperative MRI images can be an alternative for the assessment of electrode positioning. The purpose of this study was to assess the accuracy of measurements realized on fused images (acquired without a stereotactic frame) using a manufacturer-provided software. Data from 23 Parkinson's disease patients who underwent bilateral electrode placement for subthalamic nucleus (STN) DBS were acquired. Preoperative high-resolution T2-weighted sequences at 3 T, and postoperative CT series were fused using a commercially available software. Electrode tip position was measured on the obtained images in three directions (in relation to the midline, the AC-PC line and an AC-PC line orthogonal, respectively) and assessed in relation to measures realized on postoperative 3D T1 images acquired at 1.5 T. Mean differences between measures carried out on fused images and on postoperative MRI lay between 0.17 and 0.97 mm. Fusion of CT and MRI images provides a safe and fast technique for postoperative assessment of electrode position in DBS. (orig.)

  9. Information fusion in signal and image processing major probabilistic and non-probabilistic numerical approaches

    CERN Document Server

    Bloch, Isabelle

    2010-01-01

    The area of information fusion has grown considerably during the last few years, leading to a rapid and impressive evolution. In such fast-moving times, it is important to take stock of the changes that have occurred. As such, this book offers an overview of the general principles and specificities of information fusion in signal and image processing, as well as covering the main numerical methods (probabilistic approaches, fuzzy sets and possibility theory and belief functions).

  10. The registration accuracy analysis of different CT-MRI imaging fusion method in brain tumor

    International Nuclear Information System (INIS)

    Lu Jie; Yin Yong; Shao Qian; Zhang Zicheng; Chen Jinhu; Chen Zhaoqiu

    2010-01-01

    Objective: To find an effective CT-MRI image fusion protocol for brain tumors by analyzing the registration accuracy of different methods. Methods: Simulation CT scans and MRI T1-weighted images of 10 brain tumor patients, obtained in the same position, were registered by the Tris-Axes landmark method, Tris-Axes landmark plus manual adjustment, mutual information, and mutual information plus manual adjustment. The clinical tumor volume (CTV) was contoured on both the CT and MRI images. The accuracy of image fusion was assessed by the mean distance of five bone markers (d1-5), the central position of the CTV (dCTV), and the percentage of CTV overlap (PCT-MRI) between the CT and MRI images. Differences between the methods were analyzed by the Friedman M non-parametric test. Results: The mean d1-5 for the Tris-Axes landmark, Tris-Axes landmark plus manual adjustment, mutual information, and mutual information plus manual adjustment methods was 0.28 ± 0.12 cm, 0.15 ± 0.02 cm, 0.25 ± 0.19 cm, and 0.10 ± 0.06 cm, respectively (M = 14.41, P = 0.002); the mean dCTV was 0.59 ± 0.28 cm, 0.60 ± 0.32 cm, 0.58 ± 0.39 cm, and 0.42 ± 0.30 cm (M = 9.72, P = 0.021); and the mean PCT-MRI was 0.69% ± 0.18%, 0.68% ± 0.16%, 0.66% ± 0.17%, and 0.74% ± 0.14% (M = 14.82, P = 0.002), respectively. Conclusions: Mutual information plus manual adjustment was the preferable registration method for brain tumor patients. (authors)

  11. A novel fusion imaging system for endoscopic ultrasound

    DEFF Research Database (Denmark)

    Gruionu, Lucian Gheorghe; Saftoiu, Adrian; Gruionu, Gabriel

    2016-01-01

    BACKGROUND AND OBJECTIVE: Navigation of a flexible endoscopic ultrasound (EUS) probe inside the gastrointestinal (GI) tract is problematic due to the small window size and complex anatomy. The goal of the present study was to test the feasibility of a novel fusion imaging (FI) system which uses...... time was 24.6 ± 6.6 min, while the time to reach the clinical target was 8.7 ± 4.2 min. CONCLUSIONS: The FI system is feasible for clinical use, and can reduce the learning curve for EUS procedures and improve navigation and targeting in difficult anatomic locations....

  12. Single photon emission computed tomography/spiral computed tomography fusion imaging for the diagnosis of bone metastasis in patients with known cancer

    International Nuclear Information System (INIS)

    Zhao, Zhen; Li, Lin; Li, Fanglan; Zhao, Lixia

    2010-01-01

    To evaluate single photon emission computed tomography (SPECT)/spiral computed tomography (CT) fusion imaging for the diagnosis of bone metastasis in patients with known cancer and to compare the diagnostic efficacy of SPECT/CT fusion imaging with that of SPECT alone and with SPECT + CT. One hundred forty-one bone lesions of 125 cancer patients (with nonspecific bone findings on bone scintigraphy) were investigated in the study. SPECT, CT, and SPECT/CT fusion images were acquired simultaneously. All images were interpreted independently by two experienced nuclear medicine physicians. In cases of discrepancy, consensus was obtained by a joint reading. The final diagnosis was based on biopsy proof and radiologic follow-up over at least 1 year. The final diagnosis revealed 63 malignant bone lesions and 78 benign lesions. The diagnostic sensitivity of SPECT, SPECT + CT, and SPECT/CT fusion imaging for malignant lesions was 82.5%, 93.7%, and 98.4%, respectively. Specificity was 66.7%, 80.8%, and 93.6%, respectively. Accuracy was 73.8%, 86.5%, and 95.7%, respectively. The specificity and accuracy of SPECT/CT fusion imaging for the diagnosis of malignant bone lesions were significantly higher than those of SPECT alone and of SPECT + CT (χ2 = 9.855, P = 0.002). The numbers of equivocal lesions were 37, 18, and 5 for SPECT, SPECT + CT, and SPECT/CT fusion imaging, respectively, and 29.7% (11/37), 27.8% (5/18), and 20.0% (1/5) of lesions were confirmed to be malignant by radiologic follow-up over at least 1 year. SPECT/spiral CT is particularly valuable for the diagnosis of bone metastasis in patients with known cancer by providing precise anatomic localization and detailed morphologic characteristics. (orig.)
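    The sensitivity, specificity, and accuracy figures above follow directly from a 2x2 confusion table. A minimal sketch; the exact true/false positive splits below are inferred from the reported percentages and lesion counts (63 malignant, 78 benign), not stated in the abstract:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Counts consistent with the SPECT/CT fusion-imaging figures reported above:
sens, spec, acc = diagnostic_metrics(tp=62, fn=1, tn=73, fp=5)
print(round(sens * 100, 1), round(spec * 100, 1), round(acc * 100, 1))
# -> 98.4 93.6 95.7
```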

  13. Pros and Cons of 3D Image Fusion in Endovascular Aortic Repair: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Goudeketting, Seline R; Heinen, Stefan G H; Ünlü, Çağdaş; van den Heuvel, Daniel A F; de Vries, Jean-Paul P M; van Strijen, Marco J; Sailer, Anna M

    2017-08-01

    To systematically review and meta-analyze the added value of 3-dimensional (3D) image fusion technology in endovascular aortic repair for its potential to reduce contrast media volume, radiation dose, procedure time, and fluoroscopy time. Electronic databases were systematically searched for studies published between January 2010 and March 2016 that included a control group describing 3D fusion imaging in endovascular aortic procedures. Two independent reviewers assessed the methodological quality of the included studies and extracted data on iodinated contrast volume, radiation dose, procedure time, and fluoroscopy time. Contrast use for standard and complex endovascular aortic repairs (fenestrated, branched, and chimney) was pooled using a random-effects model; outcomes are reported as the mean difference with 95% confidence intervals (CIs). Seven studies, 5 retrospective and 2 prospective, involving 921 patients were selected for analysis. The methodological quality of the studies was moderate (median 17, range 15-18). The use of fusion imaging led to an estimated mean reduction in iodinated contrast of 40.1 mL (95% CI 16.4 to 63.7, p=0.002) for standard procedures and a mean reduction of 70.7 mL (95% CI 44.8 to 96.6, p<0.001) for complex repairs. Secondary outcome measures were not pooled because of potential bias in nonrandomized data, but radiation doses, procedure times, and fluoroscopy times were lower, although not always significantly, in the fusion group in 6 of the 7 studies. Compared with the control group, 3D fusion imaging is associated with a significant reduction in the volume of contrast employed for standard and complex endovascular aortic procedures, which can be particularly important in patients with renal failure. Radiation doses, procedure times, and fluoroscopy times were reduced when 3D fusion was used.
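    Random-effects pooling of per-study mean differences, as described above, is commonly done with the DerSimonian-Laird estimator. A self-contained sketch; the study values in the usage note below are hypothetical, not the seven studies analyzed here:

```python
import math

def dl_pooled_mean_difference(mds, ses):
    """DerSimonian-Laird random-effects pooling of per-study mean
    differences (mds) with standard errors (ses); returns the pooled
    mean difference and its 95% confidence interval."""
    k = len(mds)
    w = [1.0 / se ** 2 for se in ses]                     # fixed-effect weights
    fe = sum(wi * m for wi, m in zip(w, mds)) / sum(w)    # fixed-effect mean
    q = sum(wi * (m - fe) ** 2 for wi, m in zip(w, mds))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]         # random-effects weights
    pooled = sum(wi * m for wi, m in zip(w_re, mds)) / sum(w_re)
    se_p = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se_p, pooled + 1.96 * se_p)
```

    For example, `dl_pooled_mean_difference([30.0, 50.0, 45.0], [8.0, 10.0, 12.0])` returns a pooled contrast-reduction estimate lying between the smallest and largest study values, together with its 95% CI.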

  14. Use of data fusion to optimize contaminant transport predictions

    International Nuclear Information System (INIS)

    Eeckhout, E. van

    1997-10-01

    The original data fusion workstation, as envisioned by Coleman Research Corp., was constructed under funding from DOE (EM-50) in the early 1990s. The intent was to demonstrate the viability of fusion and analysis of data from various types of sensors, primarily geophysical, for waste site characterization. This overall concept changed over time and evolved more towards hydrogeological (groundwater) data fusion after some initial geophysical fusion work at Coleman. The initial geophysical fusion platform was tested at Hanford and Fernald, and the later hydrogeological fusion work has been demonstrated at Pantex, Savannah River, the US Army Letterkenny Depot, a DoD Massachusetts site, and a DoD California site. The hydrogeologic data fusion package has been spun off to a company named Fusion and Control Technology, Inc. This package is called the Hydrological Fusion And Control Tool (Hydro-FACT) and is being sold as a product that links with the software package MS-VMS (MODFLOW-SURFACT Visual Modeling System), sold by HydroGeoLogic, Inc. MODFLOW is a USGS development and is in the public domain. Since the government paid for the data fusion development at Coleman, the government and its contractors have access to the data fusion technology in this hydrogeologic package for certain computer platforms, but would probably have to hire FACT (Fusion and Control Technology, Inc.) and/or HydroGeoLogic for some level of software and services. Further discussion in this report concentrates on the hydrogeologic fusion module that is being sold as Hydro-FACT, which can be linked with MS-VMS.

  15. Simultaneous usage of pinhole and penumbral apertures for imaging small scale neutron sources from inertial confinement fusion experiments.

    Science.gov (United States)

    Guler, N; Volegov, P; Danly, C R; Grim, G P; Merrill, F E; Wilde, C H

    2012-10-01

    Inertial confinement fusion experiments at the National Ignition Facility are designed to understand the basic principles of creating self-sustaining fusion reactions by laser driven compression of deuterium-tritium (DT) filled cryogenic plastic capsules. The neutron imaging diagnostic provides information on the distribution of the central fusion reaction region and the surrounding DT fuel by observing neutron images in two different energy bands for primary (13-17 MeV) and down-scattered (6-12 MeV) neutrons. From this, the final shape and size of the compressed capsule can be estimated and the symmetry of the compression can be inferred. These experiments provide small sources with high yield neutron flux. An aperture design that includes an array of pinholes and penumbral apertures has provided the opportunity to image the same source with two different techniques. This allows for an evaluation of these different aperture designs and reconstruction algorithms.

  16. APPLICATION OF FUSION WITH SAR AND OPTICAL IMAGES IN LAND USE CLASSIFICATION BASED ON SVM

    Directory of Open Access Journals (Sweden)

    C. Bao

    2012-07-01

    Full Text Available With the increasing availability of remote sensing data at multiple spatial and spectral resolutions and from multiple sources, data fusion technologies have been widely used in geological fields. Synthetic Aperture Radar (SAR) and optical cameras are the two most common sensors at present. Multi-spectral optical images express the spectral features of ground objects, while SAR images express backscatter information. Classification accuracy can be effectively improved by fusing the two kinds of images. In this paper, TerraSAR-X images and ALOS multi-spectral images were fused for land use classification. After preprocessing steps such as geometric rectification, radiometric correction, and noise suppression, the two kinds of images were fused, and an SVM model was then used for land use classification. Two different fusion methods were used: one joins the SAR image to the multi-spectral images as an additional band, and the other fuses the two kinds of images directly. The former raises the resolution and preserves texture information, while the latter preserves spectral feature information and improves the capability of discriminating different features. The experimental results showed that classification using the fused images is more accurate than using the multi-spectral images alone. Classification accuracy for roads, habitation, and water bodies was significantly improved. Compared to traditional classification methods, applying an SVM classifier to the fused images achieved better results in identifying complicated land use classes, especially small ground features.
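    The first fusion variant described above, joining the SAR image to the multi-spectral stack as one extra band, amounts to simple band stacking before classification. A numpy sketch with hypothetical array sizes; an SVM such as scikit-learn's `sklearn.svm.SVC` would then be trained on `X` plus ground-truth labels:

```python
import numpy as np

def stack_sar_band(ms_image, sar_image):
    """Append a co-registered SAR backscatter image to a multi-spectral
    cube as one additional band (the first fusion variant above)."""
    assert ms_image.shape[:2] == sar_image.shape
    return np.dstack([ms_image, sar_image])

def to_feature_matrix(fused):
    """Flatten an H x W x B cube into (H*W) x B per-pixel feature
    vectors, the layout a pixel-wise SVM classifier expects."""
    h, w, b = fused.shape
    return fused.reshape(h * w, b)

# Hypothetical 4-band multi-spectral scene plus one SAR band:
ms = np.random.rand(64, 64, 4)
sar = np.random.rand(64, 64)
X = to_feature_matrix(stack_sar_band(ms, sar))
print(X.shape)  # -> (4096, 5)
```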

  17. Offline fusion of co-registered intravascular ultrasound and frequency domain optical coherence tomography images for the analysis of human atherosclerotic plaques

    DEFF Research Database (Denmark)

    Räber, Lorenz; Heo, Jung Ho; Radu, Maria D

    2012-01-01

    To demonstrate the feasibility and potential usefulness of an offline fusion of matched optical coherence tomography (OCT) and intravascular ultrasound (IVUS)/virtual histology (IVUS-VH) images.

  18. Inhibition of the Hantavirus Fusion Process by Predicted Domain III and Stem Peptides from Glycoprotein Gc.

    Science.gov (United States)

    Barriga, Gonzalo P; Villalón-Letelier, Fernando; Márquez, Chantal L; Bignon, Eduardo A; Acuña, Rodrigo; Ross, Breyan H; Monasterio, Octavio; Mardones, Gonzalo A; Vidal, Simon E; Tischler, Nicole D

    2016-07-01

    Hantaviruses can cause hantavirus pulmonary syndrome or hemorrhagic fever with renal syndrome in humans. To enter cells, hantaviruses fuse their envelope membrane with host cell membranes. Previously, we have shown that the Gc envelope glycoprotein is the viral fusion protein sharing characteristics with class II fusion proteins. The ectodomain of class II fusion proteins is composed of three domains connected by a stem region to a transmembrane anchor in the viral envelope. These fusion proteins can be inhibited through exogenous fusion protein fragments spanning domain III (DIII) and the stem region. Such fragments are thought to interact with the core of the fusion protein trimer during the transition from its pre-fusion to its post-fusion conformation. Based on our previous homology model structure for Gc from Andes hantavirus (ANDV), here we predicted and generated recombinant DIII and stem peptides to test whether these fragments inhibit hantavirus membrane fusion and cell entry. Recombinant ANDV DIII was soluble, presented disulfide bridges and beta-sheet secondary structure, supporting the in silico model. Using DIII and the C-terminal part of the stem region, the infection of cells by ANDV was blocked up to 60% when fusion of ANDV occurred within the endosomal route, and up to 95% when fusion occurred with the plasma membrane. Furthermore, the fragments impaired ANDV glycoprotein-mediated cell-cell fusion, and cross-inhibited the fusion mediated by the glycoproteins from Puumala virus (PUUV). The Gc fragments interfered in ANDV cell entry by preventing membrane hemifusion and pore formation, retaining Gc in a non-resistant homotrimer stage, as described for DIII and stem peptide inhibitors of class II fusion proteins. 
Collectively, our results demonstrate that hantavirus Gc shares not only structural, but also mechanistic similarity with class II viral fusion proteins, and will hopefully help in developing novel therapeutic strategies against hantaviruses.

  19. Liver function assessment using 99mTc-GSA single-photon emission computed tomography (SPECT)/CT fusion imaging in hilar bile duct cancer: A retrospective study.

    Science.gov (United States)

    Sumiyoshi, Tatsuaki; Shima, Yasuo; Okabayashi, Takehiro; Kozuki, Akihito; Hata, Yasuhiro; Noda, Yoshihiro; Kouno, Michihiko; Miyagawa, Kazuyuki; Tokorodani, Ryotaro; Saisaka, Yuichi; Tokumaru, Teppei; Nakamura, Toshio; Morita, Sojiro

    2016-07-01

    The objective of this study was to determine the utility of Tc-99m-diethylenetriamine-penta-acetic acid-galactosyl human serum albumin ((99m)Tc-GSA) single-photon emission computed tomography (SPECT)/CT fusion imaging for posthepatectomy remnant liver function assessment in hilar bile duct cancer patients. Thirty hilar bile duct cancer patients who underwent major hepatectomy with extrahepatic bile duct resection were retrospectively analyzed. The indocyanine green plasma clearance rate (KICG), the KICG estimated by (99m)Tc-GSA scintigraphy (KGSA), and the volumetric and functional rates of the future remnant liver obtained from (99m)Tc-GSA SPECT/CT fusion imaging were used to evaluate preoperative whole-liver function and posthepatectomy remnant liver function, respectively. Remnant (rem) KICG (= KICG × volumetric rate) and remKGSA (= KGSA × functional rate) were used to predict future remnant liver function; major hepatectomy was considered unsafe for values below a threshold. Functional rates of the future remnant liver were significantly higher than volumetric rates (median: 0.54 vs 0.46; P < 0.05). Liver failure and mortality did not occur in the patients for whom hepatectomy was considered unsafe based on remKICG. remKGSA showed a stronger correlation with postoperative prothrombin time activity than remKICG. (99m)Tc-GSA SPECT/CT fusion imaging enables accurate assessment of future remnant liver function and suitability for hepatectomy in hilar bile duct cancer patients. Copyright © 2016 Elsevier Inc. All rights reserved.
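    The two predictors above are plain products: remKICG = KICG × volumetric rate and remKGSA = KGSA × functional rate. A tiny sketch in which the whole-liver value and the safety cutoff are illustrative placeholders, since the abstract does not state the actual threshold:

```python
def remnant_function(whole_liver_value, remnant_rate, cutoff=0.05):
    """Predicted remnant function = whole-liver function x remnant rate.
    `cutoff` is an illustrative placeholder for the unsafe-hepatectomy
    threshold, which the abstract does not state."""
    rem = whole_liver_value * remnant_rate
    return rem, rem < cutoff

# A hypothetical whole-liver KICG of 0.15 with the reported median
# volumetric rate (0.46) versus the median functional rate (0.54):
print(round(remnant_function(0.15, 0.46)[0], 3))  # -> 0.069
print(round(remnant_function(0.15, 0.54)[0], 3))  # -> 0.081
```

    Because the functional rate tends to exceed the volumetric rate, remKGSA yields a higher (less conservative) remnant estimate than remKICG for the same patient.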

  20. Scene data fusion: Real-time standoff volumetric gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Haefner, Andrew; Mihailescu, Lucian [Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States)

    2015-11-11

    An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. A 3D model of the scene, provided in real-time by a simultaneous localization and mapping (SLAM) algorithm, is incorporated into the image reconstruction, reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and a cart-based Compton imaging platform comprised of two 3D position-sensitive high purity germanium (HPGe) detectors. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real-time.

  1. Assessment of ion kinetic effects in shock-driven inertial confinement fusion implosions using fusion burn imaging

    International Nuclear Information System (INIS)

    Rosenberg, M. J.; Séguin, F. H.; Rinderknecht, H. G.; Zylstra, A. B.; Li, C. K.; Sio, H.; Johnson, M. Gatu; Frenje, J. A.; Petrasso, R. D.; Amendt, P. A.; Wilks, S. C.; Pino, J.; Atzeni, S.; Hoffman, N. M.; Kagan, G.; Molvig, K.; Glebov, V. Yu.; Stoeckl, C.; Seka, W.; Marshall, F. J.

    2015-01-01

    The significance and nature of ion kinetic effects in D3He-filled, shock-driven inertial confinement fusion implosions are assessed through measurements of fusion burn profiles. Over this series of experiments, the ratio of ion-ion mean free path to minimum shell radius (the Knudsen number, N_K) was varied from 0.3 to 9 in order to probe hydrodynamic-like to strongly kinetic plasma conditions; as the Knudsen number increased, hydrodynamic models increasingly failed to match measured yields, while an empirically-tuned, first-step model of ion kinetic effects better captured the observed yield trends [Rosenberg et al., Phys. Rev. Lett. 112, 185001 (2014)]. Here, spatially resolved measurements of the fusion burn are used to examine kinetic ion transport effects in greater detail, adding an additional dimension of understanding that goes beyond zero-dimensional integrated quantities to one-dimensional profiles. In agreement with the previous findings, a comparison of measured and simulated burn profiles shows that models including ion transport effects are able to better match the experimental results. In implosions characterized by large Knudsen numbers (N_K ∼ 3), the fusion burn profiles predicted by hydrodynamics simulations that exclude ion mean free path effects are peaked far from the origin, in stark disagreement with the experimentally observed profiles, which are centrally peaked. In contrast, a hydrodynamics simulation that includes a model of ion diffusion is able to qualitatively match the measured profile shapes. Therefore, ion diffusion or diffusion-like processes are identified as a plausible explanation of the observed trends, though further refinement of the models is needed for a more complete and quantitative understanding of ion kinetic effects.

  2. Assessment of ion kinetic effects in shock-driven inertial confinement fusion implosions using fusion burn imaging

    Energy Technology Data Exchange (ETDEWEB)

    Rosenberg, M. J., E-mail: mros@lle.rochester.edu; Séguin, F. H.; Rinderknecht, H. G.; Zylstra, A. B.; Li, C. K.; Sio, H.; Johnson, M. Gatu; Frenje, J. A.; Petrasso, R. D. [Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Amendt, P. A.; Wilks, S. C.; Pino, J. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Atzeni, S. [Dipartimento SBAI, Università di Roma “La Sapienza” and CNISM, Via A. Scarpa 14-16, I-00161 Roma (Italy); Hoffman, N. M.; Kagan, G.; Molvig, K. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Glebov, V. Yu.; Stoeckl, C.; Seka, W.; Marshall, F. J. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States); and others

    2015-06-15

    The significance and nature of ion kinetic effects in D{sup 3}He-filled, shock-driven inertial confinement fusion implosions are assessed through measurements of fusion burn profiles. Over this series of experiments, the ratio of ion-ion mean free path to minimum shell radius (the Knudsen number, N{sub K}) was varied from 0.3 to 9 in order to probe hydrodynamic-like to strongly kinetic plasma conditions; as the Knudsen number increased, hydrodynamic models increasingly failed to match measured yields, while an empirically-tuned, first-step model of ion kinetic effects better captured the observed yield trends [Rosenberg et al., Phys. Rev. Lett. 112, 185001 (2014)]. Here, spatially resolved measurements of the fusion burn are used to examine kinetic ion transport effects in greater detail, adding an additional dimension of understanding that goes beyond zero-dimensional integrated quantities to one-dimensional profiles. In agreement with the previous findings, a comparison of measured and simulated burn profiles shows that models including ion transport effects are able to better match the experimental results. In implosions characterized by large Knudsen numbers (N{sub K} ∼ 3), the fusion burn profiles predicted by hydrodynamics simulations that exclude ion mean free path effects are peaked far from the origin, in stark disagreement with the experimentally observed profiles, which are centrally peaked. In contrast, a hydrodynamics simulation that includes a model of ion diffusion is able to qualitatively match the measured profile shapes. Therefore, ion diffusion or diffusion-like processes are identified as a plausible explanation of the observed trends, though further refinement of the models is needed for a more complete and quantitative understanding of ion kinetic effects.
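    The Knudsen number used in the two records above is simply the ratio of the ion-ion mean free path to the minimum shell radius. A trivial sketch; the regime labels and boundaries are illustrative glosses of the 0.3-9 range probed in these experiments, not definitions from the paper:

```python
def knudsen_number(ion_mfp, min_shell_radius):
    """N_K = ion-ion mean free path / minimum shell radius."""
    return ion_mfp / min_shell_radius

def regime(nk):
    """Illustrative regime labels (boundaries are not from the paper)."""
    if nk < 0.3:
        return "hydrodynamic-like"
    if nk < 3.0:
        return "transitional"
    return "strongly kinetic"
```

    In this framing, the N_K ∼ 3 implosions discussed above sit at the strongly kinetic end, where hydrodynamics-only simulations mispredict the burn profile shape.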

  3. System for automatic x-ray-image analysis, measurement, and sorting of laser fusion targets

    International Nuclear Information System (INIS)

    Singleton, R.M.; Perkins, D.E.; Willenborg, D.L.

    1980-01-01

    This paper describes the Automatic X-Ray Image Analysis and Sorting (AXIAS) system, which is designed to analyze and measure x-ray images of opaque hollow microspheres used as laser fusion targets. The x-ray images are first recorded on a high resolution film plate. The AXIAS system then digitizes and processes the images to accurately measure the target parameters and defects. The primary goals of the AXIAS system are: to provide extremely accurate and rapid measurements, to engineer a practical system for a routine production environment, and to furnish the capability of automatically measuring an array of images for sorting and selection.

  4. Enhancement of Tropical Land Cover Mapping with Wavelet-Based Fusion and Unsupervised Clustering of SAR and Landsat Image Data

    Science.gov (United States)

    LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. As in previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
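    The wavelet-based fusion described above can be sketched with a hand-rolled one-level Haar transform: keep the low-frequency approximation from the Thematic Mapper band (its large homogeneous regions) and take the stronger of the two detail coefficients at each position (the SAR texture). This is a simplification under assumed fusion rules, not the authors' exact scheme:

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform (both image sides must be even)."""
    a = (img[0::2] + img[1::2]) / 2.0   # vertical average
    d = (img[0::2] - img[1::2]) / 2.0   # vertical difference
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def wavelet_fuse(sar, tm):
    """Keep the TM approximation band; take the larger-magnitude detail
    coefficient from either source at each position."""
    s, t = haar2(sar), haar2(tm)
    pick = lambda x, y: np.where(np.abs(x) > np.abs(y), x, y)
    return ihaar2(t[0], pick(s[1], t[1]), pick(s[2], t[2]), pick(s[3], t[3]))
```

    Unsupervised clustering (e.g. k-means on the fused pixels) would then produce the vegetation map.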

  5. Cost-Effectiveness Comparison of Imaging-Guided Prostate Biopsy Techniques: Systematic Transrectal Ultrasound, Direct In-Bore MRI, and Image Fusion

    NARCIS (Netherlands)

    Venderink, W.; Govers, T.M.; Rooij, M. de; Futterer, J.J.; Sedelaar, J.P.M.

    2017-01-01

    OBJECTIVE: Three commonly used prostate biopsy approaches are systematic transrectal ultrasound guided, direct in-bore MRI guided, and image fusion guided. The aim of this study was to calculate which strategy is most cost-effective. MATERIALS AND METHODS: A decision tree and Markov model were

  6. Fusion of lens-free microscopy and mobile-phone microscopy images for high-color-accuracy and high-resolution pathology imaging

    Science.gov (United States)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2017-03-01

    Digital pathology and telepathology require imaging tools with high-throughput, high-resolution and accurate color reproduction. Lens-free on-chip microscopy based on digital in-line holography is a promising technique towards these needs, as it offers a wide field of view (FOV >20 mm2) and high resolution with a compact, low-cost and portable setup. Color imaging has been previously demonstrated by combining reconstructed images at three discrete wavelengths in the red, green and blue parts of the visible spectrum, i.e., the RGB combination method. However, this RGB combination method is subject to color distortions. To improve the color performance of lens-free microscopy for pathology imaging, here we present a wavelet-based color fusion imaging framework, termed "digital color fusion microscopy" (DCFM), which digitally fuses together a grayscale lens-free microscope image taken at a single wavelength and a low-resolution and low-magnification color-calibrated image taken by a lens-based microscope, which can simply be a mobile phone based cost-effective microscope. We show that the imaging results of an H&E stained breast cancer tissue slide with the DCFM technique come very close to a color-calibrated microscope using a 40x objective lens with 0.75 NA. Quantitative comparison showed 2-fold reduction in the mean color distance using the DCFM method compared to the RGB combination method, while also preserving the high-resolution features of the lens-free microscope. Due to the cost-effective and field-portable nature of both lens-free and mobile-phone microscopy techniques, their combination through the DCFM framework could be useful for digital pathology and telepathology applications, in low-resource and point-of-care settings.
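    A heavily simplified luminance-substitution version of the color fusion idea above (not the paper's wavelet-based DCFM algorithm) can be written with a BT.601 RGB/YCbCr conversion: the upsampled mobile-phone image contributes chroma, while the lens-free reconstruction contributes the high-resolution luminance.

```python
import numpy as np

# RGB <-> YCbCr conversion matrices (ITU-R BT.601, full range)
_FWD = np.array([[0.299, 0.587, 0.114],
                 [-0.168736, -0.331264, 0.5],
                 [0.5, -0.418688, -0.081312]])
_INV = np.linalg.inv(_FWD)

def color_fuse(gray_highres, rgb_lowres_upsampled):
    """Replace the luminance of an (already upsampled) low-res color
    image with a high-res grayscale reconstruction, keeping the chroma.
    A luminance-substitution simplification of the DCFM scheme above."""
    ycc = rgb_lowres_upsampled @ _FWD.T   # RGB -> (Y, Cb, Cr)
    ycc[..., 0] = gray_highres            # swap in high-res luminance
    return ycc @ _INV.T                   # back to RGB
```

    The wavelet-based DCFM fusion serves the same purpose but preserves fine detail more faithfully than a plain luminance swap.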

  7. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    Directory of Open Access Journals (Sweden)

    Yuanshen Zhao

    2016-01-01

    Full Text Available Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances existing in the background of the image. The bottleneck to robust fruit recognition is reducing influence from two main disturbances: illumination and overlapping. In order to recognize the tomato in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, which combined the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to get the optimal threshold. The final segmentation result was processed by a morphological operation to reduce a small amount of noise. In the detection tests, 93% of the 200 sample target tomatoes were recognized. It indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in an uncontrolled environment at low cost.

  8. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion.

    Science.gov (United States)

    Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang

    2016-01-29

    Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances existing in the background of the image. The bottleneck to robust fruit recognition is reducing influence from two main disturbances: illumination and overlapping. In order to recognize the tomato in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the  a*-component image and the I-component image, were extracted from the L*a*b* color space and luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, which combined the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to get the optimal threshold. The final segmentation result was processed by morphology operation to reduce a small amount of noise. In the detection tests, 93% target tomatoes were recognized out of 200 overall samples. It indicates that the proposed tomato recognition method is available for robotic tomato harvesting in the uncontrolled environment with low cost.
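    Two of the steps above, extracting the YIQ in-phase (I) component and choosing an adaptive threshold, can be sketched directly. The a*-extraction and wavelet-fusion steps are omitted, and Otsu's method stands in for whatever adaptive thresholding the authors used (the abstract does not name it):

```python
import numpy as np

def i_component(rgb):
    """In-phase (I) channel of the YIQ color space; red-ripe tomatoes
    score high against green foliage."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.596 * r - 0.274 * g - 0.322 * b

def otsu_threshold(img, bins=256):
    """Otsu's adaptive threshold: pick the cut that maximizes the
    between-class variance of the histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)            # class-0 probability up to each cut
    mu = np.cumsum(p * centers)  # cumulative mean up to each cut
    mu_t = mu[-1]                # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(between)]
```

    Thresholding the (fused) feature image at `otsu_threshold(...)` yields the binary tomato mask, which the paper then cleans up with morphology.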

  9. FZUImageReg: A toolbox for medical image registration and dose fusion in cervical cancer radiotherapy.

    Directory of Open Access Journals (Sweden)

    Qinquan Gao

    Full Text Available The combination of external-beam radiotherapy (EBRT) and high-dose-rate brachytherapy (HDR-BT) is a standard form of treatment for patients with locally advanced uterine cervical cancer. Personalized radiotherapy in cervical cancer requires efficient and accurate dose planning and assessment across these types of treatment. To achieve radiation dose assessment, accurate mapping of the dose distribution from HDR-BT onto EBRT is extremely important. However, few systems can achieve robust dose fusion and determine the accumulated dose distribution during the entire course of treatment. We have therefore developed a toolbox (FZUImageReg), which is a user-friendly dose fusion system based on hybrid image registration for radiation dose assessment in cervical cancer radiotherapy. The main part of the software consists of a collection of medical image registration algorithms and a modular design with a user-friendly interface, which allows users to quickly configure, test, monitor, and compare different registration methods for a specific application. Owing to the large deformation, the direct application of conventional state-of-the-art image registration methods is not sufficient for the accurate alignment of EBRT and HDR-BT images. To solve this problem, a multi-phase non-rigid registration method using local landmark-based free-form deformation is proposed to handle the locally large deformations between EBRT and HDR-BT images, followed by intensity-based free-form deformation. With the resulting transformation, the software also provides a dose mapping function according to the deformation field. The total dose distribution during the entire course of treatment can then be presented. Experimental results clearly show that the proposed system can achieve accurate registration between EBRT and HDR-BT images and provide radiation dose warping and fusion results for dose assessment in cervical cancer radiotherapy with high accuracy and efficiency.
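    The dose-mapping step described above, warping a dose grid through the recovered deformation field, can be sketched in 2-D with nearest-neighbour sampling; this is a minimal stand-in for whatever interpolation FZUImageReg actually uses:

```python
import numpy as np

def warp_dose(dose, dvf):
    """Map a 2-D dose grid through a deformation vector field (DVF):
    output[y, x] = dose[y + dvf[y, x, 0], x + dvf[y, x, 1]], with
    nearest-neighbour sampling and edge clamping."""
    h, w = dose.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(yy + dvf[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xx + dvf[..., 1]).astype(int), 0, w - 1)
    return dose[src_y, src_x]
```

    The accumulated dose over the whole course of treatment is then the EBRT dose plus the warped HDR-BT dose on the common grid.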

  10. Clinical significance of creative 3D-image fusion across multimodalities [PET + CT + MR] based on characteristic coregistration

    International Nuclear Information System (INIS)

    Peng, Matthew Jian-qiao; Ju Xiangyang; Khambay, Balvinder S.; Ayoub, Ashraf F.; Chen, Chin-Tu; Bai Bo

    2012-01-01

    Objective: To investigate a 2-dimensional (2D) registration approach based on characteristic localization to achieve 3-dimensional (3D) fusion of PET, CT and MR images one by one. Method: A cubic oriented co-registration scheme of "9-point and 3-plane" was verified to be geometrically practical. After acquiring DICOM data of PET/CT/MR (directed by the radiotracer 18F-FDG etc.), through 3D reconstruction and virtual dissection, internal feature points were sorted and combined with preselected external feature points for the matching process. Following the procedure of feature extraction and image mapping, "picking points to form planes" and "picking planes for segmentation" were executed. Eventually, image fusion was implemented on a real-time Mimics workstation based on the auto-fusion techniques known as "information exchange" and "signal overlay". Result: The 2D and 3D images fused across the modalities [CT + MR], [PET + MR], [PET + CT] and [PET + CT + MR] were tested on data from patients suffering from tumors. Complementary 2D/3D images simultaneously presenting metabolic activities and anatomic structures were created with detection rates of 70%, 56%, 54% (or 98%) and 44%, with no statistically significant difference among them. Conclusion: Given that no hybrid detector integrating the triple modalities [PET + CT + MR] is yet available internationally, this sort of multimodality fusion is doubtlessly an essential complement to the existing function of single-modality imaging.

  11. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2016-01-01

    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation. (paper)

  12. Quasi-elastic scattering: an alternative tool for mapping the fusion barriers for heavy-ion induced fusion reaction

    International Nuclear Information System (INIS)

    Behera, B.R.

    2016-01-01

    Heavy element synthesis through heavy-ion induced fusion reactions is an active field in contemporary nuclear physics. Exact knowledge of the fusion barrier is one of the essential parameters for planning any experiment on heavy element production. Theoretically, many models are available to predict the barrier. Although these models are successful in predicting the fusion of medium-mass nuclei, they often fail to predict the exact location of the barrier for the fusion of heavy nuclei. Experimental determination of the barrier for such reactions is required for future experiments on the synthesis of heavy elements. Traditionally, the fusion barrier is determined by taking the double derivative of the fusion excitation function. However, this method is difficult in the case of fusion of heavy nuclei because of the very low fusion/capture cross section and the associated experimental complications. Alternatively, the fusion barrier can be determined by measuring the quasi-elastic cross section at backward angles, a method that can be applied to the fusion of heavy nuclei. Experimental determination of the fusion barrier by different methods, and a comparison of the fusion excitation function and quasi-elastic scattering approaches, are reviewed. At IUAC, New Delhi, a program has recently been started for the measurement of fusion barriers through quasi-elastic scattering. The experimental facility and the first results of the experiments carried out with this facility are presented. (author)

  13. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
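The wavelet fusion idea the report describes — transform each source image, combine the coefficients, then invert — can be illustrated with a minimal one-level Haar transform in plain NumPy. This is a sketch, not the report's actual algorithm: the wavelet, decomposition depth, and fusion rule there are unspecified here, and averaging approximations while max-abs-selecting details is just one common rule:

```python
import numpy as np

def haar2(x):
    # One-level 2D Haar transform of an even-sized image.
    a = (x[0::2] + x[1::2]) / 2.0        # rows: low-pass
    d = (x[0::2] - x[1::2]) / 2.0        # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    # Exact inverse of haar2.
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    x = np.empty((2 * h, 2 * w))
    x[0::2] = a + d; x[1::2] = a - d
    return x

def fuse(img1, img2):
    c1, c2 = haar2(img1), haar2(img2)
    ll = (c1[0] + c2[0]) / 2.0           # average the approximations
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)  # keep stronger detail
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2(ll, *details)

# Fusing an image with itself returns the image (the transform is invertible).
img = np.arange(16.0).reshape(4, 4)
assert np.allclose(fuse(img, img), img)
```

Selecting detail coefficients by magnitude is what lets wavelet fusion preserve the sharper spatial structure from each sensor, the property the report credits for its advantage over intensity-modulation and IHS fusion.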

  14. Fusion imaging of computed tomographic pulmonary angiography and SPECT ventilation/perfusion scintigraphy: initial experience and potential benefit

    International Nuclear Information System (INIS)

    Harris, Benjamin; Bailey, Dale; Roach, Paul; Bailey, Elizabeth; King, Gregory

    2007-01-01

    The objective of this study was to examine the feasibility of fusing ventilation and perfusion data from single-photon emission computed tomography (SPECT) ventilation/perfusion (V/Q) scintigraphy together with computed tomographic pulmonary angiography (CTPA) data. We sought to determine the accuracy of this fusion process. In addition, we correlated the findings of this technique with the final clinical diagnosis. Thirty consecutive patients (17 female, 13 male) who had undergone both CTPA and SPECT V/Q scintigraphy during their admission for investigation of potential pulmonary embolism were identified retrospectively. Image datasets from these two modalities were co-registered and fused using commercial software. Accuracy of the fusion process was determined subjectively by correlation between modalities of the anatomical boundaries and co-existent pleuro-parenchymal abnormalities. In all 30 cases, SPECT V/Q images were accurately fused with CTPA images. An automated registration algorithm was sufficient alone in 23 cases (77%). Additional linear z-axis scaling was applied in seven cases. There was accurate topographical co-localisation of vascular, parenchymal and pleural disease on the fused images. Nine patients who had positive CTPA performed as an initial investigation had co-localised perfusion defects on the subsequent fused CTPA/SPECT images. Three of the 11 V/Q scans initially reported as intermediate could be reinterpreted as low probability owing to co-localisation of defects with parenchymal or pleural pathology. Accurate fusion of SPECT V/Q scintigraphy to CTPA images is possible. This technique may be clinically useful in patients who have non-diagnostic initial investigations or in whom corroborative imaging is sought. (orig.)

  15. Visualization of intracranial vessel anatomy using high resolution MRI and a simple image fusion technique

    International Nuclear Information System (INIS)

    Nasel, C.

    2005-01-01

    A new technique for fusion and 3D viewing of high resolution magnetic resonance (MR) angiography and morphological MR sequences is reported. Scanning and image fusion was possible within 20 min on a standard 1.5 T MR-scanner. The procedure was successfully performed in 10 consecutive cases with excellent visualization of wall and luminal aspects of the intracranial segments of the internal carotid artery, the vertebrobasilar system and the anterior, middle and posterior cerebral artery

  16. Visualization of intracranial vessel anatomy using high resolution MRI and a simple image fusion technique

    Energy Technology Data Exchange (ETDEWEB)

    Nasel, C. [Division of Neuroradiology, Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, A-1090 Vienna (Austria)]. E-mail: christian.nasel@perfusion.at

    2005-04-01

    A new technique for fusion and 3D viewing of high resolution magnetic resonance (MR) angiography and morphological MR sequences is reported. Scanning and image fusion was possible within 20 min on a standard 1.5 T MR-scanner. The procedure was successfully performed in 10 consecutive cases with excellent visualization of wall and luminal aspects of the intracranial segments of the internal carotid artery, the vertebrobasilar system and the anterior, middle and posterior cerebral artery.

  17. Development of a High Resolution X-Ray Imaging Crystal Spectrometer for Measurement of Ion-Temperature and Rotation-Velocity Profiles in Fusion Energy Research Plasmas

    International Nuclear Information System (INIS)

    Hill, K.W.; Bitter, M.L.; Broennimann, Ch.; Eikenberry, E.F.; Ince-Cushman, A.; Lee, S.G.; Rice, J.E.; Scott, S.; Barnsley, R.

    2008-01-01

    A new imaging high resolution x-ray crystal spectrometer (XCS) has been developed to measure continuous profiles of ion temperature and rotation velocity in fusion plasmas. Following proof-of-principle tests on the Alcator C-Mod tokamak and the NSTX spherical tokamak, and successful testing of a new silicon, pixelated detector with 1 MHz count rate capability per pixel, an imaging XCS is being designed to measure full profiles of T_i and ν_φ on C-Mod. The imaging XCS design has also been adopted for ITER. Ion-temperature uncertainty and minimum measurable rotation velocity are calculated for the C-Mod spectrometer. The effects of x-ray and nuclear-radiation background on the measurement uncertainties are calculated to predict performance on ITER

  18. Autofocus and fusion using nonlinear correlation

    International Nuclear Information System (INIS)

    Cabazos-Marín, Alma Rocío; Álvarez-Borrego, Josué; Coronel-Beltrán, Ángel

    2014-01-01

    In this work a new algorithm is proposed for autofocusing and fusion of images captured by a microscope's CCD. The proposed autofocus algorithm implements a spiral scan of each image f(x, y)_w in the stack to define the vector V_w. The spectrum FV_w of this vector is calculated by fast Fourier transform. The best in-focus image is determined by a focus measure obtained from the nonlinear correlation of FV_1, the vector of the reference image, with each of the other vectors FV_w in the stack. In addition, fusion is performed with the subset of selected images f(x, y)_SBF having the best focus measurements. Fusion creates a new improved image f(x, y)_F by selecting the pixels of higher intensity
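As a rough illustration of the focus-measure idea only (not the authors' exact algorithm: a raster scan stands in for the spiral scan, and plain high-frequency spectral energy stands in for the nonlinear correlation), a sketch might look like:

```python
import numpy as np

def focus_measure(img):
    """High-frequency spectral energy of the flattened image.

    A raster scan is used here for brevity; the paper scans each
    image in a spiral before taking the FFT.
    """
    v = img.astype(float).ravel()
    spec = np.abs(np.fft.rfft(v - v.mean()))
    cutoff = len(spec) // 4
    return spec[cutoff:].sum()  # sharp detail -> more high-frequency energy

def best_focused(stack):
    """Index of the sharpest image in the stack."""
    return max(range(len(stack)), key=lambda i: focus_measure(stack[i]))

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
# Crude defocus model: 2x2 neighborhood averaging (a low-pass filter).
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(np.roll(sharp, 1, 0), 1, 1)) / 4.0
assert best_focused([blurred, sharp]) == 1
```

Defocus acts as a low-pass filter, so ranking images by high-frequency spectral content recovers the in-focus slice, which is the principle the correlation-based measure exploits.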

  19. Predictive images of postoperative levator resection outcome using image processing software.

    Science.gov (United States)

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  20. TU-AB-202-11: Tumor Segmentation by Fusion of Multi-Tracer PET Images Using Copula Based Statistical Methods

    International Nuclear Information System (INIS)

    Lapuyade-Lahorgue, J; Ruan, S; Li, H; Vera, P

    2016-01-01

    Purpose: Multi-tracer PET imaging is receiving growing attention in radiotherapy because it provides additional tumor volume information such as glucose metabolism and oxygenation. However, automatic PET-based tumor segmentation remains a very challenging problem. We propose a statistical fusion approach to jointly segment the sub-areas of tumors from two-tracer (FDG and FMISO) PET images. Methods: Non-standardized Gamma distributions are convenient for modeling intensity distributions in PET. Because a strong correlation exists between multi-tracer PET images, we propose a new fusion method based on copulas, which are capable of representing the dependency between different tracers. A hidden Markov field (HMF) model is used to represent the spatial relationship between PET image voxels and the statistical dynamics of intensities for each modality. Real PET images of five patients with FDG and FMISO are used to evaluate our method quantitatively and qualitatively. A comparison between individual and multi-tracer segmentations was conducted to show the advantages of the proposed fusion method. Results: The segmentation results show that fusion with a Gaussian copula achieves a high Dice coefficient of 0.84, compared with 0.54 and 0.3 for the monomodal segmentations based on the FDG and FMISO PET images individually. In addition, the high correlation coefficients (0.75 to 0.91) obtained for the Gaussian copula for all five testing patients indicate the dependency between tumor regions in the multi-tracer PET images. Conclusion: This study shows that multi-tracer PET imaging can efficiently improve the segmentation of tumor regions where hypoxia and glucidic consumption are present at the same time. Introducing copulas to model the dependency between the two tracers makes it possible to take information from both tracers into account simultaneously and to deal with the two pathological phenomena. Future work will consider other families of copula, such as spherical and Archimedean copulas, and will aim to eliminate partial volume effects
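The bivariate Gaussian copula at the heart of this fusion model has a closed-form density. The sketch below evaluates only that density; the full segmentation pipeline (Gamma marginals, hidden Markov field, joint estimation) is well beyond an abstract and is omitted:

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_density(u, v, rho):
    """Bivariate Gaussian copula density c(u, v; rho) on (0, 1)^2.

    u, v are marginal CDF values (e.g. of the FDG and FMISO intensity
    models); rho is the latent Gaussian correlation, |rho| < 1.
    """
    x, y = norm.ppf(u), norm.ppf(v)   # map to standard-normal scores
    r2 = rho * rho
    return (1.0 / np.sqrt(1.0 - r2)
            * np.exp(-(r2 * (x * x + y * y) - 2.0 * rho * x * y)
                     / (2.0 * (1.0 - r2))))

# rho = 0 recovers independence: the density is identically 1.
assert abs(gaussian_copula_density(0.3, 0.8, 0.0) - 1.0) < 1e-12
# Positive dependence raises density where both tracers are jointly high.
assert gaussian_copula_density(0.9, 0.9, 0.8) > 1.0
```

Multiplying this density by the two marginal likelihoods gives a joint likelihood that couples the tracers, which is what lets the fused model outperform two independent monomodal segmentations.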

  1. Multi-sensor radiation detection, imaging, and fusion

    Energy Technology Data Exchange (ETDEWEB)

    Vetter, Kai [Department of Nuclear Engineering, University of California, Berkeley, CA 94720 (United States); Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-01-01

    Glenn Knoll was one of the leaders in the field of radiation detection and measurements and shaped this field through his outstanding scientific and technical contributions, as a teacher, through his personality, and through his textbook. His Radiation Detection and Measurement book guided me in my studies and is now the textbook in my classes in the Department of Nuclear Engineering at UC Berkeley. In the spirit of Glenn, I will provide an overview of our activities in the Berkeley Applied Nuclear Physics program, reflecting some of the breadth of radiation detection technologies and their applications, ranging from fundamental studies in physics to biomedical imaging and nuclear security. I will conclude with a discussion of our Berkeley Radwatch and Resilient Communities activities initiated as a result of the events at the Dai-ichi nuclear power plant in Fukushima, Japan more than 4 years ago. - Highlights: • Electron-tracking based gamma-ray momentum reconstruction. • 3D volumetric and 3D scene-fusion gamma-ray imaging. • Nuclear Street View integrates and associates nuclear radiation features with specific objects in the environment. • Institute for Resilient Communities combines science, education, and communities to minimize the impact of disastrous events.

  2. Advances in multi-sensor data fusion: algorithms and applications.

    Science.gov (United States)

    Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying

    2009-01-01

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. First, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering-target tracking, are then described, and both the advantages and the limitations of those applications are discussed. Finally, recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.

  3. Elastic Versus Rigid Image Registration in Magnetic Resonance Imaging-transrectal Ultrasound Fusion Prostate Biopsy: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Venderink, Wulphert; de Rooij, Maarten; Sedelaar, J P Michiel; Huisman, Henkjan J; Fütterer, Jurgen J

    2016-07-29

    The main difference between the available magnetic resonance imaging-transrectal ultrasound (MRI-TRUS) fusion platforms for prostate biopsy is the method of image registration, which is either rigid or elastic. Because elastic registration compensates for possible deformation caused, for example, by the introduction of an ultrasound probe, it is expected to perform better than rigid registration. The aim of this meta-analysis is to compare rigid with elastic registration by calculating the detection odds ratio (OR) for both subgroups. The detection OR is defined as the ratio of the odds of detecting clinically significant prostate cancer (csPCa) by MRI-TRUS fusion biopsy to the odds of detection by systematic TRUS biopsy. Secondary objectives were the OR for any PCa and the OR after pooling both registration techniques. The electronic databases PubMed, Embase, and Cochrane were systematically searched for relevant studies according to the Preferred Reporting Items for Systematic Review and Meta-analysis Statement. Studies comparing MRI-TRUS fusion and systematic TRUS-guided biopsies in the same patient were included. The quality assessment of included studies was performed using the Quality Assessment of Diagnostic Accuracy Studies version 2. Eleven papers describing elastic and 10 describing rigid registration were included. Meta-analysis showed an OR of csPCa for elastic and rigid registration of 1.45 (95% confidence interval [CI]: 1.21-1.73) … MRI-TRUS fusion systems vary in their method of compensating for prostate deformation. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.
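For reference, the detection odds ratio and its confidence interval for a single study can be computed directly from a 2x2 table of biopsy outcomes. The counts below are purely illustrative, not data from this meta-analysis, and the Woolf log-OR standard error is one standard choice:

```python
import math

def detection_odds_ratio(fusion_pos, fusion_neg, trus_pos, trus_neg):
    """Odds ratio of cancer detection for fusion vs systematic TRUS biopsy,
    with a 95% CI from the log-OR standard error (Woolf method)."""
    or_ = (fusion_pos * trus_neg) / (fusion_neg * trus_pos)
    se = math.sqrt(1 / fusion_pos + 1 / fusion_neg
                   + 1 / trus_pos + 1 / trus_neg)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: fusion detects in 60/100, systematic in 50/100.
or_, (lo, hi) = detection_odds_ratio(60, 40, 50, 50)
assert abs(or_ - 1.5) < 1e-12
assert lo < 1.5 < hi
```

A pooled meta-analytic OR then combines such per-study log-ORs, typically weighted by their inverse variances.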

  4. Feature Fusion Based Road Extraction for HJ-1-C SAR Image

    Directory of Open Access Journals (Sweden)

    Lu Ping-ping

    2014-06-01

    Full Text Available Road network extraction from SAR images is one of the key tasks for military and civilian applications. To address the problems of road extraction from HJ-1-C SAR images, a road extraction algorithm is proposed based on the fusion of ratio and directional information. Because of the narrow dynamic range and low signal-to-noise ratio characteristic of HJ-1-C SAR images, a nonlinear quantization and an image filtering method based on a multi-scale autoregressive model are proposed here. A road extraction algorithm based on information fusion, which considers both ratio and direction information, is also proposed. By applying the Radon transform, the main road directions can be extracted. Cross interference can be suppressed, and road continuity can then be improved by main-direction alignment and secondary road extraction. An HJ-1-C SAR image acquired over Wuhan, China was used to evaluate the proposed method. The experimental results show good performance, with correctness of 80.5% and quality of 70.1%, when applied to a SAR image with complex content.
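The main-direction step can be sketched with a coarse Radon-style search: rotate a binarized road-candidate image and find the angle whose column-sum projection is most concentrated. This is a simplified stand-in for the paper's Radon transform processing, with an assumed 15° angular step:

```python
import numpy as np
from scipy.ndimage import rotate

def main_direction(binary, angles=range(0, 180, 15)):
    """Estimate the dominant road direction (degrees).

    A line aligned with the image columns concentrates its projection
    (column sums) into few bins, maximizing the projection variance;
    we search over rotations for that alignment.
    """
    best, best_var = 0, -1.0
    for a in angles:
        proj = rotate(binary.astype(float), a,
                      reshape=False, order=1).sum(axis=0)
        v = proj.var()
        if v > best_var:
            best, best_var = a, v
    return best

img = np.zeros((31, 31))
img[:, 15] = 1.0                 # a vertical "road" through the center
assert main_direction(img) == 0  # already aligned with the columns
```

In the full algorithm, peaks of the Radon transform play this role directly, after which secondary roads are traced relative to the recovered main directions.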

  5. Far-infrared imaging arrays for fusion plasma density and magnetic field measurements

    International Nuclear Information System (INIS)

    Neikirk, D.P.; Rutledge, D.B.

    1982-01-01

    Far-infrared imaging detector arrays are required for the determination of density and local magnetic field in fusion plasmas. Analytic calculations point out the difficulties with simple printed slot and dipole antennas on ungrounded substrates for use in submillimeter wave imaging arrays because of trapped surface waves. This is followed by a discussion of the use of substrate-lens coupling to eliminate the associated trapped surface modes responsible for their poor performance. This integrates well with a modified bow-tie antenna and permits diffraction-limited imaging. Arrays using bismuth microbolometers have been successfully fabricated and tested at 1222μm and 119μm. A 100 channel pilot experiment designed for the UCLA Microtor tokamak is described. (author)

  6. Autofocus and fusion using nonlinear correlation

    Energy Technology Data Exchange (ETDEWEB)

    Cabazos-Marín, Alma Rocío [Departamento de Investigación en Física, Universidad de Sonora (UNISON), Luis Encinas y Rosales S/N, Col. Centro, Hermosillo, Sonora C.P. 83000 (Mexico); Álvarez-Borrego, Josué, E-mail: josue@cicese.mx [Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), División de Física Aplicada, Departamento de Óptica, Carretera Ensenada-Tijuana No. 3918, Fraccionamiento Zona Playitas, Ensenada (Mexico); Coronel-Beltrán, Ángel [Departamento de Investigación en Física, Universidad de Sonora (UNISON), Luis Encinas y Rosales S/N, Col. Centro, Hermosillo, Sonora C.P. 83000 (Mexico)

    2014-10-06

    In this work a new algorithm is proposed for autofocusing and fusion of images captured by a microscope's CCD. The proposed autofocus algorithm implements a spiral scan of each image f(x, y)_w in the stack to define the vector V_w. The spectrum FV_w of this vector is calculated by fast Fourier transform. The best in-focus image is determined by a focus measure obtained from the nonlinear correlation of FV_1, the vector of the reference image, with each of the other vectors FV_w in the stack. In addition, fusion is performed with the subset of selected images f(x, y)_SBF having the best focus measurements. Fusion creates a new improved image f(x, y)_F by selecting the pixels of higher intensity.

  7. A 2D Wigner Distribution-based multisize windows technique for image fusion

    Czech Academy of Sciences Publication Activity Database

    Redondo, R.; Fischer, S.; Šroubek, Filip; Cristóbal, G.

    2008-01-01

    Roč. 19, č. 1 (2008), s. 12-19 ISSN 1047-3203 R&D Projects: GA ČR GA102/04/0155; GA ČR GA202/05/0242 Grant - others:CSIC(CZ) 2004CZ0009 Institutional research plan: CEZ:AV0Z10750506 Keywords : Wigner distribution * image fusion * multifocus Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.342, year: 2008

  8. Clinical significance of MRI/18F-FDG PET fusion imaging of the spinal cord in patients with cervical compressive myelopathy

    International Nuclear Information System (INIS)

    Uchida, Kenzo; Nakajima, Hideaki; Watanabe, Shuji; Yoshida, Ai; Baba, Hisatoshi; Okazawa, Hidehiko; Kimura, Hirohiko; Kudo, Takashi

    2012-01-01

    18F-FDG PET is used to investigate the metabolic activity of neural tissue. MRI is used to visualize morphological changes, but the relationship between intramedullary signal changes and clinical outcome remains controversial. The present study was designed to evaluate the use of 3-D MRI/18F-FDG PET fusion imaging for relating intramedullary signal changes on MRI scans and the local glucose metabolic rate measured on 18F-FDG PET scans to clinical outcome and prognosis. We studied 24 patients undergoing decompressive surgery for cervical compressive myelopathy. All patients underwent 3-D MRI and 18F-FDG PET before surgery. Quantitative analysis of intramedullary signal changes on MRI scans included calculation of the signal intensity ratio (SIR) as the ratio between the increased lesional signal intensity and the signal intensity at the level of the C7/T1 disc. Using an Advantage workstation, the same slices of the cervical 3-D MRI and 18F-FDG PET images were fused. On the fused images, the maximal count of the lesion was adopted as the standardized uptake value (SUVmax). In a similar manner to the SIR, the SUV ratio (SUVR) was also calculated. Neurological assessment was conducted using the Japanese Orthopedic Association (JOA) scoring system for cervical myelopathy. The SIR on T1-weighted (T1-W) images, but not the SIR on T2-W images, was significantly correlated with preoperative JOA score and postoperative neurological improvement. Lesion SUVmax was significantly correlated with the SIR on T1-W images, but not with the SIR on T2-W images, and also with postoperative neurological outcome. The SUVR correlated better with neurological improvement than did the SIR on T1-W images or lesion SUVmax. Longer symptom duration was correlated negatively with the SIR on T1-W images, positively with the SIR on T2-W images, and negatively with SUVmax. Our results suggest that a low-intensity signal on T1-W images, but not on T2-W images, is correlated with a poor postoperative neurological outcome

  9. Multimodality imaging: transfer and fusion of SPECT and MRI data

    International Nuclear Information System (INIS)

    Knesaurek, K.

    1994-01-01

    Image fusion is a technique which offers the best of both worlds. It unites the two basic types of medical images: functional images (PET or SPECT scans), which provide physiological information, and structural images (CT or MRI), which provide an anatomic map of the body. A control-point based registration technique was developed and used. Tc-99m point sources were used as external markers in the SPECT studies, while for MRI and CT imaging only anatomic landmarks were used as control points. The MRI images were acquired on a GE Signa 1.2 system and the CT data on a GE 9800 scanner. SPECT studies were performed 1 h after intravenous injection of 740 MBq of Tc-99m-HMPAO on a triple-headed TRIONIX gamma camera. B-spline and bilinear interpolation were used for the rotation, scaling and translation of the images. In the process of creating a single composite image, in order to retain information from the individual images, the MRI (or CT) image was scaled to one color range and the SPECT image to another. In some situations the MRI image was kept black-and-white while the SPECT image was pasted on top of it in 'opaque' mode. Most errors that propagate through the matching process are due to sample size, imperfections of the acquisition system, noise and the interpolations used. The accuracy of the registration was investigated in a SPECT-CT phantom study. The results have shown that the accuracy of the matching process is better than, or at worst equal to, 2 mm. (author)
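Control-point registration of the kind described reduces, in its simplest form, to fitting a transform to matched marker pairs by least squares. A minimal 2D affine version is sketched below; the study's own pipeline additionally used B-spline and bilinear resampling, which this sketch does not reproduce:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping control points src -> dst.

    src, dst: (N, 2) arrays of matched landmarks (N >= 3).
    Returns a 2x3 matrix A such that dst ≈ A @ [x, y, 1].
    """
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])        # (N, 3) design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) coefficients
    return A.T                                   # (2, 3)

# Recover a known scaling + translation from 4 marker pairs.
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
dst = src * 2.0 + np.array([5.0, -3.0])
A = fit_affine(src, dst)
mapped = (A @ np.hstack([src, np.ones((4, 1))]).T).T
assert np.allclose(mapped, dst)
```

With more than the minimum number of landmarks, the least-squares fit averages out individual marker placement errors, which is one reason external point sources plus anatomic landmarks were combined.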

  10. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  11. Quantification of design margins and safety factors based on the prediction uncertainty in tritium production rate from fusion integral experiments of the USDOE/JAERI collaborative program on fusion blanket neutronics

    International Nuclear Information System (INIS)

    Youssef, M.Z.; Konno, C.; Maekawa, F.; Ikeda, Y.; Kosako, K.; Nakagawa, M.; Mori, T.; Maekawa, H.

    1995-01-01

    Several fusion integral experiments were performed within a collaboration between the USA and Japan on fusion breeder neutronics aimed at verifying the prediction accuracy of key neutronics parameters in a fusion reactor blanket based on current neutron transport codes and basic nuclear databases. The focus has been on the tritium production rate (TPR) as an important design parameter for resolving the issue of tritium self-sufficiency in a fusion reactor. In this paper, the calculational and experimental uncertainties (errors) in the local TPR in each experiment i were interpolated and propagated to estimate the prediction uncertainty u_i in the line-integrated TPR and its standard deviation σ_i. The measured data are based on Li-glass and NE213 detectors. From the quantities u_i and σ_i, normalized density functions (NDFs) were constructed, considering all the experiments and their associated analyses performed independently by UCLA and JAERI. Several statistical parameters were derived, including the mean prediction uncertainties ū and the possible spread ±σ_u around them. Design margins and safety factors were derived from these NDFs. A distinction was made between the results obtained by UCLA and JAERI and between calculational results based on the discrete ordinates and Monte Carlo methods. The prediction uncertainties, their standard deviations and the design margins and safety factors were derived for the line-integrated TPR from Li-6 (T_6) and Li-7 (T_7). These parameters were used to estimate the corresponding uncertainties and safety factors for the line-integrated TPR from natural lithium (T_n). (orig.)

  12. a Probability Model for Drought Prediction Using Fusion of Markov Chain and SAX Methods

    Science.gov (United States)

    Jouybari-Moghaddam, Y.; Saradjian, M. R.; Forati, A. M.

    2017-09-01

    Drought is one of the most destructive natural disasters and affects many aspects of the environment, most severely in arid and semi-arid areas. Monitoring and predicting the severity of drought can be useful in managing the natural disasters it causes. Many indices are used in predicting droughts, such as SPI, VCI, and TVX. In this paper, based on three data sets (rainfall, NDVI, and land surface temperature) acquired from MODIS satellite imagery, time series of SPI, VCI, and TVX covering winter 2000 to summer 2015 were created for the eastern region of Isfahan province. Using these indices and a fusion of symbolic aggregate approximation (SAX) and a hidden Markov chain, drought was predicted for fall 2015. For this purpose, each time series was first transformed into qualitative data based on the state of drought (five groups) using the SAX algorithm; the probability matrix for the future state was then created using the hidden Markov chain. The fall drought severity was predicted by fusing the probability matrix with the state of drought severity in summer 2015. The prediction is based on the likelihood of each drought state: severe drought, moderate drought, normal, moderate wet, and severe wet. The analysis and experimental results show that the output of the proposed algorithm is acceptable and that the proposed algorithm is appropriate and efficient for predicting drought from remote sensing data.
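The SAX-then-Markov pipeline described above can be sketched as follows. The drought-index series, the alphabet size of five, and the Laplace smoothing are illustrative assumptions, and a plain first-order Markov transition matrix stands in for the paper's fusion scheme.

```python
import numpy as np

# Illustrative drought-index series (e.g. an SPI-like signal); values are
# synthetic, not the Isfahan data.
rng = np.random.default_rng(0)
spi = np.cumsum(rng.normal(0, 0.5, 64))

# SAX step: z-normalize, then discretize into 5 states using equiprobable
# Gaussian breakpoints (alphabet size 5).
z = (spi - spi.mean()) / spi.std()
breakpoints = [-0.84, -0.25, 0.25, 0.84]   # N(0,1) quintile boundaries
states = np.digitize(z, breakpoints)       # 0..4: severe drought .. severe wet

# Markov step: estimate the 5x5 transition probability matrix from the
# symbolic sequence (Laplace smoothing avoids empty rows).
K = 5
counts = np.ones((K, K))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Prediction: probability distribution over the next season's drought state,
# conditioned on the current (last observed) state.
next_state_probs = P[states[-1]]
print("most likely next state:", int(next_state_probs.argmax()))
```

Each row of `P` is a conditional distribution, so fusing it with the current-state observation amounts to selecting (or weighting) the corresponding row.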

  13. Imaging of dihydrofolate reductase fusion gene expression in xenografts of human liver metastases of colorectal cancer in living rats

    Energy Technology Data Exchange (ETDEWEB)

    Mayer-Kuckuk, Philipp; Bertino, Joseph R.; Banerjee, Debabrata [Molecular Pharmacology and Therapeutics Program, Memorial Sloan-Kettering Cancer Center, New York, NY (United States); The Cancer Institute of New Jersey, Robert Wood Johnson Medical School/UMDNJ, 195 Little Albany Street, NJ 08903, New Brunswick (United States); Doubrovin, Mikhail; Blasberg, Ronald; Tjuvajev, Juri Gelovani [Department of Neurooncology, Memorial Sloan-Kettering Cancer Center, New York, NY (United States); Gusani, Niraj J.; Fong, Yuman [Department of Surgery, Memorial Sloan-Kettering Cancer Center, New York, NY (United States); Gade, Terence; Koutcher, Jason A. [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY (United States); Balatoni, Julius; Finn, Ronald [Radiochemistry/Cyclotron Core Facility, Memorial Sloan-Kettering Cancer Center, New York, NY (United States); Akhurst, Tim; Larson, Steven [Nuclear Medicine Service, Memorial Sloan-Kettering Cancer Center, New York, NY (United States)

    2003-09-01

    Radionuclide imaging has been demonstrated to be feasible to monitor transgene expression in vivo. We hypothesized that a potential application of this technique is to non-invasively detect in deep tissue, such as cancer cells metastatic to the liver, a specific molecular response following systemic drug treatment. Utilizing human colon adenocarcinoma cells derived from a patient's liver lesion we first developed a nude rat xenograft model for colorectal cancer metastatic to the liver. Expression of a dihydrofolate reductase-herpes simplex virus 1 thymidine kinase fusion (DHFR-HSV1 TK) transgene in the hepatic tumors was monitored in individual animals using the tracer [{sup 124}I]2'-fluoro-2'-deoxy-5-iodouracil-{beta}-d-arabinofuranoside (FIAU) and a small animal micro positron emission tomograph (microPET), while groups of rats were imaged using the tracer [{sup 131}I]FIAU and a clinical gamma camera. Growth of the human metastatic colorectal cancer cells in the rat liver was detected using magnetic resonance imaging and confirmed by surgical inspection. Single as well as multiple lesions of different sizes and sites were observed in the liver of the animals. Next, using a subset of rats bearing hepatic tumors, which were retrovirally bulk transduced to express the DHFR-HSV1 TK transgene, we imaged the fusion protein expression in the hepatic tumor of living rats using the tracer [{sup 124}I]FIAU and a microPET. The observed deep tissue signals were highly specific for the tumors expressing the DHFR-HSV1 TK fusion protein compared with parental untransduced tumors and other tissues as determined by gamma counting of tissue samples. A subsequent study used the tracer [{sup 131}I]FIAU and a gamma camera to monitor two groups of transduced hepatic tumor-bearing rats. Prior to imaging, one group was treated with trimetrexate to exploit DHFR-mediated upregulation of the fusion gene product. Imaging in the living animal as well as subsequent gamma

  14. Image fusion in open-architecture quality-oriented nuclear medicine and radiology departments

    Energy Technology Data Exchange (ETDEWEB)

    Pohjonen, H

    1997-12-31

    Imaging examinations of patients belong to the most widely used diagnostic procedures in hospitals. Multimodal digital imaging is becoming increasingly common in many fields of diagnosis and therapy planning. Patients are frequently examined with magnetic resonance imaging (MRI), X-ray computed tomography (CT) or ultrasound imaging (US) in addition to single photon (SPET) or positron emission tomography (PET). The aim of the study was to provide means for improving the quality of the whole imaging and viewing chain in nuclear medicine and radiology. The specific aims were: (1) to construct and test a model for a quality assurance system in radiology based on ISO standards, (2) to plan a DICOM-based image network for fusion purposes using ATM and Ethernet technologies, (3) to test different segmentation methods in quantitative SPET, (4) to study and implement a registration and visualisation method for multimodal imaging, (5) to apply the developed method in selected clinical brain and abdominal images, and (6) to investigate the accuracy of the registration procedure for brain SPET and MRI. 90 refs. The thesis also includes six previous publications by the author.

  15. Image fusion in open-architecture quality-oriented nuclear medicine and radiology departments

    Energy Technology Data Exchange (ETDEWEB)

    Pohjonen, H

    1998-12-31

    Imaging examinations of patients belong to the most widely used diagnostic procedures in hospitals. Multimodal digital imaging is becoming increasingly common in many fields of diagnosis and therapy planning. Patients are frequently examined with magnetic resonance imaging (MRI), X-ray computed tomography (CT) or ultrasound imaging (US) in addition to single photon (SPET) or positron emission tomography (PET). The aim of the study was to provide means for improving the quality of the whole imaging and viewing chain in nuclear medicine and radiology. The specific aims were: (1) to construct and test a model for a quality assurance system in radiology based on ISO standards, (2) to plan a DICOM-based image network for fusion purposes using ATM and Ethernet technologies, (3) to test different segmentation methods in quantitative SPET, (4) to study and implement a registration and visualisation method for multimodal imaging, (5) to apply the developed method in selected clinical brain and abdominal images, and (6) to investigate the accuracy of the registration procedure for brain SPET and MRI. 90 refs. The thesis also includes six previous publications by the author.

  16. Image fusion in open-architecture quality-oriented nuclear medicine and radiology departments

    International Nuclear Information System (INIS)

    Pohjonen, H.

    1997-01-01

    Imaging examinations of patients belong to the most widely used diagnostic procedures in hospitals. Multimodal digital imaging is becoming increasingly common in many fields of diagnosis and therapy planning. Patients are frequently examined with magnetic resonance imaging (MRI), X-ray computed tomography (CT) or ultrasound imaging (US) in addition to single photon (SPET) or positron emission tomography (PET). The aim of the study was to provide means for improving the quality of the whole imaging and viewing chain in nuclear medicine and radiology. The specific aims were: (1) to construct and test a model for a quality assurance system in radiology based on ISO standards, (2) to plan a DICOM-based image network for fusion purposes using ATM and Ethernet technologies, (3) to test different segmentation methods in quantitative SPET, (4) to study and implement a registration and visualisation method for multimodal imaging, (5) to apply the developed method in selected clinical brain and abdominal images, and (6) to investigate the accuracy of the registration procedure for brain SPET and MRI.

  17. Predictive images of postoperative levator resection outcome using image processing software

    Directory of Open Access Journals (Sweden)

    Mawatari Y

    2016-09-01

    Full Text Available Yuki Mawatari,1 Mikiko Fukushima2 1Igo Ophthalmic Clinic, Kagoshima, 2Department of Ophthalmology, Faculty of Life Science, Kumamoto University, Chuo-ku, Kumamoto, Japan Purpose: This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods: The analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of the levator aponeurosis and Müller’s muscle complex (levator resection). Predictive images were prepared from preoperative photographs using image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results: Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) responded positively regarding the usefulness of processed images to predict postoperative appearance. Conclusion: Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. Keywords: levator resection, blepharoptosis, image processing, Adobe Photoshop®

  18. Picosecond imaging of inertial confinement fusion plasmas using electron pulse-dilation

    Science.gov (United States)

    Hilsabeck, T. J.; Nagel, S. R.; Hares, J. D.; Kilkenny, J. D.; Bell, P. M.; Bradley, D. K.; Dymoke-Bradshaw, A. K. L.; Piston, K.; Chung, T. M.

    2017-02-01

    Laser driven inertial confinement fusion (ICF) plasmas typically have burn durations on the order of 100 ps. Time resolved imaging of the x-ray self emission during the hot spot formation is an important diagnostic tool which gives information on implosion symmetry, transient features and stagnation time. Traditional x-ray gated imagers for ICF use microchannel plate detectors to obtain gate widths of 40-100 ps. The development of electron pulse-dilation imaging has enabled a 10X improvement in temporal resolution over legacy instruments. In this technique, the incoming x-ray image is converted to electrons at a photocathode. The electrons are accelerated with a time-varying potential that leads to temporal expansion as the electron signal transits the tube. This expanded signal is recorded with a gated detector and the effective temporal resolution of the composite system can be as low as several picoseconds. An instrument based on this principle, known as the Dilation X-ray Imager (DIXI) has been constructed and fielded at the National Ignition Facility. Design features and experimental results from DIXI will be presented.

  19. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Ming; Wang, Yanli, E-mail: ywang@ncbi.nlm.nih.gov; Bryant, Stephen H., E-mail: bryant@ncbi.nlm.nih.gov

    2016-02-25

    Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves state-of-the-art results with area under the precision–recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR), based on 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning and polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. - Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions. - Highlights: • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions. • Performance can be further improved by using the recalculated kernel. • Top predictions can be validated by experimental data.
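The regularized least squares step at the core of this family of methods can be sketched on toy data as below. The paper's nonlinear kernel fusion is replaced here by a simple weighted sum of two kernels, and all kernels, labels, and the weight are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30  # toy number of entities (e.g. drugs)

def random_psd_kernel():
    """Synthetic positive semi-definite similarity kernel, unit diagonal."""
    A = rng.normal(size=(n, n))
    K = A @ A.T
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

# Two stand-in similarity kernels (e.g. a chemical-structure kernel and a
# network-topology kernel).
K_chem, K_net = random_psd_kernel(), random_psd_kernel()

# Kernel fusion: a plain weighted combination as a minimal stand-in for the
# paper's nonlinear fusion scheme.
w = 0.5
K = w * K_chem + (1 - w) * K_net

# Regularized least squares: alpha = (K + lam*I)^{-1} y, scores = K @ alpha.
y = (rng.random(n) < 0.3).astype(float)   # synthetic binary interaction labels
lam = 1.0
alpha = np.linalg.solve(K + lam * np.eye(n), y)
scores = K @ alpha
print("top-ranked candidate index:", int(scores.argmax()))
```

The regularizer `lam` trades fit against smoothness over the kernel-induced similarity graph; candidate interactions are then ranked by `scores`.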

  20. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique

    International Nuclear Information System (INIS)

    Hao, Ming; Wang, Yanli; Bryant, Stephen H.

    2016-01-01

    Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves state-of-the-art results with area under the precision–recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR), based on 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning and polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. - Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions. - Highlights: • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions. • Performance can be further improved by using the recalculated kernel. • Top predictions can be validated by experimental data.

  1. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, with the least-squares estimation method employed to estimate the model parameters. The given input query image is segmented into various regions according to the structure of the image. The color and texture features are extracted from each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is fitted to the features, form a vector called the feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images. The F-measure is applied to evaluate the performance of the proposed technique. The obtained results show that the proposed technique is comparable to other existing techniques.
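Under one plausible reading of the method, the least-squares regression coefficients fitted to an image's region features serve as its feature vector, and the Canberra distance compares vectors. The sketch below uses synthetic region features and hypothetical dimensions; it is not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def feature_vector(region_features):
    """Fuse per-region color/texture features with a multiple linear
    regression model: regress the last feature column on the others and
    return the least-squares parameter estimates as the feature vector."""
    X, y = region_features[:, :-1], region_features[:, -1]
    X = np.column_stack([np.ones(len(X)), X])  # intercept term
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def canberra(a, b, eps=1e-12):
    """Canberra distance between two feature vectors."""
    return np.sum(np.abs(a - b) / (np.abs(a) + np.abs(b) + eps))

# Hypothetical data: 12 regions x 5 features per image.
query  = rng.normal(size=(12, 5))
target = query + rng.normal(0, 0.05, size=(12, 5))  # near-duplicate scene
other  = rng.normal(size=(12, 5))                   # unrelated scene

d_close = canberra(feature_vector(query), feature_vector(target))
d_far   = canberra(feature_vector(query), feature_vector(other))
print(f"distance to near-duplicate: {d_close:.3f}, to unrelated: {d_far:.3f}")
```

The Canberra distance normalizes each coordinate's difference by the coordinates' magnitudes, which makes it sensitive to relative rather than absolute changes in the fitted parameters.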

  2. Advanced data visualization and sensor fusion: Conversion of techniques from medical imaging to Earth science

    Science.gov (United States)

    Savage, Richard C.; Chen, Chin-Tu; Pelizzari, Charles; Ramanathan, Veerabhadran

    1993-01-01

    Hughes Aircraft Company and the University of Chicago propose to transfer existing medical imaging registration algorithms to the area of multi-sensor data fusion. The University of Chicago's algorithms have been successfully demonstrated to provide pixel-by-pixel comparison capability for medical sensors with different characteristics. The research will attempt to fuse GOES (Geostationary Operational Environmental Satellite), AVHRR (Advanced Very High Resolution Radiometer), and SSM/I (Special Sensor Microwave Imager) sensor data, which will benefit a wide range of researchers. The algorithms will utilize data visualization and algorithm development tools created by Hughes in its EOSDIS (Earth Observing System Data and Information System) prototyping. This will maximize the work on the fusion algorithms since support software (e.g., input/output routines) will already exist. The research will produce a portable software library with documentation for use by other researchers.

  3. Content-Based High-Resolution Remote Sensing Image Retrieval via Unsupervised Feature Learning and Collaborative Affinity Metric Fusion

    Directory of Open Access Journals (Sweden)

    Yansheng Li

    2016-08-01

    Full Text Available With the urgent demand for automatic management of large numbers of high-resolution remote sensing images, content-based high-resolution remote sensing image retrieval (CB-HRRS-IR) has attracted much research interest. Accordingly, this paper proposes a novel high-resolution remote sensing image retrieval approach via multiple feature representation and collaborative affinity metric fusion (IRMFRCAMF). In IRMFRCAMF, we design four unsupervised convolutional neural networks with different layers to generate four types of unsupervised features from the fine level to the coarse level. In addition to these four types of unsupervised features, we also implement four traditional feature descriptors, including local binary pattern (LBP), gray-level co-occurrence matrix (GLCM), maximal response 8 (MR8), and scale-invariant feature transform (SIFT). In order to fully incorporate the complementary information among multiple features of one image and the mutual information across auxiliary images in the image dataset, this paper advocates collaborative affinity metric fusion to measure the similarity between images. The performance evaluation of high-resolution remote sensing image retrieval is implemented on two public datasets, the UC Merced (UCM) dataset and the Wuhan University (WH) dataset. A large number of experiments show that our proposed IRMFRCAMF can significantly outperform state-of-the-art approaches.

  4. Image classification using multiscale information fusion based on saliency driven nonlinear diffusion filtering.

    Science.gov (United States)

    Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen

    2014-04-01

    In this paper, we propose saliency driven image multiscale nonlinear diffusion filtering. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smoothes clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for the image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.
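A classic Perona-Malik diffusion step conveys the idea behind the edge-preserving scale space used above: homogeneous regions are smoothed while strong gradients (edges) block diffusion. The saliency-driven modulation described in the paper is omitted here, and all parameters and the test image are illustrative.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Nonlinear (anisotropic) diffusion: smooths flat regions while
    preserving edges. The paper's variant additionally modulates the
    conductance with a saliency map, which is not modeled here."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # conductance: small across edges
    for _ in range(n_iter):
        # finite differences to the four neighbors (zero-flux borders)
        dN = np.roll(u, -1, 0) - u; dN[-1] = 0
        dS = np.roll(u,  1, 0) - u; dS[0]  = 0
        dE = np.roll(u, -1, 1) - u; dE[:, -1] = 0
        dW = np.roll(u,  1, 1) - u; dW[:, 0]  = 0
        u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

# Demo: a noisy step edge; diffusion suppresses the noise but keeps the edge.
rng = np.random.default_rng(3)
img = np.zeros((32, 32)); img[:, 16:] = 1.0
noisy = img + rng.normal(0, 0.05, img.shape)
smooth = perona_malik(noisy)
```

The conductance parameter `kappa` sets the gradient magnitude above which diffusion is inhibited, which is exactly what preserves edges and flow-like structures across scales.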

  5. The ties that bind: genetic relatedness predicts the fission and fusion of social groups in wild African elephants.

    Science.gov (United States)

    Archie, Elizabeth A; Moss, Cynthia J; Alberts, Susan C

    2006-03-07

    Many social animals live in stable groups. In contrast, African savannah elephants (Loxodonta africana) live in unusually fluid, fission-fusion societies. That is, 'core' social groups are composed of predictable sets of individuals; however, over the course of hours or days, these groups may temporarily divide and reunite, or they may fuse with other social groups to form much larger social units. Here, we test the hypothesis that genetic relatedness predicts patterns of group fission and fusion among wild, female African elephants. Our study of a single Kenyan population spans 236 individuals in 45 core social groups, genotyped at 11 microsatellite and one mitochondrial DNA (mtDNA) locus. We found that genetic relatedness predicted group fission; adult females remained with their first order maternal relatives when core groups fissioned temporarily. Relatedness also predicted temporary fusion between social groups; core groups were more likely to fuse with each other when the oldest females in each group were genetic relatives. Groups that shared mtDNA haplotypes were also significantly more likely to fuse than groups that did not share mtDNA. Our results suggest that associations between core social groups persist for decades after the original maternal kin have died. We discuss these results in the context of kin selection and its possible role in the evolution of elephant sociality.

  6. Quicksilver: Fast predictive image registration - A deep learning approach.

    Science.gov (United States)

    Yang, Xiao; Kwitt, Roland; Styner, Martin; Niethammer, Marc

    2017-09-01

    This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image-pairs works by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network which can be sampled during the testing time to calculate uncertainties in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as an open-source software. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Mapping Daily Evapotranspiration based on Spatiotemporal Fusion of ASTER and MODIS Images over Irrigated Agricultural Areas in the Heihe River Basin, Northwest China

    Science.gov (United States)

    Huang, C.; LI, Y.

    2017-12-01

    Continuous monitoring of daily evapotranspiration (ET) is crucial for allocating and managing water resources in irrigated agricultural areas in arid regions. In this study, continuous daily ET at a 90-m spatial resolution was estimated using the Surface Energy Balance System (SEBS) by fusing Moderate Resolution Imaging Spectroradiometer (MODIS) images with high temporal resolution and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images with high spatial resolution. The spatiotemporal characteristics of these sensors were combined using the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM). The performance of this approach was validated over a heterogeneous oasis-desert region covered by cropland, residential, woodland, water, Gobi desert, sandy desert, desert steppe, and wetland areas, using in situ observations from automatic meteorological systems (AMS) and eddy covariance (EC) systems in the middle reaches of the Heihe River Basin in Northwest China. The error introduced during the STARFM-based data fusion process is within an acceptable range for predicted LST at a 90-m spatial resolution. The surface energy fluxes estimated using SEBS from the fused data, which combine the spatiotemporal characteristics of MODIS and ASTER, agree well with the surface energy fluxes observed using EC systems for all land cover types, especially for vegetated areas, with MAP values ranging from 9% to 15%, less than the uncertainty (18%) of the observations in this study area. Time series of daily ET modelled with SEBS were compared to those modelled with PT-JPL (a satellite-based Priestley-Taylor ET model) and to observations from EC systems. SEBS generally performed better than PT-JPL for vegetated areas, especially irrigated cropland, with bias, RMSE, and MAP values of 0.29 mm/d, 0.75 mm/d, and 13% at the maize site, and -0.33 mm/d, 0.81 mm/d, and 14% at the vegetable sites.

  8. Dual modality CT/PET imaging in lung cancer staging

    International Nuclear Information System (INIS)

    Diaz, Gabriel A.

    2005-01-01

    Purpose: To compare the diagnostic capability of PET-HCT image fusion and helical computed tomography (HCT) for nodal and distant metastasis detection in patients with lung cancer. Material and methods: Between February 2003 and March 2004, sixty-six consecutive lung cancer patients (45 men and 21 women, mean age: 63 years, range: 38 to 96 years) who underwent HCT and PET-HCT fusion imaging were evaluated retrospectively. All patients had histological confirmation of lung cancer and a definitive diagnosis established on the basis of pathology results and/or clinical follow-up. Results: For global nodal staging (hilar and mediastinal), HCT showed a sensitivity, specificity, positive predictive value and negative predictive value of 72%, 47%, 62% and 58%, respectively, versus 94%, 77%, 83% and 92% for the PET-HCT examination. For assessment of advanced nodal stage (N3), PET-HCT showed values of 92%, 100%, 100% and 98%, respectively. For detection of distant metastasis, HCT alone had values of 67%, 93%, 84% and 83%, respectively, versus 100%, 98%, 96% and 100% for PET-HCT fusion imaging. In 20 (30%) patients under-staged or over-staged on the basis of HCT results, PET-HCT allowed accurate staging. Conclusions: PET-HCT fusion imaging was more effective than HCT alone for nodal and distant metastasis detection and oncologic staging. (author)
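For reference, the four staging metrics quoted above all follow from a 2x2 confusion matrix. The counts below are hypothetical, chosen only so the outputs roughly echo the PET-HCT nodal staging figures; they are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, positive and negative predictive value
    from true/false positive and negative counts."""
    sens = tp / (tp + fn)  # fraction of diseased correctly detected
    spec = tn / (tn + fp)  # fraction of non-diseased correctly cleared
    ppv  = tp / (tp + fp)  # probability disease is present given a positive
    npv  = tn / (tn + fn)  # probability disease is absent given a negative
    return sens, spec, ppv, npv

# Hypothetical counts: 33 TP, 7 FP, 2 FN, 24 TN.
sens, spec, ppv, npv = diagnostic_metrics(33, 7, 2, 24)
print(f"sens={sens:.0%} spec={spec:.0%} ppv={ppv:.0%} npv={npv:.0%}")
```

Note that, unlike sensitivity and specificity, the predictive values depend on disease prevalence in the studied cohort, which is why they differ between the nodal and distant-metastasis analyses.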

  9. Net-based data transfer and automatic image fusion of metabolic (PET) and morphologic (CT/MRI) images for radiosurgical planning of brain tumors

    International Nuclear Information System (INIS)

    Baum, R.P.; Przetak, C.; Schmuecking, M.; Klener, G.; Surber, G.; Hamm, K.

    2002-01-01

    Aim: The main purpose of radiosurgery, in comparison to conventional radiotherapy of brain tumors, is to deliver a higher radiation dose to the tumor while sparing normal brain tissue as much as possible. To reach this aim it is crucial to define the target volume extremely accurately. For this purpose, MRI and CT examinations are used for radiotherapy planning. In certain cases, however, metabolic information obtained by positron emission tomography (PET) may be useful to achieve higher therapeutic accuracy by sparing important brain structures. This can be the case, e.g., in low-grade astrocytomas for exact delineation of vital tumor, as well as in differentiating scar tissue from tumor recurrence and edema after operation. For this purpose, radiolabeled amino acid analogues (e.g. C-11 methionine) and, more recently, O-(2-[18F]fluoroethyl)-L-tyrosine (F-18 FET) have been introduced as PET tracers to detect the area of highest tumor metabolism, providing additional information compared to FDG-PET, which reflects local glucose metabolism. In these cases, anatomical and metabolic data have to be combined with the technique of digital image fusion to exactly determine the target volume, the isodoses, and the area where the highest dose has to be applied. Materials: We have set up a data transfer from the PET Center of the Zentralklinik Bad Berka to the Department of Stereotactic Radiation at the Helios Klinik Erfurt (distance approx. 25 km) to enable this kind of image fusion. PET data (ECAT EXACT 47, Siemens/CTI) are transferred to a workstation (NOVALIS) in the Dept. of Stereotactic Radiation to be co-registered with the CT or MRI data of the patient. All PET images are in DICOM format (obtained by using a HERMES computer, Nuclear Diagnostics, Sweden) and can easily be imported into the NOVALIS workstation. The software uses the optimization of mutual information to achieve good fusion quality. Sometimes manual corrections have to be performed to get an
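The mutual-information criterion that such registration software optimizes can be sketched as a joint-histogram computation: two images of the same anatomy share structure even when their intensity scales differ, so their mutual information is high when they are aligned. The images and the intensity remapping below are synthetic assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two images, estimated from
    their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                 # joint distribution
    px = p.sum(axis=1, keepdims=True)     # marginal of a
    py = p.sum(axis=0, keepdims=True)     # marginal of b
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(4)
img = rng.random((64, 64))
# An intensity remapping of the same image (a crude "other modality")
# shares structure with it; an unrelated image does not.
remapped = 1.0 - img ** 2
unrelated = rng.random((64, 64))

mi_same = mutual_information(img, remapped)
mi_diff = mutual_information(img, unrelated)
print(f"MI shared structure: {mi_same:.2f}, MI unrelated: {mi_diff:.2f}")
```

A rigid-registration loop would repeatedly transform one image and keep the transform that maximizes this score; the manual corrections mentioned above correspond to nudging that transform when the optimizer settles in a poor local maximum.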

  10. Advancing of Land Surface Temperature Retrieval Using Extreme Learning Machine and Spatio-Temporal Adaptive Data Fusion Algorithm

    Directory of Open Access Journals (Sweden)

    Yang Bai

    2015-04-01

    Full Text Available As a critical variable for characterizing biophysical processes in the ecological environment, and as a key indicator in the surface energy balance, evapotranspiration and urban heat islands, Land Surface Temperature (LST) retrieved from Thermal Infra-Red (TIR) images at both high temporal and high spatial resolution is urgently needed. However, due to the limitations of existing satellite sensors, no Earth observation can obtain TIR data at detailed spatial and temporal resolution simultaneously. Thus, several attempts at image fusion, blending TIR data from a high temporal resolution sensor with data from a high spatial resolution sensor, have been studied. This paper presents a novel data fusion method, integrating image fusion and spatio-temporal fusion techniques, for deriving LST datasets at 30 m spatial resolution from daily MODIS images and Landsat ETM+ images. The Landsat ETM+ TIR data were first enhanced from 60 m to 30 m resolution based on the extreme learning machine (ELM) algorithm, using a neural network regression model. Then, the MODIS LST and enhanced Landsat ETM+ TIR data were fused by the Spatio-temporal Adaptive Data Fusion Algorithm for Temperature mapping (SADFAT) in order to derive high-resolution synthetic data. The synthetic images were evaluated for both test and simulated satellite images. The average difference (AD) and absolute average difference (AAD) are smaller than 1.7 K, while the correlation coefficient (CC) and root-mean-square error (RMSE) are 0.755 and 1.824, respectively, showing that the proposed method enhances the spatial resolution of the predicted LST images while preserving the spectral information.
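A bare-bones extreme learning machine regressor illustrates the "random hidden weights plus least-squares output layer" idea used for the TIR-enhancement step above. The data, dimensions, and hidden-layer size are synthetic stand-ins, not the Landsat/MODIS bands.

```python
import numpy as np

rng = np.random.default_rng(5)

def elm_fit(X, y, n_hidden=50):
    """Extreme learning machine regression: random, untrained input
    weights; tanh hidden layer; output weights solved by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy analogue of the enhancement step: learn a nonlinear mapping from
# coarse-band predictors to a target band (synthetic smooth function).
X = rng.random((200, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2]
W, b, beta = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

Because only the output weights are fitted, training reduces to a single linear least-squares solve, which is what makes ELMs attractive for large raster datasets.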

  11. Fusion Simulation Program

    International Nuclear Information System (INIS)

    Greenwald, Martin

    2011-01-01

    Many others in the fusion energy and advanced scientific computing communities participated in the development of this plan. The core planning team is grateful for their important contributions. This summary is meant as a quick overview of the Fusion Simulation Program's (FSP's) purpose and intentions. There are several additional documents referenced within this one, and all are supplemental to or flow down from this Program Plan. The overall science goal of the DOE Office of Fusion Energy Sciences (FES) Fusion Simulation Program (FSP) is to develop predictive simulation capability for magnetically confined fusion plasmas at an unprecedented level of integration and fidelity. This will directly support and enable effective U.S. participation in International Thermonuclear Experimental Reactor (ITER) research and the overall mission of delivering practical fusion energy. The FSP will address a rich set of scientific issues together with experimental programs, producing validated integrated physics results. This is very well aligned with the mission of the ITER Organization to coordinate with its members the integrated modeling and control of fusion plasmas, including benchmarking and validation activities (1). Initial FSP research will focus on two critical Integrated Science Application (ISA) areas: ISA1, the plasma edge; and ISA2, whole device modeling (WDM), including disruption avoidance. The first of these problems involves the narrow plasma boundary layer and its complex interactions with the plasma core and the surrounding material wall. The second requires development of a computationally tractable but comprehensive model that describes all equilibrium and dynamic processes at a sufficient level of detail to provide useful prediction of the temporal evolution of fusion plasma experiments. The initial driver for the whole device model will be prediction and avoidance of discharge-terminating disruptions, especially at high performance, which are a critical

  12. Development of a novel fluorescent imaging probe for tumor hypoxia by use of a fusion protein with oxygen-dependent degradation domain of HIF-1α

    Science.gov (United States)

    Tanaka, Shotaro; Kizaka-Kondoh, Shinae; Harada, Hiroshi; Hiraoka, Masahiro

    2007-02-01

    More malignant tumors contain more hypoxic regions. In hypoxic tumor cells, expression of a series of hypoxia-responsive genes related to malignant phenotypes such as angiogenesis and metastasis is induced. Hypoxia-inducible factor-1 (HIF-1) is a master transcriptional activator of such genes, and thus imaging of hypoxic tumor cells, where HIF-1 is active, is important in cancer therapy. We have been developing PTD-ODD fusion proteins, which contain a protein transduction domain (PTD) and the VHL-mediated protein destruction motif in the oxygen-dependent degradation (ODD) domain of the HIF-1 alpha subunit (HIF-1α). PTD-ODD fusion proteins can thus be delivered to any tissue in vivo through the PTD function and specifically stabilized in hypoxic cells through the ODD function. To investigate whether PTD-ODD fusion proteins can be applied to construct hypoxia-specific imaging probes, we first constructed a fluorescent probe, because optical imaging enables us to evaluate a probe easily, quickly, and economically in a small animal. We constructed a model fusion protein, PTD-ODD-EGFP-Cy5.5, named POEC, which is a PTD-ODD protein fused with EGFP for in vitro imaging and stabilization of the fusion protein, conjugated with the near-infrared dye Cy5.5. This probe is designed to be degraded in normoxic cells through the function of the ODD domain, followed by quick clearance of the free fluorescent dye. In contrast, the probe is stabilized in hypoxic tumor cells, and thus the dye remains in the cells. The difference in the clearance rate of the dye between normoxic and hypoxic conditions provides contrast suited to tumor-hypoxia imaging. The optical imaging probe has not yet been optimized, but the results presented here demonstrate the potential of PTD-ODD fusion proteins as hypoxia-specific imaging probes.

  13. An image acquisition and registration strategy for the fusion of hyperpolarized helium-3 MRI and x-ray CT images of the lung

    Science.gov (United States)

    Ireland, Rob H.; Woodhouse, Neil; Hoggard, Nigel; Swinscoe, James A.; Foran, Bernadette H.; Hatton, Matthew Q.; Wild, Jim M.

    2008-11-01

    The purpose of this ethics committee approved prospective study was to evaluate an image acquisition and registration protocol for hyperpolarized helium-3 magnetic resonance imaging (3He-MRI) and x-ray computed tomography. Nine patients with non-small cell lung cancer (NSCLC) gave written informed consent to undergo a free-breathing CT, an inspiration breath-hold CT and a 3D ventilation 3He-MRI in CT position using an elliptical birdcage radiofrequency (RF) body coil. 3He-MRI to CT image fusion was performed using a rigid registration algorithm which was assessed by two observers using anatomical landmarks and a percentage volume overlap coefficient. Registration of 3He-MRI to breath-hold CT was more accurate than to free-breathing CT; overlap 82.9 ± 4.2% versus 59.8 ± 9.0% (p < 0.001) and mean landmark error 0.75 ± 0.24 cm versus 1.25 ± 0.60 cm (p = 0.002). Image registration is significantly improved by using an imaging protocol that enables both 3He-MRI and CT to be acquired with similar breath holds and body position through the use of a birdcage 3He-MRI body RF coil and an inspiration breath-hold CT. Fusion of 3He-MRI to CT may be useful for the assessment of patients with lung diseases.
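    The percentage volume overlap coefficient used to score the registration can be illustrated with a short sketch. This is a minimal, hypothetical implementation assuming an intersection-over-union definition on binary lung masks; the paper's exact coefficient may be defined differently.

```python
import numpy as np

def volume_overlap(mask_a, mask_b):
    """Percentage volume overlap of two binary segmentation masks,
    taken here as 100 * |A intersect B| / |A union B| (one common
    definition; assumed, not the paper's stated formula)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 100.0 * inter / union if union else 100.0
```

On perfectly aligned masks the coefficient is 100%; residual misregistration between the MRI and CT lung volumes lowers it, which is how the breath-hold and free-breathing protocols were compared.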

  14. Three-way (N-way) fusion of brain imaging data based on mCCA+jICA and its application to discriminating schizophrenia

    NARCIS (Netherlands)

    J. Sui (Jing); H. He (Hao); G. Pearlson (Godfrey); T. Adali (Tülay); K.A. Kiehl (Kent ); Q. Yu (Qingbao); V.P. Clark; E. Castro (Elena); T.J.H. White (Tonya); B.A. Mueller (Bryon ); B.C. Ho (Beng ); N.C. Andreasen; V.D. Calhoun (Vince)

    2013-01-01

    Multimodal fusion is an effective approach to better understand brain diseases. However, most such instances have been limited to pair-wise fusion; because there are often more than two imaging modalities available per subject, there is a need for approaches that can combine multiple

  15. Data Fusion and Fuzzy Clustering on Ratio Images for Change Detection in Synthetic Aperture Radar Images

    Directory of Open Access Journals (Sweden)

    Wenping Ma

    2014-01-01

    Full Text Available The unsupervised approach to change detection via synthetic aperture radar (SAR) images is becoming more and more popular. The three-step procedure is the most widely used, but it does not work well with the Yellow River Estuary dataset, which was obtained by two different synthetic aperture radars. The difference between the two radars' imaging techniques causes severe noise, which seriously affects the difference image generated by a single change detector in step two. To deal with this problem, we propose a change detector that fuses the log-ratio (LR) and mean-ratio (MR) images with a context-independent variable behavior (CIVB) operator and can thus exploit the complementary information in the two ratio images. To validate the effectiveness of the proposed change detector, it is compared with three other operators, namely the log-ratio (LR), mean-ratio (MR), and wavelet-fusion (WR) operators, on three datasets with different characteristics. The four operators are applied not only in the widely used three-step procedure but also in a new approach. The experiments show that the false alarms and overall errors of change detection are greatly reduced, and the kappa and KCC values improve considerably. The superiority of the proposed detector can also be observed visually.
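    The log-ratio and mean-ratio operators that the detector fuses are standard in SAR change detection. A minimal sketch follows; the simple weighted average stands in for the CIVB fusion operator, whose exact form is not given here, and the window size and weight are illustrative assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def log_ratio(x1, x2, eps=1e-6):
    """Log-ratio difference image |log(X2/X1)|, robust to multiplicative speckle."""
    return np.abs(np.log((x2 + eps) / (x1 + eps)))

def mean_ratio(x1, x2, size=3, eps=1e-6):
    """Mean-ratio difference image 1 - min(m1/m2, m2/m1) on local-mean images."""
    def local_mean(img):
        p = np.pad(img, size // 2, mode='edge')  # edge-pad so output keeps shape
        return sliding_window_view(p, (size, size)).mean(axis=(2, 3))
    m1, m2 = local_mean(x1) + eps, local_mean(x2) + eps
    return 1.0 - np.minimum(m1 / m2, m2 / m1)

def fuse(lr, mr, w=0.5):
    """Pixel-wise weighted fusion of the two ratio images (stand-in for CIVB)."""
    return w * lr + (1.0 - w) * mr
```

The LR image emphasizes strong relative changes while the MR image is bounded in [0, 1) and less sensitive to speckle extremes; fusing them is what lets the detector use their complementary behavior.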

  16. Image Fusion-Based Land Cover Change Detection Using Multi-Temporal High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Biao Wang

    2017-08-01

    Full Text Available Change detection is usually treated as a problem of explicitly detecting land cover transitions in satellite images obtained at different times, and helps with emergency response and government management. This study presents an unsupervised change detection method based on the image fusion of multi-temporal images. The main objective of this study is to improve the accuracy of unsupervised change detection from high-resolution multi-temporal images. Our method effectively reduces change detection errors, since spatial displacement and spectral differences between multi-temporal images are evaluated. To this end, a total of four cross-fused images are generated with multi-temporal images, and the iteratively reweighted multivariate alteration detection (IR-MAD method—a measure for the spectral distortion of change information—is applied to the fused images. In this experiment, the land cover change maps were extracted using multi-temporal IKONOS-2, WorldView-3, and GF-1 satellite images. The effectiveness of the proposed method compared with other unsupervised change detection methods is demonstrated through experimentation. The proposed method achieved an overall accuracy of 80.51% and 97.87% for cases 1 and 2, respectively. Moreover, the proposed method performed better when differentiating the water area from the vegetation area compared to the existing change detection methods. Although the water area beneath moderate and sparse vegetation canopy was captured, vegetation cover and paved regions of the water body were the main sources of omission error, and commission errors occurred primarily in pixels of mixed land use and along the water body edge. Nevertheless, the proposed method, in conjunction with high-resolution satellite imagery, offers a robust and flexible approach to land cover change mapping that requires no ancillary data for rapid implementation.

  17. Diagnostic Accuracy of Multiparametric Magnetic Resonance Imaging and Fusion Guided Targeted Biopsy Evaluated by Transperineal Template Saturation Prostate Biopsy for the Detection and Characterization of Prostate Cancer.

    Science.gov (United States)

    Mortezavi, Ashkan; Märzendorfer, Olivia; Donati, Olivio F; Rizzi, Gianluca; Rupp, Niels J; Wettstein, Marian S; Gross, Oliver; Sulser, Tullio; Hermanns, Thomas; Eberli, Daniel

    2018-02-21

    We evaluated the diagnostic accuracy of multiparametric magnetic resonance imaging and multiparametric magnetic resonance imaging/transrectal ultrasound fusion guided targeted biopsy against that of transperineal template saturation prostate biopsy to detect prostate cancer. We retrospectively analyzed the records of 415 men who consecutively presented for prostate biopsy between November 2014 and September 2016 at our tertiary care center. Multiparametric magnetic resonance imaging was performed using a 3 Tesla device without an endorectal coil, followed by transperineal template saturation prostate biopsy with the BiopSee® fusion system. Additional fusion guided targeted biopsy was done in men with a suspicious lesion on multiparametric magnetic resonance imaging, defined as Likert score 3 to 5. Any Gleason pattern 4 was defined as clinically significant prostate cancer. The detection rates of multiparametric magnetic resonance imaging and fusion guided targeted biopsy were compared with the detection rate of transperineal template saturation prostate biopsy using the McNemar test. We obtained a median of 40 (range 30 to 55) and 3 (range 2 to 4) transperineal template saturation prostate biopsy and fusion guided targeted biopsy cores, respectively. Of the 124 patients (29.9%) without a suspicious lesion on multiparametric magnetic resonance imaging, 32 (25.8%) were found to have clinically significant prostate cancer on transperineal template saturation prostate biopsy. Of the 291 patients (70.1%) with a Likert score of 3 to 5, clinically significant prostate cancer was detected in 129 (44.3%) by multiparametric magnetic resonance imaging fusion guided targeted biopsy, in 176 (60.5%) by transperineal template saturation prostate biopsy and in 187 (64.3%) by the combined approach. Overall, 58 cases (19.9%) of clinically significant prostate cancer would have been missed if fusion guided targeted biopsy had been performed exclusively. The sensitivity of
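    The comparison of paired detection rates above relies on the McNemar test, which uses only the discordant pairs (cancers found by one biopsy method but not the other). A minimal, self-contained sketch; the counts in the usage note are illustrative, not the study's actual 2x2 table.

```python
import math

def mcnemar(b, c, exact_threshold=25):
    """McNemar test for paired binary detection results.
    b: cases positive by method A only; c: positive by method B only.
    Returns a two-sided p-value: exact binomial for few discordant pairs,
    otherwise chi-square (1 dof) with continuity correction."""
    n = b + c
    if n == 0:
        return 1.0
    if n < exact_threshold:
        # exact: p = 2 * P(X <= min(b, c)), X ~ Binomial(n, 1/2)
        k = min(b, c)
        p = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** (n - 1)
        return min(1.0, p)
    chi2 = (abs(b - c) - 1) ** 2 / n
    # survival function of chi-square with 1 dof: P(X > x) = erfc(sqrt(x/2))
    return math.erfc(math.sqrt(chi2 / 2))
```

For example, `mcnemar(58, 11)` (hypothetical discordant counts) returns a very small p-value, indicating the two methods' detection rates differ.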

  18. Fusion of CT coronary angiography and whole-heart dynamic 3D cardiac MR perfusion: building a framework for comprehensive cardiac imaging.

    Science.gov (United States)

    von Spiczak, Jochen; Manka, Robert; Gotschy, Alexander; Oebel, Sabrina; Kozerke, Sebastian; Hamada, Sandra; Alkadhi, Hatem

    2018-04-01

    The purpose of this work was to develop a framework for 3D fusion of CT coronary angiography (CTCA) and whole-heart dynamic 3D cardiac magnetic resonance perfusion (3D-CMR-Perf) image data, correlating coronary artery stenoses to stress-induced myocardial perfusion deficits for the assessment of coronary artery disease (CAD). Twenty-three patients who underwent CTCA and 3D-CMR-Perf for various indications were included retrospectively. For CTCA, image quality and coronary diameter stenoses > 50% were documented. For 3D-CMR-Perf, image quality and stress-induced perfusion deficits were noted. A software framework was developed to allow for 3D image fusion of both datasets. Computation steps included: (1) fully automated segmentation of coronary arteries and heart contours from CT; (2) manual segmentation of the left ventricle in 3D-CMR-Perf images; (3) semi-automatic co-registration of CT/CMR datasets; (4) projection of the 3D-CMR-Perf values on the CT left ventricle. 3D fusion analysis was compared to separate inspection of CTCA and 3D-CMR-Perf data. CT and CMR scans resulted in an image quality rated as good to excellent (mean scores 3.5 ± 0.5 and 3.7 ± 0.4, respectively, scale 1-4). 3D fusion was feasible in all 23 patients, and perfusion deficits could be correlated to culprit coronary lesions in all but one case (22/23 = 96%). Compared to separate analysis of CT and CMR data, coronary supply territories of 3D-CMR-Perf perfusion deficits were refined in two cases (2/23 = 9%), and the relevance of stenoses in CTCA was re-judged in four cases (4/23 = 17%). In conclusion, 3D fusion of CTCA/3D-CMR-Perf facilitates anatomic correlation of coronary lesions and stress-induced myocardial perfusion deficits, thereby helping to refine diagnostic assessment of CAD.

  19. Neutron imaging development for megajoule scale inertial confinement fusion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Grim, G P; Bradley, P A; Day, R D; Clark, D D; Fatherley, V E; Finch, J P; Garcia, F P; Jaramillo, S A; Montoya, A J; Morgan, G L; Oertel, J A; Ortiz, T A; Payton, J R; Pazuchanics, P; Schmidt, D W; Valdez, A C; Wilde, C H; Wilke, M D; Wilson, D C [Los Alamos National Laboratory, PO Box 1663, Los Alamos, NM 87545 (United States)], E-mail: gpgrim@lanl.gov

    2008-05-15

    Neutron imaging of Inertial Confinement Fusion (ICF) targets is useful for understanding the implosion conditions of deuterium and tritium filled targets at Mega-Joule/Tera-Watt scale laser facilities. The primary task for imaging ICF targets at the National Ignition Facility, Lawrence Livermore National Laboratory, Livermore CA, is to determine the asymmetry of the imploded target. The image data, along with other nuclear information, are to be used to provide insight into target drive conditions. The diagnostic goal at the National Ignition Facility is to provide neutron images with 10 {mu}m resolution and peak signal-to-background values greater than 20 for neutron yields of {approx} 10{sup 15}. To achieve this requires signal multiplexing apertures with good resolution. In this paper we present results from imaging system development efforts aimed at achieving these requirements using neutron pinholes. The data were collected using directly driven ICF targets at the Omega Laser, University of Rochester, Rochester, NY., and include images collected from a 3 x 3 array of 15.5 {mu}m pinholes. Combined images have peak signal-to-background values greater than 30 at neutron yields of {approx} 10{sup 13}.

  20. Prediction of enthalpy of fusion of pure compounds using an Artificial Neural Network-Group Contribution method

    International Nuclear Information System (INIS)

    Gharagheizi, Farhad; Salehi, Gholam Reza

    2011-01-01

    Highlights: → An Artificial Neural Network-Group Contribution method is presented for prediction of enthalpy of fusion of pure compounds at their normal melting point. → Validity of the model is confirmed using a large evaluated data set containing 4157 pure compounds. → The average percent error of the model is equal to 2.65% in comparison with the experimental data. - Abstract: In this work, the Artificial Neural Network-Group Contribution (ANN-GC) method is applied to estimate the enthalpy of fusion of pure chemical compounds at their normal melting point. 4157 pure compounds from various chemical families are investigated to propose a comprehensive and predictive model. The obtained results show a Squared Correlation Coefficient (R²) of 0.999, a Root Mean Square Error of 0.82 kJ/mol, and an average absolute deviation lower than 2.65% for the estimated properties relative to existing experimental values.
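    The reported goodness-of-fit statistics (squared correlation coefficient, root mean square error, and average absolute deviation) can be reproduced from predicted and experimental values in a few lines. A minimal sketch; function and variable names are illustrative, not from the paper.

```python
import numpy as np

def fit_metrics(y_exp, y_pred):
    """R^2, RMSE, and average absolute relative deviation (%) between
    experimental and predicted values, e.g. enthalpies of fusion in kJ/mol."""
    y_exp = np.asarray(y_exp, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_exp - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_exp - y_exp.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y_exp - y_pred) ** 2))
    aad = 100.0 * np.mean(np.abs((y_pred - y_exp) / y_exp))
    return r2, rmse, aad
```

Applied to the full 4157-compound data set, these three numbers are exactly the figures of merit the abstract quotes (0.999, 0.82 kJ/mol, and < 2.65%).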

  1. The evaluation of single-view and multi-view fusion 3D echocardiography using image-driven segmentation and tracking.

    Science.gov (United States)

    Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary

    2011-08-01

    Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor-quality nature of acquired images usually containing missing anatomical information, speckle noise, and limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced as acquiring multiple conventional single-view RT3DE images with small probe movements and fusing them together after alignment. This concept of multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Multisensor fusion in gastroenterology domain through video and echo endoscopic image combination: a challenge

    Science.gov (United States)

    Debon, Renaud; Le Guillou, Clara; Cauvin, Jean-Michel; Solaiman, Basel; Roux, Christian

    2001-08-01

    The medical domain makes intensive use of information fusion. In particular, gastroenterology is a discipline where physicians can choose between several imaging modalities that offer complementary advantages. Among existing systems, videoendoscopy (based on a CCD sensor) and echoendoscopy (based on an ultrasound sensor) are the most efficient. The use of each system corresponds to a given step in the physician's diagnostic elaboration. Nowadays, several works aim to achieve automatic interpretation of videoendoscopic sequences. These systems can quantify color and superficial textures of the digestive tube. Unfortunately, the relief information, which is important for diagnosis, is very difficult to retrieve. On the other hand, some studies have proved that 3D information can easily be quantified using echoendoscopy image sequences. That is why combining this information, acquired from two very different points of view, can be considered a real challenge for the medical image fusion topic. In this paper, after a review of current work on the numerical exploitation of videoendoscopy and echoendoscopy, the following question is discussed: how can the complementary aspects of the different systems ease the automatic exploitation of videoendoscopy? We then evaluate the feasibility of a realistic 3D reconstruction based on information given by both echoendoscopy (relief) and videoendoscopy (texture). An enumeration of potential applications of such a fusion system follows. Further discussions and perspectives conclude this first study.

  3. Clinical use of digital retrospective image fusion of CT, MRI, FDG-PET and SPECT - fields of indications and results; Klinischer Einsatz der digitalen retrospektiven Bildfusion von CT, MRT, FDG-PET und SPECT - Anwendungsgebiete und Ergebnisse

    Energy Technology Data Exchange (ETDEWEB)

    Lemke, A.J.; Niehues, S.M.; Amthauer, H.; Felix, R. [Campus Virchow-Klinikum, Klinik fuer Strahlenheilkunde, Charite, Universitaetsmedizin Berlin (Germany); Rohlfing, T. [Dept. of Neurosurgery, Stanford Univ. (United States); Hosten, N. [Inst. fuer Diagnostische Radiologie, Ernst-Moritz-Arndt-Univ. Greifswald (Germany)

    2004-12-01

    Purpose: To evaluate the feasibility and the clinical benefits of retrospective digital image fusion (PET, SPECT, CT and MRI). Materials and methods: In a prospective study, a total of 273 image fusions were performed and evaluated. The underlying image acquisitions (CT, MRI, SPECT and PET) were performed in a way appropriate for the respective clinical question and anatomical region. Image fusion was executed with a software program developed during this study. The results of the image fusion procedure were evaluated in terms of technical feasibility, clinical objective, and therapeutic impact. Results: The most frequent combinations of modalities were CT/PET (n = 156) and MRI/PET (n = 59), followed by MRI/SPECT (n = 28), CT/SPECT (n = 22) and CT/MRI (n = 8). The clinical questions involved the following regions (more than one region per case possible): neurocranium (n = 42), neck (n = 13), lung and mediastinum (n = 24), abdomen (n = 181), and pelvis (n = 65). In 92.6% of all cases (n = 253), image fusion was technically successful. Image fusion was able to improve the sensitivity and specificity of the single modality, or to add important diagnostic information. Image fusion was problematic in cases of different body positions between the two imaging modalities or different positions of mobile organs. In 37.9% of the cases, image fusion added clinically relevant information compared to the single modality. Conclusion: For clinical questions concerning the liver, pancreas, rectum, neck, or neurocranium, image fusion is a reliable method suitable for routine clinical application. Organ motion still limits its feasibility and routine use in other areas (e.g., thorax). (orig.)

  4. Panorama Image Processing for Condition Monitoring with Thermography in Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Byoung Joon; Kim, Tae Hwan; Kim, Soon Geol; Mo, Yoon Syub [UNETWARE, Seoul (Korea, Republic of); Kim, Won Tae [Kongju National University, Gongju (Korea, Republic of)

    2010-04-15

    In this paper, an image processing study combining CCD and thermographic images was performed to allow easy handling of thermographic data without risk to the personnel who conduct condition monitoring for abnormal or failure states that can occur in industrial power plants. This image processing is also applicable to predictive maintenance. To enable broad-area monitoring, a methodology was developed for producing a single image with the panorama technique regardless of the number of cameras employed, including a fusion method for discrete target configurations. As a result, image fusion with quick real-time processing was obtained, and time was saved in locating monitored positions when matching the CCTV and thermographic images.

  5. Panorama Image Processing for Condition Monitoring with Thermography in Power Plant

    International Nuclear Information System (INIS)

    Jeon, Byoung Joon; Kim, Tae Hwan; Kim, Soon Geol; Mo, Yoon Syub; Kim, Won Tae

    2010-01-01

    In this paper, an image processing study combining CCD and thermographic images was performed to allow easy handling of thermographic data without risk to the personnel who conduct condition monitoring for abnormal or failure states that can occur in industrial power plants. This image processing is also applicable to predictive maintenance. To enable broad-area monitoring, a methodology was developed for producing a single image with the panorama technique regardless of the number of cameras employed, including a fusion method for discrete target configurations. As a result, image fusion with quick real-time processing was obtained, and time was saved in locating monitored positions when matching the CCTV and thermographic images.

  6. Soft sensor design by multivariate fusion of image features and process measurements

    DEFF Research Database (Denmark)

    Lin, Bao; Jørgensen, Sten Bay

    2011-01-01

    This paper presents a multivariate data fusion procedure for design of dynamic soft sensors where suitably selected image features are combined with traditional process measurements to enhance the performance of data-driven soft sensors. A key issue of fusing multiple sensor data, i.e. to determine...... with a multivariate analysis technique from RGB pictures. The color information is also transformed to hue, saturation and intensity components. Both sets of image features are combined with traditional process measurements to obtain an inferential model by partial least squares (PLS) regression. A dynamic PLS model...... oxides (NOx) emission of cement kilns. On-site tests demonstrate improved performance over soft sensors based on conventional process measurements only....

  7. An image acquisition and registration strategy for the fusion of hyperpolarized helium-3 MRI and x-ray CT images of the lung

    International Nuclear Information System (INIS)

    Ireland, Rob H; Woodhouse, Neil; Hoggard, Nigel; Swinscoe, James A; Foran, Bernadette H; Hatton, Matthew Q; Wild, Jim M

    2008-01-01

    The purpose of this ethics committee approved prospective study was to evaluate an image acquisition and registration protocol for hyperpolarized helium-3 magnetic resonance imaging (3He-MRI) and x-ray computed tomography. Nine patients with non-small cell lung cancer (NSCLC) gave written informed consent to undergo a free-breathing CT, an inspiration breath-hold CT and a 3D ventilation 3He-MRI in CT position using an elliptical birdcage radiofrequency (RF) body coil. 3He-MRI to CT image fusion was performed using a rigid registration algorithm which was assessed by two observers using anatomical landmarks and a percentage volume overlap coefficient. Registration of 3He-MRI to breath-hold CT was more accurate than to free-breathing CT; overlap 82.9 ± 4.2% versus 59.8 ± 9.0% (p < 0.001) and mean landmark error 0.75 ± 0.24 cm versus 1.25 ± 0.60 cm (p = 0.002). Image registration is significantly improved by using an imaging protocol that enables both 3He-MRI and CT to be acquired with similar breath holds and body position through the use of a birdcage 3He-MRI body RF coil and an inspiration breath-hold CT. Fusion of 3He-MRI to CT may be useful for the assessment of patients with lung diseases.

  8. Information Fusion for High Level Situation Assessment and Prediction

    National Research Council Canada - National Science Library

    Ji, Qiang

    2007-01-01

    .... In addition, we developed algorithms for performing active information fusion to improve both fusion accuracy and efficiency so that decision making and situation assessment can be made in a timely and efficient manner...

  9. Optical asymmetric watermarking using modified wavelet fusion and diffractive imaging

    Science.gov (United States)

    Mehra, Isha; Nishchal, Naveen K.

    2015-05-01

    In most existing image encryption algorithms, the generated keys take the form of a noise-like distribution with a uniform histogram. However, a noise-like distribution is an apparent sign indicating the presence of keys. If the keys are to be transferred through some communication channel, this may lead to a security problem, because the noise-like features may easily catch people's attention and attract more attacks. To address this problem, the keys should be transferred into other, meaningful images to disguise them from attackers. Watermarking schemes are complementary to image encryption schemes. In most iterative encryption schemes, support constraints play the important role of keys for decrypting the meaningful data. In this article, we have transferred the support constraints, which are generated by axial translation of a CCD camera using an amplitude- and phase-truncation approach, into different meaningful images. This has been done by developing a modified fusion technique in the wavelet transform domain. The second issue is how to resolve copyright protection in case the meaningful images are caught by an attacker. Here, watermark detection plays a crucial role: it is necessary to recover the original image using the retrieved watermarks/support constraints. To address this issue, four asymmetric keys have been generated, corresponding to each watermarked image, to retrieve the watermarks. For decryption, an iterative phase retrieval algorithm is applied to extract the plain-texts from the corresponding retrieved watermarks.

  10. Fusion of Nonionic Vesicles

    DEFF Research Database (Denmark)

    Bulut, Sanja; Oskolkova, M. Z.; Schweins, R.

    2010-01-01

    We present an experimental study of vesicle fusion using light and neutron scattering to monitor fusion events. Vesicles are reproducibly formed with an extrusion procedure using a single amphiphile, triethylene glycol mono-n-decyl ether, in water. They show long-term stability for temperatures ar...... a barrier to fusion changing from 15 k(B)T at T = 26 degrees C to 10 k(B)T at T = 35 degrees C. These results are compatible with the theoretical predictions using the stalk model of vesicle fusion.

  11. Detection of early stage atherosclerotic plaques using PET and CT fusion imaging targeting P-selectin in low density lipoprotein receptor-deficient mice

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Ikuko, E-mail: nakamuri@riken.jp [RIKEN Center for Molecular Imaging Science, Kobe (Japan); Department of Cardiovascular Medicine, Saga University, Saga (Japan); Hasegawa, Koki [RIKEN Center for Molecular Imaging Science, Kobe (Japan); Department of Pathology and Experimental Medicine, Kumamoto University, Kumamoto (Japan); Wada, Yasuhiro [RIKEN Center for Molecular Imaging Science, Kobe (Japan); Hirase, Tetsuaki; Node, Koichi [Department of Cardiovascular Medicine, Saga University, Saga (Japan); Watanabe, Yasuyoshi, E-mail: yywata@riken.jp [RIKEN Center for Molecular Imaging Science, Kobe (Japan)

    2013-03-29

    Highlights: ► P-selectin regulates leukocyte recruitment as an early stage event of atherogenesis. ► We developed an antibody-based molecular imaging probe targeting P-selectin for PET. ► This is the first report on successful PET imaging for delineation of P-selectin. ► P-selectin is a candidate target for atherosclerotic plaque imaging by clinical PET. -- Abstract: Background: Sensitive detection and qualitative analysis of atherosclerotic plaques are in high demand in cardiovascular clinical settings. The leukocyte–endothelial interaction mediated by an adhesion molecule P-selectin participates in arterial wall inflammation and atherosclerosis. Methods and results: A {sup 64}Cu-1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid conjugated anti-P-selectin monoclonal antibody ({sup 64}Cu-DOTA-anti-P-selectin mAb) probe was prepared by conjugating an anti-P-selectin monoclonal antibody with DOTA followed by {sup 64}Cu labeling. Thirty-six hours prior to PET and CT fusion imaging, 3 MBq of {sup 64}Cu-DOTA-anti-P-selectin mAb was intravenously injected into low density lipoprotein receptor-deficient Ldlr-/- mice. After a 180 min PET scan, autoradiography and biodistribution of {sup 64}Cu-DOTA-anti-P-selectin monoclonal antibody was examined using excised aortas. In Ldlr-/- mice fed with a high cholesterol diet for promotion of atherosclerotic plaque development, PET and CT fusion imaging revealed selective and prominent accumulation of the probe in the aortic root. Autoradiography of aortas that demonstrated probe uptake into atherosclerotic plaques was confirmed by Oil red O staining for lipid droplets. In Ldlr-/- mice fed with a chow diet to develop mild atherosclerotic plaques, probe accumulation was barely detectable in the aortic root on PET and CT fusion imaging. Probe biodistribution in aortas was 6.6-fold higher in Ldlr-/- mice fed with a high cholesterol diet than in those fed with a normal chow diet. {sup 64}Cu-DOTA-anti-P-selectin m

  12. Effect of breakup on near barrier fusion

    International Nuclear Information System (INIS)

    Dasgupta, M.; Berriman, A.C.; Butt, R.D.; Hinde, D.J.; Morton, C.R.; Newton, J.O.

    2000-01-01

    Full text: Unstable neutron-rich nuclei having very weakly bound neutrons exhibit characteristic features such as a neutron halo extending to large radii, and a low energy threshold for breakup. These features may dramatically affect fusion and other reaction processes. It is well accepted that the extended nuclear matter distribution will lead to an enhancement in fusion cross-sections over those for tightly bound nuclei. The effect of couplings to channels which act as doorways to breakup is, however, controversial, with model predictions differing in the relative magnitudes of enhancement and suppression. To investigate the effect on fusion of couplings specific to unstable neutron-rich nuclei, it is necessary to understand (and then predict) the cross-sections expected for their stable counterparts. This requires knowledge of the energy of the average fusion barrier, and information on the couplings. Experimentally all this information can be obtained from precisely measured fusion cross-sections. Such precision measurements of complete fusion cross-sections for 9 Be + 208 Pb and 6 Li, 7 Li + 209 Bi systems have been done at the Australian National University. The distribution of fusion barriers extracted from these data were used to reliably predict the expected fusion cross-sections. Comparison of the theoretical expectations with the experimentally measured cross-sections show conclusively that complete fusion, at above barrier energies, for all three systems is suppressed (by about 30%) compared with the fusion of more tightly bound nuclei. These measurements, in conjunction with incomplete fusion cross-sections, which were also measured, should encourage a complete theoretical description of fusion and breakup
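    The "distribution of fusion barriers extracted from these data" refers to the standard prescription of taking the second energy derivative of the product E·σ(E) from a precisely measured fusion excitation function. A minimal numerical sketch, assuming equally spaced energies; the synthetic sharp-barrier cross-section in the test is illustrative only.

```python
import numpy as np

def barrier_distribution(E, sigma):
    """Fusion barrier distribution D(E) = d^2(E*sigma)/dE^2,
    estimated with a three-point central difference on an
    equally spaced energy grid E (returns interior points only)."""
    E = np.asarray(E, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    f = E * sigma                      # the quantity whose curvature is taken
    step = E[1] - E[0]
    d2 = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / step ** 2
    return E[1:-1], d2
```

For a classical sharp barrier, E·σ is zero below the barrier and linear above it, so D(E) is sharply peaked at the barrier energy; couplings to breakup and other channels spread and shift this peak, which is what the precision measurements probe.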

  13. Relation between lung perfusion defects and intravascular clots in acute pulmonary thromboembolism: assessment with breath-hold SPECT-CT pulmonary angiography fusion images.

    Science.gov (United States)

    Suga, Kazuyoshi; Yasuhiko, Kawakami; Iwanaga, Hideyuki; Tokuda, Osamu; Matsunaga, Naofumi

    2008-09-01

    The relation between lung perfusion defects and intravascular clots in acute pulmonary thromboembolism (PTE) was comprehensively assessed on deep-inspiratory breath-hold (DIBrH) perfusion SPECT-computed tomographic pulmonary angiography (CTPA) fusion images. Subjects were 34 acute PTE patients who successfully underwent DIBrH perfusion SPECT using a dual-headed SPECT system and a respiratory tracking system. Automated DIBrH SPECT-CTPA fusion images were used to assess the relation between lung perfusion defects and intravascular clots detected by CTPA. DIBrH SPECT visualized 175 lobar/segmental or subsegmental defects in 34 patients, and CTPA visualized 61 intravascular clots at variable locations in 30 (88%) patients, but no clots in four (12%) patients. In the 30 patients with clots, the fusion images confirmed that 69 (41%) of the total 166 perfusion defects (20 segmental, 45 subsegmental and 4 lobar defects) were located in lung territories without clots, while the remaining 97 (58%) defects were located in lung territories with clots. Perfusion defects were absent in lung territories with clots (one lobar branch and three segmental branches) in four (12%) of these patients. In the four patients without clots, nine perfusion defects, including four segmental ones, were present. Because of this unexpected dissociation between intravascular clots and lung perfusion defects, the present fusion images will be a useful adjunct to CTPA in the diagnosis of acute PTE.

  14. Clinical significance of MRI/{sup 18}F-FDG PET fusion imaging of the spinal cord in patients with cervical compressive myelopathy

    Energy Technology Data Exchange (ETDEWEB)

    Uchida, Kenzo; Nakajima, Hideaki; Watanabe, Shuji; Yoshida, Ai; Baba, Hisatoshi [University of Fukui, Department of Orthopaedics and Rehabilitation Medicine, Faculty of Medical Sciences, Eiheiji, Fukui (Japan); Okazawa, Hidehiko [University of Fukui, Department of Biomedical Imaging Research Center, Eiheiji, Fukui (Japan); Kimura, Hirohiko [University of Fukui, Departments of Radiology, Faculty of Medical Sciences, Eiheiji, Fukui (Japan); Kudo, Takashi [Nagasaki University, Department of Radioisotope Medicine, Atomic Bomb Disease and Hibakusha Medicine Unit, Atomic Bomb Disease Institute, Nagasaki (Japan)

    2012-10-15

    {sup 18}F-FDG PET is used to investigate the metabolic activity of neural tissue. MRI is used to visualize morphological changes, but the relationship between intramedullary signal changes and clinical outcome remains controversial. The present study was designed to evaluate the use of 3-D MRI/{sup 18}F-FDG PET fusion imaging for defining intramedullary signal changes on MRI scans and local glucose metabolic rate measured on {sup 18}F-FDG PET scans in relation to clinical outcome and prognosis. We studied 24 patients undergoing decompressive surgery for cervical compressive myelopathy. All patients underwent 3-D MRI and {sup 18}F-FDG PET before surgery. Quantitative analysis of intramedullary signal changes on MRI scans included calculation of the signal intensity ratio (SIR) as the ratio between the increased lesional signal intensity and the signal intensity at the level of the C7/T1 disc. Using an Advantage workstation, the same slices of cervical 3-D MRI and {sup 18}F-FDG PET images were fused. On the fused images, the maximal count of the lesion was adopted as the standardized uptake value (SUV{sub max}). In a similar manner to SIR, the SUV ratio (SUVR) was also calculated. Neurological assessment was conducted using the Japanese Orthopedic Association (JOA) scoring system for cervical myelopathy. The SIR on T1-weighted (T1-W) images, but not SIR on T2-W images, was significantly correlated with preoperative JOA score and postoperative neurological improvement. Lesion SUV{sub max} was significantly correlated with SIR on T1-W images, but not with SIR on T2-W images, and also with postoperative neurological outcome. The SUVR correlated better than SIR on T1-W images and lesion SUV{sub max} with neurological improvement. Longer symptom duration was correlated negatively with SIR on T1-W images, positively with SIR on T2-W images, and negatively with SUV{sub max}. Our results suggest that low-intensity signal on T1-W images, but not on T2-W images, is correlated

  15. Feasibility study on sensor data fusion for the CP-140 aircraft: fusion architecture analyses

    Science.gov (United States)

    Shahbazian, Elisa

    1995-09-01

    Loral Canada completed (May 1995) a Department of National Defence (DND) Chief of Research and Development (CRAD) contract to study the feasibility of implementing a multi-sensor data fusion (MSDF) system onboard the CP-140 Aurora aircraft. This system is expected to fuse data from: (a) attribute measurement-oriented sensors (ESM, IFF, etc.); (b) imaging sensors (FLIR, SAR, etc.); (c) tracking sensors (radar, acoustics, etc.); (d) data from remote platforms (data links); and (e) non-sensor data (intelligence reports, environmental data, visual sightings, encyclopedic data, etc.). Based on purely theoretical considerations, a central-level fusion architecture will lead to a higher-performance fusion system. However, there are a number of system and fusion-architecture issues involved in fusing such dissimilar data: (1) the currently existing sensors are not designed to provide the type of data required by a fusion system; (2) the different types (attribute, imaging, tracking, etc.) of data may require different degrees of processing before they can be used efficiently within a fusion system; (3) the data quality from different sensors, and more importantly from remote platforms via the data links, must be taken into account before fusing; and (4) the non-sensor data may impose specific requirements on the fusion architecture (e.g. variable weight/priority for the data from different sensors). This paper presents the analyses performed for the selection of the fusion architecture for the enhanced sensor suite planned for the CP-140 aircraft in the context of the mission requirements and environmental conditions.

  16. Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.

    Science.gov (United States)

    Nath, Abhigyan; Subbiah, Karthikeyan

    2015-12-01

    Lipocalins are short in sequence length and perform several important biological functions. They share less than 20% sequence similarity among paralogs. Identifying them experimentally is an expensive and time-consuming process, and computational methods based on sequence similarity for assigning putative members to this family are largely ineffective for the same reason: the low sequence similarity among members of the family. Consequently, machine learning methods become a viable alternative for their prediction, using sequence- and structure-derived features as the input. Ideally, any machine-learning-based prediction method should be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. Near-perfect learning can be approached by training the model with diverse input instances drawn from different regions of the entire input space. Furthermore, prediction performance can be improved by balancing the training set, since imbalanced data sets tend to bias the prediction towards the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability without classification bias, through diversified and balanced training sets, and (ii) enhanced prediction accuracy, by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we first used the unsupervised K-means clustering algorithm to create diversified clusters of input patterns, and built a diversified and balanced training set by selecting an equal number of patterns from each of these clusters. Finally, a probability-based classifier fusion scheme was applied to a boosted random forest algorithm (which produced greater sensitivity) and a K-nearest-neighbour algorithm (which produced greater specificity) to achieve the enhanced predictive performance.
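
    The training-set construction and probability-level fusion described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's code: the cluster count, samples-per-cluster, fusion weight, and toy data are all assumptions, and the boosted random forest / K-NN base classifiers are not reproduced.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means (Lloyd's algorithm); returns a cluster label per row."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def balanced_training_set(X, y, k=4, per_cluster=5, seed=0):
    """Cluster each class with k-means and draw an equal number of
    patterns from every cluster: a diversified *and* balanced selection."""
    rng = np.random.default_rng(seed)
    idx = []
    for c in np.unique(y):
        members = np.where(y == c)[0]
        labels = kmeans(X[members], k, seed=seed)
        for j in range(k):
            pool = members[labels == j]
            if len(pool):
                take = min(per_cluster, len(pool))
                idx.extend(rng.choice(pool, size=take, replace=False))
    return np.array(idx)

def fuse_probabilities(p_rf, p_knn, w=0.5):
    """Probability-level fusion of two classifiers' class scores."""
    return w * p_rf + (1 - w) * p_knn

# toy demo with an imbalanced two-class problem
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = (rng.random(200) > 0.7).astype(int)   # roughly 30% positives
idx = balanced_training_set(X, y)
print(np.bincount(y[idx], minlength=2))
```

    Selecting per cluster rather than at random is what gives both diversity (each region of the input space is represented) and balance (each class contributes the same number of patterns).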

  17. A hierarchical structure approach to MultiSensor Information Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Maren, A.J. (Tennessee Univ., Tullahoma, TN (United States). Space Inst.); Pap, R.M.; Harston, C.T. (Accurate Automation Corp., Chattanooga, TN (United States))

    1989-01-01

    A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  19. X-ray crystal imagers for inertial confinement fusion experiments (invited)

    International Nuclear Information System (INIS)

    Aglitskiy, Y.; Lehecka, T.; Obenschain, S.; Pawley, C.; Brown, C.M.; Seely, J.

    1999-01-01

    We report on our continued development of a high-resolution monochromatic x-ray imaging system based on spherically curved crystals. This system can be used extensively in the relevant experiments of the inertial confinement fusion (ICF) program. The system is currently used for, but not limited to, diagnostics of targets ablatively accelerated by the Nike KrF laser. A spherically curved quartz crystal (2d=6.68703 Angstrom, R=200mm) has been used to produce monochromatic backlit images with the He-like Si resonance line (1865 eV) as the source of radiation. Another quartz crystal (2d=8.5099 Angstrom, R=200mm) with the H-like Mg resonance line (1473 eV) has been used for backlit imaging with higher contrast. The spatial resolution of the x-ray optical system is 1.7 μm in selected places and 2-3 μm over a larger area. A second crystal with a separate backlighter was added to the imaging system, which makes it possible to use all four strips of the framing camera. Time-resolved, 20x magnified, backlit monochromatic images of CH planar targets driven by the Nike facility have been obtained with a spatial resolution of 2.5 μm in selected places and 5 μm over the focal spot of the Nike laser. We are exploring the extension of this technique to higher and lower backlighter energies. copyright 1999 American Institute of Physics

  20. Initial clinical assessment of CT-MRI image fusion software in localization of the prostate for 3D conformal radiation therapy

    International Nuclear Information System (INIS)

    Kagawa, Kazufumi; Lee, W. Robert; Schultheiss, Timothy E.; Hunt, Margie A.; Shaer, Andrew H.; Hanks, Gerald E.

    1997-01-01

    Purpose: To assess the utility of image fusion software and compare MRI prostate localization with CT localization in patients undergoing 3D conformal radiation therapy of prostate cancer. Materials and Methods: After a phantom study was performed to ensure the accuracy of the image fusion procedure, 22 prostate cancer patients had CT and MRI studies before the start of radiotherapy. Immobilization casts used during radiation treatment were also used for both imaging studies. After the clinical target volume (CTV) (prostate or prostate + seminal vesicles) was defined on CT, slices from the MRI study were reconstructed to precisely match the CT slices by identifying three common bony landmarks on each study. The CTV was separately defined on the matched MRI slices. Data related to the size and location of the prostate were compared between CT and MRI. The spatial relationship between the tip of the urethrogram cone on CT and the prostate apex seen on MRI was also estimated. Results: The phantom study showed registration discrepancies between CT and MRI smaller than 1.0 mm in every pair compared. The patient study showed a mean image registration error of 0.9 (± 0.6) mm. The average prostate volume was 63.0 (± 25.8) cm{sup 3} and 50.9 (± 22.9) cm{sup 3} as determined by CT and MRI, respectively. The prostate location defined by the two studies typically differed at the base and at the apex of the prostate. On the transverse MRI, the prostate apex was situated 7.1 (± 4.5) mm dorsal and 15.1 (± 4.0) mm cephalad to the tip of the urethrogram cone. Conclusions: The CT-MRI image fusion study made it possible to compare the two modalities directly. MRI localization of the prostate is more accurate than CT, and indicates that the distance from cone to apex is 15 mm. The CT-MRI image fusion technique provides a valuable supplement to CT technology for more precise targeting of prostate cancer
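
    The three-landmark matching step underlying such CT-MRI registration can be illustrated with the standard rigid point-set registration (Kabsch) algorithm. This is a generic sketch with made-up landmark coordinates, not the workstation's actual registration code:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src landmarks onto
    dst landmarks (Kabsch / orthogonal Procrustes, with reflection guard)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid improper rotation
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# three bony landmarks identified on both studies (coordinates are made up)
ct = np.array([[10.0, 20.0, 5.0],
               [40.0, 22.0, 8.0],
               [25.0, 50.0, 12.0]])
theta = np.deg2rad(10.0)             # simulate the MRI frame: rotate + shift
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
mri = ct @ Rz.T + np.array([3.0, -2.0, 1.0])

R, t = rigid_register(ct, mri)
residual = np.linalg.norm(ct @ R.T + t - mri)
print(f"registration residual: {residual:.1e}")
```

    Three non-collinear landmarks are the minimum needed to determine a rigid 3D transform, which is why the residual on noiseless data is essentially zero; with real images, landmark picking error propagates into the ~1 mm registration error reported above.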

  1. Fusion barrier distributions - What have we learned?

    International Nuclear Information System (INIS)

    Hinde, D. J.; Dasgupta, M.

    1998-01-01

    The study of nuclear fusion received a strong impetus from the realisation that an experimental fusion barrier distribution could be determined from precisely measured fusion cross-sections. Experimental data for different reactions have shown in the fusion barrier distributions clear signatures of a range of nuclear excitations, for example the effects of static quadrupole and hexadecapole deformations, single- and double-phonon states, transfer of nucleons, and high-lying excited states. The improved understanding of fusion barrier distributions allows more reliable prediction of fusion angular momentum distributions, which aids interpretation of fission probabilities and fission anisotropies, and understanding of the population of super-deformed bands for nuclear structure studies. Studies of the relationship between the fusion barrier distribution and the extra-push energy should improve our understanding of the mechanism of the extra-push effect, and may help to predict new ways of forming very heavy or super-heavy nuclei
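
    The experimental barrier distribution referred to here is conventionally extracted from precise cross-section data as the second derivative of Eσ with respect to energy (following Rowley, Satchler and Stelson), evaluated in practice with a point-difference formula over equally spaced energies:

```latex
% Barrier distribution from measured fusion cross-sections sigma(E):
\[
  D(E) = \frac{d^{2}\left(E\sigma\right)}{dE^{2}}
  \;\approx\;
  \frac{\left(E\sigma\right)_{i+1} - 2\left(E\sigma\right)_{i} + \left(E\sigma\right)_{i-1}}{(\Delta E)^{2}}
\]
```

    The second-difference form is why "precisely measured" cross-sections are emphasized: differentiating twice amplifies statistical errors, so cross-sections must typically be known to about 1% for the distribution to be useful.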

  2. Comparison of Spatiotemporal Fusion Models: A Review

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2015-02-01

    Full Text Available Simultaneously capturing spatial and temporal dynamics is always a challenge for the remote sensing community. Spatiotemporal fusion has gained wide interest in various applications for its superiority in integrating both fine spatial resolution and frequent temporal coverage. Though many advances have been made in spatiotemporal fusion model development and applications in the past decade, a unified comparison among existing fusion models is still limited. In this research, we classify the models into three categories: transformation-based, reconstruction-based, and learning-based models. The objective of this study is to (i) compare four fusion models (STARFM, ESTARFM, ISTAFM, and SPSTFM) under a one Landsat-MODIS (L-M) pair prediction mode and a two L-M pair prediction mode using time-series datasets from the Coleambally irrigation area and Poyang Lake wetland; (ii) quantitatively assess prediction accuracy considering spatiotemporal comparability, landscape heterogeneity, and model parameter selection; and (iii) discuss the advantages and disadvantages of the three categories of spatiotemporal fusion models.

  3. Cooling-load prediction by the combination of rough set theory and an artificial neural-network based on data-fusion technique

    International Nuclear Information System (INIS)

    Hou Zhijian; Lian Zhiwei; Yao Ye; Yuan Xinjian

    2006-01-01

    A novel method integrating rough set (RS) theory and an artificial neural network (ANN) based on data-fusion technique is presented to forecast an air-conditioning load. Data fusion is the process of combining data from multiple sensors or related information to estimate or predict entity states. In this paper, RS theory is applied to find the factors relevant to the load, which are used as inputs of an artificial neural network to predict the cooling load. To improve the accuracy and enhance the robustness of the load forecasting results, a general load-prediction model synthesizing multiple RSAN models (MRAN) is presented, so as to make full use of redundant information. The optimum principle is employed to deduce the weights of each RSAN model. Actual prediction results from a real air-conditioning system show that the MRAN forecasting model is better than the individual RSAN and autoregressive integrated moving average (ARIMA) ones, with a relative error within 4%. In addition, the individual RSAN forecasting results are better than those of ARIMA
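
    The abstract does not spell out the "optimum principle" used to weight the submodels; one common choice is least-squares weighting of the individual forecasters on observed data, which can be sketched as follows (the cooling-load data and the two forecasters here are entirely synthetic):

```python
import numpy as np

def fusion_weights(preds, target):
    """Least-squares weights for combining several models' forecasts.

    preds:  (n_samples, n_models) matrix of individual forecasts
    target: (n_samples,) observed loads
    """
    w, *_ = np.linalg.lstsq(preds, target, rcond=None)
    return w

# toy example: two imperfect forecasters of a daily cooling-load cycle
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
load = 100 + 30 * np.sin(t)                  # "true" cooling load (kW)
m1 = load + rng.normal(0, 5, t.size)         # model 1: unbiased but noisy
m2 = 0.9 * load + rng.normal(0, 2, t.size)   # model 2: biased but stable
P = np.column_stack([m1, m2])

w = fusion_weights(P, load)
fused = P @ w
rel_err = np.abs(fused - load) / load
print(f"weights: {w.round(3)}, mean relative error: {rel_err.mean():.3%}")
```

    The fused forecast beats both submodels because the weights simultaneously correct model 2's bias and average down model 1's noise, which is the same "use the redundant information" argument made for MRAN above.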

  4. A DATA FUSION SYSTEM FOR THE NONDESTRUCTIVE EVALUATION OF NON-PIGGABLE PIPES

    Energy Technology Data Exchange (ETDEWEB)

    Shreekanth Mandayam; Robi Polikar; John C. Chen

    2004-04-01

    The objectives of this research project are: (1) To design sensor data fusion algorithms that can synergistically combine defect-related information from heterogeneous sensors used in gas pipeline inspection for reliably and accurately predicting the condition of the pipe-wall. (2) To develop efficient data management techniques for signals obtained during multisensor interrogation of a gas pipeline. During this reporting period, Rowan University designed, developed and exercised multisensor data fusion algorithms for identifying defect-related information present in magnetic flux leakage, ultrasonic testing and thermal imaging nondestructive evaluation signatures of a test-specimen suite representative of benign and anomalous indications in gas transmission pipelines.

  5. [An improved low spectral distortion PCA fusion method].

    Science.gov (United States)

    Peng, Shi; Zhang, Ai-Wu; Li, Han-Lun; Hu, Shao-Xing; Meng, Xian-Gang; Sun, Wei-Dong

    2013-10-01

    Aiming at the spectral distortion produced in the PCA fusion process, the present paper proposes an improved low-spectral-distortion PCA fusion method. This method uses the NCUT (normalized cut) image segmentation algorithm to partition a complex hyperspectral remote sensing image into multiple sub-images, increasing the separability of samples and thereby weakening the spectral distortions of traditional PCA fusion. A pixel-similarity weighting matrix and masks are produced using graph theory and clustering theory. These masks are used to cut the hyperspectral image and the high-resolution image into sub-region objects. All corresponding sub-region objects between the hyperspectral image and the high-resolution image are fused using the PCA method, and all sub-regional fusion results are spliced together to produce a new image. In the experiment, Hyperion hyperspectral data and RapidEye data were used. The experimental results show that the proposed method matches traditional PCA fusion in enhancing spatial resolution while better preserving spectral fidelity.
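
    For reference, the classic whole-image PCA fusion that the paper improves on (substitute the histogram-matched high-resolution band for the first principal component, then back-project) can be sketched as below; in the proposed method this step would be applied per NCUT sub-region rather than globally. Image shapes and data are illustrative:

```python
import numpy as np

def pca_fusion(hs, pan):
    """Classic PCA image fusion.

    hs:  (h, w, bands) low-resolution image, already resampled to the pan grid
    pan: (h, w) high-resolution panchromatic band
    """
    h, w, b = hs.shape
    X = hs.reshape(-1, b).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # principal axes from the band covariance matrix (descending order)
    cov = np.cov(Xc, rowvar=False)
    _, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, ::-1]
    pcs = Xc @ vecs
    # histogram-match (mean/std) the pan band to PC1, then substitute it
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    fused = pcs @ vecs.T + mean        # back-project to band space
    return fused.reshape(h, w, b)

# toy arrays standing in for a hyperspectral patch and a sharp band
rng = np.random.default_rng(0)
hs = rng.random((32, 32, 6))
pan = rng.random((32, 32))
out = pca_fusion(hs, pan)
print(out.shape)
```

    The spectral distortion the paper targets arises in exactly this substitution step: the more the pan band differs from the true PC1 of a region, the more the back-projection perturbs the band spectra, which is why fusing per homogeneous sub-region helps.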

  6. Clay content prediction using on-the-go proximal soil sensor fusion

    DEFF Research Database (Denmark)

    Tabatabai, Salman; Knadel, Maria; Greve, Mogens Humlekrog

    on soil usability, very few studies so far have provided robust and accurate predictions for fields with high clay content variability. An on-the-go multi-sensor platform was used to measure topsoil (25cm) VNIR spectra and temperature as well as electrical conductivity of top 30cm and top 90cm in 5 fields...... least squares regression (PLSR) and support vector machines regression (SVMR) were performed using VNIR spectra, EC and soil temperature as predictors and clay content as the response variable. PLSR and SVMR models were validated using full and 20-segment cross-validation respectively. The results were...... highly accurate with R2 of 0.91 and 0.93, root mean square error (RMSE) of 1.19 and 1.08, and ratio of performance to interquartile range (RPIQ) of 4.6 and 5.1 for PLSR and SVMR respectively. This shows the high potential of on-the-go soil sensor fusion to predict soil clay content and automate...

  7. Predicting plant biomass accumulation from image-derived parameters

    Science.gov (United States)

    Chen, Dijun; Shi, Rongli; Pape, Jean-Michel; Neumann, Kerstin; Graner, Andreas; Chen, Ming; Klukas, Christian

    2018-01-01

    Abstract Background Image-based high-throughput phenotyping technologies have been rapidly developed in plant science recently, and they offer great potential to gain more valuable information than traditional destructive methods. Predicting plant biomass is regarded as a key goal for plant breeders and ecologists. However, it is a great challenge to find a predictive biomass model across experiments. Results In the present study, we constructed 4 predictive models to examine the quantitative relationship between image-based features and plant biomass accumulation. Our methodology has been applied to 3 consecutive barley (Hordeum vulgare) experiments with control and stress treatments. The results showed that plant biomass can be accurately predicted from image-based parameters using a random forest model. The high prediction accuracy based on this model will contribute to relieving the phenotyping bottleneck in biomass measurement in breeding applications. The prediction performance remains relatively high across experiments under similar conditions. The relative contribution of individual features for predicting biomass was further quantified, revealing new insights into the phenotypic determinants of the plant biomass outcome. Furthermore, the methods could also be used to determine the most important image-based features related to plant biomass accumulation, which would be promising for subsequent genetic mapping to uncover the genetic basis of biomass. Conclusions We have developed quantitative models to accurately predict plant biomass accumulation from image data. We anticipate that the analysis results will be useful to advance our understanding of the phenotypic determinants of plant biomass outcome, and the statistical methods can be broadly used for other plant species. PMID:29346559
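
    Quantifying "the relative contribution of individual features" is commonly done via permutation importance. A compact numpy sketch of that idea follows; ridge regression stands in for the paper's random forest (an assumption made for brevity), and the image-derived features and coefficients are synthetic:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Ridge regression weights (a simple stand-in for a random forest)."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

def permutation_importance(X, y, w, seed=0):
    """Drop in R^2 when each feature is shuffled: a model-agnostic way to
    quantify each image feature's contribution to the biomass prediction."""
    rng = np.random.default_rng(seed)
    base = r2(y, X @ w)
    imp = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])          # destroy feature j's information
        imp[j] = base - r2(y, Xp @ w)
    return imp

# toy data: biomass driven mostly by feature 0 (e.g. projected leaf area)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                      # image-derived features
biomass = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, 300)
w = ridge_fit(X, biomass)
imp = permutation_importance(X, biomass, w)
print(imp.round(3))
```

    Shuffling one column breaks only that feature's association with the response, so the resulting loss of R² ranks features by their predictive contribution, whatever the underlying model.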

  8. Evaluation of MRI-US Fusion Technology in Sports-Related Musculoskeletal Injuries.

    Science.gov (United States)

    Wong-On, Manuel; Til-Pérez, Lluís; Balius, Ramón

    2015-06-01

    A combination of magnetic resonance imaging (MRI) with real-time high-resolution ultrasound (US), known as fusion imaging, may improve visualization of musculoskeletal (MSK) sports medicine injuries. The aim of this study was to evaluate the applicability of MRI-US fusion technology in MSK sports medicine. This study was conducted by the medical services of FC Barcelona. The participants included volunteers and referred athletes with symptomatic and asymptomatic MSK injuries. In all cases the MRI was loaded into the US system for manual registration on the live US image and fusion imaging examination. After every test, an evaluation form was completed covering advantages, disadvantages, and anatomic fusion landmarks. From November 2014 to March 2015, we evaluated 20 subjects who underwent fusion imaging: 5 non-injured volunteers and 15 injured athletes (11 symptomatic and 4 asymptomatic), age range 16-50 years, mean 22. We describe some of the anatomic landmarks used to guide fusion in different regions. This technology allowed us to examine muscle and tendon injuries simultaneously with US and MRI and to correlate the two techniques, especially for low-grade muscular injuries. It has also helped compensate for the limited field of view of US, and it improves spatial orientation for cartilage, labrum and meniscal injuries. However, a high-quality MRI image is essential for achieving an adequate fusion image, and 3D sequences need to be added to MRI protocols to improve navigation. The combination of real-time MRI-US image fusion and navigation is relatively easy to perform and is helping to improve understanding of MSK injuries. However, it requires specific skills in MSK imaging and still needs further research in sports-related injuries. Toshiba Medical Systems Corporation.

  9. Measuring and Predicting Tag Importance for Image Retrieval.

    Science.gov (United States)

    Li, Shangwen; Purushotham, Sanjay; Chen, Chen; Ren, Yuzhuo; Kuo, C-C Jay

    2017-12-01

    Textual data such as tags and sentence descriptions are combined with visual cues to reduce the semantic gap for image retrieval applications in today's Multimodal Image Retrieval (MIR) systems. However, all tags are treated as equally important in these systems, which may result in misalignment between the visual and textual modalities during MIR training. This in turn leads to degraded retrieval performance at query time. To address this issue, we investigate the problem of tag importance prediction, where the goal is to automatically predict the tag importance and use it in image retrieval. To achieve this, we first propose a method to measure the relative importance of object and scene tags from image sentence descriptions. Using this as the ground truth, we present a tag importance prediction model that jointly exploits visual, semantic and context cues. The Structural Support Vector Machine (SSVM) formulation is adopted to ensure efficient training of the prediction model. Then, Canonical Correlation Analysis (CCA) is employed to learn the relation between the image visual features and tag importance to obtain robust retrieval performance. Experimental results on three real-world datasets show a significant performance improvement of the proposed MIR with Tag Importance Prediction (MIR/TIP) system over other MIR systems.

  10. Dim target detection method based on salient graph fusion

    Science.gov (United States)

    Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun

    2018-02-01

    Dim target detection is a key problem in the digital image processing field. With the development of multi-spectrum imaging sensors, fusing information from different spectral images has become a common way to improve the performance of dim target detection. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, multi-direction Gabor filters and multi-scale contrast filters are combined to construct a salient graph from each digital image. A maximum-salience fusion strategy is then designed to fuse the salient graphs from the different spectral images, and a top-hat filter is used to detect dim targets in the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on clutter background images.
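
    A minimal numpy sketch of the two fusion-and-detection steps (maximum-salience fusion across bands, then white top-hat filtering). The Gabor/contrast salient-graph construction is replaced here by the raw bands for brevity, and all sizes, the structuring element, and the synthetic scene are illustrative assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def grey_erode(img, k):
    padded = np.pad(img, k // 2, mode="edge")
    return sliding_window_view(padded, (k, k)).min(axis=(2, 3))

def grey_dilate(img, k):
    padded = np.pad(img, k // 2, mode="edge")
    return sliding_window_view(padded, (k, k)).max(axis=(2, 3))

def white_tophat(img, k=7):
    """Image minus its morphological opening: keeps bright structures
    smaller than the k x k structuring element (e.g. dim point targets)."""
    opening = grey_dilate(grey_erode(img, k), k)
    return img - opening

def fuse_salient_maps(maps):
    """Maximum-salience fusion of per-band (normalized) saliency maps."""
    return np.maximum.reduce([m / (m.max() + 1e-12) for m in maps])

# toy scene: a dim point target on a smooth background, seen in two bands
y, x = np.mgrid[0:64, 0:64]
background = 0.5 + 0.002 * x
band1 = background.copy(); band1[30, 30] += 0.2
band2 = background.copy(); band2[30, 30] += 0.1
fused = fuse_salient_maps([band1, band2])
detection = white_tophat(fused, k=7)
print(np.unravel_index(detection.argmax(), detection.shape))
```

    Taking the per-pixel maximum keeps a target that is salient in *any* band, while the top-hat suppresses the smooth clutter background, leaving the point target as the strongest response.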

  11. Covariance descriptor fusion for target detection

    Science.gov (United States)

    Cukur, Huseyin; Binol, Hamidullah; Bal, Abdullah; Yavuz, Fatih

    2016-05-01

    Target detection is one of the most important topics for military and civilian applications. To address such detection tasks, hyperspectral imaging sensors provide useful image data containing both spatial and spectral information. Target detection in hyperspectral images involves various challenging scenarios, and the covariance descriptor presents many advantages in overcoming them. The detection capability of the conventional covariance descriptor technique can be further improved by fusion methods. In this paper, hyperspectral bands are clustered according to inter-band correlation. Target detection is then realized by fusing the covariance descriptor results computed on the band clusters. The proposed combination technique is denoted Covariance Descriptor Fusion (CDF). The efficiency of the CDF is evaluated by applying it to hyperspectral imagery to detect man-made objects. The obtained results show that the CDF performs better than the conventional covariance descriptor.
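
    The per-cluster covariance descriptors and their fusion can be sketched as follows. Two simplifying assumptions are labeled in the code: the correlation-based band clustering is replaced by a contiguous split, and the Frobenius distance stands in for the generalized-eigenvalue metrics usually applied to covariance matrices, so treat this as an illustration of the structure rather than the paper's detector:

```python
import numpy as np

def covariance_descriptor(window):
    """Region covariance descriptor: covariance of the per-pixel feature
    vectors (here simply the band values) inside a spatial window."""
    feats = window.reshape(-1, window.shape[-1]).astype(float)
    return np.cov(feats, rowvar=False)

def cluster_bands(cube, n_groups):
    """Stand-in for the paper's correlation-based band clustering:
    split the band indices into contiguous groups (an assumption)."""
    return np.array_split(np.arange(cube.shape[-1]), n_groups)

def fused_distance(win_a, win_b, groups):
    """CDF idea: compare descriptors per band cluster, then fuse the
    per-cluster distances (here by a plain average, also an assumption)."""
    dists = []
    for g in groups:
        Ca = covariance_descriptor(win_a[..., g])
        Cb = covariance_descriptor(win_b[..., g])
        dists.append(np.linalg.norm(Ca - Cb))     # Frobenius stand-in
    return float(np.mean(dists))

rng = np.random.default_rng(0)
cube = rng.random((16, 16, 12))                    # toy hyperspectral patch
groups = cluster_bands(cube, 3)
d_same = fused_distance(cube[:8, :8], cube[:8, :8], groups)
d_diff = fused_distance(cube[:8, :8], cube[8:, 8:], groups)
print(d_same, d_diff)
```

    Working per band cluster keeps each descriptor small and well conditioned (a k x k covariance per cluster instead of one large matrix over all bands), which is part of why fusing the cluster-wise results can outperform a single global descriptor.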

  12. Flare Prediction Using Photospheric and Coronal Image Data

    Science.gov (United States)

    Jonas, E.; Shankar, V.; Bobra, M.; Recht, B.

    2016-12-01

    We attempt to forecast M-and X-class solar flares using a machine-learning algorithm and five years of image data from both the Helioseismic and Magnetic Imager (HMI) and Atmospheric Imaging Assembly (AIA) instruments aboard the Solar Dynamics Observatory. HMI is the first instrument to continuously map the full-disk photospheric vector magnetic field from space (Schou et al., 2012). The AIA instrument maps the transition region and corona using various ultraviolet wavelengths (Lemen et al., 2012). HMI and AIA data are taken nearly simultaneously, providing an opportunity to study the entire solar atmosphere at a rapid cadence. Most flare forecasting efforts described in the literature use some parameterization of solar data - typically of the photospheric magnetic field within active regions. These numbers are considered to capture the information in any given image relevant to predicting solar flares. In our approach, we use HMI and AIA images of solar active regions and a deep convolutional kernel network to predict solar flares. This is effectively a series of shallow-but-wide random convolutional neural networks stacked and then trained with a large-scale block-weighted least squares solver. This algorithm automatically determines which patterns in the image data are most correlated with flaring activity and then uses these patterns to predict solar flares. Using the recently-developed KeystoneML machine learning framework, we construct a pipeline to process millions of images in a few hours on commodity cloud computing infrastructure. This is the first time vector magnetic field images have been combined with coronal imagery to forecast solar flares. This is also the first time such a large dataset of solar images, some 8.5 terabytes of images that together capture over 3000 active regions, has been used to forecast solar flares. We evaluate our method using various flare prediction windows defined in the literature (e.g. 
Ahmed et al., 2013) and a novel per
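
    The core recipe described above (fixed random convolutional filters whose pooled responses are fit with a linear least-squares solver) can be sketched in a few lines. This is a toy illustration, not the authors' KeystoneML pipeline; the filter count, kernel size, and pooling choice below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3
KERNELS = rng.standard_normal((8, K, K))  # fixed random filter bank

def random_conv_features(img):
    """Random-convolution features: filter, ReLU, then global average pool."""
    H, W = img.shape
    feats = []
    for kern in KERNELS:
        resp = np.array([[np.sum(img[i:i+K, j:j+K] * kern)
                          for j in range(W - K + 1)]
                         for i in range(H - K + 1)])
        feats.append(np.maximum(resp, 0).mean())  # ReLU, then pool
    return np.array(feats)

def fit_least_squares(X, y):
    """'Training' is just a linear least-squares fit on the features."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0.5).astype(int)  # threshold the regression output
```

    Because the filters are fixed and only the linear readout is trained, the whole model reduces to one (large) least-squares problem, which is what makes the approach scalable.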

  13. Inertial confinement fusion (ICF)

    International Nuclear Information System (INIS)

    Nuckolls, J.

    1977-01-01

    The principal goal of the inertial confinement fusion program is the development of a practical fusion power plant in this century. Rapid progress has been made in the four major areas of ICF--targets, drivers, fusion experiments, and reactors. High gain targets have been designed. Laser, electron beam, and heavy ion accelerator drivers appear to be feasible. Record-breaking thermonuclear conditions have been experimentally achieved. Detailed diagnostics of laser implosions have confirmed predictions of the LASNEX computer program. Experimental facilities are being planned and constructed capable of igniting high gain fusion microexplosions in the mid-1980s. A low cost, long lifetime reactor design has been developed.

  14. Data fusion according to the principle of polyrepresentation

    DEFF Research Database (Denmark)

    Larsen, Birger; Ingwersen, Peter; Lund, Berit

    2009-01-01

    logical data fusion combinations compared to the performance of the four individual models and their intermediate fusions when following the principle of polyrepresentation. This principle is based on cognitive IR perspective (Ingwersen & Järvelin, 2005) and implies that each retrieval model is regarded...... that only the inner disjoint overlap documents between fused models are ranked. The second set of experiments was based on traditional data fusion methods. The experiments involved the 30 TREC 5 topics that contain more than 44 relevant documents. In all tests, the Borda and CombSUM scoring methods were...... the individual models at DCV100. At DCV15, however, the results of polyrepresentative fusion were less predictable.The traditional fusion method based on polyrepresentation principles demonstrates a clear picture of performance at both DCV levels and verifies the polyrepresentation predictions for data fusion...
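
    The Borda and CombSUM scoring methods mentioned in the record are standard rank- and score-based fusion rules; a minimal sketch (document IDs and scores below are hypothetical):

```python
# CombSUM adds each document's (normalised) scores across retrieval models;
# Borda converts each ranking to points (top of n items gets n, next n-1, ...).

def comb_sum(score_lists):
    """score_lists: list of {doc_id: score} dicts, one per retrieval model."""
    fused = {}
    for scores in score_lists:
        for doc, s in scores.items():
            fused[doc] = fused.get(doc, 0.0) + s
    return sorted(fused, key=fused.get, reverse=True)

def borda(rankings):
    """rankings: list of ranked doc-id lists, one per retrieval model."""
    points = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, doc in enumerate(ranking):
            points[doc] = points.get(doc, 0) + (n - pos)
    return sorted(points, key=points.get, reverse=True)
```

    Restricting the inputs to documents retrieved by several models (the disjoint-overlap idea above) is then just a filter applied before fusion.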

  15. Utility of 18F sodium fluoride PET/CT imaging in the evaluation of postoperative pain following surgical spine fusion.

    Science.gov (United States)

    Pouldar, D; Bakshian, S; Matthews, R; Rao, V; Manzano, M; Dardashti, S

    2017-08-01

    A retrospective case review of patients with postoperative pain following vertebral fusion who underwent 18F sodium fluoride PET/CT imaging of the spine, conducted to determine the benefit of 18F sodium fluoride PET/CT imaging in the diagnosis of persistent pain in the postoperative spine. Identifying pain generators in the postoperative spine has proven to be a diagnostic challenge. Conventional radiologic evaluation of persistent pain after spine surgery with plain radiographs, MRI, and CT often falls short of a diagnosis in complex patients; 18F sodium fluoride PET/CT imaging is an alternative tool to accurately identify the source of pain in such difficult cases. This retrospective study looked at 25 adult patients who had undergone 18F sodium fluoride PET/CT imaging. All patients had persistent or recurrent back pain over the course of a 15-month period after spinal fusion surgery, and all had inconclusive dedicated MRI. The clinical accuracy of PET/CT in identifying the pain generator, and its contribution to altering the decision-making process, was compared with the use of CT alone. Of the 25 patients studied, 17 had increased uptake on the 18F sodium fluoride PET/CT fusion images, and there was a high correlation of radiotracer uptake with the patients' pain generators. Overall, 88% of the studies were considered beneficial, with PET/CT either altering the clinical diagnosis and treatment plan or confirming that surgery was unnecessary. 18F sodium fluoride PET/CT proves to be a useful tool in the diagnosis of complex spine pathology in postoperative patients. In varied cases, a high correlation of metabolic activity with the source of the patient's pain was observed.

  16. Image fusion analysis of 99mTc-HYNIC-Tyr3-octreotide SPECT and diagnostic CT using an immobilisation device with external markers in patients with endocrine tumours

    International Nuclear Information System (INIS)

    Gabriel, Michael; Hausler, Florian; Moncayo, Roy; Decristoforo, Clemens; Virgolini, Irene; Bale, Reto; Kovacs, Peter

    2005-01-01

    The aim of this study was to assess the value of multimodality imaging using a novel repositioning device with external markers for fusion of single-photon emission computed tomography (SPECT) and computed tomography (CT) images. The additional benefit derived from this methodological approach was analysed in comparison with SPECT and diagnostic CT alone in terms of detection rate, reliability and anatomical assignment of abnormal findings with SPECT. Fifty-three patients (30 males, 23 females) with known or suspected endocrine tumours were studied. Clinical indications for somatostatin receptor (SSTR) scintigraphy (SPECT/CT image fusion) included staging of newly diagnosed tumours (n=14) and detection of unknown primary tumour in the presence of clinical and/or biochemical suspicion of neuroendocrine malignancy (n=20). Follow-up studies after therapy were performed in 19 patients. A mean activity of 400 MBq of 99mTc-EDDA/HYNIC-Tyr3-octreotide was given intravenously. SPECT using a dual-detector scintillation camera and diagnostic multi-detector CT were sequentially performed. To ensure reproducible positioning, patients were fixed in an individualised vacuum mattress with modality-specific external markers for co-registration. SPECT and CT data were initially interpreted separately and the fused images were interpreted jointly in consensus by nuclear medicine and diagnostic radiology physicians. SPECT was true-positive (TP) in 18 patients, true-negative (TN) in 16, false-negative (FN) in ten and false-positive (FP) in nine; CT was TP in 18 patients, TN in 21, FP in ten and FN in four. With image fusion (SPECT and CT), the scan result was TP in 27 patients (50.9%), TN in 25 patients (47.2%) and FN in one patient, this FN result being caused by multiple small liver metastases; sensitivity was 95% and specificity, 100%. The difference between SPECT and SPECT/CT image fusion was statistically significant, as was the difference between CT and SPECT/CT image fusion (P<0
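
    Records like this one summarise performance from TP/TN/FP/FN counts; a small helper showing how the reported sensitivity, specificity and related values follow from such counts (the counts in the test are illustrative, not the study's):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 confusion-matrix metrics used in these study designs."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```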

  17. SPECT/CT Fusion in the Diagnosis of Hyperparathyroidism

    International Nuclear Information System (INIS)

    Monzen, Yoshio; Tamura, Akihisa; Okazaki, Hajime; Kurose, Taichi; Kobayashi, Masayuki; Kuraoka, Masatsugu

    2015-01-01

    In this study, we aimed to analyze the relationship between the diagnostic ability of fused single photon emission computed tomography/computed tomography (SPECT/CT) images in localization of parathyroid lesions and the size of adenomas or hyperplastic glands. Five patients with primary hyperparathyroidism (PHPT) and 4 patients with secondary hyperparathyroidism (SHPT) were imaged 15 and 120 minutes after the intravenous injection of technetium-99m-methoxyisobutylisonitrile (99mTc-MIBI). All patients underwent surgery and 5 parathyroid adenomas and 10 hyperplastic glands were detected. Pathologic findings were correlated with imaging results. The SPECT/CT fusion images were able to detect all parathyroid adenomas, even at a greatest axial diameter of 0.6 cm. Planar scintigraphy and SPECT imaging could not detect parathyroid adenomas with an axial diameter of 1.0 to 1.2 cm. Four out of 10 (40%) hyperplastic parathyroid glands were diagnosed using planar and SPECT imaging, and 5 out of 10 (50%) hyperplastic parathyroid glands were localized using SPECT/CT fusion images. SPECT/CT fusion imaging is a more useful tool for localization of parathyroid lesions, particularly parathyroid adenomas, in comparison with planar and/or SPECT imaging.

  18. Value of image fusion using single photon emission computed tomography with integrated low dose computed tomography in comparison with a retrospective voxel-based method in neuroendocrine tumours

    International Nuclear Information System (INIS)

    Amthauer, H.; Denecke, T.; Ruf, J.; Gutberlet, M.; Felix, R.; Lemke, A.J.; Rohlfing, T.; Boehmig, M.; Ploeckinger, U.

    2005-01-01

    The objective was the evaluation of single photon emission computed tomography (SPECT) with integrated low dose computed tomography (CT) in comparison with a retrospective fusion of SPECT and high-resolution CT and a side-by-side analysis for lesion localisation in patients with neuroendocrine tumours. Twenty-seven patients were examined by multidetector CT. Additionally, as part of somatostatin receptor scintigraphy (SRS), an integrated SPECT-CT was performed. SPECT and CT data were fused using software with a registration algorithm based on normalised mutual information. The reliability of the topographic assignment of lesions in SPECT-CT, retrospective fusion and side-by-side analysis was evaluated by two blinded readers. Two patients were not enrolled in the final analysis because of misregistrations in the retrospective fusion. Eighty-seven foci were included in the analysis. For the anatomical assignment of foci, SPECT-CT and retrospective fusion revealed overall accuracies of 91 and 94% (side-by-side analysis 86%). The correct identification of foci as lymph node manifestations (n=25) was more accurate by retrospective fusion (88%) than from SPECT-CT images (76%) or by side-by-side analysis (60%). Both modalities of image fusion appear to be well suited for the localisation of SRS foci and are superior to side-by-side analysis of non-fused images especially concerning lymph node manifestations. (orig.)
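
    The retrospective fusion in this record registers SPECT to CT with an algorithm based on normalised mutual information; a minimal sketch of the NMI similarity measure itself (the bin count is an arbitrary assumption, and a full registration would wrap this measure in an optimiser over spatial transforms):

```python
import numpy as np

def normalised_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B); higher means better alignment."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()          # joint intensity distribution
    px = pxy.sum(axis=1)             # marginal of image A
    py = pxy.sum(axis=0)             # marginal of image B

    def H(p):                        # Shannon entropy, ignoring empty bins
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (H(px) + H(py)) / H(pxy.ravel())
```

    For perfectly aligned identical images the joint histogram is diagonal, so H(A, B) = H(A) and the measure attains its maximum of 2.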

  19. Evaluation of an Automated Analysis Tool for Prostate Cancer Prediction Using Multiparametric Magnetic Resonance Imaging.

    Directory of Open Access Journals (Sweden)

    Matthias C Roethke

    Full Text Available To evaluate the diagnostic performance of an automated analysis tool for the assessment of prostate cancer based on multiparametric magnetic resonance imaging (mpMRI) of the prostate. A fully automated analysis tool was used for a retrospective analysis of mpMRI sets (T2-weighted, T1-weighted dynamic contrast-enhanced, and diffusion-weighted sequences). The software provided a malignancy prediction value for each image pixel, defined as the Malignancy Attention Index (MAI), that can be depicted as a colour map overlay on the original images. The malignancy maps were compared to histopathology derived from a combination of MRI-targeted and systematic transperineal MRI/TRUS-fusion biopsies. In total, mpMRI data of 45 patients were evaluated. With a sensitivity of 85.7% (95% CI 65.4-95.0), a specificity of 87.5% (95% CI 69.0-95.7) and a diagnostic accuracy of 86.7% (95% CI 73.8-93.8) for detection of prostate cancer, the automated analysis results corresponded well with the diagnostic accuracies reported for human readers based on the PI-RADS system in the current literature. The study revealed comparable diagnostic accuracies for the detection of prostate cancer between a user-independent MAI-based automated analysis tool and PI-RADS-scoring-based human reader analysis of mpMRI. Thus, the analysis tool could serve as a detection support system for less experienced readers. The results of the study also suggest the potential of MAI-based analysis for advanced lesion assessments, such as cancer extent and staging prediction.

  20. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome.

    Science.gov (United States)

    Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina

    2018-01-01

    The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.

  1. Summary report for IAEA CRP on lifetime prediction for the first wall of a fusion machine (JAERI contribution)

    International Nuclear Information System (INIS)

    Suzuki, Satoshi; Araki, Masanori; Akiba, Masato

    1993-03-01

    The IAEA Coordinated Research Program (CRP) on 'Lifetime Prediction for the First Wall of a Fusion Machine' was started in 1989. Five participants, the Joint Research Centre (JRC-Ispra), the NET team, Kernforschungszentrum Karlsruhe (KfK), the Russian Research Center and the Japan Atomic Energy Research Institute, contributed to this activity. The purpose of the CRP is to evaluate the thermal fatigue behavior of the first wall of a next-generation fusion machine by means of numerical methods and also to contribute to the design activities for ITER (International Thermonuclear Experimental Reactor). Thermal fatigue experiments on a first wall mock-up, which were carried out at JRC-Ispra, were selected as the first benchmark exercise model. All participants performed finite element analyses with various analytical codes to predict the lifetime of the simulated first wall. The first benchmark exercise was successfully finished in 1992. This report summarizes JAERI's contribution to this first benchmark exercise. (author)

  2. Image processing with cellular nonlinear networks implemented on field-programmable gate arrays for real-time applications in nuclear fusion

    International Nuclear Information System (INIS)

    Palazzo, S.; Vagliasindi, G.; Arena, P.; Murari, A.; Mazon, D.; De Maack, A.

    2010-01-01

    In the past years cameras have become increasingly common tools in scientific applications. They are now quite systematically used in magnetic confinement fusion, to the point that infrared imaging is starting to be used systematically for real-time machine protection in major devices. However, in order to guarantee that the control system can always react rapidly in case of critical situations, the time required for the processing of the images must be as predictable as possible. The approach described in this paper combines the new computational paradigm of cellular nonlinear networks (CNNs) with field-programmable gate arrays and has been tested in an application for the detection of hot spots on the plasma facing components in JET. The developed system is able to perform real-time hot spot recognition, by processing the image stream captured by JET wide angle infrared camera, with the guarantee that computational time is constant and deterministic. The statistical results obtained from a quite extensive set of examples show that this solution approximates very well an ad hoc serial software algorithm, with no false or missed alarms and an almost perfect overlapping of alarm intervals. The computational time can be reduced to a millisecond time scale for 8 bit 496x560-sized images. Moreover, in our implementation, the computational time, besides being deterministic, is practically independent of the number of iterations performed by the CNN - unlike software CNN implementations.
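
    The hot-spot recognition task described above amounts to isolating connected regions of high intensity in each infrared frame. A minimal serial sketch (not the CNN/FPGA implementation) using thresholding plus 4-connected region labelling:

```python
import numpy as np
from collections import deque

def detect_hot_spots(frame, threshold):
    """Binarise an IR frame and label 4-connected hot regions.
    Returns a list of pixel-count sizes, one per detected hot spot."""
    mask = frame > threshold
    seen = np.zeros_like(mask, dtype=bool)
    sizes = []
    H, W = mask.shape
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                # breadth-first flood fill of one connected hot region
                q, size = deque([(i, j)]), 0
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                sizes.append(size)
    return sizes
```

    The appeal of the CNN-on-FPGA approach in the record is precisely that, unlike this serial scan, its run time is fixed by the hardware rather than by image content.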

  3. Imaging transient blood vessel fusion events in zebrafish by correlative volume electron microscopy.

    Directory of Open Access Journals (Sweden)

    Hannah E J Armer

    Full Text Available The study of biological processes has become increasingly reliant on obtaining high-resolution spatial and temporal data through imaging techniques. As researchers demand molecular resolution of cellular events in the context of whole organisms, correlation of non-invasive live-organism imaging with electron microscopy in complex three-dimensional samples becomes critical. The developing blood vessels of vertebrates form a highly complex network which cannot be imaged at high resolution using traditional methods. Here we show that the point of fusion between growing blood vessels of transgenic zebrafish, identified in live confocal microscopy, can subsequently be traced through the structure of the organism using Focused Ion Beam/Scanning Electron Microscopy (FIB/SEM and Serial Block Face/Scanning Electron Microscopy (SBF/SEM. The resulting data give unprecedented microanatomical detail of the zebrafish and, for the first time, allow visualization of the ultrastructure of a time-limited biological event within the context of a whole organism.

  4. A Visible and Passive Millimeter Wave Image Fusion Algorithm Based on Pulse-Coupled Neural Network in Tetrolet Domain for Early Risk Warning

    Directory of Open Access Journals (Sweden)

    Yuanjiang Li

    2018-01-01

    Full Text Available An algorithm based on pulse-coupled neural network (PCNN constructed in the Tetrolet transform domain is proposed for the fusion of the visible and passive millimeter wave images in order to effectively identify concealed targets. The Tetrolet transform is applied to build the framework of the multiscale decomposition due to its high sparse degree. Meanwhile, a Laplacian pyramid is used to decompose the low-pass band of the Tetrolet transform for improving the approximation performance. In addition, the maximum criterion based on regional average gradient is applied to fuse the top layers along with selecting the maximum absolute values of the other layers. Furthermore, an improved PCNN model is employed to enhance the contour feature of the hidden targets and obtain the fusion results of the high-pass band based on the firing time. Finally, the inverse transform of Tetrolet is exploited to obtain the fused results. Some objective evaluation indexes, such as information entropy, mutual information, and QAB/F, are adopted for evaluating the quality of the fused images. The experimental results show that the proposed algorithm is superior to other image fusion algorithms.
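
    The decompose-merge-reconstruct pattern used here (Tetrolet plus Laplacian pyramid, maximum rules for detail bands, averaging or PCNN firing for the remaining bands) can be illustrated with an ordinary Laplacian pyramid. A toy sketch, assuming square images with power-of-two sides and using max-absolute selection for detail and averaging for the base band:

```python
import numpy as np

def down(img):   # 2x2 block mean (crude low-pass + decimation)
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):     # nearest-neighbour upsampling
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        low = down(img)
        pyr.append(img - up(low))   # detail layer at this scale
        img = low
    pyr.append(img)                 # coarsest approximation
    return pyr

def fuse(a, b, levels=2):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(x) >= np.abs(y), x, y)   # keep strongest detail
             for x, y in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))             # average the base band
    out = fused[-1]
    for detail in reversed(fused[:-1]):               # reconstruct
        out = up(out) + detail
    return out
```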

  5. Neurosurgical treatment of drug-resistant epilepsy on the basis of a fusion of MRI and SPECT images - case report

    International Nuclear Information System (INIS)

    Jurkiewicz, E.; Bekiesinska-Figatowska, M.; Misko, J.; Kaminska, A.; Kwiatkowski, S.; Terczynska, I.

    2010-01-01

    Background: Epilepsy concerns at least 0.5% of the population and in most of the cases (approx. 70%) can be treated pharmacologically, which helps to prevent seizures. In all other patients, such a treatment does not produce the desired results. Their condition may require neurosurgical management. The aim of this work was to fuse anatomical MRI images and functional SPECT images in patients with drug-resistant epilepsy, without structural changes on MRI or with changes so severe that it would be impossible to establish which ones are responsible for seizures. The authors presented a case of a child subjected to a neurosurgical procedure carried out on the basis of the fused MRI and SPECT images. Case Report: A seven-year-old boy with an extensive defect of the right hemisphere (cortical dysplasia with multiple balloon-like cells) operated on three times due to a history of treatment-resistant seizures present since the age of one. A subsequent MRI examination was performed with magnetic field intensity of 1.5 T, within a routine epilepsy protocol applying volumetric thin-slice T1-weighted images. Next, in the interictal period, a SPECT examination was performed with the use of the 99mTc-labelled ethyl cysteinate dimer (99mTc-ECD). For fusion and postprocessing, the following software was used: PMOD (Biomedical Image Quantification PMOD Technologies) with PFUS (Flexible Image Matching and Fusion Tool) and a program for a quantitative analysis of counts in the region of interest, the so-called VOI Constructor (Volume of Interest Constructor). On the basis of the fusion of images, the boy was subjected to the next operative procedure. The remaining fragments of the right frontal and parietal lobe adjacent to the occipital lobe were removed. Seizure remission was obtained and it was already 31 months long when we were writing this article. Conclusions: Owing to this multi-stage procedure, it was possible to avoid a total anatomical and functional hemispherectomy. This

  6. Semi-automated measurements of heart-to-mediastinum ratio on 123I-MIBG myocardial scintigrams by using image fusion method with chest X-ray images

    Science.gov (United States)

    Kawai, Ryosuke; Hara, Takeshi; Katafuchi, Tetsuro; Ishihara, Tadahiko; Zhou, Xiangrong; Muramatsu, Chisako; Abe, Yoshiteru; Fujita, Hiroshi

    2015-03-01

    MIBG (iodine-123-meta-iodobenzylguanidine) is a radioactive medicine that is used to help diagnose not only myocardial diseases but also Parkinson's disease (PD) and dementia with Lewy bodies (DLB). The difficulty of segmentation around the myocardium often reduces the consistency of measurement results. One of the most common measurement methods is the ratio of the uptake values of the heart to mediastinum (H/M). This ratio is stable and independent of the operator when the uptake value in the myocardium region is clearly higher than that in the background; however, it becomes unreliable when the myocardium region is unclear because of low uptake values. This study aims to develop a new measurement method by using the image fusion of three modalities of MIBG scintigrams, 201-Tl scintigrams, and chest radiograms, to increase the reliability of the H/M measurement results. Our automated method consists of the following steps: (1) construct a left ventricular (LV) map from a 201-Tl myocardium image database, (2) determine the heart region in chest radiograms, (3) determine the mediastinum region in chest radiograms, (4) perform image fusion of chest radiograms and MIBG scintigrams, and (5) perform H/M measurements on MIBG scintigrams by using the locations of heart and mediastinum determined on the chest radiograms. We collected 165 cases with 201-Tl scintigrams and chest radiograms to construct the LV map. Another 65 cases with MIBG scintigrams and chest radiograms were also collected for the measurements. Four radiological technologists (RTs) manually measured the H/M in the MIBG images. We compared the four RTs' results with our computer outputs by using Pearson's correlation, the Bland-Altman method, and the equivalency test method. As a result, the correlations of the H/M between the four RTs and the computer were 0.85 to 0.88. We confirmed systematic errors between the four RTs and the computer as well as among the four RTs. The variation range of the H
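
    Once the heart and mediastinum regions are fixed (step 5 above), the H/M measurement itself is just a ratio of mean ROI counts; a minimal sketch with hypothetical boolean masks standing in for the fusion-derived regions:

```python
import numpy as np

def heart_to_mediastinum_ratio(img, heart_mask, mediastinum_mask):
    """Mean counts in the heart ROI divided by mean counts in the
    mediastinum ROI. In the paper the ROIs come from landmarks on the
    fused chest radiograph; any boolean masks work for this sketch."""
    return img[heart_mask].mean() / img[mediastinum_mask].mean()
```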

  7. A Hierarchical Convolutional Neural Network for vesicle fusion event classification.

    Science.gov (United States)

    Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke

    2017-09-01

    Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from the fluorescence microscopy are of primary importance for biomedical researches. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. Firstly, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied on each image patch of the patch sequence with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity change features introduced by GMM and the visual appearance features embedded in some key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets that have been annotated by cell biologists, and our method achieves better performances when comparing with three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
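
    The GMM step above fits a mixture to patch intensities before extracting time-series features. A plain-EM sketch of a two-component 1-D Gaussian mixture (initialisation and iteration count are arbitrary assumptions; the paper's version additionally rejects outliers for robust fitting):

```python
import numpy as np

def fit_gmm_1d(x, iters=50):
    """Two-component 1-D Gaussian mixture fitted by plain EM."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])             # spread-out initial means
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var
```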

  8. Assessment of anatomic relation between pulmonary perfusion and morphology in pulmonary emphysema with breath-hold SPECT-CT fusion images

    International Nuclear Information System (INIS)

    Suga, Kazuyoshi; Kawakami, Yasuhiko; Iwanaga, Hideyuki; Hayashi, Noriko; Seto, Akiko; Matsunaga, Naofumi

    2008-01-01

    Anatomic relation between pulmonary perfusion and morphology in pulmonary emphysema was assessed on deep-inspiratory breath-hold (DIBrH) perfusion single-photon emission computed tomography (SPECT)-CT fusion images. Subjects were 38 patients with pulmonary emphysema and 11 non-smoker controls, who successfully underwent DIBrH and non-BrH perfusion SPECT using a dual-headed SPECT system during the period between January 2004 and June 2006. DIBrH SPECT was three-dimensionally co-registered with DIBrH CT to comprehend the relationship between lung perfusion defects and CT low attenuation areas (LAA). By comparing the appearance of lung perfusion on DIBrH with non-BrH SPECT, the correlation with the rate constant for the alveolar-capillary transfer of carbon monoxide (DLCO/VA) was compared between perfusion abnormalities on these SPECTs and LAA on CT. DIBrH SPECT provided fairly uniform perfusion in controls, but significantly enhanced perfusion heterogeneity when compared with non-BrH SPECT in pulmonary emphysema patients (P<0.001). The reliable DIBrH SPECT-CT fusion images confirmed more extended perfusion defects than LAA on CT in majority (73%) of patients. Perfusion abnormalities on DIBrH SPECT were more closely correlated with DLCO/VA than LAA on CT (P<0.05). DIBrH SPECT identifies affected lungs with perfusion abnormality better than does non-BrH SPECT in pulmonary emphysema. DIBrH SPECT-CT fusion images are useful for more accurately localizing affected lungs than morphologic CT alone in this disease. (author)

  9. Bubble fusion: Preliminary estimates

    International Nuclear Information System (INIS)

    Krakowski, R.A.

    1995-01-01

    The collapse of a gas-filled bubble in disequilibrium (i.e., internal pressure << external pressure) can occur with a significant focusing of energy onto the entrapped gas in the form of pressure-volume work and/or acoustical shocks; the resulting heating can be sufficient to cause ionization and the emission of atomic radiations. The suggestion that the extreme conditions necessary for thermonuclear fusion may be attainable has been examined parametrically in terms of the ratio of initial bubble pressure relative to that required for equilibrium. In this sense, the disequilibrium bubble is viewed as a three-dimensional ''sling shot'' that is ''loaded'' to an extent allowed by the maximum level of disequilibrium that can stably be achieved. Values of this disequilibrium ratio in the range 10^-5 to 10^-6 are predicted by an idealized bubble-dynamics model as necessary to achieve conditions where nuclear fusion of deuterium-tritium might be observed. Harmonic and anharmonic pressurizations/decompressions are examined as means to achieve the levels of disequilibrium required to create fusion conditions. A number of phenomena not included in the analysis reported herein could enhance or reduce the small levels of nuclear fusions predicted.
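
    The energy focusing described here is often bounded with a textbook ideal-gas adiabatic-compression estimate (an assumption of this sketch, not the author's full bubble-dynamics model; shock heating and radiative losses are ignored). For an adiabatic collapse, TV^(gamma-1) is conserved and V scales as R^3, so T = T0 (R0/Rmin)^(3(gamma-1)):

```python
def adiabatic_peak_temperature(T0, R0, Rmin, gamma=5.0 / 3.0):
    """Ideal-gas adiabatic estimate of peak gas temperature when a bubble
    at initial radius R0 and temperature T0 collapses to radius Rmin:
    T = T0 * (R0/Rmin)**(3*(gamma - 1))."""
    return T0 * (R0 / Rmin) ** (3.0 * (gamma - 1.0))
```

    For a monatomic gas (gamma = 5/3) a tenfold radius reduction multiplies the temperature by a factor of 100, which illustrates why very large compression ratios (and hence large disequilibrium) are needed to approach fusion conditions.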

  10. Non-contrast magnetic resonance imaging for bladder cancer: fused high b value diffusion-weighted imaging and T2-weighted imaging helps evaluate depth of invasion

    International Nuclear Information System (INIS)

    Lee, Minsu; Oh, Young Taik; Jung, Dae Chul; Park, Sung Yoon; Shin, Su-Jin; Cho, Nam Hoon; Choi, Young Deuk

    2017-01-01

    To investigate the utility of fused high b value diffusion-weighted imaging (DWI) and T2-weighted imaging (T2WI) for evaluating depth of invasion in bladder cancer. We included 62 patients with magnetic resonance imaging (MRI) and surgically confirmed urothelial carcinoma in the urinary bladder. An experienced genitourinary radiologist analysed the depth of invasion (T stage <2 or ≥2) using T2WI, DWI, T2WI plus DWI, and fused DWI and T2WI (fusion MRI). Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy were investigated. Area under the curve (AUC) was analysed to identify T stage ≥2. The rate of patients with surgically confirmed T stage ≥2 was 41.9% (26/62). Sensitivity, specificity, PPV, NPV and accuracy were 50.0%, 55.6%, 44.8%, 60.6% and 53.2%, respectively, with T2WI; 57.7%, 77.8%, 65.2%, 71.8% and 69.4%, respectively, with DWI; 65.4%, 80.6%, 70.8%, 76.3% and 74.2%, respectively, with T2WI plus DWI and 80.8%, 77.8%, 72.4%, 84.9% and 79.0%, respectively, with fusion MRI. AUC was 0.528 with T2WI, 0.677 with DWI, 0.730 with T2WI plus DWI and 0.793 with fusion MRI for T stage ≥2. Fused high b value DWI and T2WI may be a promising non-contrast MRI technique for assessing depth of invasion in bladder cancer. (orig.)

  11. Non-contrast magnetic resonance imaging for bladder cancer: fused high b value diffusion-weighted imaging and T2-weighted imaging helps evaluate depth of invasion

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Minsu; Oh, Young Taik; Jung, Dae Chul; Park, Sung Yoon [Yonsei University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Shin, Su-Jin [Yonsei University College of Medicine, Department of Pathology, Seoul (Korea, Republic of); Hanyang University College of Medicine, Department of Pathology, Seoul (Korea, Republic of); Cho, Nam Hoon [Yonsei University College of Medicine, Department of Pathology, Seoul (Korea, Republic of); Choi, Young Deuk [Yonsei University College of Medicine, Department of Urology, Seoul (Korea, Republic of)

    2017-09-15

    To investigate the utility of fused high b value diffusion-weighted imaging (DWI) and T2-weighted imaging (T2WI) for evaluating depth of invasion in bladder cancer. We included 62 patients with magnetic resonance imaging (MRI) and surgically confirmed urothelial carcinoma in the urinary bladder. An experienced genitourinary radiologist analysed the depth of invasion (T stage <2 or ≥2) using T2WI, DWI, T2WI plus DWI, and fused DWI and T2WI (fusion MRI). Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy were investigated. Area under the curve (AUC) was analysed to identify T stage ≥2. The rate of patients with surgically confirmed T stage ≥2 was 41.9% (26/62). Sensitivity, specificity, PPV, NPV and accuracy were 50.0%, 55.6%, 44.8%, 60.6% and 53.2%, respectively, with T2WI; 57.7%, 77.8%, 65.2%, 71.8% and 69.4%, respectively, with DWI; 65.4%, 80.6%, 70.8%, 76.3% and 74.2%, respectively, with T2WI plus DWI and 80.8%, 77.8%, 72.4%, 84.9% and 79.0%, respectively, with fusion MRI. AUC was 0.528 with T2WI, 0.677 with DWI, 0.730 with T2WI plus DWI and 0.793 with fusion MRI for T stage ≥2. Fused high b value DWI and T2WI may be a promising non-contrast MRI technique for assessing depth of invasion in bladder cancer. (orig.)
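
    The AUCs reported in this record are rank statistics; a minimal sketch of the rank-based (Mann-Whitney) AUROC used for such comparisons (the scores and labels in the test are illustrative, not the study's data):

```python
def auc(scores, labels):
    """Rank-based AUROC: the probability that a randomly chosen positive
    case outranks a randomly chosen negative case (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```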

  12. A New Developed GIHS-BT-SFIM Fusion Method Based On Edge and Class Data

    Directory of Open Access Journals (Sweden)

    S. Dehnavi

    2013-09-01

    Full Text Available The objective of image fusion (sometimes called pan-sharpening) is to produce a single image containing the best aspects of the source images; the desirable aspects are high spatial resolution and high spectral resolution. With the development of spaceborne imaging sensors, a unified image fusion approach suitable for all employed imaging sources becomes necessary. Among the various image fusion methods, the intensity-hue-saturation (IHS) and Brovey transforms (BT) can quickly merge huge amounts of imagery; however, they often produce colour distortion in the fused images. SFIM fusion is one of the approaches most frequently employed in practice to control the trade-off between spatial and spectral information: it preserves more spectral information but suffers greater spatial information loss, and its effectiveness depends heavily on the filter design. In this work, two modifications were tested to improve the spectral quality of the images, and class-based fusion results were also investigated. First, a generalized intensity-hue-saturation (GIHS), Brovey transform (BT) and smoothing-filter-based intensity modulation (SFIM) approach was implemented. This kind of algorithm has computational advantages over other fusion methods such as wavelets, and, as discussed in the literature, can be extended to different numbers of bands. The GIHS-BT-SFIM algorithm used here incorporates the IHS, IHS-BT, BT, BT-SFIM and SFIM methods through two adjustable parameters. Second, a method was proposed to add edge information to the previous GIHS-BT-SFIM and to perform edge enhancement with the panchromatic image; adding the panchromatic data to the images brought little improvement. Third, an edge-adaptive GIHS-BT-SFIM was proposed to enforce fidelity away from the edges; using the MS image off the edges showed spectral improvement in some fusion methods. Fourth, a class-based fusion was tested, which tries different coefficients for each method according to its class.
    The best parameters for vegetated areas were k1 = 0.6, k2
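    The SFIM component of the algorithm admits a compact sketch: each fused band is the multispectral band modulated by the ratio of the panchromatic image to its low-pass version. The toy implementation below (plain Python with a simple box filter; the filter size and test images are assumptions, not the paper's configuration) illustrates the formula fused = MS x Pan / lowpass(Pan):

```python
def mean_filter(img, k=1):
    """Box (mean) filter with a (2k+1)^2 window, clamped at the borders."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - k), min(h, i + k + 1))
                    for jj in range(max(0, j - k), min(w, j + k + 1))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

def sfim(ms, pan, k=1, eps=1e-9):
    """SFIM fusion: fused = MS * Pan / lowpass(Pan)."""
    low = mean_filter(pan, k)
    return [[ms[i][j] * pan[i][j] / (low[i][j] + eps)
             for j in range(len(ms[0]))] for i in range(len(ms))]

pan = [[4.0] * 4 for _ in range(4)]                     # flat pan image
ms = [[float(i + j) for j in range(4)] for i in range(4)]
fused = sfim(ms, pan)
```

On a flat panchromatic image the low-pass equals the original, so the fused result reduces to the multispectral band, which is exactly the spectral-preservation property the abstract describes.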

  13. 128 MULTIPLE CERVICAL VERTEBRAL FUSION WITH ...

    African Journals Online (AJOL)

    GARGI

    Fusions of all zygapophyseal joints were observed. The CT image of the specimen confirmed the ossification of the anterior longitudinal ligament with mild calcification of intervertebral discs. With the above features and bony ankylosis of articular facets, it was concluded that this fusion might be due to ankylosing spondylitis.

  14. Prediction of microstructure, residual stress, and deformation in laser powder bed fusion process

    Science.gov (United States)

    Yang, Y. P.; Jamshidinia, M.; Boulware, P.; Kelly, S. M.

    2017-12-01

    The laser powder bed fusion (L-PBF) process has been investigated extensively for building production parts with complex shapes. Modeling tools that can be used at the part level are essential to allow engineers to fine-tune the shape design and process parameters for additive manufacturing. This study focuses on developing modeling methods to predict microstructure, hardness, residual stress, and deformation in large L-PBF built parts. A transient, sequentially coupled thermal and metallurgical analysis method was developed to predict microstructure and hardness in L-PBF built high-strength, low-alloy steel parts. A moving heat-source model was used in this analysis to accurately predict the temperature history. A kinetics-based model, originally developed to predict microstructure in the heat-affected zone of a welded joint, was extended to predict the microstructure and hardness in an L-PBF build by inputting the predicted temperature history. The tempering effect of subsequently built layers on the microstructural phases of the current layer was modeled, which is the key to predicting the final hardness correctly. It was also found that the top layers of a built part have higher hardness because of the lack of this tempering effect. A sequentially coupled thermal and mechanical analysis method was developed to predict residual stress and deformation for an L-PBF built part. It was found that a line-heating model is not suitable for analyzing a large L-PBF built part; the layer-heating method is a potential alternative. Experiments were conducted to validate the model predictions.
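    The core of such a thermal analysis is a transient heat-conduction solve with a moving heat source. The toy 1D explicit finite-difference sketch below is only a schematic of that idea; all material and process constants are invented for illustration, and the study's actual model is 3D and far more detailed:

```python
import math

# 1D rod, explicit finite differences, Gaussian heat source moving along x.
nx, dx, dt = 60, 1.0e-3, 1.0e-3        # grid points, spacing [m], time step [s]
alpha = 5.0e-6                         # thermal diffusivity [m^2/s] (assumed)
v, q, sigma = 0.01, 2.0e4, 2.0e-3      # source speed [m/s], strength [K/s], width [m]
T = [20.0] * nx                        # initial temperature [deg C]
history = []                           # peak temperature at each step
for n in range(400):
    xs = 0.01 + v * n * dt             # current source position
    Tn = T[:]                          # fixed-temperature ends (i=0, i=nx-1)
    for i in range(1, nx - 1):
        x = i * dx
        src = q * math.exp(-((x - xs) / sigma) ** 2)
        Tn[i] = T[i] + dt * (alpha * (T[i + 1] - 2 * T[i] + T[i - 1]) / dx ** 2 + src)
    T = Tn
    history.append(max(T))
```

The recorded `history` is the kind of temperature-history input the abstract feeds to its kinetics-based microstructure model; note the scheme is stable here because alpha*dt/dx^2 is well below 0.5.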

  15. Prediction of microstructure, residual stress, and deformation in laser powder bed fusion process

    Science.gov (United States)

    Yang, Y. P.; Jamshidinia, M.; Boulware, P.; Kelly, S. M.

    2018-05-01

    The laser powder bed fusion (L-PBF) process has been investigated extensively for building production parts with complex shapes. Modeling tools that can be used at the part level are essential to allow engineers to fine-tune the shape design and process parameters for additive manufacturing. This study focuses on developing modeling methods to predict microstructure, hardness, residual stress, and deformation in large L-PBF built parts. A transient, sequentially coupled thermal and metallurgical analysis method was developed to predict microstructure and hardness in L-PBF built high-strength, low-alloy steel parts. A moving heat-source model was used in this analysis to accurately predict the temperature history. A kinetics-based model, originally developed to predict microstructure in the heat-affected zone of a welded joint, was extended to predict the microstructure and hardness in an L-PBF build by inputting the predicted temperature history. The tempering effect of subsequently built layers on the microstructural phases of the current layer was modeled, which is the key to predicting the final hardness correctly. It was also found that the top layers of a built part have higher hardness because of the lack of this tempering effect. A sequentially coupled thermal and mechanical analysis method was developed to predict residual stress and deformation for an L-PBF built part. It was found that a line-heating model is not suitable for analyzing a large L-PBF built part; the layer-heating method is a potential alternative. Experiments were conducted to validate the model predictions.

  16. Development of motion image prediction method using principal component analysis

    International Nuclear Information System (INIS)

    Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma

    2012-01-01

    Respiratory motion limits the accuracy of the irradiated area during lung cancer radiation therapy. Many methods have been introduced to minimize the irradiation of healthy tissue caused by lung tumor motion. The purpose of this research is to develop an algorithm that improves image-guided radiation therapy through the prediction of motion images. We predict the motion images using principal component analysis (PCA) and the multi-channel singular spectrum analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. Implementing this method in real time is believed to be significant for a higher level of tumor tracking, including the detection of sudden abdominal changes during radiation therapy. (author)
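    The essence of PCA-based motion-image prediction can be sketched as: project the frame sequence onto its leading principal component and extrapolate the component score one step ahead. The stdlib-only toy below is a simplifying assumption on several counts (tiny "frames", a power-iteration eigensolver, linear score extrapolation, and no MSSA stage, which the paper additionally uses):

```python
import random

def predict_next(frames):
    """Predict the next frame from the first principal component's score."""
    d, n = len(frames[0]), len(frames)
    mean = [sum(f[i] for f in frames) / n for i in range(d)]
    X = [[f[i] - mean[i] for i in range(d)] for f in frames]   # centered
    # covariance matrix (d x d)
    C = [[sum(x[i] * x[j] for x in X) / n for j in range(d)] for i in range(d)]
    rnd = random.Random(0)
    v = [rnd.random() for _ in range(d)]
    for _ in range(100):                      # power iteration -> top eigenvector
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(val * val for val in w) ** 0.5
        v = [val / norm for val in w]
    scores = [sum(x[i] * v[i] for i in range(d)) for x in X]   # PC-1 scores
    nxt = 2 * scores[-1] - scores[-2]         # linear extrapolation one step
    return [mean[i] + nxt * v[i] for i in range(d)]

# toy "frames": each frame drifts along a fixed direction (t = 0..5)
direction = [0.6, 0.8, 0.0]
frames = [[t * direction[i] for i in range(3)] for t in range(6)]
pred = predict_next(frames)                   # should be close to t = 6
```

Because the prediction is sign-invariant in the eigenvector, the extrapolated frame is recovered exactly for this linearly drifting sequence.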

  17. Role of magnetic resonance urography in pediatric renal fusion anomalies

    International Nuclear Information System (INIS)

    Chan, Sherwin S.; Ntoulia, Aikaterini; Khrichenko, Dmitry; Back, Susan J.; Darge, Kassa; Tasian, Gregory E.; Dillman, Jonathan R.

    2017-01-01

    Renal fusion lies on a spectrum of congenital abnormalities that occur due to disruption of the migration of the embryonic kidneys from the pelvis to the retroperitoneal renal fossae. Clinically, renal fusion anomalies are often found incidentally and are associated with an increased risk of complications such as urinary tract obstruction, infection and urolithiasis. These anomalies are most commonly imaged using ultrasound for anatomical definition and, less frequently, renal scintigraphy to quantify differential renal function and assess urinary tract drainage. Functional magnetic resonance urography (fMRU) is an advanced imaging technique that combines the excellent soft-tissue contrast of conventional magnetic resonance (MR) images with quantitative assessment based on contrast medium uptake and excretion kinetics to provide information on renal function and drainage. fMRU has been shown to be clinically useful in evaluating a number of urological conditions. A highly sensitive and radiation-free imaging modality, fMRU can provide detailed morphological and functional information that can facilitate conservative and/or surgical management of children with renal fusion anomalies. This paper reviews the embryological basis of the different types of renal fusion anomalies, their imaging appearances at fMRU, the complications associated with fusion anomalies, and the important role of fMRU in diagnosing and managing children with these anomalies. (orig.)

  18. Role of magnetic resonance urography in pediatric renal fusion anomalies

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Sherwin S. [Children' s Mercy Hospital, Department of Radiology, Kansas City, MO (United States); Ntoulia, Aikaterini; Khrichenko, Dmitry [The Children' s Hospital of Philadelphia, Division of Body Imaging, Department of Radiology, Philadelphia, PA (United States); Back, Susan J.; Darge, Kassa [The Children' s Hospital of Philadelphia, Division of Body Imaging, Department of Radiology, Philadelphia, PA (United States); University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA (United States); Tasian, Gregory E. [University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA (United States); The Children' s Hospital of Philadelphia, Division of Urology, Department of Surgery, Philadelphia, PA (United States); Dillman, Jonathan R. [Cincinnati Children' s Hospital Medical Center, Division of Thoracoabdominal Imaging, Department of Radiology, Cincinnati, OH (United States)

    2017-12-15

    Renal fusion lies on a spectrum of congenital abnormalities that occur due to disruption of the migration of the embryonic kidneys from the pelvis to the retroperitoneal renal fossae. Clinically, renal fusion anomalies are often found incidentally and are associated with an increased risk of complications such as urinary tract obstruction, infection and urolithiasis. These anomalies are most commonly imaged using ultrasound for anatomical definition and, less frequently, renal scintigraphy to quantify differential renal function and assess urinary tract drainage. Functional magnetic resonance urography (fMRU) is an advanced imaging technique that combines the excellent soft-tissue contrast of conventional magnetic resonance (MR) images with quantitative assessment based on contrast medium uptake and excretion kinetics to provide information on renal function and drainage. fMRU has been shown to be clinically useful in evaluating a number of urological conditions. A highly sensitive and radiation-free imaging modality, fMRU can provide detailed morphological and functional information that can facilitate conservative and/or surgical management of children with renal fusion anomalies. This paper reviews the embryological basis of the different types of renal fusion anomalies, their imaging appearances at fMRU, the complications associated with fusion anomalies, and the important role of fMRU in diagnosing and managing children with these anomalies. (orig.)

  19. In-Service Design and Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation

    International Nuclear Information System (INIS)

    G. R. Odette; G. E. Lucas

    2005-01-01

    This final report on "In-Service Design and Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation" (DE-FG03-01ER54632) consists of a series of summaries of work that has been published, or presented at meetings, or both. It briefly describes results on the following topics: (1) A Transport and Fate Model for Helium and Helium Management; (2) Atomistic Studies of Point Defect Energetics, Dynamics and Interactions; (3) Multiscale Modeling of Fracture, consisting of: (3a) A Micromechanical Model of the Master Curve (MC) Universal Fracture Toughness-Temperature Curve Relation, KJc(T - To), (3b) An Embrittlement DTo Prediction Model for the Irradiation Hardening Dominated Regime, (3c) Non-hardening Irradiation Assisted Thermal and Helium Embrittlement of 8Cr Tempered Martensitic Steels: Compilation and Analysis of Existing Data, (3d) A Model for the KJc(T) of a High Strength NFA MA957, (3e) Cracked Body Size and Geometry Effects of Measured and Effective Fracture Toughness-Model Based MC and To Evaluations of F82H and Eurofer 97, (3f) Size and Geometry Effects on the Effective Toughness of Cracked Fusion Structures; (4) Modeling the Multiscale Mechanics of Flow Localization-Ductility Loss in Irradiation Damaged BCC Alloys; and (5) A Universal Relation Between Indentation Hardness and True Stress-Strain Constitutive Behavior. Further details can be found in the cited references or presentations, which can generally be accessed on the internet or provided upon request to the authors. Finally, it is noted that this effort was integrated with our base program in fusion materials, also funded by the DOE OFES.

  20. A Standard Mammography Unit - Standard 3D Ultrasound Probe Fusion Prototype: First Results.

    Science.gov (United States)

    Schulz-Wendtland, Rüdiger; Jud, Sebastian M; Fasching, Peter A; Hartmann, Arndt; Radicke, Marcus; Rauh, Claudia; Uder, Michael; Wunderle, Marius; Gass, Paul; Langemann, Hanna; Beckmann, Matthias W; Emons, Julius

    2017-06-01

    The combination of different imaging modalities through the use of fusion devices promises significant diagnostic improvement for breast pathology. The aim of this study was to evaluate the image quality and clinical feasibility of a prototype fusion device (fusion prototype) constructed from a standard tomosynthesis mammography unit and a standard 3D ultrasound probe using a new method of breast compression. Imaging was performed on 5 mastectomy specimens from patients with confirmed DCIS or invasive carcinoma (BI-RADS™ 6). For the preclinical fusion prototype, an ABVS ultrasound probe from an Acuson S2000 was integrated into a MAMMOMAT Inspiration (both Siemens Healthcare Ltd) and, with the aid of a newly developed compression plate, digital mammograms and automated 3D ultrasound images were obtained. The quality of the digital mammogram images produced by the fusion prototype was comparable to those produced using conventional compression. The newly developed compression plate did not influence the applied x-ray dose, and the method was no more labour-intensive or time-consuming than conventional mammography. From the technical perspective, fusion of the two modalities was achievable. In this study, using only a few mastectomy specimens, the fusion of an automated 3D ultrasound machine with a standard mammography unit delivered images of comparable quality to conventional mammography. The device allows simultaneous ultrasound, the second important imaging modality in complementary breast diagnostics, without increasing examination time or requiring additional staff.

  1. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation

    International Nuclear Information System (INIS)

    Wang, Yan; Zhou, Jiliu; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Shen, Dinggang; Wu, Xi; Lalush, David S; Lin, Weili

    2016-01-01

    Positron emission tomography (PET) has been widely used in the clinical diagnosis of diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and the low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between multimodal MR images (or the low-dose PET image) and standard-dose PET images can be particularly complex, one mapping step is often insufficient. To this end, an incremental refinement framework is proposed: the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and the SR is then performed again to predict a new standard-dose PET image. This procedure can be repeated over several iterations to progressively refine the prediction. A patch-selection-based dictionary construction method is also used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method outperforms benchmark methods in both qualitative and quantitative measures. (paper)
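    A drastically simplified version of the coupled-dictionary idea behind such prediction: encode a low-dose patch against a low-dose dictionary, then apply the same coefficient to the paired standard-dose atom. The sketch below uses a one-atom matching-pursuit step rather than the paper's full mapping-based sparse representation; the dictionaries and the query patch are toy assumptions:

```python
def predict_patch(query, D_low, D_std):
    """One-atom coding: choose the low-dose atom with the smallest
    least-squares residual against the query, then apply the same
    coefficient to the paired standard-dose atom."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    best = None
    for k, atom in enumerate(D_low):
        c = dot(query, atom) / dot(atom, atom)       # least-squares coefficient
        resid = dot(query, query) - c * dot(query, atom)
        if best is None or resid < best_resid:
            best, coef, best_resid = k, c, resid
    return [coef * v for v in D_std[best]]

D_low = [[1.0, 0.0], [0.0, 1.0]]   # low-dose patch dictionary (paired atoms)
D_std = [[3.0, 1.0], [1.0, 3.0]]   # corresponding standard-dose atoms
pred = predict_patch([0.0, 2.0], D_low, D_std)
```

The real framework solves a multi-atom sparse code and iterates the low-to-standard mapping; this single-atom version only shows why a code estimated on one dictionary can be reused on its paired dictionary.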

  2. Modeling and Prediction of Coal Ash Fusion Temperature based on BP Neural Network

    Directory of Open Access Journals (Sweden)

    Miao Suzhen

    2016-01-01

    Full Text Available Coal ash is the residue generated from the combustion of coal. The ash fusion temperature (AFT) of coal gives detailed information on the suitability of a coal source for gasification procedures, and specifically on the extent to which ash agglomeration or clinkering is likely to occur within the gasifier. To investigate the contribution of oxides in coal ash to the AFT, data on coal ash chemical compositions and softening temperature (ST) from different regions of China were collected in this work, and a BP neural network model was established with the XD-APC PLATFORM. In the BP model, the inputs were the ash compositions and the output was the ST. In addition, an ash fusion temperature prediction model was obtained from industrial data, and the model was generalized with different industrial data. Compared to empirical formulas, the BP neural network obtained better results. Through different tests, the best result and the best configuration for the model were obtained: the number of hidden-layer nodes of the BP network was set to three, the component contents (SiO2, Al2O3, Fe2O3, CaO, MgO) were used as inputs, and the ST was used as the output of the model.
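    A BP (back-propagation) network of this shape, five oxide inputs, three hidden nodes and one ST output, can be sketched in a few dozen lines of plain Python. Everything below (the samples, the target scaling, the learning rate) is an invented toy, not the XD-APC model or its industrial data:

```python
import math, random

random.seed(1)
n_in, n_hid = 5, 3                     # five oxide fractions in, 3 hidden nodes
W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(W1[j][i] * x[i] for i in range(n_in)) + b1[j])
         for j in range(n_hid)]
    return h, sum(W2[j] * h[j] for j in range(n_hid)) + b2

def train(data, lr=0.05, epochs=500):
    """Plain stochastic gradient descent with back-propagated errors."""
    global b2
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            err = y - t
            for j in range(n_hid):
                grad_h = err * W2[j] * (1 - h[j] ** 2)   # tanh derivative
                for i in range(n_in):
                    W1[j][i] -= lr * grad_h * x[i]
                b1[j] -= lr * grad_h
                W2[j] -= lr * err * h[j]
            b2 -= lr * err

# toy samples: (SiO2, Al2O3, Fe2O3, CaO, MgO) fractions -> rescaled ST
data = [([0.5, 0.3, 0.1, 0.05, 0.05], 0.8),
        ([0.4, 0.2, 0.2, 0.15, 0.05], 0.5),
        ([0.3, 0.2, 0.3, 0.10, 0.10], 0.3)]
loss_before = sum((forward(x)[1] - t) ** 2 for x, t in data)
train(data)
loss_after = sum((forward(x)[1] - t) ** 2 for x, t in data)
```

A real AFT model would normalize the oxide contents and ST to comparable ranges and hold out validation data, as the abstract's generalization test implies.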

  3. Automatic recognition of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNNs.

    Science.gov (United States)

    Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan

    2018-06-06

    Ground-glass opacity (GGO) is a common imaging sign on high-resolution CT, and GGO lesions are more likely to be malignant than common solid lung nodules. Automatic recognition of GGO CT imaging signs is therefore of great importance for the early diagnosis and possible cure of lung cancers. Existing GGO recognition methods employ traditional low-level features, and their performance has improved slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuned CNN models. Our hybrid resampling is performed on multiple views and multiple receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy is able to find the optimal fine-tuning model. The multi-CNN-model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and an F1 score of 0.83. Our method is a promising approach for applying deep learning to the computer-aided analysis of specific CT imaging signs with insufficient labeled images.
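    The multi-model fusion step can be illustrated independently of the CNNs themselves: convert each model's class scores to probabilities and average them before taking the argmax. The sketch below uses invented logits for hypothetical models; the paper's actual fusion may combine models differently:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class scores."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_models(per_model_logits):
    """Average class probabilities across models, then take the argmax."""
    probs = [softmax(l) for l in per_model_logits]
    n_classes = len(probs[0])
    avg = [sum(p[c] for p in probs) / len(probs) for c in range(n_classes)]
    return avg.index(max(avg)), avg

# three hypothetical CNN heads scoring [not-GGO, GGO] for one candidate
logits = [[0.2, 1.1], [1.5, 0.3], [0.1, 2.0]]
label, avg = fuse_models(logits)
```

Averaging at the probability level lets a confident minority model (here the third) outvote a weakly confident dissenter, which is one reason ensembles beat any single trained model.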

  4. Image registration/fusion software for PET and CT/MRI by using simultaneous emission and transmission scans

    International Nuclear Information System (INIS)

    Kitamura, Keishi; Amano, Masaharu; Sato, Tomohiko; Okumura, Takeshi; Konishi, Norihiro; Komatsu, Masahiko

    2003-01-01

    When PET (positron emission tomography) is used for oncology studies, it is important to register and overlay PET images with images from other anatomical modalities, such as those obtained by CT (computed tomography) or MRI (magnetic resonance imaging), so that lesions can be anatomically located with high accuracy. The Shimadzu SET-2000W Series PET scanners provide simultaneous acquisition of emission and transmission data, which allows complete spatial alignment of the functional and attenuation images. This report describes our newly developed image registration/fusion software, which reformats PET emission images to the CT/MRI grid using the transform matrix obtained by matching PET transmission images with the CT/MRI images. The transmission images are registered and fused either automatically or manually, through 3-dimensional rotation and translation, with the transaxial, sagittal, and coronal fused images monitored on the screen. This new method permits sufficiently accurate registration and efficient data processing while promoting effective use of DICOM-format CT/MRI images, without using markers during data acquisition or any special equipment such as a combined PET/CT scanner. (author)
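    The key trick is that, because emission and transmission data are acquired simultaneously and are therefore inherently aligned, the rigid transform estimated by matching the transmission image to CT/MRI can be applied directly to the emission image. A minimal 2D sketch of applying such a transform (the parameter values are invented, and the actual software works in 3D):

```python
import math

def rigid_transform(theta_deg, tx, ty):
    """2D rotation plus translation as a 3x3 homogeneous matrix."""
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, tx],
            [s,  c, ty],
            [0.0, 0.0, 1.0]]

def apply(M, p):
    """Map a point (x, y) through a homogeneous transform M."""
    x, y = p
    return (M[0][0] * x + M[0][1] * y + M[0][2],
            M[1][0] * x + M[1][1] * y + M[1][2])

# transform estimated by matching transmission to CT (hypothetical values);
# the same matrix maps emission-image coordinates into the CT grid
M = rigid_transform(90.0, 10.0, 0.0)
ct_point = apply(M, (1.0, 0.0))
```

In the real software the inverse of this mapping is used to resample the emission volume onto the CT/MRI grid, but the coordinate algebra is the same.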

  5. Application of spatially resolved high resolution crystal spectrometry to inertial confinement fusion plasmas.

    Science.gov (United States)

    Hill, K W; Bitter, M; Delgado-Aparacio, L; Pablant, N A; Beiersdorfer, P; Schneider, M; Widmann, K; Sanchez del Rio, M; Zhang, L

    2012-10-01

    High resolution (λ/Δλ ~ 10,000) 1D imaging x-ray spectroscopy using a spherically bent crystal and a 2D hybrid pixel array detector is used worldwide for Doppler measurements of ion-temperature and plasma flow-velocity profiles in magnetic confinement fusion plasmas. Meter-sized plasmas are diagnosed with cm spatial resolution and 10 ms time resolution. This concept can also be used as a diagnostic of small sources, such as inertial confinement fusion plasmas and targets on x-ray light source beam lines, with spatial resolution of micrometers, as demonstrated by laboratory experiments using a 250-μm ⁵⁵Fe source and by ray-tracing calculations. Throughput calculations agree with measurements and predict detector counts in the range 10⁻⁸ to 10⁻⁶ times the source x-rays, depending on crystal reflectivity and spectrometer geometry. Results of the lab demonstrations, application of the technique to the National Ignition Facility (NIF), and predictions of performance on NIF will be presented.

  6. Unmanned Aerial System (UAS)-based phenotyping of soybean using multi-sensor data fusion and extreme learning machine

    Science.gov (United States)

    Maimaitijiang, Maitiniyazi; Ghulam, Abduwasit; Sidike, Paheding; Hartling, Sean; Maimaitiyiming, Matthew; Peterson, Kyle; Shavers, Ethan; Fishman, Jack; Peterson, Jim; Kadam, Suhas; Burken, Joel; Fritschi, Felix

    2017-12-01

    Estimating crop biophysical and biochemical parameters with high accuracy at low cost is imperative for high-throughput phenotyping in precision agriculture. Although fusion of data from multiple sensors is a common application in remote sensing, less is known about the contribution of low-cost RGB, multispectral and thermal sensors to rapid crop phenotyping. This is due to the fact that (1) simultaneous collection of multi-sensor data using satellites is rare and (2) multi-sensor data collected during a single flight have not been accessible until recent developments in Unmanned Aerial Systems (UASs) and UAS-friendly sensors that allow efficient information fusion. The objective of this study was to evaluate the power of high-spatial-resolution RGB, multispectral and thermal data fusion to estimate soybean (Glycine max) biochemical parameters, including chlorophyll content and nitrogen concentration, and biophysical parameters, including Leaf Area Index (LAI) and above-ground fresh and dry biomass. Multiple low-cost sensors integrated on UASs were used to collect RGB, multispectral, and thermal images throughout the growing season at a site established near Columbia, Missouri, USA. From these images, vegetation indices were extracted, a Crop Surface Model (CSM) was created, and a model to extract the vegetation fraction was developed. Spectral indices/features were then combined to model and predict crop biophysical and biochemical parameters using Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Extreme Learning Machine based Regression (ELR) techniques. Results showed that: (1) for biochemical variable estimation, multispectral and thermal data fusion provided the best estimates for nitrogen concentration and chlorophyll (Chl) a content (RMSE of 9.9% and 17.1%, respectively), while RGB colour information based indices and multispectral data fusion exhibited the largest RMSE, 22.6%; the highest accuracy for Chl a + b content estimation was
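    Of the three regression techniques, the Extreme Learning Machine is the simplest to sketch: the hidden-layer weights are random and fixed, and only the linear output layer is solved by least squares. The stdlib-only toy below (random tanh features and ridge-regularized normal equations; the data are an invented stand-in for the spectral indices and LAI) illustrates the idea:

```python
import math, random

def elm_fit(X, y, n_hidden=8, seed=0):
    """Extreme learning machine: random hidden layer, least-squares output."""
    rnd = random.Random(seed)
    d = len(X[0])
    W = [[rnd.uniform(-1, 1) for _ in range(d)] for _ in range(n_hidden)]
    b = [rnd.uniform(-1, 1) for _ in range(n_hidden)]
    H = [[math.tanh(sum(W[j][i] * x[i] for i in range(d)) + b[j])
          for j in range(n_hidden)] for x in X]
    # solve (H^T H + eps*I) beta = H^T y by Gaussian elimination with pivoting
    n = n_hidden
    A = [[sum(H[r][i] * H[r][j] for r in range(len(H))) + (1e-8 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(H[r][i] * y[r] for r in range(len(H))) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    beta = [0.0] * n
    for r in range(n - 1, -1, -1):
        beta[r] = (rhs[r] - sum(A[r][c] * beta[c] for c in range(r + 1, n))) / A[r][r]
    return W, b, beta

def elm_predict(model, x):
    W, b, beta = model
    h = [math.tanh(sum(W[j][i] * x[i] for i in range(len(x))) + b[j])
         for j in range(len(b))]
    return sum(beta[j] * h[j] for j in range(len(b)))

# toy task: predict a smooth "LAI" from two vegetation-index features
X = [[i / 10.0, j / 10.0] for i in range(10) for j in range(10)]
y = [0.5 * a + 0.3 * b for a, b in X]
model = elm_fit(X, y)
pred = elm_predict(model, [0.5, 0.5])      # target value here is 0.4
```

Because training reduces to one linear solve, ELR is much cheaper to refit than SVR or iterative networks, which is one reason it is attractive for high-throughput phenotyping pipelines.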

  7. Report of the Fusion Energy Sciences Advisory Committee. Panel on Integrated Simulation and Optimization of Magnetic Fusion Systems

    International Nuclear Information System (INIS)

    Dahlburg, Jill; Corones, James; Batchelor, Donald; Bramley, Randall; Greenwald, Martin; Jardin, Stephen; Krasheninnikov, Sergei; Laub, Alan; Leboeuf, Jean-Noel; Lindl, John; Lokke, William; Rosenbluth, Marshall; Ross, David; Schnack, Dalton

    2002-01-01

    Fusion is potentially an inexhaustible energy source whose exploitation requires a basic understanding of high-temperature plasmas. The development of a science-based predictive capability for fusion-relevant plasmas is a challenge central to fusion energy science, in which numerical modeling has played a vital role for more than four decades. A combination of the very wide range in temporal and spatial scales, extreme anisotropy, the importance of geometric detail, and the requirement of causality which makes it impossible to parallelize over time, makes this problem one of the most challenging in computational physics. Sophisticated computational models are under development for many individual features of magnetically confined plasmas and increases in the scope and reliability of feasible simulations have been enabled by increased scientific understanding and improvements in computer technology. However, full predictive modeling of fusion plasmas will require qualitative improvements and innovations to enable cross coupling of a wider variety of physical processes and to allow solution over a larger range of space and time scales. The exponential growth of computer speed, coupled with the high cost of large-scale experimental facilities, makes an integrated fusion simulation initiative a timely and cost-effective opportunity. Worldwide progress in laboratory fusion experiments provides the basis for a recent FESAC recommendation to proceed with a burning plasma experiment (see FESAC Review of Burning Plasma Physics Report, September 2001). Such an experiment, at the frontier of the physics of complex systems, would be a huge step in establishing the potential of magnetic fusion energy to contribute to the world's energy security. An integrated simulation capability would dramatically enhance the utilization of such a facility and lead to optimization of toroidal fusion plasmas in general. This science-based predictive capability, which was cited in the

  8. Report of the Fusion Energy Sciences Advisory Committee. Panel on Integrated Simulation and Optimization of Magnetic Fusion Systems

    Energy Technology Data Exchange (ETDEWEB)

    Dahlburg, Jill [General Atomics, San Diego, CA (United States); Corones, James [Krell Inst., Ames, IA (United States); Batchelor, Donald [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bramley, Randall [Indiana Univ., Bloomington, IN (United States); Greenwald, Martin [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Jardin, Stephen [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Krasheninnikov, Sergei [Univ. of California, San Diego, CA (United States); Laub, Alan [Univ. of California, Davis, CA (United States); Leboeuf, Jean-Noel [Univ. of California, Los Angeles, CA (United States); Lindl, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lokke, William [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rosenbluth, Marshall [Univ. of California, San Diego, CA (United States); Ross, David [Univ. of Texas, Austin, TX (United States); Schnack, Dalton [Science Applications International Corporation, Oak Ridge, TN (United States)

    2002-11-01

    Fusion is potentially an inexhaustible energy source whose exploitation requires a basic understanding of high-temperature plasmas. The development of a science-based predictive capability for fusion-relevant plasmas is a challenge central to fusion energy science, in which numerical modeling has played a vital role for more than four decades. A combination of the very wide range in temporal and spatial scales, extreme anisotropy, the importance of geometric detail, and the requirement of causality which makes it impossible to parallelize over time, makes this problem one of the most challenging in computational physics. Sophisticated computational models are under development for many individual features of magnetically confined plasmas and increases in the scope and reliability of feasible simulations have been enabled by increased scientific understanding and improvements in computer technology. However, full predictive modeling of fusion plasmas will require qualitative improvements and innovations to enable cross coupling of a wider variety of physical processes and to allow solution over a larger range of space and time scales. The exponential growth of computer speed, coupled with the high cost of large-scale experimental facilities, makes an integrated fusion simulation initiative a timely and cost-effective opportunity. Worldwide progress in laboratory fusion experiments provides the basis for a recent FESAC recommendation to proceed with a burning plasma experiment (see FESAC Review of Burning Plasma Physics Report, September 2001). Such an experiment, at the frontier of the physics of complex systems, would be a huge step in establishing the potential of magnetic fusion energy to contribute to the world’s energy security. An integrated simulation capability would dramatically enhance the utilization of such a facility and lead to optimization of toroidal fusion plasmas in general. This science-based predictive capability, which was cited in the FESAC

  9. Matrix factorization-based data fusion for the prediction of lncRNA-disease associations.

    Science.gov (United States)

    Fu, Guangyuan; Wang, Jun; Domeniconi, Carlotta; Yu, Guoxian

    2018-05-01

    Long non-coding RNAs (lncRNAs) play crucial roles in complex disease diagnosis, prognosis, prevention and treatment, but only a small portion of lncRNA-disease associations have been experimentally verified. Various computational models have been proposed to identify lncRNA-disease associations by integrating heterogeneous data sources. However, existing models generally ignore the intrinsic structure of data sources or treat them as equally relevant, while they may not be. To accurately identify lncRNA-disease associations, we propose a Matrix Factorization based LncRNA-Disease Association prediction model (MFLDA in short). MFLDA decomposes data matrices of heterogeneous data sources into low-rank matrices via matrix tri-factorization to explore and exploit their intrinsic and shared structure. MFLDA can select and integrate the data sources by assigning different weights to them. An iterative solution is further introduced to simultaneously optimize the weights and low-rank matrices. Next, MFLDA uses the optimized low-rank matrices to reconstruct the lncRNA-disease association matrix and thus to identify potential associations. In 5-fold cross validation experiments to identify verified lncRNA-disease associations, MFLDA achieves an area under the receiver operating characteristic curve (AUC) of 0.7408, at least 3% higher than those given by state-of-the-art data fusion based computational models. An empirical study on identifying masked lncRNA-disease associations again shows that MFLDA can identify potential associations more accurately than competing models. A case study on identifying lncRNAs associated with breast, lung and stomach cancers shows that 38 out of 45 (84%) associations predicted by MFLDA are supported by recent biomedical literature, further proving the capability of MFLDA in identifying novel lncRNA-disease associations. MFLDA is a general data fusion framework, and as such it can be adopted to predict associations between other biological
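    MFLDA's weighted tri-factorization is not reproduced here, but the core idea it rests on — scoring unobserved associations from the low-rank structure of a partially observed association matrix — can be sketched with a plain truncated SVD. All data, dimensions, and the rank are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy lncRNA-disease association matrix (1 = verified association).
    # Sizes and density are illustrative, not from the MFLDA paper.
    A = (rng.random((30, 20)) < 0.15).astype(float)

    # Mask ~10% of the known associations to mimic prediction of unseen links.
    known = np.argwhere(A == 1)
    held_out = known[rng.choice(len(known), size=max(1, len(known) // 10),
                                replace=False)]
    A_train = A.copy()
    A_train[held_out[:, 0], held_out[:, 1]] = 0.0

    # Rank-k truncated SVD: A_train ~ U_k S_k V_k^T. The reconstruction assigns
    # nonzero scores to unobserved cells that fit the low-rank structure.
    k = 5
    U, s, Vt = np.linalg.svd(A_train, full_matrices=False)
    A_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Score the held-out associations; in a real evaluation these scores would
    # be ranked against the zero cells to compute an AUC.
    scores_held = A_hat[held_out[:, 0], held_out[:, 1]]
    print("mean score of held-out associations:", scores_held.mean())
    ```

    MFLDA replaces the single SVD with a weighted tri-factorization over several data sources, but the reconstruct-and-rank step is the same in spirit.
    
    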

  10. Dynamic in vivo imaging and cell tracking using a histone fluorescent protein fusion in mice

    Directory of Open Access Journals (Sweden)

    Papaioannou Virginia E

    2004-12-01

    Full Text Available Abstract Background Advances in optical imaging modalities and the continued evolution of genetically-encoded fluorescent proteins are coming together to facilitate the study of cell behavior at high resolution in living organisms. As a result, imaging using autofluorescent protein reporters is gaining popularity in mouse transgenic and targeted mutagenesis applications. Results We have used embryonic stem cell-mediated transgenesis to label cells at sub-cellular resolution in vivo, and to evaluate fusion of a human histone protein to green fluorescent protein for ubiquitous fluorescent labeling of nucleosomes in mice. To this end we have generated embryonic stem cells and a corresponding strain of mice that is viable and fertile and exhibits widespread chromatin-localized reporter expression. High levels of transgene expression are maintained in a constitutive manner. Viability and fertility of homozygous transgenic animals demonstrate that this reporter is developmentally neutral and does not interfere with mitosis or meiosis. Conclusions Using various optical imaging modalities including wide-field, spinning disc confocal, laser scanning confocal, and multiphoton excitation microscopy, we can identify cells in various stages of the cell cycle: cells in interphase, and cells undergoing mitosis or cell death. We demonstrate that this histone fusion reporter allows the direct visualization of active chromatin in situ. Since this reporter segments three-dimensional space, it permits the visualization of individual cells within a population, and so facilitates tracking cell position over time. It is therefore attractive for use in multidimensional studies of in vivo cell behavior and cell fate.

  11. Clinical assessment of CT-MRI image fusion software in localization of the prostate for 3D conformal radiation therapy

    International Nuclear Information System (INIS)

    Kagawa, Kazufumi; Lee, W. Robert; Schultheiss, Timothy E.; Hunt, Margie A.; Shaer, Andrew H.; Hanks, Gerald E.

    1996-01-01

    Purpose: To assess the utility of image fusion software and compare MRI prostate localization with CT localization in patients undergoing 3D conformal radiation therapy of prostate cancer. Materials and Methods: After a phantom study was performed to ensure the accuracy of the image fusion procedure, 22 prostate cancer patients had CT and MRI studies before the start of radiotherapy. Immobilization casts used during radiation treatment were also used for both imaging studies. After the clinical target volume (CTV) (prostate or prostate + seminal vesicles) was defined on CT, slices from the MRI study were reconstructed to match precisely the corresponding CT slices by identifying three common bony landmarks on each study. The CTV was separately defined on the matched MRI slices. Data related to the size and location of the prostate were compared between CT and MRI. The spatial relationship between the tip of the urethrogram cone on CT and the prostate apex seen on MRI was also scrutinized. Results: The phantom study showed registration discrepancies between CT and MRI smaller than 1.0 mm in every pair of comparisons. The patient study showed a mean image registration error of 0.9 (± 0.6) mm. The average prostate volume was 63.0 (± 25.8) cm³ by CT and 50.9 (± 22.9) cm³ by MRI (Fig. 1). The prostate location determined by the two studies most commonly differed at the base and at the apex of the prostate (Fig. 2). On transverse MRI, the prostate apex was situated 7.1 (± 4.5) mm dorsal and 15.1 (± 4.0) mm cephalad to the tip of the urethrogram cone (Fig. 3). Conclusions: The CT-MRI image fusion study made it possible to compare the two modalities directly. MRI localization of the prostate is more accurate than CT, and indicates the distance from cone to apex is 15 mm. In view of the excellent treatment results obtained with current CT localization of the prostate, it may not yet be wise to reduce the target volume to that demonstrated on MRI.
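    The registration step described above — matching three or more common bony landmarks between CT and MRI — amounts to a least-squares rigid transform. A minimal sketch of the standard Kabsch/Procrustes solution follows; the synthetic landmarks are assumptions for illustration, not the vendor software used in the study:

    ```python
    import numpy as np

    def rigid_register(P, Q):
        """Least-squares rigid transform (R, t) mapping landmark set P onto Q.

        P, Q: (N, 3) arrays of corresponding landmarks (N >= 3), e.g. bony
        landmarks picked on CT and on MRI. Kabsch/Procrustes solution.
        """
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)               # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cQ - R @ cP
        return R, t

    # Synthetic check: rotate/translate four "CT" landmarks, recover the map.
    P = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
    theta = np.deg2rad(5.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1.0]])
    Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])

    R, t = rigid_register(P, Q)
    resid = np.abs(P @ R.T + t - Q).max()
    print("max registration residual (mm):", resid)
    ```

    With exact correspondences the residual is at machine precision; with real landmark picks it would reflect the sub-millimetre registration error reported above.
    
    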

  12. Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion

    Science.gov (United States)

    Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei

    2018-06-01

    Infrared and visible light image fusion technology has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion technologies must register the images before fusion because two separate cameras are used, and the performance of registration techniques still needs improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: by using a beam splitter prism, the coaxial light incident through the same lens is projected onto the infrared charge coupled device (CCD) and the visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied together with the process of signal acquisition and fusion. A simulation experiment, which covers the entire process of the optical system, signal acquisition, and signal fusion, is constructed based on an imaging effect model, and a quality evaluation index is adopted to analyze the simulation result. The experimental results demonstrate that the proposed sensor device is effective and feasible.
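    For frames captured through one lens, and therefore already co-registered, the fusion step can be as simple as a per-pixel weighted average, scored afterwards with a histogram-entropy quality index of the kind the abstract mentions. A minimal sketch; the weights, image sizes, and random test data are assumptions, not the paper's algorithm:

    ```python
    import numpy as np

    def entropy(img, bins=64):
        """Shannon entropy of the grey-level histogram, a common quality
        index used to score fused images (one of several in the literature)."""
        hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(3)
    # Toy stand-ins for the co-registered channels delivered by the device.
    visible = rng.random((64, 64))
    infrared = rng.random((64, 64))

    # Simplest possible fusion rule; equal weights are an assumption.
    fused = 0.5 * visible + 0.5 * infrared

    print("entropy visible :", round(entropy(visible), 3))
    print("entropy fused   :", round(entropy(fused), 3))
    ```

    Real fusion rules (pyramid or wavelet based) would replace the weighted average, but the evaluation-by-index step stays the same.
    
    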

  13. Cold fusion in symmetric 90Zr induced reactions

    International Nuclear Information System (INIS)

    Keller, J.G.; Schmidt, K.H.; Hessberger, F.P.; Muenzenberg, G.; Reisdorf, W.; Clerc, H.G.; Sahm, C.C.

    1985-08-01

    Excitation functions for evaporation residues were measured for the reactions 90Zr + 89Y, 90Zr, 92Zr, 96Zr, and 94Mo. Deexcitation only by γ radiation was found for the compound nuclei 179Au, 180Hg, 182Hg, and 184Pb. The cross sections for this process were found to be considerably larger than predicted by a statistical-model calculation using standard parameters for the γ-strength function. Fusion probabilities as well as fusion-barrier distributions were deduced from the measured cross sections. There are strong nuclear structure effects in subbarrier fusion. For energies far below the fusion barrier the increase of the fusion probabilities with increasing energy is found to be much steeper than predicted by WKB calculations. As a by-product of this work new α-spectroscopic information could be obtained for neutron-deficient isotopes between Ir and Pb. (orig.)
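    The energy dependence of subbarrier transmission that the WKB comparison refers to is often illustrated with the Hill-Wheeler formula for an inverted-parabola barrier. A sketch with hypothetical barrier parameters, not the paper's fitted values:

    ```python
    import numpy as np

    def hill_wheeler(E, Vb, hbar_omega):
        """Transmission through an inverted-parabola barrier (Hill-Wheeler):
        T(E) = 1 / (1 + exp(2*pi*(Vb - E)/hbar_omega)).
        E, Vb, hbar_omega in MeV; values below are illustrative."""
        return 1.0 / (1.0 + np.exp(2.0 * np.pi * (Vb - E) / hbar_omega))

    Vb, hw = 180.0, 4.0                 # hypothetical barrier height/curvature
    E = np.linspace(170.0, 190.0, 5)    # c.m. energies straddling the barrier
    for e, T in zip(E, hill_wheeler(E, Vb, hw)):
        print(f"E = {e:6.1f} MeV  T = {T:.3e}")
    ```

    Far below the barrier this gives a fixed exponential slope in energy; the measured steeper rise reported above is exactly the kind of deviation from such one-dimensional tunneling that signals nuclear-structure effects.
    
    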

  14. Spheno-Occipital Synchondrosis Fusion Correlates with Cervical Vertebrae Maturation.

    Directory of Open Access Journals (Sweden)

    María José Fernández-Pérez

    Full Text Available The aim of this study was to determine the relationship between the closure stage of the spheno-occipital synchondrosis and the maturational stage of the cervical vertebrae (CVM) in growing and young adult subjects using cone beam computed tomography (CBCT). CBCT images with an extended field of view obtained from 315 participants (148 females and 167 males; mean age 15.6 ±7.3 years; range 6 to 23 years) were analyzed. The fusion status of the synchondrosis was determined using a five-stage scoring system; the vertebral maturational status was evaluated using a six-stage stratification (CVM) method. Ordinal regression was used to study the ability of the synchondrosis stage to predict the vertebral maturation stage. Vertebrae and synchondrosis had a strong significant correlation (r = 0.89) that was essentially similar for females (r = 0.88) and males (r = 0.89). CVM stage could be accurately predicted from synchondrosis stage by ordinal regression models. Prediction equations of the vertebral stage using synchondrosis stage, sex and biological age as predictors were developed. Thus this investigation demonstrated that the stage of spheno-occipital synchondrosis, as determined in CBCT images, is a reasonable indicator of growth maturation.

  15. Spheno-Occipital Synchondrosis Fusion Correlates with Cervical Vertebrae Maturation.

    Science.gov (United States)

    Fernández-Pérez, María José; Alarcón, José Antonio; McNamara, James A; Velasco-Torres, Miguel; Benavides, Erika; Galindo-Moreno, Pablo; Catena, Andrés

    2016-01-01

    The aim of this study was to determine the relationship between the closure stage of the spheno-occipital synchondrosis and the maturational stage of the cervical vertebrae (CVM) in growing and young adult subjects using cone beam computed tomography (CBCT). CBCT images with an extended field of view obtained from 315 participants (148 females and 167 males; mean age 15.6 ±7.3 years; range 6 to 23 years) were analyzed. The fusion status of the synchondrosis was determined using a five-stage scoring system; the vertebral maturational status was evaluated using a six-stage stratification (CVM method). Ordinal regression was used to study the ability of the synchondrosis stage to predict the vertebral maturation stage. Vertebrae and synchondrosis had a strong significant correlation (r = 0.89) that was essentially similar for females (r = 0.88) and males (r = 0.89). CVM stage could be accurately predicted from synchondrosis stage by ordinal regression models. Prediction equations of the vertebral stage using synchondrosis stage, sex and biological age as predictors were developed. Thus this investigation demonstrated that the stage of spheno-occipital synchondrosis, as determined in CBCT images, is a reasonable indicator of growth maturation.

  16. Three-dimensional reconstructed computed tomography-magnetic resonance fusion image-based preoperative planning for surgical procedures for spinal lipoma or tethered spinal cord after myelomeningocele repair. Technical note

    International Nuclear Information System (INIS)

    Bamba, Yohei; Nonaka, Masahiro; Nakajima, Shin; Yamasaki, Mami

    2011-01-01

    Surgical procedures for spinal lipoma or tethered spinal cord after myelomeningocele (MMC) repair are often difficult and complicated, because the anatomical structures can be deformed in complex and unpredictable ways. Imaging helps the surgeon understand the patient's spinal anatomy. Whereas two-dimensional images provide only limited information for surgical planning, three-dimensional (3D) reconstructed computed tomography (CT)-magnetic resonance (MR) fusion images produce clearer representations of the spinal regions. Here we describe simple and quick methods for obtaining 3D reconstructed CT-MR fusion images for preoperative planning of surgical procedures using the iPlan cranial (BrainLAB AG, Feldkirchen, Germany) neuronavigation software. 3D CT images of the vertebral bone were combined with heavily T2-weighted MR images of the spinal cord, lipoma, cerebrospinal fluid (CSF) space, and nerve roots through a process of fusion, segmentation, and reconstruction of the 3D images. We also used our procedure called 'Image Overlay' to directly project the 3D reconstructed image onto the body surface using a light-emitting diode (LED) projector. The final reconstructed 3D images took 10-30 minutes to obtain, and provided the surgeon with a representation of the individual pathological structures, thus enabling the design of effective surgical plans, even in patients with bony deformity such as scoliosis. None of the 19 patients treated based on our 3D reconstruction method has had neurological complications, except for CSF leakage. This 3D reconstructed imaging method, combined with Image Overlay, improves the visual understanding of complicated surgical situations, and should improve surgical efficiency and outcome. (author)

  17. The fusion of heavy ions in an interaction potential model

    International Nuclear Information System (INIS)

    Zipper, W.

    1980-01-01

    The paper discusses problems connected with fusion processes in heavy-ion collisions. Experimental fusion data are presented for the reactions 9Be + 12C, 6Li + 28Si, 9Be + 28Si, 12C + 28Si, 12C + 16O and 16O + 16O. The measured fusion cross sections are compared with the predictions of the fusion potential model. The validity of this model for both light systems, like 9Be + 12C, and heavy systems, like 35Cl + 62Ni, has been discussed. In conclusion, it should be stated that fusion cross sections can be correctly predicted by the potential model with a potential describing the elastic scattering data. (author)
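    Above the barrier, the potential-model picture reduces classically to the sharp-cutoff expression σ_fus(E) = πR_B²(1 − V_B/E). A sketch with hypothetical barrier parameters; the paper's analysis uses the full elastic-scattering potential, not this limit:

    ```python
    import numpy as np

    def sigma_fusion(E, Vb, Rb):
        """Classical sharp-cutoff fusion cross section, in millibarns.
        E, Vb: c.m. energy and barrier height in MeV; Rb: barrier radius in fm.
        1 fm^2 = 10 mb. Illustrative limit of the potential model only."""
        E = np.asarray(E, dtype=float)
        sigma_fm2 = np.pi * Rb**2 * np.clip(1.0 - Vb / E, 0.0, None)
        return 10.0 * sigma_fm2

    # Hypothetical light-system parameters (assumed, not fitted values).
    E = np.linspace(8.0, 30.0, 5)            # c.m. energies, MeV
    print(sigma_fusion(E, Vb=10.0, Rb=7.5))  # zero below Vb, rising above it
    ```

    Deviations of measured excitation functions from this smooth form are what motivate the more detailed potential-model comparison in the paper.
    
    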

  18. Additive effects on the energy barrier for synaptic vesicle fusion cause supralinear effects on the vesicle fusion rate

    DEFF Research Database (Denmark)

    Schotten, Sebastiaan; Meijer, Marieke; Walter, Alexander Matthias

    2015-01-01

    supralinear effects on the fusion rate. To test this prediction experimentally, we developed a method to assess the number of releasable vesicles, rate constants for vesicle priming, unpriming, and fusion, and the activation energy for fusion by fitting a vesicle state model to synaptic responses induced......-linear effects of genetic/pharmacological perturbations on synaptic transmission and a novel interpretation of the cooperative nature of Ca2+-dependent release....
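    The predicted supralinearity follows directly from an Arrhenius-type rate law: additive changes to the activation energy act multiplicatively on the rate. A toy numerical illustration; all energies and the prefactor are assumed values, not the paper's measurements:

    ```python
    import numpy as np

    kBT = 4.11e-21   # J, thermal energy near 298 K
    k0 = 1.0e3       # s^-1, hypothetical attempt-rate prefactor
    dE = 5.0e-21     # J, assumed barrier reduction per perturbation
    Ea0 = 8.0e-20    # J, assumed baseline activation energy for fusion

    # Arrhenius-type vesicle fusion rate: k = k0 * exp(-Ea / kBT).
    k = lambda Ea: k0 * np.exp(-Ea / kBT)

    k_base = k(Ea0)
    k_one = k(Ea0 - dE)       # one barrier-lowering perturbation
    k_two = k(Ea0 - 2 * dE)   # two perturbations combined additively

    print("fold change, one perturbation :", k_one / k_base)
    print("fold change, two perturbations:", k_two / k_base)
    # Additive barrier changes compose multiplicatively: the two-perturbation
    # fold change equals the square of the single-perturbation fold change.
    ```

    This is the sense in which additive effects on the energy barrier produce supralinear effects on the fusion rate.
    
    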

  19. Numerical models for the prediction of failure for multilayer fusion Al-alloy sheets

    International Nuclear Information System (INIS)

    Gorji, Maysam; Berisha, Bekim; Hora, Pavel; Timm, Jürgen

    2013-01-01

    Initiation and propagation of cracks in monolithic and multi-layer aluminum alloys, called “Fusion”, is investigated. 2D plane-strain finite element simulations are performed to model deformation due to bending and to predict failure. For this purpose, fracture strains are measured based on microscopic pictures of Nakajima specimens. In addition, the microstructure of the materials is taken into account by introducing a random grain distribution over the sheet thickness as well as a random distribution of the measured yield curve. It is shown that the performed experiments and the introduced FE model are appropriate methods to highlight the advantages of the Fusion material, especially for bending processes.

  20. Identifying and tracking pedestrians based on sensor fusion and motion stability predictions.

    Science.gov (United States)

    Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Maria; de la Escalera, Arturo

    2010-01-01

    The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle.
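    The "statistical validation gates" used for tracking are typically chi-square gates on the Kalman innovation: a measurement is associated with the track only if its Mahalanobis distance from the prediction is plausible. A minimal constant-velocity sketch; all matrices, noise levels, and measurements are assumptions, not the authors' implementation:

    ```python
    import numpy as np

    # Constant-velocity Kalman tracker with a chi-square validation gate.
    dt = 0.1
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1.0]])   # state transition
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.0]])   # we observe position only
    Q = 0.01 * np.eye(4)                           # process noise (assumed)
    R = 0.05 * np.eye(2)                           # measurement noise (assumed)
    GATE = 9.21                                    # chi-square 99%, 2 dof

    x = np.array([0.0, 0.0, 1.0, 0.5])             # state: px, py, vx, vy
    P = np.eye(4)

    def step(x, P, z):
        # Predict
        x, P = F @ x, F @ P @ F.T + Q
        # Gate: accept the measurement only if its Mahalanobis distance
        # to the predicted position is inside the validation region.
        S = H @ P @ H.T + R
        nu = z - H @ x
        d2 = nu @ np.linalg.solve(S, nu)
        if d2 <= GATE:                             # validated: Kalman update
            K = P @ H.T @ np.linalg.inv(S)
            x, P = x + K @ nu, (np.eye(4) - K @ H) @ P
        return x, P, d2

    x, P, d2_good = step(x, P, np.array([0.1, 0.05]))  # consistent detection
    x, P, d2_bad = step(x, P, np.array([5.0, -4.0]))   # clutter: rejected
    print("accepted d2:", d2_good, " rejected d2:", d2_bad)
    ```

    Rejected returns are treated as clutter, which is what keeps a pedestrian track stable while the target zigzags through the sensors' detection zones.
    
    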

  1. Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions

    Directory of Open Access Journals (Sweden)

    Arturo de la Escalera

    2010-08-01

    Full Text Available The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle.

  2. Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging.

    Science.gov (United States)

    Choi, Lark Kwon; You, Jaehee; Bovik, Alan Conrad

    2015-11-01

    We propose a referenceless perceptual fog density prediction model based on natural scene statistics (NSS) and fog aware statistical features. The proposed model, called Fog Aware Density Evaluator (FADE), predicts the visibility of a foggy scene from a single image without reference to a corresponding fog-free image, without dependence on salient objects in a scene, without supplementary geographical camera information, without estimating a depth-dependent transmission map, and without training on human-rated judgments. FADE only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. Fog aware statistical features that define the perceptual fog density index derive from a space domain NSS model and the observed characteristics of foggy images. FADE not only predicts perceptual fog density for the entire image, but also provides a local fog density index for each patch. The predicted fog density using FADE correlates well with human judgments of fog density taken in a subjective study on a large foggy image database. As applications, FADE not only accurately assesses the performance of defogging algorithms designed to enhance the visibility of foggy images, but is also well suited for image defogging. A new FADE-based referenceless perceptual image defogging algorithm, dubbed DEnsity of Fog Assessment-based DEfogger (DEFADE), achieves better results for darker, denser foggy images as well as on standard foggy images than state-of-the-art defogging methods. A software release of FADE and DEFADE is available online for public use: http://live.ece.utexas.edu/research/fog/index.html.
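    FADE combines many fog-aware NSS features; one of the simplest intuitions behind them — fog washes out local contrast — can be illustrated with block statistics. This toy index is emphatically not the published FADE model, and the synthetic "scene" and haze parameters are assumptions:

    ```python
    import numpy as np

    def local_contrast(img, bs=8):
        """Mean local standard deviation over bs x bs blocks: a crude,
        single-feature stand-in for the fog-aware statistics FADE uses."""
        h, w = (img.shape[0] // bs) * bs, (img.shape[1] // bs) * bs
        blocks = img[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
        return blocks.std(axis=(2, 3)).mean()

    rng = np.random.default_rng(1)
    clear = rng.random((64, 64))        # high-contrast toy "scene"
    # Simple haze model: contrast compressed toward a bright airlight value.
    foggy = 0.2 * clear + 0.8 * 0.7

    c_clear, c_foggy = local_contrast(clear), local_contrast(foggy)
    print("local contrast, clear:", round(c_clear, 4),
          " foggy:", round(c_foggy, 4))
    # Lower local contrast -> denser perceived fog under this crude proxy.
    ```

    FADE builds its density index from a trained multivariate Gaussian model over many such features rather than any single statistic.
    
    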

  3. Thermofluid experiments for Fusion Reactor Safety. Visualization of exchange flows through breaches of a vacuum vessel in a fusion reactor under the LOVA condition

    International Nuclear Information System (INIS)

    Fujii, Sadao; Shibazaki, Hiroaki; Takase, Kazuyuki; Kunugi, Tomoaki.

    1997-01-01

    Exchange flow rates through breaches of a vacuum vessel in a fusion reactor under LOVA (Loss of VAcuum event) conditions were measured quantitatively by using a preliminary LOVA apparatus, and exchange flow patterns over the breach were visualized qualitatively with smoke. Velocity distributions in the exchange flows were predicted from the observed flow patterns by using the correlation method in the flow visualization procedures. Mean velocities calculated from the predicted velocity distributions at the outside of the breach were in good agreement with the LOVA experimental results when the exchange flow velocities were low. It was found that the present flow visualization and image processing system may be a useful procedure for evaluating the exchange flow rates. (author)

  4. Complimentary Advanced Fusion Exploration

    National Research Council Canada - National Science Library

    Alford, Mark G; Jones, Eric C; Bubalo, Adnan; Neumann, Melissa; Greer, Michael J

    2005-01-01

    .... The focus areas were in the following regimes: multi-tensor homographic computer vision image fusion, out-of-sequence measurement and track data handling, Nash bargaining approaches to sensor management, pursuit-evasion game theoretic modeling...

  5. Prediction of surgical view of neurovascular decompression using interactive computer graphics.

    Science.gov (United States)

    Kin, Taichi; Oyama, Hiroshi; Kamada, Kyousuke; Aoki, Shigeki; Ohtomo, Kuni; Saito, Nobuhito

    2009-07-01

    To assess the value of an interactive visualization method for detecting the offending vessels in neurovascular compression syndrome in patients with facial spasm and trigeminal neuralgia. Computer graphics models are created by fusion of fast imaging employing steady-state acquisition and magnetic resonance angiography. High-resolution magnetic resonance angiography and fast imaging employing steady-state acquisition were performed preoperatively in 17 patients with neurovascular compression syndromes (facial spasm, n = 10; trigeminal neuralgia, n = 7) using a 3.0-T magnetic resonance imaging scanner. Computer graphics models were created with computer software and observed interactively for detection of offending vessels by rotation, enlargement, reduction, and retraction on a graphic workstation. Two-dimensional images were reviewed by 2 radiologists blinded to the clinical details, and 2 neurosurgeons predicted the offending vessel with the interactive visualization method before surgery. Predictions from the 2 imaging approaches were compared with surgical findings. The vessels identified during surgery were assumed to be the true offending vessels. Offending vessels were identified correctly in 16 of 17 patients (94%) using the interactive visualization method and in 10 of 17 patients using 2-dimensional images. These data demonstrated a significant difference (P = 0.015 by Fisher's exact test). The interactive visualization method data corresponded well with surgical findings (surgical field, offending vessels, and nerves). Virtual reality 3-dimensional computer graphics using fusion of magnetic resonance angiography and fast imaging employing steady-state acquisition may be helpful for preoperative simulation.

  6. Predicting tritium movement and inventory in fusion reactor subsystems using the TMAP code

    International Nuclear Information System (INIS)

    Jones, J.L.; Merrill, B.J.; Holland, D.F.

    1985-01-01

    The Fusion Safety Program of EG&G Idaho, Inc. at the Idaho National Engineering Laboratory (INEL) is developing a safety analysis code called TMAP (Tritium Migration Analysis Program) to analyze tritium loss from fusion systems during normal and off-normal conditions. TMAP is a one-dimensional code that calculates tritium movement and inventories in a system of interconnected enclosures and wall structures. These wall structures can include composite materials with bulk trapping of the permeating tritium on impurities or radiation-induced dislocations within the material. The thermal response of a structure can be modeled to provide the temperature information required for tritium movement calculations. Chemical reactions and hydrogen isotope movement can also be included in the calculations. TMAP was used to analyze the movement of tritium implanted into a proposed limiter/first wall structure design. This structure was composed of composite layers of vanadium and stainless steel. Included in these calculations was the effect of contrasting material tritium solubility at the composite interface. In addition, TMAP was used to investigate the rate of tritium cleanup after an accidental release into the atmosphere of a reactor building. Tritium retention and release from surfaces and conversion to the oxide form were predicted.
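    The one-dimensional transport that a code like TMAP solves can be sketched as an explicit finite-difference diffusion calculation through a wall. The material constants, geometry, and boundary conditions below are assumed round numbers; real TMAP adds trapping, multi-layer solubility jumps, and thermal coupling:

    ```python
    import numpy as np

    # Explicit finite-difference solution of 1-D diffusion through a wall.
    L = 1.0e-3             # wall thickness, m (assumed)
    D = 1.0e-9             # tritium diffusivity, m^2/s (assumed)
    N = 51                 # grid points across the wall
    dx = L / (N - 1)
    dt = 0.4 * dx**2 / D   # respect explicit stability limit dt <= dx^2/(2D)

    c = np.zeros(N)        # normalized concentration profile

    for _ in range(20000):
        # Interior update: c_t = D * c_xx (second-order central difference).
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
        # Dirichlet boundaries: loaded plasma-facing face, clean back face.
        c[0], c[-1] = 1.0, 0.0

    # At steady state the profile is linear and the permeation flux is D*c0/L.
    flux = D * (c[-2] - c[-1]) / dx
    print("back-face permeation flux (normalized):", flux)
    ```

    After roughly three diffusion times (L²/D) the computed flux matches the analytic steady-state value D/L, which is a standard sanity check for this kind of permeation solver.
    
    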

  7. Image Fusion Applied to Satellite Imagery for the Improved Mapping and Monitoring of Coral Reefs: a Proposal

    Science.gov (United States)

    Gholoum, M.; Bruce, D.; Hazeam, S. Al

    2012-07-01

    A coral reef ecosystem, one of the most complex marine environmental systems on the planet, is defined as biologically diverse and immense. It plays an important role in maintaining a vast biological diversity for future generations and functions as an essential spawning, nursery, breeding and feeding ground for many kinds of marine species. In addition, coral reef ecosystems provide valuable benefits such as fisheries, ecological goods and services and recreational activities to many communities. However, this valuable resource is highly threatened by a number of environmental changes and anthropogenic impacts that can lead to reduced coral growth and production, mass coral mortality and loss of coral diversity. With the growth of these threats on coral reef ecosystems, there is a strong management need for mapping and monitoring of coral reef ecosystems. Remote sensing technology can be a valuable tool for mapping and monitoring of these ecosystems. However, the diversity and complexity of coral reef ecosystems, the resolution capabilities of satellite sensors and the low reflectivity of shallow water increase the difficulty of identifying and classifying their features. This paper reviews the methods used in mapping and monitoring coral reef ecosystems. In addition, this paper proposes improved methods for mapping and monitoring coral reef ecosystems based on image fusion techniques. These image fusion techniques will be applied to combine satellite images exhibiting high spatial and low to medium spectral resolution with images exhibiting low spatial and high spectral resolution. Furthermore, a new method will be developed to fuse hyperspectral imagery with multispectral imagery. The fused image will have a large number of spectral bands and all pairs of corresponding spatial objects, which will potentially help to classify the image data accurately. Accuracy assessment using ground truth will be performed for the selected methods to determine the quality of the

  8. IMAGE FUSION APPLIED TO SATELLITE IMAGERY FOR THE IMPROVED MAPPING AND MONITORING OF CORAL REEFS: A PROPOSAL

    Directory of Open Access Journals (Sweden)

    M. Gholoum

    2012-07-01

    Full Text Available A coral reef ecosystem, one of the most complex marine environmental systems on the planet, is defined as biologically diverse and immense. It plays an important role in maintaining a vast biological diversity for future generations and functions as an essential spawning, nursery, breeding and feeding ground for many kinds of marine species. In addition, coral reef ecosystems provide valuable benefits such as fisheries, ecological goods and services and recreational activities to many communities. However, this valuable resource is highly threatened by a number of environmental changes and anthropogenic impacts that can lead to reduced coral growth and production, mass coral mortality and loss of coral diversity. With the growth of these threats on coral reef ecosystems, there is a strong management need for mapping and monitoring of coral reef ecosystems. Remote sensing technology can be a valuable tool for mapping and monitoring of these ecosystems. However, the diversity and complexity of coral reef ecosystems, the resolution capabilities of satellite sensors and the low reflectivity of shallow water increase the difficulty of identifying and classifying their features. This paper reviews the methods used in mapping and monitoring coral reef ecosystems. In addition, this paper proposes improved methods for mapping and monitoring coral reef ecosystems based on image fusion techniques. These image fusion techniques will be applied to combine satellite images exhibiting high spatial and low to medium spectral resolution with images exhibiting low spatial and high spectral resolution. Furthermore, a new method will be developed to fuse hyperspectral imagery with multispectral imagery. The fused image will have a large number of spectral bands and all pairs of corresponding spatial objects, which will potentially help to classify the image data accurately. Accuracy assessment using ground truth will be performed for the selected methods to determine
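    One standard baseline for fusing high-spatial with high-spectral imagery is Brovey-transform pan-sharpening. A minimal sketch on synthetic data follows; this generic baseline is offered only to illustrate the fusion step, not the new method the proposal promises:

    ```python
    import numpy as np

    def brovey_pansharpen(ms, pan, eps=1e-6):
        """Brovey-transform fusion: inject a high-resolution panchromatic
        band's spatial detail into upsampled multispectral bands.

        ms:  (H, W, B) multispectral image, already resampled to pan's grid
        pan: (H, W)    high-spatial-resolution panchromatic band
        """
        intensity = ms.mean(axis=2)          # simple intensity component
        ratio = pan / (intensity + eps)      # per-pixel detail injection
        return ms * ratio[..., None]         # rescale every band

    rng = np.random.default_rng(2)
    ms = rng.random((32, 32, 4))             # toy 4-band multispectral patch
    pan = rng.random((32, 32))               # toy panchromatic patch
    fused = brovey_pansharpen(ms, pan)

    # By construction, the fused bands' mean reproduces the pan band's detail.
    print(np.allclose(fused.mean(axis=2), pan, atol=1e-3))
    ```

    Brovey fusion preserves band ratios (hence relative spectral shape) while importing pan-band spatial detail; methods that better preserve absolute spectral values are what the proposal's hyperspectral fusion would need.
    
    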

  9. Accuracy verification of PET-CT image fusion and its utilization in target delineation of radiotherapy

    International Nuclear Information System (INIS)

    Wang Xuetao; Yu Jinming; Yang Guoren; Gong Heyi

    2005-01-01

    Objective: To evaluate the accuracy of online co-registration of PET and CT (PET-CT) images with a phantom, and to apply it in patients to provide clinical evidence for target delineation in radiotherapy. Methods: A phantom with markers and cylinders of different volumes was infused with various concentrations of 18FDG, and scanned at 4 mm by PET and CT respectively. After the images had been transferred to GE eNTEGRA and treatment planning system (TPS) workstations, they were fused and reconstructed. The distances between the markers and the errors were monitored in the PET and CT images respectively. The volumes of the cylinders in the PET and CT images were measured and compared using a fixed pixel-value proportion deduction method. The same procedure was performed on the pulmonary tumor images of ten patients. Results: The eNTEGRA and TPS workstations both had good length linearity, but the fusion error of the latter was markedly greater than that of the former. Tumors of different volumes filled with varying concentrations of 18FDG required different pixel deduction proportions. The cylinder volumes on the PET and CT images were almost the same, as were the pulmonary tumor images of the ten patients. Conclusions: The accuracy of online PET-CT image co-registration may fulfill clinical demands. The pixel value proportion deduction method can be used for target delineation on PET images. (authors)
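    Thresholding at a fixed fraction of the maximum uptake — in the spirit of the pixel-value proportion deduction method above — can be sketched on a synthetic lesion. The 42% cutoff, voxel size, and Gaussian lesion profile are assumptions for illustration, not the study's calibrated proportions:

    ```python
    import numpy as np

    # Synthetic PET volume: a Gaussian-blurred "hot" lesion on a 40^3 grid.
    shape = (40, 40, 40)
    zz, yy, xx = np.indices(shape)
    r = np.sqrt((xx - 20) ** 2 + (yy - 20) ** 2 + (zz - 20) ** 2)
    pet = np.exp(-(r / 8.0) ** 2)            # peak uptake at the center

    voxel_mm3 = 4.0 ** 3                     # 4 mm isotropic voxels, as in
                                             # the 4 mm scans described above

    # Percent-of-maximum delineation: keep voxels above 42% of peak uptake.
    mask = pet >= 0.42 * pet.max()
    volume_cm3 = mask.sum() * voxel_mm3 / 1000.0
    print("delineated volume:", round(volume_cm3, 1), "cm^3")
    ```

    In practice the cutoff fraction must be calibrated per lesion size and uptake ratio, which is exactly why the phantom study found that different volumes and concentrations required different deduction proportions.
    
    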

  10. Nuclear structure and heavy-ion fusion

    International Nuclear Information System (INIS)

    Stokstad, R.G.

    1980-10-01

    A series of lectures is presented on experimental studies of heavy-ion fusion reactions with emphasis on the role of nuclear structure in the fusion mechanism. The experiments considered are of three types: the fusion of lighter heavy ions at subcoulomb energies is studied with in-beam γ-ray techniques; the subbarrier fusion of 16O and 40Ar with the isotopes of samarium is detected out of beam by x-radiation from delayed activity; and measurements at very high energies, again for the lighter ions, employ direct particle identification of evaporation residues. The experimental data are compared with predictions based on the fusion of two spheres whose only degree of freedom is the separation of their centers, and which interact via potentials that vary smoothly with changes in the mass and charge of the projectile and target. The data exhibit deviations from these predictions. For the fusion with the isotopes of samarium, a portion of these deviations can be understood in terms of the changing deformation of the target nucleus, but an additional degree of freedom such as neck formation appears necessary. The results on 10B + 16O and 12C + 14N → 26Al at high bombarding energies indicate a maximum limiting angular momentum characteristic of the compound nucleus. At lower energies the nuclear structure of the colliding ions seems to affect strongly the cross section for fusion. Measurements made at subbarrier energies for a variety of projectile-target combinations in the 1p and 2s-1d shells also indicate that the valence nucleons can affect the energy dependence of fusion. About half the systems studied so far have structureless excitation functions which follow a standard prediction. The other half exhibit large variations from this prediction. The possible importance of neutron transfer is discussed. The two-center shell model appears to be a promising approach for gaining a qualitative understanding of these phenomena. 95 references, 52 figures, 1 table

  11. Application of principal component analysis and information fusion technique to detect hotspots in NOAA/AVHRR images of Jharia coalfield, India - article no. 013523

    Energy Technology Data Exchange (ETDEWEB)

    Gautam, R.S.; Singh, D.; Mittal, A. [Indian Institute of Technology Roorkee, Roorkee (India)

    2007-07-01

    This paper proposes an algorithm for hotspot (sub-surface fire) detection in NOAA/AVHRR images of the Jharia region of India by employing Principal Component Analysis (PCA) and a fusion technique. The proposed technique is simple to implement and more adaptive than thresholding, multi-thresholding and contextual algorithms. The algorithm takes into account the information of AVHRR channels 1, 2, 3 and 4 and the vegetation indices NDVI and MSAVI. The proposed technique consists of three steps: (1) detection and removal of cloud and water pixels from the preprocessed AVHRR image and screening out the noise of channel 3, (2) application of PCA to the multi-channel information along with the vegetation index information of the NOAA/AVHRR image to obtain principal components, and (3) fusion of the information in principal components 1 and 2 to classify image pixels as hotspots. Image processing techniques are applied to fuse the information in the first two principal component images, and no absolute threshold is incorporated to decide whether a particular pixel belongs to the hotspot class; hence, the proposed method is adaptive in nature and works successfully for most AVHRR images, with an average detection accuracy of 87.27% and a false alarm rate of 0.201% when compared with ground truth points in the Jharia region of India.
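Steps (2) and (3) can be sketched as below. The equal weighting of the first two principal components and the statistics-based cut are illustrative assumptions; the paper's exact fusion rule is image-processing based and not reproduced here:

```python
import numpy as np

def pca_fusion_scores(bands):
    """bands: (n_pixels, n_features) matrix of channel values plus
    vegetation indices. Returns a per-pixel anomaly score fused from
    the first two principal components."""
    X = bands - bands.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    pcs = X @ eigvecs[:, order[:2]]          # PC1 and PC2 per pixel
    # Fuse the two components; equal weights are an illustrative choice.
    return np.abs(pcs[:, 0]) + np.abs(pcs[:, 1])

def adaptive_hotspots(score, k=2.0):
    """Flag pixels scoring k standard deviations above the scene mean —
    a relative criterion rather than an absolute threshold."""
    return score > score.mean() + k * score.std()
```

Because the cut is derived from each scene's own score statistics, the rule adapts from image to image in the spirit of the paper's threshold-free design.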

  12. Preliminary study on evaluation of the pancreatic tail observable limit of transabdominal ultrasonography using a position sensor and CT-fusion image

    Energy Technology Data Exchange (ETDEWEB)

    Sumi, Hajime; Itoh, Akihiro; Kawashima, Hiroki [Department of Gastroenterology and Hepatology, Nagoya University Graduate School of Medicine, Nagoya (Japan); Ohno, Eizaburo [Department of Endoscopy, Nagoya University Hospital, Nagoya (Japan); Itoh, Yuya; Nakamura, Yosuke; Hiramatsu, Takeshi; Sugimoto, Hiroyuki; Hayashi, Daijuro; Kuwahara, Takamichi; Morishima, Tomomasa; Kawai, Manabu; Furukawa, Kazuhiro [Department of Gastroenterology and Hepatology, Nagoya University Graduate School of Medicine, Nagoya (Japan); Funasaka, Kohei [Department of Endoscopy, Nagoya University Hospital, Nagoya (Japan); Nakamura, Masanao; Miyahara, Ryoji [Department of Gastroenterology and Hepatology, Nagoya University Graduate School of Medicine, Nagoya (Japan); Katano, Yoshiaki [Department of Gastroenterology, Second Teaching Hospital, Fujita Health University (Japan); Ishigami, Masatoshi [Department of Gastroenterology and Hepatology, Nagoya University Graduate School of Medicine, Nagoya (Japan); Ohmiya, Naoki [Department of Gastroenterology, Second Teaching Hospital, Fujita Health University (Japan); Goto, Hidemi [Department of Gastroenterology and Hepatology, Nagoya University Graduate School of Medicine, Nagoya (Japan); Department of Endoscopy, Nagoya University Hospital, Nagoya (Japan); and others

    2014-08-15

    Background and aim: Transabdominal ultrasonography (US) is commonly used for the initial screening of bilio-pancreatic diseases in Asian countries due to its widespread availability, non-invasiveness and cost-effectiveness. However, US has areas it cannot visualize, namely blind areas, and observation of the pancreatic tail is particularly difficult. The goal of this study was to examine the pancreatic tail region that cannot be visualized on transverse scanning of the upper abdomen using US with spatial positional information, the factors related to visualization, and observation of the tail from the splenic hilum. Methods: Thirty-nine patients with pancreatic/biliary tract disease underwent CT and US with GPS-like technology and fusion imaging for measurement of the real pancreatic length and the predicted/real unobservable (PU and RU) lengths of the pancreatic tail. RU from US on transverse scanning and the real pancreatic length were used to determine the unobservable area (UA: RU/real pancreatic length). Relationships of RU with physical and hematological variables that might influence visualization of the pancreatic tail were investigated. Results: The real pancreatic length was 160.9 ± 16.4 mm, RU was 41.0 ± 17.8 mm, and UA was 25.3 ± 10.4%. RU was correlated with BMI (R = 0.446, P = 0.004) and waist circumference (R = 0.354, P = 0.027), and strongly correlated with PU (R = 0.788, P < 0.001). The pancreatic tail was visible from the splenic hilum in 22 (56%) subjects and was completely identified in 13 (33%) subjects. Conclusions: Combined GPS-like technology with fusion imaging was useful for objective estimation of the pancreatic blind area.

  13. Preliminary study on evaluation of the pancreatic tail observable limit of transabdominal ultrasonography using a position sensor and CT-fusion image

    International Nuclear Information System (INIS)

    Sumi, Hajime; Itoh, Akihiro; Kawashima, Hiroki; Ohno, Eizaburo; Itoh, Yuya; Nakamura, Yosuke; Hiramatsu, Takeshi; Sugimoto, Hiroyuki; Hayashi, Daijuro; Kuwahara, Takamichi; Morishima, Tomomasa; Kawai, Manabu; Furukawa, Kazuhiro; Funasaka, Kohei; Nakamura, Masanao; Miyahara, Ryoji; Katano, Yoshiaki; Ishigami, Masatoshi; Ohmiya, Naoki; Goto, Hidemi

    2014-01-01

    Background and aim: Transabdominal ultrasonography (US) is commonly used for the initial screening of bilio-pancreatic diseases in Asian countries due to its widespread availability, non-invasiveness and cost-effectiveness. However, US has areas it cannot visualize, namely blind areas, and observation of the pancreatic tail is particularly difficult. The goal of this study was to examine the pancreatic tail region that cannot be visualized on transverse scanning of the upper abdomen using US with spatial positional information, the factors related to visualization, and observation of the tail from the splenic hilum. Methods: Thirty-nine patients with pancreatic/biliary tract disease underwent CT and US with GPS-like technology and fusion imaging for measurement of the real pancreatic length and the predicted/real unobservable (PU and RU) lengths of the pancreatic tail. RU from US on transverse scanning and the real pancreatic length were used to determine the unobservable area (UA: RU/real pancreatic length). Relationships of RU with physical and hematological variables that might influence visualization of the pancreatic tail were investigated. Results: The real pancreatic length was 160.9 ± 16.4 mm, RU was 41.0 ± 17.8 mm, and UA was 25.3 ± 10.4%. RU was correlated with BMI (R = 0.446, P = 0.004) and waist circumference (R = 0.354, P = 0.027), and strongly correlated with PU (R = 0.788, P < 0.001). The pancreatic tail was visible from the splenic hilum in 22 (56%) subjects and was completely identified in 13 (33%) subjects. Conclusions: Combined GPS-like technology with fusion imaging was useful for objective estimation of the pancreatic blind area.

  14. Imaging Expression of Cytosine Deaminase-Herpes Virus Thymidine Kinase Fusion Gene (CD/TK) Expression with [124I]FIAU and PET

    Directory of Open Access Journals (Sweden)

    Trevor Hackman

    2002-01-01

    Full Text Available Double prodrug activation gene therapy using the Escherichia coli cytosine deaminase (CD)/herpes simplex virus type 1 thymidine kinase (HSV1-tk) fusion gene (CD/TK) with 5-fluorocytosine (5FC), ganciclovir (GCV), and radiotherapy is currently under evaluation for treatment of different tumors. We assessed the efficacy of noninvasive imaging with [124I]FIAU (2′-fluoro-2′-deoxy-1-β-d-arabinofuranosyl-5-iodo-uracil) and positron emission tomography (PET) for monitoring expression of the CD/TK fusion gene. Walker-256 tumor cells were transduced with a retroviral vector bearing the CD/TK gene (W256CD/TK cells). The activity of HSV1-TK and CD subunits of the CD/TK gene product was assessed in different single cell-derived clones of W256CD/TK cells using the FIAU radiotracer accumulation assay in cells and a CD enzyme assay in cell homogenates, respectively. A linear relationship was observed between the levels of CD and HSV1-tk subunit expression in corresponding clones in vitro over a wide range of CD/TK expression levels. Several clones of W256CD/TK cells with significantly different levels of CD/TK expression were selected and used to produce multiple subcutaneous tumors in rats. PET imaging of HSV1-TK subunit activity with [124I]FIAU was performed on these animals and demonstrated that different levels of CD/TK expression in subcutaneous W256CD/TK tumors can be imaged quantitatively. CD expression in subcutaneous tumor sample homogenates was measured using a CD enzyme assay. A comparison of CD and HSV1-TK subunit enzymatic activity of the CD/TK fusion protein in vivo showed a significant correlation. Knowing this relationship, parametric images of CD subunit activity were generated. Imaging with [124I]FIAU and PET could provide pre- and posttreatment assessments of CD/TK-based double prodrug activation in clinical gene therapy trials.

  15. A pin diode x-ray camera for laser fusion diagnostic imaging: Final technical report

    International Nuclear Information System (INIS)

    Jernigan, J.G.

    1987-01-01

    An x-ray camera has been constructed and tested for diagnostic imaging of laser fusion targets at the Laboratory for Laser Energetics (LLE) of the University of Rochester. The imaging detector, developed by the Hughes Aircraft Company, is a germanium PIN diode array of 10 x 64 separate elements which are bump bonded to a silicon readout chip containing a separate low noise amplifier for each pixel element. The camera assembly consists of a pinhole alignment mechanism, liquid nitrogen cryostat with detector mount and a thin beryllium entrance window, and a shielded rack containing the analog and digital electronics for operations. This x-ray camera has been tested on the OMEGA laser target chamber, the primary laser target facility of LLE, and operated via an Ethernet link to a SUN Microsystems workstation. X-ray images of laser targets are presented. The successful operation of this particular x-ray camera is a demonstration of the viability of the hybrid detector technology for future imaging and spectroscopic applications. This work was funded by the Department of Energy (DOE) as a project of the National Laser Users Facility (NLUF)

  16. Comprehensive fluence model for absolute portal dose image prediction

    International Nuclear Information System (INIS)

    Chytyk, K.; McCurdy, B. M. C.

    2009-01-01

    Amorphous silicon (a-Si) electronic portal imaging devices (EPIDs) continue to be investigated as treatment verification tools, with a particular focus on intensity modulated radiation therapy (IMRT). This verification could be accomplished through a comparison of measured portal images to predicted portal dose images. A general fluence determination tailored to portal dose image prediction would be a great asset in order to model the complex modulation of IMRT. A proposed physics-based, parameterized fluence model was commissioned by matching predicted EPID images to corresponding measured EPID images of multileaf collimator (MLC) defined fields. The two-source fluence model was composed of a focal Gaussian and an extrafocal Gaussian-like source. Specific aspects of the MLC and secondary collimators were also modeled (e.g., jaw and MLC transmission factors, MLC rounded leaf tips, tongue-and-groove effect, interleaf leakage, and leaf offsets). Several unique aspects of the model were developed based on the results of detailed Monte Carlo simulations of the linear accelerator, including (1) use of a non-Gaussian extrafocal fluence source function, (2) separate energy spectra used for focal and extrafocal fluence, and (3) different off-axis energy spectrum softening used for focal and extrafocal fluences. The predicted energy fluence was then convolved with Monte Carlo generated, EPID-specific dose kernels to convert incident fluence to dose delivered to the EPID. Measured EPID data were obtained with an a-Si EPID for various MLC-defined fields (from 1x1 to 20x20 cm2) over a range of source-to-detector distances. These measured profiles were used to determine the fluence model parameters in a process analogous to the commissioning of a treatment planning system. The resulting model was tested on 20 clinical IMRT plans, including ten prostate and ten oropharyngeal cases. 
The model predicted the open-field profiles within 2%, 2 mm, while a mean of 96.6% of pixels over all
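The two-source fluence and kernel-convolution chain described above can be sketched as follows. For simplicity both sources are Gaussian here (the paper's extrafocal source is explicitly non-Gaussian), and all widths and weights are assumed values, not commissioned parameters:

```python
import numpy as np

def centered_gaussian(shape, sigma):
    """Normalized 2-D Gaussian kernel centered in the array."""
    y, x = np.indices(shape)
    g = np.exp(-(((y - shape[0] // 2) ** 2 + (x - shape[1] // 2) ** 2)
                 / (2.0 * sigma ** 2)))
    return g / g.sum()

def fft_convolve(img, kernel):
    """Circular FFT convolution; ifftshift moves the kernel centre to (0, 0)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img)
                                * np.fft.fft2(np.fft.ifftshift(kernel))))

def predicted_portal_dose(aperture,
                          sigma_focal=0.8, sigma_extrafocal=5.0,
                          w_extrafocal=0.1, sigma_dose_kernel=1.5):
    """Illustrative two-source model: the MLC aperture blurred by a narrow
    focal source plus a broad extrafocal source, summed with an assumed
    extrafocal weight, then convolved with an EPID dose kernel."""
    focal = fft_convolve(aperture, centered_gaussian(aperture.shape, sigma_focal))
    extra = fft_convolve(aperture, centered_gaussian(aperture.shape, sigma_extrafocal))
    fluence = (1.0 - w_extrafocal) * focal + w_extrafocal * extra
    return fft_convolve(fluence, centered_gaussian(aperture.shape, sigma_dose_kernel))
```

Since every kernel is normalized to unit sum, the chain conserves total signal while spreading it spatially, which is the qualitative behavior a commissioned fluence-to-dose model must reproduce.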

  17. Changes in the Oswestry Disability Index that predict improvement after lumbar fusion.

    Science.gov (United States)

    Djurasovic, Mladen; Glassman, Steven D; Dimar, John R; Crawford, Charles H; Bratcher, Kelly R; Carreon, Leah Y

    2012-11-01

    Clinical studies use both disease-specific and generic health outcomes measures. Disease-specific measures focus on health domains most relevant to the clinical population, while generic measures assess overall health-related quality of life. There is little information about which domains of the Oswestry Disability Index (ODI) are most important in determining improvement in overall health-related quality of life, as measured by the 36-Item Short Form Health Survey (SF-36), after lumbar spinal fusion. The objective of the study was to determine which clinical elements assessed by the ODI most influence improvement of overall health-related quality of life. A single tertiary spine center database was used to identify patients undergoing lumbar fusion for standard degenerative indications. Patients with complete preoperative and 2-year outcomes measures were included. Pearson correlation was used to assess the relationship between improvement in each item of the ODI and improvement in the SF-36 physical component summary (PCS) score, as well as achievement of the SF-36 PCS minimum clinically important difference (MCID). Multivariate regression modeling was used to examine which items of the ODI best predicted achievement of the SF-36 PCS MCID. The effect size and standardized response mean were calculated for each of the items of the ODI. A total of 1104 patients met inclusion criteria (674 female and 430 male patients). The mean age at surgery was 57 years. All items of the ODI showed significant correlations with the change in SF-36 PCS score and achievement of MCID for the SF-36 PCS, but only pain intensity, walking, and social life had r values > 0.4, reflecting moderate correlation. These 3 variables were also the dimensions that were independent predictors of the SF-36 PCS, and they were the only dimensions that had effect sizes and standardized response means that were moderate to large. Of the health dimensions measured by the ODI, pain intensity, walking

  18. BP fusion model for the detection of oil spills on the sea by remote sensing

    Science.gov (United States)

    Chen, Weiwei; An, Jubai; Zhang, Hande; Lin, Bin

    2003-06-01

    Oil spills are a serious form of marine pollution in many countries. To detect and identify oil spilled on the sea by remote sensing, researchers must analyze remote sensing images. For the detection of oil spills on the sea, edge detection is an important technology in image processing, and many edge detection algorithms have been developed, each with its own advantages and disadvantages. Based on the primary requirements of edge detection in oil spill images, namely computation time and detection accuracy, we developed a fusion model. The model employs a BP (backpropagation) neural network to fuse the detection results of simple operators. We selected a BP neural network as the fusion technology because the relation between the edge gray levels produced by simple operators and the image's true edge gray levels is nonlinear, and BP networks are good at solving nonlinear identification problems. We therefore trained a BP neural network on several oil spill images, applied the BP fusion model to edge detection in other oil spill images, and obtained good results. In this paper the detection results of several gradient operators and the Laplacian operator are also compared with the results of the BP fusion model to analyze the fusion effect. Finally, the paper points out that the fusion model achieves higher accuracy and higher speed in edge detection of oil spill images.
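A minimal sketch of such a BP fusion net follows, assuming a one-hidden-layer architecture trained on per-pixel operator responses with a true edge map as the target. Layer sizes, learning rate, and the synthetic training setup are illustrative choices, not the paper's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BPFusion:
    """One-hidden-layer backpropagation net that fuses the per-pixel
    responses of several edge operators into a single edge value."""

    def __init__(self, n_inputs, n_hidden=6, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2)

    def train(self, X, y, lr=1.0, epochs=5000):
        """Full-batch gradient descent on squared error."""
        y = y.reshape(-1, 1)
        n = len(X)
        for _ in range(epochs):
            out = self.forward(X)
            # Backpropagate the squared-error gradient through both layers.
            d_out = (out - y) * out * (1.0 - out)
            d_h = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
            self.W2 -= lr * self.h.T @ d_out / n
            self.b2 -= lr * d_out.mean(axis=0)
            self.W1 -= lr * X.T @ d_h / n
            self.b1 -= lr * d_h.mean(axis=0)
```

In use, each training row would hold the responses of the simple operators (e.g. gradient and Laplacian) at one pixel, with the reference edge gray level as the target.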

  19. Fusion Simulation Project Workshop Report

    Science.gov (United States)

    Kritz, Arnold; Keyes, David

    2009-03-01

    The mission of the Fusion Simulation Project is to develop a predictive capability for the integrated modeling of magnetically confined plasmas. This FSP report adds to the previous activities that defined an approach to integrated modeling in magnetic fusion. These previous activities included a Fusion Energy Sciences Advisory Committee panel that was charged to study integrated simulation in 2002. The report of that panel [Journal of Fusion Energy 20, 135 (2001)] recommended the prompt initiation of a Fusion Simulation Project. In 2003, the Office of Fusion Energy Sciences formed a steering committee that developed a project vision, roadmap, and governance concepts [Journal of Fusion Energy 23, 1 (2004)]. The current FSP planning effort involved 46 physicists, applied mathematicians and computer scientists, from 21 institutions, formed into four panels and a coordinating committee. These panels were constituted to consider: Status of Physics Components, Required Computational and Applied Mathematics Tools, Integration and Management of Code Components, and Project Structure and Management. The ideas, reported here, are the products of these panels, working together over several months and culminating in a 3-day workshop in May 2007.

  20. Bayesian data fusion for spatial prediction of categorical variables in environmental sciences

    Science.gov (United States)

    Gengler, Sarah; Bogaert, Patrick

    2014-12-01

    First developed to predict continuous variables, Bayesian Maximum Entropy (BME) has become a complete framework in the context of space-time prediction since it has been extended to predict categorical variables and mixed random fields. This method proposes solutions for combining several sources of data whatever the nature of the information. However, the various attempts that were made to adapt the BME methodology to categorical variables and mixed random fields faced some limitations, such as a high computational burden. The main objective of this paper is to overcome this limitation by generalizing the Bayesian Data Fusion (BDF) theoretical framework to categorical variables, which can be seen as a simplification of the BME method through the convenient conditional independence hypothesis. The BDF methodology for categorical variables is first described and then applied to a practical case study: the estimation of soil drainage classes using a soil map and point observations in the sandy area of Flanders around the city of Mechelen (Belgium). The BDF approach is compared to BME along with more classical approaches, such as Indicator CoKriging (ICK) and logistic regression. Estimators are compared using various indicators, namely the Percentage of Correctly Classified locations (PCC) and the Average Highest Probability (AHP). Although the BDF methodology for categorical variables is in some sense a simplification of the BME approach, both methods lead to similar results and have strong advantages compared to ICK and logistic regression.
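Under the conditional independence hypothesis, the BDF fusion rule for a categorical variable reduces to a product of per-source likelihoods renormalized against the prior. A minimal sketch (class counts and likelihood values are illustrative, not from the case study):

```python
import numpy as np

def bdf_fuse(prior, likelihoods):
    """Fuse class evidence from several sources assumed conditionally
    independent given the class:
        p(c | y_1..y_n) ∝ p(c) * prod_i p(y_i | c)
    prior: (n_classes,); likelihoods: iterable of (n_classes,) vectors."""
    post = np.array(prior, dtype=float)
    for lik in likelihoods:
        post *= lik
    return post / post.sum()
```

For the soil-drainage application, one likelihood vector would come from the soil map and another from nearby point observations; the product concentrates probability on the class both sources support.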

  1. Bayesian data fusion for spatial prediction of categorical variables in environmental sciences

    International Nuclear Information System (INIS)

    Gengler, Sarah; Bogaert, Patrick

    2014-01-01

    First developed to predict continuous variables, Bayesian Maximum Entropy (BME) has become a complete framework in the context of space-time prediction since it has been extended to predict categorical variables and mixed random fields. This method proposes solutions for combining several sources of data whatever the nature of the information. However, the various attempts that were made to adapt the BME methodology to categorical variables and mixed random fields faced some limitations, such as a high computational burden. The main objective of this paper is to overcome this limitation by generalizing the Bayesian Data Fusion (BDF) theoretical framework to categorical variables, which can be seen as a simplification of the BME method through the convenient conditional independence hypothesis. The BDF methodology for categorical variables is first described and then applied to a practical case study: the estimation of soil drainage classes using a soil map and point observations in the sandy area of Flanders around the city of Mechelen (Belgium). The BDF approach is compared to BME along with more classical approaches, such as Indicator CoKriging (ICK) and logistic regression. Estimators are compared using various indicators, namely the Percentage of Correctly Classified locations (PCC) and the Average Highest Probability (AHP). Although the BDF methodology for categorical variables is in some sense a simplification of the BME approach, both methods lead to similar results and have strong advantages compared to ICK and logistic regression.

  2. Image fusion of MRI and fMRI with intraoperative MRI data: methods and clinical relevance for neurosurgical interventions; Fusion von MRT-, fMRT- und intraoperativen MRT-Daten. Methode und klinische Bedeutung am Beispiel neurochirurgischer Interventionen

    Energy Technology Data Exchange (ETDEWEB)

    Moche, M.; Busse, H.; Dannenberg, C.; Schulz, T.; Schmidt, F.; Kahn, T. [Universitaetsklinikum Leipzig (Germany). Klinik und Poliklinik fuer Diagnostische Radiologie; Schmitgen, A. [GMD Forschungszentrum Informationstechnik GmbH-FIT, Sankt Augustin (Germany); Trantakis, C.; Winkler, D. [Klinik und Poliklinik fuer Neurochirurgie, Universitaetsklinikum Leipzig (Germany)

    2001-11-01

    The aim of this work was to realize and clinically evaluate an image fusion platform for the integration of preoperative MRI and fMRI data into the intraoperative images of an interventional MRI system, with a focus on neurosurgical procedures. A vertically open 0.5 T MRI scanner was equipped with a dedicated navigation system enabling the registration of additional imaging modalities (MRI, fMRI, CT) with the intraoperatively acquired data sets. These merged image data served as the basis for interventional planning and multimodal navigation. So far, the system has been used in 70 neurosurgical interventions (13 of which involved image data fusion, requiring 15 minutes of extra time). The augmented navigation system is characterized by a higher frame rate and higher image quality compared to the system-integrated navigation based on continuously acquired (near) real-time images. Patient movement and tissue shifts can be immediately detected by monitoring the morphological differences between the two navigation scenes. The multimodal image fusion allowed refined navigation planning, especially for the resection of deeply seated brain lesions or pathologies close to eloquent areas. Augmented intraoperative orientation and instrument guidance improve the safety and accuracy of neurosurgical interventions. (orig.) [German] The aim of this work was the realization and clinical evaluation of image fusion of preoperative MRI and fMRI images with the intraoperative data sets of an interventional MRI system, exemplified by neurosurgical interventions. A vertically open 0.5 T MRI system was equipped with an extended navigation system that permits the integration of additional image information (high-field MRI, fMRI, CT) into the intraoperatively acquired data sets. These fused image data were used for intervention planning and multimodal navigation. To date, the system has been used in a total of 70 neurosurgical interventions, 13 of which

  3. Fundamental radiation effects studies in the fusion materials program

    International Nuclear Information System (INIS)

    Doran, D.G.

    1982-01-01

    Fundamental radiation effects studies in the US Fusion Materials Program generally fall under the aegis of the Damage Analysis and Fundamental Studies (DAFS) Program. In a narrow sense, the problem addressed by the DAFS program is the prediction of radiation effects in fusion devices using data obtained in non-representative environments. From the outset, the program has had near-term and long-term components. The premise for the latter is that there will be large economic penalties for uncertainties in predictive capability. Fusion devices are expected to be large and complex, and unanticipated maintenance will be costly. It is important that predictions be based on a maximum of understanding and a minimum of empiricism. Gaining this understanding is the thrust of the long-term component. (orig.)

  4. Three-Dimensional Image Fusion of 18F-Fluorodeoxyglucose-Positron Emission Tomography/Computed Tomography and Contrast-Enhanced Computed Tomography for Computer-Assisted Planning of Maxillectomy of Recurrent Maxillary Squamous Cell Carcinoma and Defect Reconstruction.

    Science.gov (United States)

    Yu, Yao; Zhang, Wen-Bo; Liu, Xiao-Jing; Guo, Chuan-Bin; Yu, Guang-Yan; Peng, Xin

    2017-06-01

    The purpose of this study was to describe a new technique assisted by 3-dimensional (3D) image fusion of 18F-fluorodeoxyglucose (FDG)-positron emission tomography (PET)/computed tomography (CT) and contrast-enhanced CT (CECT) for computer-assisted planning of maxillectomy of recurrent maxillary squamous cell carcinoma and defect reconstruction. Treatment of recurrent maxillary squamous cell carcinoma usually includes tumor resection and free flap reconstruction. FDG-PET/CT provided images of regions of abnormal glucose uptake and thus showed the metabolic tumor volume to guide tumor resection. CECT data were used to create 3D reconstructed images of vessels showing vascular diameters and locations, so that the most suitable vein and artery could be selected during anastomosis of the free flap. The data from preoperative maxillofacial CECT scans and FDG-PET/CT imaging were imported into the navigation system (iPlan 3.0; Brainlab, Feldkirchen, Germany). Three-dimensional image fusion between FDG-PET/CT and CECT was accomplished using Brainlab software according to the positions of the two skulls simulated in the CECT and PET/CT images, respectively. After verification of the image fusion accuracy, 3D reconstruction images of the metabolic tumor, vessels, and other critical structures could be visualized within the same coordinate system. These sagittal, coronal, axial, and 3D reconstruction images were used to determine the virtual osteotomy sites and the reconstruction plan, which was provided to the surgeon and used for surgical navigation. The average shift of the 3D image fusion between FDG-PET/CT and CECT was less than 1 mm. This technique, by clearly showing the metabolic tumor volume and the most suitable vessels for anastomosis, facilitated resection and reconstruction of recurrent maxillary squamous cell carcinoma. We used 3D image fusion of FDG-PET/CT and CECT to successfully accomplish resection and reconstruction of recurrent maxillary squamous cell carcinoma.

  5. Low-Resolution Tactile Image Recognition for Automated Robotic Assembly Using Kernel PCA-Based Feature Fusion and Multiple Kernel Learning-Based Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2014-01-01

    Full Text Available In this paper, we propose a robust tactile sensing image recognition scheme for automatic robotic assembly. First, an image preprocessing procedure is designed to enhance the contrast of the tactile image. In the second layer, geometric features and Fourier descriptors are extracted from the image. Then, kernel principal component analysis (kernel PCA) is applied to transform the features into ones with better discriminating ability, which is the kernel PCA-based feature fusion. The transformed features are fed into the third layer for classification. In this paper, we design a classifier by combining the multiple kernel learning (MKL) algorithm and the support vector machine (SVM). We also design and implement a tactile sensing array consisting of 10-by-10 sensing elements. Experimental results, carried out on real tactile images acquired by the designed tactile sensing array, show that the kernel PCA-based feature fusion can significantly improve the discriminating performance of the geometric features and Fourier descriptors. Also, the designed MKL-SVM outperforms the regular SVM in terms of recognition accuracy. The proposed recognition scheme is able to achieve a high recognition rate of over 85% for the classification of 12 commonly used metal parts in industrial applications.
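The kernel PCA feature-fusion stage can be sketched as below; the RBF kernel and its parameter are assumptions, and the MKL-SVM classifier that follows it in the paper is omitted. Concatenated geometric and Fourier features go in, projections with better separability come out:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """RBF kernel PCA: project concatenated feature vectors (one row per
    sample) onto the leading components of the centered kernel matrix."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one      # centre in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)           # ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    # Scale eigenvectors so each projection has norm sqrt(eigenvalue).
    alphas = eigvecs[:, order] / np.sqrt(np.maximum(eigvals[order], 1e-12))
    return Kc @ alphas
```

The projected columns are mutually orthogonal with variance ordered by eigenvalue, so the leading components concentrate the discriminating structure that the downstream classifier exploits.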

  6. An Embodied Multi-Sensor Fusion Approach to Visual Motion Estimation Using Unsupervised Deep Networks.

    Science.gov (United States)

    Shamwell, E Jared; Nothwang, William D; Perlis, Donald

    2018-05-04

    Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76-357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.

  7. Short fusion

    CERN Multimedia

    2002-01-01

    French and UK researchers are perfecting a particle accelerator technique that could aid the quest for fusion energy or make X-rays that are safer and produce higher-resolution images. Led by Dr Victor Malka from the Ecole Nationale Superieure des Techniques Avancees in Paris, the team has developed a better way of accelerating electrons over short distances (1 page).

  8. Nuclear Fusion prize laudation Nuclear Fusion prize laudation

    Science.gov (United States)

    Burkart, W.

    2011-01-01

    Clean energy in abundance will be of critical importance to the pursuit of world peace and development. As part of the IAEA's activities to facilitate the dissemination of fusion related science and technology, the journal Nuclear Fusion is intended to contribute to the realization of such energy from fusion. In 2010, we celebrated the 50th anniversary of the IAEA journal. The excellence of research published in the journal is attested to by its high citation index. The IAEA recognizes excellence by means of an annual prize awarded to the authors of papers judged to have made the greatest impact. On the occasion of the 2010 IAEA Fusion Energy Conference in Daejeon, Republic of Korea at the welcome dinner hosted by the city of Daejeon, we celebrated the achievements of the 2009 and 2010 Nuclear Fusion prize winners. Steve Sabbagh, from the Department of Applied Physics and Applied Mathematics, Columbia University, New York is the winner of the 2009 award for his paper: 'Resistive wall stabilized operation in rotating high beta NSTX plasmas' [1]. This is a landmark paper which reports record parameters of beta in a large spherical torus plasma and presents a thorough investigation of the physics of resistive wall mode (RWM) instability. The paper makes a significant contribution to the critical topic of RWM stabilization. John Rice, from the Plasma Science and Fusion Center, MIT, Cambridge is the winner of the 2010 award for his paper: 'Inter-machine comparison of intrinsic toroidal rotation in tokamaks' [2]. The 2010 award is for a seminal paper that analyzes results across a range of machines in order to develop a universal scaling that can be used to predict intrinsic rotation. This paper has already triggered a wealth of experimental and theoretical work. I congratulate both authors and their colleagues on these exceptional papers. W. Burkart Deputy Director General Department of Nuclear Sciences and Applications International Atomic Energy Agency, Vienna

  9. Fusion neutronics experiments and analysis

    International Nuclear Information System (INIS)

    1992-01-01

    UCLA has led the neutronics R&D effort in the US for the past several years through the well-established USDOE/JAERI Collaborative Program on Fusion Neutronics. Significant contributions have been made in providing solid bases for advancing the neutronics testing capabilities in fusion reactors. This resulted from the hands-on experience gained from conducting several fusion integral experiments to quantify the prediction uncertainties of key blanket design parameters such as tritium production rate, activation, and nuclear heating, and, when possible, to narrow the gap between calculational results and measurements through improving the nuclear database and code capabilities. The current focus is to conduct the experiments in an annular configuration where the test assembly totally surrounds a simulated line source. The simulated line source is the first of its kind in the scope of fusion integral experiments and represents a significant contribution to the world of fusion neutronics. These line source simulation experiments, started in 1989, have proceeded through Phases IIIA to IIIC.

  10. The lucky image-motion prediction for simple scene observation based soft-sensor technology

    Science.gov (United States)

    Li, Yan; Su, Yun; Hu, Bin

    2015-08-01

    High resolution is important for earth remote sensors, while the vibration of the remote sensors' platforms is a major factor restricting high resolution imaging. Image-motion prediction and real-time compensation are key technologies to solve this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes utilizing soft-sensor technology for image-motion prediction and focuses on algorithm optimization for imaging image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm, combining a back propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.

  11. A fuzzy feature fusion method for auto-segmentation of gliomas with multi-modality diffusion and perfusion magnetic resonance images in radiotherapy.

    Science.gov (United States)

    Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming

    2018-02-19

    The diffusion and perfusion magnetic resonance (MR) images can provide functional information about a tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fuzzy fusion result of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); the mean sensitivity and specificity of auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
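    The histogram-based fuzzy fusion idea can be sketched as follows. This is a hypothetical illustration, not the authors' method: each modality's membership function is approximated here by the image's own cumulative histogram, the fuzzy AND is taken as a pixelwise minimum, and the 0.9 threshold is arbitrary.

    ```python
    import numpy as np

    def histogram_membership(img, bins=64):
        """Map intensities to [0, 1] via the image's cumulative histogram
        (a crude stand-in for a histogram-based fuzzy model; assumes
        brighter = more tumour-like for every modality)."""
        hist, edges = np.histogram(img, bins=bins)
        cdf = np.cumsum(hist) / img.size
        return np.interp(img, edges[1:], cdf)

    def fuzzy_fuse(memberships):
        """Fuzzy AND (pixelwise minimum) across the modality memberships."""
        return np.minimum.reduce(memberships)

    def segment(modalities, threshold=0.9):
        """Fuse ADC/FA/rCBV-style membership maps and threshold the result."""
        fused = fuzzy_fuse([histogram_membership(m) for m in modalities])
        return fused > threshold
    ```

    Requiring a high membership in every modality (the minimum) is what suppresses regions that look tumour-like in only one map.
    
    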

  12. Randomized Symmetric Crypto Spatial Fusion Steganographic System

    Directory of Open Access Journals (Sweden)

    Viswanathan Perumal

    2016-06-01

    Full Text Available The image fusion steganographic system embeds encrypted messages in decomposed multimedia carriers using a pseudorandom generator, but it fails to evaluate the contents of the cover image. This results in the secret data being embedded in smooth regions, which leads to visible distortion that affects the imperceptibility and confidentiality. To solve this issue, as well as to improve the quality and robustness of the system, the Randomized Symmetric Crypto Spatial Fusion Steganography System is proposed in this study. It comprises three subsystems: bitwise encryption, spatial fusion, and bitwise embedding. First, bitwise encryption encrypts the message using bitwise operations to improve confidentiality. Then, spatial fusion decomposes the cover image and evaluates the region of embedding on the basis of sharp intensity and capacity. This restricts the visibility of distortion and provides a high embedding capacity. Finally, the bitwise embedding system embeds the encrypted message by differencing the pixels in the region by 1, checking even or odd options and not-equal-to-zero constraints. This reduces the modification rate to avoid distortion. The proposed heuristic algorithm is implemented in the blue channel, to which the human visual system is less sensitive. It was tested using standard IST natural images with steganalysis algorithms and resulted in better quality, imperceptibility, embedding capacity and invulnerability to various attacks compared to other steganographic systems.
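    The "difference the pixel by 1, check even or odd" embedding rule can be illustrated with a parity-based scheme in the blue channel. This is a simplified sketch of that one subsystem, not the full proposed system; the choice of embedding positions is assumed to come from the spatial-fusion stage.

    ```python
    import numpy as np

    def embed_bits(blue, bits, positions):
        """Embed one bit per chosen pixel of the blue channel: if the pixel's
        parity (even/odd) already encodes the bit, leave it; otherwise change
        the value by 1. Stego pixels thus differ from the cover by at most 1."""
        out = blue.astype(np.int16)  # work in int16 to avoid uint8 wrap-around
        for (r, c), b in zip(positions, bits):
            if out[r, c] % 2 != b:
                out[r, c] += 1 if out[r, c] < 255 else -1
        return out.astype(np.uint8)

    def extract_bits(blue, positions):
        """Recover the message as the parity of each embedding pixel."""
        return [int(blue[r, c]) % 2 for (r, c) in positions]
    ```

    Because roughly half the pixels already carry the right parity, only about half of the embedding positions are modified at all, which is the low modification rate the record refers to.
    
    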

  13. Review of Fusion Systems and Contributing Technologies for SIHS-TD (Examen des Systemes de Fusion et des Technologies d'Appui pour la DT SIHS)

    National Research Council Canada - National Science Library

    Angel, Harry H; Ste-Croix, Chris; Kittel, Elizabeth

    2007-01-01

    The major objectives of the report were to identify and review the field of image fusion and contributing technologies and to recommend systems, algorithms and metrics for the proposed SIHS TD Vision SST fusion test bed...

  14. Applications of technical fusion in uroradiology; Einsatzmoeglichkeiten der technischen Fusion in der Uroradiologie

    Energy Technology Data Exchange (ETDEWEB)

    Aigner, F.; Zordo, T. de; Junker, D. [Medical University Innsbruck (Austria). Radiology; Pallwein-Prettner, L. [Sisters of Charity Hospital, Linz (Austria). Radiology

    2015-05-15

    Technical fusion is defined as ultrasound-guided navigation through a previously generated 3D imaging dataset, such as computed tomography (CT) or magnetic resonance imaging (MRI). This technique allows the fused CT/MRI datasets to move synchronously with the real-time ultrasound in the same plane. Established as well as not yet described applications, the technical principles, and the limitations of this promising technique are introduced.

  15. Perception-oriented fusion of multi-sensor imagery: visible, IR, and SAR

    Science.gov (United States)

    Sidorchuk, D.; Volkov, V.; Gladilin, S.

    2018-04-01

    This paper addresses the problem of fusing optical (visible and thermal domain) data and radar data for the purpose of visualization. These types of images typically contain a great deal of complementary information, and their joint visualization can be more useful and convenient for a human user than a set of individual images. To solve the image fusion problem, we propose a novel algorithm that exploits peculiarities of human color perception and is based on grey-scale structural visualization. The benefits of the presented algorithm are exemplified with satellite imagery.

  16. The influence of asymmetry on mix in direct-drive inertial confinement fusion experiments

    International Nuclear Information System (INIS)

    Christensen, C.R.; Wilson, D.C.; Barnes, Cris W.; Grim, G.P.; Morgan, G.L.; Wilke, M.D.; Marshall, F.J.; Glebov, V.Yu.; Stoeckl, C.

    2004-01-01

    The mix of shell material into the fuel of inertial confinement fusion (ICF) implosions is thought to be a major cause of the failure of most ICF experiments to achieve the fusion yield predicted by computer codes. Implosion asymmetry is a simple measurable quantity that is expected to affect the mix. In order to measure the coupling of asymmetry to mix in ICF implosions, we have performed experiments on the OMEGA laser [T. R. Boehly et al., Rev. Sci. Instrum. 66, 508 (1995)] that vary the energy of each of the sixty beams individually to achieve a given fraction of L2, the second-order Legendre polynomial. Prolate, symmetric, and oblate implosions resulted. Three different fill pressures were used. Simultaneous x-ray and neutron images were obtained. The experiments were modeled with a radiation/hydrodynamics code using the multi-fluid interpenetration mix model of Scannapieco and Cheng. It fits the data well with a single value of its one adjustable parameter (0.07±0.01). This agreement is demonstrated by neutron yield, x-ray images, neutron images, and ion temperatures. The degree of decline of the neutron yield with asymmetry at different fill pressures provides a hard constraint on ICF mix modeling

  17. SU-F-T-42: MRI and TRUS Image Fusion as a Mode of Generating More Accurate Prostate Contours

    Energy Technology Data Exchange (ETDEWEB)

    Petronek, M; Purysko, A; Balik, S; Ciezki, J; Klein, E; Wilkinson, D [Cleveland Clinic Foundation, Cleveland, OH (United States)

    2016-06-15

    Purpose: Transrectal ultrasound (TRUS) imaging is utilized intra-operatively for LDR permanent prostate seed implant treatment planning. Prostate contouring with TRUS can be challenging at the apex and base. This study attempts to improve the accuracy of prostate contouring with MRI-TRUS fusion to prevent over- or under-estimation of the prostate volume. Methods: 14 patients with a previous MRI-guided prostate biopsy who had undergone an LDR permanent prostate seed implant were selected. The prostate was contoured on the MRI images (1 mm slice thickness) by a radiologist. The prostate was also contoured on TRUS images (5 mm slice thickness) during the LDR procedure by a urologist. MRI and TRUS images were rigidly fused manually, and the prostate contours from MRI and TRUS were compared using the Dice similarity coefficient, percentage volume difference, and length, height and width differences. Results: The prostate volume was overestimated by 8 ± 18% (range: 34% to −25%) in TRUS images compared to MRI. The mean Dice was 0.77 ± 0.09 (range: 0.53 to 0.88). The mean difference (TRUS-MRI) in the prostate width was 0 ± 4 mm (range: −11 to 5 mm), height was −3 ± 6 mm (range: −13 to 6 mm) and length was 6 ± 6 mm (range: −10 to 16 mm). The prostate was overestimated with TRUS imaging at the base in 6 cases (mean: 8 ± 4 mm; range: 5 to 14 mm), at the apex in 6 cases (mean: 11 ± 3 mm; range: 5 to 15 mm), and 1 case was underestimated at both base and apex by 4 mm. Conclusion: Use of intra-operative TRUS and MRI image fusion can help to improve the accuracy of prostate contouring by accurately accounting for prostate over- or under-estimations, especially at the base and apex. The mean amount of discrepancy is within a range that is significant for LDR sources.
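    The two contour-comparison metrics reported above, the Dice similarity coefficient and the percentage volume difference, are straightforward to compute on binary masks; a minimal NumPy sketch:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks:
        2|A ∩ B| / (|A| + |B|)."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def percent_volume_difference(test, ref):
        """Signed volume difference of test vs. reference, in percent
        (positive = overestimation by the test contour)."""
        return 100.0 * (test.sum() - ref.sum()) / ref.sum()
    ```

    Note that two contours of identical volume but shifted position can have a volume difference of 0% yet a Dice well below 1, which is why both metrics are reported.
    
    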

  18. SU-F-T-42: MRI and TRUS Image Fusion as a Mode of Generating More Accurate Prostate Contours

    International Nuclear Information System (INIS)

    Petronek, M; Purysko, A; Balik, S; Ciezki, J; Klein, E; Wilkinson, D

    2016-01-01

    Purpose: Transrectal ultrasound (TRUS) imaging is utilized intra-operatively for LDR permanent prostate seed implant treatment planning. Prostate contouring with TRUS can be challenging at the apex and base. This study attempts to improve the accuracy of prostate contouring with MRI-TRUS fusion to prevent over- or under-estimation of the prostate volume. Methods: 14 patients with a previous MRI-guided prostate biopsy who had undergone an LDR permanent prostate seed implant were selected. The prostate was contoured on the MRI images (1 mm slice thickness) by a radiologist. The prostate was also contoured on TRUS images (5 mm slice thickness) during the LDR procedure by a urologist. MRI and TRUS images were rigidly fused manually, and the prostate contours from MRI and TRUS were compared using the Dice similarity coefficient, percentage volume difference, and length, height and width differences. Results: The prostate volume was overestimated by 8 ± 18% (range: 34% to −25%) in TRUS images compared to MRI. The mean Dice was 0.77 ± 0.09 (range: 0.53 to 0.88). The mean difference (TRUS-MRI) in the prostate width was 0 ± 4 mm (range: −11 to 5 mm), height was −3 ± 6 mm (range: −13 to 6 mm) and length was 6 ± 6 mm (range: −10 to 16 mm). The prostate was overestimated with TRUS imaging at the base in 6 cases (mean: 8 ± 4 mm; range: 5 to 14 mm), at the apex in 6 cases (mean: 11 ± 3 mm; range: 5 to 15 mm), and 1 case was underestimated at both base and apex by 4 mm. Conclusion: Use of intra-operative TRUS and MRI image fusion can help to improve the accuracy of prostate contouring by accurately accounting for prostate over- or under-estimations, especially at the base and apex. The mean amount of discrepancy is within a range that is significant for LDR sources.

  19. The need for fusion

    International Nuclear Information System (INIS)

    Llewellyn Smith, Chris

    2005-01-01

    World energy use is predicted to double in the next 40 years. Currently 80% is provided by burning fossil fuels, but this is not sustainable indefinitely because (i) it is driving climate change, and (ii) fossil fuels will eventually be exhausted (starting with oil). The resulting potential energy crisis requires increased investment in energy research and development (which is currently very small on the scale of the $3 trillion p.a. energy market, and falling). The wide portfolio of energy work that should be supported must include fusion, which is one of the very few options capable in principle of supplying a large fraction of need. The case for fusion has been strengthened by recent advances in plasma physics and fusion technology that are reflected in the forthcoming European Fusion Power Plant Conceptual Study, which addresses safety and cost issues. The big questions are: how can we deliver fusion power as fast as possible, and how long is it likely to take? I argue for a fast-track programme and describe a fast-track model developed at Culham, which is intended to stimulate debate on the way ahead and the resources that are needed.

  20. Sensor data fusion to predict multiple soil properties

    NARCIS (Netherlands)

    Mahmood, H.S.; Hoogmoed, W.B.; Henten, van E.J.

    2012-01-01

    The accuracy of a single sensor is often low because all proximal soil sensors respond to more than one soil property of interest. Sensor data fusion can potentially overcome this inability of a single sensor and can best extract useful and complementary information from multiple sensors or sources.

  1. Splenogonadal Fusion

    Directory of Open Access Journals (Sweden)

    Sung-Lang Chen

    2008-11-01

    Full Text Available Splenogonadal fusion (SGF) is a rare congenital non-malignant anomaly characterized by fusion of splenic tissue to the gonad, which can be continuous or discontinuous. Very few cases have been diagnosed preoperatively, and many patients who present with testicular swelling undergo unnecessary orchiectomy under the suspicion of testicular neoplasm. A 16-year-old boy presented with a left scrotal mass and underwent total excision of a 1.6-cm tumor without damaging the testis, epididymis or its accompanying vessels. Pathologic examination revealed SGF (discontinuous type). If clinically suspected before surgery, the diagnosis may be confirmed by Tc-99m sulfur colloid imaging, which shows uptake in both the spleen and accessory splenic tissue within the scrotum. A frozen section should be considered if there remains any doubt regarding the diagnosis during the operation.

  2. Infrared and visible fusion face recognition based on NSCT domain

    Science.gov (United States)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-01-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose variations, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being illumination-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible face fusion recognition. Firstly, NSCT is used to process the infrared and visible face images, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied in different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for final classification. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
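    A basic 3x3 LBP descriptor and the weighted-sum variant of score-level fusion can be sketched as follows. This is an illustrative simplification: the paper applies LGBP/LBP to NSCT sub-bands rather than to raw images, and the fusion weight here is an assumption.

    ```python
    import numpy as np

    def lbp_histogram(img):
        """Normalized histogram of basic 3x3 local binary pattern codes:
        each interior pixel gets an 8-bit code from thresholding its
        eight neighbours against the centre value."""
        c = img[1:-1, 1:-1]
        code = np.zeros_like(c, dtype=np.uint8)
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
        h, w = img.shape
        for k, (dr, dc) in enumerate(shifts):
            nb = img[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
            code |= (nb >= c).astype(np.uint8) << k
        hist = np.bincount(code.ravel(), minlength=256).astype(float)
        return hist / hist.sum()

    def fuse_scores(score_vis, score_ir, w=0.5):
        """Weighted-sum score-level fusion of two matcher scores
        (the 50/50 weight is an assumption, not the paper's value)."""
        return w * score_vis + (1.0 - w) * score_ir
    ```

    Score-level fusion only requires each modality's matcher to output a comparable similarity score, which is why it combines heterogeneous visible and infrared pipelines so easily.
    
    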

  3. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    Science.gov (United States)

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required for the removal of outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method has been practically applied in our driverless car.
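    The predict-then-gate idea, using a simple autoregressive model in place of the paper's full ARMA bank, can be sketched as follows; the AR(2) constant-velocity predictor, the noise level, and the equal fusion weights are all illustrative assumptions, not the paper's parameters.

    ```python
    import numpy as np

    def ar2_predict(track):
        """Constant-velocity AR(2) one-step prediction:
        x_t = 2*x_{t-1} - x_{t-2}."""
        return 2.0 * track[-1] - track[-2]

    def fuse_position(track, gps_fix, sigma=0.5, gate=3.0):
        """Gate the GPS fix against the model prediction: a fix whose
        innovation exceeds gate*sigma is treated as an outlier (e.g.
        multipath) and dropped in favour of dead reckoning; otherwise
        the prediction and the fix are averaged."""
        pred = ar2_predict(track)
        if abs(gps_fix - pred) > gate * sigma:
            return pred            # reject outlier, fall back on DR
        return 0.5 * (pred + gps_fix)
    ```

    The gate width plays the role of the paper's grid constraint: it pre-specifies how far a measurement may deviate from the consensus before it is discarded.
    
    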

  4. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    Directory of Open Access Journals (Sweden)

    Shiyao Wang

    2016-02-01

    Full Text Available A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required for the removal of outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method has been practically applied in our driverless car.

  5. Clinicopathological and prognostic relevance of uptake level using 18F-fluorodeoxyglucose positron emission tomography/computed tomography fusion imaging (18F-FDG PET/CT) in primary breast cancer

    International Nuclear Information System (INIS)

    Ueda, Shigeto; Tsuda, Hitoshi; Asakawa, Hideki

    2008-01-01

    Using integrated 18F-fluorodeoxyglucose positron emission tomography/computed tomography fusion imaging (18F-FDG PET/CT), the clinical significance of 18F-FDG uptake was evaluated in patients with primary breast cancer. Clinicopathological correlation with the level of the maximum standardized uptake value (SUV) at 60 min obtained from preoperative 18F-FDG PET/CT was examined in 152 patients with primary breast cancer. The prognostic impact of the SUV level was explored using simulated prognosis derived from the computer program Adjuvant! in 136 (89%) patients with invasive ductal carcinoma (IDC). High SUV level was significantly correlated with tumor invasive size (≤2 cm) (P 18F-FDG would be predictive of poor prognosis in patients with primary breast cancer, and of aggressive features of cancer cells in patients with early breast cancer. 18F-FDG PET/CT could be a useful tool to pretherapeutically predict biological characteristics and baseline risk of breast cancer. (author)

  6. Fusion Canada issue 29

    International Nuclear Information System (INIS)

    1995-10-01

    A short bulletin from the National Fusion Program, highlighting in this issue: Canada-Europe accords: a 5-year R&D collaboration for the International Thermonuclear Experimental Reactor (ITER). AECL is designated to arrange and implement the Memorandum of Understanding (MOU) and the ITER Engineering Design Activities (EDA), while EURATOM is responsible for operating Europe's fusion R&D programs plus the MOU and EDA. The MOU includes tokamaks, plasma physics, fusion technology, fusion fuels and other approaches to fusion energy (as alternatives to tokamaks). The STOR-M tokamak was restarted at the University of Saskatchewan following upgrades to the plasma chamber to accommodate the compact toroid (CT) injector; the CT injector has a flexible attachment, allowing for injection angle adjustments. Real-time video images of a single plasma discharge on TdeV show that, as the plasma density increases in a linear ramp divertor, the plasma contact with the horizontal plate decreases while contact with the oblique plate increases. Damage-resistant diffractive optical elements (DOE) have been developed for inertial confinement fusion (ICF) research by Gentac Inc. and the National Optics Institute; laser beam homogeniser and laser harmonic separator DOEs can also be made using the same technology. Studies using TdeV indicate that a divertor will be able to pump helium from the tokamak with a detached-plasma divertor, but helium extraction performance must first be improved; presently the deuterium:helium retention ratio indicates that, in order to pump enough helium through a fusion reactor, too much deuterium-tritium fuel would be pumped out. 2 fig

  7. Testing a Modified PCA-Based Sharpening Approach for Image Fusion

    Directory of Open Access Journals (Sweden)

    Jan Jelének

    2016-09-01

    Full Text Available Image data sharpening is a challenging field of remote sensing science, which has become more relevant as high spatial-resolution satellites and superspectral sensors have emerged. Although the spectral property is crucial for mineral mapping, spatial resolution is also important as it allows targeted minerals/rocks to be identified/interpreted in a spatial context. Therefore, improving the spatial context while keeping the spectral property provided by the superspectral sensor would bring great benefits for geological/mineralogical mapping, especially in arid environments. In this paper, a new concept was tested using superspectral data (ASTER) and high spatial-resolution panchromatic data (WorldView-2) for image fusion. A modified Principal Component Analysis (PCA)-based sharpening method, which implements a histogram matching workflow that takes into account the real distribution of values, was employed to test whether the substitution of Principal Components (PC1–PC4) can yield a fused image which is spectrally more accurate. The new approach was compared to the most widely used methods, PCA sharpening and Gram–Schmidt sharpening (GS), both available in ENVI software (version 5.2 and lower), as well as to the standard approach of sharpening Landsat 8 multispectral (MUL) bands using its own panchromatic (PAN) band. The visual assessment and the spectral quality indicators proved that the spectral performance of the proposed sharpening approach employing PC1 and PC2 improves the performance of the PCA algorithm; moreover, comparable or better results are achieved compared to the GS method. It was shown that, when using PC1, the visible-near infrared (VNIR) part of the spectrum was preserved better; however, if PC2 was used, the short-wave infrared (SWIR) part was preserved better. Furthermore, this approach improved the output spectral quality when fusing image data from different sensors (e.g., ASTER and WorldView-2) while keeping the proper albedo
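    The component-substitution core of PCA sharpening, with rank-based histogram matching of the panchromatic band to PC1, can be sketched in NumPy. This is a generic illustration of the classic technique, not the paper's modified workflow, and it assumes the multispectral cube has already been resampled to the panchromatic grid.

    ```python
    import numpy as np

    def histogram_match(src, ref):
        """Give src the value distribution of ref while keeping src's
        spatial rank order (rank-based histogram matching)."""
        src, ref = src.ravel().astype(float), ref.ravel().astype(float)
        out = np.empty_like(src)
        out[np.argsort(src)] = np.sort(ref)
        return out

    def pca_sharpen(ms, pan):
        """ms: (bands, H, W) multispectral cube resampled to the pan grid;
        pan: (H, W) panchromatic band. Classic component substitution:
        histogram-match pan to PC1, substitute, and invert the transform."""
        b, h, w = ms.shape
        X = ms.reshape(b, -1).T.astype(float)        # (pixels, bands)
        mean = X.mean(axis=0)
        Xc = X - mean
        _, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        vecs = vecs[:, ::-1]                         # descending variance
        pcs = Xc @ vecs
        pcs[:, 0] = histogram_match(pan, pcs[:, 0])  # PC1 substitution
        return (pcs @ vecs.T + mean).T.reshape(b, h, w)
    ```

    Because the matched pan band takes on exactly PC1's value distribution, the inverse transform preserves each band's mean radiance while injecting the pan band's spatial detail.
    
    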

  8. Fusion in computer vision understanding complex visual content

    CERN Document Server

    Ionescu, Bogdan; Piatrik, Tomas

    2014-01-01

    This book presents a thorough overview of fusion in computer vision, from an interdisciplinary and multi-application viewpoint, describing successful approaches, evaluated in the context of international benchmarks that model realistic use cases. Features: examines late fusion approaches for concept recognition in images and videos; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections, fo

  9. Risk factor analysis for predicting vertebral body re-collapse after posterior instrumented fusion in thoracolumbar burst fracture.

    Science.gov (United States)

    Jang, Hae-Dong; Bang, Chungwon; Lee, Jae Chul; Soh, Jae-Wan; Choi, Sung-Woo; Cho, Hyeung-Kyu; Shin, Byung-Joon

    2018-02-01

    In the posterior instrumented fusion surgery for thoracolumbar (T-L) burst fracture, early postoperative re-collapse of well-reduced vertebral body fracture could induce critical complications such as correction loss, posttraumatic kyphosis, and metal failure, often leading to revision surgery. Furthermore, re-collapse is quite difficult to predict because of the variety of risk factors, and no widely accepted accurate prediction systems exist. Although load-sharing classification has been known to help to decide the need for additional anterior column support, this radiographic scoring system has several critical limitations. (1) To evaluate risk factors and predictors for postoperative re-collapse in T-L burst fractures. (2) Through the decision-making model, we aimed to predict re-collapse and prevent unnecessary additional anterior spinal surgery. Retrospective comparative study. Two-hundred and eight (104 men and 104 women) consecutive patients with T-L burst fracture who underwent posterior instrumented fusion were reviewed retrospectively. Burst fractures caused by high-energy trauma (fall from a height and motor vehicle accident) with a minimum 1-year follow-up were included. The average age at the time of surgery was 45.9 years (range, 15-79). With respect to the involved spinal level, 95 cases (45.6%) involved L1, 51 involved T12, 54 involved L2, and 8 involved T11. Mean fixation segments were 3.5 (range, 2-5). Pedicle screw instrumentation including fractured vertebra had been performed in 129 patients (62.3%). Clinical data using self-report measures (visual analog scale score), radiographic measurements (plain radiograph, computed tomography, and magnetic resonance image), and functional measures using the Oswestry Disability Index were evaluated. Body height loss of fractured vertebra, body wedge angle, and Cobb angle were measured in serial plain radiographs. 
We assigned patients to the re-collapse group if their body height loss progressed greater

  10. Power-balance analysis of muon-catalyzed fusion-fission hybrid reactor systems

    International Nuclear Information System (INIS)

    Miller, R.L.; Krakowski, R.A.

    1985-01-01

    A power-balance model of a muon-catalyzed fusion system in the context of a fission-fuel factory is developed and exercised to predict the required physics performance of systems competitive with either pure muon-catalyzed fusion systems or thermonuclear fusion-fission fuel factory hybrid systems
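    The power-balance logic in this record can be illustrated with a back-of-the-envelope energy-gain calculation. All numbers below (17.6 MeV per D-T fusion, a ~5 GeV muon production cost, ~150 catalysis cycles per muon, a ~10x fission-blanket energy multiplication) are generic, textbook-scale assumptions for illustration, not values from the cited study:

```python
# Back-of-the-envelope muon-catalyzed fusion (muCF) energy balance.
# All numbers are rough illustrative assumptions, not values from the study.
E_FUSION_MEV = 17.6     # energy released per D-T fusion
E_MUON_MEV = 5000.0     # assumed energy cost to produce one muon

def energy_gain(catalysis_cycles, blanket_multiplication=1.0):
    """Energy released per muon divided by the muon production cost;
    blanket_multiplication > 1 models a fission blanket multiplying
    the energy carried by fusion neutrons (the hybrid case)."""
    return catalysis_cycles * E_FUSION_MEV * blanket_multiplication / E_MUON_MEV

print(energy_gain(150))        # pure muCF: ~0.53, below breakeven
print(energy_gain(150, 10.0))  # with a ~10x fission blanket: ~5.3
```

Under these assumptions a pure muCF system stays below breakeven, which is exactly why the fission-fuel-factory context of the record changes the required physics performance.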

  11. Imaging Characteristics in ALK Fusion-Positive Lung Adenocarcinomas by Using HRCT

    Science.gov (United States)

    Okumura, Sakae; Kuroda, Hiroaki; Uehara, Hirofumi; Mun, Mingyon; Takeuchi, Kengo; Nakagawa, Ken

    2014-01-01

    Objectives: We aimed to identify high-resolution computed tomography (HRCT) features useful to distinguish the anaplastic lymphoma kinase gene (ALK) fusion-positive and negative lung adenocarcinomas. Methods: We included 236 surgically resected adenocarcinoma lesions, which included 27 consecutive ALK fusion-positive (AP) lesions, 115 epidermal growth factor receptor mutation-positive lesions, and 94 double-negative lesions. HRCT parameters including size, air bronchograms, pleural indentation, spiculation, and tumor disappearance rate (TDR) were compared. In addition, prevalence of small lesions (≤20 mm) and solid lesions (TDR ≤20%) were compared. Results: AP lesions were significantly smaller and had lower TDR (%) than ALK fusion-negative (AN) lesions (tumor diameter: 20.7 mm ± 14.1 mm vs. 27.4 mm ± 13.8 mm, respectively, p 20 mm (n = 7, 25.9%) showed a solid pattern. Among all small lesions, AP lesions had lower TDR and more frequent spiculation than AN lesions (p 20 mm lesions may be ALK fusion-negative. PMID:24899136
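    The tumor disappearance rate (TDR) used in this record is commonly computed by comparing the tumor's cross-sectional area in the lung window with that in the mediastinal window of the CT. A minimal sketch of that common definition follows (the study's exact measurement protocol may differ):

```python
def tumor_disappearance_rate(lung_window_area, mediastinal_window_area):
    """TDR (%): fraction of the lung-window tumor area that 'disappears'
    in the mediastinal window, expressed as a percentage."""
    return (1.0 - mediastinal_window_area / lung_window_area) * 100.0

def is_solid(tdr_percent, cutoff=20.0):
    """Solid-lesion criterion used in the abstract: TDR <= 20%."""
    return tdr_percent <= cutoff

tdr = tumor_disappearance_rate(lung_window_area=400.0,
                               mediastinal_window_area=350.0)
print(round(tdr, 1))   # 12.5
print(is_solid(tdr))   # True
```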

  12. Wind power application research on the fusion of the determination and ensemble prediction

    Science.gov (United States)

    Lan, Shi; Lina, Xu; Yuzhu, Hao

    2017-07-01

    A fused wind-speed product for the wind farm is designed using ensemble-prediction wind-speed products from the European Centre for Medium-Range Weather Forecasts (ECMWF) together with dedicated numerical wind-power model products based on Mesoscale Model 5 (MM5) and the Beijing Rapid Update Cycle (BJ-RUC), which are suitable for short-term wind power forecasting and electric dispatch. A single-valued forecast is formed by calculating ensemble statistics of the Bayesian probabilistic forecast, which represents the uncertainty of the ECMWF ensemble prediction. An autoregressive integrated moving average (ARIMA) model is used to improve the time resolution of the single-valued forecast, and, based on Bayesian model averaging (BMA) and the deterministic numerical model prediction, an optimal wind-speed forecasting curve and confidence interval are provided. The results show that the fused forecast clearly improves accuracy relative to the existing numerical forecasting products. Compared with the 0-24 h existing deterministic forecast in the validation period, the mean absolute error (MAE) is decreased by 24.3 % and the correlation coefficient (R) is increased by 12.5 %. In comparison with the ECMWF ensemble forecast, the MAE is reduced by 11.7 % and R is increased by 14.5 %. Additionally, the MAE did not grow with forecast lead time.
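    As a rough illustration of skill-weighted ensemble fusion: the sketch below weights ensemble members by their inverse historical MAE and forms a single-valued forecast with a naive interval. The actual BMA in this record fits a full predictive mixture (typically via EM), so this is only a simplified stand-in for the idea:

```python
import numpy as np

def member_weights(forecasts_hist, obs_hist):
    """Weight each ensemble member by its inverse historical MAE."""
    mae = np.mean(np.abs(forecasts_hist - obs_hist[None, :]), axis=1)
    w = 1.0 / np.maximum(mae, 1e-9)
    return w / w.sum()

def fused_forecast(members_now, weights):
    """Skill-weighted single-valued forecast plus a naive ~95% interval."""
    mean = float(np.dot(weights, members_now))
    spread = float(np.sqrt(np.dot(weights, (members_now - mean) ** 2)))
    return mean, (mean - 1.96 * spread, mean + 1.96 * spread)

# Toy history: member 0 tracks the observations, member 1 is biased high.
hist = np.array([[5.0, 6.0, 7.0],
                 [7.0, 8.0, 9.0]])
obs = np.array([5.1, 6.0, 6.9])
w = member_weights(hist, obs)           # member 0 gets most of the weight
mean, ci = fused_forecast(np.array([6.0, 8.0]), w)
```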

  13. Prediction of standard-dose brain PET image by using MRI and low-dose brain [{sup 18}F]FDG PET images

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Jiayin [School of Electronics Engineering, Huaihai Institute of Technology, Lianyungang, Jiangsu 222005, China and IDEA Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 (United States); Gao, Yaozong [IDEA Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 and Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 (United States); Shi, Feng [IDEA Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 (United States); Lalush, David S. [Joint UNC-NCSU Department of Biomedical Engineering, North Carolina State University, Raleigh, North Carolina 27695 (United States); Lin, Weili [MRI Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 (United States); Shen, Dinggang, E-mail: dgshen@med.unc.edu [IDEA Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713 (Korea, Republic of)

    2015-09-15

    Purpose: Positron emission tomography (PET) is a nuclear medical imaging technology that produces 3D images reflecting tissue metabolic activity in human body. PET has been widely used in various clinical applications, such as in diagnosis of brain disorders. High-quality PET images play an essential role in diagnosing brain diseases/disorders. In practice, in order to obtain high-quality PET images, a standard-dose radionuclide (tracer) needs to be used and injected into a living body. As a result, it will inevitably increase the patient’s exposure to radiation. One solution to solve this problem is predicting standard-dose PET images using low-dose PET images. As yet, no previous studies with this approach have been reported. Accordingly, in this paper, the authors propose a regression forest based framework for predicting a standard-dose brain [{sup 18}F]FDG PET image by using a low-dose brain [{sup 18}F]FDG PET image and its corresponding magnetic resonance imaging (MRI) image. Methods: The authors employ a regression forest for predicting the standard-dose brain [{sup 18}F]FDG PET image by low-dose brain [{sup 18}F]FDG PET and MRI images. Specifically, the proposed method consists of two main steps. First, based on the segmented brain tissues (i.e., cerebrospinal fluid, gray matter, and white matter) in the MRI image, the authors extract features for each patch in the brain image from both low-dose PET and MRI images to build tissue-specific models that can be used to initially predict standard-dose brain [{sup 18}F]FDG PET images. Second, an iterative refinement strategy, via estimating the predicted image difference, is used to further improve the prediction accuracy. Results: The authors evaluated their algorithm on a brain dataset, consisting of 11 subjects with MRI, low-dose PET, and standard-dose PET images, using leave-one-out cross-validations. The proposed algorithm gives promising results with well-estimated standard-dose brain [{sup 18}F]FDG PET
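    The patch-based regression-forest step described above can be sketched on synthetic data: each sample concatenates a flattened low-dose PET patch with the corresponding MRI patch, and the target is the standard-dose value at the patch centre. The tissue-specific modelling and iterative refinement described in the abstract are omitted from this sketch:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Each training sample: a flattened low-dose PET patch concatenated with
# the co-registered MRI patch; target: the "standard-dose" PET value at
# the patch centre (purely synthetic here).
n, patch = 500, 27                       # e.g. 3x3x3 patches, flattened
low_pet = rng.normal(size=(n, patch))
mri = rng.normal(size=(n, patch))
X = np.hstack([low_pet, mri])
y = 2.0 * low_pet[:, patch // 2] + 0.5 * mri[:, patch // 2]  # toy target

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
pred = forest.predict(X[:5])             # voxel-wise predictions
```

Sweeping such a forest over every patch of a low-dose PET/MRI pair yields a predicted standard-dose image, which is the core of the paper's first step.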

  14. Prediction of standard-dose brain PET image by using MRI and low-dose brain [18F]FDG PET images

    International Nuclear Information System (INIS)

    Kang, Jiayin; Gao, Yaozong; Shi, Feng; Lalush, David S.; Lin, Weili; Shen, Dinggang

    2015-01-01

    Purpose: Positron emission tomography (PET) is a nuclear medical imaging technology that produces 3D images reflecting tissue metabolic activity in human body. PET has been widely used in various clinical applications, such as in diagnosis of brain disorders. High-quality PET images play an essential role in diagnosing brain diseases/disorders. In practice, in order to obtain high-quality PET images, a standard-dose radionuclide (tracer) needs to be used and injected into a living body. As a result, it will inevitably increase the patient’s exposure to radiation. One solution to solve this problem is predicting standard-dose PET images using low-dose PET images. As yet, no previous studies with this approach have been reported. Accordingly, in this paper, the authors propose a regression forest based framework for predicting a standard-dose brain [18F]FDG PET image by using a low-dose brain [18F]FDG PET image and its corresponding magnetic resonance imaging (MRI) image. Methods: The authors employ a regression forest for predicting the standard-dose brain [18F]FDG PET image by low-dose brain [18F]FDG PET and MRI images. Specifically, the proposed method consists of two main steps. First, based on the segmented brain tissues (i.e., cerebrospinal fluid, gray matter, and white matter) in the MRI image, the authors extract features for each patch in the brain image from both low-dose PET and MRI images to build tissue-specific models that can be used to initially predict standard-dose brain [18F]FDG PET images. Second, an iterative refinement strategy, via estimating the predicted image difference, is used to further improve the prediction accuracy. Results: The authors evaluated their algorithm on a brain dataset, consisting of 11 subjects with MRI, low-dose PET, and standard-dose PET images, using leave-one-out cross-validations. The proposed algorithm gives promising results with well-estimated standard-dose brain [18F]FDG PET image and substantially

  15. Fusion of Selected Cells and Vesicles Mediated by Optically Trapped Plasmonic Nanoparticles

    DEFF Research Database (Denmark)

    Bahadori, Azra

    . In this work, we introduce a novel and extremely flexible physical method which can trigger membrane fusion in a highly selective manner, not only between synthetic GUVs of different compositions, but also between live cells which remain viable after fusion. An optical tweezers laser (1064 nm) is used to position... The concept of cellular delivery is also known as targeted drug delivery and is a very active research topic internationally. Therefore, there have been efforts to develop various chemical molecules, proteins/peptides and physical approaches to trigger membrane fusion between synthetic giant unilamellar... merging of the two membranes completes the fusion. Complete fusion is associated with lipid mixing and lumen mixing, which are both imaged by a high-resolution confocal microscope. The confocal imaging enables quantification of the associated lipid mixing...

  16. Feature level fusion of hand and face biometrics

    Science.gov (United States)

    Ross, Arun A.; Govindarajan, Rohin

    2005-03-01

    Multibiometric systems utilize the evidence presented by multiple biometric sources (e.g., face and fingerprint, multiple fingers of a user, multiple matchers, etc.) in order to determine or verify the identity of an individual. Information from multiple sources can be consolidated at several distinct levels, including the feature extraction level, match score level and decision level. While fusion at the match score and decision levels has been extensively studied in the literature, fusion at the feature level is a relatively understudied problem. In this paper we discuss fusion at the feature level in three different scenarios: (i) fusion of PCA and LDA coefficients of face; (ii) fusion of LDA coefficients corresponding to the R, G, B channels of a face image; (iii) fusion of face and hand modalities. Preliminary results are encouraging and help in highlighting the pros and cons of performing fusion at this level. The primary motivation of this work is to demonstrate the viability of such a fusion and to underscore the importance of pursuing further research in this direction.
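    A common recipe for fusion at the feature level, as in scenario (iii), is to normalise each modality's feature vector separately and then concatenate them. The sketch below shows only that recipe; the paper additionally applies feature transformation and selection, omitted here:

```python
import numpy as np

def minmax(v):
    """Min-max normalise one modality's feature vector to [0, 1]."""
    v = np.asarray(v, dtype=float)
    span = v.max() - v.min()
    return (v - v.min()) / span if span > 0 else np.zeros_like(v)

def fuse(face_features, hand_features):
    """Feature-level fusion: normalise per modality, then concatenate."""
    return np.concatenate([minmax(face_features), minmax(hand_features)])

fused = fuse([0.2, 0.8, 0.5], [10.0, 30.0, 20.0, 40.0])
print(fused.shape)   # (7,)
```

Per-modality normalisation matters because the raw feature scales of the two modalities would otherwise dominate the concatenated vector.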

  17. Prediction of density limits in tokamaks: Theory, comparison with experiment, and application to the proposed Fusion Ignition Research Experiment

    International Nuclear Information System (INIS)

    Stacey, Weston M.

    2002-01-01

    A framework for the predictive calculation of density limits in future tokamaks is proposed. Theoretical models for different density limit phenomena are summarized, and the requirements for additional models are identified. These theoretical density limit models have been incorporated into a relatively simple, but phenomenologically comprehensive, integrated numerical calculation of the core, edge, and divertor plasmas and of the recycling neutrals, in order to obtain plasma parameters needed for the evaluation of the theoretical models. A comparison of these theoretical predictions with observed density limits in current experiments is summarized. A model for the calculation of edge pedestal parameters, which is needed in order to apply the density limit predictions to future tokamaks, is summarized. An application to predict the proximity to density limits and the edge pedestal parameters of the proposed Fusion Ignition Research Experiment is described
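    The standard empirical benchmark that theory-based density-limit models, like those in this record, are compared against is the Greenwald limit. A minimal sketch of that scaling follows; the FIRE-like parameters in the example are assumed here for illustration, not taken from the paper:

```python
import math

def greenwald_density_limit(plasma_current_MA, minor_radius_m):
    """Empirical Greenwald density limit n_GW = I_p / (pi * a^2),
    in units of 10^20 m^-3 (I_p in MA, a in m)."""
    return plasma_current_MA / (math.pi * minor_radius_m ** 2)

# FIRE-like illustrative parameters (assumed, not from the paper):
print(round(greenwald_density_limit(7.7, 0.595), 2))   # in 10^20 m^-3
```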

  18. The value of magnetic resonance imaging and ultrasonography (MRI/US)-fusion biopsy platforms in prostate cancer detection: a systematic review.

    Science.gov (United States)

    Gayet, Maudy; van der Aa, Anouk; Beerlage, Harrie P; Schrier, Bart Ph; Mulders, Peter F A; Wijkstra, Hessel

    2016-03-01

    Despite limitations in assessing the presence, staging and aggressiveness of prostate cancer, ultrasonography (US)-guided systematic biopsies (SBs) are still the 'gold standard' for the diagnosis of prostate cancer. Recently, promising results have been published for targeted prostate biopsies (TBs) using magnetic resonance imaging (MRI) and ultrasonography (MRI/US)-fusion platforms. Different platforms are registered with the USA Food and Drug Administration and have strengths and weaknesses that are mostly subjectively assessed. To our knowledge, no systematic review exists that objectively compares prostate cancer detection rates between the different platforms available. To assess the value of the different MRI/US-fusion platforms in prostate cancer detection, we compared platform-guided TB with SB, and other ways of MRI TB (cognitive fusion or in-bore MR fusion). We performed a systematic review of well-designed prospective randomised and non-randomised trials in the English language published between 1 January 2004 and 17 February 2015, using PubMed, Embase and Cochrane Library databases. Search terms included: 'prostate cancer', 'MR/ultrasound(US) fusion' and 'targeted biopsies'. Extraction of articles was performed by two authors (M.G. and A.A.), and the articles were evaluated by the other authors. Randomised and non-randomised prospective clinical trials comparing TB using MRI/US-fusion platforms and SB, or other ways of TB (cognitive fusion or MR in-bore fusion), were included. In all, 11 of 1865 studies met the inclusion criteria, involving seven different fusion platforms and 2626 patients: 1119 biopsy naïve, 1433 with prior negative biopsy, 50 not mentioned (either biopsy naïve or with prior negative biopsy) and 24 on active surveillance (who were disregarded). The Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool was used to assess the quality of included articles. No clear advantage of MRI/US fusion-guided TBs was seen for cancer detection rates (CDRs) of all prostate

  19. Reaction mechanisms in heavy ion fusion

    Directory of Open Access Journals (Sweden)

    Lubian J.

    2011-10-01

    We discuss the reaction mechanisms involved in heavy-ion fusion. We begin with collisions of tightly bound systems, considering three energy regimes: energies above the Coulomb barrier, energies just below the barrier, and deep sub-barrier energies. We show that channel-coupling effects may influence the fusion process at above-barrier energies, increasing or reducing the cross section predicted by the single-barrier penetration model. Below the Coulomb barrier, channel coupling enhances the cross section, and this effect increases with the size of the system. It is argued that this behavior can be traced back to the increasing importance of Coulomb coupling with the charge of the collision partners. The sharp drop of the fusion cross section observed at deep sub-barrier energies is addressed and the theoretical approaches to this phenomenon are discussed. We then consider the reaction mechanisms involved in fusion reactions of weakly bound systems, paying particular attention to the calculations of complete and incomplete fusion available in the literature.
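    The single-barrier penetration baseline that channel couplings enhance or reduce is usually written as Wong's formula. A sketch follows; the barrier parameters in the example (V_b = 50 MeV, hbar*omega = 4 MeV, R_b = 10 fm) are assumed for illustration only:

```python
import math

def wong_cross_section(E, V_b, hbar_omega, R_b):
    """Wong's single-barrier-penetration fusion cross section, in mb.

    E (centre-of-mass energy), V_b (barrier height) and hbar_omega
    (barrier curvature) in MeV; R_b (barrier radius) in fm.
    """
    sigma_fm2 = (hbar_omega * R_b ** 2 / (2.0 * E)) * math.log1p(
        math.exp(2.0 * math.pi * (E - V_b) / hbar_omega))
    return 10.0 * sigma_fm2          # 1 fm^2 = 10 mb

# Above the barrier the formula approaches pi * R_b^2 * (1 - V_b / E);
# below it the cross section falls off exponentially.
above = wong_cross_section(E=60.0, V_b=50.0, hbar_omega=4.0, R_b=10.0)
below = wong_cross_section(E=45.0, V_b=50.0, hbar_omega=4.0, R_b=10.0)
```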

  20. Comparison of MRI-based and CT/MRI fusion-based postimplant dosimetric analysis of prostate brachytherapy

    International Nuclear Information System (INIS)

    Tanaka, Osamu; Hayashi, Shinya; Matsuo, Masayuki; Sakurai, Kota; Nakano, Masahiro; Maeda, Sunaho; Kajita, Kimihiro R.T.; Deguchi, Takashi; Hoshi, Hiroaki

    2006-01-01

    Purpose: The aim of this study was to compare the outcomes of magnetic resonance imaging (MRI)-based and computed tomography (CT)/MRI fusion-based postimplant dosimetry methods in permanent prostate brachytherapy. Methods and Materials: Between October 2004 and March 2006, a total of 52 consecutive patients with prostate cancer were treated by brachytherapy, and postimplant dosimetry was performed using CT/MRI fusion. Accuracy and reproducibility were prospectively compared between MRI-based dosimetry and CT/MRI fusion-based dosimetry using the dose-volume histogram (DVH)-related parameters recommended by the American Brachytherapy Society. Results: The prostate volume was 15.97 ± 6.17 cc (mean ± SD) in MRI-based dosimetry and 15.97 ± 6.02 cc in CT/MRI fusion-based dosimetry, with no statistically significant difference. The prostate V100 was 94.5% and 93.0% in MRI-based and CT/MRI fusion-based dosimetry, respectively, and the difference was statistically significant (p = 0.002). The prostate D90 was 119.4% and 114.4% in MRI-based and CT/MRI fusion-based dosimetry, respectively, and the difference was statistically significant (p = 0.004). Conclusion: Our results suggest that MR images, like fusion images, allowed accurate contouring of the organs, but tended to overestimate the postimplant dosimetric parameters in comparison to CT/MRI fusion images. Although this MRI-based dosimetric discrepancy was negligible, MRI-based dosimetry was acceptable and reproducible in comparison to CT-based dosimetry, because the difference between MRI-based and CT/MRI fusion-based results was smaller than that between CT-based and CT/MRI fusion-based results, as previously reported
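    The DVH parameters compared in this record (V100, D90) can be computed directly from per-voxel doses inside the contoured prostate. A minimal sketch, assuming equal voxel volumes and doses already normalised to percent of the prescription dose:

```python
import numpy as np

def v100(dose_percent):
    """Percent of the contoured volume receiving >= 100% of the
    prescription dose (dose values given in % of prescription)."""
    dose = np.asarray(dose_percent, dtype=float)
    return 100.0 * np.mean(dose >= 100.0)

def d90(dose_percent):
    """Dose (in % of prescription) covering 90% of the volume,
    i.e. the dose exceeded by 90% of voxels = the 10th percentile."""
    return float(np.percentile(np.asarray(dose_percent, dtype=float), 10.0))

# Toy per-voxel doses inside the contoured prostate, in % of prescription:
dose = np.array([80.0, 95.0, 105.0, 110.0, 120.0,
                 130.0, 140.0, 150.0, 160.0, 170.0])
print(v100(dose))            # 80.0
print(round(d90(dose), 1))   # 93.5
```

Because both parameters depend on the contoured volume, the contouring differences between MRI and CT/MRI fusion described above translate directly into the V100 and D90 discrepancies the study reports.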