Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.
Franchi, G; Angulo, J; Moreaud, M; Sorbier, L
2018-01-01
The goal of this paper is to explore the potential of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging backscattered electron images, which usually have a high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but a lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion with an original approach tailored to multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE. © 2017 The Authors. Journal of Microscopy © 2017 Royal Microscopical Society.
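Classical pansharpening of the kind compared in this record can be sketched with a Brovey-style ratio transform: the high-resolution image (here the backscattered electron image playing the panchromatic role) modulates upsampled low-resolution bands (here the EDX maps). The function and variable names below are illustrative, not the paper's implementation.

```python
import numpy as np

def brovey_pansharpen(pan, ms):
    # Brovey-style fusion: scale each low-resolution band by the ratio of the
    # high-resolution panchromatic image to the mean of the bands (intensity).
    intensity = ms.mean(axis=0) + 1e-12   # (H, W); epsilon avoids division by zero
    return ms * (pan / intensity)         # ratio is broadcast over the C bands

# Synthetic check: if pan already equals the band intensity, fusion is a no-op.
pan = np.full((4, 4), 3.0)
ms = np.stack([np.full((4, 4), 2.0), np.full((4, 4), 4.0)])  # (C=2, H, W)
fused = brovey_pansharpen(pan, ms)
```

Real pipelines first upsample the EDX maps to the BSE grid and often histogram-match `pan` to the intensity before taking the ratio.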
Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain
Directory of Open Access Journals (Sweden)
Yong Yang
2014-01-01
Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled contourlet transform (NSCT)-based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT, followed by fusing the low- and high-frequency components. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently identify the frequency coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with the discrete wavelet transform (DWT), fast discrete curvelet transform (FDCT), and dual-tree complex wavelet transform (DTCWT) based image fusion methods, as well as other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method obtains more effective and accurate fusion results of multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been demonstrated by a clinical example involving images of a woman with a recurrent tumor.
Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying
2016-12-20
The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different imaging modalities can be integrated and comprehensively analyzed by image fusion systems. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduce several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice reports, reviews, and other relevant literature published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected; duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, including accurate localization, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems can provide more imaging information about tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.
Extended feature-fusion guidelines to improve image-based multi-modal biometrics
CSIR Research Space (South Africa)
Brown, Dane
2016-09-01
The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a practical approach for improved image-based biometric feature-fusion. The approach extracts and combines the face, fingerprint...
Multi-Modality Medical Image Fusion Based on Wavelet Analysis and Quality Evaluation
Institute of Scientific and Technical Information of China (English)
[no author listed]
2001-01-01
Multi-modality medical image fusion has increasingly important applications in medical image analysis and understanding. In this paper, we develop and apply a multi-resolution method based on a wavelet pyramid to fuse medical images from different modalities such as PET-MRI and CT-MRI. In particular, we evaluate the different fusion results obtained when applying different selection rules and obtain an optimum combination of fusion parameters.
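Wavelet-pyramid fusion with typical selection rules (averaging for the approximation band, maximum absolute value for the detail bands) can be sketched with a one-level Haar transform in plain NumPy; multi-resolution methods like the one above iterate this over several levels. All function names are illustrative.

```python
import numpy as np

def haar2(x):
    # One-level 2D Haar transform (even-sized input): rows first, then columns.
    a = (x[0::2] + x[1::2]) / 2
    d = (x[0::2] - x[1::2]) / 2
    aa = (a[:, 0::2] + a[:, 1::2]) / 2; ad = (a[:, 0::2] - a[:, 1::2]) / 2
    da = (d[:, 0::2] + d[:, 1::2]) / 2; dd = (d[:, 0::2] - d[:, 1::2]) / 2
    return aa, (ad, da, dd)

def ihaar2(aa, subbands):
    # Exact inverse of haar2: undo the column step, then the row step.
    ad, da, dd = subbands
    a = np.empty((aa.shape[0], aa.shape[1] * 2)); d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = aa + ad, aa - ad
    d[:, 0::2], d[:, 1::2] = da + dd, da - dd
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def wavelet_fuse(x, y):
    # Average rule for the approximation band, max-absolute rule for details.
    ax, dx = haar2(x); ay, dy = haar2(y)
    aa = (ax + ay) / 2
    det = tuple(np.where(np.abs(p) >= np.abs(q), p, q) for p, q in zip(dx, dy))
    return ihaar2(aa, det)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
same = wavelet_fuse(img, img)  # fusing an image with itself reconstructs it
```

Swapping the selection rules (e.g. window-energy maxima instead of per-coefficient maxima) is exactly the kind of parameter combination the paper evaluates.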
Feature-Fusion Guidelines for Image-Based Multi-Modal Biometric Fusion
Directory of Open Access Journals (Sweden)
Dane Brown
2017-07-01
The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a new approach for improved image-based biometric feature-fusion. The approach extracts and combines the face, fingerprint and palmprint at the feature level for improved human identification accuracy. Feature-fusion guidelines, proposed in our recent work, are extended by adding a new face segmentation method and the support vector machine classifier. The new face segmentation method improves the face identification equal error rate (EER) by 10%. The support vector machine classifier combined with the new feature selection approach, proposed in our recent work, outperforms other classifiers when using a single training sample. Feature-fusion guidelines take the form of strengths and weaknesses as observed in the applied feature processing modules during preliminary experiments. The guidelines are used to implement an effective biometric fusion system at the feature level, using a novel feature-fusion methodology, reducing the EER of two groups of three datasets, namely: SDUMLA face, SDUMLA fingerprint and IITD palmprint; MUCT face, MCYT fingerprint and CASIA palmprint.
Multi-Modality Registration And Fusion Of Medical Image Data
International Nuclear Information System (INIS)
Kassak, P.; Vencko, D.; Cerovsky, I.
2008-01-01
Digitalisation of health care facilities allows us to maximize the usage of digital data from one patient obtained by various modalities. A comprehensive view of the problem can be achieved from the perspective of morphology as well as functionality. Multi-modal registration and fusion of medical image data is one example that provides improved insight and allows a more precise approach and treatment. (author)
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed, using a proposed multi-scale joint decomposition framework (MJDF) and shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose the source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are weighted and combined into a detail-enhanced layer. As directional filters are effective in capturing salient information, SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift invariance, directional selectivity, and a detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: The detailed implementation of the proposed medical image fusion algorithm.
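The low-pass/detail decomposition idea behind frameworks like the MJDF can be illustrated with a single-scale, two-layer split: smooth each source, treat the residual as detail, average the smooth layers and keep the stronger detail per pixel. This is a simplified sketch, not the authors' MJDF; a box filter stands in for the Gaussian low-pass stage, and all names are mine.

```python
import numpy as np

def box_blur(x, r=1):
    # (2r+1)x(2r+1) box filter; a stand-in for the Gaussian low-pass stage.
    x = np.asarray(x, dtype=float)
    p = np.pad(x, r, mode="edge")
    acc = np.zeros(x.shape, dtype=float)
    for i in range(2 * r + 1):
        for j in range(2 * r + 1):
            acc += p[i:i + x.shape[0], j:j + x.shape[1]]
    return acc / (2 * r + 1) ** 2

def two_scale_fuse(x, y):
    bx, by = box_blur(x), box_blur(y)
    dx, dy = x - bx, y - by                              # detail layers
    base = (bx + by) / 2                                 # average the smooth layers
    detail = np.where(np.abs(dx) >= np.abs(dy), dx, dy)  # keep the stronger detail
    return base + detail

rng = np.random.default_rng(1)
img = rng.random((6, 6))
```

Because the decomposition is additive (image = base + detail), fusing an image with itself returns the image unchanged, which is a useful sanity check for any such scheme.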
Experimental Study on Bioluminescence Tomography with Multimodality Fusion
Directory of Open Access Journals (Sweden)
Yujie Lv
2007-01-01
To verify the influence of a priori information on the nonuniqueness problem of bioluminescence tomography (BLT), a multimodality-imaging-fusion-based BLT experiment is performed in a multiview noncontact detection mode, which incorporates the anatomical information obtained by a microCT scanner and the background optical properties based on diffuse reflectance measurements. In the reconstruction procedure, the use of adaptive finite element methods (FEMs) and an a priori permissible source region refines the reconstructed results and improves numerical robustness and efficiency. The comparison between the absence and employment of a priori information shows that multimodality imaging fusion is essential to quantitative BLT reconstruction.
[Multimodal medical image registration using cubic spline interpolation method].
He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan
2007-12-01
Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed. The cubic spline interpolation method is applied to interpolate the PET-CT image series; registration is then carried out using a mutual information algorithm; and finally an improved principal component analysis method is used for the fusion of the PET-CT multimodal images to enhance the visual effect of the PET image. Satisfactory registration and fusion results are thus obtained. The cubic spline interpolation method is used for reconstruction to restore the missing information between image slices, which compensates for the shortcomings of previous registration methods, improves the accuracy of the registration, and makes the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing a 3D-CRT (3D conformal radiation therapy) system.
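Cubic-spline interpolation of intermediate slices, as used above, can be sketched with the Catmull-Rom form of the cubic interpolant, one common cubic-spline variant (the paper's exact spline may differ). Each output slice between slices p1 and p2 is a cubic blend of the four neighboring slices.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    # Cubic interpolation between slices p1 and p2 (t in [0, 1]) using the
    # Catmull-Rom spline; p0 and p3 are the neighboring slices.
    return 0.5 * (2 * p1 + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Four synthetic slices with linearly increasing intensity.
slices = [np.full((2, 2), v) for v in (0.0, 1.0, 2.0, 3.0)]
mid = catmull_rom(*slices, 0.5)   # slice halfway between slices[1] and slices[2]
```

The spline passes exactly through p1 at t=0 and p2 at t=1, so existing slices are preserved and only the gaps between them are filled.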
International Nuclear Information System (INIS)
Uchida, Yoshitaka; Nakano, Yoshitada; Fujibuchi, Toshiou; Isobe, Tomoko; Kazama, Toshiki; Ito, Hisao
2006-01-01
We attempted image fusion between whole-body PET and whole-body MRI of thirty patients using fully automatic mutual information (MI)-based multimodality image registration software and evaluated the accuracy of this method and the impact of the coregistered imaging on diagnostic accuracy. For 25 of the 30 fused images in the body area, translational gaps were within 6 mm along all axes and rotational gaps were within 2 degrees around all axes. In the head and neck area, considerably larger gaps, caused by differences in head inclination during imaging, occurred in 16 patients; however, these gaps could be reduced by fusing this area separately. In 6 patients, diagnostic accuracy using fused PET/MRI images was superior to that of PET images alone. This work shows that whole-body FDG PET images and whole-body MRI images can be automatically and accurately fused using MI-based multimodality image registration software, and that this technique can add useful information when evaluating FDG PET images. (author)
Paprottka, P M; Zengel, P; Cyran, C C; Ingrisch, M; Nikolaou, K; Reiser, M F; Clevert, D A
2014-01-01
To evaluate ultrasound tissue elasticity imaging by comparison to multimodality imaging using image fusion with magnetic resonance imaging (MRI) and conventional grey-scale imaging with additional elasticity ultrasound in an experimental small-animal squamous-cell-carcinoma model for the assessment of tissue morphology. Human hypopharynx carcinoma cells were subcutaneously injected into the left flank of 12 female athymic nude rats. After 10 days (SD ± 2) of subcutaneous tumor growth, sonographic grey-scale and elasticity imaging and MRI measurements were performed using a high-end ultrasound system and a 3T MR. For image fusion, the contrast-enhanced MRI DICOM dataset was uploaded to the ultrasound device, which has a magnetic field generator, a linear array transducer (6-15 MHz) and a dedicated software package (GE Logic E9) that can detect transducers by means of a positioning system. Conventional grey-scale and elasticity imaging were integrated in the image fusion examination. After successful registration and image fusion, the registered MR images were shown simultaneously with the respective ultrasound sectional plane. Data evaluation was performed on the digitally stored video sequence datasets by two experienced radiologists using a modified Tsukuba elasticity score. The colors red and green are assigned to areas of soft tissue; blue indicates hard tissue. In all cases, successful image fusion and plane registration with MRI and ultrasound imaging, including grey-scale and elasticity imaging, was possible. The mean tumor volume based on caliper measurements in 3 dimensions was ~323 mm³. 4/12 rats were evaluated with Score I, 5/12 rats with Score II, and 3/12 rats with Score III. There was a close correlation in the fused MRI with existing small necroses in the tumor. None of the Score II or III lesions was visible by conventional grey-scale imaging. The comparison of ultrasound tissue elasticity imaging enables a
Optimal Face-Iris Multimodal Fusion Scheme
Directory of Open Access Journals (Sweden)
Omid Sharifi
2016-06-01
Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score-level, feature-level and decision-level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on the face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, the CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, the Backtracking Search Algorithm (BSA), a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of the schemes by reducing the number of features and selecting optimized weights for feature-level and score-level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of the proposed fusion schemes over unimodal and multimodal fusion methods.
Directory of Open Access Journals (Sweden)
Abdallah Bengueddoudj
2017-05-01
In this paper, we propose a new image fusion algorithm based on the two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT). The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian maximum a posteriori (MAP) approach by considering a trivariate statistical model for the local neighborhood of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on principal component analysis (PCA) is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results prove the superiority of the proposed method over state-of-the-art fusion methods in terms of visual quality and several commonly used metrics. The robustness of the proposed method is further tested against different types of noise. The plots of the fusion metrics establish the accuracy of the proposed fusion method.
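A PCA-based fusion rule for approximation coefficients, as mentioned above, typically weights the two sources by the leading eigenvector of their joint covariance, so the more informative source contributes more. A minimal sketch (names are illustrative, and this is the generic rule, not necessarily the paper's exact variant):

```python
import numpy as np

def pca_fuse(a, b):
    # Weight each source by the leading eigenvector of the 2x2 covariance
    # matrix of the flattened inputs, normalized so the weights sum to 1.
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    w = np.abs(vecs[:, np.argmax(vals)])
    w = w / w.sum()
    return w[0] * a + w[1] * b

rng = np.random.default_rng(2)
img = rng.random((8, 8))
```

With two identical inputs the leading eigenvector is symmetric, so the weights are equal and the fusion is an identity, which is a convenient sanity check.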
Drug-related webpages classification based on multi-modal local decision fusion
Hu, Ruiguang; Su, Xiaojing; Liu, Yanxin
2018-03-01
In this paper, multi-modal local decision fusion is used for drug-related webpage classification. First, meaningful text is extracted through HTML parsing, and effective images are chosen by the FOCARSS algorithm. Second, six SVM classifiers are trained for six kinds of drug-taking instruments, which are represented by PHOG. One SVM classifier is trained for cannabis, which is represented by the mid-level feature of the BOW model. For each instance in a webpage, the seven SVMs give seven labels for its image, and another seven labels are given by searching for the names of the drug-taking instruments and cannabis in its related text. Concatenating the seven image labels and the seven text labels generates the representation of the instances in webpages. Last, multi-instance learning is used to classify the drug-related webpages. Experimental results demonstrate that the classification accuracy of multi-instance learning with multi-modal local decision fusion is much higher than that of single-modal classification.
Multimodal medical information retrieval with unsupervised rank fusion.
Mourão, André; Martins, Flávio; Magalhães, João
2015-01-01
Modern medical information retrieval systems are paramount for managing the vast quantities of clinical data. These systems empower health care experts in the diagnosis of patients and play an important role in the clinical decision process. However, the ever-growing heterogeneous information generated in medical environments poses several challenges for retrieval systems. We propose a medical information retrieval system with support for multimodal medical case-based retrieval. The system supports medical information discovery by providing multimodal search, through a novel data fusion algorithm, and term suggestions from a medical thesaurus. Our search system compared favorably to other systems in 2013 ImageCLEFMedical. Copyright © 2014 Elsevier Ltd. All rights reserved.
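Unsupervised rank fusion for multimodal retrieval can be illustrated with reciprocal rank fusion (RRF), a standard baseline in this area; note the paper proposes its own novel fusion algorithm, which this sketch does not reproduce.

```python
def reciprocal_rank_fusion(rankings, k=60):
    # rankings: list of ranked result lists (best first), one per modality.
    # An item's fused score grows the higher it ranks across modalities;
    # k damps the influence of top ranks (k=60 is the commonly cited default).
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" is ranked highly by both lists, so it wins overall.
fused = reciprocal_rank_fusion([["a", "b", "c"], ["b", "c", "a"]])
```

RRF needs no score normalization across modalities, which is exactly why rank-based fusion is attractive when text and image retrieval scores live on incompatible scales.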
Multimodality Image Fusion and Planning and Dose Delivery for Radiation Therapy
International Nuclear Information System (INIS)
Saw, Cheng B.; Chen Hungcheng; Beatty, Ron E.; Wagner, Henry
2008-01-01
Image-guided radiation therapy (IGRT) relies on the quality of fused images to yield accurate and reproducible patient setup prior to dose delivery. The registration of 2 image datasets can be characterized as hardware-based or software-based image fusion. Hardware-based image fusion is performed by hybrid scanners that combine 2 distinct medical imaging modalities, such as positron emission tomography (PET) and computed tomography (CT), in a single device. In hybrid scanners, the patient maintains the same position during both studies, making the fusion of image datasets simple. However, they cannot perform temporal image registration, where image datasets are acquired at different times. On the other hand, software-based image fusion techniques can merge image datasets taken at different times or with different medical imaging modalities. Software-based image fusion can be performed either manually, using landmarks, or automatically. In the automatic image fusion method, the best fit is evaluated using the mutual information coefficient. Manual image fusion is typically performed at dose planning and for patient setup prior to dose delivery for IGRT. The fusion of orthogonal live radiographic images taken prior to dose delivery to digitally reconstructed radiographs will be presented. Although manual image fusion has been routinely used, the use of fiducial markers has shortened the fusion time. Automated image fusion should be possible for IGRT because the image datasets are derived from basically the same imaging modality, further shortening the fusion time. The advantages and limitations of both hardware-based and software-based image fusion methodologies are discussed.
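The mutual information coefficient used above to evaluate the best fit in automatic fusion can be estimated from the joint intensity histogram of the two images: well-registered images have a peaked joint histogram and hence high mutual information. A compact sketch (function name and bin count are illustrative):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    # MI estimated from the joint intensity histogram of two images.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0  # skip empty cells (0 * log 0 = 0 by convention)
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

rng = np.random.default_rng(3)
img = rng.random((32, 32))
mi_self = mutual_information(img, img)                     # perfectly registered
mi_cross = mutual_information(img, rng.random((32, 32)))   # unrelated image
```

A registration optimizer varies the transform parameters and keeps the pose that maximizes this quantity.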
Directory of Open Access Journals (Sweden)
Kavitha SRINIVASAN
2014-09-01
Background: In the review of medical imaging techniques, an important fact that emerged is that radiologists and physicians still need high-resolution medical images with complementary information from different modalities to ensure efficient analysis. This requirement should be addressed using fusion techniques, with the fused image being used in image-guided surgery, image-guided radiotherapy and non-invasive diagnosis. Aim: This paper focuses on a dual-channel Pulse Coupled Neural Network (PCNN) algorithm for the fusion of multimodality brain images; the fused image is further analyzed using subjective (human perception) and objective (statistical) measures for quality analysis. Material and Methods: The modalities used in fusion are CT, MRI with subtypes T1/T2/PD/GAD, PET and SPECT, since the information from each modality is complementary to the others. The objective measures selected for evaluation of the fused image were: Information Entropy (IE) for image quality, Mutual Information (MI) for the deviation of the fused image from the source images, and Signal-to-Noise Ratio (SNR) for the noise level. Eight sets of brain images with different modalities (T2 with T1, T2 with CT, PD with T2, PD with GAD, T2 with GAD, T2 with SPECT-Tc, T2 with SPECT-Ti, T2 with PET) are chosen for experimental purposes, and the proposed technique is compared with existing fusion methods such as the average method, the contrast pyramid, the Shift-Invariant Discrete Wavelet Transform (SIDWT) with Haar and the morphological pyramid, using the selected measures to ascertain relative performance. Results: The IE and SNR values of the fused image derived from the dual-channel PCNN are higher than those of the other fusion methods, showing that the quality is better with less noise. Conclusion: The fused image resulting from the proposed method retains the contrast, shape and texture of the source images without false information or information loss.
FWFusion: Fuzzy Whale Fusion model for MRI multimodal image ...
Indian Academy of Sciences (India)
Hanmant Venketrao Patil
2018-03-14
Mar 14, 2018 ... consider multi-modality medical images other than PET and MRI images ... principal component averaging based on DWT for fusing CT-MRI and MRI ... sub-band LH of the fused image, the distance measure is given based on the ...
Face-iris multimodal biometric scheme based on feature level fusion
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei
2015-11-01
Unlike score-level fusion, feature-level fusion demands that all the features extracted from unimodal traits have high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score-level fusion, whereas few investigate feature-level fusion. We propose a face-iris recognition method based on feature-level fusion. We build a special two-dimensional Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensions and higher distinguishability. Finally, through a fusion-recognition strategy based on principal component analysis and support vector machines (FRSPS), feature-level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
Multimodal Biometric System- Fusion Of Face And Fingerprint Biometrics At Match Score Fusion Level
Directory of Open Access Journals (Sweden)
Grace Wangari Mwaura
2017-04-01
Biometrics has developed into one of the most relevant technologies used in information technology (IT) security. Unimodal biometric systems have a variety of problems that decrease their performance and accuracy. One way to overcome the limitations of unimodal biometric systems is through fusion to form a multimodal biometric system. Generally, biometric fusion is defined as the use of multiple types of biometric data, or multiple ways of processing the data, to improve the performance of biometric systems. This paper proposes a model for the fusion of face and fingerprint biometrics at the match score fusion level. The face and fingerprint unimodal systems in the proposed model are built using the scale-invariant feature transform (SIFT) algorithm and the Hamming distance to measure the distance between key points. To evaluate the performance of the multimodal system, its FAR and FRR are compared with those of the individual unimodal systems. It has been established that the multimodal system has a higher accuracy of 92.5%, compared to 90% for the face unimodal system and 82.5% for the fingerprint unimodal system.
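Match-score-level fusion of the kind proposed above usually normalizes each modality's scores to a common range and combines them with a weighted sum. A hedged sketch: the min-max normalization and equal weighting here are illustrative defaults, not the paper's exact scheme.

```python
import numpy as np

def fuse_scores(face_scores, finger_scores, w=0.5):
    # Min-max normalize each modality's match scores to [0, 1], then take a
    # weighted sum; w balances the face modality against the fingerprint one.
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return w * minmax(face_scores) + (1 - w) * minmax(finger_scores)

# Candidate 1 scores higher in both modalities, so it stays on top after fusion.
fused = fuse_scores([10.0, 90.0], [20.0, 80.0])
```

The FAR/FRR trade-off of the fused system is then obtained by sweeping a single decision threshold over these fused scores.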
International Nuclear Information System (INIS)
Beier, J.
2001-01-01
This book deals with substantial subjects of postprocessing and analysis of radiological image data; a particular emphasis was put on pulmonary themes. For a multitude of purposes, the developed methods and procedures can be directly transferred to other, non-pulmonary applications. The work presented here is structured in 14 chapters, each describing a selected complex of research. The chapter order reflects the sequence of the processing steps, starting from artefact reduction, segmentation, visualization, analysis, therapy planning and image fusion up to multimedia archiving. In particular, this includes virtual endoscopy with three different scene viewers (Chap. 6), visualizations of the lung disease bronchiectasis (Chap. 7), surface structure analysis of pulmonary tumors (Chap. 8), quantification of contrast medium dynamics from temporal 2D and 3D image sequences (Chap. 9), as well as multimodality image fusion of arbitrary tomographical data using several visualization techniques (Chap. 12). Thus, the software systems presented cover the majority of image processing applications necessary in radiology and were entirely developed, implemented and validated in the clinical routine of a university medical school. (orig.)
Reliability-Based Decision Fusion in Multimodal Biometric Verification Systems
Directory of Open Access Journals (Sweden)
Kryszczuk Krzysztof
2007-01-01
We present a methodology for reliability estimation in the multimodal biometric verification scenario. Reliability estimation has been shown to be an efficient and accurate way of predicting and correcting erroneous classification decisions in both unimodal (speech, face, online signature) and multimodal (speech and face) systems. While the initial research results indicate the high potential of the proposed methodology, the performance of reliability estimation in a multimodal setting has not been sufficiently studied or evaluated. In this paper, we demonstrate the advantages of using unimodal reliability information to perform an efficient biometric fusion of two modalities. We further show the presented method to be superior to state-of-the-art multimodal decision-level fusion schemes. The experimental evaluation presented in this paper is based on the popular benchmarking bimodal BANCA database.
Multimodality imaging techniques.
Martí-Bonmatí, Luis; Sopena, Ramón; Bartumeus, Paula; Sopena, Pablo
2010-01-01
In multimodality imaging, the need to combine morphofunctional information can be approached either by acquiring images at different times (asynchronously) and fusing them through digital image manipulation techniques, or by simultaneously acquiring images (synchronously) and merging them automatically. The asynchronous post-processing solution presents various constraints, mainly conditioned by the different positioning of the patient in the two scans acquired at different times on separate machines. The best solution to achieve consistency in time and space is obtained by synchronous image acquisition. There are many multimodal technologies in molecular imaging. In this review we will focus on the multimodality imaging techniques more commonly used in the field of diagnostic imaging (SPECT-CT, PET-CT) and on new developments (such as PET-MR). Technological innovations and the development of new tracers and smart probes are the key points that will condition the future of multimodality imaging and of diagnostic imaging professionals. Although SPECT-CT and PET-CT are standard in most clinical scenarios, MR imaging has some advantages, providing excellent soft-tissue contrast and multidimensional functional, structural and morphological information. The next frontier is to develop efficient detectors and electronics systems capable of detecting two modality signals at the same time. Not only PET-MR but also MR-US or optic-PET will be introduced in clinical scenarios. Even more, MR diffusion-weighted, pharmacokinetic imaging, spectroscopy or functional BOLD imaging will merge with PET tracers to further establish molecular imaging as a relevant medical discipline. Multimodality imaging techniques will play a leading role in relevant clinical applications. The development of new diagnostic imaging research areas, mainly in the fields of oncology, cardiology and neuropsychiatry, will impact the way medicine is performed today. Both clinical and experimental multimodality studies, in
Directory of Open Access Journals (Sweden)
Wenkai Zhang
2017-12-01
In recent years, Fully Convolutional Networks (FCN) have led to a great improvement in semantic labeling for various applications, including multi-modal remote sensing data. Although different fusion strategies have been reported for multi-modal data, there is no in-depth study of the reasons for their performance limits. For example, it is unclear why an early fusion of multi-modal data in an FCN does not lead to a satisfying result. In this paper, we investigate the contribution of individual layers inside an FCN and propose an effective fusion strategy for the semantic labeling of color or infrared imagery together with elevation (e.g., digital surface models). The sensitivity and contribution of layers concerning classes and multi-modal data are quantified by the recall and the descent rate of recall in a multi-resolution model. The contribution of the different modalities to the pixel-wise prediction is analyzed, explaining the reason for the poor performance caused by the plain concatenation of different modalities. Finally, based on this analysis, an optimized scheme for the fusion of layers with image and elevation information into a single FCN model is derived. Experiments are performed on the ISPRS Vaihingen 2D Semantic Labeling dataset (infrared and RGB imagery as well as elevation) and the Potsdam dataset (RGB imagery and elevation). Comprehensive evaluations demonstrate the potential of the proposed approach.
Multimodal Registration and Fusion for 3D Thermal Imaging
Directory of Open Access Journals (Sweden)
Moulay A. Akhloufi
2015-01-01
3D vision is an area of computer vision that has attracted a lot of research interest and has been widely studied. In recent years we have witnessed an increasing interest from the industrial community. This interest is driven by recent advances in 3D technologies, which enable high-precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced inspections of manufactured parts and metrology analysis. However, we are not able to detect subsurface defects. This kind of detection is achieved by other techniques, like infrared thermography. In this work, we present a new registration framework for 3D and thermal infrared multimodal fusion. The resulting fused data can be used for advanced 3D inspection in nondestructive testing and evaluation (NDT&E) applications. The fusion permits simultaneous visible surface and subsurface inspections to be conducted in the same process. Experimental tests were conducted with different materials. The obtained results are promising and show how these new techniques can be used efficiently in a combined NDT&E-metrology analysis of manufactured parts, in areas such as aerospace and automotive.
A SCHEME FOR TEMPLATE SECURITY AT FEATURE FUSION LEVEL IN MULTIMODAL BIOMETRIC SYSTEM
Directory of Open Access Journals (Sweden)
Arvind Selwal
2016-09-01
Full Text Available Biometrics is the science of human recognition based on biological, chemical, or behavioural traits. These systems are used in many real-life applications, ranging from biometric-based attendance systems to security at very sophisticated levels. A biometric system deals with raw data captured using a sensor and with feature templates extracted from the raw image. One of the challenges faced by designers of these systems is to secure the template data extracted from the biometric modalities of the user and to protect the raw images. One solution for minimizing spoof attacks on biometric systems by unauthorised users is to use multi-biometric systems. A multi-modal biometric system works by using a fusion technique to merge the feature templates generated from the different modalities of the human. In this work, a new scheme is proposed to secure templates at the feature fusion level. The scheme is based on the union operation of fuzzy relations of the modality templates during the fusion process of a multimodal biometric system. This approach serves the dual purpose of feature fusion and transformation of the templates into a single, secured, non-invertible template. The proposed technique is cancelable and is experimentally tested on a bimodal biometric system comprising fingerprint and hand geometry. The developed scheme removes the problem of an attacker learning the original minutia positions in the fingerprint and the various measurements of the hand geometry. The given scheme improves the performance of the system, with a reduction in the false accept rate and an improvement in the genuine accept rate.
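The union-of-fuzzy-relations fusion described in the abstract above can be sketched with invented membership values (a minimal illustration under stated assumptions, not the paper's actual scheme; the template contents and sizes here are hypothetical):

```python
import numpy as np

# Hypothetical feature templates expressed as fuzzy membership values in [0, 1].
fingerprint_template = np.array([0.2, 0.7, 0.4, 0.9])
hand_geometry_template = np.array([0.5, 0.3, 0.8, 0.1])

# Fuzzy union: mu_AuB(x) = max(mu_A(x), mu_B(x)).
fused_template = np.maximum(fingerprint_template, hand_geometry_template)
print(fused_template)  # [0.5 0.7 0.8 0.9]
```

Note that the element-wise maximum discards which modality contributed each value, which is one intuition for why such a fused template is hard to invert back to the originals.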
[A preliminary research on multi-source medical image fusion].
Kang, Yuanyuan; Li, Bin; Tian, Lianfang; Mao, Zongyuan
2009-04-01
Multi-modal medical image fusion has important value in clinical diagnosis and treatment. In this paper, multi-resolution analysis with the Daubechies 9/7 biorthogonal wavelet transform is introduced for anatomical and functional image fusion; then a new fusion algorithm combining local standard deviation and energy as a texture measurement is presented. Finally, a set of quantitative evaluation criteria is given. Experiments show that both anatomical and metabolic information can be obtained effectively, and that both edge and texture features can be preserved successfully. The presented algorithm is more effective than the traditional algorithms.
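The texture-measure selection rule described above can be sketched as follows. This is a hedged illustration: the paper applies the rule to wavelet subbands, whereas this sketch applies it directly to pixel arrays, and the window size, weighting, and images are all invented:

```python
import numpy as np

def local_stat(img, win=3):
    """Per-pixel local standard deviation and energy over a win x win window."""
    pad = win // 2
    p = np.pad(img, pad, mode='reflect')
    # Gather all window shifts into one stack (fine for small images).
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(win) for j in range(win)])
    return stack.std(axis=0), (stack ** 2).mean(axis=0)

def fuse(a, b, alpha=0.5):
    """Pick, per pixel, the source whose combined std/energy measure is larger."""
    sa, ea = local_stat(a)
    sb, eb = local_stat(b)
    measure_a = alpha * sa + (1 - alpha) * ea
    measure_b = alpha * sb + (1 - alpha) * eb
    return np.where(measure_a >= measure_b, a, b)

rng = np.random.default_rng(0)
anat = rng.random((16, 16))   # stand-in for an anatomical slice
func = rng.random((16, 16))   # stand-in for a functional slice
fused = fuse(anat, func)
```

Each fused pixel is taken verbatim from whichever source shows more local texture at that location.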
CT, MRI and PET image fusion using the ProSoma 3D simulation software
International Nuclear Information System (INIS)
Dalah, E.; Bradley, D.A.; Nisbet, A.; Reise, S.
2008-01-01
Full text: Multi-modality imaging is involved in almost all oncology applications, focusing on the extent of disease and target volume delineation. Commercial image fusion software packages are becoming available but require comprehensive evaluation to ensure the reliability of the fusion and of the underpinning registration algorithm, particularly for radiotherapy. The present work seeks to assess such accuracy for a number of registration methods provided by the commercial package ProSoma. A NEMA body phantom was used in evaluating CT, MR and PET images. In addition, discussion is provided concerning the choice and geometry of fiducial markers in phantom studies and the effect of window level on target size, in particular with regard to the application of multi-modality imaging in treatment planning. In general, the accuracy of fusion of multi-modality images was within 0.5-1.5 mm of actual feature diameters and < 2 ml of actual volumes, particularly in CT images. (author)
A data fusion environment for multimodal and multi-informational neuronavigation.
Jannin, P; Fleig, O J; Seigneuret, E; Grova, C; Morandi, X; Scarabin, J M
2000-01-01
Part of the planning and performance of neurosurgery consists of determining target areas, areas to be avoided, landmark areas, and trajectories, all of which are components of the surgical script. Nowadays, neurosurgeons have access to multimodal medical imaging to support the definition of the surgical script. The purpose of this paper is to present a software environment developed by the authors that allows full multimodal and multi-informational planning as well as neuronavigation for epilepsy and tumor surgery. We have developed a data fusion environment dedicated to neuronavigation around the Surgical Microscope Neuronavigator system (Carl Zeiss, Oberkochen, Germany). This environment includes registration, segmentation, 3D visualization, and interaction-applied tools. It provides the neuronavigation system with the multimodal information involved in the definition of the surgical script: lesional areas, sulci, ventricles segmented from magnetic resonance imaging (MRI), vessels segmented from magnetic resonance angiography (MRA), functional areas from magneto-encephalography (MEG), and functional magnetic resonance imaging (fMRI) for somatosensory, motor, or language activation. These data are considered to be relevant for the performance of the surgical procedure. The definition of each entity results from the same procedure: registration to the anatomical MRI data set (defined as the reference data set), segmentation, fused 3D display, selection of the relevant entities for the surgical step, encoding in 3D surface-based representation, and storage of the 3D surfaces in a file recognized by the neuronavigation software (STP 3.4, Leibinger; Freiburg, Germany). Multimodal neuronavigation is illustrated with two clinical cases for which multimodal information was introduced into the neuronavigation system. Lesional areas were used to define and follow the surgical path, sulci and vessels helped identify the anatomical environment of the surgical field, and
Visual tracking for multi-modality computer-assisted image guidance
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets and the placement of imaging probes and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
Inorganic Nanoparticles for Multimodal Molecular Imaging
Directory of Open Access Journals (Sweden)
Magdalena Swierczewska
2011-01-01
Full Text Available Multimodal molecular imaging can offer a synergistic improvement of diagnostic ability over a single imaging modality. Recent development of hybrid imaging systems has profoundly impacted the pool of available multimodal imaging probes. In particular, much interest has been focused on biocompatible, inorganic nanoparticle-based multimodal probes. Inorganic nanoparticles offer exceptional advantages to the field of multimodal imaging owing to their unique characteristics, such as nanometer dimensions, tunable imaging properties, and multifunctionality. Nanoparticles mainly based on iron oxide, quantum dots, gold, and silica have been applied to various imaging modalities to characterize and image specific biologic processes on a molecular level. A combination of nanoparticles and other materials such as biomolecules, polymers, and radiometals continue to increase functionality for in vivo multimodal imaging and therapeutic agents. In this review, we discuss the unique concepts, characteristics, and applications of the various multimodal imaging probes based on inorganic nanoparticles.
Ray, Pritha
2011-04-01
Development and marketing of new drugs require stringent validations that are expensive and time-consuming. Non-invasive multimodality molecular imaging using reporter genes holds great potential to expedite these processes at reduced cost. New generations of smarter molecular imaging strategies, such as split-reporter, bioluminescence resonance energy transfer, and multimodality fusion reporter technologies, will further assist in streamlining and shortening the drug discovery and development process. This review illustrates the importance and potential of molecular imaging using multimodality reporter genes in drug development at preclinical phases.
Mohammadi-Nejad, Ali-Reza; Hossein-Zadeh, Gholam-Ali; Soltanian-Zadeh, Hamid
2017-07-01
Multi-modal data fusion has recently emerged as a comprehensive neuroimaging analysis approach, which usually uses canonical correlation analysis (CCA). However, the current CCA-based fusion approaches face problems like high-dimensionality, multi-collinearity, unimodal feature selection, asymmetry, and loss of spatial information in reshaping the imaging data into vectors. This paper proposes a structured and sparse CCA (ssCCA) technique as a novel CCA method to overcome the above problems. To investigate the performance of the proposed algorithm, we have compared three data fusion techniques: standard CCA, regularized CCA, and ssCCA, and evaluated their ability to detect multi-modal data associations. We have used simulations to compare the performance of these approaches and probe the effects of non-negativity constraint, the dimensionality of features, sample size, and noise power. The results demonstrate that ssCCA outperforms the existing standard and regularized CCA-based fusion approaches. We have also applied the methods to real functional magnetic resonance imaging (fMRI) and structural MRI data of Alzheimer's disease (AD) patients (n = 34) and healthy control (HC) subjects (n = 42) from the ADNI database. The results illustrate that the proposed unsupervised technique differentiates the transition pattern between the subject-course of AD patients and HC subjects with a p-value of less than 1×10⁻⁶. Furthermore, we have depicted the brain mapping of functional areas that are most correlated with the anatomical changes in AD patients relative to HC subjects.
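For readers unfamiliar with the baseline method, standard CCA (not the paper's ssCCA) can be sketched in a few lines: the canonical correlations equal the singular values of Qx^T Qy, where Qx and Qy are orthonormal bases of the centered data blocks. The simulated data below is invented for illustration:

```python
import numpy as np

def cca_first_pair(X, Y):
    """First canonical correlation of two data matrices (rows = subjects).
    Standard CCA via QR orthonormalization + SVD; a baseline sketch only."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    qx, _ = np.linalg.qr(X)   # orthonormal basis of the column space of X
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return s[0]               # top canonical correlation, in [0, 1]

# Two invented "modalities" sharing one latent source plus noise.
rng = np.random.default_rng(1)
z = rng.normal(size=(100, 1))
X = z @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(100, 5))
Y = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(100, 4))
rho = cca_first_pair(X, Y)    # close to 1 because of the shared latent source
```

The paper's contribution adds structured sparsity penalties on top of this formulation to handle high-dimensional, spatially structured imaging data.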
Multimodal fluorescence imaging spectroscopy
Stopel, Martijn H W; Blum, Christian; Subramaniam, Vinod; Engelborghs, Yves; Visser, Anthonie J.W.G.
2014-01-01
Multimodal fluorescence imaging is a versatile method that has a wide application range, from biological studies to materials science. Typical observables in multimodal fluorescence imaging are intensity, lifetime, excitation, and emission spectra, which are recorded at chosen locations on the sample.
Image fusion using MIM software via picture archiving and communication system
International Nuclear Information System (INIS)
Gu Zhaoxiang; Jiang Maosong
2001-01-01
Preliminary studies of multimodality image registration and fusion were performed using an image fusion software package and a picture archiving and communication system (PACS) to explore the methodology. Original image volume data were acquired with a CT scanner, MR, and dual-head coincidence SPECT, respectively. The data sets from all imaging devices were queried, retrieved, transferred and accessed via a DICOM PACS. The image fusion was performed at the SPECT ICON workstation, where the MIM (Medical Image Merge) fusion software was installed. The images were created by re-slicing the original volume on the fly. The image volumes were aligned by translation and rotation of these view ports with respect to the original volume orientation. The transparency factor and contrast were adjusted so that both volumes could be visualized in the merged images. The image volume data of CT, MR and nuclear medicine were transferred, accessed and loaded via PACS successfully. Well-fused images of chest CT/¹⁸F-FDG and brain MR/SPECT were obtained. These results showed that an image fusion technique using PACS is feasible and practical. Further experimentation and larger validation studies are needed to explore the full potential of the clinical use.
Directory of Open Access Journals (Sweden)
Hui Huang
2017-01-01
Full Text Available Considering the pros and cons of the contourlet transform and of multimodality medical imaging, here we propose a novel image fusion algorithm that combines nonlinear approximation of the contourlet transform with image regional features. The most important coefficient bands of the contourlet sparse matrix are retained by nonlinear approximation. Low-frequency and high-frequency regional features are also elaborated to fuse medical images. The results strongly suggest that the proposed algorithm improves the visual effect of medical image fusion as well as image quality, denoising, and enhancement.
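The nonlinear-approximation step mentioned above amounts to keeping only the largest-magnitude fraction of transform coefficients and zeroing the rest. A minimal sketch on a stand-in coefficient array (the subband, keep ratio, and sizes are invented for illustration):

```python
import numpy as np

def nonlinear_approx(coeffs, keep_ratio=0.1):
    """Keep only the largest-magnitude fraction of transform coefficients,
    zeroing the rest (nonlinear approximation)."""
    flat = np.abs(coeffs).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Threshold = magnitude of the k-th largest coefficient.
    thresh = np.partition(flat, flat.size - k)[flat.size - k]
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

rng = np.random.default_rng(2)
c = rng.normal(size=(32, 32))          # stand-in for a contourlet subband
c_approx = nonlinear_approx(c, keep_ratio=0.1)
```

In the actual algorithm this would be applied per subband of the contourlet decomposition before the regional-feature fusion rules.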
A Selective Review of Multimodal Fusion Methods in Schizophrenia
Directory of Open Access Journals (Sweden)
Jing eSui
2012-02-01
Full Text Available Schizophrenia (SZ) is one of the most cryptic and costly mental disorders in terms of human suffering and societal expenditure (van Os and Kapur, 2009). Though strong evidence for functional, structural and genetic abnormalities associated with this disease exists, there is as yet no replicable finding that has proven accurate enough to be useful in clinical decision making (Fornito et al., 2009), and its diagnosis relies primarily upon symptom assessment (Williams et al., 2010a). The lack of consistent neuroimaging findings is likely due in part to the fact that most models favor only one data type or do not combine data from different imaging modalities effectively, thus missing potentially important differences which are only partially detected by each modality (Calhoun et al., 2006a). It is becoming increasingly clear that multi-modal fusion, a technique which takes advantage of the fact that each modality provides a limited view of the brain/gene and may uncover hidden relationships, is an important tool to help unravel the black box of schizophrenia. In this review paper, we survey a number of multimodal fusion applications which enable us to study the schizophrenia macro-connectome, including brain functional, structural and genetic aspects, and which may help us understand the disorder in a more comprehensive and integrated manner. We also provide a table that characterizes these applications by the methods used and compare these methods in detail, especially the multivariate models; this may serve as a valuable reference to help readers select an appropriate method for a given study.
Image Fusion of CT and MR with Sparse Representation in NSST Domain
Directory of Open Access Journals (Sweden)
Chenhui Qiu
2017-01-01
Full Text Available Multimodal image fusion techniques can integrate the information from different medical images to produce an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. Fusing CT images with images of different MR modalities is studied in this paper. First, the CT and MR images are both transformed into the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are then merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach. A dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation.
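The absolute-maximum rule used for the high-frequency components is simple enough to sketch directly; at each position the coefficient with the larger magnitude is kept. The subband values below are invented stand-ins, not real NSST coefficients:

```python
import numpy as np

def fuse_highfreq(h1, h2):
    """Absolute-maximum rule: at each position keep the coefficient whose
    magnitude is larger (applied here to stand-in high-frequency subbands)."""
    return np.where(np.abs(h1) >= np.abs(h2), h1, h2)

ct_band = np.array([[0.9, -0.1], [0.0, 0.4]])    # invented CT subband
mr_band = np.array([[-0.2, 0.8], [-0.5, -0.3]])  # invented MR subband
fused_band = fuse_highfreq(ct_band, mr_band)
# → [[ 0.9  0.8]
#    [-0.5  0.4]]
```

The intuition is that large high-frequency coefficients encode edges and detail, so the stronger response from either modality is preserved.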
ANALYSIS OF MULTIMODAL FUSION TECHNIQUES FOR AUDIO-VISUAL SPEECH RECOGNITION
Directory of Open Access Journals (Sweden)
D.V. Ivanko
2016-05-01
Full Text Available The paper presents an analytical review covering the latest achievements in the field of audio-visual (AV) fusion (integration) of multimodal information. We discuss the main challenges and report on approaches to address them. One of the most important tasks of AV integration is to understand how the modalities interact and influence each other. The paper addresses this problem in the context of AV speech processing and speech recognition. In the first part of the review, we set out the basic principles of AV speech recognition and give a classification of audio and visual speech features. Special attention is paid to the systematization of the existing techniques and AV data fusion methods. In the second part, based on our analysis of the research area, we provide a consolidated list of tasks and applications that use AV fusion. We also indicate the methods, techniques, and audio and video features used. We propose a classification of AV integration and discuss the advantages and disadvantages of different approaches. We draw conclusions and offer our assessment of the future of the field of AV fusion. In further research, we plan to implement a system for audio-visual Russian continuous speech recognition using advanced methods of multimodal fusion.
Multimodal Biometric System- Fusion Of Face And Fingerprint Biometrics At Match Score Fusion Level
Grace Wangari Mwaura; Prof. Waweru Mwangi; Dr. Calvins Otieno
2017-01-01
Biometrics has developed into one of the most relevant technologies used in Information Technology (IT) security. Unimodal biometric systems have a variety of problems which decrease their performance and accuracy. One way to overcome the limitations of unimodal biometric systems is through fusion to form a multimodal biometric system. Generally, biometric fusion is defined as the use of multiple types of biometric data, or multiple ways of processing the data, to improve the performanc...
Recent developments in multimodality fluorescence imaging probes
Directory of Open Access Journals (Sweden)
Jianhong Zhao
2018-05-01
Full Text Available Multimodality optical imaging probes have emerged as powerful tools that improve detection sensitivity and accuracy, important in disease diagnosis and treatment. In this review, we focus on recent developments in integrating optical fluorescence imaging (OFI) probes with other imaging modalities such as X-ray computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), single-photon emission computed tomography (SPECT), and photoacoustic imaging (PAI). The imaging technologies are briefly described in order to introduce the strengths and limitations of each technique and the need for further multimodality optical imaging probe development. The emphasis of this account is placed on how design strategies are currently implemented to afford physicochemically and biologically compatible multimodality optical fluorescence imaging probes. We also present studies that overcame the intrinsic disadvantages of each imaging technique through a multimodality approach with improved detection sensitivity and accuracy. KEY WORDS: Optical imaging, Fluorescence, Multimodality, Near-infrared fluorescence, Nanoprobe, Computed tomography, Magnetic resonance imaging, Positron emission tomography, Single-photon emission computed tomography, Photoacoustic imaging
Quality dependent fusion of intramodal and multimodal biometric experts
Kittler, J.; Poh, N.; Fatukasi, O.; Messer, K.; Kryszczuk, K.; Richiardi, J.; Drygajlo, A.
2007-04-01
We address the problem of score-level fusion of intramodal and multimodal experts in the context of biometric identity verification. We investigate the merits of confidence-based weighting of component experts. In contrast to the conventional approach, where confidence values are derived from scores, we instead use raw measures of biometric data quality to control the influence of each expert on the final fused score. We show that quality-based fusion gives better performance than quality-free fusion. The use of quality-weighted scores as features in the definition of the fusion functions leads to further improvements. We demonstrate that the achievable performance gain is also affected by the choice of fusion architecture. The evaluation of the proposed methodology involves six face verification experts and one speech verification expert. It is carried out on the XM2VTS database.
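The core idea above, weighting each expert's score by a raw quality measure rather than by the score itself, can be sketched as a simple linear combination. This is a hedged illustration: the paper trains its fusion functions, and all numbers below are invented:

```python
import numpy as np

def quality_weighted_fusion(scores, qualities):
    """Fuse expert match scores with weights taken from raw sample-quality
    measures (normalized to sum to one), not from the scores themselves."""
    w = np.asarray(qualities, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, scores))

# Invented numbers: six face experts and one speech expert.
scores = [0.80, 0.75, 0.60, 0.90, 0.70, 0.65, 0.40]
qualities = [0.9, 0.8, 0.3, 1.0, 0.7, 0.5, 0.6]  # e.g. sharpness / SNR proxies
fused_score = quality_weighted_fusion(scores, qualities)
```

An expert operating on a poor-quality sample (low sharpness, low SNR) is thereby down-weighted even if its score happens to be confident.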
Facile Fabrication of Animal-Specific Positioning Molds For Multi-modality Molecular Imaging
International Nuclear Information System (INIS)
Park, Jeong Chan; Oh, Ji Eun; Woo, Seung Tae
2008-01-01
Recently, multi-modal imaging systems have become widely adopted in molecular imaging. We tried to fabricate animal-specific positioning molds for PET/MR fusion imaging using easily available molding clay and rapid foam. The animal-specific positioning molds provide immobilization and reproducible positioning of the small animal. Herein, we have compared fiber-based molding clay with rapid foam for fabricating molds of the experimental animal. A round-bottomed acrylic frame, which fitted into the microPET gantry, was prepared first. The experimental mouse was anesthetized and placed on the mold for positioning. Rapid foam and fiber-based clay were used to fabricate the mold. For both the rapid foam and the clay, the experimental animal needs to be pushed down smoothly into the mold for positioning. However, after the mouse was removed, the fabricated clay needed to be dried completely at 60 °C in an oven overnight for hardening. Four sealed pipette tips containing [¹⁸F]FDG solution were used as fiduciary markers. After injection of [¹⁸F]FDG via the tail vein, microPET scanning was performed. Subsequently, MRI scanning was performed on the same animal. Animal-specific positioning molds were fabricated using rapid foam and fiber-based molding clay for multimodality imaging. Functional and anatomical images were obtained with microPET and MRI, respectively. The fused PET/MR images were obtained using the freely available AMIDE program. Animal-specific molds were successfully prepared using easily available rapid foam, molding clay and disposable pipette tips. Thanks to the animal-specific molds, the fusion images of PET and MR were co-registered with negligible misalignment
He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying
2014-05-01
A multimodal biometric system has been considered a promising technique to overcome the defects of unimodal biometric systems. We have introduced a fusion scheme to gain a better understanding of, and a fusion method for, a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to achieve the proper Gabor basis functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced from the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single scalar score via a trained support vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database (fafb) and CASIA-V3-Interval together with the FVC2004-DB2a dataset. The experimental results demonstrate that, as well as achieving more powerful local Gabor features for the individual modalities and better recognition performance through their fusion, our architecture outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.
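For context on the Gabor filters mentioned above, a 2-D Gabor kernel is a Gaussian envelope modulated by a sinusoid. The sketch below generates a small bank over four orientations; the parameter values are illustrative only, not the PSO-tuned values from the paper:

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=0.5):
    """Real part of a 2-D Gabor filter: Gaussian envelope times a cosine
    carrier of wavelength lam, rotated by angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
           * np.cos(2 * np.pi * xr / lam)

# A tiny filter bank over four orientations (illustrative parameters).
bank = [gabor_kernel(15, t, lam=8.0, sigma=4.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
```

In the paper's pipeline, particle swarm optimization would search over such parameters (wavelength, orientation, bandwidth) per modality instead of fixing them by hand.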
Modeling decision-making in single- and multi-modal medical images
Canosa, R. L.; Baum, K. G.
2009-02-01
This research introduces a mode-specific model of visual saliency that can be used to highlight likely lesion locations and potential errors (false positives and false negatives) in single-mode PET and MRI images and multi-modal fused PET/MRI images. Fused-modality digital images are a relatively recent technological improvement in medical imaging; therefore, a novel component of this research is to characterize the perceptual response to these fused images. Three different fusion techniques were compared to single-mode displays in terms of observer error rates using synthetic human brain images generated from an anthropomorphic phantom. An eye-tracking experiment was performed with naïve (non-radiologist) observers who viewed the single- and multi-modal images. The eye-tracking data allowed the errors to be classified into four categories: false positives, search errors (false negatives never fixated), recognition errors (false negatives fixated less than 350 milliseconds), and decision errors (false negatives fixated greater than 350 milliseconds). A saliency model consisting of a set of differentially weighted low-level feature maps is derived from the known error and ground truth locations extracted from a subset of the test images for each modality. The saliency model shows that lesion and error locations attract visual attention according to low-level image features such as color, luminance, and texture.
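A saliency model of the kind described above, a weighted combination of normalized low-level feature maps, can be sketched as follows. The feature maps and weights here are random stand-ins; in the study the weights are derived from error and ground-truth locations:

```python
import numpy as np

def saliency(feature_maps, weights):
    """Combine low-level feature maps, each normalized to [0, 1], with
    per-feature weights (invented here; the study learns them)."""
    out = np.zeros_like(feature_maps[0])
    for fm, w in zip(feature_maps, weights):
        span = fm.max() - fm.min()
        norm = (fm - fm.min()) / span if span > 0 else fm * 0.0
        out += w * norm
    return out / sum(weights)   # keep the result in [0, 1]

rng = np.random.default_rng(3)
luminance = rng.random((8, 8))  # stand-ins for low-level feature maps
color = rng.random((8, 8))
texture = rng.random((8, 8))
sal = saliency([luminance, color, texture], weights=[0.5, 0.3, 0.2])
```

High values in the resulting map would mark locations predicted to attract visual attention, which the study compares against observed fixations and errors.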
Rababaah, Haroun; Shirkhodaie, Amir
2009-04-01
Rapidly advancing hardware technology, smart sensors, and sensor networks are advancing environment sensing. One major potential application of this technology is Large-Scale Surveillance Systems (LS3), especially for homeland security, battlefield intelligence, facility guarding, and other civilian applications. The efficient and effective deployment of LS3 requires addressing a number of aspects impacting the scalability of such systems. The scalability factors are related to: computation and memory utilization efficiency; communication bandwidth utilization; network topology (e.g., centralized, ad-hoc, hierarchical, or hybrid); network communication protocols and data routing schemes; and local and global data/information fusion schemes for situational awareness. Although many models have been proposed to address one aspect or another of these issues, few have addressed the need for multi-modality, multi-agent data/information fusion with characteristics satisfying the requirements of current and future intelligent sensors and sensor networks. In this paper, we present a novel scalable fusion engine for multi-modality, multi-agent information fusion for LS3. The new fusion engine is based on a concept we call Energy Logic. Experimental results of this work, as compared to a fuzzy logic model, strongly support the validity of the new model and inspire future directions for different levels of fusion and different applications.
Gambhir, Sanjiv [Portola Valley, CA]; Pritha, Ray [Mountain View, CA]
2011-06-07
Novel double and triple fusion reporter gene constructs harboring distinct imageable reporter genes are provided, as well as applications for the use of such double and triple fusion constructs in living cells and in living animals using distinct imaging technologies.
Modality prediction of biomedical literature images using multimodal feature representation
Directory of Open Access Journals (Sweden)
Pelka, Obioma
2016-08-01
Full Text Available This paper presents the modelling approaches performed to automatically predict the modality of images found in biomedical literature. Various state-of-the-art visual features, such as Bag-of-Keypoints computed with dense SIFT descriptors, texture features, and Joint Composite Descriptors, were used for visual image representation. Text representation was obtained by vector quantisation on a Bag-of-Words dictionary generated using attribute importance derived from a χ²-test. By computing the principal components separately for each feature, both dimension reduction and computational load reduction were achieved. Various multiple-feature fusions were adopted to supplement visual image information with corresponding text information. The improvement obtained when using multimodal features vs. visual or text features alone was detected, analysed, and evaluated. Random Forest models with 100 to 500 deep trees grown by resampling, a multi-class linear-kernel SVM with C=0.05, and a late fusion of the two classifiers were used for modality prediction. The Random Forest classifier achieved higher accuracy, and Bag-of-Keypoints computed with dense SIFT descriptors proved to be a better approach than with Lowe's SIFT.
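The late fusion of the two classifiers mentioned above can be sketched generically as a weighted average of their class-probability vectors followed by an argmax. The probability vectors, weight, and class count below are invented for illustration:

```python
import numpy as np

def late_fusion(prob_a, prob_b, w=0.5):
    """Late fusion of two classifiers' class-probability vectors by weighted
    averaging; a generic sketch of combining e.g. an RF and a linear SVM."""
    p = w * np.asarray(prob_a) + (1 - w) * np.asarray(prob_b)
    return int(np.argmax(p))

rf_probs = [0.2, 0.5, 0.3]    # invented Random Forest output over 3 classes
svm_probs = [0.1, 0.3, 0.6]   # invented SVM output over the same classes
pred = late_fusion(rf_probs, svm_probs)  # averaged: [0.15, 0.40, 0.45]
```

Because fusion happens on the output probabilities rather than on the input features, each classifier can be trained on a different representation (visual vs. textual) independently.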
J. Sui (Jing); H. He (Hao); G. Pearlson (Godfrey); T. Adali (Tülay); K.A. Kiehl (Kent); Q. Yu (Qingbao); V.P. Clark; E. Castro (Elena); T.J.H. White (Tonya); B.A. Mueller (Bryon); B.C. Ho (Beng); N.C. Andreasen; V.D. Calhoun (Vince)
2013-01-01
Multimodal fusion is an effective approach to better understand brain diseases. However, most such instances have been limited to pair-wise fusion; because there are often more than two imaging modalities available per subject, there is a need for approaches that can combine multiple
Image fusion in open-architecture quality-oriented nuclear medicine and radiology departments
Energy Technology Data Exchange (ETDEWEB)
Pohjonen, H
1997-12-31
Imaging examinations of patients are among the most widely used diagnostic procedures in hospitals. Multimodal digital imaging is becoming increasingly common in many fields of diagnosis and therapy planning. Patients are frequently examined with magnetic resonance imaging (MRI), X-ray computed tomography (CT) or ultrasound imaging (US) in addition to single photon (SPET) or positron emission tomography (PET). The aim of the study was to provide means for improving the quality of the whole imaging and viewing chain in nuclear medicine and radiology. The specific aims were: (1) to construct and test a model for a quality assurance system in radiology based on ISO standards, (2) to plan a DICOM-based image network for fusion purposes using ATM and Ethernet technologies, (3) to test different segmentation methods in quantitative SPET, (4) to study and implement a registration and visualisation method for multimodal imaging, (5) to apply the developed method to selected clinical brain and abdominal images, and (6) to investigate the accuracy of the registration procedure for brain SPET and MRI (90 refs). The thesis also includes six previous publications by the author.
Image fusion in open-architecture quality-oriented nuclear medicine and radiology departments
International Nuclear Information System (INIS)
Pohjonen, H.
1997-01-01
Imaging examinations of patients belong to the most widely used diagnostic procedures in hospitals. Multimodal digital imaging is becoming increasingly common in many fields of diagnosis and therapy planning. Patients are frequently examined with magnetic resonance imaging (MRI), X-ray computed tomography (CT) or ultrasound imaging (US) in addition to single photon emission tomography (SPET) or positron emission tomography (PET). The aim of the study was to provide means for improving the quality of the whole imaging and viewing chain in nuclear medicine and radiology. The specific aims were: (1) to construct and test a model for a quality assurance system in radiology based on ISO standards, (2) to plan a DICOM-based image network for fusion purposes using ATM and Ethernet technologies, (3) to test different segmentation methods in quantitative SPET, (4) to study and implement a registration and visualisation method for multimodal imaging, (5) to apply the developed method in selected clinical brain and abdominal images, and (6) to investigate the accuracy of the registration procedure for brain SPET and MRI.
Fusion of SPECT/CT images: Usefulness and benefits in degenerative spinal pathology
International Nuclear Information System (INIS)
Ocampo, Monica; Ucros, Gonzalo; Bermudez, Sonia; Morillo, Anibal; Rodriguez, Andres
2005-01-01
The objectives are to compare CT and SPECT bone scintigraphy evaluated independently with SPECT-CT fusion images in patients with known degenerative spinal pathology, and to demonstrate the clinical usefulness of CT and SPECT fusion images. Materials and methods: Thirty-one patients with suspected degenerative spinal disease were evaluated with thin-slice, non-angled helical CT and bone scintigrams with single photon emission computed tomography (SPECT), both with multiplanar reconstructions, within a 24-hour period. After independent evaluation by a nuclear medicine specialist and a radiologist, multimodality image fusion software was used to merge the CT and SPECT studies and a final consensus interpretation of the combined images was obtained. Results: Thirty-two SPECT bone scintigraphy images, helical CT studies and SPECT-CT fusion images were obtained for 31 patients with degenerative spinal disease. The results of the bone scintigraphy and CT scans were in agreement in 17 pairs of studies (53.1%). In these studies image fusion did not provide additional information on the location or extension of the lesions. In 11 of the study pairs (34.4%), the information was not in agreement between the scintigraphy and CT studies: CT images demonstrated several abnormalities whereas the SPECT images showed only one dominant lesion, or the SPECT images did not provide enough information for anatomical localization. In these cases image fusion helped establish the precise localization of the most clinically significant lesion, which matched the lesion with the greatest uptake. In 4 study pairs (12.5%) the CT and SPECT images were not in agreement: CT and SPECT images showed different information (normal scintigraphy, abnormal CT), thus leading to inconclusive fusion images. Conclusion: The use of CT-SPECT fusion images in degenerative spinal disease allows for the integration of anatomic detail with physiologic and functional information. CT-SPECT fusion improves the
Multimodal biometric system using rank-level fusion approach.
Monwar, Md Maruf; Gavrilova, Marina L
2009-08-01
In many real-world applications, unimodal biometric systems often face significant limitations due to sensitivity to noise, intraclass variability, data quality, nonuniversality, and other factors. Attempting to improve the performance of individual matchers in such situations may not prove to be highly effective. Multibiometric systems seek to alleviate some of these problems by providing multiple pieces of evidence of the same identity. These systems help achieve an increase in performance that may not be possible using a single-biometric indicator. This paper presents an effective fusion scheme that combines information presented by multiple domain experts based on the rank-level fusion integration method. The developed multimodal biometric system possesses a number of unique qualities, starting from utilizing principal component analysis and Fisher's linear discriminant methods for individual matchers (face, ear, and signature) identity authentication and utilizing the novel rank-level fusion method in order to consolidate the results obtained from different biometric matchers. The ranks of individual matchers are combined using the highest rank, Borda count, and logistic regression approaches. The results indicate that fusion of individual modalities can improve the overall performance of the biometric system, even in the presence of low quality data. Insights on multibiometric design using rank-level fusion and its performance on a variety of biometric databases are discussed in the concluding section.
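The Borda count rule used above for rank-level fusion simply sums each candidate's ranks across matchers and sorts by the total. A minimal sketch of that rule follows; the matcher names and rankings are illustrative examples, not data from the paper:

```python
def borda_count_fusion(rank_lists):
    """Fuse identity rankings from several biometric matchers.

    rank_lists: list of dicts mapping candidate id -> rank (1 = best match).
    Returns candidate ids sorted by ascending Borda score (sum of ranks),
    so the consensus best match comes first.
    """
    candidates = set()
    for ranks in rank_lists:
        candidates.update(ranks)
    n = max(len(ranks) for ranks in rank_lists)
    scores = {}
    for c in candidates:
        # A matcher that did not rank a candidate contributes a worst-case rank.
        scores[c] = sum(ranks.get(c, n + 1) for ranks in rank_lists)
    return sorted(candidates, key=lambda c: scores[c])

# Hypothetical face, ear, and signature matchers ranking three identities.
face = {"alice": 1, "bob": 2, "carol": 3}
ear = {"alice": 2, "bob": 1, "carol": 3}
sig = {"alice": 1, "bob": 3, "carol": 2}
fused = borda_count_fusion([face, ear, sig])
```

Here "alice" wins with total rank 1+2+1 = 4 even though one matcher preferred "bob", which is the intended consensus behaviour of Borda fusion.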
Compositional-prior-guided image reconstruction algorithm for multi-modality imaging
Fang, Qianqian; Moore, Richard H.; Kopans, Daniel B.; Boas, David A.
2010-01-01
The development of effective multi-modality imaging methods typically requires an efficient information fusion model, particularly when combining structural images with a complementary imaging modality that provides functional information. We propose a composition-based image segmentation method for X-ray digital breast tomosynthesis (DBT) and a structural-prior-guided image reconstruction for a combined DBT and diffuse optical tomography (DOT) breast imaging system. Using the 3D DBT images from 31 clinically measured healthy breasts, we create an empirical relationship between the X-ray intensities for adipose and fibroglandular tissue. We use this relationship to segment another 58 healthy breast DBT images from 29 subjects into compositional maps of different tissue types. For each breast, we build a weighted graph in the compositional space and construct a regularization matrix to incorporate the structural priors into a finite-element-based DOT image reconstruction. Use of the compositional priors enables us to fuse tissue anatomy into optical images with less restriction than when using a binary segmentation. This allows us to recover the image contrast captured by DOT but not by DBT. We show that it is possible to fine-tune the strength of the structural priors by changing a single regularization parameter. By estimating the optical properties for adipose and fibroglandular tissue using the proposed algorithm, we found the results to be comparable or superior to those estimated with expert segmentations, without involving the time-consuming manual selection of regions of interest. PMID:21258460
Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.
1994-09-01
A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms, in particular knowledge-base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a 'goodness of matching' function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to regional centers of gravity, images are ready for fusion and visualization into a single 3D image of higher clarity.
Spinal fusion-hardware construct: Basic concepts and imaging review
Nouh, Mohamed Ragab
2012-01-01
The interpretation of spinal images fixed with metallic hardware forms an increasing bulk of daily practice in a busy imaging department. Radiologists are required to be familiar with the instrumentation and operative options used in spinal fixation and fusion procedures, especially those used in their own institution. This is critical in evaluating the position of implants and potential complications associated with the operative approaches and spinal fixation devices used. Thus, the radiologist can play an important role in patient care and outcome. This review outlines the advantages and disadvantages of commonly used imaging methods, reports on the best yield for each modality, and discusses how to overcome the problems associated with the presence of metallic hardware during imaging. Baseline radiographs are essential as they provide the reference point for evaluation of future studies should patients develop symptoms suggesting possible complications. They may justify further imaging workup with computed tomography, magnetic resonance and/or nuclear medicine studies, as the evaluation of a patient with a spinal implant involves a multi-modality approach. This review describes imaging features of potential complications associated with spinal fusion surgery as well as the instrumentation used. This basic knowledge aims to help radiologists approach everyday practice in clinical imaging. PMID:22761979
Multimodality imaging: transfer and fusion of SPECT and MRI data
International Nuclear Information System (INIS)
Knesaurek, K.
1994-01-01
Image fusion is a technique which offers the best of both worlds. It unites the two basic types of medical images: functional body images (PET or SPECT scans), which provide physiological information, and structural images (CT or MRI), which provide an anatomic map of the body. A control-point-based registration technique was developed and used. Tc-99m point sources were used as external markers in SPECT studies, while for MRI and CT imaging only anatomic landmarks were used as control points. The MRI images were acquired on a GE Signa 1.2 system and CT data on a GE 9800 scanner. SPECT studies were performed 1 h after intravenous injection of 740 MBq of Tc-99m-HMPAO on a triple-headed TRIONIX gamma camera. B-spline and bilinear interpolation were used for the rotation, scaling and translation of the images. In the process of creating a single composite image, in order to retain information from the individual images, the MRI (or CT) image was scaled to one color range and the SPECT image to another. In some situations the MRI image was kept black-and-white while the SPECT image was pasted on top of it in 'opaque' mode. Most errors which propagate through the matching process are due to sample size, imperfection of the acquisition system, noise, and the interpolations used. Accuracy of the registration was investigated in a SPECT-CT study performed on a phantom. The results have shown that the accuracy of the matching process is better than, or at worst equal to, 2 mm. (author)
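Control-point registration of the kind described here is commonly solved as a least-squares rigid alignment of matched landmarks. The following is a generic sketch of the SVD-based (Procrustes/Kabsch) solution under that assumption, not the authors' implementation; the marker coordinates are synthetic:

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (rotation R, translation t) that maps
    src landmarks onto dst landmarks, via the SVD (Procrustes) solution."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.eye(src.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic 2D markers: rotate by 30 degrees and shift, then recover the pose.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst = src @ R_true.T + np.array([2.0, -1.0])
R, t = rigid_registration(src, dst)
residual = float(np.abs(src @ R.T + t - dst).max())
```

With noise-free landmarks the residual is at machine precision; with real external markers it gives the kind of millimetre-scale matching error the abstract reports.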
Directory of Open Access Journals (Sweden)
A. H. Ahrari
2017-09-01
Full Text Available The multimodal remote sensing approach merges data from different portions of the electromagnetic spectrum, improving the accuracy of satellite image processing and interpretation. Visible and thermal infrared bands independently contain valuable spatial and spectral information: visible bands provide rich spatial detail, while thermal bands provide radiometric and spectral information that the visible range does not. However, low spatial resolution is the most important limitation of thermal infrared bands. Using satellite image fusion, it is possible to merge the two into a single thermal image that contains high spectral and spatial information at the same time. The aim of this study is a quantitative and qualitative performance assessment of thermal and visible image fusion using the wavelet transform with different filters. In this research, the Haar wavelet algorithm with different decomposition filters (mean, linear, ma, min and rand) was applied to the thermal and panchromatic bands of the Landsat 8 satellite as a shortwave and longwave fusion method. Finally, quality assessment was performed with quantitative and qualitative approaches. Quantitative parameters such as entropy, standard deviation, cross correlation, Q factor and mutual information were used. For accuracy assessment of thermal and visible image fusion, all parameters (quantitative and qualitative) must be analysed with respect to each other. Among the statistical factors considered, correlation agreed most closely with the qualitative assessment. Results showed that the mean and linear filters produce better fused images than the other filters under the Haar algorithm; the linear and mean filters perform equally, with no difference between their qualitative and quantitative results.
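Haar wavelet fusion of the kind assessed above can be sketched with a single-level decomposition: each source is split into approximation and detail subbands, the subbands are combined by simple rules, and the result is inverted. This is a generic illustration, not the authors' code; the mean rule on approximations and max-abs rule on details are assumptions:

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition into (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(img_vis, img_tir):
    """Mean rule on approximations, max-abs rule on detail subbands."""
    va, *vd = haar2d(img_vis)
    ta, *td = haar2d(img_tir)
    details = [np.where(np.abs(v) >= np.abs(t), v, t) for v, t in zip(vd, td)]
    return ihaar2d((va + ta) / 2.0, *details)

x = np.arange(64, dtype=float).reshape(8, 8)
recon_err = float(np.abs(ihaar2d(*haar2d(x)) - x).max())
```

The max-abs detail rule keeps the sharpest edges from either band, which is the usual motivation for wavelet-domain fusion of a high-spatial-resolution band with a thermal band.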
Thaden, Jeremy J; Sanon, Saurabh; Geske, Jeffrey B; Eleid, Mackram F; Nijhof, Niels; Malouf, Joseph F; Rihal, Charanjit S; Bruce, Charles J
2016-06-01
There has been significant growth in the volume and complexity of percutaneous structural heart procedures in the past decade. Increasing procedural complexity and accompanying reliance on multimodality imaging have fueled the development of fusion imaging to facilitate procedural guidance. The first clinically available system capable of echocardiographic and fluoroscopic fusion for real-time guidance of structural heart procedures was approved by the US Food and Drug Administration in 2012. Echocardiographic-fluoroscopic fusion imaging combines the precise catheter and device visualization of fluoroscopy with the soft tissue anatomy and color flow Doppler information afforded by echocardiography in a single image. This allows the interventionalist to perform precise catheter manipulations under fluoroscopy guidance while visualizing critical tissue anatomy provided by echocardiography. However, there are few data available addressing this technology's strengths and limitations in routine clinical practice. The authors provide a critical review of currently available echocardiographic-fluoroscopic fusion imaging for guidance of structural heart interventions to highlight its strengths, limitations, and potential clinical applications and to guide further research into the value of this emerging technology. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Zou, Qiushun; Yu, Tianbao; Liu, Jiangtao; Wang, Tongbiao; Liao, Qinghua; Liu, Nianhua
2015-01-01
We report an acoustic multimode interference effect and self-imaging phenomena in an acoustic multimode waveguide system which consists of M parallel phononic crystal waveguides (M-PnCWs). Results show that the self-imaging principle remains applicable for acoustic waveguides just as it does for optical multimode waveguides. To obtain the dispersions and replicas of the input acoustic waves produced along the propagation direction, we applied the finite element method to M-PnCWs, which support M guided modes within the target frequency range. The simulation results show that single images (including direct and mirrored images) and N-fold images (N is an integer) are identified along the propagation direction, with asymmetric and symmetric incidence discussed separately. The simulated positions of the replicas agree well with the values calculated from the self-imaging conditions based on guided-mode propagation analysis. Moreover, potential applications of this self-imaging effect for acoustic wavelength de-multiplexing and beam splitting are also presented. (paper)
PET-MRI and multimodal cancer imaging
International Nuclear Information System (INIS)
Wang Taisong; Zhao Jinhua; Song Jianhua
2011-01-01
Multimodality imaging, specifically PET-CT, brought a new perspective into the fields of clinical imaging. Clinical cases have shown that PET-CT has great value in clinical diagnosis and experimental research. But PET-CT still bears some limitations. A major drawback is that CT provides only limited soft tissue contrast and exposes the patient to a significant radiation dose. MRI overcomes these limitations: it has excellent soft tissue contrast, high temporal and spatial resolution, and no radiation damage. Additionally, since MRI also provides functional information, PET-MRI will show a new direction of multimodality imaging in the future. (authors)
Multimodality imaging of pulmonary infarction
International Nuclear Information System (INIS)
Bray, T.J.P.; Mortensen, K.H.; Gopalan, D.
2014-01-01
Highlights: • A plethora of pulmonary and systemic disorders, often associated with grave outcomes, may cause pulmonary infarction. • A stereotypical infarct is a peripheral wedge shaped pleurally based opacity but imaging findings can be highly variable. • Multimodality imaging is key to diagnosing the presence, aetiology and complications of pulmonary infarction. • Multimodality imaging of pulmonary infarction together with any ancillary features often guides early targeted treatment. • CT remains the principal imaging modality with MRI increasingly used alongside nuclear medicine studies and ultrasound. - Abstract: The impact of absent pulmonary arterial and venous flow on the pulmonary parenchyma depends on a host of factors. These include location of the occlusive insult, the speed at which the occlusion develops and the ability of the normal dual arterial supply to compensate through increased bronchial arterial flow. Pulmonary infarction occurs when oxygenation is cut off secondary to sudden occlusion with lack of recruitment of the dual supply arterial system. Thromboembolic disease is the commonest cause of such an insult but a whole range of disease processes intrinsic and extrinsic to the pulmonary arterial and venous lumen may also result in infarcts. Recognition of the presence of infarction can be challenging as imaging manifestations often differ from the classically described wedge shaped defect and a number of weighty causes need consideration. This review highlights aetiologies and imaging appearances of pulmonary infarction, utilising cases to illustrate the essential role of a multimodality imaging approach in order to arrive at the appropriate diagnosis.
Multimodality imaging of pulmonary infarction
Energy Technology Data Exchange (ETDEWEB)
Bray, T.J.P., E-mail: timothyjpbray@gmail.com [Department of Radiology, Papworth Hospital NHS Foundation Trust, Ermine Street, Papworth Everard, Cambridge CB23 3RE (United Kingdom); Mortensen, K.H., E-mail: mortensen@doctors.org.uk [Department of Radiology, Papworth Hospital NHS Foundation Trust, Ermine Street, Papworth Everard, Cambridge CB23 3RE (United Kingdom); University Department of Radiology, Addenbrookes Hospital, Cambridge University Hospitals NHS Foundation Trust, Hills Road, Box 318, Cambridge CB2 0QQ (United Kingdom); Gopalan, D., E-mail: deepa.gopalan@btopenworld.com [Department of Radiology, Papworth Hospital NHS Foundation Trust, Ermine Street, Papworth Everard, Cambridge CB23 3RE (United Kingdom)
2014-12-15
Highlights: • A plethora of pulmonary and systemic disorders, often associated with grave outcomes, may cause pulmonary infarction. • A stereotypical infarct is a peripheral wedge shaped pleurally based opacity but imaging findings can be highly variable. • Multimodality imaging is key to diagnosing the presence, aetiology and complications of pulmonary infarction. • Multimodality imaging of pulmonary infarction together with any ancillary features often guides early targeted treatment. • CT remains the principal imaging modality with MRI increasingly used alongside nuclear medicine studies and ultrasound. - Abstract: The impact of absent pulmonary arterial and venous flow on the pulmonary parenchyma depends on a host of factors. These include location of the occlusive insult, the speed at which the occlusion develops and the ability of the normal dual arterial supply to compensate through increased bronchial arterial flow. Pulmonary infarction occurs when oxygenation is cut off secondary to sudden occlusion with lack of recruitment of the dual supply arterial system. Thromboembolic disease is the commonest cause of such an insult but a whole range of disease processes intrinsic and extrinsic to the pulmonary arterial and venous lumen may also result in infarcts. Recognition of the presence of infarction can be challenging as imaging manifestations often differ from the classically described wedge shaped defect and a number of weighty causes need consideration. This review highlights aetiologies and imaging appearances of pulmonary infarction, utilising cases to illustrate the essential role of a multimodality imaging approach in order to arrive at the appropriate diagnosis.
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information content, preserve the edges and enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain of two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift invariant wavelet transform method. The proposed method is evaluated based on three main key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. In this paper, a complex wavelet-based image fusion has been discussed. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details. It also reduces redundant details, artifacts, and distortions.
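In PCA-based fusion schemes of this kind, the PCA stage is often reduced to weighting each source image by the components of the leading eigenvector of their joint covariance, so the source carrying more variance contributes more to the composite. A generic sketch under that assumption (not the authors' cascaded implementation):

```python
import numpy as np

def pca_fuse(img1, img2):
    """Classic PCA fusion rule: weight each source image by the components
    of the leading eigenvector of the 2x2 covariance of their pixel values."""
    data = np.stack([img1.ravel(), img2.ravel()])
    cov = np.cov(data)
    _, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])         # leading eigenvector, sign-normalised
    w = v / v.sum()                 # weights sum to 1
    return w[0] * img1 + w[1] * img2

# Degenerate check: fusing an image with itself must return the same image.
x = np.arange(16, dtype=float).reshape(4, 4)
fused = pca_fuse(x, x)
```

In the paper this weighting is cascaded with a shift-invariant (dual-tree complex) wavelet stage; the sketch shows only the PCA weighting in isolation.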
Multimodal quantitative phase and fluorescence imaging of cell apoptosis
Fu, Xinye; Zuo, Chao; Yan, Hao
2017-06-01
Fluorescence microscopy, utilizing fluorescence labeling, has the capability to observe intercellular changes that transmitted- and reflected-light microscopy techniques cannot resolve. However, the parts without fluorescence labeling are not imaged, so processes happening simultaneously in those parts cannot be revealed. Moreover, fluorescence imaging is 2D, so information along the depth axis is missing; the information in the labeled parts is therefore also incomplete. On the other hand, quantitative phase imaging is capable of imaging cells in 3D in real time through phase calculation, but its resolution is limited by optical diffraction and it cannot observe intercellular changes below 200 nanometers. In this work, fluorescence imaging and quantitative phase imaging are combined to build a multimodal imaging system. Such a system can simultaneously observe detailed intercellular phenomena and 3D cell morphology. In this study the proposed multimodal imaging system is used to observe cell behavior during apoptosis. The aim is to highlight the limitations of fluorescence microscopy and to point out the advantages of multimodal quantitative phase and fluorescence imaging. The proposed multimodal quantitative phase imaging could be further applied in cell-related biomedical research, such as tumor studies.
Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery
Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir
2017-01-01
Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659
Penning, H.L.H. de; Avila Garcez, A. d; Meyer, J.J.C.
2013-01-01
Deep Boltzmann Machines (DBM) have been used as a computational cognitive model in various AI-related research and applications, notably in computational vision and multimodal fusion. Being regarded as a biological plausible model of the human brain, the DBM is also becoming a popular instrument to
Drusen Characterization with Multimodal Imaging
Spaide, Richard F.; Curcio, Christine A.
2010-01-01
Summary: Multimodal imaging findings and histological demonstration of soft drusen, cuticular drusen, and subretinal drusenoid deposits provided information used to develop a model explaining their imaging characteristics. Purpose: To characterize the known appearance of cuticular drusen, subretinal drusenoid deposits (reticular pseudodrusen), and soft drusen as revealed by multimodal fundus imaging; to create an explanatory model that accounts for these observations. Methods: Reported color, fluorescein angiographic, autofluorescence, and spectral domain optical coherence tomography (SD-OCT) images of patients with cuticular drusen, soft drusen, and subretinal drusenoid deposits were reviewed, as were actual images from affected eyes. Representative histological sections were examined. The geometry, location, and imaging characteristics of these lesions were evaluated. A hypothesis based on the Beer-Lambert law of light absorption was generated to fit these observations. Results: Cuticular drusen appear as numerous uniform round yellow-white punctate accumulations under the retinal pigment epithelium (RPE). Soft drusen are larger yellow-white dome-shaped mounds of deposit under the RPE. Subretinal drusenoid deposits are polymorphous light-grey interconnected accumulations above the RPE. Based on the model, both cuticular and soft drusen appear yellow due to the removal of shorter wavelength light by a double pass through the RPE. Subretinal drusenoid deposits, which are located on the RPE, are not subjected to short wavelength attenuation and therefore are more prominent when viewed with blue light. The location and morphology of extracellular material in relationship to the RPE, and associated changes to RPE morphology and pigmentation, appeared to be primary determinants of druse appearance in different imaging modalities. Conclusion: Although cuticular drusen, subretinal drusenoid deposits, and soft drusen are composed of common components, they are distinguishable
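The explanatory model rests on the Beer-Lambert law of light absorption. As a sketch of the reasoning (the symbols below are illustrative, not the authors' notation):

```latex
% Beer-Lambert attenuation of light traversing an absorbing layer of
% thickness d with wavelength-dependent absorption coefficient \mu(\lambda):
I(\lambda) = I_{0}(\lambda)\, e^{-\mu(\lambda)\, d}

% A sub-RPE deposit is imaged through the RPE twice (on the way in and out),
% so the returned light is attenuated by a double pass:
I_{\mathrm{ret}}(\lambda) = I_{0}(\lambda)\, e^{-2\,\mu_{\mathrm{RPE}}(\lambda)\, d_{\mathrm{RPE}}}
```

Because the RPE absorbs short (blue) wavelengths more strongly, the double pass preferentially removes blue light, making sub-RPE drusen appear yellow; deposits located above the RPE escape this attenuation, consistent with their prominence under blue light as described in the Results.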
Multimodality imaging of the postoperative shoulder
Energy Technology Data Exchange (ETDEWEB)
Woertler, Klaus [Technische Universitaet Muenchen, Department of Radiology, Munich (Germany)
2007-12-15
Multimodality imaging of the postoperative shoulder includes radiography, magnetic resonance (MR) imaging, MR arthrography, computed tomography (CT), CT arthrography, and ultrasound. Target-oriented evaluation of the postoperative shoulder necessitates familiarity with surgical techniques, their typical complications and sources of failure, knowledge of normal and abnormal postoperative findings, awareness of the advantages and weaknesses of the different radiologic techniques, and clinical information on current symptoms and function. This article reviews the most commonly used surgical procedures for treatment of anterior glenohumeral instability, lesions of the labral-bicipital complex, subacromial impingement, and rotator cuff lesions and highlights the significance of imaging findings with a view to detection of recurrent lesions and postoperative complications in a multimodality approach. (orig.)
Investigations of image fusion
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we proposed a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D
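The mixture-model step described above can be illustrated with Expectation-Maximization for a two-component 1D Gaussian mixture, the simplest case of fitting a histogram of edge intensities. This is a generic sketch, not the thesis code; the sample data are synthetic:

```python
import numpy as np

def em_gmm2(x, iters=200):
    """EM for a two-component 1D Gaussian mixture fitted to samples x
    (e.g. edge-intensity values). Returns weights, means, variances."""
    mu = np.array([x.min(), x.max()], dtype=float)     # spread-out init
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample.
        dens = pi / np.sqrt(2.0 * np.pi * var) * \
            np.exp(-(x[:, None] - mu) ** 2 / (2.0 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

# Synthetic bimodal data: two well-separated Gaussian modes.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(8.0, 1.0, 500)])
pi, mu, var = em_gmm2(x)
```

The fitted component parameters can then serve as features of the histogram, which is the role the mixture model plays in the thesis's fusion-quality assessment.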
Multimodality image registration with software: state-of-the-art
International Nuclear Information System (INIS)
Slomka, Piotr J.; Baum, Richard P.
2009-01-01
Multimodality image integration of functional and anatomical data can be performed by means of dedicated hybrid imaging systems or by software image co-registration techniques. Hybrid positron emission tomography (PET)/computed tomography (CT) systems have found wide acceptance in oncological imaging, while software registration techniques have a significant role in patient-specific, cost-effective, and radiation dose-effective application of integrated imaging. Software techniques allow accurate (2-3 mm) rigid image registration of brain PET with CT and MRI. Nonlinear techniques are used in whole-body image registration, and recent developments allow for significantly accelerated computing times. Nonlinear software registration of PET with CT or MRI is required for multimodality radiation planning. Difficulties remain in the validation of nonlinear registration of soft tissue organs. The utilization of software-based multimodality image integration in a clinical environment is sometimes hindered by the lack of appropriate picture archiving and communication systems (PACS) infrastructure needed to efficiently and automatically integrate all available images into one common database. In cardiology applications, multimodality PET/single photon emission computed tomography and coronary CT angiography imaging is typically not required unless the results of one of the tests are equivocal. Software image registration is likely to be used in a complementary fashion with hybrid PET/CT or PET/magnetic resonance imaging systems. Software registration of stand-alone scans "paved the way" for the clinical application of hybrid scanners, demonstrating practical benefits of image integration before the hybrid dual-modality devices were available. (orig.)
Alparone, Luciano; Baronti, Stefano; Garzelli, Andrea
2015-01-01
A synthesis of more than ten years of experience, Remote Sensing Image Fusion covers methods specifically designed for remote sensing imagery. The authors supply a comprehensive classification system and rigorous mathematical description of advanced and state-of-the-art methods for pansharpening of multispectral images, fusion of hyperspectral and panchromatic images, and fusion of data from heterogeneous sensors such as optical and synthetic aperture radar (SAR) images and integration of thermal and visible/near-infrared images. They also explore new trends of signal/image processing, such as
An atlas-based multimodal registration method for 2D images with discrepancy structures.
Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng
2018-06-04
An atlas-based multimodal registration method for 2-dimensional images with discrepancy structures is proposed in this paper. An atlas is utilized to complement the discrepant structural information in multimodal medical images. The scheme includes three steps: floating-image-to-atlas registration, atlas-to-reference-image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. Registration performance was measured by the sum of squared intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: schematic diagram of the atlas-based multimodal registration method.
Spectral embedding-based registration (SERg) for multimodal fusion of prostate histology and MRI
Hwuang, Eileen; Rusu, Mirabela; Karthigeyan, Sudha; Agner, Shannon C.; Sparks, Rachel; Shih, Natalie; Tomaszewski, John E.; Rosen, Mark; Feldman, Michael; Madabhushi, Anant
2014-03-01
Multi-modal image registration is needed to align medical images collected from different protocols or imaging sources, thereby allowing the mapping of complementary information between images. One challenge of multimodal image registration is that typical similarity measures rely on statistical correlations between image intensities to determine anatomical alignment. The use of alternate image representations could allow for mapping of intensities into a space or representation such that the multimodal images appear more similar, thus facilitating their co-registration. In this work, we present a spectral embedding based registration (SERg) method that uses non-linearly embedded representations obtained from independent components of statistical texture maps of the original images to facilitate multimodal image registration. Our methodology comprises the following main steps: 1) image-derived textural representation of the original images, 2) dimensionality reduction using independent component analysis (ICA), 3) spectral embedding to generate the alternate representations, and 4) image registration. The rationale behind our approach is that SERg yields embedded representations that can allow for very different looking images to appear more similar, thereby facilitating improved co-registration. Statistical texture features are derived from the image intensities and then reduced to a smaller set by using independent component analysis to remove redundant information. Spectral embedding generates a new representation by eigendecomposition from which only the most important eigenvectors are selected. This helps to accentuate areas of salience based on modality-invariant structural information and therefore better identifies corresponding regions in both the template and target images. The spirit behind SERg is that image registration driven by these areas of salience and correspondence should improve alignment accuracy. In this work, SERg is implemented using Demons
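Step 3 of the pipeline above, spectral embedding by eigendecomposition, can be sketched as a generic Laplacian-eigenmap construction. The function and parameter names below are illustrative, not the authors' SERg implementation: an affinity matrix is built from per-pixel feature vectors with a Gaussian kernel, and eigenvectors of the normalized graph Laplacian supply the embedded representation.

```python
import numpy as np

# Hypothetical sketch of a spectral-embedding step: rows of `features` are
# per-pixel texture descriptors; the embedding coordinates come from the
# smallest nontrivial eigenvectors of the normalized graph Laplacian.

def spectral_embedding(features, n_components=2, sigma=1.0):
    """Embed row-wise feature vectors via Laplacian eigendecomposition."""
    sq_dists = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    affinity = np.exp(-sq_dists / (2.0 * sigma ** 2))
    degree = affinity.sum(axis=1)
    # Symmetric normalized Laplacian: I - D^{-1/2} W D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(degree)
    laplacian = np.eye(len(features)) - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
    # Skip the trivial first eigenvector; keep the next n_components.
    return eigvecs[:, 1:1 + n_components]

# Two tight clusters of "pixels": embedding coordinates are nearly constant
# within each cluster, which is what makes dissimilar modalities comparable.
points = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
embedding = spectral_embedding(points, n_components=1)
```

The point of the embedding for registration is exactly what the test illustrates: pixels belonging to the same underlying structure map to nearly identical coordinates regardless of the original intensity scale, so a similarity measure computed on the embedded images behaves more like a monomodal one.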
Cardiac imaging. A multimodality approach
International Nuclear Information System (INIS)
Thelen, Manfred; Erbel, Raimund; Kreitner, Karl-Friedrich; Barkhausen, Joerg
2009-01-01
An excellent atlas on modern diagnostic imaging of the heart Written by an interdisciplinary team of experts, Cardiac Imaging: A Multimodality Approach features an in-depth introduction to all current imaging modalities for the diagnostic assessment of the heart as well as a clinical overview of cardiac diseases and main indications for cardiac imaging. With a particular emphasis on CT and MRI, the first part of the atlas also covers conventional radiography, echocardiography, angiography and nuclear medicine imaging. Leading specialists demonstrate the latest advances in the field, and compare the strengths and weaknesses of each modality. The book's second part features clinical chapters on heart defects, endocarditis, coronary heart disease, cardiomyopathies, myocarditis, cardiac tumors, pericardial diseases, pulmonary vascular diseases, and diseases of the thoracic aorta. The authors address anatomy, pathophysiology, and clinical features, and evaluate the various diagnostic options. Key features: - Highly regarded experts in cardiology and radiology offer image-based teaching of the latest techniques - Readers learn how to decide which modality to use for which indication - Visually highlighted tables and essential points allow for easy navigation through the text - More than 600 outstanding images show up-to-date technology and current imaging protocols Cardiac Imaging: A Multimodality Approach is a must-have desk reference for cardiologists and radiologists in practice, as well as a study guide for residents in both fields. It will also appeal to cardiac surgeons, general practitioners, and medical physicists with a special interest in imaging of the heart. (orig.)
[Image fusion in medical radiology].
Burger, C
1996-07-20
Image fusion supports the correlation between images of two or more studies of the same organ. First, the effect of differing geometries during image acquisitions, such as a head tilt, is compensated for. As a consequence, congruent images can easily be obtained. Instead of merely putting them side by side in a static manner and burdening the radiologist with the whole correlation task, image fusion supports him with interactive visualization techniques. This is especially worthwhile for small lesions as they can be more precisely located. Image fusion is feasible today. Easy and robust techniques are readily available, and furthermore DICOM, a rapidly evolving data exchange standard, diminishes the once severe compatibility problems for image data originating from systems of different manufacturers. However, the current solutions for image fusion are not yet established enough for a high throughput of fusion studies. Thus, for the time being image fusion is most appropriately confined to clinical research studies.
Discrimination of skin diseases using the multimodal imaging approach
Vogler, N.; Heuke, S.; Akimov, D.; Latka, I.; Kluschke, F.; Röwert-Huber, H.-J.; Lademann, J.; Dietzek, B.; Popp, J.
2012-06-01
Optical microspectroscopic tools reveal great potential for dermatologic diagnostics in the clinical day-to-day routine. To enhance the diagnostic value of individual nonlinear optical imaging modalities such as coherent anti-Stokes Raman scattering (CARS), second harmonic generation (SHG), or two-photon excited fluorescence (TPF), the approach of multimodal imaging has recently been developed. Here, we present an application of nonlinear optical multimodal imaging with Raman-scattering microscopy to study sizable human-tissue cross-sections. The samples investigated contain both healthy tissue and various skin tumors. This contribution details the rich information content which can be obtained from the multimodal approach: while CARS microscopy, which - in contrast to spontaneous Raman-scattering microscopy - is not hampered by single-photon excited fluorescence, is used to monitor the lipid and protein distribution in the samples, SHG imaging selectively highlights the distribution of collagen structures within the tissue. This is because SHG is generated only in structures that lack inversion symmetry. Finally, TPF reveals the distribution of autofluorophores in tissue. The combination of these techniques, i.e. multimodal imaging, allows for recording chemical images of large-area samples and is - as this contribution will highlight - of high clinical diagnostic value.
Multimodality and Ambient Intelligence
Nijholt, Antinus; Verhaegh, W.; Aarts, E.; Korst, J.
2004-01-01
In this chapter we discuss multimodal interface technology. We present examples of multimodal interfaces and show problems and opportunities. Fusion of modalities is discussed, and some roadmap discussions on research in multimodality are summarized. This chapter also discusses future developments.
Quantitative multimodality imaging in cancer research and therapy.
Yankeelov, Thomas E; Abramson, Richard G; Quarles, C Chad
2014-11-01
Advances in hardware and software have enabled the realization of clinically feasible, quantitative multimodality imaging of tissue pathophysiology. Earlier efforts relating to multimodality imaging of cancer have focused on the integration of anatomical and functional characteristics, such as PET-CT and single-photon emission CT (SPECT-CT), whereas more-recent advances and applications have involved the integration of multiple quantitative, functional measurements (for example, multiple PET tracers, varied MRI contrast mechanisms, and PET-MRI), thereby providing a more-comprehensive characterization of the tumour phenotype. The enormous amount of complementary quantitative data generated by such studies is beginning to offer unique insights into opportunities to optimize care for individual patients. Although important technical optimization and improved biological interpretation of multimodality imaging findings are needed, this approach can already be applied informatively in clinical trials of cancer therapeutics using existing tools. These concepts are discussed herein.
Multispectral analysis of multimodal images
Energy Technology Data Exchange (ETDEWEB)
Kvinnsland, Yngve; Brekke, Njaal (Dept. of Surgical Sciences, Univ. of Bergen, Bergen (Norway)); Taxt, Torfinn M.; Gruener, Renate (Dept. of Biomedicine, Univ. of Bergen, Bergen (Norway))
2009-02-15
An increasing number of multimodal images represents a valuable increase in available image information, but at the same time it complicates the extraction of diagnostic information across the images. Multispectral analysis (MSA) has the potential to simplify this problem substantially, as an unlimited number of images can be combined and tissue properties across the images can be extracted automatically. Materials and methods. We have developed a software solution for MSA containing two algorithms for unsupervised classification, an EM algorithm finding multinormal class descriptions and the k-means clustering algorithm, and two for supervised classification, a Bayesian classifier using multinormal class descriptions and a kNN algorithm. The software has an efficient user interface for the creation and manipulation of class descriptions, and it has proper tools for displaying the results. Results. The software has been tested on different sets of images. One application is to segment cross-sectional images of brain tissue (T1- and T2-weighted MR images) into its main normal tissues and brain tumors. Another interesting set of images comprises the perfusion and diffusion maps derived from raw MR images. The software returns segmentations that seem sensible. Discussion. The MSA software appears to be a valuable tool for image analysis when multimodal images are at hand. It readily gives a segmentation of image volumes that visually seems sensible. However, to really learn how to use MSA, it will be necessary to gain more insight into what tissues the different segments contain, and upcoming work will therefore focus on examining the tissues through, for example, histological sections.
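The unsupervised k-means path of such an MSA workflow can be sketched in a few lines. The two-modality "pixels" below are synthetic stand-ins for, say, T1/T2 intensity pairs; this is an illustrative re-implementation, not the article's software.

```python
# Minimal k-means sketch for unsupervised multispectral classification:
# each "pixel" is a vector of intensities across modalities, and clusters
# play the role of tissue classes.

def kmeans(pixels, k, n_iter=20):
    """Cluster feature vectors; returns (centroids, labels)."""
    centroids = [list(p) for p in pixels[:k]]   # simple deterministic init
    labels = [0] * len(pixels)
    for _ in range(n_iter):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, p in enumerate(pixels):
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            labels[i] = dists.index(min(dists))
        # Update step: recompute each centroid as the mean of its members.
        for j in range(k):
            members = [p for p, l in zip(pixels, labels) if l == j]
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

# Two-modality "pixels": bright-on-modality-1 tissue vs bright-on-modality-2.
pixels = [(0.9, 0.1), (1.0, 0.2), (0.1, 0.8), (0.2, 0.9)]
centroids, labels = kmeans(pixels, k=2)
```

Because the feature vector simply grows with each added modality, the same loop works unchanged however many co-registered images are combined, which is the "unlimited number of images" property the abstract highlights.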
Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal
Directory of Open Access Journals (Sweden)
Han Zhiyan
2016-01-01
Full Text Available To overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes the speech signal and the facial expression signal as its research subjects. First, the speech-signal and facial-expression features are fused, sample sets are obtained by sampling with replacement, and classifiers are trained with a BP neural network (BPNN). Second, the difference between two classifiers is measured by a double-error-difference selection strategy. Finally, the final recognition result is obtained by the majority voting rule. Experiments show that the method improves the accuracy of emotion recognition by exploiting the complementary advantages of decision-level and feature-level fusion, bringing the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
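The final decision-level step, majority voting over the classifier ensemble, is simple to sketch. The vote labels below are invented for illustration; the paper's voters are BP neural networks trained on bootstrap samples of fused speech/facial features.

```python
from collections import Counter

# Sketch of the majority-voting rule only: each classifier in the ensemble
# votes on an emotion label and the most common label wins.

def majority_vote(predictions):
    """Return the most common label among classifier outputs."""
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

# Hypothetical outputs of five ensemble members for one test sample.
votes = ["happy", "sad", "happy", "happy", "neutral"]
result = majority_vote(votes)
```

A production version would also need a tie-breaking policy (e.g. fall back to the classifier with the lowest validation error), which the simple `most_common` call above leaves unspecified.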
Vermeer, J.E.M.; van Munster, E.B.; Vischer, N.O.; Gadella, T.
2004-01-01
Multimode fluorescence resonance energy transfer (FRET) microscopy was applied to study the plasma membrane organization using different lipidated green fluorescent protein (GFP)-fusion proteins co-expressed in cowpea protoplasts. Cyan fluorescent protein (CFP) was fused to the hypervariable region
Yang, Jie; Yin, Yingying; Zhang, Zuping; Long, Jun; Dong, Jian; Zhang, Yuqun; Xu, Zhi; Li, Lei; Liu, Jie; Yuan, Yonggui
2018-02-05
Major depressive disorder (MDD) is characterized by dysregulation of distributed structural and functional networks. It is now recognized that structural and functional networks are related at multiple temporal scales. The recent emergence of multimodal fusion methods has made it possible to comprehensively and systematically investigate brain networks and thereby provide essential information for influencing disease diagnosis and prognosis. However, such investigations are hampered by the inconsistent dimensionality features between structural and functional networks. Thus, a semi-multimodal fusion hierarchical feature reduction framework is proposed. Feature reduction is a vital procedure in classification that can be used to eliminate irrelevant and redundant information and thereby improve the accuracy of disease diagnosis. Our proposed framework primarily consists of two steps. The first step considers the connection distances in both structural and functional networks between MDD and healthy control (HC) groups. By adding a constraint based on sparsity regularization, the second step fully utilizes the inter-relationship between the two modalities. However, in contrast to conventional multi-modality multi-task methods, the structural networks were considered to play only a subsidiary role in feature reduction and were not included in the following classification. The proposed method achieved a classification accuracy, specificity, sensitivity, and area under the curve of 84.91%, 88.6%, 81.29%, and 0.91, respectively. Moreover, the frontal-limbic system contributed the most to disease diagnosis. Importantly, by taking full advantage of the complementary information from multimodal neuroimaging data, the selected consensus connections may be highly reliable biomarkers of MDD. Copyright © 2017 Elsevier B.V. All rights reserved.
A Multimodal Search Engine for Medical Imaging Studies.
Pinho, Eduardo; Godinho, Tiago; Valente, Frederico; Costa, Carlos
2017-02-01
The use of digital medical imaging systems in healthcare institutions has increased significantly, and the large amounts of data in these systems have led to the conception of powerful support tools: recent studies on content-based image retrieval (CBIR) and multimodal information retrieval in the field hold great potential in decision support, as well as for addressing multiple challenges in healthcare systems, such as computer-aided diagnosis (CAD). However, the subject is still under heavy research, and very few solutions have become part of Picture Archiving and Communication Systems (PACS) in hospitals and clinics. This paper proposes an extensible platform for multimodal medical image retrieval, integrated in an open-source PACS software with profile-based CBIR capabilities. In this article, we detail a technical approach to the problem by describing its main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied. Finally, we assess our implementation of the engine with computational performance benchmarks.
International Nuclear Information System (INIS)
Peng, Matthew Jian-qiao; Ju Xiangyang; Khambay, Balvinder S.; Ayoub, Ashraf F.; Chen, Chin-Tu; Bai Bo
2012-01-01
Objective: To investigate a registration approach for 2-dimensional (2D) images based on characteristic localization to achieve 3-dimensional (3D) fusion of PET, CT and MR images one by one. Method: A cubic-oriented scheme of "9 points and 3 planes" for co-registration design was verified to be geometrically practical. After acquiring DICOM data from PET/CT/MR (directed by the radiotracer 18F-FDG, etc.), through 3D reconstruction and virtual dissection, internal feature points were sorted and combined with preselected external feature points for the matching process. Following the procedure of feature extraction and image mapping, "picking points to form planes" and "picking planes for segmentation" were executed. Eventually, image fusion was implemented on a real-time Mimics workstation based on auto-fusion techniques known as "information exchange" and "signal overlay". Result: The 2D and 3D images fused across the modality combinations [CT + MR], [PET + MR], [PET + CT] and [PET + CT + MR] were tested on data from patients suffering from tumors. Complementary 2D/3D images simultaneously presenting metabolic activities and anatomic structures were created, with detection rates of 70%, 56%, 54% (or 98%) and 44%, respectively, with no statistically significant difference between them. Conclusion: Given that no complete hybrid detector integrating the triple modalities [PET + CT + MR] currently exists, this sort of multi-modality fusion is an essential complement to the existing single-modality imaging.
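Matching a set of fiducial points such as the "9 points" above amounts to estimating a rigid transform from point correspondences. A standard least-squares solution (the Kabsch/Procrustes method) is sketched below on synthetic points; it illustrates the geometry only and is not the authors' workstation pipeline.

```python
import numpy as np

# Hedged sketch: recover rotation R and translation t such that
# dst ≈ src @ R.T + t, given corresponding fiducial points (Kabsch method).

def rigid_transform(src, dst):
    """Least-squares rigid transform mapping src points onto dst points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)       # cross-covariance SVD
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    rotation = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    translation = dst.mean(axis=0) - src.mean(axis=0) @ rotation.T
    return rotation, translation

# Nine synthetic fiducials and a known rotation about the z-axis.
rng = np.random.default_rng(0)
src = rng.random((9, 3))
theta = 0.3
true_r = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
dst = src @ true_r.T + np.array([1.0, 2.0, 3.0])
est_r, est_t = rigid_transform(src, dst)
```

With noiseless correspondences the recovered transform is exact; with real fiducials the same formula gives the least-squares optimum, and the residual is a useful registration-quality check.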
The Catchment Feature Model: A Device for Multimodal Fusion and a Bridge between Signal and Sense
Quek, Francis
2004-12-01
The catchment feature model addresses two questions in the field of multimodal interaction: how we bridge video and audio processing with the realities of human multimodal communication, and how information from the different modes may be fused. We argue from a detailed literature review that gestural research has clustered around manipulative and semaphoric use of the hands, motivate the catchment feature model from psycholinguistic research, and present the model. In contrast to "whole gesture" recognition, the catchment feature model applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We present our experimental framework for catchment feature-based research, cite three concrete examples of catchment features, and propose new directions of multimodal research based on the model.
Quantitative multi-modal NDT data analysis
International Nuclear Information System (INIS)
Heideklang, René; Shokouhi, Parisa
2014-01-01
A single NDT technique is often not adequate to provide assessments of the integrity of test objects with the required coverage or accuracy. In such situations, one often resorts to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task which involves several diverse fields of research, including signal and image processing, statistics, and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of eddy current, GMR, and thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.
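A high-level fusion scheme of the kind described can be sketched as per-sensor score normalization followed by averaging, so that consistent (redundant) indications reinforce each other while isolated ones are attenuated. The per-position scores below are synthetic, not the eddy current/GMR/thermography data from the study.

```python
# Hedged sketch of high-level NDT data fusion: each modality yields a
# defect-likelihood score per scan position; scores are min-max normalized
# per sensor, averaged across sensors, and thresholded.

def normalize(scores):
    """Min-max normalize one sensor's scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(*sensor_scores):
    """Average the per-sensor normalized scores position by position."""
    normalized = [normalize(s) for s in sensor_scores]
    return [sum(vals) / len(vals) for vals in zip(*normalized)]

# Synthetic scores at four scan positions; positions 1 and 3 hold real
# grooves, while "thermo" raises one false indication at position 2.
eddy   = [0.1, 0.9, 0.2, 0.8]
gmr    = [0.2, 0.8, 0.1, 0.9]
thermo = [0.3, 0.7, 0.6, 0.9]
fused = fuse_scores(eddy, gmr, thermo)
detected = [i for i, v in enumerate(fused) if v > 0.5]
```

The false indication supported by only one sensor falls below the fused threshold, which is the specificity gain the study reports; a robust version would also need to handle the degenerate case of a sensor with constant scores, which the simple `normalize` above does not.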
Multimodal functional network connectivity: an EEG-fMRI fusion in network space.
Directory of Open Access Journals (Sweden)
Xu Lei
Full Text Available EEG and fMRI recordings measure the functional activity of multiple coherent networks distributed in the cerebral cortex. Identifying network interaction from the complementary neuroelectric and hemodynamic signals may help to explain the complex relationships between different brain regions. In this paper, multimodal functional network connectivity (mFNC is proposed for the fusion of EEG and fMRI in network space. First, functional networks (FNs are extracted using spatial independent component analysis (ICA in each modality separately. Then the interactions among FNs in each modality are explored by Granger causality analysis (GCA. Finally, fMRI FNs are matched to EEG FNs in the spatial domain using network-based source imaging (NESOI. Investigations of both synthetic and real data demonstrate that mFNC has the potential to reveal the underlying neural networks of each modality separately and in their combination. With mFNC, comprehensive relationships among FNs might be unveiled for the deep exploration of neural activities and metabolic responses in a specific task or neurological state.
Registration of deformed multimodality medical images
International Nuclear Information System (INIS)
Moshfeghi, M.; Naidich, D.
1989-01-01
The registration and combination of images from different modalities have several potential applications, such as functional and anatomic studies, 3D radiation treatment planning, surgical planning, and retrospective studies. Image registration algorithms should correct for any local deformations caused by respiration, heartbeat, imaging device distortions, and so forth. This paper reports on an elastic matching technique for registering deformed multimodality images. Correspondences between contours in the two images are used to stretch the deformed image toward its goal image. This process is repeated a number of times with decreasing image stiffness; as the iterations continue, the stretched image better approximates its goal image.
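The iterative stretching with decreasing stiffness can be illustrated in one dimension, assuming the point correspondences are already known (the paper derives them from matched contours). This toy sketch is not the paper's elastic-matching algorithm; it only shows the schedule in which a softer image permits larger displacements each pass.

```python
# Toy 1-D sketch of iterative elastic matching: contour points of the
# deformed image are pulled toward their corresponding goal-image points,
# with the step size growing as the assumed "stiffness" decreases.

def elastic_match(points, goals, stiffnesses=(4.0, 2.0, 1.0)):
    """Iteratively move points toward goals under a stiffness schedule."""
    current = list(points)
    for stiffness in stiffnesses:      # decreasing image stiffness
        step = 1.0 / stiffness         # softer image -> larger step
        current = [p + step * (g - p) for p, g in zip(current, goals)]
    return current

warped = elastic_match([0.0, 10.0, 20.0], [2.0, 11.0, 18.0])
```

Starting stiff keeps early iterations close to a global (near-rigid) correction, while the final soft passes absorb the remaining local deformation, mirroring the coarse-to-fine rationale in the abstract.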
Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.
Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng
2017-12-01
How do we retrieve images accurately? And how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers designing a novel image search engine. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated with each other. However, semantic gaps always exist between an image's visual features and its semantics; therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set, and multimodal features, including click and visual features, are collected with these images. Next, a group of autoencoders is applied to obtain an initial distance metric in different visual spaces, and an MDML method is used to assign optimal weights to the different modalities. We then conduct alternating optimization to train the ranking model, which is used for ranking new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model that uses multimodal features, including click features and visual features, in DML. We conducted experiments to analyze the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.
Fusion Imaging for Procedural Guidance.
Wiley, Brandon M; Eleid, Mackram F; Thaden, Jeremy J
2018-05-01
The field of percutaneous structural heart interventions has grown tremendously in recent years. This growth has fueled the development of new imaging protocols and technologies in parallel to help facilitate these minimally-invasive procedures. Fusion imaging is an exciting new technology that combines the strength of 2 imaging modalities and has the potential to improve procedural planning and the safety of many commonly performed transcatheter procedures. In this review we discuss the basic concepts of fusion imaging along with the relative strengths and weaknesses of static vs dynamic fusion imaging modalities. This review will focus primarily on echocardiographic-fluoroscopic fusion imaging and its application in commonly performed transcatheter structural heart procedures. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification
Directory of Open Access Journals (Sweden)
Gayathri Rajagopal
2015-01-01
Full Text Available This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion. The features at feature-level fusion are raw biometric data, which contain richer information than the decision and matching-score levels; hence, information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
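The overall pipeline, feature-level fusion by concatenation, PCA for dimensionality reduction, then nearest-neighbour matching, can be sketched as below. The tiny "palmprint" and "iris" feature vectors are synthetic stand-ins for the real UPOL/PolyU data, and the helper names are illustrative.

```python
import numpy as np

# Illustrative pipeline in the spirit of the paper: concatenate per-modality
# features (feature-level fusion), project with PCA, classify with kNN.

def pca_project(data, n_components):
    """Center data and project onto the top principal components."""
    mean = data.mean(axis=0)
    centered = data - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    return centered @ components.T, mean, components

def knn_predict(train_x, train_y, query, k=1):
    """Predict the majority label among the k nearest training points."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels = [train_y[i] for i in nearest]
    return max(set(labels), key=labels.count)

# Synthetic 2-D "palmprint" and "iris" features for two enrolled subjects.
palm = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
iris = np.array([[0.8, 0.0], [0.9, 0.1], [0.1, 0.7], [0.0, 0.8]])
fused = np.hstack([palm, iris])            # feature-level fusion
labels = ["subjectA", "subjectA", "subjectB", "subjectB"]

projected, mean, components = pca_project(fused, n_components=2)
query = (np.array([0.95, 0.15, 0.85, 0.05]) - mean) @ components.T
prediction = knn_predict(projected, labels, query)
```

Note that the probe is projected with the same mean and components learned at enrollment, the step where the curse of dimensionality is tamed before matching.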
Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox
Directory of Open Access Journals (Sweden)
Andre Santos Ribeiro
2015-07-01
Full Text Available Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality-specific data procedures. Until now, no single toolbox was able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of reducing the time spent on data processing and allowing an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software packages such as Freesurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19–73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting state fMRI data, and 10 subjects with 18F-altanserin PET data. Results. It was observed both a high inter
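As a minimal illustration of the graph-theory metrics such a toolbox derives from a connectivity matrix, the following sketch (plain NumPy, not the MIBCA or Brain Connectivity Toolbox API) computes node degree and connection density for an undirected, unweighted network:

```python
import numpy as np

def degree_and_density(adj):
    """Node degree and connection density of an undirected, unweighted
    connectivity matrix. Illustrative sketch, not the toolbox's API."""
    adj = (np.asarray(adj) > 0).astype(int)   # binarize a weighted matrix
    np.fill_diagonal(adj, 0)                  # ignore self-connections
    degree = adj.sum(axis=1)                  # connections per node
    n = adj.shape[0]
    density = adj.sum() / (n * (n - 1))       # fraction of possible edges
    return degree, density

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
deg, dens = degree_and_density(A)
print(deg, dens)
```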
Improving treatment planning accuracy through multimodality imaging
International Nuclear Information System (INIS)
Sailer, Scott L.; Rosenman, Julian G.; Soltys, Mitchel; Cullip, Tim J.; Chen, Jun
1996-01-01
Purpose: In clinical practice, physicians are constantly comparing multiple images taken at various times during the patient's treatment course. One goal of such a comparison is to accurately define the gross tumor volume (GTV). The introduction of three-dimensional treatment planning has greatly enhanced the ability to define the GTV, but there are times when the GTV is not visible on the treatment-planning computed tomography (CT) scan. We have modified our treatment-planning software to allow for interactive display of multiple, registered images that enhance the physician's ability to accurately determine the GTV. Methods and Materials: Images are registered using interactive tools developed at the University of North Carolina at Chapel Hill (UNC). Automated methods are also available. Images registered with the treatment-planning CT scan are digitized from film. After a physician has approved the registration, the registered images are made available to the treatment-planning software. Structures and volumes of interest are contoured on all images. In the beam's eye view, wire loop representations of these structures can be visualized from all image types simultaneously. Each registered image can be seamlessly viewed during the treatment-planning process, and all contours from all image types can be seen on any registered image. A beam may, therefore, be designed based on any contour. Results: Nineteen patients have been planned and treated using multimodality imaging from November 1993 through August 1994. All registered images were digitized from film, and many were from outside institutions. Brain has been the most common site (12), but the techniques of registration and image display have also been used for the thorax (4), abdomen (2), and extremity (1). The registered image has been a magnetic resonance (MR) scan in 15 cases and a diagnostic CT scan in 5 cases. In one case, sequential MRs, one before treatment and another after 30 Gy, were used to plan
Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images
Awumah, Anna; Mahanti, Prasun; Robinson, Mark
2016-10-01
Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm yields a product of high spatial quality, while the Wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and John L. Van Genderen. "Review article multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Yun. "Understanding image fusion." Photogramm. Eng. Remote Sens. 70.6 (2004): 657-661. [3] Mahanti, Prasun et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." Archives, XXIII ISPRS Congress Archives (2016).
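The component-substitution idea behind IHS pansharpening can be sketched as below; this is a simplified illustration (intensity taken as the band mean, MS image assumed already upsampled to the Pan grid), not the exact algorithm evaluated in the paper.

```python
import numpy as np

def ihs_pansharpen(ms_up, pan):
    """Basic IHS-style fusion: inject the panchromatic detail by replacing
    the intensity component of an upsampled 3-band MS image.
    ms_up: (H, W, 3) float array (already upsampled to the Pan resolution)
    pan:   (H, W) float array. Illustrative sketch only."""
    intensity = ms_up.mean(axis=2)                 # simplified I component
    # add the spatial detail (pan - I) equally to every band
    return ms_up + (pan - intensity)[..., None]

H, W = 8, 8
ms = np.full((H, W, 3), 0.5)                       # flat synthetic MS image
pan = np.linspace(0.0, 1.0, H * W).reshape(H, W)   # synthetic Pan gradient
hrms = ihs_pansharpen(ms, pan)
print(hrms.shape)  # (8, 8, 3)
```

By construction, the fused product's intensity (band mean) equals the Pan image, which is the defining property of this substitution scheme.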
A multimodal image sensor system for identifying water stress in grapevines
Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong
2012-11-01
Water stress is one of the most common limitations of fruit growth, and water is the most limiting resource for crop growth. In grapevines, as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The multi-modal sensor system was equipped with one 3CCD camera (three channels in R, G, and IR). The multi-modal sensor can capture and analyze the grape canopy from its reflectance features and identify different water stress levels. The core technology of this multi-modal sensor system could further be used in a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. The images were taken by the multi-modal sensor, which outputs images in the near-infrared, green and red spectral bands. Based on analysis of the acquired images, color features based on color space and reflectance features based on image processing methods were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate this conclusion.
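A common reflectance feature computable from such R/G/IR imagery is a normalized difference index between the IR and red channels; the sketch below is a generic illustration of that kind of feature, not the specific indicator derived in the paper.

```python
import numpy as np

def ndvi_like(ir, red, eps=1e-8):
    """Normalized difference index from the IR and R channels of a
    3CCD camera image, a common vegetation/stress feature (sketch).
    eps guards against division by zero on dark pixels."""
    ir = ir.astype(float)
    red = red.astype(float)
    return (ir - red) / (ir + red + eps)

# synthetic two-pixel example: a vegetated pixel and a neutral pixel
ir = np.array([[200, 50]], dtype=np.uint8)
red = np.array([[100, 50]], dtype=np.uint8)
idx = ndvi_like(ir, red)
print(idx)
```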
Comparative study of multimodal biometric recognition by fusion of iris and fingerprint.
Benaliouche, Houda; Touahria, Mohamed
2014-01-01
This research investigates the comparative performance of three different approaches for multimodal recognition of combined iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the different biometric traits of iris and fingerprint are fused at the matching score and decision levels. The score combination approach is applied after normalization of both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for combining matching scores at the decision level performs best, followed by the classical weighted sum rule and then the classical sum rule. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results before and after fusion are presented, followed by a comparison with related works in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results.
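The min-max normalization and weighted sum rule mentioned above can be sketched as follows; the score ranges and the 0.6/0.4 weights are illustrative assumptions, not values from the paper.

```python
def min_max(score, lo, hi):
    """Min-max normalization of a matching score to [0, 1]."""
    return (score - lo) / (hi - lo)

def weighted_sum_fusion(iris_score, finger_score,
                        iris_range, finger_range, w_iris=0.6):
    """Weighted sum rule after min-max normalization of each trait's
    score. The ranges and weight here are illustrative, not the paper's."""
    s_iris = min_max(iris_score, *iris_range)
    s_finger = min_max(finger_score, *finger_range)
    return w_iris * s_iris + (1 - w_iris) * s_finger

# iris matcher outputs in [0, 1]; fingerprint matcher in [0, 100]
fused = weighted_sum_fusion(0.75, 40,
                            iris_range=(0.0, 1.0),
                            finger_range=(0, 100))
print(fused)  # 0.6*0.75 + 0.4*0.40 = 0.61
```

The classical (unweighted) sum rule is the special case w_iris = 0.5; the fuzzy logic variant replaces the fixed weights with rule-based membership functions.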
Comparative Study of Multimodal Biometric Recognition by Fusion of Iris and Fingerprint
Directory of Open Access Journals (Sweden)
Houda Benaliouche
2014-01-01
Full Text Available This research investigates the comparative performance of three different approaches for multimodal recognition of combined iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the different biometric traits of iris and fingerprint are fused at the matching score and decision levels. The score combination approach is applied after normalization of both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for combining matching scores at the decision level performs best, followed by the classical weighted sum rule and then the classical sum rule. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results before and after fusion are presented, followed by a comparison with related works in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results.
Comparative Study of Multimodal Biometric Recognition by Fusion of Iris and Fingerprint
Benaliouche, Houda; Touahria, Mohamed
2014-01-01
This research investigates the comparative performance of three different approaches for multimodal recognition of combined iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the different biometric traits of iris and fingerprint are fused at the matching score and decision levels. The score combination approach is applied after normalization of both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for combining matching scores at the decision level performs best, followed by the classical weighted sum rule and then the classical sum rule. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results before and after fusion are presented, followed by a comparison with related works in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results. PMID:24605065
Energy Technology Data Exchange (ETDEWEB)
Kapur, T. [Brigham & Women’s Hospital (United States)]
2016-06-15
At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy” Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery” Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite” Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us” Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and R41CA192504
International Nuclear Information System (INIS)
Kapur, T.
2016-01-01
At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy” Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery” Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite” Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us” Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and R41CA192504
Multimodal imaging of lung cancer and its microenvironment (Conference Presentation)
Hariri, Lida P.; Niederst, Matthew J.; Mulvey, Hillary; Adams, David C.; Hu, Haichuan; Chico Calero, Isabel; Szabari, Margit V.; Vakoc, Benjamin J.; Hasan, Tayyaba; Bouma, Brett E.; Engelman, Jeffrey A.; Suter, Melissa J.
2016-03-01
Despite significant advances in targeted therapies for lung cancer, nearly all patients develop drug resistance within 6-12 months and prognosis remains poor. Developing drug resistance is a progressive process that involves tumor cells and their microenvironment. We hypothesize that microenvironment factors alter tumor growth and response to targeted therapy. We conducted in vitro studies in human EGFR-mutant lung carcinoma cells, and demonstrated that factors secreted from lung fibroblasts result in increased tumor cell survival during targeted therapy with the EGFR inhibitor gefitinib. We also demonstrated that increased environmental stiffness results in increased tumor survival during gefitinib therapy. In order to test our hypothesis in vivo, we developed a multimodal optical imaging protocol for preclinical intravital imaging in mouse models to assess the tumor and its microenvironment over time. We have successfully conducted multimodal imaging of dorsal skinfold chamber (DSC) window mice implanted with GFP-labeled human EGFR-mutant lung carcinoma cells and visualized changes in tumor development and microenvironment facets over time. Multimodal imaging included structural OCT to assess tumor viability and necrosis, polarization-sensitive OCT to measure tissue birefringence for collagen/fibroblast detection, and Doppler OCT to assess tumor vasculature. Confocal imaging was also performed for high-resolution visualization of EGFR-mutant lung cancer cells labeled with GFP, and was coregistered with OCT. Our results demonstrated that stromal support and vascular growth are essential to tumor progression. Multimodal imaging is a useful tool to assess the tumor and its microenvironment over time.
Feature-based Alignment of Volumetric Multi-modal Images
Toews, Matthew; Zöllei, Lilla; Wells, William M.
2014-01-01
This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to the poor repeatability of feature detection in different image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from feature data and realigning feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955
Noncontact Sleep Study by Multi-Modal Sensor Fusion
Directory of Open Access Journals (Sweden)
Ku-young Chung
2017-07-01
Full Text Available Polysomnography (PSG) is considered the gold standard for determining sleep stages, but due to the obtrusiveness of its sensor attachments, sleep stage classification algorithms using noninvasive sensors have been developed throughout the years. However, the previous studies have not yet been proven reliable. In addition, most of the products are designed for healthy customers rather than for patients with sleep disorders. We present a novel approach to classify sleep stages via low-cost and noncontact multi-modal sensor fusion, which extracts sleep-related vital signals from radar signals and a sound-based context-awareness technique. This work is uniquely designed based on the PSG data of sleep disorder patients, which were received and certified by professionals at Hanyang University Hospital. The proposed algorithm further incorporates medical/statistical knowledge to determine personally adjusted thresholds and devise post-processing. The efficiency of the proposed algorithm is highlighted by contrasting sleep stage classification performance between single-sensor and sensor-fusion algorithms. To validate the possibility of commercializing this work, the classification results of this algorithm were compared with those of the commercialized sleep monitoring device ResMed S+. The proposed algorithm was investigated with random patients following PSG examination, and results show a promising novel approach for determining sleep stages in a low-cost and unobtrusive manner.
Multimodal imaging in health, disease, and man-made disasters
International Nuclear Information System (INIS)
Papineni, Rao V.L.
2012-01-01
Significant advances in the fields of molecular and functional imaging are rapidly emerging as potential advanced research tools in health, disease and drug discovery. Notable are approaches utilizing multi-modal imaging strategies in preclinical studies, which are becoming extremely useful for assessing the efficacy of novel target molecules. This talk will focus on our efforts in bringing multimodality to preclinical research with optical, X-ray, and noninvasive nuclear imaging. The concepts and methods in molecular imaging to support drug targeting and drug discovery will be discussed, along with a focus on their utilization in radiation-induced changes in bone physiology. We will also discuss how such approaches can be employed in the future as biodosimetry for radiation disasters or radiation threats. (author)
Multimodal system for the planning and guidance of bronchoscopy
Higgins, William E.; Cheirsilp, Ronnarit; Zang, Xiaonan; Byrnes, Patrick
2015-03-01
Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all sources effectively through the complete lung-cancer staging work flow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system's potential.
Multimodal Imaging of Human Brain Activity: Rational, Biophysical Aspects and Modes of Integration
Blinowska, Katarzyna; Müller-Putz, Gernot; Kaiser, Vera; Astolfi, Laura; Vanderperren, Katrien; Van Huffel, Sabine; Lemieux, Louis
2009-01-01
Until relatively recently the vast majority of imaging and electrophysiological studies of human brain activity have relied on single-modality measurements usually correlated with readily observable or experimentally modified behavioural or brain state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging applications, we also review some of the basic physiology relevant to understanding their relationship. PMID:19547657
Mu, Wei; Qi, Jin; Lu, Hong; Schabath, Matthew; Balagurunathan, Yoganand; Tunali, Ilke; Gillies, Robert James
2018-02-01
Purpose: To investigate the ability of complementary information provided by the fusion of PET/CT images to predict immunotherapy response in non-small cell lung cancer (NSCLC) patients. Materials and methods: We collected 64 patients diagnosed with primary NSCLC treated with anti PD-1 checkpoint blockade. Using PET/CT images, fused images were created following multiple methodologies, resulting in up to 7 different images for the tumor region. Quantitative image features were extracted from the primary images (PET/CT) and the fused images: 195 features from the primary images and 1235 features from the fusion images. Three clinical characteristics were also analyzed. We then used support vector machine (SVM) classification models to identify discriminant features that predict immunotherapy response at baseline. Results: On the validation dataset, an SVM built with 87 fusion features and 13 primary PET/CT features had an accuracy and area under the ROC curve (AUROC) of 87.5% and 0.82, respectively, compared to 78.12% and 0.68 for a model built with 113 original PET/CT features. Conclusion: The fusion features show better ability to predict immunotherapy response than the individual image features.
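The AUROC figure of merit reported above can be computed from classifier scores with the rank-based (Mann-Whitney) identity; the sketch below uses made-up labels and scores purely for illustration.

```python
import numpy as np

def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity:
    the fraction of (positive, negative) pairs ranked correctly,
    with ties counting half. Sketch implementation."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# hypothetical responder labels and SVM decision scores
y = np.array([1, 1, 0, 0, 1, 0])
s = np.array([0.9, 0.8, 0.7, 0.3, 0.6, 0.2])
print(auroc(y, s))  # 8 of 9 pairs ranked correctly -> 8/9
```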
NaGdF4:Nd3+/Yb3+ Nanoparticles as Multimodal Imaging Agents
Pedraza, Francisco; Rightsell, Chris; Kumar, Ga; Giuliani, Jason; Monton, Car; Sardar, Dhiraj
Medical imaging is a fundamental tool used for the diagnosis of numerous ailments. Each imaging modality has unique advantages; however, each also possesses intrinsic limitations, including low spatial resolution, sensitivity, penetration depth, or radiation damage. To circumvent this problem, the combination of imaging modalities, or multimodal imaging, has been proposed, such as near-infrared fluorescence (NIRF) imaging and magnetic resonance imaging (MRI). By combining the specificity and selectivity of NIRF with the deep penetration and high spatial resolution of MRI, it is possible to circumvent their individual shortcomings for a more robust imaging technique. In addition, both imaging modalities are very safe and minimally invasive. Fluorescent nanoparticles, such as NaGdF4:Nd3+/Yb3+, are excellent candidates for NIRF/MRI multimodal imaging. The dopants, Nd and Yb, absorb and emit within the biological window, where near-infrared light is less attenuated by soft tissue. This results in less tissue damage and deeper tissue penetration, making them viable candidates for biological imaging. In addition, the inclusion of Gd results in paramagnetic properties, allowing their use as contrast agents in multimodal imaging. The work presented will include crystallographic results, as well as full optical and magnetic characterization to determine the nanoparticles' viability in multimodal imaging.
Fusion in computer vision understanding complex visual content
Ionescu, Bogdan; Piatrik, Tomas
2014-01-01
This book presents a thorough overview of fusion in computer vision, from an interdisciplinary and multi-application viewpoint, describing successful approaches, evaluated in the context of international benchmarks that model realistic use cases. Features: examines late fusion approaches for concept recognition in images and videos; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections, fo
Multimodality image analysis work station
International Nuclear Information System (INIS)
Ratib, O.; Huang, H.K.
1989-01-01
The goal of this project is to design and implement a PACS (picture archiving and communication system) workstation for quantitative analysis of multimodality images. The Macintosh II personal computer was selected for its friendly user interface, its popularity among the academic and medical community, and its low cost. The Macintosh operates as a stand-alone workstation where images are imported from a central PACS server through a standard Ethernet network and saved on a local magnetic or optical disk. A video digitizer board allows for direct acquisition of images from sonograms or from digitized cine angiograms. The authors have focused their project on the exploration of new means of communicating quantitative data and information through the use of an interactive and symbolic user interface. The software developed includes a variety of image analysis algorithms for digitized angiograms, sonograms, scintigraphic images, MR images, and CT scans.
Verma, Gyanendra K; Tiwary, Uma Shanker
2014-11-15
The purpose of this paper is twofold: (i) to investigate emotion representation models and find out the possibility of a model with a minimum number of continuous dimensions and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are electroencephalogram (EEG) (32 channels) and peripheral signals (8 channels: galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)) as given in the DEAP database. We have discussed the theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach and (iii) the dimensional approach, and proposed a three-continuous-dimensional representation model for emotions. A clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. The Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, has been used in this study. Experiments are performed to classify different emotions with four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for the SVM, MLP, KNN and MMC classifiers, respectively. The best accuracy is for 'Depressing', with 85.46% using SVM. The 32 EEG channels are considered as independent modes and features from each channel are considered with equal importance. Some of the channel data may be correlated, but they may also contain supplementary information. In comparison with the results reported by others, the high accuracy of 85% with 13 emotions and 32 subjects from our proposed method clearly proves the potential of our multimodal fusion approach. Copyright © 2013 Elsevier Inc. All rights reserved.
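A single level of the Discrete Wavelet Transform used for the multiresolution analysis above can be sketched with the Haar filter pair; the per-band energy computed here is one common per-channel feature, though the paper's exact feature set is not reproduced.

```python
import numpy as np

def haar_dwt_level(x):
    """One level of a Haar discrete wavelet transform on a 1-D signal
    of even length: returns approximation (low-pass) and detail
    (high-pass) coefficients. Minimal stand-in for a DWT library call."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def band_energy(coeffs):
    """Energy of a coefficient band, a typical per-channel feature."""
    return float(np.sum(np.square(coeffs)))

sig = np.array([1.0, 1.0, 2.0, 0.0])       # toy one-channel signal
a, d = haar_dwt_level(sig)
print(band_energy(a), band_energy(d))      # energies sum to the signal energy
```

Because the Haar transform is orthonormal, the band energies sum to the energy of the original signal, a useful sanity check when building feature vectors.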
Recommendations on nuclear and multimodality imaging in IE and CIED infections.
Erba, Paola Anna; Lancellotti, Patrizio; Vilacosta, Isidre; Gaemperli, Oliver; Rouzet, Francois; Hacker, Marcus; Signore, Alberto; Slart, Riemer H J A; Habib, Gilbert
2018-05-24
In the latest update of the European Society of Cardiology (ESC) guidelines for the management of infective endocarditis (IE), imaging is positioned at the centre of the diagnostic work-up so that an early and accurate diagnosis can be reached. Besides echocardiography, contrast-enhanced CT (ce-CT), radiolabelled leucocyte (white blood cell, WBC) SPECT/CT and [18F]FDG PET/CT are included as diagnostic tools in the diagnostic flow chart for IE. Following the clinical guidelines that provided a straightforward message on the role of multimodality imaging, we believe that it is highly relevant to produce specific recommendations on nuclear multimodality imaging in IE and cardiac implantable electronic device infections. In these procedural recommendations we therefore describe in detail the technical and practical aspects of WBC SPECT/CT and [18F]FDG PET/CT, including ce-CT acquisition protocols. We also discuss the advantages and limitations of each procedure, specific pitfalls when interpreting images, and the most important results from the literature, and also provide recommendations on the appropriate use of multimodality imaging.
Multimodal interaction in image and video applications
Sappa, Angel D
2013-01-01
Traditional Pattern Recognition (PR) and Computer Vision (CV) technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where the technology is expected to assist rather than replace the human agents. Moreover, not all problems can be solved automatically; in many applications, human interaction is the only way to tackle them. Recently, multimodal human interaction has become an important field of increasing interest in the research community. Advanced man-machine interfaces with high cognitive capabilities are a hot research topic that aims at solving challenging problems in image and video applications. In fact, the idea of computer interactive systems was already proposed in the early stages of computer science. Nowadays, the ubiquity of image sensors together with ever-increasing computing performance has opened new and challenging opportunities for research in multimodal human interaction. This book aims to show how existi...
Quantitative image fusion in infrared radiometry
Romm, Iliya; Cukurel, Beni
2018-05-01
Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed at achieving quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performance of the different techniques is evaluated on various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
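The per-pixel least-squares fusion mentioned above can be illustrated under a strong simplifying assumption: a linear, bias-subtracted camera model y_i = q·t_i relating camera output y_i to exposure time t_i and photoquantity q. The abstract describes a nonlinear weighted least squares optimization; the closed-form linear case below is only a sketch of the idea.

```python
def wls_fuse(exposure_times, outputs, weights):
    """Per-pixel weighted least squares estimate of the photoquantity q.

    Assumes the linearized model y_i = q * t_i after bias subtraction
    (an assumption for illustration; the paper's optimization is
    nonlinear).  The minimizer of sum_i w_i * (y_i - q * t_i)**2 is
        q = sum(w_i * t_i * y_i) / sum(w_i * t_i**2).
    """
    num = sum(w * t * y for w, t, y in zip(weights, exposure_times, outputs))
    den = sum(w * t * t for w, t in zip(weights, exposure_times))
    return num / den

# Three exposures of one pixel, inverse-variance-style weights.
q_hat = wls_fuse([1.0, 2.0, 4.0], [10.1, 19.9, 40.0], [1.0, 1.0, 0.5])
```

In a real pipeline the weights would come from a noise model of the detector, and saturated exposures would be excluded before the fit.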
Superparamagnetic nanoparticles for enhanced magnetic resonance and multimodal imaging
Sikma, Elise Ann Schultz
Magnetic resonance imaging (MRI) is a powerful tool for noninvasive tomographic imaging of biological systems with high spatial and temporal resolution. Superparamagnetic (SPM) nanoparticles have emerged as highly effective MR contrast agents due to their biocompatibility, ease of surface modification and magnetic properties. Conventional nanoparticle contrast agents suffer from difficult synthetic reproducibility, polydisperse sizes and weak magnetism. Numerous synthetic techniques and nanoparticle formulations have been developed to overcome these barriers. However, there are still major limitations in the development of new nanoparticle-based probes for MR and multimodal imaging including low signal amplification and absence of biochemical reporters. To address these issues, a set of multimodal (T2/optical) and dual contrast (T1/T2) nanoparticle probes has been developed. Their unique magnetic properties and imaging capabilities were thoroughly explored. An enzyme-activatable contrast agent is currently being developed as an innovative means for early in vivo detection of cancer at the cellular level. Multimodal probes function by combining the strengths of multiple imaging techniques into a single agent. Co-registration of data obtained by multiple imaging modalities validates the data, enhancing its quality and reliability. A series of T2/optical probes were successfully synthesized by attachment of a fluorescent dye to the surface of different types of nanoparticles. The multimodal nanoparticles generated sufficient MR and fluorescence signal to image transplanted islets in vivo. Dual contrast T1/T2 imaging probes were designed to overcome disadvantages inherent in the individual T1 and T2 components. A class of T1/T2 agents was developed consisting of a gadolinium (III) complex (DTPA chelate or DO3A macrocycle) conjugated to a biocompatible silica-coated metal oxide nanoparticle through a disulfide linker. The disulfide linker has the ability to be reduced
Image recovery from defocused 2D fluorescent images in multimodal digital holographic microscopy.
Quan, Xiangyu; Matoba, Osamu; Awatsuji, Yasuhiro
2017-05-01
A technique of three-dimensional (3D) intensity retrieval from defocused, two-dimensional (2D) fluorescent images in multimodal digital holographic microscopy (DHM) is proposed. In the multimodal DHM, 3D phase and 2D fluorescence distributions are obtained simultaneously by an integrated system of an off-axis DHM and a conventional epifluorescence microscope, respectively. This provides more information about the target; however, defocused fluorescent images are observed due to the short depth of field. In this Letter, we propose a method to recover the defocused images based on phase compensation and backpropagation from the defocused plane to the focused plane, using distance information obtained from a 3D phase distribution. By applying Zernike polynomial phase correction, we brought the fluorescence intensity back to the focused imaging planes. An experimental demonstration using fluorescent beads is presented, and expected applications are suggested.
You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue
2018-01-01
To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was applied to a dataset of multimodality images in glioblastoma (GBM) first, which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test generalizability of the processing pipeline, a second image dataset from GBM, acquired on the scanners different from the first one, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired prechemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
ROSTICHER , C.; Viana , Bruno; Fortin , M.-A.; Lagueux , J.; Faucher , L.; Chanéac , Corinne
2016-01-01
Persistent luminescence and magnetic properties of Gd2O2S:Eu3+, Ti4+, Mg2+ nanoparticles have been studied to demonstrate the relevance of such nanoparticles as nanoprobes for multimodal imaging. The development of new imaging tools is required to improve the quality of medical images and thus to diagnose disorders as quickly as possible in order to ensure more effective treatment. The multimodal imaging agents developed here combine the high resolution abilities of ...
Multitemporal Very High Resolution From Space: Outcome of the 2016 IEEE GRSS Data Fusion Contest
Mou, L.; Zhu, X.; Vakalopoulou, M.; Karantzalos, K.; Paragios, N.; Le Saux, B.; Moser, G.; Tuia, D.
2017-01-01
In this paper, the scientific outcomes of the 2016 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society are discussed. The 2016 Contest was an open topic competition based on a multitemporal and multimodal dataset,
Multimodal nanoparticle imaging agents: design and applications
Burke, Benjamin P.; Cawthorne, Christopher; Archibald, Stephen J.
2017-10-01
Molecular imaging, where the location of molecules or nanoscale constructs can be tracked in the body to report on disease or biochemical processes, is rapidly expanding to include combined modality or multimodal imaging. No single imaging technique can offer the optimum combination of properties (e.g. resolution, sensitivity, cost, availability). The rapid technological advances in hardware to scan patients, and software to process and fuse images, are pushing the boundaries of novel medical imaging approaches, and hand-in-hand with this is the requirement for advanced and specific multimodal imaging agents. These agents can be detected using a selection from radioisotope, magnetic resonance and optical imaging, among others. Nanoparticles offer great scope in this area as they lend themselves, via facile modification procedures, to act as multifunctional constructs. They have relevance as therapeutics and drug delivery agents that can be tracked by molecular imaging techniques with the particular development of applications in optically guided surgery and as radiosensitizers. There has been a huge amount of research work to produce nanoconstructs for imaging, and the parameters for successful clinical translation and validation of therapeutic applications are now becoming much better understood. It is an exciting time of progress for these agents as their potential is closer to being realized with translation into the clinic. The coming 5-10 years will be critical, as we will see if the predicted improvement in clinical outcomes becomes a reality. Some of the latest advances in combination modality agents are selected and the progression pathway to clinical trials analysed. This article is part of the themed issue 'Challenges for chemistry in molecular imaging'.
Multifocus Image Fusion in Q-Shift DTCWT Domain Using Various Fusion Rules
Directory of Open Access Journals (Sweden)
Yingzhong Tian
2016-01-01
Full Text Available Multifocus image fusion is a process that integrates a partially focused image sequence into a single fused image that is focused everywhere; multiple methods have been proposed over the past decades. The Dual Tree Complex Wavelet Transform (DTCWT) is one of the most precise, eliminating two main defects of the Discrete Wavelet Transform (DWT). Q-shift DTCWT was proposed afterwards to simplify the construction of filters in DTCWT, producing better fusion effects. A different image fusion strategy based on Q-shift DTCWT is presented in this work. According to the strategy, each image is first decomposed into low- and high-frequency coefficients, which are fused using different rules; various fusion rules are then innovatively combined in Q-shift DTCWT, such as the Neighborhood Variant Maximum Selectivity (NVMS) and the Sum Modified Laplacian (SML). Finally, the fused coefficients can be extracted from the source images and reconstructed to produce one fully focused image. This strategy is verified visually and quantitatively against several existing fusion methods over a large number of experiments and yields good results both on standard images and on microscopic images. Hence, we can conclude that the NVMS rule performs better than the others under Q-shift DTCWT.
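The SML fusion rule named above can be sketched as follows. Note the hedge: this sketch applies the Sum Modified Laplacian directly in the pixel domain with a per-pixel maximum-selection rule, whereas the paper applies it to Q-shift DTCWT high-frequency coefficients; window sizes are assumptions.

```python
def sml(img, x, y):
    """Sum Modified Laplacian over a 3x3 neighbourhood (clamped to the
    image interior).  The Modified Laplacian at a pixel is
        ML = |2*I(x,y) - I(x-1,y) - I(x+1,y)|
           + |2*I(x,y) - I(x,y-1) - I(x,y+1)|
    and larger SML indicates sharper (better-focused) local detail."""
    total = 0.0
    for i in range(max(1, x - 1), min(len(img) - 1, x + 2)):
        for j in range(max(1, y - 1), min(len(img[0]) - 1, y + 2)):
            ml = abs(2 * img[i][j] - img[i - 1][j] - img[i + 1][j]) \
               + abs(2 * img[i][j] - img[i][j - 1] - img[i][j + 1])
            total += ml
    return total

def fuse_sml(a, b):
    """Maximum-selection rule: per pixel, keep the source with larger SML."""
    h, w = len(a), len(a[0])
    return [[a[i][j] if sml(a, i, j) >= sml(b, i, j) else b[i][j]
             for j in range(w)] for i in range(h)]

# A sharp vertical edge versus a featureless (fully defocused) patch:
sharp = [[0, 0, 10], [0, 0, 10], [0, 0, 10]]
flat = [[3, 3, 3], [3, 3, 3], [3, 3, 3]]
fused = fuse_sml(sharp, flat)
```

On the toy patch the rule keeps every pixel from the sharper source, since the flat patch has zero SML everywhere.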
Directory of Open Access Journals (Sweden)
William M. Payne
2017-01-01
Full Text Available Surgical resection remains the most promising treatment strategy for many types of cancer. Residual malignant tissue after surgery, a consequence in part due to positive margins, contributes to high mortality and disease recurrence. In this study, multimodal contrast agents for integrated preoperative magnetic resonance imaging (MRI) and intraoperative fluorescence image-guided surgery (FIGS) are developed. Self-assembled multimodal imaging nanoparticles (SAMINs) were developed as a mixed micelle formulation using amphiphilic HA polymers functionalized with either GdDTPA for T1 contrast-enhanced MRI or Cy7.5, a near infrared fluorophore. To evaluate the relationship between MR and fluorescence signal from SAMINs, we employed simulated surgical phantoms that are routinely used to evaluate the depth at which near infrared (NIR) imaging agents can be detected by FIGS. Finally, imaging agent efficacy was evaluated in a human breast tumor xenograft model in nude mice, which demonstrated contrast in both fluorescence and magnetic resonance imaging.
MO-DE-202-04: Multimodality Image-Guided Surgery and Intervention: For the Rest of Us
Energy Technology Data Exchange (ETDEWEB)
Shekhar, R. [Children’s National Health System (United States)
2016-06-15
At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approach to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy” Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery” Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite” Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us” Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and R41CA192504
Color Multifocus Image Fusion Using Empirical Mode Decomposition
Directory of Open Access Journals (Sweden)
S. Savić
2013-11-01
Full Text Available In this paper, a recently proposed grayscale multifocus image fusion method based on the first level of Empirical Mode Decomposition (EMD) has been extended to color images. In addition, this paper deals with low-contrast multifocus image fusion. The major advantages of the proposed methods are simplicity, absence of artifacts and control of contrast, which is not the case with other pyramidal multifocus fusion methods. The efficiency of the proposed method is tested subjectively and with a vector-gradient-based objective measure proposed in this paper for multifocus color image fusion. Subjective analysis performed on a multifocus image dataset has shown its superiority to the existing EMD- and DWT-based methods. The objective measures of grayscale and color image fusion show significantly better scores for this method than for the classic complex EMD fusion method.
Magnetic Iron Oxide Nanoparticles for Multimodal Imaging and Therapy of Cancer
Directory of Open Access Journals (Sweden)
In-Kyu Park
2013-07-01
Full Text Available Superparamagnetic iron oxide nanoparticles (SPION) have emerged as an MRI contrast agent for tumor imaging due to their efficacy and safety. Their utility has been proven in clinical applications with a series of marketed SPION-based contrast agents. Extensive research has been performed to study various strategies that could improve SPION by tailoring the surface chemistry and by applying additional therapeutic functionality. Research into the dual-modal contrast uses of SPION has developed because these applications can save time and effort by reducing the number of imaging sessions. In addition to multimodal strategies, efforts have been made to develop multifunctional nanoparticles that carry both diagnostic and therapeutic cargos specifically for cancer. This review provides an overview of recent advances in multimodality imaging agents and focuses on iron oxide based nanoparticles and their theranostic applications for cancer. Furthermore, we discuss the physiochemical properties and compare different synthesis methods of SPION for the development of multimodal contrast agents.
Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging
International Nuclear Information System (INIS)
Joshi, Bishnu P.; Wang, Thomas D.
2010-01-01
Cancer is one of the major causes of mortality and morbidity in our healthcare system. Molecular imaging is an emerging methodology for the early detection of cancer, guidance of therapy, and monitoring of response. The development of new instruments and exogenous molecular probes that can be labeled for multi-modality imaging is critical to this process. Today, molecular imaging is at a crossroad, and new targeted imaging agents are expected to broadly expand our ability to detect and manage cancer. This integrated imaging strategy will permit clinicians to not only localize lesions within the body but also to manage their therapy by visualizing the expression and activity of specific molecules. This information is expected to have a major impact on drug development and understanding of basic cancer biology. At this time, a number of molecular probes have been developed by conjugating various labels to affinity ligands for targeting in different imaging modalities. This review will describe the current status of exogenous molecular probes for optical, scintigraphic, MRI and ultrasound imaging platforms. Furthermore, we will also shed light on how these techniques can be used synergistically in multi-modal platforms and how these techniques are being employed in current research.
Multimodal targeted high relaxivity thermosensitive liposome for in vivo imaging
Kuijten, Maayke M. P.; Hannah Degeling, M.; Chen, John W.; Wojtkiewicz, Gregory; Waterman, Peter; Weissleder, Ralph; Azzi, Jamil; Nicolay, Klaas; Tannous, Bakhos A.
2015-11-01
Liposomes are spherical, self-closed structures formed by lipid bilayers that can encapsulate drugs and/or imaging agents in their hydrophilic core or within their membrane moiety, making them suitable delivery vehicles. We have synthesized a new liposome containing a gadolinium-DOTA lipid bilayer as a targeting multimodal molecular imaging agent for magnetic resonance and optical imaging. We showed that this liposome has much higher molar relaxivities r1 and r2 compared to a more conventional liposome containing gadolinium-DTPA-BSA lipid. By incorporating both gadolinium and rhodamine in the lipid bilayer as well as biotin on its surface, we used this agent for multimodal imaging and targeting of tumors through the strong biotin-streptavidin interaction. Since this new liposome is thermosensitive, it can be used for ultrasound-mediated drug delivery at specific sites, such as tumors, and can be guided by magnetic resonance imaging.
Image fusion for dynamic contrast enhanced magnetic resonance imaging
Directory of Open Access Journals (Sweden)
Leach Martin O
2004-10-01
Full Text Available Abstract Background Multivariate imaging techniques such as dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) have been shown to provide valuable information for medical diagnosis. Even though these techniques provide new information, integrating and evaluating the much wider range of information is a challenging task for the human observer. This task may be assisted with the use of image fusion algorithms. Methods In this paper, image fusion based on Kernel Principal Component Analysis (KPCA) is proposed for the first time. It is demonstrated that a priori knowledge about the data domain can be easily incorporated into the parametrisation of the KPCA, leading to task-oriented visualisations of the multivariate data. The results of the fusion process are compared with those of the well-known and established standard linear Principal Component Analysis (PCA) by means of temporal sequences of 3D MRI volumes from six patients who took part in a breast cancer screening study. Results The PCA and KPCA algorithms are able to integrate information from a sequence of MRI volumes into informative gray value or colour images. By incorporating a priori knowledge, the fusion process can be automated and optimised in order to visualise suspicious lesions with high contrast to normal tissue. Conclusion Our machine learning based image fusion approach maps the full signal space of a temporal DCE-MRI sequence to a single meaningful visualisation with good tissue/lesion contrast and thus supports the radiologist during manual image evaluation.
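The linear-PCA baseline that the KPCA method is compared against can be sketched for the simplest case of two registered frames: each pixel's temporal 2-vector is projected onto the leading eigenvector of the 2x2 pixel covariance matrix. This is only a minimal sketch; a real DCE-MRI sequence has many more time points, and the paper's contribution is the kernel generalisation.

```python
import math

def pca_fuse_two(frame_a, frame_b):
    """Fuse two temporally registered frames into one gray-value image by
    projecting each pixel's (a, b) pair onto the first principal
    component of the 2x2 pixel covariance (closed-form eigenvector)."""
    xs = [p for row in frame_a for p in row]
    ys = [p for row in frame_b for p in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cxx = sum((x - mx) ** 2 for x in xs) / n
    cyy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # Largest eigenvalue of [[cxx, cxy], [cxy, cyy]] and its eigenvector.
    lam = 0.5 * (cxx + cyy + math.sqrt((cxx - cyy) ** 2 + 4 * cxy ** 2))
    if abs(cxy) > 1e-12:
        v = (lam - cyy, cxy)
    else:  # already diagonal: pick the axis with larger variance
        v = (1.0, 0.0) if cxx >= cyy else (0.0, 1.0)
    norm = math.hypot(v[0], v[1])
    v = (v[0] / norm, v[1] / norm)
    h, w = len(frame_a), len(frame_a[0])
    return [[v[0] * (frame_a[i][j] - mx) + v[1] * (frame_b[i][j] - my)
             for j in range(w)] for i in range(h)]

# Two perfectly correlated 2x2 toy frames (frame_b = frame_a / 2):
frame_a = [[0.0, 2.0], [4.0, 6.0]]
frame_b = [[0.0, 1.0], [2.0, 3.0]]
fused = pca_fuse_two(frame_a, frame_b)
```

For the rank-one toy data the projection preserves the pixel ordering exactly, which is the sense in which PCA concentrates the shared signal of the sequence into a single image.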
Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.
Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie
2016-07-01
Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
Image fusion tool: Validation by phantom measurements
International Nuclear Information System (INIS)
Zander, A.; Geworski, L.; Richter, M.; Ivancevic, V.; Munz, D.L.; Muehler, M.; Ditt, H.
2002-01-01
Aim: Validation of a new image fusion tool with regard to handling, application in a clinical environment and fusion precision under different acquisition and registration settings. Methods: The image fusion tool investigated allows fusion of imaging modalities such as PET, CT, MRI. In order to investigate fusion precision, PET and MRI measurements were performed using a cylinder and a body contour-shaped phantom. The cylinder phantom (diameter and length 20 cm each) contained spheres (10 to 40 mm in diameter) which represented 'cold' or 'hot' lesions in PET measurements. The body contour-shaped phantom was equipped with a heart model containing two 'cold' lesions. Measurements were done with and without four external markers placed on the phantoms. The markers were made of plexiglass (2 cm diameter and 1 cm thickness) and contained a Ga-Ge-68 core for PET and Vitamin E for MRI measurements. Comparison of fusion results with and without markers was done visually and by computer assistance. This algorithm was applied to the different fusion parameters and phantoms. Results: Image fusion of PET and MRI data without external markers yielded a measured error of 0 resulting in a shift at the matrix border of 1.5 mm. Conclusion: The image fusion tool investigated allows a precise fusion of PET and MRI data with a translation error acceptable for clinical use. The error is further minimized by using external markers, especially in the case of missing anatomical orientation. Using PET the registration error depends almost only on the low resolution of the data
Multimodal Biometric System Based on the Recognition of Face and Both Irises
Directory of Open Access Journals (Sweden)
Yeong Gon Kim
2012-09-01
Full Text Available The performance of unimodal biometric systems (based on a single modality such as face or fingerprint) has to contend with various problems, such as illumination variation, skin condition and environmental conditions, and device variations. Therefore, multimodal biometric systems have been used to overcome the limitations of unimodal biometrics and provide high accuracy recognition. In this paper, we propose a new multimodal biometric system based on score level fusion of face and both irises' recognition. Our study has the following novel features. First, the device proposed acquires images of the face and both irises simultaneously. The proposed device consists of a face camera, two iris cameras, near-infrared illuminators and cold mirrors. Second, fast and accurate iris detection is based on two circular edge detections, which are accomplished in the iris image on the basis of the size of the iris detected in the face image. Third, the combined accuracy is enhanced by combining each score for the face and both irises using a support vector machine. The experimental results show that the equal error rate for the proposed method is 0.131%, which is lower than that of face or iris recognition and other fusion methods.
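Score-level fusion of the three matchers can be illustrated with the simplest baseline: min-max normalization of each matcher's raw score followed by a fixed weighted sum. Note the hedge: the paper trains a support vector machine on the score vectors rather than using fixed weights, and the weights below are hypothetical.

```python
def minmax_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def fuse_scores(face, left_iris, right_iris, weights=(0.4, 0.3, 0.3)):
    """Weighted-sum score-level fusion of three normalized match scores.

    A fixed weighted sum is the simplest score-level baseline; the
    weights are hypothetical and would normally be tuned (or, as in the
    paper, replaced by a trained SVM decision function)."""
    scores = (face, left_iris, right_iris)
    return sum(w * s for w, s in zip(weights, scores))

# A genuine attempt with strong face and iris matches:
fused = fuse_scores(minmax_normalize(90.0, 0.0, 100.0),
                    minmax_normalize(0.8, 0.0, 1.0),
                    minmax_normalize(0.85, 0.0, 1.0))
```

The fused score is then thresholded to accept or reject; the equal error rate is found by sweeping that threshold.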
Moche, M; Busse, H; Dannenberg, C; Schulz, T; Schmitgen, A; Trantakis, C; Winkler, D; Schmidt, F; Kahn, T
2001-11-01
The aim of this work was to realize and clinically evaluate an image fusion platform for the integration of preoperative MRI and fMRI data into the intraoperative images of an interventional MRI system with a focus on neurosurgical procedures. A vertically open 0.5 T MRI scanner was equipped with a dedicated navigation system enabling the registration of additional imaging modalities (MRI, fMRI, CT) with the intraoperatively acquired data sets. These merged image data served as the basis for interventional planning and multimodal navigation. So far, the system has been used in 70 neurosurgical interventions (13 of which involved image data fusion--requiring 15 minutes extra time). The augmented navigation system is characterized by a higher frame rate and a higher image quality as compared to the system-integrated navigation based on continuously acquired (near) real time images. Patient movement and tissue shifts can be immediately detected by monitoring the morphological differences between both navigation scenes. The multimodal image fusion allowed a refined navigation planning especially for the resection of deeply seated brain lesions or pathologies close to eloquent areas. Augmented intraoperative orientation and instrument guidance improve the safety and accuracy of neurosurgical interventions.
Multimodality Imaging of Heart Valve Disease
International Nuclear Information System (INIS)
Rajani, Ronak; Khattar, Rajdeep; Chiribiri, Amedeo; Victor, Kelly; Chambers, John
2014-01-01
Unidentified heart valve disease is associated with a significant morbidity and mortality. It has therefore become important to accurately identify, assess and monitor patients with this condition in order that appropriate and timely intervention can occur. Although echocardiography has emerged as the predominant imaging modality for this purpose, recent advances in cardiac magnetic resonance and cardiac computed tomography indicate that they may have an important contribution to make. The current review describes the assessment of regurgitant and stenotic heart valves by multimodality imaging (echocardiography, cardiac computed tomography and cardiac magnetic resonance) and discusses their relative strengths and weaknesses
Combined Sparsifying Transforms for Compressive Image Fusion
Directory of Open Access Journals (Sweden)
ZHAO, L.
2013-11-01
Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to get the fused image. The simulations demonstrate that by using the combined sparsifying transforms better results can be achieved in terms of both the subjective visual effect and the objective evaluation indexes than using only a single sparsifying transform for compressive image fusion.
Multi-sensor image fusion and its applications
Blum, Rick S
2005-01-01
Taking another lesson from nature, the latest advances in image processing technology seek to combine image data from several diverse types of sensors in order to obtain a more accurate view of the scene: very much the same as we rely on our five senses. Multi-Sensor Image Fusion and Its Applications is the first text dedicated to the theory and practice of the registration and fusion of image data, covering such approaches as statistical methods, color-related techniques, model-based methods, and visual information display strategies. After a review of state-of-the-art image fusion techniques,
Imaging arterial cells, atherosclerosis, and restenosis by multimodal nonlinear optical microscopy
Wang, Han-Wei; Simianu, Vlad; Locker, Matthew J.; Sturek, Michael; Cheng, Ji-Xin
2008-02-01
By integrating sum-frequency generation (SFG) and two-photon excitation fluorescence (TPEF) on a coherent anti-Stokes Raman scattering (CARS) microscope platform, multimodal nonlinear optical (NLO) imaging of arteries and atherosclerotic lesions was demonstrated. CARS signals arising from CH2-rich membranes allowed visualization of endothelial cells and smooth muscle cells in a carotid artery. Additionally, CARS microscopy allowed vibrational imaging of elastin and collagen fibrils, which are rich in CH2 bonds in their cross-linking residues. The extracellular matrix organization was further confirmed by TPEF signals arising from elastin's autofluorescence and SFG signals arising from collagen fibrils' non-centrosymmetric structure. The system is capable of identifying different atherosclerotic lesion stages with sub-cellular resolution. The stages of atherosclerosis, such as macrophage infiltration, lipid-laden foam cell accumulation, extracellular lipid distribution, fibrous tissue deposition, plaque establishment, and formation of other complicated lesions, could be viewed by our multimodal CARS microscope. Collagen percentages in the region adjacent to coronary artery stents were resolved. The high correlation between NLO and histology imaging evidenced the validity of NLO imaging. The capability of imaging significant components of an arterial wall and distinctive stages of atherosclerosis in a label-free manner suggests the potential application of multimodal nonlinear optical microscopy for monitoring the onset and progression of arterial diseases.
Joint Multi-Focus Fusion and Bayer Image Restoration
Institute of Scientific and Technical Information of China (English)
Ling Guo; Bin Yang; Chao Yang
2015-01-01
In this paper, a joint multifocus image fusion and Bayer pattern image restoration algorithm for raw images of single-sensor color imaging devices is proposed. Different from traditional fusion schemes, the raw Bayer pattern images are fused before color restoration, so the Bayer image restoration operation is performed only once; the proposed algorithm is therefore more efficient than traditional fusion schemes. In detail, a clarity measurement is defined for raw Bayer pattern images, and the fusion operator is performed on superpixels, which provide powerful grouping cues of local image features. The raw images are merged with a refined weight map to get the fused Bayer pattern image, which is restored by the demosaicing algorithm to get the full-resolution color image. Experimental results demonstrate that the proposed algorithm can obtain better fused results with more natural appearance and fewer artifacts than the traditional algorithms.
A Novel Technique for Prealignment in Multimodality Medical Image Registration
Directory of Open Access Journals (Sweden)
Wu Zhou
2014-01-01
Full Text Available An image pair is often aligned initially based on a rigid or affine transformation before a deformable registration method is applied in medical image registration. Inappropriate initial registration may compromise the registration speed or impede the convergence of the optimization algorithm. In this work, a novel technique is proposed for prealignment in both monomodality and multimodality image registration based on statistical correlation of gradient information. A simple and robust algorithm is proposed to determine the rotational difference between two images by matching orientation histograms accumulated from the local orientation of each pixel, without any feature extraction. Experimental results showed that it effectively recovers the orientation angle between two unregistered images, with advantages over the existing edge-map-based method in multimodal settings. Applying the orientation detection to the registration of CT/MR, T1/T2 MRI, and monomodality images with rigid and nonrigid deformation improved the chances of finding the global optimum of the registration and reduced the search space of the optimization.
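The orientation-histogram matching described above can be sketched in a few lines: accumulate a gradient-magnitude-weighted histogram of local orientations for each image, then take the circular shift that maximizes the correlation of the two histograms as the rotational difference. The following is a minimal numpy illustration of the idea, not the authors' implementation; the function names and toy images are ours.

```python
import numpy as np

def orientation_histogram(img, bins=360):
    """Histogram of local gradient orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))  # axis-0 (row) gradient first
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0
    hist, _ = np.histogram(ang, bins=bins, range=(0, 360), weights=mag)
    return hist

def estimate_rotation(fixed, moving, bins=360):
    """Rotation (degrees) that best aligns the two orientation histograms,
    found by maximizing their circular cross-correlation."""
    h1 = orientation_histogram(fixed, bins)
    h2 = orientation_histogram(moving, bins)
    scores = [np.dot(h1, np.roll(h2, s)) for s in range(bins)]
    return np.argmax(scores) * 360.0 / bins

# Toy check: a horizontal intensity ramp vs. its 90-degree rotation
img = np.tile(np.arange(64, dtype=float), (64, 1))
rotated = np.rot90(img)
print(estimate_rotation(img, rotated))  # -> 90.0
```

On real multimodal pairs one would smooth the histograms and refine at sub-bin resolution; the toy example recovers the rotation exactly because the gradient field is uniform.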
Clinical assessment of SPECT/CT co-registration image fusion
International Nuclear Information System (INIS)
Zhou Wen; Luan Zhaosheng; Peng Yong
2004-01-01
Objective: To study the methodology of SPECT/CT co-registration image fusion and assess its clinical application value. Methods: 172 patients (119 men, 53 women) who underwent SPECT/CT image fusion during 2001-2003 were studied: 51 patients underwent 18F-FDG imaging + CT, 26 underwent 99mTc-RBC liver blood pool imaging + CT, 43 underwent 99mTc-MDP bone imaging + CT, and 18 underwent 99mTc-MAA lung perfusion imaging + CT. The scanner was a GE Millennium VG SPECT. All patients underwent three imaging steps without changing position: X-ray survey, X-ray transmission, and nuclear emission imaging (including planar imaging, and SPECT or 18F-FDG imaging with the dual-head camera). The emission images were reconstructed with the X-ray attenuation map, using COSEM for 18F-FDG and OSEM for 99mTc, and then combined with the transmission images; different processing parameters were used for the different imaging methods. The accuracy of SPECT/CT image fusion was calculated and compared with that of the single nuclear emission image. Results: The nuclear images reconstructed with X-ray attenuation correction and OSEM were apparently better than before reconstruction: the post-reconstruction emission images showed no scatter around the organs, the outlines between different tissues were clearer, and the validity of all post-reconstruction images was better. SPECT/CT image fusion gave localization a reliable basis. In 138 patients, the accuracy of SPECT/CT image fusion was 91.3% (126/138), whereas 60 (88.2%) were found through SPECT/CT image fusion; there was a significant difference between them (P 99mTc-RBC-SPECT + CT image fusion, but 21 of them were inspected by emission imaging. In 99mTc-MDP bone SPECT + CT image fusion, 4 patients' resected bone (1-6 months after surgery) and its junction with normal bone showed activity, and their morphology and density on CT differed from normal bone. 11 of 20 patients who could
Comparison of DCT, SVD and BFOA based multimodal biometric watermarking system
Directory of Open Access Journals (Sweden)
S. Anu H. Nair
2015-12-01
Full Text Available Digital image watermarking is a major domain for hiding biometric information, in which the watermark data are concealed inside a host image with imperceptible change to the picture. Owing to advances in digital image watermarking, the majority of research aims at reliable improvements in robustness to prevent attacks. A reversible invisible watermarking scheme is used for a fingerprint and iris multimodal biometric system. A novel approach is used for fusing the different biometric modalities: individual unique features of the fingerprint and iris biometrics are extracted and fused using different fusion techniques, the performance of the fusion techniques is evaluated, and the Discrete Wavelet Transform fusion method is identified as the best. The best fused biometric template is then watermarked into a cover image. Various watermarking techniques, namely the Discrete Cosine Transform (DCT), Singular Value Decomposition (SVD), and the Bacterial Foraging Optimization Algorithm (BFOA), are applied to the fused biometric feature image. The performance of the watermarking systems is compared using different metrics. The watermarked images are found to be robust against different attacks, and the biometric template can be recovered for the BFOA watermarking technique.
Multimodality imaging spectrum of complications of horseshoe kidney
Directory of Open Access Journals (Sweden)
Hardik U Shah
2017-01-01
Full Text Available Horseshoe kidney is the most common congenital renal fusion anomaly, with an incidence of 1 in 400–600 individuals. The most common type is fusion at the lower poles, seen in greater than 90% of the cases, with the rest depicting fusion at the upper poles, resulting in an inverted horseshoe kidney. Embryologically, there are two theories hypothesizing the genesis of horseshoe kidney – mechanical fusion theory and teratogenic event theory. As an entity, horseshoe kidney is an association of two anatomic anomalies, namely, ectopia and malrotation. It is also associated with other anomalies including vascular, calyceal, and ureteral anomalies. Horseshoe kidney is prone to a number of complications due to its abnormal position as well as due to associated vascular and ureteral anomalies. Complications associated with horseshoe kidney include pelviureteric junction obstruction, renal stones, infection, tumors, and trauma. It can also be associated with abnormalities of cardiovascular, central nervous, musculoskeletal and genitourinary systems, as well as chromosomal abnormalities. Conventional imaging modalities (plain films, intravenous urogram) as well as advanced cross-sectional imaging modalities (ultrasound, computed tomography, and magnetic resonance imaging) play an important role in the evaluation of horseshoe kidney. This article briefly describes the embryology and anatomy of the horseshoe kidney, enumerates appropriate imaging modalities used for its evaluation, and reviews cross-sectional imaging features of associated complications.
Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance.
Directory of Open Access Journals (Sweden)
Christopher A Mela
Full Text Available We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems entitled Integrated Imaging Goggles for guiding surgeries. The prototype systems offer real time stereoscopic fluorescence imaging and color reflectance imaging capacity, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. The real time ultrasound images can also be presented in the goggle display. Furthermore, real time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized and tested in surgeries in biological tissues ex vivo. We have found that the system can detect fluorescent targets with as low as 60 nM indocyanine green and can resolve structures down to 0.25 mm with large FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken. The Integrated Imaging Goggle is novel in 4 aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large FOV and microscopic imaging simultaneously,
Model-based satellite image fusion
DEFF Research Database (Denmark)
Aanæs, Henrik; Sveinsson, J. R.; Nielsen, Allan Aasbjerg
2008-01-01
A method is proposed for pixel-level satellite image fusion derived directly from a model of the imaging sensor. By design, the proposed method is spectrally consistent. It is argued that the proposed method needs regularization, as is the case for any method for this problem. A framework for pixel neighborhood regularization is presented. This framework enables the formulation of the regularization in a way that corresponds well with our prior assumptions about the image data. The proposed method is validated and compared with other approaches on several data sets. Lastly, the intensity-hue-saturation method is revisited in order to gain additional insight into what implications spectral consistency has for an image fusion method.
Robust QRS peak detection by multimodal information fusion of ECG and blood pressure signals.
Ding, Quan; Bai, Yong; Erol, Yusuf Bugra; Salas-Boni, Rebeca; Zhang, Xiaorong; Hu, Xiao
2016-11-01
QRS peak detection is a challenging problem when the ECG signal is corrupted. However, additional physiological signals may also provide information about the QRS position. In this study, we focus on a unique benchmark provided by the PhysioNet/Computing in Cardiology Challenge 2014 and the Physiological Measurement focus issue on robust detection of heart beats in multimodal data, which aimed to explore robust methods for QRS detection in multimodal physiological signals. A dataset of 200 training and 210 testing records is used, where the testing records are hidden and used for performance evaluation only. An information fusion framework for robust QRS detection is proposed by leveraging existing ECG and ABP analysis tools and combining heart beats derived from different sources. Results show that our approach achieves an overall accuracy of 90.94% and 88.66% on the training and testing datasets, respectively. Furthermore, we observe the expected performance at each step of the proposed approach, as evidence of its effectiveness. Discussion of the limitations of our approach is also provided.
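The fusion step, combining beats derived from the ECG with beats derived from the arterial blood pressure (ABP) waveform, can be illustrated with a deliberately simplified rule: trust the ECG detector, and fall back on ABP-derived beats only where the ECG detector found nothing nearby. This is a hedged sketch of the general idea, not the paper's actual framework; the function name and tolerance value are illustrative.

```python
def fuse_beats(ecg_beats, abp_beats, tolerance=0.15):
    """Merge beat times (seconds) from two detectors: keep every ECG beat,
    and add ABP-derived beats with no ECG beat within `tolerance` seconds.
    A simplified stand-in for multimodal beat fusion."""
    fused = list(ecg_beats)
    for t in abp_beats:
        if not any(abs(t - e) <= tolerance for e in ecg_beats):
            fused.append(t)  # the ECG detector missed this beat
    return sorted(fused)

# The ECG detector misses a beat near 2.4 s that the ABP detector catches
ecg = [0.8, 1.6, 3.2, 4.0]
abp = [0.82, 1.61, 2.41, 3.19, 4.02]
print(fuse_beats(ecg, abp))  # -> [0.8, 1.6, 2.41, 3.2, 4.0]
```

A production fusion framework would additionally weight each source by its signal quality and correct for the pulse transit time between the ECG and the ABP waveform.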
Energy Technology Data Exchange (ETDEWEB)
Moche, M.; Busse, H.; Dannenberg, C.; Schulz, T.; Schmidt, F.; Kahn, T. [Universitaetsklinikum Leipzig (Germany). Klinik und Poliklinik fuer Diagnostische Radiologie; Schmitgen, A. [GMD Forschungszentrum Informationstechnik GmbH-FIT, Sankt Augustin (Germany); Trantakis, C.; Winkler, D. [Klinik und Poliklinik fuer Neurochirurgie, Universitaetsklinikum Leipzig (Germany)
2001-11-01
The aim of this work was to realize and clinically evaluate an image fusion platform for the integration of preoperative MRI and fMRI data into the intraoperative images of an interventional MRI system with a focus on neurosurgical procedures. A vertically open 0.5 T MRI scanner was equipped with a dedicated navigation system enabling the registration of additional imaging modalities (MRI, fMRI, CT) with the intraoperatively acquired data sets. These merged image data served as the basis for interventional planning and multimodal navigation. So far, the system has been used in 70 neurosurgical interventions (13 of which involved image data fusion - requiring 15 minutes extra time). The augmented navigation system is characterized by a higher frame rate and a higher image quality as compared to the system-integrated navigation based on continuously acquired (near) real time images. Patient movement and tissue shifts can be immediately detected by monitoring the morphological differences between both navigation scenes. The multimodal image fusion allowed a refined navigation planning especially for the resection of deeply seated brain lesions or pathologies close to eloquent areas. Augmented intraoperative orientation and instrument guidance improve the safety and accuracy of neurosurgical interventions. (orig.)
Assessment of fusion operators for medical imaging: application to MR images fusion
International Nuclear Information System (INIS)
Barra, V.; Boire, J.Y.
2000-01-01
We propose in this article to assess the results provided by several fusion operators in the case of T1- and T2-weighted magnetic resonance image fusion of the brain. This assessment comprises an expert visual inspection of the results and a numerical analysis using comparison measures found in the literature. The aim of the assessment is to find the 'best' operator according to the clinical study. The method is applied here to the quantification of brain tissue volumes on a brain phantom, and makes it possible to select a fusion operator in any clinical study where several sources of information are available. (authors)
Multimodal imaging in cerebral gliomas and its neuropathological correlation
Energy Technology Data Exchange (ETDEWEB)
Gempt, Jens, E-mail: jens.gempt@lrz.tum.de [Neurochirurgische Klinik und Poliklinik, Klinikum rechts der Isar, Technische Universität München, Ismaninger Str. 22, 81675 München (Germany); Soehngen, Eric [Abteilung für Neuroradiologie, Klinikum rechts der Isar, Technische Universität München, Ismaninger Str. 22, 81675 München (Germany); Abteilung für Neuropathologie des Instituts für Allgemeine Pathologie und Pathologische Anatomie, Technische Universität München, Ismaninger Str. 22, 81675 München (Germany); Förster, Stefan [Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Technische Universität München, Ismaninger Str. 22, 81675 München (Germany); Ryang, Yu-Mi [Neurochirurgische Klinik und Poliklinik, Klinikum rechts der Isar, Technische Universität München, Ismaninger Str. 22, 81675 München (Germany); Schlegel, Jürgen [Abteilung für Neuropathologie des Instituts für Allgemeine Pathologie und Pathologische Anatomie, Technische Universität München, Ismaninger Str. 22, 81675 München (Germany); and others
2014-05-15
Introduction: Concerning the preoperative clinical diagnostic work-up of glioma patients, tumor heterogeneity challenges the oncological therapy. The current study assesses the performance of a multimodal imaging approach in differentiating between areas in malignant gliomas and investigates the extent to which such a combinatorial imaging approach might predict the underlying histology. Methods: Prior to surgical resection, patients harboring intracranial gliomas underwent MRI (MR-S, PWI) and {sup 18}F-FET-PET. Intratumoral and peritumoral biopsy targets were defined by MRI only, by FET-PET only, and by MRI and FET-PET combined, were biopsied prior to surgical resection, and then received separate histopathological examination. Results: In total, 38 tissue samples were acquired (seven glioblastomas, one anaplastic astrocytoma, one anaplastic oligoastrocytoma, one diffuse astrocytoma, and one oligoastrocytoma) and underwent histopathological analysis. The highest mean values of Mib1 and CD31 were found in the target point "T" defined by MRI and FET-PET combined. A significant correlation between NAA/Cr ratio and PET tracer uptake (−0.845, p < 0.05), as well as between Cho/Cr ratio and cell density (0.742, p < 0.05) and between NAA/Cr ratio and MIB-1 (−0.761, p < 0.05), was disclosed for this target point, though not for target points defined by MRI or FET-PET alone. Conclusion: Multimodal-imaging-guided stereotactic biopsy correlated more with histological malignancy indices, such as cell density and MIB-1 labeling, than targets that were based solely on the highest amino acid uptake or contrast enhancement on MRI. The results of our study indicate that a combined PET-MR multimodal imaging approach bears potential benefits in detecting glioma heterogeneity.
Visible and NIR image fusion using weight-map-guided Laplacian ...
Indian Academy of Sciences (India)
Ashish V Vanmali
fusion perspective, instead of the conventional haze imaging model. The proposed ... Keywords: image dehazing; Laplacian–Gaussian pyramid; multi-resolution fusion; visible–NIR image fusion; weight map. ... Tan's [8] work is based on two assumptions: first, images ... corresponding colour image, since NIR can penetrate through.
A Robust Multimodal Biometric Authentication Scheme with Voice and Face Recognition
International Nuclear Information System (INIS)
Kasban, H.
2017-01-01
This paper proposes a multimodal biometric scheme for human authentication based on fusion of voice and face recognition. For voice recognition, three categories of features (statistical coefficients, cepstral coefficients and voice timbre) are used and compared. The voice identification modality is carried out using a Gaussian Mixture Model (GMM). For face recognition, three recognition methods (Eigenface, Linear Discriminant Analysis (LDA), and Gabor filter) are used and compared. The combination of the voice and face biometric systems into a single multimodal biometric system is performed using feature fusion and score fusion. This study shows that the best results are obtained using all the features (cepstral coefficients, statistical coefficients and voice timbre) for voice recognition, the LDA face recognition method, and score fusion for the multimodal biometric system.
Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L
1997-04-01
This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing, minimal user input and easily implements either affine, i.e. linear or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT and rotate-translate mapping of abdominal SPECT/CT. A five-point thin-plate spline (TPS) warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.
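The similarity measure that drives such registrations, mutual information, can be computed from the joint gray-level histogram of the two images. The sketch below shows only the measure itself; the iterative search over affine or thin-plate spline parameters that the paper describes is omitted, and the function name and bin count are our choices.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, estimated from their joint
    gray-level histogram -- the quantity maximized during registration."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
# An image is far more informative about itself than about unrelated noise
print(mutual_information(img, img) > mutual_information(img, noise))  # -> True
```

In a registration loop, one image is repeatedly transformed and resampled, and the transformation parameters are adjusted to maximize this value.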
F-18 Labeled Diabody-Luciferase Fusion Proteins for Optical-ImmunoPET
Energy Technology Data Exchange (ETDEWEB)
Wu, Anna M. [Univ. of California, Los Angeles, CA (United States)
2013-01-18
The goal of the proposed work is to develop novel dual-labeled molecular imaging probes for multimodality imaging. Based on small, engineered antibodies called diabodies, these probes will be radioactively tagged with Fluorine-18 for PET imaging, and fused to luciferases for optical (bioluminescence) detection. Performance will be evaluated and validated using a prototype integrated optical-PET imaging system, OPET. Multimodality probes for optical-PET imaging will be based on diabodies that are dually labeled with 18F for PET detection and fused to luciferases for optical imaging. 1) Two sets of fusion proteins will be built, targeting the cell surface markers CEA or HER2. Coelenterazine-based luciferases and variant forms will be evaluated in combination with native substrate and analogs, in order to obtain two distinct probes recognizing different targets with different spectral signatures. 2) Diabody-luciferase fusion proteins will be labeled with 18F using amine reactive [18F]-SFB produced using a novel microwave-assisted, one-pot method. 3) Site-specific, chemoselective radiolabeling methods will be devised, to reduce the chance that radiolabeling will inactivate either the target-binding properties or the bioluminescence properties of the diabody-luciferase fusion proteins. 4) Combined optical and PET imaging of these dual modality probes will be evaluated and validated in vitro and in vivo using a prototype integrated optical-PET imaging system, OPET. Each imaging modality has its strengths and weaknesses. Development and use of dual modality probes allows optical imaging to benefit from the localization and quantitation offered by the PET mode, and enhances the PET imaging by enabling simultaneous detection of more than one probe.
An FPGA-based heterogeneous image fusion system design method
Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong
2011-08-01
Taking advantage of FPGAs' low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection are analyzed and compared. VHDL and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that preferable heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
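The three fusion rules compared above (gray-scale weighted averaging, maximum selection, minimum selection) are simple pixel-wise operations. As a software reference for what such FPGA modules compute, here is a numpy sketch; the paper's implementation is in VHDL on the FPGA, and the function name, default weight, and array values here are illustrative.

```python
import numpy as np

def fuse(visible, infrared, rule="average", w=0.5):
    """Pixel-level fusion rules: weighted gray-scale averaging,
    maximum selection, and minimum selection."""
    a = visible.astype(float)
    b = infrared.astype(float)
    if rule == "average":
        return w * a + (1 - w) * b   # weighted gray-scale averaging
    if rule == "max":
        return np.maximum(a, b)      # keep the brighter pixel
    if rule == "min":
        return np.minimum(a, b)      # keep the darker pixel
    raise ValueError(f"unknown rule: {rule}")

vis = np.array([[10, 200], [50, 120]], dtype=np.uint8)
ir  = np.array([[80,  40], [50, 255]], dtype=np.uint8)
print(fuse(vis, ir, "average")[1, 1])  # -> 187.5
```

Maximum selection tends to favor hot targets from the infrared channel, while averaging preserves overall scene context, which is why the applicable range of each rule differs by scenario.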
Multi-modal brain imaging software for guiding invasive treatment of epilepsy
Ossenblok, P.P.W.; Marien, S.; Meesters, S.P.L.; Florack, L.M.J.; Hofman, P.; Schijns, O.E.M.G.; Colon, A.
2017-01-01
Purpose: The surgical treatment of patients with complex epilepsies is changing more and more from open, invasive surgery towards minimally invasive, image guided treatment. Multi-modal brain imaging procedures are developed to delineate preoperatively the region of the brain which is responsible
A multimodal parallel architecture: A cognitive framework for multimodal interactions.
Cohn, Neil
2016-01-01
Human communication is naturally multimodal, and substantial focus has examined the semantic correspondences in speech-gesture and text-image relationships. However, visual narratives, like those in comics, provide an interesting challenge to multimodal communication because the words and/or images can guide the overall meaning, and both modalities can appear in complicated "grammatical" sequences: sentences use a syntactic structure and sequential images use a narrative structure. These dual structures create complexity beyond those typically addressed by theories of multimodality where only a single form uses combinatorial structure, and also poses challenges for models of the linguistic system that focus on single modalities. This paper outlines a broad theoretical framework for multimodal interactions by expanding on Jackendoff's (2002) parallel architecture for language. Multimodal interactions are characterized in terms of their component cognitive structures: whether a particular modality (verbal, bodily, visual) is present, whether it uses a grammatical structure (syntax, narrative), and whether it "dominates" the semantics of the overall expression. Altogether, this approach integrates multimodal interactions into an existing framework of language and cognition, and characterizes interactions between varying complexity in the verbal, bodily, and graphic domains. The resulting theoretical model presents an expanded consideration of the boundaries of the "linguistic" system and its involvement in multimodal interactions, with a framework that can benefit research on corpus analyses, experimentation, and the educational benefits of multimodality. Copyright © 2015.
Multifocus confocal Raman microspectroscopy for fast multimode vibrational imaging of living cells.
Okuno, Masanari; Hamaguchi, Hiro-o
2010-12-15
We have developed a multifocus confocal Raman microspectroscopic system for the fast multimode vibrational imaging of living cells. It consists of an inverted microscope equipped with a microlens array, a pinhole array, a fiber bundle, and a multichannel Raman spectrometer. Forty-eight Raman spectra from 48 foci under the microscope are simultaneously obtained by using multifocus excitation and image-compression techniques. The multifocus confocal configuration suppresses the background generated from the cover glass and the cell culturing medium so that high-contrast images are obtainable with a short accumulation time. The system enables us to obtain multimode (10 different vibrational modes) vibrational images of living cells in tens of seconds with only 1 mW laser power at one focal point. This image acquisition time is more than 10 times faster than that in conventional single-focus Raman microspectroscopy.
Image fusion via nonlocal sparse K-SVD dictionary learning.
Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang
2016-03-01
Image fusion aims to merge two or more images captured via various sensors of the same scene to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, in this paper, an approach for image fusion by the use of a novel dictionary learning scheme is proposed. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K times singular value decomposition that is commonly used in the literature), and abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images resulting from the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
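The final fusion rule in sparse-representation methods of this kind typically selects, patch by patch, the sparse code with the greater activity and reconstructs from the shared dictionary. The sketch below assumes the per-patch codes have already been computed (e.g. by a pursuit algorithm over a learned dictionary) and shows only a max-l1-activity selection rule; it is a simplified stand-in, not the paper's NL_SK_SVD pipeline, and all names are illustrative.

```python
import numpy as np

def fuse_sparse_codes(codes_a, codes_b, dictionary):
    """Fuse per-patch sparse codes (one code per column) by keeping, for each
    patch, the code with larger l1 activity, then reconstruct the fused
    patches from the shared dictionary."""
    activity_a = np.abs(codes_a).sum(axis=0)   # l1 activity per patch
    activity_b = np.abs(codes_b).sum(axis=0)
    # broadcast: the per-column condition selects whole columns
    fused = np.where(activity_a >= activity_b, codes_a, codes_b)
    return dictionary @ fused

# Tiny example: identity dictionary (4 atoms), 2 patches per source image
D = np.eye(4)
ca = np.array([[0.9, 0.0], [0.0, 0.1], [0.0, 0.0], [0.0, 0.0]])
cb = np.array([[0.2, 0.0], [0.0, 0.8], [0.0, 0.0], [0.0, 0.0]])
patches = fuse_sparse_codes(ca, cb, D)
print(float(patches[0, 0]), float(patches[1, 1]))  # -> 0.9 0.8
```

Patch 0 is taken from source A (activity 0.9 vs 0.2) and patch 1 from source B (0.8 vs 0.1); in a full pipeline the fused patches would then be tiled and averaged back into the output image.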
Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation
Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang
2015-01-01
The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense-stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of commonly used segmentation methods on a set of manually segmented isointense-stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
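The basic building block described above — convolution over a multi-channel input followed by a nonlinearity and pooling — can be sketched with plain NumPy. This is a toy forward pass with random (untrained) weights over a hypothetical 3-channel patch standing in for T1, T2, and FA inputs; all sizes are illustrative, not the paper's architecture.

```python
import numpy as np

def conv2d(x, w):
    # Valid 2D convolution: x (C, H, W), w (F, C, k, k) -> (F, H-k+1, W-k+1).
    C, H, W = x.shape
    F, _, k, _ = w.shape
    out = np.zeros((F, H - k + 1, W - k + 1))
    for f in range(F):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[f, i, j] = np.sum(x[:, i:i+k, j:j+k] * w[f])
    return out

def maxpool(x, s=2):
    # Non-overlapping s x s max pooling over each feature map.
    F, H, W = x.shape
    return x[:, :H//s*s, :W//s*s].reshape(F, H//s, s, W//s, s).max(axis=(2, 4))

rng = np.random.default_rng(1)
patch = rng.standard_normal((3, 13, 13))        # T1, T2, FA channels
w1 = rng.standard_normal((8, 3, 4, 4)) * 0.1    # 8 trainable 4x4 filters
feat = maxpool(np.maximum(conv2d(patch, w1), 0))    # conv -> ReLU -> pool
scores = rng.standard_normal((3, feat.size)) @ feat.ravel()  # WM/GM/CSF scores
label = int(np.argmax(scores))                  # predicted tissue class
```

In the trained network the filters and classification weights are learned by backpropagation; this sketch only illustrates how the hierarchy of feature maps is produced from the multi-modality input.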
Image fusion in x-ray differential phase-contrast imaging
Haas, W.; Polyanskaya, M.; Bayer, F.; Gödel, K.; Hofmann, H.; Rieger, J.; Ritter, A.; Weber, T.; Wucherer, L.; Durst, J.; Michel, T.; Anton, G.; Hornegger, J.
2012-02-01
Phase-contrast imaging is a novel modality in the field of medical X-ray imaging. The pioneering method is grating-based interferometry, which places no special requirements on the X-ray source or object size. Furthermore, it simultaneously provides three different types of information about an investigated object: absorption, differential phase-contrast, and dark-field images. Differential phase-contrast and dark-field images represent completely new information that has not yet been investigated and studied in the context of medical imaging. In order to introduce phase-contrast imaging as a new modality into the medical environment, the resulting information about the object has to be correctly interpreted. Since the three output images reflect different properties of the same object, the main challenge is to combine and visualize these data in such a way that the information explosion is diminished and the complexity of interpretation reduced. This paper presents an intuitive image fusion approach for working with grating-based phase-contrast images. It combines the information of the three different images and provides a single image. The approach is implemented in a fusion framework aimed at supporting physicians in study and analysis. The framework provides the user with an intuitive graphical user interface for controlling the fusion process. The example given in this work shows the functionality of the proposed method and the great potential of phase-contrast imaging in medical practice.
TU-C-BRD-01: Image Guided SBRT I: Multi-Modality 4D Imaging
International Nuclear Information System (INIS)
Cai, J; Mageras, G; Pan, T
2014-01-01
Motion management is one of the critical technical challenges for radiation therapy. 4D imaging has been rapidly adopted as an essential tool to assess organ motion associated with respiratory breathing. A variety of 4D imaging techniques have been developed, and others are currently under development, based on different imaging modalities such as CT, MRI, PET, and CBCT. Each modality provides specific and complementary information about organ and tumor respiratory motion. Effective use of each technique, or combined use of different techniques, can enable comprehensive management of tumor motion. Specifically, these techniques have afforded tremendous opportunities to better define and delineate tumor volumes, more accurately perform patient positioning, and effectively apply highly conformal therapy techniques such as IMRT and SBRT. Successful implementation requires a good understanding not only of each technique, including its unique features, limitations, artifacts, and image acquisition and processing, but also of how to systematically apply the information obtained from different imaging modalities using proper tools such as deformable image registration. Furthermore, it is important to understand the differences in the effects of breathing variation between different imaging modalities. A comprehensive motion management strategy using multi-modality 4D imaging has shown promise in improving patient care, but at the same time faces significant challenges. This session will focus on the current status of and advances in imaging respiration-induced organ motion with different imaging modalities: 4D-CT, 4D-MRI, 4D-PET, and 4D-CBCT/DTS. Learning Objectives: Understand the need for and role of multimodality 4D imaging in radiation therapy. Understand the underlying physics behind each 4D imaging technique. Recognize the advantages and limitations of each 4D imaging technique
Multiscale infrared and visible image fusion using gradient domain guided image filtering
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition of the source images with guided image filtering and gradient domain guided image filtering is first applied; the weight maps at each scale are then obtained using saliency detection and filtering, with three different fusion rules applied at different scales. The three fusion rules cover the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene being fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.
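The edge-preserving smoothing underlying this family of methods can be illustrated with a minimal NumPy implementation of the classic guided image filter (He et al.'s original box-filter formulation, not the gradient-domain variant that HMSD-GDGF actually uses). The radius and regularization values are arbitrary illustration choices.

```python
import numpy as np

def box(img, r):
    # Mean filter of radius r via 2D cumulative sums (edge-padded).
    pad = np.pad(img, r, mode='edge')
    c = pad.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))
    d = 2 * r + 1
    s = c[d:, d:] - c[:-d, d:] - c[d:, :-d] + c[:-d, :-d]
    return s / d**2

def guided_filter(I, p, r=2, eps=1e-2):
    # Guided filter: output is locally a linear function of the guide I,
    # so edges of I are preserved while p is smoothed.
    mI, mp = box(I, r), box(p, r)
    a = (box(I * p, r) - mI * mp) / (box(I * I, r) - mI**2 + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

rng = np.random.default_rng(2)
guide = rng.random((12, 12))                         # e.g. a visible-band image
flat = guided_filter(guide, np.full((12, 12), 3.0))  # constant input stays constant
```

In a fusion pipeline, filters like this produce the base/detail decomposition and smooth the saliency-derived weight maps so that fusion weights follow image edges.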
Multimodal label-free microscopy
Directory of Open Access Journals (Sweden)
Nicolas Pavillon
2014-09-01
Full Text Available This paper reviews the different multimodal applications based on a broad range of label-free imaging modalities, from linear to nonlinear optics, while also including spectroscopic measurements. We put specific emphasis on multimodal measurements going across the usual boundaries between imaging modalities, whereas most multimodal platforms combine techniques based on similar light interactions or similar hardware implementations. In this review, we limit the scope to applications in biology, such as live cells or tissues, since such samples, being alive or fragile by nature, often leave us no freedom to take liberties with image acquisition times and force us to gather the maximum amount of information possible at one time. For such samples, imaging by a given label-free method usually presents a challenge in obtaining sufficient optical signal or is limited in the types of observable targets. Multimodal imaging is then particularly attractive for these samples in order to maximize the amount of measured information. While multimodal imaging is always useful in the sense of acquiring additional information from additional modes, at times it is possible to attain information that could not be discovered using any single mode alone, which is the essence of the progress that is possible using a multimodal approach.
A framework of region-based dynamic image fusion
Institute of Scientific and Technical Information of China (English)
WANG Zhong-hua; QIN Zheng; LIU Yu
2007-01-01
A new framework of region-based dynamic image fusion is proposed. First, target detection is applied to dynamic images (image sequences) to segment them into target and background regions. Then, different fusion rules are employed in different regions so that the target information is preserved as much as possible. In addition, the steerable non-separable wavelet frame transform is used in the multi-resolution analysis, so the system achieves the favorable characteristics of orientation selectivity and shift invariance. Experimental results showed that, compared with other image fusion methods, the proposed method has better target recognition capability and preserves clear background information.
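The idea of applying different fusion rules per region can be shown in a few lines. This is a minimal stand-in for the paper's scheme: a precomputed target mask (the output of whatever detector is used) selects a max-intensity rule inside targets and simple averaging in the background; the rules themselves are illustrative choices.

```python
import numpy as np

def region_fuse(img_a, img_b, target_mask):
    # Different rules per region: keep the maximum inside detected targets
    # (to preserve target information), average in the background.
    fused = (img_a + img_b) / 2.0
    fused[target_mask] = np.maximum(img_a, img_b)[target_mask]
    return fused

a = np.array([[0.2, 0.9], [0.4, 0.1]])
b = np.array([[0.6, 0.3], [0.8, 0.5]])
mask = np.array([[True, False], [False, True]])   # hypothetical detected targets
out = region_fuse(a, b, mask)
```

In the actual framework the per-region rules operate on wavelet-frame coefficients rather than raw pixels, but the region-switching logic is the same.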
Anato-metabolic fusion of PET, CT and MRI images
International Nuclear Information System (INIS)
Przetak, C.; Baum, R.P.; Niesen, A.; Slomka, P.; Proeschild, A.; Leonhardi, J.
2000-01-01
The fusion of cross-sectional images - especially in oncology - appears to be a very helpful tool to improve the diagnostic and therapeutic accuracy. Though many advantages exist, image fusion is applied routinely only in a few hospitals. To introduce image fusion as a common procedure, technical and logistical conditions have to be fulfilled which are related to long term archiving of digital data, data transfer and improvement of the available software in terms of usefulness and documentation. The accuracy of coregistration and the quality of image fusion has to be validated by further controlled studies. (orig.) [de
Fusion of colour and monochromatic images with edge emphasis
Directory of Open Access Journals (Sweden)
Rade M. Pavlović
2014-02-01
Full Text Available We propose a novel method to fuse true colour images with monochromatic non-visible-range images that seeks to encode important structural information from the monochromatic images efficiently while also preserving the natural appearance of the available true chromacity information. We utilise the β colour-opponency channel of the lαβ colour space as the domain in which to fuse information from the monochromatic input into the colour input by way of robust grayscale fusion. This is followed by an effective gradient structure visualisation step that enhances the visibility of monochromatic information in the final colour-fused image. Images fused using this method preserve their natural appearance and chromacity better than conventional methods while at the same time clearly encoding structural information from the monochromatic input. This is demonstrated on a number of well-known true colour fusion examples and confirmed by the results of subjective trials on data from several colour fusion scenarios. Introduction: The goal of image fusion can be broadly defined as the representation of the visual information contained in a number of input images in a single fused image without distortion or loss of information. In practice, however, representing all available information from multiple inputs in a single image is almost impossible, and fusion is generally a data reduction task. One of the sensors usually provides a true colour image that, by definition, has all of its data dimensions already populated by spatial and chromatic information. Fusing such images with information from monochromatic inputs in a conventional manner can severely affect the natural appearance of the fused image. This is a difficult problem, and partly the reason why colour fusion has received only a fraction of the attention given to the better-behaved grayscale fusion, even long after colour sensors became widespread. Fusion method: Humans tend to see colours as contrasts between opponent
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
To solve the issue that fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with the advantages of the iterative self-organizing data analysis algorithm, for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observed operator. The algorithm then designs the objective function as a weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows. • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion. • This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules. • This text proposes the model operator and the observed operator as the fusion scheme for RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
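The "weighted sum of evaluation indices" objective that such a search optimizes can be sketched concretely. This is a generic illustration, not the paper's exact indices or weights: it combines two common fusion quality measures (histogram entropy and average gradient) into a single fitness value that a GA-style optimizer would maximize.

```python
import numpy as np

def entropy(img, bins=32):
    # Shannon entropy of the grey-level histogram (a common fusion index).
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = h / h.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def avg_gradient(img):
    # Mean gradient magnitude: a sharpness/detail index.
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def objective(img, w=(0.5, 0.5)):
    # Weighted sum of evaluation indices; the search adjusts fusion rules
    # so that candidate fused images maximize this fitness.
    return w[0] * entropy(img) + w[1] * avg_gradient(img)

rng = np.random.default_rng(3)
textured = rng.random((32, 32))        # detail-rich candidate fusion
flat = np.full((32, 32), 0.5)          # information-poor candidate
```

The weights `w` encode the "subsequent processing requirements": emphasizing the gradient term favors sharper fusions, emphasizing entropy favors information content.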
Multimodal Personal Verification Using Likelihood Ratio for the Match Score Fusion
Directory of Open Access Journals (Sweden)
Long Binh Tran
2017-01-01
Full Text Available In this paper, the authors present a novel personal verification system based on the likelihood ratio test for the fusion of match scores from multiple biometric matchers (face, fingerprint, hand shape, and palm print). In the proposed system, multimodal features are extracted by Zernike Moments (ZM). After matching, the match scores from the multiple biometric matchers are fused based on the likelihood ratio test. A finite Gaussian mixture model (GMM) is used to estimate the genuine and impostor densities of the match scores for personal verification. Our approach is also compared with several well-known approaches, such as the support vector machine and the sum rule with min-max normalization. The experimental results confirm that the proposed system achieves excellent verification performance, with higher accuracy than these approaches, and can thus be utilized in further applications related to person verification.
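The likelihood-ratio fusion rule itself is compact enough to sketch. This simplification models each matcher's genuine and impostor score densities with a single Gaussian rather than the paper's finite Gaussian mixture, and the density parameters below are hypothetical stand-ins for values fitted offline on training scores.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def likelihood_ratio(scores, genuine_params, impostor_params):
    # Product of per-matcher likelihood ratios (assumes matcher independence);
    # accept the claim when the product exceeds a chosen threshold.
    lr = 1.0
    for s, (mg, sg), (mi, si) in zip(scores, genuine_params, impostor_params):
        lr *= gauss_pdf(s, mg, sg) / gauss_pdf(s, mi, si)
    return lr

# Hypothetical fitted densities for two matchers (e.g. face, fingerprint):
gen = [(0.8, 0.10), (0.7, 0.15)]   # (mean, std) of genuine scores
imp = [(0.3, 0.10), (0.2, 0.15)]   # (mean, std) of impostor scores
genuine_trial = likelihood_ratio([0.75, 0.65], gen, imp)
impostor_trial = likelihood_ratio([0.25, 0.30], gen, imp)
```

Replacing `gauss_pdf` with a mixture density evaluated from EM-fitted GMM components recovers the scheme described in the abstract.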
International Nuclear Information System (INIS)
Wang, Yan; Zhou, Jiliu; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Shen, Dinggang; Wu, Xi; Lalush, David S; Lin, Weili
2016-01-01
Positron emission tomography (PET) has been widely used in the clinical diagnosis of diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and the corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and the low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between the multimodal MR images (or low-dose PET image) and the standard-dose PET image can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and the SR is then performed again to predict a new standard-dose PET image. This procedure can be repeated over several iterations to refine the prediction. Also, a patch selection based dictionary construction method is used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method outperforms benchmark methods in both qualitative and quantitative measures. (paper)
In vivo mapping of vascular inflammation using multimodal imaging.
Directory of Open Access Journals (Sweden)
Benjamin R Jarrett
2010-10-01
Full Text Available Plaque vulnerability to rupture has emerged as a critical correlate of the risk of adverse coronary events, but there is as yet no clinical method to assess plaque stability in vivo. In the search to identify biomarkers of vulnerable plaques, an association has been found between macrophages and plaque stability: the density and pattern of macrophage localization in lesions is indicative of the probability of rupture. In very unstable plaques, macrophages are found in high densities and concentrated in the plaque shoulders. Therefore, the ability to map macrophages in plaques could allow noninvasive assessment of plaque stability. We use a multimodality imaging approach to noninvasively map the distribution of macrophages in vivo. The use of multiple modalities allows us to combine the complementary strengths of each modality to better visualize features of interest. Our combined use of Positron Emission Tomography and Magnetic Resonance Imaging (PET/MRI) allows high-sensitivity PET screening to identify putative lesions in a whole-body view, and high-resolution MRI for detailed mapping of biomarker expression in the lesions. Macromolecular and nanoparticle contrast agents targeted to macrophages were developed and tested in three different mouse and rat models of atherosclerosis in which inflamed vascular plaques form spontaneously and/or are induced by injury. For multimodal detection, the probes were designed to contain gadolinium (T1 MRI) or iron oxide (T2 MRI), and Cu-64 (PET). PET imaging was utilized to identify regions of macrophage accumulation; these regions were further probed by MRI to visualize macrophage distribution at high resolution. In both PET and MR images the probes enhanced contrast at sites of vascular inflammation, but not in normal vessel walls. MRI was able to identify discrete sites of inflammation that were blurred together at the low resolution of PET. Macrophage content in the lesions was confirmed by histology. The multimodal
Remote sensing image fusion in the context of Digital Earth
International Nuclear Information System (INIS)
Pohl, C
2014-01-01
The increase in the number of operational Earth observation satellites gives remote sensing image fusion a new boost. As a powerful tool to integrate images from different sensors it enables multi-scale, multi-temporal and multi-source information extraction. Image fusion aims at providing results that cannot be obtained from a single data source alone. Instead it enables feature and information mining of higher reliability and availability. The process required to prepare remote sensing images for image fusion comprises most of the necessary steps to feed the database of Digital Earth. The virtual representation of the planet uses data and information that is referenced and corrected to suit interpretation and decision-making. The same pre-requisite is valid for image fusion, the outcome of which can directly flow into a geographical information system. The assessment and description of the quality of the results remains critical. Depending on the application and information to be extracted from multi-source images different approaches are necessary. This paper describes the process of image fusion based on a fusion and classification experiment, explains the necessary quality measures involved and shows with this example which criteria have to be considered if the results of image fusion are going to be used in Digital Earth
PCANet-Based Structural Representation for Nonrigid Multimodal Medical Image Registration
Directory of Open Access Journals (Sweden)
Xingxing Zhu
2018-05-01
Full Text Available Nonrigid multimodal image registration remains a challenging task in medical image processing and analysis. Structural representation (SR) based registration methods have attracted much attention recently. However, the existing SR methods cannot provide satisfactory registration accuracy due to their use of hand-designed features for structural representation. To address this problem, a structural representation method based on an improved version of the simple deep learning network PCANet is proposed for medical image registration. In the proposed method, PCANet is first trained on numerous medical images to learn the convolution kernels for the network. Then, a pair of input medical images to be registered is processed by the learned PCANet. The features extracted by the various layers of the PCANet are fused to produce multilevel features. Structural representation images are constructed for the two input images by a nonlinear transformation of these multilevel features. The Euclidean distance between the structural representation images is calculated and used as the similarity metric. The objective function defined by this similarity metric is optimized by the L-BFGS method to obtain the parameters of the free-form deformation (FFD) model. Extensive experiments on simulated and real multimodal image datasets show that, compared with state-of-the-art registration methods such as the modality-independent neighborhood descriptor (MIND), normalized mutual information (NMI), the Weber local descriptor (WLD), and the sum of squared differences on entropy images (ESSD), the proposed method provides better registration performance in terms of target registration error (TRE) and subjective human vision.
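The kernel-learning step at the heart of PCANet is simple enough to sketch: the stage-one convolution kernels are just the leading principal components of mean-removed image patches. The sketch below computes such filters with NumPy; patch size, filter count, and image sizes are illustrative choices, and a real PCANet would stack a second stage plus binary hashing and histogramming.

```python
import numpy as np

def pca_filters(images, k=5, n_filters=4):
    # PCANet stage-1 kernels: leading eigenvectors of mean-removed patches.
    patches = []
    for img in images:
        H, W = img.shape
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                p = img[i:i+k, j:j+k].ravel()
                patches.append(p - p.mean())          # patch-mean removal
    X = np.array(patches)                             # (num_patches, k*k)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_filters].reshape(n_filters, k, k)    # top principal directions

rng = np.random.default_rng(4)
imgs = [rng.random((12, 12)) for _ in range(3)]       # stand-in training images
filters = pca_filters(imgs)
```

Convolving an input image with these learned filters yields the feature maps that, after the nonlinear transformation described in the abstract, form the structural representation images compared by Euclidean distance.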
Multimodal Image Alignment via Linear Mapping between Feature Modalities.
Jiang, Yanyun; Zheng, Yuanjie; Hou, Sujuan; Chang, Yuchou; Gee, James
2017-01-01
We propose a novel landmark matching based method for aligning multimodal images, which is accomplished uniquely by resolving a linear mapping between different feature modalities. This linear mapping results in a new measurement on similarity of images captured from different modalities. In addition, our method simultaneously solves this linear mapping and the landmark correspondences by minimizing a convex quadratic function. Our method can estimate complex image relationship between different modalities and nonlinear nonrigid spatial transformations even in the presence of heavy noise, as shown in our experiments carried out by using a variety of image modalities.
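The linear-mapping idea can be illustrated with a least-squares sketch. This solves only the mapping half of the paper's convex quadratic (the landmark correspondences are assumed known here), and all data below are synthetic stand-ins: features of matched landmarks in modality A are mapped into modality B's feature space, after which ordinary Euclidean distance serves as the cross-modality similarity.

```python
import numpy as np

# Feature vectors of matched landmarks: one row per landmark.
rng = np.random.default_rng(5)
Xa = rng.standard_normal((50, 6))                  # modality-A features
M_true = rng.standard_normal((6, 4))               # hidden ground-truth mapping
Xb = Xa @ M_true + 0.01 * rng.standard_normal((50, 4))  # modality-B features

# Closed-form minimizer of the convex quadratic ||Xa M - Xb||_F^2 in M.
M_est, *_ = np.linalg.lstsq(Xa, Xb, rcond=None)

# Cross-modality similarity: residual after mapping A's features into B's space.
residual = np.linalg.norm(Xa @ M_est - Xb) / np.linalg.norm(Xb)
```

In the full method this mapping and the correspondences are estimated jointly by alternating or joint minimization of the same quadratic objective.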
Biometric image enhancement using decision rule based image fusion techniques
Sagayee, G. Mary Amirtha; Arumugam, S.
2010-02-01
Introducing biometrics into information systems may result in considerable benefits. Most researchers have confirmed that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing a proper sensor is a risk. The proposed work addresses how image quality can be improved by introducing image fusion techniques at the sensor level. The results of the images after applying the decision rule based image fusion technique are evaluated and analyzed in terms of entropy levels and root mean square error.
Extended depth of field integral imaging using multi-focus fusion
Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua
2018-03-01
In this paper, we propose a new method for depth-of-field extension in integral imaging by applying an image fusion method to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to take multi-focus elemental images by sweeping the focus plane across the scene. Simply applying an image fusion method to the elemental images, which hold rich parallax information, does not work effectively because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. The all-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system has been extended by combining the generalization method with image fusion on multi-focus elemental images.
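The block-based fusion step can be sketched directly. This is a minimal illustration, assuming already-aligned (generalized) images and using block variance as the focus measure; the paper's actual focus measure and block size may differ.

```python
import numpy as np

def block_fuse(img_a, img_b, bs=4):
    # Per-block focus measure (variance); copy the sharper block into the result.
    fused = np.empty_like(img_a)
    H, W = img_a.shape
    for i in range(0, H, bs):
        for j in range(0, W, bs):
            pa = img_a[i:i+bs, j:j+bs]
            pb = img_b[i:i+bs, j:j+bs]
            fused[i:i+bs, j:j+bs] = pa if pa.var() >= pb.var() else pb
    return fused

# Synthetic pair: each image is sharp on one half, defocused (flat) on the other.
rng = np.random.default_rng(6)
detail = rng.random((8, 8))
a = detail.copy(); a[:, 4:] = 0.5        # right half out of focus
b = detail.copy(); b[:, :4] = 0.5        # left half out of focus
fused = block_fuse(a, b)                 # recovers the all-in-focus image
```

Because the defocused half of each image is flat (zero variance), every block is taken from the in-focus source, reconstructing the original detail everywhere.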
An Integrated Dictionary-Learning Entropy-Based Medical Image Fusion Framework
Directory of Open Access Journals (Sweden)
Guanqiu Qi
2017-10-01
Full Text Available Image fusion is widely used in different areas and can integrate complementary and relevant information from source images captured by multiple sensors into a unitary synthetic image. Medical image fusion, as an important image fusion application, can extract the details of multiple images from different imaging modalities and combine them into an image that contains complete and non-redundant information, increasing the accuracy of medical diagnosis and assessment. The quality of the fused image directly affects medical diagnosis and assessment. However, existing solutions have some drawbacks in contrast, sharpness, brightness, blur and details. This paper proposes an integrated dictionary-learning and entropy-based medical image-fusion framework that consists of three steps. First, the input image information is decomposed into low-frequency and high-frequency components using a Gaussian filter. Second, the low-frequency components are fused by a weighted-average algorithm and the high-frequency components are fused by a dictionary-learning based algorithm. In the dictionary-learning process for the high-frequency components, an entropy-based algorithm is used to select informative blocks. Third, the fused low-frequency and high-frequency components are combined to obtain the final fusion result. The results and analyses of comparative experiments demonstrate that the proposed medical image fusion framework has better performance than existing solutions.
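The three-step skeleton — Gaussian low/high decomposition, per-band fusion rules, recombination — can be sketched compactly. This simplification replaces the dictionary-learning rule for high frequencies with a max-absolute rule and omits the entropy-based block selection; sigma, radius, and the weight are illustrative values.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0, radius=3):
    # Separable Gaussian filter with edge padding.
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2)); g /= g.sum()
    pad = np.pad(img, radius, mode='edge')
    tmp = np.array([np.convolve(row, g, mode='valid') for row in pad])
    return np.array([np.convolve(col, g, mode='valid') for col in tmp.T]).T

def fuse(a, b, w=0.5):
    la, lb = gaussian_blur(a), gaussian_blur(b)        # low-frequency parts
    ha, hb = a - la, b - lb                            # high-frequency parts
    low = w * la + (1 - w) * lb                        # weighted-average rule
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)  # stand-in detail rule
    return low + high                                  # recombine the two bands

rng = np.random.default_rng(8)
img = rng.random((8, 8))
```

Fusing an image with itself returns the image unchanged, which is a useful sanity check for any decomposition-recombination fusion scheme.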
Fast single image dehazing based on image fusion
Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian
2015-01-01
Images captured in foggy weather conditions often exhibit faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts the assumption that the degradation level caused by haze is the same within each region, which is similar to the Retinex theory, and uses a simple Gaussian filter to obtain the coarse medium transmission. Then, pixel-level fusion is performed between the initial and coarse medium transmissions. The proposed method can recover a high-quality haze-free image based on the physical model, and its complexity is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity compared to some state-of-the-art methods.
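The dark channel prior step and the haze-model inversion can be sketched in NumPy. This covers only the first stage of the pipeline described above (no coarse transmission or fusion), and assumes a known atmospheric light A = 1 with a synthetic, uniformly hazy scene; patch size and the omega/t0 constants follow common choices but are illustrative here.

```python
import numpy as np

def dark_channel(img, patch=3):
    # Min over color channels, then a min filter over a local patch.
    dc = img.min(axis=2)
    r = patch // 2
    pad = np.pad(dc, r, mode='edge')
    H, W = dc.shape
    out = np.empty_like(dc)
    for i in range(H):
        for j in range(W):
            out[i, j] = pad[i:i+patch, j:j+patch].min()
    return out

def dehaze(img, A=1.0, omega=0.95, t0=0.1):
    # Initial transmission from the dark channel prior, then invert the
    # haze model I = J*t + A*(1 - t)  =>  J = (I - A) / t + A.
    t = np.maximum(1.0 - omega * dark_channel(img / A), t0)
    return (img - A) / t[..., None] + A

rng = np.random.default_rng(7)
scene = rng.random((10, 10, 3)) * 0.5            # hypothetical haze-free scene
hazy = scene * 0.6 + 1.0 * (1 - 0.6)             # synthetic haze, t = 0.6, A = 1
restored = dehaze(hazy)
```

On this synthetic example the estimated transmission comes out close to the true 0.6, so inverting the physical model moves the hazy image substantially back toward the clean scene.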
Image fusion techniques in permanent seed implantation
Directory of Open Access Journals (Sweden)
Alfredo Polo
2010-10-01
Full Text Available Over the last twenty years, major software and hardware developments in brachytherapy treatment planning, intraoperative navigation and dose delivery have been made. Image-guided brachytherapy has emerged as the ultimate conformal radiation therapy, allowing precise dose deposition on small volumes under direct image visualization. In this process imaging plays a central role and novel imaging techniques are being developed (PET, MRI-MRS and power Doppler US imaging are among them), creating a new paradigm (dose-guided brachytherapy), where imaging is used to map the exact coordinates of the tumour cells and to guide applicator insertion to the correct position. Each of these modalities has limitations in providing all of the physical and geometric information required for the brachytherapy workflow. Therefore, image fusion can be used as a solution in order to take full advantage of the information from each modality in treatment planning, intraoperative navigation, dose delivery, verification and follow-up of interstitial irradiation. Image fusion, understood as the visualization of any morphological volume (i.e. US, CT, MRI) together with an additional second morphological volume (i.e. CT, MRI) or functional dataset (functional MRI, SPECT, PET), is a well-known method for treatment planning, verification and follow-up of interstitial irradiation. The term image fusion is used when multiple patient image datasets are registered and overlaid or merged to provide additional information. Fused images may be created from multiple images from the same imaging modality taken at different moments (multi-temporal approach), or by combining information from multiple modalities. Quality means that the fused images should provide additional information to the brachytherapy process (diagnosis and staging, treatment planning, intraoperative imaging, treatment delivery and follow-up) that cannot be obtained in other ways. In this review I will focus on the role of
Multimodal location estimation of videos and images
Friedland, Gerald
2015-01-01
This book presents an overview of the field of multimodal location estimation, i.e. using acoustic, visual, and/or textual cues to estimate the shown location of a video recording. The authors present sample research results in this field in a unified way, integrating research work on this topic that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social-graph, and metadata processing, as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. · Discusses localization of multimedia data; · Examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging); · Covers data-driven as well as semantic location estimation.
[Research progress of multi-model medical image fusion and recognition].
Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian
2013-10-01
Medical image fusion and recognition has a wide range of applications, such as focal location, cancer staging and treatment effect assessment. Multi-modal medical image fusion and recognition are analyzed and summarized in this paper. Firstly, the problem of multi-modal medical image fusion and recognition is discussed, along with its advantages and key steps. Secondly, three fusion strategies are reviewed from the algorithmic point of view, and four fusion-recognition structures are discussed. Thirdly, difficulties, challenges and possible future research directions are discussed.
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for the documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We propose a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. In addition, the multi-focal images within a stack are fused along 3 orthogonal directions, and the multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential for building an automated nematode taxonomy system for nematologists. It is effective in classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
Pulmonary function-morphologic relationships assessed by SPECT-CT fusion images
International Nuclear Information System (INIS)
Suga, Kazuyoshi
2012-01-01
Pulmonary single photon emission computed tomography-computed tomography (SPECT-CT) fusion images provide an objective and comprehensive assessment of pulmonary function and morphology relationships in cross-sectional lung images. This article reviews the noteworthy findings of lung pathophysiology in wide-spectral lung disorders that have been revealed on SPECT-CT fusion images over 8 years of experience. The fusion images confirmed the fundamental pathophysiologic appearance of low lung CT attenuation caused by airway obstruction-induced hypoxic vasoconstriction and that caused by direct pulmonary arterial obstruction, as in acute pulmonary thromboembolism (PTE). The fusion images showed a better correlation of lung perfusion distribution with lung CT attenuation changes at lung mosaic CT attenuation (MCA) than regional ventilation did in these wide-spectral lung disorders, indicating that heterogeneous lung perfusion distribution may be a dominant mechanism of MCA on CT. SPECT-CT angiography fusion images revealed occasional dissociation between lung perfusion defects and intravascular clots in acute PTE, indicating the importance of assessing the actual effect of intravascular clots on peripheral lung perfusion. Perfusion SPECT-CT fusion images revealed the characteristic and preferential location of pulmonary infarction in acute PTE. The fusion images showed occasional unexpected perfusion defects in lung areas that appear normal on CT in chronic obstructive pulmonary diseases and interstitial lung diseases, indicating that perfusion SPECT is superior to CT for detecting mild lesions in these disorders. The fusion images showed frequent ''steal phenomenon''-induced perfusion defects extending into the normal lung surrounding arteriovenous fistulas, and defects in CT-normal lungs in hepatopulmonary syndrome. Comprehensive assessment of lung function-CT morphology on fusion images will lead to more profound understanding of lung pathophysiology in wide-spectral lung
Multimodal imaging of bone metastases: From preclinical to clinical applications
Directory of Open Access Journals (Sweden)
Stephan Ellmann
2015-10-01
Full Text Available Metastases to the skeletal system are commonly observed in cancer patients and highly affect the patients' quality of life. Imaging plays a major role in the detection, follow-up, and molecular characterisation of metastatic disease. Thus, imaging techniques have been optimised and combined in a multimodal and multiparametric manner for the assessment of complementary aspects of osseous metastases. This review summarises the application of the most relevant imaging techniques for bone metastases in both preclinical models and the clinical setting.
Radar image and data fusion for natural hazards characterisation
Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Jixian; Zhang, Yonghong
2010-01-01
Fusion of synthetic aperture radar (SAR) images through interferometric, polarimetric and tomographic processing provides an all-weather imaging capability to characterise and monitor various natural hazards. This article outlines interferometric synthetic aperture radar (InSAR) processing and products and their utility for natural hazards characterisation, provides an overview of the techniques and applications related to fusion of SAR/InSAR images with optical and other images and highlights the emerging SAR fusion technologies. In addition to providing precise land-surface digital elevation maps, SAR-derived imaging products can map millimetre-scale elevation changes driven by volcanic, seismic and hydrogeologic processes, by landslides and wildfires and other natural hazards. With products derived from the fusion of SAR and other images, scientists can monitor the progress of flooding, estimate water storage changes in wetlands for improved hydrological modelling predictions and assessments of future flood impacts and map vegetation structure on a global scale and monitor its changes due to such processes as fire, volcanic eruption and deforestation. With the availability of SAR images in near real-time from multiple satellites in the near future, the fusion of SAR images with other images and data is playing an increasingly important role in understanding and forecasting natural hazards.
Three-dimensional imaging of lumbar spinal fusions
International Nuclear Information System (INIS)
Chafetz, N.; Hunter, J.C.; Cann, C.E.; Morris, J.M.; Ax, L.; Catterling, K.F.
1986-01-01
Using a Cemax 1000 three-dimensional (3D) imaging computer/workstation, the authors evaluated 15 patients with lumbar spinal fusions (four with pseudarthrosis). Both axial images with sagittal and coronal reformations and 3D images were obtained. The diagnoses (spinal stenosis and pseudarthrosis) were changed in four patients, confirmed in six patients, and unchanged in five patients with the addition of the 3D images. The ''cut-away'' 3D images proved particularly helpful for evaluation of central and lateral spinal stenosis, whereas the ''external'' 3D images were most useful for evaluation of the integrity of the fusion. Additionally, orthopedic surgeons found 3D images superior for both surgical planning and explaining pathology to patients.
A Geometric Dictionary Learning Based Approach for Fluorescence Spectroscopy Image Fusion
Directory of Open Access Journals (Sweden)
Zhiqin Zhu
2017-02-01
Full Text Available In recent years, sparse representation approaches have been integrated into multi-focus image fusion methods. The fused images of sparse-representation-based image fusion methods show great performance, and constructing an informative dictionary is a key step for such methods. To ensure a sufficient number of useful bases for sparse representation during dictionary construction, image patches from all source images are classified into different groups based on geometric similarities. The key information of each image-patch group is extracted by principal component analysis (PCA) to build the dictionary. With the constructed dictionary, image patches are converted to sparse coefficients by the simultaneous orthogonal matching pursuit (SOMP) algorithm to represent the source multi-focus images. Finally, the sparse coefficients are fused by the Max-L1 fusion rule and inverse transformed to obtain the fused image. Owing to the limitations of the microscope, a fluorescence image cannot be fully in focus, so the proposed multi-focus image fusion solution is applied to fluorescence imaging to generate all-in-focus images. Comparison experiments confirm the feasibility and effectiveness of the proposed multi-focus image fusion solution.
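The Max-L1 fusion rule mentioned in this abstract is simple to state: for each patch, keep the sparse coefficient vector with the larger L1 norm, since higher activity indicates the sharper source. A minimal sketch with hypothetical coefficient matrices (the dictionary-learning and SOMP coding stages are assumed to have already run; sizes are illustrative):

```python
import numpy as np

# Hypothetical sparse coefficient matrices for two source images:
# rows index dictionary atoms, columns index image patches (sizes assumed).
rng = np.random.default_rng(1)
C1 = rng.laplace(size=(128, 200)) * (rng.random((128, 200)) < 0.05)
C2 = rng.laplace(size=(128, 200)) * (rng.random((128, 200)) < 0.05)

# Max-L1 rule: per patch, keep the coefficient vector whose L1 norm
# (activity level) is larger, i.e. the patch from the sharper source.
l1_1 = np.abs(C1).sum(axis=0)
l1_2 = np.abs(C2).sum(axis=0)
fused = np.where(l1_1 >= l1_2, C1, C2)   # condition broadcasts over atoms

print(fused.shape)  # (128, 200)
```

Reconstructing the all-in-focus image would then multiply the learned dictionary by `fused` and reassemble the resulting patches.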
Design and Applications of a Multimodality Image Data Warehouse Framework
Wong, Stephen T.C.; Hoo, Kent Soo; Knowlton, Robert C.; Laxer, Kenneth D.; Cao, Xinhau; Hawkins, Randall A.; Dillon, William P.; Arenson, Ronald L.
2002-01-01
A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications—namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains. PMID:11971885
Advanced Contrast Agents for Multimodal Biomedical Imaging Based on Nanotechnology.
Calle, Daniel; Ballesteros, Paloma; Cerdán, Sebastián
2018-01-01
Clinical imaging modalities have reached a prominent role in medical diagnosis and patient management in the last decades. Different imaging methodologies, such as Positron Emission Tomography, Single Photon Emission Tomography, X-Rays, or Magnetic Resonance Imaging, are in continuous evolution to satisfy the increasing demands of current medical diagnosis. Progress in these methodologies has been favored by the parallel development of increasingly more powerful contrast agents. These are molecules that enhance the intrinsic contrast of the images in the tissues where they accumulate, revealing noninvasively the presence of characteristic molecular targets or differential physiopathological microenvironments. The contrast agent field is currently moving to improve the performance of these molecules by incorporating the advantages that modern nanotechnology offers. These include, mainly, the possibility to combine imaging and therapeutic capabilities on the same theranostic platform, or to improve the targeting efficiency in vivo by molecular engineering of the nanostructures. In this review, we provide an introduction to multimodal imaging methods in biomedicine, the sub-nanometric imaging agents previously used, and the development of advanced multimodal and theranostic imaging agents based on nanotechnology. We conclude by providing some illustrative examples from our own laboratories, including recent progress in theranostic formulations of magnetoliposomes containing ω-3 poly-unsaturated fatty acids to treat inflammatory diseases, and the use of stealth liposomes engineered with a pH-sensitive nanovalve to release their cargo specifically in the acidic extracellular pH microenvironment of tumors.
INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES
Directory of Open Access Journals (Sweden)
H. Shen
2012-08-01
Full Text Available Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method can integrate the complementary information in multiple temporal-spatial-spectral images. To represent and process the images in one unified framework, two general image observation models are first presented, and the maximum a posteriori (MAP) framework is then used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
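The MAP-plus-gradient-descent machinery described above can be illustrated on a toy 1-D problem: fuse a noisy full-resolution observation with a cleaner downsampled one under a smoothness prior. The observation operator, noise levels, and prior weight below are assumptions for illustration, not the paper's actual observation models:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
truth = np.sin(np.linspace(0, 4 * np.pi, n))          # unknown scene (1-D toy)

D = np.zeros((n // 2, n))                             # assumed 2x downsampling operator
for i in range(n // 2):
    D[i, 2 * i] = D[i, 2 * i + 1] = 0.5

y_low = D @ truth + 0.05 * rng.normal(size=n // 2)    # cleaner low-res observation
y_high = truth + 0.2 * rng.normal(size=n)             # noisy full-res observation

z = np.zeros(n)
lam = 0.1                                             # smoothness prior weight (assumed)
L = np.eye(n) - np.roll(np.eye(n), 1, axis=1)         # circular first difference
for _ in range(500):
    # Gradient of ||Dz - y_low||^2/2 + ||z - y_high||^2/2 + lam*||Lz||^2/2
    grad = D.T @ (D @ z - y_low) + (z - y_high) + lam * (L.T @ L @ z)
    z -= 0.1 * grad                                   # gradient descent step

mse_fused = np.mean((z - truth) ** 2)
mse_high = np.mean((y_high - truth) ** 2)
print(mse_fused, mse_high)
```

With these settings the fused estimate recovers the scene more accurately than the noisy full-resolution observation alone, which is the point of combining both data terms under one MAP objective.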
WE-H-206-02: Recent Advances in Multi-Modality Molecular Imaging of Small Animals
Energy Technology Data Exchange (ETDEWEB)
Tsui, B. [Johns Hopkins University (United States)
2016-06-15
Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, on the contrary, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, orders of magnitude less scattering than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: To understand the contrast mechanism of PAT To understand the multiscale applications of PAT Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small animal imaging that has contributed significantly
Prada, Francesco; Del Bene, Massimiliano; Moiraghi, Alessandro; Casali, Cecilia; Legnani, Federico Giuseppe; Saladino, Andrea; Perin, Alessandro; Vetrano, Ignazio Gaspare; Mattei, Luca; Richetta, Carla; Saini, Marco; DiMeco, Francesco
2015-01-01
The main goal in meningioma surgery is to achieve complete tumor removal, when possible, while improving or preserving the patient's neurological functions. Intraoperative imaging guidance is a fundamental tool for this achievement. In this regard, intraoperative ultrasound (ioUS) is a reliable solution for obtaining real-time information during surgery, and it has been applied in many different areas of neurosurgery. In recent years, different ioUS modalities have been described: B-mode, fusion imaging with pre-operatively acquired MRI, Doppler, contrast-enhanced ultrasound (CEUS), and elastosonography. In this paper, we present our US-based multimodal approach to meningioma surgery. We describe the most relevant ioUS modalities and their intraoperative application for obtaining precise and specific information about the lesion, enabling a tailored approach in meningioma surgery. For each modality, we review the literature, accompanied by a pictorial essay based on our routine use of ioUS for meningioma resection.
Multimodality molecular imaging - from target description to clinical studies
International Nuclear Information System (INIS)
Schober, O.; Rahbar, K.; Riemann, B.
2009-01-01
This highlight lecture was presented at the closing session of the Annual Congress of the European Association of Nuclear Medicine (EANM) in Munich on 15 October 2008. The Congress was a great success: there were more than 4,000 participants, and 1,597 abstracts were submitted. Of these, 1,387 were accepted for oral or poster presentation, a rejection rate of 14%. In this article, a selection was made from the 100 of the 500 lectures that received the highest scores from the scientific review panel. This article outlines the major findings and trends at the EANM 2008, and is only a brief summary of the large number of outstanding abstracts presented. Among the great number of oral and poster presentations covering nearly all fields of nuclear medicine, some headline themes can be identified that highlight the development of nuclear medicine in the 21st century. This review focuses on the increasing impact of molecular and multimodality imaging in the field of nuclear medicine. In addition, one may ask whether the whole spectrum of nuclear medicine is anything other than molecular imaging and therapy. Furthermore, molecular imaging will, and must, progress toward multimodality imaging. Against this background, the review was structured according to the single steps of molecular imaging, i.e. from target description to clinical studies. The following topics are addressed: targets, radiochemistry and radiopharmacy, devices and computer science, animals and preclinical evaluations, and patients and clinical evaluations. (orig.)
Alternate method to realize image fusion
International Nuclear Information System (INIS)
Vargas, L.; Hernandez, F.; Fernandez, R.
2005-01-01
At present, imaging departments need to fuse images obtained from diverse apparatuses. Conventionally, magnetic resonance or X-ray tomography images are fused with functional images such as gammagrams and PET images. Fusion technology is sold with modern imaging equipment, and not all nuclear medicine cabinets have access to it. For this reason we analyzed, studied, and found a solution so that all nuclear medicine cabinets can benefit from image fusion. The first indispensable requirement is a personal computer with the capacity to host image digitizer cards. If one has a gamma camera that can export images in JPG, GIF, TIFF, or BMP formats, it is also possible to do without the digitizer card and record the images on a disk so they can be used on the personal computer. One of the following commercially available graphic design programs is required: Corel Draw, Photo Shop, FreeHand, Illustrator, or Macromedia Flash; these are the ones we evaluated and they allow the fusion of images. Any of them works well, and only a short training is required to manage them. A digital photographic camera with a resolution of at least 3.0 megapixels is also necessary. The procedure consists of taking photographs of the radiological studies that the patient already has, selecting those images that demonstrate the pathology under study and that are concordant with the images we have created in the gammagraphic studies, whether planar or tomographic. We transfer the images to the personal computer and read them with the graphic design program, and then read the gammagraphic images. We use the program's digital tools to make the images transparent, to clip them, to adjust the sizes, and to create the fused images. The process is manual and requires skill and experience to choose the images, the cuts, the sizes, and the degree of transparency. (Author)
Evaluation of Multimodal Imaging Biomarkers of Prostate Cancer
2016-11-01
Award Number: W81XWH-12-1-0245. Title: Evaluation of Multimodal Imaging Biomarkers of Prostate Cancer. Principal Investigator: Christopher Chad... Only fragments of the abstract are recoverable: "...relationship prostate cancer growth, androgen receptor (AR) levels, hypoxia, and translocator protein (TSPO) levels. As described in the statement of work..."; "...bladder uptake) that enable robust detection of small prostate cancers. In contrast, high background and variable uptake of FDHT and FMISO confounded the..."
Multimodality Registration without a Dedicated Multimodality Scanner
Directory of Open Access Journals (Sweden)
Bradley J. Beattie
2007-03-01
Full Text Available Multimodality scanners that allow the acquisition of both functional and structural image sets on a single system have recently become available for animal research use. Although the resultant registered functional/structural image sets can greatly enhance the interpretability of the functional data, the cost of multimodality systems can be prohibitive, and they are often limited to two modalities, which generally do not include magnetic resonance imaging. Using a thin plastic wrap to immobilize and fix a mouse or other small animal atop a removable bed, we are able to calculate registrations between all combinations of the four different small animal imaging scanners (positron emission tomography, single-photon emission computed tomography, magnetic resonance, and computed tomography [CT]) at our disposal, effectively equivalent to a quadruple-modality scanner. A comparison of serially acquired CT images, with intervening acquisitions on other scanners, demonstrates the ability of the proposed procedures to maintain the rigidity of an anesthetized mouse during transport between scanners. Movement of the bony structures of the mouse was estimated to be 0.62 mm. Soft tissue movement was predominantly the result of the filling (or emptying) of the urinary bladder and thus largely constrained to this region. Phantom studies estimate the registration errors for all registration types to be less than 0.5 mm. Functional images using tracers targeted to known structures verify the accuracy of the functional to structural registrations. The procedures are easy to perform and produce robust and accurate results that rival those of dedicated multimodality scanners, but with more flexible registration combinations and while avoiding the expense and redundancy of multimodality systems.
Towards an ultra-thin medical endoscope: multimode fibre as a wide-field image transferring medium
Duriš, Miroslav; Bradu, Adrian; Podoleanu, Adrian; Hughes, Michael
2018-03-01
Multimode optical fibres are attractive for biomedical and industrial applications such as endoscopes because of the small cross section and imaging resolution they can provide in comparison to widely-used fibre bundles. However, the image is randomly scrambled by propagation through a multimode fibre. Even though the scrambling is unpredictable, it is deterministic, and therefore it can be reversed. To unscramble the image, we treat the multimode fibre as a linear, disordered scattering medium. To calibrate, we scan a focused beam of coherent light over thousands of different beam positions at the distal end and record the complex fields at the proximal end of the fibre. This way, the input-output response of the system is determined, which then allows computational reconstruction of reflection-mode images. However, there remains the problem of illuminating the tissue via the fibre while avoiding back reflections from the proximal face. To avoid this drawback, we provide here the first preliminary confirmation that an image can be transferred through a 2x2 fibre coupler, with the sample at its distal port interrogated in reflection. Light is injected into one port for illumination and then collected from a second port for imaging.
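The calibrate-then-invert idea in this abstract (treat the fibre as a linear scattering medium, measure its transmission matrix by scanning focused inputs, then invert it computationally) can be sketched with a toy complex matrix. The mode count and the noiseless, fully-sampled measurement are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100                                   # number of fibre modes (toy size)

# Unknown complex transmission matrix of the multimode fibre
# (linear, deterministic scrambling, as assumed in the abstract).
T = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2 * n)

# Calibration: probe with focused spots (canonical basis inputs) and record
# the complex output field for each -- this measures T column by column.
T_measured = np.column_stack([T @ e for e in np.eye(n)])

# Imaging: an unknown field is scrambled by propagation through the fibre...
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = T @ x
# ...and computationally unscrambled using the calibrated matrix
# (pseudo-inverse for robustness against ill-conditioning).
x_rec = np.linalg.pinv(T_measured) @ y

print(np.allclose(x_rec, x))  # True
```

In practice the measured fields are noisy and sub-sampled, so the inversion is regularised or replaced by phase conjugation, but the linear-algebra core is the same.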
A Pretargeted Approach for the Multimodal PET/NIRF Imaging of Colorectal Cancer.
Adumeau, Pierre; Carnazza, Kathryn E; Brand, Christian; Carlin, Sean D; Reiner, Thomas; Agnew, Brian J; Lewis, Jason S; Zeglis, Brian M
2016-01-01
The complementary nature of positron emission tomography (PET) and near-infrared fluorescence (NIRF) imaging makes the development of strategies for the multimodal PET/NIRF imaging of cancer a very enticing prospect. Indeed, in the context of colorectal cancer, a single multimodal PET/NIRF imaging agent could be used to stage the disease, identify candidates for surgical intervention, and facilitate the image-guided resection of the disease. While antibodies have proven to be highly effective vectors for the delivery of radioisotopes and fluorophores to malignant tissues, the use of radioimmunoconjugates labeled with long-lived nuclides such as 89Zr poses two important clinical complications: high radiation doses to the patient and the need for significant lag time between imaging and surgery. In vivo pretargeting strategies that decouple the targeting vector from the radioactivity at the time of injection have the potential to circumvent these issues by facilitating the use of positron-emitting radioisotopes with far shorter half-lives. Here, we report the synthesis, characterization, and in vivo validation of a pretargeted strategy for the multimodal PET and NIRF imaging of colorectal carcinoma. This approach is based on the rapid and bioorthogonal ligation between a trans-cyclooctene- and fluorophore-bearing immunoconjugate of the huA33 antibody (huA33-Dye800-TCO) and a 64Cu-labeled tetrazine radioligand (64Cu-Tz-SarAr). In vivo imaging experiments in mice bearing A33 antigen-expressing SW1222 colorectal cancer xenografts clearly demonstrate that this approach enables the non-invasive visualization of tumors and the image-guided resection of malignant tissue, all at only a fraction of the radiation dose created by a directly labeled radioimmunoconjugate. Additional in vivo experiments in peritoneal and patient-derived xenograft models of colorectal carcinoma reinforce the efficacy of this methodology and underscore its potential as an innovative and useful
Directory of Open Access Journals (Sweden)
Key J
2016-08-01
Full Text Available Jaehong Key,1,2 Deepika Dhawan,3 Christy L Cooper,3,4 Deborah W Knapp,3 Kwangmeyung Kim,5 Ick Chan Kwon,5 Kuiwon Choi,5 Kinam Park,1,6 Paolo Decuzzi,7–9 James F Leary1,3,4 1Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA; 2Department of Biomedical Engineering, Yonsei University, Wonju, Republic of Korea; 3School of Veterinary Medicine-Department of Basic Medical Sciences, Purdue University, West Lafayette, 4Birck Nanotechnology Center at Discovery Park, Purdue University, West Lafayette, IN, USA; 5Biomedical Research Center, Korea Institute of Science and Technology, Sungbook-Gu, Seoul, Republic of Korea; 6Department of Pharmaceutics, Purdue University, West Lafayette, IN, 7Department of Translational Imaging, 8Department of Nanomedicine, Houston Methodist Research Institute, Houston, TX, USA; 9Laboratory of Nanotechnology for Precision Medicine, Fondazione Istituto Italiano di Tecnologia (IIT), Genova, Italy Abstract: While current imaging modalities, such as magnetic resonance imaging (MRI), computed tomography, and positron emission tomography, play an important role in detecting tumors in the body, no single-modality imaging possesses all the functions needed for complete diagnostic imaging, such as spatial resolution, signal sensitivity, and tissue penetration depth. For this reason, multimodal imaging strategies have become promising tools for advanced biomedical research and cancer diagnostics and therapeutics. In designing multimodal nanoparticles, the physicochemical properties of the nanoparticles should be engineered so that they successfully accumulate at the tumor site and minimize nonspecific uptake by other organs. Finely altering the nano-scale properties can dramatically change the biodistribution and tumor accumulation of nanoparticles in the body. In this study, we engineered multimodal nanoparticles for both MRI, by using ferrimagnetic nanocubes (NCs), and near-infrared fluorescence imaging
Distributed MIMO-ISAR Sub-image Fusion Method
Directory of Open Access Journals (Sweden)
Gu Wenkun
2017-02-01
Full Text Available The fast fluctuation of a maneuvering target's radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, we propose an imaging method based on the fusion of sub-images from frequency-diversity-distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by the different channels of distributed MIMO-ISAR. Then, we derive the range and azimuth distortion factors of the images acquired by the different channels. By compensating for the distortion of the ISAR sub-images, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method.
Development of a hardware-based registration system for the multimodal medical images by USB cameras
International Nuclear Information System (INIS)
Iwata, Michiaki; Minato, Kotaro; Watabe, Hiroshi; Koshino, Kazuhiro; Yamamoto, Akihide; Iida, Hidehiro
2009-01-01
There are several types of medical imaging scanners, and each modality visualizes different aspects of the inside of the human body. By combining these images, diagnostic accuracy can be improved, and therefore several approaches to multimodal image registration have been implemented. One popular approach is to use hybrid scanners such as positron emission tomography (PET)/CT and single photon emission computed tomography (SPECT)/CT. However, these hybrid scanners are expensive and not widely available. We developed a multimodal image registration system with universal serial bus (USB) cameras, which is inexpensive and applicable to any combination of existing conventional imaging scanners. Multiple USB cameras determine the three-dimensional position of the patient during scanning. Using this position information and a rigid body transformation, the acquired image is registered to a common coordinate system shared with the other scanner. A reference marker is attached to the gantry of each scanner; because the cameras observe the reference marker's position, the locations of the USB cameras can be arbitrary. To validate the system, we scanned a cardiac phantom at different positions with PET and MRI scanners. Using this system, the PET and MRI images were visually aligned, and good correlations between the PET and MRI images were obtained after registration. The results suggest that this system can be used inexpensively for multimodal image registration. (author)
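Once paired marker coordinates are known in both scanner frames, the rigid-body registration described in this abstract reduces to the classic Kabsch/Procrustes problem. A sketch with synthetic marker positions (the marker geometry, frame names, and transform are hypothetical):

```python
import numpy as np

# Toy marker positions as seen in one scanner frame ("PET", name assumed).
rng = np.random.default_rng(4)
markers_pet = rng.uniform(-50, 50, size=(4, 3))        # 4 markers, mm

# Ground-truth rigid transform relating the two frames (for the demo only).
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([12.0, -5.0, 30.0])
markers_mri = markers_pet @ R_true.T + t_true          # same markers in "MRI" frame

# Kabsch algorithm: recover R, t from the paired, centred marker sets.
p0 = markers_pet - markers_pet.mean(axis=0)
q0 = markers_mri - markers_mri.mean(axis=0)
U, _, Vt = np.linalg.svd(p0.T @ q0)
d = np.sign(np.linalg.det(Vt.T @ U.T))                 # guard against reflections
R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
t = markers_mri.mean(axis=0) - R @ markers_pet.mean(axis=0)

print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With noisy camera measurements the same least-squares solution applies; it simply returns the best-fit rotation and translation rather than an exact one.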
Echocardiography in the Era of Multimodality Cardiovascular Imaging
Shah, Benoy Nalin
2013-01-01
Echocardiography remains the most frequently performed cardiac imaging investigation and is an invaluable tool for detailed and accurate evaluation of cardiac structure and function. Echocardiography, nuclear cardiology, cardiac magnetic resonance imaging, and cardiovascular-computed tomography comprise the subspeciality of cardiovascular imaging, and these techniques are often used together for a multimodality, comprehensive assessment of a number of cardiac diseases. This paper provides the general cardiologist and physician with an overview of state-of-the-art modern echocardiography, summarising established indications as well as highlighting advances in stress echocardiography, three-dimensional echocardiography, deformation imaging, and contrast echocardiography. Strengths and limitations of echocardiography are discussed as well as the growing role of real-time three-dimensional echocardiography in the guidance of structural heart interventions in the cardiac catheter laboratory. PMID:23878804
A Geometric Dictionary Learning Based Approach for Fluorescence Spectroscopy Image Fusion
Zhiqin Zhu; Guanqiu Qi; Yi Chai; Penghua Li
2017-01-01
In recent years, sparse representation approaches have been integrated into multi-focus image fusion methods, and the fused images they produce show strong performance. Constructing an informative dictionary is a key step for sparsity-based image fusion methods. In order to ensure a sufficient number of useful bases for sparse representation during informative dictionary construction, image patches from all source images are classified into different ...
On the Multi-Modal Object Tracking and Image Fusion Using Unsupervised Deep Learning Methodologies
LaHaye, N.; Ott, J.; Garay, M. J.; El-Askary, H. M.; Linstead, E.
2017-12-01
The number of different modalities of remote sensors has been on the rise, resulting in large datasets with different levels of complexity. Such complex datasets can provide valuable information separately, yet there is greater value in a comprehensive view of them combined, since hidden information can be deduced by applying data mining techniques to the fused data. The curse of dimensionality of such fused data, due to the potentially vast dimension space, hinders our ability to understand it deeply, because each dataset requires instrument-specific and dataset-specific knowledge for optimal and meaningful usage. Once a user decides to use multiple datasets together, a deeper understanding of how to translate and combine these datasets correctly and effectively is needed. Although data-centric techniques exist, generic automated methodologies that could solve this problem completely do not. Here we are developing a system that aims to gain a detailed understanding of different data modalities and that will provide an analysis environment giving the user useful feedback to aid in research tasks. In our current work, we show the initial outputs of our system implementation, which leverages unsupervised deep learning techniques so as not to burden the user with the task of labeling input data, while still allowing a detailed machine understanding of the data. Our goal is to be able to track objects, such as cloud systems or aerosols, across different image-like data modalities. The proposed system is flexible, scalable, and robust enough to understand complex likenesses within multi-modal data in a similar spatio-temporal range, and to co-register and fuse these images when needed.
Multimodal microscopy and the stepwise multi-photon activation fluorescence of melanin
Lai, Zhenhua
The author's work covers three topics: multimodal microscopy, stepwise multi-photon activation fluorescence (SMPAF) of melanin, and customized-profile lenses (CPL) for on-axis laser scanners, which are introduced in turn. A multimodal microscope provides the ability to image samples with multiple modalities on the same stage, combining the benefits of all modalities. The multimodal microscopes developed in this dissertation are the Keck 3D fusion multimodal microscope 2.0 (3DFM 2.0), upgraded from the old 3DFM with improved performance and flexibility, and the multimodal microscope for targeting small particles (the "Target" system). The control systems developed for both microscopes are low-cost and easy to build, with all components off-the-shelf. The control systems have not only significantly decreased the complexity and size of the microscopes but also increased their pixel resolution and flexibility. The SMPAF of melanin, activated by a continuous-wave (CW) near-infrared (NIR) laser, has potential applications as a low-cost and reliable method of detecting melanin. The photophysics of melanin SMPAF has been studied by theoretical analysis of the excitation process and by investigation of the spectra, activation threshold, and photon number absorption of melanin SMPAF. SMPAF images of melanin in mouse hair and skin, mouse melanoma, and human black and white hairs are compared with images taken by conventional multi-photon fluorescence microscopy (MPFM) and confocal reflectance microscopy (CRM). SMPAF images significantly increase specificity and demonstrate the potential to increase sensitivity for melanin detection compared to MPFM and CRM images. Employing melanin SMPAF imaging to detect melanin inside human skin in vivo has been demonstrated, which proves the effectiveness of melanin detection using SMPAF for medical purposes. Selective melanin ablation with micrometer resolution has also been demonstrated using the Target system.
Spectrally Consistent Satellite Image Fusion with Improved Image Priors
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Aanæs, Henrik; Jensen, Thomas B.S.
2006-01-01
Here an improvement to our previous framework for satellite image fusion is presented: a framework based purely on the sensor physics and on prior assumptions about the fused image. The contributions of this paper are twofold. Firstly, a method for ensuring 100% spectral consistency is proposed, even when more sophisticated image priors are applied. Secondly, a better image prior is introduced, via data-dependent image smoothing.
Galindo Millan, Jealemy
2012-01-01
In this thesis, new approaches directed towards simple and functional imaging agents (IAs) for magnetic resonance (MR) and fluorescence multimodal imaging are proposed. In Chapter 3, hybrid silver nanostructures (hAgNSs), grown using a polyamino carboxylic acid scaffold, namely
Multi-focus image fusion with the all convolutional neural network
Du, Chao-ben; Gao, She-sheng
2018-01-01
A decision map contains complete and clear information about the images to be fused, which is crucial to various image fusion problems, especially multi-focus image fusion. However, obtaining a decision map good enough to yield a satisfactory fusion result is usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling layers of the CNN are replaced by convolution layers, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-CNN (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm obtains state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.
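The core architectural idea, replacing max-pooling with a strided convolution so that downsampling itself becomes learnable, can be illustrated with a toy NumPy sketch; the averaging kernel here is a fixed stand-in for a learned filter:

```python
import numpy as np

def max_pool2x2(x):
    """Standard 2x2 max-pooling with stride 2."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def strided_conv(x, k, stride=2):
    """Valid correlation with a small kernel and the given stride --
    the learnable replacement for pooling in an all-convolutional net."""
    kh, kw = k.shape
    h = (x.shape[0] - kh) // stride + 1
    w = (x.shape[1] - kw) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * k)
    return out

x = np.arange(36, dtype=float).reshape(6, 6)
pooled = max_pool2x2(x)
# An averaging kernel as a stand-in for a learned 2x2 stride-2 filter.
conved = strided_conv(x, np.full((2, 2), 0.25), stride=2)
print(pooled.shape, conved.shape)   # both (3, 3)
```

Both operators halve the spatial resolution, but the convolution's weights can be trained by backpropagation like any other layer.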
Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming
2018-02-19
Diffusion and perfusion magnetic resonance (MR) images can provide functional information about a tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images, including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fuzzy fusion result of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%), the mean Dice similarity coefficient (DSC) was 0.88 (±0.02), and the mean sensitivity and specificity of auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
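The fusion logic can be sketched as per-modality membership functions combined with a fuzzy AND and a threshold. The piecewise-linear membership model and all cut-offs and voxel values below are illustrative stand-ins of ours, not the paper's histogram-based models:

```python
import numpy as np

def fuzzy_membership(vol, lo, hi):
    """Map voxel intensities to a [0,1] tumour-membership score with a
    simple piecewise-linear model (lo and hi are hypothetical cut-offs
    that would normally come from the per-modality histogram)."""
    return np.clip((vol - lo) / float(hi - lo), 0.0, 1.0)

# Toy 1-D "volumes" for ADC, FA and rCBV (values are illustrative only).
adc  = np.array([0.2, 0.9, 0.8, 0.1])
fa   = np.array([0.3, 0.8, 0.7, 0.2])
rcbv = np.array([0.1, 0.9, 0.9, 0.1])

m_adc  = fuzzy_membership(adc,  0.0, 1.0)
m_fa   = fuzzy_membership(fa,   0.0, 1.0)
m_rcbv = fuzzy_membership(rcbv, 0.0, 1.0)

# Fuzzy AND across modalities: a voxel is tumour-like only if every
# modality agrees; thresholding gives the automatic candidate region.
fused = np.minimum(np.minimum(m_adc, m_fa), m_rcbv)
candidate = fused > 0.5
print(candidate)   # [False  True  True False]
```

Other t-norms (product, bounded sum) could replace the minimum; the choice controls how strictly the modalities must agree.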
APPLICATION OF SENSOR FUSION TO IMPROVE UAV IMAGE CLASSIFICATION
Directory of Open Access Journals (Sweden)
S. Jabari, F. Fathollahi, Y. Zhang
2017-08-01
Full Text Available Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations using a panchromatic (Pan) camera along with either a colour camera or a four-band multispectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to those acquired by a high-resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs show that the proposed sensor fusion configurations achieve higher accuracies than the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher-quality images and accordingly higher-accuracy classification results.
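The abstract does not name the algorithm used to fuse the Pan and colour/MS images; a Brovey-style ratio fusion is one common pansharpening choice and serves as a minimal sketch (patch values are fabricated):

```python
import numpy as np

def brovey_fusion(ms_up, pan):
    """Brovey-style pansharpening: rescale each upsampled MS band by the
    ratio of the Pan image to the mean MS intensity, injecting the Pan
    camera's spatial detail while preserving the band ratios."""
    intensity = ms_up.mean(axis=0)
    return ms_up * (pan / np.maximum(intensity, 1e-9))

# Hypothetical 4-band MS patch already upsampled to the Pan grid (4x2x2)
# and a co-registered Pan patch (2x2).
ms_up = np.array([[[10., 20.], [30., 40.]]] * 4)
pan   = np.array([[20., 40.], [60., 80.]])

fused = brovey_fusion(ms_up, pan)
print(fused[0])   # each band rescaled so its mean intensity matches Pan
```

Because the bands are scaled by a common ratio, their relative spectral shape at each pixel is unchanged; only the spatial detail is sharpened.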
Ethernet image communication performance in a multimodal PACS network
International Nuclear Information System (INIS)
Lou, S.L.; Valentino, D.J.; Chan, K.K.; Huang, H.K.
1989-01-01
The authors have evaluated the performance of an Ethernet network in a multimodal picture archiving and communication system (PACS) environment. The study included measurements between Sun workstations and PC-AT computers running communication software at the TCP level. First, they initiated image transfers between two workstations, a server and a client. Next, they successively added clients transferring images to the server and measured the degradation in network performance. Finally, they initiated image transfers between pairs of workstations and again measured the performance degradation. The results of the authors' experiments indicate that Ethernet is suitable for image communication only in limited network situations. They discuss how to maximize network performance given these constraints.
Deep features for efficient multi-biometric recognition with face and ear images
Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng
2017-07-01
Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the security field. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear, and propose how to exploit deep features extracted by convolutional neural networks (CNNs) from face and ear images to obtain more powerful discriminative features and a more robust representation. First, the deep features for face and ear images are extracted with VGG-M Net. Second, the extracted deep features are fused using either traditional concatenation or a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that fusion based on DCA is superior to traditional fusion.
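The concatenation branch of the fusion step can be sketched as follows; the four-dimensional "deep features" and the cosine matcher are illustrative stand-ins for the VGG-M descriptors and the multiclass SVM actually used:

```python
import numpy as np

def fuse_features(face_feat, ear_feat):
    """Serial (concatenation) fusion of per-modality deep features;
    each vector is L2-normalised first so neither modality dominates."""
    f = face_feat / np.linalg.norm(face_feat)
    e = ear_feat / np.linalg.norm(ear_feat)
    return np.concatenate([f, e])

def match(probe, gallery):
    """Return the index of the gallery template closest in cosine similarity."""
    sims = [float(np.dot(probe, g)) / (np.linalg.norm(probe) * np.linalg.norm(g))
            for g in gallery]
    return int(np.argmax(sims))

# Hypothetical 4-D "deep features" standing in for VGG-M descriptors.
gallery = [fuse_features(np.array([1., 0., 0., 0.]), np.array([0., 1., 0., 0.])),
           fuse_features(np.array([0., 0., 1., 0.]), np.array([0., 0., 0., 1.]))]
probe = fuse_features(np.array([0.9, 0.1, 0., 0.]), np.array([0.1, 0.9, 0., 0.]))
print(match(probe, gallery))   # 0 -- the probe matches the first identity
```

DCA, by contrast, learns projections that maximise between-modality correlation before fusion, which is what the abstract reports as the stronger option.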
Chappelow, Jonathan; Viswanath, Satish; Monaco, James; Rosen, Mark; Tomaszewski, John; Feldman, Michael; Madabhushi, Anant
2008-03-01
Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance (MR) imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source for these labels is expert segmentation. When ancillary data such as whole-mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error-prone, time consuming, and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to the manual CaP annotations of expert radiologists. Five supervised CAD classifiers were trained using the labels for CaP extent on MRI obtained by the expert and by 4 different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI), and the other a method we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by following the two affine registration methods with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against 7 ground-truth surrogates obtained by different combinations of the expert and registration segmentations. For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic curve compared to that
Robust Multimodal Dictionary Learning
Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc
2014-01-01
We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned from such non-corresponding data will induce uncertainty about the image representation. In this paper, we propose a probabilistic model that accounts for image areas that correspond poorly between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates between identification of poorly corresponding patches and refinement of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674
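The alternation between flagging poorly corresponding patches and refitting the model can be caricatured with a one-parameter "dictionary" (a single linear map between modalities) and hard reweighting; the real method learns a sparse dictionary within an EM framework, so everything below is a deliberately minimal analogue:

```python
import numpy as np

# Toy alternation: flag patch pairs whose modalities correspond poorly,
# then refit the cross-modality model on the remaining good pairs.
mod1 = np.array([1., 2., 3., 4., 5.])
mod2 = 2.0 * mod1         # modality 2 is (ideally) a scaled modality 1
mod2[2] = 50.0            # one non-corresponding (corrupted) patch

weights = np.ones_like(mod1)
for _ in range(5):
    # M-step analogue: weighted least-squares fit of the scale factor a.
    a = np.sum(weights * mod1 * mod2) / np.sum(weights * mod1 ** 2)
    # E-step analogue: down-weight patches with large residuals.
    resid = np.abs(mod2 - a * mod1)
    weights = (resid < 3.0 * np.median(resid + 1e-9)).astype(float)

print(round(a, 3), weights)   # a converges to 2.0; the corrupted patch gets weight 0
```

The paper's EM variant replaces the hard 0/1 weights with posterior probabilities of correspondence and the scalar map with a learned dictionary.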
Kin, Taichi; Nakatomi, Hirofumi; Shojima, Masaaki; Tanaka, Minoru; Ino, Kenji; Mori, Harushi; Kunimatsu, Akira; Oyama, Hiroshi; Saito, Nobuhito
2012-07-01
In this study, the authors used preoperative simulation employing 3D computer graphics (interactive computer graphics) to fuse all imaging data for brainstem cavernous malformations. The authors evaluated whether interactive computer graphics or 2D imaging correlated better with the actual operative field, particularly in identifying a developmental venous anomaly (DVA). The study population consisted of 10 patients scheduled for surgical treatment of brainstem cavernous malformations. Data from preoperative imaging (MRI, CT, and 3D rotational angiography) were automatically fused using a normalized mutual information method and then reconstructed by a hybrid method combining surface rendering and volume rendering. With surface rendering, multimodality and multithreshold techniques for one tissue were applied. The completed interactive computer graphics were used for simulation of surgical approaches and assumed surgical fields. Preoperative diagnostic rates for a DVA associated with brainstem cavernous malformation were compared between conventional 2D imaging and interactive computer graphics employing receiver operating characteristic (ROC) analysis. The time required for reconstruction of 3D images was 3-6 hours for interactive computer graphics; observation in interactive mode required approximately 15 minutes. Detailed anatomical information for operative procedures, from the craniotomy to the microsurgical steps, could be visualized and simulated three-dimensionally as one computer graphic using interactive computer graphics. Virtual surgical views were consistent with actual operative views, and this technique was very useful for examining various surgical approaches. The mean (±SEM) area under the ROC curve for the rate of DVA diagnosis was significantly better for interactive computer graphics (1.000±0.000) than for 2D imaging (0.766±0.091); DVAs were identified more reliably with interactive computer graphics than with 2D images. Interactive computer graphics was also useful in helping to plan the surgical approach.
Multispectral analytical image fusion
International Nuclear Information System (INIS)
Stubbings, T.C.
2000-04-01
With new and advanced analytical imaging methods emerging, the limits of physical analysis capabilities, and with them the quantities of acquired data, are constantly pushed, placing high demands on scientific data processing and visualisation. Physical analysis methods such as Secondary Ion Mass Spectrometry (SIMS) or Auger Electron Spectroscopy (AES) are capable of delivering high-resolution multispectral two-dimensional and three-dimensional image data; usually this multispectral data is available as n separate image files, each showing one element or other singular aspect of the sample. There is a strong need for digital image processing methods that enable the analytical scientist, routinely confronted with such amounts of data, to gain rapid insight into the composition of the sample examined, to filter the relevant data, and to integrate the information of numerous separate multispectral images into a complete picture. Sophisticated image processing methods like classification and fusion provide possible approaches to this challenge. Classification is a treatment by multivariate statistical means in order to extract analytical information. Image fusion, on the other hand, denotes a process where images obtained from various sensors or at different moments in time are combined to provide a more complete picture of the scene or object under investigation. Both techniques are important for information extraction and integration, and often one depends on the other. The overall aim of this thesis is therefore to evaluate the possibilities of both techniques for analytical image processing and to find solutions for the integration and condensation of multispectral analytical image data, in order to facilitate the interpretation of the enormous amounts of data routinely acquired by modern physical analysis instruments. (author)
Fusion of infrared and visible images based on BEMD and NSDFB
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it more suitable for the decomposition and fusion of non-linear signals. NSDFB provides directional filtering at each decomposition level to capture more of the geometrical structure of the source images. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, separate fusion rules are used for the low-frequency sub-bands and the high-frequency directional sub-bands. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed algorithm achieves state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, prominent target information, and rich detail, making them well suited to human visual perception or machine perception.
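The entropy comparison that decides which source image undergoes residue extraction can be sketched directly; the 256-bin grey-level histogram is our assumption, since the abstract does not state how entropy is computed:

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A flat patch vs. a textured patch (toy stand-ins for the IR/visible pair).
flat     = np.full((8, 8), 128.0)
textured = np.arange(64, dtype=float).reshape(8, 8) * 4
e_flat, e_tex = shannon_entropy(flat), shannon_entropy(textured)

# Per the scheme above: extract the residue from the higher-entropy image.
residue_source = "textured" if e_tex > e_flat else "flat"
print(residue_source)   # textured
```

A uniform patch carries zero entropy, while the textured patch spreads probability mass over many grey levels, so it is the one selected for residue extraction.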
Imaging fusion (SPECT/CT) in degenerative disease of spine
International Nuclear Information System (INIS)
Bernal, P.; Ucros, G.; Bermudez, S.; Ocampo, M.
2007-01-01
Full text: Objective: To determine the utility of fusion imaging (SPECT/CT) in degenerative pathology of the spine and to establish the impact of fusion imaging in spinal pain due to degenerative changes of the spine. Materials and methods: 44 patients (M=21, F=23), average age 63 years, with degenerative pathology of the spine were referred to the Diagnostic Imaging department at FSFB. Bone scintigraphy (SPECT), CT of the spine (cervical: 30%, lumbar: 70%) and fusion imaging were performed in all of them. Bone scintigraphy was carried out on a Siemens Diacam double-head gamma camera attached to an ESOFT computer. The images were acquired in a 128 x 128 matrix, 20 s/image, 64 images. CT of the spine was performed the same day or two days later on a helical Siemens Somatom Emotion CT. The fusion was done on a DICOM workstation in sagittal, axial and coronal reconstructions. The findings were evaluated independently by 2 nuclear medicine physicians and 2 radiologists of the FSFB staff. Results: Bone scans (SPECT) and CT of 44 patients were evaluated. CT showed facet joint osteoarthritis in 27 (61.3%) patients, uncovertebral joint arthrosis in 7 (15.9%), bulging disc in 9 (20.4%), spinal nucleus lesion in 7 (15.9%), osteophytes in 9 (20.4%), spinal foraminal stenosis in 7 (15.9%), and spondylolysis/spondylolisthesis in 4 (9%). Bone scan showed facet joint osteoarthritis in 29 (65.9%), uncovertebral joint arthrosis in 4 (9%), osteophytes in 9 (20.4%), and was normal in 3 (6.8%). The fusion imaging showed coincident findings (main lesion on CT with high uptake on scintigraphy) in 34 patients (77.2%) and no coincidence in 10 (22.8%). In 15 (34.09%) patients the fusion provided additional information. The analysis of the CT and SPECT findings showed similar results in most of the cases; the fusion did not provide additional information but allowed confirmation of the findings, except when the findings did not match, where the CT showed several findings and SPECT only one area with high uptake
Alternative method for performing image fusion
Energy Technology Data Exchange (ETDEWEB)
Vargas, L; Hernandez, F; Fernandez, R [Departamento de Medicina Nuclear, Imagenologia Diagnostica. Centro Medico de Xalapa, Veracruz (Mexico)
2005-07-01
At present, imaging departments need to fuse images obtained from diverse equipment. Conventionally, resonance or X-ray tomography images are fused with functional images such as gammagrams and PET images. Fusion technology is sold with modern imaging equipment, and not all nuclear medicine departments have access to it. For this reason we analyzed, studied, and found a solution so that all nuclear medicine departments can benefit from image fusion. The first indispensable requirement is a personal computer with the capacity to host image digitizer cards. If one has a gamma camera that can export images in JPG, GIF, TIFF or BMP formats, it is also possible to do without the digitizer card and record the images on a disk so they can be used on the personal computer. One of the following commercially available graphic design programs is required: Corel Draw, Photoshop, FreeHand, Illustrator or Macromedia Flash, which are the ones we evaluated and which allow image fusion. Any of them works well, and only short training is required to manage them. A digital photographic camera with a resolution of at least 3.0 megapixels is also needed. The procedure consists of photographing the radiological studies the patient already has, selecting the images that demonstrate the pathology under study and that are concordant with the images we have created in the gammagraphic studies, whether planar or tomographic. We transfer the images to the personal computer and read them with the graphic design program, then read the gammagraphic images as well. We use the program's digital tools to make the images transparent, to crop them, to adjust the sizes, and to create the fused images. The process is manual, and skill and experience are required to choose the images, the cuts, the sizes and the degree of transparency. (Author)
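The manual transparency-and-overlay workflow described above amounts to alpha blending of the co-registered images, which can be expressed in a few lines (toy pixel values, and a fixed global alpha standing in for the manually tuned transparency):

```python
import numpy as np

def blend(anatomical, functional, alpha=0.5):
    """Transparency overlay: a weighted blend of a co-registered
    anatomical image (CT/MR) and a functional image (gammagram/PET),
    mimicking what the graphic-design programs above do by hand."""
    return alpha * anatomical + (1.0 - alpha) * functional

ct  = np.array([[100., 200.], [150., 50.]])   # toy CT slice
pet = np.array([[0., 255.], [255., 0.]])      # toy gammagraphic uptake map

fused = blend(ct, pet, alpha=0.6)
print(fused)   # [[ 60. 222.] [192.  30.]]
```

In practice the functional image is usually mapped to a colour palette before blending so the uptake remains distinguishable from the anatomy.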
Discovery and fusion of salient multimodal features toward news story segmentation
Hsu, Winston; Chang, Shih-Fu; Huang, Chih-Wei; Kennedy, Lyndon; Lin, Ching-Yung; Iyengar, Giridharan
2003-12-01
In this paper, we present our new results in news video story segmentation and classification in the context of the TRECVID 2003 video retrieval benchmarking event. We applied and extended the maximum entropy statistical model to effectively fuse diverse features from multiple levels and modalities, including visual, audio, and text. We have included various features such as motion, faces, music/speech types, prosody, and high-level text segmentation information. The statistical fusion model is used to automatically discover the relevant features contributing to the detection of story boundaries. One novel aspect of our method is the use of a feature wrapper to handle different types of features: asynchronous, discrete, continuous, and delta features. We also developed several novel features related to prosody. Using the large news video set from the TRECVID 2003 benchmark, we demonstrate satisfactory performance (F1 measures up to 0.76 on ABC news and 0.73 on CNN news), present how these multi-level multi-modal features construct the probabilistic framework, and, more importantly, observe an interesting opportunity for further improvement.
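For binary story-boundary labels, a maximum entropy model over indicator features is equivalent to logistic regression; this toy fit (with fabricated cue indicators and labels) shows the fusion mechanism in miniature:

```python
import numpy as np

# Each row holds two binary cues (e.g. "anchor face present", "long
# audio pause"); y marks whether a story boundary follows. All values
# are fabricated for illustration.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0], dtype=float)

w, b = np.zeros(2), 0.0
for _ in range(2000):                     # plain gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w += 0.5 * X.T @ (y - p) / len(y)
    b += 0.5 * np.sum(y - p) / len(y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(pred.tolist())   # [1, 1, 0, 0, 1, 0]
```

The learned weights play the role of the maximum entropy model's feature weights: cues that reliably co-occur with boundaries receive large positive weight.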
Facial expression recognition in the wild based on multimodal texture features
Sun, Bo; Li, Liandong; Zhou, Guoyan; He, Jun
2016-11-01
Facial expression recognition in the wild is a very challenging task. We describe our work on static and continuous facial expression recognition in the wild. We evaluate the recognition results of gray deep features and color deep features, and explore the fusion of multimodal texture features. For continuous facial expression recognition, we design two temporal-spatial dense scale-invariant feature transform (SIFT) features and combine multimodal features to recognize expressions from image sequences. For static facial expression recognition based on video frames, we extract dense SIFT and several deep convolutional neural network (CNN) features, including those from our proposed CNN architecture. We train linear support vector machine and partial least squares classifiers for these features on the static facial expression in the wild (SFEW) and acted facial expression in the wild (AFEW) datasets, and we propose a fusion network to combine all the extracted features at the decision level. Our final recognition rates are 56.32% on the SFEW testing set and 50.67% on the AFEW validation set, much better than the baseline rates of 35.96% and 36.08%.
Image enhancement using thermal-visible fusion for human detection
Zaihidee, Ezrinda Mohd; Hawari Ghazali, Kamarul; Zuki Saleh, Mohd
2017-09-01
Interest in detecting human beings in video surveillance systems has increased in recent years. Multisensory image fusion deserves more research attention because of its ability to improve the visual interpretability of an image. This study proposes a fusion technique for human detection based on a multiscale transform, using grayscale visible-light and infrared images. The samples for this study were taken from an online dataset. The images captured by the two sensors were decomposed into high- and low-frequency coefficients using the Stationary Wavelet Transform (SWT). An appropriate fusion rule was then used to merge the coefficients, and the final fused image was obtained with the inverse SWT. The qualitative and quantitative results show that the proposed method is superior to the two comparison methods in terms of enhancement of the target region and preservation of the image's detail information.
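The fusion-rule layer can be sketched with a one-level decimated Haar transform standing in for the SWT (the actual method is undecimated, and the abstract does not publish its exact rule; averaging the low-pass band and keeping the larger-magnitude detail coefficient is a common choice):

```python
import numpy as np

def haar_1level(x):
    """One decimated Haar level as a simple stand-in for SWT."""
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return lo, hi

def inv_haar_1level(lo, hi):
    x = np.empty(lo.size * 2)
    x[0::2] = (lo + hi) / np.sqrt(2.0)
    x[1::2] = (lo - hi) / np.sqrt(2.0)
    return x

def fuse(visible, infrared):
    """Typical wavelet fusion rule: average the low-pass bands,
    keep the larger-magnitude detail coefficient."""
    lo_v, hi_v = haar_1level(visible)
    lo_i, hi_i = haar_1level(infrared)
    lo_f = (lo_v + lo_i) / 2.0
    hi_f = np.where(np.abs(hi_v) >= np.abs(hi_i), hi_v, hi_i)
    return inv_haar_1level(lo_f, hi_f)

vis = np.array([10., 10., 10., 10.])      # flat visible row
ir  = np.array([0., 20., 0., 20.])        # IR row with a hot target edge
fused = fuse(vis, ir)
print(fused)   # the IR edges survive while the mean brightness is averaged
```

The max-abs rule is what carries the thermal target's edges into the fused image while the averaged approximation band preserves overall brightness.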
Multimodal nonlinear imaging of arabidopsis thaliana root cell
Jang, Bumjoon; Lee, Sung-Ho; Woo, Sooah; Park, Jong-Hyun; Lee, Myeong Min; Park, Seung-Han
2017-07-01
Nonlinear optical microscopy has made it possible to explore inside living organisms. It utilizes ultrashort laser pulses at long wavelengths (greater than 800 nm). Ultrashort pulses produce the high peak power needed to induce nonlinear optical phenomena such as two-photon excitation fluorescence (TPEF) and harmonic generation in the medium, while maintaining relatively low average energy per area. In plant developmental biology, confocal microscopy has been widely used in plant cell imaging since the development of biological fluorescence labels in the mid-1990s. However, fluorescence labeling itself affects the sample, and the sample deviates from its intact condition, especially when the entire cell is labelled. In this work, we report dynamic images of Arabidopsis thaliana root cells, demonstrating that multimodal nonlinear optical microscopy is an effective tool for long-term plant cell imaging.
Nuclear medicine and multimodality imaging of pediatric neuroblastoma
Energy Technology Data Exchange (ETDEWEB)
Mueller, Wolfgang Peter; Pfluger, Thomas [Ludwig-Maximilians-University of Munich, Department of Nuclear Medicine, Munich (Germany); Coppenrath, Eva [Ludwig-Maximilians-University of Munich, Department of Radiology, Munich (Germany)
2013-04-15
Neuroblastoma is an embryonic tumor of the peripheral sympathetic nervous system and is metastatic or high risk for relapse in nearly 50% of cases. Therefore, exact staging with radiological and nuclear medicine imaging methods is crucial for defining the adequate therapeutic choice. Tumor cells express the norepinephrine transporter, which makes metaiodobenzylguanidine (MIBG), an analogue of norepinephrine, an ideal tumor specific agent for imaging. MIBG imaging has several disadvantages, such as limited spatial resolution, limited sensitivity in small lesions and the need for two or even more acquisition sessions. Most of these limitations can be overcome with positron emission tomography (PET) using [F-18]2-fluoro-2-deoxyglucose [FDG]. Furthermore, new tracers, such as fluorodopa or somatostatin receptor agonists, have been tested for imaging neuroblastoma recently. However, MIBG scintigraphy and PET alone are not sufficient for operative or biopsy planning. In this regard, a combination with morphological imaging is indispensable. This article will discuss strategies for primary and follow-up diagnosis in neuroblastoma using different nuclear medicine and radiological imaging methods as well as multimodality imaging. (orig.)
A new hyperspectral image compression paradigm based on fusion
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite that carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the original remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
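The two on-board degradation steps and the compression ratio fixed in advance can be illustrated on a toy data cube. This is a minimal sketch under assumed parameters (the block-averaging factors `f` and `g` and the function names are invented here), not the authors' degradation operators:

```python
import numpy as np

def spatial_degrade(cube, f):
    """Block-average each band by factor f -> low-resolution hyperspectral image."""
    h, w, b = cube.shape
    return cube.reshape(h // f, f, w // f, f, b).mean(axis=(1, 3))

def spectral_degrade(cube, g):
    """Average groups of g adjacent bands -> high-resolution multispectral image."""
    h, w, b = cube.shape
    return cube.reshape(h, w, b // g, g).mean(axis=3)

# toy 8x8 cube with 16 spectral bands
cube = np.arange(8 * 8 * 16, dtype=float).reshape(8, 8, 16)
lr = spatial_degrade(cube, 4)    # shape (2, 2, 16): sent in place of the full cube
ms = spectral_degrade(cube, 8)   # shape (8, 8, 2)

# the compression ratio depends only on f and g, so it is known in advance
ratio = cube.size / (lr.size + ms.size)
print(ratio)  # → 5.33... for f=4, g=8
```

On the ground, a hyperspectral/multispectral fusion (pansharpening-style) algorithm would then reconstruct the full-resolution cube from `lr` and `ms`.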
Neutron penumbral imaging of laser-fusion targets
International Nuclear Information System (INIS)
Lerche, R.A.; Ress, D.B.
1988-01-01
Using a new technique, penumbral coded-aperture imaging, the first neutron images of laser-driven, inertial-confinement fusion targets were obtained. With these images the deuterium-tritium burn region within a compressed target can be measured directly. 4 references, 11 figures
International Nuclear Information System (INIS)
Sweeney, R.A.; Seydl, K.; Lukas, P.; Bale, R.J.; Trieb, T.; Moncayo, R.; Donnemiller, E.; Eisner, W.; Burtscher, J.; Stockhammer, G.
2003-01-01
Purpose: To present a simple and precise method of combining functional information from cranial SPECT and PET images with CT and MRI, in any combination. Material and Methods: Imaging is performed with a hockey-mask-like reference frame with image-modality-specific markers in precisely defined positions. This frame is reproducibly connected to the VBH vacuum mouthpiece, granting objectively identical repositioning of the frame with respect to the cranium. Using these markers, the desired 3-D imaging modalities can then be manually or automatically registered. This information can be used for diagnosis, treatment planning, and evaluation of follow-up, while the same vacuum mouthpiece allows precisely reproducible stereotactic head fixation during radiotherapy. Results: 244 CT and MR data sets of 49 patients were registered to a root mean square error (RMSE) of 0.9 mm (mean). 64 SPECT-CT fusions in 18 of these patients gave an RMSE of 1.4 mm, and 40 PET-CT data sets of eight patients were registered to 1.3 mm. An example of the method is given by means of a case report of a 52-year-old patient with bilateral optic nerve meningioma. Conclusion: This technique is a simple, objective and accurate registration tool to combine diagnosis, treatment planning, treatment, and follow-up, all via an individualized vacuum mouthpiece. Especially for low-resolution PET, and even more so for some very diffuse SPECT data sets, activity can now be accurately correlated to anatomic structures. (orig.)
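The registration errors above are reported as an RMSE over the marker positions. As a reminder of the metric (a generic sketch with hypothetical marker coordinates, not the authors' software), the RMSE between matched 3-D marker sets can be computed as:

```python
import numpy as np

def rmse(p, q):
    """Root mean square error between matched 3-D point sets (e.g. in mm)."""
    d2 = np.sum((np.asarray(p, float) - np.asarray(q, float)) ** 2, axis=1)
    return float(np.sqrt(d2.mean()))

# hypothetical marker positions, shifted by 1 mm along z after registration
p = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
q = p + np.array([0.0, 0.0, 1.0])
print(rmse(p, q))  # → 1.0
```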
A multimodal 3D framework for fire characteristics estimation
Toulouse, T.; Rossi, L.; Akhloufi, M. A.; Pieri, A.; Maldague, X.
2018-02-01
In the last decade we have witnessed an increasing interest in using computer vision and image processing in forest fire research. Image processing techniques have been successfully used in different fire analysis areas such as early detection, monitoring, modeling and fire front characteristics estimation. While the majority of the work deals with the use of 2D visible spectrum images, recent work has introduced the use of 3D vision in this field. This work proposes a new multimodal vision framework permitting the extraction of the three-dimensional geometrical characteristics of fires captured by multiple 3D vision systems. The 3D system is a multispectral stereo system operating in both the visible and near-infrared (NIR) spectral bands. The framework supports the use of multiple stereo pairs positioned so as to capture complementary views of the fire front during its propagation. Multimodal registration is conducted using the captured views in order to build a complete 3D model of the fire front. The registration process is achieved using multisensory fusion based on visual data (2D and NIR images), GPS positions and IMU inertial data. Experiments were conducted outdoors in order to show the performance of the proposed framework. The obtained results are promising and show the potential of using the proposed framework in operational scenarios for wildland fire research and as a decision management system in fire fighting.
T2*-weighted image/T2-weighted image fusion in postimplant dosimetry of prostate brachytherapy
International Nuclear Information System (INIS)
Katayama, Norihisa; Takemoto, Mitsuhiro; Yoshio, Kotaro
2011-01-01
Computed tomography (CT)/magnetic resonance imaging (MRI) fusion is considered to be the best method for postimplant dosimetry of permanent prostate brachytherapy; however, it is inconvenient and costly. In T2*-weighted images (T2*-WI), seeds can be easily detected without the use of an intravenous contrast material. We present a novel method for postimplant dosimetry using T2*-WI/T2-weighted image (T2-WI) fusion. We compared the outcomes of T2*-WI/T2-WI fusion-based and CT/T2-WI fusion-based postimplant dosimetry. Between April 2008 and July 2009, 50 consecutive prostate cancer patients underwent brachytherapy. All the patients were treated with 144 Gy of brachytherapy alone. Dose-volume histogram (DVH) parameters (prostate D90, prostate V100, prostate V150, urethral D10, and rectal D2cc) were prospectively compared between T2*-WI/T2-WI fusion-based and CT/T2-WI fusion-based dosimetry. All the DVH parameters estimated by T2*-WI/T2-WI fusion-based dosimetry correlated strongly with those estimated by CT/T2-WI fusion-based dosimetry (0.77 ≤ R ≤ 0.91). No significant difference was observed in these parameters between the two methods, except for prostate V150 (p=0.04). These results show that T2*-WI/T2-WI fusion-based dosimetry is comparable or superior to MRI-based dosimetry as previously reported, because no intravenous contrast material is required. For some patients, rather large differences were observed in the values between the two methods. We attribute these large differences to seed miscounts in T2*-WI and to shifts in the fusion. Improving the image quality of T2*-WI and the image acquisition speed of T2*-WI and T2-WI may decrease seed miscounts and fusion shifts. Therefore, in the future, T2*-WI/T2-WI fusion may be more useful for postimplant dosimetry of prostate brachytherapy. (author)
Multi-modality molecular imaging: pre-clinical laboratory configuration
Wu, Yanjun; Wellen, Jeremy W.; Sarkar, Susanta K.
2006-02-01
In recent years, the prevalence of in vivo molecular imaging applications has rapidly increased. Here we report on the construction of a multi-modality imaging facility in a pharmaceutical setting that is expected to further advance existing capabilities for in vivo imaging of drug distribution and the interaction with their target. The imaging instrumentation in our facility includes a microPET scanner, a four wavelength time-domain optical imaging scanner, a 9.4T/30cm MRI scanner and a SPECT/X-ray CT scanner. An electronics shop and a computer room dedicated to image analysis are additional features of the facility. The layout of the facility was designed with a central animal preparation room surrounded by separate laboratory rooms for each of the major imaging modalities to accommodate the work-flow of simultaneous in vivo imaging experiments. This report will focus on the design of and anticipated applications for our microPET and optical imaging laboratory spaces. Additionally, we will discuss efforts to maximize the daily throughput of animal scans through development of efficient experimental work-flows and the use of multiple animals in a single scanning session.
Research and Realization of Medical Image Fusion Based on Three-Dimensional Reconstruction
Institute of Scientific and Technical Information of China (English)
(no author listed)
2007-01-01
A new medical image fusion technique is presented. The method is based on three-dimensional reconstruction. After reconstruction, the three-dimensional volume data are normalized by three-dimensional coordinate conversion in the same way and intersected by a cutting plane that includes the anatomical structure of interest; as a result, two images in full registration in space and geometry are obtained, and these images are finally fused. Compared with traditional two-dimensional fusion, three-dimensional fusion can not only resolve the different problems that exist in the two kinds of images, but also avoid the registration error that arises when the two kinds of images have different scan and imaging parameters. The research proves that this fusion technique is more exact and requires no registration step, so it is better adapted to fusing arbitrary medical images from different equipment.
Medical images fusion for application in treatment planning systems in radiotherapy
International Nuclear Information System (INIS)
Ros, Renato Assenci
2006-01-01
Software for medical image fusion was developed for use in the CAT3D radiotherapy and MNPS radiosurgery treatment planning systems. A mutual information maximization methodology was used to register images of different modalities by measuring the statistical dependence between voxel pairs. Alignment by reference points provides an initial approximation to the nonlinear optimization process, carried out by the downhill simplex method, for estimation of the joint histogram. The coordinate transformation function uses trilinear interpolation and searches for the global maximum in a 6-dimensional space, with 3 degrees of freedom for translation and 3 for rotation, using the rigid-body model. The method was evaluated with CT, MR and PET images from the Vanderbilt University database to verify its accuracy by comparing the transformation coordinates of each image fusion with gold-standard values. The median alignment error was 1.6 mm for CT-MR fusion and 3.5 mm for PET-MR fusion, with the gold-standard accuracy estimated as 0.4 mm for CT-MR fusion and 1.7 mm for PET-MR fusion. The maximum error values were 5.3 mm for CT-MR fusion and 7.4 mm for PET-MR fusion, and 99.1% of the alignment errors were subvoxel values. The mean computing time was 24 s. The software was successfully completed and implemented in 59 routine radiotherapy services, of which 42 are in Brazil and 17 elsewhere in Latin America. The method has no limitations regarding different image resolutions, pixel sizes or slice thicknesses. In addition, the alignment may be accomplished on axial, coronal or sagittal images. (author)
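The quantity being maximized above, mutual information estimated from a joint histogram, can be sketched as follows. This is a generic illustration of the measure (bin count and names are arbitrary), not the CAT3D/MNPS implementation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images from their joint histogram."""
    # joint histogram of corresponding voxel/pixel pairs
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                      # joint probability
    px = p.sum(axis=1, keepdims=True)    # marginal of a
    py = p.sum(axis=0, keepdims=True)    # marginal of b
    nz = p > 0                           # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.RandomState(0)
a = rng.rand(64, 64)
# identical images share far more information than independent ones
print(mutual_information(a, a) > mutual_information(a, rng.rand(64, 64)))  # → True
```

In a registration loop, an optimizer (such as the downhill simplex method mentioned in the abstract) varies the 6 rigid-body parameters and re-evaluates this measure after resampling one image into the other's frame.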
Vukotic , Vedran; Raymond , Christian; Gravier , Guillaume
2016-01-01
Common approaches to problems involving multiple modalities (classification, retrieval, hyperlinking, etc.) are early fusion of the initial modalities and crossmodal translation from one modality to the other. Recently, deep neural networks, especially deep autoencoders, have proven promising both for crossmodal translation and for early fusion via multimodal embedding. In this work, we propose a flexible cross-modal deep neural network architecture for multimodal and ...
Multimodal image registration based on binary gradient angle descriptor.
Jiang, Dongsheng; Shi, Yonghong; Yao, Demin; Fan, Yifeng; Wang, Manning; Song, Zhijian
2017-12-01
Multimodal image registration plays an important role in image-guided interventions/therapy and atlas building, and it is still a challenging task due to the complex intensity variations in different modalities. The paper addresses the problem and proposes a simple, compact, fast and generally applicable modality-independent binary gradient angle descriptor (BGA) based on the rationale of gradient orientation alignment. The BGA can be easily calculated at each voxel by coding the quadrant in which a local gradient vector falls, and it has an extremely low computational complexity, requiring only three convolutions, two multiplication operations and two comparison operations. Meanwhile, the binarized encoding of the gradient orientation makes the BGA more resistant to image degradations compared with conventional gradient orientation methods. The BGA can extract similar feature descriptors for different modalities and enable the use of simple similarity measures, which makes it applicable within a wide range of optimization frameworks. The results for pairwise multimodal and monomodal registrations between various images (T1, T2, PD, T1c, Flair) consistently show that the BGA significantly outperforms localized mutual information. The experimental results also confirm that the BGA can be a reliable alternative to the sum of absolute difference in monomodal image registration. The BGA can also achieve an accuracy of [Formula: see text], similar to that of the SSC, for the deformable registration of inhale and exhale CT scans. Specifically, for the highly challenging deformable registration of preoperative MRI and 3D intraoperative ultrasound images, the BGA achieves a similar registration accuracy of [Formula: see text] compared with state-of-the-art approaches, with a computation time of 18.3 s per case. The BGA improves the registration performance in terms of both accuracy and time efficiency. With further acceleration, the framework has the potential for
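The quadrant-coding idea behind the BGA can be sketched as follows. This is a simplified illustration (forward finite differences stand in for the three convolutions of the paper, and the function names are invented), not the published implementation:

```python
import numpy as np

def bga(img):
    """2-bit code per pixel for the quadrant in which the local gradient falls."""
    # local gradients (forward differences; last row/column padded by repetition)
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    # bit 0: sign of gx, bit 1: sign of gy
    return (gx >= 0).astype(np.uint8) | ((gy >= 0).astype(np.uint8) << 1)

def dissimilarity(c1, c2):
    """Mismatch count between two codes, usable as a simple (dis)similarity measure."""
    return int(np.count_nonzero(c1 ^ c2))

# a monotone intensity remapping (as can occur between modalities)
# preserves gradient orientations, so the binary code is unchanged
img = np.arange(16.0).reshape(4, 4)
print(dissimilarity(bga(img), bga(3.0 * img + 7.0)))  # → 0
```

Because the descriptor is binary and purely sign-based, comparing two codes reduces to cheap bitwise operations, which is what makes such descriptors fast inside an optimization loop.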
Gold Nanoconstructs for Multimodal Diagnostic Imaging and Photothermal Cancer Therapy
Coughlin, Andrew James
Cancer accounts for nearly 1 out of every 4 deaths in the United States, and because conventional treatments are limited by morbidity and off-target toxicities, improvements in cancer management are needed. This thesis further develops nanoparticle-assisted photothermal therapy (NAPT) as a viable treatment option for cancer patients. NAPT enables localized ablation of disease because heat generation only occurs where tissue-permissive near-infrared (NIR) light and absorbing nanoparticles are combined, leaving surrounding normal tissue unharmed. Two principal approaches were investigated to improve the specificity of this technique: multimodal imaging and molecular targeting. Multimodal imaging affords the ability to guide NIR laser application for site-specific NAPT and more holistic characterization of disease by combining the advantages of several diagnostic technologies. Towards the goal of image-guided NAPT, gadolinium-conjugated gold-silica nanoshells were engineered and demonstrated to enhance imaging contrast across a range of diagnostic modes, including T1-weighted magnetic resonance imaging, X-ray, optical coherence tomography, reflective confocal microscopy, and two-photon luminescence in vitro as well as within an animal tumor model. Additionally, the nanoparticle conjugates were shown to effectively convert NIR light to heat for applications in photothermal therapy. Therefore, the broad utility of gadolinium-nanoshells for anatomic localization of tissue lesions, molecular characterization of malignancy, and mediation of ablation was established. Molecular targeting strategies may also improve NAPT by promoting nanoparticle uptake and retention within tumors and enhancing specificity when malignant and normal tissue interdigitate. Here, ephrinA1 protein ligands were conjugated to nanoshell surfaces for particle homing to overexpressed EphA2 receptors on prostate cancer cells. In vitro, successful targeting and subsequent photothermal ablation of
Mhiri, Aida; El Bez, Intidhar; Slim, Ihsen; Meddeb, Imène; Yeddes, Imene; Ghezaiel, Mohamed; Gritli, Saïd; Ben Slimène, Mohamed Faouzi
2013-10-01
Single photon emission computed tomography combined with low-dose computed tomography (SPECT-CT) is a hybrid imaging technique integrating functional and anatomical data. The purpose of our study was to evaluate the contribution of SPECT-CT over traditional planar imaging in patients with differentiated thyroid carcinoma (DTC). Post-therapy 131I whole-body scans followed by SPECT-CT of the neck and thorax were performed in 156 patients with DTC. Among these 156 patients, followed for predominantly papillary carcinoma, the use of fusion SPECT-CT imaging compared to conventional planar imaging allowed us to correct our therapeutic approach in 26.9% (42/156 patients), according to the protocols of therapeutic management of our institute. SPECT-CT is a multimodal imaging technique providing better identification and more accurate anatomical localization of the foci of radioiodine uptake, with an impact on therapeutic management.
Yao, Shujing; Zhang, Jiashu; Zhao, Yining; Hou, Yuanzheng; Xu, Xinghua; Zhang, Zhizhong; Kikinis, Ron; Chen, Xiaolei
2018-05-01
To address the feasibility and predictive value of multimodal image-based virtual reality in detecting and assessing features of neurovascular conflict (NVC), particularly the detection of offending vessels and the degree of compression exerted on the nerve root, in patients who underwent microvascular decompression for nonlesional trigeminal neuralgia and hemifacial spasm (HFS). This prospective study includes 42 consecutive patients who underwent microvascular decompression for classic primary trigeminal neuralgia or HFS. All patients underwent preoperative 1.5-T magnetic resonance imaging (MRI) with T2-weighted three-dimensional (3D) sampling perfection with application-optimized contrasts by using different flip angle evolutions, 3D time-of-flight magnetic resonance angiography, and 3D T1-weighted gadolinium-enhanced sequences in combination, whereas 2 patients underwent extra experimental preoperative 7.0-T MRI scans with the same imaging protocol. Multimodal MRIs were then coregistered with the open-source software 3D Slicer, followed by 3D image reconstruction to generate virtual reality (VR) images for detection of possible NVC in the cerebellopontine angle. Evaluations were performed by 2 reviewers and compared with the intraoperative findings. For detection of NVC, multimodal image-based VR sensitivity was 97.6% (40/41) and specificity was 100% (1/1). Compared with the intraoperative findings, the κ coefficients for predicting the offending vessel and the degree of compression were >0.75 (P < 0.001). The 7.0-T scans give a clearer view of vessels in the cerebellopontine angle, which may have a significant impact on detection of small-caliber offending vessels with relatively slow flow speed in cases of HFS. Multimodal image-based VR using 3D sampling perfection with application-optimized contrasts by using different flip angle evolutions in combination with 3D time-of-flight magnetic resonance angiography sequences proved to be reliable in detecting NVC
International Nuclear Information System (INIS)
Zhang Xiangsong; He Zuoxiang
2004-01-01
Objective: To establish a method for three-dimensional volumetric fusion of emission and transmission images in PET imaging. Methods: The volume data of emission and transmission images acquired with a Siemens ECAT HR+ PET scanner were transferred to a PC by local area network. The PET volume data were converted into 8-bit byte type and scaled to the range of 0-255. The data coordinates of emission and transmission images were normalized by three-dimensional coordinate conversion in the same way. The images were fused in alpha-blending mode. The accuracy of the image fusion was confirmed by its clinical application in 13 cases. Results: The three-dimensional volumetric fusion of emission and transmission images clearly displayed the silhouette and anatomic configuration of the chest, including the chest wall, lung, heart, mediastinum, etc. Forty-eight chest lesions in the 13 cases were accurately located by the image fusion. Conclusions: The volume data of emission and transmission images acquired with the Siemens ECAT HR+ PET scanner share the same data coordinates. The three-dimensional fusion software can be conveniently used for volumetric fusion of emission and transmission images and can correctly locate lesions in the chest
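The scale-to-8-bit and alpha-blending steps described above can be sketched per slice as follows. This is a generic illustration with invented helper names, not the authors' software:

```python
import numpy as np

def to_uint8(v):
    """Scale a slice/volume to the 0-255 range, as described in the abstract."""
    v = np.asarray(v, float)
    rng = v.max() - v.min()
    if rng == 0:
        return np.zeros(v.shape, np.uint8)
    return np.round((v - v.min()) / rng * 255.0).astype(np.uint8)

def alpha_blend(emission, transmission, alpha=0.5):
    """Weighted overlay of the two scaled images: alpha*emission + (1-alpha)*transmission."""
    e = to_uint8(emission).astype(float)
    t = to_uint8(transmission).astype(float)
    return np.round(alpha * e + (1.0 - alpha) * t).astype(np.uint8)

t = np.array([[0.0, 255.0], [128.0, 64.0]])
# alpha=1 shows the emission data only; alpha=0 shows transmission only
print(alpha_blend(t, np.zeros((2, 2)), alpha=1.0))
```

Applied slice by slice through two coregistered volumes, this produces the fused display in which functional uptake is superimposed on the anatomical silhouette.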
Label-free imaging of arterial cells and extracellular matrix using a multimodal CARS microscope
Wang, Han-Wei; Le, Thuc T.; Cheng, Ji-Xin
2008-04-01
A multimodal nonlinear optical imaging system that integrates coherent anti-Stokes Raman scattering (CARS), sum-frequency generation (SFG), and two-photon excitation fluorescence (TPEF) on the same platform was developed and applied to visualize single cells and extracellular matrix in fresh carotid arteries. CARS signals arising from CH2-rich membranes allowed visualization of endothelial cells and smooth muscle cells of the arterial wall. Additionally, CARS microscopy allowed vibrational imaging of elastin and collagen fibrils, which are also rich in CH2 bonds. The extracellular matrix organization was further confirmed by TPEF signals arising from elastin's autofluorescence and SFG signals arising from collagen fibrils' non-centrosymmetric structure. Label-free imaging of significant components of arterial tissues suggests the potential application of multimodal nonlinear optical microscopy to monitor onset and progression of arterial diseases.
Multimodal Discourse Analysis of the Movie "Argo"
Bo, Xu
2018-01-01
Based on multimodal discourse theory, this paper makes a multimodal discourse analysis of some shots in the movie "Argo" from the perspective of context of culture, context of situation and meaning of image. Results show that this movie constructs multimodal discourse through particular context, language and image, and successfully…
Multimodal imaging of vascular network and blood microcirculation by optical diagnostic techniques
International Nuclear Information System (INIS)
Kuznetsov, Yu L; Kalchenko, V V; Meglinski, I V
2011-01-01
We present a multimodal optical diagnostic approach for simultaneous non-invasive in vivo imaging of blood and lymphatic microvessels, utilising a combined use of fluorescence intravital microscopy and a method of dynamic light scattering. This approach makes it possible to renounce the use of fluorescent markers for visualisation of blood vessels and, therefore, significantly (tenfold) reduce the toxicity of the technique and minimise the side effects caused by the use of contrast fluorescent markers. We demonstrate that, along with the ability to obtain images of lymph and blood microvessels with a high spatial resolution, the current multimodal approach allows one to observe the permeability of blood vessels in real time. This technique appears to be promising in physiological studies of blood vessels, and especially in the study of the peripheral cardiovascular system in vivo. (optical technologies in biophysics and medicine)
Mert, Aygül; Kiesel, Barbara; Wöhrer, Adelheid; Martínez-Moreno, Mauricio; Minchev, Georgi; Furtner, Julia; Knosp, Engelbert; Wolfsberger, Stefan; Widhalm, Georg
2015-01-01
OBJECT Surgery of suspected low-grade gliomas (LGGs) poses a special challenge for neurosurgeons due to their diffusely infiltrative growth and histopathological heterogeneity. Consequently, neuronavigation with multimodality imaging data, such as structural and metabolic data, fiber tracking, and 3D brain visualization, has been proposed to optimize surgery. However, currently no standardized protocol has been established for multimodality imaging data in modern glioma surgery. The aim of this study was therefore to define a specific protocol for multimodality imaging and navigation for suspected LGG. METHODS Fifty-one patients who underwent surgery for a diffusely infiltrating glioma with nonsignificant contrast enhancement on MRI and available multimodality imaging data were included. In the first 40 patients with glioma, the authors retrospectively reviewed the imaging data, including structural MRI (contrast-enhanced T1-weighted, T2-weighted, and FLAIR sequences), metabolic images derived from PET, or MR spectroscopy chemical shift imaging, fiber tracking, and 3D brain surface/vessel visualization, to define standardized image settings and specific indications for each imaging modality. The feasibility and surgical relevance of this new protocol was subsequently prospectively investigated during surgery with the assistance of an advanced electromagnetic navigation system in the remaining 11 patients. Furthermore, specific surgical outcome parameters, including the extent of resection, histological analysis of the metabolic hotspot, presence of a new postoperative neurological deficit, and intraoperative accuracy of 3D brain visualization models, were assessed in each of these patients. RESULTS After reviewing these first 40 cases of glioma, the authors defined a specific protocol with standardized image settings and specific indications that allows for optimal and simultaneous visualization of structural and metabolic data, fiber tracking, and 3D brain
Effective Multifocus Image Fusion Based on HVS and BP Neural Network
Directory of Open Access Journals (Sweden)
Yong Yang
2014-01-01
The aim of multifocus image fusion is to fuse images taken of the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back propagation (BP) neural network is presented. First, three features that reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct the initial fused image. Third, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by applying a fusion rule to those focused regions. Experimental results show that the proposed method can provide better performance and outperforms several existing popular fusion methods in terms of both objective and subjective evaluations.
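The pixel-selection idea, keeping whichever source image is locally clearer at each pixel, can be sketched without the trained network. Below, a single local gradient-energy feature stands in for the three HVS features and the BP classifier of the paper (all names are illustrative, and the consistency-verification and morphological steps are omitted):

```python
import numpy as np

def clarity(img, k=3):
    """Local gradient energy in a k x k window, a simple sharpness feature."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    e = gx ** 2 + gy ** 2
    pad = k // 2
    p = np.pad(e, pad, mode='edge')
    out = np.empty_like(e)
    for i in range(e.shape[0]):
        for j in range(e.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].sum()  # window sum
    return out

def fuse_multifocus(a, b):
    """Keep, at each pixel, the source image that is locally sharper."""
    return np.where(clarity(a) >= clarity(b), a, b)

sharp = (np.indices((6, 6)).sum(axis=0) % 2).astype(float)  # checkerboard: high detail
flat = np.full((6, 6), 0.5)                                 # fully defocused stand-in
print(np.array_equal(fuse_multifocus(sharp, flat), sharp))  # → True
```

The paper's contribution is to replace the single hand-tuned feature with three complementary clarity features combined by a trained BP network, and to clean up the resulting decision map with region-level consistency checks.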
SAR Data Fusion Imaging Method Oriented to Target Feature Extraction
Directory of Open Access Journals (Sweden)
Yang Wei
2015-02-01
To deal with the difficulty of precisely extracting target outlines caused by neglecting the variation of target scattering characteristics during the processing of high-resolution space-borne SAR data, a novel fusion imaging method oriented to target feature extraction is proposed. First, several important aspects that affect target feature extraction and SAR image quality are analyzed, including the curved orbit, the stop-and-go approximation, atmospheric delay, and high-order residual phase error, and the corresponding compensation methods are addressed. Based on this analysis, a mathematical model of the SAR echo combined with the target space-time spectrum is established to explain the space-time-frequency variation of the target scattering characteristics. Moreover, a fusion imaging strategy and method for high-resolution, ultra-large observation angle range conditions are put forward to improve SAR quality by fusion processing in the range-Doppler and image domains. Finally, simulations based on typical military targets are used to verify the effectiveness of the fusion imaging method.
Analyzer-based imaging of spinal fusion in an animal model
International Nuclear Information System (INIS)
Kelly, M E; Beavis, R C; Allen, L A; Fiorella, David; Schueltke, E; Juurlink, B H; Chapman, L D; Zhong, Z
2008-01-01
Analyzer-based imaging (ABI) utilizes synchrotron radiation sources to create collimated monochromatic x-rays. In addition to x-ray absorption, this technique uses refraction and scatter rejection to create images. ABI provides dramatically improved contrast over standard imaging techniques. Twenty-one adult male Wistar rats were divided into four experimental groups to undergo the following interventions: (1) non-injured control, (2) decortication alone, (3) decortication with iliac crest bone grafting and (4) decortication with iliac crest bone grafting and interspinous wiring. Surgical procedures were performed at the L5-6 level. Animals were killed at 2, 4 and 6 weeks after the intervention and the spine muscle blocks were excised. Specimens were assessed for the presence of fusion by (1) manual testing, (2) conventional absorption radiography and (3) ABI. ABI showed no evidence of bone fusion in groups 1 and 2 and showed solid or possibly solid fusion in subjects from groups 3 and 4 at 6 weeks. Metal artifacts were not present in any of the ABI images. Conventional absorption radiographs did not provide diagnostic quality imaging of either the graft material or fusion masses in any of the specimens in any of the groups. Synchrotron-based ABI represents a novel imaging technique which can be used to assess spinal fusion in a small animal model. ABI produces superior image quality when compared to conventional radiographs
Analyzer-based imaging of spinal fusion in an animal model
Kelly, M. E.; Beavis, R. C.; Fiorella, David; Schültke, E.; Allen, L. A.; Juurlink, B. H.; Zhong, Z.; Chapman, L. D.
2008-05-01
Analyzer-based imaging (ABI) utilizes synchrotron radiation sources to create collimated monochromatic x-rays. In addition to x-ray absorption, this technique uses refraction and scatter rejection to create images. ABI provides dramatically improved contrast over standard imaging techniques. Twenty-one adult male Wistar rats were divided into four experimental groups to undergo the following interventions: (1) non-injured control, (2) decortication alone, (3) decortication with iliac crest bone grafting and (4) decortication with iliac crest bone grafting and interspinous wiring. Surgical procedures were performed at the L5-6 level. Animals were killed at 2, 4 and 6 weeks after the intervention and the spine muscle blocks were excised. Specimens were assessed for the presence of fusion by (1) manual testing, (2) conventional absorption radiography and (3) ABI. ABI showed no evidence of bone fusion in groups 1 and 2 and showed solid or possibly solid fusion in subjects from groups 3 and 4 at 6 weeks. Metal artifacts were not present in any of the ABI images. Conventional absorption radiographs did not provide diagnostic quality imaging of either the graft material or fusion masses in any of the specimens in any of the groups. Synchrotron-based ABI represents a novel imaging technique which can be used to assess spinal fusion in a small animal model. ABI produces superior image quality when compared to conventional radiographs.
MMX-I: A data-processing software for multi-modal X-ray imaging and tomography
International Nuclear Information System (INIS)
Bergamaschi, A; Medjoubi, K; Somogyi, A; Messaoudi, C; Marco, S
2017-01-01
Scanning hard X-ray imaging allows simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil makes it possible to perform multimodal imaging and tomography during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the very large (several hundred gigabytes) multimodal datasets generated. This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g. from the fields of biology, life sciences, geology and geobiology), which may have no experience in such data handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. We have therefore developed a multi-platform (Mac, Windows and Linux 64-bit) data-processing tool that is easy to install, comprehensive, intuitive, extendable and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in treating big data. (paper)
Fusion of Geophysical Images in the Study of Archaeological Sites
Karamitrou, A. A.; Petrou, M.; Tsokas, G. N.
2011-12-01
This paper presents results from different techniques for fusing geophysical images from different modalities into one image with higher information content than either of the two original images independently. The resultant image will be useful for the detection and mapping of buried archaeological relics. The examined archaeological area is situated in the Kampana site (NE Greece) near the ancient theater of Maronia city. Archaeological excavations revealed an ancient theater, an aristocratic house and the temple of the ancient Greek god Dionysus. Numerous ceramic objects found in the broader area indicated the probability of a buried urban structure. In order to accurately locate and map the latter, geophysical measurements were performed using the magnetic method (vertical gradient of the magnetic field) and the electrical method (apparent resistivity). We performed a semi-stochastic pixel-based registration method between the geophysical images in order to fine-register them, correcting the local spatial offsets produced by the use of hand-held devices. After this procedure we applied three different fusion approaches to the registered images. Image fusion is a relatively new technique that not only allows the integration of different information sources, but also takes advantage of the spatial and spectral resolution as well as the orientation characteristics of each image. We have used three different fusion techniques: fusion with mean values, with wavelets (enhancing selected frequency bands) and with curvelets (giving emphasis to specific bands and angles, according to the expected orientation of the relics). In all three cases the fused images gave significantly better results than each of the original geophysical images separately. The comparison of the three approaches showed that fusion with curvelets, giving emphasis to the features' orientation, seems to give the best fused image.
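The simplest of the three approaches above, mean-value fusion of two co-registered images, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the per-image rescaling to [0, 1] is an assumption added so that neither modality dominates purely because of its numeric range.

```python
import numpy as np

def fuse_mean(img_a, img_b):
    """Pixel-wise mean fusion of two co-registered images.

    Each input is first rescaled to [0, 1] (an assumption of this sketch)
    so the two modalities contribute on a comparable scale.
    """
    def normalize(x):
        x = x.astype(float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return 0.5 * (normalize(img_a) + normalize(img_b))

# Toy example: a "magnetic" and a "resistivity" map of the same small area
magnetic = np.array([[0.0, 10.0], [20.0, 30.0]])
resistivity = np.array([[100.0, 100.0], [400.0, 100.0]])
fused = fuse_mean(magnetic, resistivity)
```

A pixel that is anomalous in either modality stays visible in the fused map, which is why even this naive rule already improves interpretability over each image alone.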
Research on fusion algorithm of polarization image in tetrolet domain
Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing
2015-12-01
Tetrolets are Haar-type wavelets whose supports are tetrominoes, shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. Firstly, the magnitude-of-polarization image and the angle-of-polarization image are decomposed into low-frequency and high-frequency coefficients at multiple scales and directions using the tetrolet transform. The low-frequency coefficients are fused by averaging. For the directional high-frequency coefficients, the better coefficients are selected for fusion by a region spectrum entropy algorithm, according to the edge distribution differences in the high-frequency sub-band images. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method detects image features more effectively and the fused image has a better subjective visual effect.
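The overall recipe (decompose, fuse low band by averaging, fuse high bands by a selection rule, invert) can be sketched with an ordinary one-level Haar transform standing in for the tetrolet transform, and a per-coefficient larger-magnitude rule standing in for the region spectrum entropy selection. Both substitutions are simplifying assumptions of this sketch, not the paper's method.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4.0, (a - b + c - d) / 4.0,
            (a + b - c - d) / 4.0, (a - b - c + d) / 4.0)

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_haar(img1, img2):
    """Average the low band; keep the larger-magnitude high-band coefficient."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = 0.5 * (c1[0] + c2[0])
    highs = [np.where(np.abs(p) >= np.abs(q), p, q)
             for p, q in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *highs)

img = np.arange(16, dtype=float).reshape(4, 4)
same = fuse_haar(img, img)   # fusing an image with itself reconstructs it
```

Selecting the stronger high-frequency coefficient preserves edges from whichever source image is sharper at that location, which is the intuition behind the paper's entropy-based rule as well.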
External marker-based fusion of functional and morphological images
International Nuclear Information System (INIS)
Kremp, S.; Schaefer, A.; Alexander, C.; Kirsch, C.M.
1999-01-01
The fusion of image data from morphologically oriented methods such as CT and MRI with functional information from nuclear medicine (SPECT, PET) is frequently applied to allow a better association between functional findings and anatomical structures. A new software package was developed to provide image fusion of PET, SPECT, MRI and CT data within a short processing period for brain as well as whole-body examinations, in particular of the thorax and abdomen. The software utilizes external markers (brain) or anatomical landmarks (thorax) for correlation. The fusion requires a period of approx. 15 min. The examples shown emphasize the high gain in diagnostic information obtained by fusing image data of anatomical and functional methods. (orig.)
Analysis and Evaluation of IKONOS Image Fusion Algorithm Based on Land Cover Classification
Institute of Scientific and Technical Information of China (English)
Xia JING; Yan BAO
2015-01-01
Each fusion algorithm has its own advantages and limitations, so it is difficult to simply rank fusion algorithms as good or bad; which algorithm is selected to fuse images also depends upon the sensor types and the specific research purpose. Firstly, five fusion methods, i.e. IHS, Brovey, PCA, SFIM and Gram-Schmidt, are briefly described in the paper. Visual judgment and quantitative statistical parameters were then used to assess the five algorithms. Finally, in order to determine which is the most suitable fusion method for land cover classification of IKONOS images, maximum likelihood classification (MLC) was applied to the five fusion images. The results showed that the fusion effects of the SFIM and Gram-Schmidt transforms were better than those of the other three image fusion methods in spatial detail improvement and spectral information fidelity, and the Gram-Schmidt technique was superior to the SFIM transform in expressing image details. The classification accuracy of the images fused using the Gram-Schmidt and SFIM algorithms was higher than that of the other three image fusion methods, with an overall accuracy greater than 98%. The IHS-fused image classification accuracy was the lowest; the overall accuracy and kappa coefficient were 83.14% and 0.76, respectively. Thus the IKONOS fusion images obtained by Gram-Schmidt and SFIM were better for improving land cover classification accuracy.
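The two accuracy figures reported above, overall accuracy and the kappa coefficient, are both derived from a classification confusion matrix. A minimal sketch of that computation (the confusion matrix below is an invented toy example, not the paper's data):

```python
import numpy as np

def overall_accuracy_and_kappa(conf):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: classified classes)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    po = np.trace(conf) / n                         # observed agreement
    pe = (conf.sum(0) * conf.sum(1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Toy 2-class confusion matrix
cm = [[45, 5],
      [10, 40]]
oa, kappa = overall_accuracy_and_kappa(cm)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy for land cover maps.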
Cook, Jason R.; Dumani, Diego S.; Kubelick, Kelsey P.; Luci, Jeffrey; Emelianov, Stanislav Y.
2017-03-01
Imaging modalities utilize contrast agents to improve morphological visualization and to assess functional and molecular/cellular information. Here we present a new type of nanometer scale multi-functional particle that can be used for multi-modal imaging and therapeutic applications. Specifically, we synthesized monodisperse 20 nm Prussian Blue Nanocubes (PBNCs) with desired optical absorption in the near-infrared region and superparamagnetic properties. PBNCs showed excellent contrast in photoacoustic (700 nm wavelength) and MR (3T) imaging. Furthermore, photostability was assessed by exposing the PBNCs to nearly 1,000 laser pulses (5 ns pulse width) with up to 30 mJ/cm2 laser fluences. The PBNCs exhibited insignificant changes in photoacoustic signal, demonstrating enhanced robustness compared to the commonly used gold nanorods (substantial photodegradation with fluences greater than 5 mJ/cm2). Furthermore, the PBNCs exhibited superparamagnetism with a magnetic saturation of 105 emu/g, a 5x improvement over superparamagnetic iron-oxide (SPIO) nanoparticles. PBNCs exhibited enhanced T2 contrast measured using 3T clinical MRI. Because of the excellent optical absorption and magnetism, PBNCs have potential uses in other imaging modalities including optical tomography, microscopy, magneto-motive OCT/ultrasound, etc. In addition to multi-modal imaging, the PBNCs are multi-functional and, for example, can be used to enhance magnetic delivery and as therapeutic agents. Our initial studies show that stem cells can be labeled with PBNCs to perform image-guided magnetic delivery. Overall, PBNCs can act as imaging/therapeutic agents in diverse applications including cancer, cardiovascular disease, ophthalmology, and tissue engineering. Furthermore, PBNCs are based on FDA approved Prussian Blue thus potentially easing clinical translation of PBNCs.
Mannan-based conjugates as a multimodal imaging platform for lymph nodes
Czech Academy of Sciences Publication Activity Database
Rabyk, Mariia; Galisová, A.; Jirátová, M.; Patsula, Vitalii; Srbová, Linda; Loukotová, Lenka; Parnica, Jozef; Jirák, D.; Štěpánek, Petr; Hrubý, Martin
2018-01-01
Vol. 6, No. 17 (2018), pp. 2584-2596. ISSN 2050-750X. R&D Projects: GA MZd(CZ) NV15-25781A. Institutional support: RVO:61389013. Keywords: polysaccharide modification; mannan; multimodal imaging. Subject RIV: FR - Pharmacology; Medicinal Chemistry. OECD field: Pharmacology and pharmacy. Impact factor: 4.543, year: 2016
Energy Technology Data Exchange (ETDEWEB)
Bosque-Freeman, L.; Leroy, C.; Galanaud, D.; Sureau, F.; Assouad, R.; Tourbah, A.; Papeix, C.; Comtat, C.; Trebossen, R.; Lubetzki, C.; Delforge, J.; Bottlaender, M.; Stankoff, B. [Serv. Hosp. Frederic Joliot, Orsay (France)
2009-07-01
Objective: To assess neuronal damage in deep gray matter structures by positron emission tomography (PET) using [{sup 11}C]-flumazenil (FMZ), a specific central benzodiazepine receptor antagonist, and [{sup 18}F]-fluorodeoxyglucose (FDG), which reflects neuronal metabolism. To compare results obtained by PET with those obtained by multimodal magnetic resonance imaging (MRI). Background: It is now accepted that neuronal injury plays a crucial role in the occurrence and progression of neurological disability in multiple sclerosis (MS). To date, available MRI techniques do not specifically assess neuronal damage, but early abnormalities, such as iron deposition or atrophy, have been described in deep gray matter structures. Whether those MRI modifications correspond to neuronal damage remains to be further investigated. Materials and methods: Nine healthy volunteers were compared to 10 progressive and 9 relapsing remitting (RR) MS patients. Each subject underwent two PET examinations with [{sup 11}C]-FMZ and [{sup 18}F]-FDG, on a high resolution research tomograph dedicated to brain imaging (Siemens Medical Solution, spatial resolution of 2.5 mm). Deep gray matter regions were manually segmented on T1-weighted MR images with the mutual information algorithm (www.brainvisa.info), and co-registered with PET images. A multimodal MRI protocol including T1 pre and post gadolinium, T2-proton density sequences, magnetization transfer, diffusion tensor, and proton spectroscopy was also performed for each subject. Results: On PET with [{sup 11}C]-FMZ, there was a pronounced decrease in receptor density for RR patients in all deep gray matter structures investigated, whereas the density was unchanged or even increased in the same regions for progressive patients. Whether the different patterns between RR and progressive patients reflect distinct pathogenic mechanisms is currently being investigated by comparing PET and multimodal MRI results. Conclusion: Combination of PET and multimodal MR imaging
Fusion of Multimodal Biometrics using Feature and Score Level Fusion
Mohana Prakash, S.; Betty, P.; Sivanarulselvan, K.
2016-01-01
Biometrics uniquely identifies a person based on physical and behavioural characteristics. Unimodal biometric systems suffer from various problems such as limited degrees of freedom, spoof attacks, non-universality, noisy data and high error rates. Multimodal biometrics is introduced to overcome the limitations of unimodal biometrics. The presented methodology extracts the features of four biometric traits: fingerprint, palm, iris and retina. Then the extracted features are fused in th...
Fusion method of SAR and optical images for urban object extraction
Jia, Yonghong; Blum, Rick S.; Li, Fangfang
2007-11-01
A new method for fusing SAR, panchromatic (Pan) and multispectral (MS) data is proposed. First, SAR texture is extracted by ratioing the despeckled SAR image to its low-pass approximation, and is used to modulate high-pass details extracted from the available Pan image by means of the à trous wavelet decomposition. Then, the high-pass details modulated with the texture are applied to obtain the fusion product by the HPFM (High-Pass Filter-based Modulation) fusion method. A set of image data including co-registered Landsat TM, ENVISAT SAR and SPOT Pan is used for the experiment. The results demonstrate accurate spectral preservation on vegetated regions, bare soil, and also on textured areas (buildings and road networks) where SAR texture information enhances the fusion product, and the proposed approach is effective for image interpretation and classification.
Fourier domain image fusion for differential X-ray phase-contrast breast imaging
International Nuclear Information System (INIS)
Coello, Eduardo; Sperl, Jonathan I.; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne
2017-01-01
X-Ray Phase-Contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase shift and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method allows complementary information from the three acquired signals to be presented in one single image, minimizing the noise component while maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment validated that all the relevant diagnostic features contained in the XPC images were also present in the fused image.
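The core idea of Fourier-domain fusion can be sketched for the two-signal case: take the low spatial frequencies from one image and the high spatial frequencies from another, then invert the transform. The hard binary mask and the single cutoff value below are simplifying assumptions of this sketch; the paper fuses three signals with noise-aware weighting.

```python
import numpy as np

def fourier_fuse(attenuation, scatter, cutoff=0.15):
    """Fuse two images in the Fourier domain: low spatial frequencies
    come from `attenuation`, high spatial frequencies from `scatter`."""
    fa = np.fft.fft2(attenuation)
    fs = np.fft.fft2(scatter)
    h, w = attenuation.shape
    fy = np.fft.fftfreq(h)[:, None]          # vertical frequency axis
    fx = np.fft.fftfreq(w)[None, :]          # horizontal frequency axis
    lowpass = (np.sqrt(fx**2 + fy**2) <= cutoff).astype(float)
    fused = fa * lowpass + fs * (1.0 - lowpass)
    return np.real(np.fft.ifft2(fused))

# Sanity check: fusing an image with itself must return the image unchanged
img = np.outer(np.arange(8.0), np.arange(8.0))
recon = fourier_fuse(img, img)
```

Because the masks sum to one at every frequency, the fused image keeps the overall appearance of the attenuation image while inheriting fine detail from the scatter signal.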
Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2017-11-01
Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). Number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
Fusion and Sense Making of Heterogeneous Sensor Network and Other Sources
2017-03-16
two benchmark datasets: the UIUC-Sport dataset and the LabelMe8 dataset. Comparisons of our method with other state-of-the-art methods are conducted. (1... of-the-art feature descriptors upon the UIUC-Sport dataset when they are used alone or combined with web resources under the multimodal fusion... web textual resources aided image classification framework can improve classification accuracy of some classes by 13% and 12% in the UIUC-Sports and
Multimodal biometric approach for cancelable face template generation
Paul, Padma Polash; Gavrilova, Marina
2012-06-01
Due to the rapid growth of biometric technology, template protection becomes crucial to secure integrity of the biometric security system and prevent unauthorized access. Cancelable biometrics is emerging as one of the best solutions to secure the biometric identification and verification system. We present a novel technique for robust cancelable template generation algorithm that takes advantage of the multimodal biometric using feature level fusion. Feature level fusion of different facial features is applied to generate the cancelable template. A proposed algorithm based on the multi-fold random projection and fuzzy communication scheme is used for this purpose. In cancelable template generation, one of the main difficulties is keeping interclass variance of the feature. We have found that interclass variations of the features that are lost during multi fold random projection can be recovered using fusion of different feature subsets and projecting in a new feature domain. Applying the multimodal technique in feature level, we enhance the interclass variability hence improving the performance of the system. We have tested the system for classifier fusion for different feature subset and different cancelable template fusion. Experiments have shown that cancelable template improves the performance of the biometric system compared with the original template.
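The multi-fold random projection step described above can be sketched as follows: project the fused feature vector with several random matrices seeded from a user-specific key and concatenate the results. This is an illustrative sketch only; the function name, key scheme, and dimensions are assumptions, and the paper's fuzzy scheme and feature-subset fusion are omitted.

```python
import numpy as np

def cancelable_template(features, user_key, n_folds=4, out_dim=16):
    """Multi-fold random projection: apply several key-seeded random
    projections to the feature vector and concatenate the folds.
    Revoking a compromised template only requires changing user_key."""
    rng = np.random.default_rng(user_key)   # key-seeded, reproducible
    folds = []
    for _ in range(n_folds):
        proj = rng.standard_normal((out_dim, features.size))
        folds.append(proj @ features)
    return np.concatenate(folds)

feat = np.linspace(0.0, 1.0, 64)   # stand-in fused face feature vector
t1 = cancelable_template(feat, user_key=42)
t2 = cancelable_template(feat, user_key=43)   # re-issued template
```

The same key always yields the same template (so matching still works), while a new key yields an uncorrelated template, which is the cancelability property.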
Multimodal surveillance sensors, algorithms, and systems
Zhu, Zhigang
2007-01-01
From front-end sensors to systems and environmental issues, this practical resource guides you through the many facets of multimodal surveillance. The book examines thermal, vibration, video, and audio sensors in a broad context of civilian and military applications. This cutting-edge volume provides an in-depth treatment of data fusion algorithms that takes you to the core of multimodal surveillance, biometrics, and sentient computing. The book discusses such people and activity topics as tracking people and vehicles and identifying individuals by their speech.Systems designers benefit from d
Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition
Directory of Open Access Journals (Sweden)
Naveed ur Rehman
2015-05-01
Full Text Available A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
Zhou, Jing; Zhu, Xingjun; Chen, Min; Sun, Yun; Li, Fuyou
2012-09-01
Multimodal imaging is rapidly becoming an important tool for biomedical applications because it can compensate for the deficiencies of individual imaging modalities. Herein, multifunctional NaLuF(4)-based upconversion nanoparticles (Lu-UCNPs) were synthesized through a facile one-step microemulsion method under ambient conditions. The doping of lanthanide ions (Gd(3+), Yb(3+) and Er(3+)/Tm(3+)) endows the Lu-UCNPs with high T(1)-enhancement, bright upconversion luminescence (UCL) emissions, and an excellent X-ray absorption coefficient. Moreover, the as-prepared Lu-UCNPs are stable in water for more than six months, due to the protection of the sodium glutamate and diethylene triamine pentacetate acid (DTPA) coordinating ligands on the surface. Lu-UCNPs have been successfully applied to trimodal CT/MR/UCL lymphatic imaging in small-animal models. It is worth noting that Lu-UCNPs could be used for imaging even after being preserved for over six months. In vitro transmission electron microscopy (TEM), methyl thiazolyl tetrazolium (MTT) assay and histological analysis demonstrated that Lu-UCNPs exhibited low toxicity to living systems. Therefore, Lu-UCNPs could serve as multimodal agents for CT/MR/UCL imaging, and the concept can serve as a platform technology for the next generation of probes for multimodal imaging. Copyright © 2012 Elsevier Ltd. All rights reserved.
Development of technology for medical image fusion
International Nuclear Information System (INIS)
Yamaguchi, Takashi; Amano, Daizou
2012-01-01
With entry into the field of medical diagnosis in mind, we have developed the positron emission tomography (PET) system "MIP-100", whose spatial resolution is far higher than that of conventional systems, using semiconductor detectors for preclinical imaging of small animals. In response to the recently increasing market demand to fuse functional images from PET with anatomical ones from CT or MRI, we have been developing software to implement an image fusion function that enhances the marketability of the PET camera. This paper describes the method of fusing the PET images with anatomical ones from a CT system with high accuracy. It also explains that a computer simulation proved the image overlay accuracy to be ±0.3 mm, and that the effectiveness of the developed software was confirmed in experiments with measured data. Achieving an accuracy of ±0.3 mm in software allows us to present fusion images with high resolution (<0.6 mm) without degrading the spatial resolution (<0.5 mm) of the PET system using semiconductor detectors. (author)
Multimodality imaging in macular telangiectasia 2: A clue to its pathogenesis
Directory of Open Access Journals (Sweden)
Lihteh Wu
2015-01-01
Full Text Available Macular telangiectasia type 2 also known as idiopathic perifoveal telangiectasia and juxtafoveolar retinal telangiectasis type 2A is an acquired bilateral neurodegenerative macular disease that manifests itself during the fifth or sixth decades of life. It is characterized by minimal dilatation of the parafoveal capillaries with graying of the retinal area involved, a lack of lipid exudation, right-angled retinal venules, refractile deposits in the superficial retina, hyperplasia of the retinal pigment epithelium, foveal atrophy, and subretinal neovascularization (SRNV. Our understanding of the disease has paralleled advances in multimodality imaging of the fundus. Optical coherence tomography (OCT images typically demonstrate the presence of intraretinal hyporeflective spaces that are usually not related to retinal thickening or fluorescein leakage. The typical fluorescein angiographic (FA finding is a deep intraretinal hyperfluorescent staining in the temporal parafoveal area. With time, the staining may involve the whole parafoveal area but does not extend to the center of the fovea. Long-term prognosis for central vision is poor, because of the development of SRNV or macular atrophy. Its pathogenesis remains unclear but multimodality imaging with FA, spectral domain OCT, adaptive optics, confocal blue reflectance and short wave fundus autofluorescence implicate Müller cells and macular pigment. Currently, there is no known treatment for this condition.
Directory of Open Access Journals (Sweden)
P. Bhattacharya
2007-11-01
Full Text Available To achieve effective and safe operation of a machine system in which the human and the machine interact, the machine needs to understand the human state, especially the cognitive state, when the human's operation task demands intensive cognitive activity. Because human cognitive states, behaviors and expressions or cues are highly uncertain, the recent trend in inferring the human state is to consider multimodal features of the human operator. In this paper, we present a method for multimodal inference of human cognitive states by integrating neuro-fuzzy network and information fusion techniques. To demonstrate the effectiveness of this method, we take driver fatigue detection as an example. The proposed method has, in particular, the following new features. First, human expressions are classified into four categories: (i) casual or contextual features, (ii) contact features, (iii) contactless features, and (iv) performance features. Second, the fuzzy neural network technique, in particular the Takagi-Sugeno-Kang (TSK) model, is employed to cope with uncertain behaviors. Third, the sensor fusion technique, in particular ordered weighted aggregation (OWA), is integrated with the TSK model in such a way that cues are taken as inputs to the TSK model, and the outputs of the TSK are then fused by the OWA, which gives outputs corresponding to particular cognitive states of interest (e.g., fatigue). We call this method TSK-OWA. Validation of the TSK-OWA, performed in the Northeastern University vehicle driving simulator, has shown that the proposed method is promising as a general tool for human cognitive state inference and a special tool for driver fatigue detection.
Assessment of Spatiotemporal Fusion Algorithms for Planet and Worldview Images.
Kwan, Chiman; Zhu, Xiaolin; Gao, Feng; Chou, Bryan; Perez, Daniel; Li, Jiang; Shen, Yuzhong; Koperski, Krzysztof; Marchisio, Giovanni
2018-03-31
Although Worldview-2 (WV) images (non-pansharpened) have 2-m resolution, the revisit times for the same areas may be seven days or more. In contrast, Planet images are collected using small satellites that can cover the whole Earth almost daily. However, the resolution of Planet images is 3.125 m. It would be ideal to fuse the images from these two satellites to generate images with high spatial resolution (2 m) and high temporal resolution (1 or 2 days) for applications such as damage assessment, border monitoring, etc. that require quick decisions. In this paper, we evaluate three approaches to fusing Worldview (WV) and Planet images: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), which have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and Worldview images demonstrated that the three approaches have comparable performance and can all generate high-quality prediction images.
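The core step of the HCM approach named above is a linear mapping between band spaces learned by least squares. A minimal sketch of that step (the toy data, dimensions and function names are assumptions of this sketch, and HCM's patchwise/local refinements are omitted):

```python
import numpy as np

def fit_color_mapping(src_pixels, dst_pixels):
    """Least-squares linear map (with bias term) from source-band space
    to destination-band space, the core step of Hybrid Color Mapping."""
    n = src_pixels.shape[0]
    A = np.hstack([src_pixels, np.ones((n, 1))])    # append bias column
    coeffs, *_ = np.linalg.lstsq(A, dst_pixels, rcond=None)
    return coeffs

def apply_color_mapping(src_pixels, coeffs):
    n = src_pixels.shape[0]
    A = np.hstack([src_pixels, np.ones((n, 1))])
    return A @ coeffs

# Toy data: destination bands are an exact affine function of source bands
rng = np.random.default_rng(1)
src = rng.random((100, 3))                      # 100 pixels, 3 bands
true_M = np.array([[1.0, 0.2, 0.0],
                   [0.0, 0.9, 0.1],
                   [0.3, 0.0, 1.1],
                   [0.05, 0.05, 0.05]])         # last row = bias
dst = np.hstack([src, np.ones((100, 1))]) @ true_M
C = fit_color_mapping(src, dst)
pred = apply_color_mapping(src, C)
```

In practice the mapping is fitted on a date where both sensors observed the scene, then applied to a later Planet acquisition to predict a WV-like image.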
Evaluation of registration strategies for multi-modality images of rat brain slices
International Nuclear Information System (INIS)
Palm, Christoph; Vieten, Andrea; Salber, Dagmar; Pietrzyk, Uwe
2009-01-01
In neuroscience, small-animal studies frequently involve series of images from multiple modalities such as histology and autoradiography. Consistent and bias-free restacking of multi-modality image series is obligatory as a starting point for subsequent non-rigid registration procedures and for quantitative comparisons with positron emission tomography (PET) and other in vivo data. Up to now, a stack of 2D slices restacked without cross-validation against an inherent 3D modality has frequently been presumed to be close to the true morphology because of the smooth appearance of the contours of anatomical structures. In multi-modality stacks, however, consistency is difficult to assess. In this work, consistency is defined in terms of the smoothness of neighboring slices within a single modality and between different modalities, while registration bias denotes the distortion of the registered stack relative to the true 3D morphology and shape. Based on these metrics, different restacking strategies for multi-modality rat brain slices are evaluated experimentally. Experiments based on MRI-simulated and real dual-tracer autoradiograms reveal a clear bias of the restacked volume despite quantitatively high consistency and qualitatively smooth brain structures. Different registration strategies, however, yield different inter-consistency metrics. If no genuine 3D modality is available, the use of the so-called SOP (slice-order preferred) or MOSOP (modality-and-slice-order preferred) strategy is recommended.
Pan, Feng; Deng, Yating; Ma, Xichao; Xiao, Wen
2017-11-01
Digital holographic microtomography is improved and applied to the measurements of three-dimensional refractive index distributions of fusion spliced optical fibers. Tomographic images are reconstructed from full-angle phase projection images obtained with a setup-rotation approach, in which the laser source, the optical system and the image sensor are arranged on an optical breadboard and synchronously rotated around the fixed object. For retrieving high-quality tomographic images, a numerical method is proposed to compensate the unwanted movements of the object in the lateral, axial and vertical directions during rotation. The compensation is implemented on the two-dimensional phase images instead of the sinogram. The experimental results exhibit distinctly the internal structures of fusion splices between a single-mode fiber and other fibers, including a multi-mode fiber, a panda polarization maintaining fiber, a bow-tie polarization maintaining fiber and a photonic crystal fiber. In particular, the internal structure distortion in the fusion areas can be intuitively observed, such as the expansion of the stress zones of polarization maintaining fibers, the collapse of the air holes of photonic crystal fibers, etc.
Multi-focus Image Fusion Using Epifluorescence Microscopy for Robust Vascular Segmentation
Pelapur, Rengarajan; Prasath, Surya; Palaniappan, Kannappan
2014-01-01
We are building a computerized image analysis system for the Dura Mater vascular network from fluorescence microscopy images. We propose a system that couples a multi-focus image fusion module with a robust adaptive filtering based segmentation. The robust adaptive filtering scheme handles noise without destroying small structures, and the multi-focus image fusion considerably improves the overall segmentation quality by integrating information from multiple images. Based on the segmenta...
Adaptive polarization image fusion based on regional energy dynamic weighted average
Institute of Scientific and Technical Information of China (English)
ZHAO Yong-qiang; PAN Quan; ZHANG Hong-cai
2005-01-01
According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, polarized images contain much redundant and complementary information. Since man-made and natural objects can be easily distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detail of the scene, clutter can be removed efficiently while detail is maintained by combining these images. An adaptive polarization image fusion algorithm based on regional energy dynamic weighted averaging is proposed in this paper to combine such images. In an experiment and in simulations, most clutter was removed by this algorithm. The fusion method was applied under different lighting conditions in simulation, and the influence of lighting conditions on the fusion results was analyzed.
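A minimal sketch of regional-energy-based dynamic weighting (a simplified reading of such an algorithm; the window size and test images are invented) could look like:

```python
import numpy as np

def regional_energy(img, k=3):
    """Sum of squared intensities in a k x k window around each pixel."""
    e = img.astype(float) ** 2
    pad = k // 2
    ep = np.pad(e, pad, mode="edge")
    out = np.zeros_like(e)
    for dy in range(k):            # accumulate the k*k shifted copies
        for dx in range(k):
            out += ep[dy:dy + e.shape[0], dx:dx + e.shape[1]]
    return out

def fuse_energy_weighted(a, b, k=3, eps=1e-12):
    """Dynamic weighted average: each source is weighted by its local energy."""
    ea, eb = regional_energy(a, k), regional_energy(b, k)
    wa = ea / (ea + eb + eps)
    return wa * a + (1 - wa) * b

a = np.zeros((8, 8)); a[2:6, 2:6] = 1.0   # strong structure in source A
b = np.zeros((8, 8)) + 0.1                # flat source B
f = fuse_energy_weighted(a, b)
```

Pixels where one source carries more local energy (structure) dominate the fused result, which is how clutter-free detail survives the averaging.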
Directory of Open Access Journals (Sweden)
Hui LIN
2014-12-01
Full Text Available An improved fusion algorithm for multi-source remote sensing images with high spatial resolution and multi-spectral capacity is proposed based on traditional IHS fusion and grey correlation analysis. First, the grey absolute correlation degree is used to discriminate edge from non-edge pixels in the high-spatial-resolution image, from which the weight of the intensity component is determined for combining it with the high-spatial-resolution image; image fusion is then achieved using the inverse IHS transform. The proposed method is applied to ETM+ multi-spectral and panchromatic images and to Quickbird multi-spectral and panchromatic images. The experiments prove that the proposed fusion method can efficiently preserve the spectral information of the original multi-spectral images while greatly enhancing spatial resolution. By comparison and analysis, the proposed algorithm outperforms both traditional IHS fusion and a fusion method based on grey correlation analysis and the IHS transform.
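The weighted intensity-substitution step can be sketched as follows (a simplified fast-IHS variant, not the paper's exact algorithm; the per-pixel weights here stand in for the grey-correlation edge discrimination, and all images are synthetic):

```python
import numpy as np

def ihs_like_fusion(ms, pan, w):
    """Weighted intensity substitution (fast IHS form): replace the intensity
    component I with w*pan + (1-w)*I, then add the difference back to every
    band, which is equivalent to the inverse IHS transform."""
    I = ms.mean(axis=2)
    I_new = w * pan + (1.0 - w) * I
    return ms + (I_new - I)[..., None]

rng = np.random.default_rng(1)
ms = rng.random((16, 16, 3))     # low-res multispectral, upsampled to pan grid
pan = rng.random((16, 16))       # high-res panchromatic
w = np.full((16, 16), 0.8)       # e.g. larger weight on edge pixels

fused = ihs_like_fusion(ms, pan, w)
```

With w = 0 the multispectral image passes through unchanged; with w = 1 the fused intensity equals the panchromatic image, the classical IHS substitution.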
Research on Methods of Infrared and Color Image Fusion Based on Wavelet Transform
Directory of Open Access Journals (Sweden)
Zhao Rentao
2014-06-01
Full Text Available The imaging characteristics of infrared and color images differ significantly, but the two carry very good complementary information. In this paper, based on the characteristics of infrared and color images, wavelet transform is first applied to the luminance component of the infrared image and the color image. At each resolution level, the regional variance is taken as the activity measure and the regional variance ratio as the matching measure, and the image is enhanced during integration; the fused image is then obtained by the final synthesis module and the multi-resolution inverse transform. The experimental results show that the fused image obtained by the proposed method is better than those of other methods at keeping the useful information of the original infrared image and the color information of the original color image. In addition, the fused image has stronger adaptability and better visual effect.
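A much-simplified sketch of wavelet-domain fusion (a one-level Haar transform with an absolute-value activity measure standing in for the paper's regional variance and matching measures; the test images are synthetic) might look like:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation LL plus details LH, HL, HH."""
    a = img.astype(float)
    L = (a[:, 0::2] + a[:, 1::2]) / 2
    H = (a[:, 0::2] - a[:, 1::2]) / 2
    return (L[0::2] + L[1::2]) / 2, ((L[0::2] - L[1::2]) / 2,
                                     (H[0::2] + H[1::2]) / 2,
                                     (H[0::2] - H[1::2]) / 2)

def ihaar2d(LL, details):
    """Inverse of haar2d (perfect reconstruction)."""
    LH, HL, HH = details
    L = np.empty((LL.shape[0] * 2, LL.shape[1]))
    H = np.empty_like(L)
    L[0::2], L[1::2] = LL + LH, LL - LH
    H[0::2], H[1::2] = HL + HH, HL - HH
    out = np.empty((L.shape[0], L.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = L + H, L - H
    return out

def fuse(a, b):
    """Average the approximations; keep the higher-activity detail coefficient."""
    LLa, da = haar2d(a)
    LLb, db = haar2d(b)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    return ihaar2d((LLa + LLb) / 2, tuple(pick(x, y) for x, y in zip(da, db)))

rng = np.random.default_rng(5)
a, b = rng.random((8, 8)), rng.random((8, 8))
fused = fuse(a, b)
```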
A color fusion method of infrared and low-light-level images based on visual perception
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images can be obtained by fusing infrared and low-light-level images, and they contain the information of both. Fusion images help observers understand multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images, while blindly adopting target extraction seriously affects the perception of scene information. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. Infrared and low-light-level color fusion images are obtained based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets but also retain rich natural information of the scenes.
Three dimensional image alignment, registration and fusion
International Nuclear Information System (INIS)
Treves, S.T.; Mitchell, K.D.; Habboush, I.H.
1998-01-01
Combined assessment of three-dimensional anatomical and functional images (SPECT, PET, MRI, CT) is useful for determining the nature and extent of lesions in many parts of the body. Physicians principally rely on their spatial sense to mentally re-orient and overlap images obtained with different imaging modalities. Objective methods that enable easy and intuitive image registration can help the physician arrive at more optimal diagnoses and better treatment decisions. This review describes a simple, intuitive, and robust image registration approach developed in our laboratory. It differs from most other registration techniques in that it allows the user to incorporate all of the available information within the images in the registration process. The method takes full advantage of the ability of knowledgeable operators to achieve image registration and fusion using an intuitive, interactive visual approach, and it can register images accurately and quickly without elaborate mathematical modeling or optimization techniques. It provides the operator with tools to manipulate images in three dimensions, including visual feedback techniques to assess the accuracy of registration (grids, overlays, masks, and fusion of images in different colors). Its application is not limited to brain imaging and can be applied to images from any region in the body. The overall effect is a registration algorithm that is easy to implement and can achieve accuracy on the order of one pixel.
Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion
Directory of Open Access Journals (Sweden)
Kan Ren
2014-01-01
Full Text Available We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior on image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from compressive measurements. A set of multiscale dictionaries is then learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear weighted fusion rule is proposed to obtain the high-resolution image. Experiments are conducted to investigate the performance of the proposed method, and the results demonstrate its superiority to its counterparts.
Ultrasound and PET-CT image fusion for prostate brachytherapy image guidance
International Nuclear Information System (INIS)
Hasford, F.
2015-01-01
Fusion of medical images between different cross-sectional modalities is widely used, mostly where functional images are fused with anatomical data. Ultrasound has for some time been the standard imaging technique used for treatment planning of prostate cancer cases. While this approach is laudable and has yielded some positive results, a recent development has been the integration of images from ultrasound and other modalities such as PET-CT to complement the missing properties of ultrasound images. This study sought to enhance diagnosis and treatment of prostate cancers by developing MATLAB algorithms to fuse ultrasound and PET-CT images. The fused ultrasound-PET-CT image has been shown to contain better-quality information than the individual input images, with reduced uncertainty, increased reliability, robust system performance, and compact representation of information. The objective of co-registering the ultrasound and PET-CT images was achieved by evaluating the performance of the ultrasound and PET-CT imaging systems, developing an image contrast enhancement algorithm, developing a MATLAB image fusion algorithm, and assessing the accuracy of the fusion algorithm. Performance evaluation of the ultrasound brachytherapy system produced satisfactory results in accordance with the tolerances recommended by AAPM TG 128. Using an ultrasound brachytherapy quality assurance phantom, an average axial distance measurement of 10.11 ± 0.11 mm was estimated. Average lateral distance measurements of 10.08 ± 0.07 mm, 20.01 ± 0.06 mm, 29.89 ± 0.03 mm, and 39.84 ± 0.37 mm were estimated for the inter-target distances corresponding to 10 mm, 20 mm, 30 mm, and 40 mm, respectively. Volume accuracy assessment produced measurements of 3.97 cm³, 8.86 cm³, and 20.11 cm³ for known standard volumes of 4 cm³, 9 cm³, and 20 cm³, respectively. Depth of penetration assessment of the ultrasound system produced an estimate of 5.37 ± 0.02 cm
International Nuclear Information System (INIS)
Zhang Mutian; Huang Minming; Le, Carl; Zanzonico, Pat B; Ling, C Clifton; Koutcher, Jason A; Humm, John L; Claus, Filip; Kolbert, Katherine S; Martin, Kyle
2008-01-01
Dedicated small-animal imaging devices, e.g. positron emission tomography (PET), computed tomography (CT) and magnetic resonance imaging (MRI) scanners, are being increasingly used for translational molecular imaging studies. The objective of this work was to determine the positional accuracy and precision with which tumors in situ can be reliably and reproducibly imaged on dedicated small-animal imaging equipment. We designed, fabricated and tested a custom rodent cradle with a stereotactic template to facilitate registration among image sets. To quantify tumor motion during our small-animal imaging protocols, 'gold standard' multi-modality point markers were inserted into tumor masses on the hind limbs of rats. Three types of imaging examination were then performed with the animals continuously anesthetized and immobilized: (i) consecutive microPET and MR images of tumor xenografts in which the animals remained in the same scanner for 2 h duration, (ii) multi-modality imaging studies in which the animals were transported between distant imaging devices and (iii) serial microPET scans in which the animals were repositioned in the same scanner for subsequent images. Our results showed that the animal tumor moved by less than 0.2-0.3 mm over a continuous 2 h microPET or MR imaging session. The process of transporting the animal between instruments introduced additional errors of ∼0.2 mm. In serial animal imaging studies, the positioning reproducibility within ∼0.8 mm could be obtained.
International Nuclear Information System (INIS)
Cuocolo, Alberto; Breatnach, Eamann
2010-01-01
Multimodality imaging represents an area of rapid growth with important professional implications for both nuclear medicine physicians and radiologists throughout Europe. As a preliminary step toward future action aimed at improving the quality and accessibility of PET/SPECT/CT multimodality imaging practice in Europe, the European Association of Nuclear Medicine (EANM) and the European Society of Radiology (ESR) performed a survey among the individual membership of both societies to obtain information on the status of multimodality imaging in their facilities and their visions of future training for combined modalities. A questionnaire was forwarded to all individual members of the EANM and ESR. The main subject matter of the questionnaire related to: (1) study performance, current procedures, and current equipment including its supervisory personnel at respondents' individual facilities and (2) visions of future practice and performance, and the potential for combined interdisciplinary viewing and training of future professionals. The reporting and billing procedures for multimodality imaging studies are very heterogeneous across European countries. The majority of the members of both societies believe that the proportion of PET/CT conducted as a full diagnostic CT with contrast enhancement will increase over time. As expected, 18F-FDG is the most commonly used PET tracer for clinical applications. The large majority of respondents were in favour of an interdisciplinary training programme being developed at a European level jointly by the EANM, the ESR, and the respective sections of the European Union of Medical Specialists. The results of this survey show that there is wide heterogeneity in the current practice of multimodality imaging in Europe. This situation may limit the full potential and integration of multimodality imaging within the clinical arena. There is a strong desire within both specialties for the development of interdisciplinary training to address some
Ribeiro, André Santos; Lacerda, Luís Miguel; Silva, Nuno André da; Ferreira, Hugo Alexandre
2015-06-01
The Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox is a fully automated all-in-one connectivity analysis toolbox that offers both pre-processing, connectivity, and graph theory analysis of multimodal images such as anatomical, diffusion, and functional MRI, and PET. In this work, the MIBCA functionalities were used to study Alzheimer's Disease (AD) in a multimodal MR/PET approach. Materials and Methods: Data from 12 healthy controls, and 36 patients with EMCI, LMCI and AD (12 patients for each group) were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu), including T1-weighted (T1-w), Diffusion Tensor Imaging (DTI) data, and 18F-AV-45 (florbetapir) dynamic PET data from 40-60 min post injection (4x5 min). Both MR and PET data were automatically pre-processed for all subjects using MIBCA. T1-w data was parcellated into cortical and subcortical regions-of-interest (ROIs), and the corresponding thicknesses and volumes were calculated. DTI data was used to compute structural connectivity matrices based on fibers connecting pairs of ROIs. Lastly, dynamic PET images were summed, and the relative Standard Uptake Values calculated for each ROI. Results: An overall higher uptake of 18F-AV-45, consistent with an increased deposition of beta-amyloid, was observed for the AD group. Additionally, patients showed significant cortical atrophy (thickness and volume) especially in the entorhinal cortex and temporal areas, and a significant increase in Mean Diffusivity (MD) in the hippocampus, amygdala and temporal areas. Furthermore, patients showed a reduction of fiber connectivity with the progression of the disease, especially for intra-hemispherical connections. Conclusion: This work shows the potential of the MIBCA toolbox for the study of AD, as findings were shown to be in agreement with the literature. Here, only structural changes and beta-amyloid accumulation were considered. Yet, MIBCA is further able to
Hammer, Daniel X.; Ferguson, R. D.; Patel, Ankit H.; Iftimia, Nicusor V.; Mujat, Mircea; Husain, Deeba
2009-02-01
Subretinal neovascular membranes (SRNM) are a deleterious complication of laser eye injury and retinal diseases such as age-related macular degeneration (AMD), choroiditis, and myopic retinopathy. Photodynamic therapy (PDT) and anti-vascular endothelial growth factor (VEGF) drugs are approved treatment methods. PDT acts by selective dye accumulation, activation by laser light, and disruption and clotting of the new leaky vessels. However, PDT surgery is currently not image-guided, nor does it proceed in an efficient or automated manner. This may contribute to the high rate of re-treatment. We have developed a multimodal scanning laser ophthalmoscope (SLO) for automated diagnosis and image-guided treatment of SRNMs associated with AMD. The system combines line scanning laser ophthalmoscopy (LSLO), fluorescein angiography (FA), indocyanine green angiography (ICGA), PDT laser delivery, and retinal tracking in a compact, efficient platform. This paper describes the system hardware and software design, performance characterization, and automated patient imaging and treatment session procedures and algorithms. Also, we present initial imaging and tracking measurements on normal subjects and automated lesion demarcation and sizing analysis of previously acquired angiograms. Future pre-clinical testing includes line scanning angiography and PDT treatment of AMD subjects. The automated acquisition procedure, enhanced and expedited data post-processing, and innovative image visualization and interpretation tools provided by the multimodal retinal imager may eventually aid in the diagnosis, treatment, and prognosis of AMD and other retinal diseases.
A method based on IHS cylindrical transform model for quality assessment of image fusion
Zhu, Xiaokun; Jia, Yonghong
2005-10-01
Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for quality assessment of image fusion in remote sensing have become research issues both at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects: on the one hand, most indexes lack theoretical support for comparing different fusion methods; on the other hand, there is no uniform preference among the quantitative assessment indexes used to estimate fusion effects. That is, spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method that unifies spatial and spectral feature assessment. In this paper, on the basis of an approximate general model of four traditional fusion methods, namely Intensity-Hue-Saturation (IHS) triangle transform fusion, High-Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only evaluate spatial and spectral features under a uniform preference, but can also compare fusion image sources with fused images and reveal differences among fusion methods. Compared with traditional assessment methods, the new method is more intuitive and accords better with subjective estimation.
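The correlation coefficient underlying such an assessment can be sketched as follows (a generic Pearson correlation between a reference band and a fused band; the test images are synthetic):

```python
import numpy as np

def correlation_coefficient(x, y):
    """Pearson correlation between two images, a common fidelity index when
    comparing a fused band against the corresponding original band."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

rng = np.random.default_rng(2)
band = rng.random((64, 64))
fused_good = band + 0.01 * rng.random((64, 64))   # mild distortion
fused_bad = rng.random((64, 64))                  # unrelated image

print(correlation_coefficient(band, fused_good))  # close to 1
print(correlation_coefficient(band, fused_bad))   # near 0
```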
Fusion of multispectral and panchromatic images using multirate filter banks
Institute of Scientific and Technical Information of China (English)
Wang Hong; Jing Zhongliang; Li Jianxun
2005-01-01
In this paper, an image fusion method based on filter banks is proposed for merging a high-resolution panchromatic image and a low-resolution multispectral image. First, the filter banks are designed, using cosine modulation, to merge different signals with minimum distortion. Then, filter-bank-based image fusion is adopted to obtain a high-resolution multispectral image that combines the spectral characteristics of the low-resolution data with the spatial resolution of the panchromatic image. Finally, two different experiments and the corresponding performance analysis are presented. Experimental results indicate that the proposed approach outperforms the IHS transform, the discrete wavelet transform, and the discrete wavelet frame.
Energy Technology Data Exchange (ETDEWEB)
Kamran, Mudassar, E-mail: kamranm@mir.wustl.edu; Fowler, Kathryn J., E-mail: fowlerk@mir.wustl.edu; Mellnick, Vincent M., E-mail: mellnickv@mir.wustl.edu [Washington University School of Medicine, Mallinckrodt Institute of Radiology (United States); Sicard, Gregorio A., E-mail: sicard@wudosis.wustl.edu [Washington University School of Medicine, Department of Surgery (United States); Narra, Vamsi R., E-mail: narrav@mir.wustl.edu [Washington University School of Medicine, Mallinckrodt Institute of Radiology (United States)
2016-06-15
Primary aortic neoplasms are rare. Aortic sarcoma arising after endovascular aneurysm repair (EVAR) is a scarce subset of primary aortic malignancies, reports of which are infrequent in the published literature. The diagnosis of aortic sarcoma is challenging due to its non-specific clinical presentation, and the prognosis is poor due to delayed diagnosis, rapid proliferation, and propensity for metastasis. Post-EVAR, aortic sarcomas may mimic other, more common aortic processes on surveillance imaging. Radiologists are rarely familiar with this rare entity, for which multimodality imaging and awareness are invaluable to early diagnosis. A series of three pathologically confirmed cases is presented to illustrate the multimodality imaging features and clinical presentations of aortic sarcoma arising after EVAR.
Advances in fusion of PET, SPET, CT and MRT images
International Nuclear Information System (INIS)
Pietrzyk, U.
2003-01-01
Image fusion, as part of the correlative analysis of medical images, has gained ever more interest, and the fact that combined PET/CT systems are commercially available demonstrates its importance for medical diagnostics, therapy, and research-oriented applications. In this work the basics of image registration, its different strategies, and the mathematical and physical background are described. Successful image registration is an essential prerequisite for the next step, namely correlative medical image analysis. Means to verify image registration and the different modes of integrated display are presented, and their usefulness is discussed. Possible limitations in applying image fusion are pointed out in order to avoid misinterpretation. (orig.)
Fan, Quli; Cheng, Kai; Hu, Xiang; Ma, Xiaowei; Zhang, Ruiping; Yang, Min; Lu, Xiaomei; Xing, Lei; Huang, Wei; Gambhir, Sanjiv Sam; Cheng, Zhen
2014-01-01
Developing multifunctional and easily prepared nanoplatforms that integrate different modalities is highly challenging for molecular imaging. Here, we report the successful transformation of an important molecular target, melanin, into a novel multimodality imaging nanoplatform. Melanin is abundantly expressed in melanotic melanomas and has thus been actively studied as a target for melanoma imaging. In our work, the multifunctional biopolymer nanoplatform based on ultrasmall (
Data fusion of Landsat TM and IRS images in forest classification
Guangxing Wang; Markus Holopainen; Eero Lukkarinen
2000-01-01
Data fusion of Landsat TM images and Indian Remote Sensing satellite panchromatic image (IRS-1C PAN) was studied and compared to the use of TM or IRS image only. The aim was to combine the high spatial resolution of IRS-1C PAN to the high spectral resolution of Landsat TM images using a data fusion algorithm. The ground truth of the study was based on a sample of 1,020...
STRUCTURAL AND FUNCTIONAL CHARACTERIZATION OF BENIGN FLECK RETINA USING MULTIMODAL IMAGING.
Neriyanuri, Srividya; Rao, Chetan; Raman, Rajiv
2017-01-01
To report structural and functional features in a case series of benign fleck retina using multimodal imaging. Four cases of benign fleck retina underwent complete ophthalmic examination that included a detailed history, visual acuity and refractive error testing, the FM-100 hue test, dilated fundus evaluation, full-field electroretinogram, fundus photography with autofluorescence, fundus fluorescein angiography, and swept-source optical coherence tomography. The cases ranged in age from 19 to 35 years (3 males and 1 female). Parental consanguinity was reported in two cases. All were visually asymptomatic, with best-corrected visual acuity of 20/20 (moderate astigmatism) in both eyes. Low color discrimination was seen in two cases. Fundus photography showed pisciform flecks that were compactly placed at the posterior pole and discrete, diverging toward the periphery. Lesions were seen as smaller dots within 1500 microns of the fovea and were hyperfluorescent on autofluorescence. Palisading retinal pigment epithelium defects were seen at the posterior pole on fundus fluorescein angiography imaging; irregular hyperfluorescence was also noted. One case had reduced cone responses on full-field electroretinogram; the other three cases had normal electroretinograms. On optical coherence tomography, the level of the lesions varied from the retinal pigment epithelium and inner segment to the outer segment, extending to the external limiting membrane. Functional and structural deficits in benign fleck retina were picked up using multimodal imaging.
Spatiotemporal Analysis of RGB-D-T Facial Images for Multimodal Pain Level Recognition
DEFF Research Database (Denmark)
Irani, Ramin; Nasrollahi, Kamal; Oliu Simon, Marc
2015-01-01
facial images for pain detection and pain intensity level recognition. For this purpose, we extract energies released by facial pixels using a spatiotemporal filter. Experiments on a group of 12 elderly people applying the multimodal approach show that the proposed method successfully detects pain...
Improved detection probability of low level light and infrared image fusion system
Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang
2018-02-01
Low-light-level (LLL) images contain rich detail of the environment but are easily affected by the weather; in smoke, rain, cloud, or fog, much target information is lost. An infrared image, formed from the radiation emitted by the object itself, can "actively" capture target information in the scene. However, its contrast and resolution are poor, its ability to capture target detail is very limited, and its imaging mode does not conform to human visual habits. Fusing LLL and infrared images can compensate for the deficiencies of each sensor while exploiting the advantages of each. We first present the hardware design of the fusion circuit. Then, by computing the recognition probability of a target (one person) and of the background (trees), we find that the tree detection probability of the LLL image is higher than that of the infrared image, while the person detection probability of the infrared image is clearly higher than that of the LLL image. The detection probability of the fused image for both the person and the trees is higher than that of either single detector. Therefore, image fusion can significantly increase recognition probability and improve detection efficiency.
Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks
Audebert, Nicolas; Le Saux, Bertrand; Lefèvre, Sébastien
2018-06-01
In this work, we investigate various methods for semantic labeling of very high resolution multi-modal remote sensing data. In particular, we study how deep fully convolutional networks can be adapted to multi-modal and multi-scale remote sensing data for semantic labeling. Our contributions are threefold: (a) we present an efficient multi-scale approach to leverage both a large spatial context and the high-resolution data, (b) we investigate early and late fusion of Lidar and multispectral data, and (c) we validate our methods on two public datasets with state-of-the-art results. Our results indicate that late fusion makes it possible to recover from errors stemming from ambiguous data, while early fusion allows better joint feature learning, but at the cost of higher sensitivity to missing data.
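The early/late fusion distinction can be sketched with a toy per-pixel linear classifier standing in for the fully convolutional networks (all shapes, weights, and modality stand-ins below are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Per-pixel features of two modalities on a toy 8x8 scene
optical = rng.random((8, 8, 3))   # e.g. multispectral channels
lidar = rng.random((8, 8, 1))     # e.g. normalized DSM height

n_classes = 4

# Early fusion: concatenate the modalities, then apply ONE joint model
w_early = rng.standard_normal((4, n_classes))
logits_early = np.concatenate([optical, lidar], axis=-1) @ w_early
pred_early = logits_early.argmax(axis=-1)

# Late fusion: one model PER modality, then combine the per-modality scores
w_opt = rng.standard_normal((3, n_classes))
w_lid = rng.standard_normal((1, n_classes))
logits_late = optical @ w_opt + lidar @ w_lid
pred_late = logits_late.argmax(axis=-1)
```

Early fusion lets the joint model learn cross-modality features but fails if one input is missing; late fusion degrades gracefully, since each branch produces scores on its own.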
Color image guided depth image super resolution using fusion filter
He, Jin; Liang, Bin; He, Ying; Yang, Jun
2018-04-01
Depth cameras are currently playing an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, whereas color cameras can easily provide high-resolution (HR) color images. Using the color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm that takes an HR color image as a guide and an LR depth image as input. We use a fusion of the guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better quality HR depth images, both numerically and visually.
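A plain joint bilateral filter, one ingredient of the fusion filter described above, can be sketched as follows (the parameters and toy scene are invented, and the paper's actual filter also incorporates a guided filter):

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth the depth map with weights from BOTH spatial distance and
    intensity similarity in the guide image, preserving the guide's edges."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    dp = np.pad(depth.astype(float), radius, mode="edge")
    gp = np.pad(guide.astype(float), radius, mode="edge")
    for i in range(h):
        for j in range(w):
            dwin = dp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            gwin = gp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: pixels that look different in the guide count less
            range_w = np.exp(-(gwin - guide[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * range_w
            out[i, j] = (wgt * dwin).sum() / wgt.sum()
    return out

# Toy scene: noisy step-edge depth, clean guide with the same edge
guide = np.zeros((16, 16)); guide[:, 8:] = 1.0
depth = guide * 2.0 + 0.05 * np.random.default_rng(3).standard_normal((16, 16))
refined = joint_bilateral_filter(depth, guide)
```

The range weight prevents averaging across the guide's edge, so the depth discontinuity survives while the noise on each side is smoothed.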
Energy Technology Data Exchange (ETDEWEB)
Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros; Deschamps, Frederic [Gustave Roussy - Cancer Campus, Interventional Radiology Department (France); Petrover, David [Imagerie Médicale Paris Centre, IMPC (France); Baere, Thierry De [Gustave Roussy - Cancer Campus, Interventional Radiology Department (France)
2017-05-15
Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve visibility and targeting of liver metastases deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged conspicuous enough on US was performed under CT/US fusion imaging guidance. Conspicuity before and after fusion imaging was graded on a five-point scale, and significance was assessed by the Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5–14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0–2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1–5, p < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US and is an effective tool to increase the conspicuity of liver metastases initially deemed non-visualizable on conventional US imaging.
Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing
2018-04-01
Alzheimer's disease (AD) is a major neurodegenerative disease and the most common cause of dementia. Currently, no treatment exists to slow or stop the progression of AD. There is converging belief that disease-modifying treatments should focus on the early stages of the disease, that is, the mild cognitive impairment (MCI) and preclinical stages. Making a diagnosis of AD and offering a prognosis (likelihood of conversion to AD) at these early stages are challenging tasks, but possible with the help of multimodality imaging, such as magnetic resonance imaging (MRI), fluorodeoxyglucose (FDG)-positron emission tomography (PET), amyloid-PET, and the recently introduced tau-PET, which provide different but complementary information. This article is a focused review of research in the recent decade that used statistical machine learning and artificial intelligence methods to perform quantitative analysis of multimodality image data for diagnosis and prognosis of AD at the MCI or preclinical stages. We review the existing work in three subareas: diagnosis, prognosis, and methods for handling modality-wise missing data, a commonly encountered problem when using multimodality imaging for prediction or classification. Factors contributing to missing data include lack of imaging equipment, cost, difficulty of obtaining patient consent, and patient drop-out (in longitudinal studies). Finally, we summarize our major findings and provide recommendations for potential future research directions. Copyright © 2018 Elsevier Inc. All rights reserved.
Cuticular Drusen: Clinical Phenotypes and Natural History Defined Using Multimodal Imaging.
Balaratnasingam, Chandrakumar; Cherepanoff, Svetlana; Dolz-Marco, Rosa; Killingsworth, Murray; Chen, Fred K; Mendis, Randev; Mrejen, Sarah; Too, Lay Khoon; Gal-Or, Orly; Curcio, Christine A; Freund, K Bailey; Yannuzzi, Lawrence A
2018-01-01
To define the range and life cycles of cuticular drusen phenotypes using multimodal imaging and to review the histologic characteristics of cuticular drusen. Retrospective, observational cohort study and experimental laboratory study. Two hundred forty eyes of 120 clinic patients with a cuticular drusen phenotype and 4 human donor eyes with cuticular drusen (n = 2), soft drusen (n = 1), and hard drusen (n = 1). We performed a retrospective review of clinical and multimodal imaging data of patients with a cuticular drusen phenotype. Patients had undergone imaging with various combinations of color photography, fluorescein angiography, indocyanine green angiography, near-infrared reflectance, fundus autofluorescence, high-resolution OCT, and ultrawide-field imaging. Human donor eyes underwent processing for high-resolution light and electron microscopy. Appearance of cuticular drusen in multimodal imaging and the topography of a cuticular drusen distribution; age-dependent variations in cuticular drusen phenotypes, including the occurrence of retinal pigment epithelium (RPE) abnormalities, choroidal neovascularization, acquired vitelliform lesions (AVLs), and geographic atrophy (GA); and ultrastructural and staining characteristics of druse subtypes. The mean age of patients at the first visit was 57.9±13.4 years. Drusen and RPE changes were seen in the peripheral retina, anterior to the vortex veins, in 21.8% of eyes. Of eyes with more than 5 years of follow-up, cuticular drusen disappeared from view in 58.3% of eyes, drusen coalescence was seen in 70.8% of eyes, and new RPE pigmentary changes developed in 56.2% of eyes. Retinal pigment epithelium abnormalities, AVLs, neovascularization, and GA occurred at a frequency of 47.5%, 24.2%, 12.5%, and 25%, respectively, and were significantly more common in patients older than 60 years of age (all P < 0.015). Occurrence of GA and neovascularization were important determinants of final visual acuity in eyes with the
Fusion of Images from Dissimilar Sensor Systems
National Research Council Canada - National Science Library
Chow, Khin
2004-01-01
Different sensors exploit different regions of the electromagnetic spectrum; therefore a multi-sensor image fusion system can take full advantage of the complementary capabilities of individual sensors in the suit...
Yokoya, Naoto; Ghamisi, Pedram; Xia, Junshi; Sukhanov, Sergey; Heremans, Roel; Tankoyeu, Ivan; Bechtel, Benjamin; Saux, Le Bertrand; Moser, Gabriele; Tuia, Devis
2018-01-01
In this paper, we present the scientific outcomes of the 2017 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society. The 2017 Contest was aimed at addressing the problem of local climate zones classification based on
Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel
2008-03-01
Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high-resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four- and eight-angle acquisitions.
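The content-based fusion step can be caricatured as follows: estimate per-pixel "sharpness" weights by Gaussian-smoothing a local detail measure for each aligned view, then blend the views by their normalized weights, so sharp regions dominate where other views are blurred. This is a hedged sketch of the idea only, not the authors' algorithm; the detail measure and all parameter values here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def content_weighted_fusion(views, sigma_detail=1.0, sigma_weight=5.0, eps=1e-8):
    """Fuse multiple registered views, weighting each pixel by local
    high-frequency content (a fast stand-in for local entropy).

    views: list of (H, W) float arrays, already aligned."""
    weights = []
    for v in views:
        # Local detail = squared deviation from a Gaussian-smoothed copy,
        # itself smoothed to give spatially coherent weights.
        detail = (v - gaussian_filter(v, sigma_detail)) ** 2
        weights.append(gaussian_filter(detail, sigma_weight) + eps)
    wsum = np.sum(weights, axis=0)
    return sum(w * v for w, v in zip(weights, views)) / wsum
```

Replacing entropy with a Gaussian-filtered detail measure keeps the fusion weights smooth (avoiding seams between views) while remaining separable and therefore fast, which matches the speed argument made in the abstract.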
A Remote Sensing Image Fusion Method based on adaptive dictionary learning
He, Tongdi; Che, Zongxi
2018-01-01
This paper discusses a remote sensing fusion method based on adaptive sparse representation (ASP) that provides improved spectral information, reduces data redundancy, and decreases system complexity. First, the training sample set is formed by taking random blocks from the images to be fused; the dictionary is then constructed from the training samples, with the remaining terms clustered by iterated processing at each step to obtain the complete dictionary. Second, a self-adaptive weighting rule based on regional energy is used to select the feature fusion coefficients and reconstruct the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
Multi-modality imaging of tumor phenotype and response to therapy
Nyflot, Matthew J.
2011-12-01
Imaging and radiation oncology have historically been closely linked. However, the vast majority of techniques used in the clinic involve anatomical imaging. Biological imaging offers the potential for innovation in the areas of cancer diagnosis and staging, radiotherapy target definition, and treatment response assessment. Some relevant imaging techniques are FDG PET (for imaging cellular metabolism), FLT PET (proliferation), CuATSM PET (hypoxia), and contrast-enhanced CT (vasculature and perfusion). Here, a technique for quantitative spatial correlation of tumor phenotype is presented for FDG PET, FLT PET, and CuATSM PET images. Additionally, multimodality imaging of treatment response with FLT PET, CuATSM, and dynamic contrast-enhanced CT is presented, in a trial of patients receiving an antiangiogenic agent (Avastin) combined with cisplatin and radiotherapy. Results are also presented for translational applications in animal models, including quantitative assessment of proliferative response to cetuximab with FLT PET and quantification of vascular volume with a blood-pool contrast agent (Fenestra). These techniques have clear applications to radiobiological research and optimized treatment strategies, and may eventually be used for personalized therapy for patients.
Fusion of imaging and nonimaging data for surveillance aircraft
Shahbazian, Elisa; Gagnon, Langis; Duquet, Jean Remi; Macieszczak, Maciej; Valin, Pierre
1997-06-01
This paper describes a phased incremental integration approach for application of image analysis and data fusion technologies to provide automated intelligent target tracking and identification for airborne surveillance on board an Aurora Maritime Patrol Aircraft. The sensor suite of the Aurora consists of a radar, an identification friend or foe (IFF) system, an electronic support measures (ESM) system, a spotlight synthetic aperture radar (SSAR), a forward looking infra-red (FLIR) sensor and a link-11 tactical datalink system. Lockheed Martin Canada (LMCan) is developing a testbed, which will be used to analyze and evaluate approaches for combining the data provided by the existing sensors, which were initially not designed to feed a fusion system. Three concurrent research proof-of-concept activities provide techniques, algorithms and methodology into three sequential phases of integration of this testbed. These activities are: (1) analysis of the fusion architecture (track/contact/hybrid) most appropriate for the type of data available, (2) extraction and fusion of simple features from the imaging data into the fusion system performing automatic target identification, and (3) development of a unique software architecture which will permit integration and independent evolution, enhancement and optimization of various decision aid capabilities, such as multi-sensor data fusion (MSDF), situation and threat assessment (STA) and resource management (RM).
Diffusion Maps for Multimodal Registration
Directory of Open Access Journals (Sweden)
Gemma Piella
2014-06-01
Full Text Available Multimodal image registration is a difficult task, due to the significant intensity variations between the images. A common approach is to use sophisticated similarity measures, such as mutual information, that are robust to those intensity variations. However, these similarity measures are computationally expensive and, moreover, often fail to capture the geometry and the associated dynamics linked with the images. Another approach is the transformation of the images into a common space where modalities can be directly compared. Within this approach, we propose to register multimodal images by using diffusion maps to describe the geometric and spectral properties of the data. Through diffusion maps, the multimodal data is transformed into a new set of canonical coordinates that reflect its geometry uniformly across modalities, so that meaningful correspondences can be established between them. Images in this new representation can then be registered using a simple Euclidean distance as a similarity measure. Registration accuracy was evaluated on both real and simulated brain images with known ground-truth for both rigid and non-rigid registration. Results showed that the proposed approach achieved higher accuracy than the conventional approach using mutual information.
McLoughlin, L C; Inder, S; Moran, D; O'Rourke, C; Manecksha, R P; Lynch, T H
2018-02-01
The diagnostic evaluation of a PSA recurrence after RP in the Irish hospital setting involves multimodality imaging with MRI, CT, and bone scanning, despite the low diagnostic yield of imaging at low PSA levels. We aim to investigate the value of multimodality imaging in PCa patients with a PSA recurrence after RP. Forty-eight patients with a PSA recurrence after RP who underwent multimodality imaging were evaluated. Demographic data, postoperative PSA levels, and the imaging studies performed at those levels were evaluated. Eight (21%) MRIs, 6 (33%) CTs, and 4 (9%) bone scans had PCa-specific findings. Three (12%) patients had a positive MRI at a PSA <1.0 ng/ml (p = 0.05), whereas no patient had a positive CT TAP below this level; positive findings on the other modalities were confined to PSA levels ≥1.1 ng/ml. MRI alone is of investigative value at PSA <1.0 ng/ml. The indication for CT, MRI, or isotope bone scanning should be carefully correlated with the clinical question and how it will affect further management.
Adaptive structured dictionary learning for image fusion based on group-sparse-representation
Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei
2018-04-01
Dictionary learning is the key step in sparse representation, one of the most widely used image representation theories in image fusion. Existing dictionary learning methods make poor use of group-structure information and of the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm-maximum fusion rule that innovatively uses grouped sparse coefficients to merge the images. The dictionary learning algorithm requires no prior knowledge about the group structure of the dictionary: by exploiting how the dictionary expresses the signal, it automatically discovers the latent structure hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and makes activity-level judgments on the structure information as the images are merged, so the fused image retains more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that both the dictionary learning algorithm and the fusion rule outperform the alternatives on several objective evaluation metrics.
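The activity-level idea behind an l1-norm-maximum rule can be sketched independently of the learned dictionary: for each patch, keep the sparse code whose coefficients have the larger l1-norm, i.e. the higher activity level. The sketch below makes that assumption explicit and ignores the group structure that the paper's rule additionally exploits.

```python
import numpy as np

def l1_max_fuse(coeffs_a, coeffs_b):
    """Choose-max fusion of sparse codes: for each patch (column), keep the
    coefficient vector with the larger l1-norm (higher activity level).

    coeffs_a, coeffs_b: (n_atoms, n_patches) sparse coefficient matrices
    obtained by coding corresponding patches of the two source images."""
    act_a = np.abs(coeffs_a).sum(axis=0)   # per-patch activity of image A
    act_b = np.abs(coeffs_b).sum(axis=0)   # per-patch activity of image B
    pick_a = act_a >= act_b
    return np.where(pick_a[np.newaxis, :], coeffs_a, coeffs_b)
```

The fused patches would then be reconstructed as `D @ l1_max_fuse(A, B)` for a shared dictionary `D` and re-assembled into the image; a grouped variant would sum the l1-norms within each coefficient group before choosing.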
Infrared and visible image fusion based on total variation and augmented Lagrangian.
Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi
2017-11-01
This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer, which achieves fusion by preserving the intensity of the infrared image and transferring the gradients of the corresponding visible image to the result. Plain gradient transfer suffers from low dynamic range and detail loss because it ignores the intensity of the visible image. The new algorithm solves these problems by adding intensity from the visible image to balance the intensities of the infrared and visible images. It formulates the fusion task as an l1-l1-TV minimization problem, then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem into a constrained one solvable in the framework of the alternating direction method of multipliers (ADMM). Experiments demonstrate that the new algorithm achieves better fusion results, with high computational efficiency, in both qualitative and quantitative tests than gradient transfer and most state-of-the-art methods.
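The gradient-transfer idea can be illustrated with a toy objective: keep the infrared intensities while pulling the fused image's gradients toward those of the visible image. The sketch below smooths the l1 terms with a Charbonnier penalty and minimizes by plain gradient descent with periodic finite differences; it is a stand-in for, not a reproduction of, the paper's l1-l1-TV formulation solved by ADMM, and all parameter values are illustrative.

```python
import numpy as np

def fuse_ir_vis(ir, vis, lam=2.0, steps=300, lr=0.01, eps=0.1):
    """Toy gradient-transfer fusion minimizing a smoothed objective
    sum |f - ir| + lam * sum |grad f - grad vis| by gradient descent."""
    dx = lambda u: np.roll(u, -1, axis=1) - u      # forward difference, periodic
    dy = lambda u: np.roll(u, -1, axis=0) - u
    dxT = lambda p: np.roll(p, 1, axis=1) - p      # adjoint of dx (periodic)
    dyT = lambda p: np.roll(p, 1, axis=0) - p
    f = ir.copy()
    for _ in range(steps):
        gx, gy = dx(f) - dx(vis), dy(f) - dy(vis)
        # Derivative of the Charbonnier-smoothed l1 terms.
        grad = (f - ir) / np.sqrt((f - ir) ** 2 + eps ** 2)
        grad += lam * dxT(gx / np.sqrt(gx ** 2 + eps ** 2))
        grad += lam * dyT(gy / np.sqrt(gy ** 2 + eps ** 2))
        f -= lr * grad
    return f
```

The step size is kept below the inverse Lipschitz constant of the smoothed objective, so each iteration decreases the energy; ADMM, as used in the paper, reaches the same kind of minimizer far faster on non-smoothed l1 terms.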
X-ray imaging in the laser-fusion program
International Nuclear Information System (INIS)
McCall, G.H.
1977-01-01
Imaging devices which are used or planned for x-ray imaging in the laser-fusion program are discussed. Resolution criteria are explained, and a suggestion is made for using the modulation transfer function as a uniform definition of resolution for these devices.
Multimodal segmentation of optic disc and cup from stereo fundus and SD-OCT images
Miri, Mohammad Saleh; Lee, Kyungmoo; Niemeijer, Meindert; Abràmoff, Michael D.; Kwon, Young H.; Garvin, Mona K.
2013-03-01
Glaucoma is one of the major causes of blindness worldwide. One important structural parameter for the diagnosis and management of glaucoma is the cup-to-disc ratio (CDR), which tends to become larger as glaucoma progresses. While approaches exist for segmenting the optic disc and cup within fundus photographs, and more recently, within spectral-domain optical coherence tomography (SD-OCT) volumes, no approaches have been reported for the simultaneous segmentation of these structures within both modalities combined. In this work, a multimodal pixel-classification approach for the segmentation of the optic disc and cup within fundus photographs and SD-OCT volumes is presented. In particular, after segmentation of other important structures (such as the retinal layers and retinal blood vessels) and fundus-to-SD-OCT image registration, features are extracted from both modalities and a k-nearest-neighbor classification approach is used to classify each pixel as cup, rim, or background. The approach is evaluated on 70 multimodal image pairs from 35 subjects in a leave-10%-out fashion (by subject). A significant improvement in classification accuracy is obtained using the multimodal approach over that obtained from the corresponding unimodal approach (97.8% versus 95.2%; p < 0.05; paired t-test).
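The classification stage can be sketched as a plain k-nearest-neighbor vote over per-pixel feature vectors pooled from both registered modalities. The feature layout and the class encoding below (0 = background, 1 = rim, 2 = cup) are illustrative assumptions, not the paper's exact features:

```python
import numpy as np

def knn_classify(train_feats, train_labels, test_feats, k=5):
    """Per-pixel k-NN classification. Each pixel's feature vector would
    combine values from both modalities (e.g. fundus intensity plus
    OCT-derived layer depths); labels are 0/1/2 for background/rim/cup."""
    # Pairwise squared Euclidean distances, shape (n_test, n_train).
    d2 = ((test_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d2, axis=1)[:, :k]
    votes = train_labels[nearest]
    # Majority vote among the k nearest training pixels.
    return np.array([np.bincount(v, minlength=3).argmax() for v in votes])
```

For real image volumes the brute-force distance matrix would be replaced by a KD-tree, but the decision rule is the same.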
Federici, Antoine; Aknoun, Sherazade; Savatier, Julien; Wattellier, Benoit F.
2017-02-01
Quadriwave lateral shearing interferometry (QWLSI) is a well-established quantitative phase imaging (QPI) technique based on the analysis of the interference pattern of four diffraction orders produced by an optical grating set in front of an array detector [1]. As a QPI modality, it is a non-invasive imaging technique that measures the optical path difference (OPD) of semi-transparent samples. We present a system enabling QWLSI with high-performance sCMOS cameras [2] and apply it to high-speed, low-noise, and multimodal imaging. This modified QWLSI system contains a versatile optomechanical device that images the optical grating near the detector plane; the device can be coupled with any kind of camera by varying its magnification. In this paper, we study the use of an Andor Zyla 5.5 sCMOS camera with our modified QWLSI system. We present high-speed live-cell imaging at frame rates up to 200 Hz, following fast intracellular motion while measuring the quantitative phase information. The structural and density information extracted from the OPD signal is complementary to the specific, localized fluorescence signal [2]. In addition, QPI detects cells even when the fluorophore is not expressed, which is very useful for following protein expression over time. With the 10 µm spatial pixel resolution of our modified QWLSI and the high sensitivity of the Zyla 5.5 enabling high-quality fluorescence imaging, we have carried out multimodal imaging revealing fine cell structures, such as actin filaments, merged with the morphological information of the phase. References: [1] P. Bon, G. Maucort, B. Wattellier, and S. Monneret, "Quadriwave lateral shearing interferometry for quantitative phase microscopy of living cells," Opt. Express, vol. 17, pp. 13080-13094, 2009. [2] P. Bon, S. Lécart, E. Fort and S. Lévêque-Fort, "Fast label-free cytoskeletal network imaging in living mammalian cells," Biophysical Journal, 106
The clinical utility of multimodal MR image-guided needle biopsy in cerebral gliomas.
Yao, Chengjun; Lv, Shunzeng; Chen, Hong; Tang, Weijun; Guo, Jun; Zhuang, Dongxiao; Chrisochoides, Nikos; Wu, Jinsong; Mao, Ying; Zhou, Liangfu
2016-01-01
Our aim was to evaluate the diagnostic value of multimodal Magnetic Resonance (MR) imaging in the stereotactic biopsy of cerebral gliomas and to investigate its implications. Twenty-four patients with cerebral gliomas underwent ¹H Magnetic Resonance Spectroscopy (¹H-MRS)- and intraoperative Magnetic Resonance Imaging (iMRI)-supported stereotactic biopsy, and 23 patients underwent only preoperative MRI-guided biopsy. The diagnostic yield, morbidity, and mortality rates were analyzed. In addition, 20 patients underwent subsequent tumor resection, allowing the diagnostic accuracy of the biopsy to be further evaluated. The diagnostic accuracy of biopsy, as evaluated against tumor resection, was better in the trial group than in the control group (92.3% and 42.9%, respectively, p = 0.031). The diagnostic yield in the trial group was also better than in the control group, but the difference was not statistically significant (100% and 82.6%, respectively, p = 0.05). The morbidity and mortality rates were similar in both groups. Multimodal MR image-guided glioma biopsy is practical and valuable: it increases the diagnostic accuracy of stereotactic biopsy of cerebral gliomas and is likely to increase the diagnostic yield, though the latter requires further validation.
Fan, Quli; Cheng, Kai; Hu, Xiang; Ma, Xiaowei; Zhang, Ruiping; Yang, Min; Lu, Xiaomei; Xing, Lei; Huang, Wei; Gambhir, Sanjiv Sam; Cheng, Zhen
2014-10-29
Developing multifunctional and easily prepared nanoplatforms with integrated different modalities is highly challenging for molecular imaging. Here, we report the successful transfer of an important molecular target, melanin, into a novel multimodality imaging nanoplatform. Melanin is abundantly expressed in melanotic melanomas and thus has been actively studied as a target for melanoma imaging. In our work, the multifunctional biopolymer nanoplatform based on ultrasmall (passive nanoplatforms require complicated and time-consuming processes for prebuilding reporting moieties or chemical modifications using active groups to integrate different contrast properties into one entity. In comparison, utilizing functional biomarker melanin can greatly simplify the building process. We further conjugated αvβ3 integrins, cyclic c(RGDfC) peptide, to MNPs to allow for U87MG tumor accumulation due to its targeting property combined with the enhanced permeability and retention (EPR) effect. The multimodal properties of MNPs demonstrate the high potential of endogenous materials with multifunctions as nanoplatforms for molecular theranostics and clinical translation.
Xu, Huan; Cheng, Liang; Wang, Chao; Ma, Xinxing; Li, Yonggang; Liu, Zhuang
2011-12-01
Multimodal imaging and imaging-guided therapies have become a new trend in the current development of cancer theranostics. In this work, we encapsulate hydrophobic upconversion nanoparticles (UCNPs) together with iron oxide nanoparticles (IONPs) by using an amphiphilic block copolymer, poly (styrene-block-allyl alcohol) (PS(16)-b-PAA(10)), via a microemulsion method, obtaining an UC-IO@Polymer multi-functional nanocomposite system. Fluorescent dye and anti-cancer drug molecules can be further loaded inside the UC-IO@Polymer nanocomposite for additional functionalities. Utilizing the Squaraine (SQ) dye loaded nanocomposite (UC-IO@Polymer-SQ), triple-modal upconversion luminescence (UCL)/down-conversion fluorescence (FL)/magnetic resonance (MR) imaging is demonstrated in vitro and in vivo, and also applied for in vivo cancer cell tracking in mice. On the other hand, a chemotherapy drug, doxorubicin, is also loaded into the nanocomposite, forming an UC-IO@Polymer-DOX complex, which enables novel imaging-guided and magnetic targeted drug delivery. Our work provides a method to fabricate a nanocomposite system with highly integrated functionalities for multimodal biomedical imaging and cancer therapy. Copyright © 2011 Elsevier Ltd. All rights reserved.
Rouffiac, Valérie; Ser-Leroux, Karine; Dugon, Emilie; Leguerney, Ingrid; Polrot, Mélanie; Robin, Sandra; Salomé-Desnoulez, Sophie; Ginefri, Jean-Christophe; Sebrié, Catherine; Laplace-Builhé, Corinne
2015-03-01
In vivo high-resolution imaging of tumor development is possible through a dorsal skinfold chamber implanted on a mouse model. However, current intravital imaging systems are poorly tolerated by mice over time and do not allow multimodality imaging. Our project aims to develop a new chamber for: 1) long-term micro/macroscopic visualization of the tumor (vascular and cellular compartments) and the tissue microenvironment; and 2) multimodality imaging (photonic, MRI, and sonography). Our new experimental device was patented in March 2014, primarily assessed on 75 mice engrafted with the 4T1-Luc tumor cell line, and validated in confocal and multiphoton imaging after staining the mouse vasculature with Dextran 155 kDa-TRITC or Dextran 2000 kDa-FITC. Simultaneously, a universal stage was designed for optimal removal of respiratory and cardiac artifacts during microscopy assays. Experimental results from optical, ultrasound (B-mode and pulse subtraction mode), and MRI imaging (anatomic sequences) showed that our patented design, unlike commercial devices, improves longitudinal monitoring over several weeks (35 days on average versus 12 for the commercial chamber) and allows better characterization of the early and late tissue alterations due to tumor development. We also demonstrated compatibility with multimodality imaging and a 2.9-fold increase in mouse survival with our new skinfold chamber. Current developments include: 1) defining new procedures for multi-labelling of cells and tissue (screening of fluorescent molecules and imaging protocols); 2) developing ultrasound and MRI imaging procedures with specific probes; and 3) correlating optical/ultrasound/MRI data for a complete mapping of tumor development and its microenvironment.
International Nuclear Information System (INIS)
Gabriel, Michael; Hausler, Florian; Moncayo, Roy; Decristoforo, Clemens; Virgolini, Irene; Bale, Reto; Kovacs, Peter
2005-01-01
The aim of this study was to assess the value of multimodality imaging using a novel repositioning device with external markers for fusion of single-photon emission computed tomography (SPECT) and computed tomography (CT) images. The additional benefit of this methodological approach was analysed against SPECT and diagnostic CT alone in terms of detection rate, reliability, and anatomical assignment of abnormal SPECT findings. Fifty-three patients (30 males, 23 females) with known or suspected endocrine tumours were studied. Clinical indications for somatostatin receptor (SSTR) scintigraphy (SPECT/CT image fusion) included staging of newly diagnosed tumours (n=14) and detection of an unknown primary tumour in the presence of clinical and/or biochemical suspicion of neuroendocrine malignancy (n=20). Follow-up studies after therapy were performed in 19 patients. A mean activity of 400 MBq of 99mTc-EDDA/HYNIC-Tyr3-octreotide was given intravenously. SPECT using a dual-detector scintillation camera and diagnostic multi-detector CT were performed sequentially. To ensure reproducible positioning, patients were fixed in an individualised vacuum mattress with modality-specific external markers for co-registration. SPECT and CT data were initially interpreted separately, and the fused images were then interpreted jointly in consensus by nuclear medicine and diagnostic radiology physicians. SPECT was true-positive (TP) in 18 patients, true-negative (TN) in 16, false-negative (FN) in ten and false-positive (FP) in nine; CT was TP in 18 patients, TN in 21, FP in ten and FN in four. With image fusion (SPECT and CT), the scan result was TP in 27 patients (50.9%), TN in 25 patients (47.2%) and FN in one patient, this FN result being caused by multiple small liver metastases; sensitivity was 95% and specificity 100%. The difference between SPECT and SPECT/CT was statistically significant, as was the difference between CT and SPECT/CT image fusion (P<0
3D Image Fusion to Localise Intercostal Arteries During TEVAR
Directory of Open Access Journals (Sweden)
G. Koutouzi
Full Text Available Purpose: Preservation of intercostal arteries during thoracic aortic procedures reduces the risk of post-operative paraparesis. The origins of the intercostal arteries are visible on pre-operative computed tomography angiography (CTA, but rarely on intra-operative angiography. The purpose of this report is to suggest an image fusion technique for intra-operative localisation of the intercostal arteries during thoracic endovascular repair (TEVAR. Technique: The ostia of the intercostal arteries are identified and manually marked with rings on the pre-operative CTA. The optimal distal landing site in the descending aorta is determined and marked, allowing enough length for an adequate seal and attachment without covering more intercostal arteries than necessary. After 3D/3D fusion of the pre-operative CTA with an intra-operative cone-beam CT (CBCT, the markings are overlaid on the live fluoroscopy screen for guidance. The accuracy of the overlay is confirmed with digital subtraction angiography (DSA and the overlay is adjusted when needed. Stent graft deployment is guided by the markings. The initial experience of this technique in seven patients is presented. Results: 3D image fusion was feasible in all cases. Follow-up CTA after 1 month revealed that all intercostal arteries planned for preservation were patent. None of the patients developed signs of spinal cord ischaemia. Conclusion: 3D image fusion can be used to localise the intercostal arteries during TEVAR. This may preserve some intercostal arteries and reduce the risk of post-operative spinal cord ischaemia. Keywords: TEVAR, Intercostal artery, Spinal cord ischaemia, 3D image fusion, Image guidance, Cone-beam CT
Operative and economic evaluation of a 'Laser Printer Multimodality' System
International Nuclear Information System (INIS)
Battaglia, G.; Moscatelli, G.; Maroldi, R.; Chiesa, A.
1991-01-01
The increasing application of digital techniques to diagnostic imaging is causing significant changes in several related activities, such as the reproduction of digital images on film. In the Department of Diagnostic Imaging of the University of Brescia, about 70% of all images are produced by digital techniques; at present, most of these images are reproduced on film with a Multimodality System interfacing CT, MR, DSA, and DR units with a single laser printer. Our analysis evaluates the operative and economic aspects of image reproduction by comparing the 'single cassette' multiformat camera and the Laser Printer Multimodality System. Our results point out the advantages of reproducing images with a Laser Printer Multimodality System: outstanding quality, reproduction of multiple originals, and a marked reduction in the time needed for both image archiving and film handling. The Laser Printer Multimodality System saves over 5 hours/day, that is, the working day of an operator, who can thus be shifted to other functions. The important economic aspect of the reproduction of digital images on film proves the Laser Printer Multimodality System to have an advantage over cameras.
Toet, A.; Hogervorst, M.A.
2003-01-01
We applied a recently introduced universal image quality index Q that quantifies the distortion of a processed image relative to its original version, to assess the performance of different graylevel image fusion schemes. The method is as follows. First, we adopt an original test image as the
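The index Q referred to here is the Wang-Bovik universal image quality index. As a hedged illustration, a single global-window sketch in numpy is given below; practical evaluations compute Q over small sliding windows and average the local values:

```python
import numpy as np

def uiqi(x, y):
    """Universal image quality index Q of Wang & Bovik (single global window).

    Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y)) * (mean(x)^2 + mean(y)^2))

    Q equals 1 only when y is identical to x; loss of correlation, luminance
    shift, or contrast change each lowers the value.
    """
    x = np.asarray(x, dtype=np.float64).ravel()
    y = np.asarray(y, dtype=np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

To rank fusion schemes as described in the abstract, the fused result is scored with Q against a reference image, and the schemes are compared by their scores.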
Quality Assurance of Serial 3D Image Registration, Fusion, and Segmentation
International Nuclear Information System (INIS)
Sharpe, Michael; Brock, Kristy K.
2008-01-01
Radiotherapy relies on images to plan, guide, and assess treatment. Image registration, fusion, and segmentation are integral to these processes; specifically for aiding anatomic delineation, assessing organ motion, and aligning targets with treatment beams in image-guided radiation therapy (IGRT). Future developments in image registration will also improve estimations of the actual dose delivered and quantitative assessment in patient follow-up exams. This article summarizes common and emerging technologies and reviews the role of image registration, fusion, and segmentation in radiotherapy processes. The current quality assurance practices are summarized, and implications for clinical procedures are discussed.
International Nuclear Information System (INIS)
Allouni, A.K.; Davis, W.; Mankad, K.; Rankine, J.; Davagnanam, I.
2013-01-01
Radiologists frequently encounter studies demonstrating spinal instrumentation, either as part of the patient's postoperative evaluation, or as incidental to a study performed for another purpose. It is important for the reporting radiologist to identify potential complications of commonly used spinal implants. Part 1 of this review examined both the surgical approaches used and the normal appearances of these spinal implants and bone grafting techniques. This second part of the review will focus on the multimodal imaging strategy adopted in the assessment of the instrumented spine and the demonstration of imaging findings of common postoperative complications.
Improving Accuracy for Image Fusion in Abdominal Ultrasonography
Directory of Open Access Journals (Sweden)
Caroline Ewertsen
2012-08-01
Full Text Available Image fusion involving real-time ultrasound (US) is a technique where previously recorded computed tomography (CT) or magnetic resonance imaging (MRI) images are reformatted in a projection to fit the real-time US images after an initial co-registration. The co-registration aligns the images by means of common planes or points. We evaluated the accuracy of the alignment when varying parameters such as patient position, respiratory phase, and distance from the co-registration points/planes. We performed a total of 80 co-registrations and obtained the highest accuracy when the respiratory phase for the co-registration procedure was the same as when the CT or MRI was obtained. Furthermore, choosing co-registration points/planes close to the area of interest also improved the accuracy. With all settings optimized, a mean error of 3.2 mm was obtained. We conclude that image fusion involving real-time US is an accurate method for abdominal examinations and that the accuracy is influenced by various adjustable factors that should be kept in mind.
International Nuclear Information System (INIS)
Tae, Seong Ho; Vu, Nguyen H.; Jung, Young Yeon; Min, Jung Joon
2007-01-01
Magnetospirillum magneticum AMB-1 synthesizes uniform, nano-sized magnetite (Fe3O4) particles, which are referred to as bacterial magnetic particles (BacMPs). BacMPs have potential for various technological applications, and the molecular mechanism of their formation is of particular interest. In this study, we established a culture method for M. magneticum AMB-1 and analysed its growth properties and magnetic resonance image. The Magnetospirillum magneticum AMB-1 strain was obtained from ATCC and inoculated in Magnetospirillum growth medium (MSGM). M. magneticum was cultured at 26 °C with 60 rpm shaking, and the optical density (OD) at 600 nm was checked every 6 hours. Cultured M. magneticum that had reached stationary phase was collected by centrifugation and suspended in PBS. The MR image was taken with a 1.5 T MRI machine. The growth of M. magneticum reached 0.2 OD600 at 80 hours after inoculation. A bacterial suspension was made at a concentration of 2 X 10-11 CFU/ml, and an MR image was successfully taken with the 1.5 T MRI machine. The M. magneticum AMB strain was successfully cultured under our laboratory conditions and showed an intense MR image. We can now use this bacterium as a multimodal imaging vector if M. magneticum is transformed with a bioluminescent or fluorescent reporter gene. Further study on the development of the M. magneticum strain as a multimodal imaging agent is needed.
Shrestha, Sebina; Serafino, Michael J; Rico-Jimenez, Jesus; Park, Jesung; Chen, Xi; Zhaorigetu, Siqin; Walton, Brian L; Jo, Javier A; Applegate, Brian E
2016-09-01
Multimodal imaging probes a variety of tissue properties in a single image acquisition by merging complementary imaging technologies. Exploiting synergies amongst the data, algorithms can be developed that lead to better tissue characterization than could be accomplished by the constituent imaging modalities taken alone. The combination of optical coherence tomography (OCT) with fluorescence lifetime imaging microscopy (FLIM) provides access to detailed tissue morphology and local biochemistry. The optical system described here merges 1310 nm swept-source OCT with time-domain FLIM having excitation at 355 and 532 nm. The pulses from the 355 and 532 nm lasers have been interleaved to enable simultaneous acquisition of endogenous and exogenous fluorescence signals, respectively. The multimodal imaging system was validated using tissue phantoms. Nonspecific tagging with Alexa Fluor 532 in a Watanabe rabbit aorta and active tagging of the LOX-1 receptor in human coronary artery demonstrate the capacity of the system for simultaneous acquisition of OCT, endogenous FLIM, and exogenous FLIM in tissues.
Photoacoustic-Based Multimodal Nanoprobes: from Constructing to Biological Applications.
Gao, Duyang; Yuan, Zhen
2017-01-01
Multimodal nanoprobes have attracted intense attention since they can integrate various imaging modalities to obtain the complementary merits of the single modalities. Meanwhile, recent interest in laser-induced photoacoustic imaging is rapidly growing due to its unique advantages in visualizing tissue structure and function with high spatial resolution and satisfactory imaging depth. In this review, we summarize multimodal nanoprobes involving photoacoustic imaging. In particular, we focus on the methods used to construct multimodal nanoprobes, which we divide into two types. The first is the "one for all" concept, which exploits the intrinsic properties of the elements of a single particle. The second is the "all in one" concept, which integrates different functional blocks in one particle. Then, we briefly introduce the applications of the multifunctional nanoprobes for in vivo imaging and imaging-guided tumor therapy. Finally, we discuss the advantages and disadvantages of the present methods for constructing multimodal nanoprobes and share our viewpoints on this area.
International Nuclear Information System (INIS)
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-01-01
Computer-based diagnosis of Alzheimer's disease can be performed by analysing the functional and structural changes in the brain. Multispectral image fusion combines complementary information, while discarding redundant information, to achieve a single image that encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT, followed by dimensionality reduction using a modified Principal Component Analysis algorithm on the low-frequency coefficients. Further, the high-frequency coefficients are enhanced using a non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: phase congruency is applied to low-frequency coefficients, and a combination of directive contrast and normalized Shannon entropy is applied to high-frequency coefficients. The superiority of the fusion response is demonstrated by comparisons with other state-of-the-art fusion approaches (in terms of various fusion metrics).
Energy Technology Data Exchange (ETDEWEB)
Bhateja, Vikrant, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn; Moin, Aisha; Srivastava, Anuja [Shri Ramswaroop Memorial Group of Professional Colleges (SRMGPC), Lucknow, Uttar Pradesh 226028 (India); Bao, Le Nguyen [Duytan University, Danang 550000 (Viet Nam); Lay-Ekuakille, Aimé [Department of Innovation Engineering, University of Salento, Lecce 73100 (Italy); Le, Dac-Nhuong, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn [Duytan University, Danang 550000 (Viet Nam); Haiphong University, Haiphong 180000 (Viet Nam)
2016-07-15
Computer-based diagnosis of Alzheimer's disease can be performed by analysing the functional and structural changes in the brain. Multispectral image fusion combines complementary information, while discarding redundant information, to achieve a single image that encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT, followed by dimensionality reduction using a modified Principal Component Analysis algorithm on the low-frequency coefficients. Further, the high-frequency coefficients are enhanced using a non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: phase congruency is applied to low-frequency coefficients, and a combination of directive contrast and normalized Shannon entropy is applied to high-frequency coefficients. The superiority of the fusion response is demonstrated by comparisons with other state-of-the-art fusion approaches (in terms of various fusion metrics).
Directory of Open Access Journals (Sweden)
Atefeh Shirvani
2017-01-01
Full Text Available Background: In radiation therapy, computed tomography (CT) simulation is used for treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work tried to identify the practical issues for the combination of CT and MRI images in real clinical cases. The effect of various factors on image fusion quality is evaluated. Materials and Methods: In this study, the data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the possibility and quality of image fusion was evaluated. These parameters include the angle of the patient's head on the bed, slice thickness, slice gap, and height of the patient's head. Results: According to the results, the first dominating factor in the quality of image fusion was the difference in slice gap between CT and MRI images (cor = 0.86, P 4 cm and image fusion quality was <25%. Conclusion: The most important problem in image fusion is that MRI images are taken without regard to their use in treatment planning. In general, parameters related to the patient position during MRI imaging should be chosen to be consistent with the CT images of the patient in terms of location and angle.
Feature and score fusion based multiple classifier selection for iris recognition.
Islam, Md Rabiul
2014-01-01
The aim of this work is to propose a new feature- and score-fusion based iris recognition approach in which a voting method over a Multiple Classifier Selection technique is applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left-iris based unimodal system, a right-iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood-ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single-level feature fusion approach proposed by Hollingsworth et al.
Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition
Directory of Open Access Journals (Sweden)
Md. Rabiul Islam
2014-01-01
Full Text Available The aim of this work is to propose a new feature- and score-fusion based iris recognition approach in which a voting method over a Multiple Classifier Selection technique is applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left-iris based unimodal system, a right-iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood-ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single-level feature fusion approach proposed by Hollingsworth et al.
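The decision-level voting described in this abstract can be sketched generically; the label strings and the four-subsystem setup below are illustrative stand-ins, not the paper's actual HMM classifiers:

```python
from collections import Counter

def majority_vote(decisions):
    """Combine identity decisions from multiple classifiers by plurality vote.

    decisions: list of class labels, one per classifier (e.g. the two unimodal
    and two fusion-based subsystems described above). Ties are broken by the
    order in which labels first appear in the list.
    """
    return Counter(decisions).most_common(1)[0][0]

# Hypothetical outputs of four iris subsystems for one probe sample:
final = majority_vote(["subject_17", "subject_17", "subject_03", "subject_17"])
```

In a real system each decision would itself come from comparing features or likelihood-ratio scores against an enrolled gallery; only the final combination step is shown here.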
Novelty detection of foreign objects in food using multi-modal X-ray imaging
DEFF Research Database (Denmark)
Einarsdottir, Hildur; Emerson, Monica Jane; Clemmensen, Line Katrine Harder
2016-01-01
In this paper we demonstrate a method for novelty detection of foreign objects in food products using grating-based multimodal X-ray imaging. With this imaging technique three modalities are available with pixel correspondence, enhancing organic materials such as wood chips, insects and soft plastics not detectable by conventional X-ray absorption radiography. We conduct experiments, where several food products are imaged with common foreign objects typically found in the food processing industry. To evaluate the benefit from using this multi-contrast X-ray technique over conventional X-ray absorption imaging, a novelty detection scheme based on well-known image- and statistical-analysis techniques is proposed. The results show that the presented method gives superior recognition results and highlights the advantage of grating-based imaging.
Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study
Directory of Open Access Journals (Sweden)
Angel D. Sappa
2016-06-01
Full Text Available This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated here result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).
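One common wavelet fusion setup evaluated in such studies (average the approximation coefficients, keep the larger-magnitude detail coefficient) can be sketched with a hand-rolled one-level Haar transform. This is an illustrative baseline under the assumption of even image dimensions, not the paper's specific decomposition or rules:

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar decomposition: approximation + 3 detail bands."""
    a = a.astype(np.float64)
    ll = (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4
    lh = (a[0::2, 0::2] + a[0::2, 1::2] - a[1::2, 0::2] - a[1::2, 1::2]) / 4
    hl = (a[0::2, 0::2] - a[0::2, 1::2] + a[1::2, 0::2] - a[1::2, 1::2]) / 4
    hh = (a[0::2, 0::2] - a[0::2, 1::2] - a[1::2, 0::2] + a[1::2, 1::2]) / 4
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2 (reassembles the four pixels of each 2x2 cell)."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(vis, ir):
    """Average approximations; per detail coefficient keep the larger magnitude."""
    (la, *da), (lb, *db) = haar2(vis), haar2(ir)
    details = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(da, db)]
    return ihaar2((la + lb) / 2, *details)
```

Multi-level variants simply recurse on the `ll` band; the comparative question in the paper is which decomposition depth and which merging rule perform best across metrics.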
AMIDE: A Free Software Tool for Multimodality Medical Image Analysis
Directory of Open Access Journals (Sweden)
Andreas Markus Loening
2003-07-01
Full Text Available AMIDE's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's ability to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed, with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.
Novakovic, Dunja; Saarinen, Jukka; Rojalin, Tatu; Antikainen, Osmo; Fraser-Miller, Sara J; Laaksonen, Timo; Peltonen, Leena; Isomäki, Antti; Strachan, Clare J
2017-11-07
Two nonlinear imaging modalities, coherent anti-Stokes Raman scattering (CARS) and sum-frequency generation (SFG), were successfully combined for sensitive multimodal imaging of multiple solid-state forms and their changes on drug tablet surfaces. Two imaging approaches were used and compared: (i) hyperspectral CARS combined with principal component analysis (PCA) and SFG imaging and (ii) simultaneous narrowband CARS and SFG imaging. Three different solid-state forms of indomethacin, the crystalline gamma and alpha forms as well as the amorphous form, were clearly distinguished using both approaches. Simultaneous narrowband CARS and SFG imaging was faster, but hyperspectral CARS and SFG imaging has the potential to be applied to a wider variety of more complex samples. These methodologies were further used to follow crystallization of indomethacin on tablet surfaces under two storage conditions: 30 °C/23% RH and 30 °C/75% RH. Imaging with (sub)micron resolution showed that the approach allowed detection of very early stage surface crystallization. The surfaces progressively crystallized to predominantly (but not exclusively) the gamma form at lower humidity and the alpha form at higher humidity. Overall, this study suggests that multimodal nonlinear imaging is a highly sensitive, solid-state (and chemically) specific, rapid, and versatile imaging technique for understanding and hence controlling (surface) solid-state forms and their complex changes in pharmaceuticals.
Image fusion for enhanced forest structural assessment
CSIR Research Space (South Africa)
Roberts, JW
2011-01-01
Full Text Available This research explores the potential benefits of fusing active and passive medium resolution satellite-borne sensor data for forest structural assessment. Image fusion was applied as a means of retaining disparate data features relevant to modeling...
Region-based multifocus image fusion for the precise acquisition of Pap smear images.
Tello-Mijares, Santiago; Bescós, Jesús
2018-05-01
A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolau source (Pap smear) images is presented. These images, captured each in a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of original pixel information while achieving negligible visibility of fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged in a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimum modification of the value of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
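The general multifocus idea (score each area's sharpness in every image of the stack, then copy the sharpest source) can be sketched as follows. This is a hedged stand-in: the paper fuses mean-shift regions, whereas fixed square blocks and variance-of-Laplacian focus scoring are substituted here for brevity:

```python
import numpy as np

def focus_measure(patch):
    """Variance of a discrete Laplacian: higher = sharper. A common focus
    metric, used here as a hypothetical stand-in for the paper's own measure."""
    lap = (-4 * patch[1:-1, 1:-1] + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return lap.var()

def fuse_stack(stack, block=16):
    """For each block, copy pixels from the stack image whose block scores
    sharpest (block-based stand-in for the paper's mean-shift regions)."""
    h, w = stack[0].shape
    out = np.empty((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            patches = [img[i:i + block, j:j + block] for img in stack]
            best = max(range(len(stack)), key=lambda k: focus_measure(patches[k]))
            out[i:i + block, j:j + block] = patches[best]
    return out
```

Copying original pixels rather than blending transformed coefficients is what keeps fusion artifacts low, which is the design point the abstract emphasizes.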
An object-oriented framework for medical image registration, fusion, and visualization.
Zhu, Yang-Ming; Cochoff, Steven M
2006-06-01
An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework are illustrated to show its effectiveness: the first is for volume image grouping and re-sampling, the second for 2D registration and fusion, and the last for visualization of single images as well as registered volume images.
An integrated multimodality image-guided robot system for small-animal imaging research
International Nuclear Information System (INIS)
Hsu, Wen-Lin; Hsin Wu, Tung; Hsu, Shih-Ming; Chen, Chia-Lin; Lee, Jason J.S.; Huang, Yung-Hui
2011-01-01
We design and construct an image-guided robot system for use in small-animal imaging research. This device allows the use of co-registered small-animal PET-MRI images to guide the movements of robotic controllers, which will accurately place a needle probe at any predetermined location inside, for example, a mouse tumor, for biological readouts without sacrificing the animal. This system is composed of three major components: an automated robot device, a CCD monitoring mechanism, and a multimodality registration implementation. Specifically, the CCD monitoring mechanism was used for correction and validation of the robot device. To demonstrate the value of the proposed system, we performed a tumor hypoxia study that involved FMISO small-animal PET imaging and the delivering of a pO2 probe into the mouse tumor using the image-guided robot system. During our evaluation, the needle positioning error was found to be within 0.153±0.042 mm of desired placement; the phantom simulation errors were within 0.693±0.128 mm. In small-animal studies, the pO2 probe measurements in the corresponding hypoxia areas showed good correlation with significant, low tissue oxygen tensions (less than 6 mmHg). We have confirmed the feasibility of the system and successfully applied it to small-animal investigations. The system could be easily adapted to extend to other biomedical investigations in the future.
International Nuclear Information System (INIS)
Barthel, H.; Georgi, P.; Slomka, P.; Dannenberg, C.; Kahn, T.
2000-01-01
Parkinson's disease (PD) is characterized by a degeneration of nigrostriated dopaminergic neurons, which can be imaged with 123I-labeled 2β-carbomethoxy-3β-(4-iodophenyl)tropane ([123I]β-CIT) and single-photon emission computed tomography (SPECT). However, the quality of the region of interest (ROI) technique used for quantitative analysis of SPECT data is compromised by limited anatomical information in the images. We investigated whether the diagnosis of PD can be improved by combining the use of SPECT images with morphological image data from magnetic resonance imaging (MRI)/computed tomography (CT). We examined 27 patients (8 men, 19 women; aged 55±13 years) with PD (Hoehn and Yahr stage 2.1±0.8) by high-resolution [123I]β-CIT SPECT (185-200 MBq, Ceraspect camera). SPECT images were analyzed both by a unimodal technique (ROIs defined directly within the SPECT studies) and a multimodal technique (ROIs defined within individual MRI/CT studies and transferred to the corresponding interactively coregistered SPECT studies). [123I]β-CIT binding ratios (cerebellum as reference), which were obtained for heads of caudate nuclei (CA), putamina (PU), and global striatal structures, were compared with clinical parameters. Differences between contra- and ipsilateral (related to symptom dominance) striatal [123I]β-CIT binding ratios proved to be larger in the multimodal ROI technique than in the unimodal approach (e.g., for PU: 1.2*** vs. 0.7**). Binding ratios obtained by the unimodal ROI technique were significantly correlated with those of the multimodal technique (e.g., for CA: y=0.97x+2.8; r=0.70; P com subscore (r=-0.49* vs. -0.32). These results show that the impact of [123I]β-CIT SPECT for diagnosing PD is affected by the method used to analyze the SPECT images. The described multimodal approach, which is based on coregistration of SPECT and morphological imaging data, leads to improved determination of the degree of this dopaminergic disorder.
International Nuclear Information System (INIS)
Bischof Delaloye, Angelika; Carrio, Ignasi; Cuocolo, Alberto; Knapp, Wolfram; Gourtsoyiannis, Nicholas; McCall, Iain; Reiser, Maximilian; Silberman, Bruno
2007-01-01
New multimodality imaging systems bring together anatomical and molecular information and require the competency and accreditation of individuals from both nuclear medicine and radiology. This paper sets out the positions and aspirations of the European Association of Nuclear Medicine (EANM) and the European Society of Radiology (ESR) working together on an equal and constructive basis for the future benefit of both specialties. EANM and ESR recognise the importance of coordinating working practices for multimodality imaging systems and that undertaking the nuclear medicine and radiology components of imaging with hybrid systems requires different skills. It is important to provide adequate and appropriate training in the two disciplines in order to offer a proper service to the patient using hybrid systems. Training models are proposed with the overall objective of providing opportunities for acquisition of special competency certification in multimodality imaging. Both organisations plan to develop common procedural guidelines and recognise the importance of coordinating the purchasing and management of hybrid systems to maximise the benefits to both specialties and to ensure appropriate reimbursement of these examinations. European multimodality imaging research is operating in a highly competitive environment. The coming years will decide whether European research in this area manages to defend its leading position or whether it falls behind research in other leading economies. Since research teams in the Member States are not always sufficiently interconnected, more European input is necessary to create interdisciplinary bridges between research institutions in Europe and to stimulate excellence. EANM and ESR will work with the European Institute for Biomedical Imaging Research (EIBIR) to develop further research opportunities across Europe. European Union grant-funding bodies should allocate funds to joint research initiatives that encompass clinical research
International Nuclear Information System (INIS)
Gourtsoyiannis, Nicholas; McCall, Iain; Reiser, Maximilian; Silberman, Bruno; Bischof Delaloye, Angelika; Carrio, Ignacio; Cuocolo, Alberto; Knapp, Wolfram
2007-01-01
New multimodality imaging systems bring together anatomical and molecular information and require the competency and accreditation of individuals from both radiology and nuclear medicine. This paper sets out the positions and aspirations of the European Society of Radiology (ESR) and the European Association of Nuclear Medicine (EANM) working together on an equal and constructive basis for the future benefit of both specialties. ESR and EANM recognise the importance of coordinating working practices for multimodality imaging systems and that undertaking the radiology and nuclear medicine components of imaging with hybrid systems requires different skills. It is important to provide adequate and appropriate training in the two disciplines in order to offer a proper service to the patient using hybrid systems. Training models are proposed with the overall objective of providing opportunities for acquisition of special competency certification in multimodality imaging. Both organisations plan to develop common procedural guidelines and recognise the importance of coordinating the purchasing and management of hybrid systems to maximise the benefits to both specialties and to ensure appropriate reimbursement of these examinations. European multimodality imaging research is operating in a highly competitive environment. The coming years will decide whether European research in this area manages to defend its leading position or whether it falls behind research in other leading economies. Since research teams in the member states are not always sufficiently interconnected, more European input is necessary to create interdisciplinary bridges between research institutions in Europe and to stimulate excellence. ESR and EANM will work with the European Institute for Biomedical Imaging Research (EIBIR) to develop further research opportunities across Europe. European Union grant-funding bodies should allocate funds to joint research initiatives that encompass clinical research
FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES
Directory of Open Access Journals (Sweden)
J. Zhao
2017-09-01
Full Text Available The image segmentation method based on a two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainties in labeling pixels near the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image. Then, we segment the color image according to fuzzy reasoning. The experimental results show that our proposed method obtains better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarization SAR images.
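The soft-decision idea, replacing a hard threshold with per-channel memberships that are then fused, can be sketched minimally. The sigmoid membership function, its width, and simple averaging across channels are assumed choices for illustration; the paper's membership functions and fuzzy reasoning rules may differ:

```python
import numpy as np

def membership(intensity, t, width=10.0):
    """Soft 'foreground' membership around threshold t: pixels far above t
    approach 1, far below approach 0, and pixels near t stay uncertain (~0.5)
    instead of receiving a hard 0/1 label."""
    return 1.0 / (1.0 + np.exp(-(intensity - t) / width))

def fuse_channels(img_rgb, thresholds):
    """Fuse per-channel memberships by averaging, then defuzzify at 0.5.
    A sketch of soft multi-channel thresholding, not the paper's exact rules."""
    m = np.mean([membership(img_rgb[..., k], thresholds[k]) for k in range(3)],
                axis=0)
    return m > 0.5
```

The benefit over a two-dimensional-histogram hard threshold is that a pixel ambiguous in one channel can still be resolved by the evidence pooled from the other channels.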
Directory of Open Access Journals (Sweden)
Chantal Scheepbouwer
Full Text Available Bladder cancer is the fourth most common malignancy amongst men in Western industrialized countries, with an initial response rate of 70% for the non-muscle-invasive type, and improving therapy efficacy is highly needed. For this, an appropriate, reliable animal model is essential to gain insight into mechanisms of tumor growth for use in response monitoring of (new) agents. Several animal models have been described in previous studies, but so far success has been hampered by the absence of imaging methods to follow tumor growth non-invasively over time. Recent developments of multimodal imaging methods for use in animal research have substantially strengthened the options for in vivo visualization of tumor growth. In the present study, a multimodal imaging approach was adopted to investigate bladder tumor proliferation longitudinally. The complementary abilities of bioluminescence, high-resolution ultrasound, and photoacoustic imaging permit a better understanding of bladder tumor development. Hybrid imaging modalities allow the integration of individual strengths to enable sensitive and improved quantification and understanding of tumor biology, and ultimately can aid in the discovery and development of new therapeutics.
IMPROVING THE QUALITY OF NEAR-INFRARED IMAGING OF IN VIVO BLOOD VESSELS USING IMAGE FUSION METHODS
DEFF Research Database (Denmark)
Jensen, Andreas Kryger; Savarimuthu, Thiusius Rajeeth; Sørensen, Anders Stengaard
2009-01-01
We investigate methods for improving the visual quality of in vivo images of blood vessels in the human forearm. Using a near-infrared light source and a dual CCD chip camera system capable of capturing images in the visible and near-infrared spectra, we evaluate three fusion methods in terms of their capability of enhancing the blood vessels while preserving the spectral signature of the original color image. Furthermore, we investigate the possibility of removing hair from the images using a fusion rule based on the "à trous" stationary wavelet decomposition. The method with the best overall performance, with both speed and quality in mind, is the intensity injection method. Using the developed system and the methods presented in this article, it is possible to create images of high visual quality with highly emphasized blood vessels.
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
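The wavelet-domain fusion strategy described above (fuse smooth content in the low-frequency band, pick the locally stronger detail in the high-frequency bands) can be illustrated with a one-level Haar transform. This is a simplified stand-in for the paper's lifting-wavelet/RPCA scheme: the transform choice, the averaging rule for the approximation band, and the magnitude-based rule for detail bands are illustrative assumptions.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform (even-sized input assumed)."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Exact inverse of haar2d."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(img1, img2):
    """Average the low-frequency band; keep the high-frequency coefficient
    with larger magnitude (a crude proxy for regional variance selection)."""
    c1, c2 = haar2d(img1), haar2d(img2)
    a = (c1[0] + c2[0]) / 2
    detail = [np.where(np.abs(b1) >= np.abs(b2), b1, b2)
              for b1, b2 in zip(c1[1:], c2[1:])]
    return ihaar2d(a, *detail)
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check on the transform pair.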
International Nuclear Information System (INIS)
Servois, V.; El Khoury, C.; Lantoine, A.; Ollivier, L.; Neuenschwander, S.; Chauveinc, L.; Cosset, J.M.; Flam, T.; Rosenwald, J.C.
2003-01-01
To study different methods of CT and MR image fusion in patients treated by brachytherapy for localized prostate cancer, and to compare the results of the dosimetric study performed on CT slices and on fused images. Fourteen patients treated with ¹²⁵I implants were retrospectively studied. The CT examinations were performed with contiguous 5 mm thick slices, and MR images were obtained with a surface coil with contiguous 3 mm thick slices. For the image fusion process, only the T2-weighted MR sequence was used. Two image fusion processes were performed for each patient, using as landmarks either the bones of the pelvis or the implanted seeds. A quantitative and qualitative assessment was made by the operators for each patient and both image fusion methods. The dosimetric study, performed with dedicated software, was carried out on CT images and on all types of fused images. The usual dosimetric indexes (D90, V100 and V150) were compared for each type of image. The quantitative results given by the image fusion software showed better accuracy than that obtained with the pelvic bony landmarks. Conversely, the qualitative and quantitative assessments by the operators showed better accuracy for the image fusion based on iodine seeds. For two out of three patients presenting a D90 below 145 Gy on the CT examination, the D90 exceeded this norm when the dosimetry was based on image fusion, whatever the method used. The image fusion method based on implanted seed matching seems to be more precise than the one using bony landmarks. The dosimetric study performed on fused images could allow correction of possible errors, mainly due to difficulties in delineating the prostate contour on CT images. (authors)
Classification of ADHD children through multimodal Magnetic Resonance Imaging
Directory of Open Access Journals (Sweden)
Dai eDai
2012-09-01
Full Text Available Attention deficit/hyperactivity disorder (ADHD) is one of the most common disorders in school-age children. To date, the diagnosis of ADHD is mainly subjective, and studies of objective diagnostic methods are of great importance. Although many efforts have been made recently to investigate the use of structural and functional brain images for diagnostic purposes, few of them are related to ADHD. In this paper, we introduce an automatic classification framework based on brain imaging features of ADHD patients, and present in detail the feature extraction, feature selection, and classifier training methods. The effects of using different features are compared against each other. In addition, we integrate multimodal image features using multi-kernel learning (MKL). The performance of our framework has been validated in the ADHD-200 Global Competition, a worldwide classification contest on the ADHD-200 datasets. In this competition, our classification framework using features of resting-state functional connectivity was ranked 6th out of 21 participants under the competition scoring policy, and performed best in terms of sensitivity and J-statistic.
Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation
Song, Huihui
Remote sensing provides good measurements for monitoring and analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing demand for remote sensing data in a vast range of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data while retaining its other good properties, one cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and with spectral resolution, respectively, based on sparse representation theory. Taking as a study case Landsat ETM+ (with a spatial resolution of 30 m and a temporal resolution of 16 days) and MODIS (with a spatial resolution of 250 m to 1 km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image and the daily temporal resolution of the MODIS image. Motivated by the fact that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from the MODIS counterpart on the prediction date. To learn the spatial details well from the prior images, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat
Research on multi-source image fusion technology in haze environment
Ma, GuoDong; Piao, Yan; Li, Bing
2017-11-01
In a haze environment, the visible image collected by a single sensor can express the shape, color, and texture details of the target very well, but because of the haze, its sharpness is low and some target subjects are lost. An infrared image collected by a single sensor, owing to its expression of thermal radiation and strong penetration ability, can clearly express the target subject, but it loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. First, the improved dark channel prior algorithm is used to preprocess the hazy visible image. Second, the improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of visible targets and highlight occluded infrared targets for target recognition.
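The dehazing preprocessing step in the abstract above builds on the dark channel prior of He et al.: in haze-free regions, at least one RGB channel is near zero within a local patch. A minimal sketch of the standard (unimproved) dark channel computation, with patch size and edge padding as illustrative choices:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def dark_channel(img, patch=3):
    """Per-pixel minimum over the RGB channels and a local patch.

    img: (H, W, 3) float array with values in [0, 1]. Large dark-channel
    values indicate haze; a dehazing algorithm uses them to estimate the
    transmission map."""
    per_pixel_min = img.min(axis=2)          # minimum across color channels
    r = patch // 2
    padded = np.pad(per_pixel_min, r, mode='edge')
    # minimum over each (patch x patch) neighborhood
    return sliding_window_view(padded, (patch, patch)).min(axis=(-1, -2))
```

For a haze-free scene where one channel is zero everywhere, the dark channel is identically zero, which is exactly the prior's assumption.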
Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan
2015-11-01
In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of the different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human or machine visual perception.
International Nuclear Information System (INIS)
Ozge Er; Fatma Yurt Lambrecht; Kasim Ocakoglu; Cagla Kayabasi; Cumhur Gunduz
2015-01-01
In this study, the biological potential of a nickel chlorophyll derivative (Ni-PH-A) as a multimodal agent for tumor imaging and photodynamic therapy (PDT) was investigated. Optimum conditions for labeling with ¹³¹I were investigated and determined to be pH 10 and 1 mg of iodogen. Biodistribution results for ¹³¹I-labeled Ni-PH-A in female rats indicated that maximum uptake of the radiolabeled Ni-PH-A in the liver, spleen and ovary was observed at 30 min. Intracellular uptake and PDT efficacy of Ni-PH-A were better in MDAH-2774 (human ovarian endometrioid adenocarcinoma) cells than in MCF-7 (human breast adenocarcinoma) cells. Ni-PH-A might be a promising multimodal agent for lung, ovary and liver tumor imaging and PDT. (author)
Real-time image fusion involving diagnostic ultrasound
DEFF Research Database (Denmark)
Ewertsen, Caroline; Săftoiu, Adrian; Gruionu, Lucian G
2013-01-01
The aim of our article is to give an overview of the current and future possibilities of real-time image fusion involving ultrasound. We present a review of the existing English-language peer-reviewed literature assessing this technique, which covers technical solutions (for ultrasound...
VoxelStats: A MATLAB Package for Multi-Modal Voxel-Wise Brain Image Analysis.
Mathotaarachchi, Sulantha; Wang, Seqian; Shin, Monica; Pascoal, Tharick A; Benedet, Andrea L; Kang, Min Su; Beaudry, Thomas; Fonov, Vladimir S; Gauthier, Serge; Labbe, Aurélie; Rosa-Neto, Pedro
2016-01-01
In healthy individuals, behavioral outcomes are highly associated with variability in regional brain structure or neurochemical phenotypes. Similarly, in the context of neurodegenerative conditions, neuroimaging reveals that cognitive decline is linked to the magnitude of atrophy, neurochemical declines, or concentrations of abnormal protein aggregates across brain regions. However, modeling the effects of multiple regional abnormalities as determinants of cognitive decline at the voxel level remains largely unexplored by multimodal imaging research, given the high computational cost of estimating regression models for every single voxel from various imaging modalities. VoxelStats is a voxel-wise computational framework designed to overcome these computational limitations and to perform statistical operations on multiple scalar variables and imaging modalities at the voxel level. The VoxelStats package has been developed in MATLAB® and supports imaging formats such as NIfTI-1, ANALYZE, and MINC v2. Prebuilt functions in VoxelStats enable the user to perform voxel-wise general and generalized linear models and mixed effect models with multiple volumetric covariates. Importantly, VoxelStats can recognize scalar values or image volumes as response variables and can accommodate volumetric statistical covariates as well as their interaction effects with other variables. Furthermore, this package includes built-in functionality to perform voxel-wise receiver operating characteristic analysis and paired and unpaired group contrast analysis. Validation of VoxelStats was conducted by comparing the linear regression functionality with existing toolboxes such as glim_image and RMINC. The validation results were identical to existing methods and the additional functionality was demonstrated by generating feature case assessments (t-statistics, odds ratio, and true positive rate maps). In summary, VoxelStats expands the current methods for multimodal imaging analysis by allowing the
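The voxel-wise linear modeling that VoxelStats performs can be sketched compactly: the same design matrix is fit independently at every voxel, which in matrix form is a single least-squares solve. This is an illustrative sketch of the generic statistical idea, not VoxelStats' actual MATLAB implementation; the function name and shapes are assumptions.

```python
import numpy as np

def voxelwise_glm(Y, X):
    """Fit y_v = X @ beta_v independently for every voxel v.

    Y: (n_subjects, n_voxels) response values per voxel.
    X: (n_subjects, p) design matrix (covariates, intercept included).
    Returns beta (p, n_voxels) and the matching per-voxel t-statistics."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # solves all voxels at once
    resid = Y - X @ beta
    dof = X.shape[0] - X.shape[1]
    sigma2 = (resid ** 2).sum(axis=0) / dof        # per-voxel error variance
    se = np.sqrt(np.outer(np.diag(np.linalg.inv(X.T @ X)), sigma2))
    return beta, beta / se
```

The resulting t-statistic array has one value per covariate per voxel and can be reshaped back into a brain volume for display.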
Development of a novel fusion imaging technique in the diagnosis of hepatobiliary-pancreatic lesions
International Nuclear Information System (INIS)
Soga, Koichi; Ochiai, Jun; Miyajima, Takashi; Kassai, Kyoichi; Itani, Kenji; Yagi, Nobuaki; Naito, Yuji
2013-01-01
Multi-row detector computed tomography (MDCT) and magnetic resonance cholangiopancreatography (MRCP) play an important role in the imaging diagnosis of hepatobiliary-pancreatic lesions. Here we investigated whether unifying the MDCT and MRCP images onto the same screen using fusion imaging could overcome the limitations of each technique, while still maintaining their benefits. Moreover, because reports of fusion imaging using MDCT and MRCP are rare, we assessed the benefits and limitations of this method for its potential application in a clinical setting. The patient group included 9 men and 11 women. Among the 20 patients, the final diagnoses were as follows: 10 intraductal papillary mucinous neoplasms, 5 biliary system carcinomas, 1 pancreatic adenocarcinoma and 5 non-neoplastic lesions. After transmitting the Digital Imaging and Communication in Medicine data of the MDCT and MRCP images to a workstation, we performed a 3-D organisation of both sets of images using volume rendering for the image fusion. Fusion imaging enabled clear identification of the spatial relationship between a hepatobiliary-pancreatic lesion and the solid viscera and/or vessels. Further, this method facilitated the determination of the relationship between the anatomical position of the lesion and its surroundings more easily than either MDCT or MRCP alone. Fusion imaging is an easy technique to perform and may be a useful tool for planning treatment strategies and for examining pathological changes in hepatobiliary-pancreatic lesions. Additionally, the ease of obtaining the 3-D images suggests the possibility of using these images to plan intervention strategies.
Diaz, Silvana; Soto, Javier E; Inostroza, Fabian; Godoy, Sebastian E; Figueroa, Miguel
2017-07-01
We present a heterogeneous architecture for image registration and multimodal segmentation on an embedded system for noninvasive skin cancer screening. The architecture combines Otsu thresholding and the random walker algorithm to perform image segmentation, and features a hardware implementation of the Harris corner detection algorithm to perform region-of-interest detection and image registration. Running on a Xilinx XC7Z020 reconfigurable system-on-a-chip, our prototype computes the initial segmentation of a 400×400-pixel region of interest in the visible spectrum in 12.1 seconds, and registers infrared images against this region at 540 frames per second, while consuming 1.9 W.
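Otsu thresholding, the first stage of the segmentation pipeline above, picks the gray-level cutoff that maximizes the between-class variance of the two resulting pixel classes. A straightforward exhaustive-search sketch (the embedded system's hardware-friendly implementation would differ):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance.

    gray: integer image with values in [0, 255]."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On a cleanly bimodal image, any threshold between the two modes is optimal; the search returns the first such value.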
Multi-modality image reconstruction for dual-head small-animal PET
International Nuclear Information System (INIS)
Huang, Chang-Han; Chou, Cheng-Ying
2015-01-01
Hybrid positron emission tomography/computed tomography (PET/CT) and positron emission tomography/magnetic resonance imaging (PET/MRI) have become routine practice in clinics. The applications of multi-modality imaging can also benefit research advances. Consequently, a dedicated small-animal imaging system like the dual-head small-animal PET (DHAPET), which possesses the advantages of high detection sensitivity and high resolution, can exploit structural information from CT or MRI. It should be noted that the special detector arrangement of DHAPET leads to severe data truncation, thereby degrading image quality. We propose to take advantage of anatomical priors and total variation (TV) minimization methods to reconstruct the PET activity distribution from incomplete measurement data. The objective is to solve a penalized least-squares function consisting of a data fidelity term, a TV norm, and median root priors. In this work, we employed the splitting-based fast iterative shrinkage/thresholding algorithm to split the smooth and non-smooth functions in the convex optimization problem. Our simulation studies validated that images reconstructed with the proposed method outperform those obtained with conventional expectation maximization algorithms or without considering the anatomical prior information. Additionally, the convergence rate is also accelerated.
Joint sparse representation for robust multimodal biometrics recognition.
Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama
2014-01-01
Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we kernelize the algorithm to handle nonlinearity in the data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.
Wang, Y.; Tobias, B.; Chang, Y.-T.; Yu, J.-H.; Li, M.; Hu, F.; Chen, M.; Mamidanna, M.; Phan, T.; Pham, A.-V.; Gu, J.; Liu, X.; Zhu, Y.; Domier, C. W.; Shi, L.; Valeo, E.; Kramer, G. J.; Kuwahara, D.; Nagayama, Y.; Mase, A.; Luhmann, N. C., Jr.
2017-07-01
Electron cyclotron emission (ECE) imaging is a passive radiometric technique that measures electron temperature fluctuations; and microwave imaging reflectometry (MIR) is an active radar imaging technique that measures electron density fluctuations. Microwave imaging diagnostic instruments employing these techniques have made important contributions to fusion science and have been adopted at major fusion facilities worldwide including DIII-D, EAST, ASDEX Upgrade, HL-2A, KSTAR, LHD, and J-TEXT. In this paper, we describe the development status of three major technological advancements: custom mm-wave integrated circuits (ICs), digital beamforming (DBF), and synthetic diagnostic modeling (SDM). These have the potential to greatly advance microwave fusion plasma imaging, enabling compact and low-noise transceiver systems with real-time, fast tracking ability to address critical fusion physics issues, including ELM suppression and disruptions in the ITER baseline scenario, naturally ELM-free states such as QH-mode, and energetic particle confinement (i.e. Alfvén eigenmode stability) in high-performance regimes that include steady-state and advanced tokamak scenarios. Furthermore, these systems are fully compatible with today’s most challenging non-inductive heating and current drive systems and capable of operating in harsh environments, making them the ideal approach for diagnosing long-pulse and steady-state tokamaks.
Directory of Open Access Journals (Sweden)
Wu B
2017-06-01
Full Text Available Bo Wu,1,* Bing Wan,2,* Shu-Ting Lu,1 Kai Deng,3 Xiao-Qi Li,1 Bao-Lin Wu,1 Yu-Shuang Li,1 Ru-Fang Liao,1 Shi-Wen Huang,3 Hai-Bo Xu1,2 1Department of Radiology, Zhongnan Hospital of Wuhan University, 2Department of Radiology, Union Hospital of Tongji Medical College, Huazhong University of Science and Technology, 3Department of Chemistry, Key Laboratory of Biomedical Polymers, Ministry of Education, Wuhan University, Wuhan, People’s Republic of China *These authors contributed equally to this work Abstract: The major challenge for current clinical contrast agents (CAs) and chemotherapy is poor tumor selectivity and response. Based on the self-quenching property of IR820 at high concentrations and the different contrast enhancement of Gd-DOTA inside and outside the liposome, we developed “bomb-like” light-triggered CAs (LTCAs) for enhanced CT/MRI/FI multimodal imaging, which can specifically improve the signal-to-noise ratio of tumor tissue. IR820, iohexol, and Gd-chelates were first encapsulated at high concentration into a thermally sensitive nanocarrier, resulting in protection and fluorescence quenching. The release of the CAs was then triggered by near-infrared (NIR) laser irradiation, leading to fluorescence and MRI activation and enabling imaging of inflammation. In vitro and in vivo experiments demonstrated that LTCAs with 808 nm laser irradiation have a shorter T1 relaxation time in MRI and stronger intensity in FI compared with those without irradiation. Additionally, owing to the high photothermal conversion efficiency of IR820, injection of LTCAs was demonstrated to completely inhibit C6 tumor growth in nude mice for up to 17 days after NIR laser irradiation. The results indicate that LTCAs can serve as a promising platform for NIR-activated multimodal imaging and photothermal therapy. Keywords: light triggered, near-infrared light, tumor-specific, multimodal imaging, photothermal therapy, contrast agents
Nanodiamond Landmarks for Subcellular Multimodal Optical and Electron Imaging
Zurbuchen, Mark A.; Lake, Michael P.; Kohan, Sirus A.; Leung, Belinda; Bouchard, Louis-S.
2013-01-01
There is a growing need for biolabels that can be used in both optical and electron microscopies, are non-cytotoxic, and do not photobleach. Such biolabels could enable targeted nanoscale imaging of sub-cellular structures, and help to establish correlations between conjugation-delivered biomolecules and function. Here we demonstrate a sub-cellular multi-modal imaging methodology that enables localization of inert particulate probes, consisting of nanodiamonds having fluorescent nitrogen-vacancy centers. These are functionalized to target specific structures, and are observable by both optical and electron microscopies. Nanodiamonds targeted to the nuclear pore complex are rapidly localized in electron-microscopy diffraction mode to enable “zooming-in” to regions of interest for detailed structural investigations. Optical microscopies reveal nanodiamonds for in-vitro tracking or uptake-confirmation. The approach is general, works down to the single nanodiamond level, and can leverage the unique capabilities of nanodiamonds, such as biocompatibility, sensitive magnetometry, and gene and drug delivery. PMID:24036840
Morishima, Shigeo; Nakamura, Satoshi
2004-12-01
We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.
Bianchi, S; Rajamanickam, V P; Ferrara, L; Di Fabrizio, E; Liberale, C; Di Leonardo, R
2013-12-01
The use of individual multimode optical fibers in endoscopy applications has the potential to provide highly miniaturized and noninvasive probes for microscopy and optical micromanipulation. A few different strategies have been proposed recently, but they all suffer from intrinsically low resolution related to the low numerical aperture of multimode fibers. Here, we show that two-photon polymerization allows for direct fabrication of micro-optics components on the fiber end, resulting in an increase of the numerical aperture to a value that is close to 1. Coupling light into the fiber through a spatial light modulator, we were able to optically scan a submicrometer spot (300 nm FWHM) over an extended region, facing the opposite fiber end. Fluorescence imaging with improved resolution is also demonstrated.
Live Imaging of Mouse Secondary Palate Fusion
Czech Academy of Sciences Publication Activity Database
Kim, S.; Procházka, Jan; Bush, J.O.
jaro, č. 125 (2017), č. článku e56041. ISSN 1940-087X Institutional support: RVO:68378050 Keywords : Developmental Biology * Issue 125 * live imaging * secondary palate * tissue fusion * cleft * craniofacial Subject RIV: EB - Genetics ; Molecular Biology OBOR OECD: Developmental biology Impact factor: 1.232, year: 2016
SAR and Infrared Image Fusion in Complex Contourlet Domain Based on Joint Sparse Representation
Directory of Open Access Journals (Sweden)
Wu Yiquan
2017-08-01
Full Text Available To address the problems of the large grayscale difference between infrared and Synthetic Aperture Radar (SAR) images and of their fusion image not being fit for human visual perception, we propose a fusion method for SAR and infrared images in the complex contourlet domain based on joint sparse representation. First, we perform complex contourlet decomposition of the infrared and SAR images. Then, we employ the K-Singular Value Decomposition (K-SVD) method to obtain an over-complete dictionary of the low-frequency components of the two source images. Using a joint sparse representation model, we then generate a joint dictionary. We obtain the sparse representation coefficients of the low-frequency components of the source images in the joint dictionary by the Orthogonal Matching Pursuit (OMP) method and select them using the selection-maximization strategy. We then reconstruct these components to obtain the fused low-frequency components, and fuse the high-frequency components using two criteria: the coefficient of visual sensitivity and the degree of energy matching. Finally, we obtain the fusion image by the inverse complex contourlet transform. Compared with three classical fusion methods and recently presented fusion methods, e.g., one based on the Non-Subsampled Contourlet Transform (NSCT) and another based on sparse representation, the method proposed in this paper can effectively highlight the salient features of the two source images and inherit their information to the greatest extent.
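Two building blocks of the pipeline above, OMP sparse coding over a dictionary and the selection-maximization fusion of coefficients, are small enough to sketch directly. This is a generic illustration under simplifying assumptions (a plain least-squares OMP, no complex contourlet decomposition, no joint dictionary training), not the paper's full method.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: k-sparse code of signal x over dictionary D.

    D: (n, m) dictionary with column atoms; x: (n,) signal."""
    residual, support = x.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    code = np.zeros(D.shape[1])
    code[support] = coeffs
    return code

def fuse_codes(c1, c2):
    """Selection-maximization: keep, per atom, the coefficient of larger magnitude."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)
```

With an identity dictionary, OMP simply recovers the k largest entries of the signal, which makes for an easy correctness check.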
Disparity Disambiguation by Fusion of Signal and Symbolic-Level Information
DEFF Research Database (Denmark)
Ralli, J.; Diaz, J.; Ros, E.
2012-01-01
We describe a method for resolving ambiguities in low-level disparity calculations in a stereo-vision scheme by using a recurrent mechanism that we call the signal-symbol loop. Due to the local nature of low-level processing, it is not always possible to estimate the correct disparity values produced at this level. Symbolic abstraction of the signal produces robust, high-confidence, multimodal image features which can be used to interpret the scene more accurately and therefore to disambiguate low-level interpretations by biasing the correct disparity. The fusion process is capable of producing more accurate dense disparity maps than the low- and symbolic-level algorithms can produce independently. We therefore describe an efficient fusion scheme that allows symbolic- and low-level cues to complement each other, resulting in a more accurate and dense disparity representation of the scene.
Frequency tripling with multimode-lasers
International Nuclear Information System (INIS)
Langer, H.; Roehr, H.; Wrobel, W.G.
1978-10-01
The presence of different modes with random phases in a laser beam leads to fluctuations in nonlinear optical interactions. This paper describes the influence of the linewidth of a dye laser on the generation of intense Lyman-alpha radiation by frequency tripling. Using this Lyman-alpha source for resonance scattering on strongly Doppler-broadened lines in fusion plasmas, the detection limit for neutral hydrogen is nearly two orders of magnitude higher with the multimode than with the single-mode dye laser. (orig.) [de]
Li, Jun; Song, Minghui; Peng, Yuanxi
2018-03-01
Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
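The two fusion rules named in the abstract above, a standard-deviation (SD) based rule for the sparse (salient) components and a max-absolute rule for the low-rank (background) components, can be sketched in isolation. This sketch skips the RPCA decomposition and the compressed-sensing measurement/reconstruction stages entirely and assumes the components are already available; the neighborhood radius is an illustrative choice.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def regional_std(x, r=1):
    """Per-pixel standard deviation over a (2r+1) x (2r+1) neighborhood."""
    padded = np.pad(x, r, mode='edge')
    windows = sliding_window_view(padded, (2 * r + 1, 2 * r + 1))
    return windows.std(axis=(-1, -2))

def fuse_components(S1, S2, L1, L2):
    """SD rule for the sparse parts, max-absolute rule for the low-rank parts,
    then superpose the two fused components into the final image."""
    s_fused = np.where(regional_std(S1) >= regional_std(S2), S1, S2)
    l_fused = np.where(np.abs(L1) >= np.abs(L2), L1, L2)
    return s_fused + l_fused
```

When both inputs carry identical components, the fused image is simply their superposition S + L, a useful sanity check.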
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as wavelet and contourlet methods, are usually used for image fusion. This work presents a new image fusion framework utilizing the area-based standard deviation in the dual-tree contourlet transform domain. First, the pre-registered source images are decomposed with the dual-tree contourlet transform, yielding low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on the area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree contourlet transform. The proposed method is compared with wavelet- and contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and it performs better in both subjective and objective evaluation.
Le, Minh Hung; Chen, Jingyu; Wang, Liang; Wang, Zhiwei; Liu, Wenyu; Cheng, Kwang-Ting (Tim); Yang, Xin
2017-08-01
Automated methods for prostate cancer (PCa) diagnosis in multi-parametric magnetic resonance imaging (MP-MRIs) are critical for alleviating requirements for interpretation of radiographs while helping to improve diagnostic accuracy (Artan et al 2010 IEEE Trans. Image Process. 19 2444-55, Litjens et al 2014 IEEE Trans. Med. Imaging 33 1083-92, Liu et al 2013 SPIE Medical Imaging (International Society for Optics and Photonics) p 86701G, Moradi et al 2012 J. Magn. Reson. Imaging 35 1403-13, Niaf et al 2014 IEEE Trans. Image Process. 23 979-91, Niaf et al 2012 Phys. Med. Biol. 57 3833, Peng et al 2013a SPIE Medical Imaging (International Society for Optics and Photonics) p 86701H, Peng et al 2013b Radiology 267 787-96, Wang et al 2014 BioMed. Res. Int. 2014). This paper presents an automated method based on multimodal convolutional neural networks (CNNs) for two PCa diagnostic tasks: (1) distinguishing between cancerous and noncancerous tissues and (2) distinguishing between clinically significant (CS) and indolent PCa. Specifically, our multimodal CNNs effectively fuse apparent diffusion coefficients (ADCs) and T2-weighted MP-MRI images (T2WIs). To effectively fuse ADCs and T2WIs we design a new similarity loss function to enforce consistent features being extracted from both ADCs and T2WIs. The similarity loss is combined with the conventional classification loss functions and integrated into the back-propagation procedure of CNN training. The similarity loss enables better fusion results than existing methods as the feature learning processes of both modalities are mutually guided, jointly facilitating CNN to ‘see’ the true visual patterns of PCa. The classification results of multimodal CNNs are further combined with the results based on handcrafted features using a support vector machine classifier. To achieve a satisfactory accuracy for clinical use, we comprehensively investigate three critical factors which could greatly affect the performance of our
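A toy numpy sketch of the combined objective described above: a per-branch cross-entropy classification loss plus a squared-distance similarity term between the feature vectors of the ADC and T2WI branches. The weighting `lam` and the vector shapes are hypothetical, and the paper integrates this loss into CNN back-propagation rather than using standalone functions:

```python
import numpy as np

def similarity_loss(feat_adc, feat_t2w):
    """Mean squared distance between the features extracted from the
    ADC and T2WI branches; penalising it pushes the two branches
    toward consistent representations of the same lesion."""
    return np.mean((feat_adc - feat_t2w) ** 2)

def total_loss(logits_adc, logits_t2w, label, feat_adc, feat_t2w, lam=0.1):
    """Sum of each branch's cross-entropy classification loss and the
    weighted similarity term (lam is a hypothetical weight)."""
    def xent(logits, y):
        z = logits - logits.max()            # numerically stable softmax
        p = np.exp(z) / np.exp(z).sum()
        return -np.log(p[y] + 1e-12)
    return (xent(logits_adc, label) + xent(logits_t2w, label)
            + lam * similarity_loss(feat_adc, feat_t2w))
```

With identical features the similarity term vanishes, and any feature disagreement strictly increases the total loss, which is the mechanism that mutually guides the two modalities during training.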
Tissue identification with micro-magnetic resonance imaging in a caprine spinal fusion model
Uffen, M.; Krijnen, M.; Hoogendoorn, R.; Strijkers, G.; Everts, V.; Wuisman, P.; Smit, T.
2008-01-01
Nonunion is a major complication of spinal interbody fusion. Currently X-ray and computed tomography (CT) are used for evaluating the spinal fusion process. However, both imaging modalities have limitations in judgment of the early stages of this fusion process, as they only visualize mineralized
Spectral edge: gradient-preserving spectral mapping for image fusion.
Connah, David; Drew, Mark S; Finlayson, Graham D
2015-12-01
This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.
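The central object in this gradient-domain mapping is the structure tensor of a multi-channel image, the per-pixel sum of outer products of the channel gradients (the Di Zenzo tensor). A minimal illustration follows; the paper's exact tensor mapping and reintegration steps are not shown:

```python
import numpy as np

def structure_tensor(channels):
    """Di Zenzo structure tensor of an N-channel image: at each pixel,
    the 2x2 sum over channels of the outer product of that channel's
    spatial gradient. Its eigenstructure encodes local contrast."""
    Z = np.zeros(channels[0].shape + (2, 2))
    for c in channels:
        gy, gx = np.gradient(np.asarray(c, dtype=float))
        Z[..., 0, 0] += gx * gx
        Z[..., 0, 1] += gx * gy
        Z[..., 1, 0] += gy * gx
        Z[..., 1, 1] += gy * gy
    return Z
```

For a single horizontal ramp channel, the tensor is constant with unit energy along the x direction and none along y, matching the intuition that all local contrast lies in one orientation.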
An enhanced approach for biomedical image restoration using image fusion techniques
Karam, Ghada Sabah; Abbas, Fatma Ismail; Abood, Ziad M.; Kadhim, Kadhim K.; Karam, Nada S.
2018-05-01
Biomedical images are generally noisy and slightly blurred due to the physical mechanisms of the acquisition process, so common degradations in biomedical images are noise and poor contrast. The idea of biomedical image enhancement is to improve the quality of the image for early diagnosis. In this paper we use wavelet transformation to remove Gaussian noise from biomedical images, a Positron Emission Tomography (PET) image and a Radiography (Radio) image, in different color spaces (RGB, HSV, YCbCr), and we perform fusion of the denoised images resulting from the above denoising techniques using the add-image method. Then quantitative performance metrics such as signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and mean square error (MSE) are computed, since these statistical measurements help in the assessment of fidelity and image quality. The results showed that our approach can be applied to biomedical images across these color spaces.
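The quality metrics named above are standard; a minimal numpy sketch (assuming an 8-bit peak value of 255 for PSNR):

```python
import numpy as np

def mse(ref, img):
    """Mean square error between a reference and a test image."""
    return np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(ref, img)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def snr(ref, img):
    """Signal-to-noise ratio in dB, relative to the reference energy."""
    m = mse(ref, img)
    return np.inf if m == 0 else 10.0 * np.log10(np.mean(np.asarray(ref, float) ** 2) / m)
```

For instance, a uniform offset of 10 gray levels gives an MSE of exactly 100, and hence a PSNR of about 28.1 dB for 8-bit data.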
Research on Remote Sensing Image Classification Based on Feature Level Fusion
Yuan, L.; Zhu, G.
2018-04-01
Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missing points, which leads to low final classification accuracy. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (i.e., Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform), and then select the best fused image for the classification experiment. In the classification process, we choose four image classification algorithms (i.e., Minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) for a contrast experiment. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria, and the four classification results of the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image has the best applicability to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.
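Both evaluation criteria used above can be computed directly from a classification confusion matrix; a minimal sketch:

```python
import numpy as np

def overall_accuracy(confusion):
    """Overall classification precision: correctly classified pixels
    (the diagonal) divided by all classified pixels."""
    cm = np.asarray(confusion, dtype=float)
    return np.trace(cm) / cm.sum()

def kappa(confusion):
    """Cohen's Kappa coefficient: agreement corrected for the
    agreement expected by chance from the row/column marginals."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2 # chance agreement
    return (po - pe) / (1.0 - pe)
```

A perfectly diagonal confusion matrix yields Kappa = 1, while a matrix whose agreement equals chance yields Kappa = 0, which is why Kappa complements raw accuracy as a stability measure.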
Tissue imaging using full field optical coherence microscopy with short multimode fiber probe
Sato, Manabu; Eto, Kai; Goto, Tetsuhiro; Kurotani, Reiko; Abe, Hiroyuki; Nishidate, Izumi
2018-03-01
In achieving minimally invasive accessibility to deeply located regions, the size of the imaging probes is important. We demonstrated full-field optical coherence tomography (FF-OCM) using an ultrathin forward-imaging short multimode fiber (SMMF) probe for optical communications, of 50 μm core diameter, 125 μm diameter, and 7.4 mm length. The axial resolution was measured to be 2.14 μm, and the lateral resolution was evaluated to be below 4.38 μm using a test pattern (TP). The spatial mode and polarization characteristics of the SMMF were evaluated. Inserting the SMMF into in vivo rat brain, 3D images were measured and 2D information of nerve fibers was obtained. The feasibility of an SMMF as an ultrathin forward-imaging probe in FF-OCM has been demonstrated.
Energy Technology Data Exchange (ETDEWEB)
Lee, Y [University of Kansas Hospital, Kansas City, KS (United States); Fullerton, G; Goins, B [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States)
2015-06-15
Purpose: In our previous study, a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine the reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom, respectively. Results: The slopes of the regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US, respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US, respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent, and systematic errors were mainly due to the selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement
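The ellipsoid volume formula and the regression-slope comparison described above can be sketched directly (the sample values in the usage note are illustrative, not the study's data):

```python
import numpy as np

def ellipsoid_volume(a, b, c):
    """V = (pi/6) * a * b * c, with a, b, c the maximum diameters
    in three perpendicular dimensions."""
    return (np.pi / 6.0) * a * b * c

def regression_slope(measured, reference):
    """Least-squares slope of measured vs. reference volumes; a slope
    near 1 indicates an unbiased size measurement."""
    x = np.asarray(reference, dtype=float)
    y = np.asarray(measured, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / (xc ** 2).sum()
```

For equal diameters a = b = c = d the formula reduces to the sphere volume (π/6)d³, and a modality that systematically over-measures volumes by 10% yields a regression slope of 1.1.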
Usefulness of CT based SPECT Fusion Image in the lung Disease : Preliminary Study
International Nuclear Information System (INIS)
Park, Hoon Hee; Lyu, Kwang Yeul; Kim, Tae Hyung; Shin, Ji Yun
2012-01-01
Recently, SPECT/CT systems have been applied to many diseases; however, they are not yet extensively applied to pulmonary disease. In particular, when pulmonary embolism is suspected on CT images, SPECT is performed, and for accurate diagnosis, SPECT/CT examinations subsequently follow. Without an integrated SPECT/CT system, there are limitations to applying these procedures, and even with SPECT/CT most examinations are performed after CT. Moreover, such test procedures expose the patient to unnecessary dual irradiation. In this study, we evaluated the amount of unnecessary irradiation and the usefulness of fusion images of pulmonary disease acquired independently from SPECT and CT. Using a NEMA PhantomTM (NU2-2001), SPECT and CT scans were performed for fusion images. From June 2011 to September 2010, 10 patients who had no other relevant history except lung disease were selected (male: 7, female: 3, mean age: 65.3±12.7). In both clinical patient and phantom data, the fusion images scored higher than the SPECT and CT images. The fusion images, which combine pulmonary vessel images from CT and functional images from SPECT, can increase the possibility of detecting pulmonary embolism in the region of the lung parenchyma. Performing SPECT and CT in an integrated SPECT/CT system is certainly better; however, we believe this protocol can provide more informative data for a more accurate diagnosis in hospitals without an integrated SPECT/CT system.
Image fusion and denoising using fractional-order gradient information
DEFF Research Database (Denmark)
Mei, Jin-Jin; Dong, Yiqiu; Huang, Ting-Zhu
Image fusion and denoising are significant in image processing because of the availability of multi-sensor data and the presence of noise. First-order and second-order gradient information have been effectively applied to fusing noiseless source images. In this paper, due to the adv...... show that the proposed method outperforms the conventional total variation based methods for simultaneously fusing and denoising....
Development of magneto-plasmonic nanoparticles for multimodal image-guided therapy to the brain
Tomitaka, Asahi; Arami, Hamed; Raymond, Andrea; Yndart, Adriana; Kaushik, Ajeet; Jayant, Rahul Dev; Takemura, Yasushi; Cai, Yong; Toborek, Michal; Nair, Madhavan
2017-01-01
Magneto-plasmonic nanoparticles are one of the emerging multi-functional materials in the field of nanomedicine. Their potential for targeting and multi-modal imaging is highly attractive. In this study, magnetic core / gold shell (MNP@Au) magneto-plasmonic nanoparticles were synthesized by citrate reduction of Au ion on magnetic nanoparticle seeds. Hydrodynamic size and optical property of magneto-plasmonic nanoparticles synthesized with the variation of Au ion and reducing agent concentrati...
Real-time image registration and fusion in a FPGA architecture (Ad-FIRE)
Waters, T.; Swan, L.; Rickman, R.
2011-06-01
Real-time Image Registration is a key processing requirement of Waterfall Solutions' image fusion system, Ad-FIRE, which combines the attributes of high resolution visible imagery with the spectral response of low resolution thermal sensors in a single composite image. Implementing image fusion at video frame rates typically requires a high bandwidth video processing capability which, within a standard CPU-type processing architecture, necessitates bulky, high power components. Field Programmable Gate Arrays (FPGAs) offer the prospect of low power/heat dissipation combined with highly efficient processing architectures for use in portable, battery-powered, passively cooled applications, such as Waterfall Solutions' hand-held or helmet-mounted Ad-FIRE system.
Gamma-Ray imaging for nuclear security and safety: Towards 3-D gamma-ray vision
Vetter, Kai; Barnowksi, Ross; Haefner, Andrew; Joshi, Tenzing H. Y.; Pavlovsky, Ryan; Quiter, Brian J.
2018-01-01
The development of portable gamma-ray imaging instruments in combination with recent advances in sensor and related computer vision technologies enables unprecedented capabilities in the detection, localization, and mapping of radiological and nuclear materials in complex environments relevant to nuclear security and safety. Though multi-modal imaging has been established in medicine and biomedical imaging for some time, the potential of multi-modal data fusion for radiological localization and mapping problems in complex indoor and outdoor environments remains to be explored in detail. In contrast to the well-defined settings in medical or biological imaging, associated with a small field of view and a well-constrained extension of the radiation field, in many radiological search and mapping scenarios the radiation fields are not constrained and objects and sources are not necessarily known prior to the measurement. The ability to fuse radiological with contextual or scene data in three dimensions, analogous to the fusion of radiological and functional imaging with anatomical imaging in medicine, provides new capabilities enhancing image clarity, context, quantitative estimates, and visualization of the data products. We have developed new means to register and fuse gamma-ray imaging with contextual data from portable or moving platforms. These developments enhance detection and mapping capabilities and provide unprecedented visualization of complex radiation fields, moving us one step closer to the realization of gamma-ray vision in three dimensions.
Bianchi, Silvio; Rajamanickam, V.; Ferrara, Lorenzo; Di Fabrizio, Enzo M.; Liberale, Carlo; Di Leonardo, Roberto
2013-01-01
The use of individual multimode optical fibers in endoscopy applications has the potential to provide highly miniaturized and noninvasive probes for microscopy and optical micromanipulation. A few different strategies have been proposed recently, but they all suffer from intrinsically low resolution related to the low numerical aperture of multimode fibers. Here, we show that two-photon polymerization allows for direct fabrication of micro-optics components on the fiber end, resulting in an increase of the numerical aperture to a value that is close to 1. Coupling light into the fiber through a spatial light modulator, we were able to optically scan a submicrometer spot (300 nm FWHM) over an extended region, facing the opposite fiber end. Fluorescence imaging with improved resolution is also demonstrated. © 2013 Optical Society of America.
DEFF Research Database (Denmark)
Henriksen, O.M.; Lonsdale, M.N.; Jensen, T.D.
2008-01-01
Background: Although magnetic resonance imaging (MRI) is now considered the gold standard in second-line imaging of patients with suspected scaphoid fracture and negative radiographs, bone scintigraphy can be used in patients with pacemakers, metallic implants, or other contraindications to MRI. Bone scintigraphy is highly sensitive for the detection of fractures, but exact localization of scintigraphic lesions may be difficult and can negatively affect diagnostic accuracy. Purpose: To investigate the influence of image fusion of planar bone scintigraphy and radiographs on image interpretation...... Conclusion: Image fusion of planar bone scintigrams and radiographs has a significant influence on image interpretation and increases both diagnostic confidence and interobserver agreement. Published: 2008/12/3
DEFF Research Database (Denmark)
Henriksen, Otto Mølby; Lonsdale, Markus Georg; Jensen, T D
2009-01-01
BACKGROUND: Although magnetic resonance imaging (MRI) is now considered the gold standard in second-line imaging of patients with suspected scaphoid fracture and negative radiographs, bone scintigraphy can be used in patients with pacemakers, metallic implants, or other contraindications to MRI. Bone scintigraphy is highly sensitive for the detection of fractures, but exact localization of scintigraphic lesions may be difficult and can negatively affect diagnostic accuracy. PURPOSE: To investigate the influence of image fusion of planar bone scintigraphy and radiographs on image interpretation...... CONCLUSION: Image fusion of planar bone scintigrams and radiographs has a significant influence on image interpretation and increases both diagnostic confidence and interobserver agreement....
Enabling image fusion for a CT guided needle placement robot
Seifabadi, Reza; Xu, Sheng; Aalamifar, Fereshteh; Velusamy, Gnanasekar; Puhazhendi, Kaliyappan; Wood, Bradford J.
2017-03-01
Purpose: This study presents the development and integration of hardware and software that enables ultrasound (US) and computed tomography (CT) fusion for an FDA-approved CT-guided needle placement robot. Having real-time US images registered to an a priori-taken intraoperative CT image provides more anatomic information during needle insertion, in order to target hard-to-see lesions or avoid critical structures invisible to CT, track target motion, and better monitor the ablation treatment zone in relation to the tumor location. Method: A passive encoded mechanical arm was developed for the robot in order to hold and track an abdominal US transducer. This 4 degrees of freedom (DOF) arm is designed to attach to the robot end-effector. The arm is locked by default and is released by a press of a button. The arm is designed such that the needle is always in plane with the US image. The articulated arm is calibrated to improve its accuracy. Custom designed software (OncoNav, NIH) was developed to fuse the real-time US image to the a priori-taken CT. Results: The accuracy of the end effector before and after passive arm calibration was 7.07 mm +/- 4.14 mm and 1.74 mm +/- 1.60 mm, respectively. The accuracy of the US image to the arm calibration was 5 mm. The feasibility of US-CT fusion using the proposed hardware and software was demonstrated in a commercial abdominal phantom. Conclusions: Calibration significantly improved the accuracy of the arm in US image tracking. Fusion of US to CT using the proposed hardware and software was feasible.
Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing
2018-02-01
Alzheimer's Disease (AD) is the most common cause of dementia and currently has no cure. Treatments targeting early stages of AD such as Mild Cognitive Impairment (MCI) may be most effective in decelerating AD, thus attracting increasing attention. However, MCI has substantial heterogeneity in that it can be caused by various underlying conditions, not only AD. To detect MCI due to AD, the NIA-AA published updated consensus criteria in 2011, in which the use of multi-modality images was highlighted as one of the most promising methods. It is of great interest to develop a CAD system based on automatic, quantitative analysis of multi-modality images and machine learning algorithms to help physicians more adequately diagnose MCI due to AD. The challenge, however, is that multi-modality images are not universally available for many patients due to cost, access, safety, and lack of consent. We developed a novel Missing Modality Transfer Learning (MMTL) algorithm capable of utilizing whatever imaging modalities are available for an MCI patient to diagnose the patient's likelihood of MCI due to AD. Furthermore, we integrated MMTL with radiomics steps including image processing, feature extraction, and feature screening, and a post-processing step for uncertainty quantification (UQ), and developed a CAD system called "ADMultiImg" to assist clinical diagnosis of MCI due to AD using multi-modality images together with patient demographic and genetic information. Tested on ADNI data, our system can generate a diagnosis with high accuracy even for patients with only partially available image modalities (AUC=0.94), and therefore may have broad clinical utility.
Bult, Wouter; Kroeze, Stephanie G. C.; Elschot, Mattijs; Seevinck, Peter R.; Beekman, Freek J.; de Jong, Hugo W. A. M.; Uges, Donald R. A.; Kosterink, Jos G. W.; Luijten, Peter R.; Hennink, Wim E.; Schip, Alfred D. van Het; Bosch, J. L. H. Ruud; Nijsen, J. Frank W.; Jans, Judith J. M.
2013-01-01
Purpose: The increasing incidence of small renal tumors in an aging population with comorbidities has stimulated the development of minimally invasive treatments. This study aimed to assess the efficacy and demonstrate feasibility of multimodality imaging of intratumoral administration of
Low, Kerwin; Elhadidi, Basman; Glauser, Mark
2009-11-01
Understanding the different noise production mechanisms caused by the free shear flows in a turbulent jet flow provides insight to improve ``intelligent'' feedback mechanisms to control the noise. Towards this effort, a control scheme is based on feedback of azimuthal pressure measurements in the near field of the jet at two streamwise locations. Previous studies suggested that noise reduction can be achieved by azimuthal actuators perturbing the shear layer at the jet lip. The closed-loop actuation will be based on a low-dimensional Fourier representation of the hydrodynamic pressure measurements. Preliminary results show that control authority and reduction in the overall sound pressure level was possible. These results provide motivation to move forward with the overall vision of developing innovative multi-mode sensing methods to improve state estimation and derive dynamical systems. It is envisioned that estimating velocity-field and dynamic pressure information from various locations both local and in the far-field regions, sensor fusion techniques can be utilized to ascertain greater overall control authority.
Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.
Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F
2013-09-01
The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed by Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed by 3dMD patient software. All 3D CT models were exported in Stereo Lithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to be an accurate, realistic, and widespread tool, of great benefit to virtual face modeling.
Directory of Open Access Journals (Sweden)
Nakamura Satoshi
2004-01-01
Full Text Available We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.
International Nuclear Information System (INIS)
Lemke, A.J.; Niehues, S.M.; Amthauer, H.; Felix, R.; Rohlfing, T.; Hosten, N.
2004-01-01
Purpose: To evaluate the feasibility and the clinical benefits of retrospective digital image fusion (PET, SPECT, CT and MRI). Materials and methods: In a prospective study, a total of 273 image fusions were performed and evaluated. The underlying image acquisitions (CT, MRI, SPECT and PET) were performed in a way appropriate for the respective clinical question and anatomical region. Image fusion was executed with a software program developed during this study. The results of the image fusion procedure were evaluated in terms of technical feasibility, clinical objective, and therapeutic impact. Results: The most frequent combinations of modalities were CT/PET (n = 156) and MRI/PET (n = 59), followed by MRI/SPECT (n = 28), CT/SPECT (n = 22) and CT/MRI (n = 8). The clinical questions included following regions (more than one region per case possible): neurocranium (n = 42), neck (n = 13), lung and mediastinum (n = 24), abdomen (n = 181), and pelvis (n = 65). In 92.6% of all cases (n = 253), image fusion was technically successful. Image fusion was able to improve sensitivity and specificity of the single modality, or to add important diagnostic information. Image fusion was problematic in cases of different body positions between the two imaging modalities or different positions of mobile organs. In 37.9% of the cases, image fusion added clinically relevant information compared to the single modality. Conclusion: For clinical questions concerning liver, pancreas, rectum, neck, or neurocranium, image fusion is a reliable method suitable for routine clinical application. Organ motion still limits its feasibility and routine use in other areas (e.g., thorax). (orig.)
International Nuclear Information System (INIS)
Peter, Joerg; Semmler, Wolfhard
2007-01-01
Alongside and in part motivated by recent advances in molecular diagnostics, the development of dual-modality instruments for patient and dedicated small animal imaging has gained attention from diverse research groups. The desire for such systems is high, not only to link molecular or functional information with anatomical structures, but also for detecting multiple molecular events simultaneously at shorter total acquisition times. While PET and SPECT have been integrated successfully with X-ray CT, the advance of optical imaging approaches (OT) and their integration into existing modalities carry a high application potential, particularly for imaging small animals. A multi-modality Monte Carlo (MC) simulation approach has been developed that is able to trace high-energy (keV) as well as optical (eV) photons concurrently within identical phantom representation models. We show that the two ray-tracing approaches for keV and eV photons can be integrated into a single simulation framework which enables both photon classes to be propagated through various geometry models representing both phantoms and scanners. The main advantage of such an integrated framework for our specific application is the investigation of novel tomographic multi-modality instrumentation intended for in vivo small animal imaging through time-resolved MC simulation upon identical phantom geometries. Design examples are provided for recently proposed SPECT-OT and PET-OT imaging systems
International Nuclear Information System (INIS)
Langer, R. D.; Gorkom, K. Neidl van.; Kaabi, Ho Al.; Torab, F.; Czechowski, J.; Nagi, M.; Ashish, G. M.
2007-01-01
Full text: The aim of the study was to validate a multimodality cranial computed tomography (CCT) protocol for patients with acute stroke in the United Arab Emirates as a basic imaging procedure for a stroke unit. Therefore, a comparative study was conducted between two groups: retrospective, historical group 1 with early unenhanced CCT and prospective group 2 undergoing a multimodality CCT protocol. Follow-up unenhanced CCT >48 h served as gold standard in both groups. Group 1: Early unenhanced CCT of 50 patients were evaluated retrospectively, using Alberta Stroke Program Early CT Score, and compared with the definite infarction on follow-up CCT. Group 2: 50 patients underwent multimodality CCT (unenhanced CCT, perfusion studies: cerebral blood flow, cerebral blood volume, mean transit time and CT angiography) <8 h after clinical onset and follow-up studies. Modified National Institute of Health Stroke Scale was used clinically in both groups. Group 1 showed 38 men, 12 women, clinical onset 2-8 h before CCT and modified National Institute of Health Stroke Scale 0-28. Group 2 included 38 men, 12 women, onset 3-8 h before CCT, modified National Institute of Health Stroke Scale 0-28. Sensitivity was 58.3% in group 1 and 84.2% in group 2. Computed tomography angiography detected nine intracranial occlusions/stenoses. The higher sensitivity of the multimodality CCT protocol justifies its use as a basic diagnostic tool for the set-up of a first-stroke unit in the United Arab Emirates
Zhang, Xiangyang; Zhang, Hao F.; Zhou, Lixiang; Jiao, Shuliang
2012-02-01
We combined photoacoustic ophthalmoscopy (PAOM) with autofluorescence imaging for simultaneous in vivo imaging of dual molecular contrasts in the retina using a single light source. The dual molecular contrasts come from melanin and lipofuscin in the retinal pigment epithelium (RPE), two types of pigments believed to play opposite roles (protective vs. exacerbating) in the aging RPE. We successfully imaged the retinas of pigmented and albino rats at different ages. The experimental results showed that the multimodal PAOM system can be a potentially powerful tool in the study of age-related degenerative retinal diseases.
Coherence imaging spectro-polarimetry for magnetic fusion diagnostics
International Nuclear Information System (INIS)
Howard, J
2010-01-01
This paper presents an overview of developments in imaging spectro-polarimetry for magnetic fusion diagnostics. Using various multiplexing strategies, it is possible to construct optical polarization interferometers that deliver images of underlying physical parameters such as flow speed, temperature (Doppler effect) or magnetic pitch angle (motional Stark and Zeeman effects). This paper also describes and presents first results for a new spatial heterodyne interferometric system used for both Doppler and polarization spectroscopy.
Three-dimensional Image Fusion Guidance for Transjugular Intrahepatic Portosystemic Shunt Placement.
Tacher, Vania; Petit, Arthur; Derbel, Haytham; Novelli, Luigi; Vitellius, Manuel; Ridouani, Fourat; Luciani, Alain; Rahmouni, Alain; Duvoux, Christophe; Salloum, Chady; Chiaradia, Mélanie; Kobeiter, Hicham
2017-11-01
To assess the safety, feasibility and effectiveness of image fusion guidance, combining pre-procedural portal-phase computed tomography with intraprocedural fluoroscopy, for transjugular intrahepatic portosystemic shunt (TIPS) placement. All consecutive cirrhotic patients presenting at our interventional unit for TIPS creation from January 2015 to January 2016 were prospectively enrolled. Procedures were performed under general anesthesia in an interventional suite equipped with a flat-panel detector, cone-beam computed tomography (CBCT) and an image fusion technique. All TIPSs were placed under image fusion guidance. After hepatic vein catheterization, an unenhanced CBCT acquisition was performed and co-registered with the pre-procedural portal-phase CT images. A virtual path between the hepatic vein and a portal branch was created using virtual needle path trajectory software. Subsequently, the 3D virtual path was overlaid on 2D fluoroscopy for guidance during portal branch cannulation. Safety, feasibility, effectiveness and per-procedural data were evaluated. Sixteen patients (12 males; median age 56 years) were included. Procedures were technically feasible in 15 of the 16 patients (94%). One procedure was aborted due to hepatic vein catheterization failure related to severe liver distortion. No periprocedural complications occurred within 48 h of the procedure. The median dose-area product was 91 Gy·cm², fluoroscopy time 15 min, procedure time 40 min and contrast media consumption 65 mL. Clinical benefit of the TIPS placement was observed in nine patients (56%). This study suggests that 3D image fusion guidance for TIPS is feasible, safe and effective. By identifying a virtual needle path, CBCT enables real-time multiplanar guidance and may facilitate TIPS placement.
Multimodality medical image database for temporal lobe epilepsy
Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost
2003-05-01
This paper presents the development of a human brain multi-modality database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data include T1-weighted, T2-weighted and FLAIR MRI and ictal/interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between attribute X of entity Y and the outcome of temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functional feature of the anatomical structure Y calculated from the SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y, such as volume or average curvature. The outcome of the surgery can be any surgical assessment, such as the non-verbal Wechsler memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high signal-intensity average on FLAIR images within the hippocampus. This indication matches the neurosurgeons' expectations/observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may reveal partially invisible correlations between the contents of different modalities of data and the outcome of surgery.
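Queries of the kind described, correlating an attribute X of an entity Y with a surgical outcome, can be sketched against a tabular view of the records. The column names and values below are hypothetical illustrations, not the database's actual schema.

```python
import pandas as pd

# Hypothetical patient records; columns are illustrative stand-ins for
# "attribute X of entity Y" and an outcome measure.
df = pd.DataFrame({
    "hippocampus_volume": [2.1, 2.8, 1.9, 3.0, 2.2],   # cm^3 (made up)
    "flair_signal_avg":   [1.40, 1.10, 1.55, 1.05, 1.35],
    "memory_quotient":    [85, 102, 80, 110, 88],       # outcome (made up)
})

# Correlate a structural attribute of the hippocampus with the outcome.
r = df["hippocampus_volume"].corr(df["memory_quotient"])

# Flag candidates matching the reported pattern: relatively small
# hippocampus together with a high FLAIR signal average.
candidates = df[(df["hippocampus_volume"] < 2.2) & (df["flair_signal_avg"] > 1.3)]
print(round(r, 2), len(candidates))
```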
Multimodal imaging evaluation in staging of rectal cancer
Heo, Suk Hee; Kim, Jin Woong; Shin, Sang Soo; Jeong, Yong Yeon; Kang, Heoung-Keun
2014-01-01
Rectal cancer is a common cancer and a major cause of mortality in Western countries. Accurate staging is essential for determining the optimal treatment strategies and planning appropriate surgical procedures to control rectal cancer. Endorectal ultrasonography (EUS) is suitable for assessing the extent of tumor invasion, particularly in early-stage or superficial rectal cancer cases. In advanced cases with distant metastases, computed tomography (CT) is the primary approach used to evaluate the disease. Magnetic resonance imaging (MRI) is often used to assess preoperative staging and circumferential resection margin involvement, which assists in evaluating a patient's risk of recurrence and the optimal therapeutic strategy. Positron emission tomography (PET)-CT may be useful in detecting occult synchronous tumors or metastases at the time of initial presentation. Restaging after neoadjuvant chemoradiotherapy (CRT) remains a challenge with all modalities because it is difficult to reliably differentiate between the tumor mass and other radiation-induced changes in the images. EUS does not appear to have a useful role in post-therapeutic response assessment. Although CT is most commonly used to evaluate treatment response, its utility for identifying and following up metastatic lesions is limited. Preoperative high-resolution MRI in combination with diffusion-weighted imaging and/or PET-CT could provide valuable prognostic information for rectal cancer patients with locally advanced disease receiving preoperative CRT. Based on these results, we conclude that a combination of multimodal imaging methods should be used to precisely assess the restaging of rectal cancer following CRT. PMID:24764662
Multimodal imaging of language reorganization in patients with left temporal lobe epilepsy.
Chang, Yu-Hsuan A; Kemmotsu, Nobuko; Leyden, Kelly M; Kucukboyaci, N Erkut; Iragui, Vicente J; Tecoma, Evelyn S; Kansal, Leena; Norman, Marc A; Compton, Rachelle; Ehrlich, Tobin J; Uttarwar, Vedang S; Reyes, Anny; Paul, Brianna M; McDonald, Carrie R
2017-07-01
This study explored the relationships among multimodal imaging, clinical features, and language impairment in patients with left temporal lobe epilepsy (LTLE). Fourteen patients with LTLE and 26 controls underwent structural MRI, functional MRI, diffusion tensor imaging, and neuropsychological language tasks. Laterality indices were calculated for each imaging modality, and a principal component (PC) was derived from the language measures. Correlations were computed among the imaging measures, as well as between each imaging measure and the language PC. In controls, better language performance was associated with stronger left-lateralized temporo-parietal and temporo-occipital activations. In LTLE, better language performance was associated with stronger right-lateralized inferior frontal, temporo-parietal, and temporo-occipital activations. These right-lateralized activations in LTLE were associated with right-lateralized arcuate fasciculus fractional anisotropy. These data suggest that interhemispheric language reorganization in LTLE is associated with alterations to perisylvian white matter. These concurrent structural and functional shifts from left to right may help to mitigate language impairment in LTLE. Copyright © 2017 Elsevier Inc. All rights reserved.
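Laterality indices of the kind computed here are conventionally defined as LI = (L - R) / (L + R), ranging from +1 (fully left-lateralized) to -1 (fully right-lateralized). A minimal sketch; the inputs are hypothetical activation counts, and real fMRI pipelines additionally threshold and bootstrap these values.

```python
def laterality_index(left, right):
    """LI = (L - R) / (L + R): +1 is fully left-lateralized,
    -1 fully right-lateralized. Inputs are activation magnitudes
    (e.g. suprathreshold voxel counts) for homologous regions."""
    left, right = float(left), float(right)
    return (left - right) / (left + right)

# Hypothetical voxel counts for a temporo-parietal language region:
print(laterality_index(120, 40))   # left-dominant, positive LI
print(laterality_index(40, 120))   # right-shifted, negative LI
```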
Multimedia image and video processing
Guan, Ling
2012-01-01
As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w
Label fusion based brain MR image segmentation via a latent selective model
Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu
2018-04-01
Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and on patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the demands for higher accuracy, faster segmentation, and robustness remain a great challenge. In this paper, we propose a novel algorithm that combines the two branches, using a global weighted fusion strategy based on a patch-based latent selective model, to segment specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function for the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and treated as an isolated label, giving the background the same standing as the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and an expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.
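A toy sketch of globally weighted label fusion over an atlas set follows. The label maps and weights are invented; the paper's actual model additionally estimates a latent selective membership prior via expectation-maximization rather than fixing one global weight per atlas.

```python
import numpy as np

def weighted_label_fusion(atlas_labels, weights, n_labels):
    """Fuse per-atlas label maps with one global weight per atlas.
    atlas_labels: (n_atlases, n_voxels) integer label maps.
    weights: (n_atlases,) global reliabilities summing to 1.
    Returns the per-voxel argmax over accumulated weighted votes."""
    votes = np.zeros((n_labels, atlas_labels.shape[1]))
    for a, w in enumerate(weights):
        for lab in range(n_labels):
            votes[lab] += w * (atlas_labels[a] == lab)
    return votes.argmax(axis=0)

# Three toy atlases labeling 5 voxels as background (0) or structure (1);
# note the background participates as a label in its own right.
atlases = np.array([[0, 1, 1, 0, 1],
                    [0, 1, 0, 0, 1],
                    [1, 1, 1, 0, 0]])
w = np.array([0.5, 0.3, 0.2])   # hypothetical global atlas weights
print(weighted_label_fusion(atlases, w, 2))
```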
Li, Fang-Ye; Chen, Xiao-Lei; Xu, Bai-Nan
2016-09-01
To determine the beneficial effects of intraoperative high-field magnetic resonance imaging (MRI), multimodal neuronavigation, and intraoperative electrophysiological monitoring-guided surgery for treating supratentorial cavernomas. Twelve patients with 13 supratentorial cavernomas were prospectively enrolled and operated on using 1.5-T intraoperative MRI, multimodal neuronavigation, and intraoperative electrophysiological monitoring. All cavernomas were deeply located in subcortical areas or involved critical areas. Intraoperative high-field MRIs were obtained for intraoperative "visualization" of surrounding eloquent structures, "brain shift" corrections, and navigational plan updates. All cavernomas were successfully resected under guidance from intraoperative MRI, multimodal neuronavigation, and intraoperative electrophysiological monitoring. In 5 cases, intraoperative "brain shift" severely hindered localization of the lesions; however, intraoperative MRI facilitated precise localization. During long-term (>3 months) follow-up, some or all presenting signs and symptoms improved or resolved in 4 cases but were unchanged in 7 patients. Intraoperative high-field MRI, multimodal neuronavigation, and intraoperative electrophysiological monitoring are helpful in surgeries for the treatment of small, deeply seated subcortical cavernomas.
CT-MR image data fusion for computer assisted navigated neurosurgery of temporal bone tumors
International Nuclear Information System (INIS)
Nemec, Stefan Franz; Donat, Markus Alexander; Mehrain, Sheida; Friedrich, Klaus; Krestan, Christian; Matula, Christian; Imhof, Herwig; Czerny, Christian
2007-01-01
Purpose: To demonstrate the value of multidetector computed tomography (MDCT) and magnetic resonance imaging (MRI) in the preoperative work-up of temporal bone tumors and to present, in particular, CT-MR image fusion for surgical planning and performance in computer-assisted navigated neurosurgery of temporal bone tumors. Materials and methods: Fifteen patients with temporal bone tumors underwent MDCT and MRI. MDCT was performed with a high-resolution bone window level setting in the axial plane; the reconstructed MDCT slice thickness was 0.8 mm. MRI was performed in the axial and coronal planes with T2-weighted fast spin-echo (FSE) sequences, unenhanced and contrast-enhanced T1-weighted spin-echo (SE) sequences, coronal T1-weighted SE sequences with fat suppression, and 3D T1-weighted gradient-echo (GE) contrast-enhanced sequences in the axial plane. The 3D T1-weighted GE sequence had a slice thickness of 1 mm. Image data sets of the CT and 3D T1-weighted GE sequences were merged on a workstation to create CT-MR fusion images. MDCT and MR images were used separately to depict and characterize lesions. The fusion images were utilized for interventional planning and intraoperative image guidance. The intraoperative accuracy of the navigation unit was measured, defined as the deviation between the same landmark in the navigation image and the patient. Results: Tumorous lesions of bone and soft tissue were well delineated and characterized by the CT and MR images. The images played a crucial role in the differentiation of benign and malignant pathologies, which comprised 13 benign and 2 malignant tumors. The CT-MR fusion images supported the surgeon in preoperative planning and improved surgical performance. The mean intraoperative accuracy of the navigation system was 1.25 mm. Conclusion: CT and MRI are essential in the preoperative work-up of temporal bone tumors. CT-MR image data fusion presents an accurate tool for planning the correct surgical procedure and is a
Directory of Open Access Journals (Sweden)
Feng Yang
2013-06-01
Full Text Available Non-rigid multi-modal image registration plays an important role in medical image processing and analysis. Existing registration methods based on similarity metrics such as mutual information (MI) and the sum of squared differences (SSD) struggle to achieve both high registration accuracy and high registration efficiency. To address this problem, we propose a novel two-phase non-rigid multi-modal image registration method that combines Weber local descriptor (WLD) based similarity metrics with the normalized mutual information (NMI) using the diffeomorphic free-form deformation (FFD) model. The first phase aims at recovering the large deformation component using the WLD-based non-local SSD (wldNSSD) or weighted structural similarity (wldWSSIM). Based on the output of the first phase, the second phase focuses on obtaining accurate transformation parameters for the small deformation using the NMI. Extensive experiments on T1-, T2- and PD-weighted MR images demonstrate that the proposed wldNSSD-NMI or wldWSSIM-NMI method outperforms registration methods based on the NMI, the conditional mutual information (CMI), the SSD on entropy images (ESSD) and the ESSD-NMI in terms of registration accuracy and computational efficiency.
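The NMI metric used in the second phase can be estimated from a joint intensity histogram as NMI(A, B) = (H(A) + H(B)) / H(A, B). A minimal sketch; the bin count and test images are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(A,B) = (H(A) + H(B)) / H(A,B), estimated from a joint
    histogram; a common similarity metric in multi-modal registration."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_x + h_y) / h_xy

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# A deterministic intensity remapping (as between two MR contrasts of the
# same anatomy) keeps NMI near its maximum of 2, while an unrelated image
# gives a value near the minimum of 1.
nmi_same = normalized_mutual_information(img, 1.0 - img)
nmi_rand = normalized_mutual_information(img, rng.random((64, 64)))
print(nmi_same, nmi_rand)
```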
The fusion of large scale classified side-scan sonar image mosaics.
Reed, Scott; Tena, Ruiz Ioseba; Capus, Chris; Petillot, Yvan
2006-07-01
This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.
Visualization of graphical information fusion results
Blasch, Erik; Levchuk, Georgiy; Staskevich, Gennady; Burke, Dustin; Aved, Alex
2014-06-01
Graphical fusion methods are popular for describing distributed sensor applications such as target tracking and pattern recognition. Additional graphical methods include network analysis for social, communications, and sensor management. With the growing availability of various data modalities, graphical fusion methods are widely used to combine data from multiple sensors and modalities. To better understand the usefulness of graph fusion approaches, we address visualization to increase user comprehension of multi-modal data. The paper demonstrates a use case that combines graphs from text reports and target tracks to associate events and activities of interest, with visualization used to test Measures of Performance (MOP) and Measures of Effectiveness (MOE). The analysis includes the presentation of the separate graphs and then a graph-fusion visualization linking the network graphs for tracking and classification.
Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.
Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn
2016-04-20
Nonlinear decomposition schemes constitute an alternative to classical approaches for addressing the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists of fusing a low-resolution multispectral image with a high-resolution panchromatic image. We design a complete pansharpening scheme based on the use of morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, WorldView-2, IKONOS and GeoEye-1 satellites are employed for the performance assessment, attesting to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
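A hedged sketch of the half-gradient idea: extract panchromatic detail as the difference of the two morphological half-gradients (dilation minus image, and image minus erosion) and inject it into upsampled multispectral bands. This is a toy illustration on random data, not the authors' complete pansharpening scheme.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion, zoom

def half_gradients(pan, size=3):
    """Morphological half-gradients of the panchromatic image:
    g+ = dilation - image, g- = image - erosion."""
    g_plus = grey_dilation(pan, size=(size, size)) - pan
    g_minus = pan - grey_erosion(pan, size=(size, size))
    return g_plus, g_minus

def pansharpen_morph(ms_low, pan, gain=1.0):
    """Toy pansharpening: upsample each MS band and inject the
    difference of the pan half-gradients as spatial detail."""
    g_plus, g_minus = half_gradients(pan)
    detail = g_plus - g_minus
    scale = pan.shape[0] // ms_low.shape[1]
    bands = [zoom(band, scale, order=1) + gain * detail for band in ms_low]
    return np.stack(bands)

rng = np.random.default_rng(0)
pan = rng.random((64, 64))     # high-resolution panchromatic image
ms = rng.random((4, 16, 16))   # low-resolution 4-band multispectral image
sharp = pansharpen_morph(ms, pan)
print(sharp.shape)
```

The signed combination g+ minus g- responds asymmetrically to bright and dark structures, which is one motivation for preferring half-gradients over the symmetric morphological gradient.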
Neutron imaging for inertial confinement fusion and molecular optic imaging
International Nuclear Information System (INIS)
Delage, O.
2010-01-01
Scientific domains that require imaging of micrometric/nanometric objects are increasing dramatically (plasma physics, astrophysics, biotechnology, Earth sciences...). The difficulties encountered in imaging smaller and smaller objects make this research area increasingly challenging and keep it in constant evolution. The two scientific domains through which this study has been conducted are neutron imaging in the context of inertial confinement fusion and fluorescence molecular imaging. The work presented in this thesis has two main objectives. The first is to describe the instrumentation characteristics that such imagery requires and, for the scientific domains considered, to identify the parameters likely to optimize the accuracy of the imaging system. The second is to present the data analysis and reconstruction methods developed to provide spatial resolution adapted to the size of the observed object. The similarities between the numerical algorithms used in these two scientific domains, whose goals are quite different, show how micrometric/nanometric object imaging is a research area at the border of a large number of scientific disciplines. (author)
Kim, Su Wan; Song, Heesung
2017-12-01
We report the case of a 19-year-old man who presented with a 12-year history of progressive fatigue, feeling hot, excessive sweating, and numbness in the left arm. He had undergone multimodal imaging and was diagnosed as having Klippel-Trénaunay-Weber syndrome (KTWS). This is a rare congenital disease defined by combinations of nevus flammeus, venous and lymphatic malformation, and hypertrophy of the affected limbs; the lower extremities are most commonly affected. Conventional modalities for evaluating KTWS are ultrasonography, CT, MRI, lymphoscintigraphy, and angiography. There are few reports on multimodal imaging of the upper extremities of KTWS patients, and this is the first report of infrared thermography in KTWS.
Image evaluation of HIV encephalopathy: a multimodal approach using quantitative MR techniques
Energy Technology Data Exchange (ETDEWEB)
Prado, Paulo T.C.; Escorsi-Rosset, Sara [University of Sao Paulo, Radiology Section, Internal Medicine Department, Ribeirao Preto School of Medicine, Sao Paulo (Brazil); Cervi, Maria C. [University of Sao Paulo, Department of Pediatrics, Ribeirao Preto School of Medicine, Sao Paulo (Brazil); Santos, Antonio Carlos [University of Sao Paulo, Radiology Section, Internal Medicine Department, Ribeirao Preto School of Medicine, Sao Paulo (Brazil); Hospital das Clinicas da FMRP-USP, Ribeirao Preto, SP (Brazil)
2011-11-15
A multimodal approach to human immunodeficiency virus (HIV) encephalopathy using quantitative magnetic resonance (MR) techniques can demonstrate brain changes not detectable with conventional magnetic resonance imaging (MRI) alone. The aim of this study was to compare conventional MRI with quantitative MR techniques, such as magnetic resonance spectroscopy (MRS) and relaxometry, and to determine whether quantitative techniques are more sensitive than conventional imaging to brain changes caused by HIV infection. We prospectively studied nine HIV-positive children (mean age 6 years, range 5-8 years) and nine controls (mean age 7.3 years, range 3-10 years) using MRS and relaxometry. Examinations were carried out on 1.5-T equipment. HIV-positive patients presented with only minor findings, and all control patients had normal conventional MR findings. MRS showed an increase in choline-to-creatine (CHO/CRE) ratios bilaterally in both frontal gray and white matter, in the left parietal white matter, and in the total CHO/CRE ratio. In contrast, N-acetylaspartate-to-creatine (NAA/CRE) ratios did not differ significantly between the two groups. Relaxometry showed significant bilateral abnormalities, with lengthening of the relaxation time in HIV-positive patients in many regions. Conventional MRI is not sensitive to early brain changes caused by HIV infection. Quantitative techniques such as MRS and relaxometry appear to be valuable tools in the diagnosis of these early changes. Therefore, a multimodal quantitative study can be useful in demonstrating and understanding the pathophysiology of the disease. (orig.)
MULTIMODAL IMAGING OF ANGIOID STREAKS ASSOCIATED WITH TURNER SYNDROME.
Chiu, Bing Q; Tsui, Edmund; Hussnain, Syed Amal; Barbazetto, Irene A; Smith, R Theodore
2018-02-13
To report multimodal imaging in a novel case of angioid streaks in a patient with Turner syndrome with 10-year follow-up. Case report of a patient with Turner syndrome and angioid streaks followed at Bellevue Hospital Eye Clinic from 2007 to 2017. Fundus photography, fluorescein angiography, and optical coherence tomography angiography were obtained. Angioid streaks with choroidal neovascularization were noted in this patient with Turner syndrome without other systemic conditions previously correlated with angioid streaks. We report a case of angioid streaks with choroidal neovascularization in a patient with Turner syndrome. We demonstrate that angioid streaks, previously associated with pseudoxanthoma elasticum, Ehlers-Danlos syndrome, Paget disease of bone, and hemoglobinopathies, may also be associated with Turner syndrome, and may continue to develop choroidal neovascularization, suggesting the need for careful ophthalmic examination in these patients.
Directory of Open Access Journals (Sweden)
Daniela Cavalcanti Ferrara
2009-10-01
Full Text Available Optical coherence tomography has been progressively incorporated into the contemporary diagnostic arsenal in ophthalmology, playing a crucial role in the investigation and management of eye diseases, particularly in the specialty of retina and vitreous. The commercial availability of the new generation of devices, coined "spectral" optical coherence tomography, based on a distinct physical concept that permits high-speed image acquisition, launched a new era for this ancillary investigative tool. In addition, the recent combination of this technology with the confocal scanning laser ophthalmoscope allows the acquisition of tomographic images guided in real time by different imaging modes (fundus autofluorescence, infrared reflectance, and fluorescein or indocyanine green angiography). Multimodal fundus imaging permits a precise, point-to-point correlation of retinal and retinal pigment epithelium morphology with findings from angiographic, autofluorescence and reflectance studies, thereby supporting valuable inferences about tissue physiology. In this article, we briefly discuss the possible implications of multimodal fundus imaging for the practice of the retina and vitreous specialty.
Paramanandham, Nirmala; Rajendiran, Kishore
2018-01-01
A novel image fusion technique is presented for integrating infrared and visible images. Integrating images from the same or different sensing modalities can deliver information that cannot be obtained by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, integrating the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factors, which are then used to fuse the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image, and an enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
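The DCT-domain weighting step can be sketched in a few lines. Here a single global weight stands in for the PSO-optimized weighting factors, an assumption made purely for brevity.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_weighted_fusion(visible, infrared, w):
    """Fuse two registered images in the DCT domain with a weighting
    factor w in [0, 1] (the paper optimizes such weights with PSO;
    here w is simply given)."""
    d_vis = dctn(visible, norm="ortho")
    d_ir = dctn(infrared, norm="ortho")
    fused = w * d_vis + (1.0 - w) * d_ir
    return idctn(fused, norm="ortho")

rng = np.random.default_rng(0)
vis = rng.random((32, 32))
ir = rng.random((32, 32))
fused = dct_weighted_fusion(vis, ir, w=0.6)
# Because the DCT is linear and orthonormal, a single global weight makes
# this equal to the pixel-domain blend 0.6*vis + 0.4*ir up to rounding.
print(np.allclose(fused, 0.6 * vis + 0.4 * ir))
```

The equivalence noted in the comment is exactly why the transform domain matters: the benefit appears when weights are chosen per block or per coefficient band, as a PSO-driven scheme can do, rather than globally.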
Mitral Valve Prolapse: Multimodality Imaging and Genetic Insights.
Parwani, Purvi; Avierinos, Jean-Francois; Levine, Robert A; Delling, Francesca N
Mitral valve prolapse (MVP) is a common heritable valvulopathy affecting approximately 2.4% of the population. It is the most important cause of primary mitral regurgitation (MR) requiring surgery. MVP is characterized by fibromyxomatous changes and displacement of one or both mitral leaflets into the left atrium. Echocardiography represents the primary diagnostic modality for assessment of MVP. Accurate quantitation of ventricular volumes and function for surgical planning in asymptomatic severe MR can be provided with both echocardiography and cardiac magnetic resonance. In addition, assessment of myocardial fibrosis using late gadolinium enhancement and T1 mapping allows better understanding of the impact of MVP on the myocardium. Imaging in MVP is important not only for diagnostic and prognostic purposes, but is also essential for detailed phenotyping in genetic studies. Genotype-phenotype studies in MVP pedigrees have allowed the identification of milder, non-diagnostic MVP morphologies by echocardiography. Such morphologies represent early expression of MVP in gene carriers. This review focuses on multimodality imaging and the phenotypic spectrum of MVP. Moreover, the review details the recent genetic discoveries that have increased our understanding of the pathophysiology of MVP, with clues to mechanisms and therapy. Copyright © 2017 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Clevert, D.A.; Helck, A.; Paprottka, P.M.; Trumm, C.; Reiser, M.F.; Zengel, P.
2012-01-01
Abdominal ultrasound is often the first-line imaging modality for assessing focal liver lesions. Owing to various new ultrasound techniques, such as image fusion, global positioning system (GPS) tracking and needle-tracking-guided biopsy, abdominal ultrasound now has great potential regarding the detection, characterization and treatment of focal liver lesions. Furthermore, these new techniques will help to improve the clinical management of patients before and during interventional procedures. This article presents the principles and clinical impact of recently developed ultrasound techniques, e.g. image fusion, GPS tracking and needle-tracking-guided biopsy, and discusses the results of a feasibility study on 20 patients with focal hepatic lesions. (orig.) [de]
Development and application of PET-MRI image fusion technology
International Nuclear Information System (INIS)
Song Jianhua; Zhao Jinhua; Qiao Wenli
2011-01-01
The emergence and growing popularity of PET-CT scanners have brought convenience and demonstrated advantages in the diagnosis, staging, treatment-response evaluation and prognosis of malignant tumors. PET-MRI installations may drive a new upsurge as the technology matures, because MRI involves no radiation exposure and offers higher soft-tissue resolution. This paper summarizes the development of image fusion technology and current research on the clinical application of PET-MRI, to help readers understand the functions and wide applications of this upcoming instrument, focusing mainly on applications in the central nervous system and soft-tissue lesions. Before PET-MRI becomes widespread, research on various forms of image fusion and their clinical application can still be carried out on current equipment. (authors)
Study on Efficiency of Fusion Techniques for IKONOS Images
International Nuclear Information System (INIS)
Liu, Yanmei; Yu, Haiyang; Guijun, Yang; Nie, Chenwei; Yang, Xiaodong; Ren, Dong
2014-01-01
Many image fusion techniques have been proposed to achieve optimal resolution in the spatial and spectral domains. Six merging methods are compared in this paper, and the efficiency of the fusion techniques is assessed qualitatively and quantitatively. Both local and global evaluation parameters were used for spectral quality, and a Laplace filter method was used for spatial quality assessment. In simulation, the spectral quality of the images merged by the Brovey method proved to be the worst; in contrast, the GS and PCA algorithms, and especially Pansharpening, provided higher spectral quality than the standard Brovey, wavelet and CN methods. In the spatial quality assessment, the CN method performed best and the Brovey algorithm worst. The best-performing wavelet parameters achieved acceptable spectral and spatial quality compared to the others.
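As a rough illustration of one of the component-substitution methods compared above, the Brovey transform scales each multispectral band by the ratio of the panchromatic intensity to a synthetic intensity. This is a minimal sketch, not the paper's implementation; the function and array names are mine.

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-9):
    """Brovey transform: inject panchromatic spatial detail by scaling
    each band of the upsampled multispectral image (H, W, B) by the
    ratio of the panchromatic image (H, W) to the band-mean intensity."""
    intensity = ms.mean(axis=2)          # synthetic intensity image
    ratio = pan / (intensity + eps)      # per-pixel gain from the pan band
    return ms * ratio[..., None]         # apply the gain to every band

# toy 4x4 scene with 3 spectral bands
ms = np.full((4, 4, 3), 0.2)
pan = np.full((4, 4), 0.4)
fused = brovey_fusion(ms, pan)
```

Because the gain is purely multiplicative, the Brovey transform distorts band ratios less than absolute radiometry, which is consistent with its reputation for poor spectral fidelity noted in the study.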
Image Fusion Technologies In Commercial Remote Sensing Packages
Al-Wassai, Firouz Abdullah; Kalyankar, N. V.
2013-01-01
Several remote sensing software packages are used for the explicit purpose of analyzing and visualizing remotely sensed data, following the development of remote sensing sensor technologies over the last ten years. According to the literature, remote sensing still lacks software tools for effective information extraction from remote sensing data. This paper therefore provides a state of the art of multi-sensor image fusion technologies, as well as a review of the quality evaluation of the single image or f...
Kang, Tae Wook; Lee, Min Woo; Song, Kyoung Doo; Kim, Mimi; Kim, Seung Soo; Kim, Seong Hyun; Ha, Sang Yun
2017-01-01
To assess whether contrast-enhanced ultrasonography (CEUS) with Sonazoid can improve the lesion conspicuity and feasibility of percutaneous biopsies for focal hepatic lesions invisible on fusion imaging of real-time ultrasonography (US) with computed tomography/magnetic resonance images, and evaluate its impact on clinical decision making. The Institutional Review Board approved this retrospective study. Between June 2013 and January 2015, 711 US-guided percutaneous biopsies were performed for focal hepatic lesions. Biopsies were performed using CEUS for guidance if lesions were invisible on fusion imaging. We retrospectively evaluated the number of target lesions initially invisible on fusion imaging that became visible after applying CEUS, using a 4-point scale. Technical success rates of biopsies were evaluated based on histopathological results. In addition, the occurrence of changes in clinical decision making was assessed. Among 711 patients, 16 patients (2.3%) were included in the study. The median size of target lesions was 1.1 cm (range, 0.5-1.9 cm) in pre-procedural imaging. After CEUS, 15 of 16 (93.8%) focal hepatic lesions were visualized. The conspicuity score was significantly increased after adding CEUS, as compared to that on fusion imaging, and the dual guidance changed clinical decision making for 11 of 16 patients (68.8%). The addition of CEUS could improve the conspicuity of focal hepatic lesions invisible on fusion imaging. This dual guidance using CEUS and fusion imaging may affect patient management via changes in clinical decision-making.
MMX-I: data-processing software for multimodal X-ray imaging and tomography
Energy Technology Data Exchange (ETDEWEB)
Bergamaschi, Antoine, E-mail: antoine.bergamaschi@synchrotron-soleil.fr; Medjoubi, Kadda [Synchrotron SOLEIL, BP 48, Saint-Aubin, 91192 Gif sur Yvette (France); Messaoudi, Cédric; Marco, Sergio [Université Paris-Saclay, CNRS, Université Paris-Saclay, F-91405 Orsay (France); Institut Curie, INSERM, PSL Research University, F-91405 Orsay (France); Somogyi, Andrea [Synchrotron SOLEIL, BP 48, Saint-Aubin, 91192 Gif sur Yvette (France)
2016-04-12
The MMX-I open-source software has been developed for processing and reconstruction of large multimodal X-ray imaging and tomography datasets. The recent version of MMX-I is optimized for scanning X-ray fluorescence, phase-, absorption- and dark-field contrast techniques. This, together with its implementation in Java, makes MMX-I a versatile and user-friendly tool for X-ray imaging. A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments.
Terzic, A; Schouman, T; Scolozzi, P
2013-08-06
CT/CBCT data allow 3D reconstruction of the skeleton and untextured soft-tissue volume. 3D stereophotogrammetry has greatly improved the quality of facial soft-tissue surface texture. The combination of these two technologies allows an accurate and complete reconstruction. The 3D virtual head may be used for orthognathic surgical planning, virtual surgery, and morphological simulation, obtained with software dedicated to the fusion of 3D photogrammetric and radiological images. The imaging material includes a multi-slice CT scanner or broad-field CBCT scanner and a 3D photogrammetric camera. The operative image-processing protocol includes the following steps: 1) pre- and postoperative CT/CBCT scan and 3D photogrammetric image acquisition; 2) 3D image segmentation and fusion of the untextured CT/CBCT skin with the preoperative textured facial soft-tissue surface of the 3D photogrammetric scan; 3) image fusion of the pre- and postoperative CT/CBCT datasets, virtual osteotomies, and 3D photogrammetric soft-tissue virtual simulation; 4) fusion of the virtual simulated 3D photogrammetric and real postoperative images, and assessment of accuracy using a color-coded scale to measure the differences between the two surfaces. Copyright © 2013. Published by Elsevier Masson SAS.
Directory of Open Access Journals (Sweden)
Hirofumi Fujii
2012-01-01
Full Text Available Purpose. We aimed to clearly visualize the heterogeneous distribution of hypoxia-inducible factor 1α (HIF) activity in tumor tissues in vivo. Methods. We synthesized 125I-IPOS, a 125I-labeled chimeric protein probe designed to visualize HIF activity. The biodistribution of 125I-IPOS in FM3A tumor-bearing mice was evaluated. The intratumoral localization of this probe was then observed by autoradiography and compared with histopathological findings. The distribution of 125I-IPOS in tumors was imaged by a small-animal SPECT/CT scanner. The obtained in vivo SPECT-CT fusion images were compared with ex vivo images of excised tumors. Fusion imaging with MRI was also examined. Results. 125I-IPOS accumulated well in FM3A tumors. The intratumoral distribution of 125I-IPOS on autoradiography was quite heterogeneous and partially overlapped with that of pimonidazole. High-resolution SPECT-CT fusion images successfully demonstrated the heterogeneity of 125I-IPOS distribution inside tumors, and SPECT-MRI fusion images gave more detailed information about the intratumoral distribution of 125I-IPOS. Conclusion. High-resolution SPECT images successfully demonstrated the heterogeneous intratumoral distribution of 125I-IPOS. SPECT-CT fusion images, and more favorably SPECT-MRI fusion images, would be useful for understanding the features of heterogeneous intratumoral expression of HIF activity in vivo.
SPECT/CT image fusion with 99mTc-HYNIC-TOC in the oncological diagnostic
International Nuclear Information System (INIS)
Haeusler, F.
2006-07-01
Neuroendocrine tumours displaying somatostatin receptors have been successfully visualized with somatostatin receptor imaging. The aim of this retrospective study was to evaluate the value of anatomical-functional image fusion, i.e. combined transmission tomography (computed tomography, CT) and emission tomography (single-photon emission computed tomography, SPECT), in comparison with SPECT and CT alone. Fifty-three patients (30 men and 23 women; mean age 55.9 years; range: 20-82 years) with suspected or known endocrine tumours were studied. The patients were referred for image fusion because of staging of newly diagnosed tumours (14), biochemically/clinically suspected neuroendocrine tumour (20) or follow-up after therapy (19). The patients were studied with SPECT at 2 and 4 hours after injection of 400 MBq of 99mTc-EDDA-HYNIC-Tyr3-octreotide using a dual-detector scintillation camera. The CT was performed on one of the following two days. For both investigations the patients were fixed in an individualized vacuum mattress to guarantee exactly the same position. SPECT and SPECT/CT showed an equivalent scan result in 35 patients (66%); discrepancies were found in 18 cases (34%). After image fusion the scan result was true-positive in 27 patients (50.9%) and true-negative in 25 patients (47.2%). One patient with multiple small liver metastases escaped both SPECT and image fusion and was thus false-negative. The frequency of equivocal and probable lesion characterization was reduced by 11.6% (12 to 0) with SPECT/CT in comparison with SPECT or CT alone. The frequency of definite lesion characterization was increased by 11.6% (91 to 103). SPECT/CT affected the clinical management in 21 patients (40%). The results of this study indicate that SPECT/CT is a valuable tool for the assessment of neuroendocrine tumours. SPECT/CT is better than SPECT or CT alone and it allows a more precise staging and determination of prognosis and
Self-assessed performance improves statistical fusion of image labels
Energy Technology Data Exchange (ETDEWEB)
Bryan, Frederick W., E-mail: frederick.w.bryan@vanderbilt.edu; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M. [Electrical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); Reich, Daniel S. [Translational Neuroradiology Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, Maryland 20892 (United States); Landman, Bennett A. [Electrical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); Biomedical Engineering, Vanderbilt University, Nashville, Tennessee 37235 (United States); and Radiology and Radiological Sciences, Vanderbilt University, Nashville, Tennessee 37235 (United States)
2014-03-15
Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance
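The key result above is that votes weighted by each rater's self-assessed confidence beat simple majority voting. A minimal sketch of such confidence-weighted label fusion (not the authors' statistical fusion algorithm; the function and data are mine):

```python
import numpy as np

def weighted_vote(labels, weights):
    """Fuse rater label maps by confidence-weighted voting.
    labels: (R, N) integer labels from R raters over N pixels,
    weights: (R,) self-assessed confidences in [0, 1]."""
    n_classes = labels.max() + 1
    scores = np.zeros((n_classes, labels.shape[1]))
    for rater_labels, w in zip(labels, weights):
        # each rater adds its confidence to the class it voted for, per pixel
        scores[rater_labels, np.arange(labels.shape[1])] += w
    return scores.argmax(axis=0)

labels = np.array([[0, 1, 1],
                   [0, 0, 1],
                   [1, 1, 0]])
# the third rater reports low confidence, so its votes carry little weight
fused = weighted_vote(labels, np.array([0.9, 0.7, 0.1]))
```

With equal weights the second pixel would be decided by an arbitrary tie-break; the self-assessed confidences resolve it in favor of the more confident raters.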
Group-sparse representation with dictionary learning for medical image denoising and fusion.
Li, Shutao; Yin, Haitao; Fang, Leyuan
2012-12-01
Recently, sparse representation has attracted a lot of interest in various areas. However, standard sparse representation does not consider the intrinsic structure of the coefficients, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as a graph regularization. Then, combining group sparsity and graph regularization, DL-GSGR is presented, which is solved by alternating group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.
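The group-sparsity idea above — nonzero coefficients occurring in clusters — is typically enforced with a block soft-thresholding step inside the group sparse coding loop. A minimal sketch of that proximal operator under an l2,1 penalty (a generic textbook operator, not the DL-GSGR solver itself; names are mine):

```python
import numpy as np

def group_soft_threshold(coeffs, groups, lam):
    """Proximal operator of the l2,1 (group-lasso) penalty: shrink each
    coefficient group toward zero by its l2 norm, zeroing entire groups
    whose norm falls below lam, so nonzeros survive only in clusters.
    coeffs: (K,) code vector; groups: list of index arrays."""
    out = np.zeros_like(coeffs)
    for g in groups:
        norm = np.linalg.norm(coeffs[g])
        if norm > lam:
            out[g] = coeffs[g] * (1.0 - lam / norm)  # shrink the whole group
    return out

x = np.array([3.0, 4.0, 0.1, 0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = group_soft_threshold(x, groups, lam=1.0)
```

The first group (norm 5) is shrunk but kept; the second group (norm about 0.14) is zeroed as a whole, which is exactly the clustered-support behavior the abstract describes.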
Afshar, Solmaz F; Zawaski, Janice A; Inoue, Taeko; Rendon, David A; Zieske, Arthur W; Punia, Jyotinder N; Sabek, Omaima M; Gaber, M Waleed
2017-07-01
The abscopal effect is the response to radiation at sites that are distant from the irradiated site of an organism, and it is thought to play a role in bone marrow (BM) recovery by initiating responses in the unirradiated bone marrow. Understanding the mechanism of this effect has applications in treating BM failure (BMF) and BM transplantation (BMT), and improving survival of nuclear disaster victims. Here, we investigated the use of multimodality imaging as a translational tool to longitudinally assess bone marrow recovery. We used positron emission tomography/computed tomography (PET/CT), magnetic resonance imaging (MRI) and optical imaging to quantify bone marrow activity, vascular response and marrow repopulation in fully and partially irradiated rodent models. We further measured the effects of radiation on serum cytokine levels, hematopoietic cell counts and histology. PET/CT imaging revealed a radiation-induced increase in proliferation in the shielded bone marrow (SBM) compared to exposed bone marrow (EBM) and sham controls. T2-weighted MRI showed radiation-induced hemorrhaging in the EBM and unirradiated SBM. In the EBM and SBM groups, we found alterations in serum cytokine and hormone levels and in hematopoietic cell population proportions, and histological evidence of osteoblast activation at the bone marrow interface. Importantly, we generated a BMT mouse model using fluorescent-labeled bone marrow donor cells and performed fluorescent imaging to reveal the migration of bone marrow cells from shielded to radioablated sites. Our study validates the use of multimodality imaging to monitor bone marrow recovery and provides evidence for the abscopal response in promoting bone marrow recovery after irradiation.
Directory of Open Access Journals (Sweden)
Siddeshappa Nandish
2017-12-01
Conclusion: The final fused images were validated by radiologists, and a mutual information measure was used to validate the registration results. CT and MRI sequences with a larger number of slices gave promising results. A few cases with deformation during misregistration recorded low mutual information of about 0.3, which is not acceptable, whereas cases recording mutual information of 0.6 and above during registration gave promising results.
Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.
Liu, Manhua; Cheng, Danni; Wang, Kundong; Wang, Yaping
2018-03-23
Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance images (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn generic multi-level and multimodal features from multiple imaging modalities for classification, which are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in pre-processing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects including 93 AD patients, 204 mild cognitive impairment (MCI, 76 pMCI +128 sMCI) and 100 normal controls (NC) from Alzheimer's Disease Neuroimaging
Clinical value of CT/MR-US fusion imaging for radiofrequency ablation of hepatic nodules
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Young, E-mail: leejy4u@snu.ac.kr [Department of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of); Choi, Byung Ihn [Department of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of); Chung, Yong Eun [Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul (Korea, Republic of); Kim, Min Wook; Kim, Se Hyung; Han, Joon Koo [Department of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of)
2012-09-15
Objective: The aim of this study was to determine the registration error of an ultrasound (US) fusion imaging system during an ex vivo study and its clinical value for percutaneous radiofrequency ablation (pRFA) during an in vivo study. Materials and methods: An ex vivo study was performed using 4 bovine livers and 66 sonographically invisible lead pellets. Real-time CT-US fusion imaging was applied to assist the targeting of pellets with needles in each liver; the 4 sessions were performed by either an experienced radiologist (R1, 3 sessions) or an inexperienced resident (R2, 1 session). The distance between the pellet target and needle was measured. An in vivo study was retrospectively performed with 51 nodules (42 HCCs and 9 metastases; mean diameter, 16 mm) of 37 patients. Fusion imaging was used to create a sufficient safety margin (>5 mm) during pRFA in 24 nodules (group 1), accurately target 21 nodules obscured in the US images (group 2) and precisely identify 6 nodules surrounded by similar looking nodules (group 3). Image fusion was achieved using MR and CT images in 16 and 21 patients, respectively. The reablation rate, 1-year local recurrence rate and complications were assessed. Results: In the ex vivo study, the mean target–needle distances were 2.7 mm ± 1.9 mm (R1) and 3.1 ± 3.3 mm (R2) (p > 0.05). In the in vivo study, the reablation rates in groups 1–3 were 13%, 19% and 0%, respectively. At 1 year, the local recurrence rate was 11.8% (6/51). In our assessment of complications, one bile duct injury was observed. Conclusion: US fusion imaging system has an acceptable registration error and can be an efficacious tool for overcoming the major limitations of US-guided pRFA.
Directory of Open Access Journals (Sweden)
Jae Won Bang
2015-05-01
Full Text Available With the rapid increase of 3-dimensional (3D) content, considerable research related to 3D human factors has been undertaken to quantitatively evaluate visual discomfort, including eye fatigue and dizziness, caused by viewing 3D content. Various modalities such as electroencephalograms (EEGs), biomedical signals, and eye responses have been investigated. However, the majority of previous research has analyzed each modality separately to measure user eye fatigue, which cannot guarantee the credibility of the resulting evaluations. We therefore propose a new method for quantitatively evaluating eye fatigue related to 3D content by combining multimodal measurements. This research is novel for the following four reasons: first, for the evaluation of eye fatigue with high credibility on 3D displays, a fuzzy-based fusion method (FBFM) is proposed based on the multimodalities of EEG signals, eye blinking rate (BR), facial temperature (FT), and subjective evaluation (SE); second, to measure a more accurate variation of eye fatigue (before and after watching a 3D display), we obtain the quality scores of the EEG signals, eye BR, FT and SE; third, to combine the values of the four modalities, we obtain the optimal weights of the EEG signals, BR, FT and SE using a fuzzy system based on the quality scores; fourth, the quantitative level of the variation of eye fatigue is finally obtained using the weighted sum of the values measured by the four modalities. Experimental results confirm that the effectiveness of the proposed FBFM is greater than that of other conventional multimodal measurements. Moreover, the credibility of the variations of eye fatigue measured using the FBFM before and after watching the 3D display is proven using a t-test and descriptive statistical analysis of effect size.
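The final fusion step above is a weighted sum of the four modality measurements. A minimal sketch of that step, with the fuzzy-system weight derivation replaced by simple quality-score normalization (all names and numbers are hypothetical, not from the paper):

```python
def fuse_fatigue_scores(deltas, quality):
    """Combine per-modality fatigue variations (before vs. after viewing)
    into one score, weighting each modality by its quality score.
    The paper derives the weights with a fuzzy system; here they are
    simply the normalized quality scores."""
    total = sum(quality.values())
    return sum(deltas[m] * quality[m] / total for m in deltas)

# hypothetical before/after variations and quality scores per modality
deltas = {"eeg": 0.6, "blink": 0.4, "temp": 0.2, "subj": 0.8}
quality = {"eeg": 0.9, "blink": 0.5, "temp": 0.3, "subj": 0.8}
score = fuse_fatigue_scores(deltas, quality)
```

Modalities with low quality scores (here facial temperature) contribute little to the fused fatigue level, which is the intent of the quality-driven weighting.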
A hybrid image fusion system for endovascular interventions of peripheral artery disease.
Lalys, Florent; Favre, Ketty; Villena, Alexandre; Durrmann, Vincent; Colleaux, Mathieu; Lucas, Antoine; Kaladji, Adrien
2018-03-16
Interventional endovascular treatment has become the first line of management in the treatment of peripheral artery disease (PAD). However, contrast and radiation exposure continue to limit the feasibility of these procedures. This paper presents a novel hybrid image fusion system for endovascular intervention of PAD. We present two different roadmapping methods from intra- and pre-interventional imaging that can be used either simultaneously or independently, constituting the navigation system. The navigation system is decomposed into several steps that can be entirely integrated within the procedure workflow without modifying it to benefit from the roadmapping. First, a 2D panorama of the entire peripheral artery system is automatically created based on a sequence of stepping fluoroscopic images acquired during the intra-interventional diagnosis phase. During the interventional phase, the live image can be synchronized on the panorama to form the basis of the image fusion system. Two types of augmented information are then integrated. First, an angiography panorama is proposed to avoid contrast media re-injection. Information exploiting the pre-interventional computed tomography angiography (CTA) is also brought to the surgeon by means of semiautomatic 3D/2D registration on the 2D panorama. Each step of the workflow was independently validated. Experiments for both the 2D panorama creation and the synchronization processes showed very accurate results (errors of 1.24 and [Formula: see text] mm, respectively), similarly to the registration on the 3D CTA (errors of [Formula: see text] mm), with minimal user interaction and very low computation time. First results of an on-going clinical study highlighted its major clinical added value on intraoperative parameters. No image fusion system has been proposed yet for endovascular procedures of PAD in lower extremities. More globally, such a navigation system, combining image fusion from different 2D and 3D image
Fusion of an Ensemble of Augmented Image Detectors for Robust Object Detection.
Wei, Pan; Ball, John E; Anderson, Derek T
2018-03-17
A significant challenge in object detection is accurate identification of an object's position in image space, whereas one algorithm with one set of parameters is usually not enough, and the fusion of multiple algorithms and/or parameters can lead to more robust results. Herein, a new computational intelligence fusion approach based on the dynamic analysis of agreement among object detection outputs is proposed. Furthermore, we propose an online versus just in training image augmentation strategy. Experiments comparing the results both with and without fusion are presented. We demonstrate that the augmented and fused combination results are the best, with respect to higher accuracy rates and reduction of outlier influences. The approach is demonstrated in the context of cone, pedestrian and box detection for Advanced Driver Assistance Systems (ADAS) applications.
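A minimal sketch of fusing detector outputs by agreement, in the spirit of the approach above: boxes are kept only when enough detectors agree (high IoU), and agreeing boxes are averaged so outliers are suppressed. This is a simplified stand-in for the paper's computational-intelligence fusion; the functions and thresholds are mine.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def fuse_detections(boxes, min_votes=2, thr=0.5):
    """Keep a box only when at least min_votes detections agree with it
    (IoU >= thr), averaging the agreeing boxes; outliers are dropped."""
    fused = []
    for b in boxes:
        group = [c for c in boxes if iou(b, c) >= thr]
        if len(group) >= min_votes and all(iou(b, f) < thr for f in fused):
            fused.append(tuple(sum(v) / len(group) for v in zip(*group)))
    return fused

# two detectors agree on one object; a third box is an outlier
dets = [(10, 10, 50, 50), (12, 11, 52, 49), (200, 200, 220, 220)]
fused = fuse_detections(dets)
```

The two overlapping detections are merged into a single averaged box, while the isolated detection is discarded for lack of agreement.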
Laser injury and in vivo multimodal imaging using a mouse model
Pocock, Ginger M.; Boretsky, Adam; Gupta, Praveena; Oliver, Jeff W.; Motamedi, Massoud
2011-03-01
Balb/c wild type mice were used to perform in vivo experiments of laser-induced thermal damage to the retina. A Heidelberg Spectralis HRA confocal scanning laser ophthalmoscope with a spectral domain optical coherence tomographer was used to obtain fundus and cross-sectional images of laser induced injury in the retina. Sub-threshold, threshold, and supra-threshold lesions were observed using optical coherence tomography (OCT), infrared reflectance, red-free reflectance, fluorescence angiography, and autofluorescence imaging modalities at different time points post-exposure. Lesions observed using all imaging modalities, except autofluorescence, were not visible immediately after exposure but did resolve within an hour and grew in size over a 24 hour period. There was a decrease in fundus autofluorescence at exposure sites immediately following exposure that developed into hyper-fluorescence 24-48 hours later. OCT images revealed threshold damage that was localized to the RPE but extended into the neural retina over a 24 hour period. Volumetric representations of the mouse retina were created to visualize the extent of damage within the retina over a 24 hour period. Multimodal imaging provides complementary information regarding damage mechanisms that may be used to quantify the extent of the damage as well as the effectiveness of treatments without need for histology.
Multi-Modality Imaging in the Evaluation and Treatment of Mitral Regurgitation.
Bouchard, Marc-André; Côté-Laroche, Claudia; Beaudoin, Jonathan
2017-10-13
Mitral regurgitation (MR) is frequent and associated with increased mortality and morbidity when severe. It may be caused by intrinsic valvular disease (primary MR) or ventricular deformation (secondary MR). Imaging has a critical role to document the severity, mechanism, and impact of MR on heart function as selected patients with MR may benefit from surgery whereas other will not. In patients planned for a surgical intervention, imaging is also important to select candidates for mitral valve (MV) repair over replacement and to predict surgical success. Although standard transthoracic echocardiography is the first-line modality to evaluate MR, newer imaging modalities like three-dimensional (3D) transesophageal echocardiography, stress echocardiography, cardiac magnetic resonance (CMR), and computed tomography (CT) are emerging and complementary tools for MR assessment. While some of these modalities can provide insight into MR severity, others will help to determine its mechanism. Understanding the advantages and limitations of each imaging modality is important to appreciate their respective role for MR assessment and help to resolve eventual discrepancies between different diagnostic methods. With the increasing use of transcatheter mitral procedures (repair or replacement) for high-surgical-risk patients, multimodality imaging has now become even more important to determine eligibility, preinterventional planning, and periprocedural guidance.
Finger multibiometric cryptosystems: fusion strategy and template security
Peng, Jialiang; Li, Qiong; Abd El-Latif, Ahmed A.; Niu, Xiamu
2014-03-01
We address two critical issues in the design of a finger multibiometric system, i.e., fusion strategy and template security. First, three fusion strategies (feature-level, score-level, and decision-level fusions) with the corresponding template protection technique are proposed as the finger multibiometric cryptosystems to protect multiple finger biometric templates of fingerprint, finger vein, finger knuckle print, and finger shape modalities. Second, we theoretically analyze different fusion strategies for finger multibiometric cryptosystems with respect to their impact on security and recognition accuracy. Finally, the performance of finger multibiometric cryptosystems at different fusion levels is investigated on a merged finger multimodal biometric database. The comparative results suggest that the proposed finger multibiometric cryptosystem at feature-level fusion outperforms other approaches in terms of verification performance and template security.
An acceleration system for Laplacian image fusion based on SoC
Gao, Liwen; Zhao, Hongtu; Qu, Xiujie; Wei, Tianbo; Du, Peng
2018-04-01
Based on an analysis of the Laplacian image fusion algorithm, this paper proposes a partial-pipelining, modular processing architecture, and an SoC-based acceleration system is implemented accordingly. Full pipelining is used in the design of each module, and modules connected in series form the partial pipeline with a unified data format, which eases management and reuse. Integrated with an ARM processor, DMA, and an embedded bare-metal program, the system implements a 4-layer Laplacian pyramid on a Zynq-7000 board. Experiments show that, with small resource consumption, a pair of 256×256 images can be fused within 1 ms while maintaining a fine fusion effect.
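The Laplacian-pyramid fusion that this accelerator implements can be sketched in software. The following is a minimal NumPy sketch, not the paper's SoC implementation: each image is decomposed into detail layers plus a coarse residual, detail layers are fused by a max-absolute rule and the residual by averaging, and the result is reconstructed. The 3-tap blur and the fusion rules are illustrative assumptions.

```python
import numpy as np

def _blur(img):
    # simple separable 3-tap blur as a stand-in for a Gaussian kernel
    k = np.array([0.25, 0.5, 0.25])
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    return img

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        low = _blur(cur)[::2, ::2]            # blur + downsample
        up = np.kron(low, np.ones((2, 2)))    # nearest-neighbour upsample
        pyr.append(cur - up[:cur.shape[0], :cur.shape[1]])
        cur = low
    pyr.append(cur)                           # coarsest residual
    return pyr

def fuse(a, b, levels=4):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    # keep the stronger detail coefficient at each pixel and level
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))     # average the base level
    out = fused[-1]                           # reconstruct coarse-to-fine
    for lap in reversed(fused[:-1]):
        up = np.kron(out, np.ones((2, 2)))[:lap.shape[0], :lap.shape[1]]
        out = up + lap
    return out
```

Fusing an image with itself reconstructs it exactly, which is a convenient sanity check for the pyramid round trip.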
A tri-modality image fusion method for target delineation of brain tumors in radiotherapy.
Directory of Open Access Journals (Sweden)
Lu Guo
Full Text Available To develop a tri-modality image fusion method for better target delineation in image-guided radiotherapy for patients with brain tumors. A new method of tri-modality image fusion was developed, which can fuse and display all image sets in one panel and one operation. A feasibility study of gross tumor volume (GTV) delineation was conducted using data from three patients with brain tumors, including simulation CT, MRI, and 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) examinations before radiotherapy. Tri-modality image fusion was implemented after image registrations of CT+PET and CT+MRI, and the transparency weight of each modality could be adjusted and set by users. Three radiation oncologists delineated GTVs for all patients using dual-modality (MRI/CT) and tri-modality (MRI/CT/PET) image fusion, respectively. Inter-observer variation was assessed by the coefficient of variation (COV), the average distance between surface and centroid (ADSC), and the local standard deviation (SDlocal). Analysis of COV was also performed to evaluate intra-observer volume variation. The inter-observer variation analysis showed that the mean COV was 0.14 (±0.09) and 0.07 (±0.01) for dual-modality and tri-modality, respectively; the standard deviation of ADSC was significantly reduced (p<0.05) with tri-modality; SDlocal averaged over the median GTV surface was reduced in patient 2 (from 0.57 cm to 0.39 cm) and patient 3 (from 0.42 cm to 0.36 cm) with the new method. The intra-observer volume variation was also significantly reduced (p = 0.00) with the tri-modality method compared with the dual-modality method. With the new tri-modality image fusion method, smaller inter- and intra-observer variation in GTV definition for brain tumors can be achieved, which improves the consistency and accuracy of target delineation in individualized radiotherapy.
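The coefficient of variation used above for observer agreement is simply the standard deviation of the delineated volumes divided by their mean. A minimal sketch follows; the sample-SD convention (ddof=1) is an assumption, since the abstract does not state which estimator was used.

```python
import numpy as np

def cov_volumes(volumes):
    """Coefficient of variation (SD / mean) of GTV volumes delineated
    by several observers; lower values indicate better agreement."""
    v = np.asarray(volumes, dtype=float)
    return float(v.std(ddof=1) / v.mean())
```

For three observers reporting 9, 10 and 11 cm³, the COV is 0.1; identical volumes give 0.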
Semiparametric score level fusion: Gaussian copula approach
Susyanyo, N.; Klaassen, C.A.J.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan
2015-01-01
Score level fusion is an appealing method for combining multi-algorithm, multi-representation, and multi-modality biometrics due to its simplicity. Often, scores are assumed to be independent, but even for dependent scores, according to the Neyman-Pearson lemma, the likelihood ratio is the
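The likelihood-ratio fusion invoked above can be sketched under a simplifying independence assumption with Gaussian score densities per modality. This is not the paper's contribution (a Gaussian copula modelling dependence between scores), only the baseline it generalizes; all distribution parameters here are illustrative.

```python
import numpy as np

def llr_fuse(scores, genuine_params, impostor_params):
    """Sum of per-modality Gaussian log-likelihood ratios.
    Independence across modalities is assumed (a copula would model dependence).
    genuine_params / impostor_params are lists of (mean, std) per modality."""
    def logpdf(x, mu, sigma):
        return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)
    llr = 0.0
    for s, (mg, sg), (mi, si) in zip(scores, genuine_params, impostor_params):
        llr += logpdf(s, mg, sg) - logpdf(s, mi, si)
    return float(llr)  # accept when above a threshold set by the target FAR
```

A score at the genuine mean yields a positive fused log-likelihood ratio; a score at the impostor mean yields a negative one, and evidence from several modalities accumulates additively.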
Fernández-Gutiérrez, Fabiola; Wolska-Krawczyk, Malgorzata; Buecker, Arno; Houston, J Graeme; Melzer, Andreas
2017-02-01
This study presents a framework for workflow optimisation of multimodal image-guided procedures (MIGP) based on discrete event simulation (DES). A case of combined X-ray and magnetic resonance image-guided transarterial chemoembolisation (TACE) is presented to illustrate the application of this method. We used a ranking-and-selection optimisation algorithm to measure the performance of a number of proposed alternatives to improve the current scenario. A DES model was implemented with detailed data collected from 59 TACE procedures and the durations of magnetic resonance imaging (MRI) diagnostic procedures usually performed in a common MRI suite. Fourteen alternatives were proposed and assessed to minimise waiting times and improve workflow. Data analysis showed an average of 20.68 (7.68) min of waiting between angiography and MRI for TACE patients in 71.19% of the cases. Following the optimisation analysis, an alternative was identified that reduces waiting times in the angiography suite by up to 48.74%. The model helped to understand and detect 'bottlenecks' during multimodal TACE procedures, identifying a better alternative to the current workflow and reducing waiting times. Simulation-based workflow analysis provides a cost-effective way to face some of the challenges of introducing MIGP in clinical radiology, as highlighted in this study.
Ultrasound/Magnetic Resonance Image Fusion Guided Lumbosacral Plexus Block – A Clinical Study
DEFF Research Database (Denmark)
Strid, JM; Pedersen, Erik Morre; Søballe, Kjeld
2014-01-01
Background and aims: Ultrasound (US) guided lumbosacral plexus block (Supra Sacral Parallel Shift [SSPS]) offers an alternative to general anaesthesia and perioperative analgesia for hip surgery.1 The complex anatomy of the lumbosacral region hampers the accuracy of the block, but it may be improved by guidance of US and magnetic resonance (MR) image fusion and real-time 3D electronic needle tip tracking.2 We aim to estimate the effect and the distribution of lidocaine after SSPS guided by US/MR image fusion compared to SSPS guided by ultrasound. Methods: Twenty-four healthy volunteers will be included in a double-blinded randomized controlled trial with crossover design. MR datasets will be acquired and uploaded in an advanced US system (Epiq7, Philips, Amsterdam, Netherlands). All volunteers will receive SSPS blocks with lidocaine with added gadolinium contrast, guided by US/MR image fusion and by US one week…
A New Fusion Technique of Remote Sensing Images for Land Use/Cover
Institute of Scientific and Technical Information of China (English)
WU Lian-Xi; SUN Bo; ZHOU Sheng-Lu; HUANG Shu-E; ZHAO Qi-Guo
2004-01-01
In China, accelerating industrialization and urbanization following high-speed economic development and population increases have greatly impacted land use/cover changes, making it imperative to obtain accurate and up-to-date information on changes so as to evaluate their environmental effects. The major purpose of this study was to develop a new method to fuse lower spatial resolution multispectral satellite images with higher spatial resolution panchromatic ones to assist in land use/cover mapping. An algorithm for a new fusion method known as edge enhancement intensity modulation (EEIM) was proposed to merge two optical image data sets of different spectral ranges. The results showed that the EEIM image was quite similar in color to the lower resolution multispectral images, and the fused product was better able to preserve spectral information. Thus, compared to conventional approaches, the spectral distortion of the fused images was markedly reduced. Therefore, the EEIM fusion method could be utilized to fuse remote sensing data from the same or different sensors, including TM images and SPOT5 panchromatic images, providing high quality land use/cover images.
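EEIM's algorithmic details are not given in the abstract, but the intensity-modulation family of pansharpening methods it belongs to can be sketched as follows. Using the mean of the bands as the intensity and omitting EEIM's edge-enhancement term are simplifying assumptions.

```python
import numpy as np

def intensity_modulation_fuse(ms_bands, pan, eps=1e-6):
    """Generic intensity-modulation pansharpening: scale each (upsampled)
    multispectral band by the ratio of the panchromatic image to the
    mean-band intensity. EEIM adds an edge-enhancement term not shown here."""
    intensity = np.mean(ms_bands, axis=0)      # crude intensity estimate
    ratio = pan / (intensity + eps)            # spatial detail from the pan band
    return [band * ratio for band in ms_bands]
```

When the panchromatic image equals the intensity estimate, the bands pass through unchanged, which is why this family of methods tends to preserve spectral information.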
Energy Technology Data Exchange (ETDEWEB)
Kang, Tae Wook; Lee, Min Woo; Song, Kyoung Doo; Kim, Mimi; Kim, Seung Soo; Kim, Seong Hyun; Ha, Sang Yun [Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)
2017-01-15
To assess whether contrast-enhanced ultrasonography (CEUS) with Sonazoid can improve the lesion conspicuity and feasibility of percutaneous biopsies for focal hepatic lesions invisible on fusion imaging of real-time ultrasonography (US) with computed tomography/magnetic resonance images, and to evaluate its impact on clinical decision making. The Institutional Review Board approved this retrospective study. Between June 2013 and January 2015, 711 US-guided percutaneous biopsies were performed for focal hepatic lesions. Biopsies were performed using CEUS for guidance if lesions were invisible on fusion imaging. We retrospectively evaluated the number of target lesions initially invisible on fusion imaging that became visible after applying CEUS, using a 4-point scale. Technical success rates of biopsies were evaluated based on histopathological results. In addition, the occurrence of changes in clinical decision making was assessed. Among 711 patients, 16 patients (2.3%) were included in the study. The median size of target lesions was 1.1 cm (range, 0.5–1.9 cm) on pre-procedural imaging. After CEUS, 15 of 16 (93.8%) focal hepatic lesions were visualized. The conspicuity score was significantly increased after adding CEUS, as compared to that on fusion imaging (p < 0.001). The technical success rate of biopsy was 87.5% (14/16). After biopsy, there were changes in clinical decision making for 11 of 16 patients (68.8%). The addition of CEUS could improve the conspicuity of focal hepatic lesions invisible on fusion imaging. This dual guidance using CEUS and fusion imaging may affect patient management via changes in clinical decision making.
A generative model for probabilistic label fusion of multimodal data
DEFF Research Database (Denmark)
Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen
2012-01-01
The maturity of registration methods, in combination with the increasing processing power of computers, has made multi-atlas segmentation methods practical. The problem of merging the deformed label maps from the atlases is known as label fusion. Even though label fusion has been well studied for...
Noise temperature improvement for magnetic fusion plasma millimeter wave imaging systems
Energy Technology Data Exchange (ETDEWEB)
Lai, J.; Domier, C. W.; Luhmann, N. C. [Department of Electrical and Computer Engineering, University of California at Davis, Davis, California 95616 (United States)
2014-03-15
Significant progress has been made in the imaging and visualization of magnetohydrodynamic and microturbulence phenomena in magnetic fusion plasmas [B. Tobias et al., Plasma Fusion Res. 6, 2106042 (2011)]. Of particular importance have been microwave electron cyclotron emission imaging and microwave imaging reflectometry systems for imaging T_e and n_e fluctuations. These instruments have employed heterodyne receiver arrays with Schottky diode mixer elements directly connected to individual antennas. Consequently, the noise temperature has been strongly determined by the conversion loss, with typical noise temperatures of ~60,000 K. However, this can be significantly improved by making use of recent advances in Monolithic Microwave Integrated Circuit chip low-noise amplifiers to insert a pre-amplifier in front of the Schottky diode mixer element. In a proof-of-principle design at V-band (50-75 GHz), a significant improvement of the noise temperature from the current 60,000 K to a measured 4,000 K has been obtained.
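The improvement from inserting a pre-amplifier follows from the Friis cascade formula, in which the noise contribution of later stages is divided by the gain of the stages before them. A small sketch; the 500 K noise temperature and 20 dB gain assumed for the LNA are illustrative values, not the paper's measured figures.

```python
def cascade_noise_temp(stages):
    """Friis formula for cascaded stages:
    T_sys = T1 + T2/G1 + T3/(G1*G2) + ...
    Each stage is (noise temperature in K, linear gain)."""
    t_sys, gain = 0.0, 1.0
    for t, g in stages:
        t_sys += t / gain   # later stages are suppressed by accumulated gain
        gain *= g
    return t_sys
```

A lossy mixer alone (conversion gain 0.1) contributes its full ~60,000 K, while putting a 500 K, gain-100 LNA in front drops the system noise temperature to 500 + 60,000/100 = 1,100 K, the same order of magnitude as the reported improvement.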
International Nuclear Information System (INIS)
Kainz, H.
2002-08-01
Aim: to recognize the structures that show uptake of a 99mTc-labeled octreotide tracer within the orbit and head in patients with thyroid-associated eye disease, relying on image fusion. Methods: A series of 18 patients presenting the signs and symptoms of thyroid-associated eye disease were studied. Functional imaging was done with 99mTc-HYNIC-TOC, a newly in-house developed tracer. Both whole-body and single photon emission tomography (SPECT) images of the head were obtained in each patient. Parallel to nuclear medicine imaging, morphological imaging was done using either computed tomography or magnetic resonance. Results: By means of image fusion, further information on the functional status of the patients was obtained. All areas showing uptake could be anatomically identified, revealing a series of organs that had not yet been considered in this disease. The organs presenting tracer uptake showed characteristic forms: eye glass sign (lacrimal gland and lacrimal ducts); scissors sign (eye muscles, rectus sup. and inf.); arch on CT (muscle displacement); Omega sign (tonsils and salivary glands); W-sign (tonsils and salivary glands). Conclusions: By means of image fusion it was possible to recognize that a series of organs of the neck and head express somatostatin receptors. We interpret these results as a sign of inflammation of the lacrimal glands, the lacrimal ducts, the cervical lymphatics, the anterior portions of the extraocular eye muscles and muscles of the posterior cervical region. Somatostatin uptake in these structures reflects the presence of specific receptors, which reflect the immunoregulating function of the peptide. (author)
Energy Technology Data Exchange (ETDEWEB)
Kong, Eun Jung; Cho, Ihn Ho [Yeungnam University Hospital, Daegu (Korea, Republic of); Kang, Won Jun [Yonsei University Hospital, Seoul (Korea, Republic of); Kim, Seong Min [Chungnam National University Medical School and Hospital, Daejeon (Korea, Republic of); Won, Kyoung Sook [Keimyung University Dongsan Hospital, Daegu (Korea, Republic of); Lim, Seok Tae [Chonbuk National University Medical School and Hospital, Jeonju (Korea, Republic of); Hwang, Kyung Hoon [Gachon University Gil Hospital, Incheon (Korea, Republic of); Lee, Byeong Il; Bom, Hee Seung [Chonnam National University Medical School and Hospital, Gwangju (Korea, Republic of)
2009-12-15
Integration of the functional information of myocardial perfusion SPECT (MPS) and the morphoanatomical information of coronary CT angiography (CTA) may provide useful additional diagnostic information on the spatial relationship between perfusion defects and coronary stenosis. We studied the added value of three-dimensional cardiac SPECT/CTA fusion imaging (fusion image) by comparing fusion images with MPS. Forty-eight patients (M:F = 26:22, age 63.3±10.4 years) with a reversible perfusion defect on MPS (adenosine stress/rest SPECT with Tc-99m sestamibi or tetrofosmin) and CTA were included. Fusion images were generated and compared with the findings from the MPS. Invasive coronary angiography served as the reference standard for fusion imaging and MPS. A total of 144 coronary arteries in 48 patients were analyzed. Fusion imaging yielded a sensitivity, specificity, negative and positive predictive value for the detection of hemodynamically significant stenosis per coronary artery of 82.5%, 79.3%, 76.7% and 84.6%, respectively. Respective values for MPS were 68.8%, 70.7%, 62.1% and 76.4%. Fusion imaging could also detect more multi-vessel disease. Fused three-dimensional volume-rendered SPECT/CTA imaging provides intuitively convincing information about hemodynamically relevant lesions and could improve diagnostic accuracy.
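The per-artery accuracy figures above follow from a standard confusion-matrix calculation, sketched here. The counts used in the test are illustrative, not the study's raw numbers (which the abstract does not report).

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Per-vessel diagnostic performance from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # detected stenoses among true stenoses
        "specificity": tn / (tn + fp),  # correctly cleared vessels
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```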
Automatic Registration Method for Fusion of ZY-1-02C Satellite Images
Directory of Open Access Journals (Sweden)
Qi Chen
2013-12-01
Full Text Available Automatic image registration (AIR) has been widely studied in the fields of medical imaging, computer vision, and remote sensing. In various cases, such as image fusion, high registration accuracy must be achieved to meet application requirements. For satellite images, the large image size and unstable positioning accuracy resulting from the limited manufacturing technology of charge-coupled devices, focal plane distortion, and unrecorded spacecraft jitter lead to difficulty in obtaining agreeable corresponding points for registration using only area-based matching or feature-based matching. In this situation, a coarse-to-fine matching strategy integrating the two types of algorithms is proven feasible and effective. In this paper, an AIR method for application to the fusion of ZY-1-02C satellite imagery is proposed. First, the images are geometrically corrected. Coarse matching, based on the scale-invariant feature transform, is performed on the subsampled corrected images, and a rough global estimation is made with the matching results. Harris feature points are then extracted, and the coordinates of the corresponding points are calculated according to the global estimation results. Precise matching is conducted based on normalized cross-correlation and least-squares matching. As complex image distortion cannot be precisely estimated, a local estimation using the structure of a triangulated irregular network is applied to eliminate false matches. Finally, image resampling is conducted, based on local affine transformation, to achieve high-precision registration. Experiments with ZY-1-02C datasets demonstrate that the accuracy of the proposed method meets the requirements of fusion applications, and its efficiency is also suitable for the commercial operation of an automatic satellite data processing system.
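The precise-matching stage described above, normalized cross-correlation searched around a coarse estimate, can be sketched as follows. The search radius and template handling are illustrative assumptions; the paper additionally refines matches with least-squares matching, which is omitted here.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def refine_match(image, template, x0, y0, radius=3):
    """Search a small window around a coarse estimate (x0, y0) for the
    offset maximizing NCC -- the 'precise matching' step after a rough
    global estimate has placed the correspondence approximately."""
    h, w = template.shape
    best, best_xy = -2.0, (x0, y0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y and 0 <= x and y + h <= image.shape[0] and x + w <= image.shape[1]:
                score = ncc(image[y:y + h, x:x + w], template)
                if score > best:
                    best, best_xy = score, (x, y)
    return best_xy, best
```

Given a coarse estimate a few pixels off, the search recovers the true location where the correlation peaks at 1.0 for an exact match.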
Directory of Open Access Journals (Sweden)
Smija M Kurian
Full Text Available Fusarium oxysporum exhibits conidial anastomosis tube (CAT fusion during colony initiation to form networks of conidial germlings. Here we determined the optimal culture conditions for this fungus to undergo CAT fusion between microconidia in liquid medium. Extensive high resolution, confocal live-cell imaging was performed to characterise the different stages of CAT fusion, using genetically encoded fluorescent labelling and vital fluorescent organelle stains. CAT homing and fusion were found to be dependent on adhesion to the surface, in contrast to germ tube development which occurs in the absence of adhesion. Staining with fluorescently labelled concanavalin A indicated that the cell wall composition of CATs differs from that of microconidia and germ tubes. The movement of nuclei, mitochondria, vacuoles and lipid droplets through fused germlings was observed by live-cell imaging.
Energy Technology Data Exchange (ETDEWEB)
Xu, Baixuan; Guan, Zhiwei; Liu, Changbin; Wang, Ruimin; Yin, Dayi; Zhang, Jinming; Chen, Yingmao; Yao, Shulin; Shao, Mingzhe; Wang, Hui; Tian, Jiahe [Chinese PLA General Hospital, Department of Nuclear Medicine, Beijing (China)
2011-02-15
Dual-tracer 18F-fluorodeoxyglucose and 18F-fluorodeoxythymidine (18F-FDG/18F-FLT), dual-modality (positron emission tomography and computed tomography, PET/CT) imaging was used in a clinical trial on differentiation of pulmonary nodules. The aims of this trial were to investigate whether multimodality imaging is of advantage and to what extent it could benefit patients in real clinical settings. Seventy-three subjects in whom it was difficult to establish the diagnosis and determine management of their pulmonary lesions were prospectively enrolled in this clinical trial. All subjects underwent 18F-FDG and 18F-FLT PET/CT imaging sequentially. The images were interpreted with different strategies, as either individual or combined modalities. The pathological or clinical evidence during a follow-up period of more than 22 months served as the standard of truth. The diagnostic performance of each interpretation and its impact on clinical decision making was investigated. 18F-FLT/18F-FDG PET/CT was proven to be of clinical value in improving the diagnostic confidence in 28 lung tumours, 18 tuberculoses and 27 other benign lesions. The ratio between the maximum standardized uptake values of 18F-FLT and 18F-FDG was found to be of great potential in separating the three subgroups of patients. The advantage could only be obtained with the full use of the multimodality interpretation. Multimodality imaging induced substantial change in clinical management in 31.5% of the study subjects and partial change in another 12.3%. Multimodality imaging using 18F-FDG/18F-FLT PET/CT provided the best diagnostic efficacy and the opportunity for better management in this group of clinically challenging patients with pulmonary lesions. (orig.)
Finkeldey, Markus; Göring, Lena; Schellenberg, Falk; Brenner, Carsten; Gerhardt, Nils C.; Hofmann, Martin
2017-02-01
Microscopy imaging with a single technology is usually restricted to a single contrast mechanism. Multimodal imaging is a promising technique to improve the structural information that can be obtained about a device under test (DUT). Due to the different contrast mechanisms of laser scanning microscopy (LSM), confocal laser scanning microscopy (CLSM) and optical beam induced current microscopy (OBICM), a combination can improve the detection of structures in integrated circuits (ICs) and help to reveal their layout. While OBIC imaging is sensitive to the transitions between differently doped areas and to semiconductor-metal transitions, CLSM imaging is mostly sensitive to changes in absorption and reflection. In this work we present the implementation of OBIC imaging in a CLSM. We show first results using industry-standard Atmel microcontrollers (MCUs) with a feature size of about 250 nm as DUTs. Analyzing these types of microcontrollers is relevant to side-channel attacks, to finding hardware Trojans and possible spots for laser fault attacks, and to reverse engineering. For the experimental results, the DUT is placed on a custom circuit board that allows us to measure the current while imaging it in our in-house built stage-scanning microscope using a near-infrared (NIR) laser diode as the light source. The DUT is thinned and polished, allowing backside imaging through the Si substrate. We demonstrate the possibilities of this optical setup by evaluating OBIC, LSM and CLSM images above and below the threshold of the laser source.
Directory of Open Access Journals (Sweden)
Alexander Toet
Full Text Available The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area, there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods, and dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7 μm), near-infrared (NIR, 0.7-1.0 μm) and long-wave infrared (LWIR, 8-14 μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the visual, NIR and LWIR parts of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false color) frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs
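The band-to-channel mapping such a tri-band system performs can be sketched as a simple stacking of the three registered sensor frames into RGB planes. The per-band min-max normalization and the channel order (visual→R, NIR→G, LWIR→B) are assumptions for illustration, not the system's documented mapping.

```python
import numpy as np

def to_false_color(visual, nir, lwir):
    """Stack three registered single-band frames into one RGB false-color frame.
    Each band is min-max normalized to [0, 255] independently."""
    def norm(band):
        band = band.astype(float)
        rng = band.max() - band.min()
        return np.zeros_like(band) if rng == 0 else (band - band.min()) / rng * 255.0
    return np.dstack([norm(visual), norm(nir), norm(lwir)]).astype(np.uint8)
```

More realistic color remapping (the subject of the data set) would replace the per-band normalization with a statistics-based transfer toward daytime color photographs.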
International Nuclear Information System (INIS)
Mir, N.; Sohaib, S.A.; Collins, D.; Koh, D.M.
2010-01-01
Full text: Accurate identification of lymph nodes facilitates nodal assessment by size, morphological or MR lymphographic criteria. We compared the MR detection of lymph nodes in patients with pelvic cancers using T2-weighted imaging, and fusion of diffusion-weighted imaging (DWI) and T2-weighted imaging. Twenty patients with pelvic tumours underwent 5-mm axial T2-weighted imaging and DWI (b-values 0-750 s/mm²) on a 1.5T system. Fusion images of b = 750 s/mm² diffusion-weighted MR and T2-weighted images were created. Two radiologists evaluated the T2-weighted images and the fusion images independently, in consensus. For each image set, the location and diameter of pelvic nodes were recorded, and nodal visibility was scored using a 4-point scale (0-3). Nodal visualisation was compared using Relative to an Identified Distribution (RIDIT) analysis. The mean RIDIT score describes the probability that a randomly selected node will be better visualised relative to the other image set. One hundred and fourteen pelvic nodes (mean 5.9 mm; 2-10 mm) were identified on T2-weighted images and 161 nodes (mean 4.3 mm; 2-10 mm) on fusion images. Using fusion images, 47 additional nodes were detected compared with T2-weighted images alone (eight external iliac, 24 inguinal, 12 obturator, two peri-rectal, one presacral). Nodes detected only on fusion images were 2-9 mm (mean 3.7 mm). Nodal visualisation was better using fusion images than T2-weighted images (mean RIDIT score 0.689 vs 0.302). Fusion of diffusion-weighted MR with T2-weighted images improves identification of pelvic lymph nodes compared with T2-weighted images alone. The improved nodal identification may aid treatment planning and further nodal characterisation.
Multimodality Cardiac Imaging in a Patient with Kawasaki Disease and Giant Aneurysms
Directory of Open Access Journals (Sweden)
Ranjini Srinivasan
2016-01-01
Full Text Available Kawasaki disease is a well-known cause of acquired cardiac disease in the pediatric and adult population, most prevalent in Japan but also seen commonly in the United States. In the era of intravenous immunoglobulin (IVIG) treatment, the morbidity associated with this disease has decreased, but it remains a serious illness. Here we present the case of an adolescent, initially diagnosed with Kawasaki disease as an infant, who progressed to giant aneurysm formation and calcification of the coronary arteries. We review his case and the literature, focusing on the integral role of multimodality imaging in managing Kawasaki disease.
Simplified Multimodal Biometric Identification
Directory of Open Access Journals (Sweden)
Abhijit Shete
2014-03-01
Full Text Available Multibiometric systems are expected to be more reliable than unimodal biometric systems for personal identification due to the presence of multiple, fairly independent pieces of evidence, e.g., the Unique Identification Project "Aadhaar" of the Government of India. In this paper, we present a novel wavelet-based technique to perform fusion at the feature level and score level by considering two biometric modalities, face and fingerprint. The results indicate that the proposed technique can lead to substantial improvement in multimodal matching performance. The proposed technique is simple because it requires no preprocessing of the raw biometric traits and no feature or score normalization.
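Score-level fusion of the kind described is often realized as min-max normalization of each matcher's score followed by a weighted sum (the sum rule). A minimal sketch; the function name, score ranges, and equal weighting are illustrative assumptions, not the paper's wavelet-based method.

```python
def fuse_scores(face_score, finger_score, face_range, finger_range, w=0.5):
    """Min-max normalize each modality's matcher score to [0, 1],
    then combine with a weighted sum (sum rule). Ranges are the
    (min, max) observed scores of each matcher."""
    def minmax(s, lo, hi):
        return (s - lo) / (hi - lo)
    return w * minmax(face_score, *face_range) + (1 - w) * minmax(finger_score, *finger_range)
```

Normalization matters because matchers emit scores on incompatible scales (here, a face matcher in [0, 1] and a fingerprint matcher in [0, 100]); without it the larger-scaled modality dominates the sum.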
Anato-metabolic fusion of PET, CT and MRI images; Anatometabolische Bildfusion von PET, CT und MRT
Energy Technology Data Exchange (ETDEWEB)
Przetak, C.; Baum, R.P.; Niesen, A. [Zentralklinik Bad Berka (Germany). Klinik fuer Nuklearmedizin/PET-Zentrum; Slomka, P. [University of Western Ontario, Toronto (Canada). Health Sciences Centre; Proeschild, A.; Leonhardi, J. [Zentralklinik Bad Berka (Germany). Inst. fuer bildgebende Diagnostik
2000-12-01
The fusion of cross-sectional images - especially in oncology - appears to be a very helpful tool to improve the diagnostic and therapeutic accuracy. Though many advantages exist, image fusion is applied routinely only in a few hospitals. To introduce image fusion as a common procedure, technical and logistical conditions have to be fulfilled which are related to long-term archiving of digital data, data transfer and improvement of the available software in terms of usefulness and documentation. The accuracy of coregistration and the quality of image fusion have to be validated by further controlled studies. (orig.) [German] Zur Erhoehung der diagnostischen und therapeutischen Sicherheit ist die Fusion von Schnittbildern verschiedener tomographischer Verfahren insbesondere in der Onkologie sehr hilfreich. Trotz bestehender Vorteile hat die Bildfusion bisher nur in einzelnen Zentren Einzug in die nuklearmedizinische und radiologische Routinediagnostik gefunden. Um die Bildfusion allgemein einsetzen zu koennen, sind bestimmte technische und logistische Voraussetzungen notwendig. Dies betrifft die Langzeitarchivierung von digitalen Daten, die Moeglichkeiten zur Datenuebertragung und die Weiterentwicklung der verfuegbaren Software, auch was den Bedienkomfort und die Dokumentation anbelangt. Zudem ist es notwendig, die Exaktheit der Koregistrierung und damit die Qualitaet der Bildfusion durch kontrollierte Studien zu validieren. (orig.)
Li, Tianmeng; Hui, Hui; Ma, He; Yang, Xin; Tian, Jie
2018-02-01
Non-invasive imaging technologies, such as magnetic resonance imaging (MRI) and optical multimodality imaging methods, are commonly used for diagnosing and monitoring the development of inflammatory bowel disease (IBD). These in vivo imaging methods provide macro-scale information on the morphological changes of IBD. However, it is difficult to investigate the intestinal wall at the molecular and cellular level. State-of-the-art light-sheet and two-photon microscopy can capture the changes of IBD at the micro-scale. The aim of this work is to evaluate the size of the enterocoel and the thickness of the colon wall using MRI for in vivo imaging, and light-sheet and two-photon microscopy for in vitro imaging. C57BL/6 mice received 3.5% dextran sodium sulfate (DSS) in the drinking water for 5 days to establish the IBD model. Mice were imaged with MRI on days 0 and 6 to observe colitis progression. After MRI, the mice were sacrificed and their colons were taken for tissue clearing. Light-sheet and two-photon microscopy were then used for in vitro imaging of the cleared samples. The experimental group showed symptoms of bloody stools, sluggishness and weight loss. The colon wall was thicker and the enterocoel narrower compared with the control group. More details were observed using light-sheet and two-photon microscopy. This demonstrates that combining macro-scale MRI with micro-scale light-sheet and two-photon microscopy is feasible for diagnosing and monitoring colon inflammation.
EndoTOFPET-US: a novel multimodal tool for endoscopy and positron emission tomography
International Nuclear Information System (INIS)
Aubry, N; Fourmigue, J-M; Auffray, E; Mimoun, F B; Doroud, K; Fornaro, G; Frisch, B; Brillouet, N; Courday, P; Bugalho, R; Charbon, E; Charles, O; Damon, C; Cortinovis, D; Gadow, K; Cserkaszky, A; Fischer, J-M; Fürst, B; Gardiazabal, J; Garutti, E
2013-01-01
The EndoTOFPET-US project aims to develop a multimodal detector to foster the development of new biomarkers for prostate and pancreatic tumors. The detector will consist of two main components: an external plate and a PET extension to an endoscopic ultrasound probe. The external plate is an array of LYSO crystals read out by silicon photomultipliers (SiPM) coupled to an Application Specific Integrated Circuit (ASIC). The internal probe will be a highly integrated and miniaturized detector made of LYSO crystals read out by a fully digital SiPM featuring photosensor elements and digital readout in the same chip. The position and orientation of the two detectors will be tracked with respect to the patient to allow the fusion of the metabolic image from the PET and the anatomic image from the ultrasound probe within the time frame of the medical procedure. The fused information can guide further interventions on the organ, such as biopsy or in vivo confocal microscopy.
International Nuclear Information System (INIS)
Kashimura, Hiroshi; Ogasawara, Kuniaki; Arai, Hiroshi
2008-01-01
A fusion technique for magnetic resonance (MR) angiography and MR imaging was developed to help assess the peritumoral angioarchitecture during surgical planning for meningioma. Three-dimensional time-of-flight (3D-TOF) and 3D-spoiled gradient recalled (SPGR) datasets were obtained from 10 patients with intracranial meningioma, and fused using newly developed volume registration and visualization software. Maximum intensity projection (MIP) images from 3D-TOF MR angiography and axial SPGR MR imaging were displayed at the same time on the monitor. Selecting a vessel on the real-time MIP image indicated the corresponding points on the axial image automatically. Fusion images showed displacement of the anterior cerebral or middle cerebral artery in 7 patients and encasement of the anterior cerebral arteries in 1 patient, with no relationship between the main arterial trunk and tumor in 2 patients. Fusion of MR angiography and MR imaging can clarify relationships between the intracranial vasculature and meningioma, and may be helpful for surgical planning for meningioma. (author)
International Nuclear Information System (INIS)
Ponomarev, Vladimir; Vider, Jelena; Shavrin, Aleksander; Ageyeva, Ludmila; Tourkova, Vilia; Doubrovin, Michael; Serganova, Inna; Beresten, Tatiana; Ivanova, Anna; Blasberg, Ronald; Balatoni, Julius; Bornmann, William; Gelovani Tjuvajev, Juri
2004-01-01
Two genetic reporter systems were developed for multimodality reporter gene imaging of different molecular-genetic processes using fluorescence, bioluminescence (BLI), and nuclear imaging techniques. The eGFP cDNA was fused at the N-terminus with HSV1-tk cDNA bearing a nuclear export signal from MAPKK (NES-HSV1-tk) or with truncation at the N-terminus of the first 45 amino acids (Δ45HSV1-tk), and with firefly luciferase at the C-terminus. A single fusion protein with three functional subunits is formed following transcription and translation from a single open reading frame. The NES-HSV1-tk/GFP/luciferase (NES-TGL) or Δ45HSV1-tk/GFP/luciferase (Δ45-TGL) triple-fusion gene cDNAs were cloned into a MoMLV-based retrovirus, which was used for transduction of U87 human glioma cells. The integrity, fluorescence, bioluminescence, and enzymatic activity of the TGL reporter proteins were assessed in vitro. The predicted molecular weight of the fusion proteins (130 kDa) was confirmed by western blot. The U87-NES-TGL and U87-Δ45-TGL cells had cytoplasmic green fluorescence. The in vitro BLI was 7- and 13-fold higher in U87-NES-TGL and U87-Δ45-TGL cells compared to nontransduced control cells. The Ki of [14C]FIAU was 0.49±0.02, 0.51±0.03, and 0.003±0.001 ml/min/g in U87-NES-TGL, U87-Δ45-TGL, and wild-type U87 cells, respectively. Multimodality in vivo imaging studies were performed in nu/nu mice bearing multiple s.c. xenografts established from U87-NES-TGL, U87-Δ45-TGL, and wild-type U87 cells. BLI was performed after administration of d-luciferin (150 mg/kg i.v.). Gamma camera or PET imaging was conducted at 2 h after i.v. administration of [131I]FIAU (7.4 MBq/animal) or [124I]FIAU (7.4 MBq/animal), respectively. Whole-body fluorescence imaging was performed in parallel with the BLI and radiotracer imaging studies. In vivo BLI and gamma camera imaging showed specific localization of luminescence and radioactivity to the TGL-transduced xenografts with background levels of activity
Large area imaging of hydrogenous materials using fast neutrons from a DD fusion generator
Energy Technology Data Exchange (ETDEWEB)
Cremer, J.T., E-mail: ted@adelphitech.com [Adelphi Technology Inc., 2003 East Bayshore Road, Redwood City, California 94063 (United States); Williams, D.L.; Gary, C.K.; Piestrup, M.A.; Faber, D.R.; Fuller, M.J.; Vainionpaa, J.H.; Apodaca, M. [Adelphi Technology Inc., 2003 East Bayshore Road, Redwood City, California 94063 (United States); Pantell, R.H.; Feinstein, J. [Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States)
2012-05-21
A small-laboratory fast-neutron generator and a large-area detector were used to image hydrogen-bearing materials. The overall image resolution of 2.5 mm was determined by a knife-edge measurement. Contact images of objects were obtained in 5-50 min exposures by placing them close to a plastic scintillator at distances of 1.5 to 3.2 m from the neutron source. The generator produces 10⁹ n/s from the DD fusion reaction at a small target. The combination of the DD-fusion generator and electronic camera permits both small-laboratory and field-portable imaging of hydrogen-rich materials embedded in high-density materials.
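The knife-edge resolution measurement mentioned above is straightforward to sketch: differentiate the measured edge-spread function to obtain the line-spread function, then report its full width at half maximum. The synthetic Gaussian-blurred edge below is an illustration, not the paper's data.

```python
import numpy as np
from math import erf

def knife_edge_resolution(edge_profile, pixel_mm):
    """Estimate resolution from a knife-edge scan: differentiate the
    edge-spread function (ESF) to get the line-spread function (LSF),
    then report the LSF full width at half maximum in mm."""
    esf = np.asarray(edge_profile, float)
    lsf = np.abs(np.diff(esf))                  # line-spread function
    above = np.where(lsf >= lsf.max() / 2)[0]   # samples above half maximum
    return float(above[-1] - above[0] + 1) * pixel_mm

# Synthetic knife edge blurred by a 1-pixel-sigma Gaussian
# (the error function is the cumulative of a Gaussian blur kernel)
x = np.arange(-20, 21)
esf = np.array([0.5 * (1 + erf(xi / 2**0.5)) for xi in x])
res_mm = knife_edge_resolution(esf, pixel_mm=1.0)
```

With real detector data one would average many rows of a slightly tilted edge before differentiating, to suppress noise in the LSF.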
Hybrid Histogram Descriptor: A Fusion Feature Representation for Image Retrieval.
Feng, Qinghe; Hao, Qiaohong; Chen, Yuqi; Yi, Yugen; Wei, Ying; Dai, Jiangyan
2018-06-15
Currently, visual sensors are becoming increasingly affordable and widespread, accelerating the growth of image data. Image retrieval has attracted increasing interest due to applications in space exploration, industry, and biomedicine. Nevertheless, designing an effective feature representation is acknowledged as a hard yet fundamental issue. This paper presents a fusion feature representation called the hybrid histogram descriptor (HHD) for image retrieval. The proposed descriptor comprises two histograms jointly: a perceptually uniform histogram, which is extracted by exploiting the color and edge orientation information in perceptually uniform regions; and a motif co-occurrence histogram, which is acquired by calculating the probability of a pair of motif patterns. To evaluate the performance, we benchmarked the proposed descriptor on the RSSCN7, AID, Outex-00013, Outex-00014 and ETHZ-53 datasets. Experimental results suggest that the proposed descriptor is more effective and robust than ten recent fusion-based descriptors under the content-based image retrieval framework. The computational complexity was also analyzed to give an in-depth evaluation. Furthermore, compared with state-of-the-art convolutional neural network (CNN)-based descriptors, the proposed descriptor achieves comparable performance but does not require any training process.
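The fusion idea behind descriptors such as HHD, combining two complementary histograms into one representation compared by a distance, can be sketched as follows. The bin counts and the plain L1 distance are illustrative assumptions, not the paper's exact features or metric.

```python
import numpy as np

def fused_descriptor(color_edge_hist, motif_hist):
    """Fuse two per-image histograms into one descriptor by L1-normalizing
    each part and concatenating them, loosely mirroring HHD's pairing of a
    perceptually uniform histogram with a motif co-occurrence histogram."""
    parts = [np.asarray(h, float) / np.sum(h) for h in (color_edge_hist, motif_hist)]
    return np.concatenate(parts)

def rank_by_l1(query, database):
    """Return database indices sorted by L1 distance to the query descriptor."""
    dists = [float(np.abs(query - d).sum()) for d in database]
    return sorted(range(len(database)), key=dists.__getitem__)

# Hypothetical 4-bin color/edge and 3-bin motif histograms for 3 images
db = [fused_descriptor([8, 1, 1, 0], [5, 3, 2]),
      fused_descriptor([1, 8, 1, 0], [2, 3, 5]),
      fused_descriptor([0, 1, 8, 1], [3, 5, 2])]
query = fused_descriptor([7, 2, 1, 0], [5, 4, 1])
ranking = rank_by_l1(query, db)  # best match first
```

Normalizing each part separately keeps one histogram from dominating the distance simply because it has larger raw counts.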
Benign familial fleck retina: multimodal imaging including optical coherence tomography angiography.
Garcia, Jose Mauricio Botto de Barros; Isaac, David Leonardo Cruvinel; Sardeiro, Tainara; Aquino, Érika; Avila, Marcos
2017-01-01
This report presents multimodal imaging of a 27-year-old woman diagnosed with benign familial fleck retina (OMIM 228980), an uncommon disorder. Fundus photographs revealed retinal flecks that affected her post-equatorial retina but spared the macular area. Fundus autofluorescence and infrared imaging demonstrated a symmetrical pattern of yellow-white fleck lesions that affected both eyes. Her full-field electroretinogram and electrooculogram were normal. An optical coherence tomography B-scan was performed for both eyes, revealing increased thickness of the retinal pigmented epithelium leading to multiple small pigmented epithelium detachments. The outer retina remained intact in both eyes. Spectral-domain optical coherence tomography angiography with split-spectrum amplitude decorrelation algorithm and 3 × 3 mm structural en face optical coherence tomography did not show macular lesions. Benign familial fleck retina belongs to a heterogeneous group of so-called flecked retina syndromes, and should be considered in patients with yellowish-white retinal lesions without involvement of the macula.
Benign familial fleck retina: multimodal imaging including optical coherence tomography angiography
Directory of Open Access Journals (Sweden)
Jose Mauricio Botto de Barros Garcia
Full Text Available ABSTRACT This report presents multimodal imaging of a 27-year-old woman diagnosed with benign familial fleck retina (OMIM 228980), an uncommon disorder. Fundus photographs revealed retinal flecks that affected her post-equatorial retina but spared the macular area. Fundus autofluorescence and infrared imaging demonstrated a symmetrical pattern of yellow-white fleck lesions that affected both eyes. Her full-field electroretinogram and electrooculogram were normal. An optical coherence tomography B-scan was performed for both eyes, revealing increased thickness of the retinal pigmented epithelium leading to multiple small pigmented epithelium detachments. The outer retina remained intact in both eyes. Spectral-domain optical coherence tomography angiography with split-spectrum amplitude decorrelation algorithm and 3 × 3 mm structural en face optical coherence tomography did not show macular lesions. Benign familial fleck retina belongs to a heterogeneous group of so-called flecked retina syndromes, and should be considered in patients with yellowish-white retinal lesions without involvement of the macula.
4D XCAT phantom for multimodality imaging research
Energy Technology Data Exchange (ETDEWEB)
Segars, W. P.; Sturgeon, G.; Mendonca, S.; Grimes, Jason; Tsui, B. M. W. [Department of Radiology, Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, 2424 Erwin Road, Hock Plaza, Suite 302, Durham, North Carolina 27705 (United States); Department of Radiology, Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, 2424 Erwin Road, Hock Plaza, Suite 302, Durham, North Carolina 27705 and Department of Biomedical Engineering, University of North Carolina, Chapel Hill, North Carolina 27599 (United States); Department of Radiology, Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, 2424 Erwin Road, Hock Plaza, Suite 302, Durham, North Carolina 27705 (United States); The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutions, Baltimore, Maryland 21287 (United States)
2010-09-15
Purpose: The authors develop the 4D extended cardiac-torso (XCAT) phantom for multimodality imaging research. Methods: Highly detailed whole-body anatomies for the adult male and female were defined in the XCAT using nonuniform rational B-spline (NURBS) and subdivision surfaces based on segmentation of the Visible Male and Female anatomical datasets from the National Library of Medicine as well as patient datasets. Using the flexibility of these surfaces, the Visible Human anatomies were transformed to match body measurements and organ volumes for a 50th percentile (height and weight) male and female. The desired body measurements for the models were obtained using the PEOPLESIZE program that contains anthropometric dimensions categorized from 1st to the 99th percentile for US adults. The desired organ volumes were determined from ICRP Publication 89 [ICRP, ''Basic anatomical and physiological data for use in radiological protection: reference values,'' ICRP Publication 89 (International Commission on Radiological Protection, New York, NY, 2002)]. The male and female anatomies serve as standard templates upon which anatomical variations may be modeled in the XCAT through user-defined parameters. Parametrized models for the cardiac and respiratory motions were also incorporated into the XCAT based on high-resolution cardiac- and respiratory-gated multislice CT data. To demonstrate the usefulness of the phantom, the authors show example simulation studies in PET, SPECT, and CT using publicly available simulation packages. Results: As demonstrated in the pilot studies, the 4D XCAT (which includes thousands of anatomical structures) can produce realistic imaging data when combined with accurate models of the imaging process. With the flexibility of the NURBS surface primitives, any number of different anatomies, cardiac or respiratory motions or patterns, and spatial resolutions can be simulated to perform imaging research. Conclusions: With the
4D XCAT phantom for multimodality imaging research
International Nuclear Information System (INIS)
Segars, W. P.; Sturgeon, G.; Mendonca, S.; Grimes, Jason; Tsui, B. M. W.
2010-01-01
Purpose: The authors develop the 4D extended cardiac-torso (XCAT) phantom for multimodality imaging research. Methods: Highly detailed whole-body anatomies for the adult male and female were defined in the XCAT using nonuniform rational B-spline (NURBS) and subdivision surfaces based on segmentation of the Visible Male and Female anatomical datasets from the National Library of Medicine as well as patient datasets. Using the flexibility of these surfaces, the Visible Human anatomies were transformed to match body measurements and organ volumes for a 50th percentile (height and weight) male and female. The desired body measurements for the models were obtained using the PEOPLESIZE program that contains anthropometric dimensions categorized from 1st to the 99th percentile for US adults. The desired organ volumes were determined from ICRP Publication 89 [ICRP, ''Basic anatomical and physiological data for use in radiological protection: reference values,'' ICRP Publication 89 (International Commission on Radiological Protection, New York, NY, 2002)]. The male and female anatomies serve as standard templates upon which anatomical variations may be modeled in the XCAT through user-defined parameters. Parametrized models for the cardiac and respiratory motions were also incorporated into the XCAT based on high-resolution cardiac- and respiratory-gated multislice CT data. To demonstrate the usefulness of the phantom, the authors show example simulation studies in PET, SPECT, and CT using publicly available simulation packages. Results: As demonstrated in the pilot studies, the 4D XCAT (which includes thousands of anatomical structures) can produce realistic imaging data when combined with accurate models of the imaging process. With the flexibility of the NURBS surface primitives, any number of different anatomies, cardiac or respiratory motions or patterns, and spatial resolutions can be simulated to perform imaging research. Conclusions: With the ability to produce
Spatial resolution enhancement of satellite image data using fusion approach
Lestiana, H.; Sukristiyanti
2018-02-01
Object identification using remote sensing data is problematic when the spatial resolution does not match the object. The fusion approach is one method to solve this problem: it improves object recognition and increases the information content by combining data from multiple sensors. Fused images can be used to estimate environmental components that need to be monitored from multiple perspectives, such as evapotranspiration estimation, 3D ground-based characterisation, smart city applications, urban environments, terrestrial mapping, and water vegetation. With the fusion method, visible objects on land are easily recognized, and the variety of object information on land increases the range of environmental components that can be estimated. The difficulty of recognizing invisible objects such as Submarine Groundwater Discharge (SGD), especially in tropical areas, may be reduced by the fusion method. The low variability of such objects in sea surface temperature remains a challenge to be solved.
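A minimal example of fusion for spatial-resolution enhancement is the classical Brovey transform, which rescales each upsampled multispectral band by the ratio of the high-resolution panchromatic image to the bands' mean intensity. This is a generic pansharpening sketch with toy numbers, not the specific method of the paper.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Brovey-transform pansharpening.

    ms  -- multispectral stack of shape (bands, H, W), already resampled
           to the panchromatic grid
    pan -- high-resolution panchromatic image of shape (H, W)

    Each band is multiplied by pan / intensity, so the per-pixel band mean
    of the output matches the panchromatic image while band ratios
    (spectral content) are preserved."""
    intensity = ms.mean(axis=0)
    ratio = pan / (intensity + eps)  # eps guards against division by zero
    return ms * ratio                # broadcasts over the band axis

# Toy 2-band, 2x2 example
ms = np.array([[[0.2, 0.4], [0.2, 0.4]],
               [[0.4, 0.8], [0.4, 0.8]]])
pan = np.array([[0.6, 0.6], [0.3, 0.9]])
sharp = brovey_pansharpen(ms, pan)
```

In practice the multispectral bands are first upsampled (e.g. bicubically) to the panchromatic pixel grid before this step; the head of this collection compares such classical pansharpening against methods tailored to SEM/EDX data.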
Fusion of PET and MRI for Hybrid Imaging
Cho, Zang-Hee; Son, Young-Don; Kim, Young-Bo; Yoo, Seung-Schik
Recently, the development of the fusion PET-MRI system has been actively studied to meet the increasing demand for integrated molecular and anatomical imaging. MRI can provide detailed anatomical information on the brain, such as the locations of gray and white matter, blood vessels, axonal tracts with high resolution, while PET can measure molecular and genetic information, such as glucose metabolism, neurotransmitter-neuroreceptor binding and affinity, protein-protein interactions, and gene trafficking among biological tissues. State-of-the-art MRI systems, such as the 7.0 T whole-body MRI, now can visualize super-fine structures including neuronal bundles in the pons, fine blood vessels (such as lenticulostriate arteries) without invasive contrast agents, in vivo hippocampal substructures, and substantia nigra with excellent image contrast. High-resolution PET, known as High-Resolution Research Tomograph (HRRT), is a brain-dedicated system capable of imaging minute changes of chemicals, such as neurotransmitters and -receptors, with high spatial resolution and sensitivity. The synergistic power of the two, i.e., ultra high-resolution anatomical information offered by a 7.0 T MRI system combined with the high-sensitivity molecular information offered by HRRT-PET, will significantly elevate the level of our current understanding of the human brain, one of the most delicate, complex, and mysterious biological organs. This chapter introduces MRI, PET, and PET-MRI fusion system, and its algorithms are discussed in detail.
MIDA: A Multimodal Imaging-Based Detailed Anatomical Model of the Human Head and Neck.
Directory of Open Access Journals (Sweden)
Maria Ida Iacono
Full Text Available Computational modeling and simulations are increasingly being used to complement experimental testing for analysis of safety and efficacy of medical devices. Multiple voxel- and surface-based whole- and partial-body models have been proposed in the literature, typically with spatial resolution in the range of 1-2 mm and with 10-50 different tissue types resolved. We have developed a multimodal imaging-based detailed anatomical model of the human head and neck, named "MIDA". The model was obtained by integrating three different magnetic resonance imaging (MRI) modalities, the parameters of which were tailored to enhance the signals of specific tissues: (i) structural T1- and T2-weighted MRIs, including a specific heavily T2-weighted MRI slab with high nerve contrast optimized to enhance the structures of the ear and eye; (ii) magnetic resonance angiography (MRA) data to image the vasculature; and (iii) diffusion tensor imaging (DTI) to obtain information on anisotropy and fiber orientation. The unique multimodal high-resolution approach allowed resolving 153 structures, including several distinct muscles, bones and skull layers, arteries and veins, and nerves, as well as salivary glands. The model also offers a detailed characterization of eyes, ears, and deep brain structures. A special automatic atlas-based segmentation procedure was adopted to include a detailed map of the nuclei of the thalamus and midbrain in the head model. The suitability of the model for simulations involving different numerical methods, discretization approaches, and DTI-based tensorial electrical conductivity was examined in a case study, in which the electric field was generated by transcranial alternating current stimulation. The voxel- and surface-based versions of the model are freely available to the scientific community.
Multimodality Imaging Probe for Positron Emission Tomography and Fluorescence Imaging Studies
Directory of Open Access Journals (Sweden)
Suresh K. Pandey
2014-05-01
Full Text Available Our goal is to develop multimodality imaging agents for use in cell tracking studies by positron emission tomography (PET) and optical imaging (OI). For this purpose, bovine serum albumin (BSA) was complexed with biotin (histologic studies), 5(6)-carboxyfluorescein succinimidyl ester (FAM SE) (OI studies), and diethylenetriamine pentaacetic acid (DTPA) for chelating gallium-68 (PET studies). For synthesis of BSA-biotin-FAM-DTPA, BSA was coupled to (+)-biotin N-hydroxysuccinimide ester (biotin-NHS). BSA-biotin was treated with DTPA-anhydride, and biotin-BSA-DTPA was reacted with FAM. The biotin-BSA-DTPA-FAM was reacted with gallium chloride (3 to 5 mCi, eluted from the generator using 0.1 N HCl and passed through basic resin (AG 11 A8)), and 150 mCi (100 μL, pH 7-8) was incubated with 0.1 mg of FAM conjugate (100 μL) at room temperature for 15 minutes to give 68Ga-BSA-biotin-DTPA-FAM. A shaved C57 black mouse was injected with the FAM conjugate (50 μL) at one flank and FAM-68Ga (50 μL, 30 mCi) at the other. Immediately after injection, the mouse was placed in a fluorescence imaging system (Kodak In-Vivo F, Bruker Biospin Co., Woodbridge, CT) and imaged (λex: 465 nm, λem: 535 nm, time: 8 seconds, Xenon Light Source, Kodak). The same mouse was then placed under an Inveon microPET scanner (Siemens Medical Solutions, Knoxville, TN), injected intravenously with 25 μCi of 18F, and after a half-hour (to allow sufficient bone uptake) was imaged for 30 minutes. The molecular weight determined using matrix-assisted laser desorption ionization (MALDI) for the BSA sample was 66,485 Da; for biotin-BSA, 67,116 Da, indicating two biotin moieties per BSA molecule; for biotin-BSA-DTPA, 81,584 Da, indicating an average of 30 DTPA moieties per BSA molecule; and for the FAM conjugate, 82,383 Da, indicating an average of 1.7 fluorescent moieties per BSA molecule. Fluorescence imaging clearly showed localization of the FAM conjugate and FAM-68Ga at the respective flanks of the mouse
Multimodal news framing effects
Powell, T.E.
2017-01-01
Visuals in news media play a vital role in framing citizens’ political preferences. Yet, compared to the written word, visual images are undervalued in political communication research. Using framing theory, this thesis redresses the balance by studying the combined, or multimodal, effects of visual
Medium resolution image fusion, does it enhance forest structure assessment
CSIR Research Space (South Africa)
Roberts, JW
2008-07-01
Full Text Available This research explored the potential benefits of fusing optical and Synthetic Aperture Radar (SAR) medium resolution satellite-borne sensor data for forest structural assessment. Image fusion was applied as a means of retaining disparate data...
HALO: a reconfigurable image enhancement and multisensor fusion system
Wu, F.; Hickman, D. L.; Parker, Steve J.
2014-06-01
Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.
International Nuclear Information System (INIS)
Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena
2013-01-01
Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of the replacement of CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. A CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric value and fusion images. Results: Volume measurement, using 3D US, shows a 2.8 ± 1.5% error, 4.4 ± 3.0% error for CT, and 3.1 ± 2.0% error for MR. The results imply that volume measurement using the 3D US devices has a similar accuracy level to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing the CT-MR image fusion to the CT-3D US in radiotherapy treatment planning was verified.
International Nuclear Information System (INIS)
Nakajo, Kazuya; Tatsumi, Mitsuaki; Inoue, Atsuo
2010-01-01
We compared the diagnostic accuracy of fluorodeoxyglucose positron emission tomography/computed tomography (FDG PET/CT) and PET/magnetic resonance imaging (MRI) fusion images for gynecological malignancies. A total of 31 patients with gynecological malignancies were enrolled. FDG-PET images were fused to CT, T1- and T2-weighted images (T1WI, T2WI). PET-MRI fusion was performed semiautomatically. We performed three types of evaluation to demonstrate the usefulness of PET/MRI fusion images in comparison with that of inline PET/CT as follows: depiction of the uterus and the ovarian lesions on CT or MRI mapping images (first evaluation); additional information for lesion localization with PET and mapping images (second evaluation); and the image quality of fusion on interpretation (third evaluation). For the first evaluation, the score for T2WI (4.68±0.65) was significantly higher than that for CT (3.54±1.02) or T1WI (3.71±0.97) (P<0.01). For the second evaluation, the scores for the localization of FDG accumulation showed that T2WI (2.74±0.57) provided significantly more additional information for the identification of anatomical sites of FDG accumulation than did CT (2.06±0.68) or T1WI (2.23±0.61) (P<0.01). For the third evaluation, the three-point rating scale for the patient group as a whole demonstrated that PET/T2WI (2.72±0.54) localized the lesion significantly more convincingly than PET/CT (2.23±0.50) or PET/T1WI (2.29±0.53) (P<0.01). PET/T2WI fusion images are superior for the detection and localization of gynecological malignancies. (author)
Multimodal imaging of the human knee down to the cellular level
Schulz, G.; Götz, C.; Müller-Gerbl, M.; Zanette, I.; Zdora, M.-C.; Khimchenko, A.; Deyhle, H.; Thalmann, P.; Müller, B.
2017-06-01
Computed tomography achieves the best spatial resolution for the three-dimensional visualization of human tissues among the available nondestructive clinical imaging techniques. Nowadays, sub-millimeter voxel sizes are regularly obtained. For investigations at the true micrometer level, lab-based micro-CT (μCT) has become the gold standard. The aims of the present study are, first, the hierarchical investigation of a human knee post mortem using hard X-ray μCT and, second, multimodal imaging using absorption and phase contrast modes in order to investigate hard (bone) and soft (cartilage) tissues at the cellular level. After visualization of the entire knee using a clinical CT, a hierarchical imaging study was performed using the lab system nanotom® m. First, the entire knee was measured with a pixel length of 65 μm. The highest resolution, with a pixel length of 3 μm, could be achieved after extracting cylindrically shaped plugs from the femoral bones. For the visualization of the cartilage, grating-based phase contrast μCT (I13-2, Diamond Light Source) was performed. With an effective voxel size of 2.3 μm it was possible to visualize individual chondrocytes within the cartilage.
Biodistribution and tumor imaging of an anti-CEA single-chain antibody-albumin fusion protein
International Nuclear Information System (INIS)
Yazaki, Paul J.; Kassa, Thewodros; Cheung, Chia-wei; Crow, Desiree M.; Sherman, Mark A.; Bading, James R.; Anderson, Anne-Line J.; Colcher, David; Raubitschek, Andrew
2008-01-01
Albumin fusion proteins have demonstrated the ability to prolong the in vivo half-life of small therapeutic proteins/peptides in the circulation and thereby potentially increase their therapeutic efficacy. To evaluate if this format can be employed for antibody-based imaging, an anticarcinoembryonic antigen (CEA) single-chain antibody (scFv)-albumin fusion protein was designed, expressed and radiolabeled for biodistribution and imaging studies in athymic mice bearing human colorectal carcinoma LS-174T xenografts. The [125I]-T84.66 fusion protein demonstrated rapid tumor uptake of 12.3% injected dose per gram (ID/g) at 4 h that reached a plateau of 22.7% ID/g by 18 h. This was a dramatic increase in tumor uptake compared to 4.9% ID/g for the scFv alone. The radiometal [111In]-labeled version resulted in higher tumor uptake, 37.2% ID/g at 18 h, which persisted at the tumor site with tumor:blood ratios reaching 18:1 and with normal tissues showing limited uptake. Based on these favorable imaging properties, a pilot [64Cu]-positron emission tomography imaging study was performed with promising results. The anti-CEA T84.66 scFv-albumin fusion protein demonstrates highly specific tumor uptake that is comparable to cognate recombinant antibody fragments. The radiometal-labeled version, which shows lower normal tissue accumulation than these recombinant antibodies, provides a promising and novel platform for antibody-based imaging agents.
Detection of relationships among multi-modal brain imaging meta-features via information flow.
Miller, Robyn L; Vergara, Victor M; Calhoun, Vince D
2018-01-15
Neuroscientists and clinical researchers are awash in data from an ever-growing number of imaging and other bio-behavioral modalities. This flow of brain imaging data, taken under resting and various task conditions, combines with available cognitive measures, behavioral information, genetic data plus other potentially salient biomedical and environmental information to create a rich but diffuse data landscape. The conditions being studied with brain imaging data are often extremely complex, and it is common for researchers to employ more than one imaging, behavioral or biological data modality (e.g., genetics) in their investigations. While the field has advanced significantly in its approach to multimodal data, the vast majority of studies still ignore joint information among two or more features or modalities. We propose an intuitive framework based on conditional probabilities for understanding information exchange between features in what we are calling a feature meta-space; that is, a space consisting of many individual feature spaces. Features can have any dimension and can be drawn from any data source or modality. No a priori assumptions are made about the functional form (e.g., linear, polynomial, exponential) of captured inter-feature relationships. We demonstrate the framework's ability to identify relationships between disparate features of varying dimensionality by applying it to a large multi-site, multi-modal clinical dataset balanced between schizophrenia patients and controls. In our application it exposes both expected (previously observed) relationships and novel relationships rarely investigated by clinical researchers. To the best of our knowledge there is not presently a comparably efficient way to capture relationships of indeterminate functional form between features of arbitrary dimension and type. We are introducing this method as an initial foray into a space that remains relatively underpopulated. The framework we propose is
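As a loose illustration of the kind of assumption-free dependence measure such a conditional-probability framework relies on, the sketch below estimates mutual information between two features from a binned joint distribution. The binning scheme, bin count, and toy features are illustrative choices, not taken from the paper:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information between two 1-D features.

    Captures dependence without assuming any functional form
    (linear, polynomial, exponential, ...).
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()              # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)    # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)    # marginal p(y)
    nz = pxy > 0                           # skip empty cells: 0 * log(0) = 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
linked = np.sin(3 * x) + 0.1 * rng.normal(size=5000)  # nonlinear dependence
unrelated = rng.normal(size=5000)                     # independent feature
```

On these toy features the nonlinearly linked pair scores well above the independent pair, even though no model of the relationship was specified.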
Predictive simulations of radio frequency heated plasmas of Tore Supra using the Multi-Mode model
International Nuclear Information System (INIS)
Voitsekhovitch, Irina; Bateman, Glenn; Kritz, Arnold H.; Pankin, Alexei
2002-01-01
Multichannel integrated predictive simulations using the Multi-Mode transport model are carried out for radio frequency heated Tore Supra tokamak discharges in which helium is the primary ion component. Lower hybrid heated discharges in which the total current is driven noninductively [X. Litaudon et al., Plasma Phys. Controlled Fusion 43, 677 (2001)] and a discharge with ion cyclotron radio frequency heating of the hydrogen minority ions [G. T. Hoang et al., Nucl. Fusion 38, 117 (1998)] are simulated. The simulations of these discharges represent the first test of the Multi-Mode model in helium plasmas with dominant electron heating. Also for the first time, the particle transport in Tore Supra discharges is computed and the density profiles are predicted self-consistently with other transport channels. It is found in these simulations that the anomalous transport driven by trapped electron mode turbulence is dominant compared to the transport driven by the ion temperature gradient turbulence. The feature of the Multi-Mode model to calculate the impurity transport self-consistently with other transport channels is used in this study to predict the influence of carbon impurity influx on the discharge evolution
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images based on the combination of discrete stationary wavelet transform (DSWT), discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. First, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Second, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Third, LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Some frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
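A minimal sketch of the local-spatial-frequency idea only, simplified to operate directly on pixel intensities rather than on DCT coefficients of DSWT sub-images as the paper does; the window size and the plain max-selection fusion rule are illustrative choices:

```python
import numpy as np

def local_spatial_frequency(img, k=3):
    """RMS of row and column first differences, aggregated over a
    k x k window: a standard local activity measure."""
    img = img.astype(float)
    rf = np.zeros_like(img)
    cf = np.zeros_like(img)
    rf[:, 1:] = (img[:, 1:] - img[:, :-1]) ** 2   # row frequency
    cf[1:, :] = (img[1:, :] - img[:-1, :]) ** 2   # column frequency
    pad = k // 2
    acc = np.pad(rf + cf, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):                           # box filter over the window
        for dx in range(k):
            out += acc[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return np.sqrt(out / (k * k))

def fuse(a, b, k=3):
    """Per pixel, keep the source whose local activity is higher."""
    mask = local_spatial_frequency(a, k) >= local_spatial_frequency(b, k)
    return np.where(mask, a, b)
```

A detailed region (e.g. a checkerboard patch) wins over a flat one, so fine structure from either source survives in the fused result.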
Aspergillus infection monitored by multimodal imaging in a rat model.
Pluhacek, Tomas; Petrik, Milos; Luptakova, Dominika; Benada, Oldrich; Palyzova, Andrea; Lemr, Karel; Havlicek, Vladimir
2016-06-01
Although myriad experimental approaches have been published in the field of fungal infection diagnostics, interestingly, in the 21st century there is still no satisfactory early noninvasive tool for Aspergillus diagnostics with good sensitivity and specificity. In this work, we described, for the first time, the fungal burden in rat lungs by a multimodal imaging approach. The Aspergillus infection was monitored by positron emission tomography and by light microscopy employing modified Grocott's methenamine silver staining and eosin counterstaining. Laser ablation inductively coupled plasma mass spectrometry imaging revealed a dramatic iron increase in fungi-affected areas, which can presumably be attributed to microbial siderophores. Quantitative elemental data were inferred from matrix-matched standards prepared from rat lungs. The iron, silver, and gold MS images collected with variable laser foci revealed that silver and gold in particular are excellent elements for sensitively tracking the Aspergillus infection. The limit of detection was determined for both 107Ag and 197Au as 0.03 μg/g (5 μm laser focus). The selective incorporation of 107Ag and 197Au into fungal cell bodies and the low background noise from both elements were confirmed by energy dispersive X-ray scattering utilizing the submicron lateral resolving power of scanning electron microscopy. The low limits of detection and quantitation of both gold and silver make ICP-MS imaging a viable alternative to the standard optical evaluation used in current clinical settings. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Directory of Open Access Journals (Sweden)
Ernst Michael Jung
Full Text Available OBJECTIVE: To assess the feasibility and efficiency of interventions using ultrasound (US) volume navigation (V Nav) with real-time needle tracking and image fusion with contrast-enhanced (ce) CT, MRI or US. METHODS: First, an in vitro study on a liver phantom with CT data image fusion was performed, involving the puncture of a 10 mm lesion at a depth of 5 cm, performed by 15 examiners with US-guided freehand technique vs. V Nav, for the purpose of time optimization. Then 23 patients underwent ultrasound-navigated biopsies or interventions using V Nav image fusion of live ultrasound with ceCT, ceMRI or CEUS, which were acquired before the intervention. A CEUS data set was acquired in all patients. Image fusion was established for CEUS and CT or CEUS and MRI using anatomical landmarks in the area of the targeted lesion. A virtual biopsy line with navigational axes targeting the lesion was defined using a sterile trocar with a magnetic sensor embedded in its distal tip, employing dedicated navigation software for real-time needle tracking. RESULTS: The in vitro study showed significantly less time needed for the simulated interventions for all examiners when V Nav was used (p<0.05). In the patient study, histological confirmation was achieved in all 10 biopsies of suspect liver lesions. We also used V Nav for a breast biopsy (intraductal carcinoma), for a biopsy of the abdominal wall (metastasis of ovarian carcinoma) and for radiofrequency ablations (4 ablations). In 8 cases of inflammatory abdominal lesions, 9 percutaneous drainages were successfully inserted. CONCLUSION: Percutaneous biopsies and drainages, even of small lesions involving complex access pathways, can be accomplished with a high success rate by using 3D real-time image fusion together with real-time needle tracking.
Weng, Sheng; Chen, Xu; Xu, Xiaoyun; Wong, Kelvin K.; Wong, Stephen T. C.
2016-01-01
In coherent anti-Stokes Raman scattering (CARS) and second harmonic generation (SHG) imaging, backward and forward generated photons exhibit different image patterns and thus capture salient intrinsic information of tissues from different perspectives. However, they are often mixed in collection using traditional image acquisition methods and thus are hard to interpret. We developed a multimodal scheme using a single central fiber and multimode fiber bundle to simultaneously collect and differentiate images formed by these two types of photons and evaluated the scheme in an endomicroscopy prototype. The ratio of these photons collected was calculated for the characterization of tissue regions with strong or weak epi-photon generation while different image patterns of these photons at different tissue depths were revealed. This scheme provides a new approach to extract and integrate information captured by backward and forward generated photons in dual CARS/SHG imaging synergistically for biomedical applications. PMID:27375938
Spinal focal lesion detection in multiple myeloma using multimodal image features
Fränzle, Andrea; Hillengass, Jens; Bendl, Rolf
2015-03-01
Multiple myeloma is a tumor disease of the bone marrow that affects the skeleton systemically, i.e. multiple lesions can occur at different sites in the skeleton. To quantify overall tumor mass for determining the degree of disease and for analysis of therapy response, volumetry of all lesions is needed. Since the large number of lesions in one patient impedes manual segmentation of all lesions, quantification of overall tumor volume has not been possible until now. Therefore, the development of automatic lesion detection and segmentation methods is necessary. Since focal tumors in multiple myeloma show different characteristics in different modalities (changes in bone structure in CT images, hypointensity in T1-weighted MR images and hyperintensity in T2-weighted MR images), multimodal image analysis is necessary for the detection of focal tumors. In this paper a pattern recognition approach is presented that identifies focal lesions in lumbar vertebrae based on features from T1- and T2-weighted MR images. Image voxels within bone are classified using random forests based on plain intensities and intensity-derived features (maximum, minimum, mean, median) in a 5 x 5 neighborhood around each voxel from both T1- and T2-weighted MR images. A test data sample of lesions in 8 lumbar vertebrae from 4 multiple myeloma patients was classified with an accuracy of 95% (using a leave-one-patient-out test). The approach provides a reasonable delineation of the example lesions. This is an important step towards automatic tumor volume quantification in multiple myeloma.
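The per-voxel descriptor described above (plain intensity plus max/min/mean/median of a 5 x 5 neighborhood, from both T1- and T2-weighted images) could be sketched as follows; border handling by clipping and the feature ordering are illustrative choices not specified in the abstract:

```python
import numpy as np

def patch_features(t1, t2, y, x, r=2):
    """10-D feature vector for one voxel: for each modality, the voxel
    intensity plus max, min, mean and median of the (2r+1) x (2r+1)
    in-plane neighborhood (r=2 gives the 5 x 5 window)."""
    feats = []
    for img in (t1, t2):
        y0, y1 = max(0, y - r), min(img.shape[0], y + r + 1)
        x0, x1 = max(0, x - r), min(img.shape[1], x + r + 1)
        patch = img[y0:y1, x0:x1].astype(float)   # window clipped at borders
        feats += [float(img[y, x]), patch.max(), patch.min(),
                  patch.mean(), float(np.median(patch))]
    return np.array(feats)
```

Vectors of this form, computed for every bone voxel, would then be labeled and passed to a random forest classifier (e.g. scikit-learn's RandomForestClassifier, an assumed stand-in for the implementation used in the paper).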
Multimodal adaptive optics for depth-enhanced high-resolution ophthalmic imaging
Hammer, Daniel X.; Mujat, Mircea; Iftimia, Nicusor V.; Lue, Niyom; Ferguson, R. Daniel
2010-02-01
We developed a multimodal adaptive optics (AO) retinal imager for diagnosis of retinal diseases, including glaucoma, diabetic retinopathy (DR), age-related macular degeneration (AMD), and retinitis pigmentosa (RP). The development represents the first high-performance AO system to combine AO-corrected scanning laser ophthalmoscopy (SLO) and swept-source Fourier domain optical coherence tomography (SSOCT) imaging modes in a single compact clinical prototype platform. The SSOCT channel operates at a wavelength of 1 μm for increased penetration and visualization of the choriocapillaris and choroid, sites of major disease activity for DR and wet AMD. The system is designed to operate on a broad clinical population with a dual deformable mirror (DM) configuration that allows simultaneous low- and high-order aberration correction. The system also includes a wide-field line scanning ophthalmoscope (LSO) for initial screening, target identification, and global orientation; an integrated retinal tracker (RT) to stabilize the SLO, OCT, and LSO imaging fields in the presence of rotational eye motion; and a high-resolution LCD-based fixation target for presenting stimuli and other visual cues to the subject. The system was tested in a limited number of human subjects without retinal disease for performance optimization and validation. The system was able to resolve and quantify cone photoreceptors across the macula to within ~0.5 deg (~100-150 μm) of the fovea, image and delineate ten retinal layers, and penetrate to resolve targets deep in the choroid. In addition to instrument hardware development, analysis algorithms were developed for efficient information extraction from clinical imaging sessions, with functionality including automated image registration, photoreceptor counting, strip and montage stitching, and segmentation. The system provides clinicians and researchers with high-resolution, high performance adaptive optics imaging to help
High-resolution multimodal clinical multiphoton tomography of skin
König, Karsten
2011-03-01
This review focuses on multimodal multiphoton tomography based on near-infrared femtosecond lasers. Clinical multiphoton tomographs for 3D high-resolution in vivo imaging were placed on the market several years ago. The second generation of this Prism-Award-winning high-tech skin imaging tool (MPTflex) was introduced in 2010. The same year, the world's first clinical CARS studies were performed with a hybrid multimodal multiphoton tomograph. In particular, non-fluorescent lipids and water, as well as mitochondrial fluorescent NAD(P)H, fluorescent elastin, keratin, and melanin and SHG-active collagen, have been imaged with submicron resolution in patients suffering from psoriasis. Further multimodal approaches include the combination of multiphoton tomographs with low-resolution wide-field systems such as ultrasound, optoacoustic, OCT, and dermoscopy systems. Multiphoton tomographs are currently employed in Australia, Japan, the US, and several European countries for early diagnosis of skin cancer, optimization of treatment strategies, and cosmetic research, including long-term testing of sunscreen nanoparticles as well as anti-aging products.
Hosoya, Hitomi; Dobroff, Andrey S; Driessen, Wouter H P; Cristini, Vittorio; Brinker, Lina M; Staquicini, Fernanda I; Cardó-Vila, Marina; D'Angelo, Sara; Ferrara, Fortunato; Proneth, Bettina; Lin, Yu-Shen; Dunphy, Darren R; Dogra, Prashant; Melancon, Marites P; Stafford, R Jason; Miyazono, Kohei; Gelovani, Juri G; Kataoka, Kazunori; Brinker, C Jeffrey; Sidman, Richard L; Arap, Wadih; Pasqualini, Renata
2016-02-16
A major challenge of targeted molecular imaging and drug delivery in cancer is establishing a functional combination of ligand-directed cargo with a triggered release system. Here we develop a hydrogel-based nanotechnology platform that integrates tumor targeting, photon-to-heat conversion, and triggered drug delivery within a single nanostructure to enable multimodal imaging and controlled release of therapeutic cargo. In proof-of-concept experiments, we show a broad range of ligand peptide-based applications with phage particles, heat-sensitive liposomes, or mesoporous silica nanoparticles that self-assemble into a hydrogel for tumor-targeted drug delivery. Because nanoparticles pack densely within the nanocarrier, their surface plasmon resonance shifts to near-infrared, thereby enabling a laser-mediated photothermal mechanism of cargo release. We demonstrate both noninvasive imaging and targeted drug delivery in preclinical mouse models of breast and prostate cancer. Finally, we applied mathematical modeling to predict and confirm tumor targeting and drug delivery. These results are meaningful steps toward the design and initial translation of an enabling nanotechnology platform with potential for broad clinical applications.
Hanson, Jeffrey A.; McLaughlin, Keith L.; Sereno, Thomas J.
2011-06-01
We have developed a flexible, target-driven, multi-modal, physics-based fusion architecture that efficiently searches sensor detections for targets and rejects clutter while controlling the combinatoric problems that commonly arise in data-driven fusion systems. The informational constraints imposed by long lifetime requirements make systems vulnerable to false alarms. We demonstrate that our data fusion system significantly reduces false alarms while maintaining high sensitivity to threats. In addition, mission goals can vary substantially in terms of targets-of-interest, required characterization, acceptable latency, and false alarm rates. Our fusion architecture provides the flexibility to match these trade-offs with mission requirements, unlike many conventional systems that require significant modifications for each new mission. We illustrate our data fusion performance with case studies that span many of the potential mission scenarios, including border surveillance, base security, and infrastructure protection. In these studies, we deployed multi-modal sensor nodes - including geophones, magnetometers, accelerometers and PIR sensors - with low-power processing algorithms and low-bandwidth wireless mesh networking to create networks capable of multi-year operation. The results show our data fusion architecture maintains high sensitivity while suppressing most false alarms for a variety of environments and targets.
Development of magneto-plasmonic nanoparticles for multimodal image-guided therapy to the brain.
Tomitaka, Asahi; Arami, Hamed; Raymond, Andrea; Yndart, Adriana; Kaushik, Ajeet; Jayant, Rahul Dev; Takemura, Yasushi; Cai, Yong; Toborek, Michal; Nair, Madhavan
2017-01-05
Magneto-plasmonic nanoparticles are one of the emerging multi-functional materials in the field of nanomedicine. Their potential for targeting and multi-modal imaging is highly attractive. In this study, magnetic core/gold shell (MNP@Au) magneto-plasmonic nanoparticles were synthesized by citrate reduction of Au ions on magnetic nanoparticle seeds. The hydrodynamic size and optical properties of magneto-plasmonic nanoparticles synthesized with varying Au ion and reducing agent concentrations were evaluated. The synthesized magneto-plasmonic nanoparticles exhibited superparamagnetic properties, and their magnetic properties contributed to concentration-dependent contrast in magnetic resonance imaging (MRI). The imaging contrast from the gold shell of the magneto-plasmonic nanoparticles was also confirmed by X-ray computed tomography (CT). A transmigration study of the magneto-plasmonic nanoparticles using an in vitro blood-brain barrier (BBB) model demonstrated enhanced transmigration efficiency without disrupting the integrity of the BBB, showing potential for application to brain diseases and neurological disorders.
Chen, T N; Yin, X T; Li, X G; Zhao, J; Wang, L; Mu, N; Ma, K; Huo, K; Liu, D; Gao, B Y; Feng, H; Li, F
2018-05-08
Objective: To explore the clinical and teaching value of virtual reality technology in the preoperative planning and intraoperative guidance of gliomas located in the central sulcus region. Method: Ten patients with glioma in the central sulcus region were scheduled for surgical treatment. Neuroimaging data, including CT, CTA, DSA, MRI and fMRI, were imported into the 3dgo sczhry workstation for image fusion and 3D reconstruction. Spatial relationships between the lesions and the surrounding structures were obtained from the virtual reality images. These images were applied to operative approach design, operation process simulation, intraoperative auxiliary decision-making and the training of specialist physicians. Results: Intraoperative findings in the 10 patients were highly consistent with the preoperative virtual reality simulations. Preoperative 3D-reconstructed virtual reality images improved the feasibility of operation planning and operation accuracy. The technology not only showed advantages for neurological function protection and lesion resection during surgery, but also improved the training efficiency and effectiveness of dedicated physicians by turning abstract comprehension into virtual reality. Conclusion: Image fusion and 3D reconstruction based on virtual reality technology in glioma resection is helpful for formulating the operation plan, improving operation safety, increasing the total resection rate, and facilitating the teaching and training of specialist physicians.
Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang
2017-12-01
In recent years, the Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperform the state-of-the-art methods by about 13, 15, and 15%, respectively.
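For reference, the classic symmetric Hausdorff distance between two descriptor sets is sketched below. The paper's robust variant for diminishing background visual words is not specified in the abstract, so this plain form is purely illustrative:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A and B
    (one descriptor per row). The plain max is sensitive to outliers,
    which is why robust variants (e.g. replacing max with a mean or
    quantile of the point-to-set distances) are preferred for
    cluttered images."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

For example, adding a single far-away descriptor to one set inflates this plain metric, while a mean-based variant would barely move.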