WorldWideScience

Sample records for multi-modal image fusion

  1. Multi-Modality Medical Image Fusion Based on Wavelet Analysis and Quality Evaluation

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

Multi-modality medical image fusion has increasingly important applications in medical image analysis and understanding. In this paper, we develop and apply a multi-resolution method based on a wavelet pyramid to fuse medical images from different modalities such as PET-MRI and CT-MRI. In particular, we evaluate the fusion results obtained with different selection rules and determine the optimum combination of fusion parameters.
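
A minimal sketch of the kind of wavelet-pyramid fusion described above, assuming two co-registered, equally sized grayscale images; averaging the approximation band and keeping the max-absolute detail coefficients are illustrative selection rules, not necessarily the paper's:

```python
# Wavelet-pyramid fusion of two co-registered grayscale images.
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", levels=3):
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]  # average the coarse approximation
    for da, db in zip(ca[1:], cb[1:]):
        # per detail band, keep the coefficient with the larger magnitude
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```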

  2. Extended feature-fusion guidelines to improve image-based multi-modal biometrics

    CSIR Research Space (South Africa)

    Brown, Dane

    2016-09-01

The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a practical approach to improved image-based biometric feature fusion. The approach extracts and combines the face, fingerprint...

  3. Multi-Modality Registration And Fusion Of Medical Image Data

    International Nuclear Information System (INIS)

    Kassak, P.; Vencko, D.; Cerovsky, I.

    2008-01-01

Digitalisation of health care facilities allows us to maximize the use of digital data obtained from one patient by various modalities. A comprehensive view of the problem can be achieved from the side of morphology as well as functionality. Multi-modal registration and fusion of medical image data is one example that provides improved insight and allows a more precise approach to treatment. (author)

  4. Feature-Fusion Guidelines for Image-Based Multi-Modal Biometric Fusion

    Directory of Open Access Journals (Sweden)

    Dane Brown

    2017-07-01

The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a new approach to improved image-based biometric feature fusion. The approach extracts and combines the face, fingerprint and palmprint at the feature level for improved human identification accuracy. Feature-fusion guidelines, proposed in our recent work, are extended by adding a new face segmentation method and the support vector machine classifier. The new face segmentation method improves the face identification equal error rate (EER) by 10%. The support vector machine classifier, combined with the new feature selection approach proposed in our recent work, outperforms other classifiers when using a single training sample. Feature-fusion guidelines take the form of strengths and weaknesses observed in the applied feature processing modules during preliminary experiments. The guidelines are used to implement an effective biometric fusion system at the feature level, using a novel feature-fusion methodology, reducing the EER on two groups of three datasets, namely SDUMLA face, SDUMLA fingerprint and IITD palmprint; and MUCT face, MCYT fingerprint and CASIA palmprint.
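
For illustration, feature-level fusion of this kind reduces to normalizing and concatenating per-modality feature vectors before training a single classifier. A minimal sketch, assuming pre-extracted feature matrices; the z-score normalization and the linear-kernel SVM are placeholder choices, not the paper's exact pipeline:

```python
# Feature-level fusion: normalize each modality, concatenate, classify.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse_features(face_feats, finger_feats, palm_feats):
    # z-score each modality separately so no single modality dominates
    parts = [StandardScaler().fit_transform(f)
             for f in (face_feats, finger_feats, palm_feats)]
    return np.hstack(parts)

# X_face, X_finger, X_palm: (n_samples, d_modality) arrays; y: identity labels
# X = fuse_features(X_face, X_finger, X_palm)
# clf = SVC(kernel="linear").fit(X, y)
```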

  5. Drug-related webpages classification based on multi-modal local decision fusion

    Science.gov (United States)

    Hu, Ruiguang; Su, Xiaojing; Liu, Yanxin

    2018-03-01

In this paper, multi-modal local decision fusion is used for drug-related webpage classification. First, meaningful text is extracted through HTML parsing, and effective images are chosen by the FOCARSS algorithm. Second, six SVM classifiers are trained for six kinds of drug-taking instruments, represented by PHOG features, and one SVM classifier is trained for cannabis, represented by the mid-level features of a BOW model. For each instance in a webpage, the seven SVMs assign seven labels to its image, and another seven labels are obtained by searching for the names of the drug-taking instruments and cannabis in the related text. Concatenating the seven image labels and the seven text labels yields the representation of each instance in a webpage. Finally, multi-instance learning is used to classify the drug-related webpages. Experimental results demonstrate that the classification accuracy of multi-instance learning with multi-modal local decision fusion is much higher than that of single-modal classification.
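
The local decision fusion step can be sketched as follows; the helper name is hypothetical, and only the concatenation of the seven image labels with the seven text-derived labels follows the description above:

```python
# Per-instance representation: seven labels from image SVMs concatenated
# with seven text-derived flags. `svms` is a list of seven fitted
# classifiers; `text_flags` is a length-7 binary vector from keyword search.
import numpy as np

def instance_representation(image_feats, text_flags, svms):
    img_labels = np.array([clf.predict([image_feats])[0] for clf in svms])
    return np.concatenate([img_labels, np.asarray(text_flags)])  # length 14
```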

  6. Effective Fusion of Multi-Modal Remote Sensing Data in a Fully Convolutional Network for Semantic Labeling

    Directory of Open Access Journals (Sweden)

    Wenkai Zhang

    2017-12-01

In recent years, Fully Convolutional Networks (FCNs) have led to great improvements in semantic labeling for various applications, including multi-modal remote sensing data. Although different fusion strategies have been reported for multi-modal data, there is no in-depth study of the reasons for their performance limits. For example, it is unclear why an early fusion of multi-modal data in an FCN does not lead to satisfying results. In this paper, we investigate the contribution of individual layers inside an FCN and propose an effective fusion strategy for the semantic labeling of color or infrared imagery together with elevation data (e.g., Digital Surface Models). The sensitivity and contribution of layers concerning classes and multi-modal data are quantified by the recall and the descent rate of recall in a multi-resolution model. The contribution of different modalities to the pixel-wise prediction is analyzed, explaining the poor performance caused by the plain concatenation of different modalities. Finally, based on this analysis, an optimized scheme for fusing layers with image and elevation information into a single FCN model is derived. Experiments are performed on the ISPRS Vaihingen 2D Semantic Labeling dataset (infrared and RGB imagery as well as elevation) and the Potsdam dataset (RGB imagery and elevation). Comprehensive evaluations demonstrate the potential of the proposed approach.
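
A minimal PyTorch sketch of the alternative to plain input concatenation: each modality gets its own shallow encoder, and fusion happens on learned features rather than raw channels. Layer widths and depths are illustrative assumptions, not the paper's architecture:

```python
# Per-modality encoders with feature-level fusion for semantic labeling.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class LateFusionFCN(nn.Module):
    def __init__(self, n_classes, rgb_ch=3, dsm_ch=1):
        super().__init__()
        self.enc_rgb = nn.Sequential(conv_block(rgb_ch, 32), conv_block(32, 64))
        self.enc_dsm = nn.Sequential(conv_block(dsm_ch, 32), conv_block(32, 64))
        # fusion operates on learned features, not on raw inputs
        self.head = nn.Sequential(conv_block(128, 64),
                                  nn.Conv2d(64, n_classes, 1))

    def forward(self, rgb, dsm):
        f = torch.cat([self.enc_rgb(rgb), self.enc_dsm(dsm)], dim=1)
        return self.head(f)  # per-pixel class scores, (N, n_classes, H, W)

# model = LateFusionFCN(n_classes=6)
# scores = model(rgb_batch, dsm_batch)
```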

  7. [A preliminary research on multi-source medical image fusion].

    Science.gov (United States)

    Kang, Yuanyuan; Li, Bin; Tian, Lianfang; Mao, Zongyuan

    2009-04-01

Multi-modal medical image fusion has important value in clinical diagnosis and treatment. In this paper, the multi-resolution analysis of the Daubechies 9/7 biorthogonal wavelet transform is introduced for anatomical and functional image fusion, and a new fusion algorithm that combines local standard deviation and energy as the texture measure is presented. Finally, a set of quantitative evaluation criteria is given. Experiments show that both anatomical and metabolic information can be obtained effectively, and that both edge and texture features are preserved successfully. The presented algorithm is more effective than the traditional algorithms.

  8. Visual tracking for multi-modality computer-assisted image guidance

    Science.gov (United States)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets and the placement of imaging probes and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, and urology, many of which see increasing pressure to utilize medical imaging, especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  9. Energy Logic (EL): a novel fusion engine of multi-modality multi-agent data/information fusion for intelligent surveillance systems

    Science.gov (United States)

    Rababaah, Haroun; Shirkhodaie, Amir

    2009-04-01

Rapidly advancing hardware technology, smart sensors and sensor networks are advancing environment sensing. One major application of this technology is Large-Scale Surveillance Systems (LS3), especially for homeland security, battlefield intelligence, facility guarding and other civilian applications. The efficient and effective deployment of LS3 requires addressing a number of aspects impacting the scalability of such systems. The scalability factors are related to: computation and memory utilization efficiency; communication bandwidth utilization; network topology (e.g., centralized, ad-hoc, hierarchical or hybrid); network communication protocols and data routing schemes; and local and global data/information fusion schemes for situational awareness. Although many models have been proposed to address one aspect or another of these issues, few have addressed the need for a multi-modality multi-agent data/information fusion model with characteristics satisfying the requirements of current and future intelligent sensors and sensor networks. In this paper, we present a novel scalable fusion engine for multi-modality multi-agent information fusion for LS3. The new fusion engine is based on a concept we call Energy Logic. Experimental results, compared against a fuzzy logic model, strongly support the validity of the new model and inspire future directions for different levels of fusion and different applications.

  10. Modeling decision-making in single- and multi-modal medical images

    Science.gov (United States)

    Canosa, R. L.; Baum, K. G.

    2009-02-01

This research introduces a mode-specific model of visual saliency that can be used to highlight likely lesion locations and potential errors (false positives and false negatives) in single-mode PET and MRI images and in multi-modal fused PET/MRI images. Fused-modality digital images are a relatively recent technological improvement in medical imaging; therefore, a novel component of this research is to characterize the perceptual response to these fused images. Three different fusion techniques were compared to single-mode displays in terms of observer error rates using synthetic human brain images generated from an anthropomorphic phantom. An eye-tracking experiment was performed with naïve (non-radiologist) observers who viewed the single- and multi-modal images. The eye-tracking data allowed the errors to be classified into four categories: false positives, search errors (false negatives never fixated), recognition errors (false negatives fixated for less than 350 milliseconds), and decision errors (false negatives fixated for more than 350 milliseconds). A saliency model consisting of a set of differentially weighted low-level feature maps is derived from the known error and ground-truth locations extracted from a subset of the test images for each modality. The saliency model shows that lesion and error locations attract visual attention according to low-level image features such as color, luminance, and texture.
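
A sketch of this style of saliency model, assuming a simple weighted sum of three low-level feature maps; the specific maps, the local-standard-deviation texture measure, and the weights are illustrative stand-ins for the learned, differentially weighted maps the study derives:

```python
# Saliency as a weighted combination of normalized low-level feature maps.
import numpy as np
from scipy.ndimage import generic_filter

def saliency(rgb, w_lum=0.4, w_col=0.3, w_tex=0.3):
    lum = rgb.mean(axis=2)                                   # luminance
    col = np.abs(rgb[..., 0].astype(float) - rgb[..., 1])    # crude color opponency
    tex = generic_filter(lum, np.std, size=5)                # local texture (slow but simple)
    maps = [m / (m.max() + 1e-8) for m in (lum, col, tex)]   # normalize to [0, 1]
    return w_lum * maps[0] + w_col * maps[1] + w_tex * maps[2]
```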

  11. A fuzzy feature fusion method for auto-segmentation of gliomas with multi-modality diffusion and perfusion magnetic resonance images in radiotherapy.

    Science.gov (United States)

    Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming

    2018-02-19

Diffusion and perfusion magnetic resonance (MR) images provide functional information about tumours and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images, including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, a histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fused result of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. Auto-segmentations of the tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients using all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%); the mean Dice similarity coefficient (DSC) was 0.88 (±0.02); and the mean sensitivity and specificity of the auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of multi-parametric functional MR images for target definition in precision radiation treatment planning for patients with gliomas.
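
A hedged sketch of the histogram-based fuzzy fusion idea: each modality is mapped to a tumour-membership value in [0, 1], and the three memberships are combined. The sigmoid membership function and the minimum t-norm are illustrative assumptions, not the paper's fitted models:

```python
# Fuzzy feature fusion of three functional MR modalities into a tumour mask.
import numpy as np

def membership(vol, center, width):
    # sigmoid mapping of intensity to tumour membership in [0, 1]
    return 1.0 / (1.0 + np.exp(-(vol - center) / width))

def fuzzy_fuse(adc, fa, rcbv, params, threshold=0.5):
    # params: dict like {"adc": (center, width), "fa": ..., "rcbv": ...}
    mu = [membership(v, *params[k]) for k, v in
          (("adc", adc), ("fa", fa), ("rcbv", rcbv))]
    fused = np.minimum.reduce(mu)   # conservative t-norm fusion
    return fused > threshold        # candidate tumour mask
```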

  12. Structured and Sparse Canonical Correlation Analysis as a Brain-Wide Multi-Modal Data Fusion Approach.

    Science.gov (United States)

    Mohammadi-Nejad, Ali-Reza; Hossein-Zadeh, Gholam-Ali; Soltanian-Zadeh, Hamid

    2017-07-01

Multi-modal data fusion has recently emerged as a comprehensive neuroimaging analysis approach, which usually uses canonical correlation analysis (CCA). However, current CCA-based fusion approaches face problems such as high dimensionality, multi-collinearity, unimodal feature selection, asymmetry, and loss of spatial information when reshaping the imaging data into vectors. This paper proposes a structured and sparse CCA (ssCCA) technique as a novel CCA method to overcome these problems. To investigate the performance of the proposed algorithm, we compare three data fusion techniques: standard CCA, regularized CCA, and ssCCA, and evaluate their ability to detect multi-modal data associations. We use simulations to compare the performance of these approaches and probe the effects of the non-negativity constraint, the dimensionality of features, sample size, and noise power. The results demonstrate that ssCCA outperforms the existing standard and regularized CCA-based fusion approaches. We also apply the methods to real functional magnetic resonance imaging (fMRI) and structural MRI data of Alzheimer's disease (AD) patients (n = 34) and healthy control (HC) subjects (n = 42) from the ADNI database. The results illustrate that the proposed unsupervised technique differentiates the transition pattern between the subject course of AD patients and HC subjects with a p-value of less than 1×10^-6. Furthermore, we depict the brain mapping of functional areas that are most correlated with the anatomical changes in AD patients relative to HC subjects.
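
As a point of reference, standard CCA between two modality feature matrices is available off the shelf (ssCCA itself is not in scikit-learn); a minimal usage sketch, with hypothetical fMRI/sMRI feature matrices:

```python
# Baseline CCA between two modality feature matrices; ssCCA (the paper's
# method) adds structure and sparsity constraints on top of this.
import numpy as np
from sklearn.cross_decomposition import CCA

cca = CCA(n_components=2)
# X_fmri, Y_smri: hypothetical (n_subjects, n_features) arrays
# U, V = cca.fit_transform(X_fmri, Y_smri)
# canonical correlations of each component pair:
# r = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(2)]
```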

  13. Quantitative multi-modal NDT data analysis

    International Nuclear Information System (INIS)

    Heideklang, René; Shokouhi, Parisa

    2014-01-01

A single NDT technique is often not adequate to provide assessments of the integrity of test objects with the required coverage or accuracy. In such situations, one often resorts to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task that involves several diverse fields of research, including signal and image processing, statistics and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of eddy current, GMR and thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.

  14. A tri-modality image fusion method for target delineation of brain tumors in radiotherapy.

    Directory of Open Access Journals (Sweden)

    Lu Guo

To develop a tri-modality image fusion method for better target delineation in image-guided radiotherapy for patients with brain tumors. A new method of tri-modality image fusion was developed, which can fuse and display all image sets in one panel and one operation. A feasibility study of gross tumor volume (GTV) delineation was conducted using data from three patients with brain tumors, including simulation CT, MRI, and 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) examinations before radiotherapy. Tri-modality image fusion was implemented after image registrations of CT+PET and CT+MRI, and the transparency weight of each modality could be adjusted and set by users. Three radiation oncologists delineated GTVs for all patients using dual-modality (MRI/CT) and tri-modality (MRI/CT/PET) image fusion, respectively. Inter-observer variation was assessed by the coefficient of variation (COV), the average distance between surface and centroid (ADSC), and the local standard deviation (SDlocal). Analysis of COV was also performed to evaluate intra-observer volume variation. The inter-observer variation analysis showed that the mean COV was 0.14 (±0.09) and 0.07 (±0.01) for dual-modality and tri-modality, respectively; the standard deviation of ADSC was significantly reduced (p<0.05) with tri-modality; and SDlocal averaged over the median GTV surface was reduced in patient 2 (from 0.57 cm to 0.39 cm) and patient 3 (from 0.42 cm to 0.36 cm) with the new method. The intra-observer volume variation was also significantly reduced (p = 0.00) with the tri-modality method compared with the dual-modality method. With the new tri-modality image fusion method, smaller inter- and intra-observer variation in GTV definition for brain tumors can be achieved, which improves the consistency and accuracy of target delineation in individualized radiotherapy.
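
The display step, blending co-registered volumes with user-adjustable transparency weights, can be sketched as follows; registration is assumed already done, and the min-max normalization and default weights are illustrative:

```python
# Alpha-blend co-registered CT, MRI and PET slices into one display panel.
import numpy as np

def blend(ct, mri, pet, w=(0.4, 0.4, 0.2)):
    w = np.asarray(w, dtype=float) / np.sum(w)   # normalize weights to sum to 1
    norm = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-8)
    return w[0] * norm(ct) + w[1] * norm(mri) + w[2] * norm(pet)
```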

  15. Compositional-prior-guided image reconstruction algorithm for multi-modality imaging

    Science.gov (United States)

    Fang, Qianqian; Moore, Richard H.; Kopans, Daniel B.; Boas, David A.

    2010-01-01

The development of effective multi-modality imaging methods typically requires an efficient information fusion model, particularly when combining structural images with a complementary imaging modality that provides functional information. We propose a composition-based image segmentation method for X-ray digital breast tomosynthesis (DBT) and a structural-prior-guided image reconstruction for a combined DBT and diffuse optical tomography (DOT) breast imaging system. Using the 3D DBT images from 31 clinically measured healthy breasts, we create an empirical relationship between the X-ray intensities of adipose and fibroglandular tissue. We then use this relationship to segment another 58 healthy breast DBT images from 29 subjects into compositional maps of different tissue types. For each breast, we build a weighted graph in the compositional space and construct a regularization matrix to incorporate the structural priors into a finite-element-based DOT image reconstruction. Use of the compositional priors enables us to fuse tissue anatomy into optical images with less restriction than when using a binary segmentation. This allows us to recover image contrast captured by DOT but not by DBT. We show that it is possible to fine-tune the strength of the structural priors by changing a single regularization parameter. The optical properties for adipose and fibroglandular tissue estimated using the proposed algorithm are comparable or superior to those estimated with expert segmentations, but do not involve the time-consuming manual selection of regions of interest. PMID:21258460

  16. Feature-based Alignment of Volumetric Multi-modal Images

    Science.gov (United States)

    Toews, Matthew; Zöllei, Lilla; Wells, William M.

    2014-01-01

This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g., MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to the poor repeatability of feature detection across image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from feature data and realigning feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE dataset of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955

  17. Extended depth of field integral imaging using multi-focus fusion

    Science.gov (United States)

    Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua

    2018-03-01

In this paper, we propose a new method for extending the depth of field in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to capture multi-focus elemental images while sweeping the focus plane across the scene. Simply applying an image fusion method to elemental images holding rich parallax information does not work effectively, because accurate image registration is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. All-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion of the multi-focus elemental images.
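
A minimal sketch of block-based multi-focus fusion, assuming co-registered (generalized) elemental images; variance as the focus measure and the block size are illustrative choices:

```python
# Block-based multi-focus fusion: per block, copy the sharpest source.
import numpy as np

def block_fuse(images, block=16):
    h, w = images[0].shape
    out = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            blocks = [img[y:y+block, x:x+block] for img in images]
            # highest local variance is taken as "best in focus"
            out[y:y+block, x:x+block] = max(blocks, key=np.var)
    return out
```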

  18. Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images

    Science.gov (United States)

    Awumah, Anna; Mahanti, Prasun; Robinson, Mark

    2016-10-01

Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performance was verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm yields a product of high spatial quality, while the wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images.
[1] Pohl, C., and John L. Van Genderen. "Review article: multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854.
[2] Zhang, Yun. "Understanding image fusion." Photogramm. Eng. Remote Sens. 70.6 (2004): 657-661.
[3] Mahanti, Prasun, et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
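
A minimal sketch of the IHS idea underlying the hybrid algorithm, using a simple mean-intensity variant in which the Pan band's spatial detail is injected into every MS band; the hybrid IHS-Wavelet combination itself is not reproduced here:

```python
# Additive IHS-style pansharpening: replace the MS intensity with Pan detail.
import numpy as np

def ihs_fuse(ms_up, pan):
    """ms_up: (H, W, 3) multi-spectral image upsampled to pan's grid;
    pan: (H, W) panchromatic band; both scaled to [0, 1]."""
    intensity = ms_up.mean(axis=2, keepdims=True)        # simple intensity component
    # inject the pan band's high-frequency detail into every band
    return np.clip(ms_up + (pan[..., None] - intensity), 0.0, 1.0)
```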

  19. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    Science.gov (United States)

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

In the biomedical field, digital multi-focal images are important for the documentation and communication of specimen data, because the morphological information of a transparent specimen can be captured as a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given stack into a single image, which is more informative and complete than any single image in the stack. In addition, the multi-focal images within a stack are fused along three orthogonal directions, and the multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrate that the deep CNN image fusion based multilinear classifier reaches a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed approach shows great potential for building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks.

  20. The optimal algorithm for Multi-source RS image fusion.

    Science.gov (United States)

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

To solve the issue that available fusion methods cannot self-adaptively adjust the fusion rules according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. The algorithm then designs the objective function as a weighted sum of evaluation indices and optimizes it using GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows.
•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
•This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
•This text proposes the model operator and the observation operator as the fusion scheme for RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  1. Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.

    Science.gov (United States)

    Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie

    2016-07-01

    Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.

  2. [Research progress of multi-modal medical image fusion and recognition].

    Science.gov (United States)

    Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian

    2013-10-01

Medical image fusion and recognition has a wide range of applications, such as lesion localization, cancer staging and treatment effect assessment. Multi-modal medical image fusion and recognition are analyzed and summarized in this paper. First, the problem of multi-modal medical image fusion and recognition is discussed, along with its advantages and key steps. Second, three fusion strategies are reviewed from the algorithmic point of view, and four fusion-recognition structures are discussed. Third, difficulties, challenges and possible future research directions are discussed.

  3. Multi-sensor image fusion and its applications

    CERN Document Server

    Blum, Rick S

    2005-01-01

Taking another lesson from nature, the latest advances in image processing technology seek to combine image data from several diverse types of sensors in order to obtain a more accurate view of the scene, in much the same way as we rely on our five senses. Multi-Sensor Image Fusion and Its Applications is the first text dedicated to the theory and practice of the registration and fusion of image data, covering such approaches as statistical methods, color-related techniques, model-based methods, and visual information display strategies. After a review of state-of-the-art image fusion techniques,

  4. CT, MRI and PET image fusion using the ProSoma 3D simulation software

    International Nuclear Information System (INIS)

    Dalah, E.; Bradley, D.A.; Nisbet, A.; Reise, S.

    2008-01-01

Multi-modality imaging is involved in almost all oncology applications focusing on the extent of disease and target volume delineation. Commercial image fusion software packages are becoming available but require comprehensive evaluation to ensure reliability of the fusion and of the underpinning registration algorithm, particularly for radiotherapy. The present work assesses this accuracy for a number of registration methods provided by the commercial package ProSoma. A NEMA body phantom was used in evaluating CT, MR and PET images. In addition, discussion is provided concerning the choice and geometry of fiducial markers in phantom studies and the effect of window level on target size, in particular with regard to the application of multi-modality imaging in treatment planning. In general, the accuracy of fusion of multi-modality images was within 0.5-1.5 mm of actual feature diameters and within 2 ml of actual volumes, particularly for CT images. (author)

  5. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    Science.gov (United States)

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g., the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking to create a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows tracking of surgical tools occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring the target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  6. Multi-focus Image Fusion Using Epifluorescence Microscopy for Robust Vascular Segmentation

    OpenAIRE

    Pelapur, Rengarajan; Prasath, Surya; Palaniappan, Kannappan

    2014-01-01

We are building a computerized image analysis system for the Dura Mater vascular network from fluorescence microscopy images. We propose a system that couples a multi-focus image fusion module with a robust adaptive filtering based segmentation. The robust adaptive filtering scheme handles noise without destroying small structures, and the multi-focus image fusion considerably improves the overall segmentation quality by integrating information from multiple images. Based on the segmenta...

  7. Image fusion techniques in permanent seed implantation

    Directory of Open Access Journals (Sweden)

    Alfredo Polo

    2010-10-01

Over the last twenty years, major software and hardware developments in brachytherapy treatment planning, intraoperative navigation and dose delivery have been made. Image-guided brachytherapy has emerged as the ultimate conformal radiation therapy, allowing precise dose deposition on small volumes under direct image visualization. In this process imaging plays a central role, and novel imaging techniques are being developed (PET, MRI-MRS and power Doppler US imaging are among them), creating a new paradigm (dose-guided brachytherapy), where imaging is used to map the exact coordinates of the tumour cells and to guide applicator insertion to the correct position. Each of these modalities has limitations in providing all of the physical and geometric information required for the brachytherapy workflow. Therefore, image fusion can be used as a solution in order to take full advantage of the information from each modality in treatment planning, intraoperative navigation, dose delivery, verification and follow-up of interstitial irradiation. Image fusion, understood as the visualization of any morphological volume (i.e. US, CT, MRI) together with an additional second morphological volume (i.e. CT, MRI) or functional dataset (functional MRI, SPECT, PET), is a well-known method for treatment planning, verification and follow-up of interstitial irradiation. The term image fusion is used when multiple patient image datasets are registered and overlaid or merged to provide additional information. Fused images may be created from multiple images from the same imaging modality taken at different moments (multi-temporal approach), or by combining information from multiple modalities. Quality means that the fused images should provide additional information to the brachytherapy process (diagnosis and staging, treatment planning, intraoperative imaging, treatment delivery and follow-up) that cannot be obtained in other ways. In this review I will focus on the role of

  8. Multi-focus image fusion with the all convolutional neural network

    Science.gov (United States)

    Du, Chao-ben; Gao, She-sheng

    2018-01-01

A decision map contains complete and clear information about the image to be fused, which is crucial to various image fusion problems, especially multi-focus image fusion. However, obtaining a decision map that yields a satisfactory fusion result is usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling layers of the CNN are replaced by convolution layers, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-convolutional CNN (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm obtains state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.
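
The core architectural change, replacing max-pooling with strided convolution so the downsampling itself is learned, can be sketched in PyTorch; the channel sizes and the two-class (focused/defocused) head are illustrative assumptions:

```python
# "All convolutional" downsampling: a stride-2 conv replaces MaxPool2d(2).
import torch.nn as nn

def down_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, stride=2, padding=1),  # learned downsampling
        nn.ReLU(inplace=True))

# small patch classifier backbone for building a focus decision map
acnn = nn.Sequential(down_block(1, 32), down_block(32, 64),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                     nn.Linear(64, 2))
```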

  9. Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging

    International Nuclear Information System (INIS)

    Joshi, Bishnu P.; Wang, Thomas D.

    2010-01-01

Cancer is one of the major causes of mortality and morbidity in our healthcare system. Molecular imaging is an emerging methodology for the early detection of cancer, guidance of therapy, and monitoring of response. The development of new instruments and exogenous molecular probes that can be labeled for multi-modality imaging is critical to this process. Today, molecular imaging is at a crossroads, and new targeted imaging agents are expected to broadly expand our ability to detect and manage cancer. This integrated imaging strategy will permit clinicians not only to localize lesions within the body but also to manage therapy by visualizing the expression and activity of specific molecules. This information is expected to have a major impact on drug development and on the understanding of basic cancer biology. At this time, a number of molecular probes have been developed by conjugating various labels to affinity ligands for targeting in different imaging modalities. This review describes the current status of exogenous molecular probes for optical, scintigraphic, MRI and ultrasound imaging platforms. Furthermore, we shed light on how these techniques can be used synergistically in multi-modal platforms and how they are being employed in current research.

  10. On the Multi-Modal Object Tracking and Image Fusion Using Unsupervised Deep Learning Methodologies

    Science.gov (United States)

    LaHaye, N.; Ott, J.; Garay, M. J.; El-Askary, H. M.; Linstead, E.

    2017-12-01

The number of different modalities of remote sensors has been on the rise, resulting in large datasets with different complexity levels. Such complex datasets can provide valuable information separately, yet there is greater value in a comprehensive view of them combined; hidden information can be deduced by applying data mining techniques to the fused data. The curse of dimensionality of such fused data, due to the potentially vast dimension space, hinders our ability to understand them deeply, because each dataset requires instrument-specific and dataset-specific knowledge for optimal and meaningful use. Once a user decides to use multiple datasets together, a deeper understanding of how to translate and combine these datasets in a correct and effective manner is needed. Although data-centric techniques exist, generic automated methodologies that could solve this problem completely do not. Here we are developing a system that aims to gain a detailed understanding of different data modalities. Such a system will provide an analysis environment that gives the user useful feedback and can aid in research tasks. In our current work, we show the initial outputs of our system implementation, which leverages unsupervised deep learning techniques so as not to burden the user with the task of labeling input data, while still allowing for a detailed machine understanding of the data. Our goal is to be able to track objects, like cloud systems or aerosols, across different image-like data modalities. The proposed system is flexible, scalable and robust enough to understand complex likenesses within multi-modal data in a similar spatio-temporal range, and also to co-register and fuse these images when needed.

  11. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Naveed ur Rehman

    2015-05-01

A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding the input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode-mixing and mode-misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or same-indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically significant performance differences.

  12. Noncontact Sleep Study by Multi-Modal Sensor Fusion

    Directory of Open Access Journals (Sweden)

    Ku-young Chung

    2017-07-01

Polysomnography (PSG) is considered the gold standard for determining sleep stages, but due to the obtrusiveness of its sensor attachments, sleep stage classification algorithms using noninvasive sensors have been developed over the years. However, the previous studies have not yet been proven reliable. In addition, most of the products are designed for healthy customers rather than for patients with sleep disorders. We present a novel approach to classifying sleep stages via low-cost and noncontact multi-modal sensor fusion, which extracts sleep-related vital signals from radar signals and a sound-based context-awareness technique. This work is uniquely designed based on the PSG data of sleep disorder patients, which were received and certified by professionals at Hanyang University Hospital. The proposed algorithm further incorporates medical/statistical knowledge to determine personally adjusted thresholds and devise post-processing. The efficiency of the proposed algorithm is highlighted by contrasting the sleep stage classification performance of single-sensor and sensor-fusion algorithms. To validate the possibility of commercializing this work, the classification results of this algorithm were compared with those of the commercial sleep monitoring device ResMed S+. The proposed algorithm was evaluated on random patients following PSG examination, and the results show a promising novel approach for determining sleep stages in a low-cost and unobtrusive manner.

  13. Evaluation of registration strategies for multi-modality images of rat brain slices

    International Nuclear Information System (INIS)

    Palm, Christoph; Vieten, Andrea; Salber, Dagmar; Pietrzyk, Uwe

    2009-01-01

    In neuroscience, small-animal studies frequently involve dealing with series of images from multiple modalities such as histology and autoradiography. The consistent and bias-free restacking of multi-modality image series is obligatory as a starting point for subsequent non-rigid registration procedures and for quantitative comparisons with positron emission tomography (PET) and other in vivo data. Up to now, consistency between 2D slices without cross validation using an inherent 3D modality is frequently presumed to be close to the true morphology due to the smooth appearance of the contours of anatomical structures. However, in multi-modality stacks consistency is difficult to assess. In this work, consistency is defined in terms of smoothness of neighboring slices within a single modality and between different modalities. Registration bias denotes the distortion of the registered stack in comparison to the true 3D morphology and shape. Based on these metrics, different restacking strategies of multi-modality rat brain slices are experimentally evaluated. Experiments based on MRI-simulated and real dual-tracer autoradiograms reveal a clear bias of the restacked volume despite quantitatively high consistency and qualitatively smooth brain structures. However, different registration strategies yield different inter-consistency metrics. If no genuine 3D modality is available, the use of the so-called SOP (slice-order preferred) or MOSOP (modality-and-slice-order preferred) strategy is recommended.

  14. Facile Fabrication of Animal-Specific Positioning Molds For Multi-modality Molecular Imaging

    International Nuclear Information System (INIS)

    Park, Jeong Chan; Oh, Ji Eun; Woo, Seung Tae

    2008-01-01

Recently, multi-modal imaging systems have become widely adopted in molecular imaging. We fabricated animal-specific positioning molds for PET/MR fusion imaging using easily available molding clay and rapid foam. The animal-specific positioning molds provide immobilization and reproducible positioning of small animals. Herein, we have compared fiber-based molding clay with rapid foam for fabricating molds of the experimental animal. A round-bottomed acrylic frame, which fitted into the microPET gantry, was prepared first. The experimental mouse was anesthetized and placed on the mold for positioning. Rapid foam and fiber-based clay were used to fabricate the mold. For both the rapid foam and the clay, the experimental animal needs to be pushed down smoothly into the mold for positioning. However, after the mouse was removed, the fabricated clay needed to be dried completely at 60 °C in an oven overnight for hardening. Four sealed pipette tips containing [18F]FDG solution were used as fiduciary markers. After injection of [18F]FDG via the tail vein, microPET scanning was performed, and MRI scanning of the same animal followed. Animal-specific positioning molds were thus fabricated using rapid foam and fiber-based molding clay for multi-modality imaging. Functional and anatomical images were obtained with microPET and MRI, respectively, and the fused PET/MR images were obtained using the freely available AMIDE program. Animal-specific molds were successfully prepared using easily available rapid foam, molding clay and disposable pipette tips, and thanks to these molds the fused PET and MR images were co-registered with negligible misalignment.

  15. Multi-modality molecular imaging: pre-clinical laboratory configuration

    Science.gov (United States)

    Wu, Yanjun; Wellen, Jeremy W.; Sarkar, Susanta K.

    2006-02-01

In recent years, the prevalence of in vivo molecular imaging applications has rapidly increased. Here we report on the construction of a multi-modality imaging facility in a pharmaceutical setting that is expected to further advance existing capabilities for in vivo imaging of drug distribution and of the interaction of drugs with their targets. The imaging instrumentation in our facility includes a microPET scanner, a four-wavelength time-domain optical imaging scanner, a 9.4 T/30 cm MRI scanner and a SPECT/X-ray CT scanner. An electronics shop and a computer room dedicated to image analysis are additional features of the facility. The layout of the facility was designed with a central animal preparation room surrounded by separate laboratory rooms for each of the major imaging modalities, to accommodate the workflow of simultaneous in vivo imaging experiments. This report focuses on the design of, and anticipated applications for, our microPET and optical imaging laboratory spaces. Additionally, we discuss efforts to maximize the daily throughput of animal scans through the development of efficient experimental workflows and the use of multiple animals in a single scanning session.

  16. TU-AB-202-11: Tumor Segmentation by Fusion of Multi-Tracer PET Images Using Copula Based Statistical Methods

    International Nuclear Information System (INIS)

    Lapuyade-Lahorgue, J; Ruan, S; Li, H; Vera, P

    2016-01-01

Purpose: Multi-tracer PET imaging is receiving more attention in radiotherapy because it provides additional tumor volume information, such as glucose metabolism and oxygenation. However, automatic PET-based tumor segmentation is still a very challenging problem. We propose a statistical fusion approach to jointly segment the sub-areas of tumors from FDG and FMISO PET images. Methods: Non-standardized gamma distributions are convenient for modeling intensity distributions in PET. As strong correlation exists between multi-tracer PET images, we propose a new fusion method based on copulas, which are capable of representing the dependency between different tracers. A Hidden Markov Field (HMF) model is used to represent the spatial relationship between PET image voxels and the statistical dynamics of intensities for each modality. Real PET images of five patients with FDG and FMISO are used to evaluate our method quantitatively and qualitatively. A comparison between individual and multi-tracer segmentations was conducted to show the advantages of the proposed fusion method. Results: The segmentation results show that fusion with a Gaussian copula achieves a high Dice coefficient of 0.84, compared with 0.54 and 0.3 for monomodal segmentations based on the individual FDG and FMISO PET images, respectively. In addition, the high correlation coefficients (0.75 to 0.91) for the Gaussian copula for all five test patients indicate the dependency between tumor regions in the multi-tracer PET images. Conclusion: This study shows that multi-tracer PET imaging can efficiently improve the segmentation of tumor regions where hypoxia and glucose consumption are present at the same time. Introducing copulas to model the dependency between two tracers can simultaneously take into account information from both tracers and deal with the two pathological phenomena. Future work will consider other families of copulas, such as spherical and archimedean copulas, and eliminate partial volume
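
The Gaussian-copula construction can be sketched as follows: each tracer's intensities are pushed through a fitted gamma CDF to uniform scores and then to normal scores, where a single correlation coefficient captures the inter-tracer dependency. The simplified parameter fitting, the fixed zero location, and the clipping of non-positive voxels are assumptions of this sketch, not the paper's estimation procedure:

```python
# Gaussian-copula dependency between two tracers with gamma marginals.
import numpy as np
from scipy import stats

def copula_correlation(fdg, fmiso):
    z = []
    for vol in (fdg, fmiso):
        x = np.clip(vol.ravel().astype(float), 1e-6, None)  # gamma needs x > 0
        a, loc, scale = stats.gamma.fit(x, floc=0)          # gamma marginal fit
        u = np.clip(stats.gamma.cdf(x, a, loc, scale), 1e-6, 1 - 1e-6)
        z.append(stats.norm.ppf(u))                         # normal scores
    return np.corrcoef(z[0], z[1])[0, 1]                    # copula correlation
```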

  17. TU-C-BRD-01: Image Guided SBRT I: Multi-Modality 4D Imaging

    International Nuclear Information System (INIS)

    Cai, J; Mageras, G; Pan, T

    2014-01-01

Motion management is one of the critical technical challenges for radiation therapy. 4D imaging has been rapidly adopted as an essential tool to assess organ motion associated with respiratory breathing. A variety of 4D imaging techniques have been developed, and others are currently under development, based on different imaging modalities such as CT, MRI, PET, and CBCT. Each modality provides specific and complementary information about organ and tumor respiratory motion. Effective use of each technique, or combined use of different techniques, can enable comprehensive management of tumor motion. Specifically, these techniques have afforded tremendous opportunities to better define and delineate tumor volumes, more accurately perform patient positioning, and effectively apply highly conformal therapy techniques such as IMRT and SBRT. Successful implementation requires a good understanding not only of each technique, including its unique features, limitations, artifacts, and image acquisition and processing, but also of how to systematically apply the information obtained from different imaging modalities using proper tools such as deformable image registration. Furthermore, it is important to understand the differences in the effects of breathing variation between different imaging modalities. A comprehensive motion management strategy using multi-modality 4D imaging has shown promise in improving patient care, but at the same time faces significant challenges. This session focuses on the current status of, and advances in, imaging respiration-induced organ motion with different imaging modalities: 4D-CT, 4D-MRI, 4D-PET, and 4D-CBCT/DTS. Learning Objectives: (1) understand the need for and role of multi-modality 4D imaging in radiation therapy; (2) understand the underlying physics behind each 4D imaging technique; (3) recognize the advantages and limitations of each 4D imaging technique

  18. Multi-modal brain imaging software for guiding invasive treatment of epilepsy

    NARCIS (Netherlands)

    Ossenblok, P.P.W.; Marien, S.; Meesters, S.P.L.; Florack, L.M.J.; Hofman, P.; Schijns, O.E.M.G.; Colon, A.

    2017-01-01

Purpose: The surgical treatment of patients with complex epilepsies is changing more and more from open, invasive surgery towards minimally invasive, image-guided treatment. Multi-modal brain imaging procedures are developed to delineate preoperatively the region of the brain which is responsible

  19. Fusion of different modalities of imaging the wrist

    International Nuclear Information System (INIS)

    Verdenet, J.; Garbuio, P.; Runge, M.; Cardot, J.C.

    1997-01-01

Standard radiographs are not always able to reveal a fracture of one of the wrist bones. An earlier study showed that 40% of patients presenting with a suspected fracture, and in whom the radiographic image was normal, had a fracture confirmed and quantified by MRI and scintigraphy. Scintigraphy alone does not allow precise localization, so we developed software to fuse the radiographic and scintigraphic images fully automatically, using no external markers. The software runs on a PC in the Matlab environment. Starting from histogram processing, the contours are extracted on the interpolated radiographic and scintigraphic images. The matching has three degrees of freedom: one rotation and two translations (along the x and y axes). The internal axis of the forearm was chosen to perform the rotation and translation. The forearm thickness, identical in each modality, allows the images to be matched properly. We obtain an anatomical image on which the contours and the hyperfixation zones of the scintigraphy are overlaid. In a set of 100 examinations we observed 38 fractures, and the distinction between a fracture of the scaphoid and of another wrist bone was confirmed in 93% of cases.
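
A sketch of the three-degree-of-freedom matching described (one rotation, two translations), here posed as maximizing image correlation with a general-purpose optimizer; the similarity metric and optimizer are illustrative substitutes for the original Matlab implementation:

```python
# Rigid 2D registration with 3 DOF: rotation angle plus x/y translation.
import numpy as np
from scipy import ndimage, optimize

def register_rigid(fixed, moving):
    def cost(p):
        angle, tx, ty = p
        m = ndimage.rotate(moving, angle, reshape=False, order=1)
        m = ndimage.shift(m, (ty, tx), order=1)   # (row, col) = (y, x) shift
        # negative correlation, so minimizing maximizes similarity
        return -np.corrcoef(fixed.ravel(), m.ravel())[0, 1]
    res = optimize.minimize(cost, x0=[0.0, 0.0, 0.0], method="Powell")
    return res.x  # (rotation in degrees, x shift, y shift)
```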

  20. Fusion Imaging for Procedural Guidance.

    Science.gov (United States)

    Wiley, Brandon M; Eleid, Mackram F; Thaden, Jeremy J

    2018-05-01

    The field of percutaneous structural heart interventions has grown tremendously in recent years. This growth has fueled the parallel development of new imaging protocols and technologies to help facilitate these minimally invasive procedures. Fusion imaging is an exciting new technology that combines the strengths of two imaging modalities and has the potential to improve procedural planning and the safety of many commonly performed transcatheter procedures. In this review we discuss the basic concepts of fusion imaging along with the relative strengths and weaknesses of static vs dynamic fusion imaging modalities. This review focuses primarily on echocardiographic-fluoroscopic fusion imaging and its application in commonly performed transcatheter structural heart procedures.

  1. ADMultiImg: a novel missing modality transfer learning based CAD system for diagnosis of MCI due to AD using incomplete multi-modality imaging data

    Science.gov (United States)

    Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing

    2018-02-01

    Alzheimer's Disease (AD) is the most common cause of dementia and currently has no cure. Treatments targeting early stages of AD such as Mild Cognitive Impairment (MCI) may be most effective at decelerating AD, and are thus attracting increasing attention. However, MCI has substantial heterogeneity in that it can be caused by various underlying conditions, not only AD. To detect MCI due to AD, the NIA-AA published updated consensus criteria in 2011, in which the use of multi-modality images was highlighted as one of the most promising methods. It is of great interest to develop a CAD system based on automatic, quantitative analysis of multi-modality images and machine learning algorithms to help physicians more adequately diagnose MCI due to AD. The challenge, however, is that multi-modality images are not universally available for many patients due to cost, access, safety, and lack of consent. We developed a novel Missing Modality Transfer Learning (MMTL) algorithm capable of utilizing whatever imaging modalities are available for an MCI patient to diagnose the patient's likelihood of MCI due to AD. Furthermore, we integrated MMTL with radiomics steps including image processing, feature extraction, and feature screening, and a post-processing step for uncertainty quantification (UQ), and developed a CAD system called "ADMultiImg" to assist clinical diagnosis of MCI due to AD using multi-modality images together with patient demographic and genetic information. Tested on ADNI data, our system can generate a diagnosis with high accuracy even for patients with only partially available image modalities (AUC=0.94), and therefore may have broad clinical utility.

  2. Detecting Pedestrian Flocks by Fusion of Multi-Modal Sensors in Mobile Phones

    DEFF Research Database (Denmark)

    Kjærgaard, Mikkel Baun; Wirz, Martin; Roggen, Daniel

    2012-01-01

    derived from multiple sensor modalities of modern smartphones. Automatic detection of flocks has several important applications, including evacuation management and socially aware computing. The novelty of this paper is, firstly, to use data fusion techniques to combine several sensor modalities (WiFi...

  3. WE-H-206-02: Recent Advances in Multi-Modality Molecular Imaging of Small Animals

    Energy Technology Data Exchange (ETDEWEB)

    Tsui, B. [Johns Hopkins University (United States)

    2016-06-15

    Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, by contrast, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, scattered orders of magnitude less than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: To understand the contrast mechanism of PAT. To understand the multiscale applications of PAT. Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small animal imaging that have contributed significantly

  5. Modality prediction of biomedical literature images using multimodal feature representation

    Directory of Open Access Journals (Sweden)

    Pelka, Obioma

    2016-08-01

    Full Text Available This paper presents the modelling approaches performed to automatically predict the modality of images found in biomedical literature. Various state-of-the-art visual features, such as Bag-of-Keypoints computed with dense SIFT descriptors, texture features and Joint Composite Descriptors, were used for visual image representation. Text representation was obtained by vector quantisation on a Bag-of-Words dictionary generated using attribute importance derived from a χ²-test. By computing the principal components separately for each feature, both dimensionality reduction and a reduction in computational load were achieved. Various multiple-feature fusions were adopted to supplement visual image information with corresponding text information. The improvement obtained when using multimodal features rather than visual or text features alone was detected, analysed and evaluated. Random Forest models with 100 to 500 deep trees grown by resampling, a multi-class linear-kernel SVM with C=0.05, and a late fusion of the two classifiers were used for modality prediction. The Random Forest classifier achieved the higher accuracy, and Bag-of-Keypoints computed with dense SIFT descriptors proved to be a better approach than with Lowe SIFT.

  6. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y [University of Kansas Hospital, Kansas City, KS (United States); Fullerton, G; Goins, B [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States)

    2015-06-15

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors in tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine the reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Linear regression analysis was then performed to compare image-based tumor volumes with the reference tumor volume and the known test object volume for the rats and the phantom, respectively. Results: The slopes of the regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US respectively. Conclusion: For both the animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to the selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement
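
    As a quick illustration of the volume formula and the regression comparison, the snippet below computes V = (π/6)·a·b·c for a few made-up diameters and a no-intercept regression slope against placeholder reference volumes (all numbers are illustrative, not the study's data):

        import numpy as np

        def ellipsoid_volume(a_mm, b_mm, c_mm):
            """V = (pi/6) * a * b * c from three orthogonal maximum diameters."""
            return (np.pi / 6.0) * a_mm * b_mm * c_mm

        measured = np.array([ellipsoid_volume(2, 2, 2),
                             ellipsoid_volume(4, 4, 4),
                             ellipsoid_volume(7, 7, 7)])
        reference = np.array([4.2, 33.5, 179.6])   # placeholder reference volumes (mm^3)

        # Slope of a no-intercept linear fit, analogous to the reported regression slopes
        slope = np.sum(measured * reference) / np.sum(reference ** 2)
        print(f"regression slope: {slope:.3f}")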

  7. Manifold regularized multi-task feature selection for multi-modality classification in Alzheimer's disease.

    Science.gov (United States)

    Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang

    2013-01-01

    Accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment, MCI), is very important for possible delay and early treatment of the disease. Recently, multi-modality methods have been used for fusing information from multiple different and complementary imaging and non-imaging modalities. Although there are a number of existing multi-modality methods, few of them have addressed the problem of joint identification of disease-related brain regions from multi-modality data for classification. In this paper, we propose a manifold regularized multi-task learning framework to jointly select features from multi-modality data. Specifically, we formulate the multi-modality classification as a multi-task learning problem, where each task focuses on the classification based on one modality. In order to capture the intrinsic relatedness among the multiple tasks (i.e., modalities), we adopt a group sparsity regularizer, which ensures that only a small number of features are selected jointly. In addition, we introduce a new manifold-based Laplacian regularization term to preserve the geometric distribution of the original data from each task, which can lead to the selection of more discriminative features. Furthermore, we extend our method to the semi-supervised setting, which is very important since the acquisition of a large set of labeled data (i.e., diagnosis of disease) is usually expensive and time-consuming, while the collection of unlabeled data is relatively much easier. To validate our method, we have performed extensive evaluations on the baseline magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) data of the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our experimental results demonstrate the effectiveness of the proposed method.
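
    The core of the formulation is an l2,1 (group-sparse) penalty that forces the same features to be selected across tasks. A minimal sketch of that ingredient, using scikit-learn's MultiTaskLasso on synthetic data (the manifold/Laplacian term and the per-modality design matrices of the paper are omitted here):

        import numpy as np
        from sklearn.linear_model import MultiTaskLasso

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 93))                 # e.g., 93 ROI features per subject
        W_true = np.zeros((93, 2)); W_true[:5] = 1.0   # only the first 5 features matter
        Y = X @ W_true + 0.1 * rng.normal(size=(100, 2))   # two "tasks" (modalities)

        model = MultiTaskLasso(alpha=0.1).fit(X, Y)    # l2,1 penalty couples the tasks
        selected = np.flatnonzero(np.any(model.coef_ != 0, axis=0))
        print("jointly selected features:", selected)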

  8. Spinal fusion-hardware construct: Basic concepts and imaging review

    Science.gov (United States)

    Nouh, Mohamed Ragab

    2012-01-01

    The interpretation of spinal images fixed with metallic hardware forms an increasing bulk of daily practice in a busy imaging department. Radiologists are required to be familiar with the instrumentation and operative options used in spinal fixation and fusion procedures, especially those used in their own institution. This is critical in evaluating the position of implants and the potential complications associated with the operative approaches and spinal fixation devices used. Thus, the radiologist can play an important role in patient care and outcome. This review outlines the advantages and disadvantages of commonly used imaging methods, reports on the best yield for each modality, and discusses how to overcome the problematic issues associated with the presence of metallic hardware during imaging. Baseline radiographs are essential as they are the reference point for evaluating future studies should patients develop symptoms suggesting possible complications. They may justify further imaging workup with computed tomography, magnetic resonance and/or nuclear medicine studies, as the evaluation of a patient with a spinal implant involves a multi-modality approach. This review also describes the imaging features of potential complications associated with spinal fusion surgery as well as the instrumentation used. This basic knowledge aims to help radiologists approach everyday practice in clinical imaging. PMID:22761979

  9. Multi-detector CT imaging in the postoperative orthopedic patient with metal hardware

    International Nuclear Information System (INIS)

    Vande Berg, Bruno; Malghem, Jacques; Maldague, Baudouin; Lecouvet, Frederic

    2006-01-01

    Multi-detector CT imaging (MDCT) has become a routine imaging modality in the assessment of postoperative orthopedic patients with metallic instrumentation, which degrades image quality at MR imaging. This article reviews the physical basis and CT appearance of such metal-related artifacts. It also addresses the clinical value of MDCT in postoperative orthopedic patients, with emphasis on fracture healing, spinal fusion or arthrodesis, and joint replacement. MDCT imaging shows limitations in the assessment of the bone marrow cavity and of the soft tissues, for which MR imaging remains the imaging modality of choice despite metal-related anatomic distortions and signal alterations

  10. Multimodality Image Fusion and Planning and Dose Delivery for Radiation Therapy

    International Nuclear Information System (INIS)

    Saw, Cheng B.; Chen Hungcheng; Beatty, Ron E.; Wagner, Henry

    2008-01-01

    Image-guided radiation therapy (IGRT) relies on the quality of fused images to yield accurate and reproducible patient setup prior to dose delivery. The registration of 2 image datasets can be characterized as hardware-based or software-based image fusion. Hardware-based image fusion is performed by hybrid scanners that combine 2 distinct medical imaging modalities, such as positron emission tomography (PET) and computed tomography (CT), into a single device. In hybrid scanners, the patient maintains the same position during both studies, making the fusion of image datasets simple. However, hybrid scanners cannot perform temporal image registration, where image datasets are acquired at different times. On the other hand, software-based image fusion techniques can merge image datasets taken at different times or with different medical imaging modalities. Software-based image fusion can be performed either manually, using landmarks, or automatically. In the automatic image fusion method, the best fit is evaluated using a mutual information coefficient. Manual image fusion is typically performed at dose planning and for patient setup prior to dose delivery for IGRT. The fusion of orthogonal live radiographic images taken prior to dose delivery to digitally reconstructed radiographs is presented. Although manual image fusion has been routinely used, the use of fiducial markers has shortened the fusion time. Automated image fusion should be possible for IGRT because the image datasets are derived from basically the same imaging modality, further shortening the fusion time. The advantages and limitations of both hardware-based and software-based image fusion methodologies are discussed
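
    For the automatic case, the mutual information between two images can be estimated from their joint intensity histogram. A minimal sketch (illustrative, not tied to any specific planning system):

        import numpy as np

        def mutual_information(img_a, img_b, bins=64):
            """MI estimated from the joint histogram of two co-sampled images."""
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0                        # avoid log(0)
            return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

    A registration routine would transform one image and keep the pose that maximizes this value.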

  11. Research on multi-source image fusion technology in haze environment

    Science.gov (United States)

    Ma, GuoDong; Piao, Yan; Li, Bing

    2017-11-01

    In a haze environment, the visible image collected by a single sensor can express details of the shape, color and texture of the target very well, but because of the haze its sharpness is low and parts of the target are lost. An infrared image collected by a single sensor, owing to its response to thermal radiation and strong penetration ability, can clearly express the target subject, but it loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, an improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Secondly, an improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of the visible target and highlight occluded infrared targets for target recognition.
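
    The dark channel at the heart of the dehazing step is simple to compute: a per-pixel minimum over the color channels followed by a local minimum filter. A sketch (patch size and the atmospheric-light heuristic follow the standard He et al. formulation, not necessarily the paper's improved variant):

        import numpy as np
        from scipy.ndimage import minimum_filter

        def dark_channel(rgb, patch=15):
            """Per-pixel channel minimum, then a patch-wise minimum filter."""
            return minimum_filter(rgb.min(axis=2), size=patch)

        def estimate_atmospheric_light(rgb, dark, top_frac=0.001):
            """Average color of the brightest 0.1% of dark-channel pixels."""
            n = max(1, int(top_frac * dark.size))
            idx = np.argpartition(dark.ravel(), -n)[-n:]
            return rgb.reshape(-1, 3)[idx].mean(axis=0)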

  13. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    Science.gov (United States)

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage, and they use only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense-stage brain tissues using multi-modality MR images. CNNs are a class of deep models in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense-stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
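
    As a toy illustration of the patch-wise, multi-channel idea, the sketch below builds a small CNN that takes co-registered T1, T2 and FA patches as three input channels and predicts a WM/GM/CSF label for the patch center (PyTorch; all sizes are illustrative, not the paper's architecture):

        import torch
        import torch.nn as nn

        class MultiModalPatchCNN(nn.Module):
            def __init__(self, n_classes=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 4 * 4, n_classes)

            def forward(self, x):          # x: (batch, 3, 25, 25) T1/T2/FA patches
                return self.classifier(self.features(x).flatten(1))

        logits = MultiModalPatchCNN()(torch.randn(8, 3, 25, 25))   # smoke test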

  14. vECTlab-A fully integrated multi-modality Monte Carlo simulation framework for the radiological imaging sciences

    International Nuclear Information System (INIS)

    Peter, Joerg; Semmler, Wolfhard

    2007-01-01

    Alongside, and in part motivated by, recent advances in molecular diagnostics, the development of dual-modality instruments for patient and dedicated small-animal imaging has gained attention from diverse research groups. The desire for such systems is high, not only to link molecular or functional information with anatomical structures, but also to detect multiple molecular events simultaneously at shorter total acquisition times. While PET and SPECT have been integrated successfully with X-ray CT, the advance of optical imaging approaches (OT) and their integration into existing modalities carry high application potential, particularly for imaging small animals. A multi-modality Monte Carlo (MC) simulation approach has now been developed that is able to trace high-energy (keV) as well as optical (eV) photons concurrently within identical phantom representation models. We show that the two ray-tracing approaches, for keV and eV photons, can be integrated into a single simulation framework which enables both photon classes to be propagated through various geometry models representing both phantoms and scanners. The main advantage of such an integrated framework for our specific application is the investigation of novel tomographic multi-modality instrumentation intended for in vivo small-animal imaging through time-resolved MC simulation on identical phantom geometries. Design examples are provided for recently proposed SPECT-OT and PET-OT imaging systems

  15. Detection of relationships among multi-modal brain imaging meta-features via information flow.

    Science.gov (United States)

    Miller, Robyn L; Vergara, Victor M; Calhoun, Vince D

    2018-01-15

    Neuroscientists and clinical researchers are awash in data from an ever-growing number of imaging and other bio-behavioral modalities. This flow of brain imaging data, taken under resting and various task conditions, combines with available cognitive measures, behavioral information, genetic data and other potentially salient biomedical and environmental information to create a rich but diffuse data landscape. The conditions being studied with brain imaging data are often extremely complex, and it is common for researchers to employ more than one imaging, behavioral or biological data modality (e.g., genetics) in their investigations. While the field has advanced significantly in its approach to multimodal data, the vast majority of studies still ignore joint information among two or more features or modalities. We propose an intuitive framework based on conditional probabilities for understanding information exchange between features in what we are calling a feature meta-space; that is, a space consisting of many individual feature spaces. Features can have any dimension and can be drawn from any data source or modality. No a priori assumptions are made about the functional form (e.g., linear, polynomial, exponential) of the captured inter-feature relationships. We demonstrate the framework's ability to identify relationships between disparate features of varying dimensionality by applying it to a large multi-site, multi-modal clinical dataset balanced between schizophrenia patients and controls. In our application it exposes both expected (previously observed) relationships and novel relationships rarely investigated by clinical researchers. To the best of our knowledge there is not presently a comparably efficient way to capture relationships of indeterminate functional form between features of arbitrary dimension and type. We are introducing this method as an initial foray into a space that remains relatively underpopulated. The framework we propose is

  16. A multi-modality concept for radiotherapy planning with imaging techniques

    International Nuclear Information System (INIS)

    Schultze, J.

    1993-01-01

    The multi-modality concept of radiotherapy planning reported here, implemented in a local area network (LAN), can be realised in any hospital with standard equipment, although in some cases by way of auxiliary configurations. Software is currently being developed as a tool for reducing the overall planning workload. The heart of any radiotherapy planning is the therapy simulator, which has to keep abreast of the requirements of modern radiotherapy. The integration of tomography, digitalisation, and electronic data processing has added important modalities to therapy planning, allowing more precise target volume definition and better biophysical planning. This is what is needed in order to achieve well-differentiated radiotherapy for the treatment of the manifold tumor types and to meet the quality standards expected by the supervisory quality assurance regime and the population. At present, the CT data are still transferred indirectly, on storage media, to the radiotherapy planning system. Based on the tomographic slices given by the imaging data, the contours and technical solutions are derived automatically, either for multi-field radiotherapy or moving-field irradiation, depending on the anatomy or the targets to be protected from ionizing radiation. (orig.)

  17. Accuracy and reproducibility of tumor positioning during prolonged and multi-modality animal imaging studies

    International Nuclear Information System (INIS)

    Zhang Mutian; Huang Minming; Le, Carl; Zanzonico, Pat B; Ling, C Clifton; Koutcher, Jason A; Humm, John L; Claus, Filip; Kolbert, Katherine S; Martin, Kyle

    2008-01-01

    Dedicated small-animal imaging devices, e.g. positron emission tomography (PET), computed tomography (CT) and magnetic resonance imaging (MRI) scanners, are being increasingly used for translational molecular imaging studies. The objective of this work was to determine the positional accuracy and precision with which tumors in situ can be reliably and reproducibly imaged on dedicated small-animal imaging equipment. We designed, fabricated and tested a custom rodent cradle with a stereotactic template to facilitate registration among image sets. To quantify tumor motion during our small-animal imaging protocols, 'gold standard' multi-modality point markers were inserted into tumor masses on the hind limbs of rats. Three types of imaging examination were then performed with the animals continuously anesthetized and immobilized: (i) consecutive microPET and MR images of tumor xenografts in which the animals remained in the same scanner for 2 h, (ii) multi-modality imaging studies in which the animals were transported between distant imaging devices and (iii) serial microPET scans in which the animals were repositioned in the same scanner for subsequent images. Our results showed that the animal tumors moved by less than 0.2-0.3 mm over a continuous 2 h microPET or MR imaging session. The process of transporting the animal between instruments introduced additional errors of ∼0.2 mm. In serial animal imaging studies, positioning reproducibility within ∼0.8 mm was obtained.

  18. Advances in multi-sensor data fusion: algorithms and applications.

    Science.gov (United States)

    Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying

    2009-01-01

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are then described. Both the advantages and the limitations of those applications are discussed. Finally, recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.

  19. Joint Multi-Focus Fusion and Bayer Image Restoration

    Institute of Scientific and Technical Information of China (English)

    Ling Guo; Bin Yang; Chao Yang

    2015-01-01

    In this paper, a joint multi-focus image fusion and Bayer pattern image restoration algorithm for the raw images of single-sensor color imaging devices is proposed. Different from traditional fusion schemes, the raw Bayer pattern images are fused before color restoration, so the Bayer image restoration operation is performed only once; the proposed algorithm is thus more efficient than traditional fusion schemes. In detail, a clarity measure is defined for raw Bayer pattern images, and the fusion operator works on superpixels, which provide powerful grouping cues for local image features. The raw images are merged with a refined weight map to get the fused Bayer pattern image, which is then restored by the demosaicing algorithm to obtain the full-resolution color image. Experimental results demonstrate that the proposed algorithm obtains better fused results, with a more natural appearance and fewer artifacts, than the traditional algorithms.

  20. Progressive multi-atlas label fusion by dictionary evolution.

    Science.gov (United States)

    Song, Yantao; Wu, Guorong; Bahrami, Khosro; Sun, Quansen; Shen, Dinggang

    2017-02-01

    Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the (patch) representation coefficients estimated in the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guides that steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to existing labeling methods to improve their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results on the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary.
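
    The single-layer baseline that this work extends can be sketched in a few lines: code the target patch over the atlas patch dictionary, then apply the same coefficients to the atlas labels. A rough illustration (sparse coding via scikit-learn's Lasso; the names and the binary-label vote are simplifying assumptions):

        import numpy as np
        from sklearn.linear_model import Lasso

        def fuse_label(target_patch, atlas_patches, atlas_labels, alpha=0.01):
            """atlas_patches: (n_atlas, patch_dim); atlas_labels: (n_atlas,) in {0, 1}."""
            coder = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
            coder.fit(atlas_patches.T, target_patch)   # columns act as dictionary atoms
            w = coder.coef_
            if w.sum() == 0:
                return 0                               # no atlas support: background
            return int(np.round(w @ atlas_labels / w.sum()))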

  1. Solving the problem of imaging resolution: stochastic multi-scale image fusion

    Science.gov (United States)

    Karsanina, Marina; Mallants, Dirk; Gilyazetdinova, Dina; Gerke, Kiril

    2016-04-01

    Structural features of porous materials define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, gas exchange between the biologically active soil root zone and the atmosphere, etc.) and solute transport. To characterize soil and rock microstructure, X-ray microtomography is extremely useful. However, like any other imaging technique, it has a significant drawback - a trade-off between sample size and resolution. The latter is a significant problem for multi-scale complex structures, especially soils and carbonates. Other imaging techniques, for example SEM/FIB-SEM or X-ray macrotomography, can be helpful in obtaining higher resolution or a wider field of view. The ultimate goal is to create a single dataset containing information from all scales, or to characterize such a multi-scale structure. In this contribution we demonstrate a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images representing macro-, micro- and nanoscale spatial information on porous media structure. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic to allow implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. It can further be used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Potential practical applications of this method are abundant in soil science, hydrology and petroleum engineering, as well as other geosciences. This work was partially supported by RSF grant 14-17-00658 (X-ray microtomography study of shale
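
    One of the correlation functions such reconstructions rely on, the two-point probability function S2, is cheap to estimate for a binary (pore/solid) image via FFT autocorrelation. A minimal sketch assuming periodic boundaries:

        import numpy as np

        def two_point_probability(binary_img):
            """S2 as a function of the lag vector, for a 2D binary image."""
            f = np.fft.fft2(binary_img)
            autocorr = np.fft.ifft2(f * np.conj(f)).real / binary_img.size
            return np.fft.fftshift(autocorr)   # zero lag moved to the array center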

  2. Prediction of the microsurgical window for skull-base tumors by advanced three-dimensional multi-fusion volumetric imaging

    International Nuclear Information System (INIS)

    Oishi, Makoto; Fukuda, Masafumi; Saito, Akihiko; Hiraishi, Tetsuya; Fujii, Yukihiko; Ishida, Go

    2011-01-01

    The surgery of skull base tumors (SBTs) is difficult due to the complex and narrow surgical window, which is restricted by the cranium and important structures. The utility of three-dimensional multi-fusion volumetric imaging (3-D MFVI) for visualizing the predicted window for SBTs was evaluated. Presurgical simulation using 3-D MFVI was performed in 32 patients with SBTs. Imaging data were collected from computed tomography, magnetic resonance imaging, and digital subtraction angiography. Skull data were processed to imitate the actual bone resection and integrated with various structures extracted from the appropriate imaging modalities using image-analysis software. The simulated views were compared with the views obtained during surgery. All craniotomies and bone resections, except opening of the acoustic canal in 2 patients, were performed as simulated. The simulated window allowed observation of the expected microsurgical anatomy, including tumors, vasculature, and cranial nerves, through the predicted operative window. We could not achieve the planned tumor removal in only 3 patients. 3-D MFVI afforded high-quality images of the relevant microsurgical anatomy during the surgery of SBTs. The intraoperative déjà-vu effect of the simulation increased the confidence of the surgeon in the planned surgical procedures. (author)

  3. The assessment of multi-sensor image fusion using wavelet transforms for mapping the Brazilian Savanna

    NARCIS (Netherlands)

    Weimar Acerbi, F.; Clevers, J.G.P.W.; Schaepman, M.E.

    2006-01-01

    Multi-sensor image fusion using the wavelet approach provides a conceptual framework for the improvement of the spatial resolution with minimal distortion of the spectral content of the source image. This paper assesses whether images with a large ratio of spatial resolution can be fused, and

  4. Image fusion in x-ray differential phase-contrast imaging

    Science.gov (United States)

    Haas, W.; Polyanskaya, M.; Bayer, F.; Gödel, K.; Hofmann, H.; Rieger, J.; Ritter, A.; Weber, T.; Wucherer, L.; Durst, J.; Michel, T.; Anton, G.; Hornegger, J.

    2012-02-01

    Phase-contrast imaging is a novel modality in the field of medical X-ray imaging. The pioneering method is grating-based interferometry, which places no special requirements on the X-ray source or object size. Furthermore, it provides three different types of information about an investigated object simultaneously - absorption, differential phase-contrast and dark-field images. Differential phase-contrast and dark-field images represent completely new information which has not yet been investigated and studied in the context of medical imaging. In order to introduce phase-contrast imaging as a new modality into the medical environment, the resulting information about the object has to be correctly interpreted. Since the three output images reflect different properties of the same object, the main challenge is to combine and visualize these data in such a way as to diminish the information explosion and reduce the complexity of interpretation. This paper presents an intuitive image fusion approach for working with grating-based phase-contrast images. It combines information from the three different images and provides a single image. The approach is implemented in a fusion framework intended to support physicians in study and analysis. The framework provides the user with an intuitive graphical user interface for controlling the fusion process. The example given in this work shows the functionality of the proposed method and the great potential of phase-contrast imaging in medical practice.
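
    One intuitive way to place the three outputs into a single image is a color-space mapping. The sketch below assigns dark-field to hue, differential phase to saturation and absorption to value; the channel assignment is an illustrative choice, not the paper's specific scheme:

        import numpy as np
        from matplotlib.colors import hsv_to_rgb

        def fuse_tricontrast(absorption, phase, dark_field):
            """Map three co-registered contrasts to HSV, return an RGB image."""
            norm = lambda x: (x - x.min()) / (np.ptp(x) + 1e-12)
            hsv = np.stack([norm(dark_field), norm(phase), norm(absorption)], axis=-1)
            return hsv_to_rgb(hsv)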

  5. Histopathology in 3D: From three-dimensional reconstruction to multi-stain and multi-modal analysis

    Directory of Open Access Journals (Sweden)

    Derek Magee

    2015-01-01

    Full Text Available Light microscopy applied to the domain of histopathology has traditionally been a two-dimensional imaging modality. Several authors, including the authors of this work, have extended the use of digital microscopy to three dimensions by stacking digital images of serial sections using image-based registration. In this paper, we give an overview of our approach, and of extensions to the approach to register multi-modal data sets such as sets of interleaved histopathology sections with different stains, and sets of histopathology images to radiology volumes with very different appearance. Our approach involves transforming dissimilar images into a multi-channel representation derived from co-occurrence statistics between roughly aligned images.

  6. The elementary fusion modalities of osteoclasts

    DEFF Research Database (Denmark)

    Søe, Kent; Hobolt-Pedersen, Anne Sofie; Delaisse, Jean Marie

    2015-01-01

    The last step of the osteoclast differentiation process is cell fusion. Most efforts to understand the fusion mechanism have focused on the identification of molecules involved in the fusion process. Surprisingly, the basic fusion modalities, which are well known for fusion of other cell types, are not known for the osteoclast. Here we show that osteoclast fusion partners are characterized by differences in mobility, nuclearity, and differentiation level. Our demonstration was based on time-lapse videos of human osteoclast preparations from three donors, in which 656 fusion events were analyzed. Fusions between a mobile and an immobile partner were most frequent (62%), while fusions between two mobile (26%) or two immobile partners (12%) were less frequent. The immobile fusion partner contained more nuclei than the mobile one.

  7. Novelty detection of foreign objects in food using multi-modal X-ray imaging

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur; Emerson, Monica Jane; Clemmensen, Line Katrine Harder

    2016-01-01

    In this paper we demonstrate a method for novelty detection of foreign objects in food products using grating-based multimodal X-ray imaging. With this imaging technique three modalities are available with pixel correspondence, enhancing organic materials such as wood chips, insects and soft plastics that are not detectable by conventional X-ray absorption radiography. We conduct experiments where several food products are imaged with common foreign objects typically found in the food processing industry. To evaluate the benefit of using this multi-contrast X-ray technique over conventional X-ray absorption imaging, a novelty detection scheme based on well-known image- and statistical-analysis techniques is proposed. The results show that the presented method gives superior recognition results and highlights the advantage of grating-based imaging.
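
    A simple statistical novelty detector over the three co-registered contrasts might fit a Gaussian model to pixels of clean products and flag pixels that fall far from it. An illustrative sketch (the feature construction and threshold are assumptions, not the paper's exact scheme):

        import numpy as np
        from sklearn.covariance import EmpiricalCovariance

        def fit_clean_model(clean_pixels):
            """clean_pixels: (n, 3) stacked absorption/phase/dark-field values."""
            return EmpiricalCovariance().fit(clean_pixels)

        def novelty_map(model, image_3ch, threshold=25.0):
            """Flag pixels whose squared Mahalanobis distance exceeds a threshold."""
            d2 = model.mahalanobis(image_3ch.reshape(-1, 3))
            return (d2 > threshold).reshape(image_3ch.shape[:2])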

  8. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain

    Science.gov (United States)

    Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin

    2018-04-01

    Multiresolution-based methods, such as wavelet and contourlet transforms, are widely used for image fusion. This work presents a new image fusion framework that utilizes an area-based standard deviation in the dual-tree contourlet transform domain. Firstly, the pre-registered source images are decomposed with the dual-tree contourlet transform, yielding low-pass and high-pass coefficients. Then, the low-pass bands are fused by a weighted average based on the area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree contourlet transform. The proposed method is compared with wavelet- and contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
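
    The two fusion rules are easy to prototype. The sketch below uses a single-level 2D wavelet transform (PyWavelets) as a stand-in for the dual-tree contourlet transform: the low-pass bands are blended with weights derived from a local (area) standard deviation, and the high-pass bands follow the max-absolute rule:

        import numpy as np
        import pywt
        from scipy.ndimage import generic_filter

        def area_std(band, size=7):
            """Local standard deviation over a sliding window."""
            return generic_filter(band, np.std, size=size)

        def fuse(img_a, img_b, wavelet="db2"):
            cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a, wavelet)
            cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b, wavelet)
            s1, s2 = area_std(cA1), area_std(cA2)
            w = s1 / (s1 + s2 + 1e-12)
            cA = w * cA1 + (1 - w) * cA2                              # weighted low-pass
            highs = tuple(np.where(np.abs(h1) >= np.abs(h2), h1, h2)  # max-absolute
                          for h1, h2 in zip((cH1, cV1, cD1), (cH2, cV2, cD2)))
            return pywt.idwt2((cA, highs), wavelet)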

  9. MINERVA - a multi-modal radiation treatment planning system

    Energy Technology Data Exchange (ETDEWEB)

    Wemple, C.A. E-mail: cew@enel.gov; Wessol, D.E.; Nigg, D.W.; Cogliati, J.J.; Milvich, M.L.; Frederickson, C.; Perkins, M.; Harkin, G.J

    2004-11-01

    Researchers at the Idaho National Engineering and Environmental Laboratory and Montana State University have undertaken the development of MINERVA, a patient-centric, multi-modal radiation treatment planning system. This system can be used for planning and analyzing several radiotherapy modalities, either singly or combined, using common, modality-independent image and geometry construction and dose reporting and guiding. It employs an integrated, lightweight plugin architecture to accommodate multi-modal treatment planning using standard interface components. The MINERVA design also facilitates the future integration of improved planning technologies. The code is being developed for the Java Virtual Machine for interoperability. A full computation path has been established for molecular targeted radiotherapy treatment planning, with the associated transport plugin developed by researchers at the Lawrence Livermore National Laboratory. Development of the neutron transport plugin module is proceeding rapidly, with completion expected later this year. Future development efforts will include deformable registration methods, improved segmentation methods for patient model definition, and three-dimensional visualization of the patient images, geometry, and dose data. Transport and source plugins will be created for additional treatment modalities, including brachytherapy, external beam proton radiotherapy, and the EGSnrc/BEAMnrc codes for external beam photon and electron radiotherapy.

  10. Remote sensing image fusion in the context of Digital Earth

    International Nuclear Information System (INIS)

    Pohl, C

    2014-01-01

    The increase in the number of operational Earth observation satellites is giving remote sensing image fusion a new boost. As a powerful tool for integrating images from different sensors, it enables multi-scale, multi-temporal and multi-source information extraction. Image fusion aims at providing results that cannot be obtained from a single data source alone; instead it enables feature and information mining of higher reliability and availability. The process required to prepare remote sensing images for image fusion comprises most of the steps necessary to feed the database of Digital Earth. The virtual representation of the planet uses data and information that are referenced and corrected to suit interpretation and decision-making. The same prerequisite holds for image fusion, the outcome of which can flow directly into a geographical information system. The assessment and description of the quality of the results remains critical. Depending on the application and the information to be extracted from multi-source images, different approaches are necessary. This paper describes the process of image fusion based on a fusion and classification experiment, explains the necessary quality measures involved, and shows with this example which criteria have to be considered if the results of image fusion are to be used in Digital Earth

  11. Image Fusion-Based Land Cover Change Detection Using Multi-Temporal High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Biao Wang

    2017-08-01

    Full Text Available Change detection is usually treated as a problem of explicitly detecting land cover transitions in satellite images obtained at different times, and helps with emergency response and government management. This study presents an unsupervised change detection method based on the image fusion of multi-temporal images. The main objective of this study is to improve the accuracy of unsupervised change detection from high-resolution multi-temporal images. Our method effectively reduces change detection errors, since spatial displacement and spectral differences between multi-temporal images are evaluated. To this end, a total of four cross-fused images are generated with multi-temporal images, and the iteratively reweighted multivariate alteration detection (IR-MAD method—a measure for the spectral distortion of change information—is applied to the fused images. In this experiment, the land cover change maps were extracted using multi-temporal IKONOS-2, WorldView-3, and GF-1 satellite images. The effectiveness of the proposed method compared with other unsupervised change detection methods is demonstrated through experimentation. The proposed method achieved an overall accuracy of 80.51% and 97.87% for cases 1 and 2, respectively. Moreover, the proposed method performed better when differentiating the water area from the vegetation area compared to the existing change detection methods. Although the water area beneath moderate and sparse vegetation canopy was captured, vegetation cover and paved regions of the water body were the main sources of omission error, and commission errors occurred primarily in pixels of mixed land use and along the water body edge. Nevertheless, the proposed method, in conjunction with high-resolution satellite imagery, offers a robust and flexible approach to land cover change mapping that requires no ancillary data for rapid implementation.
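
    The MAD transform that IR-MAD iterates on can be sketched with off-the-shelf canonical correlation analysis: change is measured as the difference of canonical variates of the two dates (the iterative reweighting that gives IR-MAD its name is omitted here for brevity):

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def mad_variates(pixels_t1, pixels_t2, n_components=3):
            """pixels_t*: (n_pixels, n_bands) spectra from co-registered images."""
            cca = CCA(n_components=n_components).fit(pixels_t1, pixels_t2)
            u, v = cca.transform(pixels_t1, pixels_t2)
            return u - v        # MAD components; large magnitudes suggest change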

  12. A Geometric Dictionary Learning Based Approach for Fluorescence Spectroscopy Image Fusion

    Directory of Open Access Journals (Sweden)

    Zhiqin Zhu

    2017-02-01

    Full Text Available In recent years, sparse representation approaches have been integrated into multi-focus image fusion methods, and the fused images of sparse-representation-based methods show strong performance. Constructing an informative dictionary is a key step for a sparsity-based image fusion method. In order to ensure a sufficient number of useful bases for sparse representation during dictionary construction, image patches from all source images are classified into different groups based on geometric similarities. The key information of each image-patch group is extracted by principal component analysis (PCA) to build the dictionary. With the constructed dictionary, image patches are converted to sparse coefficients by the simultaneous orthogonal matching pursuit (SOMP) algorithm to represent the source multi-focus images. Finally, the sparse coefficients are fused by the Max-L1 fusion rule and inverted to form the fused image. Due to the limitations of the microscope, a fluorescence image cannot be fully in focus. The proposed multi-focus image fusion solution is applied to the fluorescence imaging area to generate all-in-focus images. The comparative experimental results confirm the feasibility and effectiveness of the proposed multi-focus image fusion solution.
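
    The Max-L1 rule itself is a one-liner once the sparse codes exist. A rough sketch using plain OMP over a shared dictionary (a simplification of the SOMP-based pipeline; the dictionary is assumed given):

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def fuse_patch(patch_a, patch_b, dictionary, n_nonzero=8):
            """dictionary: (patch_dim, n_atoms); returns the fused patch."""
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                            fit_intercept=False)
            code_a = omp.fit(dictionary, patch_a).coef_.copy()
            code_b = omp.fit(dictionary, patch_b).coef_
            # Max-L1 rule: keep the code with the larger l1 norm
            fused = code_a if np.abs(code_a).sum() >= np.abs(code_b).sum() else code_b
            return dictionary @ fused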

  13. Quantitative image fusion in infrared radiometry

    Science.gov (United States)

    Romm, Iliya; Cukurel, Beni

    2018-05-01

    Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
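
    In spirit, the 'fuse-then-subtract' combination is a per-pixel weighted least-squares estimate of the photoquantity from the multi-exposure frames. A simplified sketch using an inverse-variance weighted mean (the noise model and saturation handling are stripped-down assumptions):

        import numpy as np

        def fuse_exposures(frames, exposure_times, read_noise=10.0):
            """frames: (n, H, W) raw counts; returns a per-pixel rate estimate."""
            frames = np.asarray(frames, dtype=float)
            t = np.asarray(exposure_times, dtype=float)[:, None, None]
            rate = frames / t                       # per-frame photoquantity estimate
            var = (frames + read_noise**2) / t**2   # propagated shot + read noise
            w = 1.0 / np.maximum(var, 1e-12)
            return (w * rate).sum(axis=0) / w.sum(axis=0)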

  14. Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.

    Science.gov (United States)

    Liu, Manhua; Cheng, Danni; Wang, Kundong; Wang, Yaping

    2018-03-23

    Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance images (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn generic multi-level and multimodal features from multiple imaging modalities for classification, and these features are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in pre-processing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects including 93 AD patients, 204 mild cognitive impairment (MCI, 76 pMCI +128 sMCI) and 100 normal controls (NC) from Alzheimer's Disease Neuroimaging

  15. Multi-modality image reconstruction for dual-head small-animal PET

    International Nuclear Information System (INIS)

    Huang, Chang-Han; Chou, Cheng-Ying

    2015-01-01

    Hybrid positron emission tomography/computed tomography (PET/CT) and positron emission tomography/magnetic resonance imaging (PET/MRI) have become routine practice in clinics, and the applications of multi-modality imaging can also benefit research. Consequently, a dedicated small-animal imaging system like the dual-head small-animal PET (DHAPET), which possesses the advantages of high detection sensitivity and high resolution, can exploit structural information from CT or MRI. It should be noted that the special detector arrangement in DHAPET leads to severe data truncation, thereby degrading the image quality. We proposed to take advantage of anatomical priors and total variation (TV) minimization methods to reconstruct the PET activity distribution from incomplete measurement data. The objective is to minimize a penalized least-squares function consisting of a data fidelity term, a TV norm and a median root prior. In this work, we employed the splitting-based fast iterative shrinkage/thresholding algorithm to separate the smooth and non-smooth functions in the convex optimization problem. Our simulation studies validated that images reconstructed by the proposed method can outperform those obtained with conventional expectation-maximization algorithms or those obtained without the anatomical prior information. Additionally, the convergence rate is also accelerated.
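
    The splitting-based FISTA machinery can be illustrated on a simpler cousin of the paper's objective, minimizing 0.5*||Ax - b||^2 + lambda*||x||_1 over x, with soft-thresholding standing in for the TV/prior proximal steps:

        import numpy as np

        def fista(A, b, lam, n_iter=200):
            """FISTA for l1-penalized least squares (illustrative stand-in)."""
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
            for _ in range(n_iter):
                z = y - A.T @ (A @ y - b) / L                              # gradient step
                x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                y = x_new + (t - 1.0) / t_new * (x_new - x)                # momentum
                x, t = x_new, t_new
            return x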

  16. Development of comprehensive image processing technique for differential diagnosis of liver disease by using multi-modality images. Pixel-based cross-correlation method using a profile

    International Nuclear Information System (INIS)

    Inoue, Akira; Okura, Yasuhiko; Akiyama, Mitoshi; Ishida, Takayuki; Kawashita, Ikuo; Ito, Katsuyoshi; Matsunaga, Naofumi; Sanada, Taizo

    2009-01-01

    Imaging techniques such as high-magnetic-field MRI and multidetector-row CT have improved markedly in recent years, and modern image-reading systems easily produce more than a thousand diagnostic images per patient. We therefore developed a comprehensive cross-correlation processing technique using multi-modality images, in order to reduce the considerable time and effort involved in interpretation (multi-formatted display and/or stack display methods, etc.). In this scheme, a cross-correlation coefficient encodes the criteria an attending radiologist applies for the differential diagnosis of liver cyst, hepatic hemangioma, hepatocellular carcinoma and metastatic liver cancer on magnetic resonance images with various sequences and on CT images with and without contrast enhancement. Using a one-dimensional cross-correlation method, the comprehensive image processing could also be adapted to various artifacts (some depending on the imaging modality, some on the patient) that may be encountered in the clinical setting. This comprehensive image-processing technique could assist radiologists in the differential diagnosis of liver diseases. (author)
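    The profile-based matching reduces to a normalized cross-correlation of intensity profiles taken along the same line in two modality images; a minimal numpy sketch, with synthetic images standing in for the registered modality pair:

```python
import numpy as np

def profile_correlation(img_a, img_b, row):
    """Normalized cross-correlation coefficient of one image row (profile)."""
    pa, pb = img_a[row].astype(float), img_b[row].astype(float)
    pa -= pa.mean(); pb -= pb.mean()
    return float(pa @ pb / (np.linalg.norm(pa) * np.linalg.norm(pb) + 1e-12))

rng = np.random.default_rng(1)
mri = rng.random((128, 128))
ct = 0.7 * mri + 0.3 * rng.random((128, 128))   # correlated second modality
print(profile_correlation(mri, ct, row=64))      # close to 1 for similar profiles
```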

  17. Fusion in computer vision understanding complex visual content

    CERN Document Server

    Ionescu, Bogdan; Piatrik, Tomas

    2014-01-01

    This book presents a thorough overview of fusion in computer vision, from an interdisciplinary and multi-application viewpoint, describing successful approaches, evaluated in the context of international benchmarks that model realistic use cases. Features: examines late fusion approaches for concept recognition in images and videos; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections…

  18. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods.

    Directory of Open Access Journals (Sweden)

    Alexander Toet

    Full Text Available The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7 μm), near-infrared (NIR, 0.7-1.0 μm) and long-wave infrared (LWIR, 8-14 μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people who are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the visual, NIR and LWIR parts of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false color) frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs

  19. Cross-Modality Image Synthesis via Weakly Coupled and Geometry Co-Regularized Joint Dictionary Learning.

    Science.gov (United States)

    Huang, Yawen; Shao, Ling; Frangi, Alejandro F

    2018-03-01

    Multi-modality medical imaging is increasingly used for the comprehensive assessment of complex diseases, in either diagnostic examinations or medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. Additionally, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis, while considering that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To employ both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. We then propose a unified model that integrates this criterion into joint dictionary learning and the observed common feature space, associating cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate superior performance of the proposed model over state-of-the-art methods.

  20. The evaluation of single-view and multi-view fusion 3D echocardiography using image-driven segmentation and tracking.

    Science.gov (United States)

    Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary

    2011-08-01

    Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor quality of the acquired images, which usually contain missing anatomical information, speckle noise, and a limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced, in which multiple conventional single-view RT3DE images are acquired with small probe movements and fused together after alignment. Multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates the image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit the image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Spatio-Temporal Series Remote Sensing Image Prediction Based on Multi-Dictionary Bayesian Fusion

    Directory of Open Access Journals (Sweden)

    Chu He

    2017-11-01

    Full Text Available Contradictions between spatial resolution and temporal coverage emerge in earth-observation remote sensing images due to limitations in technology and cost. Therefore, how to combine remote sensing images with low spatial yet high temporal resolution and those with high spatial yet low temporal resolution, to construct images with both high spatial resolution and high temporal coverage, has become an important problem, called the spatio-temporal fusion problem, in both research and practice. A Multi-Dictionary Bayesian Spatio-Temporal Reflectance Fusion Model (MDBFM) is proposed in this paper. First, multiple dictionaries are trained from regions of different classes. Second, a Bayesian framework is constructed to solve the dictionary selection problem; a pixel-dictionary likelihood function and a dictionary-dictionary prior function are defined under this framework. Third, remote sensing images before and after the middle moment are combined to predict images at the middle moment. Diverse shape and texture information is learned from different landscapes in multi-dictionary learning, helping the dictionaries capture the distinctions between regions. The Bayesian framework makes full use of the prior information while the input image is classified. Experiments with one simulated dataset and two satellite datasets validate that the MDBFM is highly effective in both subjective and objective evaluation indexes. The results of the MDBFM show more precise details and a higher similarity with real images when dealing with both type changes and phenology changes.

  2. Visible and NIR image fusion using weight-map-guided Laplacian ...

    Indian Academy of Sciences (India)

    Ashish V Vanmali

    [Abstract fragment] The method treats visible-NIR image fusion and dehazing from a fusion perspective, instead of the conventional haze imaging model. Keywords: image dehazing; Laplacian–Gaussian pyramid; multi-resolution fusion; visible–NIR image fusion; weight map.

  3. A unified framework for cross-modality multi-atlas segmentation of brain MRI

    DEFF Research Database (Denmark)

    Eugenio Iglesias, Juan; Rory Sabuncu, Mert; Van Leemput, Koen

    2013-01-01

    …on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion, hence the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing…

  4. Investigations of image fusion

    Science.gov (United States)

    Zhang, Zhong

    1999-12-01

    The objective of image fusion is to combine information from multiple images of the same scene into a single image that is more suitable for human visual perception or further image-processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed, in which the identification of important features in each image, such as edges and regions of interest, guides the fusion process. The idea of multiscale grouping is also introduced, and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature that do not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing ones for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed within a coarse-to-fine multi-resolution approach together with feature-based registration to overcome some of the limitations of intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene; such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D
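    The multiscale-decomposition framework can be illustrated with a minimal PyWavelets sketch that averages the approximation band and keeps the larger-magnitude detail coefficients; the wavelet, level count and choice of rules are simple illustrative assumptions, not the thesis' region-based rules:

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", levels=3):
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]                    # average approximation
    for (da, db) in zip(ca[1:], cb[1:]):               # per-level detail tuples
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))   # max-abs selection
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(2)
a, b = rng.random((128, 128)), rng.random((128, 128))  # pre-registered inputs
f = wavelet_fuse(a, b)
```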

  5. Role of the multi-modality image archival and communication system in nuclear medicine

    International Nuclear Information System (INIS)

    Bela Kari; Adam Mester; Erno Mako; Zoltan Gyorfi; Bela Mihalik; Zsolt; Hegyi

    2004-01-01

    Various non-invasive imaging systems produce an increasing amount of diagnostic images in digital format day by day. A direct consequence of this tendency is to place electronic archives and image transfer in the spotlight. Moreover, digital image archives can support other activities such as the simultaneous display of multi-modality images, telediagnostics, on-line consultation, and the construction of standard databases for dedicated organs on a regional and/or countrywide basis (e.g., myocardial scintigraphy, mammography, etc.) in order to obtain a more exact diagnosis as well as to support education and training. Our institute started similar research and development activities a few years ago, resulting in the construction of our PACS systems (MEDISA, Linux Debian, and eRAD ImageMedical™, Linux Red Hat) together with the telecommunication part. The mass storage unit of the PACS is based on hard drives connected in a RAID with 1.2 TByte capacity. The on-line telecommunication system consists of an ISDN Multi-Media System (MMS) and Internet-based independent units. The MMS was dedicated mainly to on-line teleconferencing and consultation using simultaneously transferred morphological and functional images obtained from the central archives in DICOM or any other allowable image format. The MMS was created as part of the requirements of an EU research project, RETRANSPLANT. The central archives (PACS) can be accessed via the DICOM 3.0 protocol over the Internet through well-maintained and secure access rights. Displaying and post-processing of retrieved images on individual workstations are supported by the eRAD ImageMedical™ PracticeBuilder1-2-3 (Windows-based) image manager with its unique supports and services; the 'real engine' of PracticeBuilder is Internet Explorer Ver. 5.0 or newer. The unique feature of PracticeBuilder1-2-3 is extremely fast patient and image access from the archives, even over very long distances (across continents), due to the exceptional image communication

  6. Remote Sensing Image Fusion Based on the Combination Grey Absolute Correlation Degree and IHS Transform

    Directory of Open Access Journals (Sweden)

    Hui LIN

    2014-12-01

    Full Text Available An improved fusion algorithm for multi-source remote sensing images with high spatial resolution and multi-spectral capacity is proposed, based on traditional IHS fusion and grey correlation analysis. First, the grey absolute correlation degree is used to discriminate non-edge pixels and edge pixels in the high-spatial-resolution image, by which the weight of the intensity component is determined for its combination with the high-spatial-resolution image; image fusion is then achieved using the inverse IHS transform. The proposed method is applied to ETM+ multi-spectral images and a panchromatic image, and to Quickbird multi-spectral images and a panchromatic image, respectively. The experiments show that the proposed fusion method efficiently preserves the spectral information of the original multi-spectral images while greatly enhancing spatial resolution. By comparison and analysis, the proposed fusion algorithm outperforms traditional IHS fusion and a fusion method based on grey correlation analysis and the IHS transform.
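    The IHS substitution step can be sketched in a few lines of numpy; here the weight w stands in for the edge-dependent grey-absolute-correlation weight described above, and the simple mean-intensity IHS model is an assumption:

```python
import numpy as np

def ihs_fuse(ms_rgb, pan, w=0.8):
    """ms_rgb: (H, W, 3) upsampled multispectral; pan: (H, W); both in [0, 1]."""
    intensity = ms_rgb.mean(axis=2)                 # simple I of the IHS model
    new_i = w * pan + (1.0 - w) * intensity         # weighted substitution
    # add the intensity change back to each band (equivalent to the inverse
    # IHS transform for this simple intensity definition)
    return np.clip(ms_rgb + (new_i - intensity)[..., None], 0.0, 1.0)

rng = np.random.default_rng(3)
ms = rng.random((64, 64, 3)); pan = rng.random((64, 64))
sharp = ihs_fuse(ms, pan)
```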

  7. Radiomic biomarkers from PET/CT multi-modality fusion images for the prediction of immunotherapy response in advanced non-small cell lung cancer patients

    Science.gov (United States)

    Mu, Wei; Qi, Jin; Lu, Hong; Schabath, Matthew; Balagurunathan, Yoganand; Tunali, Ilke; Gillies, Robert James

    2018-02-01

    Purpose: To investigate the ability of complementary information provided by the fusion of PET/CT images to predict immunotherapy response in non-small cell lung cancer (NSCLC) patients. Materials and methods: We collected 64 patients diagnosed with primary NSCLC treated with anti-PD-1 checkpoint blockade. Using the PET/CT images, fused images were created following multiple methodologies, resulting in up to 7 different images for the tumor region. Quantitative image features were extracted from the primary images (PET/CT) and the fused images: 195 features from the primary images and 1235 from the fusion images. Three clinical characteristics were also analyzed. We then used support vector machine (SVM) classification models to identify discriminant features that predict immunotherapy response at baseline. Results: An SVM built with 87 fusion features and 13 primary PET/CT features achieved an accuracy of 87.5% and an area under the ROC curve (AUROC) of 0.82 on the validation dataset, compared with 78.12% and 0.68 for a model built with 113 original PET/CT features. Conclusion: The fusion features show better ability to predict immunotherapy response than the individual image features.
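    A hedged scikit-learn sketch of the prediction step follows: an SVM trained on fused radiomic features, reporting accuracy and AUROC; the feature matrix and labels are synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(4)
X = rng.standard_normal((64, 100))            # 64 patients, 100 fusion features
y = rng.integers(0, 2, size=64)               # 1 = responder (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]
print(accuracy_score(y_te, clf.predict(X_te)), roc_auc_score(y_te, prob))
```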

  8. Cross-modal face recognition using multi-matcher face scores

    Science.gov (United States)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved by information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed from the three cross-matched face scores of the aforementioned algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.

  9. Integration of Multi-Modal Biomedical Data to Predict Cancer Grade and Patient Survival.

    Science.gov (United States)

    Phan, John H; Hoffman, Ryan; Kothari, Sonal; Wu, Po-Yen; Wang, May D

    2016-02-01

    The Big Data era in biomedical research has resulted in large-cohort data repositories such as The Cancer Genome Atlas (TCGA). These repositories routinely contain hundreds of matched patient samples for genomic, proteomic, imaging, and clinical data modalities, enabling holistic and multi-modal integrative analysis of human disease. Using TCGA renal and ovarian cancer data, we conducted a novel investigation of multi-modal data integration by combining histopathological image and RNA-seq data. We compared the performance of two integrative prediction methods: majority vote and stacked generalization. Results indicate that integration of multiple data modalities improves the prediction of cancer grade and outcome. Specifically, stacked generalization, a method that integrates multiple data modalities to produce a single prediction result, outperforms both single-data-modality prediction and majority vote. Moreover, stacked generalization reveals the contribution of each data modality (and of specific features within each data modality) to the final prediction result, and may provide biological insights to explain prediction performance.
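    A minimal scikit-learn sketch of stacked generalization over two modalities: one base learner per modality and a logistic-regression meta-learner over their predictions; the features, labels and learner choices are illustrative assumptions:

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 120
img_feats = rng.standard_normal((n, 30))     # histopathology-image features
rna_feats = rng.standard_normal((n, 50))     # RNA-seq features
X = np.hstack([img_feats, rna_feats])
y = rng.integers(0, 2, size=n)               # synthetic cancer-grade labels

take_img = FunctionTransformer(lambda Z: Z[:, :30])   # select image columns
take_rna = FunctionTransformer(lambda Z: Z[:, 30:])   # select RNA-seq columns
stack = StackingClassifier(
    estimators=[
        ("img", make_pipeline(take_img, RandomForestClassifier(random_state=0))),
        ("rna", make_pipeline(take_rna, RandomForestClassifier(random_state=0))),
    ],
    final_estimator=LogisticRegression(),    # meta-learner over both modalities
)
print(cross_val_score(stack, X, y, cv=5).mean())
```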

  10. A Probabilistic, Non-parametric Framework for Inter-modality Label Fusion

    DEFF Research Database (Denmark)

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen

    2013-01-01

    Multi-atlas techniques are commonplace in medical image segmentation due to their high performance and ease of implementation. Locally weighting the contributions from the different atlases in the label fusion process can improve the quality of the segmentation. However, how to define these weights…

  11. Clinical use of digital retrospective image fusion of CT, MRI, FDG-PET and SPECT - fields of indications and results

    International Nuclear Information System (INIS)

    Lemke, A.J.; Niehues, S.M.; Amthauer, H.; Felix, R.; Rohlfing, T.; Hosten, N.

    2004-01-01

    Purpose: To evaluate the feasibility and the clinical benefits of retrospective digital image fusion (PET, SPECT, CT and MRI). Materials and methods: In a prospective study, a total of 273 image fusions were performed and evaluated. The underlying image acquisitions (CT, MRI, SPECT and PET) were performed in a way appropriate for the respective clinical question and anatomical region. Image fusion was executed with a software program developed during this study. The results of the image fusion procedure were evaluated in terms of technical feasibility, clinical objective, and therapeutic impact. Results: The most frequent combinations of modalities were CT/PET (n = 156) and MRI/PET (n = 59), followed by MRI/SPECT (n = 28), CT/SPECT (n = 22) and CT/MRI (n = 8). The clinical questions included following regions (more than one region per case possible): neurocranium (n = 42), neck (n = 13), lung and mediastinum (n = 24), abdomen (n = 181), and pelvis (n = 65). In 92.6% of all cases (n = 253), image fusion was technically successful. Image fusion was able to improve sensitivity and specificity of the single modality, or to add important diagnostic information. Image fusion was problematic in cases of different body positions between the two imaging modalities or different positions of mobile organs. In 37.9% of the cases, image fusion added clinically relevant information compared to the single modality. Conclusion: For clinical questions concerning liver, pancreas, rectum, neck, or neurocranium, image fusion is a reliable method suitable for routine clinical application. Organ motion still limits its feasibility and routine use in other areas (e.g., thorax). (orig.)

  12. Two Phase Non-Rigid Multi-Modal Image Registration Using Weber Local Descriptor-Based Similarity Metrics and Normalized Mutual Information

    Directory of Open Access Journals (Sweden)

    Feng Yang

    2013-06-01

    Full Text Available Non-rigid multi-modal image registration plays an important role in medical image processing and analysis. Existing image registration methods based on similarity metrics such as mutual information (MI) and the sum of squared differences (SSD) cannot achieve both high registration accuracy and high registration efficiency. To address this problem, we propose a novel two-phase non-rigid multi-modal image registration method that combines Weber local descriptor (WLD) based similarity metrics with normalized mutual information (NMI), using the diffeomorphic free-form deformation (FFD) model. The first phase aims at recovering the large deformation component using the WLD-based non-local SSD (wldNSSD) or weighted structural similarity (wldWSSIM). Based on the output of the first phase, the second phase recovers accurate transformation parameters for the small deformation using the NMI. Extensive experiments on T1-, T2- and PD-weighted MR images demonstrate that the proposed wldNSSD-NMI or wldWSSIM-NMI method outperforms registration methods based on the NMI, the conditional mutual information (CMI), the SSD on entropy images (ESSD) and the ESSD-NMI, in terms of both registration accuracy and computational efficiency.
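    The core quantity behind the WLD-based metrics is the differential excitation, xi(x_c) = arctan(alpha * sum_i (x_i - x_c) / x_c) over a 3x3 neighbourhood; a small numpy/scipy sketch (alpha and the synthetic input are assumptions):

```python
import numpy as np
from scipy.ndimage import convolve

def differential_excitation(img, alpha=3.0, eps=1e-6):
    img = img.astype(float)
    k = np.array([[1, 1, 1],
                  [1, -8, 1],
                  [1, 1, 1]], dtype=float)   # sum of neighbours minus 8*centre
    diff = convolve(img, k, mode="nearest")
    return np.arctan(alpha * diff / (img + eps))

rng = np.random.default_rng(6)
t1 = rng.random((64, 64))
xi = differential_excitation(t1)             # values in (-pi/2, pi/2)
```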

  13. Image fusion tool: Validation by phantom measurements

    International Nuclear Information System (INIS)

    Zander, A.; Geworski, L.; Richter, M.; Ivancevic, V.; Munz, D.L.; Muehler, M.; Ditt, H.

    2002-01-01

    Aim: Validation of a new image fusion tool with regard to handling, application in a clinical environment, and fusion precision under different acquisition and registration settings. Methods: The image fusion tool investigated allows fusion of imaging modalities such as PET, CT, and MRI. In order to investigate fusion precision, PET and MRI measurements were performed using a cylinder phantom and a body-contour-shaped phantom. The cylinder phantom (diameter and length 20 cm each) contained spheres (10 to 40 mm in diameter) which represented 'cold' or 'hot' lesions in the PET measurements. The body-contour-shaped phantom was equipped with a heart model containing two 'cold' lesions. Measurements were done with and without four external markers placed on the phantoms. The markers were made of plexiglass (2 cm diameter, 1 cm thickness) and contained a Ga-Ge-68 core for the PET and vitamin E for the MRI measurements. Comparison of fusion results with and without markers was done visually and with computer assistance, and this procedure was applied to the different fusion parameters and phantoms. Results: Image fusion of PET and MRI data without external markers yielded a measured error of 0, resulting in a shift at the matrix border of 1.5 mm. Conclusion: The image fusion tool investigated allows a precise fusion of PET and MRI data with a translation error acceptable for clinical use. The error is further minimized by using external markers, especially in the case of missing anatomical orientation. With PET, the registration error depends almost entirely on the low resolution of the data

  14. Multi-Modality Imaging in the Evaluation and Treatment of Mitral Regurgitation.

    Science.gov (United States)

    Bouchard, Marc-André; Côté-Laroche, Claudia; Beaudoin, Jonathan

    2017-10-13

    Mitral regurgitation (MR) is frequent and, when severe, associated with increased mortality and morbidity. It may be caused by intrinsic valvular disease (primary MR) or ventricular deformation (secondary MR). Imaging has a critical role in documenting the severity, mechanism, and impact of MR on heart function, as selected patients with MR may benefit from surgery whereas others will not. In patients planned for a surgical intervention, imaging is also important to select candidates for mitral valve (MV) repair over replacement and to predict surgical success. Although standard transthoracic echocardiography is the first-line modality to evaluate MR, newer imaging modalities like three-dimensional (3D) transesophageal echocardiography, stress echocardiography, cardiac magnetic resonance (CMR), and computed tomography (CT) are emerging and complementary tools for MR assessment. While some of these modalities provide insight into MR severity, others help to determine its mechanism. Understanding the advantages and limitations of each imaging modality is important to appreciate their respective roles in MR assessment and helps to resolve eventual discrepancies between different diagnostic methods. With the increasing use of transcatheter mitral procedures (repair or replacement) for high-surgical-risk patients, multimodality imaging has become even more important for determining eligibility, preinterventional planning, and periprocedural guidance.

  15. Imaging of oxygenation in 3D tissue models with multi-modal phosphorescent probes

    Science.gov (United States)

    Papkovsky, Dmitri B.; Dmitriev, Ruslan I.; Borisov, Sergei

    2015-03-01

    Cell-penetrating phosphorescence-based probes allow real-time, high-resolution imaging of O2 concentration in respiring cells and 3D tissue models. We have developed a panel of such probes, small-molecule and nanoparticle structures, which have different spectral characteristics, cell-penetrating properties and tissue-staining behavior. The probes are compatible with conventional live-cell imaging platforms and can be used in different detection modalities, including ratiometric intensity and PLIM (Phosphorescence Lifetime IMaging) under one- or two-photon excitation. The analytical performance of these probes and the utility of the O2 imaging method have been demonstrated with different types of samples: 2D cell cultures, multi-cellular spheroids from cancer cell lines and primary neurons, excised slices from mouse brain, colon and bladder tissue, and live animals. They are particularly useful for hypoxia research, ex vivo studies of tissue physiology, cell metabolism, cancer, inflammation, and multiplexing with many conventional fluorophores and markers of cellular function.

  16. Computer-based image analysis in radiological diagnostics and image-guided therapy: 3D-Reconstruction, contrast medium dynamics, surface analysis, radiation therapy and multi-modal image fusion

    International Nuclear Information System (INIS)

    Beier, J.

    2001-01-01

    This book deals with substantial subjects of postprocessing and analysis of radiological image data, a particular emphasis was put on pulmonary themes. For a multitude of purposes the developed methods and procedures can directly be transferred to other non-pulmonary applications. The work presented here is structured in 14 chapters, each describing a selected complex of research. The chapter order reflects the sequence of the processing steps starting from artefact reduction, segmentation, visualization, analysis, therapy planning and image fusion up to multimedia archiving. In particular, this includes virtual endoscopy with three different scene viewers (Chap. 6), visualizations of the lung disease bronchiectasis (Chap. 7), surface structure analysis of pulmonary tumors (Chap. 8), quantification of contrast medium dynamics from temporal 2D and 3D image sequences (Chap. 9) as well as multimodality image fusion of arbitrary tomographical data using several visualization techniques (Chap. 12). Thus, the software systems presented cover the majority of image processing applications necessary in radiology and were entirely developed, implemented and validated in the clinical routine of a university medical school. (orig.) [de

  17. SAR Target Recognition Based on Multi-feature Multiple Representation Classifier Fusion

    Directory of Open Access Journals (Sweden)

    Zhang Xinzheng

    2017-10-01

    Full Text Available In this paper, we present a Synthetic Aperture Radar (SAR) image target recognition algorithm based on multi-feature multiple-representation-learning classifier fusion. First, it extracts three features from the SAR images: principal component analysis, wavelet transform, and Two-Dimensional Slice Zernike Moments (2DSZM) features. Second, we harness the sparse representation classifier and the cooperative representation classifier with the above-mentioned features to obtain six predicted labels. Finally, we adopt classifier fusion to obtain the final recognition decision. We investigated three different classifier fusion algorithms in our experiments, and the results demonstrate that using Bayesian decision fusion gives the best recognition performance. The method based on multi-feature multiple-representation-learning classifier fusion integrates the discrimination of multiple features and combines the sparse and cooperative representation classification performance to gain complementary advantages and improve recognition accuracy. The experiments are based on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, and they demonstrate the effectiveness of the proposed approach.
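    The difference between simple majority voting and a weighted (Bayesian-style) decision fusion over the six predicted label sets can be sketched as follows; the per-classifier reliabilities are illustrative assumptions, not values from the paper:

```python
import numpy as np

labels = np.array([            # predictions of 6 classifiers for 4 test samples
    [0, 1, 1, 2],
    [0, 1, 2, 2],
    [0, 0, 1, 2],
    [1, 1, 1, 2],
    [0, 1, 1, 0],
    [0, 1, 1, 2],
])
n_classes = 3
reliability = np.array([0.9, 0.8, 0.7, 0.85, 0.75, 0.9])  # assumed accuracies

def majority_vote(preds):
    return np.array([np.bincount(col, minlength=n_classes).argmax()
                     for col in preds.T])

def weighted_fusion(preds, reliabilities):
    # log-odds-style evidence: more reliable classifiers carry larger weight
    votes = np.zeros((preds.shape[1], n_classes))
    for clf_preds, w in zip(preds, np.log(reliabilities / (1 - reliabilities))):
        votes[np.arange(preds.shape[1]), clf_preds] += w
    return votes.argmax(axis=1)

print(majority_vote(labels), weighted_fusion(labels, reliability))
```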

  18. A Multi-Modality CMOS Sensor Array for Cell-Based Assay and Drug Screening.

    Science.gov (United States)

    Chi, Taiyun; Park, Jong Seok; Butts, Jessica C; Hookway, Tracy A; Su, Amy; Zhu, Chengjie; Styczynski, Mark P; McDevitt, Todd C; Wang, Hua

    2015-12-01

    In this paper, we present a fully integrated multi-modality CMOS cellular sensor array with four sensing modalities to characterize different cell physiological responses, including extracellular voltage recording, cellular impedance mapping, optical detection with shadow imaging and bioluminescence sensing, and thermal monitoring. The sensor array consists of nine parallel pixel groups and nine corresponding signal conditioning blocks. Each pixel group comprises one temperature sensor and 16 tri-modality sensor pixels, while each tri-modality sensor pixel can be independently configured for extracellular voltage recording, cellular impedance measurement (voltage excitation/current sensing), and optical detection. This sensor array supports multi-modality cellular sensing at the pixel level, which enables holistic cell characterization and joint-modality physiological monitoring on the same cellular sample with a pixel resolution of 80 μm × 100 μm. Comprehensive biological experiments with different living cell samples demonstrate the functionality and benefit of the proposed multi-modality sensing in cell-based assay and drug screening.

  19. Tissue identification with micro-magnetic resonance imaging in a caprine spinal fusion model

    NARCIS (Netherlands)

    Uffen, M.; Krijnen, M.; Hoogendoorn, R.; Strijkers, G.; Everts, V.; Wuisman, P.; Smit, T.

    2008-01-01

    Nonunion is a major complication of spinal interbody fusion. Currently, X-ray and computed tomography (CT) are used for evaluating the spinal fusion process. However, both imaging modalities have limitations in judging the early stages of this fusion process, as they only visualize mineralized

  20. LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.

    Science.gov (United States)

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2015-03-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effects, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, when the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper we propose a novel learning-based multi-source integration framework for the segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images for tissue segmentation. Here, the multi-source images initially include only the multi-modality (T1, T2 and FA) images, and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge, where the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach to further improve segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
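    A hedged scikit-learn sketch of the integration idea: per-voxel features from several sources are concatenated and classified by a random forest, with the estimated tissue probability maps appended as extra features in a second pass; the data are synthetic and the iterative refinement is reduced to two passes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n_voxels = 5000
t1, t2, fa = (rng.random((n_voxels, 9)) for _ in range(3))  # 3x3 patch features
y = rng.integers(0, 3, size=n_voxels)                       # GM / WM / CSF

X = np.hstack([t1, t2, fa])
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
prob = rf.predict_proba(X)                 # first-pass tissue probability maps

X2 = np.hstack([X, prob])                  # append estimated maps as features
rf2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X2, y)
refined = rf2.predict(X2)
```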

  1. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2016-01-01

    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation. (paper)

  2. An Integrated Dictionary-Learning Entropy-Based Medical Image Fusion Framework

    Directory of Open Access Journals (Sweden)

    Guanqiu Qi

    2017-10-01

    Full Text Available Image fusion is widely used in different areas and can integrate complementary and relevant information of source images captured by multiple sensors into a unitary synthetic image. Medical image fusion, as an important image fusion application, can extract the details of multiple images from different imaging modalities and combine them into an image that contains complete and non-redundant information for increasing the accuracy of medical diagnosis and assessment. The quality of the fused image directly affects medical diagnosis and assessment. However, existing solutions have some drawbacks in contrast, sharpness, brightness, blur and details. This paper proposes an integrated dictionary-learning and entropy-based medical image-fusion framework that consists of three steps. First, the input image information is decomposed into low-frequency and high-frequency components by using a Gaussian filter. Second, low-frequency components are fused by weighted average algorithm and high-frequency components are fused by the dictionary-learning based algorithm. In the dictionary-learning process of high-frequency components, an entropy-based algorithm is used for informative blocks selection. Third, the fused low-frequency and high-frequency components are combined to obtain the final fusion results. The results and analyses of comparative experiments demonstrate that the proposed medical image fusion framework has better performance than existing solutions.
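    The first and third steps can be sketched compactly with scipy; here a simple max-abs rule stands in for the dictionary-learning fusion of the high-frequency components, and the low-frequency weight and Gaussian sigma are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=2.0):
    low = gaussian_filter(img, sigma)        # step 1: Gaussian-filter split
    return low, img - low

def fuse(img_a, img_b, w=0.5):
    low_a, high_a = decompose(img_a)
    low_b, high_b = decompose(img_b)
    low = w * low_a + (1 - w) * low_b        # step 2a: weighted average of lows
    # step 2b (simplified): max-abs in place of dictionary-learning fusion
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low + high                        # step 3: recombine

rng = np.random.default_rng(8)
ct, mr = rng.random((128, 128)), rng.random((128, 128))
fused = fuse(ct, mr)
```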

  3. A Geometric Dictionary Learning Based Approach for Fluorescence Spectroscopy Image Fusion

    OpenAIRE

    Zhiqin Zhu; Guanqiu Qi; Yi Chai; Penghua Li

    2017-01-01

    In recent years, sparse representation approaches have been integrated into multi-focus image fusion methods. The fused images of sparse-representation-based image fusion methods show great performance. Constructing an informative dictionary is a key step for sparsity-based image fusion method. In order to ensure sufficient number of useful bases for sparse representation in the process of informative dictionary construction, image patches from all source images are classified into different ...

  4. A framework of region-based dynamic image fusion

    Institute of Scientific and Technical Information of China (English)

    WANG Zhong-hua; QIN Zheng; LIU Yu

    2007-01-01

    A new framework of region-based dynamic image fusion is proposed. First, target detection is applied to dynamic images (image sequences) to segment them into target and background regions. Then, different fusion rules are employed in different regions so that target information is preserved as much as possible. In addition, a steerable non-separable wavelet frame transform is used in the multi-resolution analysis, so the system achieves the favorable characteristics of orientation selectivity and shift invariance. Experimental results showed that, compared with other image fusion methods, the proposed method has better target-recognition capability and preserves clear background information.

  5. Research on fusion algorithm of polarization image in tetrolet domain

    Science.gov (United States)

    Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing

    2015-12-01

    Tetrolets are Haar-type wavelets whose supports are tetrominoes, shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. First, the magnitude-of-polarization image and the angle-of-polarization image are decomposed into low-frequency and high-frequency coefficients at multiple scales and in multiple directions using the tetrolet transform. For the low-frequency coefficients, average fusion is used. For the directional high-frequency coefficients, the better coefficients are selected for fusion by a region spectrum entropy algorithm, according to the edge distribution differences in the high-frequency sub-band images. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method detects image features more effectively and that the fused image has a better subjective visual effect.

  6. Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion

    Science.gov (United States)

    Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei

    2018-06-01

    Infrared and visible light image fusion has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion technologies require image registration before fusion because two separate cameras are used, and the performance of current registration techniques still needs improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam-splitter prism, the coaxial light incident through a single lens is projected onto an infrared charge-coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied along with the process of signal acquisition and fusion. A simulation experiment, covering the entire process of the optical system, signal acquisition, and signal fusion, is constructed based on an imaging effect model, and a quality evaluation index is adopted to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.

  7. Automatic classification of early Parkinson's disease with multi-modal MR imaging.

    Directory of Open Access Journals (Sweden)

    Dan Long

    Full Text Available BACKGROUND: In recent years, neuroimaging has been increasingly used as an objective method for the diagnosis of Parkinson's disease (PD). Most previous studies were based on invasive imaging modalities or on a single modality, which is not ideal for diagnosis. In this study, we developed a non-invasive technique intended for use in the diagnosis of early PD by integrating the advantages of various modalities. MATERIALS AND METHODS: Nineteen early PD patients and twenty-seven normal volunteers participated in this study. For each subject, we collected resting-state functional magnetic resonance imaging (rsfMRI) and structural images. From the rsfMRI images, we extracted characteristics at three different levels: ALFF (amplitude of low-frequency fluctuations), ReHo (regional homogeneity) and RFCS (regional functional connectivity strength). From the structural images, we extracted volume characteristics from the gray matter (GM), the white matter (WM) and the cerebrospinal fluid (CSF). A two-sample t-test was used for feature selection, and the remaining features were fused for classification. Finally, a classifier separating early PD patients from normal control subjects was obtained by support vector machine training; its performance was evaluated using the leave-one-out cross-validation method. RESULTS: Using the proposed methods to classify the data set, good results (accuracy = 86.96%, sensitivity = 78.95%, specificity = 92.59%) were obtained. CONCLUSIONS: This method demonstrates promising diagnostic performance by integrating information from a variety of imaging modalities, and it shows potential for improving the clinical diagnosis and treatment of PD.
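    A hedged scikit-learn sketch of the pipeline: two-sample t-test feature selection over concatenated functional and volumetric features, followed by a linear SVM evaluated with leave-one-out cross-validation; the data are synthetic stand-ins (and, as in the description, selection is done once before cross-validation):

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(9)
func = rng.standard_normal((46, 90))     # ALFF/ReHo/RFCS-style features
vol = rng.standard_normal((46, 30))      # GM/WM/CSF volume features
X = np.hstack([func, vol])               # feature fusion by concatenation
y = np.array([1] * 19 + [0] * 27)        # 19 patients, 27 controls

_, p = ttest_ind(X[y == 1], X[y == 0], axis=0)
X_sel = X[:, p < 0.05]                   # keep features differing between groups
if X_sel.shape[1] == 0:                  # guard for the synthetic data
    X_sel = X

acc = cross_val_score(SVC(kernel="linear"), X_sel, y, cv=LeaveOneOut()).mean()
print(acc)
```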

  8. The role of multi modality imaging in selecting patients and guiding lead placement for the delivery of cardiac resynchronization therapy.

    Science.gov (United States)

    Behar, Jonathan M; Claridge, Simon; Jackson, Tom; Sieniewicz, Ben; Porter, Bradley; Webb, Jessica; Rajani, Ronak; Kapetanakis, Stamatis; Carr-White, Gerald; Rinaldi, Christopher A

    2017-02-01

    Cardiac resynchronization therapy (CRT) is an effective pacemaker-delivered treatment for selected patients with heart failure, with the goal of restoring electro-mechanical synchrony. Echocardiographic imaging techniques have so far failed to find a metric of dyssynchrony that predicts CRT response. Current guidelines are thus unchanged in recommending prolonged QRS duration, severely reduced systolic function and refractory heart failure symptoms as criteria for CRT implantation. Evolving strain imaging techniques in 3D echocardiography, cardiac MRI and CT may, however, overcome the limitations of older methods and yield more powerful predictors of CRT response. Areas covered: In this review, we first discuss the use of multi-modality cardiac imaging in the selection of patients for CRT implantation and in predicting the response to CRT. Second, we examine the clinical evidence on avoiding areas of myocardial scar, targeting areas of dyssynchrony and, in doing so, achieving optimal positioning of the left ventricular lead to deliver CRT. Finally, we present the latest clinical studies that integrate clinical and imaging data with X-rays during implantation in order to improve the accuracy of LV lead placement. Expert commentary: Image integration and fusion of datasets with live X-ray angiography to guide procedures in real time is now a reality for some implanting centers. Such hybrid facilities will enable users to interact with images, allowing measurement, annotation and manipulation with instantaneous visualization on the catheter laboratory monitor. Such advances will serve as an invaluable adjunct for implanting physicians to accurately deliver pacemaker leads into the optimal position for CRT.

  9. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    Science.gov (United States)

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging techniques for diagnosing brain vascular diseases and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by the injected contrast material. The proposed fusion scheme contains different fusion methods for high- and low-frequency contents, based on the coefficient characteristics of the wrapping-based second-generation curvelet transform and a novel content selection strategy, defined on the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules for the high-frequency coefficients; for the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common, basic fusion algorithms.

  10. Dual Channel Pulse Coupled Neural Network Algorithm for Fusion of Multimodality Brain Images with Quality Analysis

    Directory of Open Access Journals (Sweden)

    Kavitha SRINIVASAN

    2014-09-01

    Full Text Available Background: In the review of medical imaging techniques, an important fact that emerged is that radiologists and physicians still need high-resolution medical images with complementary information from different modalities to ensure efficient analysis. This requirement is addressed using fusion techniques, with the fused image being used in image-guided surgery, image-guided radiotherapy and non-invasive diagnosis. Aim: This paper focuses on a Dual-Channel Pulse Coupled Neural Network (PCNN) algorithm for the fusion of multimodality brain images, with the fused image further analyzed using subjective (human perception) and objective (statistical) measures for quality analysis. Material and Methods: The modalities used in fusion are CT and MRI with subtypes T1/T2/PD/GAD, PET and SPECT, since the information from each modality is complementary to the others. The objective measures selected for evaluating the fused image were: Information Entropy (IE, image quality), Mutual Information (MI, deviation of the fused image from the source images) and Signal-to-Noise Ratio (SNR, noise level). Eight sets of brain images with different modality pairs (T2 with T1, T2 with CT, PD with T2, PD with GAD, T2 with GAD, T2 with SPECT-Tc, T2 with SPECT-Ti, T2 with PET) were chosen for the experiments, and the proposed technique is compared with existing fusion methods, namely the average method, the contrast pyramid, the Shift-Invariant Discrete Wavelet Transform (SIDWT) with Haar, and the morphological pyramid, using the selected measures to ascertain relative performance. Results: The IE and SNR values of the fused image derived from the dual-channel PCNN are higher than for the other fusion methods, showing that the quality is better with less noise. Conclusion: The fused image resulting from the proposed method retains the contrast, shape and texture of the source images without false information or information loss.
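    The three objective measures can be computed with numpy as follows; the histogram bin counts and the simple residual-based SNR definition are assumptions:

```python
import numpy as np

def entropy(img, bins=256):
    """Information entropy (bits) of the image's intensity histogram."""
    p, _ = np.histogram(img, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b, bins=64):
    """Mutual information (bits) between two images via a joint histogram."""
    pab, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = pab / pab.sum()
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)
    nz = pab > 0
    return float((pab[nz] * np.log2(pab[nz] / np.outer(pa, pb)[nz])).sum())

def snr_db(signal, fused):
    """Simple SNR: source image as signal, fused-minus-source as noise."""
    noise = fused - signal
    return float(10 * np.log10((signal**2).sum() / ((noise**2).sum() + 1e-12)))

rng = np.random.default_rng(10)
src = rng.random((128, 128))
fused = src + 0.05 * rng.standard_normal((128, 128))
print(entropy(fused), mutual_information(src, fused), snr_db(src, fused))
```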

  11. Stability, structure and scale: improvements in multi-modal vessel extraction for SEEG trajectory planning.

    Science.gov (United States)

    Zuluaga, Maria A; Rodionov, Roman; Nowell, Mark; Achhala, Sufyan; Zombori, Gergely; Mendelson, Alex F; Cardoso, M Jorge; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sébastien

    2015-08-01

    Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significant associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories by maximising their distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving the robustness and accuracy of vessel extraction within a SEEG planning system. The method accounts for the scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency of the vesselness responses. The proposed measure allows the combination of multiple image modalities into a single image that is used within the planning system to visualise critical vessels. Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, a statistically significant improvement compared to a semi-automated, single human rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning with reduced patient morbidity.

  12. Research on Methods of Infrared and Color Image Fusion Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Zhao Rentao

    2014-06-01

    Full Text Available There are significant differences in the imaging features of infrared and color images, but their fused images also carry very good complementary information. In this paper, based on the characteristics of infrared and color images, the wavelet transform is first applied to the luminance components of the infrared and color images. At each resolution level, the local regional variance is taken as the activity measure and the regional variance ratio as the matching measure, and the image is enhanced during integration; the fused image is then obtained by the final synthesis module and the multi-resolution inverse transform. The experimental results show that the fused image obtained by the proposed method is better than those of the other methods at keeping the useful information of the original infrared image and the color information of the original color image. In addition, the fused image has stronger adaptability and a better visual effect.

  13. Fusion of Images from Dissimilar Sensor Systems

    National Research Council Canada - National Science Library

    Chow, Khin

    2004-01-01

    Different sensors exploit different regions of the electromagnetic spectrum; therefore a multi-sensor image fusion system can take full advantage of the complementary capabilities of individual sensors in the suit...

  14. A SCHEME FOR TEMPLATE SECURITY AT FEATURE FUSION LEVEL IN MULTIMODAL BIOMETRIC SYSTEM

    Directory of Open Access Journals (Sweden)

    Arvind Selwal

    2016-09-01

    Full Text Available Biometrics is the science of recognizing humans based on their biological, chemical or behavioural traits. These systems are used in many real-life applications, from biometric-based attendance systems to security at very sophisticated levels. A biometric system deals with raw data captured using a sensor and the feature template extracted from the raw image. One of the challenges facing designers of these systems is to secure the template data extracted from the user's biometric modalities and to protect the raw images. To minimize spoof attacks on biometric systems by unauthorised users, one solution is to use multi-biometric systems. A multi-modal biometric system uses fusion techniques to merge feature templates generated from a person's different modalities. In this work, a new scheme is proposed to secure templates at the feature-fusion level. The scheme is based on the union operation of fuzzy relations over the modality templates during the fusion process of a multimodal biometric system. This approach serves the dual purpose of feature fusion and transformation of the templates into a single secured, non-invertible template. The proposed technique is cancelable and was experimentally tested on a bimodal biometric system comprising fingerprint and hand geometry. The developed scheme removes the possibility of an attacker learning the original minutiae positions in the fingerprint and the various hand-geometry measurements. The scheme also improves system performance, reducing the false accept rate and improving the genuine accept rate.

  15. Fusion of infrared and visible images based on BEMD and NSDFB

    Science.gov (United States)

    Zhu, Pan; Huang, Zhanhua; Lei, Hai

    2016-07-01

    This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, making it relatively more suitable for non-linear signal decomposition and fusion. NSDFB can provide directional filtering at each decomposition level to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two relevant fusion rules are used for the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transforms. Experimental results indicate that the proposed fusion algorithm achieves state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, remarkable target information and rich detail information, better suited to human visual characteristics and machine perception.
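
    A minimal sketch of the entropy test that decides which source image is decomposed into a residue, assuming 8-bit grey-level images and 256-bin histograms (the binning is an assumption, not stated in the record):

    ```python
    import numpy as np

    def shannon_entropy(img, bins=256):
        hist, _ = np.histogram(img, bins=bins, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def pick_residue_source(img_a, img_b):
        """Return the image with larger entropy (its residue is extracted)."""
        return img_a if shannon_entropy(img_a) >= shannon_entropy(img_b) else img_b
    ```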

  16. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    Science.gov (United States)

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site using vision. Diverse terrain surfaces may show similar spectral properties due to illumination and noise, which easily causes poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture features are extracted from the training data and stacked as column vectors of a dictionary. Then low-rank matrix recovery is performed on the dictionary using augmented Lagrange multipliers, and a multi-stage terrain classifier is constructed. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
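
    A minimal sketch of building the multi-feature dictionary, assuming per-patch colour moments and Gabor energies as the two feature types; the moment orders, Gabor parameters and unit-norm normalisation here are assumptions:

    ```python
    import numpy as np
    from scipy.stats import skew
    from skimage.filters import gabor

    def color_moments(patch_rgb):
        """First three colour moments per channel (mean, std, skewness)."""
        flat = patch_rgb.reshape(-1, 3).astype(float)
        return np.concatenate([flat.mean(0), flat.std(0), skew(flat, axis=0)])

    def gabor_energy(patch_gray, freqs=(0.1, 0.3), thetas=(0.0, np.pi / 2)):
        feats = []
        for f in freqs:
            for t in thetas:
                real, imag = gabor(patch_gray, frequency=f, theta=t)
                feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
        return np.array(feats)

    def build_dictionary(patches_rgb, patches_gray):
        cols = [np.concatenate([color_moments(c), gabor_energy(g)])
                for c, g in zip(patches_rgb, patches_gray)]
        D = np.stack(cols, axis=1)            # one training patch per column
        return D / np.linalg.norm(D, axis=0)  # unit-norm dictionary atoms
    ```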

  17. A hierarchical structure approach to MultiSensor Information Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Maren, A.J. (Tennessee Univ., Tullahoma, TN (United States). Space Inst.); Pap, R.M.; Harston, C.T. (Accurate Automation Corp., Chattanooga, TN (United States))

    1989-01-01

    A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  19. Image Fusion of CT and MR with Sparse Representation in NSST Domain

    Directory of Open Access Journals (Sweden)

    Chenhui Qiu

    2017-01-01

    Full Text Available Multimodal image fusion techniques can integrate the information from different medical images to get an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. Then the high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation (SR)-based approach. A dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation.
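
    A minimal sketch of the absolute-maximum rule for the high-frequency components, assuming the NSST sub-bands arrive as same-shaped coefficient arrays from each modality:

    ```python
    import numpy as np

    def fuse_abs_max(hf_ct, hf_mr):
        """Keep, per coefficient, the sub-band value with larger magnitude."""
        return np.where(np.abs(hf_ct) >= np.abs(hf_mr), hf_ct, hf_mr)
    ```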

  20. The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    Directory of Open Access Journals (Sweden)

    Wojtek James eGoscinski

    2014-03-01

    Full Text Available The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) is a national imaging and visualisation facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research.

  1. Multi-target molecular imaging and its progress in research and application

    International Nuclear Information System (INIS)

    Tang Ganghua

    2011-01-01

    Multi-target molecular imaging (MMI) is an important field of research in molecular imaging. It includes multi-tracer multi-target molecular imaging (MTMI), fusion-molecule multi-target imaging (FMMI), coupling-molecule multi-target imaging (CMMI), and multi-target multifunctional molecular imaging (MMMI). In this paper, imaging modes of MMI are reviewed, and potential applications of positron emission tomography MMI in the near future are discussed. (author)

  2. Performance study of a fan beam collimator designed for a multi-modality small animal imaging device

    International Nuclear Information System (INIS)

    Sabbir Ahmed, ASM; Kramer, Gary H.; Semmler, Wolfrad; Peter, Jorg

    2011-01-01

    This paper describes the design methodology and performance evaluation of a fan beam collimator. The collimator was designed for use with a multi-modality small animal imaging device, and its performance was studied for a 3D geometry. Analytical expressions were formulated to calculate the collimator parameters. A Monte Carlo model was developed to analyze scattering and image noise for a 3D object. The results showed that the performance of the fan beam collimator was strongly dependent on the source distribution and position. The fan beam collimator showed increased counting efficiency in comparison to a parallel hole collimator. Inside an attenuating medium, the increased attenuation effect outweighed the fan beam's gain in counting efficiency.

  3. Development of positron sensor for multi-modal endoscopy

    Energy Technology Data Exchange (ETDEWEB)

    Shimazoe, Kenji, E-mail: shimazoe@it-club.jp [Department of Bioengineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Takahashi, Hiroyuki [Department of Nuclear Engineering and Management, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Fujita, Kaoru [Japan Atomic Energy Agency, 4-29 Tokaimura, 319-1184 Ibaraki (Japan); Mori, Hiroshi; Momose, Toshimitsu [Department of Bioengineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2011-08-21

    Endoscopy is an important inspection tool for detecting cancers in the human body, but some cancers are hard to detect with an optical device alone. Double inspection with optical and radio images is preferable for high-accuracy diagnosis, and real-time radio imaging is also promising for real-time surgery with an endoscope. We have simulated, designed and fabricated a Si-based positron imaging probe for more accurate cancer detection in multi-modality endoscope systems. The fabricated Si-based detector, 2 mm in diameter and 1 mm thick, was tested with gamma and positron sources, and also tested for detecting cancers in a tumor-bearing mouse. Direct positron imaging can have an advantage over gamma imaging in its high sensitivity and resolution.

  4. Multi-Modal Detection and Mapping of Static and Dynamic Obstacles in Agriculture for Process Evaluation

    Directory of Open Access Journals (Sweden)

    Timo Korthals

    2018-03-01

    Full Text Available Today, agricultural vehicles are available that can automatically perform tasks such as weed detection and spraying, mowing, and sowing while being steered automatically. However, for such systems to be fully autonomous and self-driven, it is not only their specific agricultural tasks that must be automated. An accurate and robust perception system that automatically detects and avoids all obstacles must also be realized to ensure the safety of humans, animals, and other surroundings. In this paper, we present a multi-modal obstacle and environment detection and recognition approach for process evaluation in agricultural fields. The proposed pipeline detects and maps static and dynamic obstacles globally, while providing process-relevant information along the traversed trajectory. Detection algorithms are introduced for a variety of sensor technologies, including range sensors (lidar and radar) and cameras (stereo and thermal). Detection information is mapped globally into semantic occupancy grid maps and fused across all sensors with late fusion, resulting in accurate traversability assessment and semantic mapping of process-relevant categories (e.g., crop, ground, and obstacles); a sketch of this late-fusion step is given below. Finally, a decoding step uses a Hidden Markov model to extract relevant process-specific parameters along the trajectory of the vehicle, thus informing a potential control system of unexpected structures in the planned path. The method is evaluated on a public dataset for multi-modal obstacle detection in agricultural fields. Results show that a combination of multiple sensor modalities increases detection performance and that different fusion strategies must be applied between algorithms detecting similar and dissimilar classes.
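
    A minimal sketch of late fusion across sensor-specific occupancy grids, assuming each detector outputs per-cell occupancy probabilities strictly between 0 and 1; summing log-odds is a standard way to combine independent evidence, though the paper's semantic fusion is richer:

    ```python
    import numpy as np

    def fuse_occupancy(grids, prior=0.5):
        """grids: list of HxW arrays of occupancy probabilities in (0, 1)."""
        l_prior = np.log(prior / (1.0 - prior))
        total = sum(np.log(g / (1.0 - g)) - l_prior for g in grids) + l_prior
        return 1.0 / (1.0 + np.exp(-total))   # back to probabilities
    ```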

  5. FWFusion: Fuzzy Whale Fusion model for MRI multimodal image ...

    Indian Academy of Sciences (India)

    Hanmant Venketrao Patil

    2018-03-14

    Mar 14, 2018 ... consider multi-modality medical images other than PET and MRI images ... principal component averaging based on DWT for fusing CT-MRI and MRI ... sub-band LH of the fused image, the distance measure is given based on the ...

  6. Multi-modal locomotion: from animal to application

    International Nuclear Information System (INIS)

    Lock, R J; Burgess, S C; Vaidyanathan, R

    2014-01-01

    The majority of robotic vehicles that can be found today are bound to operations within a single medium (i.e. land, air or water). This is very rarely the case when considering locomotive capabilities in natural systems. Utility for small robots often reflects exactly the same problem domain as for small animals, hence providing numerous avenues for biological inspiration. This paper begins to investigate the various modes of locomotion adopted by different genus groups in multiple media as an initial attempt to determine the compromises in ability accepted by the animals when achieving multi-modal locomotion. A review of current biologically inspired multi-modal robots is also presented. The primary aim of this research is to lay the foundation for a generation of vehicles capable of multi-modal locomotion, allowing ambulatory abilities in more than one medium and surpassing current capabilities. By identifying and understanding when natural systems use specific locomotion mechanisms, when they opt for disparate mechanisms for each mode of locomotion rather than a synergized singular mechanism, and how this affects their capability in each medium, similar combinations can be used as inspiration for future multi-modal biologically inspired robotic platforms. (topical review)

  7. Assessment of rigid multi-modality image registration consistency using the multiple sub-volume registration (MSR) method

    International Nuclear Information System (INIS)

    Ceylan, C; Heide, U A van der; Bol, G H; Lagendijk, J J W; Kotte, A N T J

    2005-01-01

    Registration of different imaging modalities such as CT, MRI, functional MRI (fMRI), positron (PET) and single photon (SPECT) emission tomography is used in many clinical applications. Determining the quality of any automatic registration procedure has been challenging because no gold standard is available to evaluate the registration. In this note we present a method, called the 'multiple sub-volume registration' (MSR) method, for assessing the consistency of a rigid registration. This is done by registering sub-images of one data set onto the other data set, performing a crude non-rigid registration. By analysing the deviations (local deformations) of the sub-volume registrations from the full registration we obtain a measure of the consistency of the rigid registration. Registration of 15 data sets, including CT, MR and PET images of the brain, head and neck, cervix, prostate and lung, was performed using a rigid body registration with normalized mutual information as the similarity measure. The resulting registrations were classified as good or bad by visual inspection, and were also classified using our MSR method. The results of our MSR method agree with the classification obtained from visual inspection for all cases (p < 0.02 based on ANOVA of the good and bad groups). The proposed method is independent of the registration algorithm and similarity measure. It can be used for multi-modality image data sets and different anatomic sites of the patient. (note)

  8. Diagnostic role of (99)Tc(m)-MDP SPECT/CT combined SPECT/MRI Multi modality imaging for early and atypical bone metastases.

    Science.gov (United States)

    Chen, Xiao-Liang; Li, Qian; Cao, Lin; Jiang, Shi-Xi

    2014-01-01

    In most of these patients, bone metastases appear before they can be seen on bone imaging. (99)Tc(m)-MDP ((99)Tc(m)-labelled methylene diphosphonate) bone imaging can detect bone metastases with high sensitivity but lower specificity. The aim of this study is to explore the diagnostic value of (99)Tc(m)-MDP SPECT/CT combined with SPECT/MRI multi-modality imaging for early and atypical bone metastases. 15 to 30 mCi of (99)Tc(m)-MDP was intravenously injected into 34 patients with malignancies and suspected early bone metastases. SPECT, CT and SPECT/CT images were subsequently acquired and analyzed. For the patients diagnosed with early atypical bone metastases by SPECT/CT, the SPECT/CT and MRI data were combined into an integrated SPECT/MRI image, which was analyzed and compared with the pathological results of the patients. The study covered 34 doubtful early metastatic foci, comprising 34 SPECT-positive foci: 17 foci showed no specific changes on CT, 11 foci were identified as bone metastases by SPECT/CT, and 23, 8 and 14 foci remained doubtful bone metastases, with 2 foci lacking a clear image. In total, the SPECT/CT combined with SPECT/MRI method diagnosed 30 bone metastatic foci and 4 doubtful metastatic foci. In conclusion, (99)Tc(m)-MDP SPECT/CT combined with SPECT/MRI multi-modality imaging shows a high diagnostic value for early bone metastases and enhances diagnostic accuracy.

  9. Multi-modal image registration: matching MRI with histology

    NARCIS (Netherlands)

    Alić, L.; Haeck, J.C.; Klein, S.; Bol, K.; Tiel, S.T. van; Wielopolski, P.A.; Bijster, M.; Niessen, W.J.; Bernsen, M.; Veenland, J.F.; Jong, M. de

    2010-01-01

    Spatial correspondence between histology and multi-sequence MRI can provide information about the capabilities of non-invasive imaging to characterize cancerous tissue. However, shrinkage and deformation occurring during the excision of the tumor and the histological processing complicate the co

  10. Multi-modal RGB–Depth–Thermal Human Body Segmentation

    DEFF Research Database (Denmark)

    Palmero, Cristina; Clapés, Albert; Bahnsen, Chris

    2016-01-01

    This work addresses the problem of human body segmentation from multi-modal visual cues as a first stage of automatic human behavior analysis. We propose a novel RGB-Depth-Thermal dataset along with a multi-modal segmentation baseline. The several modalities are registered using a calibration ... to other state-of-the-art methods, obtaining an overlap above 75% on the novel dataset when compared to the manually annotated ground-truth of human segmentations.

  11. Non-local statistical label fusion for multi-atlas segmentation.

    Science.gov (United States)

    Asman, Andrew J; Landman, Bennett A

    2013-02-01

    Multi-atlas segmentation provides a general purpose, fully-automated approach for transferring spatial information from an existing dataset ("atlases") to a previously unseen context ("target") through image registration. The method to resolve voxelwise label conflicts between the registered atlases ("label fusion") has a substantial impact on segmentation quality. Ideally, statistical fusion algorithms (e.g., STAPLE) would result in accurate segmentations as they provide a framework to elegantly integrate models of rater performance. The accuracy of statistical fusion hinges upon accurately modeling the underlying process of how raters err. Despite success on human raters, current approaches inaccurately model multi-atlas behavior as they fail to seamlessly incorporate exogenous intensity information into the estimation process. As a result, locally weighted voting algorithms represent the de facto standard fusion approach in clinical applications. Moreover, regardless of the approach, fusion algorithms are generally dependent upon large atlas sets and highly accurate registration as they implicitly assume that the registered atlases form a collectively unbiased representation of the target. Herein, we propose a novel statistical fusion algorithm, Non-Local STAPLE (NLS). NLS reformulates the STAPLE framework from a non-local means perspective in order to learn what label an atlas would have observed, given perfect correspondence. Through this reformulation, NLS (1) seamlessly integrates intensity into the estimation process, (2) provides a theoretically consistent model of multi-atlas observation error, and (3) largely diminishes the need for large atlas sets and very high-quality registrations. We assess the sensitivity and optimality of the approach and demonstrate significant improvement in two empirical multi-atlas experiments. Copyright © 2012 Elsevier B.V. All rights reserved.
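
    A minimal sketch of the locally weighted voting baseline the abstract refers to, assuming registered atlas intensity and label volumes on the target grid; the Gaussian intensity weight and integer label encoding are assumptions:

    ```python
    import numpy as np

    def locally_weighted_vote(target, atlas_imgs, atlas_labels, beta=1.0):
        """Weight each atlas voxel-wise by intensity similarity to the target."""
        n_labels = int(max(lab.max() for lab in atlas_labels)) + 1
        votes = np.zeros(target.shape + (n_labels,))
        for img, lab in zip(atlas_imgs, atlas_labels):
            w = np.exp(-beta * (img - target) ** 2)  # local similarity weight
            for l in range(n_labels):
                votes[..., l] += w * (lab == l)
        return np.argmax(votes, axis=-1)
    ```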

  12. Medical images fusion for application in treatment planning systems in radiotherapy

    International Nuclear Information System (INIS)

    Ros, Renato Assenci

    2006-01-01

    Software for medical image fusion was developed for use in the CAT3D radiotherapy and MNPS radiosurgery treatment planning systems. A mutual information maximization methodology was used to register images of different modalities by measuring the statistical dependence between voxel pairs. Alignment by reference points provides an initial approximation for the non-linear optimization process, which uses the downhill simplex method to estimate the joint histogram. The coordinate transformation function uses trilinear interpolation and searches for the global maximum in a 6-dimensional space, with 3 degrees of freedom for translation and 3 degrees of freedom for rotation, under a rigid body model. This method was evaluated with CT, MR and PET images from the Vanderbilt University database to verify its accuracy by comparing the transformation coordinates of each image fusion with gold-standard values. The median alignment error was 1.6 mm for CT-MR fusion and 3.5 mm for PET-MR fusion, with the gold-standard accuracy estimated as 0.4 mm for CT-MR fusion and 1.7 mm for PET-MR fusion. The maximum error values were 5.3 mm for CT-MR fusion and 7.4 mm for PET-MR fusion, and 99.1% of alignment errors were at the subvoxel level. The mean computing time was 24 s. The software was successfully completed and implemented in 59 routine radiotherapy services, of which 42 are in Brazil and 17 in Latin America. The method has no limitations regarding different image resolutions, pixel sizes or slice thicknesses. In addition, the alignment may be accomplished on axial, coronal or sagittal images. (author)
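
    A minimal sketch of the mutual information measure that drives this registration, computed from the joint grey-level histogram of overlapping voxel pairs (the 64-bin quantisation is an assumption); a simplex optimiser such as scipy's Nelder-Mead could then maximise this over the six rigid-body parameters:

    ```python
    import numpy as np

    def mutual_information(vox_a, vox_b, bins=64):
        joint, _, _ = np.histogram2d(vox_a.ravel(), vox_b.ravel(), bins=bins)
        p_ab = joint / joint.sum()
        p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of modality A
        p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of modality B
        nz = p_ab > 0
        return np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz]))
    ```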

  13. [Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain].

    Science.gov (United States)

    Fiedler, E; Platsch, G; Schwarz, A; Schmiedehausen, K; Tomandl, B; Huk, W; Rupprecht, Th; Rahn, N; Kuwert, T

    2003-10-01

    Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. We therefore evaluated the performance of, and time needed for, fusing MRI and SPECT images using semiautomated dedicated software. PATIENTS, MATERIAL AND METHOD: In 32 patients regional cerebral blood flow was measured using (99m)Tc ethylcystein dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata scanner. Twelve of the MRI data sets were acquired using a 3D T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software, and the fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, that for manual realignment after automated but insufficient fusion. The mean time for the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). The fusion of 3D MRI data sets took significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in 80% manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for clinical routine use.

  14. Integration of vibro-acoustography imaging modality with the traditional mammography.

    Science.gov (United States)

    Hosseini, H Gholam; Alizad, A; Fatemi, M

    2007-01-01

    Vibro-acoustography (VA) is a new imaging modality that has been applied to both medical and industrial imaging. Integrating the unique diagnostic information of VA with other medical imaging modalities is one of our research interests. In this work, we establish correspondence between VA images and traditional X-ray mammograms by adopting a flexible control-point selection technique for image registration. A modified second-order polynomial, which simply leads to a scale/rotation/translation-invariant registration, was used. The registration results were used to spatially transform the breast VA images to map onto the X-ray mammography with a registration error of less than 1.65 mm. The fused image is defined as a linear integration of the VA and X-ray images. Moreover, a color-based fusion technique was employed to integrate the images for better visualization of structural information.
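
    A minimal sketch of the linear integration step, assuming the VA image has already been registered onto the mammogram grid; the weight alpha is a free parameter, not a value from the paper:

    ```python
    import numpy as np

    def linear_fuse(mammo, va_registered, alpha=0.5):
        """Pixel-wise convex combination of two co-registered images."""
        return alpha * mammo + (1.0 - alpha) * va_registered
    ```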

  15. Three dimensional image alignment, registration and fusion

    International Nuclear Information System (INIS)

    Treves, S.T.; Mitchell, K.D.; Habboush, I.H.

    1998-01-01

    Combined assessment of three-dimensional anatomical and functional images (SPECT, PET, MRI, CT) is useful for determining the nature and extent of lesions in many parts of the body. Physicians principally rely on their spatial sense to mentally re-orient and overlap images obtained with different imaging modalities. Objective methods that enable easy and intuitive image registration can help the physician arrive at more optimal diagnoses and better treatment decisions. This review describes a simple, intuitive and robust image registration approach developed in our laboratory. It differs from most other registration techniques in that it allows the user to incorporate all of the available information within the images in the registration process. This method takes full advantage of the ability of knowledgeable operators to achieve image registration and fusion using an intuitive, interactive, visual approach. It can register images accurately and quickly without the use of elaborate mathematical modeling or optimization techniques. The method provides the operator with tools to manipulate images in three dimensions, including visual feedback techniques to assess the accuracy of registration (grids, overlays, masks, and fusion of images in different colors). Its application is not limited to brain imaging and can be applied to images from any region of the body. The overall effect is a registration algorithm that is easy to implement and can achieve accuracy on the order of one pixel

  16. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    Science.gov (United States)

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional-order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods are presented, taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. APPLICATION OF SENSOR FUSION TO IMPROVE UAV IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Jabari

    2017-08-01

    Full Text Available Image classification is one of the most important tasks of remote sensing projects, including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.
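
    A minimal sketch of one common Pan/MS fusion scheme (Brovey-style intensity substitution), included to illustrate the general idea rather than the exact fusion algorithm used in this study; the band averaging and epsilon guard are assumptions:

    ```python
    import numpy as np

    def brovey_pansharpen(ms_upsampled, pan, eps=1e-6):
        """ms_upsampled: HxWxB multispectral bands resampled to the Pan grid."""
        intensity = ms_upsampled.mean(axis=2, keepdims=True)
        gain = pan[..., None] / (intensity + eps)
        return ms_upsampled * gain   # each band rescaled by the Pan detail
    ```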

  19. Prospective, longitudinal, multi-modal functional imaging for radical chemo-IMRT treatment of locally advanced head and neck cancer: the INSIGHT study

    International Nuclear Information System (INIS)

    Welsh, Liam; Panek, Rafal; McQuaid, Dualta; Dunlop, Alex; Schmidt, Maria; Riddell, Angela; Koh, Dow-Mu; Doran, Simon; Murray, Iain; Du, Yong; Chua, Sue; Hansen, Vibeke; Wong, Kee H.; Dean, Jamie; Gulliford, Sarah; Bhide, Shreerang; Leach, Martin O.; Nutting, Christopher; Harrington, Kevin; Newbold, Kate

    2015-01-01

    Radical chemo-radiotherapy (CRT) is an effective organ-sparing treatment option for patients with locally advanced head and neck cancer (LAHNC). Despite advances in treatment for LAHNC, a significant minority of these patients continue to fail to achieve complete response with standard CRT. By constructing a multi-modality functional imaging (FI) predictive biomarker for CRT outcome for patients with LAHNC we hope to be able to reliably identify those patients at high risk of failing standard CRT. Such a biomarker would in future enable CRT to be tailored to the specific biological characteristics of each patients’ tumour, potentially leading to improved treatment outcomes. The INSIGHT study is a single-centre, prospective, longitudinal multi-modality imaging study using functional MRI and FDG-PET/CT for patients with LAHNC squamous cell carcinomas receiving radical CRT. Two cohorts of patients are being recruited: one treated with, and another treated without, induction chemotherapy. All patients receive radical intensity modulated radiotherapy with concurrent chemotherapy. Patients undergo functional imaging before, during and 3 months after completion of radiotherapy, as well as at the time of relapse, should that occur within the first two years after treatment. Serum samples are collected from patients at the same time points as the FI scans for analysis of a panel of serum markers of tumour hypoxia. The primary aim of the INSIGHT study is to acquire a prospective multi-parametric longitudinal data set comprising functional MRI, FDG PET/CT, and serum biomarker data from patients with LAHNC undergoing primary radical CRT. This data set will be used to construct a predictive imaging biomarker for outcome after CRT for LAHNC. This predictive imaging biomarker will be used in future studies of functional imaging based treatment stratification for patients with LAHNC. Additional objectives are: defining the reproducibility of FI parameters; determining robust

  20. Prussian blue nanocubes: multi-functional nanoparticles for multimodal imaging and image-guided therapy (Conference Presentation)

    Science.gov (United States)

    Cook, Jason R.; Dumani, Diego S.; Kubelick, Kelsey P.; Luci, Jeffrey; Emelianov, Stanislav Y.

    2017-03-01

    Imaging modalities utilize contrast agents to improve morphological visualization and to assess functional and molecular/cellular information. Here we present a new type of nanometer scale multi-functional particle that can be used for multi-modal imaging and therapeutic applications. Specifically, we synthesized monodisperse 20 nm Prussian Blue Nanocubes (PBNCs) with desired optical absorption in the near-infrared region and superparamagnetic properties. PBNCs showed excellent contrast in photoacoustic (700 nm wavelength) and MR (3T) imaging. Furthermore, photostability was assessed by exposing the PBNCs to nearly 1,000 laser pulses (5 ns pulse width) with up to 30 mJ/cm2 laser fluences. The PBNCs exhibited insignificant changes in photoacoustic signal, demonstrating enhanced robustness compared to the commonly used gold nanorods (substantial photodegradation with fluences greater than 5 mJ/cm2). Furthermore, the PBNCs exhibited superparamagnetism with a magnetic saturation of 105 emu/g, a 5x improvement over superparamagnetic iron-oxide (SPIO) nanoparticles. PBNCs exhibited enhanced T2 contrast measured using 3T clinical MRI. Because of the excellent optical absorption and magnetism, PBNCs have potential uses in other imaging modalities including optical tomography, microscopy, magneto-motive OCT/ultrasound, etc. In addition to multi-modal imaging, the PBNCs are multi-functional and, for example, can be used to enhance magnetic delivery and as therapeutic agents. Our initial studies show that stem cells can be labeled with PBNCs to perform image-guided magnetic delivery. Overall, PBNCs can act as imaging/therapeutic agents in diverse applications including cancer, cardiovascular disease, ophthalmology, and tissue engineering. Furthermore, PBNCs are based on FDA approved Prussian Blue thus potentially easing clinical translation of PBNCs.

  1. A digital 3D atlas of the marmoset brain based on multi-modal MRI.

    Science.gov (United States)

    Liu, Cirong; Ye, Frank Q; Yen, Cecil Chern-Chyi; Newman, John D; Glen, Daniel; Leopold, David A; Silva, Afonso C

    2018-04-01

    The common marmoset (Callithrix jacchus) is a New-World monkey of growing interest in neuroscience. Magnetic resonance imaging (MRI) is an essential tool to unveil the anatomical and functional organization of the marmoset brain. To facilitate identification of regions of interest, it is desirable to register MR images to an atlas of the brain. However, currently available atlases of the marmoset brain are mainly based on 2D histological data, which are difficult to apply to 3D imaging techniques. Here, we constructed a 3D digital atlas based on high-resolution ex-vivo MRI images, including magnetization transfer ratio (a T1-like contrast), T2w images, and multi-shell diffusion MRI. Based on the multi-modal MRI images, we manually delineated 54 cortical areas and 16 subcortical regions on one hemisphere of the brain (the core version). The 54 cortical areas were merged into 13 larger cortical regions according to their locations to yield a coarse version of the atlas, and also parcellated into 106 sub-regions using a connectivity-based parcellation method to produce a refined atlas. Finally, we compared the new atlas set with existing histology atlases and demonstrated its applications in connectome studies, and in resting state and stimulus-based fMRI. The atlas set has been integrated into the widely-distributed neuroimaging data analysis software AFNI and SUMA, providing a readily usable multi-modal template space with multi-level anatomical labels (including labels from the Paxinos atlas) that can facilitate various neuroimaging studies of marmosets. Published by Elsevier Inc.

  2. A sea-land segmentation algorithm based on multi-feature fusion for a large-field remote sensing image

    Science.gov (United States)

    Li, Jing; Xie, Weixin; Pei, Jihong

    2018-03-01

    Sea-land segmentation is one of the key technologies of sea target detection in remote sensing images. The existing algorithms suffer from low accuracy, low universality and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images, with island removal. Firstly, the coastline data are extracted and all of the land area is labeled using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background near the coastline. Based on this multi-Gaussian sea background model, the sea pixels and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain the accurate sea-land segmentation. Subjective visual comparison of the experimental results shows that the proposed method has high segmentation accuracy, wide applicability and strong anti-disturbance ability.
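
    A minimal sketch of testing border pixels against a Gaussian model of the sea background, assuming the per-pixel 3D feature vectors (local entropy, local texture, local gradient mean) are already computed; a single Gaussian with a Mahalanobis threshold is used here as a simplification of the paper's multi-Gaussian model:

    ```python
    import numpy as np

    def fit_sea_model(sea_features):
        """sea_features: Nx3 array sampled from known sea pixels."""
        mu = sea_features.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(sea_features, rowvar=False))
        return mu, cov_inv

    def is_sea(feature, mu, cov_inv, threshold=9.0):
        """Small Mahalanobis distance => consistent with the sea background."""
        d = feature - mu
        return d @ cov_inv @ d <= threshold
    ```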

  3. Superparamagnetic iron oxide nanoparticles function as a long-term, multi-modal imaging label for non-invasive tracking of implanted progenitor cells.

    Directory of Open Access Journals (Sweden)

    Christina A Pacak

    Full Text Available The purpose of this study was to determine the ability of superparamagnetic iron oxide (SPIO) nanoparticles to function as a long-term tracking label for multi-modal imaging of implanted engineered tissues containing muscle-derived progenitor cells using magnetic resonance imaging (MRI) and X-ray micro-computed tomography (μCT). SPIO-labeled primary myoblasts were embedded in fibrin sealant and imaged to obtain intensity data by MRI or radio-opacity information by μCT. Each imaging modality displayed a detection gradient that matched increasing SPIO concentrations. Labeled cells were then incorporated in fibrin sealant, injected into the atrioventricular groove of rat hearts, and imaged in vivo and ex vivo for up to 1 year. Transplanted cells were identified in intact animals and isolated hearts using both imaging modalities. MRI was better able to detect minuscule amounts of SPIO nanoparticles, while μCT more precisely identified the location of heavily-labeled cells. Histological analyses confirmed that iron oxide particles were confined to viable, skeletal muscle-derived cells in the implant at the expected location based on MRI and μCT. These analyses showed no evidence of phagocytosis of labeled cells by macrophages or release of nanoparticles from transplanted cells. In conclusion, we established that SPIO nanoparticles function as a sensitive and specific long-term label for MRI and μCT, respectively. Our findings will enable investigators interested in regenerative therapies to non-invasively and serially acquire complementary, high-resolution images of transplanted cells for one year using a single label.

  4. Ultrasound-guided image fusion with computed tomography and magnetic resonance imaging. Clinical utility for imaging and interventional diagnostics of hepatic lesions

    International Nuclear Information System (INIS)

    Clevert, D.A.; Helck, A.; Paprottka, P.M.; Trumm, C.; Reiser, M.F.; Zengel, P.

    2012-01-01

    Abdominal ultrasound is often the first-line imaging modality for assessing focal liver lesions. Due to various new ultrasound techniques, such as image fusion, global positioning system (GPS) tracking and needle tracking guided biopsy, abdominal ultrasound now has great potential regarding detection, characterization and treatment of focal liver lesions. Furthermore, these new techniques will help to improve the clinical management of patients before and during interventional procedures. This article presents the principle and clinical impact of recently developed techniques in the field of ultrasound, e.g. image fusion, GPS tracking and needle tracking guided biopsy and discusses the results based on a feasibility study on 20 patients with focal hepatic lesions. (orig.) [de

  5. Identification, diagnostic assistance and planning methods that use multi-modality imaging for prostate cancer focal therapies

    International Nuclear Information System (INIS)

    Makni, Nasr

    2010-01-01

    Prostate cancer is one of the leading causes of death from cancer among men. In the last decade, new diagnosis procedures and treatment options have been developed and made possible thanks to recent progress in prostate imaging modalities. The newest challenges in this field are to detect the smallest tumors and to treat locally so as to minimise treatment morbidity. In this thesis, we introduce a set of automatic image processing methods for the guidance and assistance of diagnosis and treatment in laser-based prostate cancer focal therapies. In the first part of this work, segmentation and computer-aided detection algorithms were developed for the enhancement of image-based diagnosis of prostate cancer. First, we propose a novel approach that combines a Markov Random Field framework with an Active Shape Model, in order to extract three-dimensional outlines of the gland from Magnetic Resonance Imaging (MRI) data. Second, the prostate's MRI volume is segmented into peripheral and central zones: we introduce a method that explores features of multispectral MRI, and is based on belief functions and the modelling of a priori knowledge as an additional source of information. Finally, computer-aided detection of tumors in the prostate's peripheral zone is investigated by experimenting with novel texture features based on fractal geometry. These parameters, extracted from morphological MRI, were tested using both supervised and unsupervised classification methods. The results of these different approaches were studied and compared. The second part of this work addresses the guidance of laser-based focal ablation of prostate tumors. A novel non-rigid registration method is introduced for the fusion of pre-operative MRI and planning data with per-operative ultrasound imaging. We test and evaluate our algorithms using simulated data and physical phantoms, which enable comparison to ground truth. Patients' data, combined with expert interpretation, are also used in the

  6. A Hybrid FPGA/Coarse Parallel Processing Architecture for Multi-modal Visual Feature Descriptors

    DEFF Research Database (Denmark)

    Jensen, Lars Baunegaard With; Kjær-Nielsen, Anders; Alonso, Javier Díaz

    2008-01-01

    This paper describes the hybrid architecture developed for speeding up the processing of so-called multi-modal visual primitives which are sparse image descriptors extracted along contours. In the system, the first stages of visual processing are implemented on FPGAs due to their highly parallel...

  7. A novel technique to incorporate structural prior information into multi-modal tomographic reconstruction

    International Nuclear Information System (INIS)

    Kazantsev, Daniil; Dobson, Katherine J; Withers, Philip J; Lee, Peter D; Ourselin, Sébastien; Arridge, Simon R; Hutton, Brian F; Kaestner, Anders P; Lionheart, William R B

    2014-01-01

    There has been a rapid expansion of multi-modal imaging techniques in tomography. In biomedical imaging, patients are now regularly imaged using both single photon emission computed tomography (SPECT) and x-ray computed tomography (CT), or using both positron emission tomography and magnetic resonance imaging (MRI). In non-destructive testing of materials, both neutron CT (NCT) and x-ray CT are widely applied to investigate the inner structure of a material or track the dynamics of physical processes. The potential benefits of combining modalities have led to increased interest in iterative reconstruction algorithms that can utilize the data from more than one imaging mode simultaneously. We present a new regularization term in iterative reconstruction that enables information from one imaging modality to be used as a structural prior to improve the resolution of the second modality. The regularization term is based on a modified anisotropic tensor diffusion filter that has shape-adapted smoothing properties. By considering the underlying orientations of normal and tangential vector fields for two co-registered images, the diffusion flux is rotated and scaled adaptively to image features. The images can have different greyscale values and different spatial resolutions. The proposed approach is particularly good at isolating oriented features in images, which is important for medical and materials science applications. By enhancing the edges it enables both easy identification and volume fraction measurements, aiding the segmentation algorithms used for quantification. The approach is tested on a standard denoising and deblurring image recovery problem, and then applied to 2D and 3D reconstruction problems, thereby highlighting the capabilities of the algorithm. Using synthetic data from SPECT co-registered with MRI, and real NCT data co-registered with x-ray CT, we show how the method can be used across a range of imaging modalities. (paper)
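
    A minimal sketch of deriving smoothing weights from a co-registered reference image, in the spirit of the structural prior: diffusion is damped across the reference's edges and left close to one in flat regions. This scalar weight is a strong simplification of the paper's shape-adapted anisotropic tensor, and the contrast parameter kappa is an assumption:

    ```python
    import numpy as np

    def structure_weights(reference, kappa=0.1):
        """Edge-stopping weights from the gradient of the reference image."""
        gy, gx = np.gradient(reference.astype(float))
        mag = np.hypot(gx, gy)
        return 1.0 / (1.0 + (mag / kappa) ** 2)  # small across edges, ~1 elsewhere
    ```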

  8. Multi-criteria appraisal of multi-modal urban public transport systems

    NARCIS (Netherlands)

    Keyvan Ekbatani, M.; Cats, O.

    2015-01-01

    This study proposes a multi-criteria decision making (MCDM) modelling framework for the appraisal of multi-modal urban public transportation services. MCDM is commonly used to obtain choice alternatives that satisfy a range of performance indicators. The framework embraces both compensatory and

  9. Multi-modality imaging review of congenital abnormalities of kidney and upper urinary tract.

    Science.gov (United States)

    Ramanathan, Subramaniyan; Kumar, Devendra; Khanna, Maneesh; Al Heidous, Mahmoud; Sheikh, Adnan; Virmani, Vivek; Palaniappan, Yegu

    2016-02-28

    Congenital abnormalities of the kidney and urinary tract (CAKUT) include a wide range of abnormalities, ranging from asymptomatic ectopic kidneys to life-threatening (bilateral) renal agenesis. Many of them are detected antenatally or in the immediate postnatal period, with a significant proportion identified in the adult population with varying degrees of severity. CAKUT can be classified on an embryological basis into abnormalities of renal parenchymal development, aberrant embryonic migration and abnormalities of the collecting system. Renal parenchymal abnormalities include multicystic dysplastic kidneys, renal hypoplasia, abnormalities of number (agenesis or supernumerary kidneys) and shape, and cystic renal diseases. Aberrant embryonic migration encompasses abnormal location and fusion anomalies. Collecting system abnormalities include duplex kidneys and pelvi-ureteric junction obstruction. Ultrasonography (US) is typically the first imaging performed, as it is easily available, non-invasive and radiation-free, and is used both antenatally and postnatally. Computed tomography (CT) and magnetic resonance imaging (MRI) are useful to confirm ultrasound-detected abnormalities, to detect complex malformations, to demonstrate collecting system and vascular anatomy and, more importantly, for the early detection of complications like renal calculi, infection and malignancies. As CAKUT are one of the leading causes of end-stage renal disease, it is important for radiologists to be familiar with the varying imaging appearances of CAKUT on US, CT and MRI, thereby helping in prompt diagnosis and optimal management.

  10. MINERVA: A multi-modality plug-in-based radiation therapy treatment planning system

    International Nuclear Information System (INIS)

    Wemple, C. A.; Wessol, D. E.; Nigg, D. W.; Cogliati, J. J.; Milvich, M.; Fredrickson, C. M.; Perkins, M.; Harkin, G. J.; Hartmann-Siantar, C. L.; Lehmann, J.; Flickinger, T.; Pletcher, D.; Yuan, A.; DeNardo, G. L.

    2005-01-01

    Researchers at the INEEL, MSU, LLNL and UCD have undertaken the development of MINERVA, a patient-centric, multi-modal radiation treatment planning system, which can be used for planning and analysing several radiotherapy modalities, either singly or combined, using common treatment planning tools. It employs an integrated, lightweight plug-in architecture to accommodate multi-modal treatment planning using standard interface components. The design also facilitates the future integration of improved planning technologies. The code is being developed in the Java programming language for interoperability. The MINERVA design includes image processing, model definition and data analysis modules, with a central module to coordinate communication and data transfer. Dose calculation is performed by source and transport plug-in modules, which communicate either directly through the database or through MINERVA's openly published, extensible markup language (XML)-based application programmer's interface (API). All internal data are managed by a database management system and can be exported to other applications or new installations through the API data formats. A full computation path has been established for molecular-targeted radiotherapy treatment planning, with additional treatment modalities presently under development. (authors)

  11. Visualization of graphical information fusion results

    Science.gov (United States)

    Blasch, Erik; Levchuk, Georgiy; Staskevich, Gennady; Burke, Dustin; Aved, Alex

    2014-06-01

    Graphical fusion methods are popular for describing distributed sensor applications such as target tracking and pattern recognition. Additional graphical methods include network analysis for social, communications, and sensor management. With the growing availability of various data modalities, graphical fusion methods are widely used to combine data from multiple sensors and modalities. To better understand the usefulness of graph fusion approaches, we address visualization to increase user comprehension of multi-modal data. The paper demonstrates a use case that combines graphs from text reports and target tracks to associate events and activities of interest, with visualization for testing Measures of Performance (MOP) and Measures of Effectiveness (MOE). The analysis includes the presentation of the separate graphs and then graph-fusion visualization linking network graphs for tracking and classification.

  12. Improved medical image modality classification using a combination of visual and textual features.

    Science.gov (United States)

    Dimitrovski, Ivica; Kocev, Dragi; Kitanovski, Ivan; Loskovska, Suzana; Džeroski, Sašo

    2015-01-01

    In this paper, we present the approach that we applied to the medical modality classification tasks at the ImageCLEF evaluation forum. More specifically, we used the modality classification databases from the ImageCLEF competitions in 2011, 2012 and 2013, described by four visual and one textual type of features, and combinations thereof. We used local binary patterns, color and edge directivity descriptors, fuzzy color and texture histogram and the scale-invariant feature transform (and its variant opponentSIFT) as visual features, and the standard bag-of-words textual representation coupled with TF-IDF weighting. The results from the extensive experimental evaluation identify the SIFT and opponentSIFT features as the best performing features for modality classification. Next, the low-level fusion of the visual features improves the predictive performance of the classifiers. This is because the different features are able to capture different aspects of an image, their combination offering a more complete representation of the visual content in an image. Moreover, adding textual features further increases the predictive performance. Finally, the results obtained with our approach are the best results reported on these databases so far. Copyright © 2014 Elsevier Ltd. All rights reserved.
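
    A minimal sketch of the low-level fusion described here: the per-image visual descriptors (and, optionally, the TF-IDF text vector) are simply concatenated into one representation before classification; the per-descriptor L2 normalisation is an assumption:

    ```python
    import numpy as np

    def low_level_fuse(feature_vectors):
        """feature_vectors: list of 1-D descriptors for the same image."""
        normed = [v / (np.linalg.norm(v) + 1e-12) for v in feature_vectors]
        return np.concatenate(normed)  # e.g. SIFT BoW + CEDD + FCTH + TF-IDF
    ```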

  13. Fusion of Geophysical Images in the Study of Archaeological Sites

    Science.gov (United States)

    Karamitrou, A. A.; Petrou, M.; Tsokas, G. N.

    2011-12-01

    This paper presents results from different fusion techniques applied to geophysical images from different modalities in order to combine them into one image with higher information content than either of the original images independently. The resulting image is useful for the detection and mapping of buried archaeological relics. The examined archaeological area is situated at the Kampana site (NE Greece) near the ancient theater of Maronia city. Archaeological excavations revealed an ancient theater, an aristocratic house and the temple of the ancient Greek god Dionysus. Numerous ceramic objects found in the broader area indicated the probability of a buried urban structure. In order to accurately locate and map the latter, geophysical measurements were performed with the magnetic method (vertical gradient of the magnetic field) and the electrical method (apparent resistivity). We applied a semi-stochastic, pixel-based registration method to fine-register the geophysical images, correcting the local spatial offsets produced by the use of hand-held devices. We then applied three different fusion approaches to the registered images. Image fusion is a relatively new technique that not only allows the integration of different information sources, but also takes advantage of the spatial and spectral resolution as well as the orientation characteristics of each image. We used three fusion techniques: fusion with mean values, fusion with wavelets by enhancing selected frequency bands, and fusion with curvelets giving emphasis to specific bands and angles (according to the expected orientation of the relics). In all three cases the fused images gave significantly better results than either of the original geophysical images separately. The comparison of the three approaches showed that fusion with curvelets, giving emphasis to the features' orientation, seems to give the best fused image.
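    A generic form of the wavelet fusion used above can be sketched compactly with PyWavelets: average the coarse approximation bands and keep the detail coefficient of larger magnitude. This is a plain illustration, not the authors' band- and orientation-weighted scheme, and random arrays stand in for the magnetic and resistivity maps.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    """Generic wavelet fusion: average the coarse approximation,
    keep the detail coefficient with the larger magnitude."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]              # approximation band
    for da, db in zip(ca[1:], cb[1:]):           # (H, V, D) details per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

# Toy magnetic and resistivity maps of the same area (random stand-ins).
rng = np.random.default_rng(1)
mag = rng.normal(size=(128, 128))
res = rng.normal(size=(128, 128))
print(wavelet_fuse(mag, res).shape)
```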

  14. Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.

    Science.gov (United States)

    Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn

    2016-04-20

    Nonlinear decomposition schemes constitute an alternative to classical approaches for the problem of data fusion. In this paper, we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists of the fusion of a low-resolution multispectral image and a high-resolution panchromatic image. We design a complete pansharpening scheme based on the use of morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, WorldView-2, Ikonos and GeoEye-1 satellites are employed for the performance assessment, attesting to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
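    As a rough illustration of morphological detail injection (a simplification, not the half-gradient pipeline of the paper), the sketch below extracts a panchromatic detail layer as the residual of a morphological low-pass and adds it to each upsampled multispectral band; the arrays are random stand-ins.

```python
import numpy as np
from scipy import ndimage

def morph_pansharpen(ms_up, pan, size=5, gain=1.0):
    """Toy detail injection: the PAN detail layer is the residual between
    PAN and a morphological low-pass (average of opening and closing);
    the detail is added to each upsampled multispectral band."""
    low = 0.5 * (ndimage.grey_opening(pan, size=size)
                 + ndimage.grey_closing(pan, size=size))
    detail = pan - low
    return np.stack([band + gain * detail for band in ms_up])

rng = np.random.default_rng(2)
pan = rng.random((256, 256))
ms_up = rng.random((4, 256, 256))   # 4 MS bands already resampled to PAN grid
print(morph_pansharpen(ms_up, pan).shape)
```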

  15. Integrated visualization of multi-angle bioluminescence imaging and micro CT

    NARCIS (Netherlands)

    Kok, P.; Dijkstra, J.; Botha, C.P.; Post, F.H.; Kaijzel, E.; Que, I.; Löwik, C.W.G.M.; Reiber, J.H.C.; Lelieveldt, B.P.F.

    2007-01-01

    This paper explores new methods to visualize and fuse multi-2D bioluminescence imaging (BLI) data with structural imaging modalities such as micro CT and MR. A geometric, back-projection-based 3D reconstruction for superficial lesions from multi-2D BLI data is presented, enabling a coarse estimate

  16. Clinical significance of creative 3D-image fusion across multimodalities [PET + CT + MR] based on characteristic coregistration

    International Nuclear Information System (INIS)

    Peng, Matthew Jian-qiao; Ju Xiangyang; Khambay, Balvinder S.; Ayoub, Ashraf F.; Chen, Chin-Tu; Bai Bo

    2012-01-01

    Objective: To investigate a 2-dimensional (2D) registration approach based on characteristic localization to achieve 3-dimensional (3D) fusion of PET, CT and MR images one by one. Method: A cube-oriented “9-point and 3-plane” co-registration scheme was verified to be geometrically practical. After acquiring DICOM data of PET/CT/MR (directed by the radiotracer 18F-FDG, etc.), human internal feature points were sorted through 3D reconstruction and virtual dissection and combined with preselected external feature points for the matching process. Following the procedure of feature extraction and image mapping, “picking points to form planes” and “picking planes for segmentation” were executed. Eventually, image fusion was implemented on the real-time Mimics workstation based on the auto-fusion techniques known as “information exchange” and “signal overlay”. Result: The 2D and 3D images fused across the modality combinations [CT + MR], [PET + MR], [PET + CT] and [PET + CT + MR] were tested on data from patients suffering from tumors. Complementary 2D/3D images simultaneously presenting metabolic activities and anatomic structures were created with detection rates of 70%, 56%, 54% (or 98%) and 44%, with no statistically significant difference between them. Conclusion: Given that no hybrid detector integrating the triple modalities [PET + CT + MR] is currently available internationally, this sort of multiple-modality fusion is doubtlessly an essential complement to the existing function of single-modality imaging.

  17. Microwave tomography of extremities: 2. Functional fused imaging of flow reduction and simulated compartment syndrome

    International Nuclear Information System (INIS)

    Semenov, Serguei; Nair, Bindu; Kellam, James; Williams, Thomas; Quinn, Michael; Sizov, Yuri; Nazarov, Alexei; Pavlovsky, Andrey

    2011-01-01

    Medical imaging has recently expanded into the dual- or multi-modality fusion of anatomical and functional imaging modalities. This significantly improves diagnostic power while simultaneously increasing the cost of already expensive medical devices or investigations and decreasing their mobility. We introduce a novel imaging concept of four-dimensional (4D) microwave tomographic (MWT) functional imaging: three-dimensional (3D) in the spatial domain plus one-dimensional (1D) in the time, functional dynamic domain. Instead of a fusion of images obtained by different imaging modalities, 4D MWT fuses absolute anatomical images with dynamic, differential images of the same imaging technology. The approach was successfully validated in animal experiments with short-term arterial flow reduction and a simulated compartment syndrome in an initial simplified experimental setting using a dedicated MWT system. The presented fused images are not perfect, as MWT is a novel imaging modality at an early stage of development, and ways of reading reconstructed MWT images need to be further studied and understood. However, the reconstructed fused images present clear evidence that microwave tomography is an emerging imaging modality with great potential for functional imaging.

  18. MIDA - Optimizing control room performance through multi-modal design

    International Nuclear Information System (INIS)

    Ronan, A. M.

    2006-01-01

    Multi-modal interfaces can support the integration of humans with information processing systems and computational devices to maximize the unique qualities that comprise a complex system. In a dynamic environment, such as a nuclear power plant control room, multi-modal interfaces, if designed correctly, can provide complementary interaction between the human operator and the system which can improve overall performance while reducing human error. Developing such interfaces can be difficult for a designer without explicit knowledge of Human Factors Engineering principles. The Multi-modal Interface Design Advisor (MIDA) was developed as a support tool for system designers and developers. It provides design recommendations based upon a combination of Human Factors principles, a knowledge base of historical research, and current interface technologies. MIDA's primary objective is to optimize available multi-modal technologies within a human computer interface in order to balance operator workload with efficient operator performance. The purpose of this paper is to demonstrate MIDA and illustrate its value as a design evaluation tool within the nuclear power industry. (authors)

  19. Clinical use of digital retrospective image fusion of CT, MRI, FDG-PET and SPECT - fields of indications and results; Klinischer Einsatz der digitalen retrospektiven Bildfusion von CT, MRT, FDG-PET und SPECT - Anwendungsgebiete und Ergebnisse

    Energy Technology Data Exchange (ETDEWEB)

    Lemke, A.J.; Niehues, S.M.; Amthauer, H.; Felix, R. [Campus Virchow-Klinikum, Klinik fuer Strahlenheilkunde, Charite, Universitaetsmedizin Berlin (Germany); Rohlfing, T. [Dept. of Neurosurgery, Stanford Univ. (United States); Hosten, N. [Inst. fuer Diagnostische Radiologie, Ernst-Moritz-Arndt-Univ. Greifswald (Germany)

    2004-12-01

    Purpose: To evaluate the feasibility and the clinical benefits of retrospective digital image fusion (PET, SPECT, CT and MRI). Materials and methods: In a prospective study, a total of 273 image fusions were performed and evaluated. The underlying image acquisitions (CT, MRI, SPECT and PET) were performed in a way appropriate for the respective clinical question and anatomical region. Image fusion was executed with a software program developed during this study. The results of the image fusion procedure were evaluated in terms of technical feasibility, clinical objective, and therapeutic impact. Results: The most frequent combinations of modalities were CT/PET (n = 156) and MRI/PET (n = 59), followed by MRI/SPECT (n = 28), CT/SPECT (n = 22) and CT/MRI (n = 8). The clinical questions concerned the following regions (more than one region per case possible): neurocranium (n = 42), neck (n = 13), lung and mediastinum (n = 24), abdomen (n = 181), and pelvis (n = 65). In 92.6% of all cases (n = 253), image fusion was technically successful. Image fusion was able to improve the sensitivity and specificity of the single modality, or to add important diagnostic information. Image fusion was problematic in cases of different body positions between the two imaging modalities or different positions of mobile organs. In 37.9% of the cases, image fusion added clinically relevant information compared to the single modality. Conclusion: For clinical questions concerning the liver, pancreas, rectum, neck, or neurocranium, image fusion is a reliable method suitable for routine clinical application. Organ motion still limits its feasibility and routine use in other areas (e.g., thorax). (orig.)

  20. Research on segmentation based on multi-atlas in brain MR image

    Science.gov (United States)

    Qian, Yuejing

    2018-03-01

    Accurate segmentation of specific tissues in brain MR images can be effectively achieved with multi-atlas-based segmentation, and the accuracy mainly depends on the image registration accuracy and the fusion scheme. This paper proposes an automatic multi-atlas-based segmentation method for brain MR images. First, to improve the registration accuracy in the area to be segmented, we employ a target-oriented image registration method for refinement. Then, in the label fusion step, we propose a new algorithm that detects abnormal sparse patches and simultaneously discards the corresponding abnormal sparse coefficients; the method is based on the remaining sparse coefficients combined with a multipoint label estimator strategy. The performance of the proposed method was compared with those of the nonlocal patch-based label fusion method (Nonlocal-PBM), the sparse patch-based label fusion method (Sparse-PBM) and the majority voting method (MV). Based on our experimental results, the proposed method is effective for brain MR image segmentation compared with the MV, Nonlocal-PBM, and Sparse-PBM methods.
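    The MV baseline referenced above is simple enough to state exactly: each propagated atlas label map casts one vote per voxel, and the most frequent label wins. A minimal sketch, with random arrays standing in for registered atlas labels:

```python
import numpy as np

def majority_vote(label_maps):
    """Per-voxel majority voting over a stack of propagated atlas labels."""
    stack = np.stack(label_maps)          # shape (n_atlases, *volume)
    n_labels = int(stack.max()) + 1
    # Count votes per label, then take the argmax along the label axis.
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

rng = np.random.default_rng(3)
atlases = [rng.integers(0, 4, size=(32, 32, 32)) for _ in range(7)]
print(majority_vote(atlases).shape)
```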

  1. Residual Shuffling Convolutional Neural Networks for Deep Semantic Image Segmentation Using Multi-Modal Data

    Science.gov (United States)

    Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.

    2018-05-01

    In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data on both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.

  2. A novel APD-based detector module for multi-modality PET/SPECT/CT scanners

    International Nuclear Information System (INIS)

    Saoudi, A.; Lecomte, R.

    1999-01-01

    The lack of anatomical information in SPECT and PET images is one of the major factors limiting the ability to localize and accurately quantify radionuclide uptake in small regions of interest. This problem could be resolved by using multi-modality scanners capable of acquiring anatomical and functional images simultaneously. The feasibility of a novel detector suitable for measuring high-energy annihilation radiation in PET, medium-energy γ-rays in SPECT and low-energy X-rays in transmission CT is demonstrated, and its performance is evaluated for potential use in multi-modality PET/SPECT/CT imaging. The proposed detector consists of a thin CsI(Tl) scintillator sitting on top of a deep GSO/LSO pair read out by an avalanche photodiode. The GSO/LSO pair provides depth-of-interaction information for 511 keV detection in PET, while the thin CsI(Tl), which is essentially transparent to annihilation radiation, is used for detecting lower-energy X- and γ-rays. The detector performance is compared to that of an LSO/YSO phoswich. Although the implementation of the proposed GSO/LSO/CsI(Tl) detector raises special problems that increase complexity, it generally outperforms the LSO/YSO phoswich for simultaneous PET, SPECT and CT imaging

  3. Clinical assessment of SPECT/CT co-registration image fusion

    International Nuclear Information System (INIS)

    Zhou Wen; Luan Zhaosheng; Peng Yong

    2004-01-01

    … not confirmed by planar imaging were defined; 8 of them were eliminated. In 18 lung MAA-SPECT + CT image fusions, 10 of 11 pulmonary-embolism patients were detected; 8 of these patients had multiple emboli and 3 had a single embolus. 8 patients were also examined by V/Q pulmonary planar imaging. The diagnoses for 12/18 patients were similar between lung MAA-SPECT + CT image fusion and V/Q pulmonary planar imaging. Conclusion: Co-registered SPECT/CT image fusion can overcome the uncertain localization of single-emission imaging and avoid the deviation caused by different imaging times and positions in separately registered image fusion. While increasing the accuracy of localization, it gives physicians more anatomic, metabolic and functional information. SPECT/CT co-registered image fusion can therefore reject false positives, increase the sensitivity of finding smaller foci, and enhance the accuracy of diagnosis. This technique is simple, reliable and practical, and has a promising future for nuclear imaging in the coming years. (authors)

  4. Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain

    International Nuclear Information System (INIS)

    Fiedler, E.; Platsch, G.; Schwarz, A.; Schmiedehausen, K.; Kuwert, T.; Tomandl, B.; Huk, W.; Rupprecht, Th.; Rahn, N.

    2003-01-01

    Aim: Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated the performance and time needed for fusing MRI and SPECT images using semiautomated dedicated software. Patients, material and method: In 32 patients, regional cerebral blood flow was measured using 99mTc ethyl cysteinate dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata scanner. Twelve of the MRI data sets were acquired using a 3D T1-weighted MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where the automated fusion was insufficient, that for manual realignment. Results: The mean time of the automated fusion procedure was 123 s. It was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). Conclusion: The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for clinical routine use. (orig.)

  5. Remote sensing image fusion

    CERN Document Server

    Alparone, Luciano; Baronti, Stefano; Garzelli, Andrea

    2015-01-01

    A synthesis of more than ten years of experience, Remote Sensing Image Fusion covers methods specifically designed for remote sensing imagery. The authors supply a comprehensive classification system and rigorous mathematical description of advanced and state-of-the-art methods for pansharpening of multispectral images, fusion of hyperspectral and panchromatic images, and fusion of data from heterogeneous sensors such as optical and synthetic aperture radar (SAR) images and integration of thermal and visible/near-infrared images. They also explore new trends of signal/image processing, such as

  6. Image fusion and denoising using fractional-order gradient information

    DEFF Research Database (Denmark)

    Mei, Jin-Jin; Dong, Yiqiu; Huang, Ting-Zhu

    Image fusion and denoising are significant in image processing because of the availability of multi-sensor data and the presence of noise. First-order and second-order gradient information has been effectively applied to fusing noiseless source images. In this paper, due to the adv… The experimental results show that the proposed method outperforms conventional total-variation-based methods for simultaneously fusing and denoising.

  7. Linked statistical shape models for multi-modal segmentation: application to prostate CT-MR segmentation in radiotherapy planning

    Science.gov (United States)

    Chowdhury, Najeeb; Chappelow, Jonathan; Toth, Robert; Kim, Sung; Hahn, Stephen; Vapiwala, Neha; Lin, Haibo; Both, Stefan; Madabhushi, Anant

    2011-03-01

    We present a novel framework for building a linked statistical shape model (LSSM), a statistical shape model (SSM) that links the shape variation of a structure of interest (SOI) across multiple imaging modalities. This framework is particularly relevant in scenarios where accurate delineations of a SOI's boundary on one of the modalities may not be readily available, or difficult to obtain, for training a SSM. We apply the LSSM in the context of multi-modal prostate segmentation for radiotherapy planning, where we segment the prostate on MRI and CT simultaneously. Prostate capsule segmentation is a critical step in prostate radiotherapy planning, where dose plans have to be formulated on CT. Since accurate delineations of the prostate boundary are very difficult to obtain on CT, pre-treatment MRI is now beginning to be acquired at several medical centers. Delineation of the prostate on MRI is acknowledged as being significantly simpler than on CT. Hence, our framework incorporates multi-modal registration of MRI and CT to map 2D boundary delineations of the prostate (obtained from an expert radiation oncologist) on MR training images onto corresponding CT images. The delineations of the prostate capsule on MRI and CT allow for 3D reconstruction of the prostate shape, which facilitates the building of the LSSM. We acquired 7 MRI-CT patient studies and used the leave-one-out strategy to train and evaluate our LSSM (fLSSM), built using expert ground truth delineations on MRI and MRI-CT fusion derived capsule delineations on CT. A unique attribute of our fLSSM is that it does not require expert delineations of the capsule on CT. In order to perform prostate MRI segmentation using the fLSSM, we employed a region-based approach where we deformed the evolving prostate boundary to optimize a mutual information based cost criterion, which took into account region-based intensity statistics of the image being segmented. The final prostate segmentation was then

  8. [Image fusion in medical radiology].

    Science.gov (United States)

    Burger, C

    1996-07-20

    Image fusion supports the correlation between images of two or more studies of the same organ. First, the effect of differing geometries during image acquisitions, such as a head tilt, is compensated for. As a consequence, congruent images can easily be obtained. Instead of merely putting them side by side in a static manner and burdening the radiologist with the whole correlation task, image fusion supports him with interactive visualization techniques. This is especially worthwhile for small lesions as they can be more precisely located. Image fusion is feasible today. Easy and robust techniques are readily available, and furthermore DICOM, a rapidly evolving data exchange standard, diminishes the once severe compatibility problems for image data originating from systems of different manufacturers. However, the current solutions for image fusion are not yet established enough for a high throughput of fusion studies. Thus, for the time being image fusion is most appropriately confined to clinical research studies.

  9. Multi-modality PET-CT imaging of breast cancer in an animal model using nanoparticle x-ray contrast agent and 18F-FDG

    Science.gov (United States)

    Badea, C. T.; Ghaghada, K.; Espinosa, G.; Strong, L.; Annapragada, A.

    2011-03-01

    Multi-modality PET-CT imaging is playing an important role in the field of oncology. While PET imaging facilitates functional interrogation of tumor status, the use of CT imaging is primarily limited to anatomical reference. In an attempt to extract comprehensive information about tumor cells and their microenvironment, we used a nanoparticle X-ray contrast agent to image tumor vasculature and vessel 'leakiness' and 18F-FDG to investigate the metabolic status of tumor cells. In vivo PET/CT studies were performed in mice implanted with 4T1 mammary breast cancer cells. Early-phase micro-CT imaging enabled visualization of the 3D vascular architecture of the tumors, whereas delayed-phase micro-CT demonstrated highly permeable vessels as evidenced by nanoparticle accumulation within the tumor. Both imaging modalities demonstrated the presence of a necrotic core as indicated by a hypo-enhanced region in the center of the tumor. At early time-points, the CT-derived fractional blood volume did not correlate with 18F-FDG uptake. At delayed time-points, the tumor enhancement in 18F-FDG micro-PET images correlated with the delayed signal enhancement due to nanoparticle extravasation seen in CT images. The proposed hybrid imaging approach could be used to better understand tumor angiogenesis and to serve as the basis for monitoring and evaluating anti-angiogenic and nano-chemotherapies.

  10. APPLICATION OF FUSION WITH SAR AND OPTICAL IMAGES IN LAND USE CLASSIFICATION BASED ON SVM

    Directory of Open Access Journals (Sweden)

    C. Bao

    2012-07-01

    Full Text Available With the growth of remote sensing data of multiple spatial resolutions, spectral resolutions and sources, data fusion technologies have been widely used in geological fields. Synthetic Aperture Radar (SAR) and optical cameras are currently the two most common sensors. Multi-spectral optical images express the spectral features of ground objects, while SAR images express backscatter information. The accuracy of image classification can be effectively improved by fusing the two kinds of images. In this paper, TerraSAR-X images and ALOS multi-spectral images were fused for land use classification. After preprocessing steps such as geometric rectification, radiometric correction and noise suppression, the two kinds of images were fused, and an SVM model identification method was then used for land use classification. Two different fusion methods were used: one joins the SAR image into the multi-spectral images as an additional band, and the other fuses the two kinds of images directly. The former can raise the resolution and preserve the texture information, and the latter can preserve spectral feature information and improve the capability of identifying different features. The experimental results showed that the classification accuracy using fused images is better than that using multi-spectral images only. The accuracy of classification for roads, habitation and water bodies was significantly improved. Compared to traditional classification methods, the approach of this paper, applying an SVM classifier to fused images, achieves better results in identifying complicated land use classes, especially for small ground features.
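    The first fusion variant above (appending SAR as an extra band before pixel-wise SVM classification) can be sketched directly; random arrays stand in for the TerraSAR-X and ALOS data, and the SVM settings are illustrative rather than those used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
h, w = 64, 64
optical = rng.random((h, w, 4))           # stand-in for ALOS multi-spectral bands
sar = rng.random((h, w, 1))               # stand-in for TerraSAR-X backscatter
labels = rng.integers(0, 3, size=(h, w))  # toy land-use classes

# Variant 1 from the abstract: append SAR as one more band, classify per pixel.
fused = np.concatenate([optical, sar], axis=-1).reshape(-1, 5)
y = labels.ravel()

Xtr, Xte, ytr, yte = train_test_split(fused, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)
print(clf.score(Xte, yte))
```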

  11. Semiparametric score level fusion: Gaussian copula approach

    NARCIS (Netherlands)

    Susyanyo, N.; Klaassen, C.A.J.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2015-01-01

    Score level fusion is an appealing method for combining multi-algorithm, multi-representation, and multi-modality biometrics due to its simplicity. Often, scores are assumed to be independent, but even for dependent scores, according to the Neyman-Pearson lemma, the likelihood ratio is the

  12. Oriented Edge-Based Feature Descriptor for Multi-Sensor Image Alignment and Enhancement

    Directory of Open Access Journals (Sweden)

    Myung-Ho Ju

    2013-10-01

    Full Text Available In this paper, we present an efficient image alignment and enhancement method for multi-sensor images. The shape of an object captured in multi-sensor images can be determined by comparing the variability of contrast of corresponding edges across the multi-sensor images. Using this cue, we construct a robust feature descriptor based on the magnitudes of the oriented edges. Our proposed method enables fast image alignment by identifying matching features in multi-sensor images. We enhance the aligned multi-sensor images through the fusion of the salient regions from each image. The results of stitching the multi-sensor images and their enhancement demonstrate that our proposed method can align and enhance multi-sensor images more efficiently than previous methods.

  13. Ultrasound and PET-CT image fusion for prostate brachytherapy image guidance

    International Nuclear Information System (INIS)

    Hasford, F.

    2015-01-01

    Fusion of medical images between different cross-sectional modalities is widely used, mostly where functional images are fused with anatomical data. Ultrasound has for some time been the standard imaging technique used for treatment planning of prostate cancer cases. While this approach is laudable and has yielded some positive results, the latest developments integrate images from ultrasound and other modalities such as PET-CT to complement the missing properties of ultrasound images. This study sought to enhance the diagnosis and treatment of prostate cancers by developing MATLAB algorithms to fuse ultrasound and PET-CT images. The fused ultrasound-PET-CT image has been shown to contain better-quality information than the individual input images. The fused image has the properties of reduced uncertainty, increased reliability, robust system performance, and compact representation of information. The objective of co-registering the ultrasound and PET-CT images was achieved by conducting performance evaluation of the ultrasound and PET-CT imaging systems, developing an image contrast enhancement algorithm, developing a MATLAB image fusion algorithm, and assessing the accuracy of the fusion algorithm. Performance evaluation of the ultrasound brachytherapy system produced satisfactory results in accordance with the tolerances recommended by AAPM TG 128. Using an ultrasound brachytherapy quality assurance phantom, an average axial distance measurement of 10.11 ± 0.11 mm was estimated. Average lateral distance measurements of 10.08 ± 0.07 mm, 20.01 ± 0.06 mm, 29.89 ± 0.03 mm and 39.84 ± 0.37 mm were estimated for the inter-target distances corresponding to 10 mm, 20 mm, 30 mm and 40 mm, respectively. Volume accuracy assessment produced measurements of 3.97 cm³, 8.86 cm³ and 20.11 cm³ for known standard volumes of 4 cm³, 9 cm³ and 20 cm³, respectively. Depth of penetration assessment of the ultrasound system produced an estimate of 5.37 ± 0.02 cm

  14. FUSION OF MULTI-SCALE DEMS FROM DESCENT AND NAVCAM IMAGES OF CHANG'E-3 USING COMPRESSED SENSING METHOD

    Directory of Open Access Journals (Sweden)

    M. Peng

    2017-07-01

    Full Text Available The multi-source DEMs generated using the images acquired in the descent and landing phase and after landing contain supplementary information, and this makes it possible and beneficial to produce a higher-quality DEM through fusing the multi-scale DEMs. The proposed fusion method consists of three steps. First, source DEMs are split into small DEM patches; then the DEM patches are classified into a few groups by local density peaks clustering. Next, the grouped DEM patches are used for sub-dictionary learning by stochastic coordinate coding. The trained sub-dictionaries are combined into a dictionary for sparse representation. Finally, the simultaneous orthogonal matching pursuit (SOMP) algorithm is used to achieve the sparse representation. We use the real DEMs generated from Chang'e-3 descent images and navigation camera (Navcam) stereo images to validate the proposed method. Through the experiments, we reconstructed a seamless DEM with the highest resolution and the largest spatial coverage among the input data. The experimental results demonstrated the feasibility of the proposed method.
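    The dictionary-learning-plus-sparse-coding core of the method can be approximated with off-the-shelf tools. The sketch below substitutes scikit-learn's mini-batch dictionary learning for stochastic coordinate coding and plain OMP for SOMP (which codes several signals jointly), with random patches standing in for DEM data.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
patches = rng.normal(size=(500, 64))    # flattened 8x8 DEM patches (stand-ins)

# Learn an overcomplete patch dictionary (stand-in for stochastic
# coordinate coding in the paper).
dico = MiniBatchDictionaryLearning(n_components=128, batch_size=32,
                                   random_state=0).fit(patches)
D = dico.components_                    # shape (n_atoms, 64)

# Sparse-code one patch with plain OMP; SOMP would code a group jointly.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8,
                                fit_intercept=False).fit(D.T, patches[0])
recon = D.T @ omp.coef_
print(np.linalg.norm(patches[0] - recon))   # reconstruction residual
```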

  15. Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.

    Science.gov (United States)

    Reena Benjamin, J; Jayasree, T

    2018-02-01

    In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information content, to preserve the edges and to enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift-invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift-invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift-invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain of two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift-invariant wavelet transform method. The proposed method is evaluated based on three main key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. In this paper, a complex wavelet-based image fusion has been discussed. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details. It also reduces redundant details, artifacts, and distortions.
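    The classic PCA fusion rule that such cascaded schemes build on can be written in a few lines: the principal eigenvector of the two co-registered images' 2x2 covariance supplies the blending weights. This is the generic rule only, not the paper's full cascaded PCA plus dual-tree complex wavelet pipeline.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """Classic PCA fusion: weights come from the principal eigenvector
    of the 2x2 covariance of the two images' co-registered pixels."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = v / v.sum()                      # normalized blending weights
    return w[0] * img_a + w[1] * img_b

rng = np.random.default_rng(5)
mri = rng.random((128, 128))             # stand-ins for co-registered MRI/CT
ct = rng.random((128, 128))
print(pca_fuse(mri, ct).mean())
```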

  16. Modal Identification in an Automotive Multi-Component System Using HS 3D-DIC

    Directory of Open Access Journals (Sweden)

    Ángel Jesús Molina-Viedma

    2018-02-01

    Full Text Available The modal characterization of automotive lighting systems is difficult using contact sensors due to the light weight of the elements which compose the component, as well as the intricate access needed to place them. In experimental modal analysis, high-speed 3D digital image correlation (HS 3D-DIC) is attracting attention since it provides full-field contactless measurements of 3D displacements as its main advantage over other techniques. Different methodologies have been published that perform modal identification, i.e., natural frequencies, damping ratios, and mode shapes, using the full-field information. In this work, experimental modal analysis has been performed on a multi-component automotive lighting system using HS 3D-DIC. Base motion excitation was applied to simulate operating conditions. A recently validated methodology was employed for modal identification using transmissibility functions, i.e., the transfer functions from base motion tests. The results make it possible to identify the local and global behavior of the different elements of injected polymeric and metallic materials.

  17. Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications

    Science.gov (United States)

    Paramanandham, Nirmala; Rajendiran, Kishore

    2018-01-01

    A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or different sensing modalities can deliver information that cannot be obtained by viewing the sensor outputs individually or sequentially. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, integrating the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factors. These optimized weighting factors are used for fusing the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image. An enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
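    The DCT-domain weighting at the heart of the method is easy to sketch; here a coarse grid search over a single scalar weight, scored by the entropy metric the paper also reports, stands in for the PSO optimization of the weighting factors.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_fuse(visible, infrared, w):
    """Fuse in the DCT domain with a scalar weight w; the paper optimizes
    the weighting factors with PSO, replaced here by a grid search."""
    fused = w * dctn(visible, norm="ortho") + (1 - w) * dctn(infrared, norm="ortho")
    return idctn(fused, norm="ortho")

def entropy(img, bins=64):
    hist, _ = np.histogram(img, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(6)
vis = rng.random((128, 128))             # stand-ins for visible/infrared frames
ir = rng.random((128, 128))
best_w = max(np.linspace(0.1, 0.9, 9),
             key=lambda w: entropy(dct_fuse(vis, ir, w)))
print(best_w)
```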

  18. Change Detection of High-Resolution Remote Sensing Images Based on Adaptive Fusion of Multiple Features

    Science.gov (United States)

    Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.

    2018-04-01

    Traditional change detection algorithms mainly depend on the spectral information of image patches and fail to effectively mine and fuse multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a remote sensing image change detection algorithm based on the adaptive fusion of multiple features. First, image objects are obtained by multi-scale segmentation; then a color histogram and a linear-gradient histogram are calculated for each object. The EMD statistical operator is used to compute the color distance and the edge-line feature distance between corresponding objects in different periods, and an adaptive weighting method combines the color feature distance and the edge-line feature distance to construct the object heterogeneity. Finally, curvature histogram analysis of the image patches yields the change detection results. The experimental results show that the method can fully fuse the color and edge-line features, thus improving the accuracy of change detection.

  1. A Single Rod Multi-modality Multi-interface Level Sensor Using an AC Current Source

    Directory of Open Access Journals (Sweden)

    Abdulgader Hwili

    2008-05-01

    Full Text Available Crude oil separation is an important process in the oil industry. To make efficient use of separators, it is important to know their internal behaviour and to measure the levels of the multiple interfaces between different materials, such as gas-foam, foam-oil, oil-emulsion, emulsion-water and water-solids. A single-rod multi-modality multi-interface level sensor is presented, which has current-source and electromagnetic modalities. Some key issues have been addressed, including the effect of salt content and temperature (i.e., conductivity) on the measurement.

  2. Security of nuclear materials using multi-sensor wavelet fusion

    International Nuclear Information System (INIS)

    Djoko Hari Nugroho

    2010-01-01

    The security of nuclear material in an installation is determined by how far the installation can assure that the nuclear material remains at a predetermined location. This paper presents a preliminary design of a nuclear material tracking system for decision-making support in the installation, based on multi-sensor fusion that is reliable and accurate enough to ensure that the nuclear material remains inside the controlled area. Decision-making capability in the management information system is represented by an understanding of perception at the third level of abstraction. The second level is achieved with the support of image analysis and data organization. The first level of abstraction is constructed by merging data from several CCD camera sensors distributed in a building into a data fusion representation. The data fusion is processed using a wavelet approach. Simulation in Matlab shows that the wavelet approach fuses information from multiple sensors well. It is expected that when the nuclear material moves out of the predetermined control regions, a warning alarm and a message will be raised in the management information system display. Thus the time of a nuclear material movement event can be obtained and tracked. (author)

  3. Multi-Modal Intelligent Traffic Signal Systems GPS

    Data.gov (United States)

    Department of Transportation — Data were collected during the Multi-Modal Intelligent Transportation Signal Systems (MMITSS) study. MMITSS is a next-generation traffic signal system that seeks to...

  4. Bedside functional brain imaging in critically-ill children using high-density EEG source modeling and multi-modal sensory stimulation

    Directory of Open Access Journals (Sweden)

    Danny Eytan

    2016-01-01

    Full Text Available Acute brain injury is a common cause of death and critical illness in children and young adults. Fundamental management focuses on early characterization of the extent of injury and optimizing recovery by preventing secondary damage during the days following the primary injury. Currently, bedside technology for measuring neurological function is mainly limited to using electroencephalography (EEG) for the detection of seizures and encephalopathic features, and evoked potentials. We present a proof-of-concept study in patients with acute brain injury in the intensive care setting, featuring a bedside functional imaging set-up designed to map cortical brain activation patterns by combining high-density EEG recordings, multi-modal sensory stimulation (auditory, visual, and somatosensory), and EEG source modeling. The use of source modeling allows for the examination of spatiotemporal activation patterns at the cortical region level as opposed to the traditional scalp potential maps. The application of this system in both healthy and brain-injured participants is demonstrated with modality-specific source-reconstructed cortical activation patterns. By combining stimulation obtained with different modalities, most of the cortical surface can be monitored for changes in functional activation without having to physically transport the subject to an imaging suite. The results in patients in an intensive care setting with anatomically well-defined brain lesions suggest a topographic association between their injuries and activation patterns. Moreover, we report the reproducible application of a protocol examining higher-level cortical processing with an auditory oddball paradigm involving presentation of the patient's own name. This study reports the first successful application of a bedside functional brain mapping tool in the intensive care setting. This application has the potential to provide clinicians with an additional dimension of information to manage

  5. Three-way (N-way) fusion of brain imaging data based on mCCA+jICA and its application to discriminating schizophrenia

    NARCIS (Netherlands)

    J. Sui (Jing); H. He (Hao); G. Pearlson (Godfrey); T. Adali (Tülay); K.A. Kiehl (Kent); Q. Yu (Qingbao); V.P. Clark; E. Castro (Elena); T.J.H. White (Tonya); B.A. Mueller (Bryon); B.C. Ho (Beng); N.C. Andreasen; V.D. Calhoun (Vince)

    2013-01-01

    Multimodal fusion is an effective approach to better understand brain diseases. However, most such instances have been limited to pair-wise fusion; because there are often more than two imaging modalities available per subject, there is a need for approaches that can combine multiple

  6. Multi-modal image registration: matching MRI with histology

    Science.gov (United States)

    Alic, Lejla; Haeck, Joost C.; Klein, Stefan; Bol, Karin; van Tiel, Sandra T.; Wielopolski, Piotr A.; Bijster, Magda; Niessen, Wiro J.; Bernsen, Monique; Veenland, Jifke F.; de Jong, Marion

    2010-03-01

    Spatial correspondence between histology and multi-sequence MRI can provide information about the capability of non-invasive imaging to characterize cancerous tissue. However, shrinkage and deformation occurring during the excision of the tumor and the histological processing complicate the co-registration of MR images with histological sections. This work proposes a methodology to establish a detailed 3D relation between histology sections and in vivo MRI tumor data. The key features of the methodology are a very dense histological sampling (up to 100 histology slices per tumor), mutual-information-based non-rigid B-spline registration, the utilization of the whole 3D data sets, and the exploitation of an intermediate ex vivo MRI. In this proof-of-concept paper, the methodology was applied to one tumor. We found that, after registration, the visual alignment of tumor borders and internal structures was fairly accurate. Utilizing the intermediate ex vivo MRI, it was possible to account for changes caused by the excision of the tumor: we observed a tumor expansion of 20%. The effects of fixation, dehydration and histological sectioning could also be determined: 26% shrinkage of the tumor was found. The annotation of viable tissue, performed in histology and transformed to the in vivo MRI, matched clearly with high-intensity regions in MRI. With this methodology, histological annotation can be directly related to the corresponding in vivo MRI. This is a vital step for evaluating the feasibility of multi-spectral MRI to depict histological ground truth.
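    The mutual-information-driven non-rigid B-spline registration named above maps onto standard tooling. Below is a minimal SimpleITK sketch under stated assumptions: the file names are placeholders, the control-point grid size and optimizer settings are illustrative, and this is not the authors' exact configuration.

```python
import SimpleITK as sitk

# Placeholder inputs: an ex vivo MR volume (fixed) and a histology-derived
# volume resampled to 3D (moving); both file names are hypothetical.
fixed = sitk.ReadImage("exvivo_mr.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("histology_stack.nii.gz", sitk.sitkFloat32)

# Coarse B-spline control-point grid over the fixed image domain.
tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                         numberOfIterations=100)
reg.SetInitialTransform(tx, inPlace=True)

out_tx = reg.Execute(fixed, moving)
warped = sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear, 0.0)
sitk.WriteImage(warped, "histology_registered.nii.gz")
```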

  7. The research progress of dual-modality probes for molecular imaging

    International Nuclear Information System (INIS)

    Cao Feng; Chen Yue

    2010-01-01

    Various imaging modalities have been exploited to investigate the anatomic or functional characteristics of tissues in the body. However, no single imaging modality provides complete structural, functional, and molecular information, as each imaging modality has its own unique strengths and weaknesses. The combination of two imaging modalities that exploits the strengths of different methods offers the prospect of improved diagnostic abilities. As more and more dual-modality imaging systems have become clinically adopted, significant progress has been made toward the creation of dual-modality imaging probes, which can be used as novel tools for future multimodality systems. These all-in-one probes take full advantage of two different imaging modalities and could provide comprehensive information for clinical diagnostics. This review discusses the advantages and challenges in developing dual-modality imaging probes. (authors)

  8. Image Fusion Technologies In Commercial Remote Sensing Packages

    OpenAIRE

    Al-Wassai, Firouz Abdullah; Kalyankar, N. V.

    2013-01-01

    Several remote sensing software packages are used for the explicit purpose of analyzing and visualizing remotely sensed data, following the development of remote sensing sensor technologies over the last ten years. According to the literature, remote sensing still lacks software tools for effective information extraction from remote sensing data. This paper therefore provides a state-of-the-art survey of multi-sensor image fusion technologies as well as a review of the quality evaluation of the single image or f...

  9. Coronary plaque morphology on multi-modality imaging and periprocedural myocardial infarction after percutaneous coronary intervention

    Directory of Open Access Journals (Sweden)

    Akira Sato

    2016-06-01

    Full Text Available Percutaneous coronary intervention (PCI) may be complicated by periprocedural myocardial infarction (PMI), as manifested by elevated cardiac biomarkers such as creatine kinase (CK)-MB or troponin T. The occurrence of PMI has been shown to be associated with worse short- and long-term clinical outcomes. However, recent studies suggest that PMI defined by biomarker levels alone is a marker of atherosclerosis burden and procedural complexity but in most cases does not have independent prognostic significance. Diagnostic multi-modality imaging such as intravascular ultrasound, optical coherence tomography, coronary angioscopy, near-infrared spectroscopy, multidetector computed tomography, and magnetic resonance imaging can be used to closely investigate the atherosclerotic lesion in order to detect morphological markers of unstable and vulnerable plaques in patients undergoing PCI. With the improvement of the technical aspects of multi-modality coronary imaging, clinical practice and research are increasingly shifting toward defining the clinical implications of plaque morphology for patient outcomes. Numerous data have been published regarding the relationship between pre-PCI lesion subsets on multi-modality imaging and post-PCI biomarker levels. In this review, we discuss the relationship between coronary plaque morphology estimated by invasive or noninvasive coronary imaging and the occurrence of PMI. Furthermore, this review underlines that the multi-modality coronary imaging approach will become the gold standard for invasive or noninvasive prediction of PMI in clinical practice.

  10. Application of Multimodality Imaging Fusion Technology in Diagnosis and Treatment of Malignant Tumors under the Precision Medicine Plan.

    Science.gov (United States)

    Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying

    2016-12-20

    The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different modality imaging can be integrated and comprehensively analyzed by imaging fusion systems. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduced several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice, reviews, and other relevant literature published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected. Duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, including accurate localization, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems can provide more imaging information about tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.

  11. Fusion of imaging and nonimaging data for surveillance aircraft

    Science.gov (United States)

    Shahbazian, Elisa; Gagnon, Langis; Duquet, Jean Remi; Macieszczak, Maciej; Valin, Pierre

    1997-06-01

    This paper describes a phased incremental integration approach for application of image analysis and data fusion technologies to provide automated intelligent target tracking and identification for airborne surveillance on board an Aurora Maritime Patrol Aircraft. The sensor suite of the Aurora consists of a radar, an identification friend or foe (IFF) system, an electronic support measures (ESM) system, a spotlight synthetic aperture radar (SSAR), a forward looking infra-red (FLIR) sensor and a link-11 tactical datalink system. Lockheed Martin Canada (LMCan) is developing a testbed, which will be used to analyze and evaluate approaches for combining the data provided by the existing sensors, which were initially not designed to feed a fusion system. Three concurrent research proof-of-concept activities provide techniques, algorithms and methodology into three sequential phases of integration of this testbed. These activities are: (1) analysis of the fusion architecture (track/contact/hybrid) most appropriate for the type of data available, (2) extraction and fusion of simple features from the imaging data into the fusion system performing automatic target identification, and (3) development of a unique software architecture which will permit integration and independent evolution, enhancement and optimization of various decision aid capabilities, such as multi-sensor data fusion (MSDF), situation and threat assessment (STA) and resource management (RM).

  12. Automatic structural parcellation of mouse brain MRI using multi-atlas label fusion.

    Directory of Open Access Journals (Sweden)

    Da Ma

    Full Text Available Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS) algorithm, which utilises a locally normalised cross correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE) framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas based segmentation, as well as to the original STAPLE framework.
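    A stripped-down relative of the locally weighted fusion that STEPS builds on (without the STAPLE-style rater-performance estimation) weights each atlas's vote at a voxel by a local correlation between its registered intensity image and the target. A toy sketch with random stand-in volumes:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_ncc_weights(target, atlas_imgs, win=5):
    """Local correlation-like similarity between the target image and each
    registered atlas intensity image, clipped to non-negative weights."""
    weights = []
    for a in atlas_imgs:
        mt, ma = uniform_filter(target, win), uniform_filter(a, win)
        cov = uniform_filter(target * a, win) - mt * ma
        vt = uniform_filter(target * target, win) - mt * mt
        va = uniform_filter(a * a, win) - ma * ma
        ncc = cov / np.sqrt(np.maximum(vt * va, 1e-12))
        weights.append(np.clip(ncc, 0.0, None))
    return weights

def weighted_vote(atlas_labels, weights, n_labels):
    """Each atlas votes for its label at every voxel, weighted locally."""
    score = np.zeros((n_labels,) + atlas_labels[0].shape)
    for lab, w in zip(atlas_labels, weights):
        for k in range(n_labels):
            score[k] += w * (lab == k)
    return score.argmax(axis=0)

rng = np.random.default_rng(7)
target = rng.random((32, 32, 32))
imgs = [target + 0.1 * rng.standard_normal(target.shape) for _ in range(5)]
labels = [rng.integers(0, 3, size=target.shape) for _ in range(5)]
print(weighted_vote(labels, local_ncc_weights(target, imgs), 3).shape)
```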

  13. Performance evaluation of multi-sensor data fusion technique for ...

    Indian Academy of Sciences (India)

    Multi-sensor data fusion; Test Range application; trajectory … Kalman filtering technique utilizes the noise statistics of the underlying system under con… Hall D L 1992 Mathematical techniques in multi-sensor data fusion (Boston, MA: …
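
    Only keyword fragments of this record survive, but the Kalman filter it mentions is the standard workhorse for fusing noisy multi-sensor trajectory measurements. Below is a generic predict/update step in Python; the matrices and the kalman_step helper are illustrative assumptions, not taken from the cited Test Range application.

      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          """One linear Kalman filter cycle: predict with the motion model,
          then update with a (possibly fused) sensor measurement z."""
          x_pred = F @ x                       # state prediction
          P_pred = F @ P @ F.T + Q             # covariance prediction
          S = H @ P_pred @ H.T + R             # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Constant-velocity model in one coordinate, dt = 1 s (illustrative values).
      F = np.array([[1.0, 1.0], [0.0, 1.0]])
      H = np.array([[1.0, 0.0]])               # position-only sensor
      Q, R = 0.01 * np.eye(2), np.array([[4.0]])
      x, P = np.zeros(2), np.eye(2)
      x, P = kalman_step(x, P, np.array([1.2]), F, H, Q, R)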

  14. Cy5.5 conjugated MnO nanoparticles for magnetic resonance/near-infrared fluorescence dual-modal imaging of brain gliomas.

    Science.gov (United States)

    Chen, Ning; Shao, Chen; Li, Shuai; Wang, Zihao; Qu, Yanming; Gu, Wei; Yu, Chunjiang; Ye, Ling

    2015-11-01

    The fusion of molecular and anatomical modalities facilitates more reliable and accurate detection of tumors. Herein, we prepared PEG-Cy5.5-conjugated MnO nanoparticles (MnO-PEG-Cy5.5 NPs) with magnetic resonance (MR) and near-infrared fluorescence (NIRF) imaging modalities. The applicability of MnO-PEG-Cy5.5 NPs as a dual-modal (MR/NIRF) imaging nanoprobe for the detection of brain gliomas was investigated. In vivo MR contrast enhancement of the MnO-PEG-Cy5.5 nanoprobe in the tumor region was demonstrated. Meanwhile, whole-body NIRF imaging of glioma-bearing nude mice exhibited distinct tumor localization upon injection of MnO-PEG-Cy5.5 NPs. Moreover, ex vivo CLSM imaging of brain slices hosting glioma indicated the preferential accumulation of MnO-PEG-Cy5.5 NPs in the glioma region. Our results therefore demonstrated the potential of MnO-PEG-Cy5.5 NPs as a dual-modal (MR/NIRF) imaging nanoprobe for improving diagnostic efficacy by simultaneously providing anatomical information from deep inside the body and more sensitive information at the cellular level. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Multi-sensor radiation detection, imaging, and fusion

    Energy Technology Data Exchange (ETDEWEB)

    Vetter, Kai [Department of Nuclear Engineering, University of California, Berkeley, CA 94720 (United States); Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-01-01

    Glenn Knoll was one of the leaders in the field of radiation detection and measurements and shaped this field through his outstanding scientific and technical contributions, as a teacher, through his personality, and through his textbook. His Radiation Detection and Measurement book guided me in my studies and is now the textbook in my classes in the Department of Nuclear Engineering at UC Berkeley. In the spirit of Glenn, I will provide an overview of our activities at the Berkeley Applied Nuclear Physics program, reflecting some of the breadth of radiation detection technologies and their applications, ranging from fundamental studies in physics to biomedical imaging and nuclear security. I will conclude with a discussion of our Berkeley Radwatch and Resilient Communities activities, undertaken as a result of the events at the Fukushima Dai-ichi nuclear power plant in Japan more than 4 years ago. - Highlights: • Electron-tracking based gamma-ray momentum reconstruction. • 3D volumetric and 3D scene-fusion gamma-ray imaging. • Nuclear Street View integrates and associates nuclear radiation features with specific objects in the environment. • The Institute for Resilient Communities combines science, education, and communities to minimize the impact of disastrous events.

  16. 3D reconstruction from multi-view VHR-satellite images in MicMac

    Science.gov (United States)

    Rupnik, Ewelina; Pierrot-Deseilligny, Marc; Delorme, Arthur

    2018-05-01

    This work addresses the generation of high-quality digital surface models by fusing multiple depth maps calculated with the dense image matching method. The algorithm is adapted to very high resolution multi-view satellite images, and the main contributions of this work are in the multi-view fusion. The algorithm is insensitive to outliers, takes into account the matching quality indicators, handles non-correlated zones (e.g. occlusions), and is solved with a multi-directional dynamic programming approach. No geometric constraints (e.g. surface planarity) or auxiliary data in the form of ground control points are required for its operation. Prior to the fusion procedures, the RPC geolocation parameters of all images are improved in a bundle block adjustment routine. The performance of the algorithm is evaluated on two VHR (Very High Resolution) satellite image datasets (Pléiades, WorldView-3), revealing its good performance in reconstructing non-textured areas, repetitive patterns, and surface discontinuities.

  17. VoxelStats: A MATLAB Package for Multi-Modal Voxel-Wise Brain Image Analysis.

    Science.gov (United States)

    Mathotaarachchi, Sulantha; Wang, Seqian; Shin, Monica; Pascoal, Tharick A; Benedet, Andrea L; Kang, Min Su; Beaudry, Thomas; Fonov, Vladimir S; Gauthier, Serge; Labbe, Aurélie; Rosa-Neto, Pedro

    2016-01-01

    In healthy individuals, behavioral outcomes are highly associated with the variability in brain regional structure or neurochemical phenotypes. Similarly, in the context of neurodegenerative conditions, neuroimaging reveals that cognitive decline is linked to the magnitude of atrophy, neurochemical declines, or concentrations of abnormal protein aggregates across brain regions. However, modeling the effects of multiple regional abnormalities as determinants of cognitive decline at the voxel level remains largely unexplored by multimodal imaging research, given the high computational cost of estimating regression models for every single voxel from various imaging modalities. VoxelStats is a voxel-wise computational framework developed to overcome these computational limitations and to perform statistical operations on multiple scalar variables and imaging modalities at the voxel level. The VoxelStats package has been developed in MATLAB® and supports imaging formats such as Nifti-1, ANALYZE, and MINC v2. Prebuilt functions in VoxelStats enable the user to perform voxel-wise general and generalized linear models and mixed effects models with multiple volumetric covariates. Importantly, VoxelStats can recognize scalar values or image volumes as response variables and can accommodate volumetric statistical covariates as well as their interaction effects with other variables. Furthermore, this package includes built-in functionality to perform voxel-wise receiver operating characteristic analysis and paired and unpaired group contrast analysis. Validation of VoxelStats was conducted by comparing the linear regression functionality with existing toolboxes such as glim_image and RMINC. The validation results were identical to those of existing methods, and the additional functionality was demonstrated by generating feature case assessments (t-statistics, odds ratio, and true positive rate maps). In summary, VoxelStats expands the current methods for multimodal imaging analysis by allowing the …
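
    VoxelStats itself is a MATLAB package; as a language-neutral illustration of what a voxel-wise general linear model does, the following Python sketch fits an independent OLS model at every voxel with one volumetric covariate. The function name, shapes, and returned t-map are assumptions for illustration, not the package's API.

      import numpy as np

      def voxelwise_glm(response, vol_covariate, scalar_design):
          """OLS fit of y = X b at every voxel; returns a t-map for the
          volumetric covariate. Shapes are illustrative assumptions:
          response      : (n_subjects, n_voxels) response images
          vol_covariate : (n_subjects, n_voxels) volumetric covariate
          scalar_design : (n_subjects, k) subject-level scalar covariates
          """
          n_subj, n_vox = response.shape
          t_map = np.full(n_vox, np.nan)
          for v in range(n_vox):               # explicit loop for clarity
              X = np.column_stack([np.ones(n_subj), scalar_design, vol_covariate[:, v]])
              beta, res, rank, _ = np.linalg.lstsq(X, response[:, v], rcond=None)
              if rank < X.shape[1] or not res.size:
                  continue                      # skip rank-deficient voxels
              sigma2 = res[0] / (n_subj - X.shape[1])
              cov = sigma2 * np.linalg.inv(X.T @ X)
              t_map[v] = beta[-1] / np.sqrt(cov[-1, -1])
          return t_map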

  18. Multi-modal MRI of mild traumatic brain injury

    Directory of Open Access Journals (Sweden)

    Ponnada A. Narayana

    2015-01-01

    Full Text Available Multi-modal magnetic resonance imaging (MRI) that included high-resolution structural imaging, diffusion tensor imaging (DTI), magnetization transfer ratio (MTR) imaging, and magnetic resonance spectroscopic imaging (MRSI) was performed in mild traumatic brain injury (mTBI) patients with negative computed tomographic scans and in an orthopedic-injured (OI) group without concomitant injury to the brain. The OI group served as a comparison group for mTBI. MRI scans were performed both in the acute phase of injury (~24 h) and at follow-up (~90 days). DTI data were analyzed using tract-based spatial statistics (TBSS). Global and regional atrophies were calculated using tensor-based morphometry (TBM). MTR values were calculated using the standard method. MRSI was analyzed using LCModel. At the initial scan, the mean diffusivity (MD) was significantly higher in the mTBI cohort relative to the comparison group in several white matter (WM) regions, including the internal capsule, external capsule, superior corona radiata, anterior corona radiata, posterior corona radiata, inferior fronto-occipital fasciculus, inferior longitudinal fasciculus, forceps major and forceps minor of the corpus callosum, superior longitudinal fasciculus, and corticospinal tract in the right hemisphere. TBSS analysis failed to detect significant differences in any DTI measures between the initial and follow-up scans in either the mTBI or OI group. No significant differences were found in MRSI, MTR or morphometry between the mTBI and OI cohorts at either the initial or follow-up scans, with or without family-wise error (FWE) correction. Our study suggests that a number of WM tracts are affected in mTBI in the acute phase of injury and that these changes disappear by 90 days. This study also suggests that none of the MRI modalities used in this study, with the exception of DTI, is sensitive in detecting changes in the acute phase of mTBI.

  19. Fusion imaging of computed tomographic pulmonary angiography and SPECT ventilation/perfusion scintigraphy: initial experience and potential benefit

    International Nuclear Information System (INIS)

    Harris, Benjamin; Bailey, Dale; Roach, Paul; Bailey, Elizabeth; King, Gregory

    2007-01-01

    The objective of this study was to examine the feasibility of fusing ventilation and perfusion data from single-photon emission computed tomography (SPECT) ventilation/perfusion (V/Q) scintigraphy with computed tomographic pulmonary angiography (CTPA) data. We sought to determine the accuracy of this fusion process. In addition, we correlated the findings of this technique with the final clinical diagnosis. Thirty consecutive patients (17 female, 13 male) who had undergone both CTPA and SPECT V/Q scintigraphy during their admission for investigation of potential pulmonary embolism were identified retrospectively. Image datasets from the two modalities were co-registered and fused using commercial software. Accuracy of the fusion process was determined subjectively by correlation between modalities of the anatomical boundaries and co-existent pleuro-parenchymal abnormalities. In all 30 cases, SPECT V/Q images were accurately fused with CTPA images. An automated registration algorithm was sufficient alone in 23 cases (77%); additional linear z-axis scaling was applied in seven cases. There was accurate topographical co-localisation of vascular, parenchymal and pleural disease on the fused images. Nine patients who had a positive CTPA performed as the initial investigation had co-localised perfusion defects on the subsequent fused CTPA/SPECT images. Three of the 11 V/Q scans initially reported as intermediate could be reinterpreted as low probability owing to co-localisation of defects with parenchymal or pleural pathology. Accurate fusion of SPECT V/Q scintigraphy to CTPA images is possible. This technique may be clinically useful in patients who have non-diagnostic initial investigations or in whom corroborative imaging is sought. (orig.)

  20. Bi-objective optimization for multi-modal transportation routing planning problem based on Pareto optimality

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2015-09-01

    Full Text Available Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem, which aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network, with the optimization considered from two viewpoints: cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the benefit between the two objectives, Pareto optimality is utilized to solve the model by obtaining its Pareto frontier. The Pareto frontier of the model can provide the multi-modal transportation operator (MTO) and customers with better decision support; it is obtained by the normalized normal constraint method. Then, an experimental case study is designed to verify the feasibility of the model and Pareto optimality by using the mathematical programming software Lingo. Finally, a sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality perform well in dealing with the bi-objective optimization. The sensitivity analysis also clearly shows the influence of variations in demand and supply on the multi-modal transportation organization. Therefore, this method can be extended to practice. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem, and a Pareto frontier based sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case.
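
    The Pareto dominance test that underlies the frontier is worth seeing in code. The sketch below filters a handful of invented candidate routes down to their (cost, time) Pareto frontier; the names and numbers are toy data, and the paper itself obtains the frontier from a MILP via the normalized normal constraint method rather than by enumeration.

      def pareto_frontier(routes):
          """Keep routes not dominated on (cost, time); both are minimised."""
          frontier = []
          for name, c, t in routes:
              dominated = any(c2 <= c and t2 <= t and (c2 < c or t2 < t)
                              for _, c2, t2 in routes)
              if not dominated:
                  frontier.append((name, c, t))
          return sorted(frontier, key=lambda r: r[1])

      candidates = [("rail-road", 120, 48), ("road only", 150, 30),
                    ("rail-water", 90, 72), ("air-road", 300, 12),
                    ("water only", 95, 80)]
      print(pareto_frontier(candidates))
      # "water only" is dominated by "rail-water"; the rest form the frontier.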

  1. Dense range images from sparse point clouds using multi-scale processing

    NARCIS (Netherlands)

    Do, Q.L.; Ma, L.; With, de P.H.N.

    2013-01-01

    Multi-modal data processing based on visual and depth/range images has become relevant in computer vision for 3D reconstruction applications such as city modeling, robot navigation, etc. In this paper, we generate high-accuracy dense range images from sparse point clouds to facilitate such applications.

  2. Making Faces - State-Space Models Applied to Multi-Modal Signal Processing

    DEFF Research Database (Denmark)

    Lehn-Schiøler, Tue

    2005-01-01

    The two main focus areas of this thesis are State-Space Models and multi-modal signal processing. The general State-Space Model is investigated and an addition to the class of sequential sampling methods is proposed. This new algorithm is denoted the Parzen Particle Filter. Furthermore … optimizer can be applied to speed up convergence. The linear version of the State-Space Model, the Kalman Filter, is applied to multi-modal signal processing. It is demonstrated how a State-Space Model can be used to map from speech to lip movements. Besides the State-Space Model and the multi-modal … application, an information-theoretic vector quantizer is also proposed. Based on interactions between particles, it is shown how a quantizing scheme based on an analytic cost function can be derived.

  3. Multimodal Image Alignment via Linear Mapping between Feature Modalities.

    Science.gov (United States)

    Jiang, Yanyun; Zheng, Yuanjie; Hou, Sujuan; Chang, Yuchou; Gee, James

    2017-01-01

    We propose a novel landmark matching based method for aligning multimodal images, which is accomplished uniquely by resolving a linear mapping between different feature modalities. This linear mapping results in a new measurement on similarity of images captured from different modalities. In addition, our method simultaneously solves this linear mapping and the landmark correspondences by minimizing a convex quadratic function. Our method can estimate complex image relationship between different modalities and nonlinear nonrigid spatial transformations even in the presence of heavy noise, as shown in our experiments carried out by using a variety of image modalities.

  4. Implementation and applications of dual-modality imaging

    Science.gov (United States)

    Hasegawa, Bruce H.; Barber, William C.; Funk, Tobias; Hwang, Andrew B.; Taylor, Carmen; Sun, Mingshan; Seo, Youngho

    2004-06-01

    In medical diagnosis, functional or physiological data can be acquired using radionuclide imaging with positron emission tomography or with single-photon emission computed tomography. However, anatomical or structural data can be acquired using X-ray computed tomography. In dual-modality imaging, both radionuclide and X-ray detectors are incorporated in an imaging system to allow both functional and structural data to be acquired in a single procedure without removing the patient from the imaging system. In a clinical setting, dual-modality imaging systems commonly are used to localize radiopharmaceutical uptake with respect to the patient's anatomy. This helps the clinician to differentiate disease from regions of normal radiopharmaceutical accumulation, to improve diagnosis or cancer staging, or to facilitate planning for radiation therapy or surgery. While initial applications of dual-modality imaging were developed for clinical imaging on humans, it now is recognized that these systems have potentially important applications for imaging small animals involved in experimental studies including basic investigations of mammalian biology and development of new pharmaceuticals for diagnosis or treatment of disease.

  5. Generating description with multi-feature fusion and saliency maps of image

    Science.gov (United States)

    Liu, Lisha; Ding, Yuxuan; Tian, Chunna; Yuan, Bo

    2018-04-01

    Generating a description for an image can be regarded as visual understanding. It spans artificial intelligence, machine learning, natural language processing and many other areas. In this paper, we present a model that generates descriptions for images based on an RNN (recurrent neural network) with object attention and multiple image features. Deep recurrent neural networks have excellent performance in machine translation, so we use them to generate natural sentence descriptions for images. The proposed method uses a single CNN (convolutional neural network) trained on ImageNet to extract image features. However, we believe a single feature cannot adequately capture the content of an image, as it may focus only on the object area. We therefore add scene information to the image feature using a CNN trained on Places205. Experiments show that a model with multiple features extracted by two CNNs performs better than one with a single feature. In addition, we apply saliency weights to images to emphasize the salient objects. We evaluate our model on MSCOCO using public metrics, and the results show that our model performs better than several state-of-the-art methods.

  6. Multi-intelligence critical rating assessment of fusion techniques (MiCRAFT)

    Science.gov (United States)

    Blasch, Erik

    2015-06-01

    Assessment of multi-intelligence fusion techniques includes the credibility of algorithm performance, the quality of results against mission needs, and usability in a work-domain context. Situation awareness (SAW) brings together low-level information fusion (tracking and identification), high-level information fusion (threat and scenario-based assessment), and information fusion level 5 user refinement (physical, cognitive, and information tasks). To measure SAW, we discuss the SAGAT (Situational Awareness Global Assessment Technique) for a multi-intelligence fusion (MIF) system assessment that focuses on the advantages of MIF over single intelligence sources. Building on the NASA TLX (Task Load Index), SAGAT probes, SART (Situational Awareness Rating Technique) questionnaires, and CDM (Critical Decision Method) decision points, we highlight these tools for use in a Multi-Intelligence Critical Rating Assessment of Fusion Techniques (MiCRAFT). The focus is to measure user refinement of a situation over the information fusion quality of service (QoS) metrics: timeliness, accuracy, confidence, workload (cost), and attention (throughput). A key component of any user analysis includes correlation, association, and summarization of data, so we also seek measures of product quality and QuEST of information. Building a notion of product quality from multi-intelligence tools is typically subjective, which needs to be aligned with objective machine metrics.

  7. Combined FDG PET/CT imaging for restaging of colorectal cancer patients: impact of image fusion on staging accuracy

    International Nuclear Information System (INIS)

    Strunk, H.; Jaeger, U.; Flacke, S.; Hortling, N.; Bucerius, J.; Joe, A.; Reinhardt, M.; Palmedo, H.

    2005-01-01

    Purpose: To evaluate the diagnostic impact of positron emission tomography (PET) with fluorine-18-labeled deoxy-D-glucose (FDG) combined with non-contrast computed tomography (CT) as a PET-CT modality in restaging colorectal cancer patients. Material and methods: In this retrospective study, 29 consecutive patients with histologically proven colorectal cancer (17 female, 12 male, aged 51-76 years) underwent whole-body scans in one session on a dual-modality PET-CT system (Siemens Biograph) 90 min after i.v. administration of 370 MBq 18F-FDG. The CT imaging was performed with 40 mAs, 130 kV, slice thickness 5 mm and without i.v. contrast administration. PET and CT images were reconstructed with a slice thickness of 5 mm in coronal, sagittal and transverse planes. During a first step of analysis, PET and CT images were scored blinded and independently by a group of two nuclear medicine physicians and a group of two radiologists, respectively, using a five-point scale. The second step of data analysis consisted of a consensus reading by both groups. During the consensus reading, first a virtual (meaning mental) fusion of PET and CT images and afterwards the 'real' (meaning co-registered) fused PET-CT images were scored with the same scale. The imaging results were compared with histopathology findings and the course of disease during further follow-up. Results: The total number of malignant lesions detected with the combined PET/CT was 86; for FDG-PET alone it was 68, and for CT alone 65. Comparing PET-CT and PET, concordance was found in 81 of 104 lesions. Discrepancies predominantly occurred in the lung, where PET alone often showed true positive results in lymph nodes and soft tissue masses where CT was often false negative. Comparing mental fusion and 'real' co-registered images, concordance was found in 94 of 104 lesions. In 13 lesions (in 7 of 29 patients), relevant information was gained using fused images.

  8. Established rheumatoid arthritis - new imaging modalities

    DEFF Research Database (Denmark)

    McQueen, Fiona M; Østergaard, Mikkel

    2007-01-01

    … in real time and facilitates diagnostic and therapeutic interventions such as joint aspiration and injection. Exciting experimental modalities are also being developed with the potential to provide not just morphological but functional imaging. Techniques such as positron emission tomography (PET) … and high-resolution computerized tomography. Erosions are very clearly depicted using these modalities, and MRI also allows imaging of soft tissues with assessment of joint inflammation. High-resolution ultrasound is a convenient clinical technique for the assessment of erosions, synovitis and tenosynovitis …

  9. A Standard Mammography Unit - Standard 3D Ultrasound Probe Fusion Prototype: First Results.

    Science.gov (United States)

    Schulz-Wendtland, Rüdiger; Jud, Sebastian M; Fasching, Peter A; Hartmann, Arndt; Radicke, Marcus; Rauh, Claudia; Uder, Michael; Wunderle, Marius; Gass, Paul; Langemann, Hanna; Beckmann, Matthias W; Emons, Julius

    2017-06-01

    The combination of different imaging modalities through the use of fusion devices promises significant diagnostic improvement for breast pathology. The aim of this study was to evaluate the image quality and clinical feasibility of a prototype fusion device (fusion prototype) constructed from a standard tomosynthesis mammography unit and a standard 3D ultrasound probe using a new method of breast compression. Imaging was performed on 5 mastectomy specimens from patients with confirmed DCIS or invasive carcinoma (BI-RADS™ 6). For the preclinical fusion prototype, an ABVS system ultrasound probe from an Acuson S2000 was integrated into a MAMMOMAT Inspiration (both Siemens Healthcare Ltd) and, with the aid of a newly developed compression plate, digital mammogram and automated 3D ultrasound images were obtained. The quality of the digital mammogram images produced by the fusion prototype was comparable to that of images produced using conventional compression. The newly developed compression plate did not influence the applied x-ray dose. The method was not more labour-intensive or time-consuming than conventional mammography. From a technical perspective, fusion of the two modalities was achievable. In this study, using only a few mastectomy specimens, the fusion of an automated 3D ultrasound machine with a standard mammography unit delivered images of comparable quality to conventional mammography. The device allows simultaneous ultrasound - the second important imaging modality in complementary breast diagnostics - without increasing examination time or requiring additional staff.

  10. Dual-Modality Prostate Imaging with PET and Transrectal Ultrasound

    Science.gov (United States)

    2011-09-01


  11. Feasibility of CBCT-based target and normal structure delineation in prostate cancer radiotherapy: Multi-observer and image multi-modality study

    International Nuclear Information System (INIS)

    Luetgendorf-Caucig, Carola; Fotina, Irina; Stock, Markus; Poetter, Richard; Goldner, Gregor; Georg, Dietmar

    2011-01-01

    Background and purpose: In-room cone-beam CT (CBCT) imaging and adaptive treatment strategies are promising methods to decrease target volumes and to spare organs at risk. The aim of this work was to analyze the inter-observer contouring uncertainties for target volumes and organs at risk (OARs) in localized prostate cancer radiotherapy using CBCT images. Furthermore, CBCT contouring was benchmarked against other image modalities (CT, MR), and the influence of subjective image quality perception on inter-observer variability was assessed. Methods and materials: Eight prostate cancer patients were selected. Seven radiation oncologists contoured target volumes and OARs on CT, MRI and CBCT. Volumes, coefficient of variation (COV), conformity index (CIgen), and coordinates of the center-of-mass (COM) were calculated for each patient and image modality. Reliability analysis was performed to support the reported findings. Subjective perception of image quality was assessed via a ten-score visual analog scale (VAS). Results: The median volume for the prostate was larger on CT compared to MRI and CBCT images. The inter-observer variation for the prostate was larger on CBCT (CIgen = 0.57 ± 0.09, 0.61 reliability) compared to CT (CIgen = 0.72 ± 0.07, 0.83 reliability) and MRI (CIgen = 0.66 ± 0.12, 0.87 reliability). On all image modalities, the values of the intra-observer reliability coefficient (0.97 for CT, 0.99 for MR and 0.94 for CBCT) indicated high reproducibility of results. For all patients, the root mean square (RMS) of the inter-observer standard deviation (σ) of the COM was largest on CBCT, with σ(x) = 0.4 mm, σ(y) = 1.1 mm, and σ(z) = 1.7 mm. The concordance in delineating OARs was much stronger than for target volumes, with average CIgen > 0.70 for rectum and CIgen > 0.80 for bladder. Positive correlations between CIgen and the VAS score of image quality were observed for the prostate, seminal vesicles and rectum. Conclusions: Inter-observer variability for target …

  12. Endoscopic tri-modal imaging for detection of early neoplasia in Barrett's oesophagus: a multi-centre feasibility study using high-resolution endoscopy, autofluorescence imaging and narrow band imaging incorporated in one endoscopy system

    NARCIS (Netherlands)

    Curvers, W. L.; Singh, R.; Song, L.-M. Wong-Kee; Wolfsen, H. C.; Ragunath, K.; Wang, K.; Wallace, M. B.; Fockens, P.; Bergman, J. J. G. H. M.

    2008-01-01

    OBJECTIVE: To investigate the diagnostic potential of endoscopic tri-modal imaging and the relative contribution of each imaging modality (i.e. high-resolution endoscopy (HRE), autofluorescence imaging (AFI) and narrow-band imaging (NBI)) for the detection of early neoplasia in Barrett's oesophagus.

  13. Multi-information fusion sparse coding with preserving local structure for hyperspectral image classification

    Science.gov (United States)

    Wei, Xiaohui; Zhu, Wen; Liao, Bo; Gu, Changlong; Li, Weibiao

    2017-10-01

    The key question in sparse coding (SC) is how to exploit the information that already exists to acquire robust sparse representations (SRs) that distinguish different objects for hyperspectral image (HSI) classification. We propose a multi-information fusion SC framework, which fuses the spectral, spatial, and label information at the same level, to answer this question. In particular, pixels from disjoint spatial clusters, which are obtained by cutting the given HSI in space, are individually and sparsely encoded. Then, owing to the importance of spatial structure, graph- and hypergraph-based regularizers are enforced to encourage smoothness of the obtained representations and to preserve the local consistency of each spatial cluster. The latter simultaneously considers the spectral, spatial, and label information of multiple pixels that have a high probability of sharing the same label. Finally, a linear support vector machine is selected as the final classifier, with the learned SRs as input. Experiments conducted on three frequently used real HSIs show that our method achieves satisfactory results compared with other state-of-the-art methods.

  14. Multi-Modal Traveler Information System - Gateway Functional Requirements

    Science.gov (United States)

    1997-11-17

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  15. Multi-modal and targeted imaging improves automated mid-brain segmentation

    Science.gov (United States)

    Plassard, Andrew J.; D'Haese, Pierre F.; Pallavaram, Srivatsan; Newton, Allen T.; Claassen, Daniel O.; Dawant, Benoit M.; Landman, Bennett A.

    2017-02-01

    The basal ganglia and limbic system, particularly the thalamus, putamen, internal and external globus pallidus, substantia nigra, and sub-thalamic nucleus, comprise a clinically relevant signal network for Parkinson's disease. In order to manually trace these structures, a combination of high-resolution and specialized sequences at 7T is used, but it is not feasible to scan clinical patients in those scanners. Targeted imaging sequences at 3T, such as F-GATIR and other optimized inversion recovery sequences, have been presented which enhance contrast in a select group of these structures. In this work, we show that a series of atlases generated at 7T can be used to accurately segment these structures at 3T using a combination of standard and optimized imaging sequences, though no one approach provided the best result across all structures. In the thalamus and putamen, a median Dice coefficient over 0.88 and a mean surface distance less than 1.0 mm were achieved using a combination of T1 and optimized inversion recovery imaging sequences. In the internal and external globus pallidus, a Dice coefficient over 0.75 and a mean surface distance less than 1.2 mm were achieved using a combination of T1 and F-GATIR imaging sequences. In the substantia nigra and sub-thalamic nucleus, a Dice coefficient of over 0.6 and a mean surface distance of less than 1.0 mm were achieved using the optimized inversion recovery imaging sequence. On average, using T1 and optimized inversion recovery together produced significantly better segmentation results than any individual modality (p < 0.05, Wilcoxon signed-rank test).
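
    For reference, the two accuracy metrics reported above can be computed from binary masks in a few lines. A minimal sketch, assuming NumPy/SciPy and that both segmentations share the same voxel grid:

      import numpy as np
      from scipy import ndimage

      def dice(a, b):
          """Dice overlap between two binary segmentations."""
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
          """Symmetric mean distance (in mm) between segmentation surfaces."""
          a, b = a.astype(bool), b.astype(bool)
          surf_a = a ^ ndimage.binary_erosion(a)      # boundary voxels of a
          surf_b = b ^ ndimage.binary_erosion(b)
          dt_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
          dt_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
          return 0.5 * (dt_b[surf_a].mean() + dt_a[surf_b].mean())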

  16. Multi modal child-to-child interaction

    DEFF Research Database (Denmark)

    Fisker, Tine Basse

    In this presentation the interaction and relation of three boys is analyzed using multi-modal analysis. The analysis clearly, and surprisingly, demonstrates that the boys interact via different modes and that they are able to handle several interaction partners at the same time. They co-construct interaction in rather complex and unexpected ways, using verbal as well as non-verbal modes of interaction.

  17. A CCA+ICA based model for multi-task brain imaging data fusion and its application to schizophrenia.

    Science.gov (United States)

    Sui, Jing; Adali, Tülay; Pearlson, Godfrey; Yang, Honghui; Sponheim, Scott R; White, Tonya; Calhoun, Vince D

    2010-05-15

    Collection of multiple-task brain imaging data from the same subject has now become common practice in medical imaging studies. In this paper, we propose a simple yet effective model, "CCA+ICA", as a powerful tool for multi-task data fusion. This joint blind source separation (BSS) model takes advantage of two multivariate methods, canonical correlation analysis and independent component analysis, to achieve high estimation accuracy and to provide the correct connection between two datasets in which sources can have either common or distinct between-dataset correlation. In both simulated and real fMRI applications, we compare the proposed scheme with other joint BSS models and examine the different modeling assumptions. The contrast images of two tasks, sensorimotor (SM) and Sternberg working memory (SB), derived from a general linear model (GLM), were chosen to provide real multi-task fMRI data, both collected from 50 schizophrenia patients and 50 healthy controls. When examining the relationship with duration of illness, CCA+ICA revealed a significant negative correlation with temporal lobe activation. Furthermore, CCA+ICA located the sensorimotor cortex as the group-discriminative region for both tasks and identified the superior temporal gyrus in SM and the prefrontal cortex in SB as task-specific group-discriminative brain networks. In summary, we compared the new approach to competitive methods with different assumptions and found consistent results regarding each of their hypotheses on connecting the two tasks. Such an approach fills a gap in existing multivariate methods for identifying biomarkers from brain imaging data.
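
    The two-stage idea is easy to prototype with scikit-learn. The sketch below runs CCA and then ICA on random stand-in data; the shapes, component counts, and the exact way the canonical variates are fed to ICA are illustrative assumptions rather than the authors' implementation.

      import numpy as np
      from sklearn.cross_decomposition import CCA
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      X = rng.standard_normal((100, 200))   # task 1: subjects x features (toy data)
      Y = rng.standard_normal((100, 200))   # task 2, same subjects

      # Stage 1: CCA links the two datasets through correlated canonical variates.
      U, V = CCA(n_components=8).fit_transform(X, Y)

      # Stage 2: ICA further unmixes the linked variates into independent sources.
      S = FastICA(n_components=8, random_state=0).fit_transform(np.hstack([U, V]))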

  18. Imaging modalities in radiation treatment planning of brain tumors

    International Nuclear Information System (INIS)

    Georgiev, D.

    2009-01-01

    Radiation therapy is a standard treatment after surgery for most malignant and some benign brain tumors. The main restriction on achieving local tumor control is the inability to deliver a high dose without causing radiation necrosis in the irradiated area while sparing normal tissues. The development of imaging modalities in recent years is responsible for better treatment results and lower early and late toxicity. Imaging methods play an essential role not only in diagnosis but also in precise anatomical (and, in recent years, functional) localisation and determination of the spread of the tumor, in the treatment planning process, and in assessing the effects of treatment. Target delineation is one of the great geometrical uncertainties in the treatment planning process. Early studies on the use of CT in treatment planning documented that tumor coverage without CT was clearly inadequate in 20% of patients and marginal in another 27%. The image fusion of CT, MRI and PET, together with the use of contrast material, helps to overcome these restrictions. The use of contrast material enhances the signal in 10% of patients with glioblastoma multiforme and in a higher percentage of patients with low-grade gliomas.

  19. Contemporary Multi-Modal Historical Representations and the Teaching of Disciplinary Understandings in History

    Science.gov (United States)

    Donnelly, Debra J.

    2018-01-01

    Traditional privileging of the printed text has been considerably eroded by rapid technological advancement and in Australia, as elsewhere, many History teaching programs feature an array of multi-modal historical representations. Research suggests that engagement with the visual and multi-modal constructs has the potential to enrich the pedagogy…

  20. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter.

    Science.gov (United States)

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2018-02-13

    In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a proposed multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are combined by weighting into a detail-enhanced layer. As the directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift-invariance, directional selectivity, and a detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images.
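
    The base/detail split that drives such methods can be sketched in a few lines. Below is a simplified two-scale Python version, with a plain Gaussian filter standing in for the gradient-minimisation smoothing filter and a max-magnitude rule standing in for the shearing-filter fusion; all parameters are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def fuse_two_scale(img_a, img_b, sigma=2.0):
          """Two-layer base/detail fusion of co-registered source images.

          The base layers (Gaussian low-pass) are averaged; the detail
          layers are fused per pixel by keeping the larger-magnitude
          coefficient, a crude relative of the saliency and standard-
          deviation rules described in the record above.
          """
          img_a, img_b = np.asarray(img_a, float), np.asarray(img_b, float)
          base_a, base_b = gaussian_filter(img_a, sigma), gaussian_filter(img_b, sigma)
          det_a, det_b = img_a - base_a, img_b - base_b
          detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
          return 0.5 * (base_a + base_b) + detail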

  1. Enterprise imaging and multi-departmental PACS

    International Nuclear Information System (INIS)

    Bergh, Bjoern

    2006-01-01

    The aim of this review is to present the status of digital image acquisition and archiving outside of radiology and to describe the technical concepts and possibilities of how a "radiology" Picture Archiving and Communication System (PACS) can become a multi-departmental (MD-)PACS. First, the principles of system integration technology are explained and illustrated by the description of a typical radiology system integration. Then four types of modality integration approaches are defined: direct modality integration (Type-I), integration via DICOM acquisition software (Type-II), and integration via specialised systems either with (Type-III) or without (Type-IV) a PACS connection. The last section is dedicated to the PACS requirements of selected interdisciplinary modality types [endoscopy, ultrasound and electrocardiography (ECG)] and clinical disciplines (pathology, dermatology, ophthalmology and cardiology), which are then compared with the technical possibilities of an MD-PACS. (orig.)

  2. Development of a novel fusion imaging technique in the diagnosis of hepatobiliary-pancreatic lesions

    International Nuclear Information System (INIS)

    Soga, Koichi; Ochiai, Jun; Miyajima, Takashi; Kassai, Kyoichi; Itani, Kenji; Yagi, Nobuaki; Naito, Yuji

    2013-01-01

    Multi-row detector computed tomography (MDCT) and magnetic resonance cholangiopancreatography (MRCP) play an important role in the imaging diagnosis of hepatobiliary-pancreatic lesions. Here we investigated whether unifying MDCT and MRCP images onto the same screen using fusion imaging could overcome the limitations of each technique while still maintaining their benefits. Moreover, because reports of fusion imaging using MDCT and MRCP are rare, we assessed the benefits and limitations of this method for its potential application in a clinical setting. The patient group included 9 men and 11 women. Among the 20 patients, the final diagnoses were as follows: 10 intraductal papillary mucinous neoplasms, 5 biliary system carcinomas, 1 pancreatic adenocarcinoma and 5 non-neoplastic lesions. After transmitting the Digital Imaging and Communications in Medicine data of the MDCT and MRCP images to a workstation, we performed a 3-D organisation of both sets of images using volume rendering for the image fusion. Fusion imaging enabled clear identification of the spatial relationship between a hepatobiliary-pancreatic lesion and the solid viscera and/or vessels. Furthermore, this method facilitated determining the relationship between the anatomical position of the lesion and its surroundings more easily than either MDCT or MRCP alone. Fusion imaging is an easy technique to perform and may be a useful tool for planning treatment strategies and for examining pathological changes in hepatobiliary-pancreatic lesions. Additionally, the ease of obtaining the 3-D images suggests the possibility of using them to plan intervention strategies.

  3. Biomedical imaging modality classification using combined visual features and textual terms.

    Science.gov (United States)

    Han, Xian-Hua; Chen, Yen-Wei

    2011-01-01

    We describe an approach for automatic modality classification in the medical image retrieval task of the 2010 CLEF cross-language image retrieval campaign (ImageCLEF). This paper focuses on the process of feature extraction from medical images and fuses the extracted visual features and a textual feature for modality classification. To extract visual features from the images, we used histogram descriptors of edge, gray, or color intensity and block-based variation as global features, and a SIFT histogram as a local feature. As the textual feature of the image representation, a binary histogram of some predefined vocabulary words from image captions is used. We then combine the different features using normalized kernel functions for SVM classification. Furthermore, for some easily misclassified modality pairs, such as CT and MR or PET and NM, a local classifier is used to distinguish samples within the pair and improve performance. The proposed strategy is evaluated on the modality dataset provided by ImageCLEF 2010.
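
    A minimal sketch of the kernel-level fusion described here, assuming scikit-learn and RBF kernels for both feature sets (the actual feature types, kernel choices, and weighting in the paper may differ):

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.svm import SVC

      def normalized_kernel(K):
          """Scale a Gram matrix so its diagonal entries equal 1."""
          d = np.sqrt(np.diag(K))
          return K / np.outer(d, d)

      def fused_gram(visual_feats, text_feats):
          """Average the normalised visual and textual RBF kernels."""
          Kv = normalized_kernel(rbf_kernel(visual_feats))
          Kt = normalized_kernel(rbf_kernel(text_feats))
          return 0.5 * (Kv + Kt)

      # Training on a fused Gram matrix (test kernels are built analogously
      # between test and training samples):
      # clf = SVC(kernel="precomputed").fit(fused_gram(Xv_tr, Xt_tr), y_tr)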

  4. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    Science.gov (United States)

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole-head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label fusion. We have compared the Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole-head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisitions of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.

  5. Multi-Valued Modal Fixed Point Logics for Model Checking

    Science.gov (United States)

    Nishizawa, Koki

    In this paper, I will show how multi-valued logics are used for model checking. Model checking is an automatic technique to analyze correctness of hardware and software systems. A model checker is based on a temporal logic or a modal fixed point logic. That is to say, a system to be checked is formalized as a Kripke model, a property to be satisfied by the system is formalized as a temporal formula or a modal formula, and the model checker checks that the Kripke model satisfies the formula. Although most existing model checkers are based on 2-valued logics, recently new attempts have been made to extend the underlying logics of model checkers to multi-valued logics. I will summarize these new results.

  6. Automatic intra-modality brain image registration method

    International Nuclear Information System (INIS)

    Whitaker, J.M.; Ardekani, B.A.; Braun, M.

    1996-01-01

    Full text: Registration of 3D brain images of the same or different subjects has potential importance in clinical diagnosis, treatment planning and neurological research. The broad aim of our work is to produce an automatic and robust intra-modality brain image registration algorithm for intra-subject and inter-subject studies. Our algorithm is composed of two stages. Initial alignment is achieved by finding the values of nine transformation parameters (representing translation, rotation and scale) that minimise the non-overlapping regions of the head. This is achieved by minimisation of the sum of the exclusive OR of two binary head images, produced using the head extraction procedure described by Ardekani et al. (J Comput Assist Tomogr, 19:613-623, 1995). The initial alignment successfully determines the scale parameters and the gross translation and rotation parameters. Fine alignment uses an objective function described for inter-modality registration in Ardekani et al. (ibid.). The algorithm segments one of the images to be aligned into a set of connected components using K-means clustering. Registration is achieved by minimising the K-means variance of the segmentation induced in the other image. The similarity of images of the same modality makes the method attractive for intra-modality registration. A 3D MR image, with voxel dimensions of 2×2×6 mm, was misaligned; the registered image shows visually accurate registration, and the average displacement of a pixel from its correct location was measured to be 3.3 mm. The algorithm was tested on intra-subject MR images and was found to produce good qualitative results. Using the data available, the algorithm produced promising qualitative results in intra-subject registration. Further work is necessary for its application to inter-subject registration, due to the large variability in brain structure between subjects. Clinical evaluation of the algorithm for selected applications is required.
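
    As a hedged illustration of the first stage, the snippet below scores a candidate translation by the exclusive-OR overlap of two binary head masks (SciPy assumed; the helper name and the restriction to pure translation are simplifications of the nine-parameter search described above).

      import numpy as np
      from scipy.ndimage import shift as nd_shift

      def xor_cost(head_a, head_b, translation):
          """Count non-overlapping head voxels for a candidate translation.

          head_a, head_b : binary head masks from a head-extraction step.
          translation    : (dz, dy, dx) candidate shift in voxels.
          """
          moved = nd_shift(head_b.astype(float), translation, order=0) > 0.5
          return int(np.logical_xor(head_a.astype(bool), moved).sum())

      # A coarse grid search would call xor_cost for each candidate shift
      # (and, in the full method, rotation and scale) and keep the minimiser
      # as the initial alignment.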

  7. [Application of 3D virtual reality technology with multi-modality fusion in resection of glioma located in central sulcus region].

    Science.gov (United States)

    Chen, T N; Yin, X T; Li, X G; Zhao, J; Wang, L; Mu, N; Ma, K; Huo, K; Liu, D; Gao, B Y; Feng, H; Li, F

    2018-05-08

    Objective: To explore the clinical and teaching application value of virtual reality technology in the preoperative planning and intraoperative guidance of glioma located in the central sulcus region. Method: Ten patients with glioma in the central sulcus region were scheduled for surgical treatment. The neuro-imaging data, including CT, CTA, DSA, MRI and fMRI, were input into the 3dgo sczhry workstation for image fusion and 3D reconstruction. Spatial relationships between the lesions and the surrounding structures were obtained from the virtual reality images. These images were applied to operative approach design, operation process simulation, intraoperative auxiliary decision-making and the training of specialist physicians. Results: Intraoperative findings in the 10 patients were highly consistent with the preoperative virtual reality simulations. Preoperative 3D-reconstructed virtual reality images improved the feasibility of operation planning and operative accuracy. This technology not only showed advantages for neurological function protection and lesion resection during surgery, but also improved the training efficiency and effectiveness of dedicated physicians by turning abstract comprehension into virtual reality. Conclusion: Image fusion and 3D reconstruction based virtual reality technology in glioma resection is helpful for formulating the operation plan, improving operative safety, increasing the total resection rate, and facilitating the teaching and training of specialist physicians.

  8. Multifocus Image Fusion in Q-Shift DTCWT Domain Using Various Fusion Rules

    Directory of Open Access Journals (Sweden)

    Yingzhong Tian

    2016-01-01

    Full Text Available Multifocus image fusion is a process that integrates a partially focused image sequence into a fused image which is focused everywhere; multiple methods have been proposed in the past decades. The Dual-Tree Complex Wavelet Transform (DTCWT) is one of the most precise, eliminating two main defects of the Discrete Wavelet Transform (DWT). Q-shift DTCWT was proposed afterwards to simplify the construction of filters in DTCWT, producing better fusion effects. A different image fusion strategy based on Q-shift DTCWT is presented in this work. According to the strategy, each image is first decomposed into low- and high-frequency coefficients, which are fused using different rules, and various fusion rules are innovatively combined in Q-shift DTCWT, such as the Neighborhood Variant Maximum Selectivity (NVMS) and the Sum-Modified Laplacian (SML). Finally, the fused coefficients can be well extracted from the source images and reconstructed to produce one fully focused image. This strategy is verified visually and quantitatively against several existing fusion methods in extensive experiments and yields good results both on standard images and on microscopic images. Hence, we can draw the conclusion that the NVMS rule performs better than the others with Q-shift DTCWT.
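
    For readers who want to experiment, the sketch below reproduces the overall pipeline in the plain DWT domain using PyWavelets, since a Q-shift DTCWT implementation is less commonly available; the max-absolute detail rule is a cruder stand-in for the NVMS and SML rules evaluated in the paper.

      import numpy as np
      import pywt

      def dwt_multifocus_fuse(img_a, img_b, wavelet="db4", level=3):
          """Fuse two co-registered multifocus images in the DWT domain.

          Approximation coefficients are averaged; each high-pass band is
          fused by a maximum-absolute-value rule. Image sides divisible by
          2**level avoid boundary trimming on reconstruction.
          """
          ca = pywt.wavedec2(np.asarray(img_a, float), wavelet, level=level)
          cb = pywt.wavedec2(np.asarray(img_b, float), wavelet, level=level)
          fused = [0.5 * (ca[0] + cb[0])]
          for bands_a, bands_b in zip(ca[1:], cb[1:]):
              fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                                 for x, y in zip(bands_a, bands_b)))
          return pywt.waverec2(fused, wavelet)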

  9. Reference resolution in multi-modal interaction: Preliminary observations

    NARCIS (Netherlands)

    González González, G.R.; Nijholt, Antinus

    2002-01-01

    In this paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can apply …

  10. Reference Resolution in Multi-modal Interaction: Position paper

    NARCIS (Netherlands)

    Fernando, T.; Nijholt, Antinus

    2002-01-01

    In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can …

  11. A flexible data fusion architecture for persistent surveillance using ultra-low-power wireless sensor networks

    Science.gov (United States)

    Hanson, Jeffrey A.; McLaughlin, Keith L.; Sereno, Thomas J.

    2011-06-01

    We have developed a flexible, target-driven, multi-modal, physics-based fusion architecture that efficiently searches sensor detections for targets and rejects clutter while controlling the combinatoric problems that commonly arise in data-driven fusion systems. The informational constraints imposed by long lifetime requirements make systems vulnerable to false alarms. We demonstrate that our data fusion system significantly reduces false alarms while maintaining high sensitivity to threats. In addition, mission goals can vary substantially in terms of targets of interest, required characterization, acceptable latency, and false alarm rates. Our fusion architecture provides the flexibility to match these trade-offs with mission requirements, unlike many conventional systems that require significant modifications for each new mission. We illustrate our data fusion performance with case studies that span many of the potential mission scenarios, including border surveillance, base security, and infrastructure protection. In these studies, we deployed multi-modal sensor nodes - including geophones, magnetometers, accelerometers and PIR sensors - with low-power processing algorithms and low-bandwidth wireless mesh networking to create networks capable of multi-year operation. The results show our data fusion architecture maintains high sensitivity while suppressing most false alarms across a variety of environments and targets.

  12. Fusion-based multi-target tracking and localization for intelligent surveillance systems

    Science.gov (United States)

    Rababaah, Haroun; Shirkhodaie, Amir

    2008-04-01

    In this paper, we present two approaches addressing visual target tracking and localization in complex urban environments: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for the associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, was achieved by the Connected Components Analysis (CCA) technique. The tracking of individual targets was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image-pixel-to-world-coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on the nearest 4-neighbor method, and in fine estimation, we use Euclidean interpolation to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in complex urban environments.
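
    The colour-histogram association step is simple to prototype. The sketch below builds an association matrix between track and detection patches in Python; the function names and the choice of histogram intersection as the similarity score are illustrative assumptions (the paper additionally fuses this cue with centroid gating).

      import numpy as np

      def rgb_histogram(patch, bins=8):
          """Normalised joint RGB histogram of an (H, W, 3) uint8 patch."""
          hist, _ = np.histogramdd(patch.reshape(-1, 3),
                                   bins=(bins,) * 3, range=[(0, 256)] * 3)
          return hist.ravel() / max(hist.sum(), 1.0)

      def association_matrix(track_patches, detection_patches):
          """Histogram-intersection similarity between tracks and detections."""
          return np.array([[np.minimum(rgb_histogram(t), rgb_histogram(d)).sum()
                            for d in detection_patches]
                           for t in track_patches])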

  13. Color Multifocus Image Fusion Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    S. Savić

    2013-11-01

    Full Text Available In this paper, a recently proposed grayscale multifocus image fusion method based on the first level of Empirical Mode Decomposition (EMD) has been extended to color images. In addition, this paper deals with low-contrast multifocus image fusion. The major advantages of the proposed methods are simplicity, absence of artifacts and control of contrast, which is not the case with other pyramidal multifocus fusion methods. The efficiency of the proposed method is tested subjectively, and with a vector-gradient-based objective measure proposed in this paper for multifocus color image fusion. Subjective analysis performed on a multifocus image dataset has shown its superiority to the existing EMD- and DWT-based methods. The objective measures of grayscale and color image fusion show significantly better scores for this method than for the classic complex EMD fusion method.

  14. Multi-modal neuroimaging in premanifest and early Huntington's disease: 18 month longitudinal data from the IMAGE-HD study.

    Science.gov (United States)

    Domínguez D, Juan F; Egan, Gary F; Gray, Marcus A; Poudel, Govinda R; Churchyard, Andrew; Chua, Phyllis; Stout, Julie C; Georgiou-Karistianis, Nellie

    2013-01-01

    IMAGE-HD is an Australian-based multi-modal longitudinal magnetic resonance imaging (MRI) study in premanifest and early symptomatic Huntington's disease (pre-HD and symp-HD, respectively). In this investigation we sought to determine the sensitivity of imaging methods in detecting macrostructural (volume) and microstructural (diffusivity) longitudinal change in HD. We used a 3T MRI scanner to acquire T1- and diffusion-weighted images at baseline and 18 months in 31 pre-HD, 31 symp-HD and 29 controls. Volume was measured across the whole brain, and volume and diffusion measures were ascertained for the caudate and putamen. We observed a range of significant volumetric and, for the first time, diffusion changes over 18 months in both pre-HD and symp-HD, relative to controls, detectable at the brain-wide level (volume change in grey and white matter) and in the caudate and putamen (volume and diffusivity change). Importantly, longitudinal volume change in the caudate was the only measure that discriminated between groups across all stages of disease: far from diagnosis (>15 years), close to diagnosis (…); of the diffusion measures (fractional anisotropy, FA), only longitudinal FA change was sensitive to group differences, and only after diagnosis. These findings further confirm caudate atrophy as one of the most sensitive and early biomarkers of neurodegeneration in HD. They also highlight that different tissue properties have varying schedules in their ability to discriminate between groups along disease progression and may therefore inform biomarker selection for future therapeutic interventions.

  15. Established rheumatoid arthritis - new imaging modalities

    DEFF Research Database (Denmark)

    McQueen, Fiona M; Østergaard, Mikkel

    2007-01-01

    New imaging modalities are assuming an increasingly important role in the investigation and management of rheumatoid arthritis. It is now possible to obtain information about all tissues within the joint in three dimensions using tomographic techniques such as magnetic resonance imaging (MRI...

  16. Gamma-Ray imaging for nuclear security and safety: Towards 3-D gamma-ray vision

    Science.gov (United States)

    Vetter, Kai; Barnowski, Ross; Haefner, Andrew; Joshi, Tenzing H. Y.; Pavlovsky, Ryan; Quiter, Brian J.

    2018-01-01

    The development of portable gamma-ray imaging instruments, in combination with recent advances in sensor and related computer vision technologies, enables unprecedented capabilities in the detection, localization, and mapping of radiological and nuclear materials in complex environments relevant for nuclear security and safety. Though multi-modal imaging has been established in medicine and biomedical imaging for some time, the potential of multi-modal data fusion for radiological localization and mapping problems in complex indoor and outdoor environments remains to be explored in detail. In contrast to the well-defined settings in medical or biological imaging, with their small fields of view and well-constrained extension of the radiation field, in many radiological search and mapping scenarios the radiation fields are not constrained and objects and sources are not necessarily known prior to the measurement. The ability to fuse radiological with contextual or scene data in three dimensions, analogous to the fusion of radiological and functional imaging with anatomical imaging in medicine, provides new capabilities enhancing image clarity, context, quantitative estimates, and visualization of the data products. We have developed new means to register and fuse gamma-ray imaging with contextual data from portable or moving platforms. These developments enhance detection and mapping capabilities as well as provide unprecedented visualization of complex radiation fields, moving us one step closer to the realization of gamma-ray vision in three dimensions.

  17. Image fusion for dynamic contrast enhanced magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Leach Martin O

    2004-10-01

    Full Text Available Abstract Background Multivariate imaging techniques such as dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) have been shown to provide valuable information for medical diagnosis. Even though these techniques provide new information, integrating and evaluating the much wider range of information is a challenging task for the human observer. This task may be assisted with the use of image fusion algorithms. Methods In this paper, image fusion based on Kernel Principal Component Analysis (KPCA) is proposed for the first time. It is demonstrated that a priori knowledge about the data domain can be easily incorporated into the parametrisation of the KPCA, leading to task-oriented visualisations of the multivariate data. The results of the fusion process are compared with those of the well-known and established standard linear Principal Component Analysis (PCA) by means of temporal sequences of 3D MRI volumes from six patients who took part in a breast cancer screening study. Results The PCA and KPCA algorithms are able to integrate information from a sequence of MRI volumes into informative gray value or colour images. By incorporating a priori knowledge, the fusion process can be automated and optimised in order to visualise suspicious lesions with high contrast to normal tissue. Conclusion Our machine learning based image fusion approach maps the full signal space of a temporal DCE-MRI sequence to a single meaningful visualisation with good tissue/lesion contrast and thus supports the radiologist during manual image evaluation.
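
    A minimal sketch of the core idea, assuming scikit-learn's KernelPCA with an RBF kernel: each voxel's temporal intensity curve is mapped to its first nonlinear component, yielding a single fused volume. The subsampling step is an assumption added here to keep the kernel matrix tractable; the gamma value and sample size are arbitrary, not the authors' parametrisation.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        def kpca_fuse(volumes, gamma=0.5, n_fit=2000, seed=0):
            """volumes: (T, X, Y, Z) temporal DCE-MRI stack -> fused 3D image."""
            t = volumes.shape[0]
            spatial = volumes.shape[1:]
            signals = volumes.reshape(t, -1).T        # one temporal curve per voxel
            rng = np.random.default_rng(seed)
            idx = rng.choice(signals.shape[0], size=n_fit, replace=False)
            kpca = KernelPCA(n_components=1, kernel="rbf", gamma=gamma)
            kpca.fit(signals[idx])                    # fit on a voxel subsample
            fused = kpca.transform(signals)[:, 0]     # project every voxel
            return fused.reshape(spatial)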

  18. Interactive, multi-modality image registrations for combined MRI/MRSI-planned HDR prostate brachytherapy

    Directory of Open Access Journals (Sweden)

    Galen Reed

    2011-03-01

    Full Text Available Purpose: This study presents the steps and criteria involved in the series of image registrations used clinically during the planning and dose delivery of focal high-dose-rate (HDR) brachytherapy of the prostate. Material and methods: Three imaging modalities – Magnetic Resonance Imaging (MRI), Magnetic Resonance Spectroscopic Imaging (MRSI), and Computed Tomography (CT) – were used at different steps during the process. MRSI is used for the identification of dominant intraprostatic lesions (DIL). A series of rigid and nonrigid transformations were applied to the data to correct for endorectal-coil-induced deformations and for alignment with the planning CT. Mutual information was calculated as a morphing metric. An inverse planning optimization algorithm was applied to boost dose to the DIL while providing protection to the urethra, penile bulb, rectum, and bladder. Six prostate cancer patients were treated using this protocol. Results: The morphing algorithm successfully modeled the probe-induced prostatic distortion. Mutual information calculated between the morphed images and images acquired without the endorectal probe showed a significant (p = 0.0071) increase over that calculated between the unmorphed images and images acquired without the endorectal probe. Both mutual information and visual inspection serve as effective diagnostics of image morphing. The entire procedure adds less than thirty minutes to the treatment planning. Conclusion: This work demonstrates the utility of image transformations and registrations in HDR brachytherapy of prostate cancer.
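
    The mutual information metric used above to score the morphing can be estimated from the joint histogram of two aligned images. A small illustrative numpy implementation (the bin count is an arbitrary assumption):

        import numpy as np

        def mutual_information(img_a, img_b, bins=64):
            """MI between two aligned images from their joint intensity histogram."""
            hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = hist / hist.sum()                    # joint distribution
            px = pxy.sum(axis=1, keepdims=True)        # marginals
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0                               # avoid log(0)
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))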

  19. Embedded security system for multi-modal surveillance in a railway carriage

    Science.gov (United States)

    Zouaoui, Rhalem; Audigier, Romaric; Ambellouis, Sébastien; Capman, François; Benhadda, Hamid; Joudrier, Stéphanie; Sodoyer, David; Lamarque, Thierry

    2015-10-01

    Public transport security is one of the main priorities of the public authorities when fighting against crime and terrorism. In this context, there is great demand for autonomous systems able to detect abnormal events such as violent acts aboard passenger cars and intrusions when the train is parked at the depot. To this end, we present an innovative approach which aims at providing efficient automatic event detection by fusing video and audio analytics, reducing the false alarm rate compared with classical stand-alone video detection. The multi-modal system is composed of two microphones and one camera and integrates onboard video and audio analytics and fusion capabilities. On the one hand, for detecting intrusion, the system relies on the fusion of "unusual" audio event detection with intrusion detections from video processing. The audio analysis consists in modeling the normal ambience and detecting deviations from the trained models during testing. This unsupervised approach is based on clustering of automatically extracted segments of acoustic features and statistical Gaussian Mixture Model (GMM) modeling of each cluster. The intrusion detection is based on the three-dimensional (3D) detection and tracking of individuals in the videos. On the other hand, for violent event detection, the system fuses unsupervised and supervised audio algorithms with video event detection. The supervised audio technique detects specific events such as shouts. A GMM is used to capture the formant structure of a shout signal. Video analytics use an original approach for detecting aggressive motion by focusing on erratic motion patterns specific to violent events. As data with violent events are not easily available, a normality model with structured motions from non-violent videos is learned for one-class classification. A fusion algorithm based on Dempster-Shafer theory analyses the asynchronous detection outputs and computes the degree of belief of each probable event.
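
    The final decision step can be illustrated with the classic Dempster combination rule. The sketch below uses a hypothetical two-element frame of discernment ('event' vs. 'normal', with 'unknown' standing for the full frame) and made-up masses; it shows the rule's mechanics only, not the system's actual belief structure.

        def dempster_combine(m1, m2):
            """Combine two mass functions over the frame {'event', 'normal'};
            the key 'unknown' assigns mass to the whole frame (ignorance)."""
            frame = {"event", "normal"}
            sets = lambda k: frame if k == "unknown" else {k}
            combined, conflict = {}, 0.0
            for a, ma in m1.items():
                for b, mb in m2.items():
                    inter = sets(a) & sets(b)
                    if not inter:                       # contradictory evidence
                        conflict += ma * mb
                    else:
                        key = "unknown" if inter == frame else next(iter(inter))
                        combined[key] = combined.get(key, 0.0) + ma * mb
            # Normalize by the non-conflicting mass (Dempster's rule).
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        # Example: audio detector vs. video detector (illustrative masses).
        fused = dempster_combine({"event": 0.6, "unknown": 0.4},
                                 {"event": 0.5, "normal": 0.2, "unknown": 0.3})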

  20. Biomedical Imaging Modality Classification Using Combined Visual Features and Textual Terms

    Directory of Open Access Journals (Sweden)

    Xian-Hua Han

    2011-01-01

    extraction from medical images and fuses the different extracted visual features and the textual feature for modality classification. To extract visual features from the images, we used histogram descriptors of edge, gray, or color intensity and block-based variation as global features, and a SIFT histogram as a local feature. For the textual feature of image representation, a binary histogram of some predefined vocabulary words from image captions is used. Then, we combine the different features using normalized kernel functions for SVM classification. Furthermore, for some easily misclassified modality pairs, such as CT and MR or PET and NM, a local classifier is used to distinguish samples within the modality pair and thereby improve performance. The proposed strategy is evaluated with the modality dataset provided by ImageCLEF 2010.

  1. Utilizing Multi-Modal Literacies in Middle Grades Science

    Science.gov (United States)

    Saurino, Dan; Ogletree, Tamra; Saurino, Penelope

    2010-01-01

    The nature of literacy is changing. Increased student use of computer-mediated, digital, and visual communication spans our understanding of adolescent multi-modal capabilities that reach beyond the traditional conventions of linear speech and written text in the science curriculum. Advancing technology opens doors to learning that involve…

  2. [Accuracy of morphological simulation for orthognathic surgery. Assessment of a 3D image fusion software].

    Science.gov (United States)

    Terzic, A; Schouman, T; Scolozzi, P

    2013-08-06

    The CT/CBCT data allow for 3D reconstruction of the skeletal and untextured soft tissue volume. 3D stereophotogrammetry technology has strongly improved the quality of facial soft tissue surface texture. The combination of these two technologies allows for an accurate and complete reconstruction. The 3D virtual head may be used for orthognathic surgical planning, virtual surgery, and morphological simulation obtained with software dedicated to the fusion of 3D photogrammetric and radiological images. The imaging material includes a multi-slice CT scan or broad-field CBCT scan and a 3D photogrammetric camera. The operative image processing protocol includes the following steps: 1) pre- and postoperative CT/CBCT scan and 3D photogrammetric image acquisition; 2) 3D image segmentation and fusion of the untextured CT/CBCT skin with the preoperative textured facial soft tissue surface of the 3D photogrammetric scan; 3) image fusion of the pre- and postoperative CT/CBCT data sets, virtual osteotomies, and 3D photogrammetric soft tissue virtual simulation; 4) fusion of the virtually simulated 3D photogrammetric and real postoperative images, and assessment of accuracy using a color-coded scale to measure the differences between the two surfaces. Copyright © 2013. Published by Elsevier Masson SAS.

  3. An Embodied Multi-Sensor Fusion Approach to Visual Motion Estimation Using Unsupervised Deep Networks.

    Science.gov (United States)

    Shamwell, E Jared; Nothwang, William D; Perlis, Donald

    2018-05-04

    Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76–357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1–20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.

  4. Dim target detection method based on salient graph fusion

    Science.gov (United States)

    Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun

    2018-02-01

    Dim target detection is a key problem in the digital image processing field. With the development of multi-spectral imaging sensors, it has become a trend to improve the performance of dim target detection by fusing information from different spectral images. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, multi-directional Gabor filters and multi-scale contrast filters are combined to construct a salient graph from each digital image. Then, a maximum-salience fusion strategy is designed to fuse the salient graphs from the different spectral images, and a top-hat filter is used to detect dim targets in the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on cluttered background images.
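
    An illustrative Python/OpenCV sketch of the pipeline as described (filter sizes and parameter values are assumptions): multi-directional Gabor responses form a salient graph per spectral band, the per-pixel maximum fuses the bands, and a morphological top-hat isolates small bright targets.

        import cv2
        import numpy as np

        def salient_graph(gray):
            """Maximum response over a small multi-direction Gabor filter bank."""
            responses = []
            for theta in np.linspace(0, np.pi, 4, endpoint=False):
                kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
                responses.append(cv2.filter2D(gray.astype(np.float32),
                                              cv2.CV_32F, kern))
            return np.max(responses, axis=0)

        def detect_dim_targets(spectral_images):
            """Maximum-salience fusion across bands, then top-hat detection."""
            fused = np.max([salient_graph(g) for g in spectral_images], axis=0)
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
            return cv2.morphologyEx(fused, cv2.MORPH_TOPHAT, kernel)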

  5. Multi-Modal Intelligent Traffic Signal Systems (MMITSS) Basic Safety Message

    Data.gov (United States)

    Department of Transportation — Data were collected during the Multi-Modal Intelligent Transportation Signal Systems (MMITSS) study. MMITSS is a next-generation traffic signal system that seeks to...

  6. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to obtain the fused image. The simulations demonstrate that the combined sparsifying transforms achieve better results, in terms of both subjective visual effect and objective evaluation indexes, than a single sparsifying transform for compressive image fusion.

  7. Enhancement of Tropical Land Cover Mapping with Wavelet-Based Fusion and Unsupervised Clustering of SAR and Landsat Image Data

    Science.gov (United States)

    LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    The characterization and mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by any single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. As in previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
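
    A generic wavelet-based fusion step can be sketched as follows: decompose both registered images, average the coarse approximations, and keep the detail coefficients with the larger magnitude. The sketch uses PyWavelets with a common default selection rule, which is an assumption and not necessarily the rule used in this study.

        import numpy as np
        import pywt

        def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
            """Fuse two registered grayscale images in the wavelet domain."""
            ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
            cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
            fused = [(ca[0] + cb[0]) / 2.0]            # average approximations
            for da, db in zip(ca[1:], cb[1:]):         # per-level detail triples
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in zip(da, db)))
            return pywt.waverec2(fused, wavelet)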

  8. Markerless registration for image guided surgery. Preoperative image, intraoperative video image, and patient

    International Nuclear Information System (INIS)

    Kihara, Tomohiko; Tanaka, Yuko

    1998-01-01

    Real-time and volumetric acquisition of X-ray CT, MR, and SPECT is the latest trend in medical imaging devices. A clinical challenge is to use this multi-modality volumetric information about a patient in a complementary way throughout the entire diagnostic and surgical process. Intraoperative image and patient integration aims to establish a common reference frame, by means of images, across the diagnostic and surgical processes. This provides a quantitative measure during surgery, for which we have otherwise relied mostly on doctors' skills and experience. Intraoperative image and patient integration involves various technologies; however, we think one of the most important elements is the development of markerless registration, which should be efficient and applicable to preoperative multi-modality data sets, intraoperative images, and the patient. We developed a registration system which integrates preoperative multi-modality images, intraoperative video images, and the patient. It consists of a real-time registration of the video camera for intraoperative use, a markerless surface-sampling matching of patient and image, our previous work on markerless multi-modality image registration of X-ray CT, MR, and SPECT, and an image synthesis on the video image. We think these techniques can be used in many applications involving camera-like devices such as video cameras, microscopes, and image intensifiers. (author)

  9. Applications of Novel X-Ray Imaging Modalities in Food Science

    DEFF Research Database (Denmark)

    Nielsen, Mikkel Schou

    science for understanding and designing food products. In both of these aspects, X-ray imaging methods such as radiography and computed tomography provide a non-destructive solution. However, since the conventional attenuation-based modality suffers from poor contrast in soft matter materials, modalities... with improved contrast are needed. Two possible candidates in this regard are the novel X-ray phase-contrast and X-ray dark-field imaging modalities. The contrast in phase-contrast imaging is based on differences in electron density, which is especially useful for soft matter materials, whereas dark-field imaging... Furthermore, the process of translating the image in image analysis was addressed. For improved handling of multimodal image data, a multivariate segmentation scheme for multimodal X-ray tomography data was implemented. Finally, quantitative data analysis was applied for treating the images. Quantitative...

  10. Label fusion based brain MR image segmentation via a latent selective model

    Science.gov (United States)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective approach, increasingly popular for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirements of higher accuracy, faster segmentation, and robustness remain a great challenge. In this paper, we propose a novel algorithm that combines the two branches, using a global weighted fusion strategy based on a patch latent selective model to perform segmentation of specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function in the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and regarded as an isolated label, so that the background and the regions of interest receive equal treatment. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.
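
    A plain globally weighted voting step, which the latent selective model above generalizes, can be sketched in a few lines of numpy (the shapes and the weighting scheme are illustrative assumptions, not the paper's full Bayesian/EM formulation):

        import numpy as np

        def weighted_label_fusion(atlas_labels, atlas_weights, n_labels):
            """atlas_labels, atlas_weights: (K, N) propagated labels and weights
            from K atlases over N voxels; returns the fused (N,) label map."""
            votes = np.zeros((n_labels, atlas_labels.shape[1]))
            for lbl in range(n_labels):                # accumulate weighted votes
                votes[lbl] = np.sum(atlas_weights * (atlas_labels == lbl), axis=0)
            return np.argmax(votes, axis=0)            # per-voxel winning label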

  11. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration.

    Directory of Open Access Journals (Sweden)

    Hengkai Guo

    Full Text Available Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed in two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is applied in the rigid initialization step to obtain a robust rigid transformation and label configurations. Then the labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are introduced to solve the non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm achieves an average registration error of less than 0.2 mm with no failure case, which is superior to the state-of-the-art feature-based methods.
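
    At the core of each (C)ICP iteration is a closed-form least-squares rigid fit between matched point sets. A compact numpy sketch of that inner step (the standard SVD/Kabsch solution, shown for illustration rather than as the TACICP implementation):

        import numpy as np

        def rigid_fit(src, dst):
            """Least-squares rotation R and translation t with dst ~ R @ src + t,
            for matched point sets src, dst of shape (N, 3)."""
            mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
            H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, mu_d - R @ mu_s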

  12. Added Value of 3D Cardiac SPECT/CTA Fusion Imaging in Patients with Reversible Perfusion Defect on Myocardial Perfusion SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Kong, Eun Jung; Cho, Ihn Ho [Yeungnam University Hospital, Daegu (Korea, Republic of); Kang, Won Jun [Yonsei University Hospital, Seoul (Korea, Republic of); Kim, Seong Min [Chungnam National University Medical School and Hospital, Daejeon (Korea, Republic of); Won, Kyoung Sook [Keimyung University Dongsan Hospital, Daegu (Korea, Republic of); Lim, Seok Tae [Chonbuk National University Medical School and Hospital, Jeonju (Korea, Republic of); Hwang, Kyung Hoon [Gachon University Gil Hospital, Incheon (Korea, Republic of); Lee, Byeong Il; Bom, Hee Seung [Chonnam National University Medical School and Hospital, Gwangju (Korea, Republic of)

    2009-12-15

    Integration of the functional information of myocardial perfusion SPECT (MPS) and the morpho-anatomical information of coronary CT angiography (CTA) may provide useful additional diagnostic information on the spatial relationship between perfusion defects and coronary stenosis. We studied the added value of three-dimensional cardiac SPECT/CTA fusion imaging (fusion imaging) by comparing fusion images with MPS. Forty-eight patients (M:F = 26:22, age 63.3±10.4 years) with a reversible perfusion defect on MPS (adenosine stress/rest SPECT with Tc-99m sestamibi or tetrofosmin) and CTA were included. Fusion images were generated and compared with the findings from the MPS. Invasive coronary angiography served as the reference standard for both fusion imaging and MPS. In total, 144 coronary arteries in 48 patients were analyzed. Fusion imaging yielded a sensitivity, specificity, negative and positive predictive value for the detection of hemodynamically significant stenosis per coronary artery of 82.5%, 79.3%, 76.7% and 84.6%, respectively. The respective values for MPS were 68.8%, 70.7%, 62.1% and 76.4%. Fusion imaging could also detect more multi-vessel disease. Fused three-dimensional volume-rendered SPECT/CTA imaging provides intuitive, convincing information about hemodynamically relevant lesions and could improve diagnostic accuracy.

  13. Online multi-modal robust non-negative dictionary learning for visual tracking.

    Science.gov (United States)

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from the available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of the data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking both quantitatively and qualitatively.
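
    The non-negativity-preserving multiplicative updates mentioned above follow the classic Lee-Seung form. The batch sketch below illustrates only that update rule; the paper's method is online, learns one dictionary per modality, and is robustified via M-estimation.

        import numpy as np

        def nmf_multiplicative(V, rank=8, iters=200, eps=1e-9, seed=0):
            """Factor V ~ W @ H with W, H >= 0 via multiplicative updates."""
            rng = np.random.default_rng(seed)
            m, n = V.shape
            W = rng.random((m, rank)) + eps
            H = rng.random((rank, n)) + eps
            for _ in range(iters):
                H *= (W.T @ V) / (W.T @ W @ H + eps)   # update keeps H >= 0
                W *= (V @ H.T) / (W @ H @ H.T + eps)   # update keeps W >= 0
            return W, H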

  14. MMX-I: A data-processing software for multi-modal X-ray imaging and tomography

    International Nuclear Information System (INIS)

    Bergamaschi, A; Medjoubi, K; Somogyi, A; Messaoudi, C; Marco, S

    2017-01-01

    Scanning hard X-ray imaging allows simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil makes it possible to perform multimodal imaging and tomography during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the generated very large (several hundreds of gigabytes) multimodal data-sets. This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g. from the fields of Biology, Life Sciences, Geology, Geobiology), much of which has no experience in such data-handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. Therefore we have developed a multi-platform (Mac, Windows and Linux 64 bit) data processing tool, which is easy to install, comprehensive, intuitive, extendable and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in treating big data. (paper)

  15. MMX-I: A data-processing software for multi-modal X-ray imaging and tomography

    Science.gov (United States)

    Bergamaschi, A.; Medjoubi, K.; Messaoudi, C.; Marco, S.; Somogyi, A.

    2017-06-01

    Scanning hard X-ray imaging allows simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil makes it possible to perform multimodal imaging and tomography during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the generated very large (several hundreds of gigabytes) multimodal data-sets. This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g. from the fields of Biology, Life Sciences, Geology, Geobiology), much of which has no experience in such data-handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. Therefore we have developed a multi-platform (Mac, Windows and Linux 64 bit) data processing tool, which is easy to install, comprehensive, intuitive, extendable and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in treating big data.

  16. Multi-modal neuroimaging in premanifest and early Huntington's disease: 18 month longitudinal data from the IMAGE-HD study.

    Directory of Open Access Journals (Sweden)

    Juan F Domínguez D

    Full Text Available IMAGE-HD is an Australian-based multi-modal longitudinal magnetic resonance imaging (MRI) study in premanifest and early symptomatic Huntington's disease (pre-HD and symp-HD, respectively). In this investigation we sought to determine the sensitivity of imaging methods to detect macrostructural (volume) and microstructural (diffusivity) longitudinal change in HD. We used a 3T MRI scanner to acquire T1 and diffusion weighted images at baseline and 18 months in 31 pre-HD, 31 symp-HD and 29 controls. Volume was measured across the whole brain, and volume and diffusion measures were ascertained for caudate and putamen. We observed a range of significant volumetric and, for the first time, diffusion changes over 18 months in both pre-HD and symp-HD, relative to controls, detectable at the brain-wide level (volume change in grey and white matter) and in caudate and putamen (volume and diffusivity change). Importantly, longitudinal volume change in the caudate was the only measure that discriminated between groups across all stages of disease: far from diagnosis (>15 years), close to diagnosis (<15 years) and after diagnosis. Of the two diffusion metrics (mean diffusivity, MD; fractional anisotropy, FA), only longitudinal FA change was sensitive to group differences, but only after diagnosis. These findings further confirm caudate atrophy as one of the most sensitive and early biomarkers of neurodegeneration in HD. They also highlight that different tissue properties have varying schedules in their ability to discriminate between groups along disease progression and may therefore inform biomarker selection for future therapeutic interventions.

  17. Preoperative magnetic resonance and intraoperative ultrasound fusion imaging for real-time neuronavigation in brain tumor surgery.

    Science.gov (United States)

    Prada, F; Del Bene, M; Mattei, L; Lodigiani, L; DeBeni, S; Kolev, V; Vetrano, I; Solbiati, L; Sakas, G; DiMeco, F

    2015-04-01

    Brain shift and tissue deformation during surgery for intracranial lesions are the main current limitations of neuro-navigation (NN), which relies mainly on preoperative imaging. Ultrasound (US), being a real-time imaging modality, is becoming progressively more widespread during neurosurgical procedures, but most neurosurgeons, trained on axial computed tomography (CT) and magnetic resonance imaging (MRI) slices, lack specific US training and have difficulties recognizing anatomic structures with the same confidence as in preoperative imaging. Therefore, real-time intraoperative fusion imaging (FI) between preoperative imaging and intraoperative ultrasound (ioUS) for virtual navigation (VN) is highly desirable. We describe our procedure for real-time navigation during surgery for different cerebral lesions. We performed fusion imaging with virtual navigation for patients undergoing surgery for brain lesion removal using an ultrasound-based real-time neuro-navigation system that fuses intraoperative cerebral ultrasound with preoperative MRI and simultaneously displays an MRI slice coplanar to an ioUS image. 58 patients underwent surgery at our institution for intracranial lesion removal with image guidance using a US system equipped with fusion imaging for neuro-navigation. In all cases the initial (external) registration error obtained by the corresponding anatomical landmark procedure was below 2 mm and the craniotomy was correctly placed. The transdural window gave satisfactory US image quality and the lesion was always detectable and measurable on both axes. Brain shift/deformation correction was successfully employed in 42 cases to restore the co-registration during surgery. The accuracy of ioUS/MRI fusion/overlapping was confirmed intraoperatively under direct visualization of anatomic landmarks, and the error was acceptable. Fusion imaging for virtual navigation proved reliable during surgery and is less expensive and time-consuming than other intraoperative imaging techniques, offering high precision and accuracy.

  18. Musculoskeletal ultrasound and other imaging modalities in rheumatoid arthritis.

    Science.gov (United States)

    Ohrndorf, Sarah; Werner, Stephanie G; Finzel, Stephanie; Backhaus, Marina

    2013-05-01

    This review refers to the use of musculoskeletal ultrasound in patients with rheumatoid arthritis (RA), both in clinical practice and in research. Furthermore, other novel sensitive imaging modalities (high-resolution peripheral quantitative computed tomography and fluorescence optical imaging) are introduced in this article. Recently published ultrasound studies showed power Doppler activity on ultrasound to be highly predictive of later radiographic erosions in patients with RA. Another study presented synovitis detected by ultrasound as predictive of subsequent structural radiographic destruction, irrespective of the ultrasound modality (grayscale ultrasound/power Doppler ultrasound). Further studies are currently under way to establish ultrasound findings as imaging biomarkers of the destructive process in RA. The other novel imaging modalities introduced here are in the validation process to prove their impact and significance in inflammatory joint diseases. The introduced imaging modalities show different sensitivities and specificities, as well as strengths and weaknesses, with respect to the assessment of inflammation, differentiation of the involved structures, and radiological progression. The review addresses how best to integrate them into daily clinical practice with the aim of improving diagnostic algorithms, daily patient care and, furthermore, the disease's outcome.

  19. Autonomy of image and use of single or multiple sense modalities in original verbal image production.

    Science.gov (United States)

    Khatena, J

    1978-06-01

    The use of a single or of multiple sense modalities in the production of original verbal images as related to autonomy of imagery was explored. 72 college adults were administered Onomatopoeia and Images and the Gordon Test of Visual Imagery Control. A modified scoring procedure for the Gordon scale differentiated imagers who were moderate or low in autonomy. The two groups produced original verbal images using multiple sense modalities more frequently than a single modality.

  20. Multi-modal trip planning system : Northeastern Illinois Regional Transportation Authority.

    Science.gov (United States)

    2013-01-01

    This report evaluates the Multi-Modal Trip Planner System (MMTPS) implemented by the Northeastern Illinois Regional Transportation Authority (RTA) against the specific functional objectives enumerated by the Federal Transit Administration (FTA) in it...

  1. Unmanned Aerial System (UAS)-based phenotyping of soybean using multi-sensor data fusion and extreme learning machine

    Science.gov (United States)

    Maimaitijiang, Maitiniyazi; Ghulam, Abduwasit; Sidike, Paheding; Hartling, Sean; Maimaitiyiming, Matthew; Peterson, Kyle; Shavers, Ethan; Fishman, Jack; Peterson, Jim; Kadam, Suhas; Burken, Joel; Fritschi, Felix

    2017-12-01

    Estimating crop biophysical and biochemical parameters with high accuracy at low cost is imperative for high-throughput phenotyping in precision agriculture. Although fusion of data from multiple sensors is a common application in remote sensing, less is known about the contribution of low-cost RGB, multispectral and thermal sensors to rapid crop phenotyping. This is because (1) simultaneous collection of multi-sensor data from satellites is rare, and (2) multi-sensor data collected during a single flight had not been accessible until recent developments in Unmanned Aerial Systems (UASs) and UAS-friendly sensors that allow efficient information fusion. The objective of this study was to evaluate the power of high-spatial-resolution RGB, multispectral and thermal data fusion to estimate soybean (Glycine max) biochemical parameters, including chlorophyll content and nitrogen concentration, and biophysical parameters, including Leaf Area Index (LAI) and above-ground fresh and dry biomass. Multiple low-cost sensors integrated on UASs were used to collect RGB, multispectral, and thermal images throughout the growing season at a site established near Columbia, Missouri, USA. From these images, vegetation indices were extracted, a Crop Surface Model (CSM) was advanced, and a model to extract the vegetation fraction was developed. Spectral indices/features were then combined to model and predict crop biophysical and biochemical parameters using Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Extreme Learning Machine based Regression (ELR) techniques. Results showed that: (1) for biochemical variable estimation, multispectral and thermal data fusion provided the best estimates for nitrogen concentration and chlorophyll (Chl) a content (RMSE of 9.9% and 17.1%, respectively), while RGB color-information-based indices and multispectral data fusion exhibited the largest RMSE (22.6%); the highest accuracy for Chl a + b content estimation was
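
    Of the three regression techniques compared, the extreme learning machine is the simplest to sketch: a fixed random hidden layer followed by a ridge-regularized linear solve for the output weights. An illustrative numpy version (hidden size and regularization strength are assumptions):

        import numpy as np

        def elm_regression(X_train, y_train, X_test, hidden=200, reg=1e-3, seed=0):
            """Extreme learning machine regression: random features + ridge solve."""
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X_train.shape[1], hidden))   # fixed random layer
            b = rng.normal(size=hidden)
            h = lambda X: np.tanh(X @ W + b)
            H = h(X_train)
            beta = np.linalg.solve(H.T @ H + reg * np.eye(hidden), H.T @ y_train)
            return h(X_test) @ beta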

  2. Contribution to the detection of changes in multi-modal 3D MRI sequences

    International Nuclear Information System (INIS)

    Bosc, Marcel

    2003-01-01

    This research thesis reports the study of automatic techniques for the detection of changes in sequences of brain magnetic resonance imaging (MRI) images, and more particularly the study of localised intensity changes occurring during pathological evolutions, such as the evolution of lesions in multiple sclerosis. This work focused on the development of image processing tools allowing one to decide whether changes are statistically significant or not. The author developed automatic techniques for the identification and correction of the main artefacts (position, deformations, intensity variation, and so on), and proposes an original technique for cortex segmentation which introduces anatomical information for improved automatic detection. The developed change detection system has been assessed within the framework of studying the evolution of lesions in multiple sclerosis. Performance has been determined on a large number of multi-modal images, and the automatic system has shown better performance than a human expert [fr]

  3. Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-05-01

    A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minima. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.

  4. Hierarchical programming language for modal multi-rate real-time stream processing applications

    NARCIS (Netherlands)

    Geuns, S.J.; Hausmans, J.P.H.M.; Bekooij, Marco Jan Gerrit

    2014-01-01

    Modal multi-rate stream processing applications with real-time constraints which are executed on multi-core embedded systems often cannot be conveniently specified using current programming languages. An important issue is that sequential programming languages do not allow for convenient programming

  5. Model-based satellite image fusion

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Sveinsson, J. R.; Nielsen, Allan Aasbjerg

    2008-01-01

    A method is proposed for pixel-level satellite image fusion derived directly from a model of the imaging sensor. By design, the proposed method is spectrally consistent. It is argued that the proposed method needs regularization, as is the case for any method for this problem. A framework for pixel neighborhood regularization is presented. This framework enables the formulation of the regularization in a way that corresponds well with our prior assumptions about the image data. The proposed method is validated and compared with other approaches on several data sets. Lastly, the intensity-hue-saturation method is revisited in order to gain additional insight into what implications spectral consistency has for an image fusion method.

  6. Visualization of multi-INT fusion data using Java Viewer (JVIEW)

    Science.gov (United States)

    Blasch, Erik; Aved, Alex; Nagy, James; Scott, Stephen

    2014-05-01

    Visualization is important for multi-intelligence fusion, and we demonstrate issues in presenting physics-derived (i.e., hard) and human-derived (i.e., soft) fusion results. Physics-derived solutions (e.g., imagery) typically involve sensor measurements that are objective, while human-derived solutions (e.g., text) typically involve language processing. Both kinds of results can be geographically displayed for user-machine fusion. Attributes of an effective and efficient display are not well understood, so we demonstrate issues and results for filtering, correlation, and association of data for users - be they operators or analysts. Operators require near-real-time solutions, while analysts have the opportunity to use non-real-time solutions for forensic analysis. In a use case, we demonstrate examples using the JVIEW concept, which has been applied to piloting, space situational awareness, and cyber analysis. Using the open-source JVIEW software, we showcase a big data solution for a multi-intelligence fusion application for context-enhanced information fusion.

  7. Quality assurance of CT-PET alignment and image registration for radiation treatment planning

    International Nuclear Information System (INIS)

    Gong, S.J.; O'Keefe, G.J.; Gunawardana, D.H.

    2005-01-01

    A multi-layer point source phantom was first used to calibrate and verify the CT-PET system alignment. A partial whole-body Alderson RANDO Man Phantom (head through mid-femur) was externally and internally marked with small metal cannulas filled with 18F-FDG and then scanned with both modalities. Six series of phantom studies with different acquisition settings and scan positions were performed to reveal possible system bias and to evaluate the accuracy and reliability of the Philips Syntegra program in image alignment, co-registration and fusion. The registration error was assessed quantitatively by measuring the root-mean-square distance between the iso-centers of corresponding fiducial marker geometries in reference CT volumes and transformed CT or PET volumes. Results: Experimental data confirm that the accuracy of manual, parameter, point and image-based registration using Syntegra is better than 2 mm. Comparisons between blind and cross definition of the iso-centers of fiducial marks indicate that fused CT and PET is superior to visual correlation of CT and PET side-by-side. Conclusion: In this work we demonstrate the QA procedures for Gemini image alignment and registration. Syntegra produces intrinsic and robust multi-modality image registration and fusion with careful user interaction. The registration accuracy is generally better than the spatial resolution of the PET scanner used, and this appears to be sufficient for most RTP CT-PET registration procedures.
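
    The quantitative criterion used here, the root-mean-square distance between corresponding fiducial iso-centers, is straightforward to compute; a small numpy sketch (point arrays assumed in millimetres):

        import numpy as np

        def rms_registration_error(ref_points, transformed_points):
            """RMS distance between corresponding fiducial iso-centers, (N, 3)."""
            d = ref_points - transformed_points
            return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))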

  8. Multi-exposure high dynamic range image synthesis with camera shake correction

    Science.gov (United States)

    Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie

    2017-10-01

    Machine vision plays an important part in industrial online inspection. Owing to nonuniform illuminance conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, in image processing tasks such as crack inspection, algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is limited. Inevitably, camera shake will result in ghost effects, which blur the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene. These assumptions limit their application. At present, widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time-consuming. In order to rapidly obtain a high-quality HDR image without ghost effects, we propose an efficient Low Dynamic Range (LDR) image capturing approach together with a registration method based on Oriented FAST and Rotated BRIEF (ORB) and histogram equalization, which can eliminate the illumination differences between the LDR images. The fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce HDR images without ghost effects by registering and fusing four multi-exposure images.
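
    A hypothetical Python/OpenCV sketch of the described pipeline (feature counts and thresholds are assumptions): frames are histogram-equalized to suppress exposure differences, matched with ORB, warped to the reference by a RANSAC homography, and then fused; Mertens exposure fusion is used here as a readily available stand-in for the paper's fusion step.

        import cv2
        import numpy as np

        def align_to_reference(ref, img, n_features=2000):
            """Warp img onto ref via ORB matches on equalized grayscale frames."""
            orb = cv2.ORB_create(n_features)
            g_ref = cv2.equalizeHist(cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY))
            g_img = cv2.equalizeHist(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
            k1, d1 = orb.detectAndCompute(g_ref, None)
            k2, d2 = orb.detectAndCompute(g_img, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
            src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            return cv2.warpPerspective(img, H, (ref.shape[1], ref.shape[0]))

        def exposure_fuse(ldr_images):
            """Fuse aligned multi-exposure frames (returns float32 in [0, 1])."""
            ref = ldr_images[0]
            aligned = [ref] + [align_to_reference(ref, im) for im in ldr_images[1:]]
            return cv2.createMergeMertens().process(aligned)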

  9. Assessment of fusion operators for medical imaging: application to MR images fusion

    International Nuclear Information System (INIS)

    Barra, V.; Boire, J.Y.

    2000-01-01

    We propose in this article to assess the results provided by several fusion operators in the case of the fusion of T1- and T2-weighted magnetic resonance images of the brain. The assessment comprises an expert visual inspection of the results together with a numerical analysis using comparison measures found in the literature. The aim of this assessment is to find the 'best' operator for a given clinical study. The method is applied here to the quantification of brain tissue volumes on a brain phantom, and allows a fusion operator to be selected in any clinical study where several sources of information are available. (authors)

  10. Multi-Modal Traveler Information System - GCM Corridor Architecture Functional Requirements

    Science.gov (United States)

    1997-11-17

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  11. Simulation for evaluation of the multi-ion-irradiation Laboratory of TechnoFusion facility and its relevance for fusion applications

    International Nuclear Information System (INIS)

    Jimenez-Rey, D.; Mota, F.; Vila, R.; Ibarra, A.; Ortiz, Christophe J.; Martinez-Albertos, J.L.; Roman, R.; Gonzalez, M.; Garcia-Cortes, I.; Perlado, J.M.

    2011-01-01

    Thermonuclear fusion requires the development of several research facilities, in addition to ITER, needed to advance the technologies for future fusion reactors. TechnoFusion will focus on some of the priority areas identified by international fusion programmes. Specifically, the TechnoFusion Area of Irradiation of Materials aims to experimentally surrogate the effects of neutron irradiation on materials using a combination of ion beams. This paper justifies this approach, using computer simulations to validate the multi-ion-irradiation Laboratory. The planned irradiation facility will investigate the effects of highly energetic radiation on reactor-relevant materials. In a second stage, it will also be used to analyze the performance of such materials and to evaluate newly designed materials. The multi-ion-irradiation Laboratory, with both triple irradiation and high-energy proton irradiation, can provide valid experimental techniques to reproduce the effect of neutron damage in a fusion environment.

  12. A Selective Review of Multimodal Fusion Methods in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Jing eSui

    2012-02-01

    Full Text Available Schizophrenia (SZ) is one of the most cryptic and costly mental disorders in terms of human suffering and societal expenditure (van Os and Kapur, 2009). Though strong evidence for functional, structural and genetic abnormalities associated with this disease exists, there is as yet no replicable finding which has proven accurate enough to be useful in clinical decision making (Fornito et al., 2009), and its diagnosis relies primarily upon symptom assessment (Williams et al., 2010a). The lack of consistent neuroimaging findings is likely due in part to the fact that most models favor only one data type or do not combine data from different imaging modalities effectively, thus missing potentially important differences which are only partially detected by each modality (Calhoun et al., 2006a). It is becoming increasingly clear that multi-modal fusion, a technique which takes advantage of the fact that each modality provides a limited view of the brain/gene and may uncover hidden relationships, is an important tool to help unravel the black box of schizophrenia. In this review paper, we survey a number of multimodal fusion applications which enable us to study the schizophrenia macro-connectome, including brain functional, structural and genetic aspects, and may help us understand the disorder in a more comprehensive and integrated manner. We also provide a table that characterizes these applications by the methods used and compare these methods in detail, especially for multivariate models, which may serve as a valuable reference to help readers select an appropriate method for a given research question.

  13. HALO: a reconfigurable image enhancement and multisensor fusion system

    Science.gov (United States)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or by poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision-making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™-equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA-based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.

  14. An investigation of face and fingerprint feature-fusion guidelines

    CSIR Research Space (South Africa)

    Brown, Dane

    2016-05-01

    Full Text Available There are a lack of multi-modal biometric fusion guidelines at the feature-level. This paper investigates face and fingerprint features in the form of their strengths and weaknesses. This serves as a set of guidelines to authors that are planning...

  15. Tensor-based fusion of EEG and FMRI to understand neurological changes in Schizophrenia

    DEFF Research Database (Denmark)

    Evrim, Acar Ataman; Levin-Schwartz, Yuri; Calhoun, Vince D.

    2016-01-01

    Neuroimaging modalities such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) provide information about neurological functions in complementary spatiotemporal resolutions; therefore, fusion of these modalities is expected to provide better understanding of brain...

  16. Cloud-based processing of multi-spectral imaging data

    Science.gov (United States)

    Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David

    2017-03-01

    Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multispectral imaging on a handheld mobile device would make it possible to bring this technology, and with it knowledge, to low-resource settings and to provide state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without requiring too many system resources, long computation times, or excessive battery use on the endpoint device. Cloud environments were designed to address these problems by allowing endpoint devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a handheld device built around a smartphone captures a multispectral dataset in a movie file format (mp4), and we compare this format with other image formats in terms of size, noise and correctness. We present the cloud configuration used for segmenting images into frames which can later be used for further analysis.

  17. Multi-Modal Intelligent Traffic Signal Systems Signal Plans for Roadside Equipment

    Data.gov (United States)

    Department of Transportation — Data were collected during the Multi-Modal Intelligent Transportation Signal Systems (MMITSS) study. MMITSS is a next-generation traffic signal system that seeks to...

  18. Multi-Modal Intelligent Traffic Signal Systems Vehicle Trajectories for Roadside Equipment

    Data.gov (United States)

    Department of Transportation — Data were collected during the Multi-Modal Intelligent Transportation Signal Systems (MMITSS) study. MMITSS is a next-generation traffic signal system that seeks to...

  19. An FPGA-based heterogeneous image fusion system design method

    Science.gov (United States)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection are analyzed and compared. The VHDL language and the synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Comparative experiments with the various fusion algorithms show that good heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
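
    Although the paper implements these operators in VHDL on the FPGA, their behavior is easy to prototype in software; an illustrative numpy sketch of the three pixel-level operators on registered grayscale frames:

        import numpy as np

        def fuse(visible, infrared, mode="weighted", w=0.5):
            """Gray-scale weighted averaging, maximum and minimum selection."""
            a = visible.astype(np.float32)
            b = infrared.astype(np.float32)
            if mode == "weighted":        # gray-scale weighted averaging
                out = w * a + (1.0 - w) * b
            elif mode == "max":           # maximum selection
                out = np.maximum(a, b)
            else:                         # minimum selection
                out = np.minimum(a, b)
            return np.clip(out, 0, 255).astype(np.uint8)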

  20. Outcome of transarterial chemoembolization-based multi-modal treatment in patients with unresectable hepatocellular carcinoma.

    Science.gov (United States)

    Song, Do Seon; Nam, Soon Woo; Bae, Si Hyun; Kim, Jin Dong; Jang, Jeong Won; Song, Myeong Jun; Lee, Sung Won; Kim, Hee Yeon; Lee, Young Joon; Chun, Ho Jong; You, Young Kyoung; Choi, Jong Young; Yoon, Seung Kew

    2015-02-28

    To investigate the efficacy and safety of transarterial chemoembolization (TACE)-based multimodal treatment in patients with large hepatocellular carcinoma (HCC). A total of 146 consecutive patients were included in the analysis, and their medical records and radiological data were reviewed retrospectively. In total, 119 patients received TACE-based multi-modal treatments, and the remaining 27 received conservative management. Overall survival (P<0.001) and objective tumor response (P=0.003) were significantly better in the treatment group than in the conservative group. After subgroup analysis, survival benefits were observed not only in the multi-modal treatment group compared with the TACE-only group (P=0.002) but also in the surgical treatment group compared with the loco-regional treatment-only group (P<0.001). Multivariate analysis identified tumor stage (P<0.001) and tumor type (P=0.009) as two independent pre-treatment factors for survival. After adjusting for significant pre-treatment prognostic factors, objective response (P<0.001), surgical treatment (P=0.009), and multi-modal treatment (P=0.002) were identified as independent post-treatment prognostic factors. TACE-based multi-modal treatments were safe and more beneficial than conservative management. Salvage surgery after successful downstaging resulted in long-term survival in patients with large, unresectable HCC.

  1. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    Science.gov (United States)

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
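
    The top-level fusion step can be illustrated compactly. The sketch below assumes uncorrelated local estimation errors and omits the paper's unscented-transformation treatment; it shows only a linear-minimum-variance combination of N local estimates.

        # Inverse-covariance (information) weighting of N local state estimates.
        import numpy as np

        def lmv_fuse(estimates, covariances):
            infos = [np.linalg.inv(P) for P in covariances]  # information matrices
            P_fused = np.linalg.inv(sum(infos))              # fused covariance
            x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
            return x_fused, P_fused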

  2. Multi-modal Virtual Scenario Enhances Neurofeedback Learning

    Directory of Open Access Journals (Sweden)

    Avihay Cohen

    2016-08-01

    Full Text Available In the past decade neurofeedback has become the focus of a growing body of research. With real-time fMRI enabling on-line monitoring of emotion-related areas such as the amygdala, many have begun testing its therapeutic benefits. However, most existing neurofeedback procedures still use monotonic uni-modal interfaces, thus possibly limiting user engagement and weakening learning efficiency. The current study tested a novel multi-sensory neurofeedback animated scenario aimed at enhancing user experience and improving learning. We examined whether, relative to a simple uni-modal 2D interface, learning via a complex multi-modal 3D scenario would result in improved neurofeedback learning. As a neural probe, we used the recently developed fMRI-inspired EEG model of amygdala activity (the amygdala-EEG fingerprint, amygdala-EFP), enabling low-cost and mobile limbic neurofeedback training. Amygdala-EFP was reflected in the animated scenario by the unrest level of a hospital waiting room, in which virtual characters become impatient, approach the admission desk and complain loudly. Successful down-regulation was reflected as an easing of the room's unrest level. We tested whether, relative to a standard uni-modal 2D graphic thermometer interface, this animated scenario could facilitate more effective learning and improve the training experience. Thirty participants underwent two separate neurofeedback sessions (one week apart) practicing down-regulation of the amygdala-EFP signal. In the first session, half trained via the animated scenario and half via a thermometer interface. Learning efficiency was tested by three parameters: (a) effect size of the change in amygdala-EFP following training, (b) sustainability of the learned down-regulation in the absence of online feedback, and (c) transferability to an unfamiliar context. Comparing amygdala-EFP signal amplitude between the last and the first neurofeedback trials revealed that the animated scenario

  3. CT-MR image data fusion for computer assisted navigated neurosurgery of temporal bone tumors

    International Nuclear Information System (INIS)

    Nemec, Stefan Franz; Donat, Markus Alexander; Mehrain, Sheida; Friedrich, Klaus; Krestan, Christian; Matula, Christian; Imhof, Herwig; Czerny, Christian

    2007-01-01

    Purpose: To demonstrate the value of multi detector computed tomography (MDCT) and magnetic resonance imaging (MRI) in the preoperative work up of temporal bone tumors and to present, especially, CT and MR image fusion for surgical planning and performance in computer assisted navigated neurosurgery of temporal bone tumors. Materials and methods: Fifteen patients with temporal bone tumors underwent MDCT and MRI. MDCT was performed in high-resolution bone window level setting in axial plane. The reconstructed MDCT slice thickness was 0.8 mm. MRI was performed in axial and coronal plane with T2-weighted fast spin-echo (FSE) sequences, un-enhanced and contrast-enhanced T1-weighted spin-echo (SE) sequences, and coronal T1-weighted SE sequences with fat suppression and with 3D T1-weighted gradient-echo (GE) contrast-enhanced sequences in axial plane. The 3D T1-weighted GE sequence had a slice thickness of 1 mm. Image data sets of CT and 3D T1-weighted GE sequences were merged utilizing a workstation to create CT-MR fusion images. MDCT and MR images were separately used to depict and characterize lesions. The fusion images were utilized for interventional planning and intraoperative image guidance. The intraoperative accuracy of the navigation unit was measured, defined as the deviation between the same landmark in the navigation image and the patient. Results: Tumorous lesions of bone and soft tissue were well delineated and characterized by CT and MR images. The images played a crucial role in the differentiation of benign and malignant pathologies, which consisted of 13 benign and 2 malignant tumors. The CT-MR fusion images supported the surgeon in preoperative planning and improved surgical performance. The mean intraoperative accuracy of the navigation system was 1.25 mm. Conclusion: CT and MRI are essential in the preoperative work up of temporal bone tumors. CT-MR image data fusion presents an accurate tool for planning the correct surgical procedure and is a

  4. Image fusion via nonlocal sparse K-SVD dictionary learning.

    Science.gov (United States)

    Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang

    2016-03-01

    Image fusion aims to merge two or more images of the same scene, captured by various sensors, to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, in this paper, an approach for image fusion by the use of a novel dictionary learning scheme is proposed. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K-times singular value decomposition that is commonly used in the literature), abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is then applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images produced by the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
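
    In the same spirit, a per-patch sparse-coding fusion can be sketched as follows. This is a hedged simplification: plain orthogonal matching pursuit replaces the paper's simultaneous OMP, the learned NL_SK_SVD dictionary is assumed given as a matrix D of shape (patch_dim, n_atoms), and a max-l1-activity rule selects between the two sparse codes.

        # Hypothetical sparse-representation fusion over a given dictionary D.
        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def fuse_patches(patches_a, patches_b, D, n_nonzero=8):
            """Fuse columns of two patch matrices of shape (patch_dim, n_patches)."""
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                            fit_intercept=False)
            fused = np.empty_like(patches_a)
            for j in range(patches_a.shape[1]):
                ca = omp.fit(D, patches_a[:, j]).coef_
                cb = omp.fit(D, patches_b[:, j]).coef_
                # keep the representation with the larger l1 activity
                fused[:, j] = D @ (ca if np.abs(ca).sum() >= np.abs(cb).sum() else cb)
            return fused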

  5. Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    Science.gov (United States)

    Fan, Lei

    Hyperspectral imaging provides the capability of increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectral knowledge allows all available information in the data to be mined. These superior qualities give hyperspectral imaging wide applications such as mineral exploration, agriculture monitoring and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the fields of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that are expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors, LiDAR, etc

  6. On combining multi-normalization and ancillary measures for the optimal score level fusion of fingerprint and voice biometrics

    Science.gov (United States)

    Mohammed Anzar, Sharafudeen Thaha; Sathidevi, Puthumangalathu Savithri

    2014-12-01

    In this paper, we have considered the utility of multi-normalization and ancillary measures for the optimal score level fusion of fingerprint and voice biometrics. An efficient matching score preprocessing technique based on multi-normalization is employed for improving the performance of the multimodal system under various noise conditions. Ancillary measures derived from the feature space and the score space are used in addition to the matching score vectors, for weighing the modalities based on their relative degradation. Reliability (dispersion) and separability (inter-/intra-class distance and d-prime statistics) measures under various noise conditions are estimated from the individual modalities during the training/validation stage. The 'best integration weights' are then computed by algebraically combining these measures using the weighted sum rule. The computed integration weights are then optimized against the recognition accuracy using techniques such as grid search, genetic algorithm and particle swarm optimization. The experimental results show that the proposed biometric solution leads to considerable improvement in the recognition performance even under low signal-to-noise ratio (SNR) conditions and reduces the false acceptance rate (FAR) and false rejection rate (FRR), making the system useful for security as well as forensic applications.
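
    The weighted sum rule itself is straightforward; the sketch below is a minimal illustration in which min-max normalization stands in for the paper's multi-normalization stage, and the modality weight would in practice be derived from the reliability and separability measures described above.

        # Hypothetical weighted-sum score fusion of two modalities.
        import numpy as np

        def min_max(scores):
            s = np.asarray(scores, dtype=float)
            return (s - s.min()) / (s.max() - s.min() + 1e-12)

        def fuse_scores(finger_scores, voice_scores, w_finger=0.6):
            return (w_finger * min_max(finger_scores)
                    + (1.0 - w_finger) * min_max(voice_scores))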

  7. Is Synthesizing MRI Contrast Useful for Inter-modality Analysis?

    DEFF Research Database (Denmark)

    Iglesias, Juan Eugenio; Konukoglu, Ender; Zikic, Darko

    2013-01-01

    Availability of multi-modal magnetic resonance imaging (MRI) databases opens up the opportunity to synthesize different MRI contrasts without actually acquiring the images. In theory such synthetic images have the potential to reduce the amount of acquisitions to perform certain analyses. However...

  8. Multi-modal Behavioural Biometric Authentication for Mobile Devices

    OpenAIRE

    Saevanee , Hataichanok; Clarke , Nathan ,; Furnell , Steven ,

    2012-01-01

    Part 12: Authentication and Delegation; International audience; The potential advantages of behavioural biometrics are that they can be utilised in a transparent (non-intrusive) and continuous authentication system. However, individual biometric techniques are not suited to all users and scenarios. One way to increase the reliability of transparent and continuous authentication systems is to create a multi-modal behavioural biometric authentication system. This research investigated three behavi...

  9. Multiscale infrared and visible image fusion using gradient domain guided image filtering

    Science.gov (United States)

    Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia

    2018-03-01

    For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, the source images are first decomposed by a hybrid multiscale decomposition using guided image filtering and gradient domain guided image filtering; weight maps are then obtained at each scale using a saliency detection technique and filtering, with three different fusion rules applied at different scales. The three fusion rules address the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Visual effects can therefore be improved by using the proposed HMSD-GDGF method.
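
    A much-simplified two-scale stand-in for HMSD-GDGF (one guided-filter decomposition level, a crude absolute-detail saliency weight, and no gradient-domain variant) conveys the base/detail idea:

        # Sketch: classic guided filter plus a two-scale base/detail fusion.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def guided_filter(I, p, r=8, eps=1e-3):
            mean = lambda x: uniform_filter(x, size=2 * r + 1)
            mI, mp = mean(I), mean(p)
            a = (mean(I * p) - mI * mp) / (mean(I * I) - mI * mI + eps)
            b = mp - a * mI
            return mean(a) * I + mean(b)

        def two_scale_fuse(vis, ir):
            base_v, base_i = guided_filter(vis, vis), guided_filter(ir, ir)
            det_v, det_i = vis - base_v, ir - base_i
            w = (np.abs(det_v) >= np.abs(det_i)).astype(float)  # saliency weight
            return 0.5 * (base_v + base_i) + w * det_v + (1 - w) * det_i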

  10. Image and Dose Simulation in Support of New Imaging Modalities

    International Nuclear Information System (INIS)

    Kuruvilla Verghese

    2002-01-01

    This report summarizes the highlights of the research performed under the 2-year NEER grant from the Department of Energy. The primary outcome of the work was a new Monte Carlo code, MCMIS-DS (Monte Carlo for Mammography Image Simulation including Differential Sampling). The code was written to generate simulated images and dose distributions for two different new digital x-ray imaging modalities, namely synchrotron imaging (SI) and a slot-geometry digital mammography system called Fisher Senoscan. A differential sampling scheme was added to the code to generate, in a single execution, multiple images that include variations in the parameters of the measurement system and the object. The code serves multiple purposes: (1) to answer questions regarding the contribution of scattered photons to images, (2) for use in design optimization studies, and (3) to perform up to second-order perturbation studies to assess the effects of variations in design parameters and/or physical parameters of the object (the breast) without having to re-run the code for each set of varied parameters. The accuracy and fidelity of the code were validated by a large variety of benchmark studies using published data and also experimental results from mammography phantoms on both imaging modalities

  11. Quantitative functional optical imaging of the human skin using multi-spectral imaging

    International Nuclear Information System (INIS)

    Kainerstorfer, J. M.

    2010-01-01

    Light-tissue interactions can be described by the physical principles of absorption and scattering. Based on those parameters, different tissue types and analytes can be distinguished. Extracting blood volume and oxygenation is of particular interest in clinical routine for tumor diagnostics and treatment follow-up, since they are parameters of angiogenic processes. The quantification of those analytes in tissue can be done by physical modeling of light-tissue interaction; the physical model used here is random walk theory. For quantification and clinical usefulness, however, one has to account for multiple challenges. First, one must consider the effect of the topology of the sample on the measured physical parameters. Second, diffusion of light inside the tissue depends on the structure of the sample imaged, so the structural conformation has to be taken into account. Third, clinical translation of imaging modalities is often hindered by complicated post-processing of data that does not provide results in real time. In this thesis, two imaging modalities are utilized: the first, diffuse multi-spectral imaging, is based on absorption contrast and spectral characteristics, and the second, Optical Coherence Tomography (OCT), is based on scattering changes within the tissue. Multi-spectral imaging can provide spatial distributions of blood volume and blood oxygenation, and OCT yields 3D structural images with micrometer resolution. In order to address the challenges mentioned above, a curvature correction algorithm taking the topology into account was developed. Without taking the curvature of the object into account, reconstruction of optical properties is not accurate. The method developed removes this artifact and recovers the underlying data without the necessity of measuring the object's shape. The next step was to recover blood volume and oxygenation values in real time. Principal Component Analysis (PCA) on multi-spectral images is

  12. Anato-metabolic fusion of PET, CT and MRI images

    International Nuclear Information System (INIS)

    Przetak, C.; Baum, R.P.; Niesen, A.; Slomka, P.; Proeschild, A.; Leonhardi, J.

    2000-01-01

    The fusion of cross-sectional images - especially in oncology - appears to be a very helpful tool for improving diagnostic and therapeutic accuracy. Though it has many advantages, image fusion is applied routinely in only a few hospitals. To introduce image fusion as a common procedure, technical and logistical conditions have to be fulfilled relating to long-term archiving of digital data, data transfer and improvement of the available software in terms of usability and documentation. The accuracy of coregistration and the quality of image fusion have to be validated by further controlled studies. (orig.) [de]

  13. Fusion of colour and monochromatic images with edge emphasis

    Directory of Open Access Journals (Sweden)

    Rade M. Pavlović

    2014-02-01

    Full Text Available We propose a novel method to fuse true colour images with monochromatic non-visible-range images that seeks to efficiently encode important structural information from the monochromatic images while preserving the natural appearance of the available true chromacity information. We utilise the β colour opponency channel of the lαβ colour space as the domain in which to fuse information from the monochromatic input into the colour input, by way of robust grayscale fusion. This is followed by an effective gradient structure visualisation step that enhances the visibility of monochromatic information in the final colour-fused image. Images fused using this method preserve their natural appearance and chromacity better than conventional methods while at the same time clearly encoding structural information from the monochromatic input. This is demonstrated on a number of well-known true colour fusion examples and confirmed by the results of subjective trials on data from several colour fusion scenarios. Introduction: The goal of image fusion can be broadly defined as the representation of the visual information contained in a number of input images in a single fused image without distortion or loss of information. In practice, however, representing all available information from multiple inputs in a single image is almost impossible, and fusion is generally a data reduction task. One of the sensors usually provides a true colour image that by definition has all of its data dimensions already populated by spatial and chromatic information. Fusing such images with information from monochromatic inputs in a conventional manner can severely affect the natural appearance of the fused image. This is a difficult problem, and partly the reason why colour fusion has received only a fraction of the attention of the better-behaved grayscale fusion, even long after colour sensors became widespread. Fusion method: Humans tend to see colours as contrasts between opponent

  14. Dynamic in vivo imaging and cell tracking using a histone fluorescent protein fusion in mice

    Directory of Open Access Journals (Sweden)

    Papaioannou Virginia E

    2004-12-01

    Full Text Available Abstract Background Advances in optical imaging modalities and the continued evolution of genetically-encoded fluorescent proteins are coming together to facilitate the study of cell behavior at high resolution in living organisms. As a result, imaging using autofluorescent protein reporters is gaining popularity in mouse transgenic and targeted mutagenesis applications. Results We have used embryonic stem cell-mediated transgenesis to label cells at sub-cellular resolution in vivo, and to evaluate fusion of a human histone protein to green fluorescent protein for ubiquitous fluorescent labeling of nucleosomes in mice. To this end we have generated embryonic stem cells and a corresponding strain of mice that is viable and fertile and exhibits widespread chromatin-localized reporter expression. High levels of transgene expression are maintained in a constitutive manner. Viability and fertility of homozygous transgenic animals demonstrates that this reporter is developmentally neutral and does not interfere with mitosis or meiosis. Conclusions Using various optical imaging modalities including wide-field, spinning disc confocal, and laser scanning confocal and multiphoton excitation microscopy, we can identify cells in various stages of the cell cycle. We can identify cells in interphase, cells undergoing mitosis or cell death. We demonstrate that this histone fusion reporter allows the direct visualization of active chromatin in situ. Since this reporter segments three-dimensional space, it permits the visualization of individual cells within a population, and so facilitates tracking cell position over time. It is therefore attractive for use in multidimensional studies of in vivo cell behavior and cell fate.

  15. An effective method for cirrhosis recognition based on multi-feature fusion

    Science.gov (United States)

    Chen, Yameng; Sun, Gengxin; Lei, Yiming; Zhang, Jinpeng

    2018-04-01

    Liver disease is one of the main causes of human health problems. Cirrhosis is a critical phase in the development of liver lesions, especially hepatoma. Many clinical cases are still influenced to some degree by the subjectivity of physicians, and objective factors such as illumination, scale and edge blurring also affect clinicians' judgment. This subjectivity affects the accuracy of diagnosis and the treatment of patients. In order to overcome these difficulties and improve the recognition rate of liver cirrhosis, we propose a multi-feature fusion method to obtain more robust representations of texture in ultrasound liver images; the texture features we extract include the local binary pattern (LBP), the gray-level co-occurrence matrix (GLCM) and the histogram of oriented gradients (HOG). In this paper we first fuse multiple features to recognize cirrhotic and normal livers based on a parallel combination concept, and the experimental results show that the classifier is effective for cirrhosis recognition, as evaluated by a satisfying classification rate, the sensitivity and specificity of the receiver operating characteristic (ROC), and computation time. The proposed method should help improve the accuracy of cirrhosis diagnosis and prevent the progression of liver lesions toward hepatoma.
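
    The parallel combination of the three descriptors can be sketched as a simple concatenation; the following illustration assumes scikit-image (version 0.19 or later for the graycomatrix spelling), a grayscale uint8 patch of at least 32 x 32 pixels, and illustrative parameter choices rather than the paper's.

        # Hypothetical LBP + GLCM + HOG feature fusion for one image patch.
        import numpy as np
        from skimage.feature import (local_binary_pattern, graycomatrix,
                                     graycoprops, hog)

        def fused_features(img_u8):
            lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
            lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
            glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            glcm_feats = np.array([graycoprops(glcm, p)[0, 0] for p in
                                   ("contrast", "homogeneity", "energy", "correlation")])
            hog_feats = hog(img_u8, orientations=9, pixels_per_cell=(16, 16),
                            cells_per_block=(2, 2))
            return np.concatenate([lbp_hist, glcm_feats, hog_feats])  # parallel fusion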

  16. Multi-modal Social Networks: A MRF Learning Approach

    Science.gov (United States)

    2016-06-20

    The work primarily focused on two lines of research. 1. We propose new greedy algorithms... Related publication: Network forensics: random infection vs spreading epidemic, Proceedings of ACM Sigmetrics, 11-JUN-12, London, UK.

  17. Advances in Multi-Sensor Information Fusion: Theory and Applications 2017.

    Science.gov (United States)

    Jin, Xue-Bo; Sun, Shuli; Wei, Hong; Yang, Feng-Bao

    2018-04-11

    The information fusion technique can integrate a large amount of data and knowledge representing the same real-world object and obtain a consistent, accurate, and useful representation of that object. The data may be independent or redundant, and can be obtained by different sensors at the same time or at different times. A suitable combination of investigative methods can substantially increase the profit of information in comparison with that from a single sensor. Multi-sensor information fusion has been a key issue in sensor research since the 1970s, and it has been applied in many fields. For example, manufacturing and process control industries can generate a lot of data, which have real, actionable business value. The fusion of these data can greatly improve productivity through digitization. The goal of this special issue is to report innovative ideas and solutions for multi-sensor information fusion in the emerging applications era, focusing on development, adoption, and applications.

  18. Advances in Multi-Sensor Information Fusion: Theory and Applications 2017

    Directory of Open Access Journals (Sweden)

    Xue-Bo Jin

    2018-04-01

    Full Text Available The information fusion technique can integrate a large amount of data and knowledge representing the same real-world object and obtain a consistent, accurate, and useful representation of that object. The data may be independent or redundant, and can be obtained by different sensors at the same time or at different times. A suitable combination of investigative methods can substantially increase the profit of information in comparison with that from a single sensor. Multi-sensor information fusion has been a key issue in sensor research since the 1970s, and it has been applied in many fields. For example, manufacturing and process control industries can generate a lot of data, which have real, actionable business value. The fusion of these data can greatly improve productivity through digitization. The goal of this special issue is to report innovative ideas and solutions for multi-sensor information fusion in the emerging applications era, focusing on development, adoption, and applications.

  19. Multi-modal magnetic resonance imaging and histology of vascular function in xenografts using macromolecular contrast agent hyperbranched polyglycerol (HPG-GdF).

    Science.gov (United States)

    Baker, Jennifer H E; McPhee, Kelly C; Moosvi, Firas; Saatchi, Katayoun; Häfeli, Urs O; Minchinton, Andrew I; Reinsberg, Stefan A

    2016-01-01

    Macromolecular gadolinium (Gd)-based contrast agents are in development as blood pool markers for MRI. HPG-GdF is a 583 kDa hyperbranched polyglycerol doubly tagged with Gd and Alexa 647 nm dye, making it both MR and histologically visible. In this study we examined the location of HPG-GdF in whole-tumor xenograft sections matched to in vivo DCE-MR images of both HPG-GdF and Gadovist. Despite its large size, we have shown that HPG-GdF extravasates from some tumor vessels and accumulates over time, but does not distribute beyond a few cell diameters from vessels. Fractional plasma volume (fPV) and apparent permeability-surface area product (aPS) parameters were derived from the MR concentration-time curves of HPG-GdF. Non-viable necrotic tumor tissue was excluded from the analysis by applying a novel bolus arrival time (BAT) algorithm to all voxels. aPS derived from HPG-GdF was the only MR parameter to identify a difference in vascular function between HCT116 and HT29 colorectal tumors. This study is the first to relate low and high molecular weight contrast agents with matched whole-tumor histological sections. These detailed comparisons identified tumor regions that appear distinct from each other using the HPG-GdF biomarkers related to perfusion and vessel leakiness, while Gadovist-imaged parameter measures in the same regions were unable to detect variation in vascular function. We have established HPG-GdF as a biocompatible multi-modal high molecular weight contrast agent with application for examining vascular function in both MR and histological modalities. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Imaging Breast Density: Established and Emerging Modalities

    Directory of Open Access Journals (Sweden)

    Jeon-Hor Chen

    2015-12-01

    Full Text Available Mammographic density has been proven to be an independent risk factor for breast cancer. Women with dense breast tissue visible on a mammogram have a much higher cancer risk than women with little density. A great research effort has been devoted to incorporating breast density into risk prediction models to better estimate each individual's cancer risk. In recent years, the passage of breast density notification legislation in many states in the USA requires that every mammography report provide information regarding the patient's breast density. Accurate definition and measurement of breast density are thus important, which may allow all the potential clinical applications of breast density to be implemented. Because the two-dimensional mammography-based measurement is subject to tissue overlapping and thus not able to provide volumetric information, there is an urgent need to develop reliable quantitative measurements of breast density. Various new imaging technologies are being developed. Among these new modalities, volumetric mammographic density methods and three-dimensional magnetic resonance imaging are the most well studied. Besides these, emerging modalities, including various x-ray-based, optical imaging, and ultrasound-based methods, have also been investigated. All these modalities may either overcome some fundamental problems related to mammographic density or provide additional density and/or compositional information. The present review article aims to summarize the current established and emerging imaging techniques for the measurement of breast density and the evidence for the clinical use of these density methods from the literature.

  1. Application of multi-modality image coregistration in paediatric prosthetic endocarditis

    International Nuclear Information System (INIS)

    Kitsos, T.; Chung, D.K.; Howman-Giles, R.; Lau, Y.H.; University of Technology, Sydney, NSW

    2003-01-01

    This is a case where coregistration of Gallium and chest CT scans provided important clinical information which had a significant impact on management decisions. A 13-year-old girl from Noumea was transferred to our hospital for further management of S.haemolyticus bacteraemia. She had a history of complex congenital heart disease, requiring several cardiac surgical procedures. Seven months earlier she had a patch repair of the ventricular septum and formation of a right ventricle to pulmonary artery conduit. On admission she was generally unwell and had hemoptysis. She had fever to 39.9 deg C, oximetry of 83 per cent on room air, finger clubbing and a cardiac murmur. Chest X-ray and CT scans showed widespread pulmonary consolidation with bilateral pleural effusions. An echocardiogram showed no evidence of endocarditis. The crucial diagnostic dilemma was whether she had pneumonia or prosthetic endocarditis: the latter was more ominous and entailed high risk surgery. A Gallium whole body scan with chest SPECT showed focal localisation in the right mid chest. However, its location could not be confidently defined. A specifically developed computer program was used to co-register the Gallium SPECT and chest CT scans. The co-registered images conclusively localised the Gallium scan lesion to the prosthetic pulmonary outflow conduit, consistent with endocarditis. This triggered referral for cardiac MRI, which confirmed the diagnosis. In summary co-registration allowed precise structural localisation of focal Gallium uptake in the prosthetic pulmonary outflow conduit, with profound impact on the patient's diagnosis and subsequent management. This is an example of the potential benefits of co-registering structural and functional imaging modalities. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc

  2. Biometric image enhancement using decision rule based image fusion techniques

    Science.gov (United States)

    Sagayee, G. Mary Amirtha; Arumugam, S.

    2010-02-01

    Introducing biometrics into information systems may result in considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing the proper sensor is critical. The proposed work deals with how image quality can be improved by introducing image fusion techniques at the sensor level. The results of the images after introducing the decision-rule-based image fusion technique are evaluated and analyzed in terms of their entropy levels and root-mean-square error.
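
    The two evaluation measures named in the record are easy to state precisely; the sketch below implements them (and only them, not the decision-rule fusion itself).

        # Entropy of an 8-bit image and RMSE against a reference image.
        import numpy as np

        def entropy(img_u8):
            hist, _ = np.histogram(img_u8, bins=256, range=(0, 256), density=True)
            p = hist[hist > 0]
            return float(-(p * np.log2(p)).sum())

        def rmse(img, ref):
            return float(np.sqrt(np.mean((img.astype(float) - ref.astype(float)) ** 2)))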

  3. Fast single image dehazing based on image fusion

    Science.gov (United States)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often exhibit faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts the assumption that the haze-induced degradation level of each region is the same, which is similar to Retinex theory, and uses a simple Gaussian filter to obtain the coarse medium transmission. Then, pixel-level fusion is performed between the initial medium transmission and the coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and its complexity is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity compared to some state-of-the-art methods.
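
    The first step, the dark-channel-prior estimate of the initial medium transmission, can be sketched as follows; the Gaussian-filtered coarse transmission and the pixel-level fusion stages are omitted, and the airlight estimate is assumed given.

        # Sketch of the dark-channel-prior transmission estimate.
        import numpy as np
        from scipy.ndimage import minimum_filter

        def initial_transmission(img, atmo, patch=15, omega=0.95):
            """img: HxWx3 float in [0, 1]; atmo: length-3 airlight vector."""
            normalized = img / np.maximum(atmo, 1e-6)    # I(x) / A per channel
            dark = minimum_filter(normalized.min(axis=2), size=patch)
            return 1.0 - omega * dark                    # t(x) = 1 - w * dark channel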

  4. Image fusion analysis of 99mTc-HYNIC-Tyr3-octreotide SPECT and diagnostic CT using an immobilisation device with external markers in patients with endocrine tumours

    International Nuclear Information System (INIS)

    Gabriel, Michael; Hausler, Florian; Moncayo, Roy; Decristoforo, Clemens; Virgolini, Irene; Bale, Reto; Kovacs, Peter

    2005-01-01

    The aim of this study was to assess the value of multimodality imaging using a novel repositioning device with external markers for fusion of single-photon emission computed tomography (SPECT) and computed tomography (CT) images. The additional benefit derived from this methodological approach was analysed in comparison with SPECT and diagnostic CT alone in terms of detection rate, reliability and anatomical assignment of abnormal findings with SPECT. Fifty-three patients (30 males, 23 females) with known or suspected endocrine tumours were studied. Clinical indications for somatostatin receptor (SSTR) scintigraphy (SPECT/CT image fusion) included staging of newly diagnosed tumours (n=14) and detection of unknown primary tumour in the presence of clinical and/or biochemical suspicion of neuroendocrine malignancy (n=20). Follow-up studies after therapy were performed in 19 patients. A mean activity of 400 MBq of 99m Tc-EDDA/HYNIC-Tyr 3 -octreotide was given intravenously. SPECT using a dual-detector scintillation camera and diagnostic multi-detector CT were sequentially performed. To ensure reproducible positioning, patients were fixed in an individualised vacuum mattress with modality-specific external markers for co-registration. SPECT and CT data were initially interpreted separately and the fused images were interpreted jointly in consensus by nuclear medicine and diagnostic radiology physicians. SPECT was true-positive (TP) in 18 patients, true-negative (TN) in 16, false-negative (FN) in ten and false-positive (FP) in nine; CT was TP in 18 patients, TN in 21, FP in ten and FN in four. With image fusion (SPECT and CT), the scan result was TP in 27 patients (50.9%), TN in 25 patients (47.2%) and FN in one patient, this FN result being caused by multiple small liver metastases; sensitivity was 95% and specificity, 100%. The difference between SPECT and SPECT/CT was statistically as significant as the difference between CT and SPECT/CT image fusion (P<0

  5. Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.

    Science.gov (United States)

    Franchi, G; Angulo, J; Moreaud, M; Sorbier, L

    2018-01-01

    The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging backscattered electron images, which usually have a high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but a lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion against an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
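
    As an illustration of the classical baselines the paper compares against (not its tailored SEM method), a Brovey-style intensity substitution modulates the upsampled low-resolution channels by the high-resolution image:

        # Hypothetical Brovey-style pansharpening of EDX channels by a BSE image.
        import numpy as np

        def brovey_pansharpen(ms_upsampled, pan, eps=1e-6):
            """ms_upsampled: HxWxC channels resampled to the pan grid; pan: HxW."""
            intensity = ms_upsampled.mean(axis=2)
            ratio = pan / (intensity + eps)
            return ms_upsampled * ratio[..., None]  # inject pan detail per channel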

  6. Multi-Modal Traveler Information System - GCM Corridor Architecture Interface Control Requirements

    Science.gov (United States)

    1997-10-31

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  7. Strategic Mobility 21, Inland Port - Multi-Modal Terminal Operating System Design Specification

    National Research Council Canada - National Science Library

    Mallon, Lawrence G; Dougherty, Edmond J

    2007-01-01

    ...) Specification identifies technical and functional requirements for procuring and integrating services required for a multi-modal node operating software system operating within a Service Oriented Architecture (SOA...

  8. A multi-biometric feature-fusion framework for improved uni-modal and multi-modal human identification

    CSIR Research Space (South Africa)

    Brown, K

    2016-05-01

    Full Text Available after basic pre-processing, consisting of image alignment, pixel normalization and histogram equalization. The following results relate to the various face and fingerprint datasets only. Local Binary Pattern Histogram (LBPH) proved to be a versatile... descriptor by increasing the radius in correlation to the interpolated neighbours. The modified ELBP operator significantly outperformed histogram equalization and pixel normalization under dynamic lighting conditions. This was used before the LOG filter...

  9. Pulmonary function-morphologic relationships assessed by SPECT-CT fusion images

    International Nuclear Information System (INIS)

    Suga, Kazuyoshi

    2012-01-01

    Pulmonary single photon emission computed tomography-computed tomography (SPECT-CT) fusion images provide an objective and comprehensive assessment of pulmonary function-morphology relationships in cross-sectional lungs. This article reviews the noteworthy findings of lung pathophysiology in a wide spectrum of lung disorders, revealed on SPECT-CT fusion images over 8 years of experience. The fusion images confirmed the fundamental pathophysiologic appearance of low lung CT attenuation caused by airway-obstruction-induced hypoxic vasoconstriction and that caused by direct pulmonary arterial obstruction, as in acute pulmonary thromboembolism (PTE). The fusion images showed a better correlation of lung perfusion distribution with lung CT attenuation changes at lung mosaic CT attenuation (MCA) than regional ventilation did across these disorders, indicating that heterogeneous lung perfusion distribution may be the dominant mechanism of MCA on CT. SPECT-CT angiography fusion images revealed occasional dissociation between lung perfusion defects and intravascular clots in acute PTE, indicating the importance of assessing the actual effect of intravascular clots on peripheral lung perfusion. Perfusion SPECT-CT fusion images revealed the characteristic and preferential locations of pulmonary infarction in acute PTE. The fusion images showed occasional unexpected perfusion defects in lung areas that appeared normal on CT in chronic obstructive pulmonary disease and interstitial lung disease, indicating that perfusion SPECT is superior to CT for the detection of mild lesions in these disorders. The fusion images showed frequent 'steal phenomenon'-induced perfusion defects extending into the normal lung surrounding arteriovenous fistulas, and defects in lungs that appeared normal on CT in hepatopulmonary syndrome. Comprehensive assessment of lung function-CT morphology on fusion images will lead to a more profound understanding of lung pathophysiology in wide-spectral lung

  10. F-18 Labeled Diabody-Luciferase Fusion Proteins for Optical-ImmunoPET

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Anna M. [Univ. of California, Los Angeles, CA (United States)

    2013-01-18

    The goal of the proposed work is to develop novel dual-labeled molecular imaging probes for multimodality imaging. Based on small, engineered antibodies called diabodies, these probes will be radioactively tagged with fluorine-18 for PET imaging and fused to luciferases for optical (bioluminescence) detection. Performance will be evaluated and validated using a prototype integrated optical-PET imaging system, OPET. Multimodality probes for optical-PET imaging will be based on diabodies that are dually labeled with 18F for PET detection and fused to luciferases for optical imaging. (1) Two sets of fusion proteins will be built, targeting the cell surface markers CEA or HER2. Coelenterazine-based luciferases and variant forms will be evaluated in combination with the native substrate and analogs, in order to obtain two distinct probes recognizing different targets with different spectral signatures. (2) Diabody-luciferase fusion proteins will be labeled with 18F using amine-reactive [18F]-SFB produced using a novel microwave-assisted, one-pot method. (3) Site-specific, chemoselective radiolabeling methods will be devised to reduce the chance that radiolabeling will inactivate either the target-binding properties or the bioluminescence properties of the diabody-luciferase fusion proteins. (4) Combined optical and PET imaging of these dual-modality probes will be evaluated and validated in vitro and in vivo using a prototype integrated optical-PET imaging system, OPET. Each imaging modality has its strengths and weaknesses. Development and use of dual-modality probes allows optical imaging to benefit from the localization and quantitation offered by the PET mode, and enhances PET imaging by enabling simultaneous detection of more than one probe.

  11. Radar image and data fusion for natural hazards characterisation

    Science.gov (United States)

    Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Jixian; Zhang, Yonghong

    2010-01-01

    Fusion of synthetic aperture radar (SAR) images through interferometric, polarimetric and tomographic processing provides an all-weather imaging capability to characterise and monitor various natural hazards. This article outlines interferometric synthetic aperture radar (InSAR) processing and products and their utility for natural hazards characterisation, provides an overview of the techniques and applications related to fusion of SAR/InSAR images with optical and other images and highlights the emerging SAR fusion technologies. In addition to providing precise land-surface digital elevation maps, SAR-derived imaging products can map millimetre-scale elevation changes driven by volcanic, seismic and hydrogeologic processes, by landslides and wildfires and other natural hazards. With products derived from the fusion of SAR and other images, scientists can monitor the progress of flooding, estimate water storage changes in wetlands for improved hydrological modelling predictions and assessments of future flood impacts and map vegetation structure on a global scale and monitor its changes due to such processes as fire, volcanic eruption and deforestation. With the availability of SAR images in near real-time from multiple satellites in the near future, the fusion of SAR images with other images and data is playing an increasingly important role in understanding and forecasting natural hazards.

  12. Spatial Aspects of Multi-Sensor Data Fusion: Aerosol Optical Thickness

    Science.gov (United States)

    Leptoukh, Gregory; Zubko, V.; Gopalan, A.

    2007-01-01

    The Goddard Earth Sciences Data and Information Services Center (GES DISC) investigated the applicability and limitations of combining multi-sensor data through data fusion, to increase the usefulness of the multitude of NASA remote sensing data sets, and as part of a larger effort to integrate this capability in the GES-DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni). This initial study focused on merging daily mean Aerosol Optical Thickness (AOT), as measured by the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Terra and Aqua satellites, to increase spatial coverage and produce complete fields to facilitate comparison with models and station data. The fusion algorithm used the maximum likelihood technique to merge the pixel values where available. The algorithm was applied to two regional AOT subsets (with mostly regular and irregular gaps, respectively) and a set of AOT fields that differed only in the size and location of artificially created gaps. The Cumulative Semivariogram (CSV) was found to be sensitive to the spatial distribution of gap areas and, thus, useful for assessing the sensitivity of the fused data to spatial gaps.
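
    A per-pixel maximum-likelihood merge of two sensors reduces to inverse-variance weighting where both report and pass-through where only one does; the sketch below assumes missing pixels are encoded as NaN and that per-pixel error variances are available.

        # Hypothetical inverse-variance merge of Terra and Aqua AOT fields.
        import numpy as np

        def ml_merge(aot_a, aot_b, var_a, var_b):
            w_a, w_b = 1.0 / var_a, 1.0 / var_b
            both = ~np.isnan(aot_a) & ~np.isnan(aot_b)
            merged = np.where(np.isnan(aot_a), aot_b, aot_a)  # single-sensor pixels
            merged[both] = ((w_a * aot_a + w_b * aot_b) / (w_a + w_b))[both]
            return merged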

  13. Accuracy evaluation of fusion of CT, MR, and SPECT images using commercially available software packages (SRS PLATO and IFS)

    International Nuclear Information System (INIS)

    Mongioj, Valeria; Brusa, Anna; Loi, Gianfranco; Pignoli, Emanuele; Gramaglia, Alberto; Scorsetti, Marta; Bombardieri, Emilio; Marchesini, Renato

    1999-01-01

    Purpose: A problem for clinicians is to mentally integrate information from multiple diagnostic sources, such as computed tomography (CT), magnetic resonance (MR), and single photon emission computed tomography (SPECT), whose images give anatomic and metabolic information. Methods and Materials: To combine this different imaging procedure information, and to overlay correspondent slices, we used commercially available software packages (SRS PLATO and IFS). The algorithms utilize a fiducial-based coordinate system (or frame) with 3 N-shaped markers, which allows coordinate transformation of a clinical examination data set (9 spots for each transaxial section) to a stereotactic coordinate system. The N-shaped markers were filled with fluids visible in each modality (gadolinium for MR, calcium chloride for CT, and 99m Tc for SPECT). The frame is relocatable, in the different acquisition modalities, by means of a head holder to which a face mask is fixed so as to immobilize the patient. Position errors due to the algorithms were obtained by evaluating the stereotactic coordinates of five sources detectable in each modality. Results: SPECT and MR position errors due to the algorithms were evaluated with respect to CT: Δx was ≤ 0.9 mm for MR and ≤ 1.4 mm for SPECT, Δy was ≤ 1 mm and ≤ 3 mm for MR and SPECT, respectively. Maximal differences in distance between estimated and actual fiducial centers (geometric mismatch) were in the order of the pixel size (0.8 mm for CT, 1.4 mm for MR, and 1.8 mm for SPECT). In an attempt to distinguish necrosis from residual disease, the image fusion protocol was studied in 35 primary or metastatic brain tumor patients. Conclusions: The image fusion technique has a good degree of accuracy as well as the potential to improve the specificity of tissue identification and the precision of the subsequent treatment planning

  14. Three-dimensional imaging of lumbar spinal fusions

    International Nuclear Information System (INIS)

    Chafetz, N.; Hunter, J.C.; Cann, C.E.; Morris, J.M.; Ax, L.; Catterling, K.F.

    1986-01-01

    Using a Cemax 1000 three-dimensional (3D) imaging computer/workstation, the authors evaluated 15 patients with lumbar spinal fusions (four with pseudarthrosis). Both axial images with sagittal and coronal reformations and 3D images were obtained. The diagnoses (spinal stenosis and pseudarthrosis) were changed in four patients, confirmed in six patients, and unchanged in five patients with the addition of the 3D images. The 'cut-away' 3D images proved particularly helpful for the evaluation of central and lateral spinal stenosis, whereas the 'external' 3D images were most useful for evaluating the integrity of the fusion. Additionally, orthopedic surgeons found 3D images superior for both surgical planning and explaining pathology to patients

  15. Enhanced Visualization of Subtle Outer Retinal Pathology by En Face Optical Coherence Tomography and Correlation with Multi-Modal Imaging.

    Directory of Open Access Journals (Sweden)

    Danuta M Sampson

    Full Text Available To present en face optical coherence tomography (OCT) images generated by custom software based on a graph-search theory algorithm, and to examine their correlation with other imaging modalities. En face OCT images derived from high-density OCT volumetric scans of 3 healthy subjects and 4 patients, generated using a custom algorithm (graph-search theory) and commercial software (Heidelberg Eye Explorer; Heidelberg Engineering), were compared and correlated with near-infrared reflectance, fundus autofluorescence, adaptive optics flood-illumination ophthalmoscopy (AO-FIO) and microperimetry. The commercial software was unable to generate accurate en face OCT images in eyes with retinal pigment epithelium (RPE) pathology, due to segmentation error at the level of Bruch's membrane (BM). Accurate segmentation of the basal RPE and BM was achieved using the custom software. The en face OCT images from eyes with isolated interdigitation or ellipsoid zone pathology were of similar quality between the custom software and the Heidelberg Eye Explorer software in the absence of any other significant outer retinal pathology. En face OCT images demonstrated angioid streaks, lesions of acute macular neuroretinopathy, hydroxychloroquine toxicity and Bietti crystalline deposits that correlated with other imaging modalities. The graph-search theory algorithm helps to overcome the limitations of outer retinal segmentation inaccuracies in commercial software. En face OCT images can provide detailed topography of the reflectivity within a specific layer of the retina, which correlates with other forms of fundus imaging. Our results highlight the need for standardization of image reflectivity to facilitate quantification of en face OCT images and longitudinal analysis.

  16. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    H. Shen

    2012-08-01

    Full Text Available Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method is able to integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and the maximum a posteriori (MAP) framework is then used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
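
    A toy version of the MAP formulation makes the structure concrete: with identity observation models and a quadratic smoothness prior standing in for the paper's general models, gradient descent on the posterior energy looks as follows.

        # Illustrative MAP fusion of co-registered observations by gradient descent.
        import numpy as np

        def laplacian(x):
            return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                    np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)

        def map_fuse(observations, lam=0.1, step=0.1, n_iter=200):
            x = np.mean(observations, axis=0)  # initialize with the mean image
            for _ in range(n_iter):
                data_grad = sum(x - y for y in observations)  # data-fidelity term
                x -= step * (data_grad - lam * laplacian(x))  # descend MAP energy
            return x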

  17. The Review of Visual Analysis Methods of Multi-modal Spatio-temporal Big Data

    Directory of Open Access Journals (Sweden)

    ZHU Qing

    2017-10-01

    Full Text Available The visual analysis of spatio-temporal big data is not only a state-of-the-art research direction in both big data analysis and data visualization, but also a core module of the pan-spatial information system. This paper reviews existing visual analysis methods at three levels: descriptive visual analysis, explanatory visual analysis and exploratory visual analysis, focusing on the multi-source, multi-granularity, multi-modal and complex-association characteristics of spatio-temporal big data. The technical difficulties and development tendencies of multi-modal feature selection, innovative human-computer interaction analysis and exploratory visual reasoning in the visual analysis of spatio-temporal big data are discussed. Research shows that the study of descriptive visual analysis for data visualization is relatively mature. Explanatory visual analysis has become the focus of big data analysis; it is mainly based on interactive data mining in a visual environment to diagnose the implicit causes of problems. The exploratory visual analysis method still needs a major breakthrough.

  18. Alternate method for to realize image fusion

    International Nuclear Information System (INIS)

    Vargas, L.; Hernandez, F.; Fernandez, R.

    2005-01-01

    At the present time imaging departments need to fuse images obtained from diverse apparatuses. Conventionally they fuse magnetic resonance or X-ray tomography images with functional images such as gammagrams and PET images. Fusion technology is sold with modern imaging equipment, and not all nuclear medicine departments have access to it. For this reason we analyzed and studied the problem and found a solution so that all nuclear medicine departments can benefit from image fusion. The first indispensable requirement is a personal computer with the capacity to accept image digitizer cards. If one has a gamma camera that can export images in JPG, GIF, TIFF or BMP formats, it is also possible to do without the digitizer card and to record the images on a disk so they can be used on the personal computer. One of the following commercially available graphic design programs is required: Corel Draw, Photoshop, FreeHand, Illustrator or Macromedia Flash; these are the ones we evaluated and that allow the image fusion to be performed. Any one of them works well, and only short training is required to manage them. A digital photographic camera with a resolution of at least 3.0 megapixels is also necessary. The procedure consists of taking photographs of the radiological studies the patient already has, selecting those images that demonstrate the pathology under study and that are concordant with the images we have created in the gammagraphic studies, whether planar or tomographic. We transfer the images to the personal computer and read them with the graphic design program, then read the gammagraphic images as well. We use the program's digital tools to make the images transparent, to clip them, to adjust their sizes and to create the fused images. The process is manual, and skill and experience are required to choose the images, the cuts, the sizes and the degree of transparency. (Author)

  19. Imaging Modalities for Cervical Spondylotic Stenosis and Myelopathy

    Directory of Open Access Journals (Sweden)

    C. Green

    2012-01-01

    Full Text Available Cervical spondylosis is a spectrum of pathology presenting as neck pain, radiculopathy, myelopathy, or a combination of these. Diagnostic imaging is essential for diagnosis and preoperative planning. We discuss the imaging modalities in common practice. We examine the use of imaging to differentiate among central, subarticular, and lateral stenosis and in the assessment of myelopathy.

  20. Automatic multi-modal intelligent seizure acquisition (MISA) system for detection of motor seizures from electromyographic data and motion data

    DEFF Research Database (Denmark)

    Conradsen, Isa; Beniczky, Sándor; Wolf, Peter

    2012-01-01

    measures of reconstructed sub-bands from the discrete wavelet transformation (DWT) and the wavelet packet transformation (WPT). Based on the extracted features all data segments were classified using a support vector machine (SVM) algorithm as simulated seizure or normal activity. A case study...... of the seizure from the patient showed that the simulated seizures were visually similar to the epileptic one. The multi-modal intelligent seizure acquisition (MISA) system showed high sensitivity, short detection latency and low false detection rate. The results showed superiority of the multi-modal detection...... system compared to the uni-modal one. The presented system has a promising potential for seizure detection based on multi-modal data....

  1. Dual modality CT/PET imaging in lung cancer staging

    International Nuclear Information System (INIS)

    Diaz, Gabriel A.

    2005-01-01

    Purpose: To compare the diagnostic capability of PET-HCT image fusion and helical computed tomography (HCT) for nodal and distant metastasis detection in patients with lung cancer. Material and methods: Between February 2003 and March 2004, sixty-six consecutive lung cancer patients (45 men and 21 women; mean age: 63 years, range: 38 to 96 years) who underwent HCT and PET-HCT fusion imaging were evaluated retrospectively. All patients had histological confirmation of lung cancer and a definitive diagnosis established on the basis of pathology results and/or clinical follow-up. Results: For global nodal staging (hilar and mediastinal), HCT showed a sensitivity, specificity, positive predictive value and negative predictive value of 72%, 47%, 62% and 58% respectively, versus 94%, 77%, 83% and 92% for the PET-HCT examination. For assessment of advanced nodal stage (N3), PET-HCT showed values of 92%, 100%, 100% and 98% respectively. For detection of distant metastasis, HCT alone had values of 67%, 93%, 84% and 83% respectively, versus 100%, 98%, 96% and 100% for PET-HCT fusion imaging. In 20 (30%) patients under-staged or over-staged on the basis of HCT results, PET-HCT allowed accurate staging. Conclusions: PET-HCT fusion imaging was more effective than HCT alone for nodal and distant metastasis detection and oncologic staging. (author)

  2. Experimental modal analysis of fractal-inspired multi-frequency structures for piezoelectric energy converters

    International Nuclear Information System (INIS)

    Castagnetti, D

    2012-01-01

    An important issue in the field of energy harvesting through piezoelectric materials is the design of simple and efficient structures which are multi-frequency in the ambient vibration range. This paper deals with the experimental assessment of four fractal-inspired multi-frequency structures for piezoelectric energy harvesting. These structures, thin plates of square shape, were proposed in a previous work by the author and their modal response numerically analysed. The present work has two aims. First, to assess the modal response of these structures through an experimental investigation. Second, to evaluate, through computational simulation, the performance of a piezoelectric converter relying on one of these fractal-inspired structures. The four fractal-inspired structures are examined in the range between 0 and 100 Hz, with regard to both eigenfrequencies and eigenmodes. In the same frequency range, the modal response and power output of the piezoelectric converter are investigated. (paper)

  3. Molecular imaging needles: dual-modality optical coherence tomography and fluorescence imaging of labeled antibodies deep in tissue

    Science.gov (United States)

    Scolaro, Loretta; Lorenser, Dirk; Madore, Wendy-Julie; Kirk, Rodney W.; Kramer, Anne S.; Yeoh, George C.; Godbout, Nicolas; Sampson, David D.; Boudoux, Caroline; McLaughlin, Robert A.

    2015-01-01

    Molecular imaging using optical techniques provides insight into disease at the cellular level. In this paper, we report on a novel dual-modality probe capable of performing molecular imaging by combining simultaneous three-dimensional optical coherence tomography (OCT) and two-dimensional fluorescence imaging in a hypodermic needle. The probe, referred to as a molecular imaging (MI) needle, may be inserted tens of millimeters into tissue. The MI needle utilizes double-clad fiber to carry both imaging modalities, and is interfaced to a 1310-nm OCT system and a fluorescence imaging subsystem using an asymmetrical double-clad fiber coupler customized to achieve high fluorescence collection efficiency. We present, to the best of our knowledge, the first dual-modality OCT and fluorescence needle probe with sufficient sensitivity to image fluorescently labeled antibodies. Such probes enable high-resolution molecular imaging deep within tissue. PMID:26137379

  4. The pre-image problem for Laplacian Eigenmaps utilizing L1 regularization with applications to data fusion

    International Nuclear Information System (INIS)

    Cloninger, Alexander; Czaja, Wojciech; Doster, Timothy

    2017-01-01

    As the popularity of non-linear manifold learning techniques such as kernel PCA and Laplacian Eigenmaps grows, vast improvements have been seen in many areas of data processing, including heterogeneous data fusion and integration. One problem with the non-linear techniques, however, is the lack of an easily calculable pre-image. The existence of such a pre-image would allow visualization of the fused data not only in the embedded space, but also in the original data space. The ability to make such comparisons can be crucial for data analysts and other subject matter experts who are the end users of novel mathematical algorithms. In this paper, we propose a pre-image algorithm for Laplacian Eigenmaps. Our method offers major improvements over existing techniques, allowing us to address the problem of noisy inputs and the issue of how to calculate the pre-image of a point outside the convex hull of training samples, both of which have been overlooked in previous studies in this field. We conclude by showing that our pre-image algorithm, combined with feature space rotations, allows us to recover occluded pixels of an imaging modality based on knowledge of that image as measured by heterogeneous modalities. We demonstrate this data recovery on heterogeneous hyperspectral (HS) cameras, as well as by recovering LIDAR measurements from HS data. (paper)

  5. The pre-image problem for Laplacian Eigenmaps utilizing L1 regularization with applications to data fusion

    Science.gov (United States)

    Cloninger, Alexander; Czaja, Wojciech; Doster, Timothy

    2017-07-01

    As the popularity of non-linear manifold learning techniques such as kernel PCA and Laplacian Eigenmaps grows, vast improvements have been seen in many areas of data processing, including heterogeneous data fusion and integration. One problem with the non-linear techniques, however, is the lack of an easily calculable pre-image. The existence of such a pre-image would allow visualization of the fused data not only in the embedded space, but also in the original data space. The ability to make such comparisons can be crucial for data analysts and other subject matter experts who are the end users of novel mathematical algorithms. In this paper, we propose a pre-image algorithm for Laplacian Eigenmaps. Our method offers major improvements over existing techniques, allowing us to address the problem of noisy inputs and the issue of how to calculate the pre-image of a point outside the convex hull of training samples, both of which have been overlooked in previous studies in this field. We conclude by showing that our pre-image algorithm, combined with feature space rotations, allows us to recover occluded pixels of an imaging modality based on knowledge of that image as measured by heterogeneous modalities. We demonstrate this data recovery on heterogeneous hyperspectral (HS) cameras, as well as by recovering LIDAR measurements from HS data.
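
    For readers unfamiliar with the setting of the two records above, the following minimal sketch computes a Laplacian Eigenmaps embedding and a naive nearest-neighbour pre-image. This is a simple baseline rather than the authors' L1-regularized solver, and the data are random stand-ins.

        # A minimal sketch of the pre-image idea, assuming a nearest-neighbour
        # baseline: a point in the Laplacian Eigenmaps embedding is mapped back
        # to input space as a weighted average of its nearest training samples.
        import numpy as np
        from sklearn.manifold import SpectralEmbedding

        X = np.random.rand(200, 30)                      # stand-in training data
        emb = SpectralEmbedding(n_components=3, n_neighbors=10)
        Y = emb.fit_transform(X)                         # embedded coordinates

        def naive_preimage(y, Y, X, k=5):
            """Map an embedded point y back to input space."""
            d = np.linalg.norm(Y - y, axis=1)
            idx = np.argsort(d)[:k]                      # k nearest in embedding
            w = 1.0 / (d[idx] + 1e-9)                    # inverse-distance weights
            return (w[:, None] * X[idx]).sum(0) / w.sum()

        x_rec = naive_preimage(Y[0], Y, X)               # reconstruct one sample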

  6. Distributed MIMO-ISAR Sub-image Fusion Method

    Directory of Open Access Journals (Sweden)

    Gu Wenkun

    2017-02-01

    Full Text Available The fast fluctuation of a maneuvering target's radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, in this study, we propose an imaging method based on the fusion of sub-images from frequency-diversity-distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the images acquired by the different channels. By compensating for the distortion of the ISAR images, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method using distributed MIMO-ISAR.

  7. Multi-Modal Inference in Animacy Perception for Artificial Object

    Directory of Open Access Journals (Sweden)

    Kohske Takahashi

    2011-10-01

    Full Text Available Sometimes we feel animacy for artificial objects and their motion. Animals usually interact with environments through multiple sensory modalities. Here we investigated how the sensory responsiveness of artificial objects to the environment contributes to animacy judgments about them. In a 90-s trial, observers freely viewed four objects moving in a virtual 3D space. The objects, whose position and motion were determined following Perlin-noise series, kept drifting independently in the space. Visual flashes, auditory bursts, or synchronous flashes and bursts appeared at 1–2 s intervals. The first object abruptly accelerated its motion just after visual flashes, giving an impression of responding to the flash. The second object responded to bursts. The third object responded to synchronous flashes and bursts. The fourth object accelerated at a random timing independent of flashes and bursts. The observers rated how strongly they felt animacy for each object. The results showed that the object responding to the auditory bursts was rated as having weaker animacy compared to the other objects. This implies that the sensory modality through which an object interacts with the environment may be a factor in animacy perception and may serve as the basis of multi-modal and cross-modal inference of animacy.

  8. Comparisons of three alternative breast modalities in a common phantom imaging experiment

    International Nuclear Information System (INIS)

    Li Dun; Meaney, Paul M.; Tosteson, Tor D.; Jiang Shudong; Kerner, Todd E.; McBride, Troy O.; Pogue, Brian W.; Hartov, Alexander; Paulsen, Keith D.

    2003-01-01

    Four model-based imaging systems are currently being developed for breast cancer detection at Dartmouth College. A potential advantage of multimodality imaging is the prospect of combining information collected from each system to provide a more complete diagnostic tool that covers the full range of the patient and pathology spectra. In this paper it is shown through common phantom experiments on three of these imaging systems that it was possible to correlate different types of image information to potentially improve the reliability of tumor detection. Imaging experiments were conducted with common phantoms which mimic both dielectric and optical properties of the human breast. Cross modality comparison was investigated through a statistical study based on the repeated data sets of reconstructed parameters for each modality. The system standard error between all methods was generally less than 10% and the correlation coefficient across modalities ranged from 0.68 to 0.91. Future work includes the minimization of bias (artifacts) on the periphery of electrical impedance spectroscopy images to improve cross modality correlation and implementation of the multimodality diagnosis for breast cancer detection

  9. AUTOMATIC REGISTRATION OF MULTI-SOURCE DATA USING MUTUAL INFORMATION

    Directory of Open Access Journals (Sweden)

    E. G. Parmehr

    2012-07-01

    Full Text Available Automatic image registration is a basic step in multi-sensor data integration in remote sensing and photogrammetric applications such as data fusion. The effectiveness of Mutual Information (MI) as a technique for automated multi-sensor image registration has previously been demonstrated for medical and remote sensing applications. In this paper, a new General Weighted MI (GWMI) approach that improves the robustness of MI to local maxima, particularly in the case of registering optical imagery and 3D point clouds, is presented. Two different methods, including a Gaussian Mixture Model (GMM) and Kernel Density Estimation, have been used to define the weight function of the joint probability, regardless of the modality of the data being registered. The Expectation-Maximization method is then used to estimate the parameters of the GMM, and in order to reduce the cost of computation, a multi-resolution strategy has been used. The performance of the proposed GWMI method for the registration of aerial orthoimagery and LiDAR range and intensity information has been experimentally evaluated, and the results obtained are presented.
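
    The core similarity measure behind such registration methods can be illustrated compactly. The sketch below computes plain mutual information from a joint histogram of two images; it omits the paper's weighting of the joint probability (the GWMI extension) and simply shows the quantity that a registration search would maximize over candidate transforms.

        # A minimal sketch of the mutual-information similarity used in
        # intensity-based registration (plain MI from a joint histogram).
        import numpy as np

        def mutual_information(a, b, bins=32):
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = hist / hist.sum()                 # joint probability
            px = p.sum(axis=1, keepdims=True)     # marginals
            py = p.sum(axis=0, keepdims=True)
            nz = p > 0                            # avoid log(0)
            return (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()

        # In registration, MI is evaluated for each candidate transform and
        # the transform maximizing it is kept.
        a = np.random.rand(128, 128)
        b = np.roll(a, 3, axis=0) + 0.05 * np.random.rand(128, 128)
        print(mutual_information(a, b))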

  10. Integration of sparse multi-modality representation and geometrical constraint for isointense infant brain segmentation.

    Science.gov (United States)

    Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H; Shen, Dinggang

    2013-01-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6-8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods.
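
    The patch-wise sparse labeling idea at the heart of this method can be sketched as follows, assuming a dictionary of library patches with known tissue labels. The random dictionary, patch size and three-class setup are illustrative only; the paper additionally stacks T1, T2 and diffusion-weighted channels and refines the result with geometrical constraints.

        # A minimal sketch of patch-wise sparse representation for segmentation:
        # a target patch is sparsely coded over labeled library patches and the
        # tissue class with the largest coefficient mass wins.
        import numpy as np
        from sklearn.linear_model import orthogonal_mp

        D = np.random.rand(75, 400)                # dictionary of library patches
        labels = np.random.randint(0, 3, 400)      # WM/GM/CSF label per atom

        def label_patch(patch, n_nonzero=10):
            code = orthogonal_mp(D, patch, n_nonzero_coefs=n_nonzero)
            votes = np.zeros(3)
            for k in range(3):                     # accumulate coefficient mass
                votes[k] = np.abs(code[labels == k]).sum()
            return votes.argmax()                  # most supported tissue class

        print(label_patch(np.random.rand(75)))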

  11. Design and experimental study of a multi-modal piezoelectric energy harvester

    Energy Technology Data Exchange (ETDEWEB)

    Xiong, Xing Yu [School of Energy, Power and Mechanical Engineering, North China Electric Power University, Beijing (China); Oyadiji, S. Olutunde [School of Mechanical, Aerospace and Civil Engineering, The University of Manchester, Manchester (United Kingdom)

    2017-01-15

    A multi-modal piezoelectric vibration energy harvester is designed in this article. It consists of a cantilevered base beam and several upper and lower layer beams with rigid masses bonded between the beams as spacers. For a four-layer harvester subjected to random base excitations, relocating the mass positions leads to the generation of up to four close resonance frequencies over the frequency range from 10 Hz to 100 Hz with relatively large power output. The harvesters are connected to a resistance decade box, and the frequency response functions of the voltage and power on resistive loads are determined. The experimental results are validated against simulation results obtained using the finite element method. At a given level of power output, the experimental results show that the multi-modal harvesters can generate a frequency band that is more than two times greater than the frequency band produced by a cantilevered beam harvester.

  12. Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation

    Science.gov (United States)

    Song, Huihui

    Remote sensing provides good measurements for monitoring and further analyzing climate change, the dynamics of ecosystems, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements for remote sensing data in a vast number of application fields. However, a key technological challenge confronting these sensors is that they trade off between spatial resolution and other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data while preserving other good properties, one possible cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking the study case of Landsat ETM+ (with a spatial resolution of 30 m and a temporal resolution of 16 days) and MODIS (with spatial resolutions of 250 m ~ 1 km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image and the daily temporal resolution of the MODIS image. Motivated by the fact that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from the MODIS counterpart on the prediction date. To learn the spatial details from the prior images well, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat
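
    As background for this spatial-temporal fusion setting, the sketch below implements the simplest additive baseline: the fine-scale image on the prediction date is taken as the prior Landsat image plus the upsampled coarse-scale temporal change. The thesis learns this mapping with a redundant sparse dictionary instead, and the array sizes here are illustrative.

        # A minimal sketch of the spatial-temporal fusion setting, assuming the
        # additive baseline L2 ~ L1 + upsample(M2 - M1); the thesis replaces
        # this with a learned sparse-representation mapping.
        import numpy as np
        from scipy.ndimage import zoom

        L1 = np.random.rand(240, 240)        # fine-scale Landsat at prior date
        M1 = np.random.rand(30, 30)          # coarse MODIS at prior date
        M2 = np.random.rand(30, 30)          # coarse MODIS at prediction date

        change = zoom(M2 - M1, 8, order=1)   # upsample the coarse temporal change
        L2_pred = L1 + change                # predicted fine-scale image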

  13. A Transgenic Tri-Modality Reporter Mouse

    OpenAIRE

    Yan, Xinrui; Ray, Pritha; Paulmurugan, Ramasamy; Tong, Ricky; Gong, Yongquan; Sathirachinda, Ataya; Wu, Joseph C.; Gambhir, Sanjiv S.

    2013-01-01

    A transgenic mouse with a stably integrated reporter gene(s) can be a valuable resource for obtaining uniformly labeled stem cells, tissues, and organs for various applications. We have generated a transgenic mouse model that ubiquitously expresses a tri-fusion reporter gene (fluc2-tdTomato-ttk) driven by a constitutive chicken β-actin promoter. This "Tri-Modality Reporter Mouse" system allows one to isolate most cells from this donor mouse and image them for bioluminescent (fluc2), fluorescent...

  14. Color-coded Live Imaging of Heterokaryon Formation and Nuclear Fusion of Hybridizing Cancer Cells.

    Science.gov (United States)

    Suetsugu, Atsushi; Matsumoto, Takuro; Hasegawa, Kosuke; Nakamura, Miki; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M

    2016-08-01

    Fusion of cancer cells has been studied for over half a century. However, the steps involved after initial fusion between cells, such as heterokaryon formation and nuclear fusion, have been difficult to observe in real time. In order to be able to visualize these steps, we have established cancer-cell sublines from the human HT-1080 fibrosarcoma, one expressing green fluorescent protein (GFP) linked to histone H2B in the nucleus and a red fluorescent protein (RFP) in the cytoplasm and the other subline expressing RFP in the nucleus (mCherry) linked to histone H2B and GFP in the cytoplasm. The two reciprocal color-coded sublines of HT-1080 cells were fused using the Sendai virus. The fused cells were cultured on plastic and observed using an Olympus FV1000 confocal microscope. Multi-nucleate (heterokaryotic) cancer cells, in addition to hybrid cancer cells with single- or multiple-fused nuclei, including fused mitotic nuclei, were observed among the fused cells. Heterokaryons with red, green, orange and yellow nuclei were observed by confocal imaging, even in single hybrid cells. The orange and yellow nuclei indicate nuclear fusion. Red and green nuclei remained unfused. Cell fusion with heterokaryon formation and subsequent nuclear fusion resulting in hybridization may be an important natural phenomenon between cancer cells that may make them more malignant. The ability to image the complex processes following cell fusion using reciprocal color-coded cancer cells will allow greater understanding of the genetic basis of malignancy. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  15. Spectrally Consistent Satellite Image Fusion with Improved Image Priors

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Aanæs, Henrik; Jensen, Thomas B.S.

    2006-01-01

    Here an improvement to our previous framework for satellite image fusion is presented, a framework purely based on the sensor physics and on prior assumptions about the fused image. The contributions of this paper are twofold. Firstly, a method for ensuring 100% spectral consistency is proposed......, even when more sophisticated image priors are applied. Secondly, a better image prior is introduced, via data-dependent image smoothing....

  16. Radiological Evaluation of Ambiguous Genitalia with Various Imaging Modalities

    Science.gov (United States)

    Ravi, N.; Bindushree, Kadakola

    2012-07-01

    Disorders of sex development (DSDs) are congenital conditions in which the development of chromosomal, gonadal, or anatomic sex is atypical. These can be classified broadly into four categories on the basis of gonadal histologic features: female pseudohermaphroditism (46,XX with two ovaries); male pseudohermaphroditism (46,XY with two testes); true hermaphroditism (ovotesticular DSD) (both ovarian and testicular tissues); and gonadal dysgenesis, either mixed (a testis and a streak gonad) or pure (bilateral streak gonads). Imaging plays an important role in demonstrating the anatomy and associated anomalies. Ultrasonography is the primary modality for demonstrating internal organs, and magnetic resonance imaging is used as an adjunct modality to assess the internal gonads and genitalia. Early and appropriate gender assignment is necessary for the healthy physical and psychologic development of children with ambiguous genitalia. Gender assignment can be facilitated with a team approach that involves a pediatric endocrinologist, geneticist, urologist, psychiatrist, social worker, neonatologist, nurse, and radiologist, allowing timely diagnosis and proper management. We describe a case series of patients with ambiguous genitalia who presented to our department and were evaluated with multiple imaging modalities.

  17. Establishment study of in vivo imaging analysis with small animal imaging modalities for bio-drug development

    International Nuclear Information System (INIS)

    Jang, Beomsu; Park, Sanghyeon; Choi, Dae Seong; Park, Jeonghoon; Jung, Uhee; Lee, Yun Jong

    2012-01-01

    In this study, we established the imaging modalities (micro-PET, SPECT/CT) using an experimental animal (mouse) for the development of an imaging assessment method for bio-drugs and for extramural collaboration proposals. We carried out micro-SPECT/CT and micro-PET imaging studies using the Siemens Inveon micro-multimodality system with 99mTc tricarbonyl bifunctional chelators (BFCs) and an 18F-clotrimazole derivative. The SPECT imaging studies were performed with the 99mTc tricarbonyl BFCs, and the PET imaging study was performed with the 18F-clotrimazole derivatives using U87MG tumor-bearing mice. We also tested intramural and extramural collaboration using the small animal imaging modalities, and prepared a draft of the extramural R and D operation manual for the small animal imaging modalities and the experimental animal imaging facility. These results can be utilized as basic imaging study protocols and data for the imaging assessment of drugs, including biological drugs.

  18. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng; Xie, Qing; Zhu, Yonghua; Liu, Xingyi; Zhang, Shichao

    2015-01-01

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) represented with multiple

  19. Multispectral analytical image fusion

    International Nuclear Information System (INIS)

    Stubbings, T.C.

    2000-04-01

    With new and advanced analytical imaging methods emerging, the limits of physical analysis capabilities, and furthermore of data acquisition quantities, are constantly pushed, placing high demands on the field of scientific data processing and visualisation. Physical analysis methods like Secondary Ion Mass Spectrometry (SIMS) or Auger Electron Spectroscopy (AES) and others are capable of delivering high-resolution multispectral two-dimensional and three-dimensional image data; usually this multispectral data is available in the form of n separate image files, each showing one element or other singular aspect of the sample. There is a strong need for digital image processing methods enabling the analytical scientist, confronted with such amounts of data routinely, to get rapid insight into the composition of the sample examined, to filter the relevant data and to integrate the information of numerous separate multispectral images to get the complete picture. Sophisticated image processing methods like classification and fusion provide possible solution approaches to this challenge. Classification is a treatment by multivariate statistical means in order to extract analytical information. Image fusion, on the other hand, denotes a process where images obtained from various sensors or at different moments in time are combined together to provide a more complete picture of a scene or object under investigation. Both techniques are important for the task of information extraction and integration, and often one technique depends on the other. Therefore, the overall aim of this thesis is to evaluate the possibilities of both techniques regarding the task of analytical image processing and to find solutions for the integration and condensation of multispectral analytical image data in order to facilitate the interpretation of the enormous amounts of data routinely acquired by modern physical analysis instruments. (author)

  20. Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring

    DEFF Research Database (Denmark)

    Alldieck, Thiemo; Bahnsen, Chris Holmberg; Moeslund, Thomas B.

    2016-01-01

    In order to enable a robust 24-h monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper...... introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two...
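
    The quality-guided fusion idea can be illustrated with a minimal sketch, assuming scalar per-frame quality scores for each modality; the paper derives such scores from contextual information about the scene rather than taking them as given.

        # A minimal sketch of quality-guided fusion of two modalities, assuming
        # per-frame quality scores are supplied externally.
        import numpy as np

        def fuse(rgb_gray, thermal, q_rgb, q_thermal):
            w = q_rgb / (q_rgb + q_thermal)        # normalized modality weight
            return w * rgb_gray + (1 - w) * thermal

        frame = fuse(np.random.rand(120, 160), np.random.rand(120, 160),
                     q_rgb=0.3,      # e.g. low-light RGB judged unreliable
                     q_thermal=0.9)  # thermal unaffected by illumination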

  1. Complimentary Advanced Fusion Exploration

    National Research Council Canada - National Science Library

    Alford, Mark G; Jones, Eric C; Bubalo, Adnan; Neumann, Melissa; Greer, Michael J

    2005-01-01

    .... The focus areas were in the following regimes: multi-tensor homographic computer vision image fusion, out-of-sequence measurement and track data handling, Nash bargaining approaches to sensor management, pursuit-evasion game theoretic modeling...

  2. Imaging of congenital heart disease in adults: choice of modalities.

    Science.gov (United States)

    Orwat, Stefan; Diller, Gerhard-Paul; Baumgartner, Helmut

    2014-01-01

    Major advances in noninvasive imaging of adult congenital heart disease have been accomplished. These tools now play a key role in comprehensive diagnostic work-up, decisions for intervention, evaluation of the suitability of specific therapeutic options, monitoring of interventions and regular follow-up. Besides echocardiography, cardiovascular magnetic resonance (CMR) and computed tomography (CT) have gained particular importance. The choice of imaging modality has thus become a critical issue. This review summarizes the strengths and limitations of the different imaging modalities and how they may be used in a complementary fashion. Echocardiography obviously remains the workhorse of imaging routinely used in all patients. However, in complex disease and after surgery, echocardiography alone frequently remains insufficient. CMR is particularly useful in this setting and allows reproducible and accurate quantification of ventricular function and comprehensive assessment of cardiac anatomy, the aorta, pulmonary arteries and venous return, including complex flow measurements. CT is preferred when CMR is contraindicated, when superior spatial resolution is required or when "metallic" artefacts limit CMR imaging. In conclusion, the use of currently available imaging modalities in adult congenital heart disease needs to be complementary. Echocardiography remains the basic tool, while CMR and CT should be added considering the specific open questions, the ability to answer them, availability and economic issues.

  3. Imaging fusion (SPECT/CT) in degenerative disease of spine

    International Nuclear Information System (INIS)

    Bernal, P.; Ucros, G.; Bermudez, S.; Ocampo, M.

    2007-01-01

    Full text: Objective: To determine the utility of SPECT/CT fusion imaging in degenerative pathology of the spine and to establish the impact of fusion imaging in spinal pain due to degenerative changes of the spine. Materials and methods: 44 patients (M=21, F=23; average age 63 years) with degenerative pathology of the spine were referred to the Diagnostic Imaging department at FSFB. Bone scintigraphy (SPECT), CT of the spine (cervical: 30%, lumbar: 70%) and fusion imaging were performed in all of them. Bone scintigraphy was carried out on a Siemens Diacam double-head gamma camera attached to an ESOFT computer. The images were acquired with a 128 x 128 matrix, 20 s/image, 64 images. CT of the spine was performed the same day or two days later on a helical Siemens Somatom Emotion CT scanner. The fusion was done on a DICOM workstation with sagittal, axial and coronal reconstructions. The findings were evaluated independently by 2 nuclear medicine physicians and 2 radiologists from the staff of FSFB. Results: The bone scans (SPECT) and CT of 44 patients were evaluated. CT showed facet joint osteoarthritis in 27 (61.3%) patients, uncovertebral joint arthrosis in 7 (15.9%), bulging disc in 9 (20.4%), spinal nucleus lesion in 7 (15.9%), osteophytes in 9 (20.4%), spinal foraminal stenosis in 7 (15.9%), and spondylolysis/spondylolisthesis in 4 (9%). Bone scan showed facet joint osteoarthritis in 29 (65.9%), uncovertebral joint arthrosis in 4 (9%), osteophytes in 9 (20.4%), and was normal in 3 (6.8%). The fusion imaging showed coincident findings (main lesion on CT with high uptake on scintigraphy) in 34 patients (77.2%) and no coincidence in 10 (22.8%). In 15 (34.09%) patients the fusion provided additional information. The analysis of the CT and SPECT findings showed similar results in most of the cases; in these the fusion did not provide additional information but allowed confirmation of the findings. Fusion was most valuable when the findings did not match, where the CT showed several findings and SPECT only one area with high uptake

  4. Image fusion using MIM software via picture archiving and communication system

    International Nuclear Information System (INIS)

    Gu Zhaoxiang; Jiang Maosong

    2001-01-01

    Preliminary studies of multimodality image registration and fusion were performed using image fusion software and a picture archiving and communication system (PACS) to explore the methodology. Original image volume data were acquired with a CT scanner, an MR scanner and a dual-head coincidence SPECT, respectively. The data sets from all imaging devices were queried, retrieved, transferred and accessed via the DICOM PACS. The image fusion was performed at the SPECT ICON workstation, where the MIM (Medical Image Merge) fusion software was installed. The images were created by re-slicing the original volume on the fly. The image volumes were aligned by translation and rotation of the viewports with respect to the original volume orientation. The transparency factor and contrast were adjusted so that both volumes could be visualized in the merged images. The image volume data of CT, MR and nuclear medicine were transferred, accessed and loaded via PACS successfully. Perfectly fused images of chest CT/18F-FDG and brain MR/SPECT were obtained. These results showed that the image fusion technique using PACS is feasible and practical. Further experimentation and larger validation studies are needed to explore its full clinical potential

  5. Alternate method to realize image fusion; Metodo alterno para realizar fusion de imagenes

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, L; Hernandez, F; Fernandez, R [Departamento de Medicina Nuclear, Imagenologia Diagnostica. Centro Medico de Xalapa, Veracruz (Mexico)

    2005-07-01

    At present, imaging departments need to fuse images obtained from diverse apparatuses. Conventionally, magnetic resonance or X-ray tomography images are fused with functional images such as gammagrams and PET images. Fusion technology is sold with modern imaging equipment, and not all nuclear medicine departments have access to it. For this reason we analyzed and studied the problem and found a solution so that all nuclear medicine departments can benefit from image fusion. The first indispensable requirement is a personal computer with the capacity to host image-digitizer cards. If the gamma camera can export images in JPG, GIF, TIFF or BMP format, it is also possible to dispense with the digitizer card and record the images on a disk for use on the personal computer. One of the following commercially available graphic design programs is required: Corel Draw, Photoshop, FreeHand, Illustrator or Macromedia Flash; these are the ones we evaluated, and all of them allow image fusion. Any of them works well, and only short training is required to manage them. A digital photographic camera with a resolution of at least 3.0 megapixels is also necessary. The procedure consists of taking photographs of the radiological studies the patient already has, selecting those images that demonstrate the pathology under study and that are concordant with the images created in the gammagraphic studies, whether planar or tomographic. The images are transferred to the personal computer and opened with the graphic design program, together with the gammagraphic images. The program's digital tools are then used to make the images transparent, to crop them, to adjust their sizes and to create the fused images. The process is manual, and skill and experience are required to choose the images, the cuts, the sizes and the degree of transparency. (Author)

  6. Advanced concepts in multi-dimensional radiation detection and imaging

    International Nuclear Information System (INIS)

    Vetter, Kai; Barnowski, Ross; Pavlovsky, Ryan; Haefner, Andy; Torii, Tatsuo; Shikaze, Yoshiaki; Sanada, Yukihisa

    2016-01-01

    Recent developments in detector fabrication, signal readout, and data processing enable new concepts in radiation detection that are relevant for applications ranging from fundamental physics to medicine as well as nuclear security and safety. We present recent progress in multi-dimensional radiation detection and imaging in the Berkeley Applied Nuclear Physics program. It is based on the ability to reconstruct scenes in three dimensions and fuse them with gamma-ray image information. We are using the High-Efficiency Multimode Imager HEMI in its Compton imaging mode and combining it with contextual sensors such as the Microsoft Kinect or visual cameras. This new concept of volumetric imaging or scene data fusion provides unprecedented capabilities in radiation detection and imaging relevant for the detection and mapping of radiological and nuclear materials. This concept brings us one step closer to seeing the world with gamma-ray eyes. (author)

  7. Seizure Onset Detection based on a Uni- or Multi-modal Intelligent Seizure Acquisition (UISA/MISA) System

    DEFF Research Database (Denmark)

    Conradsen, Isa; Beniczky, Sándor; Wolf, Peter

    2010-01-01

    An automatic Uni- or Multi-modal Intelligent Seizure Acquisition (UISA/MISA) system is highly applicable for onset detection of epileptic seizures based on motion data. The modalities used are surface electromyography (sEMG), acceleration (ACC) and angular velocity (ANG). The newly proposed automatic...... algorithm on motion data extracts features as "log-sum" measures of discrete wavelet components. Classification into the two groups "seizure" versus "non-seizure" is made based on the support vector machine (SVM) algorithm. The algorithm performs with a sensitivity of 91-100%, a median latency of 1...... second and a specificity of 100% on multi-modal data from five healthy subjects simulating seizures. The uni-modal algorithm based on sEMG data from the subjects and patients performs satisfactorily in some cases. As expected, our results clearly show superiority of the multimodal approach, as compared...
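
    A minimal sketch of the feature pipeline described above follows, assuming "log-sum" means the logarithm of the summed absolute DWT coefficients per sub-band; the window length, wavelet choice and toy data are illustrative only.

        # A minimal sketch of log-sum DWT features followed by SVM
        # classification, on synthetic sEMG-like segments.
        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def log_sum_features(segment, wavelet="db4", level=4):
            coeffs = pywt.wavedec(segment, wavelet, level=level)
            return np.array([np.log(np.abs(c).sum() + 1e-12) for c in coeffs])

        # toy segments: high-amplitude "seizure" vs low-amplitude "non-seizure"
        rng = np.random.default_rng(0)
        X = np.array([log_sum_features(rng.standard_normal(1024) * s)
                      for s in ([5.0] * 20 + [1.0] * 20)])
        y = np.array([1] * 20 + [0] * 20)
        clf = SVC(kernel="rbf").fit(X, y)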

  8. Mixing of multi-modal images for conformal radiotherapy: application to patient repositioning; Fusion d'images multi-modales pour la radiotherapie conformationnelle: application au repositionnement du patient

    Energy Technology Data Exchange (ETDEWEB)

    Vassal, P

    1998-06-29

    This study presents a procedure for patient repositioning based on a comparative evaluation of the patient's actual position and the theoretically expected position; the offset between the two gives the setup error in translation and rotation. A correction is calculated and applied to the treatment environment (position and orientation of the patient, position and orientation of the irradiation source). The control system makes it possible to determine the precise position of the tumor volume, and echography allows the position of an organ to be determined. The surface sensor is used to localize tumors of the face or brain; with respect to the face, the acquisition and processing of information take only a few minutes. The X-ray imager is used for the localization of bone structures. (N.C.)

  9. Multi-modality imaging findings of huge intrachoroidal cavitation and myopic peripapillary sinkhole.

    Science.gov (United States)

    Chen, Yutong; Ma, Xiaoli; Hua, Rui

    2018-02-02

    Peripapillary intrachoroidal cavitation has been described as the presence of an asymptomatic, well-circumscribed, yellow-orange, peripapillary lesion at the inferior border of the myopic conus in eyes with high myopia. A 66-year-old myopic Chinese man was enrolled, and his multi-color imaging examination showed a well-circumscribed, caesious, peripapillary lesion coalescing with a vertically rotated and obliquely tilted optic nerve head, together with an inferotemporal sinkhole in the myopic conus. The optical coherence tomography images showed an intrachoroidal hyporeflective space, schisis, an intracavitary septum located below the retinal pigment epithelium and inserted beneath the optic nerve head, as well as a sinkhole between the peripapillary intrachoroidal cavitation and the vitreous space. Both myopic colobomas and the sinkhole in the myopic conus may contribute to the coalescence of the intrachoroidal cavitation with the optic nerve head. These new qualitative and quantitative findings will be beneficial for understanding its pathomorphological mechanism and its impact on the optic nerve tissue of myopic patients.

  10. Performance evaluation of a compact PET/SPECT/CT tri-modality system for small animal imaging applications

    International Nuclear Information System (INIS)

    Wei, Qingyang; Wang, Shi; Ma, Tianyu; Wu, Jing; Liu, Hui; Xu, Tianpeng; Xia, Yan; Fan, Peng; Lyu, Zhenlei; Liu, Yaqiang

    2015-01-01

    PET, SPECT and CT imaging techniques are widely used in preclinical small animal imaging applications. In this paper, we present a compact small animal PET/SPECT/CT tri-modality system. A dual-functional, shared detector design is implemented which enables PET and SPECT imaging with the same LYSO ring detector. A multi-pinhole collimator is mounted on the system and inserted into the detector ring in SPECT imaging mode. A cone-beam CT consisting of a micro-focus X-ray tube and a CMOS detector is implemented. The detailed design and the performance evaluations are reported in this paper. In PET imaging mode, the measured NEMA-based spatial resolution is 2.12 mm (FWHM), and the sensitivity at the central field of view (CFOV) is 3.2%. The FOV size is 50 mm (∅)×100 mm (L). The SPECT has a spatial resolution of 1.32 mm (FWHM) and an average sensitivity of 0.031% along the central axis, with a 30 mm (∅)×90 mm (L) FOV. The CT spatial resolution is 8.32 lp/mm @10% MTF, and the contrast discrimination function value is 2.06% for a 1.5 mm cubic box object. In conclusion, a compact tri-modality PET/SPECT/CT system was successfully built with low cost and high performance

  11. Holographic Raman tweezers controlled by multi-modal natural user interface

    International Nuclear Information System (INIS)

    Tomori, Zoltán; Keša, Peter; Nikorovič, Matej; Valušová, Eva; Antalík, Marián; Kaňka, Jan; Jákl, Petr; Šerý, Mojmír; Bernatová, Silvie; Zemánek, Pavel

    2016-01-01

    Holographic optical tweezers provide a contactless way to trap and manipulate several microobjects independently in space using focused laser beams. Although the methods for fast and efficient generation of optical traps are well developed, their user-friendly control still lags behind. Even though several attempts have appeared recently to exploit touch tablets, 2D cameras, or Kinect game consoles, they have not yet reached the level of a natural human interface. Here we demonstrate a multi-modal 'natural user interface' approach that combines finger and gaze tracking with gesture and speech recognition. This allows us to select objects with the operator's gaze and voice, to trap the objects and control their positions via tracking of finger movement in space, and to run semi-automatic procedures such as acquisition of Raman spectra from preselected objects. This approach takes advantage of the power of human image processing together with the smooth control of human fingertips, and downscales these skills to remotely control the motion of microobjects at the microscale in a way that is natural for the human operator. (paper)

  12. Development of EndoTOFPET-US, a multi-modal endoscope for ultrasound and time of flight positron emission tomography

    International Nuclear Information System (INIS)

    Pizzichemi, M

    2014-01-01

    The EndoTOFPET-US project aims at developing a multi-modal imaging device that combines ultrasound with Time-Of-Flight Positron Emission Tomography in an endoscopic imaging device. The goal is to obtain a coincidence time resolution of about 200 ps FWHM and sub-millimetric spatial resolution for the PET head, integrating the components into a very compact detector suitable for endoscopic use. The scanner will be exploited for the clinical testing of new biomarkers especially targeted at prostate and pancreatic cancer, as well as for diagnostic and surgical oncology. This paper focuses on the status of the Time-Of-Flight Positron Emission Tomograph under development for the EndoTOFPET-US project

  13. Image enhancement using thermal-visible fusion for human detection

    Science.gov (United States)

    Zaihidee, Ezrinda Mohd; Hawari Ghazali, Kamarul; Zuki Saleh, Mohd

    2017-09-01

    Increased interest in detecting human beings in video surveillance systems has emerged in recent years. Multisensory image fusion deserves more research attention due to its capability to improve the visual interpretability of an image. This study proposes fusion techniques for human detection based on a multiscale transform, using grayscale visible-light and infrared images. The samples for this study were taken from an online dataset. The images captured by the two sensors were decomposed into high- and low-frequency coefficients using the Stationary Wavelet Transform (SWT). An appropriate fusion rule was then used to merge the coefficients, and the final fused image was obtained using the inverse SWT. The qualitative and quantitative results show that the proposed method is superior to the two other methods in terms of enhancement of the target region and preservation of the detail information of the image.
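
    A minimal sketch of an SWT fusion rule of the kind described above (average the low-frequency coefficients, keep the larger-magnitude high-frequency coefficients) might look as follows, assuming pre-registered single-channel inputs whose side lengths are multiples of 2**level; the exact rule used in the study may differ.

        # A minimal sketch of stationary-wavelet-transform image fusion:
        # average approximations, take max-magnitude details, invert.
        import numpy as np
        import pywt

        def swt_fuse(visible, infrared, wavelet="db2", level=1):
            cv = pywt.swt2(visible, wavelet, level=level)
            ci = pywt.swt2(infrared, wavelet, level=level)
            fused = []
            for (av, dv), (ai, di) in zip(cv, ci):
                a = 0.5 * (av + ai)                      # average approximations
                d = tuple(np.where(np.abs(hv) >= np.abs(hi), hv, hi)
                          for hv, hi in zip(dv, di))     # max-abs details
                fused.append((a, d))
            return pywt.iswt2(fused, wavelet)

        out = swt_fuse(np.random.rand(128, 128), np.random.rand(128, 128))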

  14. Recognition of Wheat Spike from Field Based Phenotype Platform Using Multi-Sensor Fusion and Improved Maximum Entropy Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Chengquan Zhou

    2018-02-01

    Full Text Available To obtain an accurate count of wheat spikes, which is crucial for estimating yield, this paper proposes a new algorithm that uses computer vision to achieve this goal from an image. First, a home-built semi-autonomous multi-sensor field-based phenotype platform (FPP) is used to obtain orthographic images of wheat plots at the filling stage. The data acquisition system of the FPP provides high-definition RGB images and multispectral images of the corresponding quadrats. The high-definition panchromatic images are then obtained by fusing the three RGB channels. The Gram–Schmidt fusion algorithm is then used to fuse these multispectral and panchromatic images, thereby improving the color discriminability of the targets. Next, the maximum entropy segmentation method is used for the coarse segmentation. The threshold of this method is determined by a firefly algorithm based on chaos theory (FACT), and a morphological filter is then used to de-noise the coarse-segmentation results. Finally, morphological reconstruction theory is applied to segment the adhesive parts of the de-noised image and realize the fine segmentation. The computer-generated counting results for the wheat plots, obtained using the independent regional statistical function in Matlab R2017b, are then compared with field measurements, which indicate that the proposed method provides a more accurate count of wheat spikes than the other traditional fusion and segmentation methods mentioned in this paper.
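
    The maximum-entropy (Kapur) thresholding criterion used for the coarse segmentation can be sketched as follows; the paper searches the threshold with a chaos-based firefly algorithm, while this illustration simply scans all grey levels.

        # A minimal sketch of maximum-entropy (Kapur) thresholding: pick the
        # threshold maximizing the summed entropies of the two histogram parts.
        import numpy as np

        def max_entropy_threshold(gray):
            hist = np.bincount(gray.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, 255):
                p0, p1 = p[:t].sum(), p[t:].sum()
                if p0 <= 0 or p1 <= 0:
                    continue
                q0, q1 = p[:t] / p0, p[t:] / p1       # class distributions
                h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
                    - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
                if h > best_h:
                    best_t, best_h = t, h
            return best_t

        img = (np.random.rand(100, 100) * 255).astype(np.uint8)
        print(max_entropy_threshold(img))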

  15. Advanced data visualization and sensor fusion: Conversion of techniques from medical imaging to Earth science

    Science.gov (United States)

    Savage, Richard C.; Chen, Chin-Tu; Pelizzari, Charles; Ramanathan, Veerabhadran

    1993-01-01

    Hughes Aircraft Company and the University of Chicago propose to transfer existing medical imaging registration algorithms to the area of multi-sensor data fusion. The University of Chicago's algorithms have been successfully demonstrated to provide pixel-by-pixel comparison capability for medical sensors with different characteristics. The research will attempt to fuse GOES (Geostationary Operational Environmental Satellite), AVHRR (Advanced Very High Resolution Radiometer), and SSM/I (Special Sensor Microwave Imager) sensor data, which will benefit a wide range of researchers. The algorithms will utilize data visualization and algorithm development tools created by Hughes in its EOSDIS (Earth Observation System Data/Information System) prototyping. This will maximize the work on the fusion algorithms since support software (e.g. input/output routines) will already exist. The research will produce a portable software library with documentation for use by other researchers.

  16. Radiation dose reduction and new image modalities development for interventional C-arm imaging system

    Science.gov (United States)

    Niu, Kai

    Cardiovascular disease and stroke are the leading health problems and causes of death in the US. Owing to their minimally invasive nature and the evolution of image-guided techniques, interventional radiological procedures are becoming more common and are preferred in treating many cardiovascular diseases and strokes. In addition, with the recent advances in hardware and device technology, the speed and efficacy of interventional treatment have significantly improved. This implies that more image modalities can be developed based on the current C-arm system, and patients treated in interventional suites can potentially experience better health outcomes. However, during treatment patients are irradiated with substantial amounts of ionizing radiation at a high dose rate (digital subtraction angiography (DSA) at 3 μGy/frame and 3D cone-beam CT imaging at 0.36 μGy/frame for a Siemens Artis Zee biplane system) and/or for a long irradiation time (a roadmapping image sequence can be as long as one hour during aneurysm embolization). As a result, the patient entrance dose is extremely high. Despite the fact that the radiation dose is already substantial, image quality is not always satisfactory. By default, a temporal average is used in roadmapping images to overcome poor image quality, but this technique can result in motion-blurred images. Therefore, reducing the radiation dose while maintaining or even improving the image quality is an important area for continued research. This thesis is focused on improving the clinical applications of C-arm cone-beam CT systems in two ways: (1) improving the performance of current image modalities on the C-arm system; (2) developing new image modalities based on the current system. To be more specific, the objectives are to reduce the radiation dose for current modalities (e.g., DSA, fluoroscopy, roadmapping, and cone-beam CT) and to enable cone-beam CT perfusion and time-resolved cone-beam CT angiography that can be used to diagnose and triage acute

  17. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
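
    The two on-board degradation steps are simple enough to sketch directly, assuming block averaging for the spatial degradation and band averaging for the spectral one; the degradation factors are illustrative, and the ground-side fusion step is omitted.

        # A minimal sketch of the two on-board degradation steps, assuming
        # 4x4 spatial block averaging and 10-band spectral averaging.
        import numpy as np

        hsi = np.random.rand(128, 128, 100)               # rows x cols x bands

        # step 1: spatial degradation -> low-resolution hyperspectral image
        lr_hsi = hsi.reshape(32, 4, 32, 4, 100).mean(axis=(1, 3))

        # step 2: spectral degradation -> high-resolution multispectral image
        hr_msi = hsi.reshape(128, 128, 10, 10).mean(axis=3)

        # the compression ratio is fixed by the two degradation factors
        ratio = hsi.size / (lr_hsi.size + hr_msi.size)
        print(f"compression ratio = {ratio:.1f}")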

  18. Multi-Level Sensor Fusion Algorithm Approach for BMD Interceptor Applications

    National Research Council Canada - National Science Library

    Allen, Doug

    1998-01-01

    ... through fabrication and testing of advanced sensor hardware concepts and advanced sensor fusion algorithms. Advanced sensor concepts include onboard LADAR in conjunction with a multi-color passive IR sensor...

  19. Performance evaluation of multi-sensor data-fusion systems

    Indian Academy of Sciences (India)

    In this paper, the utilization of multi-sensors of different types, their characteristics, and their data-fusion in launch vehicles to achieve the goal of injecting the satellite into a precise orbit is explained. Performance requirements of sensors and their redundancy management in a typical launch vehicle are also included.

  20. Evaluation of multimodality imaging using image fusion with ultrasound tissue elasticity imaging in an experimental animal model.

    Science.gov (United States)

    Paprottka, P M; Zengel, P; Cyran, C C; Ingrisch, M; Nikolaou, K; Reiser, M F; Clevert, D A

    2014-01-01

    To evaluate ultrasound tissue elasticity imaging by comparison with multimodality imaging using image fusion with magnetic resonance imaging (MRI), and conventional grey-scale imaging with additional elasticity ultrasound, in an experimental small-animal squamous cell carcinoma model for the assessment of tissue morphology. Human hypopharynx carcinoma cells were subcutaneously injected into the left flank of 12 female athymic nude rats. After 10 days (SD ± 2) of subcutaneous tumor growth, sonographic grey-scale imaging, including elasticity imaging, and MRI measurements were performed using a high-end ultrasound system and a 3T MR scanner. For image fusion, the contrast-enhanced MRI DICOM data set was uploaded to the ultrasound device (GE Logiq E9), which has a magnetic field generator, a linear array transducer (6-15 MHz) and a dedicated software package that can detect the transducer by means of a positioning system. Conventional grey-scale and elasticity imaging were integrated into the image fusion examination. After successful registration and image fusion, the registered MR images were shown simultaneously with the respective ultrasound sectional plane. Data evaluation was performed on the digitally stored video sequence data sets by two experienced radiologists using a modified Tsukuba elasticity score. The colors "red and green" are assigned to areas of soft tissue, while "blue" indicates hard tissue. In all cases, successful image fusion and plane registration with MRI and ultrasound imaging, including grey-scale and elasticity imaging, was possible. The mean tumor volume based on caliper measurements in 3 dimensions was ~323 mm3. 4/12 rats were evaluated with Score I, 5/12 rats with Score II, and 3/12 rats with Score III. There was a close correlation in the fused MRI with small necroses present in the tumor. None of the Score II or III lesions was visible on conventional grey-scale imaging. The comparison of ultrasound tissue elasticity imaging enables a

  1. Neutron penumbral imaging of laser-fusion targets

    International Nuclear Information System (INIS)

    Lerche, R.A.; Ress, D.B.

    1988-01-01

    Using a new technique, penumbral coded-aperture imaging, the first neutron images of laser-driven, inertial-confinement fusion targets were obtained. With these images the deuterium-tritium burn region within a compressed target can be measured directly. 4 references, 11 figures

  2. Multisensor fusion in gastroenterology domain through video and echo endoscopic image combination: a challenge

    Science.gov (United States)

    Debon, Renaud; Le Guillou, Clara; Cauvin, Jean-Michel; Solaiman, Basel; Roux, Christian

    2001-08-01

    The medical domain makes intensive use of information fusion. In particular, gastroenterology is a discipline where physicians have the choice between several imaging modalities that offer complementary advantages. Among all existing systems, videoendoscopy (based on a CCD sensor) and echoendoscopy (based on an ultrasound sensor) are the most efficient. The use of each system corresponds to a given step in the physician's diagnostic elaboration. Nowadays, several works aim to achieve automatic interpretation of videoendoscopic sequences. These systems can quantify color and superficial textures of the digestive tube. Unfortunately, the relief information, which is important for the diagnosis, is very difficult to retrieve. On the other hand, some studies have proved that 3D information can be easily quantified using echoendoscopy image sequences. That is why the idea of combining this information, acquired from two very different points of view, can be considered a real challenge for the medical image fusion topic. In this paper, after a review of current work concerning numerical exploitation of videoendoscopy and echoendoscopy, the following question will be discussed: how can the use of the complementary aspects of the different systems ease the automatic exploitation of videoendoscopy? We will then evaluate the feasibility of a realistic 3D reconstruction based on information given by both echoendoscopy (relief) and videoendoscopy (texture). An enumeration of potential applications of such a fusion system will follow. Further discussions and perspectives will conclude this first study.

  3. T2*-weighted image/T2-weighted image fusion in postimplant dosimetry of prostate brachytherapy

    International Nuclear Information System (INIS)

    Katayama, Norihisa; Takemoto, Mitsuhiro; Yoshio, Kotaro

    2011-01-01

    Computed tomography (CT)/magnetic resonance imaging (MRI) fusion is considered to be the best method for postimplant dosimetry of permanent prostate brachytherapy; however, it is inconvenient and costly. In T2*-weighted images (T2*-WI), seeds can be easily detected without the use of an intravenous contrast material. We present a novel method for postimplant dosimetry using T2*-WI/T2-weighted image (T2-WI) fusion. We compared the outcomes of T2*-WI/T2-WI fusion-based and CT/T2-WI fusion-based postimplant dosimetry. Between April 2008 and July 2009, 50 consecutive prostate cancer patients underwent brachytherapy. All the patients were treated with 144 Gy of brachytherapy alone. Dose-volume histogram (DVH) parameters (prostate D90, prostate V100, prostate V150, urethral D10, and rectal D2cc) were prospectively compared between T2*-WI/T2-WI fusion-based and CT/T2-WI fusion-based dosimetry. All the DVH parameters estimated by T2*-WI/T2-WI fusion-based dosimetry correlated strongly with those estimated by CT/T2-WI fusion-based dosimetry (0.77 ≤ R ≤ 0.91). No significant difference was observed in these parameters between the two methods, except for prostate V150 (p=0.04). These results show that T2*-WI/T2-WI fusion-based dosimetry is comparable or superior to MRI-based dosimetry as previously reported, because no intravenous contrast material is required. For some patients, rather large differences were observed between the values obtained by the two methods. We attribute these large differences to seed miscounts in T2*-WI and to shifts in fusion. Improving the image quality of T2*-WI and the image acquisition speed of T2*-WI and T2-WI may decrease seed miscounts and fusion shifts. Therefore, in the future, T2*-WI/T2-WI fusion may be more useful for postimplant dosimetry of prostate brachytherapy. (author)

  4. Research and Realization of Medical Image Fusion Based on Three-Dimensional Reconstruction

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new medical image fusion technique is presented. The method is based on three-dimensional reconstruction. After reconstruction, the three-dimensional volume data sets are normalized by the same three-dimensional coordinate conversion and intersected by a cutting plane containing the anatomical structure of interest; as a result, two images in complete spatial and geometric registration are obtained, and these images are finally fused. Compared with traditional two-dimensional fusion techniques, the three-dimensional fusion technique can not only resolve the differences between the two kinds of images, but also avoid the registration error that arises when the two kinds of images have different scan and imaging parameters. The research proves that this fusion technique is more exact and requires no separate registration step, so it is better adapted to fusing arbitrary medical images from different equipment.

  5. Abdominal Organ Location, Morphology, and Rib Coverage for the 5th, 50th, and 95th Percentile Males and Females in the Supine and Seated Posture using Multi-Modality Imaging.

    Science.gov (United States)

    Hayes, Ashley R; Gayzik, F Scott; Moreno, Daniel P; Martin, R Shayn; Stitzel, Joel D

    The purpose of this study was to use data from a multi-modality image set of males and females representing the 5th, 50th, and 95th percentile (n=6) to examine abdominal organ location, morphology, and rib coverage variations between supine and seated postures. Medical images were acquired from volunteers in three image modalities including Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and upright MRI (uMRI). A manual and semi-automated segmentation method was used to acquire data and a registration technique was employed to conduct a comparative analysis between abdominal organs (liver, spleen, and kidneys) in both postures. Location of abdominal organs, defined by center of gravity movement, varied between postures and was found to be significant (p=0.002 to p=0.04) in multiple directions for each organ. In addition, morphology changes, including compression and expansion, were seen in each organ as a result of postural changes. Rib coverage, defined as the projected area of the ribs onto the abdominal organs, was measured in frontal, lateral, and posterior projections, and also varied between postures. A significant change in rib coverage between postures was measured for the spleen and right kidney (p=0.03 and p=0.02). The results indicate that posture affects the location, morphology and rib coverage area of abdominal organs and these implications should be noted in computational modeling efforts focused on a seated posture.

  6. Optical imaging modalities: From design to diagnosis of skin cancer

    Science.gov (United States)

    Korde, Vrushali Raj

    This study investigates three high resolution optical imaging modalities to better detect and diagnose skin cancer. The ideal high resolution optical imaging system can visualize pre-malignant tissue growth non-invasively with resolution comparable to histology. I examined 3 modalities which approached this goal. The first method examined was high magnification microscopy of thin stained tissue sections, together with a statistical analysis of nuclear chromatin patterns termed Karyometry. This method has subcellular resolution, but it necessitates taking a biopsy at the desired tissue site and imaging the tissue ex-vivo. My part of this study was to develop an automated nuclear segmentation algorithm to segment cell nuclei in skin histology images for karyometric analysis. The results of this algorithm were compared to hand segmented cell nuclei in the same images, and it was concluded that the automated segmentations can be used for karyometric analysis. The second optical imaging modality I investigated was Optical Coherence Tomography (OCT). OCT is analogous to ultrasound, in which sound waves are delivered into the body and the echo time and reflected signal magnitude are measured. Due to the fast speed of light and detector temporal integration times, low coherence interferometry is needed to gate the backscattered light. OCT acquires cross sectional images, and has an axial resolution of 1-15 µm (depending on the source bandwidth) and a lateral resolution of 10-20 µm (depending on the sample arm optics). While it is not capable of achieving subcellular resolution, it is a non-invasive imaging modality. OCT was used in this study to evaluate skin along a continuum from normal to sun damaged to precancer. I developed algorithms to detect statistically significant differences between images of sun protected and sun damaged skin, as well as between undiseased and precancerous skin. An Optical Coherence Microscopy (OCM) endoscope was developed in the third

  7. Feature Fusion Based Road Extraction for HJ-1-C SAR Image

    Directory of Open Access Journals (Sweden)

    Lu Ping-ping

    2014-06-01

    Full Text Available Road network extraction from SAR images is one of the key tasks for both military and civilian applications. To address road extraction from HJ-1-C SAR images, a road extraction algorithm based on the integration of ratio and directional information is proposed. Because of the narrow dynamic range and low signal-to-noise ratio characteristic of HJ-1-C SAR images, a nonlinear quantization and an image filtering method based on a multi-scale autoregressive model are proposed here. A road extraction algorithm based on information fusion, which considers ratio and direction information, is also proposed. By applying the Radon transform, the main road directions can be extracted. Cross interferences can be suppressed, and road continuity can then be improved by main-direction alignment and secondary road extraction. The HJ-1-C SAR image acquired over Wuhan, China was used to evaluate the proposed method. The experimental results show good performance with correctness (80.5%) and quality (70.1%) when applied to a SAR image with complex content.
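
    As a rough illustration of the direction-finding step, dominant road orientations can be read off the Radon transform of a binary road-candidate mask, since a long straight road concentrates projection mass at its own angle. The sketch below is a generic illustration under that assumption; the number of peaks kept is arbitrary.

        import numpy as np
        from skimage.transform import radon

        def main_road_directions(road_mask, n_angles=180, n_peaks=3):
            """Estimate dominant line orientations in a binary road mask."""
            angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
            sinogram = radon(road_mask.astype(float), theta=angles, circle=False)
            # Max over offsets: a long straight road yields a strong,
            # isolated peak at its (offset, angle) bin.
            strength = sinogram.max(axis=0)
            return angles[np.argsort(strength)[-n_peaks:][::-1]]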

  8. Multimodality imaging spectrum of complications of horseshoe kidney

    Directory of Open Access Journals (Sweden)

    Hardik U Shah

    2017-01-01

    Full Text Available Horseshoe kidney is the most common congenital renal fusion anomaly with an incidence of 1 in 400–600 individuals. The most common type is fusion at the lower poles, seen in greater than 90% of the cases, with the rest depicting fusion at the upper poles, resulting in an inverted horseshoe kidney. Embryologically, there are two theories hypothesizing the genesis of horseshoe kidney: the mechanical fusion theory and the teratogenic event theory. As an entity, horseshoe kidney is an association of two anatomic anomalies, namely, ectopia and malrotation. It is also associated with other anomalies including vascular, calyceal, and ureteral anomalies. Horseshoe kidney is prone to a number of complications due to its abnormal position as well as due to associated vascular and ureteral anomalies. Complications associated with horseshoe kidney include pelviureteric junction obstruction, renal stones, infection, tumors, and trauma. It can also be associated with abnormalities of the cardiovascular, central nervous, musculoskeletal and genitourinary systems, as well as chromosomal abnormalities. Conventional imaging modalities (plain films, intravenous urogram) as well as advanced cross-sectional imaging modalities (ultrasound, computed tomography, and magnetic resonance imaging) play an important role in the evaluation of horseshoe kidney. This article briefly describes the embryology and anatomy of the horseshoe kidney, enumerates appropriate imaging modalities used for its evaluation, and reviews cross-sectional imaging features of associated complications.

  9. MR imaging of recurrent hyperparathyroidism in comparison with other imaging modalities

    International Nuclear Information System (INIS)

    Auffermann, W.; Thurnher, S.; Okerland, M.; Levin, K.; Gooding, G.W.; Clark, O.H.; Higgins, C.B.

    1987-01-01

    Thirty patients with recurrent hyperparathyroidism were evaluated with MR imaging, performed using a saddle-shaped surface coil producing 5-mm sections with T1 and T2 weighting. Twenty-six and 22 of these patients also underwent Tl-201 scintigraphy and high-resolution US, respectively. MR imaging accurately localized abnormal parathyroid glands in 75% evaluated prospectively and 86% retrospectively. Scintigraphy localized 64% prospectively and 72% retrospectively. US demonstrated 57% prospectively and 67% retrospectively. MR imaging showed three of four mediastinal adenomas evaluated both prospectively and retrospectively. There were two false-positive studies with MR imaging, two with scintigraphy, and one with US. Thus, MR imaging was the most effective imaging modality for parathyroid localization in recurrent hyperparathyroidism

  10. Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain

    Directory of Open Access Journals (Sweden)

    Yong Yang

    2014-01-01

    Full Text Available Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress pseudo-Gibbs phenomena. The source medical images are initially transformed by the NSCT, followed by fusion of the low- and high-frequency components. Phase congruency, which can provide a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently determine the frequency coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual tree complex wavelet transform (DTCWT) based image fusion methods and other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been verified with a clinical example involving images of a woman with a recurrent tumor.
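
    The high-frequency fusion rule in methods of this family amounts to keeping, at each location, the subband coefficient with the larger local energy. The sketch below shows that selection step with a plain square window as the energy measure; a faithful implementation would compute the energy from Log-Gabor filter responses inside an NSCT decomposition, so treat this as a simplified stand-in.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def fuse_highpass(coefs_a, coefs_b, win=7):
            """Pixel-wise choice of the subband coefficient whose local
            energy in a win x win neighbourhood is larger."""
            energy_a = uniform_filter(coefs_a ** 2, size=win)
            energy_b = uniform_filter(coefs_b ** 2, size=win)
            return np.where(energy_a >= energy_b, coefs_a, coefs_b)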

  11. New false color mapping for image fusion

    NARCIS (Netherlands)

    Toet, A.; Walraven, J.

    1996-01-01

    A pixel based colour mapping algorithm is presented that produces a fused false colour rendering of two gray level images representing different sensor modalities. The resulting fused false colour images have a higher information content than each of the original images and retain sensor-specific
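
    One common way to realize such a pixel-based false-colour mapping is to route the common component of the two sensor images and each sensor's unique component into separate display channels. The sketch below shows that generic pattern; the channel assignment is an illustrative choice, not necessarily the mapping of this record.

        import numpy as np

        def false_color_fuse(img_a, img_b):
            """Map two gray-level sensor images (floats in [0, 1]) to RGB."""
            common = np.minimum(img_a, img_b)   # component seen by both sensors
            unique_a = img_a - common           # detail only sensor A sees
            unique_b = img_b - common           # detail only sensor B sees
            rgb = np.stack([common, unique_a, unique_b], axis=-1)
            return np.clip(rgb, 0.0, 1.0)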

  12. Evaluation of polynomial image deformation for matching of 3D- abdominal MR-images using anatomical landmarks and for atlas construction

    CERN Document Server

    Kimiaei, S; Jonsson, E; Crafoord, J; Maguire, G Q

    1999-01-01

    The aim of this study is to compare and evaluate the potential usability of linear and non-linear (polynomial) 3D warping for constructing an atlas by matching abdominal MR images from a number of different individuals using manually picked anatomical landmarks. The significance of this study lies in the fact that it illustrates the potential to use polynomial matching at a local or organ level. This is a necessary requirement for constructing an atlas and for fine intra-patient image matching and fusion. Finally, 3D image warping using anatomical landmarks for inter-patient intra-modality image co-registration and fusion was found to be a very powerful and robust method. Additionally, it can be used for intra-patient inter-modality image matching.
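
    A polynomial 3D warp of this kind can be fitted to paired landmarks with ordinary least squares, one coefficient column per output coordinate. The following is a generic second-order sketch, not the authors' code; it assumes landmark arrays of shape (n, 3) with n >= 10.

        import numpy as np

        def poly2_basis(pts):
            """10 second-order monomials in (x, y, z), constant term included."""
            x, y, z = pts.T
            return np.column_stack([np.ones_like(x), x, y, z,
                                    x * y, x * z, y * z, x * x, y * y, z * z])

        def fit_poly_warp(src_landmarks, dst_landmarks):
            """Least-squares quadratic 3D warp mapping src to dst landmarks."""
            A = poly2_basis(src_landmarks)
            coeffs, *_ = np.linalg.lstsq(A, dst_landmarks, rcond=None)
            return coeffs                       # shape (10, 3)

        def apply_poly_warp(coeffs, pts):
            return poly2_basis(pts) @ coeffs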

  13. Impact of Medical Therapy on Atheroma Volume Measured by Different Cardiovascular Imaging Modalities

    Directory of Open Access Journals (Sweden)

    Mohamad C. N. Sinno

    2010-01-01

    Full Text Available Atherosclerosis is a systemic disease that affects most vascular beds. The gold standard of atherosclerosis imaging has been invasive intravascular ultrasound (IVUS). Newer noninvasive imaging modalities like B-mode ultrasound, cardiac computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) have been used to assess these vascular territories with high accuracy and reproducibility. These imaging modalities have lately been used for the assessment of the atherosclerotic plaque and the response of its volume to several medical therapies used in the treatment of patients with cardiovascular disease. To study the impact of these medications on atheroma volume progression or regression, imaging modalities have been used on a serial basis, providing a unique opportunity to monitor the effect these antiatherosclerotic strategies exert on plaque burden. As a result, studies incorporating serial IVUS imaging, quantitative coronary angiography (QCA), B-mode ultrasound, electron beam computed tomography (EBCT), and dynamic contrast-enhanced magnetic resonance imaging have all been used to evaluate the impact of therapeutic strategies that modify cholesterol and blood pressure on the progression/regression of atherosclerotic plaque. In this review, we summarize the impact of different therapies aimed at halting the progression of, or even inducing regression in, atherosclerotic cardiovascular disease as evaluated by different imaging modalities.

  14. Graduate Student Perceptions of Multi-Modal Tablet Use in Academic Environments

    Science.gov (United States)

    Bryant, Ezzard C., Jr.

    2016-01-01

    The purpose of this study was to explore graduate student perceptions of use and the ease of use of multi-modal tablets to access electronic course materials, and the perceived differences based on students' gender, age, college of enrollment, and previous experience. This study used the Unified Theory of Acceptance and Use of Technology to…

  15. The establishment of the method of three dimension volumetric fusion of emission and transmission images for PET imaging

    International Nuclear Information System (INIS)

    Zhang Xiangsong; He Zuoxiang

    2004-01-01

    Objective: To establish a method for three-dimensional volumetric fusion of emission and transmission images in PET imaging. Methods: The volume data of emission and transmission images acquired with a Siemens ECAT HR+ PET scanner were transferred to a PC by local area network. The PET volume data were converted to 8-bit byte type and scaled to the range 0-255. The data coordinates of the emission and transmission images were normalized by the same three-dimensional coordinate conversion. The images were fused by alpha blending. The accuracy of the image fusion was confirmed by its clinical application in 13 cases. Results: The three-dimensional volumetric fusion of emission and transmission images clearly displayed the silhouette and anatomic configuration of the chest, including the chest wall, lung, heart, and mediastinum. Forty-eight chest lesions in the 13 cases were accurately located by the image fusion. Conclusions: The volume data of emission and transmission images acquired with the Siemens ECAT HR+ PET scanner share the same data coordinates. The three-dimensional fusion software can be conveniently used for volumetric fusion of emission and transmission images, and it can correctly locate lesions in the chest.
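
    Once both volumes share the same coordinate grid and an 8-bit intensity range, as described above, alpha blending is a single weighted sum per voxel. A minimal sketch, with the blending weight chosen arbitrarily for illustration:

        import numpy as np

        def alpha_blend(emission, transmission, alpha=0.6):
            """Blend two co-registered uint8 volumes; alpha weights emission."""
            fused = (alpha * emission.astype(np.float32)
                     + (1.0 - alpha) * transmission.astype(np.float32))
            return np.clip(fused, 0, 255).astype(np.uint8)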

  16. Automatic recognition of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNNs.

    Science.gov (United States)

    Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan

    2018-06-06

    Ground-glass opacity (GGO) is a common CT imaging sign on high-resolution CT, which means the lesion is more likely to be malignant compared to common solid lung nodules. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. Present GGO recognition methods employ traditional low-level features, and system performance is improving slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNNs in this paper. Our hybrid resampling is performed on multiple views and multiple receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs with multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model. The multi-CNN model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and 0.83 F1 score. Our method is a promising approach for applying deep learning to computer-aided analysis of specific CT imaging signs with insufficient labeled images. Graphical abstract: We propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models. Our hybrid resampling reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs with multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model. Our method is a promising approach for applying deep learning to computer-aided analysis
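
    Fusing several fine-tuned CNNs is commonly implemented by averaging their class-probability outputs; the sketch below shows that generic pattern, which is an assumption for illustration rather than the paper's exact fusion rule.

        import numpy as np

        def fuse_model_outputs(prob_list):
            """Average softmax outputs of several models and take the argmax.

            prob_list: list of arrays, each of shape (n_samples, n_classes).
            """
            fused = np.mean(np.stack(prob_list, axis=0), axis=0)
            return fused.argmax(axis=1)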

  17. Effective Multifocus Image Fusion Based on HVS and BP Neural Network

    Directory of Open Access Journals (Sweden)

    Yong Yang

    2014-01-01

    Full Text Available The aim of multifocus image fusion is to fuse the images taken from the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on human visual system (HVS and back propagation (BP neural network is presented. Three features which reflect the clarity of a pixel are firstly extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct the initial fused image. Thirdly, the focused regions are detected by measuring the similarity between the source images and the initial fused image followed by morphological opening and closing operations. Finally, the final fused image is obtained by a fusion rule for those focused regions. Experimental results show that the proposed method can provide better performance and outperform several existing popular fusion methods in terms of both objective and subjective evaluations.
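
    Typical per-pixel clarity cues for this kind of decision are local variance, gradient energy, and Laplacian energy. The sketch below computes three such features (chosen here for illustration; the record does not name its exact features) that could feed a small classifier such as a BP network.

        import numpy as np
        from scipy.ndimage import uniform_filter, laplace, sobel

        def clarity_features(img, win=7):
            """Stack three per-pixel clarity cues for a gray-level image."""
            mean = uniform_filter(img, win)
            variance = uniform_filter(img * img, win) - mean * mean
            grad_energy = uniform_filter(sobel(img, 0) ** 2
                                         + sobel(img, 1) ** 2, win)
            lap_energy = uniform_filter(laplace(img) ** 2, win)
            return np.stack([variance, grad_energy, lap_energy], axis=-1)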

  18. SAR Data Fusion Imaging Method Oriented to Target Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yang Wei

    2015-02-01

    Full Text Available To address the difficulty of precisely extracting target outlines, caused by neglecting target scattering characteristic variation during the processing of high-resolution space-borne SAR data, a novel fusion imaging method oriented to target feature extraction is proposed. Firstly, several important aspects that affect target feature extraction and SAR image quality are analyzed, including curved orbit, stop-and-go approximation, atmospheric delay, and high-order residual phase error, and the corresponding compensation methods are addressed as well. Based on this analysis, a mathematical model of the SAR echo combined with the target space-time spectrum is established to explain the space-time-frequency change rule of the target scattering characteristic. Moreover, a fusion imaging strategy and method for high-resolution and ultra-large observation angle range conditions are put forward to improve SAR image quality by fusion processing in the range-Doppler and image domains. Finally, simulations based on typical military targets are used to verify the effectiveness of the fusion imaging method.

  19. Analyzer-based imaging of spinal fusion in an animal model

    International Nuclear Information System (INIS)

    Kelly, M E; Beavis, R C; Allen, L A; Fiorella, David; Schueltke, E; Juurlink, B H; Chapman, L D; Zhong, Z

    2008-01-01

    Analyzer-based imaging (ABI) utilizes synchrotron radiation sources to create collimated monochromatic x-rays. In addition to x-ray absorption, this technique uses refraction and scatter rejection to create images. ABI provides dramatically improved contrast over standard imaging techniques. Twenty-one adult male Wistar rats were divided into four experimental groups to undergo the following interventions: (1) non-injured control, (2) decortication alone, (3) decortication with iliac crest bone grafting and (4) decortication with iliac crest bone grafting and interspinous wiring. Surgical procedures were performed at the L5-6 level. Animals were killed at 2, 4 and 6 weeks after the intervention and the spine muscle blocks were excised. Specimens were assessed for the presence of fusion by (1) manual testing, (2) conventional absorption radiography and (3) ABI. ABI showed no evidence of bone fusion in groups 1 and 2 and showed solid or possibly solid fusion in subjects from groups 3 and 4 at 6 weeks. Metal artifacts were not present in any of the ABI images. Conventional absorption radiographs did not provide diagnostic quality imaging of either the graft material or fusion masses in any of the specimens in any of the groups. Synchrotron-based ABI represents a novel imaging technique which can be used to assess spinal fusion in a small animal model. ABI produces superior image quality when compared to conventional radiographs

  20. Analyzer-based imaging of spinal fusion in an animal model

    Science.gov (United States)

    Kelly, M. E.; Beavis, R. C.; Fiorella, David; Schültke, E.; Allen, L. A.; Juurlink, B. H.; Zhong, Z.; Chapman, L. D.

    2008-05-01

    Analyzer-based imaging (ABI) utilizes synchrotron radiation sources to create collimated monochromatic x-rays. In addition to x-ray absorption, this technique uses refraction and scatter rejection to create images. ABI provides dramatically improved contrast over standard imaging techniques. Twenty-one adult male Wistar rats were divided into four experimental groups to undergo the following interventions: (1) non-injured control, (2) decortication alone, (3) decortication with iliac crest bone grafting and (4) decortication with iliac crest bone grafting and interspinous wiring. Surgical procedures were performed at the L5-6 level. Animals were killed at 2, 4 and 6 weeks after the intervention and the spine muscle blocks were excised. Specimens were assessed for the presence of fusion by (1) manual testing, (2) conventional absorption radiography and (3) ABI. ABI showed no evidence of bone fusion in groups 1 and 2 and showed solid or possibly solid fusion in subjects from groups 3 and 4 at 6 weeks. Metal artifacts were not present in any of the ABI images. Conventional absorption radiographs did not provide diagnostic quality imaging of either the graft material or fusion masses in any of the specimens in any of the groups. Synchrotron-based ABI represents a novel imaging technique which can be used to assess spinal fusion in a small animal model. ABI produces superior image quality when compared to conventional radiographs.

  1. Multi-layered nanoparticles for penetrating the endosome and nuclear membrane via a step-wise membrane fusion process.

    Science.gov (United States)

    Akita, Hidetaka; Kudo, Asako; Minoura, Arisa; Yamaguti, Masaya; Khalil, Ikramy A; Moriguchi, Rumiko; Masuda, Tomoya; Danev, Radostin; Nagayama, Kuniaki; Kogure, Kentaro; Harashima, Hideyoshi

    2009-05-01

    Efficient targeting of DNA to the nucleus is a prerequisite for effective gene therapy. The gene-delivery vehicle must penetrate through the plasma membrane and the DNA-impermeable double-membraned nuclear envelope, and deposit its DNA cargo in a form ready for transcription. Here we introduce a concept for overcoming intracellular membrane barriers that involves step-wise membrane fusion. To achieve this, a nanotechnology was developed that creates a multi-layered nanoparticle, which we refer to as a Tetra-lamellar Multi-functional Envelope-type Nano Device (T-MEND). The critical structural elements of the T-MEND are a DNA-polycation condensed core coated with two nuclear-membrane-fusogenic inner envelopes and two endosome-fusogenic outer envelopes, which are shed in stepwise fashion. A double-lamellar membrane structure is required for nuclear delivery via stepwise fusion with the double-layered nuclear membrane. Intracellular membrane fusions with endosomes and nuclear membranes were verified by spectral imaging of fluorescence resonance energy transfer (FRET) between donor and acceptor fluorophores that had been dually labeled on the liposome surface. Coating the core with the minimum number of nucleus-fusogenic lipid envelopes (i.e., 2) is essential to facilitate transcription. As a result, the T-MEND achieves dramatic levels of transgene expression in non-dividing cells.

  2. Development of a multi-scale and multi-modality imaging system to characterize tumours and their microenvironment in vivo

    Science.gov (United States)

    Rouffiac, Valérie; Ser-Leroux, Karine; Dugon, Emilie; Leguerney, Ingrid; Polrot, Mélanie; Robin, Sandra; Salomé-Desnoulez, Sophie; Ginefri, Jean-Christophe; Sebrié, Catherine; Laplace-Builhé, Corinne

    2015-03-01

    In vivo high-resolution imaging of tumor development is possible through a dorsal skinfold chamber implanted on a mouse model. However, current intravital imaging systems are poorly tolerated by mice over time and do not allow multimodality imaging. Our project aims to develop a new chamber for: 1- long-term micro/macroscopic visualization of the tumor (vascular and cellular compartments) and tissue microenvironment; and 2- multimodality imaging (photonic, MRI and sonography). Our new experimental device was patented in March 2014 and was primarily assessed on 75 mice engrafted with the 4T1-Luc tumor cell line, and validated in confocal and multiphoton imaging after staining the mouse vasculature using Dextran 155 kDa-TRITC or Dextran 2000 kDa-FITC. Simultaneously, a universal stage was designed for optimal removal of respiratory and cardiac artifacts during microscopy assays. Experimental results from optical, ultrasound (B-mode and pulse subtraction mode) and MRI imaging (anatomic sequences) showed that our patented design, unlike commercial devices, improves longitudinal monitoring over several weeks (35 days on average against 12 for the commercial chamber) and allows a better characterization of the early and late tissue alterations due to tumour development. We also demonstrated compatibility with multimodality imaging and a 2.9-fold increase in mouse survival with our new skinfold chamber. Current developments include: 1- defining new procedures for multi-labelling of cells and tissue (screening of fluorescent molecules and imaging protocols); 2- developing ultrasound and MRI imaging procedures with specific probes; 3- correlating optical/ultrasound/MRI data for a complete mapping of tumour development and microenvironment.

  3. Perception-oriented fusion of multi-sensor imagery: visible, IR, and SAR

    Science.gov (United States)

    Sidorchuk, D.; Volkov, V.; Gladilin, S.

    2018-04-01

    This paper addresses the problem of fusing optical (visible and thermal domain) data and radar data for the purpose of visualization. These types of images typically contain a lot of complementary information, and their joint visualization can be more useful and convenient for a human user than a set of individual images. To solve the image fusion problem we propose a novel algorithm that exploits some peculiarities of human color perception and is based on grey-scale structural visualization. The benefits of the presented algorithm are exemplified with satellite imagery.

  4. MO-B-BRC-00: Prostate HDR Treatment Planning - Considering Different Imaging Modalities

    International Nuclear Information System (INIS)

    2016-01-01

    Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions

  5. MO-B-BRC-00: Prostate HDR Treatment Planning - Considering Different Imaging Modalities

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2016-06-15

    Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.

  6. External marker-based fusion of functional and morphological images

    International Nuclear Information System (INIS)

    Kremp, S.; Schaefer, A.; Alexander, C.; Kirsch, C.M.

    1999-01-01

    The fusion of image data resulting from morphology-oriented methods like CT and MRI with functional information from nuclear medicine (SPECT, PET) is frequently applied to allow a better association between functional findings and anatomical structures. A new software package was developed to provide image fusion of PET, SPECT, MRI and CT data within a short processing period, for brain as well as whole-body examinations, in particular of the thorax and abdomen. The software utilizes external markers (brain) or anatomical landmarks (thorax) for correlation. The fusion requires a period of approx. 15 min. The examples shown emphasize the high gain in diagnostic information achieved by fusing image data of anatomical and functional methods. (orig.) [de

  7. Fusion of SPECT/CT images: Usefulness and benefits in degenerative spinal cord pathology

    International Nuclear Information System (INIS)

    Ocampo, Monica; Ucros, Gonzalo; Bermudez, Sonia; Morillo, Anibal; Rodriguez, Andres

    2005-01-01

    The objectives are to compare CT and SPECT bone scintigraphy evaluated independently with SPECT-CT fusion images in patients with known degenerative spinal pathology, and to demonstrate the clinical usefulness of CT and SPECT fusion images. Materials and methods: Thirty-one patients with suspected degenerative spinal disease were evaluated with thin-slice, non-angled helical CT and bone scintigrams with single photon emission computed tomography (SPECT), both with multiplanar reconstructions, within a 24-hour period. After independent evaluation by a nuclear medicine specialist and a radiologist, multimodality image fusion software was used to merge the CT and SPECT studies, and a final consensus interpretation of the combined images was obtained. Results: Thirty-two SPECT bone scintigraphy images, helical CT studies and SPECT-CT fusion images were obtained for 31 patients with degenerative spinal disease. The results of the bone scintigraphy and CT scans were in agreement in 17 pairs of studies (53.12%). In these studies image fusion did not provide additional information on the location or extension of the lesions. In 11 of the study pairs (34.2%), the information was not in agreement between the scintigraphy and CT studies: CT images demonstrated several abnormalities whereas the SPECT images showed only one dominant lesion, or the SPECT images did not provide enough information for anatomical localization. In these cases image fusion helped establish the precise localization of the most clinically significant lesion, which matched the lesion with the greatest uptake. In 4 studies (12.5%) the CT and SPECT images were not in agreement: CT and SPECT images showed different information (normal scintigraphy, abnormal CT), thus leading to inconclusive fusion images. Conclusion: The use of CT-SPECT fusion images in degenerative spinal disease allows for the integration of anatomic detail with physiologic and functional information. CT-SPECT fusion improves the

  8. Multimodal Imaging of Human Brain Activity: Rational, Biophysical Aspects and Modes of Integration

    Science.gov (United States)

    Blinowska, Katarzyna; Müller-Putz, Gernot; Kaiser, Vera; Astolfi, Laura; Vanderperren, Katrien; Van Huffel, Sabine; Lemieux, Louis

    2009-01-01

    Until relatively recently the vast majority of imaging and electrophysiological studies of human brain activity have relied on single-modality measurements usually correlated with readily observable or experimentally modified behavioural or brain state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging applications, we also review some of the basic physiology relevant to understanding their relationship. PMID:19547657

  9. Analysis and Evaluation of IKONOS Image Fusion Algorithm Based on Land Cover Classification

    Institute of Scientific and Technical Information of China (English)

    Xia JING; Yan BAO

    2015-01-01

    Different fusion algorithm has its own advantages and limitations,so it is very difficult to simply evaluate the good points and bad points of the fusion algorithm. Whether an algorithm was selected to fuse object images was also depended upon the sensor types and special research purposes. Firstly,five fusion methods,i. e. IHS,Brovey,PCA,SFIM and Gram-Schmidt,were briefly described in the paper. And then visual judgment and quantitative statistical parameters were used to assess the five algorithms. Finally,in order to determine which one is the best suitable fusion method for land cover classification of IKONOS image,the maximum likelihood classification( MLC) was applied using the above five fusion images. The results showed that the fusion effect of SFIM transform and Gram-Schmidt transform were better than the other three image fusion methods in spatial details improvement and spectral information fidelity,and Gram-Schmidt technique was superior to SFIM transform in the aspect of expressing image details. The classification accuracy of the fused image using Gram-Schmidt and SFIM algorithms was higher than that of the other three image fusion methods,and the overall accuracy was greater than 98%. The IHS-fused image classification accuracy was the lowest,the overall accuracy and kappa coefficient were 83. 14% and 0. 76,respectively. Thus the IKONOS fusion images obtained by the Gram-Schmidt and SFIM were better for improving the land cover classification accuracy.

  10. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng

    2015-05-28

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) represented by multiple visual features, the MMKR first maps them into a high-dimensional space, e.g., a reproducing kernel Hilbert space (RKHS), where test images are then linearly reconstructed by some representative training images, rather than all of them. Furthermore, a classification rule is proposed to classify test images. Experimental results on real datasets show the effectiveness of the proposed MMKR in comparison to state-of-the-art algorithms.

  11. PET CT imaging: the Philippine experience

    International Nuclear Information System (INIS)

    Santiago, Jonas Y.

    2011-01-01

    Currently, the most discussed fusion imaging modality is PET CT. Fusion technology has tremendous potential in diagnostic imaging to detect numerous conditions such as tumors, Alzheimer's disease, dementia and neural disorders. The fusion of PET with CT helps in the localization of molecular abnormalities, thereby increasing diagnostic accuracy and differentiating benign or artefactual lesions from malignant disease. It uses a radiotracer called fluorodeoxyglucose that gives a clear distinction between pathological and physiological uptake. Interest in this technology is increasing, and additional clinical validation is likely to induce more health care providers to invest in combined scanners. It is hoped that, in time, a better appreciation of its advantages over conventional and traditional imaging modalities will be realized. The first PET CT facility in the country was established at St. Luke's Medical Center in Quezon City in 2008 and has since provided a state-of-the-art imaging modality to its patients here and those from other countries. The paper will present the experience so far gained from its operation, including the measures and steps currently taken by the facility to ensure optimum worker and patient safety. Plans and programs to further enhance the awareness of the Filipino public of this advanced imaging modality for an improved health care delivery system may also be discussed briefly. (author)

  12. Data fusion of multi-scale representations for structural damage detection

    Science.gov (United States)

    Guo, Tian; Xu, Zili

    2018-01-01

    Despite extensive research into structural health monitoring (SHM) in the past decades, there are few methods that can detect multiple instances of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms is utilized to search for damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both sets of results demonstrate that the proposed method has a superior noise-tolerant ability, as well as damage sensitivity, without requiring knowledge of material properties or boundary conditions.

  13. A New Multi-Sensor Track Fusion Architecture for Multi-Sensor Information Integration

    Science.gov (United States)

    2004-09-01

    [Standard report documentation page residue removed. Performing organization: Lockheed Martin Aeronautical Systems Company, Marietta, GA.] Abstract fragment: ...tracking process and degrades the track accuracy. ARCHITECTURE OF MULTI-SENSOR TRACK FUSION MODEL: The Alpha

  14. Improving lateral resolution and image quality of optical coherence tomography by the multi-frame superresolution technique for 3D tissue imaging.

    Science.gov (United States)

    Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R

    2017-11-01

    The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low-resolution C-scan 3D images, with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing at each depth layer reconstructs a lateral image of higher resolution and quality. Layer-by-layer processing yields an overall high lateral resolution, high quality 3D image. In theory, the superresolution processing, including deconvolution, can address the diffraction limit, lateral scan density, and background noise problems together. In experiment, an approximately threefold improvement in lateral resolution, reaching 7.81 µm and 2.19 µm using sample arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, has been confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprint and retina layers, we used a multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans due to random minor unintended body motion. Further processing of these images generated high lateral resolution 3D images as well as high quality B-scan images of these in vivo tissues.
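
    The core of multi-frame superresolution is to place several sub-pixel-shifted low-resolution frames onto a common finer grid and combine them, typically followed by deconvolution. The sketch below shows a basic shift-and-add variant for one en-face layer; the integer upscaling factor and the assumption of known shifts are simplifications for illustration.

        import numpy as np

        def shift_and_add(frames, shifts, factor=2):
            """Basic multi-frame superresolution for one depth layer.

            frames: list of (rows, cols) low-resolution images
            shifts: list of (dy, dx) sub-pixel shifts in low-res pixels
            factor: integer upsampling factor of the high-res grid
            """
            rows, cols = frames[0].shape
            acc = np.zeros((rows * factor, cols * factor))
            cnt = np.zeros_like(acc)
            for frame, (dy, dx) in zip(frames, shifts):
                # Nearest high-res bin for each low-res sample of this frame.
                ys = np.clip(np.round((np.arange(rows) + dy) * factor).astype(int),
                             0, rows * factor - 1)
                xs = np.clip(np.round((np.arange(cols) + dx) * factor).astype(int),
                             0, cols * factor - 1)
                acc[np.ix_(ys, xs)] += frame
                cnt[np.ix_(ys, xs)] += 1.0
            return acc / np.maximum(cnt, 1.0)   # average where samples landed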

  15. REMOTE SENSING DATA FUSION TO DETECT ILLICIT CROPS AND UNAUTHORIZED AIRSTRIPS

    OpenAIRE

    Pena, J. A.; Yumin, T.; Liu, H.; Zhao, B.; Garcia, J. A.; Pinto, J.

    2018-01-01

    Remote sensing data fusion plays an increasingly important role in crop planting area monitoring, especially for the acquisition of crop area information. Multi-temporal data and multi-spectral time series are two major avenues for improving crop identification accuracy. Remote sensing fusion provides high quality multi-spectral and panchromatic images in terms of spectral and spatial information, respectively. In this paper, we take one step further and prove the application of remote se...

  16. Fusion method of SAR and optical images for urban object extraction

    Science.gov (United States)

    Jia, Yonghong; Blum, Rick S.; Li, Fangfang

    2007-11-01

    A new image fusion method for SAR, panchromatic (Pan) and multispectral (MS) data is proposed. First of all, SAR texture is extracted by ratioing the despeckled SAR image to its low-pass approximation, and is used to modulate the high-pass details extracted from the available Pan image by means of the à trous wavelet decomposition. Then, the high-pass details modulated by the texture are applied to obtain the fusion product using the HPFM (high-pass filter-based modulation) fusion method. A set of image data including co-registered Landsat TM, ENVISAT SAR and SPOT Pan is used for the experiment. The results demonstrate accurate spectral preservation on vegetated regions, bare soil, and also on textured areas (buildings and road network) where SAR texture information enhances the fusion product; the proposed approach is effective for image interpretation and classification.
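
    Both ingredients named here (the low-pass approximation used for SAR texture ratioing, and the à trous high-pass details of the Pan image) can be sketched with a single B3-spline smoothing kernel. The code below is a generic one-level illustration of that idea, not the authors' implementation; the kernel choice and the single decomposition level are assumptions.

        import numpy as np
        from scipy.ndimage import convolve

        # B3-spline kernel commonly used in the a trous (starlet) scheme
        B3 = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0

        def sar_texture(sar_despeckled, eps=1e-6):
            """SAR texture: ratio of the image to its low-pass approximation."""
            lowpass = convolve(sar_despeckled, B3, mode='nearest')
            return sar_despeckled / (lowpass + eps)

        def hpfm_band(ms_band, pan, sar_despeckled):
            """Inject texture-modulated Pan high-pass details into one MS band."""
            pan_detail = pan - convolve(pan, B3, mode='nearest')
            return ms_band + pan_detail * sar_texture(sar_despeckled)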

  17. Fourier domain image fusion for differential X-ray phase-contrast breast imaging

    International Nuclear Information System (INIS)

    Coello, Eduardo; Sperl, Jonathan I.; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne

    2017-01-01

    X-ray phase-contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase shift and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method allows complementary information from the three acquired signals to be presented in one single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment validated that the combination of all the relevant diagnostic features contained in the XPC images was present in the fused image as well.
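
    A Fourier-domain fusion of this kind can be sketched as a frequency-weighted blend: low spatial frequencies are taken from the attenuation image, which keeps the familiar X-ray appearance, while high-frequency content is drawn increasingly from the phase and scattering channels. The Gaussian weighting and equal high-frequency split below are illustrative assumptions, not the published algorithm.

        import numpy as np

        def fourier_fuse(attenuation, phase, scatter, cutoff=0.1):
            """Blend three co-registered XPC channels in the Fourier domain."""
            fy = np.fft.fftfreq(attenuation.shape[0])[:, None]
            fx = np.fft.fftfreq(attenuation.shape[1])[None, :]
            w_low = np.exp(-(np.hypot(fy, fx) / cutoff) ** 2)  # low-pass weight
            w_high = 1.0 - w_low
            spec = (w_low * np.fft.fft2(attenuation)
                    + 0.5 * w_high * (np.fft.fft2(phase) + np.fft.fft2(scatter)))
            return np.real(np.fft.ifft2(spec))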

  18. Fourier domain image fusion for differential X-ray phase-contrast breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Coello, Eduardo, E-mail: eduardo.coello@tum.de [GE Global Research, Garching (Germany); Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality, Institut für Informatik, Technische Universität München, Garching (Germany); Sperl, Jonathan I.; Bequé, Dirk [GE Global Research, Garching (Germany); Benz, Tobias [Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality, Institut für Informatik, Technische Universität München, Garching (Germany); Scherer, Kai; Herzen, Julia [Lehrstuhl für Biomedizinische Physik, Physik-Department & Institut für Medizintechnik, Technische Universität München, Garching (Germany); Sztrókay-Gaul, Anikó; Hellerhoff, Karin [Institute for Clinical Radiology, Ludwig-Maximilians-University Hospital, Munich (Germany); Pfeiffer, Franz [Lehrstuhl für Biomedizinische Physik, Physik-Department & Institut für Medizintechnik, Technische Universität München, Garching (Germany); Cozzini, Cristina [GE Global Research, Garching (Germany); Grandl, Susanne [Institute for Clinical Radiology, Ludwig-Maximilians-University Hospital, Munich (Germany)

    2017-04-15

    X-ray phase-contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase shift and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method allows complementary information from the three acquired signals to be presented in one single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment validated that the combination of all the relevant diagnostic features contained in the XPC images was present in the fused image as well.

  19. 3D reconstruction of microvascular flow phantoms with hybrid imaging modalities

    Science.gov (United States)

    Lin, Jingying; Hsiung, Kevin; Ritenour, Russell; Golzarian, Jafar

    2011-03-01

    Microvascular flow phantoms were built to aid the development of a hemodynamic simulation model for treating hepatocellular carcinoma. The goal is to predict blood flow routing for embolotherapy planning. Embolization delivers agents (e.g. microspheres) to the vicinity of the tumor to obstruct the blood supply and nutrients to the tumor, targeting 30-40 μm arterioles. Due to the size of the catheter, microspheres must be released at an upstream location, which may not localize the blocking effect. Accurate anatomical descriptions of the microvasculature will help to conduct a reliable simulation and prepare a successful embolization strategy. Modern imaging devices can generate 3D reconstructions with ease. However, with a fixed detector size, a larger field of view yields lower resolution. Clinical CT images cannot be used to measure micro vessel dimensions, while micro-CT requires more acquisitions to reconstruct larger vessels. A multi-tiered, montage 3D reconstruction method with hybrid-modality imagery is devised to minimize the reconstruction effort. Regular CT is used for larger vessels and micro-CT is used for micro vessels. The montage approach aims to stitch together images with different resolutions and orientations. A resolution-adaptable 3D image registration method was developed to assemble the images. We have created vessel phantoms that consist of several tiers of bifurcating polymer tubes of decreasing diameter, down to 25 μm. No previous physical flow phantom work has ventured to this small a scale. Overlapping phantom images acquired from clinical CT and micro-CT are used to verify the image registration fidelity.

  20. Comparison of imaging modalities for diagnosis of thyroid disorders

    International Nuclear Information System (INIS)

    Pfannenstiel, P.; Hirsch, H.; Stein, N.; Maier, R.; Meindl, S.; Voges, K.; Willmann, L.

    1984-01-01

    From April 1, 1980 to March 30, 1983, thyroid scanning was performed in more than 10,000 patients using a rectilinear scanner and, predominantly, a gamma camera with a special collimator and a data processing unit for evaluation of global and regional uptake of 99mTcO4- or 123I. The functional scans were compared with the structural information provided by real-time ultrasonography of the thyroid, employing in questionable cases specially developed computer-aided pattern recognition methods. Other modalities such as a multiwire camera, X-ray fluorescence, CT imaging, and NMR tomography were also evaluated in selected cases. From the data, a diagnostic strategy for the use of imaging modalities to diagnose thyroid diseases was derived. (orig.) [de

  1. Automatic image fusion of real-time ultrasound with computed tomography images: a prospective comparison between two auto-registration methods.

    Science.gov (United States)

    Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-11-01

    Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]) and complete image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). Number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.

  2. Image registration/fusion software for PET and CT/MRI by using simultaneous emission and transmission scans

    International Nuclear Information System (INIS)

    Kitamura, Keishi; Amano, Masaharu; Sato, Tomohiko; Okumura, Takeshi; Konishi, Norihiro; Komatsu, Masahiko

    2003-01-01

    When PET (positron emission tomography) is used for oncology studies, it is important to register and overlay PET images with images from other anatomical modalities, such as those obtained by CT (computed tomography) or MRI (magnetic resonance imaging), so that lesions can be anatomically located with high accuracy. The Shimadzu SET-2000W Series PET scanners provide simultaneous acquisition of emission and transmission data, which enables complete spatial alignment of the functional and attenuation images. This report describes our newly developed image registration/fusion software, which reformats PET emission images to the CT/MRI grid by using the transform matrix obtained by matching PET transmission images with CT/MRI images. Transmission images are registered and fused either automatically or manually, through 3-dimensional rotation and translation, with the transaxial, sagittal, and coronal fused images monitored on the screen. This new method permits sufficiently accurate registration and efficient data processing, promoting effective use of CT/MRI images in the DICOM format, without using markers during data acquisition or any special equipment such as a combined PET/CT scanner. (author)
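
    The core idea, deriving the transform by matching the transmission image to the CT and then applying the same matrix to the emission image, can be sketched with an off-the-shelf toolkit such as SimpleITK (an assumption here, as are the file names; the report's own software is not public):

      import SimpleITK as sitk

      ct = sitk.ReadImage("ct.nii", sitk.sitkFloat32)              # hypothetical files
      trans = sitk.ReadImage("pet_transmission.nii", sitk.sitkFloat32)
      emis = sitk.ReadImage("pet_emission.nii", sitk.sitkFloat32)

      # Rigid (rotation + translation) registration of transmission to CT.
      reg = sitk.ImageRegistrationMethod()
      reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
      reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                   minStep=1e-4,
                                                   numberOfIterations=200)
      reg.SetInitialTransform(
          sitk.CenteredTransformInitializer(ct, trans, sitk.Euler3DTransform()))
      reg.SetInterpolator(sitk.sitkLinear)
      tx = reg.Execute(ct, trans)

      # Reuse the matrix to reformat the emission image onto the CT grid.
      emis_on_ct = sitk.Resample(emis, ct, tx, sitk.sitkLinear, 0.0)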

  3. A Multi-Classification Method of Improved SVM-based Information Fusion for Traffic Parameters Forecasting

    Directory of Open Access Journals (Sweden)

    Hongzhuan Zhao

    2016-04-01

    Full Text Available With the enrichment of perception methods, a modern transportation system contains many physical objects whose states are influenced by numerous information factors, making it a typical Cyber-Physical System (CPS). The traffic information is therefore generally multi-sourced, heterogeneous and hierarchical. Existing research shows that accurately classifying multi-sourced traffic information during information fusion yields better parameter-forecasting performance. To solve the problem of accurate traffic information classification, this paper analyses the characteristics of multi-sourced traffic information and uses a redefined binary tree to overcome the shortcomings of the original Support Vector Machine (SVM) classification in information fusion, proposing a multi-classification method using an improved SVM for traffic parameter forecasting. An experiment was conducted to examine the performance of the proposed scheme, and the results reveal that the method produces more accurate and practical outcomes.
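
    The redefined binary tree itself is not specified in the abstract; a generic binary-tree multi-class SVM, in which each node trains one binary SVM to peel off one class from those remaining, could be sketched as follows (the class ordering and all names are illustrative assumptions):

      import numpy as np
      from sklearn.svm import SVC

      class BinaryTreeSVM:
          """One SVM per tree node: class order[i] vs. all classes after it."""
          def __init__(self, order):
              self.order = list(order)            # class labels, root split first
              self.nodes = []

          def fit(self, X, y):
              X, y = np.asarray(X), np.asarray(y)
              for i, cls in enumerate(self.order[:-1]):
                  mask = np.isin(y, self.order[i:])   # samples still undecided
                  clf = SVC(kernel="rbf").fit(X[mask], (y[mask] == cls).astype(int))
                  self.nodes.append((cls, clf))
              return self

          def predict(self, X):
              X = np.asarray(X)
              out = np.full(len(X), self.order[-1], dtype=object)
              undecided = np.ones(len(X), dtype=bool)
              for cls, clf in self.nodes:
                  hit = undecided.copy()
                  hit[undecided] = clf.predict(X[undecided]) == 1
                  out[hit] = cls          # peeled off at this node
                  undecided &= ~hit       # the rest continue down the tree
              return out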

  4. Multi-modal MRI analysis with disease-specific spatial filtering: initial testing to predict mild cognitive impairment patients who convert to Alzheimer’s disease

    Directory of Open Access Journals (Sweden)

    Kenichi eOishi

    2011-08-01

    Full Text Available Background: Alterations of the gray and white matter have been identified in Alzheimer’s disease (AD) by structural MRI and diffusion tensor imaging (DTI). However, whether the combination of these modalities could increase the diagnostic performance is unknown. Methods: Participants included 19 AD patients, 22 amnestic mild cognitive impairment (aMCI) patients, and 22 cognitively normal elderly (NC). The aMCI group was further divided into an aMCI-converter group (converted to AD dementia within three years) and an aMCI-stable group who did not convert in this time period. A T1-weighted image, a T2 map, and a DTI of each participant were normalized, and voxel-based comparisons between the AD and NC groups were performed. Regions-of-interest, which defined the areas with significant differences between AD and NC, were created for each modality and named disease-specific spatial filters (DSF). Linear discriminant analysis was used to optimize the combination of multiple MRI measurements extracted with the DSF to effectively differentiate AD from NC. The resultant DSF and discriminant function were applied to the aMCI group to investigate the power to differentiate the aMCI-converters from the aMCI-stable patients. Results: The multi-modal approach with AD-specific filters led to a predictive model with an area under the receiver operating characteristic curve (AUC) of 0.93 in differentiating aMCI-converters from aMCI-stable patients. This AUC was better than that of single-contrast-based approaches, such as T1-based morphometry or diffusion anisotropy analysis. Conclusion: The multi-modal approach has the potential to increase the value of MRI in predicting conversion from aMCI to AD.
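
    The discriminant step could look like the sketch below, in which each subject is represented by the mean value of each MRI contrast inside its disease-specific spatial filter, and a linear discriminant fitted on AD vs. NC is then applied to new subjects (the feature construction and the synthetic stand-in data are assumptions):

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def dsf_features(images, filters):
          """Mean of each MRI contrast inside its disease-specific spatial filter.
          images: modality -> (n_subjects, n_voxels); filters: modality -> bool mask."""
          cols = [images[m][:, filters[m]].mean(axis=1) for m in sorted(images)]
          return np.column_stack(cols)

      # Synthetic stand-in data: 19 AD + 22 NC subjects, three contrasts.
      rng = np.random.default_rng(0)
      images = {m: rng.normal(size=(41, 5000)) for m in ("T1", "T2", "FA")}
      filters = {m: rng.random(5000) < 0.05 for m in images}
      labels = np.r_[np.ones(19), np.zeros(22)]

      lda = LinearDiscriminantAnalysis().fit(dsf_features(images, filters), labels)
      scores = lda.decision_function(dsf_features(images, filters))  # AD-likeness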

  5. Dual-modality imaging with a ultrasound-gamma device for oncology

    Science.gov (United States)

    Polito, C.; Pellegrini, R.; Cinti, M. N.; De Vincentis, G.; Lo Meo, S.; Fabbri, A.; Bennati, P.; Cencelli, V. Orsolini; Pani, R.

    2018-06-01

    Recently, dual-modality systems have been developed, aimed at correlating anatomical and functional information, improving disease localization and supporting oncological or surgical treatment. Moreover, given the growing interest in handheld detectors for preclinical trials and small-animal imaging, this work proposes a new integrated dual-modality device based on an ultrasound probe and a small field-of-view single-photon emission gamma camera.

  6. Development of technology for medical image fusion

    International Nuclear Information System (INIS)

    Yamaguchi, Takashi; Amano, Daizou

    2012-01-01

    With entry into the field of medical diagnosis in mind, we have developed the positron emission tomography (PET) ''MIP-100'' system, whose spatial resolution is far higher than that of conventional systems, using semiconductor detectors for preclinical imaging of small animals. In response to the recently increasing market demand to fuse functional PET images with anatomical images from CT or MRI, we have been developing software that implements an image fusion function to enhance the marketability of the PET camera. This paper describes the method of fusing the PET images with anatomical CT images with high accuracy. It also reports that a computer simulation showed the image overlay accuracy to be ±0.3 mm, and that the effectiveness of the developed software was confirmed in experiments with measured data. Achieving an accuracy of ±0.3 mm in software allows us to present fusion images with high resolution (<0.6 mm) without degrading the spatial resolution (<0.5 mm) of the semiconductor-detector PET system. (author)

  7. Assessment of Spatiotemporal Fusion Algorithms for Planet and Worldview Images.

    Science.gov (United States)

    Kwan, Chiman; Zhu, Xiaolin; Gao, Feng; Chou, Bryan; Perez, Daniel; Li, Jiang; Shen, Yuzhong; Koperski, Krzysztof; Marchisio, Giovanni

    2018-03-31

    Although Worldview-2 (WV) images (non-pansharpened) have 2-m resolution, the re-visit times for the same areas may be seven days or more. In contrast, Planet images are collected by small satellites that cover the whole Earth almost daily, but their resolution is 3.125 m. It would be ideal to fuse images from these two satellites to generate high spatial resolution (2 m) and high temporal resolution (1 or 2 days) imagery for applications such as damage assessment and border monitoring that require quick decisions. In this paper, we evaluate three approaches to fusing Worldview (WV) and Planet images: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), which have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and Worldview images demonstrated that the three approaches have comparable performance and can all generate high-quality prediction images.
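
    Of the three methods, Hybrid Color Mapping is the simplest to illustrate: a linear mapping from the coarse bands to the fine bands is learned on a date where both images exist and then applied on a prediction date. The sketch below fits a single global map with least squares; the published HCM works patch-wise, so this is a simplified assumption:

      import numpy as np

      def fit_color_mapping(coarse, fine):
          """Learn a linear map: fine_bands ~ M @ [coarse_bands, 1].
          coarse, fine: (bands, height, width) arrays on the same grid."""
          b, h, w = coarse.shape
          X = np.vstack([coarse.reshape(b, -1), np.ones((1, h * w))])   # (b+1, N)
          Y = fine.reshape(fine.shape[0], -1)                           # (bf, N)
          return Y @ np.linalg.pinv(X)                                  # (bf, b+1)

      def apply_color_mapping(M, coarse):
          b, h, w = coarse.shape
          X = np.vstack([coarse.reshape(b, -1), np.ones((1, h * w))])
          return (M @ X).reshape(-1, h, w)

      # Train on a pair date (Planet + Worldview), predict on a Planet-only date:
      # M = fit_color_mapping(planet_t0, worldview_t0)
      # predicted_wv_t1 = apply_color_mapping(M, planet_t1)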

  8. Continuous monitoring of arthritis in animal models using optical imaging modalities

    Science.gov (United States)

    Son, Taeyoon; Yoon, Hyung-Ju; Lee, Saseong; Jang, Won Seuk; Jung, Byungjo; Kim, Wan-Uk

    2014-10-01

    Given the several difficulties associated with histology, including the difficulty of continuous monitoring, this study aimed to investigate the feasibility of three optical imaging modalities, cross-polarization color (CPC) imaging, erythema index (EI) imaging, and laser speckle contrast (LSC) imaging, for continuous evaluation and monitoring of arthritis in animal models. C57BL/6 mice, used for the evaluation of arthritis, were divided into three groups: an arthritic mice group (AMG), a positive control mice group (PCMG), and a negative control mice group (NCMG). Complete Freund's adjuvant, mineral oil, and saline were injected into the footpad for AMG, PCMG, and NCMG, respectively. LSC and CPC images were acquired from 0 through 144 h after injection for all groups, and EI images were calculated from the CPC images. Variations in foot area, EI, and speckle index for each mouse group over time were calculated for quantitative evaluation of arthritis. Histological examinations were performed, and the results were consistent with those from the optical imaging analysis. Thus, optical imaging modalities may be successfully applied for continuous evaluation and monitoring of arthritis in animal models.
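
    Of these modalities, the laser speckle contrast is straightforward to compute: within a small sliding window, the contrast K is the ratio of the local standard deviation to the local mean of the raw speckle image, with lower K indicating more motion (flow). A minimal sketch, with the window size as an assumption:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def speckle_contrast(raw, window=7):
          """Spatial laser speckle contrast K = sigma / mean over a local window."""
          raw = raw.astype(np.float64)
          mean = uniform_filter(raw, size=window)
          mean_sq = uniform_filter(raw**2, size=window)
          var = np.maximum(mean_sq - mean**2, 0.0)   # clamp rounding error
          return np.sqrt(var) / (mean + 1e-12)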

  9. Multi-dimensional imaging

    CERN Document Server

    Javidi, Bahram; Andres, Pedro

    2014-01-01

    Provides a broad overview of advanced multidimensional imaging systems, with contributions from leading researchers in the field. Multi-dimensional Imaging takes the reader from the introductory concepts through to the latest applications of these techniques. It is split into three parts covering 3D image capture, processing, visualization and display, using (1) a multi-view approach and (2) a holographic approach, followed by a third part addressing other 3D systems approaches, applications and signal processing for advanced 3D imaging. This book describes recent developments, as well as the prospects and

  10. Role of magnetic resonance urography in pediatric renal fusion anomalies

    International Nuclear Information System (INIS)

    Chan, Sherwin S.; Ntoulia, Aikaterini; Khrichenko, Dmitry; Back, Susan J.; Darge, Kassa; Tasian, Gregory E.; Dillman, Jonathan R.

    2017-01-01

    Renal fusion is on a spectrum of congenital abnormalities that occur due to disruption of the migration process of the embryonic kidneys from the pelvis to the retroperitoneal renal fossae. Clinically, renal fusion anomalies are often found incidentally and associated with increased risk for complications, such as urinary tract obstruction, infection and urolithiasis. These anomalies are most commonly imaged using ultrasound for anatomical definition and less frequently using renal scintigraphy to quantify differential renal function and assess urinary tract drainage. Functional magnetic resonance urography (fMRU) is an advanced imaging technique that combines the excellent soft-tissue contrast of conventional magnetic resonance (MR) images with the quantitative assessment based on contrast medium uptake and excretion kinetics to provide information on renal function and drainage. fMRU has been shown to be clinically useful in evaluating a number of urological conditions. A highly sensitive and radiation-free imaging modality, fMRU can provide detailed morphological and functional information that can facilitate conservative and/or surgical management of children with renal fusion anomalies. This paper reviews the embryological basis of the different types of renal fusion anomalies, their imaging appearances at fMRU, complications associated with fusion anomalies, and the important role of fMRU in diagnosing and managing children with these anomalies. (orig.)

  11. Role of magnetic resonance urography in pediatric renal fusion anomalies

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Sherwin S. [Children's Mercy Hospital, Department of Radiology, Kansas City, MO (United States); Ntoulia, Aikaterini; Khrichenko, Dmitry [The Children's Hospital of Philadelphia, Division of Body Imaging, Department of Radiology, Philadelphia, PA (United States); Back, Susan J.; Darge, Kassa [The Children's Hospital of Philadelphia, Division of Body Imaging, Department of Radiology, Philadelphia, PA (United States); University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA (United States); Tasian, Gregory E. [University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA (United States); The Children's Hospital of Philadelphia, Division of Urology, Department of Surgery, Philadelphia, PA (United States); Dillman, Jonathan R. [Cincinnati Children's Hospital Medical Center, Division of Thoracoabdominal Imaging, Department of Radiology, Cincinnati, OH (United States)

    2017-12-15

    Renal fusion is on a spectrum of congenital abnormalities that occur due to disruption of the migration process of the embryonic kidneys from the pelvis to the retroperitoneal renal fossae. Clinically, renal fusion anomalies are often found incidentally and associated with increased risk for complications, such as urinary tract obstruction, infection and urolithiasis. These anomalies are most commonly imaged using ultrasound for anatomical definition and less frequently using renal scintigraphy to quantify differential renal function and assess urinary tract drainage. Functional magnetic resonance urography (fMRU) is an advanced imaging technique that combines the excellent soft-tissue contrast of conventional magnetic resonance (MR) images with the quantitative assessment based on contrast medium uptake and excretion kinetics to provide information on renal function and drainage. fMRU has been shown to be clinically useful in evaluating a number of urological conditions. A highly sensitive and radiation-free imaging modality, fMRU can provide detailed morphological and functional information that can facilitate conservative and/or surgical management of children with renal fusion anomalies. This paper reviews the embryological basis of the different types of renal fusion anomalies, their imaging appearances at fMRU, complications associated with fusion anomalies, and the important role of fMRU in diagnosing and managing children with these anomalies. (orig.)

  12. Hand hygiene and healthcare system change within multi-modal promotion: a narrative review.

    Science.gov (United States)

    Allegranzi, B; Sax, H; Pittet, D

    2013-02-01

    Many factors may influence the level of compliance with hand hygiene recommendations by healthcare workers. Lack of products and facilities as well as their inappropriate and non-ergonomic location represent important barriers. Targeted actions aimed at making hand hygiene practices feasible during healthcare delivery by ensuring that the necessary infrastructure is in place, defined as 'system change', are essential to improve hand hygiene in healthcare. In particular, access to alcohol-based hand rubs (AHRs) enables appropriate and timely hand hygiene performance at the point of care. The feasibility and impact of system change within multi-modal strategies have been demonstrated both at institutional level and on a large scale. The introduction of AHRs overcomes some important barriers to best hand hygiene practices and is associated with higher compliance, especially when integrated within multi-modal strategies. Several studies demonstrated the association between AHR consumption and reduction in healthcare-associated infection, in particular, meticillin-resistant Staphylococcus aureus bacteraemia. Recent reports demonstrate the feasibility and success of system change implementation on a large scale. The World Health Organization and other investigators have reported the challenges and encouraging results of implementing hand hygiene improvement strategies, including AHR introduction, in settings with limited resources. This review summarizes the available evidence demonstrating the need for system change and its importance within multi-modal hand hygiene improvement strategies. This topic is also discussed in a global perspective and highlights some controversial issues.

  13. Value of image fusion using single photon emission computed tomography with integrated low dose computed tomography in comparison with a retrospective voxel-based method in neuroendocrine tumours

    International Nuclear Information System (INIS)

    Amthauer, H.; Denecke, T.; Ruf, J.; Gutberlet, M.; Felix, R.; Lemke, A.J.; Rohlfing, T.; Boehmig, M.; Ploeckinger, U.

    2005-01-01

    The objective was to evaluate single photon emission computed tomography (SPECT) with integrated low-dose computed tomography (CT) in comparison with a retrospective voxel-based fusion of SPECT and high-resolution CT, and with side-by-side analysis, for lesion localisation in patients with neuroendocrine tumours. Twenty-seven patients were examined by multidetector CT. Additionally, as part of somatostatin receptor scintigraphy (SRS), an integrated SPECT-CT was performed. SPECT and CT data were fused using software with a registration algorithm based on normalised mutual information. The reliability of the topographic assignment of lesions in SPECT-CT, retrospective fusion and side-by-side analysis was evaluated by two blinded readers. Two patients were excluded from the final analysis because of misregistrations in the retrospective fusion. Eighty-seven foci were included in the analysis. For the anatomical assignment of foci, SPECT-CT and retrospective fusion revealed overall accuracies of 91 and 94% (side-by-side analysis 86%). The correct identification of foci as lymph node manifestations (n=25) was more accurate with retrospective fusion (88%) than with SPECT-CT images (76%) or side-by-side analysis (60%). Both modalities of image fusion appear to be well suited for the localisation of SRS foci and are superior to side-by-side analysis of non-fused images, especially concerning lymph node manifestations. (orig.)
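
    The registration metric named here, normalised mutual information, can be computed from a joint intensity histogram; a common definition is NMI = (H(A) + H(B)) / H(A, B). A minimal sketch, with the bin count as an assumption:

      import numpy as np

      def normalised_mutual_information(a, b, bins=64):
          """NMI = (H(A) + H(B)) / H(A,B) from a joint intensity histogram."""
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          p_ab = joint / joint.sum()
          p_a = p_ab.sum(axis=1)
          p_b = p_ab.sum(axis=0)

          def entropy(p):
              p = p[p > 0]
              return -np.sum(p * np.log(p))

          return (entropy(p_a) + entropy(p_b)) / entropy(p_ab.ravel())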

  14. Adaptive polarization image fusion based on regional energy dynamic weighted average

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yong-qiang; PAN Quan; ZHANG Hong-cai

    2005-01-01

    According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, polarized images contain much redundant and complementary information. Since man-made and natural objects can be easily distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detailed information about the scene, combining these images can efficiently remove clutter while maintaining the detailed information. An algorithm for adaptive polarization image fusion based on a regional-energy dynamic weighted average is proposed in this paper to combine these images. In an experiment and in simulations, most clutter is removed by this algorithm. The fusion method is applied under different light conditions in simulation, and the influence of lighting conditions on the fusion results is analyzed.
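
    A regional-energy dynamic weighted average of this general kind can be sketched as follows: compute the local energy of each source image in a small neighbourhood and blend the images with per-pixel weights proportional to that energy (the window size and energy definition are assumptions):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def regional_energy_fusion(img_a, img_b, window=5):
          """Weighted average with weights from local (regional) energy."""
          a = img_a.astype(np.float64)
          b = img_b.astype(np.float64)
          e_a = uniform_filter(a**2, size=window)     # regional energy of A
          e_b = uniform_filter(b**2, size=window)     # regional energy of B
          w_a = e_a / (e_a + e_b + 1e-12)             # dynamic per-pixel weight
          return w_a * a + (1.0 - w_a) * b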

  15. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    Science.gov (United States)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct mechanical vibration and acoustic frequency spectra of a data-driven industrial process parameter model based on selective fusion multi-condition samples and multi-source features. Multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. Genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
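
    The adaptive weighted fusion that combines the sub-model outputs is not detailed in the abstract; the classic inverse-error-variance rule, in which each sub-model is weighted by the reciprocal of its validation error variance, is one standard choice and is sketched below:

      import numpy as np

      def adaptive_weighted_fusion(predictions, errors):
          """Combine sub-model predictions with inverse-error-variance weights.
          predictions: (n_models, n_samples); errors: residuals per model."""
          var = np.array([np.var(e) for e in errors])
          w = 1.0 / (var + 1e-12)
          w /= w.sum()
          return w @ np.asarray(predictions)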

  16. LapTrain: multi-modality training curriculum for laparoscopic cholecystectomy-results of a randomized controlled trial.

    Science.gov (United States)

    Kowalewski, K F; Garrow, C R; Proctor, T; Preukschas, A A; Friedrich, M; Müller, P C; Kenngott, H G; Fischer, L; Müller-Stich, B P; Nickel, F

    2018-02-12

    Multiple training modalities for laparoscopy have different advantages, but little research has been conducted on the benefit of a training program that includes multiple different training methods compared to one method only. This study aimed to evaluate benefits of a combined multi-modality training program for surgical residents. Laparoscopic cholecystectomy (LC) was performed on a porcine liver as the pre-test. Randomization was stratified for experience to the multi-modality Training group (12 h of training on Virtual Reality (VR) and box trainer) or Control group (no training). The post-test consisted of a VR LC and porcine LC. Performance was rated with the Global Operative Assessment of Laparoscopic Skills (GOALS) score by blinded experts. Training (n = 33) and Control (n = 31) were similar in the pre-test (GOALS: 13.7 ± 3.4 vs. 14.7 ± 2.6; p = 0.198; operation time 57.0 ± 18.1 vs. 63.4 ± 17.5 min; p = 0.191). In the post-test porcine LC, Training had improved GOALS scores (+ 2.84 ± 2.85 points, p < 0.001), while Control did not (+ 0.55 ± 2.34 points, p = 0.154). Operation time in the post-test was shorter for Training vs. Control (40.0 ± 17.0 vs. 55.0 ± 22.2 min; p = 0.012). Junior residents improved GOALS scores to the level of senior residents (pre-test: 13.7 ± 2.7 vs. 18.3 ± 2.9; p = 0.010; post-test: 15.5 ± 3.4 vs. 18.8 ± 3.8; p = 0.120) but senior residents remained faster (50.1 ± 20.6 vs. 25.0 ± 1.9 min; p < 0.001). No differences were found between groups on the post-test VR trainer. Structured multi-modality training is beneficial for novices to improve basics and overcome the initial learning curve in laparoscopy as well as to decrease operation time for LCs in different stages of experience. Future studies should evaluate multi-modality training in comparison with single modalities. German Clinical Trials Register DRKS00011040.

  17. A color fusion method of infrared and low-light-level images based on visual perception

    Science.gov (United States)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images, obtained by fusing infrared and low-light-level images, contain the information of both channels and can help observers understand multichannel imagery comprehensively. However, simple fusion may lose target information, because targets are often inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods, and the infrared and low-light-level color fusion images are produced based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method: the fusion images achieved by our algorithm not only improve the target detection rate but also retain rich natural information about the scenes.
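
    As a baseline against which such perception-based methods are judged, the simplest color fusion maps the two sensors to different display channels. The channel assignment below is one common convention, not the authors' method:

      import numpy as np

      def false_color_fusion(lll, ir):
          """Baseline false-color fusion: IR -> red, LLL -> green,
          their difference -> blue. Inputs are 2-D arrays scaled to [0, 1]."""
          rgb = np.zeros(lll.shape + (3,))
          rgb[..., 0] = ir                              # red: thermal targets
          rgb[..., 1] = lll                             # green: scene detail
          rgb[..., 2] = np.clip(lll - ir, 0.0, 1.0)     # blue: LLL-only content
          return rgb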

  18. Application of principal component analysis and information fusion technique to detect hotspots in NOAA/AVHRR images of Jharia coalfield, India - article no. 013523

    Energy Technology Data Exchange (ETDEWEB)

    Gautam, R.S.; Singh, D.; Mittal, A. [Indian Institute of Technology Roorkee, Roorkee (India)

    2007-07-01

    This paper proposes an algorithm for hotspot (sub-surface fire) detection in NOAA/AVHRR images of the Jharia region of India, employing Principal Component Analysis (PCA) and a fusion technique. The proposed technique is simple to implement and more adaptive than thresholding, multi-thresholding and contextual algorithms. The algorithm takes into account the information of AVHRR channels 1, 2, 3 and 4 and the vegetation indices NDVI and MSAVI. It consists of three steps: (1) detection and removal of cloud and water pixels from the preprocessed AVHRR image and screening out the noise of channel 3; (2) application of PCA to the multi-channel information along with the vegetation-index information of the NOAA/AVHRR image to obtain principal components; and (3) fusion of the information in principal components 1 and 2 to classify image pixels as hotspots. Image processing techniques are applied to fuse the information in the first two principal-component images, and no absolute threshold is used to decide whether a particular pixel belongs to the hotspot class; hence the proposed method is adaptive in nature and works successfully for most AVHRR images, with an average detection accuracy of 87.27% and a false alarm rate of 0.201% when compared against ground-truth points in the Jharia region of India.
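
    Step (2) amounts to treating each pixel as a feature vector of channel and vegetation-index values and projecting it onto principal components; a sketch with scikit-learn (the band stacking and the final fusion rule are assumptions):

      import numpy as np
      from sklearn.decomposition import PCA

      def principal_component_images(bands, n_components=2):
          """bands: (n_bands, height, width) stack, e.g. AVHRR ch1, ch2, ch3,
          ch4, NDVI, MSAVI. Returns (n_components, height, width) PC images."""
          n, h, w = bands.shape
          X = bands.reshape(n, -1).T                  # pixels as feature vectors
          pcs = PCA(n_components=n_components).fit_transform(X)
          return pcs.T.reshape(n_components, h, w)

      # Hotspot classification would then fuse PC1 and PC2, e.g. by flagging
      # pixels that are jointly extreme in both components (illustrative rule).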

  19. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Directory of Open Access Journals (Sweden)

    Kan Ren

    2014-01-01

    Full Text Available We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from compressive measurements. A set of multiscale dictionaries is then learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear-weights fusion rule is proposed to obtain the high-resolution image. Experiments were conducted to investigate the performance of the proposed method, and the results demonstrate its superiority over its counterparts.

  20. Multimodality cranial image fusion using external markers applied via a vacuum mouthpiece and a case report

    International Nuclear Information System (INIS)

    Sweeney, R.A.; Seydl, K.; Lukas, P.; Bale, R.J.; Trieb, T.; Moncayo, R.; Donnemiller, E.; Eisner, W.; Burtscher, J.; Stockhammer, G.

    2003-01-01

    Purpose: To present a simple and precise method of combining functional information from cranial SPECT and PET images with CT and MRI, in any combination. Material and Methods: Imaging is performed with a hockey mask-like reference frame carrying image modality-specific markers in precisely defined positions. This frame is reproducibly connected to the VBH vacuum mouthpiece, guaranteeing objectively identical repositioning of the frame with respect to the cranium. Using these markers, the desired 3-D imaging modalities can then be registered manually or automatically. This information can be used for diagnosis, treatment planning, and evaluation of follow-up, while the same vacuum mouthpiece allows precisely reproducible stereotactic head fixation during radiotherapy. Results: 244 CT and MR data sets of 49 patients were registered to a mean root mean square error (RMSE) of 0.9 mm. 64 SPECT-CT fusions in 18 of these patients gave an RMSE of 1.4 mm, and 40 PET-CT data sets of eight patients were registered to 1.3 mm. An example of the method is given by means of a case report of a 52-year-old patient with bilateral optic nerve meningioma. Conclusion: This technique is a simple, objective and accurate registration tool that combines diagnosis, treatment planning, treatment, and follow-up, all via an individualized vacuum mouthpiece. Especially for low-resolution PET, and even more so for some very diffuse SPECT data sets, activity can now be accurately correlated to anatomic structures. (orig.)
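
    Marker-based registration of this kind reduces to a classical least-squares rigid alignment of paired 3-D points, solvable in closed form with an SVD (the Kabsch/Horn method). A minimal sketch, including the RMSE figure of merit reported in the study:

      import numpy as np

      def rigid_from_landmarks(src, dst):
          """Least-squares rigid transform (R, t) mapping src points onto dst.
          src, dst: (n_markers, 3) arrays of corresponding fiducial positions."""
          src_c = src - src.mean(axis=0)
          dst_c = dst - dst.mean(axis=0)
          U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
          R = Vt.T @ D @ U.T
          t = dst.mean(axis=0) - R @ src.mean(axis=0)
          return R, t

      def rmse(src, dst, R, t):
          """Fiducial registration error, reported in the study as RMSE (mm)."""
          residual = dst - (src @ R.T + t)
          return np.sqrt((residual**2).sum(axis=1).mean())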

  1. Remote Sensing Data Fusion to Detect Illicit Crops and Unauthorized Airstrips

    Science.gov (United States)

    Pena, J. A.; Yumin, T.; Liu, H.; Zhao, B.; Garcia, J. A.; Pinto, J.

    2018-04-01

    Remote sensing data fusion plays an increasingly important role in crop planting area monitoring, especially for the acquisition of crop area information. Multi-temporal data and multi-spectral time series are two major means of improving crop identification accuracy. Remote sensing fusion provides high-quality multi-spectral and panchromatic images in terms of spectral and spatial information, respectively. In this paper, we take one step further and demonstrate the application of remote sensing data fusion to detecting illicit crops through LSMM, GOBIA, and MCE analysis of strategic information. This methodology emerges as a complementary and effective strategy to control and eradicate illicit crops.

  2. Multi-channel motor evoked potential monitoring during anterior cervical discectomy and fusion

    Directory of Open Access Journals (Sweden)

    Dong-Gun Kim

    Full Text Available Objectives: Anterior cervical discectomy and fusion (ACDF) surgery is the most common surgical procedure for the cervical spine, with a low complication rate. Despite its potential prognostic benefit, intraoperative neurophysiological monitoring (IONM), a method for detecting impending neurological compromise, is not routinely used in ACDF surgery. The present study aimed to identify the potential benefits of monitoring multi-channel motor evoked potentials (MEPs) during ACDF surgery. Methods: We retrospectively reviewed 200 consecutive patients who received IONM with multi-channel MEPs and somatosensory evoked potentials (SSEPs). On average, 9.2 muscles per patient were evaluated under MEP monitoring. Results: The rate of MEP change during surgery in the multi-level ACDF group was significantly higher than in the single-level group. Two patients from the single-level ACDF group (1.7%) and four patients from the multi-level ACDF group (4.9%) experienced post-operative motor deficits. Multi-channel MEP monitoring during single- and multi-level ACDF surgery demonstrated higher sensitivity, specificity, positive predictive and negative predictive value than SSEP monitoring. Conclusions: Multi-channel MEP monitoring might be beneficial for the detection of segmental injury as well as long tract injury during single- and multi-level ACDF surgery. Significance: This is the first large-scale study to identify the usefulness of multi-channel MEPs in monitoring ACDF surgery. Keywords: Disc disease, Somatosensory evoked potentials, Intraoperative neurophysiological monitoring, Motor evoked potentials, Anterior cervical discectomy and fusion

  3. Prediction of Quadcopter State through Multi-Microphone Side-Channel Fusion

    NARCIS (Netherlands)

    Koops, Hendrik Vincent; Garg, Kashish; Kim, Munsung; Li, Jonathan; Volk, Anja; Franchetti, Franz

    Improving trust in the state of Cyber-Physical Systems becomes increasingly important as more tasks become autonomous. We present a multi-microphone machine learning fusion approach to accurately predict complex states of a quadcopter drone in flight from the sound it makes using audio content

  4. RadMAP: The Radiological Multi-sensor Analysis Platform

    International Nuclear Information System (INIS)

    Bandstra, Mark S.; Aucott, Timothy J.; Brubaker, Erik; Chivers, Daniel H.; Cooper, Reynold J.; Curtis, Joseph C.; Davis, John R.; Joshi, Tenzing H.; Kua, John; Meyer, Ross; Negut, Victor; Quinlan, Michael; Quiter, Brian J.; Srinivasan, Shreyas; Zakhor, Avideh; Zhang, Richard; Vetter, Kai

    2016-01-01

    The variability of gamma-ray and neutron background during the operation of a mobile detector system greatly limits the ability of the system to detect weak radiological and nuclear threats. The natural radiation background measured by a mobile detector system is the result of many factors, including the radioactivity of nearby materials, the geometric configuration of those materials and the system, the presence of absorbing materials, and atmospheric conditions. Background variations tend to be highly non-Poissonian, making it difficult to set robust detection thresholds using knowledge of the mean background rate alone. The Radiological Multi-sensor Analysis Platform (RadMAP) system is designed to allow the systematic study of natural radiological background variations and to serve as a development platform for emerging concepts in mobile radiation detection and imaging. To do this, RadMAP has been used to acquire extensive, systematic background measurements and correlated contextual data that can be used to test algorithms and detector modalities at low false alarm rates. By combining gamma-ray and neutron detector systems with data from contextual sensors, the system enables the fusion of data from multiple sensors into novel data products. The data are curated in a common format that allows for rapid querying across all sensors, creating detailed multi-sensor datasets that are used to study correlations between radiological and contextual data, and develop and test novel techniques in mobile detection and imaging. In this paper we will describe the instruments that comprise the RadMAP system, the effort to curate and provide access to multi-sensor data, and some initial results on the fusion of contextual and radiological data.

  5. RadMAP: The Radiological Multi-sensor Analysis Platform

    Energy Technology Data Exchange (ETDEWEB)

    Bandstra, Mark S., E-mail: msbandstra@lbl.gov [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Aucott, Timothy J. [Department of Nuclear Engineering, University of California Berkeley, CA (United States); Brubaker, Erik [Sandia National Laboratory, Livermore, CA (United States); Chivers, Daniel H.; Cooper, Reynold J. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Curtis, Joseph C. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Department of Nuclear Engineering, University of California Berkeley, CA (United States); Davis, John R. [Department of Nuclear Engineering, University of California Berkeley, CA (United States); Joshi, Tenzing H.; Kua, John; Meyer, Ross; Negut, Victor; Quinlan, Michael; Quiter, Brian J. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Srinivasan, Shreyas [Department of Nuclear Engineering, University of California Berkeley, CA (United States); Department of Electrical Engineering and Computer Science, University of California Berkeley, CA (United States); Zakhor, Avideh; Zhang, Richard [Department of Electrical Engineering and Computer Science, University of California Berkeley, CA (United States); Vetter, Kai [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Department of Nuclear Engineering, University of California Berkeley, CA (United States)

    2016-12-21

    The variability of gamma-ray and neutron background during the operation of a mobile detector system greatly limits the ability of the system to detect weak radiological and nuclear threats. The natural radiation background measured by a mobile detector system is the result of many factors, including the radioactivity of nearby materials, the geometric configuration of those materials and the system, the presence of absorbing materials, and atmospheric conditions. Background variations tend to be highly non-Poissonian, making it difficult to set robust detection thresholds using knowledge of the mean background rate alone. The Radiological Multi-sensor Analysis Platform (RadMAP) system is designed to allow the systematic study of natural radiological background variations and to serve as a development platform for emerging concepts in mobile radiation detection and imaging. To do this, RadMAP has been used to acquire extensive, systematic background measurements and correlated contextual data that can be used to test algorithms and detector modalities at low false alarm rates. By combining gamma-ray and neutron detector systems with data from contextual sensors, the system enables the fusion of data from multiple sensors into novel data products. The data are curated in a common format that allows for rapid querying across all sensors, creating detailed multi-sensor datasets that are used to study correlations between radiological and contextual data, and develop and test novel techniques in mobile detection and imaging. In this paper we will describe the instruments that comprise the RadMAP system, the effort to curate and provide access to multi-sensor data, and some initial results on the fusion of contextual and radiological data.

  6. Comparing Image Perception of Bladder Tumors in Four Different Storz Professional Image Enhancement System Modalities Using the íSPIES App

    NARCIS (Netherlands)

    Kamphuis, Guido M.; de Bruin, D. Martijn; Brandt, Martin J.; Knoll, Thomas; Conort, Pierre; Lapini, Alberto; Dominguez-Escrig, Jose L.; de La Rosette, Jean J. M. C. H.

    2016-01-01

    To evaluate the variation in interpretation of the same bladder urothelium image in different Storz Professional Image Enhancement System (SPIES) modalities. SPIES contains a White light (WL), Spectra A (SA), Spectra B (SB), and Clara and Chroma combined (CC) modality. An app for the iPad Retina was

  7. A method based on IHS cylindrical transform model for quality assessment of image fusion

    Science.gov (United States)

    Zhu, Xiaokun; Jia, Yonghong

    2005-10-01

    Image fusion techniques are widely applied in remote sensing image analysis and processing, and methods for assessing the quality of image fusion in remote sensing have become research issues at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects: on one hand, most indexes lack theoretical support for comparing different fusion methods; on the other hand, there is no uniform preference among most quantitative assessment indexes when they are applied to estimate fusion effects. That is, spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method to unify spatial and spectral feature assessment. In this paper, on the basis of an approximate general model of four traditional fusion methods, including Intensity Hue Saturation (IHS) triangle transform fusion, High Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only evaluate spatial and spectral features under a uniform preference, but can also compare fusion image sources with fused images and reveal differences among fusion methods. Compared with the traditional assessment methods, the new method is more intuitive and in better accord with subjective estimation.
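
    The proposed index boils down to correlation coefficients computed on components of an IHS representation; the sketch below shows the spatial part, correlating the intensity component of the fused image with the panchromatic band. Using the plain RGB mean as intensity is an assumption; the paper uses the cylindrical IHS variant:

      import numpy as np

      def intensity(rgb):
          """Intensity component; here the RGB mean (simplifying assumption)."""
          return rgb.mean(axis=-1)

      def correlation_index(fused_rgb, pan):
          """Spatial-quality index: correlation of fused intensity with pan."""
          i = intensity(fused_rgb).ravel()
          p = pan.ravel()
          return np.corrcoef(i, p)[0, 1]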

  8. Numerical analysis of modal tomography for solar multi-conjugate adaptive optics

    International Nuclear Information System (INIS)

    Dong Bing; Ren Deqing; Zhang Xi

    2012-01-01

    Multi-conjugate adaptive optics (MCAO) can considerably extend the corrected field of view with respect to classical adaptive optics, which will benefit solar observation in many aspects. In solar MCAO, the Sun structure is utilized to provide multiple guide stars and a modal tomography approach is adopted to implement three-dimensional wavefront restorations. The principle of modal tomography is briefly reviewed and a numerical simulation model is built with three equivalent turbulent layers and a different number of guide stars. Our simulation results show that at least six guide stars are required for an accurate wavefront reconstruction in the case of three layers, and only three guide stars are needed in the two layer case. Finally, eigenmode analysis results are given to reveal the singular modes that cannot be precisely retrieved in the tomography process.

  9. Fusion of multispectral and panchromatic images using multirate filter banks

    Institute of Scientific and Technical Information of China (English)

    Wang Hong; Jing Zhongliang; Li Jianxun

    2005-01-01

    In this paper, an image fusion method based on filter banks is proposed for merging a high-resolution panchromatic image and a low-resolution multispectral image. Firstly, the filter banks are designed to merge different signals with minimum distortion by using cosine modulation. Then, filter-bank-based image fusion is applied to obtain a high-resolution multispectral image that combines the spectral characteristics of the low-resolution data with the spatial resolution of the panchromatic image. Finally, two different experiments and the corresponding performance analyses are presented. Experimental results indicate that the proposed approach outperforms the IHS transform, the discrete wavelet transform and the discrete wavelet frame.

  10. PET/CT image registration: Preliminary tests for its application to clinical dosimetry in radiotherapy

    International Nuclear Information System (INIS)

    Banos-Capilla, M. C.; Garcia, M. A.; Bea, J.; Pla, C.; Larrea, L.; Lopez, E.

    2007-01-01

    The quality of dosimetry in radiotherapy treatment requires accurate delimitation of the gross tumor volume. This can be achieved by complementing the anatomical detail provided by CT images through fusion with other imaging modalities that provide additional metabolic and physiological information. Therefore, the use of multiple imaging modalities for radiotherapy treatment planning requires an accurate image registration method. This work describes tests carried out on a Discovery LS positron emission/computed tomography (PET/CT) system by General Electric Medical Systems (GEMS), for its later use to obtain images that delimit the target in radiotherapy treatment. Several phantoms were used to verify image correlation, in combination with fiducial markers serving as a system of external landmarks. We analyzed the geometrical accuracy of two different fusion methods with the images obtained with these phantoms: first, the fusion method used by the GEMS PET/CT system (hardware fusion), which rests on satisfactory coincidence between the reconstruction centers of the CT and PET systems; and second, fiducial fusion, a registration method based on a least-squares fitting algorithm over a system of landmark points. The study concluded with verification of the centroid positions of some phantom components in both imaging modalities. Centroids were estimated through a calculation similar to a center-of-mass, weighted by the CT number and the uptake intensity in PET. The mean deviations found for the hardware fusion method were |Δx| ± σ = 3.3 mm ± 1.0 mm and |Δy| ± σ = 3.6 mm ± 1.0 mm. These values were substantially improved upon applying fiducial fusion based on external landmark points: |Δx| ± σ = 0.7 mm ± 0.8 mm and |Δy| ± σ = 0.3 mm ± 1.7 mm. We also noted that differences found for each of the fusion methods were similar for

  11. Advances in fusion of PET, SPET, CT und MRT images

    International Nuclear Information System (INIS)

    Pietrzyk, U.

    2003-01-01

    Image fusion, as part of the correlative analysis of medical images, has gained ever more interest, and the fact that combined PET/CT systems are commercially available demonstrates its importance for medical diagnostics, therapy and research-oriented applications. In this work, the basics of image registration, its different strategies, and the mathematical and physical background are described. Successful image registration is an essential prerequisite for the next step, namely correlative medical image analysis. Means to verify image registration and the different modes of integrated display are presented, and their usefulness is discussed. Possible limitations of image fusion are pointed out in order to avoid misinterpretation. (orig.) [de

  12. FZUImageReg: A toolbox for medical image registration and dose fusion in cervical cancer radiotherapy.

    Directory of Open Access Journals (Sweden)

    Qinquan Gao

    Full Text Available The combination of external-beam radiotherapy (EBRT) and high-dose-rate brachytherapy (HDR-BT) is a standard form of treatment for patients with locally advanced uterine cervical cancer. Personalized radiotherapy in cervical cancer requires efficient and accurate dose planning and assessment across these types of treatment. To achieve radiation dose assessment, accurate mapping of the dose distribution from HDR-BT onto EBRT is extremely important. However, few systems can achieve robust dose fusion and determine the accumulated dose distribution over the entire course of treatment. We have therefore developed a toolbox (FZUImageReg), a user-friendly dose fusion system based on hybrid image registration for radiation dose assessment in cervical cancer radiotherapy. The main part of the software consists of a collection of medical image registration algorithms and a modular design with a user-friendly interface, which allows users to quickly configure, test, monitor, and compare different registration methods for a specific application. Owing to the large deformation involved, direct application of conventional state-of-the-art image registration methods is not sufficient for accurate alignment of EBRT and HDR-BT images. To solve this problem, a multi-phase non-rigid registration method is proposed that uses local landmark-based free-form deformation for the locally large deformation between EBRT and HDR-BT images, followed by intensity-based free-form deformation. With the resulting transformation, the software also provides a dose mapping function according to the deformation field. The total dose distribution over the entire course of treatment can then be presented. Experimental results clearly show that the proposed system achieves accurate registration between EBRT and HDR-BT images and provides radiation dose warping and fusion results for dose assessment in cervical cancer radiotherapy with high accuracy and efficiency.
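
    The intensity-based free-form deformation stage can be sketched with SimpleITK's B-spline registration; SimpleITK, the mesh size and the optimizer settings are assumptions, not FZUImageReg's actual implementation:

      import SimpleITK as sitk

      def ffd_register(fixed, moving, mesh_size=(8, 8, 8)):
          """Intensity-based free-form deformation via a B-spline transform."""
          tx = sitk.BSplineTransformInitializer(fixed, list(mesh_size))
          reg = sitk.ImageRegistrationMethod()
          reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
          reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                                   numberOfIterations=100)
          reg.SetInitialTransform(tx, inPlace=True)
          reg.SetInterpolator(sitk.sitkLinear)
          out_tx = reg.Execute(fixed, moving)
          warped = sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear, 0.0)
          return out_tx, warped

      # The same deformation field would then warp the HDR-BT dose grid onto
      # the EBRT frame for dose accumulation (the dose mapping step).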

  13. Data Processing And Machine Learning Methods For Multi-Modal Operator State Classification Systems

    Science.gov (United States)

    Hearn, Tristan A.

    2015-01-01

    This document is intended as an introduction to a set of common signal processing learning methods that may be used in the software portion of a functional crew state monitoring system. This includes overviews of both the theory of the methods involved, as well as examples of implementation. Practical considerations are discussed for implementing modular, flexible, and scalable processing and classification software for a multi-modal, multi-channel monitoring system. Example source code is also given for all of the discussed processing and classification methods.

  14. Multi Detector Computed Tomography Fistulography In Patients of Fistula-in-Ano: An Imaging Collage.

    Science.gov (United States)

    Bhatt, Shuchi; Jain, Bhupendra Kumar; Singh, Vikas Kumar

    2017-01-01

    Fistula-in-ano, or perianal fistula, is a challenging clinical condition for both diagnosis and treatment. Imaging modalities such as fistulography, anal endosonography, perineal sonography, magnetic resonance imaging (MRI), and computed tomography (CT) are available for its evaluation. MRI is considered as the modality of choice for an accurate delineation of the tract in relation to the sphincter complex and for the detection of associated complications. However, its availability and affordability is always an issue. Moreover, the requirement to obtain multiple sequences to depict the fistula in detail is cumbersome and confusing for the clinicians to interpret. The inability to show the fistula in relation to normal anatomical structures in a single image is also a limitation. Multi detector computed tomography fistulography (MDCTF) is an underutilized technique for defining perianal fistulas. Acquisition of iso-volumetric data sets with instillation of contrast into the fistula delineates the tract and its components. Post-processing with thin sections allows for a generation of good quality images for presentation in various planes (multi-planar reconstructions) and formats (volume rendered technique, maximum intensity projection). MDCTF demonstrates the type of fistula, its extent, whether it is simple or complex, and shows the site of internal opening and associated complications; all in easy to understand images that can be used by the surgeons. Its capability to represent the entire pathology in relation to normal anatomical structures in few images is a definite advantage. MDCTF can be utilized when MRI is contraindicated or not feasible. This pictorial review shares our initial experience with MDCT fistulography in evaluating fistula-in-ano, demonstrates various components of fistulas, and discusses the types of fistulas according to the standard Parks classification.

  15. Time-resolved computed tomography of the liver: retrospective, multi-phase image reconstruction derived from volumetric perfusion imaging

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Michael A.; Kartalis, Nikolaos; Aspelin, Peter; Albiin, Nils; Brismar, Torkel B. [Karolinska University Hospital, Division of Medical Imaging and Technology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm (Sweden); Leidner, Bertil; Svensson, Anders [Karolinska University Hospital, Division of Medical Imaging and Technology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm (Sweden); Karolinska University Hospital Huddinge, Department of Radiology, Stockholm (Sweden)

    2014-01-15

    To assess feasibility and image quality (IQ) of a new post-processing algorithm for retrospective extraction of an optimised multi-phase CT (time-resolved CT) of the liver from volumetric perfusion imaging. Sixteen patients underwent clinically indicated perfusion CT using 4D spiral mode of dual-source 128-slice CT. Three image sets were reconstructed: motion-corrected and noise-reduced (MCNR) images derived from 4D raw data; maximum and average intensity projections (time MIP/AVG) of the arterial/portal/portal-venous phases and all phases (total MIP/AVG) derived from retrospective fusion of dedicated MCNR split series. Two readers assessed the IQ, detection rate and evaluation time; one reader assessed image noise and lesion-to-liver contrast. Time-resolved CT was feasible in all patients. Each post-processing step yielded a significant reduction of image noise and evaluation time, maintaining lesion-to-liver contrast. Time MIPs/AVGs showed the highest overall IQ without relevant motion artefacts and best depiction of arterial and portal/portal-venous phases respectively. Time MIPs demonstrated a significantly higher detection rate for arterialised liver lesions than total MIPs/AVGs and the raw data series. Time-resolved CT allows data from volumetric perfusion imaging to be condensed into an optimised multi-phase liver CT, yielding a superior IQ and higher detection rate for arterialised liver lesions than the raw data series. (orig.)
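
    Given the motion-corrected 4-D volume, the time MIP and time AVG reconstructions are simple reductions over the time axis within a chosen phase window; the array layout and phase indices below are assumptions:

      import numpy as np

      def time_mip_avg(volume_4d, t_start, t_end):
          """volume_4d: (time, z, y, x) perfusion series.
          Returns time MIP and time AVG over the phase window [t_start, t_end)."""
          phase = volume_4d[t_start:t_end]
          return phase.max(axis=0), phase.mean(axis=0)

      # e.g. arterial-phase images:  mip_art, avg_art = time_mip_avg(vol, 4, 10)
      # total MIP/AVG uses the whole series: time_mip_avg(vol, 0, vol.shape[0])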

  16. Data fusion of Landsat TM and IRS images in forest classification

    Science.gov (United States)

    Guangxing Wang; Markus Holopainen; Eero Lukkarinen

    2000-01-01

    Data fusion of Landsat TM images and the Indian Remote Sensing satellite panchromatic image (IRS-1C PAN) was studied and compared with the use of the TM or IRS image alone. The aim was to combine the high spatial resolution of IRS-1C PAN with the high spectral resolution of Landsat TM images using a data fusion algorithm. The ground truth of the study was based on a sample of 1,020...

  17. Perspectives on How Human Simultaneous Multi-Modal Imaging Adds Directionality to Spread Models of Alzheimer’s Disease

    Directory of Open Access Journals (Sweden)

    Julia Neitzel

    2018-01-01

    Full Text Available Previous animal research suggests that the spread of pathological agents in Alzheimer’s disease (AD) follows the direction of signaling pathways. Specifically, tau pathology has been suggested to propagate in an infection-like mode along axons, from transentorhinal cortices to medial temporal lobe cortices and consequently to other cortical regions, while amyloid-beta (Aβ) pathology seems to spread in an activity-dependent manner among and from isocortical regions into limbic and then subcortical regions. These directed connectivity-based spread models, however, have not been tested directly in AD patients due to the lack of an in vivo method to identify directed connectivity in humans. Recently, a new method, metabolic connectivity mapping (MCM), has been developed and validated in healthy participants; it uses simultaneous FDG-PET and resting-state fMRI data acquisition to identify directed intrinsic effective connectivity (EC). To this end, postsynaptic energy consumption (FDG-PET) is used to identify regions with afferent input from other functionally connected brain regions (resting-state fMRI). Here, we discuss how this multi-modal imaging approach allows quantitative, whole-brain mapping of signaling direction in AD patients, thereby pointing out some of the advantages it offers compared to other EC methods (i.e., Granger causality, dynamic causal modeling, Bayesian networks). Most importantly, MCM provides the basis on which models of pathology spread, derived from animal studies, can be tested in AD patients. In particular, future work should investigate whether tau and Aβ in humans propagate along the trajectories of directed connectivity in order to advance our understanding of the neuropathological mechanisms causing disease progression.

  18. Improved detection probability of low level light and infrared image fusion system

    Science.gov (United States)

    Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang

    2018-02-01

    Low-level-light (LLL) images contain rich environmental detail but are easily degraded by the weather: in smoke, rain, cloud or fog, much target information is lost. An infrared image, formed from the radiation emitted by the object itself, can "actively" acquire target information in the scene. However, its contrast and resolution are poor, it captures little target detail, and the imaging mode does not match human visual habits. Fusing LLL and infrared images compensates for the deficiencies of each sensor while exploiting the advantages of both. We first present the hardware design of the fusion circuit. Then, by computing recognition probabilities for a target (one person) and the background (trees), we find that the tree detection probability of the LLL image is higher than that of the infrared image, whereas the person detection probability of the infrared image is clearly higher than that of the LLL image. The detection probability of the fused image for both the person and the trees is higher than that of either single detector. Image fusion can therefore significantly increase recognition probability and improve detection efficiency.
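
    As a toy software analogue of the fusion stage (the paper implements fusion in hardware), one simple gray-level rule weights each sensor by its local variance, so whichever image carries more local detail dominates; the function names and window size below are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lll_ir(lll, ir, win=7, eps=1e-6):
    """Pixel-level fusion of a low-level-light and an infrared image:
    weight each sensor by its local variance within a win x win window."""
    lll = lll.astype(float)
    ir = ir.astype(float)
    var_l = uniform_filter(lll * lll, win) - uniform_filter(lll, win) ** 2
    var_i = uniform_filter(ir * ir, win) - uniform_filter(ir, win) ** 2
    w = (var_l + eps) / (var_l + var_i + 2 * eps)  # weight of the LLL image
    return w * lll + (1.0 - w) * ir
```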

  19. A big-data model for multi-modal public transportation with application to macroscopic control and optimisation

    Science.gov (United States)

    Faizrahnemoon, Mahsa; Schlote, Arieh; Maggi, Lorenzo; Crisostomi, Emanuele; Shorten, Robert

    2015-11-01

    This paper describes a Markov-chain-based approach to modelling multi-modal transportation networks. An advantage of the model is its ability to accommodate complex dynamics and handle huge amounts of data. The transition matrix of the Markov chain is built and the model is validated using data extracted from a traffic simulator. A realistic test case using multi-modal data from the city of London is given to further support the ability of the proposed methodology to handle large quantities of data. Then, we use the Markov chain as a control tool to improve the overall efficiency of a transportation network, and some practical examples are described to illustrate the potential of the approach.
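
    A minimal sketch of the core modelling step, assuming trips between nodes have already been counted from data (the validation against a traffic simulator and the control layer are beyond this snippet):

```python
import numpy as np

def transition_matrix(counts):
    """Row-normalise observed origin-destination trip counts into a
    row-stochastic Markov transition matrix."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1: the long-run share of
    travellers at each node, a simple macroscopic load indicator."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

# Toy 3-node network (e.g., bus stop, metro station, bike dock).
P = transition_matrix([[10, 5, 5], [2, 8, 10], [4, 4, 12]])
print(stationary_distribution(P))
```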

  20. Color image guided depth image super resolution using fusion filter

    Science.gov (United States)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, whereas color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm that takes an HR color image as the guide and an LR depth image as input. We use a fusion of the guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better quality in HR depth images, both numerically and visually.
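
    The paper's exact fusion filter is not reproduced here, but a joint bilateral upsampling pass conveys the guided idea: interpolate the LR depth, then re-smooth it with range weights taken from the HR color guide. A rough sketch (the integer scale factor and wrap-around borders are simplifications):

```python
import numpy as np
from scipy.ndimage import zoom

def joint_bilateral_upsample(depth_lr, color_hr, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Upsample LR depth to the HR grid, then filter it with weights that
    combine spatial distance and similarity in the HR guide image.
    Assumes the HR size is an integer multiple of the LR size."""
    scale = color_hr.shape[0] / depth_lr.shape[0]
    depth_up = zoom(depth_lr, scale, order=1)  # coarse bilinear upsampling
    gray = color_hr.mean(axis=2) if color_hr.ndim == 3 else color_hr
    num = np.zeros_like(depth_up)
    den = np.zeros_like(depth_up)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_d = np.roll(depth_up, (dy, dx), axis=(0, 1))
            shifted_g = np.roll(gray, (dy, dx), axis=(0, 1))
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                       - (gray - shifted_g) ** 2 / (2 * sigma_r ** 2))
            num += w * shifted_d
            den += w
    return num / den
```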

  1. CONTEXT-CAPTURE MULTI-VALUED DECISION FUSION WITH FAULT TOLERANT CAPABILITY FOR WIRELESS SENSOR NETWORKS

    OpenAIRE

    Jun Wu; Shigeru Shimamoto

    2011-01-01

    Wireless sensor networks (WSNs) are usually utilized to perform decision fusion for event detection. Current decision fusion schemes are based on binary-valued decisions and do not consider bursty context capture. However, bursty context and multi-valued data are important characteristics of WSNs. On one hand, the local decisions from sensors usually have bursty and contextual characteristics, and the fusion center must capture this bursty context information from the sensors. On the other hand, in pract...

  2. TU-CD-BRA-05: Atlas Selection for Multi-Atlas-Based Image Segmentation Using Surrogate Modeling

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for a specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics and, based on such a model, provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate’s ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of the fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation and verified with several commonly used surrogates, including MSD, NCC and (N)MI. The derived behaviors of different surrogates in atlas selection and their corresponding performance in the ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC by 0.10, with first and third quartiles of (0.83, 0.89) compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric on atlas selection
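
    To make the surrogate-driven selection concrete, a hedged sketch of ranking atlases by one surrogate (NCC) and keeping a fixed-size fusion set; m = 7 echoes the optimal set size reported for MSD/NMI, and the registration and label fusion stages are omitted:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross correlation between two registered images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return (a * b).mean()

def select_atlases(target, atlases, m=7):
    """Rank atlases by the image-space surrogate and keep the top m
    as the fusion set for multi-atlas segmentation."""
    scores = [ncc(target, atlas) for atlas in atlases]
    return np.argsort(scores)[::-1][:m]
```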

  3. Percutaneous Thermal Ablation with Ultrasound Guidance. Fusion Imaging Guidance to Improve Conspicuity of Liver Metastasis

    Energy Technology Data Exchange (ETDEWEB)

    Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros; Deschamps, Frederic [Gustave Roussy - Cancer Campus, Interventional Radiology Department (France); Petrover, David [Imagerie Médicale Paris Centre, IMPC (France); Baere, Thierry De [Gustave Roussy - Cancer Campus, Interventional Radiology Department (France)

    2017-05-15

    Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve visibility and targeting of liver metastases that were deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged conspicuous enough on US was performed under CT/US fusion imaging guidance. The conspicuity before and after the fusion imaging was graded on a five-point scale, and significance was assessed by Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5–14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0–2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1–5, p < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US. Fusion imaging is an effective tool to increase the conspicuity of liver metastases initially deemed non-visualizable on conventional US imaging.

  4. Mosaicing of single plane illumination microscopy images using groupwise registration and fast content-based image fusion

    Science.gov (United States)

    Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel

    2008-03-01

    Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high-resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degrading of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four- and eight-angle acquisitions.
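
    A compact sketch of the content-based fusion idea, assuming the views are already registered: weight each view per pixel by a Gaussian-smoothed local-contrast map so that blurred regions contribute little (the authors' exact weighting may differ):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def content_based_fusion(views, sigma=5.0, eps=1e-8):
    """Fuse registered views with weights proportional to smoothed local
    variance, a fast Gaussian-filter alternative to entropy weighting."""
    num, den = 0.0, 0.0
    for v in views:
        v = v.astype(float)
        contrast = gaussian_filter((v - gaussian_filter(v, sigma)) ** 2, sigma)
        num = num + contrast * v
        den = den + contrast
    return num / (den + eps)
```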

  5. Initial clinical assessment of CT-MRI image fusion software in localization of the prostate for 3D conformal radiation therapy

    International Nuclear Information System (INIS)

    Kagawa, Kazufumi; Lee, W. Robert; Schultheiss, Timothy E.; Hunt, Margie A.; Shaer, Andrew H.; Hanks, Gerald E.

    1997-01-01

    Purpose: To assess the utility of image fusion software and compare MRI prostate localization with CT localization in patients undergoing 3D conformal radiation therapy of prostate cancer. Materials and Methods: After a phantom study was performed to ensure the accuracy of the image fusion procedure, 22 prostate cancer patients had CT and MRI studies before the start of radiotherapy. The immobilization casts used during radiation treatment were also used for both imaging studies. After the clinical target volume (CTV) (prostate or prostate + seminal vesicles) was defined on CT, slices from the MRI study were reconstructed to precisely match the CT slices by identifying three common bony landmarks on each study. The CTV was separately defined on the matched MRI slices. Data related to the size and location of the prostate were compared between CT and MRI. The spatial relationship between the tip of the urethrogram cone on CT and the prostate apex seen on MRI was also estimated. Results: The phantom study showed registration discrepancies between CT and MRI smaller than 1.0 mm in all pairs compared. The patient study showed a mean image registration error of 0.9 (± 0.6) mm. The average prostate volume was 63.0 (± 25.8) cm³ and 50.9 (± 22.9) cm³ as determined by CT and MRI, respectively. The prostate location determined by the two studies usually differed at the base and at the apex of the prostate. On the transverse MRI, the prostate apex was situated 7.1 (± 4.5) mm dorsal and 15.1 (± 4.0) mm cephalad to the tip of the urethrogram cone. Conclusions: The CT-MRI image fusion study made it possible to compare the two modalities directly. MRI localization of the prostate is more accurate than CT, and indicates that the distance from cone to apex is 15 mm. The CT-MRI image fusion technique provides a valuable supplement to CT technology for more precise targeting of prostate cancer
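
    The matching step, reconstructing MRI slices in the CT frame from three common bony landmarks, is essentially rigid point-based registration; the standard closed-form (Kabsch/Procrustes) solution is sketched below with hypothetical landmark coordinates:

```python
import numpy as np

def rigid_from_landmarks(p, q):
    """Least-squares rigid transform (R, t) with q_i ~ R @ p_i + t,
    estimated from paired landmarks via the Kabsch/Procrustes SVD."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Three bony landmarks in CT and MRI coordinates (values hypothetical, mm).
ct = [[0, 0, 0], [50, 0, 0], [0, 40, 0]]
mri = [[1, 2, 0], [51, 2, 0], [1, 42, 0]]
R, t = rigid_from_landmarks(ct, mri)
```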

  6. A Remote Sensing Image Fusion Method based on adaptive dictionary learning

    Science.gov (United States)

    He, Tongdi; Che, Zongxi

    2018-01-01

    This paper discusses a remote sensing fusion method based on 'adaptive sparse representation (ASP)' that provides improved spectral information, reduces data redundancy and decreases system complexity. First, the training sample set is formed by taking random blocks from the images to be fused; the dictionary is then constructed using the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterative processing at each step. Second, a self-adaptive weighted-coefficient rule based on regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.

  7. A Multi-Modality Deep Network for Cold-Start Recommendation

    Directory of Open Access Journals (Sweden)

    Mingxuan Sun

    2018-03-01

    Full Text Available Collaborative filtering (CF) approaches, which provide recommendations based on ratings or purchase history, perform well for users and items with sufficient interactions. However, CF approaches suffer from the cold-start problem for users and items with few ratings. Hybrid recommender systems that combine collaborative filtering and content-based approaches have proven to be an effective way to alleviate the cold-start issue. Integrating contents from multiple heterogeneous data sources such as reviews and product images is challenging for two reasons. Firstly, mapping contents in different modalities from the original feature space to a joint lower-dimensional space is difficult, since they have intrinsically different characteristics and statistical properties, such as sparse texts and dense images. Secondly, most algorithms only use content features as prior knowledge to improve the estimation of user and item profiles, but the ratings do not directly provide feedback to guide feature extraction. To tackle these challenges, we propose a tightly-coupled deep network model for fusing heterogeneous modalities, to avoid tedious feature extraction in specific domains, and to enable two-way information propagation from both content and rating information. Experiments on large-scale Amazon product data in the book and movie domains demonstrate the effectiveness of the proposed model for cold-start recommendation.

  8. Adaptive structured dictionary learning for image fusion based on group-sparse-representation

    Science.gov (United States)

    Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei

    2018-04-01

    Dictionary learning is the key step in sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group-structure information or of the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm-maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. In the dictionary learning algorithm, no prior knowledge about any group structure of the dictionary is required. By using the characteristics of the dictionary in expressing the signal, our algorithm can automatically find the desired latent structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structure dictionary and makes activity-level judgements on the structure information when the images are merged. Therefore, the fused image can retain more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform the alternatives in terms of several objective evaluation metrics.
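
    A hedged sketch of an l1-norm-maximum fusion rule over sparse codes, using scikit-learn's generic sparse coder in place of the paper's adaptive structured dictionary; D is any learned dictionary with unit-norm rows, and dense overlapping patches keep the example short rather than efficient:

```python
import numpy as np
from sklearn.decomposition import SparseCoder
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def sparse_fusion(img_a, img_b, D, patch=8, k=5):
    """Code the patches of both sources over a shared dictionary D of shape
    (n_atoms, patch*patch) and keep, per patch, the code with the larger
    l1 norm; overlapping patches are averaged back into an image."""
    coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                        transform_n_nonzero_coefs=k)
    pa = extract_patches_2d(img_a, (patch, patch)).reshape(-1, patch * patch)
    pb = extract_patches_2d(img_b, (patch, patch)).reshape(-1, patch * patch)
    ca, cb = coder.transform(pa), coder.transform(pb)
    choose_a = np.abs(ca).sum(axis=1) >= np.abs(cb).sum(axis=1)
    fused_codes = np.where(choose_a[:, None], ca, cb)
    patches = (fused_codes @ D).reshape(-1, patch, patch)
    return reconstruct_from_patches_2d(patches, img_a.shape)
```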

  9. (In)Flexibility of Constituency in Japanese in Multi-Modal Categorial Grammar with Structured Phonology

    Science.gov (United States)

    Kubota, Yusuke

    2010-01-01

    This dissertation proposes a theory of categorial grammar called Multi-Modal Categorial Grammar with Structured Phonology. The central feature that distinguishes this theory from the majority of contemporary syntactic theories is that it decouples (without completely segregating) two aspects of syntax--hierarchical organization (reflecting…

  10. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    Science.gov (United States)

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer, which achieves fusion by preserving the intensity of the infrared image and transferring the gradients of the corresponding visible image to the result. Plain gradient transfer suffers from low dynamic range and detail loss because it ignores the intensity of the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared image and the visible one. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem into a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that the new algorithm achieves better fusion results, with high computational efficiency, in both qualitative and quantitative tests than gradient transfer and most state-of-the-art methods.
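
    A rough numerical sketch of the underlying objective, min_x ||x - ir||_1 + lambda * ||grad(x) - grad(vis)||_1, solved here by smoothed (Charbonnier) subgradient descent rather than the paper's variable-splitting/augmented-Lagrangian scheme; the step size and lambda are illustrative:

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundary."""
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def div(gx, gy):
    """Backward-difference divergence, the negative adjoint of grad."""
    return (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))

def fuse_gradient_transfer(ir, vis, lam=4.0, steps=300, lr=0.1, eps=1e-3):
    """Keep infrared intensity while transferring visible gradients,
    with both l1 terms smoothed by sqrt(.^2 + eps). Inputs in [0, 1]."""
    x = ir.astype(float).copy()
    vgx, vgy = grad(vis.astype(float))
    for _ in range(steps):
        gx, gy = grad(x)
        d_fid = (x - ir) / np.sqrt((x - ir) ** 2 + eps)
        dx, dy = gx - vgx, gy - vgy
        d_tv = -div(dx / np.sqrt(dx ** 2 + eps), dy / np.sqrt(dy ** 2 + eps))
        x -= lr * (d_fid + lam * d_tv)
    return np.clip(x, 0.0, 1.0)
```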

  11. NeMO-Net & Fluid Lensing: The Neural Multi-Modal Observation & Training Network for Global Coral Reef Assessment Using Fluid Lensing Augmentation of NASA EOS Data

    Science.gov (United States)

    Chirayath, Ved

    2018-01-01

    We present preliminary results from NASA NeMO-Net, the first neural multi-modal observation and training network for global coral reef assessment. NeMO-Net is an open-source deep convolutional neural network (CNN) and interactive active learning training software in development that will assess the present and past dynamics of coral reef ecosystems. NeMO-Net exploits active learning and data fusion of mm-scale remotely sensed 3D images of coral reefs captured using fluid lensing with the NASA FluidCam instrument, presently the highest-resolution remote sensing benthic imaging technology capable of removing ocean wave distortion, as well as hyperspectral airborne remote sensing data from the ongoing NASA CORAL mission and lower-resolution satellite data to determine coral reef ecosystem makeup globally at unprecedented spatial and temporal scales. Aquatic ecosystems, particularly coral reefs, remain quantitatively misrepresented by low-resolution remote sensing as a result of refractive distortion from ocean waves, optical attenuation, and remoteness. Machine learning classification of coral reefs using FluidCam mm-scale 3D data shows that present satellite and airborne remote sensing techniques poorly characterize coral reef percent living cover, morphology type, and species breakdown at the mm, cm, and meter scales. Indeed, current global assessments of coral reef cover and morphology classification based on km-scale satellite data alone can suffer from segmentation errors greater than 40% and are capable of change detection only on yearly temporal scales and decameter spatial scales, significantly hindering our understanding of patterns and processes in marine biodiversity at a time when these ecosystems are experiencing unprecedented anthropogenic pressures, ocean acidification, and sea surface temperature rise. NeMO-Net leverages our augmented machine learning algorithm that demonstrates data fusion of regional FluidCam (mm, cm-scale) airborne remote sensing with

  12. X-ray imaging in the laser-fusion program

    International Nuclear Information System (INIS)

    McCall, G.H.

    1977-01-01

    Imaging devices which are used or planned for x-ray imaging in the laser-fusion program are discussed. Resolution criteria are explained, and a suggestion is made for using the modulation transfer function as a uniform definition of resolution for these devices

  13. Feature level fusion of hand and face biometrics

    Science.gov (United States)

    Ross, Arun A.; Govindarajan, Rohin

    2005-03-01

    Multibiometric systems utilize the evidence presented by multiple biometric sources (e.g., face and fingerprint, multiple fingers of a user, multiple matchers, etc.) in order to determine or verify the identity of an individual. Information from multiple sources can be consolidated at several distinct levels, including the feature extraction level, match score level and decision level. While fusion at the match score and decision levels has been extensively studied in the literature, fusion at the feature level is a relatively understudied problem. In this paper we discuss fusion at the feature level in three different scenarios: (i) fusion of PCA and LDA coefficients of the face; (ii) fusion of LDA coefficients corresponding to the R, G, B channels of a face image; (iii) fusion of face and hand modalities. Preliminary results are encouraging and help highlight the pros and cons of performing fusion at this level. The primary motivation of this work is to demonstrate the viability of such fusion and to underscore the importance of pursuing further research in this direction.
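
    Scenario (iii), serial feature-level fusion of two modalities, reduces in its simplest form to per-modality normalisation followed by concatenation (fusion systems often follow this with feature selection); a minimal sketch with hypothetical dimensionalities:

```python
import numpy as np

def feature_fuse(face_feats, hand_feats):
    """Serial feature-level fusion: z-score normalise each modality so
    neither dominates by scale, then concatenate into one template."""
    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)
    return np.hstack([zscore(face_feats), zscore(hand_feats)])

# e.g., 100 subjects with 64-d face coefficients and 32-d hand features.
fused = feature_fuse(np.random.randn(100, 64), np.random.randn(100, 32))
```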

  14. Game of Objects: vicarious causation and multi-modal media

    Directory of Open Access Journals (Sweden)

    Aaron Pedinotti

    2013-09-01

    Full Text Available This paper applies philosopher Graham Harman's object-oriented theory of "vicarious causation" to an analysis of the multi-modal media phenomenon known as "Game of Thrones." Examining the manner in which George R.R. Martin's best-selling series of fantasy novels has been adapted into a board game, a video game, and a hit HBO television series, it uses the changes entailed by these processes to trace the contours of vicariously generative relations. In the course of the resulting analysis, it provides new suggestions concerning the eidetic dimensions of Harman's causal model, particularly with regard to causation in linear networks and in differing types of game systems.

  15. A multimodal image sensor system for identifying water stress in grapevines

    Science.gov (United States)

    Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong

    2012-11-01

    Water stress is one of the most common limitations on fruit growth, and water is the most limiting resource for crop growth. In grapevines, as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The system was equipped with one 3CCD camera (three channels: R, G, and IR). The multi-modal sensor can capture and analyze the grape canopy through its reflectance features and identify the different water stress levels. This research aims to address these problems. The core technology of this multi-modal sensor system could further be used in a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. The images were taken by the multi-modal sensor, which outputs images in the near-infrared, green and red spectral bands. Based on the analysis of the acquired images, color features based on color space and reflectance features based on image processing methods were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate this conclusion.
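
    With red and near-infrared channels available from the 3CCD camera, one standard reflectance feature (not necessarily among the paper's unspecified features) is the normalised difference vegetation index:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI from the camera's NIR and R channels; values near 1 indicate
    dense healthy canopy, lower values can flag stressed vegetation."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)
```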

  16. Diagnosis and Characterization of Patellofemoral Instability: Review of Available Imaging Modalities.

    Science.gov (United States)

    Haj-Mirzaian, Arya; Thawait, Gaurav K; Tanaka, Miho J; Demehri, Shadpour

    2017-06-01

    Patellofemoral instability (PI) is defined as single or multiple episodes of patellar dislocation. Imaging modalities are useful for characterization of patellar malalignment, maltracking, underlying morphologic abnormalities, and stabilizing soft-tissue injuries. Using these findings, orthopedic surgeons can decide when to operate, determine the best operation, and measure degree of correction postoperatively in PI patients. Also, these methods assist with PI diagnosis in some suspicious cases. Magnetic resonance imaging is the preferred method especially in the setting of acute dislocations. Multidetector computed tomography allows a more accurate assessment for malalignment such as patellar tilt and lateral subluxation and secondary osteoarthritis. Dynamic magnetic resonance imaging and 4-dimensional computed tomography have been introduced for better kinematic assessment of the patellofemoral maltracking during extension-flexion motions. In this review article, we will discuss the currently available evidence regarding both the conventional and the novel imaging modalities that can be used for diagnosis and characterization of PI.

  17. The effectiveness of multi modal representation text books to improve student's scientific literacy of senior high school students

    Science.gov (United States)

    Zakiya, Hanifah; Sinaga, Parlindungan; Hamidah, Ida

    2017-05-01

    The results of field studies showed that students' scientific literacy was still low. One root of the problem lies in the books used in learning, which are not oriented toward the components of scientific literacy. This study focused on the effectiveness of textbooks designed to develop scientific literacy using multi-modal representation. The textbook development method used was the Design Representational Approach Learning to Write (DRALW). The textbook design, applied to the topic of "Kinetic Theory of Gases", was implemented with grade XI senior high school students. Effectiveness was determined from the effect size and the normalized percentage gain value, while the hypothesis was tested using an independent t-test. The results showed that textbooks developed using multi-modal representation can improve students' scientific literacy skills. Based on the effect size, the textbooks developed with multi-modal representation were found effective in improving students' scientific literacy, and the improvement occurred across all competences and knowledge of scientific literacy. The hypothesis test showed a significant difference in scientific literacy between the class that used textbooks with multi-modal representation and the class that used the regular textbook used in schools.

  18. Object discrimination using optimized multi-frequency auditory cross-modal haptic feedback.

    Science.gov (United States)

    Gibson, Alison; Artemiadis, Panagiotis

    2014-01-01

    As the field of brain-machine interfaces and neuro-prosthetics continues to grow, there is a high need for sensor and actuation mechanisms that can provide haptic feedback to the user. Current technologies employ expensive, invasive and often inefficient force feedback methods, resulting in an unrealistic solution for individuals who rely on these devices. This paper responds through the development, integration and analysis of a novel feedback architecture where haptic information during the neural control of a prosthetic hand is perceived through multi-frequency auditory signals. Through representing force magnitude with volume and force location with frequency, the feedback architecture can translate the haptic experiences of a robotic end effector into the alternative sensory modality of sound. Previous research with the proposed cross-modal feedback method confirmed its learnability, so the current work aimed to investigate which frequency map (i.e. frequency-specific locations on the hand) is optimal in helping users distinguish between hand-held objects and tasks associated with them. After short use with the cross-modal feedback during the electromyographic (EMG) control of a prosthetic hand, testing results show that users are able to use audial feedback alone to discriminate between everyday objects. While users showed adaptation to three different frequency maps, the simplest map containing only two frequencies was found to be the most useful in discriminating between objects. This outcome provides support for the feasibility and practicality of the cross-modal feedback method during the neural control of prosthetics.
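
    A toy rendering of the described mapping, force magnitude to volume and contact location to frequency, using the two-frequency map the study found most useful; the sample rate, duration and normalisation are our assumptions:

```python
import numpy as np

def force_to_audio(force_mag, contact_idx, freqs=(440.0, 880.0),
                   sr=44100, dur=0.2):
    """Return a mono tone whose loudness encodes force magnitude (0..1)
    and whose pitch encodes which of two hand locations made contact."""
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    amp = np.clip(force_mag, 0.0, 1.0)
    return amp * np.sin(2 * np.pi * freqs[contact_idx] * t)
```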

  19. 3D Image Fusion to Localise Intercostal Arteries During TEVAR

    Directory of Open Access Journals (Sweden)

    G. Koutouzi

    Full Text Available Purpose: Preservation of intercostal arteries during thoracic aortic procedures reduces the risk of post-operative paraparesis. The origins of the intercostal arteries are visible on pre-operative computed tomography angiography (CTA, but rarely on intra-operative angiography. The purpose of this report is to suggest an image fusion technique for intra-operative localisation of the intercostal arteries during thoracic endovascular repair (TEVAR. Technique: The ostia of the intercostal arteries are identified and manually marked with rings on the pre-operative CTA. The optimal distal landing site in the descending aorta is determined and marked, allowing enough length for an adequate seal and attachment without covering more intercostal arteries than necessary. After 3D/3D fusion of the pre-operative CTA with an intra-operative cone-beam CT (CBCT), the markings are overlaid on the live fluoroscopy screen for guidance. The accuracy of the overlay is confirmed with digital subtraction angiography (DSA) and the overlay is adjusted when needed. Stent graft deployment is guided by the markings. The initial experience of this technique in seven patients is presented. Results: 3D image fusion was feasible in all cases. Follow-up CTA after 1 month revealed that all intercostal arteries planned for preservation were patent. None of the patients developed signs of spinal cord ischaemia. Conclusion: 3D image fusion can be used to localise the intercostal arteries during TEVAR. This may preserve some intercostal arteries and reduce the risk of post-operative spinal cord ischaemia. Keywords: TEVAR, Intercostal artery, Spinal cord ischaemia, 3D image fusion, Image guidance, Cone-beam CT

  20. Performance comparison of different graylevel image fusion schemes through a universal image quality index

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.

    2003-01-01

    We applied a recently introduced universal image quality index Q that quantifies the distortion of a processed image relative to its original version, to assess the performance of different graylevel image fusion schemes. The method is as follows. First, we adopt an original test image as the
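
    The record is truncated here, but the universal image quality index Q it applies (Wang and Bovik's index) has a well-known sliding-window form combining correlation, luminance and contrast terms; a minimal sketch with an assumed 8x8 window:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def quality_index(x, y, win=8):
    """Mean universal image quality index Q between images x and y:
    Q = 4*cov*mx*my / ((varx + vary) * (mx^2 + my^2)) per window."""
    x = x.astype(float)
    y = y.astype(float)
    mx, my = uniform_filter(x, win), uniform_filter(y, win)
    vx = uniform_filter(x * x, win) - mx * mx
    vy = uniform_filter(y * y, win) - my * my
    cxy = uniform_filter(x * y, win) - mx * my
    num = 4.0 * cxy * mx * my
    den = (vx + vy) * (mx * mx + my * my)
    q = np.where(den > 1e-12, num / np.maximum(den, 1e-12), 1.0)  # flat windows
    return q.mean()
```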

  1. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    Directory of Open Access Journals (Sweden)

    Peter Kampmann

    2014-04-01

    Full Text Available With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. We motivate the use of a multi-modal tactile sensory system that combines static and dynamic force sensor arrays with an absolute force measurement system. This publication focuses on the development of a compact sensor interface for a fiber-optic sensor array, as optical measurement principles tend to have bulky interfaces. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of the approach.

  2. Robotic 3D scanner as an alternative to standard modalities of medical imaging.

    Science.gov (United States)

    Chromy, Adam; Zalud, Ludek

    2014-01-01

    There are special medical cases where standard medical imaging modalities are able to offer sufficient results, but not in an optimal way: the desired results are produced at unnecessarily high expense, with redundant information, or with needless demands on the patient. This paper deals with one such case, where the information useful for the examination is the body surface only and a view inside the body is not needed. A new specialized medical imaging device was developed for this situation. In the Introduction section, an analysis of presently used medical imaging modalities is presented, which shows that no available imaging device fits these purposes well. In the next section, the development of the new specialized medical imaging device is presented, and its principles and functions are described. Then, the parameters of the new device are compared with those of present systems, showing significant advantages over present imaging systems.

  3. Quality Assurance of Serial 3D Image Registration, Fusion, and Segmentation

    International Nuclear Information System (INIS)

    Sharpe, Michael; Brock, Kristy K.

    2008-01-01

    Radiotherapy relies on images to plan, guide, and assess treatment. Image registration, fusion, and segmentation are integral to these processes; specifically for aiding anatomic delineation, assessing organ motion, and aligning targets with treatment beams in image-guided radiation therapy (IGRT). Future developments in image registration will also improve estimations of the actual dose delivered and quantitative assessment in patient follow-up exams. This article summarizes common and emerging technologies and reviews the role of image registration, fusion, and segmentation in radiotherapy processes. The current quality assurance practices are summarized, and implications for clinical procedures are discussed

  4. Multi-Modal, Multi-Touch Interaction with Maps in Disaster Management Applications

    Directory of Open Access Journals (Sweden)

    V. Paelke

    2012-07-01

    Full Text Available Multi-touch interaction has become popular in recent years and impressive advances in technology have been demonstrated, with the presentation of digital maps as a common presentation scenario. However, most existing systems are really technology demonstrators and have not been designed with real applications in mind. A critical factor in the management of disaster situations is access to current and reliable data. New sensors and data acquisition platforms (e.g., satellites, UAVs, mobile sensor networks) have improved the supply of spatial data tremendously. However, in many cases this data is not well integrated into current crisis management systems, and the capabilities to analyze and use it lag behind sensor capabilities. Therefore, it is essential to develop techniques that allow the effective organization, use and management of heterogeneous data from a wide variety of data sources. Standard user interfaces are not well suited to provide this information to crisis managers. Especially in dynamic situations, conventional cartographic displays and mouse-based interaction techniques fail to address the need to review a situation rapidly and act on it as a team. The development of novel interaction techniques like multi-touch and tangible interaction in combination with large displays provides a promising base technology to provide crisis managers with an adequate overview of the situation and to share relevant information with other stakeholders in a collaborative setting. However, design expertise on the use of such techniques in interfaces for real-world applications is still very sparse. In this paper we report on interdisciplinary research with a user- and application-centric focus to establish real-world requirements, to design new multi-modal mapping interfaces, and to validate them in disaster management applications. Initial results show that tangible and pen-based interaction are well suited to provide an intuitive and visible way to

  5. Obstacle traversal and self-righting of bio-inspired robots reveal the physics of multi-modal locomotion

    Science.gov (United States)

    Li, Chen; Fearing, Ronald; Full, Robert

    Most animals move in nature in a variety of locomotor modes. For example, to traverse obstacles like dense vegetation, cockroaches can climb over, push across, reorient their bodies to maneuver through slits, or even transition among these modes forming diverse locomotor pathways; if flipped over, they can also self-right using wings or legs to generate body pitch or roll. By contrast, most locomotion studies have focused on a single mode such as running, walking, or jumping, and robots are still far from capable of life-like, robust, multi-modal locomotion in the real world. Here, we present two recent studies using bio-inspired robots, together with new locomotion energy landscapes derived from locomotor-environment interaction physics, to begin to understand the physics of multi-modal locomotion. (1) Our experiment of a cockroach-inspired legged robot traversing grass-like beam obstacles reveals that, with a terradynamically "streamlined" rounded body like that of the insect, robot traversal becomes more probable by accessing locomotor pathways that overcome lower potential energy barriers. (2) Our experiment of a cockroach-inspired self-righting robot further suggests that body vibrations are crucial for exploring locomotion energy landscapes and reaching lower barrier pathways. Finally, we posit that our new framework of locomotion energy landscapes holds promise to better understand and predict multi-modal biological and robotic movement.

  6. Percolated microstructures for multi-modal transport enhancement in porous active materials

    Energy Technology Data Exchange (ETDEWEB)

    McKay, Ian Salmon; Yang, Sungwoo; Wang, Evelyn N.; Kim, Hyunho

    2018-03-13

    A method of forming a composite material for use in multi-modal transport includes providing three-dimensional graphene having hollow channels, enabling a polymer to wick into the hollow channels of the three-dimensional graphene, curing the polymer to form a cured three-dimensional graphene, adding an active material to the cured three-dimensional graphene to form a composite material, and removing the polymer from within the hollow channels. A composite material formed according to the method is also provided.

  7. Imaging modalities for the non-invasive diagnosis of endometriosis.

    Science.gov (United States)

    Nisenblat, Vicki; Bossuyt, Patrick M M; Farquhar, Cindy; Johnson, Neil; Hull, M Louise

    2016-02-26

    participants; sensitivity 0.95 (95% CI 0.90, 1.00), specificity 0.91 (95% CI 0.86, 0.97)) met the criteria for a replacement and SnNout triage test and approached the criteria for a SpPin test. For DIE, TVUS (nine studies, 12 data sets, 934 participants; sensitivity 0.79 (95% CI 0.69, 0.89) and specificity 0.94 (95% CI 0.88, 1.00)) approached the criteria for a SpPin triage test, and MRI (six studies, seven data sets, 266 participants; sensitivity 0.94 (95% CI 0.90, 0.97), specificity 0.77 (95% CI 0.44, 1.00)) approached the criteria for a replacement and SnNout triage test. Other imaging tests assessed in small individual studies could not be statistically evaluated. TVUS met the criteria for a SpPin triage test in mapping DIE to uterosacral ligaments, rectovaginal septum, vaginal wall, pouch of Douglas (POD) and rectosigmoid. MRI met the criteria for a SpPin triage test for POD and vaginal and rectosigmoid endometriosis. Transrectal ultrasonography (TRUS) might qualify as a SpPin triage test for rectosigmoid involvement but could not be adequately assessed for other anatomical sites because heterogeneous data were scant. Multi-detector computerised tomography enema (MDCT-e) displayed the highest diagnostic performance for rectosigmoid and other bowel endometriosis and met the criteria for both SpPin and SnNout triage tests, but studies were too few to provide meaningful results. Diagnostic accuracies were higher for TVUS with bowel preparation (TVUS-BP) and rectal water contrast (RWC-TVS) and for 3.0 T MRI than for conventional methods, although the paucity of studies precluded statistical evaluation. None of the evaluated imaging modalities were able to detect overall pelvic endometriosis with enough accuracy that they would be suggested to replace surgery. Specifically for endometrioma, TVUS qualified as a SpPin triage test. MRI displayed sufficient accuracy to suggest utility as a replacement test, but the data were too scant to permit meaningful conclusions. TVUS could be

  8. Improving Accuracy for Image Fusion in Abdominal Ultrasonography

    Directory of Open Access Journals (Sweden)

    Caroline Ewertsen

    2012-08-01

    Full Text Available Image fusion involving real-time ultrasound (US) is a technique where previously recorded computed tomography (CT) or magnetic resonance images (MRI) are reformatted in a projection to fit the real-time US images after an initial co-registration. The co-registration aligns the images by means of common planes or points. We evaluated the accuracy of the alignment when varying parameters such as patient position, respiratory phase and distance from the co-registration points/planes. We performed a total of 80 co-registrations and obtained the highest accuracy when the respiratory phase for the co-registration procedure was the same as when the CT or MRI was obtained. Furthermore, choosing co-registration points/planes close to the area of interest also improved the accuracy. With all settings optimized, a mean error of 3.2 mm was obtained. We conclude that image fusion involving real-time US is an accurate method for abdominal examinations and that the accuracy is influenced by various adjustable factors that should be kept in mind.

  9. Supervised Cross-Modal Factor Analysis for Multiple Modal Data Classification

    KAUST Repository

    Wang, Jingbin

    2015-10-09

    In this paper we study the problem of learning from multi-modal data for the purpose of document classification. In this problem, each document is composed of two different modalities of data, i.e., an image and a text. Cross-modal factor analysis (CFA) has been proposed to project the two modalities to a shared data space, so that the classification of an image or a text can be performed directly in this space. A disadvantage of CFA is that it ignores the supervision information. In this paper, we improve CFA by incorporating the supervision information to represent and classify both the image and text modalities of documents. We project both image and text data to a shared data space by factor analysis, and then train a class label predictor in the shared space to use the class label information. The factor analysis parameter and the predictor parameter are learned jointly by solving one single objective function. With this objective function, we minimize the distance between the projections of the image and text of the same document, and the classification error of the projection measured by the hinge loss function. The objective function is optimized by an alternate optimization strategy in an iterative algorithm. Experiments on two different multi-modal document data sets show the advantage of the proposed algorithm over other CFA methods.
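
    For reference, the classical (unsupervised) CFA projection that the paper extends fits in a few lines: orthonormal projections coupling the two modalities come from an SVD of the cross-covariance. The supervised variant additionally learns a hinge-loss label predictor jointly, which this sketch omits:

```python
import numpy as np

def cfa(X, Y, k=10):
    """Cross-modal factor analysis: find Wx, Wy minimising
    ||Xc @ Wx - Yc @ Wy||_F over orthonormal projections, solved by an
    SVD of the cross-covariance. X: (n, dx) image features, Y: (n, dy) text."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    Wx, Wy = U[:, :k], Vt[:k].T
    return Wx, Wy  # shared-space coordinates: Xc @ Wx and Yc @ Wy
```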

  10. MEDCIS: Multi-Modality Epilepsy Data Capture and Integration System.

    Science.gov (United States)

    Zhang, Guo-Qiang; Cui, Licong; Lhatoo, Samden; Schuele, Stephan U; Sahoo, Satya S

    2014-01-01

    Sudden Unexpected Death in Epilepsy (SUDEP) is the leading mode of epilepsy-related death and is most common in patients with intractable, frequent, and continuing seizures. A statistically significant cohort of patients for SUDEP study requires meticulous, prospective follow-up of a large population at elevated risk, best represented by the Epilepsy Monitoring Unit (EMU) patient population. Multiple EMUs need to collaborate and share data to build a larger cohort of potential SUDEP patients using a state-of-the-art informatics infrastructure. To address the challenges of data integration and data access from multiple EMUs, we developed the Multi-Modality Epilepsy Data Capture and Integration System (MEDCIS) that combines retrospective clinical free-text processing using NLP, prospective structured data capture using an ontology-driven interface, and interfaces for cohort search and signal visualization, all in a single integrated environment. A dedicated Epilepsy and Seizure Ontology (EpSO) has been used to streamline the user interfaces, enhance usability, and enable mappings across distributed databases so that federated queries can be executed. MEDCIS contained 936 patient data sets from the EMUs of University Hospitals Case Medical Center (UH CMC) in Cleveland and Northwestern Memorial Hospital (NMH) in Chicago. Patients from UH CMC and NMH were stored in different databases and then federated through MEDCIS using EpSO and our mapping module. More than 77 GB of multi-modal signal data were processed using the Cloudwave pipeline and made available for rendering through the web interface. About 74% of the 40 open clinical questions of interest were answerable accurately using the EpSO-driven VISual AGgregator and Explorer (VISAGE) interface. Questions not directly answerable were due either to their inherent computational complexity, the unavailability of primary information, or the scope of the concepts formulated in the existing EpSO.

  11. A data fusion environment for multimodal and multi-informational neuronavigation.

    Science.gov (United States)

    Jannin, P; Fleig, O J; Seigneuret, E; Grova, C; Morandi, X; Scarabin, J M

    2000-01-01

    Part of the planning and performance of neurosurgery consists of determining target areas, areas to be avoided, landmark areas, and trajectories, all of which are components of the surgical script. Nowadays, neurosurgeons have access to multimodal medical imaging to support the definition of the surgical script. The purpose of this paper is to present a software environment developed by the authors that allows full multimodal and multi-informational planning as well as neuronavigation for epilepsy and tumor surgery. We have developed a data fusion environment dedicated to neuronavigation around the Surgical Microscope Neuronavigator system (Carl Zeiss, Oberkochen, Germany). This environment includes registration, segmentation, 3D visualization, and interaction-applied tools. It provides the neuronavigation system with the multimodal information involved in the definition of the surgical script: lesional areas, sulci, ventricles segmented from magnetic resonance imaging (MRI), vessels segmented from magnetic resonance angiography (MRA), functional areas from magneto-encephalography (MEG), and functional magnetic resonance imaging (fMRI) for somatosensory, motor, or language activation. These data are considered to be relevant for the performance of the surgical procedure. The definition of each entity results from the same procedure: registration to the anatomical MRI data set (defined as the reference data set), segmentation, fused 3D display, selection of the relevant entities for the surgical step, encoding in 3D surface-based representation, and storage of the 3D surfaces in a file recognized by the neuronavigation software (STP 3.4, Leibinger; Freiburg, Germany). Multimodal neuronavigation is illustrated with two clinical cases for which multimodal information was introduced into the neuronavigation system. Lesional areas were used to define and follow the surgical path, sulci and vessels helped identify the anatomical environment of the surgical field, and

  12. Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM

    OpenAIRE

    Park, Chanoh; Moghadam, Peyman; Kim, Soohwan; Elfes, Alberto; Fookes, Clinton; Sridharan, Sridha

    2017-01-01

    The concept of continuous-time trajectory representation has brought increased accuracy and efficiency to multi-modal sensor fusion in modern SLAM. However, despite these advantages, its offline nature, caused by the requirement of global batch optimization, critically hinders its relevance for real-time and life-long applications. In this paper, we present a dense map-centric SLAM method based on a continuous-time trajectory to cope with this problem. The proposed system locally f...

  13. A practical approach for active camera coordination based on a fusion-driven multi-agent system

    Science.gov (United States)

    Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.

    2014-04-01

    In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, and we have to look at different approaches. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.

  14. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer’s disease

    International Nuclear Information System (INIS)

    Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong

    2016-01-01

    Computer based diagnosis of Alzheimer’s disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  15. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer’s disease

    Energy Technology Data Exchange (ETDEWEB)

    Bhateja, Vikrant, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn; Moin, Aisha; Srivastava, Anuja [Shri Ramswaroop Memorial Group of Professional Colleges (SRMGPC), Lucknow, Uttar Pradesh 226028 (India); Bao, Le Nguyen [Duytan University, Danang 550000 (Viet Nam); Lay-Ekuakille, Aimé [Department of Innovation Engineering, University of Salento, Lecce 73100 (Italy); Le, Dac-Nhuong, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn [Duytan University, Danang 550000 (Viet Nam); Haiphong University, Haiphong 180000 (Viet Nam)

    2016-07-15

    Computer based diagnosis of Alzheimer’s disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  16. A multi-modal approach to assessing recovery in youth athletes following concussion.

    Science.gov (United States)

    Reed, Nick; Murphy, James; Dick, Talia; Mah, Katie; Paniccia, Melissa; Verweel, Lee; Dobney, Danielle; Keightley, Michelle

    2014-09-25

    Concussion is one of the most commonly reported injuries amongst children and youth involved in sport participation. Following a concussion, youth can experience a range of short- and long-term neurobehavioral symptoms (somatic, cognitive and emotional/behavioral) that can have a significant impact on one's participation in daily activities and pursuits of interest (e.g., school, sports, work, family/social life, etc.). Despite this, there remains a paucity of clinically driven research aimed specifically at exploring concussion within the youth sport population, and more specifically, multi-modal approaches to measuring recovery. This article provides an overview of a novel, multi-modal approach to measuring recovery amongst youth athletes following concussion. The presented approach involves the use of both pre-injury/baseline testing and post-injury/follow-up testing to assess performance across a wide variety of domains (post-concussion symptoms, cognition, balance, strength, agility/motor skills and resting-state heart rate variability). The goal of this research is to gain a more objective and accurate understanding of recovery following concussion in youth athletes (ages 10-18 years). Findings from this research can help to inform the development and use of improved approaches to concussion management and rehabilitation specific to the youth sport community.

  17. THERMAL AND VISIBLE SATELLITE IMAGE FUSION USING WAVELET IN REMOTE SENSING AND SATELLITE IMAGE PROCESSING

    Directory of Open Access Journals (Sweden)

    A. H. Ahrari

    2017-09-01

    Full Text Available The multimodal remote sensing approach is based on merging data from different portions of the electromagnetic spectrum, which improves the accuracy of satellite image processing and interpretation. Visible and thermal infrared bands independently contain valuable spatial and spectral information: visible bands provide rich spatial information, while thermal bands provide radiometric and spectral information different from that of the visible range. However, low spatial resolution is the most important limitation of thermal infrared bands. Using satellite image fusion, it is possible to merge them into a single thermal image that contains high spectral and spatial information at the same time. The aim of this study is a quantitative and qualitative performance assessment of thermal and visible image fusion with the wavelet transform and different filters. In this research, the wavelet algorithm (Haar) and different decomposition filters (mean, linear, ma, min, and rand) were applied to the thermal and panchromatic bands of the Landsat 8 satellite as shortwave and longwave fusion methods. Finally, quality assessment was performed with quantitative and qualitative approaches, using quantitative parameters such as entropy, standard deviation, cross correlation, Q factor and mutual information. For thermal and visible image fusion accuracy assessment, all parameters (quantitative and qualitative) must be analysed with respect to each other. Among all the relevant statistical factors, correlation gives the most meaningful result and the closest agreement with the qualitative assessment. Results showed that the mean and linear filters produce better fused images than the other filters in the Haar algorithm; the linear and mean filters have the same performance, and there is no difference between their qualitative and quantitative results.
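
    The quantitative parameters the paper lists can be computed directly from a fused image and a reference band; a sketch assuming 8-bit gray-level inputs (mutual information and the Q factor are omitted for brevity):

```python
import numpy as np

def fusion_metrics(fused, ref):
    """Entropy of the fused image, its standard deviation, and its cross
    correlation with a reference image, as simple fusion quality scores."""
    hist, _ = np.histogram(fused, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    entropy = float(-(hist * np.log2(hist)).sum())
    std = float(fused.std())
    cc = float(np.corrcoef(fused.ravel(), ref.ravel())[0, 1])
    return entropy, std, cc
```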

  18. Evaluation of Effective Parameters on Quality of Magnetic Resonance Imaging-computed Tomography Image Fusion in Head and Neck Tumors for Application in Treatment Planning

    Directory of Open Access Journals (Sweden)

    Atefeh Shirvani

    2017-01-01

    Background: In radiation therapy, computed tomography (CT) simulation is used in treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work tried to identify the practical issues in combining CT and MRI images in real clinical cases, and the effect of various factors on image fusion quality was evaluated. Materials and Methods: In this study, the data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the feasibility and quality of image fusion was evaluated; these parameters included the angle of the patient's head on the bed, slice thickness, slice gap, and the height of the patient's head. Results: The dominant factor affecting image fusion quality was the difference in slice gap between the CT and MRI images (cor = 0.86); where the difference in the height of the patient's head exceeded 4 cm, image fusion quality was <25%. Conclusion: The most important problem in image fusion is that MRI images are taken without regard to their use in treatment planning. In general, parameters related to patient position during MRI imaging should be chosen to be consistent with the patient's CT images in terms of location and angle.

  19. Interactive natural language acquisition in a multi-modal recurrent neural architecture

    Science.gov (United States)

    Heinrich, Stefan; Wermter, Stefan

    2018-01-01

    For the complex human brain that enables us to communicate in natural language, we have gathered a good understanding of the principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we are not yet able to understand the behavioural and mechanistic characteristics of natural language, and how mechanisms in the brain allow it to be acquired and processed. Bridging insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of the characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain, and propose a neurocognitively plausible model for embodied language acquisition from the real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous-time recurrent neural network in which different parts have different leakage characteristics and thus operate on multiple timescales for every modality, with the higher-level nodes of all modalities associated into cell assemblies. The model is capable of learning language production grounded in both temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.
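
    The multiple-timescale mechanism the abstract describes rests on a leaky-integrator update. The numpy sketch below is not the authors' model, only an illustration of that mechanism: each unit's time constant tau controls how quickly its internal state follows the recurrent input, so a "fast" group can track modality-specific dynamics while a "slow" group holds higher-level context. All sizes, weights and tau values here are illustrative.

```python
import numpy as np

def mtrnn_step(u, y, x, W, tau):
    """One leaky-integrator update of a continuous-time RNN.

    u   : internal (pre-activation) states, shape (n,)
    y   : current activations, shape (n,)
    x   : external input drive, shape (n,) (zero for hidden units)
    W   : recurrent weights, shape (n, n)
    tau : per-unit time constants; large tau -> slowly changing units
    """
    u = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y + x)
    return u, np.tanh(u)

# A 'fast' perceptual group (tau=2) and a 'slow' context group (tau=70):
# fast units track the input, slow units integrate it into context.
n_fast, n_slow = 8, 4
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 70.0)])
rng = np.random.default_rng(0)
W = rng.normal(scale=0.2, size=(n, n))
u, y = np.zeros(n), np.zeros(n)
for t in range(200):
    x = np.zeros(n)
    x[:n_fast] = np.sin(0.3 * t)        # drive only the fast group
    u, y = mtrnn_step(u, y, x, W, tau)
```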

  20. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Angel D. Sappa

    2016-06-01

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended, and sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).
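
    As a concrete illustration of the kind of setup such a study compares, the sketch below (using the PyWavelets package; the wavelet name, decomposition level and rules are assumptions, not the paper's configurations) fuses two registered images with a mean rule on the approximation band and either a mean or a maximum-magnitude rule on the detail bands.

```python
import numpy as np
import pywt

def fuse_wavelet(vis, ir, wavelet="db2", level=3, detail_rule="max"):
    """Fuse two registered images in the wavelet domain.

    The approximation band is averaged; each detail band is combined either
    by averaging or by keeping the coefficient of larger magnitude.
    """
    cv = pywt.wavedec2(vis.astype(float), wavelet, level=level)
    ci = pywt.wavedec2(ir.astype(float), wavelet, level=level)
    fused = [(cv[0] + ci[0]) / 2.0]               # approximation: mean
    for dv, di in zip(cv[1:], ci[1:]):            # per-level (H, V, D) details
        if detail_rule == "max":
            bands = tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                          for a, b in zip(dv, di))
        else:                                     # "mean"
            bands = tuple((a + b) / 2.0 for a, b in zip(dv, di))
        fused.append(bands)
    return pywt.waverec2(fused, wavelet)
```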

  1. Computer simulation of multi-elemental fusion reactor materials

    International Nuclear Information System (INIS)

    Voertler, K.

    2011-01-01

    Thermonuclear fusion is a sustainable energy solution in which energy is produced using processes similar to those in the sun. In this technology, hydrogen isotopes are fused to gain energy and consequently to produce electricity. In a fusion reactor, hydrogen isotopes are confined by magnetic fields as an ionized gas, the plasma. Since the core plasma is millions of degrees hot, the plasma-facing materials must meet special requirements. Moreover, the fusion of hydrogen isotopes in the plasma produces highly energetic neutrons, which places demanding requirements on the structural materials of the reactor. This thesis investigates the irradiation response of materials to be used in future fusion reactors. Interactions of the plasma with the reactor wall lead to the removal of surface atoms, their migration, and the formation of co-deposited layers such as tungsten carbide. Sputtering of tungsten carbide and deuterium trapping in tungsten carbide were investigated in this thesis. As a second topic, the primary interaction of neutrons in the structural material steel was examined, with iron chromium and iron nickel used as model materials for steel. This study was performed theoretically, by means of computer simulations on the atomic level. In contrast to previous studies in the field, in which simulations were limited to pure elements, this work used more complex, multi-elemental materials comprising two or more atomic species. The results of this thesis are on the microscale. One result is a catalogue of the atom species removed from tungsten carbide by the plasma; another is the atomic distribution of defects caused in iron chromium by the energetic neutrons. These microscopic results feed databases for multiscale modelling of fusion reactor materials, which aims to explain the macroscopic degradation of the materials. This thesis is therefore a relevant contribution to the investigation of multi-elemental fusion reactor materials.

  2. Image fusion for enhanced forest structural assessment

    CSIR Research Space (South Africa)

    Roberts, JW

    2011-01-01

    This research explores the potential benefits of fusing active and passive medium-resolution satellite-borne sensor data for forest structural assessment. Image fusion was applied as a means of retaining disparate data features relevant to modeling...

  3. MULTI-TEMPORAL REMOTE SENSING IMAGE CLASSIFICATION - A MULTI-VIEW APPROACH

    Data.gov (United States)

    National Aeronautics and Space Administration — Varun Chandola and Ranga Raju Vatsavai. Abstract: Multispectral remote sensing images have...

  4. Region-based multifocus image fusion for the precise acquisition of Pap smear images.

    Science.gov (United States)

    Tello-Mijares, Santiago; Bescós, Jesús

    2018-05-01

    A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolaou (Pap smear) source images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of original pixel information and negligible visibility of fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact-removal process. The combination of a region-oriented approach, instead of block-based approaches, with minimal modification of the values of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset; the experimental results show that it obtains the best and most stable quality indicators.
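
    The region-wise selection step can be sketched compactly. Below is a minimal Python illustration, assuming variance of the Laplacian as the focus operator (the paper's operator and its artifact-removal stage are not reproduced): for each segmented region, the pixels are copied from whichever frame of the focal sweep is sharpest there.

```python
import numpy as np
from scipy import ndimage

def focus_score(img):
    """Variance of the Laplacian; larger means sharper. A common focus
    operator, used here as a stand-in for the one in the paper."""
    return ndimage.laplace(img.astype(float)).var()

def fuse_by_regions(stack, labels):
    """Per segmented region, keep the pixels of the frame in which that
    region is best focused.

    stack  : list of registered grayscale frames from the focal sweep
    labels : integer region map of the reference frame (e.g., mean shift)
    """
    laps = [ndimage.laplace(im.astype(float)) for im in stack]
    fused = np.zeros_like(stack[0], dtype=float)
    for r in np.unique(labels):
        mask = labels == r
        best = int(np.argmax([lap[mask].var() for lap in laps]))
        fused[mask] = stack[best][mask]
    return fused

# The best-focused reference frame can be chosen the same way:
# ref = stack[int(np.argmax([focus_score(im) for im in stack]))]
```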

  5. Integrated ultrasound and gamma imaging probe for medical diagnosis

    International Nuclear Information System (INIS)

    Pani, R.; Pellegrini, R.; Cinti, M. N.; Polito, C.; Orlandi, C.; Fabbri, A.; Vincentis, G. De

    2016-01-01

    In the last few years, integrated multi-modality systems have been developed, aimed at improving the accuracy of medical diagnosis by correlating information from different imaging techniques. In this context, a novel dual-modality probe is proposed, based on an ultrasound detector integrated with a small-field-of-view single photon emission gamma camera. The probe, dedicated to visualizing small organs or tissues located at short depths, produces dual-modality images and permits the correlation of morphological and functional information. The small-field-of-view gamma camera consists of a continuous NaI:Tl scintillation crystal coupled with two multi-anode photomultiplier tubes. Both detectors were characterized in terms of position linearity and spatial resolution performance in order to guarantee the spatial correspondence between the ultrasound and gamma images. Finally, dual-modality images of custom phantoms were obtained, highlighting the good co-registration between the ultrasound and gamma images, in terms of geometry and image processing, as a consequence of the calibration procedures.

  6. An object-oriented framework for medical image registration, fusion, and visualization.

    Science.gov (United States)

    Zhu, Yang-Ming; Cochoff, Steven M

    2006-06-01

    An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance its maintainability and portability. Three sample applications built atop this framework are illustrated to show its effectiveness: the first is for volume image grouping and re-sampling, the second for 2D registration and fusion, and the last for visualization of single images as well as registered volume images.

  7. Chronic granulomatous disease: Value of the newer imaging modalities

    International Nuclear Information System (INIS)

    Stricof, D.D.; Glazer, M.; Amendola, A.

    1984-01-01

    The contribution of computed tomography (CT), ultrasound (US), and nuclear medicine studies in the evaluation and management of seven patients with chronic granulomatous disease was retrospectively reviewed. These modalities proved valuable in detecting sites of infection, particularly in the abdomen. Three patients had liver abscesses, two had suppurative retroperitoneal lymphadenopathy, one had empyema, and one had a scrotal abscess. Furthermore, CT- or US-guided percutaneous aspiration and/or drainage of infected material was successfully performed on three separate occasions in a single patient, obviating the need for surgery. The newer imaging modalities are useful in the prompt diagnosis and, in some instances, non-operative therapy of complications of chronic granulomatous disease. (orig.)

  8. Thoracic neuroblastoma: what is the best imaging modality for evaluating extent of disease?

    International Nuclear Information System (INIS)

    Slovis, T.L.; Meza, M.P.; Cushing, B.; Elkowitz, S.S.; Leonidas, J.C.; Festa, R.; Kogutt, M.S.; Fletcher, B.D.

    1997-01-01

    Thoracic neuroblastoma accounts for 15% of all cases of neuroblastoma. A minority of children with thoracic neuroblastoma will have dumbbell tumors, i.e., intraspinal extension, but only half of these patients will have neurologic signs or symptoms. Hypothesis: MR imaging is the single best test to evaluate the extent of thoracic and spinal disease in thoracic neuroblastoma after the diagnosis of a mass is established on plain film. A retrospective multi-institutional investigation over 7 years of all cases of thoracic neuroblastoma (n=26) imaged with CT and/or MR was carried out, with the images reviewed for detection of the extent of disease. The chest film, nuclear bone scan, and other imaging modalities were also reviewed. The surgical and histologic correlation in each case, as well as the patients' staging and outcome, were tabulated. Chest radiography was 100% sensitive in suggesting the diagnosis. MR imaging was 100% sensitive in predicting enlarged lymph nodes, intraspinal extension, and chest wall involvement. CT was 88% sensitive for intraspinal extension but only 20% sensitive for lymph node enlargement; it was 100% sensitive in detecting chest wall involvement. Direct comparison of CT and MR imaging in six cases revealed no difference in detection of enlarged lymph nodes or chest wall involvement. Neither test was able to detect remote disease, as noted by bone scan. The chest film is 100% sensitive in suggesting the diagnosis of thoracic neuroblastoma; MR imaging appears to be the single best test for detecting nodal involvement, intraspinal extension, and chest wall involvement. (orig.)

  9. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    The image segmentation method based on the two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This is essentially a hard-decision method: due to the uncertainty in labeling pixels around the threshold, it can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image, and then segment the image according to fuzzy reasoning. The experimental results show that the proposed method obtains better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarimetric SAR images.
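
    The abstract does not give the membership functions or the reasoning rules, so the sketch below is only one plausible instantiation of the idea: a soft logistic ramp replaces the hard threshold on each channel, and the per-channel memberships are fused by averaging before a final defuzzification.

```python
import numpy as np

def soft_membership(channel, t, width=15.0):
    """Fuzzy 'object' membership around threshold t; pixels near t get
    values near 0.5 instead of a hard 0/1 label (width is assumed)."""
    return 1.0 / (1.0 + np.exp(-(channel.astype(float) - t) / width))

def fuzzy_fusion_segment(img_rgb, thresholds, cut=0.5):
    """Average the three channel memberships (the fusion step), then
    defuzzify with a 0.5 cut to obtain the binary segmentation."""
    mu = np.mean([soft_membership(img_rgb[..., c], thresholds[c])
                  for c in range(3)], axis=0)
    return mu > cut
```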

  10. The TNO Multiband Image Data Collection

    NARCIS (Netherlands)

    Toet, A.

    2017-01-01

    Despite the ongoing interest in the fusion of multi-band images for surveillance applications and a steady stream of publications in this area, only a very small number of static registered multi-band test images (and a total lack of dynamic image sequences) is publicly available for the development and evaluation of image fusion algorithms.

  11. IMPROVING THE QUALITY OF NEAR-INFRARED IMAGING OF IN VIVO BLOOD VESSELS USING IMAGE FUSION METHODS

    DEFF Research Database (Denmark)

    Jensen, Andreas Kryger; Savarimuthu, Thiusius Rajeeth; Sørensen, Anders Stengaard

    2009-01-01

    We investigate methods for improving the visual quality of in vivo images of blood vessels in the human forearm. Using a near-infrared light source and a dual-CCD-chip camera system capable of capturing images at visual and near-infrared spectra, we evaluate three fusion methods in terms of their capability to enhance the blood vessels while preserving the spectral signature of the original color image. Furthermore, we investigate the possibility of removing hair from the images using a fusion rule based on the "à trous" stationary wavelet decomposition. The method with the best overall performance, with both speed and quality in mind, is the Intensity Injection method. Using the developed system and the methods presented in this article, it is possible to create images of high visual quality with highly emphasized blood vessels.
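
    The article's Intensity Injection method is not spelled out in this record, so the following is only a hedged sketch of the general idea: inject the NIR signal into the luminance of the color image while leaving hue and saturation (the spectral signature) untouched. The HSV blend and the weight alpha are assumptions.

```python
import numpy as np
from skimage import color

def intensity_injection(rgb, nir, alpha=0.6):
    """Blend NIR into the luminance (V) channel of the color image,
    preserving hue and saturation.

    rgb : float image in [0, 1], shape (H, W, 3)
    nir : registered near-infrared image in [0, 1], shape (H, W)
    """
    hsv = color.rgb2hsv(rgb)
    hsv[..., 2] = (1.0 - alpha) * hsv[..., 2] + alpha * nir
    return color.hsv2rgb(hsv)
```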

  12. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    Science.gov (United States)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
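
    A minimal sketch of the high-frequency half of such a scheme is given below, with two simplifications flagged loudly: a standard Haar DWT (via PyWavelets) stands in for the lifting wavelet transform, and the low-frequency bands are simply averaged instead of being fused through matrix completion and robust PCA.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_variance(x, size=5):
    """Regional variance estimated with a sliding window."""
    mean = uniform_filter(x, size)
    return uniform_filter(x * x, size) - mean * mean

def fuse_highfreq(a, b, size=5):
    """Pick, per coefficient, the source with larger regional variance."""
    return np.where(local_variance(a, size) >= local_variance(b, size), a, b)

def fuse_images(img1, img2, wavelet="haar"):
    """One-level DWT fusion: mean rule on the approximation band (the paper
    applies RPCA here), regional-variance rule on the detail bands."""
    ca1, (h1, v1, d1) = pywt.dwt2(img1.astype(float), wavelet)
    ca2, (h2, v2, d2) = pywt.dwt2(img2.astype(float), wavelet)
    fused = ((ca1 + ca2) / 2.0,
             (fuse_highfreq(h1, h2),
              fuse_highfreq(v1, v2),
              fuse_highfreq(d1, d2)))
    return pywt.idwt2(fused, wavelet)
```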

  13. CT and MR image fusion using two different methods after prostate brachytherapy: impact on post-implant dosimetric assessment

    International Nuclear Information System (INIS)

    Servois, V.; El Khoury, C.; Lantoine, A.; Ollivier, L.; Neuenschwander, S.; Chauveinc, L.; Cosset, J.M.; Flam, T.; Rosenwald, J.C.

    2003-01-01

    To study different methods of CT and MR image fusion in patients treated by brachytherapy for localized prostate cancer, and to compare the results of the dosimetric study performed on CT slices alone with those obtained on fused images. Fourteen patients treated by 125I implants were retrospectively studied. The CT examinations were performed with contiguous 5-mm-thick slices, and the MR images were obtained with a surface coil with contiguous 3-mm-thick slices. For the image fusion process, only the T2-weighted MR sequence was used. Two image fusion processes were carried out for each patient, using as reference marks either the bones of the pelvis or the implanted seeds. A quantitative and qualitative appreciation was made by the operators for each patient and both image fusion methods. The dosimetric study, obtained with dedicated software, was performed on the CT images and on all types of fused images, and the usual dosimetric indexes (D90, V100 and V150) were compared for each type of image. The quantitative results given by the image fusion software showed accuracy superior to that obtained with the pelvic bony reference marks; likewise, the qualitative and quantitative assessments made by the operators showed better accuracy for the image fusion based on the iodine seeds. For two out of three patients presenting a D90 inferior to 145 Gy on the CT examination, the D90 was superior to this norm when the dosimetry was based on image fusion, whatever the method used. The image fusion method based on implanted seed matching therefore seems more precise than the one using bony reference marks. A dosimetric study performed on fused images could correct possible errors, mainly due to the difficulty of delimiting the prostate contour on CT images. (authors)
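
    Seed-based fusion of this kind is typically posed as point-based rigid registration between matched seed centroids in the two modalities. The sketch below is a generic Kabsch/Procrustes solution, not the software used in the study.

```python
import numpy as np

def rigid_from_points(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t.

    src, dst : (n, 3) arrays of matched fiducials, e.g. seed centroids
               located on the MR and CT series respectively.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```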

  14. Feasibility study on sensor data fusion for the CP-140 aircraft: fusion architecture analyses

    Science.gov (United States)

    Shahbazian, Elisa

    1995-09-01

    Loral Canada completed (May 1995) a Department of National Defence (DND) Chief of Research and Development (CRAD) contract to study the feasibility of implementing a multi-sensor data fusion (MSDF) system onboard the CP-140 Aurora aircraft. This system is expected to fuse data from: (a) attribute-measurement-oriented sensors (ESM, IFF, etc.); (b) imaging sensors (FLIR, SAR, etc.); (c) tracking sensors (radar, acoustics, etc.); (d) data from remote platforms (data links); and (e) non-sensor data (intelligence reports, environmental data, visual sightings, encyclopedic data, etc.). On purely theoretical considerations, a central-level fusion architecture would lead to a higher-performance fusion system. However, a number of system and fusion-architecture issues arise in fusing such dissimilar data: (1) the currently existing sensors are not designed to provide the type of data required by a fusion system; (2) the different types of data (attribute, imaging, tracking, etc.) may require different degrees of processing before they can be used efficiently within a fusion system; (3) the quality of the data from different sensors, and more importantly from remote platforms via the data links, must be taken into account before fusing; and (4) the non-sensor data may impose specific requirements on the fusion architecture (e.g., variable weight/priority for the data from different sensors). This paper presents the analyses performed for the selection of the fusion architecture for the enhanced sensor suite planned for the CP-140 aircraft, in the context of the mission requirements and environmental conditions.

  15. Multi-threshold white matter structural networks fusion for accurate diagnosis of Tourette syndrome children

    Science.gov (United States)

    Wen, Hongwei; Liu, Yue; Wang, Shengpei; Li, Zuoyong; Zhang, Jishui; Peng, Yun; He, Huiguang

    2017-03-01

    Tourette syndrome (TS) is a childhood-onset neurobehavioral disorder. To date, TS is still misdiagnosed due to its varied presentation and lack of obvious clinical symptoms; studies of objective imaging biomarkers are therefore of great importance for early TS diagnosis. Tic generation has been linked to disturbed structural networks, and many recent efforts have investigated brain functional or structural networks using machine learning methods for the purpose of disease diagnosis; however, few of these studies have addressed TS, and several drawbacks remain in those that have. We therefore propose a novel classification framework integrating a multi-threshold strategy and a network fusion scheme to address these preexisting drawbacks. We used diffusion MRI probabilistic tractography to construct the structural networks of 44 TS children and 48 healthy children, and adapted the similarity network fusion algorithm specifically to fuse the multi-threshold structural networks. Graph theoretical analysis was then implemented, with nodal degree, nodal efficiency and nodal betweenness centrality selected as features. Finally, the support vector machine recursive feature elimination (SVM-RFE) algorithm was used for feature selection, and the optimal features were fed into an SVM to automatically discriminate TS children from controls. We achieved a high accuracy of 89.13%, evaluated by nested cross-validation, demonstrating the superior performance of our framework over comparison methods. The discriminative regions involved in classification were primarily located in the basal ganglia and frontal cortico-cortical networks, all highly related to the pathology of TS. Together, our study may provide potential neuroimaging biomarkers for early-stage TS diagnosis.
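
    Feature selection of this kind is available off the shelf. The sketch below uses scikit-learn's RFE with a linear SVM on synthetic placeholder data (the dimensions are arbitrary, and a faithful reproduction would rank features inside each fold of the nested cross-validation rather than once on all data, to avoid leakage).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

# Placeholder data: one row per child, columns = nodal degree, efficiency
# and betweenness of the fused multi-threshold network (92 = 44 TS + 48 HC).
rng = np.random.default_rng(0)
X = rng.normal(size=(92, 270))
y = rng.integers(0, 2, size=92)

svm = SVC(kernel="linear")                       # RFE needs coef_ to rank
rfe = RFE(estimator=svm, n_features_to_select=30, step=0.1)
X_sel = rfe.fit_transform(X, y)                  # iterative SVM-RFE pruning
print(cross_val_score(SVC(kernel="linear"), X_sel, y, cv=5).mean())
```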

  16. SU-E-I-53: Variation in Measurements of Breast Skin Thickness Obtained Using Different Imaging Modalities

    International Nuclear Information System (INIS)

    Nguyen, U; Kumaraswamy, N; Markey, M

    2014-01-01

    Purpose: To investigate variation in measurements of breast skin thickness obtained using different imaging modalities, including mammography, computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI). Methods: Breast skin thicknesses as measured by mammography, CT, ultrasound, and MRI were compared. Mammographic measurements of skin thickness were obtained from published studies that utilized standard positioning (upright) and compression. CT measurements were obtained from a published study of a prototype breast CT scanner in which the women were in the prone position and the breast was uncompressed. Dermatological ultrasound exams of the breast skin were conducted at our institution with the subjects in the upright position and the breast uncompressed. Breast skin thickness was also calculated from breast MRI exams at our institution, with the patient in the prone position and the breast uncompressed. Results: t-tests for independent samples demonstrated significant differences in mean breast skin thickness as measured by the different imaging modalities. Repeated-measures ANOVA revealed significant differences in breast skin thickness across different quadrants of the breast for some modalities. Conclusion: Measurements of breast skin thickness differ significantly across imaging modalities. Differences in the amount of compression and in patient positioning are possible reasons why measurements of breast skin thickness vary by modality.

  17. Dual-Modality PET/Ultrasound imaging of the Prostate

    Energy Technology Data Exchange (ETDEWEB)

    Huber, Jennifer S.; Moses, William W.; Pouliot, Jean; Hsu, I.C.

    2005-11-11

    Functional imaging with positron emission tomography (PET) will detect malignant tumors in the prostate and/or prostate bed, and may also help determine tumor "aggressiveness". However, the relative uptake in a prostate tumor can be so great that few other anatomical landmarks are visible in a PET image. Ultrasound imaging with a transrectal probe provides anatomical detail in the prostate region that can be co-registered with the sensitive functional information from PET imaging. Imaging the prostate with both PET and transrectal ultrasound (TRUS) will help determine the location of any cancer within the prostate region, and this dual-modality imaging should provide better detection and treatment of prostate cancer. LBNL has built a high-performance positron emission tomograph optimized to image the prostate. Compared with a standard whole-body PET camera, our prostate-optimized PET camera has the same sensitivity and resolution, lower backgrounds and lower cost. We plan to develop the hardware and software tools needed for a validated dual PET/TRUS prostate imaging system. We also plan to develop dual prostate imaging with PET and external transabdominal ultrasound, in case the TRUS system is too uncomfortable for some patients. We present the design and intended clinical uses for these dual imaging systems.

  18. Dual-Modality PET/Ultrasound imaging of the Prostate

    International Nuclear Information System (INIS)

    Huber, Jennifer S.; Moses, William W.; Pouliot, Jean; Hsu, I.C.

    2005-01-01

    Functional imaging with positron emission tomography (PET) will detect malignant tumors in the prostate and/or prostate bed, and may also help determine tumor "aggressiveness". However, the relative uptake in a prostate tumor can be so great that few other anatomical landmarks are visible in a PET image. Ultrasound imaging with a transrectal probe provides anatomical detail in the prostate region that can be co-registered with the sensitive functional information from PET imaging. Imaging the prostate with both PET and transrectal ultrasound (TRUS) will help determine the location of any cancer within the prostate region, and this dual-modality imaging should provide better detection and treatment of prostate cancer. LBNL has built a high-performance positron emission tomograph optimized to image the prostate. Compared with a standard whole-body PET camera, our prostate-optimized PET camera has the same sensitivity and resolution, lower backgrounds and lower cost. We plan to develop the hardware and software tools needed for a validated dual PET/TRUS prostate imaging system. We also plan to develop dual prostate imaging with PET and external transabdominal ultrasound, in case the TRUS system is too uncomfortable for some patients. We present the design and intended clinical uses for these dual imaging systems.

  19. Architecture of the Multi-Modal Organizational Research and Production Heterogeneous Network (MORPHnet)

    Energy Technology Data Exchange (ETDEWEB)

    Aiken, R.J.; Carlson, R.A.; Foster, I.T. [and others]

    1997-01-01

    The research and education (R&E) community requires persistent and scalable network infrastructure to concurrently support production and research applications as well as network research. In the past, the R&E community has relied on supporting parallel network and end-node infrastructures, which can be very expensive and inefficient for network service managers and application programmers. The grand challenge in networking is to provide support for multiple, concurrent, multi-layer views of the network for the applications and the network researchers, and to satisfy the sometimes conflicting requirements of both while ensuring that one type of traffic does not adversely affect the other. Internet and telecommunications service providers will also benefit from a multi-modal infrastructure, which can provide smoother transitions to new technologies and allow for testing of these technologies with real user traffic while they are still in pre-production mode. The authors' proposed approach requires the use of as much of the same network and end-system infrastructure as possible, to reduce the costs needed to support both classes of activities (i.e., production and research). Breaking the infrastructure into segments and objects (e.g., routers, switches, multiplexors, circuits, paths, etc.) gives the capability to dynamically construct and configure virtual active networks that address these requirements. These capabilities must be supported at the campus, regional, and wide-area network levels to allow for collaboration by geographically dispersed groups. The Multi-Modal Organizational Research and Production Heterogeneous Network (MORPHnet) described in this report is an initial architecture and framework designed to identify and support the capabilities needed for the proposed combined infrastructure and to address related research issues.

  20. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    Science.gov (United States)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.
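
    The contrast pyramid itself is straightforward to sketch. The Python code below (assumed here for illustration, not taken from the paper) builds and inverts a contrast pyramid and fuses two pyramids with externally supplied per-level weights; in the paper, those weights are what the Baldwinian clonal selection search optimises against a fitness function, a step not reproduced here.

```python
import numpy as np
from scipy import ndimage

def reduce_(img):
    """One Gaussian-pyramid reduction step."""
    return ndimage.gaussian_filter(img, 1.0)[::2, ::2]

def expand_(img, shape):
    """Upsample back to a given shape with bilinear interpolation."""
    z = (shape[0] / img.shape[0], shape[1] / img.shape[1])
    return ndimage.zoom(img, z, order=1)

def contrast_pyramid(img, levels=4, eps=1e-6):
    """C_i = G_i / EXPAND(G_{i+1}) - 1, plus the top Gaussian level."""
    g = [img.astype(float)]
    for _ in range(levels):
        g.append(reduce_(g[-1]))
    return [gi / (expand_(g[i + 1], gi.shape) + eps) - 1.0
            for i, gi in enumerate(g[:-1])] + [g[-1]]

def reconstruct(cp, eps=1e-6):
    """Invert the contrast pyramid: G_i = (C_i + 1) * EXPAND(G_{i+1})."""
    img = cp[-1]
    for c in reversed(cp[:-1]):
        img = (c + 1.0) * (expand_(img, c.shape) + eps)
    return img

def fuse(img1, img2, weights):
    """Weighted per-level combination of two contrast pyramids;
    weights has levels + 1 entries, one per pyramid band."""
    c1, c2 = contrast_pyramid(img1), contrast_pyramid(img2)
    fused = [w * a + (1 - w) * b for w, a, b in zip(weights, c1, c2)]
    return reconstruct(fused)
```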