WorldWideScience

Sample records for quakesim model image

  1. QuakeSim 2.0

    Science.gov (United States)

    Donnellan, Andrea; Parker, Jay W.; Lyzenga, Gregory A.; Granat, Robert A.; Norton, Charles D.; Rundle, John B.; Pierce, Marlon E.; Fox, Geoffrey C.; McLeod, Dennis; Ludwig, Lisa Grant

    2012-01-01

    QuakeSim 2.0 improves understanding of earthquake processes by providing modeling tools and integrating model applications and various heterogeneous data sources within a Web services environment. QuakeSim is a multisource, synergistic, data-intensive environment for modeling the behavior of earthquake faults individually, and as part of complex interacting systems. Remotely sensed geodetic data products may be explored, compared with faults and landscape features, mined by pattern analysis applications, and integrated with models and pattern analysis applications in a rich Web-based and visualization environment. Integration of heterogeneous data products with pattern informatics tools enables efficient development of models. Federated database components and visualization tools allow rapid exploration of large datasets, while pattern informatics enables identification of subtle, but important, features in large data sets. QuakeSim is valuable for earthquake investigations and modeling in its current state, and also serves as a prototype and nucleus for broader systems under development. The framework provides access to physics-based simulation tools that model the earthquake cycle and related crustal deformation. Spaceborne GPS and Interferometric Synthetic Aperture Radar (InSAR) data provide information on near-term crustal deformation, while paleoseismic geologic data provide longer-term information on earthquake fault processes. These data sources are integrated into QuakeSim's QuakeTables database system, and are accessible by users or various model applications. UAVSAR repeat pass interferometry data products are added to the QuakeTables database, and are available through a browseable map interface or Representational State Transfer (REST) interfaces. Model applications can retrieve data from QuakeTables, or from third-party GPS velocity data services; alternatively, users can manually input parameters into the models.
Pattern analysis of GPS and seismicity data

  2. QuakeSim: Multi-Source Synergistic Data Intensive Computing for Earth Science

    Data.gov (United States)

    National Aeronautics and Space Administration — Update QuakeSim services to integrate and rapidly fuse data from multiple sources to support comprehensive efforts in data mining, analysis, simulation, and...

  3. The QuakeSim Project: Web Services for Managing Geophysical Data and Applications

    Science.gov (United States)

    Pierce, Marlon E.; Fox, Geoffrey C.; Aktas, Mehmet S.; Aydin, Galip; Gadgil, Harshawardhan; Qi, Zhigang; Sayar, Ahmet

    2008-04-01

    We describe our distributed systems research efforts to build the “cyberinfrastructure” components that constitute a geophysical Grid, or more accurately, a Grid of Grids. Service-oriented computing principles are used to build a distributed infrastructure of Web accessible components for accessing data and scientific applications. Our data services fall into two major categories: Archival, database-backed services based around Geographical Information System (GIS) standards from the Open Geospatial Consortium, and streaming services that can be used to filter and route real-time data sources such as Global Positioning System data streams. Execution support services include application execution management services and services for transferring remote files. These data and execution service families are bound together through metadata information and workflow services for service orchestration. Users may access the system through the QuakeSim scientific Web portal, which is built using a portlet component approach.

  4. QuakeSim: a Web Service Environment for Productive Investigations with Earth Surface Sensor Data

    Science.gov (United States)

    Parker, J. W.; Donnellan, A.; Granat, R. A.; Lyzenga, G. A.; Glasscoe, M. T.; McLeod, D.; Al-Ghanmi, R.; Pierce, M.; Fox, G.; Grant Ludwig, L.; Rundle, J. B.

    2011-12-01

    The QuakeSim science gateway environment includes a visually rich portal interface, web service access to data and data processing operations, and the QuakeTables ontology-based database of fault models and sensor data. The integrated tools and services are designed to assist investigators by covering the entire earthquake cycle of strain accumulation and release. The Web interface now includes Drupal-based access to diverse and changing content, with new ability to access data and data processing directly from the public page, as well as the traditional project management areas that require password access. The system is designed to make initial browsing of fault models and deformation data particularly engaging for new users. Popular data and data processing include GPS time series with data mining techniques to find anomalies in time and space, experimental forecasting methods based on catalogue seismicity, faulted deformation models (both half-space and finite element), and model-based inversion of sensor data. The fault models include the CGS and UCERF 2.0 faults of California and are easily augmented with self-consistent fault models from other regions. The QuakeTables deformation data include the comprehensive set of UAVSAR interferograms as well as a growing collection of satellite InSAR data. Fault interaction simulations are also being incorporated in the web environment based on Virtual California. A sample usage scenario is presented which follows an investigation of UAVSAR data from viewing as an overlay in Google Maps, to selection of an area of interest via a polygon tool, to fast extraction of the relevant correlation and phase information from large data files, to a model inversion of fault slip followed by calculation and display of a synthetic model interferogram.

  5. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body.

  6. Image sequence analysis in nuclear medicine: (1) Parametric imaging using statistical modelling

    International Nuclear Information System (INIS)

    Liehn, J.C.; Hannequin, P.; Valeyre, J.

    1989-01-01

    This is a review of parametric imaging methods in nuclear medicine. A Parametric Image is an image in which each pixel value is a function of the value of the same pixel across an image sequence. The Local Model Method fits each pixel's time-activity curve with a model whose parameter values form the Parametric Images. The Global Model Method models the changes between two images and is applied to image comparison. For both methods, the different models, the identification criterion, the optimization methods and the statistical properties of the images are discussed. The analysis of one or more Parametric Images is performed using 1D or 2D histograms. Statistically significant Parametric Images (images of significant variances, amplitudes and differences) are also proposed.
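
    The Local Model Method described above fits every pixel's time-activity curve independently; the fitted parameter values then form the parametric images. A minimal sketch in Python/NumPy, assuming a monoexponential washout model A*exp(-k*t) fitted by log-linear least squares (the model choice and all names are illustrative, not from the paper):

```python
import numpy as np

def parametric_images(sequence, times):
    """Local Model Method sketch: fit A*exp(-k*t) to each pixel's
    time-activity curve by log-linear least squares.
    sequence: (T, H, W) stack of frames; times: (T,) acquisition times.
    Returns amplitude and rate-constant parametric images."""
    T, H, W = sequence.shape
    y = np.log(np.clip(sequence, 1e-9, None)).reshape(T, -1)   # log activities
    X = np.stack([np.ones_like(times), -times], axis=1)        # fits log A and k
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)               # shape (2, H*W)
    amplitude = np.exp(coef[0]).reshape(H, W)
    rate = coef[1].reshape(H, W)
    return amplitude, rate

# synthetic sequence: uniform washout with A = 100, k = 0.5
t = np.linspace(0.1, 5.0, 20)
frames = 100.0 * np.exp(-0.5 * t)[:, None, None] * np.ones((20, 4, 4))
A, k = parametric_images(frames, t)
```

    Each output array is a parametric image in the sense of the abstract: one fitted parameter value per pixel.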

  7. The model of illumination-transillumination for image enhancement of X-ray images

    Energy Technology Data Exchange (ETDEWEB)

    Lyu, Kwang Yeul [Shingu College, Sungnam (Korea, Republic of)]; Rhee, Sang Min [Kangwon National Univ., Chuncheon (Korea, Republic of)]

    2001-06-01

    In digital image processing, the homomorphic filtering approach is derived from an illumination-reflectance model of the image. It can also be used with an illumination-transillumination model of X-ray film. Several X-ray images were enhanced with histogram equalization and with a homomorphic filter based on the illumination-transillumination model. The homomorphic filter confirmed the theoretical claims of image density range compression and balanced contrast enhancement, and was also found to be a valuable tool for processing analog X-ray images into digital images.
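
    The homomorphic filtering pipeline referred to above can be sketched as follows: take the log of the image so the multiplicative illumination (or transillumination) term becomes additive, attenuate low frequencies and boost high ones in the Fourier domain, then exponentiate. A hedged Python/NumPy sketch; the Gaussian high-emphasis transfer function and the cutoff/gain defaults are common textbook choices, not parameters from the paper:

```python
import numpy as np

def homomorphic_filter(img, d0=30.0, gamma_low=0.5, gamma_high=1.5):
    """Homomorphic filtering sketch: model the image as the product of a
    slowly varying illumination/transillumination term and a detail term,
    take logs so the product becomes a sum, then apply a Gaussian
    high-emphasis filter in the frequency domain."""
    rows, cols = img.shape
    log_img = np.log1p(img.astype(np.float64))
    F = np.fft.fftshift(np.fft.fft2(log_img))
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2           # squared distance from DC
    H = gamma_low + (gamma_high - gamma_low) * (1.0 - np.exp(-D2 / (2.0 * d0 ** 2)))
    filtered = np.fft.ifft2(np.fft.ifftshift(H * F)).real
    return np.expm1(filtered)                        # back from the log domain

xray = np.random.RandomState(0).rand(64, 64) * 255.0   # stand-in for film data
enhanced = homomorphic_filter(xray)
```

    With gamma_low < 1 the slowly varying density range is compressed while gamma_high > 1 boosts detail, which is the behavior the abstract reports.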

  8. Computer model for harmonic ultrasound imaging.

    Science.gov (United States)

    Li, Y; Zagzebski, J A

    2000-01-01

    Harmonic ultrasound imaging has received great attention from ultrasound scanner manufacturers and researchers. In this paper, we present a computer model that can generate realistic harmonic images. In this model, the incident ultrasound is modeled after the "KZK" equation, and the echo signal is modeled using linear propagation theory because the echo signal is much weaker than the incident pulse. Both time domain and frequency domain numerical solutions to the "KZK" equation were studied. Realistic harmonic images of spherical lesion phantoms were generated for scans by a circular transducer. This model can be a very useful tool for studying the harmonic buildup and dissipation processes in a nonlinear medium, and it can be used to investigate a wide variety of topics related to B-mode harmonic imaging.

  9. Properties of Brownian Image Models in Scale-Space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup

    2003-01-01

    In this paper it is argued that the Brownian image model is the least committed, scale invariant, statistical image model which describes the second order statistics of natural images. Various properties of three different types of Gaussian image models (white noise, Brownian and fractional Brownian images) will be discussed in relation to linear scale-space theory, and it will be shown empirically that the second order statistics of natural images mapped into jet space may, within some scale interval, be modeled by the Brownian image model. This is consistent with the 1/f^2 power spectrum law that apparently governs natural images. Furthermore, the distribution of Brownian images mapped into jet space is Gaussian and an analytical expression can be derived for the covariance matrix of Brownian images in jet space. This matrix is also a good approximation of the covariance matrix…
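
    One way to make the 1/f^2 claim concrete is to sample from a Brownian image model by spectral synthesis: shape white noise so its power spectrum falls off as 1/f^2. A sketch under that assumption (random-phase synthesis; not the author's code):

```python
import numpy as np

def brownian_image(n, rng):
    """Sample an n-by-n image whose power spectrum follows 1/f^2
    via random-phase spectral synthesis."""
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                      # avoid division by zero at DC
    amplitude = 1.0 / radius                # |F| ~ 1/f  =>  power ~ 1/f^2
    phase = np.exp(2j * np.pi * rng.rand(n, n))
    spectrum = amplitude * phase
    spectrum[0, 0] = 0.0                    # zero-mean field
    return np.fft.ifft2(spectrum).real

img = brownian_image(64, np.random.RandomState(1))
```

    Samples drawn this way exhibit the scale-invariant second order statistics that the paper argues match natural images.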

  10. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state-of-the-art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under mechanical stimulus up to optimizing the performance of sports equipment, through patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image-based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease. The book will be of interest to researchers, Ph.D. students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, and experimental analysis.

  11. Image-optimized Coronal Magnetic Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M., E-mail: shaela.i.jones-mecholsky@nasa.gov [NASA Goddard Space Flight Center, Code 670, Greenbelt, MD 20771 (United States)]

    2017-08-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.

  13. A generalized logarithmic image processing model based on the gigavision sensor model.

    Science.gov (United States)

    Deng, Guang

    2012-03-01

    The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model being its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing with results using two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.
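
    For reference, the classical LIP operations that the paper generalizes can be written down directly. With gray-tone range M, LIP addition is a (+) b = a + b - ab/M and LIP scalar multiplication is c (x) a = M - M(1 - a/M)^c. A small Python sketch of these classical operations (the GLIP extensions of the paper are not reproduced here):

```python
M = 256.0  # gray-tone range of the classical LIP model

def lip_add(a, b):
    """Classical LIP addition: a (+) b = a + b - a*b/M."""
    return a + b - a * b / M

def lip_scalar_mul(c, a):
    """Classical LIP scalar multiplication: c (x) a = M - M*(1 - a/M)**c."""
    return M - M * (1.0 - a / M) ** c
```

    Note that 2 (x) a equals a (+) a, the consistency property a LIP scalar multiplication must satisfy; tone-mapping applications like the one in the paper exploit this operation to rescale gray tones while keeping results inside the bounded range.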

  14. Models for Patch-Based Image Restoration

    Directory of Open Access Journals (Sweden)

    Petrovic Nemanja

    2009-01-01

    We present a supervised learning approach for object-category specific restoration, recognition, and segmentation of images which are blurred using an unknown kernel. The novelty of this work is a multilayer graphical model which unifies the low-level vision task of restoration and the high-level vision task of recognition in a cooperative framework. The graphical model is an interconnected two-layer Markov random field. The restoration layer accounts for the compatibility between sharp and blurred images and models the association between adjacent patches in the sharp image. The recognition layer encodes the entity class and its location in the underlying scene. The potentials are represented using nonparametric kernel densities and are learnt from training data. Inference is performed using nonparametric belief propagation. Experiments demonstrate the effectiveness of our model for the restoration and recognition of blurred license plates as well as face images.

  15. Models for Patch-Based Image Restoration

    Directory of Open Access Journals (Sweden)

    Mithun Das Gupta

    2009-01-01

    We present a supervised learning approach for object-category specific restoration, recognition, and segmentation of images which are blurred using an unknown kernel. The novelty of this work is a multilayer graphical model which unifies the low-level vision task of restoration and the high-level vision task of recognition in a cooperative framework. The graphical model is an interconnected two-layer Markov random field. The restoration layer accounts for the compatibility between sharp and blurred images and models the association between adjacent patches in the sharp image. The recognition layer encodes the entity class and its location in the underlying scene. The potentials are represented using nonparametric kernel densities and are learnt from training data. Inference is performed using nonparametric belief propagation. Experiments demonstrate the effectiveness of our model for the restoration and recognition of blurred license plates as well as face images.

  16. A Learning State-Space Model for Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lee Greg C

    2007-01-01

    This paper proposes an approach based on a state-space model for learning the user concepts in image retrieval. We first design a scheme of region-based image representation based on concept units, which are integrated with different types of feature spaces and with different region scales of image segmentation. The design of the concept units aims at describing similar characteristics at a certain perspective among relevant images. We present the details of our proposed approach based on a state-space model for interactive image retrieval, including likelihood and transition models, and we also describe some experiments that show the efficacy of our proposed model. This work demonstrates the feasibility of using a state-space model to estimate the user intuition in image retrieval.

  17. Correlation of breast image alignment using biomechanical modelling

    Science.gov (United States)

    Lee, Angela; Rajagopal, Vijay; Bier, Peter; Nielsen, Poul M. F.; Nash, Martyn P.

    2009-02-01

    Breast cancer is one of the most common causes of cancer death among women around the world. Researchers have found that a combination of imaging modalities (such as x-ray mammography, magnetic resonance, and ultrasound) leads to more effective diagnosis and management of breast cancers because each imaging modality displays different information about the breast tissues. In order to aid clinicians in interpreting the breast images from different modalities, we have developed a computational framework for generating individual-specific, 3D, finite element (FE) models of the breast. Medical images are embedded into this model, which is subsequently used to simulate the large deformations that the breasts undergo during different imaging procedures, thus warping the medical images to the deformed views of the breast in the different modalities. In this way, medical images of the breast taken in different geometric configurations (compression, gravity, etc.) can be aligned according to physically feasible transformations. In order to analyse the accuracy of the biomechanical model predictions, squared normalised cross correlation (NCC2) was used to provide both local and global comparisons of the model-warped images with clinical images of the breast subject to different gravity-loaded states. The local comparison results were helpful in indicating the areas for improvement in the biomechanical model. To improve the modelling accuracy, we will need to investigate incorporating breast tissue heterogeneity into the model and altering the boundary conditions for the breast model. A biomechanical image registration tool of this kind will help radiologists to provide more reliable diagnosis and localisation of breast cancer.
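
    The global similarity measure used above, squared normalised cross correlation (NCC2), is straightforward to compute; a Python/NumPy sketch (a local comparison of the kind the abstract mentions would apply the same formula over sliding windows):

```python
import numpy as np

def ncc2(a, b):
    """Squared normalised cross correlation between two same-size images;
    returns a global similarity score in [0, 1]. Assumes neither image
    has zero variance."""
    a = a.astype(np.float64).ravel() - a.mean()
    b = b.astype(np.float64).ravel() - b.mean()
    return float((a @ b) ** 2 / ((a ** 2).sum() * (b ** 2).sum()))
```

    Because the images are mean-centred and normalised, NCC2 is invariant to affine intensity changes: an image compared with a brightness- and contrast-shifted copy of itself scores exactly 1.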

  18. New variational image decomposition model for simultaneously denoising and segmenting optical coherence tomography images

    International Nuclear Information System (INIS)

    Duan, Jinming; Bai, Li; Tench, Christopher; Gottlob, Irene; Proudlock, Frank

    2015-01-01

    Optical coherence tomography (OCT) imaging plays an important role in clinical diagnosis and monitoring of diseases of the human retina. Automated analysis of optical coherence tomography images is a challenging task as the images are inherently noisy. In this paper, a novel variational image decomposition model is proposed to decompose an OCT image into three components: the first component is the original image but with the noise completely removed; the second contains the set of edges representing the retinal layer boundaries present in the image; and the third is an image of noise, or in image decomposition terms, the texture, or oscillatory patterns of the original image. In addition, a fast Fourier transform based split Bregman algorithm is developed to improve the computational efficiency of solving the proposed model. Extensive experiments are conducted on both synthesised and real OCT images to demonstrate that the proposed model outperforms the state-of-the-art speckle noise reduction methods and leads to accurate retinal layer segmentation.

  19. Validation of Diagnostic Imaging Based on Repeat Examinations. An Image Interpretation Model

    International Nuclear Information System (INIS)

    Isberg, B.; Jorulf, H.; Thorstensen, Oe.

    2004-01-01

    Purpose: To develop an interpretation model, based on repeatedly acquired images, aimed at improving assessments of technical efficacy and diagnostic accuracy in the detection of small lesions. Material and Methods: A theoretical model is proposed. The studied population consists of subjects that develop focal lesions which increase in size in organs of interest during the study period. The imaging modality produces images that can be re-interpreted with high precision, e.g. conventional radiography, computed tomography, and magnetic resonance imaging. At least four repeat examinations are carried out. Results: The interpretation is performed in four or five steps: 1. Independent readers interpret the examinations chronologically without access to previous or subsequent films. 2. Lesions found on images at the last examination are included in the analysis, with interpretation in consensus. 3. By concurrent back-reading in consensus, the lesions are identified on previous images until they are so small that even in retrospect they are undetectable. The earliest examination at which included lesions appear is recorded, and the lesions are verified by their growth (imaging reference standard). Lesion size and other characteristics may be recorded. 4. Records made at step 1 are corrected to those of steps 2 and 3. False positives are recorded. 5. (Optional) Lesion type is confirmed by another diagnostic test. Conclusion: Applied to subjects with progressive disease, the proposed image interpretation model may improve assessments of technical efficacy and diagnostic accuracy in the detection of small focal lesions. The model may provide an accurate imaging reference standard as well as repeated detection rates and false-positive rates for tested imaging modalities. However, potential review bias necessitates a strict protocol.

  20. Parametric uncertainty in optical image modeling

    Science.gov (United States)

    Potzick, James; Marx, Egon; Davidson, Mark

    2006-10-01

    Optical photomask feature metrology and wafer exposure process simulation both rely on optical image modeling for accurate results. While it is fair to question the accuracies of the available models, model results also depend on several input parameters describing the object and imaging system. Errors in these parameter values can lead to significant errors in the modeled image. These parameters include wavelength, illumination and objective NAs, magnification, focus, etc. for the optical system, and topography, complex index of refraction n and k, etc. for the object. In this paper each input parameter is varied over a range about its nominal value and the corresponding images are simulated. Second order parameter interactions are not explored. Using the scenario of the optical measurement of photomask features, these parametric sensitivities are quantified by calculating the apparent change of the measured linewidth for a small change in the relevant parameter. Then, using reasonable values for the estimated uncertainties of these parameters, the parametric linewidth uncertainties can be calculated and combined to give a lower limit to the linewidth measurement uncertainty for those parameter uncertainties.
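
    Because second order parameter interactions are not explored, the parametric linewidth uncertainties described above can be combined in quadrature: u_L = sqrt(sum_i (S_i * u_i)^2), where S_i is the finite-difference sensitivity of the measured linewidth to parameter i and u_i that parameter's estimated standard uncertainty. A sketch with purely illustrative numbers (not values from the paper):

```python
import math

def combined_linewidth_uncertainty(sensitivities, uncertainties):
    """Root-sum-square combination of independent parametric contributions:
    u_L = sqrt(sum_i (S_i * u_i)**2). S_i is the modeled change in measured
    linewidth per unit change in parameter i; u_i is that parameter's
    estimated standard uncertainty."""
    return math.sqrt(sum((s * u) ** 2 for s, u in zip(sensitivities, uncertainties)))

# illustrative sensitivities to wavelength, NA and focus, with uncertainties
u_L = combined_linewidth_uncertainty([0.8, 12.0, 0.05], [0.1, 0.005, 20.0])
```

    The result is the lower limit the abstract refers to: no measurement can be more certain than the combined effect of its input-parameter uncertainties.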

  1. Non-rigid image registration using bone growth model

    DEFF Research Database (Denmark)

    Bro-Nielsen, Morten; Gramkow, Claus; Kreiborg, Sven

    1997-01-01

    Non-rigid registration has traditionally used physical models like elasticity and fluids. These models are very seldom valid models of the difference between the registered images. This paper presents a non-rigid registration algorithm which uses a model of bone growth as a model of the change between time sequence images of the human mandible. By being able to register the images, this paper at the same time contributes to the validation of the growth model, which is based on the currently available medical theories and knowledge.

  2. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes the model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.

  3. EVALUATION OF RATIONAL FUNCTION MODEL FOR GEOMETRIC MODELING OF CHANG'E-1 CCD IMAGES

    Directory of Open Access Journals (Sweden)

    Y. Liu

    2012-08-01

    Rational Function Model (RFM) is a generic geometric model that has been widely used in geometric processing of high-resolution earth-observation satellite images, due to its generality and excellent capability of fitting complex rigorous sensor models. In this paper, the feasibility and precision of RFM for geometric modeling of China's Chang'E-1 (CE-1) lunar orbiter images is presented. The RFM parameters of forward-, nadir- and backward-looking CE-1 images are generated through a least squares solution using virtual control points derived from the rigorous sensor model. The precision of the RFM is evaluated by comparing with the rigorous sensor model in both image space and object space. Experimental results using nine images from three orbits show that RFM can precisely fit the rigorous sensor model of CE-1 CCD images, with an RMS residual error at the 1/100 pixel level in image space and less than 5 meters in object space. This indicates that it is feasible to use RFM to describe the imaging geometry of CE-1 CCD images and spacecraft position and orientation. RFM will enable planetary data centers to have an option to supply RFM parameters of orbital images while keeping the original orbit trajectory data confidential.
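
    The structure of an RFM (image coordinates expressed as ratios of polynomials in normalized ground coordinates) can be sketched compactly. Real RFMs use 20-term cubic polynomials per numerator and denominator plus offset/scale normalization; the reduced 4-term basis below only illustrates the structure and is not the parameterization used for CE-1:

```python
import numpy as np

def rfm_image_coords(lat, lon, h, num_s, den_s, num_l, den_l):
    """Reduced RFM sketch: image sample/line as ratios of polynomials in
    normalized ground coordinates (latitude, longitude, height), using a
    4-term basis (1, lat, lon, h) per numerator and denominator."""
    terms = np.array([1.0, lat, lon, h])
    sample = terms @ num_s / (terms @ den_s)
    line = terms @ num_l / (terms @ den_l)
    return sample, line
```

    Fitting the coefficients to virtual control points generated from a rigorous sensor model, as in the paper, is then an ordinary least squares problem.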

  4. The QuakeSim Project: Numerical Simulations for Active Tectonic Processes

    Science.gov (United States)

    Donnellan, Andrea; Parker, Jay; Lyzenga, Greg; Granat, Robert; Fox, Geoffrey; Pierce, Marlon; Rundle, John; McLeod, Dennis; Grant, Lisa; Tullis, Terry

    2004-01-01

    In order to develop a solid earth science framework for understanding and studying active tectonic and earthquake processes, this task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling, data manipulation, and pattern recognition technologies. We develop clearly defined, accessible data formats and code protocols as inputs to the simulations. These are adapted to high-performance computers because the solid earth system is extremely complex and nonlinear, resulting in computationally intensive problems with millions of unknowns. With these tools it will be possible to construct the more complex models and simulations necessary to develop hazard assessment systems critical for reducing future losses from major earthquakes.

  5. Interpretation of medical images by model guided analysis

    International Nuclear Information System (INIS)

    Karssemeijer, N.

    1989-01-01

    Progress in the development of digital pictorial information systems stimulates a growing interest in the use of image analysis techniques in medicine. Especially when precise quantitative information is required, the use of fast and reproducible computer analysis may be more appropriate than relying on visual judgement alone. Such quantitative information can be valuable, for instance, in diagnostics or in irradiation therapy planning. As medical images are mostly recorded in a prescribed way, human anatomy guarantees a common image structure for each particular type of exam. In this thesis it is investigated how to make use of this a priori knowledge to guide image analysis. For that purpose, models are developed which are suited to capture common image structure. The first part of this study is devoted to an analysis of nuclear medicine images of myocardial perfusion. In ch. 2 a model of these images is designed in order to represent characteristic image properties. It is shown that for these relatively simple images a compact symbolic description can be achieved without significant loss of diagnostically important image properties. Possibilities for automatic interpretation of more complex images are investigated in the following chapters. The central topic is segmentation of organs. Two methods are proposed and tested on a set of abdominal X-ray CT scans. Ch. 3 describes a serial approach based on a semantic network and the use of search areas. Relational constraints are used to guide the image processing and to classify detected image segments. In chs. 4 and 5 a more general parallel approach is utilized, based on a Markov random field image model. A stochastic model used to represent prior knowledge about the spatial arrangement of organs is implemented as an external field.

  6. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    Full Text Available This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image. To perform this operation, corresponding points between the image and the point cloud must be found. Before the search for corresponding points, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image. Spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
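    The paper matches a rendered quasi-image of the point cloud against the real photograph with SIFT. As a simpler stand-in for that correspondence step (SIFT itself is beyond a short sketch), the code below locates a patch by exhaustive normalized cross-correlation; the array sizes and random test image are invented for illustration.

```python
import numpy as np

def ncc_match(template, search):
    """Find `template` inside `search` by normalized cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_xy = -2.0, (0, 0)
    for y in range(search.shape[0] - th + 1):
        for x in range(search.shape[1] - tw + 1):
            w = search[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((t ** 2).sum() * (wz ** 2).sum())
            if denom == 0:
                continue
            score = (t * wz).sum() / denom
            if score > best:
                best, best_xy = score, (y, x)
    return best_xy, best

rng = np.random.default_rng(1)
quasi = rng.random((40, 40))   # stand-in for the rendered quasi-image
patch = quasi[10:18, 22:30]    # a patch around a candidate point
(y, x), score = ncc_match(patch, quasi)
```

    In practice SIFT is preferred precisely because, unlike raw correlation, it tolerates the scale and viewpoint differences between a quasi-image and a photograph.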

  7. Joint model of motion and anatomy for PET image reconstruction

    International Nuclear Information System (INIS)

    Qiao Feng; Pan Tinsu; Clark, John W. Jr.; Mawlawi, Osama

    2007-01-01

    Anatomy-based positron emission tomography (PET) image enhancement techniques have been shown to have the potential for improving PET image quality. However, these techniques assume an accurate alignment between the anatomical and the functional images, which is not always valid when imaging the chest due to respiratory motion. In this article, we present a joint model of both motion and anatomical information by integrating a motion-incorporated PET imaging system model with an anatomy-based maximum a posteriori image reconstruction algorithm. The mismatched anatomical information due to motion can thus be effectively utilized through this joint model. A computer simulation and a phantom study were conducted to assess the efficacy of the joint model, whereby motion and anatomical information were either modeled separately or combined. The reconstructed images in each case were compared to corresponding reference images obtained using a quadratic image prior based maximum a posteriori reconstruction algorithm for quantitative accuracy. Results of these studies indicated that while modeling anatomical information or motion alone improved the PET image quantitation accuracy, a larger improvement in accuracy was achieved when using the joint model. In the computer simulation study and using similar image noise levels, the improvement in quantitation accuracy compared to the reference images was 5.3% and 19.8% when using anatomical or motion information alone, respectively, and 35.5% when using the joint model. In the phantom study, these results were 5.6%, 5.8%, and 19.8%, respectively. These results suggest that motion compensation is important in order to effectively utilize anatomical information in chest imaging using PET. The joint motion-anatomy model presented in this paper provides a promising solution to this problem.

  8. Model-based satellite image fusion

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Sveinsson, J. R.; Nielsen, Allan Aasbjerg

    2008-01-01

    A method is proposed for pixel-level satellite image fusion derived directly from a model of the imaging sensor. By design, the proposed method is spectrally consistent. It is argued that the proposed method needs regularization, as is the case for any method for this problem. A framework for pixel neighborhood regularization is presented. This framework enables the formulation of the regularization in a way that corresponds well with our prior assumptions of the image data. The proposed method is validated and compared with other approaches on several data sets. Lastly, the intensity-hue-saturation method is revisited in order to gain additional insight into what implications the spectral consistency has for an image fusion method.
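    The fusion model itself is sensor-specific, but the notion of spectral consistency can be illustrated with a classical Brovey-style transform: each fused band keeps the multispectral band ratios, while the band sum reproduces the panchromatic image. This is a generic illustration, not the paper's method; the band count and random data are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
ms = rng.random((3, 16, 16)) + 0.1   # low-res multispectral bands (upsampled)
pan = rng.random((16, 16)) + 0.1     # high-res panchromatic band

# Brovey-style fusion: scale each band so the band sum matches the pan image.
fused = ms * pan / ms.sum(axis=0)

# Spectral consistency: band ratios are unchanged and the band sum equals pan.
ratio_before = ms[0] / ms[1]
ratio_after = fused[0] / fused[1]
```

    Methods that are not spectrally consistent (such as naive intensity-hue-saturation substitution) distort exactly these band ratios, which is the color-shift artifact the paper's sensor-model formulation avoids by construction.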

  9. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    Science.gov (United States)

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a targeted height for each layer. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal-image space instead of in object space, which makes it possible to use any popular image editing tool for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.
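    The core idea of strongly compressing depth while keeping fine detail can be shown in one dimension: attenuate large height gradients logarithmically, then re-integrate. This is a generic gradient-compression sketch in the spirit of tone-mapping operators, not the paper's sparse linear system; `alpha` and the sample profile are arbitrary.

```python
import numpy as np

def compress_profile(height, alpha=5.0):
    """Attenuate large gradients of a 1-D height profile and re-integrate."""
    g = np.diff(height)
    # Logarithmic compression: big steps shrink a lot, fine detail survives.
    g_c = np.log1p(alpha * np.abs(g)) / alpha * np.sign(g)
    return np.concatenate(([height[0]], height[0] + np.cumsum(g_c)))

# A tall step carrying small surface detail on top.
profile = np.array([0.0, 0.0, 5.0, 5.2, 5.1, 0.0, 0.0])
relief = compress_profile(profile)
```

    The overall height range collapses while every local rise and fall of the original profile keeps its direction, which is the qualitative behavior a bas-relief needs.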

  10. POLARIZATION IMAGING AND SCATTERING MODEL OF CANCEROUS LIVER TISSUES

    Directory of Open Access Journals (Sweden)

    DONGZHI LI

    2013-07-01

    Full Text Available We apply different polarization imaging techniques to cancerous liver tissues, and compare the relative contrasts for difference polarization imaging (DPI), degree of polarization imaging (DOPI), and rotating linear polarization imaging (RLPI). Experimental results show that a number of polarization imaging parameters are capable of differentiating cancerous cells in isotropic liver tissues. To analyze the contrast mechanism of the cancer-sensitive polarization imaging parameters, we propose a scattering model containing two types of spherical scatterers and carry out Monte Carlo simulations based on this bi-component model. Both the experimental and Monte Carlo simulated results show that the RLPI technique can provide a good imaging contrast of cancerous tissues. The bi-component scattering model provides a useful tool to analyze the contrast mechanism of polarization imaging of cancerous tissues.

  11. Nonparametric Mixture Models for Supervised Image Parcellation.

    Science.gov (United States)

    Sabuncu, Mert R; Yeo, B T Thomas; Van Leemput, Koen; Fischl, Bruce; Golland, Polina

    2009-09-01

    We present a nonparametric, probabilistic mixture model for the supervised parcellation of images. The proposed model yields segmentation algorithms conceptually similar to the recently developed label fusion methods, which register a new image with each training image separately. Segmentation is achieved via the fusion of transferred manual labels. We show that in our framework various settings of a model parameter yield algorithms that use image intensity information differently in determining the weight of a training subject during fusion. One particular setting computes a single, global weight per training subject, whereas another setting uses locally varying weights when fusing the training data. The proposed nonparametric parcellation approach capitalizes on recently developed fast and robust pairwise image alignment tools. The use of multiple registrations allows the algorithm to be robust to occasional registration failures. We report experiments on 39 volumetric brain MRI scans with expert manual labels for the white matter, cerebral cortex, ventricles and subcortical structures. The results demonstrate that the proposed nonparametric segmentation framework yields significantly better segmentation than state-of-the-art algorithms.
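    The contrast between the two weighting schemes can be sketched with a toy binary label-fusion routine: each pre-registered training image votes with an intensity-similarity weight, computed either per pixel (local) or once per subject (global). The Gaussian weight, `sigma`, and the synthetic data are illustrative assumptions, not the paper's model.

```python
import numpy as np

def fuse_labels(target, train_imgs, train_labels, sigma=0.1, local=True):
    """Intensity-weighted fusion of binary labels from registered atlases.

    Weight per training subject: exp(-(I_target - I_train)^2 / sigma^2),
    evaluated per pixel (local=True) or averaged once per subject.
    """
    votes = np.zeros_like(target, dtype=float)
    total = np.zeros_like(target, dtype=float)
    for img, lab in zip(train_imgs, train_labels):
        d2 = (target - img) ** 2
        w = np.exp(-(d2 if local else d2.mean()) / sigma ** 2)
        votes += w * lab
        total += w
    return (votes / total) > 0.5

rng = np.random.default_rng(3)
truth = np.zeros((20, 20))
truth[5:15, 5:15] = 1
train_labels = [truth.copy() for _ in range(4)]
train_imgs = [truth + rng.normal(0, 0.05, truth.shape) for _ in range(4)]
target = truth + rng.normal(0, 0.05, truth.shape)
seg = fuse_labels(target, train_imgs, train_labels)
```

    With locally varying weights, a training subject that matches the target well in one region but poorly in another contributes accordingly, which is the behavior the paper's local setting captures.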

  12. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images

    Science.gov (United States)

    McClelland, Jamie R.; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O' Connell, Dylan; Low, Daniel A.; Kaza, Evangelia; Collins, David J.; Leach, Martin O.; Hawkes, David J.

    2017-06-01

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.

  13. Modeling human faces with multi-image photogrammetry

    Science.gov (United States)

    D'Apuzzo, Nicola

    2002-03-01

    Modeling and measurement of the human face have been increasing in importance for various purposes. Laser scanning, coded-light range digitizers, image-based approaches, and digital stereo photogrammetry are the methods currently employed in medical applications, computer animation, video surveillance, teleconferencing, and virtual reality to produce three-dimensional computer models of the human face. The requirements differ depending on the application; ours are primarily high measurement accuracy and automation of the process. The method presented in this paper is based on multi-image photogrammetry. The equipment, the method, and the results achieved with this technique are depicted here. The process is composed of five steps: acquisition of multi-images, calibration of the system, establishment of corresponding points in the images, computation of their 3D coordinates, and generation of a surface model. The images captured by five CCD cameras arranged in front of the subject are digitized by a frame grabber. The complete system is calibrated using a reference object with coded target points, which can be measured fully automatically. To facilitate the establishment of correspondences in the images, texture in the form of random patterns can be projected from two directions onto the face. The multi-image matching process, based on a geometrically constrained least squares matching algorithm, produces a dense set of corresponding points in the five images. Neighborhood filters are then applied to the matching results to remove errors. After filtering the data, the three-dimensional coordinates of the matched points are computed by forward intersection using the results of the calibration process; the achieved mean accuracy is about 0.2 mm in the sagittal direction and about 0.1 mm in the lateral direction. The last step of data processing is the generation of a surface model from the point cloud and the application of smoothing filters.
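    The forward-intersection step reduces to finding the 3D point closest, in the least-squares sense, to the bundle of rays cast from the calibrated cameras. A minimal sketch with two cameras and exact synthetic rays (the camera centres and test point are invented):

```python
import numpy as np

def triangulate(centers, directions):
    """Least-squares intersection of 3-D rays (forward intersection).

    Solves sum_i (I - d_i d_i^T)(X - c_i) = 0 for the point X closest to
    all rays, where c_i are camera centres and d_i unit ray directions.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        P = np.eye(3) - np.outer(d, d)   # projector onto the ray's normal plane
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

point = np.array([0.1, 0.2, 1.5])
centers = [np.array([0.0, 0.0, 0.0]), np.array([0.3, 0.0, 0.0])]
dirs = [(point - c) / np.linalg.norm(point - c) for c in centers]
X = triangulate(centers, dirs)
```

    With five cameras, as in the described setup, the same normal equations simply accumulate five projector terms, and redundancy averages out matching noise.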

  14. Image-Based 3D Face Modeling System

    Directory of Open Access Journals (Sweden)

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture, and is mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a collection of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2∼3 minutes.

  15. pyBSM: A Python package for modeling imaging systems

    Science.gov (United States)

    LeMaster, Daniel A.; Eismann, Michael T.

    2017-05-01

    There are components that are common to all electro-optical and infrared imaging system performance models. The purpose of the Python Based Sensor Model (pyBSM) is to provide open source access to these functions for other researchers to build upon. Specifically, pyBSM implements much of the capability found in the ERIM Image Based Sensor Model (IBSM) V2.0 along with some improvements. The paper also includes two use-case examples. First, performance of an airborne imaging system is modeled using the General Image Quality Equation (GIQE). The results are then decomposed into factors affecting noise and resolution. Second, pyBSM is paired with OpenCV to evaluate performance of an algorithm used to detect objects in an image.
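    For readers unfamiliar with the GIQE, version 4 predicts an NIIRS rating from ground sample distance, relative edge response, edge overshoot, noise gain, and SNR. The sketch below uses the commonly published GIQE-4 coefficients; treat the constants as quoted from the public literature rather than from pyBSM itself, and verify against the package before relying on them.

```python
import math

def giqe4(gsd_in, rer, h, g, snr):
    """General Image Quality Equation v4 (predicted NIIRS).

    gsd_in: geometric-mean ground sample distance in inches
    rer:    geometric-mean relative edge response
    h:      geometric-mean edge overshoot
    g:      noise gain, snr: signal-to-noise ratio
    """
    # Published GIQE-4 coefficient pairs switch on RER.
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * g / snr)

niirs = giqe4(gsd_in=10.0, rer=0.95, h=1.0, g=1.0, snr=50.0)
```

    As expected, halving the GSD (finer sampling) raises the predicted NIIRS, which is the kind of resolution/noise decomposition the paper performs.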

  16. Single image interpolation via adaptive nonlocal sparsity-based modeling.

    Science.gov (United States)

    Romano, Yaniv; Protter, Matan; Elad, Michael

    2014-07-01

    Single image interpolation is a central and extensively studied problem in image processing. A common approach toward the treatment of this problem in recent years is to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful in filling in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, often leading to improved results. In this paper, we propose a novel image interpolation method which combines these two forces: nonlocal self-similarity and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms, and demonstrated to achieve state-of-the-art results.
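    The self-similarity force on its own can be illustrated in one dimension: a missing sample is estimated by letting every other position vote, weighted by how well its local context matches the context around the gap. This is a deliberately crude nonlocal-means-style sketch, not the paper's combined sparse/nonlocal algorithm; the periodic test signal and all parameters are invented.

```python
import math
import numpy as np

def nonlocal_fill_1d(signal, missing, ctx=2, h=0.05):
    """Estimate one missing sample from nonlocally similar contexts."""
    n = len(signal)

    def context(i):
        # ctx samples on each side, excluding the centre; wrap at the ends.
        idx = [j % n for j in range(i - ctx, i + ctx + 1) if j != i]
        return signal[idx]

    target = context(missing)
    num = den = 0.0
    for i in range(n):
        if i == missing:
            continue
        d2 = float(((context(i) - target) ** 2).mean())
        w = math.exp(-d2 / h ** 2)
        num += w * signal[i]
        den += w
    return num / den

t = np.arange(64)
sig = np.sin(2 * np.pi * t / 16)   # strongly self-similar (periodic) signal
true_val = sig[20]
sig2 = sig.copy()
sig2[20] = 0.0                     # knock out one sample
est = nonlocal_fill_1d(sig2, 20)
```

    Because the signal repeats, positions one period away have near-identical contexts and dominate the vote, recovering the missing value.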

  17. Cardiovascular Imaging: What Have We Learned From Animal Models?

    Directory of Open Access Journals (Sweden)

    Arnoldo eSantos

    2015-10-01

    Full Text Available Cardiovascular imaging has become an indispensable tool for patient diagnosis and follow-up. The wide clinical application of imaging is probably due to the possibility of a detailed, high-quality description and quantification of cardiovascular structure and function. Phenomena that involve complex physiological mechanisms and biochemical pathways, such as inflammation and ischemia, can also be visualized in a nondestructive way. The widespread use and evolution of imaging would not have been possible without animal studies. Animal models have allowed, for instance: (i) the technical development of different imaging tools; (ii) testing of hypotheses generated from human studies; and (iii) assessment of the translational relevance of in vitro and ex vivo results. In this review, we critically describe the contribution of animal models to the use of biomedical imaging in cardiovascular medicine. We discuss the characteristics of the models most frequently used in imaging studies. We cover the major findings of animal studies focused on the cardiovascular use of the imaging techniques repeatedly used in clinical practice and experimental studies. We also describe the physiological findings and learning processes for imaging applications coming from models of the most common cardiovascular diseases. In these diseases, imaging research using animals has allowed the study of aspects such as: ventricular size, shape, global function and wall thickening; local myocardial function; myocardial perfusion; metabolism and energetic assessment; infarct quantification; vascular lesion characterization; myocardial fiber structure; and myocardial calcium uptake. Finally, we discuss the limitations and future of imaging research with animal models.

  18. Image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, i.e. an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The human brain has been found to emulate similar graph/network models. Symbols, predicates, and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into a model-based knowledge representation. Based on such principles, an image/video understanding system can convert images into knowledge models, and resolve uncertainty and ambiguity. This allows creating intelligent computer vision systems for design and manufacturing.

  19. Photometric Modeling of Simulated Surface-Resolved Bennu Images

    Science.gov (United States)

    Golish, D.; DellaGiustina, D. N.; Clark, B.; Li, J. Y.; Zou, X. D.; Bennett, C. A.; Lauretta, D. S.

    2017-12-01

    The Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is a NASA mission to study and return a sample of asteroid (101955) Bennu. Imaging data from the mission will be used to develop empirical surface-resolved photometric models of Bennu at a series of wavelengths. These models will be used to photometrically correct panchromatic and color base maps of Bennu, compensating for variations due to shadows and photometric angle differences, thereby minimizing seams in mosaicked images. Well-corrected mosaics are critical to the generation of a global hazard map and a global 1064-nm reflectance map which predicts LIDAR response. These data products directly feed into the selection of a site from which to safely acquire a sample. We also require photometric correction for the creation of color ratio maps of Bennu. Color ratios maps provide insight into the composition and geological history of the surface and allow for comparison to other Solar System small bodies. In advance of OSIRIS-REx's arrival at Bennu, we use simulated images to judge the efficacy of both the photometric modeling software and the mission observation plan. Our simulation software is based on USGS's Integrated Software for Imagers and Spectrometers (ISIS) and uses a synthetic shape model, a camera model, and an empirical photometric model to generate simulated images. This approach gives us the flexibility to create simulated images of Bennu based on analog surfaces from other small Solar System bodies and to test our modeling software under those conditions. Our photometric modeling software fits image data to several conventional empirical photometric models and produces the best fit model parameters. The process is largely automated, which is crucial to the efficient production of data products during proximity operations. The software also produces several metrics on the quality of the observations themselves, such as surface coverage and the
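    One of the conventional empirical photometric models such software typically fits is the Minnaert model, which is linear in log space and therefore easy to fit by regression. A self-contained sketch with synthetic noise-free data follows; the choice of the Minnaert model here is an illustrative assumption, since the abstract does not name the specific models used.

```python
import numpy as np

def fit_minnaert(i_obs, mu0, mu):
    """Fit the Minnaert model I = A * mu0^k * mu^(k-1) in log space.

    Linear form: log(I * mu) = log(A) + k * log(mu0 * mu).
    Returns the albedo-like constant A and the Minnaert exponent k.
    """
    x = np.log(mu0 * mu)
    y = np.log(i_obs * mu)
    k, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), k

rng = np.random.default_rng(4)
mu0 = rng.uniform(0.3, 1.0, 200)   # cosine of incidence angle
mu = rng.uniform(0.3, 1.0, 200)    # cosine of emission angle
A_true, k_true = 0.12, 0.6
i_obs = A_true * mu0 ** k_true * mu ** (k_true - 1)
A_fit, k_fit = fit_minnaert(i_obs, mu0, mu)
```

    Once fitted, such a model lets each pixel be renormalized to a common viewing geometry, which is what removes photometric seams between mosaicked images.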

  20. Image-Based Models Using Crowdsourcing Strategy

    Directory of Open Access Journals (Sweden)

    Antonia Spanò

    2016-12-01

    Full Text Available The conservation and valorization of cultural heritage require extensive documentation, both in properly historic-artistic terms and regarding the physical characteristics of position, shape, color, and geometry. With the use of digital photogrammetry, which makes it possible to acquire overlapping images for 3D photo modeling, and with the development of dense and accurate 3D point models, it is possible to obtain high-resolution orthoprojections of surfaces. Recent years have seen a growing interest in crowdsourcing for the protection and dissemination of cultural heritage; in parallel, there is an increasing awareness of contributing to the generation of digital models using the immense wealth of images available on the web, which are useful for heritage documentation. In this way, the availability and ease of automation of SfM (Structure from Motion) algorithms enable the generation of digital models of the built heritage, which can be inserted positively into crowdsourcing processes. In fact, non-expert users can handle the technology in the acquisition process, which today is one of the fundamental points for involving the wider public in cultural heritage protection. To present the image-based models and their derivatives that can be made from such a great digital resource, an emblematic case study was selected, of the kind useful for little-known or not easily accessible heritage: the Vank Cathedral in Isfahan, Iran. The availability of accurate point clouds and reliable orthophotos is very convenient, since the building of the Safavid epoch (cent. XVII-XVIII) is completely frescoed on its internal surfaces, where the architecture and especially the architectural decoration reach their peak. The experimental part of the paper also explores some aspects of the usability of the digital output from the image-based modeling methods.
The availability of orthophotos allows and facilitates the iconographic

  1. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    Full Text Available A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural grammar-based modeling, close-range photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler, and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages have different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no complete comparative study of this type is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches. The comparative study is mainly based on data acquisition methods, data processing techniques, and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors, and work experiences, and gives a brief introduction to, and the strengths and weaknesses of, these four image-based techniques, with comments on what can and cannot be done with each package. Finally, the study concludes that each software package has advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good

  2. The Halo Model of Origin Images

    DEFF Research Database (Denmark)

    Josiassen, Alexander; Lukas, Bryan A.; Whitwell, Gregory J.

    2013-01-01

    National origin has gained importance as a marketing tool for practitioners to sell their goods and services. However, because origin-image research has been troubled by several fundamental limitations, academia has become sceptical of the current status and strategic implications of the concept. The aim of this paper was threefold, namely: to provide a state-of-the-art review of origin-image research in marketing; to develop and empirically test a new origin-image model; and to present the implications of the study.

  3. Image-Based Geometric Modeling and Mesh Generation

    CERN Document Server

    2013-01-01

    As a new interdisciplinary research area, “image-based geometric modeling and mesh generation” integrates image processing, geometric modeling, and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine, materials science, and engineering. It is well known that FEM is currently well-developed and efficient, but mesh generation for complex geometries (e.g., the human body) still takes about 80% of the total analysis time and is the major obstacle to reducing the total computation time. This is mainly because none of the traditional approaches is sufficient to effectively construct finite element meshes for arbitrarily complicated domains, and generally a great deal of manual interaction is involved in mesh generation. This contributed volume, the first for such an interdisciplinary topic, collects the latest research by experts in this area. These papers cover a broad range of topics, including medical imaging, image alignment and segmentation, image-to-mesh conversion,...

  4. Fuzzy object models for newborn brain MR image segmentation

    Science.gov (United States)

    Kobashi, Syoji; Udupa, Jayaram K.

    2013-03-01

    Newborn brain MR image segmentation is a challenging problem because of the variety of sizes, shapes, and MR signals, although it is a fundamental step for quantitative radiology in brain MR images. Because of the large difference between the adult brain and the newborn brain, it is difficult to directly apply conventional methods to the newborn brain. Inspired by the original fuzzy object model introduced by Udupa et al. at SPIE Medical Imaging 2011, called the fuzzy shape object model (FSOM) here, this paper introduces the fuzzy intensity object model (FIOM), and proposes a new image segmentation method which combines the FSOM and FIOM into fuzzy connected (FC) image segmentation. The fuzzy object models are built from training datasets in which the cerebral parenchyma is delineated by experts. After registering the FSOM with the image under evaluation, the proposed method roughly recognizes the cerebral parenchyma region based on prior knowledge of location, shape, and MR signal given by the registered FSOM and FIOM. Then, FC image segmentation delineates the cerebral parenchyma using the fuzzy object models. The proposed method has been evaluated on 9 newborn brain MR images using the leave-one-out strategy. The revised age was between -1 and 2 months. Quantitative evaluation using false positive volume fraction (FPVF) and false negative volume fraction (FNVF) has been conducted, achieving an FPVF of 0.75% and an FNVF of 3.75%. More data collection and testing are underway.
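    The rough-recognition step, combining a registered fuzzy shape prior with a fuzzy intensity model, can be sketched as a per-voxel product of memberships (a simple fuzzy AND). This is a generic illustration with synthetic 2D data, not the paper's FSOM/FIOM construction; the Gaussian intensity membership and all parameters are assumptions.

```python
import numpy as np

def fuzzy_membership(image, shape_prior, mean, std):
    """Combine a fuzzy shape prior with a fuzzy intensity model.

    shape_prior: per-voxel membership in [0, 1] from registered training masks
    intensity membership: Gaussian affinity to the expected MR signal
    """
    intensity = np.exp(-0.5 * ((image - mean) / std) ** 2)
    return shape_prior * intensity   # fuzzy AND via product

rng = np.random.default_rng(5)
truth = np.zeros((24, 24))
truth[6:18, 6:18] = 1
image = np.where(truth == 1, 0.8, 0.2) + rng.normal(0, 0.05, truth.shape)
shape_prior = np.clip(truth + 0.2, 0, 1)   # crude stand-in for a registered prior
member = fuzzy_membership(image, shape_prior, mean=0.8, std=0.1)
seg = member > 0.5
```

    Neither cue alone is reliable: the shape prior admits background voxels and the intensity model admits bright structures elsewhere, but their product suppresses both failure modes.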

  5. Fuzzy modeling of electrical impedance tomography images of the lungs

    International Nuclear Information System (INIS)

    Tanaka, Harki; Ortega, Neli Regina Siqueira; Galizia, Mauricio Stanzione; Borges, Joao Batista; Amato, Marcelo Britto Passos

    2008-01-01

    Objectives: Aiming to improve the anatomical resolution of electrical impedance tomography images, we developed a fuzzy model based on electrical impedance tomography's high temporal resolution and on the functional pulmonary signals of perfusion and ventilation. Introduction: Electrical impedance tomography images carry information about both ventilation and perfusion. However, these images are difficult to interpret because of insufficient anatomical resolution, such that it becomes almost impossible to distinguish the heart from the lungs. Methods: Electrical impedance tomography data from an experimental animal model were collected during normal ventilation and apnoea while an injection of hypertonic saline was administered. The fuzzy model was elaborated in three parts: a modeling of the heart, the pulmonary ventilation map and the pulmonary perfusion map. Image segmentation was performed using a threshold method, and a ventilation/perfusion map was generated. Results: Electrical impedance tomography images treated by the fuzzy model were compared with the hypertonic saline injection method and computed tomography scan images, presenting good results. The average accuracy index was 0.80 when comparing the fuzzy modeled lung maps and the computed tomography scan lung mask. The average ROC curve area comparing a saline injection image and a fuzzy modeled pulmonary perfusion image was 0.77. Discussion: The innovative aspects of our work are the use of temporal information for the delineation of the heart structure and the use of two pulmonary functions for lung structure delineation. However, robustness of the method should be tested for the imaging of abnormal lung conditions. Conclusions: These results showed the adequacy of the fuzzy approach in treating the anatomical resolution uncertainties in electrical impedance tomography images. (author)

  6. A statistical model for radar images of agricultural scenes

    Science.gov (United States)

    Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.

    1982-01-01

    The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.
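    The model's assumptions (equal SNR per field, uniformly distributed true reflectivity) are easy to simulate: within a homogeneous field, fully developed single-look speckle makes intensity exponentially distributed around the field reflectivity, and the scene histogram is the resulting mixture. The field count and reflectivity range below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Each homogeneous field draws its true reflectivity uniformly (the model's
# assumption); single-look speckle makes intensity exponential within a field.
n_fields, pixels_per_field = 50, 400
reflectivity = rng.uniform(0.5, 2.0, n_fields)
scene = np.concatenate(
    [rng.exponential(r, pixels_per_field) for r in reflectivity])

# The scene mean should approach the mean reflectivity across fields,
# while the histogram is the mixture the model predicts.
scene_mean = scene.mean()
expected = reflectivity.mean()
```

    A histogram of `scene` is broader than any single exponential, which is exactly why a mixture over field reflectivities is needed to describe whole-scene statistics.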

  7. Fisheye image rectification using spherical and digital distortion models

    Science.gov (United States)

    Li, Xin; Pi, Yingdong; Jia, Yanling; Yang, Yuhui; Chen, Zhiyong; Hou, Wenguang

    2018-02-01

    Fisheye cameras have been widely used in many applications, including close-range visual navigation, observation, and cyber-city reconstruction, because their field of view is much larger than that of a common pinhole camera. This means that a fisheye camera can capture more information than a pinhole camera in the same scenario. However, fisheye images contain serious distortion, which can make it difficult for human observers to recognize the objects within them. Therefore, in most practical applications, the fisheye image should be rectified to a pinhole perspective projection image to conform to human cognitive habits. Traditional mathematical model-based methods cannot effectively remove the distortion, while the digital distortion model reduces the image resolution to some extent. Considering these defects, this paper proposes a new method that combines the physical spherical model with the digital distortion model. The distortion of fisheye images can be effectively removed by the proposed approach, and many experiments validate its feasibility and effectiveness.

  8. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

    Full text of the publication follows. Evaluating the effects of ionizing radiation and the risk of radiation exposure for the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps avoid unnecessary radiation and reduce harm to the human body. Accurately evaluating the dose to the human body requires constructing increasingly realistic computational phantoms. However, manual description and verification of models for Monte Carlo (MC) simulation are tedious, error-prone, and time-consuming; it is also difficult to locate and fix geometry errors and to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can automatically convert CT/segmented sectioned images into computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested on several medical images and sectioned images, and it has been applied to the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created with MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues and faithfully represents the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in treatment planning systems (TPS), as well as radiation exposure of the human body in radiation protection. (authors)

  9. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    Science.gov (United States)

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  10. Mathematical models for correction of images, obtained at radioisotope scan

    International Nuclear Information System (INIS)

    Glaz, A.; Lubans, A.

    2002-01-01

    The images obtained by radioisotope scintigraphy contain distortions, which arise from the absorption of radiation by the tissues of the patient's body. Two mathematical models for reducing such distortions are proposed. The first model uses an image obtained by only one gamma camera; unfortunately, it can process images only when the investigated organ can be assumed to have a symmetric form. The second model uses images obtained by two gamma cameras, which makes it possible to handle organs of non-symmetric form and to acquire more precise results. (authors)
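    Why two opposed views remove the symmetry requirement can be seen in the classic conjugate-view geometric mean, sketched below. This is a generic illustration, not the authors' published model; the attenuation coefficient, body thickness, and count values are assumed.

    ```python
    import numpy as np

    # For a source at depth d inside tissue of total thickness T with linear
    # attenuation coefficient mu, the anterior and posterior camera counts are
    # attenuated by exp(-mu*d) and exp(-mu*(T-d)). Their geometric mean depends
    # only on T, not on the unknown depth d, so no symmetry assumption is needed.
    mu, T = 0.12, 20.0          # 1/cm and cm (assumed values)
    true_counts = 1000.0

    for d in (3.0, 10.0, 17.0):                  # source depth varies
        anterior = true_counts * np.exp(-mu * d)
        posterior = true_counts * np.exp(-mu * (T - d))
        corrected = np.sqrt(anterior * posterior) * np.exp(mu * T / 2)
        print(f"depth {d:4.1f} cm -> corrected counts {corrected:.1f}")
    ```

    The corrected value is the same for every depth, which is the advantage a two-camera acquisition has over a single view.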

  11. A new level set model for cell image segmentation

    Science.gov (United States)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images (background, cytoplasm, and nucleolus) according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. In addition, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and yields satisfactory results in cell image processing.
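    The Otsu initialization step can be sketched generically: compute the threshold that maximizes between-class variance, then seed a signed level-set function from the resulting binary mask. The implementation and synthetic test image below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def otsu_threshold(image, nbins=256):
        """Return the threshold maximizing between-class variance (Otsu's method)."""
        hist, edges = np.histogram(image, bins=nbins)
        p = hist.astype(float) / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                       # class-0 (below threshold) mass
        mu = np.cumsum(p * centers)             # cumulative mean
        mu_t = mu[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
        sigma_b[~np.isfinite(sigma_b)] = 0.0
        return centers[np.argmax(sigma_b)]

    # Synthetic two-phase "cell" image: dark background, bright square blob.
    rng = np.random.default_rng(1)
    img = rng.normal(0.2, 0.05, (64, 64))
    img[20:40, 20:40] += 0.6

    t = otsu_threshold(img)
    phi0 = np.where(img > t, 1.0, -1.0)         # signed initial level-set function
    print(f"Otsu threshold = {t:.2f}")
    ```

    Seeding the level set near the true phase boundaries is what lets the variational evolution converge in fewer iterations than a blind initialization.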

  12. A 4DCT imaging-based breathing lung model with relative hysteresis

    Energy Technology Data Exchange (ETDEWEB)

    Miyawaki, Shinjiro; Choi, Sanghun [IIHR – Hydroscience & Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Hoffman, Eric A. [Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Medicine, The University of Iowa, Iowa City, IA 52242 (United States); Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Lin, Ching-Long, E-mail: ching-long-lin@uiowa.edu [IIHR – Hydroscience & Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Mechanical and Industrial Engineering, The University of Iowa, 3131 Seamans Center, Iowa City, IA 52242 (United States)

    2016-12-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.

  13. From medical imaging data to 3D printed anatomical models.

    Directory of Open Access Journals (Sweden)

    Thore M Bücking

    Full Text Available Anatomical models are important training and teaching tools in the clinical environment and are routinely used in medical imaging research. Advances in segmentation algorithms and the increased availability of three-dimensional (3D) printers have made it possible to create cost-efficient patient-specific models without expert knowledge. We introduce a general workflow that can be used to convert volumetric medical imaging data (as generated by computed tomography (CT)) to 3D printed physical models. This process is broken into three steps: image segmentation, mesh refinement, and 3D printing. To lower the barrier to entry and provide the best options when aiming to 3D print an anatomical model from medical images, we provide an overview of relevant free and open-source image segmentation tools as well as 3D printing technologies. We demonstrate the utility of this streamlined workflow by creating models of ribs, liver, and lung using a Fused Deposition Modelling 3D printer.

  14. Solid models for CT/MR image display

    International Nuclear Information System (INIS)

    ManKovich, N.J.; Yue, A.; Kioumehr, F.; Ammirati, M.; Turner, S.

    1991-01-01

    Medical imaging can now take wider advantage of computer-aided manufacturing through rapid prototyping technologies (RPT) such as stereolithography, laser sintering, and laminated object manufacturing to directly produce solid models of patient anatomy from processed CT and MR images. While conventional surgical planning relies on consultation with the radiologist combined with direct reading and measurement of CT and MR studies, 3-D surface and volumetric display workstations provide a more easily interpretable view of patient anatomy. RPT can provide the surgeon with a life-size model of patient anatomy constructed layer by layer with full internal detail. The authors have developed a prototype image processing and model fabrication system based on stereolithography, which provides the neurosurgeon with models of the skull base. Parallel comparison of the model with the original thresholded CT data and with a CRT-displayed surface rendering showed that both have an accuracy of >99.6 percent. The measurements on the surface-rendered display proved more difficult to locate exactly and yielded a standard deviation of 2.37 percent. This paper presents an accuracy study and discusses ways of assessing the quality of neurosurgical plans when 3-D models are made available as planning tools.

  15. Modeling digital breast tomosynthesis imaging systems for optimization studies

    Science.gov (United States)

    Lau, Beverly Amy

    Digital breast tomosynthesis (DBT) is a new imaging modality for breast imaging. In tomosynthesis, multiple images of the compressed breast are acquired at different angles, and the projection view images are reconstructed to yield images of slices through the breast. One of the main problems to be addressed in the development of DBT is the optimal parameter settings to obtain images ideal for detection of cancer. Since it would be unethical to irradiate women multiple times to explore potentially optimum geometries for tomosynthesis, it is ideal to use a computer simulation to generate projection images. Existing tomosynthesis models have modeled scatter and detector without accounting for oblique angles of incidence that tomosynthesis introduces. Moreover, these models frequently use geometry-specific physical factors measured from real systems, which severely limits the robustness of their algorithms for optimization. The goal of this dissertation was to design the framework for a computer simulation of tomosynthesis that would produce images that are sensitive to changes in acquisition parameters, so an optimization study would be feasible. A computer physics simulation of the tomosynthesis system was developed. The x-ray source was modeled as a polychromatic spectrum based on published spectral data, and inverse-square law was applied. Scatter was applied using a convolution method with angle-dependent scatter point spread functions (sPSFs), followed by scaling using an angle-dependent scatter-to-primary ratio (SPR). Monte Carlo simulations were used to generate sPSFs for a 5-cm breast with a 1-cm air gap. Detector effects were included through geometric propagation of the image onto layers of the detector, which were blurred using depth-dependent detector point-spread functions (PRFs). Depth-dependent PRFs were calculated every 5-microns through a 200-micron thick CsI detector using Monte Carlo simulations. Electronic noise was added as Gaussian noise as a
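    The scatter step described in this record (convolution of the primary image with a scatter point-spread function, then scaling by a scatter-to-primary ratio) can be sketched generically. The Gaussian sPSF and SPR value below are placeholders, not the Monte Carlo derived kernels of the dissertation.

    ```python
    import numpy as np

    def add_scatter(primary, spsf, spr):
        """Convolve primary with a normalized sPSF, then scale to a target SPR."""
        P = np.fft.rfft2(primary, primary.shape)
        K = np.fft.rfft2(np.fft.ifftshift(spsf), primary.shape)  # center kernel at origin
        scatter = np.fft.irfft2(P * K, primary.shape)
        scatter *= spr * primary.sum() / scatter.sum()           # enforce SPR globally
        return primary + scatter

    # Placeholder sPSF: a broad centered Gaussian, normalized to unit sum.
    n = 64
    y, x = np.mgrid[:n, :n] - n // 2
    spsf = np.exp(-(x**2 + y**2) / (2 * 8.0**2))
    spsf /= spsf.sum()

    primary = np.zeros((n, n))
    primary[n // 2, n // 2] = 1.0               # a single primary "pencil beam"
    image = add_scatter(primary, spsf, spr=0.3)
    frac = (image.sum() - primary.sum()) / primary.sum()
    print(f"scatter fraction = {frac:.2f}")
    ```

    In the dissertation both the sPSF and the SPR are angle-dependent, so a full simulation would select a different kernel and ratio per projection angle.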

  16. Monte Carlo modeling of human tooth optical coherence tomography imaging

    International Nuclear Information System (INIS)

    Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen

    2013-01-01

    We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation for photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: firstly, two types of photons contribute to the information of morphological features and noise in the OCT image of a human tooth, respectively. Secondly, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth. (paper)

  17. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    Science.gov (United States)

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
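    The heavier-than-Poisson tail that motivates the paper can be mimicked with a simple two-component Poisson mixture: keep the mean fixed but let a fraction of pixels come from a higher-gain mode. The weights and gains below are illustrative, not the paper's fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def poisson_mixture(lam, w=0.1, gain_hi=3.0, size=None):
        """Sample a two-component Poisson mixture with the same mean as Poisson(lam)."""
        hi = rng.random(size) < w               # pixels drawn from the high-gain mode
        gain = np.where(hi, gain_hi, 1.0)
        return gain * rng.poisson(lam / gain, size)

    n, lam = 200_000, 20.0
    plain = rng.poisson(lam, n)
    mixture = poisson_mixture(lam, size=n)

    # Same mean, but the mixture inflates the variance (and hence the tail):
    # Var = lam * (w*gain_hi + (1-w)) = 1.2*lam here, versus lam for plain Poisson.
    print(f"means: {plain.mean():.2f} vs {mixture.mean():.2f}; "
          f"variances: {plain.var():.1f} vs {mixture.var():.1f}")
    ```

    A denoiser tuned to the short-tailed single-Poisson model would undersmooth the mixture's outliers, which is the artifact the paper's mixture-of-Poisson denoiser targets.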

  18. Modeling Image Patches with a Generic Dictionary of Mini-Epitomes

    Science.gov (United States)

    Papandreou, George; Chen, Liang-Chieh; Yuille, Alan L.

    2015-01-01

    The goal of this paper is to question the necessity of features like SIFT in categorical visual recognition tasks. As an alternative, we develop a generative model for the raw intensity of image patches and show that it can support image classification performance on par with optimized SIFT-based techniques in a bag-of-visual-words setting. A key ingredient of the proposed model is a compact dictionary of mini-epitomes, learned in an unsupervised fashion from a large collection of images. The use of epitomes allows us to explicitly account for photometric and position variability in image appearance. We show that this flexibility considerably increases the capacity of the dictionary to accurately approximate the appearance of image patches and to support recognition tasks. For image classification, we develop histogram-based image encoding methods tailored to the epitomic representation, as well as an “epitomic footprint” encoding which is easy to visualize and highlights the generative nature of our model. We discuss computational aspects in detail and develop efficient algorithms to make the model scalable to large tasks. The proposed techniques are evaluated with experiments on the challenging PASCAL VOC 2007 image classification benchmark. PMID:26321859

  19. A new level set model for cell image segmentation

    International Nuclear Information System (INIS)

    Ma Jing-Feng; Chen Chun; Hou Kai; Bao Shang-Lian

    2011-01-01

    In this paper we first determine three phases of cell images (background, cytoplasm, and nucleolus) according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. In addition, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and yields satisfactory results in cell image processing. (cross-disciplinary physics and related areas of science and technology)

  20. Modelling of classical ghost images obtained using scattered light

    International Nuclear Information System (INIS)

    Crosby, S; Castelletto, S; Aruldoss, C; Scholten, R E; Roberts, A

    2007-01-01

    The images obtained in ghost imaging with pseudo-thermal light sources are highly dependent on the spatial coherence properties of the incident light. Pseudo-thermal light is often created by reducing the coherence length of a coherent source by passing it through a turbid mixture of scattering spheres. We describe a model for simulating ghost images obtained with such partially coherent light, using a wave-transport model to calculate the influence of the scattering on initially coherent light. The model is able to predict important properties of the pseudo-thermal source, such as the coherence length and the amplitude of the residual unscattered component of the light which influence the resolution and visibility of the final ghost image. We show that the residual ballistic component introduces an additional background in the reconstructed image, and the spatial resolution obtainable depends on the size of the scattering spheres

  1. Modelling of classical ghost images obtained using scattered light

    Energy Technology Data Exchange (ETDEWEB)

    Crosby, S; Castelletto, S; Aruldoss, C; Scholten, R E; Roberts, A [School of Physics, University of Melbourne, Victoria, 3010 (Australia)

    2007-08-15

    The images obtained in ghost imaging with pseudo-thermal light sources are highly dependent on the spatial coherence properties of the incident light. Pseudo-thermal light is often created by reducing the coherence length of a coherent source by passing it through a turbid mixture of scattering spheres. We describe a model for simulating ghost images obtained with such partially coherent light, using a wave-transport model to calculate the influence of the scattering on initially coherent light. The model is able to predict important properties of the pseudo-thermal source, such as the coherence length and the amplitude of the residual unscattered component of the light which influence the resolution and visibility of the final ghost image. We show that the residual ballistic component introduces an additional background in the reconstructed image, and the spatial resolution obtainable depends on the size of the scattering spheres.

  2. Generalized PSF modeling for optimized quantitation in PET imaging.

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time it significantly alters image noise properties and induces an edge overshoot effect. Studying the effect of PSF modeling on quantitation task performance is therefore very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standardized uptake value (SUV) PET images while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced SUV quantitation. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration, and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying modelled PSF kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients as well as noise-bias characteristics (including both image roughness and coefficient of variability) for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF

  3. Modeling the National Ignition Facility neutron imaging system.

    Science.gov (United States)

    Wilson, D C; Grim, G P; Tregillis, I L; Wilke, M D; Patel, M V; Sepke, S M; Morgan, G L; Hatarik, R; Loomis, E N; Wilde, C H; Oertel, J A; Fatherley, V E; Clark, D D; Fittinghoff, D N; Bower, D E; Schmitt, M J; Marinak, M M; Munro, D H; Merrill, F E; Moran, M J; Wang, T-S F; Danly, C R; Hilko, R A; Batha, S H; Frank, M; Buckles, R

    2010-10-01

    Numerical modeling of the neutron imaging system for the National Ignition Facility (NIF), forward from calculated target neutron emission to a camera image, will guide both the reduction of data and the future development of the system. Located 28 m from target chamber center, the system can produce two images at different neutron energies by gating on neutron arrival time. The brighter image, using neutrons near 14 MeV, reflects the size and symmetry of the implosion "hot spot." A second image in scattered neutrons, 10-12 MeV, reflects the size and symmetry of colder, denser fuel, but with only ∼1%-7% of the neutrons. A misalignment of the pinhole assembly up to ±175 μm is covered by a set of 37 subapertures with different pointings. The model includes the variability of the pinhole point spread function across the field of view. Omega experiments provided absolute calibration, scintillator spatial broadening, and the level of residual light in the down-scattered image from the primary neutrons. Application of the model to light decay measurements of EJ399, BC422, BCF99-55, Xylene, DPAC-30, and Liquid A suggests that DPAC-30 and Liquid A would be preferred over the BCF99-55 scintillator chosen for the first NIF system, if they could be fabricated into detectors with sufficient resolution.

  4. Comprehensive fluence model for absolute portal dose image prediction

    International Nuclear Information System (INIS)

    Chytyk, K.; McCurdy, B. M. C.

    2009-01-01

    Amorphous silicon (a-Si) electronic portal imaging devices (EPIDs) continue to be investigated as treatment verification tools, with a particular focus on intensity modulated radiation therapy (IMRT). This verification could be accomplished through a comparison of measured portal images to predicted portal dose images. A general fluence determination tailored to portal dose image prediction would be a great asset in order to model the complex modulation of IMRT. A proposed physics-based parameter fluence model was commissioned by matching predicted EPID images to corresponding measured EPID images of multileaf collimator (MLC) defined fields. The two-source fluence model was composed of a focal Gaussian and an extrafocal Gaussian-like source. Specific aspects of the MLC and secondary collimators were also modeled (e.g., jaw and MLC transmission factors, MLC rounded leaf tips, tongue and groove effect, interleaf leakage, and leaf offsets). Several unique aspects of the model were developed based on the results of detailed Monte Carlo simulations of the linear accelerator including (1) use of a non-Gaussian extrafocal fluence source function, (2) separate energy spectra used for focal and extrafocal fluence, and (3) different off-axis energy spectra softening used for focal and extrafocal fluences. The predicted energy fluence was then convolved with Monte Carlo generated, EPID-specific dose kernels to convert incident fluence to dose delivered to the EPID. Measured EPID data were obtained with an a-Si EPID for various MLC-defined fields (from 1x1 to 20x20 cm²) over a range of source-to-detector distances. These measured profiles were used to determine the fluence model parameters in a process analogous to the commissioning of a treatment planning system. The resulting model was tested on 20 clinical IMRT plans, including ten prostate and ten oropharyngeal cases. 
The model predicted the open-field profiles within 2%, 2 mm, while a mean of 96.6% of pixels over all

  5. On use of image quality metrics for perceptual blur modeling: image/video compression case

    Science.gov (United States)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metric (IQM) methods, which heuristically estimate the perceived MTF, has supported the idea that an average perceived MTF can be used to model some types of degradation, such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  6. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs together with blind image restoration, simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally, both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorization, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.

  7. Infrared image background modeling based on improved Susan filtering

    Science.gov (United States)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model the background of an infrared image, its Gaussian filter lacks directional selectivity: edge information is not preserved well after filtering, leaving many edge singular points in the difference image and making target detection more difficult. To address this problem, this paper introduces anisotropy into the SUSAN operator, replacing its isotropic Gaussian filter with an anisotropic Gaussian filter. First, an anisotropic gradient operator computes the horizontal and vertical gradients at each pixel to determine the direction of the filter's long axis. Second, the smoothness of the pixel's local neighborhood is used to set the filter length and the short-axis variance. The threshold of the SUSAN filter is then determined from the first-order norm of the difference between the local gray-scale values and their mean. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. Experimental results, evaluated by mean squared error (MSE), structural similarity (SSIM), and local signal-to-noise ratio gain (GSNR), show that the improved SUSAN filter achieves better background modeling than the traditional filtering algorithm: it effectively preserves edge information in the image, enhances dim small targets in the difference image, and greatly reduces the false alarm rate.
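    The core of the replacement filter, an oriented (anisotropic) Gaussian whose long axis follows the local edge direction, can be sketched as below. The kernel construction is generic; the radius and sigma values are illustrative, not the paper's.

    ```python
    import numpy as np

    def oriented_gaussian_kernel(theta, sigma_long, sigma_short, radius=7):
        """Build a unit-sum Gaussian kernel elongated along direction theta."""
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
        u = x * np.cos(theta) + y * np.sin(theta)      # coordinate along long axis
        v = -x * np.sin(theta) + y * np.cos(theta)     # coordinate along short axis
        k = np.exp(-(u**2 / (2 * sigma_long**2) + v**2 / (2 * sigma_short**2)))
        return k / k.sum()

    # A vertical edge has a horizontal gradient, so the long axis should run
    # vertically (theta = pi/2): smoothing acts along the edge, not across it.
    k = oriented_gaussian_kernel(np.pi / 2, sigma_long=3.0, sigma_short=0.8)
    print(f"more weight along the vertical axis: {k[0, 7] > k[7, 0]}")
    ```

    Convolving with such a kernel, with theta and the sigmas chosen per pixel from the local gradient and smoothness as the paper describes, smooths the background while leaving edges largely intact.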

  8. Reconstruction of binary geological images using analytical edge and object models

    Science.gov (United States)

    Abdollahifard, Mohammad J.; Ahmadi, Sadegh

    2016-04-01

    Reconstruction of fields from partial measurements is of vital importance in different applications in the geosciences. Solving such an ill-posed problem requires a well-chosen model. In recent years, training images (TI) have been widely employed as strong prior models for solving these problems. However, in the absence of enough evidence it is difficult to find an adequate TI that is capable of describing the field behavior properly. In this paper a very simple and general model is introduced which is applicable to a fairly wide range of binary images without any modifications. The model is motivated by the fact that nearly all binary images are composed of simple linear edges at the micro-scale. The analytic essence of this model allows us to formulate the template matching problem as a convex optimization problem with efficient and fast solutions. The model has the potential to incorporate the qualitative and quantitative information provided by geologists. The image reconstruction problem is also formulated as an optimization problem and solved using an iterative greedy approach. The proposed method is capable of recovering the unknown image values with accuracies of about 90% given samples representing as few as 2% of the original image.

  9. Imaging infrared: Scene simulation, modeling, and real image tracking; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    Science.gov (United States)

    Triplett, Milton J.; Wolverton, James R.; Hubert, August J.

    1989-09-01

    Various papers on scene simulation, modeling, and real image tracking using IR imaging are presented. Individual topics addressed include: tactical IR scene generator, dynamic FLIR simulation in flight training research, high-speed dynamic scene simulation in UV to IR spectra, development of an IR sensor calibration facility, IR celestial background scene description, transmission measurement of optical components at cryogenic temperatures, diffraction model for a point-source generator, silhouette-based tracking for tactical IR systems, use of knowledge in electrooptical trackers, detection and classification of target formations in IR image sequences, SMPRAD: simplified three-dimensional cloud radiance model, IR target generator, recent advances in testing of thermal imagers, generic IR system models with dynamic image generation, modeling realistic target acquisition using IR sensors in multiple-observer scenarios, and novel concept of scene generation and comprehensive dynamic sensor test.

  10. Metal artifact reduction algorithm based on model images and spatial information

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jay [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Shih, Cheng-Ting [Department of Biomedical Engineering and Environmental Sciences, National Tsing-Hua University, Hsinchu, Taiwan (China); Chang, Shu-Jun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan (China); Huang, Tzung-Chi [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan (China); Sun, Jing-Yi [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Wu, Tung-Hsin, E-mail: tung@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, No.155, Sec. 2, Linong Street, Taipei 112, Taiwan (China)

    2011-10-01

    Computed tomography (CT) has become one of the most favorable choices for diagnosis of trauma. However, high-density metal implants can induce metal artifacts in CT images, compromising image quality. In this study, we proposed a model-based metal artifact reduction (MAR) algorithm. First, we built a model image using the k-means clustering technique with spatial information and calculated the difference between the original image and the model image. Then, the projection data of these two images were combined using an exponential weighting function. Finally, the corrected image was reconstructed using the filtered back-projection algorithm. Two metal-artifact-contaminated images were studied. For the cylindrical water phantom image, the metal artifact was effectively removed. The mean CT number of water improved from -28.95±97.97 to -4.76±4.28. For the clinical pelvic CT image, the dark band and the metal line were removed, and the continuity and uniformity of the soft tissue were recovered as well. These results indicate that the proposed MAR algorithm is useful for reducing metal artifacts and could improve the diagnostic value of metal-artifact-contaminated CT images.
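
    The model-image step above can be illustrated with a plain k-means clustering over (intensity, weighted x, weighted y) features; the spatial weight, initialization, and cluster count below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def kmeans_model_image(img, k=3, beta=0.05, iters=25):
    """Cluster pixels on (intensity, beta*x, beta*y) and replace each pixel
    by its cluster's mean intensity, giving an artifact-free 'model image'."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.stack([img.ravel().astype(float),
                      beta * xx.ravel().astype(float),
                      beta * yy.ravel().astype(float)], axis=1)
    # deterministic init: centers spread along the sorted intensities
    order = np.argsort(feats[:, 0])
    centers = feats[order[np.linspace(0, len(order) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        d2 = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = feats[labels == c].mean(0)
    return centers[labels, 0].reshape(h, w)
```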

  11. Balanced sparse model for tight frames in compressed sensing magnetic resonance imaging.

    Directory of Open Access Journals (Sweden)

    Yunsong Liu

    Full Text Available Compressed sensing has been shown to be a promising approach for accelerating magnetic resonance imaging. In this technology, magnetic resonance images are usually reconstructed by enforcing sparsity in sparse image reconstruction models, including both synthesis and analysis models. The synthesis model assumes that an image is a sparse combination of atom signals, while the analysis model assumes that an image is sparse after the application of an analysis operator. The balanced model is a sparse model that bridges the analysis and synthesis models by introducing a penalty term on the distance of frame coefficients to the range of the analysis operator. In this paper, we study the performance of the balanced model in tight-frame-based compressed sensing magnetic resonance imaging and propose a new efficient numerical algorithm to solve the optimization problem. By tuning the balancing parameter, the new model recovers the solutions of all three models. The balanced model is found to perform comparably to the analysis model, and both achieve better results than the synthesis model regardless of the value of the balancing parameter. Experiments show that our proposed numerical algorithm, a constrained split augmented Lagrangian shrinkage algorithm for the balanced model (C-SALSA-B), converges faster than the previously proposed accelerated proximal gradient algorithm (APG) and the alternating direction method of multipliers for the balanced model (ADMM-B).
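
    For a tight frame W (with W^T W = I), the three objectives discussed above can be written in a standard form; this is a textbook rendering of the balanced approach, not necessarily the paper's exact notation:

```latex
% synthesis model: sparse frame coefficients \alpha, image x = W^{T}\alpha
\min_{\alpha}\ \tfrac{1}{2}\,\|A W^{T}\alpha - y\|_{2}^{2} + \lambda\,\|\alpha\|_{1}

% analysis model: the image itself is the variable
\min_{x}\ \tfrac{1}{2}\,\|A x - y\|_{2}^{2} + \lambda\,\|W x\|_{1}

% balanced model: penalize the distance of \alpha to the range of W
\min_{\alpha}\ \tfrac{1}{2}\,\|A W^{T}\alpha - y\|_{2}^{2}
  + \tfrac{\gamma}{2}\,\|(I - W W^{T})\,\alpha\|_{2}^{2} + \lambda\,\|\alpha\|_{1}
```

Since W W^T is the orthogonal projection onto the range of W, letting γ → ∞ forces the coefficients into that range and recovers the analysis model, while γ = 0 recovers the synthesis model, matching the tuning behaviour reported above.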

  12. CT radiation dose and image quality optimization using a porcine model.

    Science.gov (United States)

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2013-01-01

    To evaluate potential radiation dose savings and resultant image quality effects with regard to optimization of commonly performed computed tomography (CT) studies derived from imaging a porcine (pig) model. Imaging protocols for 4 clinical CT suites were developed based on the lowest milliamperage and kilovoltage, the highest pitch that could be set from current imaging protocol parameters, or both. This occurred before significant changes in noise, contrast, and spatial resolution were measured objectively on images produced from a quality assurance CT phantom. The current and derived phantom protocols were then applied to scan a porcine model for head, abdomen, and chest CT studies. Further optimized protocols were developed based on the same methodology as in the phantom study. The optimization achieved with respect to radiation dose and image quality was evaluated following data collection of radiation dose recordings and image quality review. Relative visual grading analysis of image quality criteria adapted from the European guidelines on radiology quality criteria for CT were used for studies completed with both the phantom-based or porcine-derived imaging protocols. In 5 out of 16 experimental combinations, the current clinical protocol was maintained. In 2 instances, the phantom protocol reduced radiation dose by 19% to 38%. In the remaining 9 instances, the optimization based on the porcine model further reduced radiation dose by 17% to 38%. The porcine model closely reflects anatomical structures in humans, allowing the grading of anatomical criteria as part of image quality review without radiation risks to human subjects. This study demonstrates that using a porcine model to evaluate CT optimization resulted in more radiation dose reduction than when imaging protocols were tested solely on quality assurance phantoms.

  13. Remote Sensing Image Enhancement Based on Non-subsampled Shearlet Transform and Parameterized Logarithmic Image Processing Model

    Directory of Open Access Journals (Sweden)

    TAO Feixiang

    2015-08-01

    Full Text Available To improve the visual quality and interpretability of remote sensing images whose dark regions suffer from low brightness and low contrast, a remote sensing image enhancement method based on the non-subsampled Shearlet transform and the parameterized logarithmic image processing (PLIP) model is proposed in this paper. Firstly, a remote sensing image is decomposed into a low-frequency component and high-frequency components by the non-subsampled Shearlet transform. Then the low-frequency component is enhanced according to the PLIP model, which improves the contrast of the image, while an improved fuzzy enhancement method is used to enhance the high-frequency components in order to highlight edge and detail information. A large number of experimental results show that, compared with five image enhancement methods such as bidirectional histogram equalization, a method based on the stationary wavelet transform, and a method based on the non-subsampled contourlet transform, the proposed method has advantages in both subjective visual quality and objective quantitative evaluation indexes such as contrast and definition, and can more effectively improve the contrast of remote sensing images and enhance edges and texture details.
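
    A commonly cited form of the PLIP arithmetic (operating on gray-tone values g = M - I with a parameter γ) is sketched below; the exact parameterization varies across the literature, so treat this as an assumption rather than the paper's definition:

```python
import numpy as np

GAMMA = 256.0  # PLIP parameter; often tied to the maximum gray level M

def plip_add(g1, g2, gamma=GAMMA):
    """PLIP addition of two gray-tone values (g = M - I); saturating, never
    exceeding the dynamic range, unlike ordinary addition."""
    return g1 + g2 - (g1 * g2) / gamma

def plip_scalar_mul(c, g, gamma=GAMMA):
    """PLIP multiplication of a gray-tone value by a real scalar c,
    used to stretch contrast inside the model's bounded range."""
    return gamma - gamma * (1.0 - g / gamma) ** c
```

In an enhancement step of the kind described above, the low-frequency Shearlet band would be mapped to gray tones, stretched with `plip_scalar_mul`, and mapped back.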

  14. Gallbladder shape extraction from ultrasound images using active contour models.

    Science.gov (United States)

    Ciecholewski, Marcin; Chochołowicz, Jakub

    2013-12-01

    Gallbladder function is routinely assessed using ultrasonographic (USG) examinations. In clinical practice, doctors very often analyse the gallbladder shape when diagnosing selected disorders, e.g. if there are turns or folds of the gallbladder, so extracting its shape from USG images using supporting software can simplify a diagnosis that is often difficult to make. The paper describes two active contour models, an edge-based model and a region-based model making use of a morphological approach, both designed for extracting the gallbladder shape from USG images. The active contour models were applied to USG images without lesions and to those showing specific disease entities, namely anatomical changes like folds and turns of the gallbladder as well as polyps and gallstones. This paper also presents modifications of the edge-based model, such as the method for removing self-crossings and loops, and the method of dampening the inflation force which moves nodes as they approach the edge being determined. The user is also able to add a fragment of the approximated edge beyond which neither active contour model will move if this edge is incomplete in the USG image. The modifications of the edge-based model presented here allow more precise results to be obtained when extracting the shape of the gallbladder from USG images than if the morphological model is used.

  15. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model.

    Science.gov (United States)

    Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D

    2010-08-01

    Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models.

  16. Model-based estimation of breast percent density in raw and processed full-field digital mammography images from image-acquisition physics and patient-image characteristics

    Science.gov (United States)

    Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina

    2012-03-01

    Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
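
    The leave-one-woman-out training scheme can be sketched generically with an identity-link (Gaussian) GLM; the features and data below are purely illustrative, not the paper's acquisition-physics and gray-level predictors:

```python
import numpy as np

def loo_predictions(X, y):
    """Leave-one-out predictions from a linear (identity-link) GLM,
    refitted on all-but-one case each time via least squares."""
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])  # add an intercept column
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        coef, *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        preds[i] = Xb[i] @ coef
    return preds

def pearson_r(a, b):
    """Pearson correlation, as used to compare predictions with the
    radiologist's continuous density estimates."""
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))
```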

  17. Statistical model for OCT image denoising

    KAUST Repository

    Li, Muxingzi

    2017-08-01

    Optical coherence tomography (OCT) is a non-invasive technique with a large array of applications in clinical imaging and biological tissue visualization. However, the presence of speckle noise affects the analysis of OCT images and their diagnostic utility. In this article, we introduce a new OCT denoising algorithm. The proposed method is founded on a numerical optimization framework based on maximum-a-posteriori estimate of the noise-free OCT image. It combines a novel speckle noise model, derived from local statistics of empirical spectral domain OCT (SD-OCT) data, with a Huber variant of total variation regularization for edge preservation. The proposed approach exhibits satisfying results in terms of speckle noise reduction as well as edge preservation, at reduced computational cost.
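
    The fidelity-plus-Huber-TV idea can be sketched as a simple gradient descent; note that the additive-noise fidelity used here is a simplification of the paper's SD-OCT speckle statistics, and the parameters are illustrative:

```python
import numpy as np

def huber_deriv(t, delta):
    """Derivative of the Huber penalty: linear near 0, saturating beyond delta,
    which is what preserves strong edges."""
    return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

def huber_tv_denoise(f, lam=0.5, delta=0.1, step=0.2, iters=200):
    """Minimize 0.5*||u - f||^2 + lam * sum Huber(grad u) by gradient descent."""
    u = f.astype(float).copy()
    for _ in range(iters):
        dx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences,
        dy = np.diff(u, axis=0, append=u[-1:, :])   # zero at the boundary
        px = huber_deriv(dx, delta)
        py = huber_deriv(dy, delta)
        # discrete divergence (negative adjoint of the forward difference)
        div = np.diff(px, axis=1, prepend=0) + np.diff(py, axis=0, prepend=0)
        u -= step * ((u - f) - lam * div)
    return u
```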

  18. Modulation transfer function cascade model for a sampled IR imaging system.

    Science.gov (United States)

    de Luca, L; Cardone, G

    1991-05-01

    The performance of the infrared scanning radiometer (IRSR) is strongly stressed in convective heat transfer applications, where the signal describing the thermal image contains high spatial frequencies. The need to characterize the system's spatial resolution more deeply has led to the formulation of a cascade model for evaluating the actual modulation transfer function of a sampled IR imaging system. The model can yield both the aliasing band and the averaged modulation response for a general sampling subsystem. For a line-scan imaging system, which is the case for a typical IRSR, a rule of thumb is proposed that states whether the combined sampling-imaging system is imaging-dependent or sampling-dependent. The model is tested by comparison with other, noncascade models as well as by ad hoc measurements performed on a commercial digitized IRSR.
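
    The cascade idea — the pre-sampling MTF of the chain is the product of its component MTFs, and the sampling pitch sets the Nyquist frequency beyond which energy aliases — can be sketched with generic textbook component models (not the paper's IRSR-specific ones):

```python
import numpy as np

def optics_mtf(f, sigma):
    """Gaussian-blur optics: MTF = exp(-2*(pi*sigma*f)^2)."""
    return np.exp(-2.0 * (np.pi * sigma * f) ** 2)

def detector_mtf(f, width):
    """Rectangular detector aperture of the given width: |sinc(width * f)|."""
    return np.abs(np.sinc(width * f))

def cascade_mtf(f, sigma, width):
    """Pre-sampling MTF of the combined imaging chain (product of stages)."""
    return optics_mtf(f, sigma) * detector_mtf(f, width)

def aliased_fraction(sigma, width, pitch, fmax=50.0, n=2001):
    """Rough aliasing measure: energy of the pre-sampling response beyond
    Nyquist (1/(2*pitch)) relative to its total energy."""
    f = np.linspace(0.0, fmax, n)
    m = cascade_mtf(f, sigma, width) ** 2
    nyq = 1.0 / (2.0 * pitch)
    return m[f > nyq].sum() / m.sum()
```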

  19. Modeling decision-making in single- and multi-modal medical images

    Science.gov (United States)

    Canosa, R. L.; Baum, K. G.

    2009-02-01

    This research introduces a mode-specific model of visual saliency that can be used to highlight likely lesion locations and potential errors (false positives and false negatives) in single-mode PET and MRI images and multi-modal fused PET/MRI images. Fused-modality digital images are a relatively recent technological improvement in medical imaging; therefore, a novel component of this research is to characterize the perceptual response to these fused images. Three different fusion techniques were compared to single-mode displays in terms of observer error rates using synthetic human brain images generated from an anthropomorphic phantom. An eye-tracking experiment was performed with naïve (non-radiologist) observers who viewed the single- and multi-modal images. The eye-tracking data allowed the errors to be classified into four categories: false positives, search errors (false negatives never fixated), recognition errors (false negatives fixated less than 350 milliseconds), and decision errors (false negatives fixated greater than 350 milliseconds). A saliency model consisting of a set of differentially weighted low-level feature maps is derived from the known error and ground truth locations extracted from a subset of the test images for each modality. The saliency model shows that lesion and error locations attract visual attention according to low-level image features such as color, luminance, and texture.
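
    The final saliency map described above is essentially a differentially weighted sum of normalized low-level feature maps; a minimal sketch (in practice the weights would come from the error and ground-truth training step, here they are supplied by hand):

```python
import numpy as np

def saliency_map(feature_maps, weights):
    """Combine low-level feature maps (e.g. luminance, color, texture)
    into one saliency map via a weighted sum of range-normalized maps."""
    s = np.zeros_like(feature_maps[0], dtype=float)
    for m, w in zip(feature_maps, weights):
        m = m.astype(float)
        span = m.max() - m.min()
        m = (m - m.min()) / span if span > 0 else np.zeros_like(m)
        s += w * m
    return s / np.sum(weights)   # keep the result in [0, 1]
```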

  20. Modeling susceptibility difference artifacts produced by metallic implants in magnetic resonance imaging with point-based thin-plate spline image registration.

    Science.gov (United States)

    Pauchard, Y; Smith, M; Mintchev, M

    2004-01-01

    Magnetic resonance imaging (MRI) suffers from geometric distortions arising from various sources. One such source is the non-linearity associated with the presence of metallic implants, which can profoundly distort the obtained images. These non-linearities result in pixel shifts and intensity changes in the vicinity of the implant, often precluding any meaningful assessment of the entire image. This paper presents a method for correcting these distortions based on non-rigid image registration techniques. Two images from a modelled three-dimensional (3D) grid phantom were subjected to point-based thin-plate spline registration. The reference image (without distortions) was obtained from a grid model including a spherical implant, and the corresponding test image containing the distortions was obtained using a previously reported technique for spatial modelling of magnetic susceptibility artifacts. After identifying the non-recoverable area in the distorted image, the calculated spline model was able to quantitatively account for the distortions, thus facilitating their compensation. Upon completion of the compensation procedure, the non-recoverable area was removed from the reference image and the latter was compared to the compensated image. A quantitative assessment of the goodness of the proposed compensation technique is presented.
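
    A point-based thin-plate spline warp of the kind used here follows from a closed-form linear system; the sketch below is the standard TPS formulation, leaving out the susceptibility-artifact specifics:

```python
import numpy as np

def tps_fit(src, dst):
    """Solve for TPS weights mapping 2-D control points src onto dst.
    Kernel U(r) = r^2 log(r^2); the constant factor is absorbed in the weights."""
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.where(d2 > 0, d2 * np.log(np.where(d2 > 0, d2, 1.0)), 0.0)
    P = np.column_stack([np.ones(n), src])      # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)

def tps_apply(params, src, pts):
    """Warp arbitrary points with the fitted spline."""
    n = len(src)
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = np.where(d2 > 0, d2 * np.log(np.where(d2 > 0, d2, 1.0)), 0.0)
    P = np.column_stack([np.ones(len(pts)), pts])
    return U @ params[:n] + P @ params[n:]
```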

  1. VERIFICATION OF 3D BUILDING MODELS USING MUTUAL INFORMATION IN AIRBORNE OBLIQUE IMAGES

    Directory of Open Access Journals (Sweden)

    A. P. Nyaruhuma

    2012-07-01

    Full Text Available This paper describes a method for automatic verification of 3D building models using airborne oblique images. The problem tackled is identifying, from the images, buildings that have been demolished or changed since the models were constructed, as well as models that are simply wrong. The models verified are of CityGML LOD2 or higher, since their edges are expected to coincide with actual building edges. The verification approach is based on information theory: corresponding variables between building models and oblique images are used to derive mutual information for individual edges, faces or whole buildings, combined over all perspective images available for the building. The wireframe model edges are projected to the images and verified using low-level image features, namely the image pixel gradient directions. A building part is only checked against images in which it may be visible. The method has been tested with models constructed using laser points against Pictometry images, which are available for most cities of Europe and may be publicly viewed in the so-called Bird's Eye view of Microsoft Bing Maps. The results show that nearly all buildings are correctly categorised as existing or demolished. Because we concentrate on roofs, we also used the method to test and compare results from nadir images. This comparison made clear that height errors in models, in particular, can be detected more reliably in oblique images because of the tilted view. Besides overall building verification, results for individual edges can be used to improve the 3D building models.
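
    The mutual-information score between projected model edges and image gradient directions can be estimated with a joint histogram; the binning and toy data below are illustrative:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram estimate of I(A;B) in nats for two paired samples,
    e.g. projected model edge directions vs. image gradient directions."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

An edge whose pixel gradient directions share high mutual information with the projected model edge direction supports the hypothesis that the building part still exists; near-zero scores across all visible images flag demolished or wrong models.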

  2. Integral equation models for image restoration: high accuracy methods and fast algorithms

    International Nuclear Information System (INIS)

    Lu, Yao; Shen, Lixin; Xu, Yuesheng

    2010-01-01

    Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence, inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms having high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images

  3. Modelling of microcracks image treated with fluorescent dye

    Science.gov (United States)

    Glebov, Victor; Lashmanov, Oleg U.

    2015-06-01

    The main reasons for catastrophes and accidents are a high level of equipment wear and violations of production technology. Methods of nondestructive testing are designed to find defects in time and prevent the breakdown of aggregates; they allow determining the compliance of object parameters with technical requirements without destroying the object. This work discusses dye penetrant inspection, or liquid penetrant inspection (DPI or LPI), and a computer model of an image of microcracks treated with fluorescent dye. Usually cracks appear in an image as broken extended lines of small width (about 1 to 10 pixels) with ragged edges. The inspection method used allows the detection of microcracks with a depth of about 10 or more micrometers. In the course of this work, a mathematical model of an image of randomly located microcracks treated with fluorescent dye was created in the MATLAB environment. Background noise and distortions introduced by the optical system are considered in the model. The factors that influence the image are listed below: 1. Background noise, caused by bright light from external sources, which reduces contrast at object edges. 2. Noise on the image sensor: digital noise manifests itself as randomly located points differing in brightness and color. 3. Distortions caused by aberrations of the optical system: after passing through a real optical system, the homocentricity of the bundle of rays is violated, or homocentricity remains but the rays intersect at a point that does not coincide with the point of the ideal image. The stronger the influence of these factors, the worse the image quality, and the more difficult the analysis of the image for inspection of the item. The mathematical model is created using the following algorithm: at the beginning, the number of cracks to be modeled is entered from the keyboard.
Then a point with a random position is chosen on the matrix whose size is
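
    Although the authors work in MATLAB, the image model they describe — thin broken crack lines plus background glow and sensor noise — can be sketched in a few lines of numpy; crack length, noise levels, and background level here are illustrative choices:

```python
import numpy as np

def synthetic_crack_image(shape=(128, 128), n_cracks=3, seed=1):
    """Synthetic image of randomly located microcracks: each crack is a
    random-walk polyline (broken thin line with ragged edges), then a
    uniform background glow and Gaussian sensor noise are added."""
    rng = np.random.default_rng(seed)
    img = np.zeros(shape)
    h, w = shape
    for _ in range(n_cracks):
        y, x = int(rng.integers(0, h)), int(rng.integers(0, w))
        for _ in range(200):
            y = int(np.clip(y + rng.integers(-1, 2), 0, h - 1))
            x = int(np.clip(x + rng.integers(-1, 2), 0, w - 1))
            img[y, x] = 1.0                  # fluorescent crack pixel
    img += 0.2                               # background glow (external light)
    img += 0.1 * rng.standard_normal(shape)  # digital sensor noise
    return np.clip(img, 0.0, 1.0)
```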

  4. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
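
    The core operation of such matching, estimating the relative offset between image patches, can be sketched with FFT cross-correlation; SETSM's hierarchical, sub-pixel, object-space matching is far more involved:

```python
import numpy as np

def estimate_shift(ref, moved):
    """Integer pixel shift such that np.roll(moved, shift, axis=(0, 1))
    best aligns with ref, found via FFT cross-correlation."""
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(moved))).real
    idx = np.unravel_index(np.argmax(xc), xc.shape)
    # wrap indices beyond the half-size back to negative shifts
    return tuple(int(i) if i <= s // 2 else int(i) - s
                 for i, s in zip(idx, xc.shape))
```

A hierarchical version would first run this on downsampled copies to get a coarse offset, then refine at full resolution over a small search window, which is the spirit of the coarse-to-fine scheme described above.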

  5. Automated image analysis of lateral lumbar X-rays by a form model

    International Nuclear Information System (INIS)

    Mahnken, A.H.; Kohnen, M.; Steinberg, S.; Wein, B.B.; Guenther, R.W.

    2001-01-01

    Development of software for fully automated image analysis of lateral lumbar spine X-rays. Material and method: Using the concept of active shape models, we developed software that produces a form model of the lumbar spine from lateral lumbar spine radiographs and runs an automated image segmentation. This model is able to detect lumbar vertebrae automatically after the filtering of digitized X-ray images. The model was trained on 20 lateral lumbar spine radiographs with no pathological findings before we evaluated the software on 30 further X-ray images, which were sorted by image quality from one (best) to three (worst), with 10 images in each quality group. Results: Image recognition depended strongly on image quality. In group one, 52 and in group two, 51 out of 60 vertebral bodies including the sacrum were recognized, but in group three only 18 vertebral bodies were properly identified. Conclusion: Fully automated and reliable recognition of vertebral bodies from lateral spine radiographs using the concept of active shape models is possible. The precision of this technique is limited by the superposition of different structures. Further improvements are necessary; therefore, standardized image quality and enlargement of the training data set are required. (orig.)

  6. A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation

    Directory of Open Access Journals (Sweden)

    Liming Tang

    2014-01-01

    Full Text Available The fuzzy C-means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and enough robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which can obtain smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. However, it is very sensitive to noise since it is actually a hard C-means clustering model. In this paper, based on Samson's work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model obtains smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. In addition, a block-based energy is incorporated into the energy functional, which makes the proposed model more robust to noise than both FCMS clustering and Samson's model. Experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model performs better on images contaminated by different noise levels.

  7. Development and practice for a PACS-based interactive teaching model for CT image

    International Nuclear Information System (INIS)

    Tian Junzhang; Jiang Guihua; Zheng Liyin; Wang Ling; Wenhua; Liang Lianbao

    2002-01-01

    Objective: To explore a PACS-based interactive teaching model for CT imaging, providing clinicians and young radiologists with continuing medical education. Methods: A 100 Mbps trunk network was adopted in the PACS, with 10 Mbps switched to the desktop. The teaching model was installed on the browsing and diagnosis workstations. Teaching contents were classified by region and managed in a branching model. Text data were derived from authoritative textbooks, monographs, and periodicals. Image data were derived from cases confirmed by pathology and clinical follow-up, and were acquired with a digital camera and scanner or taken from the PACS. After being edited and converted into standard digital images through a DICOM server, they were saved as files on the hard disk of the PACS image server. Results: The CT imaging teaching model provided a variety of cases with their CT signs, clinical characteristics, pathology, and differential diagnosis. Normal sectional anatomy, typical images, and their annotations could be browsed in real time. The teaching model could also serve as a reference for teaching, diagnosis, and reporting. Conclusion: The PACS-based teaching model for CT imaging can provide an interactive teaching and scientific research tool and improve work quality and efficiency.

  8. Imaging system models for small-bore DOI-PET scanners

    International Nuclear Information System (INIS)

    Takahashi, Hisashi; Kobayashi, Tetsuya; Yamaya, Taiga; Murayama, Hideo; Kitamura, Keishi; Hasegawa, Tomoyuki; Suga, Mikio

    2006-01-01

    Depth-of-interaction (DOI) information, which improves resolution uniformity in the field of view (FOV), is expected to lead to high-sensitivity PET scanners with small-bore detector rings. We are developing small-bore PET scanners with DOI detectors arranged in hexagonal or overlapped tetragonal patterns for small animal imaging or mammography. It is necessary to optimize the imaging system model because these scanners exhibit irregular detector sampling. In this work, we compared two imaging system models: (a) a parallel sub-LOR model in which the detector response functions (DRFs) are assumed to be uniform along the line of responses (LORs) and (b) a sub-crystal model in which each crystal is divided into a set of smaller volumes. These two models were applied to the overlapped tetragonal scanner (FOV 38.1 mm in diameter) and the hexagonal scanner (FOV 85.2 mm in diameter) simulated by GATE. We showed that the resolution non-uniformity of system model (b) was improved by 40% compared with that of system model (a) in the overlapped tetragonal scanner and that the resolution non-uniformity of system model (a) was improved by 18% compared with that of system model (b) in the hexagonal scanner. These results indicate that system model (b) should be applied to the overlapped tetragonal scanner and system model (a) should be applied to the hexagonal scanner. (author)

  9. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    Science.gov (United States)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  10. Comparison of Model Predictions of Image Quality with Results of Clinical Trials in Chest and Lumbar Spine Screen-film Imaging

    International Nuclear Information System (INIS)

    Sandborg, M.; McVey, G.; Dance, D.R.; Carlsson, G.A.

    2000-01-01

    The ability to predict image quality from known physical and technical parameters is a prerequisite for making successful dose optimisation. In this study, imaging systems have been simulated using a Monte Carlo model of the imaging systems. The model includes a voxelised human anatomy and quantifies image quality in terms of contrast and signal-to-noise ratio for 5-6 anatomical details included in the anatomy. The imaging systems used in clinical trials were simulated and the ranking of the systems by the model and radiologists compared. The model and the results of the trial for chest PA both show that using a high maximum optical density was significantly better than using a low one. The model predicts that a good system is characterised by a large dynamic range and a high contrast of the blood vessels in the retrocardiac area. The ranking by the radiologists and the model agreed for the lumbar spine AP. (author)

  11. Modelling the Image Research of a Tourism Destination

    Directory of Open Access Journals (Sweden)

    Nicolae Teodorescu

    2014-11-01

    Full Text Available The problematic area of the tourism destination image has expanded greatly in marketing, the efforts of specialists to conceptualise and operationalise it being remarkable. In this context, the authors propose a systemic approach, the result of which is a model for the image research of a tourism destination, validated using the Transalpina destination. The model created by the authors envisages morphological features and specific functional relationships, which are consistent with marketing theory and, in context, with consumer behaviour theory. The conceptual-methodological solutions are magnified by applicative-experimental validations, which enhance the theoretical and practical valences of the created model. The main direction for developing the elaborated model consists in efforts of formalization and abstraction, in the perspective offered by several scientific disciplines.

  12. A Space-Time Periodic Task Model for Recommendation of Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Xiuhong Zhang

    2018-01-01

    Full Text Available With the rapid development of remote sensing technology, the quantity and variety of remote sensing images are growing so quickly that proactive and personalized access to data has become an inevitable trend. One of the active approaches is remote sensing image recommendation, which can offer related image products to users according to their preference. Although multiple studies on remote sensing retrieval and recommendation have been performed, most of these studies model the user profiles only from the perspective of spatial area or image features. In this paper, we propose a spatiotemporal recommendation method for remote sensing data based on the probabilistic latent topic model, which is named the Space-Time Periodic Task model (STPT). User retrieval behaviors of remote sensing images are represented as mixtures of latent tasks, which act as links between users and images. Each task is associated with the joint probability distribution of space, time and image characteristics. Meanwhile, the von Mises distribution is introduced to fit the distribution of tasks over time. Then, we adopt Gibbs sampling to learn the random variables and parameters and present the inference algorithm for our model. Experiments show that the proposed STPT model can improve the capability and efficiency of remote sensing image data services.
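    The von Mises distribution used here to model task periodicity over time can be fitted in closed form from circular statistics. Below is a minimal numpy sketch; the acquisition days, the 365-day period, and the use of Mardia's approximation for the concentration (rather than the paper's Gibbs sampler) are all illustrative assumptions:

```python
import numpy as np

def fit_von_mises(angles):
    """Estimate von Mises location (mu) and concentration (kappa) from
    angles in radians, via the circular mean and Mardia's approximation
    for kappa from the mean resultant length R."""
    C, S = np.cos(angles).mean(), np.sin(angles).mean()
    mu = np.arctan2(S, C)
    R = np.hypot(C, S)
    # Piecewise approximation for kappa (Mardia & Jupp)
    if R < 0.53:
        kappa = 2 * R + R**3 + 5 * R**5 / 6
    elif R < 0.85:
        kappa = -0.4 + 1.39 * R + 0.43 / (1 - R)
    else:
        kappa = 1 / (R**3 - 4 * R**2 + 3 * R)
    return mu, kappa

# Hypothetical retrieval days of year, mapped onto the circle (period 365 days)
days = np.array([170, 175, 180, 182, 185, 190, 195])
angles = 2 * np.pi * days / 365.0
mu, kappa = fit_von_mises(angles)
peak_day = 365.0 * (mu % (2 * np.pi)) / (2 * np.pi)   # ~ mid-year peak
```

A tight cluster of days yields a large kappa (strong periodicity); uniformly spread days yield kappa near zero.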

  13. Efficient fully 3D list-mode TOF PET image reconstruction using a factorized system matrix with an image domain resolution model

    International Nuclear Information System (INIS)

    Zhou, Jian; Qi, Jinyi

    2014-01-01

    A factorized system matrix utilizing an image domain resolution model is attractive in fully 3D time-of-flight PET image reconstruction using list-mode data. In this paper, we study a factored model based on sparse matrix factorization that is comprised primarily of a simplified geometrical projection matrix and an image blurring matrix. Besides the commonly used Siddon's ray-tracer, we propose another, more simplified geometrical projector based on Bresenham's ray-tracer, which further reduces the computational cost. We discuss in general how to obtain an image blurring matrix associated with a geometrical projector, and provide theoretical analysis that can be used to inspect the efficiency in model factorization. In simulation studies, we investigate the performance of the proposed sparse factorization model in terms of spatial resolution, noise properties and computational cost. The quantitative results reveal that the factorization model can be as efficient as a non-factored model, while its computational cost can be much lower. In addition, we conduct Monte Carlo simulations to identify the conditions under which the image resolution model can become more efficient in terms of image contrast recovery. We verify our observations using the provided theoretical analysis. The result offers a general guide to achieve the optimal reconstruction performance based on a sparse factorization model with an image domain resolution model. (paper)
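    A toy numpy illustration of such a factored model: writing the system matrix as a geometrical projector G times an image-domain blurring matrix B (here a 1-D stand-in with a random sparse G and a Toeplitz blur, not a real Siddon or Bresenham projector), the factored forward projection G(Bx) matches the non-factored (GB)x without ever forming the dense product:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_lor = 64, 200   # toy 1-D image size and number of LORs

# G: simplified geometrical projector (sparse 0/1 stand-in for a ray-tracer)
G = (rng.random((n_lor, n_vox)) < 0.05).astype(float)

# B: image-domain resolution (blurring) matrix, a symmetric Toeplitz blur
B = 0.5 * np.eye(n_vox) + 0.25 * np.eye(n_vox, k=1) + 0.25 * np.eye(n_vox, k=-1)

x = rng.random(n_vox)     # activity image

y_full = (G @ B) @ x      # non-factored: precomputed full system matrix
y_fact = G @ (B @ x)      # factored: blur in image domain, then project
```

In a real list-mode reconstruction the saving comes from never storing GB: G stays sparse and B is applied as a small image-domain convolution.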

  14. Evaluation of a Mathematical Model for Digital Image Enhancement.

    Science.gov (United States)

    Geha, Hassem; Nasseh, Ibrahim; Noujeim, Marcel

    2015-01-01

    The purpose of this study is to compare the detected number of holes on a stepwedge on images resulting from the application of the 5th degree polynomial model compared to the images resulting from the application of linear enhancement. Material and Methods: A 10-step aluminum stepwedge with holes randomly drilled on each step was exposed with three different kVp and five exposure times per kVp on a Schick33® sensor. The images were enhanced by brightness/contrast adjustment, histogram equalization and with the 5th degree polynomial model and compared to the original non-enhanced images by six observers in two separate readings. Results: There was no significant difference between the readers and between the first and second reading. There was a significant three-factor interaction among Method, Exposure time, and kVp in detecting holes. The overall pattern was: "Poly" results in the highest counts, "Original" in the lowest counts, with "B/C" and "Equalized" intermediate. Conclusion: The 5th degree polynomial model showed more holes when compared to the other modalities.
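    The enhancement is a gray-level remapping through a 5th-degree polynomial. The study's coefficients are not reproduced here, so the sketch below uses an invented smoothstep-like curve purely to illustrate the mechanics:

```python
import numpy as np

def polynomial_enhance(img, coeffs):
    """Map normalized gray levels through a polynomial curve.
    `coeffs` are given highest power first, as np.polyval expects;
    the values used below are illustrative, not the study's."""
    x = img.astype(float) / 255.0
    y = np.polyval(coeffs, x)
    return np.clip(255.0 * y, 0, 255).astype(np.uint8)

# Hypothetical 5th-degree curve boosting mid-range contrast:
# 6x^5 - 15x^4 + 10x^3 ("smootherstep", monotone on [0, 1])
coeffs = [6.0, -15.0, 10.0, 0.0, 0.0, 0.0]
img = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)   # gray ramp
out = polynomial_enhance(img, coeffs)
```

The curve preserves black and white points while steepening the mid-tones, which is where subtle low-contrast details (such as the stepwedge holes) gain visibility.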

  15. Seismic Full Waveform Modeling & Imaging in Attenuating Media

    Science.gov (United States)

    Guo, Peng

    Seismic attenuation strongly affects seismic waveforms by amplitude loss and velocity dispersion. Without proper inclusion of Q parameters, errors can be introduced into seismic full waveform modeling and imaging. Three different (Carcione's, Robertsson's, and the generalized Robertsson's) isotropic viscoelastic wave equations based on the generalized standard linear solid (GSLS) are evaluated. The second-order displacement equations are derived, and used to demonstrate that, with the same stress relaxation times, these viscoelastic formulations are equivalent. By introducing separate memory variables for P and S relaxation functions, Robertsson's formulation is generalized to allow different P and S wave stress relaxation times, which improves the physical consistency of the Qp and Qs modelled in the seismograms. The three formulations have comparable computational cost. 3D seismic finite-difference forward modeling is applied to anisotropic viscoelastic media. The viscoelastic T-matrix (a dynamic effective medium theory) relates frequency-dependent anisotropic attenuation and velocity to reservoir properties in fractured HTI media, based on the meso-scale fluid flow attenuation mechanism. The seismic signatures resulting from changing viscoelastic reservoir properties are easily visible. Analysis of 3D viscoelastic seismograms suggests that anisotropic attenuation is a potential tool for reservoir characterization. To compensate for the Q effects during reverse-time migration (RTM) in viscoacoustic and viscoelastic media, amplitudes need to be compensated during wave propagation; the propagation velocity of the Q-compensated wavefield needs to be the same as in the attenuating wavefield, to restore the phase information. Both amplitude and phase can be compensated when the velocity dispersion and the amplitude loss are decoupled. 
For wave equations based on the GSLS, because Q effects are coupled in the memory variables, Q-compensated wavefield propagates faster than

  16. Image decomposition as a tool for validating stress analysis models

    Directory of Open Access Journals (Sweden)

    Mottershead J.

    2010-06-01

    Full Text Available It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components either in-service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validations continue to be performed only at predicted and/or observed hot-spots and most of the wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field distributions for strain generated from optical techniques such as digital image correlation and thermoelastic stress analysis as well as from analytical and numerical models by treating the strain distributions as images. The result of the decomposition is 10^1 to 10^2 image descriptors instead of the 10^5 or 10^6 pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
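    As a minimal sketch of the idea, using low-order 2-D Fourier coefficients as the image descriptors (the fields, sizes, and noise level below are invented), a full-field strain map of ~10^5 pixels is reduced to a few dozen descriptors that can be compared statistically:

```python
import numpy as np

def fourier_descriptors(field, k=5):
    """Keep the k x k lowest-frequency 2-D DFT magnitudes as a compact
    feature vector describing a full-field strain map."""
    F = np.fft.fft2(field)
    return np.abs(F[:k, :k]).ravel()

# Synthetic "measured" and "modelled" strain fields (320 x 320 pixels)
y, x = np.mgrid[0:320, 0:320] / 320.0
measured = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
modelled = measured + 0.01 * np.random.default_rng(1).normal(size=measured.shape)

d_meas = fourier_descriptors(measured)   # 25 descriptors vs 102400 pixels
d_mod = fourier_descriptors(modelled)
rel_err = np.linalg.norm(d_mod - d_meas) / np.linalg.norm(d_meas)
```

A small `rel_err` indicates the model reproduces all dominant spatial features of the measurement, not just the hot-spot value; Zernike moments would play the same role on circular domains.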

  17. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation model.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Kleper, Vladimir; Talbot, Skip; Rubin, Daniel

    2014-12-01

    Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institute of Health's (NIH) National Cancer Institute's (NCI) Cancer Bioinformatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.

  18. A three-dimensional imaging algorithm based on the radiation model of electric dipole

    International Nuclear Information System (INIS)

    Tian Bo; Zhong Weijun; Tong Chuangming

    2011-01-01

    A three-dimensional imaging algorithm based on the radiation model of the dipole (DBP) is presented. Building on the principle of the back projection (BP) algorithm, the relationship between the near-field and far-field imaging models is analyzed on the basis of the scattering model. Firstly, the far-field sampling data are transferred to near-field sampling data by applying the radiation theory of the dipole. Then the processed sampling data are projected onto the imaging region to obtain the images of the targets. The capability of the new algorithm to detect targets is verified using the finite-difference time-domain (FDTD) method, and the coupling effect on imaging is analyzed. (authors)

  19. Segmentation of laser range radar images using hidden Markov field models

    International Nuclear Information System (INIS)

    Pucar, P.

    1993-01-01

    Segmentation of images in the context of model-based stochastic techniques is associated with high, very often impractical, computational complexity. The objective of this thesis is to take the models used in model-based image processing, simplify them, and use them in suboptimal, but computationally undemanding, algorithms. Algorithms that are essentially one-dimensional, and their extensions to two dimensions, are given. The model used in this thesis is the well-known hidden Markov model. Estimation of the number of hidden states from observed data is a problem that is also addressed. The state order estimation problem is of general interest and is not specifically connected to image processing. An investigation of three state order estimation techniques for hidden Markov models is given. 76 refs
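    A minimal example of the kind of essentially one-dimensional algorithm meant here: Viterbi decoding of a two-state Gaussian hidden Markov model along one scan line (the observations, means, variance, and sticky transition matrix are invented for illustration):

```python
import numpy as np

def viterbi(obs, means, var, log_trans, log_prior):
    """Most likely hidden state sequence for a 1-D signal with
    Gaussian emissions (log-domain Viterbi recursion)."""
    K, T = len(means), len(obs)
    log_em = -0.5 * (obs[None, :] - np.asarray(means)[:, None])**2 / var
    delta = log_prior + log_em[:, 0]
    psi = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans      # scores[i, j]: state i -> j
        psi[t] = scores.argmax(axis=0)           # best predecessor of each j
        delta = scores.max(axis=0) + log_em[:, t]
    states = np.zeros(T, dtype=int)
    states[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):               # backtrack
        states[t] = psi[t + 1, states[t + 1]]
    return states

# Two hidden range levels (e.g. "ground" vs "object"), sticky transitions
obs = np.array([0.1, -0.2, 0.0, 2.1, 1.9, 2.2, 0.1, -0.1])
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
states = viterbi(obs, means=[0.0, 2.0], var=0.25,
                 log_trans=log_trans, log_prior=np.log([0.5, 0.5]))
# states -> [0 0 0 1 1 1 0 0]
```

Applied row by row (and column by column), such 1-D sweeps give the suboptimal but cheap 2-D extensions the thesis describes.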

  20. Superresolving Black Hole Images with Full-Closure Sparse Modeling

    Science.gov (United States)

    Crowley, Chelsea; Akiyama, Kazunori; Fish, Vincent

    2018-01-01

    It is believed that almost all galaxies have black holes at their centers. Imaging a black hole is a primary objective to answer scientific questions relating to relativistic accretion and jet formation. The Event Horizon Telescope (EHT) is set to capture images of two nearby black holes: Sagittarius A* at the center of the Milky Way galaxy, roughly 26,000 light years away, and the black hole at the center of M87 (Virgo A), a large elliptical galaxy that is 50 million light years away. Sparse imaging techniques have shown great promise for reconstructing high-fidelity superresolved images of black holes from simulated data. Previous work has included the effects of atmospheric phase errors and thermal noise, but not systematic amplitude errors that arise due to miscalibration. We explore a full-closure imaging technique with sparse modeling that uses closure amplitudes and closure phases to improve the imaging process. This new technique can successfully handle data with systematic amplitude errors. Applying our technique to synthetic EHT data of M87, we find that full-closure sparse modeling can reconstruct images better than traditional methods and recover key structural information on the source, such as the shape and size of the predicted photon ring. These results suggest that our new approach will provide superior imaging performance for data from the EHT and other interferometric arrays.
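    The key property of closure phases, one of the quantities the full-closure technique fits, is that station-based gain errors cancel around a baseline triangle. A small numpy check of this invariance (random visibilities and gains, not EHT data):

```python
import numpy as np

rng = np.random.default_rng(42)

def closure_phase(v12, v23, v31):
    """Closure phase: argument of the bispectrum around a station triangle."""
    return np.angle(v12 * v23 * v31)

# True visibilities on the three baselines of a triangle (stations 1, 2, 3)
V12, V23, V31 = rng.normal(size=3) + 1j * rng.normal(size=3)

# Corrupt each baseline with unknown complex station gains g_i:
# V_ij_obs = g_i * conj(g_j) * V_ij
g = np.exp(rng.normal(size=3) + 1j * rng.uniform(0, 2 * np.pi, 3))
V12c = g[0] * np.conj(g[1]) * V12
V23c = g[1] * np.conj(g[2]) * V23
V31c = g[2] * np.conj(g[0]) * V31

# Around the triangle, the gain factors multiply to |g1 g2 g3|^2
# (real and positive), so the closure phase is unchanged.
```

Closure amplitudes behave analogously on station quadrangles, which is why fitting these quantities is robust to the systematic amplitude miscalibration discussed above.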

  1. Imaging of structures in the high-latitude ionosphere: model comparisons

    Directory of Open Access Journals (Sweden)

    D. W. Idenden

    Full Text Available The tomographic reconstruction technique generates a two-dimensional latitude versus height electron density distribution from sets of slant total electron content (TEC) measurements along ray paths between beacon satellites and ground-based radio receivers. In this note, the technique is applied to TEC values obtained from data simulated by the Sheffield/UCL/SEL Coupled Thermosphere/Ionosphere/Model (CTIM. A comparison of the resulting reconstructed image with the 'input' modelled data allows for verification of the reconstruction technique. All the features of the high-latitude ionosphere in the model data are reproduced well in the tomographic image. Reconstructed vertical TEC values follow closely the modelled values, with the F-layer maximum density (NmF2 agreeing generally within about 10%. The method has also successfully reproduced underlying auroral-E ionisation over a restricted latitudinal range in part of the image. The height of the F2 peak is generally in agreement to within about the vertical image resolution (25 km.

    Key words. Ionosphere (modelling and forecasting; polar ionosphere · Radio Science (instruments and techniques
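    Tomographic reconstruction from slant-TEC line integrals is classically done with row-action methods such as ART/Kaczmarz; below is a minimal sketch on an invented toy geometry (4 density pixels, 6 rays — not the CTIM grid or a real receiver chain):

```python
import numpy as np

def kaczmarz(A, b, n_iter=2000):
    """ART/Kaczmarz iteration: cyclically project the estimate onto the
    hyperplane of each ray equation a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(3)
A = rng.random((6, 4))            # path length of each ray in each pixel
x_true = np.array([1.0, 3.0, 2.0, 0.5])   # toy electron densities
b = A @ x_true                    # slant TEC = line integrals of density

x_rec = kaczmarz(A, b)            # reconstructed densities
```

With consistent, noise-free data the iteration converges to the model densities; real reconstructions add regularisation and an initial ionospheric profile.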

  2. [Research progress of multi-model medical image fusion and recognition].

    Science.gov (United States)

    Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian

    2013-10-01

    Medical image fusion and recognition has a wide range of applications, such as focal location, cancer staging and treatment effect assessment. Multi-model medical image fusion and recognition are analyzed and summarized in this paper. Firstly, the problem of multi-model medical image fusion and recognition is introduced, and its advantages and key steps are discussed. Secondly, three fusion strategies are reviewed from the algorithmic point of view, and four fusion recognition structures are discussed. Thirdly, difficulties, challenges and possible future research directions are discussed.

  3. [Evaluating the maturity of IT-supported clinical imaging and diagnosis using the Digital Imaging Adoption Model : Are your clinical imaging processes ready for the digital era?

    Science.gov (United States)

    Studzinski, J

    2017-06-01

    The Digital Imaging Adoption Model (DIAM) has been jointly developed by HIMSS Analytics and the European Society of Radiology (ESR). It helps evaluate the maturity of IT-supported processes in medical imaging, particularly in radiology. This eight-stage maturity model drives your organisational, strategic and tactical alignment towards imaging-IT planning. The key audience for the model comprises hospitals with imaging centers, as well as external imaging centers that collaborate with hospitals. The assessment focuses on different dimensions relevant to digital imaging, such as software infrastructure and usage, workflow security, clinical documentation and decision support, data exchange and analytical capabilities. With its standardised approach, it enables regional, national and international benchmarking. All DIAM participants receive a structured report that can be used as a basis for presenting, e.g. budget planning and investment decisions at management level.

  4. Beam-hardening correction in CT based on basis image and TV model

    International Nuclear Information System (INIS)

    Li Qingliang; Yan Bin; Li Lei; Sun Hongsheng; Zhang Feng

    2012-01-01

    In X-ray computed tomography, beam hardening leads to artifacts and reduces image quality. The influence of beam hardening on the original projections is analyzed, and accordingly a new beam-hardening correction method based on basis images and a TV model is put forward. Firstly, according to the physical characteristics of beam hardening, a preliminary correction model with adjustable parameters is set up. Secondly, using different parameters, the original projections are operated on by the correction model. Thirdly, the projections are reconstructed to obtain a series of basis images. Finally, a linear combination of the basis images is the final reconstructed image. Here, with the total variation of the final reconstructed image as the cost function, the linear combination coefficients for the basis images are determined by an iterative method. To verify the effectiveness of the proposed method, experiments were carried out on a real phantom and an industrial part. The results show that the algorithm significantly inhibits cupping and streak artifacts in CT images. (authors)
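    A toy numpy version of the final step — choosing the combination coefficients of the basis images by minimizing total variation. The phantom, the artifact shape, and the grid search below are invented stand-ins for the paper's basis reconstructions and iterative method:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation of a 2-D image."""
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

# Hypothetical phantom: a uniform disk, plus a cupping-like artifact that
# two basis reconstructions carry with opposite sign (under-/over-corrected)
y, x = np.mgrid[-1:1:64j, -1:1:64j]
r2 = x**2 + y**2
disk = (r2 < 0.8).astype(float)
cup = 0.3 * (0.8 - r2) * disk     # depression peaking at the center
basis_under = disk - cup
basis_over = disk + cup

# Final image: linear combination of the basis images; pick the weight
# whose combination has minimum total variation
weights = np.linspace(0.0, 1.0, 101)
tvs = [total_variation(w * basis_under + (1 - w) * basis_over)
       for w in weights]
best_w = float(weights[int(np.argmin(tvs))])   # ~0.5 recovers the flat disk
```

The artifact raises TV wherever it survives in the combination, so the TV-minimizing weights cancel it; the paper determines the coefficients iteratively rather than by grid search.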

  5. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    International Nuclear Information System (INIS)

    Chen, G; Pan, X; Stayman, J; Samei, E

    2014-01-01

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  6. Point spread function modeling and image restoration for cone-beam CT

    International Nuclear Information System (INIS)

    Zhang Hua; Shi Yikai; Huang Kuidong; Xu Zhe

    2015-01-01

    X-ray cone-beam computed tomography (CT) has such notable features as high efficiency and precision, and is widely used in medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. To address the problems of projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is first proposed. The general PSF model of cone-beam CT is established, and based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without additional measurements, which greatly improves the convenience of applying cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which makes the edge contours in projection images and slice images clearer after restoration, while keeping the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods. (authors)
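    Once the PSF for the given scanning conditions is available, projection images can be restored by standard frequency-domain deconvolution. A minimal Wiener-filter sketch with an assumed Gaussian PSF and a noiseless toy projection (the paper's pre-filtering/pre-segmentation pipeline is not reproduced):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain Wiener restoration with a known PSF.
    k is an assumed constant noise-to-signal power ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    W = np.conj(H) / (np.abs(H)**2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Gaussian PSF as a stand-in for the modelled cone-beam detector blur
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

truth = np.zeros((n, n)); truth[24:40, 24:40] = 1.0   # sharp-edged object
blurred = np.real(np.fft.ifft2(
    np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)

err_blur = np.abs(blurred - truth).mean()
err_rest = np.abs(restored - truth).mean()   # edges recovered
```

The regularisation constant k trades edge sharpness against noise amplification, which is where the paper's pre-filtering step would come in.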

  7. Software to model AXAF-I image quality

    Science.gov (United States)

    Ahmad, Anees; Feng, Chen

    1995-01-01

    A modular user-friendly computer program for the modeling of grazing-incidence type x-ray optical systems has been developed. This comprehensive computer software GRAZTRACE covers the manipulation of input data, ray tracing with reflectivity and surface deformation effects, convolution with x-ray source shape, and x-ray scattering. The program also includes the capabilities for image analysis, detector scan modeling, and graphical presentation of the results. A number of utilities have been developed to interface the predicted Advanced X-ray Astrophysics Facility-Imaging (AXAF-I) mirror structural and thermal distortions with the ray-trace. This software is written in FORTRAN 77 and runs on a SUN/SPARC station. An interactive command mode version and a batch mode version of the software have been developed.

  8. Muscles of mastication model-based MR image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Ong, S.H. [National Univ. of Singapore (Singapore). Dept. of Electrical and Computer Engineering; National Univ. of Singapore (Singapore). Div. of Bioengineering; Hu, Q.; Nowinski, W.L. [Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); National Univ. of Singapore (Singapore). Dept. of Preventive Dentistry; Goh, P.S. [National Univ. of Singapore (Singapore). Dept. of Diagnostic Radiology

    2006-11-15

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)

  9. Muscles of mastication model-based MR image segmentation

    International Nuclear Information System (INIS)

    Ng, H.P.; Agency for Science Technology and Research, Singapore; Ong, S.H.; National Univ. of Singapore; Hu, Q.; Nowinski, W.L.; Foong, K.W.C.; National Univ. of Singapore; Goh, P.S.

    2006-01-01

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)

  10. Modelling land degradation in IMAGE 2

    NARCIS (Netherlands)

    Hootsmans RM; Bouwman AF; Leemans R; Kreileman GJJ; MNV

    2001-01-01

    Food security may be threatened by loss of soil productivity as a result of human-induced land degradation. Water erosion is the most important cause of land degradation, and its effects are irreversible. This report describes the IMAGE land degradation model developed for describing current and

  11. Range and Image Based Modelling: a way for Frescoed Vault Texturing Optimization

    Science.gov (United States)

    Caroti, G.; Martínez-Espejo Zaragoza, I.; Piemonte, A.

    2015-02-01

    In the restoration of frescoed vaults it is not only important to know the geometric shape of the painted surface, but it is essential to document its chromatic characterization and conservation status. The new techniques of range-based and image-based modelling, each with its limitations and advantages, offer a wide range of methods to obtain the geometric shape. In fact, several studies widely document that laser scanning enables obtaining three-dimensional models with high morphological precision. However, the quality level of the colour obtained with built-in laser scanner cameras is not comparable to that obtained for the shape. It is possible to improve the texture quality by means of a dedicated photographic campaign. This procedure, however, requires calculating the external orientation of each image by identifying control points on it and on the model, through a costly post-processing step. With image-based modelling techniques it is possible to obtain models that maintain the colour quality of the original images, but with variable geometric precision, locally lower than that of the laser scanning model. This paper presents a methodology that uses the camera external orientation parameters calculated by image-based modelling techniques to project the same image on the model obtained from the laser scan. This methodology is tested on an Italian frescoed mirror vault (volta a schifo). In the paper, the different models, the precision analysis and the efficiency evaluation of the proposed methodology are presented.
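    The core operation of such texturing — reprojecting a photographic image onto the laser-scan model using the external orientation recovered by image-based modelling — reduces to pinhole projection of model points. A minimal sketch with an invented calibration (focal length, principal point, and pose are hypothetical):

```python
import numpy as np

def project(points, K, R, t):
    """Project 3-D model points into an image with intrinsics K and
    external orientation (R, t). Returns pixel coordinates."""
    cam = (R @ points.T).T + t          # world -> camera frame
    uvw = (K @ cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]     # perspective division

# Hypothetical calibration: 1000 px focal length, principal point (512, 384)
K = np.array([[1000.0,    0.0, 512.0],
              [   0.0, 1000.0, 384.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                            # camera aligned with world axes
t = np.array([0.0, 0.0, 2.0])            # vault surface 2 m in front

pt = np.array([[0.1, -0.05, 0.0]])       # a point on the scanned mesh
uv = project(pt, K, R, t)
# uv -> [[562., 359.]]
```

Each mesh vertex projected this way is assigned the colour of the underlying pixel, which is how the high-quality photographic texture is transferred onto the laser-scan geometry.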

  12. Imaging noradrenergic influence on amyloid pathology in mouse models of Alzheimer's disease

    International Nuclear Information System (INIS)

    Winkeler, A.; Waerzeggers, Y.; Klose, A.; Monfared, P.; Thomas, A.V.; Jacobs, A.H.; Schubert, M.; Heneka, M.T.

    2008-01-01

    Molecular imaging aims towards the non-invasive characterization of disease-specific molecular alterations in the living organism in vivo. In this way, molecular imaging opens a new dimension in our understanding of disease pathogenesis, as it allows the non-invasive determination of the dynamics of changes on the molecular level. The imaging technology being employed includes magnetic resonance imaging (MRI) and nuclear imaging as well as optical-based imaging technologies. These imaging modalities are employed together or alone for disease phenotyping, development of imaging-guided therapeutic strategies and in basic and translational research. In this study, we review recent investigations employing positron emission tomography and MRI for phenotyping mouse models of Alzheimer's disease by imaging. We demonstrate that imaging has an important role in the characterization of mouse models of neurodegenerative diseases. (orig.)

  13. FITTING OF PARAMETRIC BUILDING MODELS TO OBLIQUE AERIAL IMAGES

    Directory of Open Access Journals (Sweden)

    U. S. Panday

    2012-09-01

    Full Text Available In the literature and in photogrammetric workstations, many approaches and systems for automatically reconstructing buildings from remote sensing data are described and available. These building models are used, for instance, in city modeling or in a cadastre context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which not only has a negative influence on visual impression but, more seriously, also represents a wrong legal boundary in the cadastre. Oblique aerial images, as opposed to nadir-view images, reveal greater detail and show different views of an object taken from different directions. Building walls are directly visible in oblique images, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find roof parameters of simple buildings. It uses least squares to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected from the intersection of the viewing ray with the planes formed by the building, whereas occlusion by other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes, respectively. Experimental results are verified against high-resolution ortho-images, field survey, and ALS data. A planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while building orientation was accurate to a mean of 0.23° and standard deviation of 0.96° with respect to the ortho-image. Overhang parameters agreed to approximately 10 cm with the field survey. The ground and roof heights were accurate to means of -9 cm and 8 cm, with standard deviations of 16 cm and 8 cm, with respect to the ALS data. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for

  14. Discrete imaging models for three-dimensional optoacoustic tomography using radially symmetric expansion functions.

    Science.gov (United States)

    Wang, Kun; Schoonover, Robert W; Su, Richard; Oraevsky, Alexander; Anastasio, Mark A

    2014-05-01

    Optoacoustic tomography (OAT), also known as photoacoustic tomography, is an emerging computed biomedical imaging modality that exploits optical contrast and ultrasonic detection principles. Iterative image reconstruction algorithms that are based on discrete imaging models are actively being developed for OAT due to their ability to improve image quality by incorporating accurate models of the imaging physics, instrument response, and measurement noise. In this work, we investigate the use of discrete imaging models based on Kaiser-Bessel window functions for iterative image reconstruction in OAT. A closed-form expression for the pressure produced by a Kaiser-Bessel function is calculated, which facilitates accurate computation of the system matrix. Computer-simulation and experimental studies are employed to demonstrate the potential advantages of Kaiser-Bessel function-based iterative image reconstruction in OAT.
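The radially symmetric Kaiser-Bessel expansion function at the heart of this record can be sketched as follows. This is the order-zero form with illustrative support radius and shape parameters; the paper's closed-form pressure expression and actual parameter choices are not reproduced here.

```python
import numpy as np

def kaiser_bessel(r, a=1.0, gamma=10.0):
    """Order-zero radially symmetric Kaiser-Bessel window of support
    radius a; gamma controls the taper. np.i0 is the modified Bessel
    function of the first kind, order zero."""
    r = np.asarray(r, dtype=float)
    z = np.sqrt(np.clip(1.0 - (r / a) ** 2, 0.0, None))
    # Smooth bump on |r| <= a, normalised to 1 at the centre, 0 outside.
    return np.where(np.abs(r) <= a, np.i0(gamma * z) / np.i0(gamma), 0.0)

print(kaiser_bessel(0.0))  # → 1.0 (peak at the centre)
print(kaiser_bessel(2.0))  # → 0.0 (outside the support)
```

Because the window is smooth and compactly supported, each system-matrix entry can be computed accurately from such an expansion function, which is the property the reconstruction algorithm exploits.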

  15. Image Analysis of a Negatively Curved Graphitic Sheet Model for Amorphous Carbon

    Science.gov (United States)

    Bursill, L. A.; Bourgeois, Laure N.

    High-resolution electron micrographs are presented which show essentially curved single sheets of graphitic carbon. Image calculations are then presented for the random surface schwarzite-related model of Townsend et al. (Phys. Rev. Lett. 69, 921-924, 1992). Comparison with experimental images does not rule out the contention that such models, containing surfaces of negative curvature, may be useful for predicting some physical properties of specific forms of nanoporous carbon. Some difficulties of the model predictions, when compared with the experimental images, are pointed out. The range of application of this model, as well as competing models, is discussed briefly.

  16. The Research of Optical Turbulence Model in Underwater Imaging System

    Directory of Open Access Journals (Sweden)

    Liying Sun

    2014-01-01

    Full Text Available In order to study the effect of turbulence on underwater imaging systems and on image restoration, an underwater turbulence model is simulated using computational fluid dynamics. The model is computed at different underwater turbulence intensities and contains the pressure data that determine the refractive index distribution. When the pressure values are converted to refractive index via the refraction formula, the refractive index distribution is obtained. At a given turbulence intensity, the refractive index distribution exhibits a gradient over the whole region, with disorder and abrupt changes in local regions. As the turbulence intensity increases, the overall variation of the refractive index across the image grows larger, and the refractive index changes more sharply in local regions. These observations are illustrated by simulation results obtained with the ray tracing method and the turbulent refractive index model. Analysis at different turbulence intensities shows that turbulence distorts the image and increases noise.

  17. Waif goodbye! Average-size female models promote positive body image and appeal to consumers.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2011-10-01

    Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery, by exploring the advertising effectiveness of average-size female fashion models, and their impact on the body image of both women and men. A sample of 171 women and 120 men were assigned to one of three advertisement conditions: no models, thin models and average-size models. Women and men rated average-size models as equally effective in advertisements as thin and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.

  18. Evaluation of HVS models in the application of medical image quality assessment

    Science.gov (United States)

    Zhang, L.; Cavaro-Menard, C.; Le Callet, P.

    2012-03-01

    In this study, four of the most widely used Human Visual System (HVS) models are applied to Magnetic Resonance (MR) images for a signal detection task. Their performances are evaluated against a gold standard derived from the radiologists' majority decision. Task-based image quality assessment requires taking the specificities of human perception into account, for which various HVS models have been proposed. However, to our knowledge, no work has evaluated and compared the suitability of these models for the assessment of medical image quality. This pioneering study investigates the performance of different HVS models on medical images in terms of their approximation to radiologist performance. We propose to score the performance of each HVS model using the AUC (Area Under the receiver operating characteristic Curve) and its variance estimate as the figure of merit. The radiologists' majority decision is used as the gold standard, so that the estimated AUC measures the distance between the HVS model and radiologist perception. To calculate the variance estimate of the AUC, we adopted the one-shot method, which is independent of the HVS model's output range. The results of this study will help justify the choice of HVS model for our future medical image quality assessment metric.
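The AUC figure of merit used in this record equals the Mann-Whitney U statistic: the probability that a model scores a randomly chosen signal-present image higher than a randomly chosen signal-absent one. A minimal sketch (the toy scores are illustrative):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: probability that a random
    positive outscores a random negative, counting ties as 1/2."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Model scores on signal-present vs signal-absent images (toy numbers).
print(auc([0.9, 0.8, 0.7], [0.6, 0.8, 0.2]))  # → 0.8333...
```

An AUC of 0.5 means chance-level detection and 1.0 means perfect separation, so the gap between a model's AUC and the radiologists' gold-standard AUC quantifies how well that HVS model approximates human performance.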

  19. Multi-object segmentation framework using deformable models for medical imaging analysis.

    Science.gov (United States)

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed over the past three decades for the extraction of anatomical or functional structures in medical imaging. Deformable models, which include active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, challenging research directions remain, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract only one region at a time, DMA allows several deformable models to be integrated to deal with multiple segmentation scenarios. Moreover, it is possible to use any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing a suitable combination to be selected for different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to resolve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed

  20. Learning a generative model of images by factoring appearance and shape.

    Science.gov (United States)

    Le Roux, Nicolas; Heess, Nicolas; Shotton, Jamie; Winn, John

    2011-03-01

    Computer vision has grown tremendously in the past two decades. Despite all efforts, existing attempts at matching parts of the human visual system's extraordinary ability to understand visual scenes lack either scope or power. By combining the advantages of general low-level generative models and powerful layer-based and hierarchical models, this work aims at being a first step toward richer, more flexible models of images. After comparing various types of restricted Boltzmann machines (RBMs) able to model continuous-valued data, we introduce our basic model, the masked RBM, which explicitly models occlusion boundaries in image patches by factoring the appearance of any patch region from its shape. We then propose a generative model of larger images using a field of such RBMs. Finally, we discuss how masked RBMs could be stacked to form a deep model able to generate more complicated structures and suitable for various tasks such as segmentation or object recognition.

  1. Continuous monitoring of arthritis in animal models using optical imaging modalities

    Science.gov (United States)

    Son, Taeyoon; Yoon, Hyung-Ju; Lee, Saseong; Jang, Won Seuk; Jung, Byungjo; Kim, Wan-Uk

    2014-10-01

    Given the several difficulties associated with histology, including the difficulty of continuous monitoring, this study aimed to investigate the feasibility of optical imaging modalities, namely cross-polarization color (CPC) imaging, erythema index (EI) imaging, and laser speckle contrast (LSC) imaging, for continuous evaluation and monitoring of arthritis in animal models. C57BL/6 mice, used for the evaluation of arthritis, were divided into three groups: an arthritic mice group (AMG), a positive control mice group (PCMG), and a negative control mice group (NCMG). Complete Freund's adjuvant, mineral oil, and saline were injected into the footpad for the AMG, PCMG, and NCMG, respectively. LSC and CPC images were acquired from 0 through 144 h after injection for all groups. EI images were calculated from the CPC images. Variations in foot area, EI, and speckle index for each group over time were calculated for quantitative evaluation of arthritis. Histological examinations were performed, and the results were found to be consistent with those from the optical imaging analysis. Thus, optical imaging modalities may be successfully applied for continuous evaluation and monitoring of arthritis in animal models.

  2. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Cai, W; Rottmann, J; Williams, C; Wagar, M; Berbeco, R; Lewis, J H; Mishra, P; Li, R; Ionascu, D

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed an initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of the 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shifts and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. (paper)

  3. Discrete gradient methods for solving variational image regularisation models

    International Nuclear Information System (INIS)

    Grimm, V; McLachlan, Robert I; McLaren, David I; Quispel, G R W; Schönlieb, C-B

    2017-01-01

    Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated to the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by exhibiting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting. (paper)

  4. Computationally-optimized bone mechanical modeling from high-resolution structural images.

    Directory of Open Access Journals (Sweden)

    Jeremy F Magland

    Full Text Available Image-based mechanical modeling of the complex micro-structure of human bone has shown promise as a non-invasive method for characterizing bone strength and fracture risk in vivo. In particular, elastic moduli obtained from image-derived micro-finite element (μFE) simulations have been shown to correlate well with results obtained by mechanical testing of cadaveric bone. However, most existing large-scale finite-element simulation programs require significant computing resources, which hamper their use in common laboratory and clinical environments. In this work, we theoretically derive and computationally evaluate the resources needed to perform such simulations (in terms of computer memory and computation time), which depend on the number of finite elements in the image-derived bone model. A detailed description of our approach is provided, which is specifically optimized for μFE modeling of the complex three-dimensional architecture of trabecular bone. Our implementation includes domain decomposition for parallel computing, a novel stopping criterion, and a system for speeding up convergence by pre-iterating on coarser grids. The performance of the system is demonstrated on a machine with dual quad-core 3.16 GHz Xeon CPUs and 40 GB of RAM. Models of the distal tibia derived from 3D in-vivo MR images of a patient, comprising 200,000 elements, required less than 30 seconds (and 40 MB of RAM) to converge. To illustrate the system's potential for large-scale μFE simulations, axial stiffness was estimated from high-resolution micro-CT images of the human proximal femur, a voxel array of 90 million elements, in seven hours of CPU time. In conclusion, the system described should enable image-based finite-element bone simulations in practical computation times on high-end desktop computers, with applications to laboratory studies and clinical imaging.

  5. Evaluation of multimodality imaging using image fusion with ultrasound tissue elasticity imaging in an experimental animal model.

    Science.gov (United States)

    Paprottka, P M; Zengel, P; Cyran, C C; Ingrisch, M; Nikolaou, K; Reiser, M F; Clevert, D A

    2014-01-01

    To evaluate ultrasound tissue elasticity imaging by comparison with multimodality imaging using image fusion of Magnetic Resonance Imaging (MRI) and conventional grey-scale imaging with additional elasticity ultrasound, in an experimental small-animal squamous-cell carcinoma model, for the assessment of tissue morphology. Human hypopharynx carcinoma cells were subcutaneously injected into the left flank of 12 female athymic nude rats. After 10 days (SD ± 2) of subcutaneous tumor growth, sonographic grey-scale and elasticity imaging and MRI measurements were performed using a high-end ultrasound system and a 3T MR scanner. For image fusion, the contrast-enhanced MRI DICOM data set was uploaded to the ultrasound device (GE LOGIQ E9), which has a magnetic field generator, a linear array transducer (6-15 MHz), and a dedicated software package that can track transducers by means of a positioning system. Conventional grey-scale and elasticity imaging were integrated into the image fusion examination. After successful registration and image fusion, the registered MR images were shown simultaneously with the corresponding ultrasound sectional plane. Data evaluation was performed on the digitally stored video sequence data sets by two experienced radiologists using a modified Tsukuba elasticity score. The colors red and green are assigned to areas of soft tissue; blue indicates hard tissue. In all cases, successful image fusion and plane registration with MRI and ultrasound imaging, including grey-scale and elasticity imaging, was possible. The mean tumor volume based on caliper measurements in 3 dimensions was ~323 mm3. 4/12 rats were evaluated with Score I, 5/12 rats with Score II, and 3/12 rats with Score III. There was a close correlation in the fused MRI with existing small necroses in the tumor. None of the Score II or III lesions was visible on conventional grey-scale imaging. The comparison of ultrasound tissue elasticity imaging enables a

  6. Ultrasonic modelling and imaging in dissimilar welds

    International Nuclear Information System (INIS)

    Shlivinski, A.; Langenberg, K.J.; Marklein, R.

    2004-01-01

    Non-destructive testing for defects in dissimilar pipe welds in nuclear power plants plays an important part in safety inspections. Traditionally, the imaging of such defects is performed using the synthetic aperture focusing technique (SAFT) algorithm; however, since parts of the dissimilar welded structure are made of anisotropic material, this algorithm may fail to produce correct results. Here we present a modified algorithm, called inhomogeneous anisotropic SAFT (InASAFT), that enables correct imaging of cracks in anisotropic and inhomogeneous complex structures by accounting for the true nature of wave propagation in such structures. The InASAFT algorithm is shown to yield better results than the SAFT algorithm in complex environments. InASAFT suffers, though, from the same difficulties as the SAFT algorithm, i.e. "ghost" images and a lack of clearly focused images. These artefacts can, however, be identified through numerical modelling of the wave propagation in the structure. (orig.)

  7. Ultrasonic modelling and imaging in dissimilar welds

    Energy Technology Data Exchange (ETDEWEB)

    Shlivinski, A.; Langenberg, K.J.; Marklein, R. [Dept. of Electrical Engineering, Univ. of Kassel, Kassel (Germany)

    2004-07-01

    Non-destructive testing for defects in dissimilar pipe welds in nuclear power plants plays an important part in safety inspections. Traditionally, the imaging of such defects is performed using the synthetic aperture focusing technique (SAFT) algorithm; however, since parts of the dissimilar welded structure are made of anisotropic material, this algorithm may fail to produce correct results. Here we present a modified algorithm, called inhomogeneous anisotropic SAFT (InASAFT), that enables correct imaging of cracks in anisotropic and inhomogeneous complex structures by accounting for the true nature of wave propagation in such structures. The InASAFT algorithm is shown to yield better results than the SAFT algorithm in complex environments. InASAFT suffers, though, from the same difficulties as the SAFT algorithm, i.e. "ghost" images and a lack of clearly focused images. These artefacts can, however, be identified through numerical modelling of the wave propagation in the structure. (orig.)

  8. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    Science.gov (United States)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    Existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify the imagery as cloud or other objects. When clouds are thin and small, however, these methods become inaccurate. In this paper, a linear combination model of cloud images is proposed; using this model, the underlying surface information of remote sensing images can be removed, making the cloud detection result more accurate. First, the automatic cloud detection program uses the linear combination model to separate the cloud information from the surface information in semi-transparent cloud images, and then uses different image features to recognize the cloud parts. For computational efficiency, an AdaBoost classifier is introduced to combine the different features into a cloud classifier. AdaBoost can select the most effective features from many candidate features, so the computation time is largely reduced. Finally, we compared the proposed method with a cloud detection method based on a tree structure and a multiple-feature detection method using an SVM classifier; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
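The feature-selection behaviour of AdaBoost described in this record can be sketched with a tiny from-scratch implementation using one-feature threshold stumps; the toy data and feature names are illustrative, not the paper's.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=5):
    """Tiny AdaBoost with one-feature threshold stumps. Each round picks
    the (feature, threshold, polarity) with the lowest weighted error,
    so the ensemble implicitly selects the most useful features.
    Labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                     # sample weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol)
        err, j, thr, pol = best
        err = min(max(err, 1e-12), 1 - 1e-12)   # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)   # stump weight
        pred = pol * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)          # reweight toward mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * p * np.where(X[:, j] > t, 1, -1)
                for a, j, t, p in ensemble)
    return np.sign(score)

# Toy data: feature 0 is informative (say, brightness), feature 1 is noise.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = np.where(X[:, 0] > 0.2, 1, -1)
model = train_adaboost(X, y)
print((predict(model, X) == y).mean())  # training accuracy
```

Because every selected stump names the feature it splits on, inspecting the ensemble reveals which features carry the cloud signal, which is the selection property the paper relies on to cut computation time.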

  9. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion, based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have previously proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. Based on perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is then introduced to extract the cardiac systolic and diastolic phases from the heart, so that this cardiac information can be used to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes the perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without contrast media. Our clinical case studies with 52 patients show that this technique is effective for detecting pulmonary embolism even without contrast media, demonstrating consistent correlations with computed tomography (CT) and nuclear medicine (NM) studies. The fluoroscopic examination takes only about 2 seconds for a perfusion study, with a low radiation dose to the patient, and involves no preparation, no radioactive isotopes, and no contrast media.

  10. Insights into Parkinson's disease models and neurotoxicity using non-invasive imaging

    International Nuclear Information System (INIS)

    Sanchez-Pernaute, Rosario; Brownell, Anna-Liisa; Jenkins, Bruce G.; Isacson, Ole

    2005-01-01

    Loss of dopamine in the nigrostriatal system causes a severe impairment in motor function in patients with Parkinson's disease and in experimental neurotoxic models of the disease. We have used non-invasive imaging techniques such as positron emission tomography (PET) and functional magnetic resonance imaging (MRI) to investigate in vivo the changes in the dopamine system in neurotoxic models of Parkinson's disease. In addition to classic neurotransmitter studies, in these models, it is also possible to characterize associated and perhaps pathogenic factors, such as the contribution of microglia activation and inflammatory responses to neuronal damage. Functional imaging techniques are instrumental to our understanding and modeling of disease mechanisms, which should in turn lead to development of new therapies for Parkinson's disease and other neurodegenerative disorders

  11. Parallel imaging enhanced MR colonography using a phantom model.

    LENUS (Irish Health Repository)

    Morrin, Martina M

    2008-09-01

    To compare various Array Spatial and Sensitivity Encoding Technique (ASSET)-enhanced T2W SSFSE (single shot fast spin echo) and T1-weighted (T1W) 3D SPGR (spoiled gradient recalled echo) sequences for polyp detection and image quality at MR colonography (MRC) in a phantom model. Limitations of MRC using standard 3D SPGR T1W imaging include the long breath-hold required to cover the entire colon within one acquisition and the relatively low spatial resolution due to the long acquisition time. Parallel imaging using ASSET-enhanced T2W SSFSE and 3D T1W SPGR imaging results in much shorter imaging times, which allows for increased spatial resolution.

  12. Modelling of chromatic contrast for retrieval of wallpaper images

    OpenAIRE

    Gao, Xiaohong W.; Wang, Yuanlei; Qian, Yu; Gao, Alice

    2015-01-01

    Colour remains one of the key factors in presenting an object and consequently has been widely applied in retrieval of images based on their visual contents. However, a colour appearance changes with the change of viewing surroundings, the phenomenon that has not been paid attention yet while performing colour-based image retrieval. To comprehend this effect, in this paper, a chromatic contrast model, CAMcc, is developed for the application of retrieval of colour intensive images, cementing t...

  13. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    Using the supervised classification technique, both simulated and empirical satellite remote sensing data are used to train and test the Gaussian mixture model algorithm. For the purpose of validating the experiment, the resulting classified satellite image is compared with the ground truth data. For the simulated modelling, ...
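A supervised Gaussian-model classifier of the kind this record trains and tests can be sketched as follows. For simplicity the sketch fits a single Gaussian per class rather than a full mixture, and the two-band "pixels" are synthetic illustrations, not the paper's data.

```python
import numpy as np

def fit_gaussians(X, y):
    """Fit one multivariate Gaussian (mean, covariance, prior) per class
    from labelled training pixels -- the supervised training step."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False), len(Xc) / len(X))
    return params

def log_gauss(x, mean, cov):
    # Log-density of a multivariate normal at x.
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.inv(cov) @ d + logdet + len(x) * np.log(2 * np.pi))

def classify(params, x):
    # Maximum a posteriori: largest log prior + log likelihood wins.
    return max(params, key=lambda c: np.log(params[c][2]) + log_gauss(x, *params[c][:2]))

# Toy two-band "pixels" from two land-cover classes.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal([0, 0], 0.5, (100, 2)), rng.normal([3, 3], 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
params = fit_gaussians(X, y)
print(classify(params, np.array([2.8, 3.1])))  # → 1
```

Validation then proceeds as the record describes: classify every pixel of a held-out image and compare the resulting class map against ground truth.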

  14. Terrestrial magnetospheric imaging: Numerical modeling of low energy neutral atoms

    International Nuclear Information System (INIS)

    Moore, K.R.; Funsten, H.O.; McComas, D.J.; Scime, E.E.; Thomsen, M.F.

    1993-01-01

    Imaging of the terrestrial magnetosphere can be performed by detecting low energy neutral atoms (LENAs) that are produced by charge exchange between magnetospheric plasma ions and cold neutral atoms of the Earth's geocorona. As a result of recent advances in instrumentation, it is now feasible to make energy-resolved measurements of LENAs from less than 1 keV to greater than 30 keV. To model the LENA fluxes expected at a spacecraft, we initially used a simplistic, spherically symmetric magnetospheric plasma model. We now present improved calculations of both hydrogen and oxygen line-of-sight LENA fluxes expected on orbit for various plasma regimes as predicted by the Rice University Magnetospheric Specification Model. We also estimate expected image count rates based on realistic instrument geometric factors, energy passbands, and image accumulation intervals. The results indicate that presently proposed LENA instruments are capable of imaging storm-time ring current fluxes, and potentially even quiet-time ring current fluxes, and that phenomena such as ion injections from the tail and subsequent drifts toward the dayside magnetopause may also be deduced.

  15. Model-based VQ for image data archival, retrieval and distribution

    Science.gov (United States)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval, and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks that have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are generated internally using a mean-removed error model and a Human Visual System (HVS) model. The error model assumed is the Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator, and the resulting random numbers are grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector: the DCT coefficients are multiplied by a weight matrix found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in codebook generation is the mean lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
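The codebook-generation pipeline described in this record (Laplacian residual vectors, perceptual weighting in the DCT domain, inverse DCT) can be sketched as follows. The HVS weight used here is an illustrative low-pass ramp, not the paper's actual matrix, and the vector sizes are assumptions.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as an n x n matrix.
    k = np.arange(n)[:, None]
    M = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def mvq_codebook(lam, n_vectors=256, dim=16, seed=0):
    """MVQ-style codebook sketch: draw Laplacian residual vectors with
    scale derived from the image parameter lam, weight their DCT
    coefficients perceptually, and invert the DCT. Only lam is
    image-dependent, so the decoder can regenerate the same codebook
    from the same seed."""
    rng = np.random.default_rng(seed)
    vecs = rng.laplace(loc=0.0, scale=lam, size=(n_vectors, dim))
    T = dct_matrix(dim)
    coeffs = vecs @ T.T                        # forward DCT of each vector
    coeffs *= 1.0 / (1.0 + np.arange(dim))     # hypothetical HVS weighting
    return coeffs @ T                          # inverse DCT (T is orthogonal)

book = mvq_codebook(lam=4.0)
print(book.shape)  # → (256, 16)
```

Because the construction is fully determined by lam and the seed, transmitting lam in the coded file is enough for the decoder to rebuild an identical codebook, which is the property that lets MVQ dispense with explicit codebooks.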

  16. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    Science.gov (United States)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.

  17. Unified and Modular Modeling and Functional Verification Framework of Real-Time Image Signal Processors

    Directory of Open Access Journals (Sweden)

    Abhishek Jain

    2016-01-01

    Full Text Available In the VLSI industry, image signal processing algorithms are developed and evaluated using software models before implementation of RTL and firmware. After the algorithm is finalized, the software models are used as a golden reference for image signal processor (ISP) RTL and firmware development. In this paper, we describe a unified and modular modeling framework for image signal processing algorithms used for different purposes such as ISP algorithm development, reference for hardware (HW) implementation, reference for firmware (FW) implementation, and bit-true certification. A Universal Verification Methodology (UVM) based functional verification framework for image signal processors using software reference models is described. Further, IP-XACT based tools for automatic generation of functional verification environment files and model map files are described. The proposed framework is developed both with a host interface and with a core using the virtual register interface (VRI) approach. This modeling and functional verification framework is used in real-time image signal processing applications including cellphones, smart cameras, and image compression. The main motivation behind this work is to propose an efficient, reusable, and automated framework for modeling and verification of image signal processor (ISP) designs. The proposed framework shows good results, with significant improvement observed in product verification time, verification cost, and design quality.

  18. Registered error between PET and CT images confirmed by a water model

    International Nuclear Information System (INIS)

    Chen Yangchun; Fan Mingwu; Xu Hao; Chen Ping; Zhang Chunlin

    2012-01-01

    The registration error between the PET and CT imaging systems was determined using a water phantom simulating clinical cases. A 6750 mL barrel was filled with 59.2 MBq of [18F]-FDG and scanned after 80 min by PET/CT in 2-dimensional mode. The CT images were used for attenuation correction of the PET images. The CT/PET images were obtained by image morphological processing without the barrel wall. The relationship between the water-image centroids of the CT and PET images was established by linear regression analysis, and the registration error between PET and CT images could be computed slice by slice. The alignment procedure was performed 4 times following the protocol given by GE Healthcare. Compared with the centroids of the water CT images, the centroids of the PET images were shifted along the X-axis by (0.011×slice+0.63) mm and along the Y-axis by (0.022×slice+1.35) mm. To match the CT images, the PET images should be translated along the X-axis by (-2.69±0.15) mm, the Y-axis by (0.43±0.11) mm, and the Z-axis by (0.86±0.23) mm, and rotated about the X-axis by (0.06±0.07)°, the Y-axis by (-0.01±0.08)°, and the Z-axis by (0.11±0.07)°. Thus the systematic registration error was not affected by the load or its distribution. By determining the registration error between PET and CT images, including random rotational error, the water phantom can be used to verify the registration of a PET-CT system corrected by the alignment parameters. (authors)
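    The slice-wise centroid regression described above can be sketched as follows; the centroid helper and the synthetic shift data are illustrative stand-ins, with the X-axis trend (0.011×slice+0.63) mm taken from the abstract.

    ```python
    import numpy as np

    def centroid(img):
        """Intensity-weighted centroid (x, y) of a 2-D image slice."""
        ys, xs = np.indices(img.shape)
        total = img.sum()
        return (xs * img).sum() / total, (ys * img).sum() / total

    # Hypothetical per-slice X shifts (PET centroid minus CT centroid), in mm,
    # following the trend reported in the abstract plus a little noise.
    slices = np.arange(40)
    shift_x = 0.011 * slices + 0.63 + np.random.default_rng(1).normal(0.0, 0.02, 40)

    # Linear fit of shift versus slice index, as in the abstract
    slope, intercept = np.polyfit(slices, shift_x, 1)
    ```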

  19. a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    Science.gov (United States)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite images such as Landsat8 OLI. However, as more and more image data become available from a variety of sensors, we sometimes cannot obtain the accurate sensor calibration parameters and atmospheric conditions that are needed in a physics-based topographic correction model. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images without accurate calibration parameters. Based on this model we can obtain topographically corrected surface reflectance from DN data, and we tested and verified the model with image data from the Chinese satellites HJ and GF. The results show that the correlation factor was reduced by almost 85 % for the near-infrared bands and the overall classification accuracy increased by 14 % after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.

  20. Tracking boundary movement and exterior shape modelling in lung EIT imaging

    International Nuclear Information System (INIS)

    Biguri, A; Soleimani, M; Grychtol, B; Adler, A

    2015-01-01

    Electrical impedance tomography (EIT) has shown significant promise for lung imaging. One key challenge for EIT in this application is the movement of electrodes during breathing, which introduces artefacts in reconstructed images. Various approaches have been proposed to compensate for electrode movement, but no comparison of these approaches is available. This paper analyses boundary model mismatch and electrode movement in lung EIT. The aim is to evaluate the extent to which various algorithms tolerate movement, and to determine if a patient-specific model is required for EIT lung imaging. Movement data are simulated from a CT-based model, and image analysis is performed using quantitative figures of merit. The electrode movement is modelled based on expected values of chest movement, and an extended Jacobian method is proposed to make use of exterior boundary tracking. Results show that dynamic boundary tracking is the most robust method against movement, but it is computationally more expensive. Simultaneous electrode movement and conductivity reconstruction algorithms show increased robustness compared to conductivity-only reconstruction. The results of this comparative study can help develop a better understanding of the impact of shape model mismatch and electrode movement in lung EIT. (paper)

  1. Tracking boundary movement and exterior shape modelling in lung EIT imaging.

    Science.gov (United States)

    Biguri, A; Grychtol, B; Adler, A; Soleimani, M

    2015-06-01

    Electrical impedance tomography (EIT) has shown significant promise for lung imaging. One key challenge for EIT in this application is the movement of electrodes during breathing, which introduces artefacts in reconstructed images. Various approaches have been proposed to compensate for electrode movement, but no comparison of these approaches is available. This paper analyses boundary model mismatch and electrode movement in lung EIT. The aim is to evaluate the extent to which various algorithms tolerate movement, and to determine if a patient-specific model is required for EIT lung imaging. Movement data are simulated from a CT-based model, and image analysis is performed using quantitative figures of merit. The electrode movement is modelled based on expected values of chest movement, and an extended Jacobian method is proposed to make use of exterior boundary tracking. Results show that dynamic boundary tracking is the most robust method against movement, but it is computationally more expensive. Simultaneous electrode movement and conductivity reconstruction algorithms show increased robustness compared to conductivity-only reconstruction. The results of this comparative study can help develop a better understanding of the impact of shape model mismatch and electrode movement in lung EIT.

  2. Feedforward Object-Vision Models Only Tolerate Small Image Variations Compared to Human

    Directory of Open Access Journals (Sweden)

    Masoud eGhodrati

    2014-07-01

    Full Text Available Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modelling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well when images with more complex variations of the same object are applied to them. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only for low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modelling. We show that this approach is not of significant help in solving the computational crux of object recognition, that is, invariant object recognition when the identity-preserving image variations become more complex.

  3. Functional Brain Imaging Synthesis Based on Image Decomposition and Kernel Modeling: Application to Neurodegenerative Diseases

    Directory of Open Access Journals (Sweden)

    Francisco J. Martinez-Murcia

    2017-11-01

    Full Text Available The rise of neuroimaging in research and clinical practice, together with the development of new machine learning techniques, has strongly encouraged the Computer Aided Diagnosis (CAD) of different diseases and disorders. However, these algorithms are often tested on proprietary datasets to which access is limited and, therefore, a direct comparison between CAD procedures is not possible. Furthermore, the sample size is often small for developing accurate machine learning methods. Multi-center initiatives are currently a very useful, although limited, tool for the recruitment of large populations and standardization of CAD evaluation. As an alternative, we propose a brain image synthesis procedure intended to generate a new image set that shares characteristics with an original one. Our system focuses on nuclear imaging modalities such as PET or SPECT brain images. We analyze the dataset by applying PCA to the original dataset, and then model the distribution of samples in the projected eigenbrain space using a Probability Density Function (PDF) estimator. Once the model has been built, we can generate new coordinates in the eigenbrain space belonging to the same class, which can then be projected back to the image space. The system has been evaluated on different functional neuroimaging datasets, assessing the resemblance of the synthetic images to the original ones, the differences between them, their generalization ability, and the independence of the synthetic dataset with respect to the original. The synthetic images maintain the differences between groups found in the original dataset, with no significant differences when comparing them to real-world samples. Furthermore, they featured a similar performance and generalization capability to that of the original dataset. These results prove that these images are suitable for standardizing the evaluation of CAD pipelines, and providing data augmentation in machine learning systems -e.g. in deep
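    A minimal sketch of the synthesis pipeline, assuming a plain Gaussian density in the eigenbrain space in place of the paper's PDF estimator (all names and sizes are illustrative):

    ```python
    import numpy as np

    def synthesize(images, n_components=5, n_new=10, seed=0):
        """Sketch of the pipeline: PCA via SVD, then a simple Gaussian
        density in the projected "eigenbrain" space standing in for the
        paper's PDF estimator, then projection back to image space."""
        rng = np.random.default_rng(seed)
        X = images.reshape(len(images), -1).astype(float)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = Vt[:n_components]            # principal "eigenbrain" directions
        coords = (X - mean) @ basis.T        # training samples projected
        mu, sigma = coords.mean(axis=0), coords.std(axis=0)
        new_coords = rng.normal(mu, sigma, size=(n_new, n_components))
        return new_coords @ basis + mean     # synthetic images, flattened

    imgs = np.random.default_rng(2).random((20, 8, 8))  # toy stand-in data
    synthetic = synthesize(imgs)
    ```

    In the paper's setting the density would be estimated per diagnostic class, so the sampled coordinates stay within the class being augmented.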

  4. Imaging and Modeling Laboratory in Neurobiology and Oncology - IMNC. Activity report 2008-2012

    International Nuclear Information System (INIS)

    Charon, Yves; Arlaud, Nathalie; Mastrippolito, Roland

    2014-09-01

    The Imaging and Modeling Laboratory in Neurobiology and Oncology (IMNC) is an interdisciplinary unit shared between the Paris-Sud and Paris-Diderot universities and the National Institute of Nuclear and particle physics (IN2P3). Created in January 2006, the laboratory activities are structured around three main topics: the clinical and pre-clinical multi-modal imaging (optical and isotopic), the modeling of tumoral processes, and radiotherapy. This report presents the activities of the laboratory during the years 2008-2012: 1 - Forewords; 2 - Highlights; 3 - Research teams: Small animal imaging; Metabolism, imaging and olfaction; Surgery imaging in oncology; Quantification in molecular imaging; Modeling of biological systems; 4 - Technical innovations: Instrumentation, Scientific calculation, Biology department, valorisation and open-source softwares; 5 - Publications; 6 - Scientific life, communication and teaching activities; 7 - Laboratory operation; 8 - Perspectives

  5. A Frequency Matching Method for Generation of a Priori Sample Models from Training Images

    DEFF Research Database (Denmark)

    Lange, Katrine; Cordua, Knud Skou; Frydendall, Jan

    2011-01-01

    This paper presents a Frequency Matching Method (FMM) for generation of a priori sample models based on training images and illustrates its use by an example. In geostatistics, training images are used to represent a priori knowledge or expectations of models, and the FMM can be used to generate new images that share the same multi-point statistics as a given training image. The FMM proceeds by iteratively updating voxel values of an image until the frequency of patterns in the image matches the frequency of patterns in the training image, making the resulting image statistically indistinguishable from the training image.
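    The iterative update can be illustrated on a toy 2-D binary image using 2x2 pattern histograms; this greedy sketch is an assumption-laden simplification of the FMM, not the authors' algorithm:

    ```python
    import numpy as np

    def pattern_hist(img):
        """Frequency histogram of the 16 possible 2x2 binary patterns."""
        p = (img[:-1, :-1] + 2 * img[:-1, 1:]
             + 4 * img[1:, :-1] + 8 * img[1:, 1:])
        return np.bincount(p.ravel(), minlength=16) / p.size

    def frequency_match(train, img, n_iter=2000, seed=0):
        """Greedy FMM sketch: flip a random pixel and keep the flip only if
        it moves the pattern histogram closer to the training image's."""
        rng = np.random.default_rng(seed)
        target = pattern_hist(train)
        img = img.copy()
        dist = np.abs(pattern_hist(img) - target).sum()
        for _ in range(n_iter):
            i, j = rng.integers(0, img.shape[0]), rng.integers(0, img.shape[1])
            img[i, j] ^= 1
            new = np.abs(pattern_hist(img) - target).sum()
            if new <= dist:
                dist = new
            else:
                img[i, j] ^= 1  # revert the flip
        return img, dist

    train = np.tile(np.array([0, 1]), (16, 8))            # striped training image
    start = np.random.default_rng(3).integers(0, 2, (16, 16))
    matched, dist = frequency_match(train, start)
    ```

    The full FMM works with larger multi-point patterns and a proper statistical distance, but the loop structure, iterate voxel updates until the pattern frequencies match, is the same idea.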

  6. Digital image technology and a measurement tool in physical models

    CSIR Research Space (South Africa)

    Phelp, David

    2006-05-01

    Full Text Available Advances in digital image technology have allowed us to use accurate but relatively cost-effective technology to measure a number of varied activities in physical models. The capturing and manipulation of high-resolution digital images can be used...

  7. Innovative biomagnetic imaging sensors for breast cancer: A model-based study

    International Nuclear Information System (INIS)

    Deng, Y.; Golkowski, M.

    2012-01-01

    Breast cancer is a serious potential health problem for all women and is the second leading cause of cancer deaths in the United States. The current screening procedures and imaging techniques, including x-ray mammography, clinical biopsy, ultrasound imaging, and magnetic resonance imaging, provide only 73% accuracy in detecting breast cancer. This gives the impetus to explore alternative techniques for imaging the breast and detecting early-stage tumors. Among the complementary methods, noninvasive biomagnetic breast imaging is attractive and promising, because it avoids both the ionizing radiation and the breast compression from which prevalent x-ray mammography suffers. It furthermore offers very high contrast because of the significant differences in electromagnetic properties between cancerous, benign, and normal breast tissues. In this paper, a hybrid and accurate modeling tool for biomagnetic breast imaging is developed, which couples electromagnetic and ultrasonic energies, and initial validations between model predictions and experimental findings are conducted.

  8. Projection model for flame chemiluminescence tomography based on lens imaging

    Science.gov (United States)

    Wan, Minggang; Zhuang, Jihui

    2018-04-01

    For flame chemiluminescence tomography (FCT) based on lens imaging, the projection model is essential because it formulates the mathematical relation between the flame projections captured by cameras and the chemiluminescence field, and, through this relation, the field is reconstructed. This work proposes the blurry-spot (BS) model, which makes more general assumptions and has higher accuracy than the widely applied line-of-sight model. By combining the geometrical camera model and the thin-lens equation, the BS model takes into account the perspective effect of the camera lens; by combining ray tracing and Monte Carlo simulation, it also considers the inhomogeneous distribution of captured radiance on the image plane. The performance of these two models in FCT was numerically compared, and the results showed that using the BS model can lead to better reconstruction quality over wider application ranges.

  9. Relationship model among sport event image, destination image, and tourist satisfaction of Tour de Singkarak in West Sumatera

    Directory of Open Access Journals (Sweden)

    Ratni Prima Lita

    2015-06-01

    Full Text Available The sport event Tour de Singkarak (TDS) can increase tourist arrivals to West Sumatera; at the time of the event, the majority of participants and team supporters (sports tourists) bring their families. Although tourist arrivals are reported, the impact of the TDS sport event on the image of West Sumatera as a tourist destination (destination image), and its effect on tourist satisfaction, still needs to be examined comprehensively and over the long term. This study re-conceptualizes the interconnectedness among sport event image, tourist destination image, perception, and the effect on tourist satisfaction. The investigation of this interconnection is expected to yield an empirically tested model. Explanatory in nature, this study uses an explanatory survey and cross-sectional data. A total of 100 spectators of Tour de Singkarak in West Sumatera were surveyed, selected by a convenience sampling technique. The data were analyzed using variance-based structural equation modeling. It was found that sport event image and destination image significantly affect the satisfaction of spectators of Tour de Singkarak.

  10. Polarimetric SAR image classification based on discriminative dictionary learning model

    Science.gov (United States)

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learning an overcomplete dictionary have shown great potential for solving such problems. The overcomplete dictionary plays an important role in PolSAR image classification; however, in complex PolSAR scenes, features shared by different classes weaken the discrimination of the learned dictionary and thus degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model to enhance the discrimination of the dictionary. The overcomplete dictionary learned by the proposed model is more discriminative and well suited to PolSAR classification.

  11. Cognitive model of image interpretation for artificial intelligence applications

    International Nuclear Information System (INIS)

    Raju, S.

    1988-01-01

    A cognitive model of imaging diagnosis was devised to aid in the development of expert systems that assist in the interpretation of diagnostic images. In this cognitive model, a small set of observations that are strongly predictive of a particular diagnosis leads to a search for other observations that would support this diagnosis but are not necessarily specific to it. Then a set of alternative diagnoses is considered. This is followed by a search for observations that might allow differentiation of the primary diagnostic consideration from the alternatives. The production rules needed to implement this model can be classified into three major categories, each of which has certain general characteristics. Knowledge of these characteristics simplifies the development of these expert systems.

  12. Textured digital elevation model formation from low-cost UAV LADAR/digital image data

    Science.gov (United States)

    Bybee, Taylor C.; Budge, Scott E.

    2015-05-01

    Textured digital elevation models (TDEMs) have valuable use in precision agriculture, situational awareness, and disaster response. However, scientific-quality models are expensive to obtain using conventional aircraft-based methods. The cost of creating an accurate textured terrain model can be reduced by using a low-cost (processing step and enables both 2D- and 3D-image registration techniques to be used. This paper describes the formation of TDEMs using simulated data from a small UAV gathering swaths of texel images of the terrain below. Because the UAV is low-cost, only coarse knowledge of position and attitude is available, and thus both 2D- and 3D-image registration techniques must be used to register adjacent swaths of texel imagery to create a TDEM. The process of creating an aggregate texel image (a TDEM) from many smaller texel image swaths is described. The algorithm is seeded with the rough estimate of the position and attitude of each capture. Details such as the required amount of texel image overlap, registration models, simulated flight patterns (level and turbulent), and texture image formation are presented. In addition, examples of such TDEMs are shown and analyzed for accuracy.

  13. Registration of eye reflection and scene images using an aspherical eye model.

    Science.gov (United States)

    Nakazawa, Atsushi; Nitschke, Christian; Nishida, Toyoaki

    2016-11-01

    This paper introduces an image registration algorithm between an eye reflection and a scene image. Although there are currently a large number of image registration algorithms, this task remains difficult due to nonlinear distortions at the eye surface and large amounts of noise, such as iris texture, eyelids, eyelashes, and their shadows. To overcome this issue, we developed an image registration method combining an aspherical eye model that simulates nonlinear distortions considering eye geometry and a two-step iterative registration strategy that obtains dense correspondence of the feature points to achieve accurate image registrations for the entire image region. We obtained a database of eye reflection and scene images featuring four subjects in indoor and outdoor scenes and compared the registration performance with different asphericity conditions. Results showed that the proposed approach can perform accurate registration with an average accuracy of 1.05 deg by using the aspherical cornea model. This work is relevant for eye image analysis in general, enabling novel applications and scenarios.

  14. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    Science.gov (United States)

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have been brought out along with emerging 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), due to its applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn a wide range of researchers' attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in the "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. But existing assessment metrics do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using an autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method as compared with prevailing full-, reduced- and no-reference models.
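    The core idea, predicting each pixel from its neighbours with an AR model and reading distortion off the prediction error, can be sketched as follows (a toy global AR fit; the paper's local description is more elaborate):

    ```python
    import numpy as np

    def ar_error_map(img):
        """Fit one global AR model predicting each pixel from four causal
        neighbours (left, up, up-left, up-right) by least squares, and
        return the absolute prediction-error map over the interior."""
        nbrs = np.stack(
            [img[1:-1, :-2], img[:-2, 1:-1], img[:-2, :-2], img[:-2, 2:]],
            axis=-1,
        ).reshape(-1, 4)
        target = img[1:-1, 1:-1].ravel()
        coef, *_ = np.linalg.lstsq(nbrs, target, rcond=None)
        err = np.abs(nbrs @ coef - target)
        return err.reshape(img.shape[0] - 2, img.shape[1] - 2)

    # A smooth ramp is AR-predictable almost exactly; a geometry-distorted
    # region would instead show up as large values in the error map.
    ramp = np.add.outer(np.arange(32.0), np.arange(32.0))
    err = ar_error_map(ramp)
    ```

    In the paper the error map is further weighted by visual saliency before being pooled into a single quality score.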

  15. Superresolution Interferometric Imaging with Sparse Modeling Using Total Squared Variation: Application to Imaging the Black Hole Shadow

    Science.gov (United States)

    Kuramochi, Kazuki; Akiyama, Kazunori; Ikeda, Shiro; Tazaki, Fumie; Fish, Vincent L.; Pu, Hung-Yi; Asada, Keiichi; Honma, Mareki

    2018-05-01

    We propose a new imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the ℓ1-norm and a new function named total squared variation (TSV) of the brightness distribution. First, we demonstrate that our technique may achieve a superresolution of ∼30% compared with the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of proposed techniques in greater detail. Remarkably, in both the image and gradient domains, the optimal beam size minimizing root-mean-squared errors is ≲10% of the traditional CLEAN beam size for ℓ1+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved reconstructed images. This indicates that TSV is well matched to the expected physical properties of the astronomical images and the traditional post-processing technique of Gaussian convolution in interferometric imaging may not be required. We also propose a feature-extraction method to detect circular features from the image of a black hole shadow and use it to evaluate the performance of the image reconstruction. With this method and reconstructed images, the EHT can constrain the radius of the black hole shadow with an accuracy of ∼10%–20% in present simulations for Sgr A*, suggesting that the EHT would be able to provide useful independent measurements of the mass of the supermassive black holes in Sgr A* and also another primary target, M87.
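    The TSV regularizer itself is simple to state: the sum of squared differences between adjacent pixels. The sketch below, with an assumed `fourier_op` standing in for the interferometric measurement operator, shows how the ℓ1+TSV objective is assembled:

    ```python
    import numpy as np

    def tsv(x):
        """Total squared variation: sum of squared differences between
        horizontally and vertically adjacent pixels."""
        return float(np.sum((x[:, 1:] - x[:, :-1]) ** 2)
                     + np.sum((x[1:, :] - x[:-1, :]) ** 2))

    def objective(x, vis_obs, fourier_op, lam_l1, lam_tsv):
        """Data fidelity + l1 sparsity + TSV smoothness. `fourier_op`
        (an assumption here) maps the image to model visibilities."""
        resid = fourier_op(x) - vis_obs
        return float(np.sum(np.abs(resid) ** 2)
                     + lam_l1 * np.sum(np.abs(x))
                     + lam_tsv * tsv(x))
    ```

    Unlike total variation, which sums absolute differences and favors piecewise-flat images, the squared differences of TSV penalize large jumps more heavily and so favor the smooth edges expected in astronomical images.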

  16. Human visual modeling and image deconvolution by linear filtering

    International Nuclear Information System (INIS)

    Larminat, P. de; Barba, D.; Gerber, R.; Ronsin, J.

    1978-01-01

    The problem is the numerical restoration of images degraded by passing through a known, spatially invariant linear system and by the addition of stationary noise. We propose an improvement of the Wiener filter to allow the restoration of such images. This improvement reduces the important drawbacks of the classical Wiener filter: the voluminous data processing, and the failure to account for the characteristics of human vision, which condition how the observer perceives the restored image. In the first paragraph, we describe the structure of the visual detection system and a method for modelling it. In the second paragraph we explain a restoration method by Wiener filtering that takes the visual properties into account and that can be adapted to the local properties of the image. Then the results obtained on TV images and scintigrams (images obtained by a gamma camera) are commented on [fr]
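    The classical frequency-domain Wiener filter that the paper starts from can be sketched in a few lines (a constant noise-to-signal ratio is assumed here; the paper's contribution is to replace it with visually weighted, locally adaptive terms):

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, nsr=0.01):
        """Classical frequency-domain Wiener filter: G = H* / (|H|^2 + NSR),
        with `nsr` a constant noise-to-signal power ratio."""
        H = np.fft.fft2(psf, s=blurred.shape)
        G = np.conj(H) / (np.abs(H) ** 2 + nsr)
        return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

    # Round trip on a smooth test image with a small separable blur kernel
    n = 16
    t = np.arange(n)
    img = np.add.outer(np.sin(2 * np.pi * t / n), np.cos(2 * np.pi * t / n))
    psf = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
    restored = wiener_deconvolve(blurred, psf, nsr=1e-6)
    ```

    Because the whole filter is a single element-wise operation in the Fourier domain, the computational cost is dominated by the FFTs, which is what makes the "voluminous data processing" objection a matter of the adaptive extensions rather than the base filter.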

  17. Pseudorandom numbers: evolutionary models in image processing, biology, and nonlinear dynamic systems

    Science.gov (United States)

    Yaroslavsky, Leonid P.

    1996-11-01

    We show that one can treat pseudo-random generators, evolutionary models of texture images, iterative local adaptive filters for image restoration and enhancement, and growth models in biology and materials science in a unified way, as special cases of dynamic systems with nonlinear feedback.

  18. Construction of tomographic head model using sectioned photographic images of cadaver

    International Nuclear Information System (INIS)

    Lee, Choon Sik; Lee, Jai Ki; Park, Jin Seo; Chung, Min Suk

    2004-01-01

    Tomographic models are currently the most complete, developed and realistic models of the human anatomy. They have been used to estimate organ doses for diagnostic radiation examinations, radiotherapy treatment planning, and radiation protection. The quality of the original anatomic images is a key factor in building a quality tomographic model. Computed tomography (CT) and magnetic resonance imaging (MRI) scans, from which most current tomographic models are constructed, have inherent shortcomings. In this study, a tomographic model of an adult Korean male head was constructed using serially sectioned photographs of a cadaver. The cadaver was embedded, frozen, serially sectioned and photographed by a high-resolution digital camera at 0.2 mm intervals. The contours of organs and tissues in the photographs were segmented by several trained anatomists. The 120 segmented images of the head at 2 mm intervals were converted into binary files and ported into a Monte Carlo code to perform an example calculation of organ dose. A whole-body tomographic model will be constructed using the procedure developed in this study.

  19. Probabilistic image processing by means of the Bethe approximation for the Q-Ising model

    International Nuclear Information System (INIS)

    Tanaka, Kazuyuki; Inoue, Jun-ichi; Titterington, D M

    2003-01-01

    The framework of Bayesian image restoration for multi-valued images by means of the Q-Ising model with nearest-neighbour interactions is presented. Hyperparameters in the probabilistic model are determined so as to maximize the marginal likelihood. A practical algorithm is described for multi-valued image restoration based on the Bethe approximation. The algorithm corresponds to loopy belief propagation in artificial intelligence. We conclude that, for real-world grey-level images, the Q-Ising model can give good results.
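    For intuition, MAP restoration under a Q-Ising smoothness prior can be sketched with iterated conditional modes (ICM), a simple deterministic stand-in for the loopy belief propagation used in the paper:

    ```python
    import numpy as np

    def icm_restore(noisy, q=8, beta=1.0, sweeps=5):
        """ICM restoration of a grey-level image with levels {0, ..., q-1}.

        Energy (a common Q-Ising-style choice, assumed here): per-pixel
        data term (x - y)^2 plus beta * (x_i - x_j)^2 over neighbour pairs.
        Each pixel is set to the level minimizing its local energy."""
        x = noisy.copy()
        levels = np.arange(q)
        h, w = x.shape
        for _ in range(sweeps):
            for i in range(h):
                for j in range(w):
                    nb = [x[i2, j2] for i2, j2 in
                          ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                          if 0 <= i2 < h and 0 <= j2 < w]
                    cost = (levels - noisy[i, j]) ** 2
                    for v in nb:
                        cost = cost + beta * (levels - v) ** 2
                    x[i, j] = np.argmin(cost)
        return x

    noisy = np.full((5, 5), 3)
    noisy[2, 2] = 7                      # one corrupted pixel
    restored = icm_restore(noisy, q=8, beta=2.0, sweeps=3)
    ```

    ICM only finds a local optimum; the appeal of the Bethe-approximation/belief-propagation approach in the paper is that it approximates the posterior marginals rather than committing to a single greedy assignment.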

  20. SU-C-209-05: Monte Carlo Model of a Prototype Backscatter X-Ray (BSX) Imager for Projective and Selective Object-Plane Imaging

    International Nuclear Information System (INIS)

    Rolison, L; Samant, S; Baciak, J; Jordan, K

    2016-01-01

    Purpose: To develop a Monte Carlo N-Particle (MCNP) model for the validation of a prototype backscatter x-ray (BSX) imager, and to optimize BSX technology for medical applications, including selective object-plane imaging. Methods: BSX is an emerging technology that represents an alternative to conventional computed tomography (CT) and projective digital radiography (DR). It employs detectors located on the same side as the incident x-ray source, making use of backscatter and avoiding the ring geometry that encloses the imaged object. Current BSX imagers suffer from low spatial resolution. An MCNP model was designed to replicate a BSX prototype used for flaw detection in industrial materials. This prototype consisted of a 1.5 mm diameter, 60 kVp pencil beam surrounded by a ring of four 5.0 cm diameter NaI scintillation detectors. The imaging phantom consisted of a 2.9 cm thick aluminum plate with five 0.6 cm diameter holes drilled halfway through. The experimental image was created using a raster scanning motion (in 1.5 mm increments). Results: A qualitative comparison between the physical and simulated images showed very good agreement, with 1.5 mm spatial resolution in the plane perpendicular to the incident x-ray beam. The MCNP model developed the concept of radiography by selective plane detection (RSPD) for BSX, whereby specific object planes can be imaged by varying kVp: 10 keV increments in mean x-ray energy yielded 4 mm thick slice resolution in the phantom. Image resolution in the MCNP model can be further increased by increasing the number of detectors and decreasing the raster step size. Conclusion: MCNP modelling was used to validate a prototype BSX imager and introduce the RSPD concept, allowing for selective object-plane imaging. There was very good visual agreement between the experimental and MCNP imaging. Beyond optimizing system parameters for the existing prototype, new geometries can be investigated for volumetric image acquisition in medical applications.

  1. SU-C-209-05: Monte Carlo Model of a Prototype Backscatter X-Ray (BSX) Imager for Projective and Selective Object-Plane Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Rolison, L; Samant, S; Baciak, J; Jordan, K [University of Florida, Gainesville, FL (United States)

    2016-06-15

    Purpose: To develop a Monte Carlo N-Particle (MCNP) model for the validation of a prototype backscatter x-ray (BSX) imager, and to optimize BSX technology for medical applications, including selective object-plane imaging. Methods: BSX is an emerging technology that represents an alternative to conventional computed tomography (CT) and projective digital radiography (DR). It employs detectors located on the same side as the incident x-ray source, making use of backscatter and avoiding the ring geometry that encloses the imaged object. Current BSX imagers suffer from low spatial resolution. An MCNP model was designed to replicate a BSX prototype used for flaw detection in industrial materials. This prototype consisted of a 1.5 mm diameter, 60 kVp pencil beam surrounded by a ring of four 5.0 cm diameter NaI scintillation detectors. The imaging phantom consisted of a 2.9 cm thick aluminum plate with five 0.6 cm diameter holes drilled halfway through. The experimental image was created using a raster scanning motion (in 1.5 mm increments). Results: A qualitative comparison between the physical and simulated images showed very good agreement, with 1.5 mm spatial resolution in the plane perpendicular to the incident x-ray beam. The MCNP model developed the concept of radiography by selective plane detection (RSPD) for BSX, whereby specific object planes can be imaged by varying kVp: 10 keV increments in mean x-ray energy yielded 4 mm thick slice resolution in the phantom. Image resolution in the MCNP model can be further increased by increasing the number of detectors and decreasing the raster step size. Conclusion: MCNP modelling was used to validate a prototype BSX imager and introduce the RSPD concept, allowing for selective object-plane imaging. There was very good visual agreement between the experimental and MCNP imaging. Beyond optimizing system parameters for the existing prototype, new geometries can be investigated for volumetric image acquisition in medical applications.

  2. Lévy-based modelling in brain imaging

    DEFF Research Database (Denmark)

    Jónsdóttir, Kristjana Ýr; Rønn-Nielsen, Anders; Mouridsen, Kim

    2013-01-01

    example of magnetic resonance imaging scans that are non-Gaussian. For these data, simulations under the fitted models show that traditional methods based on Gaussian random field theory may leave small, but significant changes in signal level undetected, while these changes are detectable under a non...

  3. A spinal cord window chamber model for in vivo longitudinal multimodal optical and acoustic imaging in a murine model.

    Directory of Open Access Journals (Sweden)

    Sarah A Figley

    Full Text Available In vivo and direct imaging of the murine spinal cord and its vasculature using multimodal (optical and acoustic) imaging techniques could significantly advance preclinical studies of the spinal cord. Such intrinsically high-resolution and complementary imaging technologies could provide a powerful means of quantitatively monitoring changes in the anatomy, structure, physiology, and function of the living cord over time after traumatic injury, onset of disease, or therapeutic intervention. However, longitudinal in vivo imaging of the intact spinal cord in rodent models has been challenging, requiring repeated surgeries to expose the cord for imaging or sacrifice of animals at various time points for ex vivo tissue analysis. To address these limitations, we have developed an implantable spinal cord window chamber (SCWC) device and procedures in mice for repeated multimodal intravital microscopic imaging of the cord and its vasculature in situ. We present methodology for using our SCWC to achieve spatially co-registered optical-acoustic imaging performed serially for up to four weeks, without damaging the cord or inducing locomotor deficits in implanted animals. To demonstrate feasibility, we used the SCWC model to study the response of the normal spinal cord vasculature to ionizing radiation over time using white light and fluorescence microscopy combined with optical coherence tomography (OCT) in vivo. In vivo power Doppler ultrasound and photoacoustics were used to directly visualize the cord and vascular structures and to measure hemoglobin oxygen saturation through the complete spinal cord, respectively. The model was also used for intravital imaging of spinal micrometastases resulting from primary brain tumor using fluorescence and bioluminescence imaging. Our SCWC model overcomes previous in vivo imaging challenges, and our data provide evidence of the broader utility of hybridized optical-acoustic imaging methods.

  4. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects such as artifacts, statues, and buildings is nowadays an important tool for virtual museums, preservation, and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This is important for reducing image capture and processing time when documenting large and complex sites. Moreover, such a minimal camera network design is desirable when imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”, a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task: a dense, ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured using a total station for external validation and scaling. Two network filtering methods were implemented, and three different software packages were used to investigate the efficiency of image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction), and a final accuracy of 1 mm. PMID:24670718

  5. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects such as artifacts, statues, and buildings is nowadays an important tool for virtual museums, preservation, and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This is important for reducing image capture and processing time when documenting large and complex sites. Moreover, such a minimal camera network design is desirable when imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu", a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task: a dense, ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured using a total station for external validation and scaling. Two network filtering methods were implemented, and three different software packages were used to investigate the efficiency of image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction), and a final accuracy of 1 mm.

  6. Comparative Accuracy of Facial Models Fabricated Using Traditional and 3D Imaging Techniques.

    Science.gov (United States)

    Lincoln, Ketu P; Sun, Albert Y T; Prihoda, Thomas J; Sutton, Alan J

    2016-04-01

    The purpose of this investigation was to compare the accuracy of facial models fabricated using facial moulage impression methods with three-dimensional printed (3DP) fabrication methods using soft tissue images obtained from cone beam computed tomography (CBCT) and 3D stereophotogrammetry (3D-SPG) scans. A reference phantom model was fabricated using a 3D-SPG image of a human control form with ten fiducial markers placed on common anthropometric landmarks. This image was converted into the investigation control phantom model (CPM) using 3DP methods. The CPM was attached to a camera tripod for ease of image capture. Three CBCT and three 3D-SPG images of the CPM were captured. The DICOM and STL files from the three 3dMD and three CBCT scans were imported to the 3DP, and six testing models were made. Reversible hydrocolloid and dental stone were used to make three facial moulages of the CPM, and the impressions/casts were poured in type IV gypsum dental stone. A coordinate measuring machine (CMM) was used to measure the distances between each of the ten fiducial markers. Each measurement was made using one point as a static reference to the other nine points. The same measuring procedures were accomplished on all specimens. All measurements were compared between specimens and the control. The data were analyzed using ANOVA and Tukey pairwise comparison of the raters, methods, and fiducial markers. The ANOVA multiple comparisons showed a significant difference among the three methods. Models fabricated using 3D-SPG showed a statistical difference in comparison to the models fabricated using the traditional method of facial moulage and the 3DP models fabricated from CBCT imaging. 3DP models fabricated using 3D-SPG were less accurate than the CPM and the models fabricated using facial moulage and CBCT imaging techniques. © 2015 by the American College of Prosthodontists.

  7. Monte Carlo modeling of neutron imaging at the SINQ spallation source

    International Nuclear Information System (INIS)

    Lebenhaft, J.R.; Lehmann, E.H.; Pitcher, E.J.; McKinney, G.W.

    2003-01-01

    Modeling of the Swiss Spallation Neutron Source (SINQ) has been used to demonstrate the neutron radiography capability of the newly released MPI-version of the MCNPX Monte Carlo code. A detailed MCNPX model was developed of SINQ and its associated neutron transmission radiography (NEUTRA) facility. Preliminary validation of the model was performed by comparing the calculated and measured neutron fluxes in the NEUTRA beam line, and a simulated radiography image was generated for a sample consisting of steel tubes containing different materials. This paper describes the SINQ facility, provides details of the MCNPX model, and presents preliminary results of the neutron imaging. (authors)
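The physics behind such a transmission radiograph of tube samples can be sketched with a simple Beer-Lambert ray model; the attenuation coefficients and geometry below are illustrative placeholders, not values from the SINQ/NEUTRA model:

```python
import numpy as np

# Linear attenuation coefficients (cm^-1) -- illustrative values only,
# not thermal-neutron cross-section data from the paper.
MU = {"steel": 1.1, "water": 3.5, "air": 0.0}

def tube_radiograph(outer_r, inner_r, fill, n=101, pitch=0.05):
    """One detector row of a parallel-beam transmission image through a
    cylindrical steel tube (radii in cm) filled with `fill` material.
    Returns transmitted intensity I/I0 per pixel via Beer-Lambert:
    I/I0 = exp(-sum_i mu_i * t_i) along each ray.
    """
    xs = (np.arange(n) - n // 2) * pitch          # lateral ray positions
    def chord(r):                                  # path length through a disc
        return np.where(np.abs(xs) < r,
                        2.0 * np.sqrt(np.maximum(r * r - xs * xs, 0.0)),
                        0.0)
    t_outer, t_inner = chord(outer_r), chord(inner_r)
    path = MU["steel"] * (t_outer - t_inner) + MU[fill] * t_inner
    return np.exp(-path)

i_water = tube_radiograph(1.0, 0.8, "water")
i_air = tube_radiograph(1.0, 0.8, "air")
# A water-filled tube transmits less through the centre than an empty one,
# which is the contrast mechanism such a radiography simulation reproduces.
```

A full Monte Carlo treatment like MCNPX adds scattering and detector response on top of this attenuation-only picture.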

  8. Rapid anatomical brain imaging using spiral acquisition and an expanded signal model.

    Science.gov (United States)

    Kasper, Lars; Engel, Maria; Barmet, Christoph; Haeberlin, Maximilian; Wilm, Bertram J; Dietrich, Benjamin E; Schmid, Thomas; Gross, Simon; Brunner, David O; Stephan, Klaas E; Pruessmann, Klaas P

    2018-03-01

    We report the deployment of spiral acquisition for high-resolution structural imaging at 7T. Long spiral readouts are rendered manageable by an expanded signal model including static off-resonance and B0 dynamics along with k-space trajectories and coil sensitivity maps. Image reconstruction is accomplished by inversion of the signal model using an extension of the iterative non-Cartesian SENSE algorithm. Spiral readouts of up to 25 ms are shown to permit whole-brain 2D imaging at 0.5 mm in-plane resolution in less than a minute. A range of options is explored, including proton-density and T2* contrast, acceleration by parallel imaging, different readout orientations, and the extraction of phase images. Results are shown to exhibit competitive image quality along with high geometric consistency. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2016-06-01

    Full Text Available A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response, and autonomous navigation. The concept of continuous city modeling is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using context-based geometric hashing (CGH) to align a single image with existing 3D building models. The registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating the exterior orientation parameters (EOPs) of the single image. For feature extraction, we propose two types of matching cues: edged corner features, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then determined by maximizing a matching cost encoding contextual similarity between matching candidates. The final matched corners are used to adjust the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable EOP accuracy for a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  10. Unified Probabilistic Models for Face Recognition from a Single Example Image per Person

    Institute of Scientific and Technical Information of China (English)

    Pin Liao; Li Shen

    2004-01-01

    This paper presents a new technique of unified probabilistic models for face recognition from only a single example image per person. The unified models, trained on a training set with multiple samples per person, are used to recognize facial images from another, disjoint database with a single sample per person. Variations between facial images are modeled as two unified probabilistic models: within-class variations and between-class variations. Gaussian mixture models are used to approximate the distributions of the two variations, and a classifier combination method is exploited to improve performance. Extensive experimental results on the ORL face database and the authors' database (the ICT-JDL database), together comprising 1,750 facial images of 350 individuals, demonstrate that the proposed technique, compared with the traditional eigenface method and other well-known algorithms, is a significantly more effective and robust approach for face recognition.

  11. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    Science.gov (United States)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    With the increase of resolution, remote sensing images carry a greater information load, more noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper presents a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to the individual building is realized. Experiments show that this method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadow, vegetation, and other pseudo-building information; compared with traditional pixel-level image information extraction, it performs better in terms of building extraction precision, accuracy, and completeness.

  12. Generation of synthetic Kinect depth images based on empirical noise model

    DEFF Research Database (Denmark)

    Iversen, Thorbjørn Mosekjær; Kraft, Dirk

    2017-01-01

    The development, training, and evaluation of computer vision algorithms rely on the availability of a large number of images. The acquisition of these images can be time-consuming if they are recorded using real sensors. An alternative is to rely on synthetic images, which can be generated rapidly. This Letter describes a novel method for the simulation of Kinect v1 depth images. The method is based on an existing empirical noise model from the literature. The authors show that their relatively simple method is able to provide depth images that have a high similarity with real depth images.
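A minimal version of such an empirically grounded simulation might look like this, using the widely cited Kinect v1 axial-noise law sigma_z ~ 1.5e-5 * z^2 mm from Khoshelham & Elberink (2012); whether this matches the Letter's exact noise model is an assumption:

```python
import numpy as np

def add_kinect_axial_noise(depth_mm, rng=None):
    """Add depth-dependent Gaussian axial noise to a synthetic depth map.

    Uses the empirical Kinect v1 model sigma_z = 1.5e-5 * z^2 (z in mm)
    reported by Khoshelham & Elberink (2012); pixels with depth 0
    (no sensor return) are left untouched.
    """
    rng = np.random.default_rng(rng)
    sigma = 1.5e-5 * depth_mm.astype(float)**2
    noisy = depth_mm + rng.normal(0.0, 1.0, depth_mm.shape) * sigma
    return np.where(depth_mm > 0, noisy, 0.0)

depth = np.full((64, 64), 2000.0)      # flat wall 2 m from the sensor
noisy = add_kinect_axial_noise(depth, rng=0)
print(round(float(noisy.std()), 1))    # close to 60 mm: 1.5e-5 * 2000^2
```

A complete simulator would add the lateral noise, quantization, and occlusion/shadow effects that the real sensor also exhibits.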

  13. Apoptosis imaging studies in various animal models using radio-iodinated peptide.

    Science.gov (United States)

    Kwak, Wonjung; Ha, Yeong Su; Soni, Nisarg; Lee, Woonghee; Park, Se-Il; Ahn, Heesu; An, Gwang Il; Kim, In-San; Lee, Byung-Heon; Yoo, Jeongsoo

    2015-01-01

    Apoptosis plays a role in many medical disorders and treatments; hence, its non-invasive evaluation is one of the most riveting research topics. Currently, annexin V is used as the gold standard for imaging apoptosis. However, several drawbacks, including high background and slow body clearance, make it a suboptimal marker for apoptosis imaging. In this study, we radiolabeled the recently identified histone H1 targeting peptide (ApoPep-1) and evaluated its potential as a new apoptosis imaging agent in various animal models. ApoPep-1 (CQRPPR) was synthesized, and an extra tyrosine residue was added to its N-terminal end for radiolabeling. The peptide was radiolabeled with (124)I and (131)I and tested for its serum stability. Surgery- and drug-induced apoptotic rat models were prepared for apoptosis evaluation, and PET imaging was performed. Doxorubicin was used for xenograft tumor treatment in mice, and the induced apoptosis was studied. Tumor metabolism and proliferation were assessed by [(18)F]FDG and [(18)F]FLT PET imaging and compared with ApoPep-1 after doxorubicin treatment. The peptide was radiolabeled at high purity and showed reasonably good stability in serum. Cell death was easily imaged by radiolabeled ApoPep-1 in an ischemia surgery model, and liver apoptosis was more clearly identified by ApoPep-1 than by [(124)I]annexin V in cycloheximide-treated models. Three doxorubicin doses inhibited tumor growth, reflected in 30-40% decreases of [(18)F]FDG and [(18)F]FLT PET uptake in the tumor area. However, ApoPep-1 demonstrated a more than 200% increase in tumor uptake after chemotherapy, while annexin V did not show any meaningful tumor uptake compared with the background. Biodistribution data were also in good agreement with the microPET imaging results. All of the experimental data clearly demonstrate the high potential of radiolabeled ApoPep-1 for in vivo apoptosis imaging.

  14. Yoga and positive body image: A test of the Embodiment Model.

    Science.gov (United States)

    Mahlo, Leeann; Tiggemann, Marika

    2016-09-01

    The study aimed to test the Embodiment Model of Positive Body Image (Menzel & Levine, 2011) within the context of yoga. Participants were 193 yoga practitioners (124 Iyengar, 69 Bikram) and 127 university students (non-yoga participants) from Adelaide, South Australia. Participants completed questionnaire measures of positive body image, embodiment, self-objectification, and desire for thinness. Results showed yoga practitioners scored higher on positive body image and embodiment, and lower on self-objectification than non-yoga participants. In support of the embodiment model, the relationship between yoga participation and positive body image was serially mediated by embodiment and reduced self-objectification. Although Bikram practitioners endorsed appearance-related reasons for participating in yoga more than Iyengar practitioners, there were no significant differences between Iyengar and Bikram yoga practitioners on body image variables. It was concluded that yoga is an embodying activity that can provide women with the opportunity to cultivate a favourable relationship with their body. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines

    Directory of Open Access Journals (Sweden)

    Ibrahim Baz

    2008-04-01

    Full Text Available This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for the automatic vectorization of raster images with straight lines. The algorithm implements line thinning and simple neighborhood methods to perform vectorization. The model allows users to define criteria that are crucial to the vectorization process. Various raster images can be vectorized with this model, such as township plans, maps, architectural drawings, and machine plans. The algorithm was implemented in an appropriate programming language and tested on a basic application. Results, verified using two well-known vectorization programs (WinTopo and Scan2CAD), indicated that the model can vectorize the specified raster data quickly and accurately.

  16. Image-based Modeling of PSF Deformation with Application to Limited Angle PET Data

    Science.gov (United States)

    Matej, Samuel; Li, Yusheng; Panetta, Joseph; Karp, Joel S.; Surti, Suleman

    2016-01-01

    The point-spread functions (PSFs) of reconstructed images can be deformed by detector effects such as resolution blurring and parallax error, by data acquisition geometry such as insufficient sampling or limited angular coverage in dual-panel PET systems, or by reconstruction imperfections and simplifications. PSF deformation decreases quantitative accuracy, and its spatial variation lowers the consistency of lesion uptake measurement across the imaging field-of-view (FOV). This can be a significant problem with dual-panel PET systems even when using TOF data and image reconstruction models of the detector and data acquisition process. To correct for the spatially variant reconstructed PSF distortions, we propose to use an image-based resolution model (IRM) that includes such image PSF deformation effects. Originally the IRM was mostly used to approximate data resolution effects of standard PET systems with full angular coverage in a computationally efficient way, but recently it has also been used to mitigate the effects of simplified geometric projectors. Our work goes beyond this by including in the IRM reconstruction imperfections caused by the combination of limited angular coverage, parallax errors, and any other (residual) deformation effects, and by testing it on challenging dual-panel data with strongly asymmetric and variable PSF deformations. We applied and tested these concepts using simulated data based on our design for a dedicated breast imaging geometry (B-PET) consisting of dual-panel, time-of-flight (TOF) detectors. We compared two image-based resolution models: (i) a simple spatially invariant approximation to the PSF deformation, which captures only the general PSF shape through an elongated 3D Gaussian function, and (ii) a spatially variant model using a Gaussian mixture model (GMM) to more accurately capture the asymmetric PSF shape in images reconstructed from data acquired with the B-PET scanner geometry. Results demonstrate that while both IRMs decrease the overall uptake
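The effect of a spatially variant image-based resolution model can be sketched as below; a single anisotropic Gaussian per location stands in for the paper's Gaussian mixture, and all sizes and widths are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(sigma_y, sigma_x, half=6):
    """Normalised anisotropic Gaussian on a (2*half+1)^2 grid."""
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    k = np.exp(-(ys**2 / (2 * sigma_y**2) + xs**2 / (2 * sigma_x**2)))
    return k / k.sum()

def blur_spatially_variant(img, sigma_map, half=6):
    """Apply a spatially variant PSF: each source pixel is spread by a
    local anisotropic Gaussian. (A GMM-based IRM would sum several
    Gaussians per location; one suffices to illustrate the variation.)"""
    out = np.zeros((img.shape[0] + 2 * half, img.shape[1] + 2 * half))
    for (i, j), v in np.ndenumerate(img):
        if v == 0.0:
            continue
        sy, sx = sigma_map[i, j]
        out[i:i + 2 * half + 1, j:j + 2 * half + 1] += v * gaussian_kernel(sy, sx, half)
    return out[half:-half, half:-half]

# Two unit point sources: one where the PSF is tight, one in a region
# where limited angular coverage elongates the PSF along one axis.
img = np.zeros((32, 32))
img[10, 10] = 1.0
img[10, 22] = 1.0
sigma_map = np.full(img.shape + (2,), 0.8)
sigma_map[:, 16:] = (1.0, 2.5)        # elongated PSF in the right half
blurred = blur_spatially_variant(img, sigma_map)
```

The elongated source ends up with a lower, smeared peak while total counts are preserved, which is exactly the uptake-consistency problem the IRM is meant to correct.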

  17. Post-modelling of images from a laser-induced wavy boiling front

    Energy Technology Data Exchange (ETDEWEB)

    Matti, R.S., E-mail: ramiz.matti@ltu.se [Luleå University of Technology, Department of Engineering Sciences and Mathematics, SE-971 87 Luleå (Sweden); University of Mosul, College of Engineering, Department of Mechanical Engineering, Mosul (Iraq); Kaplan, A.F.H. [Luleå University of Technology, Department of Engineering Sciences and Mathematics, SE-971 87 Luleå (Sweden)

    2015-12-01

    Highlights: • New method: post-modelling of high-speed images from a laser-induced front. • From the images a wavy cavity and its absorption distribution are calculated. • Histograms enable additional statistical analysis and understanding. • Despite the complex topology, the absorptivity is bounded to 35–43%. • The new method visualizes valuable complementary information. - Abstract: Processes like laser keyhole welding, remote fusion laser cutting, and laser drilling are governed by a highly dynamic wavy boiling front that was recently recorded by ultra-high-speed imaging. A new approach has now been established by post-modelling of the high-speed images. Based on the image greyscale and on a cavity model, the three-dimensional front topology is reconstructed. As a second step, the Fresnel absorptivity modulation across the wavy front is calculated, combined with the local projection of the laser beam. Frequency polygons enable additional analysis of the statistical variations of the properties across the front. Trends like shadow formation and time dependency can be studied, locally and for the whole front. Despite strong topology modulation in space and time, for lasers with 1 μm wavelength and steel, the absorptivity is bounded to a narrow range of 35–43%, owing to its Fresnel characteristics.

  18. Multi-Modal Imaging in a Mouse Model of Orthotopic Lung Cancer

    OpenAIRE

    Patel, Priya; Kato, Tatsuya; Ujiie, Hideki; Wada, Hironobu; Lee, Daiyoon; Hu, Hsin-pei; Hirohashi, Kentaro; Ahn, Jin Young; Zheng, Jinzi; Yasufuku, Kazuhiro

    2016-01-01

    Background: Investigation of CF800, a novel PEGylated nano-liposomal imaging agent containing indocyanine green (ICG) and iohexol, for real-time near-infrared (NIR) fluorescence and computed tomography (CT) image-guided surgery in an orthotopic lung cancer model in nude mice. Methods: CF800 was intravenously administered into 13 mice bearing the H460 orthotopic human lung cancer. At 48 h post-injection (the peak imaging agent accumulation time point), ex vivo NIR and CT imaging was performed. A cli...

  19. The Application of Use Case Modeling in Designing Medical Imaging Information Systems

    International Nuclear Information System (INIS)

    Safdari, Reza; Farzi, Jebraeil; Ghazisaeidi, Marjan; Mirzaee, Mahboobeh; Goodini, Azadeh

    2013-01-01

    Introduction. This paper examines the application of use case modeling in analyzing and designing information systems to support medical imaging services. Methods. The application of use case modeling in analyzing and designing health information systems was examined using electronic database (PubMed, Google Scholar) resources, and the characteristics of the modeling approach and its effect on the development and design of health information systems were analyzed. Results. The analysis indicated that careful modeling of health information systems should provide quick access to many health data resources, so that patients' data can be used to expand remote services and comprehensive medical imaging advice. These experiences also show that progressing through the infrastructure development stages via a gradual, iterative evolution of user requirements is more robust, and can shorten the requirements engineering cycle in the design of medical imaging information systems. Conclusion. The use case modeling approach can be effective in steering the problems of health and medical imaging information systems towards understanding, focus during initiation and analysis, better planning, iteration, and control.

  20. Use of an object model in three dimensional image reconstruction. Application in medical imaging

    International Nuclear Information System (INIS)

    Delageniere-Guillot, S.

    1993-02-01

    Three-dimensional image reconstruction from projections corresponds to a set of techniques which give information on the inner structure of the studied object. These techniques are mainly used in medical imaging or in non-destructive evaluation. Image reconstruction is an ill-posed problem, so the inversion has to be regularized. This thesis deals with the introduction of a priori information within the reconstruction algorithm. The knowledge is introduced through an object model. The proposed scheme is applied to the medical domain for cone beam geometry. We address two specific problems. First, we study the reconstruction of high-contrast objects. This can be applied to bony morphology (bone/soft tissue) or to angiography (vascular structures opacified by injection of a contrast agent). With noisy projections, the filtering steps of standard methods tend to smooth the natural transitions of the investigated object. In order to regularize the reconstruction while keeping contrast, we introduce a model of classes based on Markov random field theory. We develop a reconstruction scheme: analytic reconstruction-reprojection. Then, we address the case of an object changing during the acquisition. This can be applied to angiography when the contrast agent is moving through the vascular tree. The problem is then stated as a dynamic reconstruction. We define an evolution AR model and we use an algebraic reconstruction method. We represent the object at a particular moment as an intermediary state between the states of the object at the beginning and at the end of the acquisition. We test both methods on simulated and real data, and we show how the use of an a priori model can improve the results. (author)

  1. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    Science.gov (United States)

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
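    The multiscale rate-ratio representation described above can be illustrated with a short sketch: adjacent bins of a Poisson count vector are summed scale by scale, and the child-to-parent count ratios are the quantities whose densities the paper models with conjugate mixtures. This is a minimal illustration of the decomposition only, not the authors' EM or HMT machinery; all data are synthetic.

```python
import numpy as np

def poisson_multiscale(counts):
    """Haar-like multiscale pyramid of Poisson counts.

    At each scale, adjacent bins are summed into a parent count; the
    ratio of the left child to its parent is the quantity whose density
    the multiscale model describes with conjugate mixtures.
    """
    counts = np.asarray(counts, dtype=float)
    scales, ratios = [counts], []
    while scales[-1].size > 1:
        c = scales[-1]
        parents = c[0::2] + c[1::2]                 # aggregate adjacent bins
        r = np.where(parents > 0, c[0::2] / np.maximum(parents, 1), 0.5)
        scales.append(parents)
        ratios.append(r)
    return scales, ratios

rng = np.random.default_rng(0)
counts = rng.poisson(lam=5.0, size=8)               # simulated photon counts
scales, ratios = poisson_multiscale(counts)
```

The coarsest scale holds the total photon count, so the decomposition loses no information; the ratios always lie in [0, 1], which is why beta (or Dirichlet, in the quad-tree case) mixtures are the natural conjugate choice.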

  2. Band co-registration modeling of LAPAN-A3/IPB multispectral imager based on satellite attitude

    Science.gov (United States)

    Hakim, P. R.; Syafrudin, A. H.; Utama, S.; Jayani, A. P. S.

    2018-05-01

    One significant geometric distortion in images from the LAPAN-A3/IPB multispectral imager is the co-registration error between the color channel detectors. Band co-registration distortion can usually be corrected using one of several approaches: a manual method, an image matching algorithm, or sensor modeling and calibration. This paper develops another approach to minimize band co-registration distortion in LAPAN-A3/IPB multispectral images, using supervised modeling of image-matching results with respect to satellite attitude. Modeling results show that the band co-registration error in the across-track axis is strongly influenced by the yaw angle, while the error in the along-track axis is fairly influenced by both the pitch and roll angles. The accuracy of the models obtained is good, with errors of 1-3 pixels in each axis for each pair of co-registered bands. This means that the model can be used to correct the distorted images without the need for a slower image matching algorithm, or the laborious effort required by the manual approach and sensor calibration. Since the calculation can be executed in seconds, this approach can be used in real-time quick-look image processing in the ground station or even in on-board satellite image processing.
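    The supervised attitude-based modeling can be sketched as an ordinary least-squares regression of the measured band offset against the attitude angles. The data, coefficients, and linear form below are hypothetical placeholders, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# hypothetical per-scene attitude angles (degrees)...
yaw   = rng.uniform(-2, 2, n)
pitch = rng.uniform(-2, 2, n)
roll  = rng.uniform(-2, 2, n)
# ...and the across-track band offset (pixels) measured by image matching;
# generated here so that it depends mainly on yaw, as the paper reports
dx = 3.0 * yaw + 0.5 + rng.normal(0.0, 0.2, n)

# supervised linear model: dx ~ c0 + c1*yaw + c2*pitch + c3*roll
A = np.column_stack([np.ones(n), yaw, pitch, roll])
coef, *_ = np.linalg.lstsq(A, dx, rcond=None)
rmse = float(np.sqrt(np.mean((A @ coef - dx) ** 2)))
```

Once fitted, evaluating the model for a new scene is a single dot product, which is what makes the seconds-scale, on-board correction plausible.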

  3. A novel modeling method for manufacturing hearing aid using 3D medical images

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyeong Gyun [Dept of Radiological Science, Far East University, Eumseong (Korea, Republic of)

    2016-06-15

    This study aimed to suggest a novel method of modeling a hearing aid ear shell based on Digital Imaging and Communications in Medicine (DICOM) data, within an ear shell manufacturing method that uses a 3D printer. In the experiment, a 3D external auditory meatus was extracted by using the critical values in the DICOM volume images, and the modeling surface structures were compared in standard STL (STereoLithography) files, which can be recognized by a 3D printer. In the conventional 3D modeling method, an ear model is prepared, and the gaps between adjacent isograms produced by a 3D scanner are filled with 3D surface fragments to express the modeling structure. In this study, the same type of triangular surface structures were prepared by using the DICOM images. The result showed that the modeling surface structure based on the DICOM images provides the same environment that conventional 3D printers recognize, ultimately enabling the hearing aid ear shell shape to be printed.

  4. A novel modeling method for manufacturing hearing aid using 3D medical images

    International Nuclear Information System (INIS)

    Kim, Hyeong Gyun

    2016-01-01

    This study aimed to suggest a novel method of modeling a hearing aid ear shell based on Digital Imaging and Communications in Medicine (DICOM) data, within an ear shell manufacturing method that uses a 3D printer. In the experiment, a 3D external auditory meatus was extracted by using the critical values in the DICOM volume images, and the modeling surface structures were compared in standard STL (STereoLithography) files, which can be recognized by a 3D printer. In the conventional 3D modeling method, an ear model is prepared, and the gaps between adjacent isograms produced by a 3D scanner are filled with 3D surface fragments to express the modeling structure. In this study, the same type of triangular surface structures were prepared by using the DICOM images. The result showed that the modeling surface structure based on the DICOM images provides the same environment that conventional 3D printers recognize, ultimately enabling the hearing aid ear shell shape to be printed

  5. Conversion of a Surface Model of a Structure of Interest into a Volume Model for Medical Image Retrieval

    Directory of Open Access Journals (Sweden)

    Sarmad ISTEPHAN

    2015-06-01

    Volumetric medical image datasets contain vital information for noninvasive diagnosis, treatment planning and prognosis. However, direct and unlimited query of such datasets is hindered by the unstructured nature of the imaging data. This study is a step towards the unlimited query of medical image datasets by focusing on specific Structures of Interest (SOI). A requirement in achieving this objective is having both the surface and volume models of the SOI. Typically, however, only the surface model is available. Therefore, this study focuses on creating a fast method to convert a surface model to a volume model. Three methods (1D, 2D and 3D) are proposed and evaluated using simulated and real data of the Deep Perisylvian Area (DPSA) within the human brain. The 1D method takes 80 msec for the DPSA model; about 4 times faster than the 2D method and 7.4-fold faster than the 3D method, with over 97% accuracy. The proposed 1D method is feasible for surface-to-volume conversion in computer-aided diagnosis, treatment planning and prognosis systems containing large amounts of unstructured medical images.
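    A per-scan-line surface-to-volume conversion in the spirit of the 1D method can be sketched as follows; the implementation, and the assumption that each scan line crosses the surface in a single entry/exit pair, are ours, not the paper's.

```python
import numpy as np

def surface_to_volume_1d(shell):
    """Convert a binary surface (shell) mask to a solid volume.

    Each line along the last axis is scanned independently; voxels
    between the first and last surface hits are marked interior.
    Assumes every scan line crosses the surface at most once on entry
    and once on exit (true for shapes convex along the scan axis).
    """
    vol = np.zeros_like(shell)
    nx, ny, _ = shell.shape
    for i in range(nx):
        for j in range(ny):
            hits = np.flatnonzero(shell[i, j, :])
            if hits.size:
                vol[i, j, hits[0]:hits[-1] + 1] = 1
    return vol

# toy check: two opposing 4x4 faces at z=2 and z=5 fill to a 4x4x4 block
shell = np.zeros((8, 8, 8), dtype=np.uint8)
shell[2:6, 2:6, 2] = 1
shell[2:6, 2:6, 5] = 1
vol = surface_to_volume_1d(shell)
```

Because each line is processed independently with a single pass, the cost is linear in the number of voxels, which is consistent with a 1D scheme outrunning 2D and 3D alternatives.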

  6. Modeling & imaging of bioelectrical activity principles and applications

    CERN Document Server

    He, Bin

    2010-01-01

    Over the past several decades, much progress has been made in understanding the mechanisms of electrical activity in biological tissues and systems, and for developing non-invasive functional imaging technologies to aid clinical diagnosis of dysfunction in the human body. The book will provide full basic coverage of the fundamentals of modeling of electrical activity in various human organs, such as heart and brain. It will include details of bioelectromagnetic measurements and source imaging technologies, as well as biomedical applications. The book will review the latest trends in

  7. A data model and database for high-resolution pathology analytical image informatics.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming

  8. A data model and database for high-resolution pathology analytical image informatics

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2011-01-01

    Background: The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. Context: This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). Aims: (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. Settings and Design: The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole

  9. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome.

    Science.gov (United States)

    Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina

    2018-01-01

    The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.

  10. 3D/2D model-to-image registration by imitation learning for cardiac procedures.

    Science.gov (United States)

    Toth, Daniel; Miao, Shun; Kurzendorfer, Tanja; Rinaldi, Christopher A; Liao, Rui; Mansi, Tommaso; Rhode, Kawal; Mountney, Peter

    2018-05-12

    In cardiac interventions, such as cardiac resynchronization therapy (CRT), image guidance can be enhanced by involving preoperative models. Multimodality 3D/2D registration for image guidance, however, remains a significant research challenge for fundamentally different image data, i.e., MR to X-ray. Registration methods must account for differences in intensity, contrast levels, resolution, dimensionality, and field of view. Furthermore, the same anatomical structures may not be visible in both modalities. Current approaches have focused on developing modality-specific solutions for individual clinical use cases, by introducing constraints, or by identifying cross-modality information manually. Machine learning approaches have the potential to create more general registration platforms. However, training image-to-image methods would require large multimodal datasets and ground truth for each target application. This paper proposes a model-to-image registration approach instead, because it is common in image-guided interventions to create anatomical models for diagnosis, planning, or guidance prior to procedures. An imitation learning-based method, trained on 702 datasets, is used to register preoperative models to intraoperative X-ray images. Accuracy is demonstrated on cardiac models and artificial X-rays generated from CTs. The registration error was [Formula: see text] on 1000 test cases, superior to that of manual ([Formula: see text]) and gradient-based ([Formula: see text]) registration. High robustness is shown in 19 clinical CRT cases. Besides the proposed method's feasibility in a clinical environment, the evaluation has shown good accuracy and high robustness, indicating that it could be applied in image-guided interventions.

  11. Short-Term Solar Irradiance Forecasts Using Sky Images and Radiative Transfer Model

    Directory of Open Access Journals (Sweden)

    Juan Du

    2018-05-01

    In this paper, we propose a novel forecast method which addresses the difficulty in short-term solar irradiance forecasting that arises from rapidly evolving environmental factors over short time periods. It forecasts Global Horizontal Irradiance (GHI) by combining predicted sky images with a Radiative Transfer Model (RTM). The prediction images (up to 10 min ahead) are produced by a non-local optical flow method, which is used to calculate the cloud motion for each pixel, from consecutive sky images at 1 min intervals. The Direct Normal Irradiance (DNI) and the diffuse radiation intensity field under clear-sky and overcast conditions obtained from the RTM are then mapped to the sky images. By combining the cloud locations on the prediction image with the corresponding instance of the image-based DNI and diffuse radiation intensity fields, the GHI can be quantitatively forecasted for time horizons of 1–10 min ahead. The solar forecasts are evaluated in terms of root mean square error (RMSE) and mean absolute error (MAE) against in-situ measurements, and compared to the performance of the persistence model. The results of our experiment show that GHI forecasts using the proposed method perform better than the persistence model.
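    The persistence baseline used for comparison, together with the RMSE and MAE metrics, is straightforward to sketch (the GHI values below are made-up placeholders):

```python
import numpy as np

def persistence_forecast(ghi, horizon):
    """Persistence model: forecast GHI at time t+h as the value at t."""
    return ghi[:-horizon]

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def mae(pred, obs):
    return float(np.mean(np.abs(pred - obs)))

# made-up 1-min GHI measurements (W/m^2)
ghi = np.array([500.0, 510.0, 490.0, 480.0, 500.0, 520.0, 530.0, 525.0])
h = 1                                       # 1-min-ahead horizon
pred, obs = persistence_forecast(ghi, h), ghi[h:]
```

Persistence is a deliberately naive baseline: it is hard to beat at very short horizons under stable skies, so outperforming it under moving cloud is the meaningful claim.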

  12. Cardiac magnetic source imaging based on current multipole model

    International Nuclear Information System (INIS)

    Tang Fa-Kuan; Wang Qian; Hua Ning; Lu Hong; Tang Xue-Zheng; Ma Ping

    2011-01-01

    It is widely accepted that the heart's current source can be reduced to a current multipole. By adopting three linear inverse methods, cardiac magnetic imaging is achieved in this article based on the current multipole model expanded to the first-order terms. This magnetic imaging is realized in a reconstruction plane in the centre of the human heart, where a current dipole array is employed to represent the realistic cardiac current distribution. The current multipole, as the testing source, generates magnetic fields in the measuring plane, which serve as inputs to the cardiac magnetic inverse problem. In the heart-torso model constructed by the boundary element method, the current multipole magnetic field distribution is compared with that in homogeneous infinite space, and also with the single current dipole magnetic field distribution. Then the minimum-norm least-squares (MNLS) method, the optimal weighted pseudoinverse method (OWPIM), and the optimal constrained linear inverse method (OCLIM) are selected as the algorithms for inverse computation based on the current multipole model, and the imaging effects of these three inverse methods are compared. Besides, two reconstruction parameters, the residual and the mean residual, are also discussed, and their trends under MNLS, OWPIM and OCLIM, each as a function of SNR, are obtained and compared.
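    Of the three inverse methods, the minimum-norm least-squares (MNLS) solution is the simplest to sketch: with a linear forward model it reduces to applying the Moore-Penrose pseudoinverse of the lead-field matrix. The matrix sizes and data below are hypothetical, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical lead-field matrix: 64 magnetic sensors, 12 multipole parameters
L = rng.normal(size=(64, 12))
q_true = rng.normal(size=12)            # "true" source parameters
B = L @ q_true                          # noiseless measured field

# MNLS: minimum-norm least-squares solution via the pseudoinverse
q_hat = np.linalg.pinv(L) @ B
residual = float(np.linalg.norm(B - L @ q_hat))
```

With noiseless data and a full-column-rank lead field the recovery is exact; the interesting regime studied in the paper is how the residual degrades as the SNR drops.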

  13. Monte Carlo modeling of neutron and gamma-ray imaging systems

    International Nuclear Information System (INIS)

    Hall, J.

    1996-04-01

    Detailed numerical prototypes are essential to the design of efficient and cost-effective neutron and gamma-ray imaging systems. We have exploited the unique capabilities of an LLNL-developed radiation transport code (COG) to develop code modules capable of simulating the performance of neutron and gamma-ray imaging systems over a wide range of source energies. COG allows us to simulate complex, energy-, angle-, and time-dependent radiation sources, model 3-dimensional system geometries with "real world" complexity, specify detailed elemental and isotopic distributions, and predict the responses of various types of imaging detectors with full Monte Carlo accuracy. COG references detailed, evaluated nuclear interaction databases, allowing users to account for multiple scattering, energy straggling, and secondary particle production phenomena which may significantly affect the performance of an imaging system but may be difficult or even impossible to estimate using simple analytical models. This work presents examples illustrating the use of these routines in the analysis of industrial radiographic systems for thick target inspection, nonintrusive luggage and cargo scanning systems, and international treaty verification

  14. Model-based imaging of cardiac electrical function in human atria

    Science.gov (United States)

    Modre, Robert; Tilg, Bernhard; Fischer, Gerald; Hanser, Friedrich; Messnarz, Bernd; Schocke, Michael F. H.; Kremser, Christian; Hintringer, Florian; Roithinger, Franz

    2003-05-01

    Noninvasive imaging of electrical function in the human atria is attained by the combination of data from electrocardiographic (ECG) mapping and magnetic resonance imaging (MRI). An anatomical computer model of the individual patient is the basis for our computer-aided diagnosis of cardiac arrhythmias. Three patients, suffering from Wolff-Parkinson-White syndrome, paroxysmal atrial fibrillation, and atrial flutter, underwent an electrophysiological study. After successful treatment of the cardiac arrhythmia with an invasive catheter technique, pacing protocols with stimuli at several anatomical sites (coronary sinus, left and right pulmonary vein, posterior site of the right atrium, right atrial appendage) were performed. Reconstructed activation time (AT) maps were validated with catheter-based electroanatomical data, with invasively determined pacing sites, and with pacing at anatomical markers. The individual complex anatomical model of the atria of each patient, in combination with high-quality mesh optimization, enables accurate AT imaging, resulting in a localization error for the estimated pacing sites within 1 cm. Our findings may have implications for imaging of atrial activity in patients with focal arrhythmias.

  15. Heterogeneous Breast Phantom Development for Microwave Imaging Using Regression Models

    Directory of Open Access Journals (Sweden)

    Camerin Hahn

    2012-01-01

    As new algorithms for microwave imaging emerge, it is important to have standard, accurate benchmarking tests. Currently, most researchers use homogeneous phantoms for testing new algorithms. These simple structures lack the heterogeneity of the dielectric properties of human tissue and are inadequate for testing these algorithms for medical imaging. To adequately test breast microwave imaging algorithms, the phantom has to resemble different breast tissues physically and in terms of dielectric properties. We propose a systematic approach to designing phantoms that not only have dielectric properties close to breast tissues but also can be easily shaped into realistic physical models. The approach is based on a regression model that matches the phantom's dielectric properties with the breast tissue dielectric properties found in Lazebnik et al. (2007). However, the methodology proposed here can be used to create phantoms for any tissue type, as long as ex vivo, in vitro, or in vivo tissue dielectric properties are measured and available. Therefore, using this method, accurate benchmarking phantoms for testing emerging microwave imaging algorithms can be developed.

  16. Power laws and inverse motion modelling: application to turbulence measurements from satellite images

    Directory of Open Access Journals (Sweden)

    Pablo D. Mininni

    2012-01-01

    In the context of tackling the ill-posed inverse problem of motion estimation from image sequences, we propose to introduce prior knowledge on flow regularity given by turbulence statistical models. Prior regularity is formalised using turbulence power laws describing the statistically self-similar structure of motion increments across scales. The motion estimation method minimises the error of an image observation model while constraining the second-order structure function to behave as a power law within a prescribed range. Thanks to a Bayesian modelling framework, the motion estimation method is able to jointly infer the most likely power law directly from image data. The method is assessed on velocity fields of 2-D or quasi-2-D flows. Estimation accuracy is first evaluated on a synthetic image sequence of homogeneous and isotropic 2-D turbulence. Results obtained with this physics-of-fluids approach outperform the state of the art. The method is then used to analyse atmospheric turbulence in a real meteorological image sequence. Selecting the most likely power law model enables the recovery of physical quantities which are of major interest for atmospheric turbulence characterisation. In particular, from meteorological images we are able to estimate the energy and enstrophy fluxes of turbulent cascades, which are in agreement with previous in situ measurements.
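    The second-order structure function that the method constrains to a power law can be sketched in one dimension; the exponent is then recovered from the log-log slope. The linear-ramp test field below is our own sanity check (its exact exponent is 2), not the paper's turbulence data.

```python
import numpy as np

def structure_function(u, seps):
    """Second-order structure function S2(r) = <(u(x+r) - u(x))^2> in 1-D."""
    return np.array([np.mean((u[s:] - u[:-s]) ** 2) for s in seps])

# sanity check on a field with known scaling: for the linear ramp u(x) = x
# every increment over separation r equals r, so S2(r) = r^2 exactly
u = np.arange(256, dtype=float)
seps = np.array([1, 2, 4, 8, 16])
S2 = structure_function(u, seps)

# infer the power-law exponent zeta from the log-log slope
zeta, _ = np.polyfit(np.log(seps), np.log(S2), 1)
```

In the paper's Bayesian framework the exponent is not fitted after the fact like this, but inferred jointly with the motion field; the sketch only shows what quantity the power-law prior constrains.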

  17. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    Science.gov (United States)

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereolithography (STL) file format, and the 3dMD model was exported in Virtual Reality Modeling Language (VRML) file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors are more than 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to be an accurate, realistic, and widespread tool, and is of great benefit to virtual face modeling.

  18. Perona Malik anisotropic diffusion model using Peaceman Rachford scheme on digital radiographic image

    International Nuclear Information System (INIS)

    Halim, Suhaila Abd; Razak, Rohayu Abd; Ibrahim, Arsmah; Manurung, Yupiter HP

    2014-01-01

    In image processing, it is important to remove noise without affecting the image structure, while preserving all the edges. Perona-Malik Anisotropic Diffusion (PMAD) is a PDE-based model suitable for image denoising and edge detection problems. In this paper, the Peaceman-Rachford scheme is applied to PMAD to remove unwanted noise, as the scheme is efficient and unconditionally stable. The capability of the scheme to remove noise is evaluated on several digital radiography weld defect images computed using MATLAB R2009a. Experimental results show that the Peaceman-Rachford scheme improves the image quality substantially, based on the Peak Signal to Noise Ratio (PSNR). The Peaceman-Rachford scheme used in solving the PMAD model successfully removes unwanted noise in digital radiographic images
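    The PMAD model itself can be sketched with a simple explicit scheme; note that the paper instead uses the unconditionally stable Peaceman-Rachford ADI splitting, which this sketch does not implement, and the parameters are illustrative only.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=20.0, dt=0.2):
    """Explicit-scheme Perona-Malik anisotropic diffusion.

    g(|grad|) = exp(-(|grad|/kappa)^2) suppresses diffusion across
    strong edges while smoothing low-gradient (noisy) regions.  This
    explicit update is stable only for dt <= 0.25, unlike the
    unconditionally stable Peaceman-Rachford splitting.
    """
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u,  1, axis=0) - u     # differences to 4 neighbours
        ds = np.roll(u, -1, axis=0) - u     # (periodic boundaries)
        de = np.roll(u,  1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(4)
clean = np.zeros((32, 32)); clean[:, 16:] = 100.0   # step edge
noisy = clean + rng.normal(0.0, 10.0, clean.shape)
denoised = perona_malik(noisy)
```

On the synthetic step-edge image the noise gradients stay below kappa and are smoothed away, while the 100-level edge gradient keeps g near zero and is preserved, which is exactly the denoise-while-keeping-edges behaviour the abstract describes.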

  19. Perona Malik anisotropic diffusion model using Peaceman Rachford scheme on digital radiographic image

    Energy Technology Data Exchange (ETDEWEB)

    Halim, Suhaila Abd; Razak, Rohayu Abd; Ibrahim, Arsmah [Center of Mathematics Studies, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, 40450 Shah Alam. Selangor DE (Malaysia); Manurung, Yupiter HP [Advanced Manufacturing Technology Excellence Center (AMTEx), Faculty of Mechanical Engineering, Universiti Teknologi MARA, 40450 Shah Alam. Selangor DE (Malaysia)

    2014-06-19

    In image processing, it is important to remove noise without affecting the image structure, while preserving all the edges. Perona-Malik Anisotropic Diffusion (PMAD) is a PDE-based model suitable for image denoising and edge detection problems. In this paper, the Peaceman-Rachford scheme is applied to PMAD to remove unwanted noise, as the scheme is efficient and unconditionally stable. The capability of the scheme to remove noise is evaluated on several digital radiography weld defect images computed using MATLAB R2009a. Experimental results show that the Peaceman-Rachford scheme improves the image quality substantially, based on the Peak Signal to Noise Ratio (PSNR). The Peaceman-Rachford scheme used in solving the PMAD model successfully removes unwanted noise in digital radiographic images.

  20. BUILDING DETECTION USING AERIAL IMAGES AND DIGITAL SURFACE MODELS

    Directory of Open Access Journals (Sweden)

    J. Mu

    2017-05-01

    In this paper, a method for building detection in aerial images based on variational inference of logistic regression is proposed. It consists of three steps. In order to characterize the appearance of buildings in aerial images, an effective bag-of-words (BoW) method is applied for feature extraction in the first step. In the second step, a logistic regression classifier is learned using these local features. The logistic regression can be trained using different methods; in this paper we adopt a fully Bayesian treatment for learning the classifier, which has a number of obvious advantages over other learning methods. Due to the presence of a hyperprior in the probabilistic model of logistic regression, approximate inference methods have to be applied for prediction. In order to speed up the inference, a variational inference method based on mean field, instead of stochastic approximation such as Markov Chain Monte Carlo, is applied. After the prediction, a probabilistic map is obtained. In the third step, a fully connected conditional random field model is formulated, and the probabilistic map is used as the data term in the model. A mean field inference is utilized in order to obtain a binary building mask. A benchmark dataset consisting of aerial images and digital surface models (DSM) released by ISPRS for 2D semantic labeling is used for performance evaluation. The results demonstrate the effectiveness of the proposed method.

  1. Sub-component modeling for face image reconstruction in video communications

    Science.gov (United States)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired networks, such as cable or Ethernet, and over wireless networks to cell phones and portable game systems. Such communications systems require sophisticated compression and error-resilient encoding to enable communication across band-limited and noisy delivery channels, and the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the channel is characterized by a probabilistic model describing its capacity or fidelity; the implication is that information is lost or distorted in the channel and requires concealment on the receiving end. We demonstrate a generative-model-based transmission scheme to compress human face images in video, which offers a potentially higher compression ratio while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using it to reconstruct face images on the receiving end. We propose a sub-component active appearance model (AAM) that models the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using weighted and non-weighted versions of the sub-component AAM.

  2. Supervised variational model with statistical inference and its application in medical image segmentation.

    Science.gov (United States)

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise-constant or piecewise-smooth intensities within segments, which is implausible for general medical image segmentation. Furthermore, low contrast and noise make it difficult for edge-based level set algorithms to identify the boundaries between foreground and background. To address these problems, we propose a supervised variational level set segmentation model that harnesses a statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions with a mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm intuitively provides better performance on noisy images. We construct a weighted probability map on graphs to incorporate spatial indications from user input, with a contextual constraint based on minimizing a contextual graph energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries, and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient of the methods compared.
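
    The mixture-of-mixtures density evaluates, for an intensity x, a weighted sum of Gaussian mixtures, one sub-mixture per region class. A toy one-dimensional sketch; all weights and parameters here are placeholders for values that would be fitted to the image.

```python
import numpy as np

def gauss(x, mu, var):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def mixture_of_mixtures_pdf(x, weights, components):
    """p(x) = sum_k w_k * sum_j pi_kj N(x; mu_kj, var_kj).

    `components[k]` is a list of (pi, mu, var) triples for sub-mixture k;
    `weights[k]` is the outer weight of that sub-mixture. Illustrative
    only; the paper fits these parameters to real intensity data.
    """
    p = 0.0
    for w, comp in zip(weights, components):
        p += w * sum(pi * gauss(x, mu, var) for pi, mu, var in comp)
    return p
```

    Nesting mixtures this way lets a single "foreground" class cover several intensity modes, which a single Gaussian per class cannot.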

  3. Efficient image duplicated region detection model using sequential block clustering

    Czech Academy of Sciences Publication Activity Database

    Sekeh, M. A.; Maarof, M. A.; Rohani, M. F.; Mahdian, Babak

    2013-01-01

    Vol. 10, No. 1 (2013), pp. 73-84. ISSN 1742-2876. Institutional support: RVO:67985556. Keywords: Image forensics * Copy-paste forgery * Local block matching. Subject RIV: IN - Informatics, Computer Science. Impact factor: 0.986, year: 2013. http://library.utia.cas.cz/separaty/2013/ZOI/mahdian-efficient image duplicated region detection model using sequential block clustering.pdf

  4. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    Science.gov (United States)

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method

  5. The monocular visual imaging technology model applied in the airport surface surveillance

    Science.gov (United States)

    Qin, Zhe; Wang, Jian; Huang, Chao

    2013-08-01

    At present, civil aviation airports use surface surveillance radar monitoring and positioning systems to monitor aircraft, vehicles, and other moving objects. Surface surveillance radars can cover most of the airport scene, but because of the geometry of terminals, covered bridges, and other buildings, they inevitably have some small blind spots. This paper presents a monocular visual imaging technology model for airport surface surveillance that perceives the location of moving objects in the scene, such as aircraft, vehicles, and personnel. This new model provides an important complement to airport surface surveillance and differs from traditional surface surveillance radar techniques: it not only provides a clear display of object activity for air traffic control (ATC), but also supports image recognition and positioning of moving targets in the area, thereby improving the efficiency of airport operations and helping to avoid conflicts between aircraft and vehicles. The paper first introduces the monocular visual imaging technology model applied to airport surface surveillance and then analyzes the measurement accuracy of the model. The monocular visual imaging technology model is simple, low cost, and highly efficient. It is an advanced monitoring technique that can cover the blind-spot areas of surface surveillance radar monitoring and positioning systems.

  6. Model-Based Photoacoustic Image Reconstruction using Compressed Sensing and Smoothed L0 Norm

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-01-01

    Photoacoustic imaging (PAI) is a novel medical imaging modality that combines the spatial resolution of ultrasound imaging with the high contrast of pure optical imaging. Analytical algorithms are usually employed to reconstruct the photoacoustic (PA) images because of their simple implementation; however, they produce images of low accuracy. Model-based (MB) algorithms are used to improve the image quality and accuracy while a large number of transducers and data acquisition a...
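
    For context, the smoothed-L0 idea named in the title replaces the L0 count with a smooth Gaussian surrogate whose width sigma is annealed; each gradient step shrinks small entries and is projected back onto the data-consistency constraint Ax = y. The hyperparameters and the pseudoinverse projection below are illustrative choices, not taken from the paper.

```python
import numpy as np

def sl0(A, y, sigma_min=0.01, sigma_decay=0.5, L=3, mu=2.0):
    """Smoothed-L0 sparse recovery sketch.

    Starting from the minimum-energy solution, alternate small gradient
    steps on the smoothed L0 measure with projection onto A x = y, while
    annealing the smoothing width sigma toward zero.
    """
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-L2-norm starting point
    sigma = 2.0 * np.abs(x).max()
    while sigma > sigma_min:
        for _ in range(L):
            # gradient step: shrinks entries with |x| << sigma
            x = x - mu * x * np.exp(-x ** 2 / (2.0 * sigma ** 2))
            # project back onto the measurement constraint A x = y
            x = x - A_pinv @ (A @ x - y)
        sigma *= sigma_decay
    return x
```

    The final projection guarantees that the returned vector is consistent with the measurements regardless of how far the annealing has progressed.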

  7. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

    Full Text Available The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, optimal linear interpolation thresholding algorithm (OLI-Shrink is employed to guarantee a gentler thresholding effect. The results of comparative experiments conducted indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM index values that are comparable to those of the block-matching 3D transformation (BM3D method.
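
    The thresholding stage can be illustrated in outline. The sketch below shows classical soft thresholding alongside a generic interpolated "gentler" shrinkage in the spirit of OLI-Shrink; the paper's actual threshold function is derived from the NIG prior via Bayesian MAP estimation, and `alpha` here is a hypothetical blending weight, not the paper's formula.

```python
import numpy as np

def soft_threshold(c, t):
    """Classical soft thresholding of transform coefficients."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def gentle_shrink(c, t, alpha=0.5):
    """Hypothetical interpolated shrinkage: blend the raw coefficient with
    its hard-thresholded value instead of cutting abruptly, giving a
    gentler transition around the threshold t."""
    hard = np.where(np.abs(c) > t, c, 0.0)
    return alpha * c + (1.0 - alpha) * hard
```

    Linear interpolation keeps small sub-threshold coefficients partially alive instead of zeroing them, which is the kind of "gentler thresholding effect" the abstract attributes to OLI-Shrink.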

  8. Moving object detection using dynamic motion modelling from UAV aerial images.

    Science.gov (United States)

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2014-01-01

    Motion-analysis-based moving object detection from UAV aerial images is still an unsolved issue because proper motion estimation is not considered. Existing moving object detection approaches for UAV aerial images do not use motion-based pixel intensity measurement to detect moving objects robustly. Besides, current research on moving object detection from UAV aerial images mostly depends on either the frame difference or the segmentation approach separately. This research has two main purposes: first, to develop a new motion model called DMM (dynamic motion model), and second, to apply the proposed segmentation approach SUED (segmentation using edge-based dilation) with frame differencing embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only the specific area containing a moving object rather than searching the whole frame. At each stage of the proposed scheme, experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.
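
    The frame-difference component can be sketched as follows. The binary dilation here stands in for the edge-based dilation used by SUED, and the threshold value is a placeholder.

```python
import numpy as np

def frame_difference_mask(prev, curr, thresh=25):
    """Binary change mask from the absolute inter-frame difference."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

def dilate(mask, it=1):
    """3x3 binary dilation via shifted ORs, grouping fragmented change
    pixels into blobs (a real pipeline would use a morphology routine
    such as cv2.dilate)."""
    for _ in range(it):
        m = mask.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                m |= np.roll(np.roll(mask, dy, 0), dx, 1)
        mask = m
    return mask
```

    Differencing flags individual changed pixels; dilation then merges them into connected regions that can be treated as moving-object candidates.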

  9. Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model

    Science.gov (United States)

    Li, X. L.; Zhao, Q. H.; Li, Y.

    2017-09-01

    Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the pixel intensity is assumed to follow a Gamma mixture model with parameters corresponding to the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster, and the regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in the region. Furthermore, the Markov random field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation is obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of segmentation results on simulated and real SAR images.
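
    The regional dissimilarity measure has a direct numeric form: sum the negative log Gamma-mixture likelihood over the pixels of a sub-region. A sketch under assumed mixture parameters; the (weight, shape, scale) triples are placeholders for fitted values.

```python
import numpy as np
from math import lgamma

def gamma_logpdf(x, k, theta):
    """Log-density of the Gamma(shape k, scale theta) distribution."""
    return (k - 1.0) * np.log(x) - x / theta - lgamma(k) - k * np.log(theta)

def region_dissimilarity(pixels, mixture):
    """Negative log-likelihood of a sub-region under a Gamma mixture.

    `mixture` is a list of (weight, shape, scale) triples; the regional
    measure is the sum of the per-pixel measures, as in the paper.
    """
    pixels = np.asarray(pixels, dtype=float)
    p = sum(w * np.exp(gamma_logpdf(pixels, k, th)) for w, k, th in mixture)
    return float(-np.log(p).sum())
```

    During clustering, each Voronoi sub-region would be assigned (fuzzily) to the cluster whose mixture minimizes this measure.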

  10. Single-shot spiral imaging enabled by an expanded encoding model: Demonstration in diffusion MRI.

    Science.gov (United States)

    Wilm, Bertram J; Barmet, Christoph; Gross, Simon; Kasper, Lars; Vannesjo, S Johanna; Haeberlin, Max; Dietrich, Benjamin E; Brunner, David O; Schmid, Thomas; Pruessmann, Klaas P

    2017-01-01

    The purpose of this work was to improve the quality of single-shot spiral MRI and demonstrate its application for diffusion-weighted imaging. Image formation is based on an expanded encoding model that accounts for dynamic magnetic fields up to third order in space, nonuniform static B0, and coil sensitivity encoding. The encoding model is determined by B0 mapping, sensitivity mapping, and concurrent field monitoring. Reconstruction is performed by iterative inversion of the expanded signal equations. Diffusion-tensor imaging with single-shot spiral readouts is performed in a phantom and in vivo, using a clinical 3T instrument. Image quality is assessed in terms of artefact levels, image congruence, and the influence of the different encoding factors. Using the full encoding model, diffusion-weighted single-shot spiral imaging of high quality is accomplished both in vitro and in vivo. Accounting for actual field dynamics, including higher orders, is found to be critical to suppress blurring, aliasing, and distortion. Enhanced image congruence permitted data fusion and diffusion tensor analysis without coregistration. Use of an expanded signal model largely overcomes the traditional vulnerability of spiral imaging with long readouts. It renders single-shot spirals competitive with echo-planar readouts and thus deploys shorter echo times and superior readout efficiency for diffusion imaging and further prospective applications. Magn Reson Med 77:83-91, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  11. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    International Nuclear Information System (INIS)

    Chao Ming; Xie Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing Lei

    2010-01-01

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in quantitative assessment of therapeutics and realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant changes in tumor tissue content, similarity-based models are not suitable for describing the process of tumor volume change. Under the hypothesis that tissue features in the tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) autodetection of homologous tissue features shared by the two input images using the scale-invariant feature transform (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, in which SIFT features were mapped from the template to the target image and in reverse. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique faithfully identified the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the "ground truth" with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for volume changes as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy.
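
    The bidirectional association step can be sketched as mutual nearest-neighbour matching of feature descriptors: a candidate pair is kept only when the template-to-target and target-to-template matches agree. The descriptors in the usage example are toy vectors standing in for SIFT descriptors.

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Bidirectional nearest-neighbour matching.

    A pair (i, j) is kept only if j is i's nearest neighbour in B *and*
    i is j's nearest neighbour in A, mirroring the paper's bidirectional
    association check for SIFT correspondences.
    """
    # pairwise Euclidean distances between descriptor sets
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    ab = d.argmin(axis=1)   # best match in B for each feature of A
    ba = d.argmin(axis=0)   # best match in A for each feature of B
    return [(i, int(j)) for i, j in enumerate(ab) if ba[j] == i]
```

    One-sided matches that are not reciprocated are discarded, which is what suppresses spurious correspondences.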

  12. Mammogram synthesis using a 3D simulation. I. Breast tissue model and image acquisition simulation

    International Nuclear Information System (INIS)

    Bakic, Predrag R.; Albert, Michael; Brzakovic, Dragana; Maidment, Andrew D. A.

    2002-01-01

    A method is proposed for generating synthetic mammograms based upon simulations of breast tissue and the mammographic imaging process. A computer breast model has been designed with a realistic distribution of large- and medium-scale tissue structures. Parameters controlling the size and placement of simulated structures (adipose compartments and ducts) provide a method for consistently modeling images of the same simulated breast with modified position or acquisition parameters. The mammographic imaging process is simulated using a compression model and a model of the x-ray image acquisition process. The compression model estimates breast deformation using tissue elasticity parameters found in the literature and clinical force values. The synthetic mammograms were generated by a mammogram acquisition model using a monoenergetic parallel-beam approximation applied to the synthetically compressed breast phantom.

  13. Median Filter Noise Reduction of Image and Backpropagation Neural Network Model for Cervical Cancer Classification

    Science.gov (United States)

    Wutsqa, D. U.; Marwah, M.

    2017-06-01

    In this paper, we apply a spatial median filter to reduce the noise in cervical images produced by a colposcopy tool. A backpropagation neural network (BPNN) model is applied to the colposcopy images to classify cervical cancer. The classification process requires image feature extraction using the gray level co-occurrence matrix (GLCM) method; the resulting features are used as inputs to the BPNN model. The benefit of noise reduction is evaluated by comparing the performance of BPNN models with and without the spatial median filter. The experimental results show that the spatial median filter can improve the accuracy of the BPNN model for cervical cancer classification.
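
    The median-filter pre-processing step is straightforward to sketch: a 3x3 spatial median with edge replication. The paper does not specify the filter settings, so the window size is an assumption.

```python
import numpy as np

def median_filter3(img):
    """3x3 spatial median filter; border pixels are handled by edge
    replication. Impulse ("salt and pepper") noise is removed while step
    edges are preserved, which is why medians precede texture features
    such as GLCM."""
    padded = np.pad(img, 1, mode="edge")
    H, W = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out
```

    Unlike a mean filter, the median both deletes isolated outliers completely and leaves an intensity step exactly in place.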

  14. STUDY ON MODELING AND VISUALIZING THE POSITIONAL UNCERTAINTY OF REMOTE SENSING IMAGE

    Directory of Open Access Journals (Sweden)

    W. Jiao

    2016-06-01

    Full Text Available It is inevitable that uncertainty is introduced during data acquisition. The traditional way to evaluate geometric positioning accuracy is the statistical method, represented by the root mean square errors (RMSEs) of control points. This measure is individual and discontinuous, so it is difficult to describe the spatial distribution of error. In this paper the error uncertainty of each control point is deduced, and an uncertainty spatial distribution model for any arbitrary point is established. The error model is proposed to evaluate the geometric accuracy of remote sensing images. Several visualization methods are then studied to represent the discrete and continuous data of geometric uncertainties. The experiments show that the proposed error distribution model yields results similar to the traditional RMSE method but without requiring the user to collect control points as checkpoints, and the error distribution information calculated by the model can be provided to users along with the geometric image data. Additionally, the visualization methods described in this paper effectively and objectively represent the image geometric quality, and can also help users probe the causes of image uncertainty to some extent.
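
    The traditional control-point measure that the paper contrasts with its per-point uncertainty model is the RMSE over checkpoints; a minimal sketch:

```python
import numpy as np

def rmse(pred_xy, true_xy):
    """Root mean square positional error over control points.

    A single summary figure: it cannot show where in the image the error
    concentrates, which is the gap the paper's spatial distribution model
    is meant to fill.
    """
    e = np.asarray(pred_xy, dtype=float) - np.asarray(true_xy, dtype=float)
    return float(np.sqrt((e ** 2).sum(axis=1).mean()))
```
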

  15. A SEMIAUTOMATIC APPROACH FOR GENERATION OF SITE MODELS FROM CARTOSAT-2 MULTIVIEW IMAGES

    Directory of Open Access Journals (Sweden)

    A. Mahapatra

    2012-07-01

    Full Text Available In the last decade there has been a paradigm shift in creating, viewing and utilizing geospatial data for planning, navigation and traffic management of urban areas. Realistic, three-dimensional information is preferred over conventional two-dimensional maps. This paper describes the objectives, methodology and results of an operational system being developed for generation of site models from Cartosat-2 multiview images. The system is designed to work in operational mode with varying levels of manual interactivity. A rigorous physical sensor model based on the collinearity condition models the "step and stare" image acquisition mode of the satellite. The relative orientation of the overlapping images is achieved using the coplanarity condition and conjugate points. A procedure is developed to perform digitization in mono and stereo modes, along with a technique for refining manually digitized boundaries. The conjugate points are generated by establishing a correspondence between the points obtained on refined edges and analogous points on the images obtained with view angles of ±26 deg, achieved through a geometrically constrained image matching method. Results are shown for a portion of multiview images of Washington City obtained from Cartosat-2. The scheme is generic and accepts very high resolution stereo images from other satellites as input.

  16. Use of a model for 3D image reconstruction

    International Nuclear Information System (INIS)

    Delageniere, S.; Grangeat, P.

    1991-01-01

    We propose software for 3D image reconstruction in transmission tomography. This software is based on the use of a model and on the RADON algorithm developed at LETI. The introduction of a Markovian model helps us to enhance contrast and sharpen the natural transitions existing in the objects studied, whereas standard transform methods smooth them.

  17. Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction

    Science.gov (United States)

    Su, X.

    2017-12-01

    A satellite cloud image contains much weather information, such as precipitation information. Short-term cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, so nonstationary processes during cloud motion, such as inversion and deformation, are essentially not considered. It is still a hard task to predict cloud movement timely and correctly. As deep learning models perform well in learning spatiotemporal features, we can meet this challenge by regarding cloud image prediction as a spatiotemporal sequence forecasting problem and introducing a deep learning model to solve it. In this research, we use a variant of the gated recurrent unit (GRU) that has convolutional structures to handle spatiotemporal features, and build an end-to-end model to solve the forecasting problem. In this model, both the input and output are spatiotemporal sequences. Compared to the convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data and it performs well.
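
    The gated recurrent unit at the core of such a model updates its state as sketched below. This numpy sketch shows the standard fully connected GRU equations; the convolutional variant used in the abstract replaces the matrix products with convolutions so the hidden state keeps its 2D spatial layout. The weight matrices are placeholders.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU state update h_t = (1 - z) * h + z * h_tilde.

    In a ConvGRU, each `W @ x` / `U @ h` product becomes a convolution
    over a 2D feature map, which is what lets the cell model cloud motion
    fields rather than flat vectors.
    """
    z = sigmoid(Wz @ x + Uz @ h)            # update gate
    r = sigmoid(Wr @ x + Ur @ h)            # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde
```

    Fewer gates than an LSTM (no separate cell state) is the source of the parameter saving the abstract mentions relative to ConvLSTM.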

  18. High-Resolution Longitudinal Screening with Magnetic Resonance Imaging in a Murine Brain Cancer Model

    Directory of Open Access Journals (Sweden)

    Nicholas A. Bock

    2003-11-01

    Full Text Available One of the main limitations of intracranial models of disease is our present inability to monitor and evaluate the intracranial compartment noninvasively over time. Therefore, there is a growing need for imaging modalities that provide thorough neuropathological evaluation of xenograft and transgenic models of intracranial pathology. In this study, we have established protocols for multiple-mouse magnetic resonance imaging (MRI) to follow the growth and behavior of intracranial xenografts of gliomas longitudinally. We successfully obtained weekly images of 16 mice for a total of 5 weeks on a 7-T multiple-mouse MRI. T2- and T1-weighted imaging, with gadolinium enhancement of vascularity, was used to detect tumor margins, tumor size, and growth. These experiments, using 3D whole-brain images obtained in four mice at once, demonstrate the feasibility of obtaining repeat radiological images in intracranial tumor models and suggest that MRI should be incorporated as a research modality for the investigation of intracranial pathobiology.

  19. Diffraction enhanced imaging of a rat model of gastric acid aspiration pneumonitis.

    Science.gov (United States)

    Connor, Dean M; Zhong, Zhong; Foda, Hussein D; Wiebe, Sheldon; Parham, Christopher A; Dilmanian, F Avraham; Cole, Elodia B; Pisano, Etta D

    2011-12-01

    Diffraction-enhanced imaging (DEI) is a type of phase contrast x-ray imaging that has improved image contrast at a lower dose than conventional radiography for many imaging applications, but no studies have been done to determine if DEI might be useful for diagnosing lung injury. The goals of this study were to determine if DEI could differentiate between healthy and injured lungs for a rat model of gastric aspiration and to compare diffraction-enhanced images with chest radiographs. Radiographs and diffraction-enhanced chest images of adult Sprague Dawley rats were obtained before and 4 hours after the aspiration of 0.4 mL/kg of 0.1 mol/L hydrochloric acid. Lung damage was confirmed with histopathology. The radiographs and diffraction-enhanced peak images revealed regions of atelectasis in the injured rat lung. The diffraction-enhanced peak images revealed the full extent of the lung with improved clarity relative to the chest radiographs, especially in the portion of the lower lobe that extended behind the diaphragm on the anteroposterior projection. For a rat model of gastric acid aspiration, DEI is capable of distinguishing between a healthy and an injured lung and more clearly than radiography reveals the full extent of the lung and the lung damage. Copyright © 2011 AUR. All rights reserved.

  20. Comprehensive model for predicting perceptual image quality of smart mobile devices.

    Science.gov (United States)

    Gong, Rui; Xu, Haisong; Luo, M R; Li, Haifeng

    2015-01-01

    An image quality model for smart mobile devices was proposed based on visual assessments of several image quality attributes. A series of psychophysical experiments were carried out on two kinds of smart mobile devices, i.e., smart phones and tablet computers, in which naturalness, colorfulness, brightness, contrast, sharpness, clearness, and overall image quality were visually evaluated under three lighting environments via categorical judgment method for various application types of test images. On the basis of Pearson correlation coefficients and factor analysis, the overall image quality could first be predicted by its two constituent attributes with multiple linear regression functions for different types of images, respectively, and then the mathematical expressions were built to link the constituent image quality attributes with the physical parameters of smart mobile devices and image appearance factors. The procedure and algorithms were applicable to various smart mobile devices, different lighting conditions, and multiple types of images, and performance was verified by the visual data.
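
    The two-attribute regression step described above can be sketched with ordinary least squares; the attribute names and data in the usage example are placeholders, and the paper fits separate functions per image type.

```python
import numpy as np

def fit_quality_model(attr1, attr2, overall):
    """Least-squares fit of overall image quality from two constituent
    attributes, Q = b0 + b1 * A1 + b2 * A2, the functional form the
    abstract describes. Returns (b0, b1, b2)."""
    X = np.column_stack([np.ones_like(attr1), attr1, attr2])
    coef, *_ = np.linalg.lstsq(X, overall, rcond=None)
    return coef
```

    With visually scaled attribute scores as inputs, the fitted coefficients indicate how strongly each constituent attribute drives the overall judgment.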

  1. William, a voxel model of child anatomy from tomographic images for Monte Carlo dosimetry calculations

    International Nuclear Information System (INIS)

    Caon, M.

    2010-01-01

    Full text: Medical imaging provides two-dimensional pictures of the human internal anatomy from which a three-dimensional model of organs and tissues suitable for calculating radiation dose may be constructed. Diagnostic CT provides the greatest exposure to radiation per examination, and the frequency of CT examination is high. Estimates of dose from diagnostic radiography are still determined from data derived from geometric models (rather than anatomical models), models scaled from adult bodies (rather than bodies of children), and CT scanner hardware that is no longer used. The aim of anatomical modelling is to produce a mathematical representation of internal anatomy that has organs of realistic size, shape and position. The organs and tissues are represented by a great many cuboidal volumes (voxels). The conversion of medical images to voxels is called segmentation; on completion, every pixel in an image is assigned to a tissue or organ. Segmentation is time consuming. An image processing package is used to identify organ boundaries in each image. Thirty to forty tomographic voxel models of anatomy have been reported in the literature. Each model is of an individual, or a composite from several individuals. Images of children are particularly scarce, so there remains a need for more paediatric anatomical models. I am working on segmenting "William", a set of 368 PET-CT images from head to toe of a seven-year-old boy. William will be used for Monte Carlo calculations of dose from CT examination using a simulated modern CT scanner.

  2. Efficient and robust model-to-image alignment using 3D scale-invariant features.

    Science.gov (United States)

    Toews, Matthew; Wells, William M

    2013-04-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Time series modeling of live-cell shape dynamics for image-based phenotypic profiling.

    Science.gov (United States)

    Gordonov, Simon; Hwang, Mun Kyung; Wells, Alan; Gertler, Frank B; Lauffenburger, Douglas A; Bathe, Mark

    2016-01-01

    Live-cell imaging can be used to capture spatio-temporal aspects of cellular responses that are not accessible to fixed-cell imaging. As the use of live-cell imaging continues to increase, new computational procedures are needed to characterize and classify the temporal dynamics of individual cells. For this purpose, here we present the general experimental-computational framework SAPHIRE (Stochastic Annotation of Phenotypic Individual-cell Responses) to characterize phenotypic cellular responses from time series imaging datasets. Hidden Markov modeling is used to infer and annotate morphological state and state-switching properties from image-derived cell shape measurements. Time series modeling is performed on each cell individually, making the approach broadly useful for analyzing asynchronous cell populations. Two-color fluorescent cells simultaneously expressing actin and nuclear reporters enabled us to profile temporal changes in cell shape following pharmacological inhibition of cytoskeleton-regulatory signaling pathways. Results are compared with existing approaches conventionally applied to fixed-cell imaging datasets, and indicate that time series modeling captures heterogeneous dynamic cellular responses that can improve drug classification and offer additional important insight into mechanisms of drug action. The software is available at http://saphire-hcs.org.
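    The hidden Markov modeling step described above can be illustrated with a minimal discrete-observation Viterbi decoder. This is only a sketch: the two-state setup, the transition/emission numbers, and the discretized "circularity" observations below are illustrative assumptions, not SAPHIRE's fitted model.

    ```python
    import numpy as np

    def viterbi(obs, pi, A, B):
        """Most likely hidden-state path for a discrete-observation HMM.

        obs: sequence of observation indices
        pi:  initial state probabilities, shape (S,)
        A:   state transition matrix, shape (S, S)
        B:   emission probabilities, shape (S, O)
        """
        S, T = len(pi), len(obs)
        logp = np.full((T, S), -np.inf)
        back = np.zeros((T, S), dtype=int)
        logp[0] = np.log(pi) + np.log(B[:, obs[0]])
        for t in range(1, T):
            for s in range(S):
                scores = logp[t - 1] + np.log(A[:, s])
                back[t, s] = np.argmax(scores)
                logp[t, s] = scores[back[t, s]] + np.log(B[s, obs[t]])
        path = [int(np.argmax(logp[-1]))]
        for t in range(T - 1, 1 - 1, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # Toy example: two morphological states ("spread", "rounded") emitting
    # discretized shape measurements (0 = low, 1 = high circularity).
    pi = np.array([0.5, 0.5])
    A = np.array([[0.9, 0.1],   # states are "sticky": switching is rare
                  [0.1, 0.9]])
    B = np.array([[0.8, 0.2],   # spread cells mostly emit low circularity
                  [0.2, 0.8]])
    obs = [0, 0, 0, 1, 1, 1]
    states = viterbi(obs, pi, A, B)
    ```

    Decoding each cell's measurement sequence independently, as here, is what makes the approach applicable to asynchronous populations.
    
    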

  4. Edge Detection on Images of Pseudoimpedance Section Supported by Context and Adaptive Transformation Model Images

    Directory of Open Access Journals (Sweden)

    Kawalec-Latała Ewa

    2014-03-01

    Full Text Available Most underground hydrocarbon storage sites are located in depleted natural gas reservoirs. Seismic surveying is the most economical source of detailed subsurface information. Inverting a seismic section to obtain a pseudoacoustic impedance section makes it possible to extract detailed subsurface information. The seismic wavelet parameters and noise influence the resolution; low signal parameters, especially a long signal duration, and the presence of noise decrease pseudoimpedance resolution. Approximating the distribution of acoustic pseudoimpedance from measured or modelled seismic data leads to visualisations and images useful for identifying stratum homogeneity. In this paper, the minimum entropy deconvolution method is applied before inversion to improve the resolution of the geologic section image. The author proposes context and adaptive transformation of images and edge detection methods as a way to increase the effectiveness of correct interpretation of simulated images. Edge detection algorithms using the Sobel, Prewitt, Roberts and Canny operators, as well as the Laplacian of Gaussian method, are emphasised. Wiener filtering of the transformed images improves interpretation of the rock section structure by mapping the pseudoimpedance matrix onto the proper acoustic pseudoimpedance values corresponding to the selected geologic strata. The goal of the study is to develop applications of image transformation tools for detecting inhomogeneity in salt deposits.
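    Of the edge detectors named above, the Sobel operator is the simplest to sketch. The minimal numpy implementation below is illustrative (not the author's code) and is applied to a synthetic two-stratum "section" with a vertical step edge.

    ```python
    import numpy as np

    def sobel_edges(img):
        """Gradient magnitude of a 2D float image via Sobel operators."""
        kx = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]], dtype=float)   # horizontal gradient kernel
        ky = kx.T                                  # vertical gradient kernel
        h, w = img.shape
        gx = np.zeros_like(img)
        gy = np.zeros_like(img)
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                patch = img[i - 1:i + 2, j - 1:j + 2]
                gx[i, j] = np.sum(patch * kx)
                gy[i, j] = np.sum(patch * ky)
        return np.hypot(gx, gy)

    # Synthetic section with a vertical step edge between two "strata".
    img = np.zeros((8, 8))
    img[:, 4:] = 1.0
    mag = sobel_edges(img)
    ```

    The gradient magnitude peaks along the column where the impedance value steps, which is exactly the boundary a stratum-interpretation step would pick out.
    
    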

  5. Sediment plume model-a comparison between use of measured turbidity data and satellite images for model calibration.

    Science.gov (United States)

    Sadeghian, Amir; Hudson, Jeff; Wheater, Howard; Lindenschmidt, Karl-Erich

    2017-08-01

    In this study, we built a two-dimensional sediment transport model of Lake Diefenbaker, Saskatchewan, Canada. It was calibrated by using measured turbidity data from stations along the reservoir and satellite images based on a flood event in 2013. In June 2013, there was heavy rainfall for two consecutive days on the frozen and snow-covered ground in the higher elevations of western Alberta, Canada. The runoff from the rainfall and the melted snow caused one of the largest recorded inflows to the headwaters of the South Saskatchewan River and Lake Diefenbaker downstream. An estimated discharge peak of over 5200 m³/s arrived at the reservoir inlet with a thick sediment front within a few days. The sediment plume moved quickly through the entire reservoir and remained visible from satellite images for over 2 weeks along most of the reservoir, leading to concerns regarding water quality. The aims of this study are to compare, quantitatively and qualitatively, the efficacy of using turbidity data and satellite images for sediment transport model calibration and to determine how accurately a sediment transport model can simulate sediment transport based on each of them. Both turbidity data and satellite images were very useful for calibrating the sediment transport model quantitatively and qualitatively. Model predictions and turbidity measurements show that the flood water and suspended sediments entered upstream fairly well mixed and moved downstream as overflow with a sharp gradient at the plume front. The model results suggest that the settling and resuspension rates of sediment are directly proportional to flow characteristics and that the use of constant coefficients leads to model underestimation or overestimation unless more data on sediment formation become available. Hence, this study reiterates the significance of the availability of data on sediment distribution and characteristics for building a robust and reliable sediment transport model.

  6. Spatiotemporal processing of gated cardiac SPECT images using deformable mesh modeling

    International Nuclear Information System (INIS)

    Brankov, Jovan G.; Yang Yongyi; Wernick, Miles N.

    2005-01-01

    In this paper we present a spatiotemporal processing approach, based on deformable mesh modeling, for noise reduction in gated cardiac single-photon emission computed tomography images. Because of the partial volume effect (PVE), clinical cardiac-gated perfusion images exhibit a phenomenon known as brightening--the myocardium appears to become brighter as the heart wall thickens. Although brightening is an artifact, it serves as an important diagnostic feature for assessment of wall thickening in clinical practice. Our proposed processing algorithm aims to preserve this important diagnostic feature while reducing the noise level in the images. The algorithm uses a deformable mesh to model the cardiac motion in a gated sequence; the images are then processed by smoothing along the space-time trajectories of object points while taking the PVE into account. Our experiments demonstrate that the proposed algorithm can yield significantly more accurate results than several existing methods.

  7. Edge detection of solid motor' CT image based on gravitation model

    International Nuclear Information System (INIS)

    Yu Guanghui; Lu Hongyi; Zhu Min; Liu Xudong; Hou Zhiqiang

    2012-01-01

    In order to better detect edges in CT images of solid motors, a new edge detection operator based on a gravitation model is put forward. The edges of CT images are obtained with the new operator, and its superiority is demonstrated by comparison with the edges obtained by ordinary operators. A comparison among operators of different sizes shows that higher-quality CT images need a smaller operator, while lower-quality images need a larger one. (authors)

  8. Phase aided 3D imaging and modeling: dedicated systems and case studies

    Science.gov (United States)

    Yin, Yongkai; He, Dong; Liu, Zeyi; Liu, Xiaoli; Peng, Xiang

    2014-05-01

    Dedicated prototype systems for 3D imaging and modeling (3DIM) are presented. The 3D imaging systems are based on the principle of phase-aided active stereo, which have been developed in our laboratory over the past few years. The reported 3D imaging prototypes range from single 3D sensor to a kind of optical measurement network composed of multiple node 3D-sensors. To enable these 3D imaging systems, we briefly discuss the corresponding calibration techniques for both single sensor and multi-sensor optical measurement network, allowing good performance of the 3DIM prototype systems in terms of measurement accuracy and repeatability. Furthermore, two case studies including the generation of high quality color model of movable cultural heritage and photo booth from body scanning are presented to demonstrate our approach.

  9. Skin image illumination modeling and chromophore identification for melanoma diagnosis

    Science.gov (United States)

    Liu, Zhao; Zerubia, Josiane

    2015-05-01

    The presence of illumination variation in dermatological images has a negative impact on the automatic detection and analysis of cutaneous lesions. This paper proposes a new illumination modeling and chromophore identification method to correct lighting variation in skin lesion images, as well as to extract melanin and hemoglobin concentrations of human skin, based on an adaptive bilateral decomposition and a weighted polynomial curve fitting, with the knowledge of a multi-layered skin model. Different from state-of-the-art approaches based on the Lambert law, the proposed method, considering both specular reflection and diffuse reflection of the skin, enables us to address highlight and strong shading effects usually present in skin color images captured in an uncontrolled environment. The derived melanin and hemoglobin indices, directly relating to the pathological tissue conditions, tend to be less influenced by external imaging factors and are more efficient in describing pigmentation distributions. Experiments show that the proposed method gives better visual results and superior lesion segmentation when compared to two other illumination correction algorithms, both designed specifically for dermatological images. For computer-aided diagnosis of melanoma, sensitivity reaches 85.52% when using our chromophore descriptors, which is 8-20% higher than with descriptors derived from other color representations. This demonstrates the benefit of the proposed method for automatic skin disease analysis.

  10. Recent Advances in Translational Magnetic Resonance Imaging in Animal Models of Stress and Depression.

    Science.gov (United States)

    McIntosh, Allison L; Gormley, Shane; Tozzi, Leonardo; Frodl, Thomas; Harkin, Andrew

    2017-01-01

    Magnetic resonance imaging (MRI) is a valuable translational tool that can be used to investigate alterations in brain structure and function in both patients and animal models of disease. Regional changes in brain structure, functional connectivity, and metabolite concentrations have been reported in depressed patients, giving insight into the networks and brain regions involved; however, preclinical models are less well characterized. The development of more effective treatments depends upon animal models that best translate to the human condition, and animal models may be exploited to assess the molecular and cellular alterations that accompany neuroimaging changes. Recent advances in preclinical imaging have facilitated significant developments within the field, particularly relating to high resolution structural imaging and resting-state functional imaging, which are emerging techniques in clinical research. This review aims to bring together the current literature on preclinical neuroimaging in animal models of stress and depression, highlighting promising avenues of research toward understanding the pathological basis of this hugely prevalent disorder.

  11. Recent Advances in Translational Magnetic Resonance Imaging in Animal Models of Stress and Depression

    Directory of Open Access Journals (Sweden)

    Allison L. McIntosh

    2017-05-01

    Full Text Available Magnetic resonance imaging (MRI) is a valuable translational tool that can be used to investigate alterations in brain structure and function in both patients and animal models of disease. Regional changes in brain structure, functional connectivity, and metabolite concentrations have been reported in depressed patients, giving insight into the networks and brain regions involved; however, preclinical models are less well characterized. The development of more effective treatments depends upon animal models that best translate to the human condition, and animal models may be exploited to assess the molecular and cellular alterations that accompany neuroimaging changes. Recent advances in preclinical imaging have facilitated significant developments within the field, particularly relating to high resolution structural imaging and resting-state functional imaging, which are emerging techniques in clinical research. This review aims to bring together the current literature on preclinical neuroimaging in animal models of stress and depression, highlighting promising avenues of research toward understanding the pathological basis of this hugely prevalent disorder.

  12. Background Report for the IMAGE 2.0 Energy-Economy Model

    NARCIS (Netherlands)

    Toet AMC; Vries HJM de; Wijngaart RA van den; MTV

    1994-01-01

    This report provides background information on the structure, the historical input data (1970-1990) and the calibration of the Energy-Economy model of IMAGE 2.0. The assumptions made in the Energy-Economy model for the Conventional Wisdom scenario are also described. This is the basic

  13. Gallbladder Boundary Segmentation from Ultrasound Images Using Active Contour Model

    Science.gov (United States)

    Ciecholewski, Marcin

    Extracting the shape of the gallbladder from an ultrasonography (US) image allows superfluous information which is immaterial in the diagnostic process to be eliminated. In this project an active contour model was used to extract the shape of the gallbladder, both for cases free of lesions, and for those showing specific disease units, namely: lithiasis, polyps and changes in the shape of the organ, such as folds or turns of the gallbladder. The approximate shape of the gallbladder was found by applying the motion equation model. The tests conducted have shown that for the 220 US images of the gallbladder, the area error rate (AER) amounted to 18.15%.
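    The reported area error rate (AER) can be sketched on binary masks. The paper does not spell out its exact AER formula, so the definition below (symmetric difference normalized by the reference area) is one common convention, used here only for illustration.

    ```python
    import numpy as np

    def area_error_rate(segmented, reference):
        """Area error rate (percent) between two binary masks, taken here
        as the symmetric difference normalized by the reference area
        (an assumed convention; the paper's definition may differ)."""
        segmented = segmented.astype(bool)
        reference = reference.astype(bool)
        mismatch = np.logical_xor(segmented, reference).sum()
        return 100.0 * mismatch / reference.sum()

    # Toy reference "gallbladder" mask and a contour that missed one column.
    ref = np.zeros((10, 10), dtype=bool)
    ref[2:8, 2:8] = True          # 36-pixel reference region
    seg = np.zeros((10, 10), dtype=bool)
    seg[2:8, 2:7] = True          # last column (6 pixels) missed
    aer = area_error_rate(seg, ref)
    ```
    
    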

  14. Just Noticeable Distortion Model and Its Application in Color Image Watermarking

    Science.gov (United States)

    Liu, Kuo-Cheng

    In this paper, a perceptually adaptive watermarking scheme for color images is proposed in order to achieve robustness and transparency. A new just noticeable distortion (JND) estimator for color images is first designed in the wavelet domain. The key issue of the JND model is to effectively integrate visual masking effects. The estimator is an extension of the perceptual model that is used in image coding for grayscale images. In addition to the visual masking effects given coefficient by coefficient by taking into account the luminance content and the texture of grayscale images, the crossed masking effect given by the interaction between luminance and chrominance components and the effect given by the variance within the local region of the target coefficient are investigated such that the visibility threshold for the human visual system (HVS) can be evaluated. In a locally adaptive fashion based on the wavelet decomposition, the estimator applies to all subbands of the luminance and chrominance components of color images and is used to measure the visibility of wavelet quantization errors. The subband JND profiles are then incorporated into the proposed color image watermarking scheme. Robustness and transparency of the watermarking scheme are obtained by means of the proposed approach, which embeds the watermark at maximum strength while maintaining the perceptually lossless quality of the watermarked color image. Simulation results show that the proposed scheme, which inserts watermarks into both luminance and chrominance components, is more robust than the existing scheme while retaining the watermark transparency.
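    The embed-at-the-visibility-threshold idea can be sketched with a toy additive scheme. The arrays below stand in for a wavelet subband and its JND profile, and the sign-based detector is an illustrative assumption, not the paper's scheme.

    ```python
    import numpy as np

    def embed(coeffs, jnd, watermark_bits, alpha=1.0):
        """Additively embed +/-1 watermark bits into transform coefficients,
        with per-coefficient strength bounded by the JND profile so the
        change stays at or below the visibility threshold."""
        w = 2.0 * np.asarray(watermark_bits, dtype=float) - 1.0  # {0,1} -> {-1,+1}
        return coeffs + alpha * jnd * w

    def detect(received, original, jnd, alpha=1.0):
        """Recover bits from the sign of the coefficient difference
        (non-blind detection: the original coefficients are known)."""
        return ((received - original) / (alpha * jnd) > 0).astype(int)

    rng = np.random.default_rng(0)
    coeffs = rng.normal(0.0, 10.0, size=64)   # stand-in wavelet subband
    jnd = np.full(64, 0.5)                    # stand-in visibility thresholds
    bits = rng.integers(0, 2, size=64)
    marked = embed(coeffs, jnd, bits)
    recovered = detect(marked, coeffs, jnd)
    ```

    Because every perturbation is capped by the JND value, the marked subband differs from the original by at most the visibility threshold at each coefficient.
    
    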

  15. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Chao Ming; Xie Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing Lei [Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305-5847 and Department of Radiation Oncology, University of Arkansas for Medical Sciences, 4301 W. Markham Street, Little Rock, Arkansas 72205-1799 (United States); Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, University of Arkansas for Medical Sciences, 4301 W. Markham Street, Little Rock, Arkansas 72205-1799 (United States); Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305-5847 (United States)

    2010-05-15

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in quantitative assessment of therapeutics and realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant tumor tissue content changes, similarity-based models are not suitable for describing the process of tumor volume changes. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) Autodetection of homologous tissue features shared by two input images using the scale invariance feature transformation (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, where SIFT features were mapped from template to target images and reversely. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the ''ground truth'' with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy.
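    The bidirectional association step above can be sketched as a mutual nearest-neighbour filter on feature descriptors: a template-to-target match is kept only if the reverse mapping returns to the same feature. The 2-D toy descriptors are illustrative (real SIFT descriptors are 128-D).

    ```python
    import numpy as np

    def mutual_matches(desc_a, desc_b):
        """Keep only feature pairs that are each other's nearest neighbour
        (a bidirectional association check: template -> target -> template)."""
        # Pairwise squared distances between the two descriptor sets.
        d = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
        a_to_b = d.argmin(axis=1)   # nearest b for each a
        b_to_a = d.argmin(axis=0)   # nearest a for each b
        return [(i, int(a_to_b[i])) for i in range(len(desc_a))
                if b_to_a[a_to_b[i]] == i]

    a = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 0.0]])
    b = np.array([[5.1, 5.0], [0.2, 0.1], [20.0, 20.0]])
    pairs = mutual_matches(a, b)   # a[2] has no mutual partner, so it is dropped
    ```
    
    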

  16. Color model comparative analysis for breast cancer diagnosis using H and E stained images

    Science.gov (United States)

    Li, Xingyu; Plataniotis, Konstantinos N.

    2015-03-01

    Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Unlike the grayscale images of magnetic resonance imaging or X-ray, colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Though color information is widely used in histopathology work, to date there are few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and the stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. As for the reasons behind the varying performance of the color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less dependent, and thus most feature discriminative power is collected in one channel instead of spreading out among channels as in other color spaces.
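    The channel-dependence analysis described above rests on mutual information between color channels. A minimal histogram-based estimator can be sketched as follows; the synthetic "channels" are illustrative stand-ins for real image data.

    ```python
    import numpy as np

    def mutual_information(x, y, bins=16):
        """Histogram estimate of mutual information (in bits) between two
        image channels, flattened to 1-D intensity samples."""
        joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of x
        py = pxy.sum(axis=0, keepdims=True)   # marginal of y
        nz = pxy > 0
        return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

    rng = np.random.default_rng(1)
    red = rng.random(10_000)
    green_dep = 0.9 * red + 0.1 * rng.random(10_000)  # strongly tied to red
    green_ind = rng.random(10_000)                    # independent channel
    mi_dep = mutual_information(red, green_dep)
    mi_ind = mutual_information(red, green_ind)
    ```

    Highly correlated channels (like R and G in RGB histopathology images) give a large mutual information, whereas channels of a decorrelating decomposition give a value near zero.
    
    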

  17. Structural assessment of aerospace components using image processing algorithms and Finite Element models

    DEFF Research Database (Denmark)

    Stamatelos, Dimtrios; Kappatos, Vassilios

    2017-01-01

    Purpose – This paper presents the development of an advanced structural assessment approach for aerospace components (metallic and composites). This work focuses on developing an automatic image processing methodology based on Non Destructive Testing (NDT) data and numerical models, for predicting the residual strength of these components. Design/methodology/approach – An image processing algorithm, based on the threshold method, has been developed to process and quantify the geometric characteristics of damages. Then, a parametric Finite Element (FE) model of the damaged component is developed based on the inputs acquired from the image processing algorithm. The analysis of the metallic structures employs the Extended FE Method (XFEM), while for the composite structures the Cohesive Zone Model (CZM) technique with Progressive Damage Modelling (PDM) is used. Findings – The numerical analyses...

  18. A singular K-space model for fast reconstruction of magnetic resonance images from undersampled data.

    Science.gov (United States)

    Luo, Jianhua; Mou, Zhiying; Qin, Binjie; Li, Wanqing; Ogunbona, Philip; Robini, Marc C; Zhu, Yuemin

    2017-12-09

    Reconstructing magnetic resonance images from undersampled k-space data is a challenging problem. This paper introduces a novel method of image reconstruction from undersampled k-space data based on the concept of singularizing operators and a novel singular k-space model. Exploiting the sparsity of an image in k-space, the singular k-space model (SKM) is formulated in terms of the k-space functions of a singularizing operator. The singularizing operator is constructed by combining basic difference operators. An algorithm is developed to reliably estimate the model parameters from undersampled k-space data. The estimated parameters are then used to recover the missing k-space data through the model, subsequently achieving high-quality reconstruction of the image using the inverse Fourier transform. Experiments on physical phantom and real brain MR images have shown that the proposed SKM method consistently outperforms the popular total variation (TV) and the classical zero-filling (ZF) methods regardless of the undersampling rate, the noise level, and the image structure. For the same objective quality of the reconstructed images, the proposed method requires much less k-space data than the TV method. The SKM method is an effective method for fast MRI reconstruction from undersampled k-space data. Graphical abstract: two real images and their sparsified counterparts produced by the singularizing operator.
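    The zero-filling (ZF) baseline that SKM is compared against is straightforward to sketch: unsampled k-space entries are set to zero before the inverse FFT. The phantom and the sampling mask below are illustrative, not from the paper.

    ```python
    import numpy as np

    def zero_filled_recon(kspace, mask):
        """Classical zero-filling baseline: keep only the sampled k-space
        entries (mask == 1), zero the rest, and inverse-FFT to image space."""
        return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))

    # Simple square phantom; mask keeps every other phase-encode line plus
    # a fully sampled low-frequency band at the centre of k-space.
    img = np.zeros((64, 64))
    img[24:40, 24:40] = 1.0
    kspace = np.fft.fftshift(np.fft.fft2(img))
    mask = np.zeros((64, 64))
    mask[::2, :] = 1.0       # every other line
    mask[28:36, :] = 1.0     # centre of k-space (low frequencies)
    recon = zero_filled_recon(kspace, mask)
    ```

    The reconstruction retains the gross phantom but shows the aliasing ghosts characteristic of regular undersampling; model-based methods such as SKM aim to fill in the missing data instead of zeroing it.
    
    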

  19. Image contrast enhancement based on a local standard deviation model

    International Nuclear Information System (INIS)

    Chang, Dah-Chung; Wu, Wen-Rong

    1996-01-01

    The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method, which needs a contrast gain to adjust the high frequency components of an image. In the literature, the gain is usually either inversely proportional to the local standard deviation (LSD) or a constant. Both choices cause problems in practical applications: noise over-enhancement and ringing artifacts. In this paper a new gain is developed, based on Hunt's Gaussian image model, to prevent these two defects. The new gain is a nonlinear function of the LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images, and the simulations show the effectiveness of the proposed algorithm.
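    The ACE structure described above (amplify the high-frequency part, img minus its local mean, by an LSD-dependent gain) can be sketched as follows. The specific gain formula used here, a bounded rational function that vanishes for both very small and very large LSD, is an illustrative assumption, not the paper's exact function.

    ```python
    import numpy as np

    def local_stats(img, k=3):
        """Local mean and standard deviation over a k x k window (edge-padded)."""
        pad = k // 2
        p = np.pad(img, pad, mode="edge")
        mean = np.zeros_like(img)
        sq = np.zeros_like(img)
        for di in range(k):
            for dj in range(k):
                win = p[di:di + img.shape[0], dj:dj + img.shape[1]]
                mean += win
                sq += win ** 2
        mean /= k * k
        var = np.maximum(sq / (k * k) - mean ** 2, 0.0)
        return mean, np.sqrt(var)

    def ace(img, k=3, sigma_mid=10.0, gmax=3.0):
        """ACE sketch: amplify (img - local mean) by a gain that is a
        nonlinear function of the LSD. The gain is small for near-flat
        regions (limits noise over-enhancement) and for very high-LSD
        regions (limits ringing at strong edges)."""
        mean, lsd = local_stats(img, k)
        r = lsd / sigma_mid
        gain = gmax * r / (1.0 + r ** 2)   # assumed illustrative form
        return mean + (1.0 + gain) * (img - mean)

    xx, yy = np.indices((16, 16))
    img = 5.0 * ((xx + yy) % 2).astype(float)   # fine detail everywhere
    out = ace(img)
    ```
    
    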

  20. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    Science.gov (United States)

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains such as text mining, information retrieval, computer vision, and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  1. Modeling of skin cancer dermatoscopy images

    Science.gov (United States)

    Iralieva, Malica B.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.

    2018-04-01

    A cancer identified early is more likely to respond effectively to treatment and is less expensive to treat. Dermatoscopy is one of the general diagnostic techniques for early detection of skin cancer that allows in vivo evaluation of colors and microstructures of skin lesions. Digital phantoms with known properties are required during the development of new instruments to compare a sample's features with data from the instrument. An algorithm for modeling skin cancer images is proposed in this paper. The steps of the algorithm are setting the shape, generating the texture, adding the texture, and setting the normal skin background. A Gaussian represents the shape; texture generation based on a fractal noise algorithm is then responsible for the spatial chromophore distributions, while the colormap applied to the values corresponds to the spectral properties. Finally, a normal skin image simulated by a mixed Monte Carlo method using a special online tool is added as the background. Varying the Asymmetry, Borders, Colors and Diameter settings is shown to fully match the ABCD clinical recognition algorithm. The asymmetry is specified by setting different standard deviation values of the Gaussian in different parts of the image. The noise amplitude is increased to set the irregular borders score. The standard deviation is changed to determine the size of the lesion. Colors are set by changing the colormap. An algorithm for simulating different structural elements is still required to match other recognition algorithms.
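    The shape and border stages of the pipeline above can be sketched in grayscale: an asymmetric Gaussian sets the lesion shape (different sigmas per side, as for the ABCD "A" criterion) and additive noise roughens the borders ("B"). All parameter values are illustrative; the fractal-texture, colormap, and Monte Carlo background stages are omitted.

    ```python
    import numpy as np

    def lesion_phantom(size=128, sigma_left=12.0, sigma_right=20.0,
                       sigma_y=15.0, noise_amp=0.15, seed=0):
        """Grayscale sketch of a lesion phantom: an asymmetric Gaussian
        shape plus additive noise for irregular borders (illustrative
        parameters, not the paper's settings)."""
        rng = np.random.default_rng(seed)
        y, x = np.indices((size, size)) - size // 2
        # Different x-sigmas on the two sides model lesion asymmetry.
        sigma_x = np.where(x < 0, sigma_left, sigma_right)
        shape = np.exp(-(x ** 2 / (2 * sigma_x ** 2)
                         + y ** 2 / (2 * sigma_y ** 2)))
        noisy = shape + noise_amp * rng.standard_normal((size, size))
        return np.clip(noisy, 0.0, 1.0)

    lesion = lesion_phantom()
    ```

    Increasing noise_amp roughens the border score, while scaling the sigmas changes the lesion diameter, mirroring the settings described in the abstract.
    
    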

  2. Images created in a model eye during simulated cataract surgery can be the basis for images perceived by patients during cataract surgery

    Science.gov (United States)

    Inoue, M; Uchida, A; Shinoda, K; Taira, Y; Noda, T; Ohnuma, K; Bissen-Miyajima, H; Hirakata, A

    2014-01-01

    Purpose To evaluate the images created in a model eye during simulated cataract surgery. Patients and methods This study was conducted as a laboratory investigation and interventional case series. An artificial opaque lens, a clear intraocular lens (IOL), or an irrigation/aspiration (I/A) tip was inserted into the 'anterior chamber' of a model eye with the frosted posterior surface corresponding to the retina. Video images were recorded of the posterior surface of the model eye from the rear during simulated cataract surgery. The video clips were shown to 20 patients before cataract surgery, and the similarity of their visual perceptions to these images was evaluated postoperatively. Results The images of the moving lens fragments and I/A tip and the insertion of the IOL were seen from the rear. The image through the opaque lens and the IOL without moving objects was the light of the surgical microscope from the rear. However, when the microscope light was turned off after IOL insertion, the images of the microscope and operating room were observed by the room illumination from the rear. Seventy percent of the patients answered that the visual perceptions of moving lens fragments were similar to the video clips and 55% reported similarity with the IOL insertion. Eighty percent of the patients recommended that patients watch the video clip before their scheduled cataract surgery. Conclusions The patients' visual perceptions during cataract surgery can be reproduced in the model eye. Watching the video images preoperatively may help relax the patients during surgery. PMID:24788007

  3. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    Science.gov (United States)

    Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long

    2012-01-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749
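    The cubic interpolation step above, building a smooth time-varying signal through a few imaged lung volumes with C1 continuity, can be sketched with a piecewise cubic Hermite interpolant using finite-difference tangents. The three knot volumes and times below are illustrative numbers, not data from the paper.

    ```python
    import numpy as np

    def hermite_c1(t_knots, v_knots, t):
        """Piecewise cubic Hermite interpolation with finite-difference
        tangents: C1-continuous and passing through every knot."""
        t_knots = np.asarray(t_knots, dtype=float)
        v_knots = np.asarray(v_knots, dtype=float)
        m = np.gradient(v_knots, t_knots)   # tangents (one-sided at ends)
        i = np.clip(np.searchsorted(t_knots, t) - 1, 0, len(t_knots) - 2)
        h = t_knots[i + 1] - t_knots[i]
        s = (t - t_knots[i]) / h
        # Cubic Hermite basis functions.
        h00 = 2 * s**3 - 3 * s**2 + 1
        h10 = s**3 - 2 * s**2 + s
        h01 = -2 * s**3 + 3 * s**2
        h11 = s**3 - s**2
        return (h00 * v_knots[i] + h10 * h * m[i]
                + h01 * v_knots[i + 1] + h11 * h * m[i + 1])

    # Three imaged lung volumes (litres) at three breathing phases
    # (illustrative values only).
    t_knots = [0.0, 1.0, 2.0]
    v_knots = [2.5, 4.0, 3.0]
    t = np.linspace(0.0, 2.0, 9)
    v = hermite_c1(t_knots, v_knots, t)
    ```

    The interpolant reproduces each imaged volume exactly at its knot while remaining smooth in between, which is what makes the derived regional ventilation and flow-rate fractions continuous over the breathing cycle.
    
    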

  4. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    Energy Technology Data Exchange (ETDEWEB)

    Yin, Youbing, E-mail: youbing-yin@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Choi, Jiwoong, E-mail: jiwoong-choi@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Hoffman, Eric A., E-mail: eric-hoffman@uiowa.edu [Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Internal Medicine, The University of Iowa, Iowa City, IA 52242 (United States); Tawhai, Merryn H., E-mail: m.tawhai@auckland.ac.nz [Auckland Bioengineering Institute, The University of Auckland, Auckland (New Zealand); Lin, Ching-Long, E-mail: ching-long-lin@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2013-07-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung.

  5. Correction of electrode modelling errors in multi-frequency EIT imaging.

    Science.gov (United States)

    Jehl, Markus; Holder, David

    2016-06-01

    The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.

  6. Computer-aided pulmonary image analysis in small animal models

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J. [Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, Maryland 32892 (United States); Bagci, Ulas, E-mail: ulasbagci@gmail.com [Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, Florida 32816 (United States); Kramer-Marek, Gabriela [The Institute of Cancer Research, London SW7 3RP (United Kingdom); Luna, Brian [Microfluidic Laboratory Automation, University of California-Irvine, Irvine, California 92697-2715 (United States); Kubler, Andre [Department of Medicine, Imperial College London, London SW7 2AZ (United Kingdom); Dey, Bappaditya; Jain, Sanjay [Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, Maryland 21231 (United States); Foster, Brent [Department of Biomedical Engineering, University of California-Davis, Davis, California 95817 (United States); Papadakis, Georgios Z. [Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, Maryland 32892 (United States); Camp, Jeremy V. [Department of Microbiology and Immunology, University of Louisville, Louisville, Kentucky 40202 (United States); Jonsson, Colleen B. [National Institute for Mathematical and Biological Synthesis, University of Tennessee, Knoxville, Tennessee 37996 (United States); Bishai, William R. [Howard Hughes Medical Institute, Chevy Chase, Maryland 20815 and Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, Maryland 21231 (United States); Udupa, Jayaram K. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2015-07-15

    Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors’ system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology, and then invokes a machine-learning-based abnormal imaging pattern detection system. The final stage of the proposed framework is the automatic extraction of the airway tree, for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average Dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method using the publicly available EXACT’09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute to advances in preclinical research in pulmonary diseases.
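
The severity check described in the Methods can be sketched as a one-variable regression followed by a deficit test. All numbers below are made-up illustration values (arbitrary volume units), and the 20% tolerance is a hypothetical threshold, not the paper's.

```python
import numpy as np

# Regress total lung capacity (TLC) against approximated rib cage volume
# on reference scans, then flag a new scan whose initial segmentation
# falls far below the expected lung volume (a cue for dense pathology
# that defeats intensity-based segmentation).
rib_cage_vol = np.array([30.0, 34.0, 38.0, 42.0, 46.0, 50.0])
lung_vol = np.array([14.8, 17.1, 19.0, 21.2, 22.9, 25.1])

slope, intercept = np.polyfit(rib_cage_vol, lung_vol, 1)

def expected_lung_volume(rib_vol):
    return slope * rib_vol + intercept

def severe_pathology(rib_vol, segmented_vol, rel_tol=0.2):
    """True when the segmented volume falls short of the regression
    prediction by more than rel_tol, triggering the ML pattern detector."""
    expected = expected_lung_volume(rib_vol)
    return (expected - segmented_vol) / expected > rel_tol
```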

  7. Ultrasound Imaging and its modeling

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2002-01-01

    Modern medical ultrasound scanners are used for imaging nearly all soft tissue structures in the body. The anatomy can be studied from gray-scale B-mode images, where the reflectivity and scattering strength of the tissues are displayed. The imaging is performed in real time with 20 to 100 images...

  8. [Application of GVF snake model in segmentation of whole body bone SPECT image].

    Science.gov (United States)

    Zhu, Chunmei; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2008-02-01

    Limited by the imaging principle of whole body bone SPECT imaging, the gray value of the bladder area is quite high, which affects the image's brightness, contrast and readability. At the same time, the similarity between the bladder area and lesion foci makes it difficult to segment some images automatically. In this paper, an improved Snake model, the GVF Snake, is adopted to automatically segment the bladder area, in preparation for further processing of whole body bone SPECT images.
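
The distinguishing ingredient of the GVF Snake is its external force field: the edge-map gradient is diffused across the image so the force reaches into homogeneous regions and pulls the contour toward the boundary from far away. A minimal numpy sketch of that field computation on a synthetic image (the snake evolution itself is omitted):

```python
import numpy as np

# Gradient vector flow (GVF) field: iterate
#   u <- u + mu * Laplacian(u) - |grad f|^2 * (u - f_x)   (same for v)
# which diffuses the edge-map gradient (f_x, f_y) away from the edges.
def gvf(edge_map, mu=0.2, n_iter=200):
    def lap(a):
        p = np.pad(a, 1, mode="edge")
        return (p[:-2, 1:-1] + p[2:, 1:-1] +
                p[1:-1, :-2] + p[1:-1, 2:] - 4 * a)
    fy, fx = np.gradient(edge_map)
    mag2 = fx**2 + fy**2
    u, v = fx.copy(), fy.copy()
    for _ in range(n_iter):
        u = u + mu * lap(u) - mag2 * (u - fx)
        v = v + mu * lap(v) - mag2 * (v - fy)
    return u, v

# Synthetic "hot" region (stand-in for the bright bladder area).
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edge = np.hypot(*np.gradient(img))
u, v = gvf(edge)
```

Far from the boundary the raw gradient is exactly zero, but the diffused GVF field is not, which is what lets the snake converge without being initialised right next to the bladder.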

  9. Imaging cerebral haemorrhage with magnetic induction tomography: numerical modelling.

    Science.gov (United States)

    Zolgharni, M; Ledger, P D; Armitage, D W; Holder, D S; Griffiths, H

    2009-06-01

    Magnetic induction tomography (MIT) is a new electromagnetic imaging modality which has the potential to image changes in the electrical conductivity of the brain due to different pathologies. In this study the feasibility of detecting haemorrhagic cerebral stroke with a 16-channel MIT system operating at 10 MHz was investigated. The finite-element method, combined with a realistic, multi-layer head model comprising 12 different tissues, was used for the simulations in the commercial FE package Comsol Multiphysics. The eddy-current problem was solved and the MIT signals computed for strokes of different volumes occurring at different locations in the brain. The results revealed that a large, peripheral stroke (volume 49 cm³) produced phase changes that would be detectable with our currently achievable instrumentation phase noise level (17 millidegrees) in 70 (27%) of the 256 exciter/sensor channel combinations. However, reconstructed images showed that a lower noise level of 1 millidegree was necessary to obtain good visualization of the strokes. The simulated MIT measurements were compared with those from an independent transmission-line-matrix model in order to give confidence in the results.
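
The detectability criterion used in the study (a channel "sees" the stroke when its simulated phase change exceeds the instrumentation phase noise) reduces to a simple threshold count. The phase-change values below are invented for illustration, not the paper's simulated data.

```python
import numpy as np

# Count exciter/sensor channel combinations whose phase change exceeds
# a given phase-noise floor (all values in millidegrees).
rng = np.random.default_rng(1)
n_channels = 256
phase_changes = np.abs(rng.normal(0.0, 25.0, n_channels))  # illustrative

def detectable_channels(phase_changes_mdeg, noise_mdeg):
    return int(np.sum(np.abs(phase_changes_mdeg) > noise_mdeg))

n_detect_instr = detectable_channels(phase_changes, 17.0)  # current noise
n_detect_ideal = detectable_channels(phase_changes, 1.0)   # imaging target
```

Lowering the noise floor from 17 to 1 millidegree sharply increases the number of informative channels, which is why the reconstructions demanded the lower noise level.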

  10. Uncertainty management in integrated modelling, the IMAGE case

    International Nuclear Information System (INIS)

    Van der Sluijs, J.P.

    1995-01-01

    Integrated assessment models of global environmental problems play an increasingly important role in decision making. This use demands good insight into the reliability of these models. In this paper we analyze uncertainty management in the IMAGE project (Integrated Model to Assess the Greenhouse Effect). We use a classification scheme comprising the type and source of uncertainty. Our analysis identifies reliability analysis as the main area for improvement. We briefly review a recently developed methodology, NUSAP (Numerical, Unit, Spread, Assessment and Pedigree), that systematically addresses the strength of data in terms of the spread, reliability and scientific status (pedigree) of information. This approach is being tested through interviews with model builders. 3 tabs., 20 refs

  11. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    Science.gov (United States)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter, so it is affordable for public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification and surface reconstruction are applied to the final results. The reconstructed 3D models can be provided for public access via websites, DVDs and printed materials. Highly accurate 3D models can also serve as reference data for heritage objects that must be restored after deterioration over their lifetime, natural disasters, etc.
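
The .ply output format mentioned above is simple enough to sketch directly. This is a minimal ASCII PLY writer for a bare (x, y, z) point list; real exports from photogrammetry pipelines typically also carry normals and colours.

```python
# Minimal ASCII PLY writer for a point cloud (vertices only).
def write_ply(path, points):
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 2.5)]
write_ply("cloud.ply", points)
```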

  12. 2-D Fused Image Reconstruction approach for Microwave Tomography: a theoretical assessment using FDTD Model.

    Science.gov (United States)

    Bindu, G; Semenov, S

    2013-01-01

    This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system, with the transceivers modelled using a thin-wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction, with the extremity imaging done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method to solve the time-domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm is tested with 10% added noise, and successful image reconstruction demonstrates its robustness.

  13. Modeling for the management of peak loads on a radiology image management network

    International Nuclear Information System (INIS)

    Dwyer, S.J.; Cox, G.G.; Templeton, A.W.; Cook, L.T.; Anderson, W.H.; Hensley, K.S.

    1987-01-01

    The design of a radiology image management network for a radiology department can now be assisted by a queueing model. The queueing model requires that the designers specify the following parameters: the number of tasks to be accomplished (acquisition of image data, transmission of data, archiving of data, display and manipulation of data, and generation of hard copies); the average times to complete each task; the patient scheduled arrival times; and the number/type of computer nodes interfaced to the network (acquisition nodes, interactive diagnostic display stations, archiving nodes, hard copy nodes, and gateways to hospital systems). The outcomes from the queueing model include mean and peak throughput data rates, together with the bottlenecks identified at each. This exhibit presents the queueing model and illustrates its use in managing peak loads on an image management network.
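
The kind of calculation such a model performs can be sketched by treating each network node as an M/M/1 queue: utilisation identifies the bottleneck, and utilisation at or above 1 means the node cannot keep up at peak load. The node names and rates below are illustrative, not taken from the exhibit.

```python
# Each node is an M/M/1 queue: utilisation rho = lambda/mu, and the
# mean time in system is 1/(mu - lambda) when the queue is stable.
def mm1_stats(arrival_rate, service_rate):
    rho = arrival_rate / service_rate
    wait = float("inf") if rho >= 1 else 1.0 / (service_rate - arrival_rate)
    return rho, wait

# (studies/hour arriving, studies/hour the node can service) -- invented
nodes = {
    "acquisition": (10.0, 15.0),
    "display": (10.0, 12.0),
    "archive": (10.0, 30.0),
    "hard_copy": (4.0, 6.0),
}

stats = {name: mm1_stats(lam, mu) for name, (lam, mu) in nodes.items()}
bottleneck = max(stats, key=lambda n: stats[n][0])  # highest utilisation
```

Re-running the same calculation with peak-hour arrival rates is exactly how the mean-load and peak-load bottlenecks in the abstract can differ.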

  14. Automatic extraction of soft tissues from 3D MRI head images using model driven analysis

    International Nuclear Information System (INIS)

    Jiang, Hao; Yamamoto, Shinji; Imao, Masanao.

    1995-01-01

    This paper presents an automatic extraction system (called TOPS-3D: Top Down Parallel Pattern Recognition System for 3D Images) for soft tissues from 3D MRI head images, using a model driven analysis algorithm. Following the construction of the system TOPS we developed previously, two concepts have been considered in the design of TOPS-3D. One is a hierarchical reasoning structure that uses model information at the higher level, and the other is a parallel image processing structure used to extract plural candidate regions for a destination entity. The new points of TOPS-3D are as follows. (1) TOPS-3D is a three-dimensional image analysis system comprising 3D model construction and 3D image processing techniques. (2) A technique is proposed to increase the connectivity between knowledge processing at the higher level and image processing at the lower level. The technique is realized by applying the opening operation of mathematical morphology, in which a structural model function defined at the higher level by knowledge representation is used directly as the filter function of the opening operation in lower-level image processing. The system TOPS-3D applied to 3D MRI head images consists of three levels. The first and second levels are the reasoning part, and the third level is the image processing part. In experiments, we applied 5 samples of 3D MRI head images of size 128 x 128 x 128 pixels to TOPS-3D to extract the regions of soft tissues such as the cerebrum, cerebellum and brain stem. The experimental results show that the system is robust to variation in the input data thanks to the use of model information, and that the position and shape of the extracted soft tissues correspond to the anatomical structure. (author)
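
The morphological opening that couples the two levels can be sketched in 2D with a flat structuring element standing in for the "structural model function": opening (erosion followed by dilation) removes bright structures smaller than the element while preserving larger ones. This is a generic illustration of the operation, not the paper's 3D implementation.

```python
import numpy as np

# Grey-scale erosion/dilation with a flat size x size structuring element,
# and opening = dilation(erosion). Small bright structures vanish.
def erode(img, size):
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + size, j:j + size].min()
    return out

def dilate(img, size):
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + size, j:j + size].max()
    return out

def opening(img, size):
    return dilate(erode(img, size), size)

# A large bright blob (kept) and a single-pixel speck (removed):
img = np.zeros((16, 16))
img[4:10, 4:10] = 1.0   # 6x6 blob, larger than the 3x3 element
img[13, 13] = 1.0       # isolated speck
opened = opening(img, 3)
```

In TOPS-3D the element's shape comes from the higher-level model, so the same operation acts as a shape filter selecting candidate regions consistent with the anatomical model.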

  15. Small Animal [18F]FDG PET Imaging for Tumor Model Study

    International Nuclear Information System (INIS)

    Woo, Sang Keun; Kim, Kyeong Min; Cheon, Gi Jeong

    2008-01-01

    PET allows non-invasive, quantitative and repetitive imaging of biological function in living animals. Small animal PET imaging with [18F]FDG has been successfully applied to the investigation of metabolism, receptor-ligand interactions, gene expression, adoptive cell therapy and somatic gene therapy. The experimental conditions of animal handling impact the biodistribution of [18F]FDG in small animal studies. The small animal PET and CT images were registered using hardware fiducial markers and small animal contour points. Tumor imaging in small animals with [18F]FDG PET should take fasting, warming, and the isoflurane anesthesia level into account. Registered small animal PET and CT images could be useful for the detection of tumors. The experimental conditions of animal handling and the registration method are of key importance for detecting small lesions in metastatic tumor models.
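
Marker-based PET-to-CT registration of the kind described above reduces to a rigid alignment of paired fiducial coordinates, which has a closed-form SVD solution (the Kabsch method). This is a common approach, not necessarily the authors' exact procedure, and the marker coordinates below are synthetic.

```python
import numpy as np

# Closed-form rigid registration of paired fiducial points (Kabsch/SVD).
def rigid_register(src, dst):
    """Return R, t minimising ||R @ src_i + t - dst_i|| over paired points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: transform CT marker positions into "PET space",
# then recover the transform from the paired markers.
rng = np.random.default_rng(2)
ct_markers = rng.uniform(0, 100, size=(5, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 12.0])
pet_markers = (R_true @ ct_markers.T).T + t_true

R, t = rigid_register(pet_markers, ct_markers)
aligned = (R @ pet_markers.T).T + t
```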

  16. Development of computational small animal models and their applications in preclinical imaging and therapy research.

    Science.gov (United States)

    Xie, Tianwu; Zaidi, Habib

    2016-01-01

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.

  17. Development of computational small animal models and their applications in preclinical imaging and therapy research

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Tianwu [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211 (Switzerland); Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211 (Switzerland); Geneva Neuroscience Center, Geneva University, Geneva CH-1205 (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen 9700 RB (Netherlands)

    2016-01-15

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.

  18. Development of computational small animal models and their applications in preclinical imaging and therapy research

    International Nuclear Information System (INIS)

    Xie, Tianwu; Zaidi, Habib

    2016-01-01

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.

  19. Residual stress distribution analysis of heat treated APS TBC using image based modelling.

    Science.gov (United States)

    Li, Chun; Zhang, Xun; Chen, Ying; Carr, James; Jacques, Simon; Behnsen, Julia; di Michiel, Marco; Xiao, Ping; Cernik, Robert

    2017-08-01

    We carried out a residual stress distribution analysis in an APS TBC throughout the depth of the coatings. The samples were heat treated at 1150 °C for 190 h, and the data analysis used image-based modelling built on real 3D images measured by computed tomography (CT). The stress distribution in several 2D slices from the 3D model is included in this paper, as well as the stress distribution along several paths shown on the slices. Our analysis can explain the occurrence of the "jump" features near the interface between the top coat and the bond coat. These features in the residual stress distribution trend were measured (as a function of depth) by high-energy synchrotron XRD (as shown in our related research article entitled 'Understanding the Residual Stress Distribution through the Thickness of Atmosphere Plasma Sprayed (APS) Thermal Barrier Coatings (TBCs) by high energy Synchrotron XRD; Digital Image Correlation (DIC) and Image Based Modelling') (Li et al., 2017) [1].

  20. Image Restoration Based on the Hybrid Total-Variation-Type Model

    OpenAIRE

    Shi, Baoli; Pang, Zhi-Feng; Yang, Yu-Fei

    2012-01-01

    We propose a hybrid total-variation-type model for the image restoration problem based on combining the advantages of the ROF model with those of the LLT model. Since the two ${L}^{1}$-norm terms in the proposed model make it difficult to solve directly using classical numerical methods, we first employ the alternating direction method of multipliers (ADMM) to solve a general form of the proposed model. Then, based on the ADMM and the Moreau-Yosida decomposition theory, a more efficient method call...
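
The ADMM splitting that makes the L1 terms tractable can be illustrated on the simplest TV-type problem, a 1D ROF analogue: min_x 0.5||x - y||² + λ||Dx||₁ with D the forward-difference operator. The paper's model is 2D and adds an LLT (second-order) term, but the mechanism, handling each L1 term by soft-thresholding an auxiliary variable, is the same.

```python
import numpy as np

# 1D TV denoising via ADMM: split z = D x, then alternate a quadratic
# x-update, a soft-threshold z-update, and a dual ascent on u.
def soft(a, k):
    return np.sign(a) * np.maximum(np.abs(a) - k, 0.0)

def tv_denoise_admm(y, lam=1.0, rho=1.0, n_iter=300):
    n = len(y)
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    A = np.eye(n) + rho * D.T @ D           # x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    x = y.copy()
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        z = soft(D @ x + u, lam / rho)      # the L1 term becomes a shrinkage
        u = u + D @ x - z
    return x

# Noisy piecewise-constant signal, the case TV regularisation handles well:
rng = np.random.default_rng(3)
truth = np.concatenate([np.zeros(30), np.full(30, 4.0), np.ones(30)])
noisy = truth + 0.3 * rng.standard_normal(90)
denoised = tv_denoise_admm(noisy, lam=1.0)
```

The hybrid model in the abstract simply carries two such auxiliary variables, one per L1 term, each with its own shrinkage step.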

  1. Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration

    Science.gov (United States)

    Sun, Kaioqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2014-03-01

    This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.

  2. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    Science.gov (United States)

    Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming

    2017-12-01

    Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km² in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km². We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and

  3. Creating vascular models by postprocessing computed tomography angiography images: a guide for anatomical education.

    Science.gov (United States)

    Govsa, Figen; Ozer, Mehmet Asim; Sirinturk, Suzan; Eraslan, Cenk; Alagoz, Ahmet Kemal

    2017-08-01

    A new application of teaching anatomy includes the use of computed tomography angiography (CTA) images to create clinically relevant three-dimensional (3D) printed models. The purpose of this article is to review recent innovations in the process and the application of 3D printed models as a tool in undergraduate and postgraduate medical education. Images of the aortic arch pattern obtained by CTA were converted into 3D images using the free Google SketchUp software and were saved in stereolithography format. Using a 3D printer (Makerbot), a model made of polylactic acid material was printed. A two-vessel left aortic arch was identified, consisting of the brachiocephalic trunk and the left subclavian artery. The life-like 3D models can be rotated 360° about all axes in the hand. Early adopters in education and clinical practice have embraced medical imaging-guided 3D printed anatomical models for their ability to provide tactile feedback and a superior appreciation of the visuospatial relationships between anatomical structures. Printed vascular models are used to assist in preoperative planning, to develop intraoperative guidance tools, and to teach patients and surgical trainees in surgical practice.

  4. 3D Modeling of Vascular Pathologies from contrast enhanced magnetic resonance images (MRI)

    International Nuclear Information System (INIS)

    Cantor Rivera, Diego; Orkisz, Maciej; Arias, Julian; Uriza, Luis Felipe

    2007-01-01

    This paper presents a method for generating 3D vascular models from contrast enhanced magnetic resonance images (MRI) using a fast marching algorithm. The main contributions of this work are: the use of the original image for defining a speed function (which determines the movement of the interface) and the calculation of the time in which the interface identifies the artery. The proposed method was validated on pathologic carotid artery images of patients and vascular phantoms. A visual appraisal of vascular models obtained with the method shows a satisfactory extraction of the vascular wall. A quantitative assessment proved that the generated models depend on the values of algorithm parameters. The maximum induced error was equal to 1.34 voxels in the diameter of the measured stenoses.
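
The core idea of the method, a front propagating from a seed with an image-derived speed so that arrival time delineates the vessel, can be sketched on a toy image. A Dijkstra-style grid update is used here as a simple discrete stand-in for the fast marching solver of |∇T|·F = 1; the speed function and threshold below are invented, not the paper's.

```python
import heapq
import numpy as np

# Arrival times of a front from a seed, with per-pixel speed F:
# stepping into a pixel costs 1/F there (Dijkstra approximation).
def arrival_times(speed, seed):
    T = np.full(speed.shape, np.inf)
    T[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > T[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < speed.shape[0] and 0 <= nj < speed.shape[1]:
                nt = t + 1.0 / speed[ni, nj]
                if nt < T[ni, nj]:
                    T[ni, nj] = nt
                    heapq.heappush(heap, (nt, (ni, nj)))
    return T

# Synthetic "angiogram": a bright horizontal vessel on a dark background.
img = np.full((21, 21), 0.1)
img[10, :] = 1.0            # contrast-enhanced vessel is fast
speed = img                 # speed function taken from the image itself
T = arrival_times(speed, (10, 0))
vessel = T < 25.0           # arrival-time threshold segments the vessel
```

The abstract's two contributions map directly onto this sketch: the speed function is defined from the original image, and the arrival-time threshold is the "time in which the interface identifies the artery".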

  5. Energetic neutral atom imaging with the Polar CEPPAD/IPS instrument: Initial forward modeling results

    International Nuclear Information System (INIS)

    Henderson, M.G.; Reeves, G.D.; Moore, K.R.; Spence, H.E.; Jorgensen, A.M.; Roelof, E.C.

    1997-01-01

    Although the primary function of the CEPPAD/IPS instrument on Polar is the measurement of energetic ions in situ, it has also proven to be a very capable energetic neutral atom (ENA) imager. Raw ENA images are currently being constructed on a routine basis with a temporal resolution of minutes during both active and quiet times. However, while analyses of these images by themselves provide much information on the spatial distribution and dynamics of the energetic ion population in the ring current, detailed modeling is required to extract the actual ion distributions. In this paper, the authors present the initial results of forward modeling an IPS ENA image obtained during a small geomagnetic storm on June 9, 1997. The equatorial ion distribution inferred with this technique reproduces the expected large noon/midnight and dawn/dusk asymmetries. The limitations of the model are discussed and a number of modifications to the basic forward modeling technique are proposed which should significantly improve its performance in future studies.

  6. Using Image Modelling to Teach Newton's Laws with the Ollie Trick

    Science.gov (United States)

    Dias, Marco Adriano; Carvalho, Paulo Simeão; Vianna, Deise Miranda

    2016-01-01

Image modelling is a video-based teaching tool that combines strobe images and video analysis. This tool enables both a qualitative and a quantitative approach to the teaching of physics, in a much more engaging and appealing way than traditional expository practice. In a specific scenario shown in this paper, the Ollie trick, we…

  7. Reconstructing building mass models from UAV images

    KAUST Repository

    Li, Minglei

    2015-07-26

    We present an automatic reconstruction pipeline for large scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first generate a dense point cloud from the aerial images. Based on the statistical analysis of the footprint grid of the buildings, the point cloud is classified into different categories (i.e., buildings, ground, trees, and others). Roof structures are extracted for each individual building using Markov random field optimization. Then, a contour refinement algorithm based on pivot point detection is utilized to refine the contour of patches. Finally, polygonal mesh models are extracted from the refined contours. Experiments on various scenes as well as comparisons with state-of-the-art reconstruction methods demonstrate the effectiveness and robustness of the proposed method.

  8. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    Directory of Open Access Journals (Sweden)

    J. Jung

    2016-06-01

In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of the EOPs of a single image. For feature extraction, we propose two types of matching cues: edged corner points, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and a single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing a matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of a single image's EOPs is achievable with the proposed registration approach as an alternative to the labour-intensive manual registration process.

  9. Noise propagation in resolution modeled PET imaging and its impact on detectability

    International Nuclear Information System (INIS)

    Rahmim, Arman; Tang, Jing

    2013-01-01

Positron emission tomography imaging is affected by a number of resolution degrading phenomena, including positron range, photon non-collinearity and inter-crystal blurring. One approach to this issue is to model some or all of these effects within the image reconstruction task, referred to as resolution modeling (RM). This approach is commonly observed to yield images of higher resolution and, consequently, contrast, and can be thought of as improving the modulation transfer function. Nonetheless, RM can substantially alter the noise distribution. In this work, we utilize noise propagation models in order to accurately characterize the noise texture of reconstructed images in the presence of RM. Furthermore, we consider the task of lesion or defect detection, which is highly determined by the noise distribution as quantified using the noise power spectrum. Ultimately, we use this framework to demonstrate why conventional trade-off analyses (e.g. contrast versus noise, using simplistic noise metrics) do not provide a complete picture of the impact of RM, and that improved performance of RM according to such analyses does not necessarily translate to the superiority of RM in detection task performance.
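The noise power spectrum (NPS) invoked above can be estimated from an ensemble of noise realizations as the ensemble-averaged squared magnitude of their Fourier transforms. A minimal sketch follows; the normalization convention is one common choice, not necessarily the paper's, and the white-noise realizations are synthetic stand-ins for reconstruction noise.

```python
import numpy as np

def noise_power_spectrum(noise_images, pixel_size=1.0):
    """Estimate the 2D NPS as the ensemble average of |FFT(noise)|^2.

    Each input should be a zero-mean noise realization (e.g. a reconstruction
    minus the ensemble mean reconstruction).
    """
    n = len(noise_images)
    ny, nx = noise_images[0].shape
    acc = np.zeros((ny, nx))
    for img in noise_images:
        acc += np.abs(np.fft.fft2(img - img.mean())) ** 2
    # One common normalization: pixel area over pixel count, averaged over realizations.
    return acc * (pixel_size ** 2) / (n * nx * ny)

rng = np.random.default_rng(0)
realizations = [rng.normal(0, 1, (64, 64)) for _ in range(50)]
nps = noise_power_spectrum(realizations)
```

For unit-variance white noise the estimated spectrum is approximately flat at 1; correlated noise (as RM introduces) would instead concentrate power at low frequencies.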

  10. A theory of fine structure image models with an application to detection and classification of dementia.

    Science.gov (United States)

    O'Neill, William; Penn, Richard; Werner, Michael; Thomas, Justin

    2015-06-01

Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models described by non-homogeneous, linear, stationary, ordinary differential equations. In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. We show that gray-scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Our modeling method applies to any linear, stationary, partial differential equation and is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest that finer divisions could be made within a class. Image models can be estimated in milliseconds, which translates to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible.
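The autoregressive partial difference equation fit by OLS can be illustrated with a toy version: regress each pixel on a few causal neighbors by least squares. The three-neighbor stencil, the coefficient values, and the synthetic image are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_ar2d(img):
    """OLS fit of x[i,j] ~ a*x[i-1,j] + b*x[i,j-1] + c*x[i-1,j-1] + d."""
    y = img[1:, 1:].ravel()
    X = np.column_stack([
        img[:-1, 1:].ravel(),   # north neighbor
        img[1:, :-1].ravel(),   # west neighbor
        img[:-1, :-1].ravel(),  # northwest neighbor
        np.ones(y.size),        # intercept
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid

# Synthesize an image from known AR coefficients, then recover them by OLS.
rng = np.random.default_rng(1)
img = np.zeros((80, 80))
img[0, :] = rng.normal(size=80)
img[:, 0] = rng.normal(size=80)
for i in range(1, 80):
    for j in range(1, 80):
        img[i, j] = (0.5 * img[i - 1, j] + 0.3 * img[i, j - 1]
                     - 0.1 * img[i - 1, j - 1] + rng.normal(0, 0.1))
coef, resid = fit_ar2d(img)
```

With ~6000 pixel equations the estimated coefficients land close to the generating values (0.5, 0.3, -0.1); the residuals would feed the significance test the paper describes.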

  11. Image-based quantification and mathematical modeling of spatial heterogeneity in ESC colonies.

    Science.gov (United States)

    Herberg, Maria; Zerjatke, Thomas; de Back, Walter; Glauche, Ingmar; Roeder, Ingo

    2015-06-01

    Pluripotent embryonic stem cells (ESCs) have the potential to differentiate into cells of all three germ layers. This unique property has been extensively studied on the intracellular, transcriptional level. However, ESCs typically form clusters of cells with distinct size and shape, and establish spatial structures that are vital for the maintenance of pluripotency. Even though it is recognized that the cells' arrangement and local interactions play a role in fate decision processes, the relations between transcriptional and spatial patterns have not yet been studied. We present a systems biology approach which combines live-cell imaging, quantitative image analysis, and multiscale, mathematical modeling of ESC growth. In particular, we develop quantitative measures of the morphology and of the spatial clustering of ESCs with different expression levels and apply them to images of both in vitro and in silico cultures. Using the same measures, we are able to compare model scenarios with different assumptions on cell-cell adhesions and intercellular feedback mechanisms directly with experimental data. Applying our methodology to microscopy images of cultured ESCs, we demonstrate that the emerging colonies are highly variable regarding both morphological and spatial fluorescence patterns. Moreover, we can show that most ESC colonies contain only one cluster of cells with high self-renewing capacity. These cells are preferentially located in the interior of a colony structure. The integrated approach combining image analysis with mathematical modeling allows us to reveal potential transcription factor related cellular and intercellular mechanisms behind the emergence of observed patterns that cannot be derived from images directly. © 2015 International Society for Advancement of Cytometry.

  12. Sketch-based 3D modeling by aligning outlines of an image

    Directory of Open Access Journals (Sweden)

    Chunxiao Li

    2016-07-01

In this paper we present an efficient technique for sketch-based 3D modeling using automatically extracted image features. Creating a 3D model often requires a drawing of irregular shapes composed of curved lines as a starting point, but it is difficult to hand-draw such lines without introducing awkward bumps and edges. We propose an automatic alignment of a user's hand-drawn sketch lines to the contour lines of an image, providing a considerable level of ease with which the user can sketch freely while the system intelligently snaps the sketch lines to a background image contour, no longer requiring the strenuous effort of trying to make a perfect line during the modeling task. This interactive technique seamlessly combines the efficiency and perception of the human user with the accuracy of computational power, applied to the domain of 3D modeling, where the utmost precision of on-screen drawing has been one of the hurdles of a task hitherto considered to require highly skilled and careful manipulation by the user. We provide several examples to demonstrate the accuracy and efficiency of the method, with which complex shapes were achieved easily and quickly in the interactive outline drawing task.

  13. Study of Colour Model for Segmenting Mycobacterium Tuberculosis in Sputum Images

    Science.gov (United States)

    Kurniawardhani, A.; Kurniawan, R.; Muhimmah, I.; Kusumadewi, S.

    2018-03-01

One method of diagnosing tuberculosis (TB) is the sputum test, in which the presence and number of Mycobacterium tuberculosis (MTB) in sputum are identified. The presence of MTB can be seen under a light microscope. Before examination, the sputum samples are stained using the Ziehl-Neelsen (ZN) technique. Because there is no standardized staining procedure, the appearance of sputum samples may vary in background colour and contrast level. This increases the difficulty of the segmentation stage of automatic MTB identification. This study therefore investigated colour models to identify colour channels that can segment MTB well under different staining conditions. The channels investigated are those of the RGB, HSV, CIELAB, YCbCr, and C-Y colour models, and the clustering algorithm used is k-Means. The sputum image dataset used in this study was obtained from a community health clinic in a district in Indonesia. Each image was set to 1600x1200 pixels, with variations in the number of MTB, background colour, and contrast level. The experimental results indicate that, under all image conditions, the blue, hue, Cr, and Ry colour channels can be used to segment MTB well in one cluster.
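The k-Means clustering step can be sketched with a plain single-channel implementation. The quantile initialization and the synthetic "hue" values (background near 0.2, sparse bacilli near 0.8) are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    """Plain k-means on a flattened single-channel image.

    Centers are initialized at quantiles of the data, which is deterministic
    and robust for well-separated 1D clusters.
    """
    centers = np.quantile(values, np.linspace(0, 1, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        new = np.array([values[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Synthetic "hue" channel: 990 background pixels near 0.2, 10 bacilli near 0.8.
rng = np.random.default_rng(1)
channel = np.concatenate([np.full(990, 0.2), np.full(10, 0.8)])
channel = channel + rng.normal(0, 0.02, channel.size)
labels, centers = kmeans_1d(channel)
```

Even with the heavy class imbalance typical of sparse bacilli, the two centers converge near 0.2 and 0.8, isolating the MTB pixels in one cluster.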

  14. Automating the segmentation of medical images for the production of voxel tomographic computational models

    International Nuclear Information System (INIS)

    Caon, M.

    2001-01-01

    Radiation dosimetry for the diagnostic medical imaging procedures performed on humans requires anatomically accurate, computational models. These may be constructed from medical images as voxel-based tomographic models. However, they are time consuming to produce and as a consequence, there are few available. This paper discusses the emergence of semi-automatic segmentation techniques and describes an application (iRAD) written in Microsoft Visual Basic that allows the bitmap of a medical image to be segmented interactively and semi-automatically while displayed in Microsoft Excel. iRAD will decrease the time required to construct voxel models. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine

  15. Software engineering methods for the visualization in the modeling of radiation imaging system

    International Nuclear Information System (INIS)

Tang Jie; Zhang Li; Chen Zhiqiang; Zhao Ziran; Xiao Yongshun

    2003-01-01

This thesis presents research on visualization in the modeling of radiation imaging systems, and visualization software was developed using OpenGL and Visual C++. The software can load model files created by the user for each component of the radiation imaging system, and easily manages the module dynamic link libraries (DLLs) designed by the user for the possible movements of those components.

  16. Automated prostate cancer detection via comprehensive multi-parametric magnetic resonance imaging texture feature models

    International Nuclear Information System (INIS)

    Khalvati, Farzad; Wong, Alexander; Haider, Masoom A.

    2015-01-01

Prostate cancer is the most common form of cancer and the second leading cause of cancer death in North America. Auto-detection of prostate cancer can play a major role in early detection, which has a significant impact on patient survival rates. While multi-parametric magnetic resonance imaging (MP-MRI) has shown promise in the diagnosis of prostate cancer, existing auto-detection algorithms do not take advantage of the abundance of data available in MP-MRI to improve detection accuracy. The goal of this research was to design a radiomics-based auto-detection method for prostate cancer utilizing MP-MRI data. In this work, we present new MP-MRI texture feature models for radiomics-driven detection of prostate cancer. In addition to commonly used non-invasive imaging sequences in conventional MP-MRI, namely T2-weighted MRI (T2w) and diffusion-weighted imaging (DWI), our proposed MP-MRI texture feature models incorporate computed high-b DWI (CHB-DWI) and a new diffusion imaging modality called correlated diffusion imaging (CDI). Moreover, the proposed texture feature models incorporate features from individual b-value images. A comprehensive set of texture features was calculated for both the conventional MP-MRI and new MP-MRI texture feature models. We performed feature selection analysis for each individual modality and then combined the best features from each modality to construct the optimized texture feature models. The performance of the proposed MP-MRI texture feature models was evaluated via leave-one-patient-out cross-validation using a support vector machine (SVM) classifier trained on 40,975 cancerous and healthy tissue samples obtained from real clinical MP-MRI datasets. The proposed MP-MRI texture feature models outperformed the conventional model (i.e., T2w+DWI) with regard to cancer detection accuracy. Comprehensive texture feature models were developed for improved radiomics-driven detection of prostate cancer using MP-MRI. Using a

  17. Remote sensing image ship target detection method based on visual attention model

    Science.gov (United States)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

Traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. With this in mind, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method reduces computational complexity while improving detection accuracy, improving the detection efficiency of ship targets in remote sensing images.
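A bottom-up attention map of the kind described can be sketched as a center-surround difference: a small bright target stands out where its fine-scale local mean differs from the coarse-scale surround. This is a minimal stand-in for a full saliency model; the filter radii and the toy "ship" scene are assumptions.

```python
import numpy as np

def box_blur(img, r):
    """Centered mean filter with half-width r, via a padded 2D cumulative sum."""
    p = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # leading zeros so c[a,b] = sum p[:a,:b]
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def saliency(img, center_r=1, surround_r=8):
    """Center-surround difference: fine-scale mean minus coarse-scale mean."""
    return np.abs(box_blur(img, center_r) - box_blur(img, surround_r))

# A bright 3x3 "ship" on a dark sea produces a saliency peak at its location.
sea = np.zeros((64, 64))
sea[30:33, 40:43] = 1.0
s = saliency(sea)
```

Only the small high-saliency region then needs detailed analysis, which is the computational saving the abstract claims over exhaustive sliding-window search.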

  18. TU-FG-209-12: Treatment Site and View Recognition in X-Ray Images with Hierarchical Multiclass Recognition Models

    Energy Technology Data Exchange (ETDEWEB)

    Chang, X; Mazur, T; Yang, D [Washington University in St Louis, St Louis, MO (United States)

    2016-06-15

    Purpose: To investigate an approach of automatically recognizing anatomical sites and imaging views (the orientation of the image acquisition) in 2D X-ray images. Methods: A hierarchical (binary tree) multiclass recognition model was developed to recognize the treatment sites and views in x-ray images. From top to bottom of the tree, the treatment sites are grouped hierarchically from more general to more specific. Each node in the hierarchical model was designed to assign images to one of two categories of anatomical sites. The binary image classification function of each node in the hierarchical model is implemented by using a PCA transformation and a support vector machine (SVM) model. The optimal PCA transformation matrices and SVM models are obtained by learning from a set of sample images. Alternatives of the hierarchical model were developed to support three scenarios of site recognition that may happen in radiotherapy clinics, including two or one X-ray images with or without view information. The performance of the approach was tested with images of 120 patients from six treatment sites – brain, head-neck, breast, lung, abdomen and pelvis – with 20 patients per site and two views (AP and RT) per patient. Results: Given two images in known orthogonal views (AP and RT), the hierarchical model achieved a 99% average F1 score to recognize the six sites. Site specific view recognition models have 100 percent accuracy. The computation time to process a new patient case (preprocessing, site and view recognition) is 0.02 seconds. Conclusion: The proposed hierarchical model of site and view recognition is effective and computationally efficient. It could be useful to automatically and independently confirm the treatment sites and views in daily setup x-ray 2D images. It could also be applied to guide subsequent image processing tasks, e.g. site and view dependent contrast enhancement and image registration. The senior author received research grants from View
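Each node of the hierarchical model above applies a PCA transformation followed by a binary classifier. A dependency-free sketch is shown below; note that a nearest-centroid rule stands in for the SVM, and the synthetic feature vectors are assumptions, not x-ray image data.

```python
import numpy as np

class PCABinaryNode:
    """One node of a hierarchical recognizer: PCA projection + binary decision.

    A nearest-centroid rule substitutes for the SVM used in the paper.
    """
    def fit(self, X, y, n_components=2):
        self.mean = X.mean(axis=0)
        # PCA transformation matrix from the SVD of the centered training data.
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.W = vt[:n_components].T
        Z = (X - self.mean) @ self.W
        self.centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
        return self

    def predict(self, X):
        Z = (X - self.mean) @ self.W
        d = np.linalg.norm(Z[:, None, :] - self.centroids[None, :, :], axis=2)
        return np.argmin(d, axis=1)

# Synthetic feature vectors for two well-separated site groups.
rng = np.random.default_rng(2)
X0 = rng.normal(0.0, 0.5, (100, 20))
X1 = rng.normal(2.0, 0.5, (100, 20))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
node = PCABinaryNode().fit(X, y)
acc = (node.predict(X) == y).mean()
```

A binary tree of such nodes routes an image from general groups (e.g. head vs. torso) down to a specific site, as the abstract describes.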

  20. Registration-based segmentation with articulated model from multipostural magnetic resonance images for hand bone motion animation.

    Science.gov (United States)

    Chen, Hsin-Chen; Jou, I-Ming; Wang, Chien-Kuo; Su, Fong-Chin; Sun, Yung-Nien

    2010-06-01

The quantitative measurements of hand bones, including volume, surface, orientation, and position, are essential in investigating hand kinematics. Within the measurement stage, bone segmentation is the most important step due to its direct influence on measurement accuracy. Since hand bones are small and tubular in shape, magnetic resonance (MR) imaging is prone to artifacts such as nonuniform intensity and fuzzy boundaries, so greater detail is required to improve segmentation accuracy. The authors therefore propose a novel registration-based method built on an articulated hand model to segment hand bones from multipostural MR images. The proposed method consists of a model construction stage and a registration-based segmentation stage. Given a reference postural image, the first stage constructs a drivable reference model characterized by hand bone shapes, intensity patterns, and an articulated joint mechanism. Applying the reference model in the second stage, the authors first perform a model-based registration driven by intensity distribution similarity, MR bone intensity properties, and constraints of model geometry to align the reference model with the target bone regions of a given postural image. They then refine the resulting surface to improve the superimposition between the registered reference model and the target bone boundaries. For each subject, given a reference postural image, the proposed method can automatically segment all hand bones from all other postural images. Compared to the ground truth from two experts, the resulting surfaces had an average margin of error within 1 mm. In addition, the proposed method showed good agreement on the overlap of bone segmentations by the Dice similarity coefficient and also demonstrated better segmentation results than conventional methods. The proposed registration-based segmentation method can successfully overcome drawbacks caused by inherent artifacts in MR images and
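The Dice similarity coefficient used above to score segmentation overlap is straightforward to compute for binary masks; a minimal version with toy masks follows.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A intersect B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

auto = np.zeros((10, 10), bool)
auto[2:6, 2:6] = True      # 16 voxels: automatic segmentation
manual = np.zeros((10, 10), bool)
manual[3:7, 3:7] = True    # 16 voxels, offset by 1: expert ground truth
d = dice(auto, manual)     # overlap of 9 voxels -> 18/32
```

A value of 1 means perfect overlap; values above roughly 0.7 are conventionally read as good agreement in segmentation studies.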

  1. RECONSTRUCTION OF A HUMAN LUNG MORPHOLOGY MODEL FROM MAGNETIC RESONANCE IMAGES

    Science.gov (United States)

    RATIONALE A description of lung morphological structure is necessary for modeling the deposition and fate of inhaled therapeutic aerosols. A morphological model of the lung boundary was generated from magnetic resonance (MR) images with the goal of creating a framework for anato...

  2. IMAGE: An Integrated Model for the Assessment of the Greenhouse Effect

    NARCIS (Netherlands)

    Rotmans J; Boois H de; Swart RJ

    1989-01-01

This report describes the structure of the RIVM simulation model IMAGE (an Integrated Model for the Assessment of the Greenhouse Effect). The model aims to provide an integrated overview of the greenhouse problem as well as insight into the essential driving forces of the

  3. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul

    2010-01-01

    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  4. In Vivo PET Imaging of HDL in Multiple Atherosclerosis Models

    DEFF Research Database (Denmark)

    Pérez-Medina, Carlos; Binderup, Tina; Lobatto, Mark E

    2016-01-01

OBJECTIVES: The goal of this study was to develop and validate a noninvasive imaging tool to visualize the in vivo behavior of high-density lipoprotein (HDL) by using positron emission tomography (PET), with an emphasis on its plaque-targeting abilities. BACKGROUND: HDL is a natural nanoparticle… …,2-distearoyl-sn-glycero-3-phosphoethanolamine-deferoxamine B). Biodistribution and plaque targeting of radiolabeled HDL were studied in established murine, rabbit, and porcine atherosclerosis models by using PET combined with computed tomography (PET/CT) imaging or PET combined with magnetic resonance imaging… Ex vivo validation was conducted by radioactivity counting, autoradiography, and near-infrared fluorescence imaging. Flow cytometric assessment of cellular specificity in different tissues was performed in the murine model. RESULTS: We observed distinct pharmacokinetic profiles for the two (89)Zr…

  5. Spectral imaging toolbox: segmentation, hyperstack reconstruction, and batch processing of spectral images for the determination of cell and model membrane lipid order.

    Science.gov (United States)

    Aron, Miles; Browning, Richard; Carugo, Dario; Sezgin, Erdinc; Bernardino de la Serna, Jorge; Eggeling, Christian; Stride, Eleanor

    2017-05-12

Spectral imaging with polarity-sensitive fluorescent probes enables the quantification of cell and model membrane physical properties, including local hydration, fluidity, and lateral lipid packing, usually characterized by the generalized polarization (GP) parameter. With the development of commercial microscopes equipped with spectral detectors, spectral imaging has become a convenient and powerful technique for measuring GP and other membrane properties. The existing tools for spectral image processing, however, are insufficient for processing the large data sets afforded by this technological advancement, and are unsuitable for processing images acquired with rapidly internalized fluorescent probes. Here we present a MATLAB spectral imaging toolbox with the aim of overcoming these limitations. In addition to common operations, such as the calculation of distributions of GP values, generation of pseudo-colored GP maps, and spectral analysis, a key highlight of this tool is reliable membrane segmentation for probes that are rapidly internalized. Furthermore, handling for hyperstacks, 3D reconstruction and batch processing facilitates analysis of data sets generated by time series, z-stack, and area scan microscope operations. Finally, the object size distribution is determined, which can provide insight into the mechanisms underlying changes in membrane properties and is desirable for, e.g., studies involving model membranes and surfactant-coated particles. Analysis is demonstrated for cell membranes, cell-derived vesicles, model membranes, and microbubbles with the environmentally sensitive probes Laurdan, carboxyl-modified Laurdan (C-Laurdan), Di-4-ANEPPDHQ, and Di-4-AN(F)EPPTEA (FE), for quantification of the local lateral density of lipids or lipid packing. The Spectral Imaging Toolbox is a powerful tool for the segmentation and processing of large spectral imaging datasets, with a reliable method for membrane segmentation and no programming ability required.
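The generalized polarization (GP) parameter above is computed per pixel from two spectral channels; a minimal sketch follows. The channel wavelengths are probe-dependent (for Laurdan, commonly around 440 nm and 490 nm), and the sample intensities here are synthetic.

```python
import numpy as np

def gp_map(i_ordered, i_disordered, eps=1e-9):
    """GP = (I_o - I_d) / (I_o + I_d), per pixel, in [-1, 1].

    I_o: intensity in the ordered-phase (blue-shifted) emission channel;
    I_d: intensity in the disordered-phase (red-shifted) emission channel.
    """
    i_o = np.asarray(i_ordered, float)
    i_d = np.asarray(i_disordered, float)
    return (i_o - i_d) / (i_o + i_d + eps)  # eps guards empty pixels

# Two pixels: one in an ordered (packed) region, one in a disordered region.
blue = np.array([[100.0, 30.0]])
red = np.array([[50.0, 90.0]])
gp = gp_map(blue, red)  # higher GP = more ordered, more tightly packed membrane
```

Pseudo-colored maps of this quantity, and histograms of its per-pixel distribution, are exactly the "GP maps" and "distributions of GP values" the toolbox computes.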

  6. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    Science.gov (United States)

    Chao, Ming; Xie, Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing, Lei

    2010-01-01

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in quantitative assessment of therapeutics and realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant tumor tissue content changes, similarity-based models are not suitable for describing the process of tumor volume changes. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) Autodetection of homologous tissue features shared by two input images using the scale invariance feature transformation (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, where SIFT features were mapped from template to target images and reversely. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the “ground truth” with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy. PMID:20527569
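The bidirectional association step — keeping only features that map to each other in both directions — can be sketched as mutual nearest-neighbor matching of descriptor vectors. Generic random descriptors stand in for SIFT output here; the matching logic is the illustrated idea.

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Return index pairs (i, j) where a_i's nearest neighbor in B is b_j
    AND b_j's nearest neighbor in A is a_i (bidirectional association)."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = np.argmin(d, axis=1)   # best match in B for each A feature
    b_to_a = np.argmin(d, axis=0)   # best match in A for each B feature
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

# Template descriptors, and a noisy shuffled copy playing the target image.
rng = np.random.default_rng(3)
A = rng.normal(size=(6, 8))
perm = rng.permutation(6)
B = A[perm] + rng.normal(0, 0.01, (6, 8))
matches = mutual_matches(A, B)
```

One-sided matching can pair a template feature with a spurious target feature; requiring agreement in both directions discards such pairs, which is the correctness guarantee the abstract describes.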

  7. Robotic needle steering: design, modeling, planning, and image guidance

    NARCIS (Netherlands)

    Cowan, Noah J.; Goldberg, Ken; Chirikjian, Gregory S.; Fichtinger, Gabor; Alterovitz, Ron; Reed, Kyle B.; Kallem, Vinutha; Misra, Sarthak; Park, Wooram; Okamura, Allison M.; Rosen, Jacob; Hannaford, Blake; Satava, Richard M.

    2010-01-01

    This chapter describes how advances in needle design, modeling, planning, and image guidance make it possible to steer flexible needles from outside the body to reach specified anatomical targets not accessible using traditional needle insertion methods. Steering can be achieved using a variety of

  8. A generalized model for optimal transport of images including dissipation and density modulation

    KAUST Repository

    Maas, Jan

    2015-11-01

© EDP Sciences, SMAI 2015. In this paper the optimal transport and the metamorphosis perspectives are combined. For a pair of given input images, geodesic paths in the space of images are defined as minimizers of a resulting path energy. To this end, the underlying Riemannian metric measures the rate of transport cost and the rate of viscous dissipation. Furthermore, the model is capable of dealing with strongly varying image contrast and explicitly allows for sources and sinks in the transport equations, which are incorporated in the metric related to the metamorphosis approach by Trouvé and Younes. In the non-viscous case with source term, existence of geodesic paths is proven in the space of measures. The proposed model is explored on the range from merely optimal transport to strongly dissipative dynamics. For this model a robust and effective variational time discretization of geodesic paths is proposed. This requires minimizing a discrete path energy consisting of a sum of consecutive image matching functionals. These functionals are defined on corresponding pairs of intensity functions and on associated pairwise matching deformations. Existence of time-discrete geodesics is demonstrated. Furthermore, a finite element implementation is proposed and applied to instructive test cases and to real images. In the non-viscous case this is compared to the algorithm proposed by Benamou and Brenier, including a discretization of the source term. Finally, the model is generalized to define discrete weighted barycentres, with applications to textures and objects.

  9. Adaptive wiener filter based on Gaussian mixture distribution model for denoising chest X-ray CT image

    International Nuclear Information System (INIS)

    Tabuchi, Motohiro; Yamane, Nobumoto; Morikawa, Yoshitaka

    2008-01-01

    In recent decades, X-ray CT imaging has become more important as a result of its high-resolution performance. However, it is well known that the X-ray dose is insufficient in the techniques that use low-dose imaging in health screening or thin-slice imaging in work-up. Therefore, the degradation of CT images caused by the streak artifact frequently becomes problematic. In this study, we applied a Wiener filter (WF) using the universal Gaussian mixture distribution model (UNI-GMM) as a statistical model to remove the streak artifact. In designing the WF, it is necessary to estimate the statistical model and the precise covariances of the original image. In the proposed method, we obtained a variety of chest X-ray CT images using a phantom simulating a chest organ, and we estimated the statistical information using the images for training. Simulation results showed that it is possible to fit the UNI-GMM to the chest X-ray CT images and reduce the specific noise. (author)
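    As a rough illustration of the Wiener-filtering idea behind this record (using SciPy's local-statistics `wiener` filter and a synthetic image, not the paper's UNI-GMM model or phantom-trained statistics):

    ```python
    import numpy as np
    from scipy.signal import wiener

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a CT slice: smooth background plus a bright disc.
    x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
    clean = 100.0 + 80.0 * (x**2 + y**2 < 0.25)

    # Additive Gaussian noise as a crude stand-in for low-dose streak artifact.
    noisy = clean + rng.normal(0.0, 15.0, clean.shape)

    # Adaptive Wiener filter based on local mean and variance estimates.
    denoised = wiener(noisy, mysize=5)

    mse_noisy = np.mean((noisy - clean) ** 2)
    mse_denoised = np.mean((denoised - clean) ** 2)
    ```

    The UNI-GMM approach replaces the single-Gaussian local statistics assumed here with a mixture model learned from training images, which better matches the statistics of real CT noise.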

  10. Modeling LCD Displays with Local Backlight Dimming for Image Quality Assessment

    DEFF Research Database (Denmark)

    Korhonen, Jari; Burini, Nino; Forchhammer, Søren

    2011-01-01

    for evaluating the signal quality distortion related directly to digital signal processing, such as compression. However, the physical characteristics of the display device also have a significant impact on the overall perception. In order to facilitate image quality assessment on modern liquid crystal displays (LCD) using light emitting diode (LED) backlight with local dimming, we present the essential considerations and guidelines for modeling the characteristics of displays with high dynamic range (HDR) and locally adjustable backlight segments. The representation of the image generated by the model can...... be assessed using the traditional objective metrics, and therefore the proposed approach is useful for assessing the performance of different backlight dimming algorithms in terms of resulting quality and power consumption in a simulated environment. We have implemented the proposed model in C++ and compared......

  11. Computational Modeling for Enhancing Soft Tissue Image Guided Surgery: An Application in Neurosurgery.

    Science.gov (United States)

    Miga, Michael I

    2016-01-01

    With the recent advances in computing, the opportunities to translate computational models to more integrated roles in patient treatment are expanding at an exciting rate. One area of considerable development has been directed towards correcting soft tissue deformation within image guided neurosurgery applications. This review captures the efforts that have been undertaken towards enhancing neuronavigation by the integration of soft tissue biomechanical models, imaging and sensing technologies, and algorithmic developments. In addition, the review speaks to the evolving role of modeling frameworks within surgery and concludes with some future directions beyond neurosurgical applications.

  12. Construction of anthropomorphic hybrid, dual-lattice voxel models for optimizing image quality and dose in radiography

    Science.gov (United States)

    Petoussi-Henss, Nina; Becker, Janine; Greiter, Matthias; Schlattl, Helmut; Zankl, Maria; Hoeschen, Christoph

    2014-03-01

    In radiography there is generally a conflict between the best image quality and the lowest possible patient dose. A proven method of dosimetry is the simulation of radiation transport in virtual human models (i.e. phantoms). However, while the resolution of these voxel models is adequate for most dosimetric purposes, they cannot provide the required organ fine structures necessary for the assessment of imaging quality. The aim of this work is to develop hybrid/dual-lattice voxel models (also called phantoms) as well as simulation methods by which patient dose and image quality for typical radiographic procedures can be determined. The results will provide a basis to investigate by means of simulations the relationships between patient dose and image quality for various imaging parameters and to develop methods for their optimization. A hybrid model, based on NURBS (Non-Uniform Rational B-Spline) and PM (Polygon Mesh) surfaces, was constructed from an existing voxel model of a female patient. The organs of the hybrid model can then be scaled and deformed in a non-uniform way, i.e. organ by organ; they can thus be adapted to patient characteristics without losing their anatomical realism. Furthermore, the left lobe of the lung was substituted by a high-resolution lung voxel model, resulting in a dual-lattice geometry model. "Dual lattice" means in this context the combination of voxel models with different resolutions. Monte Carlo simulations of radiographic imaging were performed with the code EGS4nrc, modified so as to perform dual-lattice transport. Results are presented for a thorax examination.

  13. Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method

    Science.gov (United States)

    D'Ambra, Pasqua; Tartaglione, Gaetano

    2015-03-01

    Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying the boundaries of the objects. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and poorly scalable with respect to image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that non-linear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem cell targeted differentiation, where one of the main goals is segmentation of thousands of images to analyze cell colony morphology.
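    The Gauss-Seidel relaxation at the heart of this record can be sketched for the simpler case of a linear Poisson problem discretized by finite differences (the grid size and sweep count below are illustrative choices, not from the paper):

    ```python
    import numpy as np

    def gauss_seidel_sweep(u, f, h):
        """One lexicographic Gauss-Seidel sweep for -Laplace(u) = f, zero Dirichlet BC."""
        n = u.shape[0]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                  + u[i, j - 1] + u[i, j + 1] + h * h * f[i, j])
        return u

    def residual_norm(u, f, h):
        """Norm of the finite-difference residual on interior points."""
        lap = (4.0 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
               - u[1:-1, :-2] - u[1:-1, 2:]) / h**2
        return np.linalg.norm(f[1:-1, 1:-1] - lap)

    n = 17
    h = 1.0 / (n - 1)
    f = np.ones((n, n))
    u = np.zeros((n, n))        # boundary values stay zero throughout

    r0 = residual_norm(u, f, h)
    for _ in range(200):
        u = gauss_seidel_sweep(u, f, h)
    r1 = residual_norm(u, f, h)
    ```

    In the paper's setting the update is non-linear (the coefficients depend on the current iterate) and the sweep is accelerated by inner linear iterations and multigrid; the sweep structure itself is the same.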

  14. Detecting ship targets in spaceborne infrared image based on modeling radiation anomalies

    Science.gov (United States)

    Wang, Haibo; Zou, Zhengxia; Shi, Zhenwei; Li, Bo

    2017-09-01

    Using infrared imaging sensors to detect ship targets in the ocean environment has many advantages compared to other sensor modalities, such as better thermal sensitivity and all-weather detection capability. We propose a new ship detection method by modeling radiation anomalies for spaceborne infrared images. The proposed method can be decomposed into two stages, where in the first stage, a test infrared image is densely divided into a set of image patches and the radiation anomaly of each patch is estimated by a Gaussian Mixture Model (GMM), and thereby target candidates are obtained from anomalous image patches. In the second stage, target candidates are further checked by a more discriminative criterion to obtain the final detection result. The main innovation of the proposed method is inspired by the biological mechanism that human eyes are sensitive to unusual and anomalous patches against a complex background. Experimental results on the short-wavelength infrared band (1.560-2.300 μm) and long-wavelength infrared band (10.30-12.50 μm) of the Landsat-8 satellite show that the proposed method achieves the desired ship detection accuracy with higher recall than other classical ship detection methods.
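    A sketch of the first-stage idea: fit a Gaussian model to per-patch radiometric features and flag low-likelihood patches as anomalies. The two-dimensional feature vectors and the single-component model are simplifying assumptions; the paper fits a full Gaussian Mixture Model to infrared image patches.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)

    # Simulated per-patch features (mean intensity, intensity std) for a calm sea
    # background, plus one radiometrically anomalous "ship" patch at index 100.
    background = rng.normal(loc=[20.0, 2.0], scale=[1.0, 0.2], size=(100, 2))
    ship = np.array([[45.0, 8.0]])
    patches = np.vstack([background, ship])

    # Fit a Gaussian model to all patch features; patches with low log-likelihood
    # under the fitted model are treated as radiation anomalies.
    gmm = GaussianMixture(n_components=1, random_state=0).fit(patches)
    scores = gmm.score_samples(patches)

    anomaly_index = int(np.argmin(scores))   # the ship patch scores lowest
    ```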

  15. New Hybrid Variational Recovery Model for Blurred Images with Multiplicative Noise

    DEFF Research Database (Denmark)

    Dong, Yiqiu; Zeng, Tieyong

    2013-01-01

    A new hybrid variational model for recovering blurred images in the presence of multiplicative noise is proposed. Inspired by previous work on multiplicative noise removal, an I-divergence technique is used to build a strictly convex model under a condition that ensures the uniqueness...

  16. A Convex Variational Model for Restoring Blurred Images with Multiplicative Noise

    DEFF Research Database (Denmark)

    Dong, Yiqiu; Tieyong Zeng

    2013-01-01

    In this paper, a new variational model for restoring blurred images with multiplicative noise is proposed. Based on the statistical property of the noise, a quadratic penalty function technique is utilized in order to obtain a strictly convex model under a mild condition, which guarantees...

  17. Image-based modeling of flow and reactive transport in porous media

    Science.gov (United States)

    Qin, Chao-Zhong; Hoang, Tuong; Verhoosel, Clemens V.; Harald van Brummelen, E.; Wijshoff, Herman M. A.

    2017-04-01

    Due to the availability of powerful computational resources and high-resolution acquisition of material structures, image-based modeling has become an important tool in studying pore-scale flow and transport processes in porous media [Scheibe et al., 2015]. It also plays an important role in upscaling studies for developing macroscale porous media models. Usually, the pore structure of a porous medium is directly discretized by the voxels obtained from visualization techniques (e.g. micro-CT scanning), which avoids the complex generation of a computational mesh. However, this discretization may considerably overestimate the interfacial areas between solid walls and pore spaces. As a result, it could impact the numerical predictions of reactive transport and immiscible two-phase flow. In this work, two types of image-based models are used to study single-phase flow and reactive transport in a porous medium of sintered glass beads. One model is from a well-established voxel-based simulation tool. The other is based on the mixed isogeometric finite cell method [Hoang et al., 2016], which has been implemented in the open-source Nutils (http://www.nutils.org). The finite cell method can be used in combination with isogeometric analysis to enable the higher-order discretization of problems on complex volumetric domains. A particularly interesting application of this immersed simulation technique is image-based analysis, where the geometry is smoothly approximated by segmentation of a B-spline level set approximation of scan data [Verhoosel et al., 2015]. Through a number of case studies with the two models, we will show the advantages and disadvantages of each model in modeling single-phase flow and reactive transport in porous media. In particular, we will highlight the importance of preserving high-resolution interfaces between solid walls and pore spaces in image-based modeling of porous media. References Hoang, T., C. V. Verhoosel, F. Auricchio, E. H. van
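    The overestimation of solid-pore interfacial area by direct voxel discretization, which this record highlights, can be demonstrated numerically on a voxelized sphere (grid size and radius are arbitrary illustrative choices):

    ```python
    import numpy as np

    # Voxelize a sphere of radius r on a unit-spacing grid.
    r, n = 20.0, 48
    ax = np.arange(n) - (n - 1) / 2.0
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    solid = X**2 + Y**2 + Z**2 <= r**2

    # Interfacial area of the voxel model: count exposed faces, i.e. axis-aligned
    # neighbor pairs where a solid voxel meets a pore voxel (unit area each).
    faces = 0
    for axis in range(3):
        s = np.swapaxes(solid, 0, axis)
        faces += int(np.sum(s[:-1] != s[1:]))
    voxel_area = float(faces)

    true_area = 4.0 * np.pi * r**2
    ratio = voxel_area / true_area   # tends to 1.5 for a sphere as the grid refines
    ```

    The staircased voxel boundary overestimates the smooth interfacial area by roughly 50% for a sphere, which is exactly the kind of error a smooth B-spline level-set geometry avoids.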

  18. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    Directory of Open Access Journals (Sweden)

    A.-S. Høyer

    2017-12-01

    Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km² in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments), which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels of size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km². We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical

  19. WE-G-207-06: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Physical Phantom and Clinical Patient Images

    International Nuclear Information System (INIS)

    Dhou, S; Cai, W; Hurwitz, M; Rottmann, J; Myronakis, M; Cifter, F; Berbeco, R; Lewis, J; Williams, C; Mishra, P; Ionascu, D

    2015-01-01

    Purpose: Respiratory-correlated cone-beam CT (4DCBCT) images acquired immediately prior to treatment have the potential to represent patient motion patterns and anatomy during treatment, including both intra- and inter-fractional changes. We develop a method to generate patient-specific motion models based on 4DCBCT images acquired with existing clinical equipment and used to generate time varying volumetric images (3D fluoroscopic images) representing motion during treatment delivery. Methods: Motion models are derived by deformably registering each 4DCBCT phase to a reference phase, and performing principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated by optimizing the resulting PCA coefficients iteratively through comparison of the cone-beam projections simulating kV treatment imaging and digitally reconstructed radiographs generated from the motion model. Patient and physical phantom datasets are used to evaluate the method in terms of tumor localization error compared to manually defined ground truth positions. Results: 4DCBCT-based motion models were derived and used to generate 3D fluoroscopic images at treatment time. For the patient datasets, the average tumor localization error and the 95th percentile were 1.57 and 3.13 respectively in subsets of four patient datasets. For the physical phantom datasets, the average tumor localization error and the 95th percentile were 1.14 and 2.78 respectively in two datasets. 4DCBCT motion models are shown to perform well in the context of generating 3D fluoroscopic images due to their ability to reproduce anatomical changes at treatment time. Conclusion: This study showed the feasibility of deriving 4DCBCT-based motion models and using them to generate 3D fluoroscopic images at treatment time in real clinical settings. 
4DCBCT-based motion models were found to account for the 3D non-rigid motion of the patient anatomy during treatment and have the potential
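    The motion-model construction described here (principal component analysis on displacement vector fields) can be sketched with synthetic data standing in for the deformable-registration output:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic stand-in for registration output: each row is a flattened
    # displacement vector field (DVF) for one 4DCBCT breathing phase.
    n_phases, n_voxels = 10, 500
    basis = rng.normal(size=(2, n_voxels))                  # two true motion modes
    weights = rng.normal(size=(n_phases, 2))
    dvfs = weights @ basis + 0.01 * rng.normal(size=(n_phases, n_voxels))

    # PCA via SVD of the mean-centered DVF matrix.
    mean_dvf = dvfs.mean(axis=0)
    U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
    k = 2
    coeffs = U[:, :k] * S[:k]            # per-phase PCA coefficients
    recon = mean_dvf + coeffs @ Vt[:k]   # motion model evaluated at each phase

    rel_err = np.linalg.norm(recon - dvfs) / np.linalg.norm(dvfs)
    ```

    In the paper, new coefficient values are then optimized iteratively so that projections of the deformed reference volume match the measured kV projections at treatment time; the sketch above only verifies that a few components capture the simulated motion.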

  20. An Improved Physics-Based Model for Topographic Correction of Landsat TM Images

    Directory of Open Access Journals (Sweden)

    Ainong Li

    2015-05-01

    Optical remotely sensed images in mountainous areas are subject to radiometric distortions induced by topographic effects, which need to be corrected before quantitative applications. Based on the Li model and the Sandmeier model, this paper proposed an improved physics-based model for the topographic correction of Landsat Thematic Mapper (TM) images. The model employed Normalized Difference Vegetation Index (NDVI) thresholds to approximately divide land targets into eleven groups, due to NDVI's lower sensitivity to topography and its significant role in indicating land cover type. Within each group of terrestrial targets, corresponding MODIS BRDF (Bidirectional Reflectance Distribution Function) products were used to account for the land surface's BRDF effect, and topographic effects were corrected without the Lambertian assumption. The methodology was tested with two TM scenes of severely rugged mountain areas acquired under different sun elevation angles. Results demonstrated that reflectance of sun-averted slopes was evidently enhanced, and the overall quality of images was improved with topographic effects being effectively suppressed. Correlation coefficients between Near Infra-Red band reflectance and illumination condition reduced almost to zero, and coefficients of variance also showed some reduction. By comparison with the other two physics-based models (the Sandmeier and Li models), the proposed model showed favorable results on two tested Landsat scenes. With the almost half-century accumulation of Landsat data and the successive launch and operation of Landsat 8, the improved model in this paper can be potentially helpful for the topographic correction of Landsat and Landsat-like data.
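    The NDVI-based grouping step can be sketched as follows (the band values and thresholds are illustrative; the paper divides targets into eleven groups, three are shown here for brevity):

    ```python
    import numpy as np

    # Toy TM-like reflectance values for the red and near-infrared bands.
    red = np.array([[0.08, 0.30], [0.05, 0.20]])
    nir = np.array([[0.45, 0.32], [0.40, 0.22]])

    ndvi = (nir - red) / (nir + red)

    # Hypothetical NDVI thresholds splitting land targets into groups;
    # each group then gets its own BRDF-based correction.
    thresholds = [0.2, 0.5]
    groups = np.digitize(ndvi, thresholds)   # 0, 1, or 2 per pixel
    ```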

  1. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    Science.gov (United States)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research, we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the location of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, panoramic images are processed into 720° panoramas, and these panoramas can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud aided in determining the locations of building objects using a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The

  2. "Big Data" in Rheumatology: Intelligent Data Modeling Improves the Quality of Imaging Data.

    Science.gov (United States)

    Landewé, Robert B M; van der Heijde, Désirée

    2018-05-01

    Analysis of imaging data in rheumatology is a challenge. Reliability of scores is an issue for several reasons. Signal-to-noise ratio of most imaging techniques is rather unfavorable (too little signal in relation to too much noise). Optimal use of all available data may help to increase credibility of imaging data, but knowledge of complicated statistical methodology and the help of skilled statisticians are required. Clinicians should appreciate the merits of sophisticated data modeling and liaise with statisticians to increase the quality of imaging results, as proper imaging studies in rheumatology imply more than a supersensitive imaging technique alone. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. BrainK for Structural Image Processing: Creating Electrical Models of the Human Head

    Directory of Open Access Journals (Sweden)

    Kai Li

    2016-01-01

    BrainK is a set of automated procedures for characterizing the tissues of the human head from MRI, CT, and photogrammetry images. The tissue segmentation and cortical surface extraction support the primary goal of modeling the propagation of electrical currents through head tissues with a finite difference model (FDM) or finite element model (FEM) created from the BrainK geometries. The electrical head model is necessary for accurate source localization of dense-array electroencephalographic (dEEG) measures from head surface electrodes. It is also necessary for accurate targeting of cerebral structures with transcranial current injection from those surface electrodes. BrainK must achieve five major tasks: image segmentation; registration of the MRI, CT, and sensor photogrammetry images; cortical surface reconstruction; dipole tessellation of the cortical surface; and Talairach transformation. We describe the approach to each task, and we compare the accuracies for the key tasks of tissue segmentation and cortical surface extraction in relation to existing research tools (FreeSurfer, FSL, SPM, and BrainVisa). BrainK achieves good accuracy with minimal or no user intervention, it deals well with poor-quality MR images and tissue abnormalities, and it provides improved computational efficiency over existing research packages.

  4. A comparative study of deep learning models for medical image classification

    Science.gov (United States)

    Dutta, Suvajit; Manideep, B. C. S.; Rai, Shalva; Vijayarajan, V.

    2017-11-01

    Deep learning (DL) techniques are overtaking traditional neural network approaches for applications involving huge datasets and complex functions that demand increased accuracy with lower time complexity. Neuroscience has already exploited DL techniques and has thus become an inspirational source for researchers exploring machine learning. DL enthusiasts cover the areas of vision, speech recognition, motion planning, and NLP, moving back and forth among fields. The goal is to build models that can successfully solve a variety of tasks requiring intelligence and distributed representation. Access to faster CPUs, the introduction of GPUs performing complex vector and matrix computations, agile network connectivity, and enhanced software infrastructures for distributed computing have all encouraged researchers to adopt DL methodologies. This paper compares DL procedures to traditional approaches, which are performed manually, for classifying medical images. The medical images used for the study are diabetic retinopathy (DR) and computed tomography (CT) emphysema data; diagnosis from both is a difficult task for standard image classification methods. The initial work was carried out with basic image processing along with k-means clustering for identification of image severity levels. After determining image severity levels, an ANN was applied to the data to obtain a baseline classification result, which was then compared with the results of DNNs (deep neural networks); these performed more efficiently because their multiple hidden layers increase accuracy, but the vanishing-gradient problem in DNNs motivated consideration of convolutional neural networks (CNNs) as well. The CNNs were found to provide better outcomes than the other learning models aimed at classification of images.
CNNs are

  5. SU-E-J-234: Application of a Breathing Motion Model to ViewRay Cine MR Images

    International Nuclear Information System (INIS)

    O’Connell, D. P.; Thomas, D. H.; Dou, T. H.; Lamb, J. M.; Yang, L.; Low, D. A.

    2015-01-01

    Purpose: A respiratory motion model previously used to generate breathing-gated CT images was used with cine MR images. Accuracy and predictive ability of the in-plane models were evaluated. Methods: Sagittal-plane cine MR images of a patient undergoing treatment on a ViewRay MRI/radiotherapy system were acquired before and during treatment. Images were acquired at 4 frames/second with 3.5 × 3.5 mm resolution and a slice thickness of 5 mm. The first cine frame was deformably registered to following frames. The superior/inferior component of the tumor centroid position was used as a breathing surrogate. Deformation vectors and surrogate measurements were used to determine motion model parameters. Model error was evaluated and subsequent treatment cines were predicted from breathing surrogate data. A simulated CT cine was created by generating breathing-gated volumetric images at 0.25-second intervals along the measured breathing trace, selecting a sagittal slice, and downsampling to the resolution of the MR cines. A motion model was built using the first half of the simulated cine data. Model accuracy and error in predicting the remaining frames of the cine were evaluated. Results: The mean difference between model-predicted and deformably registered lung tissue positions for the 28-second preview MR cine acquired before treatment was 0.81 ± 0.30 mm. The model was used to predict two minutes of the subsequent treatment cine with a mean accuracy of 1.59 ± 0.63 mm. Conclusion: In-plane motion models were built using MR cine images and evaluated for accuracy and ability to predict future respiratory motion from breathing surrogate measurements. Examination of long-term predictive ability is ongoing. The technique was applied to simulated CT cines for further validation, and the authors are currently investigating use of in-plane models to update pre-existing volumetric motion models used for generation of breathing-gated CT planning images.

  6. Micro-angiography for neuro-vascular imaging. II. Cascade model analysis

    International Nuclear Information System (INIS)

    Ganguly, Arundhuti; Rudin, Stephen; Bednarek, Daniel R.; Hoffmann, Kenneth R.

    2003-01-01

    A micro-angiographic detector was designed and its performance was previously tested to evaluate its feasibility as an improvement over current x-ray detectors for neuro-interventional imaging. The detector was shown to have a modulation transfer function value of about 2% at the Nyquist frequency of 10 cycles/mm and a zero-frequency detective quantum efficiency [DQE(0)] value of about 55%. An assessment of the system was required to evaluate whether the current system was performing at its full potential and to determine if any of its components could be optimized to further improve the output. For this purpose, in this study, the parallel cascade theory was used to analyze the performance of the detector under neuro-angiographic conditions by studying the output at the various stages in the imaging chain. A simple model for the spread of light in the CsI(Tl) entrance phosphor was developed and the resolution degradation due to K-fluorescence absorption was calculated. The total gain of the system was found to result in 21 e⁻ (rms) detected at the charge-coupled device per absorbed x-ray photon. The gain and the spread of quanta in the imaging chain were used to calculate the DQE theoretically using the parallel cascade model. The results of the model-based calculations matched fairly well with the experimental data previously obtained. This model was then used to optimize the phosphor thickness for the detector. The results showed that the area under the DQE curve had a maximum value at 150 μm of CsI(Tl), though when weighted by the squared signal in frequency space of a 100-μm-diameter iodinated vessel, the integral DQE reached a maximum at 250 μm of CsI(Tl). Further, possible locations for gain increase in the imaging chain were determined, and the output of the improved system was simulated. Thus a theoretical analysis of the micro-angiographic detector was performed to better assess its potential.

  7. Edge Sharpness Assessment by Parametric Modeling: Application to Magnetic Resonance Imaging.

    Science.gov (United States)

    Ahmad, R; Ding, Y; Simonetti, O P

    2015-05-01

    In biomedical imaging, edge sharpness is an important yet often overlooked image quality metric. In this work, a semi-automatic method to quantify edge sharpness in the presence of significant noise is presented with application to magnetic resonance imaging (MRI). The method is based on parametric modeling of image edges. First, an edge map is automatically generated and one or more edges-of-interest (EOI) are manually selected using graphical user interface. Multiple exclusion criteria are then enforced to eliminate edge pixels that are potentially not suitable for sharpness assessment. Second, at each pixel of the EOI, an image intensity profile is read along a small line segment that runs locally normal to the EOI. Third, the profiles corresponding to all EOI pixels are individually fitted with a sigmoid function characterized by four parameters, including one that represents edge sharpness. Last, the distribution of the sharpness parameter is used to quantify edge sharpness. For validation, the method is applied to simulated data as well as MRI data from both phantom imaging and cine imaging experiments. This method allows for fast, quantitative evaluation of edge sharpness even in images with poor signal-to-noise ratio. Although the utility of this method is demonstrated for MRI, it can be adapted for other medical imaging applications.
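    The core fitting step, a four-parameter sigmoid fit to an intensity profile read normal to the edge, might look like this (the exact parameterization below is a plausible choice, not taken from the paper):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(x, base, height, center, width):
        # "width" plays the role of the sharpness parameter: smaller = sharper edge.
        return base + height / (1.0 + np.exp(-(x - center) / width))

    rng = np.random.default_rng(3)
    x = np.linspace(-10.0, 10.0, 81)                  # profile normal to the edge
    true_profile = sigmoid(x, 50.0, 100.0, 0.5, 1.5)
    profile = true_profile + rng.normal(0.0, 3.0, x.size)   # noisy MRI-like profile

    p0 = [profile.min(), np.ptp(profile), 0.0, 1.0]   # rough initial guess
    popt, _ = curve_fit(sigmoid, x, profile, p0=p0)
    width_est = popt[3]                               # recovered sharpness parameter
    ```

    Fitting a parametric edge model rather than differentiating the profile directly is what makes the assessment robust to noise: the noise averages out over all samples of the profile.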

  8. Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth.

    Science.gov (United States)

    Jiang, Yutong; Sun, Changming; Zhao, Yu; Yang, Li

    2017-05-03

    In order to estimate fog density correctly and to remove fog from foggy images appropriately, a surrogate model for optical depth is presented in this paper. We comprehensively investigate various fog-relevant features and propose a novel feature based on the hue, saturation, and value color space which correlates well with the perception of fog density. We use a surrogate-based method to learn a refined polynomial regression model for optical depth with informative fog-relevant features such as dark channel, saturation-value, and chroma, which are selected on the basis of a sensitivity analysis. Based on the obtained accurate surrogate model for optical depth, an effective method for fog density estimation and image defogging is proposed. The effectiveness of our proposed method is verified quantitatively and qualitatively by experimental results on both synthetic and real-world foggy images.
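    A sketch of the surrogate idea, polynomial regression from fog-relevant features to optical depth (the feature values and ground truth below are synthetic stand-ins for the paper's image-derived quantities):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)

    # Hypothetical fog-relevant features per patch: dark channel, saturation-value
    # difference, chroma (stand-ins for the quantities selected in the paper).
    X = rng.uniform(0.0, 1.0, size=(200, 3))

    # Synthetic "true" optical depth: a smooth nonlinear function of the features.
    t = (0.8 * X[:, 0] + 0.5 * X[:, 1] ** 2 - 0.3 * X[:, 0] * X[:, 2]
         + 0.05 * rng.normal(size=200))

    # Surrogate: second-order polynomial regression from features to optical depth.
    surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surrogate.fit(X, t)
    r2 = surrogate.score(X, t)
    ```

    Once such a surrogate is trained, the estimated optical depth per patch can drive both fog-density scoring and the transmission map used for defogging.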

  9. A Hybrid Probabilistic Model for Unified Collaborative and Content-Based Image Tagging.

    Science.gov (United States)

    Zhou, Ning; Cheung, William K; Qiu, Guoping; Xue, Xiangyang

    2011-07-01

    The increasing availability of large quantities of user contributed images with labels has provided opportunities to develop automatic tools to tag images to facilitate image search and retrieval. In this paper, we present a novel hybrid probabilistic model (HPM) which integrates low-level image features and high-level user provided tags to automatically tag images. For images without any tags, HPM predicts new tags based solely on the low-level image features. For images with user provided tags, HPM jointly exploits both the image features and the tags in a unified probabilistic framework to recommend additional tags to label the images. The HPM framework makes use of the tag-image association matrix (TIAM). However, since the number of images is usually very large and user-provided tags are diverse, TIAM is very sparse, thus making it difficult to reliably estimate tag-to-tag co-occurrence probabilities. We developed a collaborative filtering method based on nonnegative matrix factorization (NMF) for tackling this data sparsity issue. Also, an L1 norm kernel method is used to estimate the correlations between image features and semantic concepts. The effectiveness of the proposed approach has been evaluated using three databases containing 5,000 images with 371 tags, 31,695 images with 5,587 tags, and 269,648 images with 5,018 tags, respectively.
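    The NMF-based smoothing of the sparse tag-image association matrix (TIAM) can be sketched as follows (matrix sizes and rank are illustrative, not from the paper):

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(5)

    # Sparse binary tag-image association matrix (rows: images, cols: tags),
    # generated here from a hidden low-rank structure.
    W_true = rng.random((50, 4))
    H_true = rng.random((4, 30))
    tiam = (W_true @ H_true > 1.5).astype(float)

    # Low-rank NMF smooths the sparse association structure: W @ H gives dense,
    # nonnegative scores usable for estimating tag co-occurrence probabilities.
    model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(tiam)
    H = model.components_
    scores_matrix = W @ H
    ```

    The nonnegativity of the factors is what makes the reconstructed scores interpretable as (unnormalized) association strengths, which is why NMF is a natural choice for collaborative filtering over a sparse TIAM.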

  10. Assessment of the impact of modeling axial compression on PET image reconstruction.

    Science.gov (United States)

    Belzunce, Martin A; Reader, Andrew J

    2017-10-01

To comprehensively evaluate both the acceleration and image-quality impacts of axial compression and its degree of modeling in fully 3D PET image reconstruction. Although axial compression has been used since the very dawn of 3D PET reconstruction, there are still no extensive studies on the impact of axial compression, and of its degree of modeling during reconstruction, on the end-point reconstructed image quality. In this work, the impact of axial compression on image quality is evaluated by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first method models the axial compression in the system matrix, while the second uses an unmatched projector/backprojector, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. For the simulated data, axial compression with span values lower than 7 did not show a decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5-10 percentage points were observed when measured for a hot lesion. For higher span values, the spatial resolution was degraded considerably. However, impressively, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial resolution degradation and obtained contrast values similar to those of the span 1 reconstructions. Such approaches have the same processing times as span 1 reconstructions, but they permit a significant reduction in storage requirements for the fully 3D sinograms. For higher span values, the system has a large condition number and it is therefore difficult to recover accurately the higher
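As a rough illustration of what axial compression does to the data (the data-side operation, not the reconstruction-side modeling the paper studies), oblique sinograms can be grouped by segment and summed. The dict-based michelogram layout below is a simplified assumption for the sketch:

```python
import numpy as np

def compress_span(michelogram, span):
    """Sum sinograms whose ring difference falls in the same span group.

    michelogram: dict mapping (ring1, ring2) -> sinogram (array or scalar).
    Returns a dict keyed by (ring_sum, segment): many ring pairs collapse
    onto one axial plane, which is the storage saving axial compression buys.
    """
    out = {}
    for (r1, r2), sino in michelogram.items():
        seg = int(round((r2 - r1) / span))   # segment index for this obliqueness
        key = (r1 + r2, seg)                 # plane index within the segment
        out[key] = out.get(key, 0) + sino
    return out
```

The total number of counts is preserved; only the axial sampling of ring differences is coarsened, which is why higher spans degrade axial resolution unless the compression is modeled in the system matrix.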

  11. Quantification of root water uptake in soil using X-ray computed tomography and image-based modelling.

    Science.gov (United States)

    Daly, Keith R; Tracy, Saoirse R; Crout, Neil M J; Mairhofer, Stefan; Pridmore, Tony P; Mooney, Sacha J; Roose, Tiina

    2018-01-01

Spatially averaged models of root-soil interactions are often used to calculate plant water uptake. Using a combination of X-ray computed tomography (CT) and image-based modelling, we tested the accuracy of this spatial averaging by directly calculating plant water uptake for young wheat plants in two soil types. The root system was imaged using X-ray CT at 2, 4, 6, 8 and 12 d after transplanting. The roots were segmented using semi-automated root tracking for speed and reproducibility. The segmented geometries were converted to a mesh suitable for the numerical solution of Richards' equation. Richards' equation was parameterized using existing pore scale studies of soil hydraulic properties in the rhizosphere of wheat plants. Image-based modelling allows the spatial distribution of water around the root to be visualized and the fluxes into the root to be calculated. By comparing the results obtained through image-based modelling to spatially averaged models, the impact of root architecture and geometry on water uptake was quantified. We observed that the spatially averaged models performed well in comparison to the image-based models, with <2% difference in uptake. However, the spatial averaging loses important information regarding the spatial distribution of water near the root system. © 2017 John Wiley & Sons Ltd.
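Richards' equation is nonlinear in the hydraulic functions; as a toy sketch of the uptake calculation, a linearized (constant-diffusivity) 1D version with a constant root-surface uptake flux can be stepped explicitly. All parameter values below are illustrative, not the paper's:

```python
import numpy as np

def linearized_richards_1d(theta0, diffusivity=1e-4, uptake_flux=1e-4,
                           dx=0.01, dt=0.1, steps=200):
    """Explicit scheme for d(theta)/dt = D * d2(theta)/dx2 with a constant
    root-uptake flux at x = 0 and a no-flux far boundary.  Treating the
    hydraulic diffusivity D as constant is a strong simplification of the
    nonlinear Richards' equation."""
    theta = theta0.astype(float).copy()
    r = diffusivity * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit violated"
    for _ in range(steps):
        lap = np.empty_like(theta)
        lap[1:-1] = theta[2:] - 2.0 * theta[1:-1] + theta[:-2]
        lap[0] = theta[1] - theta[0]        # reflective (no-flux) boundaries
        lap[-1] = theta[-2] - theta[-1]
        theta += r * lap
        theta[0] -= uptake_flux * dt / dx   # water removed at the root surface
    return theta
```

Even this toy version shows the effect the abstract highlights: the averaged water content changes little, while a depletion zone forms next to the root.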

  12. The design of a new model circuit for image acquisition from nuclear medicine

    International Nuclear Information System (INIS)

    Zhang Nan; Jin Yongjie

    1995-01-01

A new practical model of an image acquisition circuit is given. It can be applied directly to the data acquisition system of a γ camera in nuclear medicine. Its design can also be applied to other image acquisition systems for nuclear events.

  13. Image-Based Models for Specularity Propagation in Diminished Reality.

    Science.gov (United States)

    Said, Souheil Hadj; Tamaazousti, Mohamed; Bartoli, Adrien

    2018-07-01

    The aim of Diminished Reality (DR) is to remove a target object in a live video stream seamlessly. In our approach, the area of the target object is replaced with new texture that blends with the rest of the image. The result is then propagated to the next frames of the video. One of the important stages of this technique is to update the target region with respect to the illumination change. This is a complex and recurrent problem when the viewpoint changes. We show that the state-of-the-art in DR fails in solving this problem, even under simple scenarios. We then use local illumination models to address this problem. According to these models, the variation in illumination only affects the specular component of the image. In the context of DR, the problem is therefore solved by propagating the specularities in the target area. We list a set of structural properties of specularities which we incorporate in two new models for specularity propagation. Our first model includes the same property as the previous approaches, which is the smoothness of illumination variation, but has a different estimation method based on the Thin-Plate Spline. Our second model incorporates more properties of the specularity's shape on planar surfaces. Experimental results on synthetic and real data show that our strategy substantially improves the rendering quality compared to the state-of-the-art in DR.

  14. Dynamic PET and Optical Imaging and Compartment Modeling using a Dual-labeled Cyclic RGD Peptide Probe

    Directory of Open Access Journals (Sweden)

    Lei Zhu, Ning Guo, Quanzheng Li, Ying Ma, Orit Jacboson, Seulki Lee, Hak Soo Choi, James R. Mansfield, Gang Niu, Xiaoyuan Chen

    2012-01-01

Purpose: The aim of this study is to determine whether dynamic optical imaging can provide kinetic parameters comparable to those of dynamic PET imaging using a near-infrared dye/64Cu dual-labeled cyclic RGD peptide. Methods: The integrin αvβ3-binding RGD peptide was conjugated with the macrocyclic chelator 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) for copper labeling and PET imaging, and with a near-infrared dye, ZW-1, for optical imaging. The in vitro biological activity of RGD-C(DOTA)-ZW-1 was characterized by cell staining and a receptor binding assay. Sixty-minute dynamic PET and optical imaging were acquired on a MDA-MB-435 tumor model. A singular value decomposition (SVD) method was applied to compute the dynamic optical signal from the two-dimensional optical projection images. Compartment models were used to quantitatively analyze and compare the dynamic optical and PET data. Results: The dual-labeled probe 64Cu-RGD-C(DOTA)-ZW-1 showed integrin-specific binding in vitro and in vivo. The binding potential (Bp) derived from dynamic optical imaging (1.762 ± 0.020) is comparable to that from dynamic PET (1.752 ± 0.026). Conclusion: The signal un-mixing process using SVD improved the accuracy of kinetic modeling of 2D dynamic optical data. Our results demonstrate that 2D dynamic optical imaging with SVD analysis can achieve quantitative results comparable to those of dynamic PET imaging in preclinical xenograft models.

  15. Dynamic PET and Optical Imaging and Compartment Modeling using a Dual-labeled Cyclic RGD Peptide Probe.

    Science.gov (United States)

    Zhu, Lei; Guo, Ning; Li, Quanzheng; Ma, Ying; Jacboson, Orit; Lee, Seulki; Choi, Hak Soo; Mansfield, James R; Niu, Gang; Chen, Xiaoyuan

    2012-01-01

The aim of this study is to determine whether dynamic optical imaging can provide kinetic parameters comparable to those of dynamic PET imaging using a near-infrared dye/(64)Cu dual-labeled cyclic RGD peptide. The integrin α(v)β(3)-binding RGD peptide was conjugated with the macrocyclic chelator 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) for copper labeling and PET imaging, and with a near-infrared dye, ZW-1, for optical imaging. The in vitro biological activity of RGD-C(DOTA)-ZW-1 was characterized by cell staining and a receptor binding assay. Sixty-minute dynamic PET and optical imaging were acquired on a MDA-MB-435 tumor model. A singular value decomposition (SVD) method was applied to compute the dynamic optical signal from the two-dimensional optical projection images. Compartment models were used to quantitatively analyze and compare the dynamic optical and PET data. The dual-labeled probe (64)Cu-RGD-C(DOTA)-ZW-1 showed integrin-specific binding in vitro and in vivo. The binding potential (Bp) derived from dynamic optical imaging (1.762 ± 0.020) is comparable to that from dynamic PET (1.752 ± 0.026). The signal un-mixing process using SVD improved the accuracy of kinetic modeling of 2D dynamic optical data. Our results demonstrate that 2D dynamic optical imaging with SVD analysis can achieve quantitative results comparable to those of dynamic PET imaging in preclinical xenograft models.
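The SVD un-mixing step, separating a dynamic 2D projection series into a few spatiotemporal components, can be sketched generically as follows; treating leading right singular vectors as temporal signatures and left singular vectors as spatial weight maps is the standard form of the approach, not the authors' exact pipeline:

```python
import numpy as np

def unmix_dynamic(frames, n_components=2):
    """SVD un-mixing of a dynamic series.

    frames: array of shape (T, H, W).  Returns spatial weight maps
    (n_components, H, W), singular values, and temporal signatures
    (n_components, T)."""
    T = frames.shape[0]
    X = frames.reshape(T, -1).T             # (pixels, time)
    X = X - X.mean(axis=1, keepdims=True)   # remove each pixel's static level
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    maps = U[:, :n_components].T.reshape(n_components, *frames.shape[1:])
    return maps, s[:n_components], Vt[:n_components]
```

The unmixed temporal signatures are what would then be fed to the compartment model to estimate parameters such as the binding potential.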

  16. Modeling an Optical and Infrared Search for Extraterrestrial Intelligence Survey with Exoplanet Direct Imaging

    Science.gov (United States)

    Vides, Christina; Macintosh, Bruce; Ruffio, Jean-Baptiste; Nielsen, Eric; Povich, Matthew Samuel

    2018-01-01

Gemini Planet Imager (GPI) is a direct high-contrast imaging instrument coupled to the Gemini South Telescope. Its purpose is to image extrasolar planets around young, nearby stars. As part of an optical and infrared Search for Extraterrestrial Intelligence survey, we modeled GPI's capability to detect an extraterrestrial continuous-wave (CW) laser broadcast within the H-band. Using sensitivities evaluated from actual GPI observations of young target stars, we produced models of the CW laser power, as a function of distance from the star, that could be detected if GPI were to observe nearby (~3-5 pc) planet-hosting G-type stars. We took a variety of transmitters into consideration in producing these modeled values. GPI is known to be sensitive to both pulsed and CW coherent electromagnetic radiation. The results were compared to similar studies, and it was found that these values are competitive with other optical and infrared observations.
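The core of such a sensitivity model is simple: for a contrast-limited imager, a laser is detectable when its flux at the receiver exceeds the achievable contrast times the host star's in-band flux, and for an isotropic broadcast the distance then cancels because both fluxes fall off as 1/(4πd²). The sketch below makes those (strong) assumptions explicit; all numbers in the test are invented for illustration:

```python
import math

def flux_at_distance(power_w, d_m):
    """Flux (W/m^2) at distance d from an isotropic emitter."""
    return power_w / (4.0 * math.pi * d_m ** 2)

def required_laser_power(contrast_limit, stellar_band_luminosity_w,
                         transmitter_gain=1.0):
    """Isotropic-equivalent power needed for the laser's flux to reach
    contrast_limit times the host star's in-band flux at the receiver.
    Distance cancels for a contrast-limited imager; a directed beam with
    gain G needs G times less emitted power."""
    return contrast_limit * stellar_band_luminosity_w / transmitter_gain
```

Varying the transmitter gain is how "a variety of transmitters" enters such a model: a diffraction-limited beamed transmitter can undercut the isotropic requirement by many orders of magnitude.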

  17. Mathematical Foundation Based Inter-Connectivity modelling of Thermal Image processing technique for Fire Protection

    Directory of Open Access Journals (Sweden)

    Sayantan Nath

    2015-09-01

In this paper, integration between multiple image processing functions and their statistical parameters for an intelligent alarm-series-based fire detection system is presented. The inter-connectivity mapping between processing elements of imagery, based on a classification factor for temperature monitoring and a multilevel intelligent alarm sequence, is introduced through an abstract canonical approach. The flow of image processing components between the core implementation of an intelligent alarm system with temperature-wise area segmentation, as well as boundary detection, is not yet fully explored in thermal imaging. In the light of convolutive functionalism in thermal imaging, an abstract-algebra-based inter-mapping model is discussed, connecting event-calculus-supported DAGSVM classification for step-by-step generation of alarm series with gradual monitoring, and segmentation of regions with their affected boundaries in a thermographic image of coal with respect to temperature distinctions. The connectedness of the multifunctional image processing operations of a compatible fire protection system with a proper monitoring sequence is investigated. The core contribution of this study is the set of mathematical models, expressed in partial derivatives, relating the temperature-affected areas to their boundaries in the obtained thermal image. The thermal image of a coal sample was obtained in a real-life scenario with a self-assembled thermographic camera. The amalgamation of area segmentation, boundary detection and alarm series is described in abstract algebra. The principal objective of this paper is to understand the dependency pattern and working principles of the image processing components, and to structure an inter-connected modelling technique for those components with the help of a mathematical foundation.

  18. Numerical modeling of Harmonic Imaging and Pulse Inversion fields

    Science.gov (United States)

    Humphrey, Victor F.; Duncan, Tracy M.; Duck, Francis

    2003-10-01

    Tissue Harmonic Imaging (THI) and Pulse Inversion (PI) Harmonic Imaging exploit the harmonics generated as a result of nonlinear propagation through tissue to improve the performance of imaging systems. A 3D finite difference model, that solves the KZK equation in the frequency domain, is used to investigate the finite amplitude fields produced by rectangular transducers driven with short pulses and their inverses, in water and homogeneous tissue. This enables the characteristic of the fields and the effective PI field to be calculated. The suppression of the fundamental field in PI is monitored, and the suppression of side lobes and a reduction in the effective beamwidth for each field are calculated. In addition, the differences between the pulse and inverse pulse spectra resulting from the use of very short pulses are noted, and the differences in the location of the fundamental and second harmonic spectral peaks observed.
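The cancellation that Pulse Inversion relies on can be demonstrated with a toy quadratic nonlinearity standing in for full KZK propagation: summing the responses to a pulse and its inverse cancels odd-order (fundamental) content and doubles even-order (second-harmonic) content. The waveform parameters below are illustrative:

```python
import numpy as np

def pulse_inversion_spectrum(f0=2e6, fs=50e6, n=256, a=0.2):
    """Fundamental- and second-harmonic-bin magnitudes of the PI sum
    y(x) + y(-x), with y(x) = x + a*x**2 as a toy nonlinearity."""
    t = np.arange(n) / fs
    pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(n)   # short windowed burst
    y_sum = (pulse + a * pulse ** 2) + (-pulse + a * pulse ** 2)
    spec = np.abs(np.fft.rfft(y_sum))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    fund = spec[np.argmin(np.abs(freqs - f0))]
    second = spec[np.argmin(np.abs(freqs - 2 * f0))]
    return fund, second
```

Because y(x) + y(-x) = 2a·x², the summed signal contains only even-order products, which is exactly the fundamental suppression the simulations above monitor.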

  19. Multimodality pH imaging in a mouse dorsal skin fold window chamber model

    Science.gov (United States)

    Leung, Hui Min; Schafer, Rachel; Pagel, Mark M.; Robey, Ian F.; Gmitro, Arthur F.

    2013-03-01

Upregulated expression and activity of membrane H+ ion pumps in cancer cells drives the extracellular pH (pHe) to values lower than normal. Furthermore, dysregulated pH is indicative of the changes in glycolytic metabolism in tumor cells and has been shown to facilitate extracellular tissue remodeling during metastasis. Therefore, measurement of pHe could be a useful cancer biomarker for diagnosis and therapy monitoring. Multimodality in-vivo imaging of pHe in tumorous tissue in a mouse dorsal skin fold window chamber (DSFWC) model is described. A custom-made plastic window chamber structure was developed that is compatible with both optical and MR imaging modalities and provides a model system for continuous study of the same tissue microenvironment on multiple imaging platforms over a 3-week period. For optical imaging of pHe, SNARF-1 carboxylic acid is injected intravenously into a SCID mouse with an implanted tumor. A ratiometric measurement of the fluorescence signal captured on a confocal microscope reveals the pHe of the tissue visible within the window chamber. This imaging method was used in a preliminary study to evaluate sodium bicarbonate as a potential drug treatment to reverse tissue acidosis. For MR imaging of pHe, chemical exchange saturation transfer (CEST) was used as an alternative way of measuring pHe in a DSFWC model. ULTRAVIST®, an FDA-approved x-ray/CT contrast agent, has been shown to have a CEST effect that is pH dependent. A ratiometric analysis of water saturation at 5.6 and 4.2 ppm chemical shift provides a means to estimate the local pHe.
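Both the SNARF-1 and CEST readouts reduce to the same ratiometric recipe: form the ratio of two signals and look it up on a calibration curve. A minimal pixelwise sketch (the calibration values in the test are invented for illustration):

```python
import numpy as np

def ratiometric_ph(img_a, img_b, calib_r, calib_ph):
    """Pixelwise pH from the ratio of two signal channels.

    calib_r, calib_ph: measured calibration ratios and their known pH values.
    pH is looked up by linear interpolation along the calibration curve;
    the ratio cancels concentration and path-length effects."""
    r = img_a / np.maximum(img_b, 1e-9)
    order = np.argsort(np.asarray(calib_r))
    return np.interp(r, np.asarray(calib_r)[order], np.asarray(calib_ph)[order])
```

The same function applies whether the two channels are fluorescence emission bands or CEST water-saturation images at two chemical shifts; only the calibration curve changes.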

  20. Analyzer-based imaging of spinal fusion in an animal model

    International Nuclear Information System (INIS)

    Kelly, M E; Beavis, R C; Allen, L A; Fiorella, David; Schueltke, E; Juurlink, B H; Chapman, L D; Zhong, Z

    2008-01-01

    Analyzer-based imaging (ABI) utilizes synchrotron radiation sources to create collimated monochromatic x-rays. In addition to x-ray absorption, this technique uses refraction and scatter rejection to create images. ABI provides dramatically improved contrast over standard imaging techniques. Twenty-one adult male Wistar rats were divided into four experimental groups to undergo the following interventions: (1) non-injured control, (2) decortication alone, (3) decortication with iliac crest bone grafting and (4) decortication with iliac crest bone grafting and interspinous wiring. Surgical procedures were performed at the L5-6 level. Animals were killed at 2, 4 and 6 weeks after the intervention and the spine muscle blocks were excised. Specimens were assessed for the presence of fusion by (1) manual testing, (2) conventional absorption radiography and (3) ABI. ABI showed no evidence of bone fusion in groups 1 and 2 and showed solid or possibly solid fusion in subjects from groups 3 and 4 at 6 weeks. Metal artifacts were not present in any of the ABI images. Conventional absorption radiographs did not provide diagnostic quality imaging of either the graft material or fusion masses in any of the specimens in any of the groups. Synchrotron-based ABI represents a novel imaging technique which can be used to assess spinal fusion in a small animal model. ABI produces superior image quality when compared to conventional radiographs

  1. Analyzer-based imaging of spinal fusion in an animal model

    Science.gov (United States)

    Kelly, M. E.; Beavis, R. C.; Fiorella, David; Schültke, E.; Allen, L. A.; Juurlink, B. H.; Zhong, Z.; Chapman, L. D.

    2008-05-01

    Analyzer-based imaging (ABI) utilizes synchrotron radiation sources to create collimated monochromatic x-rays. In addition to x-ray absorption, this technique uses refraction and scatter rejection to create images. ABI provides dramatically improved contrast over standard imaging techniques. Twenty-one adult male Wistar rats were divided into four experimental groups to undergo the following interventions: (1) non-injured control, (2) decortication alone, (3) decortication with iliac crest bone grafting and (4) decortication with iliac crest bone grafting and interspinous wiring. Surgical procedures were performed at the L5-6 level. Animals were killed at 2, 4 and 6 weeks after the intervention and the spine muscle blocks were excised. Specimens were assessed for the presence of fusion by (1) manual testing, (2) conventional absorption radiography and (3) ABI. ABI showed no evidence of bone fusion in groups 1 and 2 and showed solid or possibly solid fusion in subjects from groups 3 and 4 at 6 weeks. Metal artifacts were not present in any of the ABI images. Conventional absorption radiographs did not provide diagnostic quality imaging of either the graft material or fusion masses in any of the specimens in any of the groups. Synchrotron-based ABI represents a novel imaging technique which can be used to assess spinal fusion in a small animal model. ABI produces superior image quality when compared to conventional radiographs.

  2. Elastic models application for thorax image registration

    International Nuclear Information System (INIS)

    Correa Prado, Lorena S; Diaz, E Andres Valdez; Romo, Raul

    2007-01-01

This work consists of the implementation and evaluation of elastic alignment algorithms for biomedical images, taken at thorax level and simulated with the 4D NCAT digital phantom. Radial basis function (RBF) spatial transformations, a kind of spline that allows carrying out not only global rigid deformations but also local elastic ones, were applied using a point-matching method. The applied functions were the thin plate spline (TPS), multiquadric (MQ), Gaussian and B-spline, which were evaluated and compared by calculating the target registration error and similarity measures between the registered images (the squared sum of intensity differences (SSD) and the correlation coefficient (CC)). To assess the user-incurred error in the point-matching and segmentation tasks, two algorithms were also designed that calculate the fiducial localization error. TPS and MQ were demonstrated to have better performance than the others. It was shown that RBFs represent an adequate model for approximating the deformable behaviour of the thorax. Validation algorithms showed that the user error was not significant.
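The evaluation measures used in this record (SSD, CC, and landmark-based registration error) are straightforward to compute:

```python
import numpy as np

def ssd(a, b):
    """Squared sum of intensity differences between two images."""
    return float(((a - b) ** 2).sum())

def correlation_coefficient(a, b):
    """Pearson correlation between the flattened images."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def target_registration_error(pts_moved, pts_true):
    """Mean Euclidean distance between corresponding landmark points."""
    return float(np.linalg.norm(pts_moved - pts_true, axis=1).mean())
```

SSD rewards exact intensity agreement, while CC is invariant to linear intensity changes, which is why the two are usually reported together.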

  3. Model-based magnetization retrieval from holographic phase images

    Energy Technology Data Exchange (ETDEWEB)

    Röder, Falk, E-mail: f.roeder@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Vogel, Karin [Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Wolf, Daniel [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Hellwig, Olav [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); AG Magnetische Funktionsmaterialien, Institut für Physik, Technische Universität Chemnitz, D-09126 Chemnitz (Germany); HGST, A Western Digital Company, 3403 Yerba Buena Rd., San Jose, CA 95135 (United States); Wee, Sung Hun [HGST, A Western Digital Company, 3403 Yerba Buena Rd., San Jose, CA 95135 (United States); Wicht, Sebastian; Rellinghaus, Bernd [IFW Dresden, Institute for Metallic Materials, P.O. Box 270116, D-01171 Dresden (Germany)

    2017-05-15

The phase shift of the electron wave is a useful measure for the projected magnetic flux density of magnetic objects at the nanometer scale. More important for materials science, however, is the knowledge about the magnetization in a magnetic nano-structure. As demonstrated here, a dominating presence of stray fields prohibits a direct interpretation of the phase in terms of magnetization modulus and direction. We therefore present a model-based approach for retrieving the magnetization by considering the projected shape of the nano-structure and assuming a homogeneous magnetization therein. We apply this method to FePt nano-islands epitaxially grown on a SrTiO{sub 3} substrate, which indicates an inclination of their magnetization direction relative to the structural easy magnetic [001] axis. By means of this real-world example, we discuss prospects and limits of this approach. - Highlights: • Retrieval of the magnetization from holographic phase images. • Magnetostatic model constructed for a magnetic nano-structure. • Decomposition into homogeneously magnetized components. • Discretization of each component by elementary cuboids. • Analytic solution for the phase of a magnetized cuboid. • Fitting a set of magnetization vectors to experimental phase images.

  4. New second order Mumford-Shah model based on Γ-convergence approximation for image processing

    Science.gov (United States)

    Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li

    2016-05-01

In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneous image denoising and segmentation, which combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations and instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, the analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.
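The "analytical generalized soft thresholding equation" referred to in this record is, in its pointwise form, the shrink operator used in split Bregman updates:

```python
import numpy as np

def shrink(x, lam):
    """Generalized soft thresholding: shrink(x, lam) = x/|x| * max(|x|-lam, 0),
    applied elementwise; this is the closed-form minimizer of
    lam*|z| + 0.5*(z - x)**2 appearing in each split Bregman subproblem."""
    mag = np.abs(x)
    return np.where(mag > 0, x / np.maximum(mag, 1e-12), 0.0) * np.maximum(mag - lam, 0.0)
```

Having this subproblem in closed form is what lets split Bregman avoid solving the model's high order nonlinear PDEs directly.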

  5. Modelling the transport of optical photons in scintillation detectors for diagnostic and radiotherapy imaging

    Science.gov (United States)

    Roncali, Emilie; Mosleh-Shirazi, Mohammad Amin; Badano, Aldo

    2017-10-01

    Computational modelling of radiation transport can enhance the understanding of the relative importance of individual processes involved in imaging systems. Modelling is a powerful tool for improving detector designs in ways that are impractical or impossible to achieve through experimental measurements. Modelling of light transport in scintillation detectors used in radiology and radiotherapy imaging that rely on the detection of visible light plays an increasingly important role in detector design. Historically, researchers have invested heavily in modelling the transport of ionizing radiation while light transport is often ignored or coarsely modelled. Due to the complexity of existing light transport simulation tools and the breadth of custom codes developed by users, light transport studies are seldom fully exploited and have not reached their full potential. This topical review aims at providing an overview of the methods employed in freely available and other described optical Monte Carlo packages and analytical models and discussing their respective advantages and limitations. In particular, applications of optical transport modelling in nuclear medicine, diagnostic and radiotherapy imaging are described. A discussion on the evolution of these modelling tools into future developments and applications is presented. The authors declare equal leadership and contribution regarding this review.
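A minimal example of the kind of optical Monte Carlo these packages implement: photons random-walk through a scintillator slab with exponential free paths, isotropic scattering, and a survival test at each interaction. The geometry and coefficients below are illustrative, not tied to any particular package:

```python
import numpy as np

def simulate_photons(n, thickness_mm=2.0, mu_scatter=0.5, mu_absorb=0.05, seed=1):
    """Pencil beam starting at depth 0 heading +z through a 1D slab.

    Free path lengths are exponential with the total attenuation coefficient;
    at each interaction the photon is absorbed with probability
    mu_absorb / mu_t, otherwise it scatters isotropically in cos(theta).
    Returns the fraction of photons escaping through the far face."""
    rng = np.random.default_rng(seed)
    mu_t = mu_scatter + mu_absorb
    escaped = 0
    for _ in range(n):
        z, uz = 0.0, 1.0
        while True:
            z += uz * rng.exponential(1.0 / mu_t)
            if z >= thickness_mm:
                escaped += 1
                break
            if z < 0.0:
                break                        # lost through the entrance face
            if rng.random() < mu_absorb / mu_t:
                break                        # absorbed
            uz = rng.uniform(-1.0, 1.0)      # isotropic scattering
    return escaped / n
```

Real packages add 3D geometry, surface finishes, refraction and wavelength dependence, but the sample-path/interaction loop above is the common core.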

  6. Methods for modeling and quantification in functional imaging by positron emissions tomography and magnetic resonance imaging

    International Nuclear Information System (INIS)

    Costes, Nicolas

    2017-01-01

This report presents experience and research in the field of in vivo medical imaging by positron emission tomography (PET) and magnetic resonance imaging (MRI). In particular, advances in terms of reconstruction, quantification and modeling in PET are described. The validation of processing and analysis methods is supported by the creation of data through simulation of the imaging process in PET. The recent advances in combined PET/MRI clinical cameras, allowing simultaneous acquisition of molecular/metabolic PET information and functional/structural MRI information, open the door to unique methodological innovations exploiting the spatial alignment and simultaneity of the PET and MRI signals. This will lead to an increase in accuracy and sensitivity in the measurement of biological phenomena. In this context, the developed projects address new methodological issues related to quantification, and to the respective contributions of MRI or PET information for a reciprocal improvement of the signals of the two modalities. They open perspectives for combined analysis of the two imaging techniques, allowing optimal use of synchronous, anatomical, molecular and functional information for brain imaging. These innovative concepts, as well as data correction and analysis methods, will be easily translated into other areas of investigation using combined PET/MRI. (author)

  7. Comparison of Color Model in Cotton Image Under Conditions of Natural Light

    Science.gov (United States)

    Zhang, J. H.; Kong, F. T.; Wu, J. Z.; Wang, S. W.; Liu, J. J.; Zhao, P.

Although color images contain a large amount of information reflecting species characteristics, different color models capture different information, and the selection of a color model is key to separating crops from the background effectively and rapidly. Taking cotton images collected under natural light as the object, we computed the color components of the RGB, HSL and YIQ color models, and then evaluated the nine converted color components using both subjective and objective methods. Under subjective evaluation, the gray values of the Q component in the soil, straw and plastic-film regions remain consistent, without large fluctuation. In the objective evaluation, we used the variance method, the average-gradient method, the gray-prediction error-statistics method and the information-entropy method, and found the Q color component the most suitable for background segmentation.
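The YIQ conversion behind the Q component is a fixed linear transform of RGB (the NTSC matrix); the sketch below extracts the Q plane that the record finds most stable across soil, straw and film:

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix (rows: Y, I, Q).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.274, -0.322],
    [0.211, -0.523,  0.312],
])

def rgb_to_yiq(img):
    """img: (..., 3) RGB in [0, 1]; returns stacked Y, I, Q components."""
    return img @ RGB_TO_YIQ.T

def q_component(img):
    """Chrominance Q plane, the component evaluated for segmentation above."""
    return rgb_to_yiq(img)[..., 2]
```

Because the I and Q rows each sum to zero, any gray pixel (R = G = B) maps to Q = 0, which is one reason achromatic background materials cluster tightly in the Q plane.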

  8. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.
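The reinterpretation in this record invites a side-by-side look: classic histogram equalization maps gray levels through the normalized cumulative histogram, with no perceptual model at all, and is the special case the proposed algorithm reduces to:

```python
import numpy as np

def equalize(img, levels=256):
    """Classic histogram equalization for an integer-valued image:
    map gray levels through the normalized cumulative histogram so the
    output brightness distribution is approximately flat."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0].min()
    scaled = (cdf - cdf_min) / max(cdf[-1] - cdf_min, 1e-12)
    lut = np.clip(np.round(scaled * (levels - 1)), 0, levels - 1).astype(np.uint8)
    return lut[img]
```

The Heinemann-based method replaces this purely statistical lookup table with one derived from a contrast-discrimination model, which is the generalization the abstract describes.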

  9. Computational model of lightness perception in high dynamic range imaging

    Science.gov (United States)

    Krawczyk, Grzegorz; Myszkowski, Karol; Seidel, Hans-Peter

    2006-02-01

An anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of the human visual system, such as lightness constancy and its spectacular failures, which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the gestalt theorists, are defined by regions of common illumination. The key aspect of image perception is the estimation of lightness within each framework through anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of real-world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception.
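The anchoring step itself is simple once frameworks are identified: within each framework, lightness is luminance relative to the value perceived as white. A one-framework sketch, using a high percentile as the white anchor (an assumption for illustration; the paper's anchor estimation is more involved):

```python
import numpy as np

def anchor_lightness(luminance, percentile=95):
    """Log-luminance relative to the framework's white anchor, here taken
    as a high percentile of the luminance distribution."""
    white = np.percentile(luminance, percentile)
    return np.log10(np.maximum(luminance, 1e-6) / white)
```

In the full model this computation runs per framework and the results are merged into a global lightness estimate, which is what drives the tone mapping operator.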

  10. In Vivo Bioluminescence Imaging for Longitudinal Monitoring of Inflammation in Animal Models of Uveitis.

    Science.gov (United States)

    Gutowski, Michal B; Wilson, Leslie; Van Gelder, Russell N; Pepple, Kathryn L

    2017-03-01

    We develop a quantitative bioluminescence assay for in vivo longitudinal monitoring of inflammation in animal models of uveitis. Three models of experimental uveitis were induced in C57BL/6 albino mice: primed mycobacterial uveitis (PMU), endotoxin-induced uveitis (EIU), and experimental autoimmune uveitis (EAU). Intraperitoneal injection of luminol sodium salt, which emits light when oxidized, provided the bioluminescence substrate. Bioluminescence images were captured by a PerkinElmer In Vivo Imaging System (IVIS) Spectrum and total bioluminescence was analyzed using Living Image software. Bioluminescence on day zero was compared to bioluminescence on the day of peak inflammation for each model. Longitudinal bioluminescence imaging was performed in EIU and EAU. In the presence of luminol, intraocular inflammation generates detectable bioluminescence in three mouse models of uveitis. Peak bioluminescence in inflamed PMU eyes (1.46 × 105 photons/second [p/s]) was significantly increased over baseline (1.47 × 104 p/s, P = 0.01). Peak bioluminescence in inflamed EIU eyes (3.18 × 104 p/s) also was significantly increased over baseline (1.09 × 104 p/s, P = 0.04), and returned to near baseline levels by 48 hours. In EAU, there was a nonsignificant increase in bioluminescence at peak inflammation. In vivo bioluminescence may be used as a noninvasive, quantitative measure of intraocular inflammation in animal models of uveitis. Primed mycobacterial uveitis and EIU are both acute models with robust anterior inflammation and demonstrated significant changes in bioluminescence corresponding with peak inflammation. Experimental autoimmune uveitis is a more indolent posterior uveitis and generated a more modest bioluminescent signal. In vivo imaging system bioluminescence is a nonlethal, quantifiable assay that can be used for monitoring inflammation in animal models of uveitis.
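The quantification used here reduces to summing photons/second over a region of interest and comparing peak to baseline; with the PMU numbers quoted above (1.46 × 10^5 vs 1.47 × 10^4 p/s) that is roughly a ten-fold increase. A minimal sketch:

```python
import numpy as np

def total_flux(img_ps, mask):
    """Sum bioluminescence (photons/second) over a region of interest."""
    return float(img_ps[mask].sum())

def fold_change(peak_img, baseline_img, mask):
    """Peak-over-baseline ratio of ROI flux, the quantity compared across
    models of uveitis above."""
    return total_flux(peak_img, mask) / max(total_flux(baseline_img, mask), 1e-12)
```

In practice the ROI mask would be drawn over the eye in the IVIS analysis software; here it is just a boolean array.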

  11. Multiscale vision model for event detection and reconstruction in two-photon imaging data

    DEFF Research Database (Denmark)

    Brazhe, Alexey; Mathiesen, Claus; Lind, Barbara Lykke

    2014-01-01

    on a modified multiscale vision model, an object detection framework based on the thresholding of wavelet coefficients and hierarchical trees of significant coefficients followed by nonlinear iterative partial object reconstruction, for the analysis of two-photon calcium imaging data. The framework is discussed...... of the multiscale vision model is similar in the denoising, but provides a better segmentation of the image into meaningful objects, whereas other methods need to be combined with dedicated thresholding and segmentation utilities....
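    The core operation the abstract describes, thresholding wavelet detail coefficients and reconstructing, can be sketched in a few lines. This is an illustrative single-level Haar version, not the authors' multiscale vision model:

```python
import numpy as np

def haar2d(x):
    # One level of the orthonormal 2D Haar transform:
    # approximation band plus horizontal/vertical/diagonal detail bands.
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 2
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 2
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 2
    return a, h, v, d

def ihaar2d(a, h, v, d):
    # Exact inverse of haar2d.
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = (a + h + v + d) / 2
    x[0::2, 1::2] = (a - h + v - d) / 2
    x[1::2, 0::2] = (a + h - v - d) / 2
    x[1::2, 1::2] = (a - h - v + d) / 2
    return x

def denoise(img, k=3.0):
    # Keep only "significant" detail coefficients (hard threshold at
    # k times a robust noise estimate), then reconstruct.
    a, h, v, d = haar2d(img)
    sigma = np.median(np.abs(d)) / 0.6745  # MAD-based noise estimate
    thr = k * sigma
    h, v, d = (np.where(np.abs(c) > thr, c, 0.0) for c in (h, v, d))
    return ihaar2d(a, h, v, d)
```

A real multiscale vision model would recurse over several decomposition levels and link significant coefficients into hierarchical trees; this sketch shows only the thresholding/reconstruction core.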

  12. AUTOMATIC TEXTURE RECONSTRUCTION OF 3D CITY MODEL FROM OBLIQUE IMAGES

    Directory of Open Access Journals (Sweden)

    J. Kang

    2016-06-01

    Full Text Available In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure, comprising mesh segmentation and mesh unfolding, is performed to reduce geometric distortion when mapping 2D textures to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a dataset of a city. The resulting mesh model can be textured with the generated texture atlas without resampling. Experimental results show that our method effectively mitigates texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.
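    The blending step can be illustrated with a minimal sketch: per-pixel weighted averaging of candidate textures, with hypothetical weight maps standing in for the visibility and orientation weights the paper derives from image geometry:

```python
import numpy as np

def blend_textures(patches, weights):
    # patches: list of HxWx3 candidate textures for the same region;
    # weights: list of HxW weight maps (e.g. distance-to-seam or
    # viewing-angle weights). Weights are normalized per pixel so the
    # blended color is a convex combination of the candidates.
    w = np.stack(weights).astype(float)                 # (N, H, W)
    w /= np.maximum(w.sum(axis=0, keepdims=True), 1e-12)
    p = np.stack(patches).astype(float)                 # (N, H, W, 3)
    return (p * w[..., None]).sum(axis=0)
```

In practice the weight maps would fall to zero toward each patch border, which is what removes the visible seams between texture regions.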

  13. Fusing range and intensity images for generating dense models of three-dimensional environments

    DEFF Research Database (Denmark)

    Ellekilde, Lars-Peter; Miró, Jaime Valls; Dissanayake., Gamini

    This paper presents a novel strategy for the construction of dense three-dimensional environment models by combining images from a conventional camera and a range imager. Robust data association is first accomplished by exploiting the Scale Invariant Feature Transform (SIFT) technique...
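    SIFT-style data association is typically done by descriptor matching with Lowe's ratio test. A minimal sketch of that matching step (descriptor arrays assumed given; this is not the paper's full pipeline):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    # Match descriptors from image A to image B: accept a pair only if
    # the nearest neighbour in B is clearly closer than the second
    # nearest (Lowe's ratio test), which rejects ambiguous matches.
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dist)[:2]
        if dist[j1] < ratio * dist[j2]:
            matches.append((i, j1))
    return matches
```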

  14. Brain MR image segmentation based on an improved active contour model.

    Directory of Open Access Journals (Sweden)

    Xiangrui Meng

    Full Text Available It is often a difficult task to accurately segment brain magnetic resonance (MR) images with intensity inhomogeneity and noise. This paper introduces a novel level set method for simultaneous brain MR image segmentation and intensity inhomogeneity correction. To reduce the effect of noise, novel anisotropic spatial information, which can preserve more details of edges and corners, is proposed by incorporating the inner relationships among neighboring pixels. Then the proposed energy function uses the multivariate Student's t-distribution to fit the distribution of the intensities of each tissue. Furthermore, the proposed model utilizes hidden Markov random fields to model the spatial correlation between neighboring pixels/voxels. The means of the multivariate Student's t-distribution can be adaptively estimated by multiplying by a bias field to reduce the effect of intensity inhomogeneity. Finally, we reconstructed the energy function to be convex and solved it using the Split Bregman method, which allows our framework to start from a random initialization, thereby enabling fully automated application. Our method obtains the final result in less than 1 second for a 2D image of size 256 × 256 and in less than 300 seconds for a 3D image of size 256 × 256 × 171. The proposed method was compared to other state-of-the-art segmentation methods using both synthetic and clinical brain MR images and improved the accuracy of the results by more than 3%.
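    The multivariate Student's t-distribution used here to model tissue intensities has a closed-form log-density. A minimal sketch of the generic formula (not the paper's full energy function):

```python
import numpy as np
from math import lgamma, log, pi

def mvt_logpdf(x, mu, sigma, nu):
    # Log-density of a d-variate Student's t with location mu, scale
    # matrix sigma and nu degrees of freedom. Its tails are heavier than
    # a Gaussian's, so intensity outliers shift the tissue means less.
    d = len(mu)
    diff = np.asarray(x, float) - np.asarray(mu, float)
    m = diff @ np.linalg.solve(sigma, diff)   # squared Mahalanobis distance
    _, logdet = np.linalg.slogdet(sigma)
    return (lgamma((nu + d) / 2) - lgamma(nu / 2)
            - 0.5 * d * log(nu * pi) - 0.5 * logdet
            - 0.5 * (nu + d) * log(1 + m / nu))
```

With nu = 1 and unit scale in one dimension this reduces to the standard Cauchy density, which gives a quick correctness check.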

  15. Finding regions of interest in pathological images: an attentional model approach

    Science.gov (United States)

    Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo

    2009-02-01

    This paper introduces an automated method for finding diagnostic regions of interest (RoIs) in histopathological images. The method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists of a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions. The pathologist's cognitive performance depends on inherent image visual cues (bottom-up information) and on acquired clinical medicine knowledge (top-down mechanisms). Our pathologist's visual attention model integrates these two components. The selected bottom-up information includes local low-level features such as intensity, color, orientation and texture information. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm inspired by psychological grouping theories. The algorithm parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the low-level characteristics inside the region. Relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually-weighted evaluation criterion, finding a quality gain of 3 dB when compared to a classical bottom-up model of attention.
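    The integration step (one index per low-level cue inside each region, averaged into a relevancy score, then a binary rule) can be sketched as follows; the region labels and normalized feature maps are assumed inputs from earlier stages:

```python
import numpy as np

def region_relevancy(feature_maps, labels, threshold=0.5):
    # feature_maps: dict of cue name -> HxW map scaled to [0, 1]
    # (e.g. intensity, color, orientation, texture responses);
    # labels: HxW integer oversegmentation of the image.
    # Returns a relevancy score per region and the regions flagged
    # as interesting by a simple threshold rule.
    relevancy = {}
    for r in np.unique(labels):
        mask = labels == r
        idx = [fm[mask].mean() for fm in feature_maps.values()]  # one index per cue
        relevancy[int(r)] = float(np.mean(idx))                  # simple average
    interesting = {r for r, v in relevancy.items() if v >= threshold}
    return relevancy, interesting
```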

  16. Validation of an imageable surgical resection animal model of Glioblastoma (GBM).

    Science.gov (United States)

    Sweeney, Kieron J; Jarzabek, Monika A; Dicker, Patrick; O'Brien, Donncha F; Callanan, John J; Byrne, Annette T; Prehn, Jochen H M

    2014-08-15

    Glioblastoma (GBM) is the most common and malignant primary brain tumour, with a median survival of just 12-18 months following standard therapy protocols. Local recurrence after resection and adjuvant therapy occurs in most cases. U87MG-luc2-bearing GBM xenografts underwent 4.5 mm craniectomy and tumour resection using microsurgical techniques. The cranial defect was repaired using a novel modified cranial window technique consisting of a circular microscope coverslip held in place with glue. Immediate post-operative bioluminescence imaging (BLI) revealed a gross total resection rate of 75%. At the censor point 4 weeks post-resection, Kaplan-Meier survival analysis revealed 100% survival in the surgical group compared to 0% in the non-surgical cohort (p=0.01). No neurological defects or infections were observed in the surgical group. GBM recurrence was reliably imaged using facile non-invasive optical bioluminescence imaging, with recurrence observed at week 4. For the first time, we have used a novel cranial defect repair method to extend and improve intracranial surgical resection methods for application in translational GBM rodent disease models. Combining BLI and the cranial window technique described herein facilitates non-invasive serial imaging follow-up. Within the current context we have developed a robust methodology for establishing a clinically relevant imageable GBM surgical resection model that appropriately mimics GBM recurrence post resection in patients. Copyright © 2014 Elsevier B.V. All rights reserved.
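    The Kaplan-Meier estimate behind the survival comparison can be computed directly. A generic sketch (not the authors' statistical software):

```python
import numpy as np

def kaplan_meier(times, events):
    # times: follow-up time per animal; events: 1 = death observed,
    # 0 = censored (e.g. still alive at the censor point).
    # Returns (event time, survival estimate after that time) pairs.
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk = len(times)
    s, curve = 1.0, []
    for t in np.unique(times):
        d = int(((times == t) & (events == 1)).sum())  # deaths at time t
        if d:
            s *= 1 - d / at_risk                        # product-limit step
            curve.append((float(t), s))
        at_risk -= int((times == t).sum())              # remove deaths + censored
    return curve
```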

  17. In vivo 3-dimensional photoacoustic imaging of the renal vasculature in preclinical rodent models

    OpenAIRE

    Ogunlade, O.; Connell, J. J.; Huang, J. L.; Zhang, E.; Lythgoe, M. F.; Long, D. A.; Beard, P.

    2017-01-01

    Non-invasive imaging of the kidney vasculature in preclinical murine models is important for studying renal development, diseases and evaluating new therapies, but is challenging to achieve using existing imaging modalities. Photoacoustic imaging is a promising new technique that is particularly well suited to visualising the vasculature and could provide an alternative to existing preclinical imaging methods for studying renal vascular anatomy and function. To investigate this, an all-optica...

  18. An Effective Surface Modeling Method for Car Styling from a Side-View Image

    Institute of Scientific and Technical Information of China (English)

    LI Bao-jun; ZHANG Xue-fang; LV Zhang-quan; QI Yi-chao

    2014-01-01

    We introduce an almost-automatic technique for generating 3D car styling surface models from a single side-view image. Our approach combines prior knowledge of car styling with a deformable curve network model to obtain an automatic modeling process. Firstly, we define consistent parameterized curve templates for the 2D and 3D cases respectively by analyzing the characteristic lines of car styling. Then, a semi-automatic extraction from a side-view car image is adopted. Thirdly, a statistical morphable model of the 3D curve network is used to obtain an initial solution from sparse point constraints. With only a few post-processing operations, the optimized curve network models for creating surfaces are obtained. Finally, the styling surfaces are automatically generated using a template-based parametric surface modeling method. More than 50 3D curve network models were constructed as the morphable database. We show that this intelligent modeling tool simplifies the exhausting modeling task, and we demonstrate meaningful results of our approach.
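    The "initial solution with sparse point constraints" step of a statistical morphable model reduces to a small regularized least-squares problem. An illustrative sketch with a hypothetical PCA basis (mean shape and modes would come from the curve network database):

```python
import numpy as np

def fit_morphable(mean, basis, idx, targets, reg=1e-3):
    # mean: (3n,) mean curve network; basis: (3n, k) PCA modes;
    # idx: indices of the few constrained coordinates; targets: their
    # desired values. Ridge regularization keeps the reconstructed
    # shape plausible when the constraints are very sparse.
    A = basis[idx]                                   # basis rows at constraints
    b = np.asarray(targets, float) - mean[idx]
    k = basis.shape[1]
    coeff = np.linalg.solve(A.T @ A + reg * np.eye(k), A.T @ b)
    return mean + basis @ coeff
```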

  19. A 4D global respiratory motion model of the thorax based on CT images: A proof of concept.

    Science.gov (United States)

    Fayad, Hadi; Gilles, Marlene; Pan, Tinsu; Visvikis, Dimitris

    2018-05-17

    Respiratory motion reduces the sensitivity and specificity of medical images, especially in the thoracic and abdominal areas. It may affect applications such as cancer diagnostic imaging and/or radiation therapy (RT). Solutions to this issue include modeling of the respiratory motion in order to optimize both diagnostic and therapeutic protocols. Personalized motion modeling requires patient-specific four-dimensional (4D) imaging, which in the case of 4D computed tomography (4D CT) acquisition is associated with an increased dose. The goal of this work was to develop a global respiratory motion model capable of relating external patient surface motion to internal structure motion without the need for a patient-specific 4D CT acquisition. The proposed global model is based on principal component analysis and can be adjusted to a given patient anatomy using only one or two static CT images in conjunction with respiratory-synchronized patient external surface motion. It is based on the relation between the internal motion, described using deformation fields obtained by registering 4D CT images, and patient surface maps obtained either from optical imaging devices or extracted from CT image-based patient skin segmentation. 4D CT images of six patients were used to generate the global motion model, which was validated by adapting it to four different patients with skin-segmented surfaces and two other patients with time-of-flight camera-acquired surfaces. The reproducibility of the proposed model was also assessed on two patients with two 4D CT series acquired within 2 weeks of each other. Profile comparison shows the efficacy of the global respiratory motion model and an improvement when using two CT images to adapt the model. This was confirmed by the correlation coefficient, with a mean correlation of 0.9 and 0.95 when using one or two CT images respectively, comparing acquired to model-generated 4D CT images.
For the four patients with segmented
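    The core idea, relating external surface motion to PCA-compressed internal deformation, can be sketched generically; the synthetic matrices below stand in for registered 4D CT deformation fields and surface maps:

```python
import numpy as np

def build_motion_model(deformations, surfaces):
    # deformations: (m, p) flattened deformation fields from m breathing
    # phases; surfaces: (m, q) matching external-surface samples.
    # PCA compresses the internal motion into a few modes; a linear map
    # from surface measurements to mode weights then lets a new surface
    # observation predict the internal deformation.
    d_mean = deformations.mean(0)
    _, _, Vt = np.linalg.svd(deformations - d_mean, full_matrices=False)
    comps = Vt[:2]                                   # leading motion modes
    weights = (deformations - d_mean) @ comps.T      # (m, 2) mode weights
    s_mean = surfaces.mean(0)
    W, *_ = np.linalg.lstsq(surfaces - s_mean, weights, rcond=None)

    def predict(surface):
        return d_mean + ((surface - s_mean) @ W) @ comps
    return predict
```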

  20. TU-G-303-00: Radiomics: Advances in the Use of Quantitative Imaging Used for Predictive Modeling

    International Nuclear Information System (INIS)

    2015-01-01

    ‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding
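    One of the basic radiomic texture features referred to in the learning objectives, grey-level co-occurrence matrix (GLCM) contrast, can be computed in a few lines. A minimal horizontal-offset sketch, not a full radiomics toolkit:

```python
import numpy as np

def glcm_contrast(img, levels=8):
    # Quantize the image to a few grey levels, count horizontal
    # neighbour co-occurrences, normalize to a joint probability, and
    # derive the Haralick "contrast" feature sum_ij (i - j)^2 p(i, j).
    q = np.minimum((img.astype(float) / img.max() * levels).astype(int),
                   levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())
```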

  2. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire; Burini, Nino

    2013-01-01

    Objective image and video quality metrics focus mostly on the digital representation of the signal. However, the display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for Liquid Crystal Display (LCD......) and show how the modeled image can be used as an input to quality assessment algorithms. For quality assessment, we propose an image quality metric, based on Peak Signal-to-Noise Ratio (PSNR) computation in the CIE L*a*b* color space. The metric takes luminance reduction, color distortion and loss...
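    PSNR in CIE L*a*b* follows directly from the standard PSNR definition applied to Lab-space images. A sketch assuming the inputs are already converted to Lab, with the peak taken as the L* range of 100 (an assumption, not necessarily the paper's choice):

```python
import numpy as np

def psnr_lab(ref_lab, test_lab, peak=100.0):
    # PSNR computed directly on HxWx3 CIE L*a*b* images, so the error
    # weights luminance and colour in a perceptually more uniform space
    # than RGB. Identical images give infinite PSNR.
    diff = np.asarray(ref_lab, float) - np.asarray(test_lab, float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(peak ** 2 / mse)
```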

  3. Specification and design of a Therapy Imaging and Model Management System (TIMMS)

    Science.gov (United States)

    Lemke, Heinz U.; Berliner, Leonard

    2007-03-01

    Appropriate use of Information and Communication Technology (ICT) and Mechatronic (MT) systems is considered by many experts a significant contribution to improving workflow and quality of care in the Operating Room (OR). This will require a suitable IT infrastructure as well as communication and interface standards, such as DICOM and suitable extensions, to allow data interchange between surgical system components in the OR. A conceptual design of such an infrastructure, i.e. a Therapy Imaging and Model Management System (TIMMS), is introduced in this paper. A TIMMS should support the essential functions that enable and advance image-guided and, in particular, patient-model-guided therapy. Within this concept, the image-centric world view of classical PACS technology is complemented by an IT model-centric world view. Such a view is founded on the special modelling needs of an increasing number of modern surgical interventions, as compared to the imaging-intensive working mode of diagnostic radiology, for which PACS was originally conceptualised and developed. A proper design of a TIMMS, taking into account modern software engineering principles such as service-oriented architecture, will clarify the right position of interfaces and relevant standards for a Surgical Assist System (SAS) in general and for its components specifically. Such a system needs to be designed to provide a highly modular structure. Modules may be defined at different granularity levels. A first list of components (e.g. high- and low-level modules) comprising engines and repositories of an SAS, which should be integrated by a TIMMS, is introduced in this paper.

  4. Active vision and image/video understanding with decision structures based on the network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. The human brain is found to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes like clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.

  5. Multi-component fiber track modelling of diffusion-weighted magnetic resonance imaging data

    Directory of Open Access Journals (Sweden)

    Yasser M. Kadah

    2010-01-01

    Full Text Available In conventional diffusion tensor imaging (DTI based on magnetic resonance data, each voxel is assumed to contain a single component having diffusion properties that can be fully represented by a single tensor. Even though this assumption can be valid in some cases, the general case involves the mixing of components, resulting in significant deviation from the single tensor model. Hence, a strategy that allows the decomposition of data based on a mixture model has the potential of enhancing the diagnostic value of DTI. This project aims to work towards the development and experimental verification of a robust method for solving the problem of multi-component modelling of diffusion tensor imaging data. The new method demonstrates significant error reduction from the single-component model while maintaining practicality for clinical applications, obtaining more accurate fiber tracking results.
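    The single-tensor baseline that the paper generalizes is commonly fitted by log-linear least squares from the Stejskal-Tanner relation S = S0 · exp(-b gᵀDg). A sketch of that baseline fit (not the authors' multi-component method):

```python
import numpy as np

def fit_tensor(signals, bvals, bvecs, s0):
    # Log-linear least-squares fit of the single diffusion tensor model.
    # bvecs: (m, 3) unit gradient directions; the six unknowns are
    # (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz). Returns the symmetric 3x3 tensor.
    g = np.asarray(bvecs, float)
    A = np.column_stack([g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,
                         2 * g[:, 0] * g[:, 1],
                         2 * g[:, 0] * g[:, 2],
                         2 * g[:, 1] * g[:, 2]]) * np.asarray(bvals)[:, None]
    y = -np.log(np.asarray(signals, float) / s0)   # b * g^T D g
    d, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])
```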

  6. Iterative model reconstruction: Improved image quality of low-tube-voltage prospective ECG-gated coronary CT angiography images at 256-slice CT

    Energy Technology Data Exchange (ETDEWEB)

    Oda, Seitaro, E-mail: seisei0430@nifty.com [Department of Cardiology, MedStar Washington Hospital Center, 110 Irving Street, NW, Washington, DC 20010 (United States); Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University, 1-1-1 Honjyo, Chuo-ku, Kumamoto, 860-8556 (Japan); Weissman, Gaby, E-mail: Gaby.Weissman@medstar.net [Department of Cardiology, MedStar Washington Hospital Center, 110 Irving Street, NW, Washington, DC 20010 (United States); Vembar, Mani, E-mail: mani.vembar@philips.com [CT Clinical Science, Philips Healthcare, c595 Miner Road, Cleveland, OH 44143 (United States); Weigold, Wm. Guy, E-mail: Guy.Weigold@MedStar.net [Department of Cardiology, MedStar Washington Hospital Center, 110 Irving Street, NW, Washington, DC 20010 (United States)

    2014-08-15

    Objectives: To investigate the effects of a new model-based type of iterative reconstruction (M-IR) technique, the iterative model reconstruction, on image quality of prospectively gated coronary CT angiography (CTA) acquired at low tube voltage. Methods: Thirty patients (16 men, 14 women; mean age 52.2 ± 13.2 years) underwent coronary CTA at 100 kVp on a 256-slice CT. Paired image sets were created using 3 types of reconstruction, i.e. filtered back projection (FBP), a hybrid type of iterative reconstruction (H-IR), and M-IR. Quantitative parameters including CT attenuation, image noise, and contrast-to-noise ratio (CNR) were measured. The visual image quality, i.e. graininess, beam-hardening, vessel sharpness, and overall image quality, was scored on a 5-point scale. Lastly, coronary artery segments were evaluated using a 4-point scale to investigate the assessability of each segment. Results: There was no significant difference in coronary arterial CT attenuation among the 3 reconstruction methods. The mean image noise of FBP, H-IR, and M-IR images was 29.3 ± 9.6, 19.3 ± 6.9, and 12.9 ± 3.3 HU, respectively; there were significant differences for all comparison combinations among the 3 methods (p < 0.01). The CNR of M-IR was significantly better than that of FBP and H-IR images (13.5 ± 5.0 [FBP], 20.9 ± 8.9 [H-IR] and 39.3 ± 13.9 [M-IR]; p < 0.01). The visual scores were significantly higher for M-IR than for the other images (p < 0.01), and 95.3% of the coronary segments imaged with M-IR were of assessable quality compared with 76.7% of FBP and 86.9% of H-IR images. Conclusions: M-IR can provide significantly improved qualitative and quantitative image quality in prospectively gated coronary CTA using a low tube voltage.
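    The contrast-to-noise ratio reported here is a simple ROI statistic. A sketch of the usual definition (vessel-minus-background attenuation over the background noise), with hypothetical ROI values:

```python
import numpy as np

def contrast_to_noise(roi_vessel, roi_background):
    # CNR as commonly used for coronary CTA: attenuation difference
    # between the vessel lumen ROI and a background tissue ROI, divided
    # by the image noise (standard deviation of the background, in HU).
    vessel = np.asarray(roi_vessel, float)
    bg = np.asarray(roi_background, float)
    return (vessel.mean() - bg.mean()) / bg.std()
```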

  7. Poisson-Gaussian Noise Reduction Using the Hidden Markov Model in Contourlet Domain for Fluorescence Microscopy Images

    Science.gov (United States)

    Yang, Sejung; Lee, Byung-Uk

    2015-01-01

    In certain image acquisition processes, such as in fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models with a framework that neatly describes the spatial and interscale dependencies which are the properties of transformation coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm by cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach. PMID:26352138
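    The Poisson-Gaussian observation model can be simulated directly, which is useful when validating such denoisers. A sketch with hypothetical gain and read-noise parameters; at intensity μ the output variance is μ/photons + read_sigma², i.e. signal-dependent plus signal-independent:

```python
import numpy as np

def add_poisson_gaussian(img, photons=50.0, read_sigma=0.05, rng=None):
    # Signal-dependent Poisson photon noise (scaled so `photons` counts
    # correspond to intensity 1.0) plus signal-independent Gaussian
    # read-out noise in intensity units.
    rng = np.random.default_rng(rng)
    arr = np.asarray(img, float)
    counts = rng.poisson(arr * photons)
    return counts / photons + rng.normal(0.0, read_sigma, arr.shape)
```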

  8. WEIBULL MULTIPLICATIVE MODEL AND MACHINE LEARNING MODELS FOR FULL-AUTOMATIC DARK-SPOT DETECTION FROM SAR IMAGES

    Directory of Open Access Journals (Sweden)

    A. Taravat

    2013-09-01

    Full Text Available As a major aspect of marine pollution, oil release into the sea has serious biological and environmental impacts. Among remote sensing systems (which offer a non-destructive investigation method), synthetic aperture radar (SAR) can provide valuable synoptic information about the position and size of an oil spill due to its wide area coverage and day/night, all-weather capabilities. In this paper we present a new automated method for oil-spill monitoring. The new approach is based on the combination of the Weibull Multiplicative Model and machine learning techniques to differentiate between dark spots and the background. First, the filter created based on the Weibull Multiplicative Model is applied to each sub-image. Second, the sub-image is segmented by two different neural network techniques (Pulsed Coupled Neural Networks and Multilayer Perceptron Neural Networks). As the last step, a very simple filtering process is used to eliminate the false targets. The proposed approaches were tested on 20 ENVISAT and ERS2 images which contained dark spots. The same parameters were used in all tests. For the overall dataset, average accuracies of 94.05 % and 95.20 % were obtained for the PCNN and MLP methods, respectively. The average computational time for dark-spot detection on a 256 × 256 image is about 4 s for PCNN segmentation using IDL software, which is currently the fastest in this field. Our experimental results demonstrate that the proposed approach is very fast, robust and effective. The proposed approach can be applied to future spaceborne SAR images.
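    A Weibull background model suggests a simple dark-spot candidate rule: flag pixels in the low-amplitude tail of the fitted distribution. An illustrative sketch with the shape and scale parameters assumed already estimated from surrounding sea clutter (the paper's actual filter and neural-network stages differ):

```python
import numpy as np

def dark_spot_mask(amplitude, shape, scale, p=0.05):
    # Weibull inverse CDF: q = scale * (-ln(1 - p))^(1 / shape).
    # Pixels whose backscatter amplitude falls below the p-quantile of
    # the background model become dark-spot candidates, to be cleaned
    # up by a later false-alarm filtering stage.
    q = scale * (-np.log(1.0 - p)) ** (1.0 / shape)
    return np.asarray(amplitude, float) < q
```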

  10. An automatic image-based modelling method applied to forensic infography.

    Directory of Open Access Journals (Sweden)

    Sandra Zancajo-Blazquez

    Full Text Available This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative for modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives, as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated or non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model.

  11. Construction of In Vivo Fluorescent Imaging of Echinococcus granulosus in a Mouse Model.

    Science.gov (United States)

    Wang, Sibo; Yang, Tao; Zhang, Xuyong; Xia, Jie; Guo, Jun; Wang, Xiaoyi; Hou, Jixue; Zhang, Hongwei; Chen, Xueling; Wu, Xiangwei

    2016-06-01

    Human hydatid disease (cystic echinococcosis, CE) is a chronic parasitic infection caused by the larval stage of the cestode Echinococcus granulosus. The disease mainly affects the liver, where approximately 70% of all identified CE cases are detected. Optical molecular imaging (OMI), a noninvasive imaging technique, has never been used in vivo with specific molecular markers of CE. We therefore aimed to construct an in vivo fluorescent imaging mouse model of CE to locate and quantify the parasites within the liver noninvasively. Drug-treated protoscolices were labeled with JC-1 dye and monitored in both in vitro and in vivo studies. This work describes for the first time the successful construction of an in vivo model of E. granulosus in a small living experimental animal, enabling dynamic monitoring and observation at multiple time points over the course of infection. Using this model, we quantified and analyzed labeled protoscolices based on the intensities of their red and green fluorescence. Interestingly, the ratio of red to green fluorescence intensity not only revealed the location of the protoscolices but also indicated their viability in both in vitro and in vivo tests. The noninvasive imaging model proposed in this work will be studied further for long-term detection and observation, and may potentially be widely utilized in susceptibility testing and therapeutic effect evaluation.

  12. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modelling is growing daily for various engineering and non-engineering applications. Three main image-based approaches are generally used to generate virtual 3D city models: sketch-based modelling, procedural-grammar-based modelling, and close-range-photogrammetry-based modelling. The literature shows that, to date, no complete solution is available to create a full 3D city model from images, and these image-based methods have limitations. This paper presents a new approach to image-based virtual 3D city modelling using close-range photogrammetry, divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required, suitable frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close-range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for the addition and merging of other pieces of the larger area, and scaling and alignment of the model were carried out. After texturing and rendering, a final photo-realistic textured 3D model was produced, which can be converted into a walk-through model or a movie. Most of the processing steps are automatic, so the method is cost effective and less laborious, and the accuracy of the resulting model is good. The study area for this work was the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Close-range photogrammetry is particularly valuable because aerial photography is restricted in many countries.

  13. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Science.gov (United States)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. The framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses; although high-dimensional models may be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and contrast its performance with LSB matching. On the BOWS2 image database, HUGO allows the embedder to hide a 7× longer message than LSB matching at the same level of security.
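    The LSB-matching baseline that HUGO is compared against is easy to make concrete: wherever a pixel's least significant bit disagrees with the message bit, the embedder adds or subtracts 1 at random rather than overwriting the bit. The helper below is a minimal illustrative implementation, not the authors' code; names and parameters are assumptions.

```python
import numpy as np

def lsb_match(cover, bits, rng):
    """LSB matching: where a pixel's LSB disagrees with the message bit,
    add or subtract 1 at random (rather than flipping the LSB directly)."""
    stego = cover.astype(np.int16).ravel().copy()
    for i, b in enumerate(bits):
        if (stego[i] & 1) != b:
            step = rng.choice((-1, 1))
            # stay inside the 8-bit range at the extremes
            if stego[i] == 0:
                step = 1
            elif stego[i] == 255:
                step = -1
            stego[i] += step
    return stego.reshape(cover.shape).astype(np.uint8)

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
bits = rng.integers(0, 2, size=64)
stego = lsb_match(cover, bits, rng)
recovered = stego.ravel()[:64] & 1
print((recovered == bits).all())
```

    Because every ±1 change flips the pixel's parity, the message is always recoverable from the LSBs, while the random sign avoids the structural artifact of plain LSB replacement.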

  14. Modified-BRISQUE as no reference image quality assessment for structural MR images.

    Science.gov (United States)

    Chow, Li Sze; Rajagopal, Heshalini

    2017-11-01

    An effective and practical Image Quality Assessment (IQA) model is needed to assess the image quality produced by any new hardware or software in MRI. A highly competitive No-Reference IQA (NR-IQA) model called the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), initially designed for natural images, was modified to evaluate structural MR images. The BRISQUE model measures image quality using locally normalized luminance coefficients, from which the image features are calculated. The modified-BRISQUE model trained a new regression model using MR image features and Difference Mean Opinion Scores (DMOS) from 775 MR images. Two types of benchmarks, objective and subjective assessments, were used as performance evaluators for both the original and modified BRISQUE models. The modified-BRISQUE model correlated highly with both benchmarks, more strongly than the original BRISQUE did, with a significant percentage improvement in the correlation values; the difference was statistically significant. The modified-BRISQUE model can accurately measure the image quality of MR images and is a practical NR-IQA model for MR images that requires no reference images. Copyright © 2017 Elsevier Inc. All rights reserved.
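    The "locally normalized luminance coefficients" at the core of BRISQUE are mean-subtracted, contrast-normalized (MSCN) values: each pixel has a local Gaussian-weighted mean subtracted and is divided by the local standard deviation plus a stabilizing constant. A minimal sketch (window parameters illustrative, not necessarily those of the modified model):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients,
    the locally normalized luminance used by BRISQUE-style models."""
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma)                 # local mean
    var = gaussian_filter(img * img, sigma) - mu * mu
    std = np.sqrt(np.maximum(var, 0.0))              # local std (clipped)
    return (img - mu) / (std + c)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
coeffs = mscn(img)
print(round(float(coeffs.mean()), 3))
```

    BRISQUE then fits a generalized Gaussian to these coefficients (and to products of neighboring coefficients) and uses the fitted parameters as features for the quality regressor.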

  15. A model of primate visual cortex based on category-specific redundancies in natural images

    Science.gov (United States)

    Malmir, Mohsen; Shiry Ghidary, S.

    2010-12-01

    Neurophysiological and computational studies have proposed that properties of natural images have a prominent role in shaping the selectivity of neurons in the visual cortex. An important property of natural images that has been studied extensively is their inherent redundancy. In this paper, the concept of category-specific redundancies is introduced to describe the complex pattern of dependencies between the responses of linear filters to natural images. It is proposed that structural similarities between images of different object categories result in dependencies between the responses of linear filters at different spatial scales, and that the brain gradually removes these dependencies in successive areas of the ventral visual hierarchy to provide a more efficient representation of its sensory input. The authors propose a model to remove these redundancies and train it on a set of natural images using general learning rules developed to remove dependencies between the responses of neighbouring neurons. Experimental results demonstrate the close resemblance of neuronal selectivity between different layers of the model and their corresponding visual areas.

  16. Stigma models: Testing hypotheses of how images of Nevada are acquired and values are attached to them

    Energy Technology Data Exchange (ETDEWEB)

    Jenkins-Smith, H.C. [New Mexico Univ., Albuquerque, NM (United States)

    1994-12-01

    This report analyzes data from surveys on the effects that images associated with nuclear power and waste (i.e., nuclear images) have on people's preference to vacation in Nevada. The analysis was stimulated by a model of imagery and stigma which assumes that information about a potentially hazardous facility generates signals that elicit negative images about the place in which it is located. Individuals give these images negative values (valences) that lessen their desire to vacation, relocate, or retire in that place. The model has been used to argue that the proposed Yucca Mountain high-level nuclear waste repository could elicit images of nuclear waste that would stigmatize Nevada and thus impose substantial economic losses there. This report proposes a revised model that assumes that the acquisition and valuation of images depend on individuals' ideological and cultural predispositions and that the ways in which new images will affect their preferences and behavior partly depend on these predispositions. The report tests these hypotheses: (1) individuals with distinct cultural and ideological predispositions have different propensities for acquiring nuclear images, (2) these people attach different valences to these images, (3) the variations in these valences are important, and (4) the valences of the different categories of images within an individual's image sets for a place correlate very well. The analysis largely confirms these hypotheses, indicating that the stigma model should be revised to (1) consider the relevant ideological and cultural predispositions of the people who will potentially acquire and attach value to the image, (2) specify the kinds of images that previously attracted people to the host state, and (3) consider interactions between the old and potential new images of the place. 37 refs., 18 figs., 17 tabs.

  17. Stigma models: Testing hypotheses of how images of Nevada are acquired and values are attached to them

    International Nuclear Information System (INIS)

    Jenkins-Smith, H.C.

    1994-12-01

    This report analyzes data from surveys on the effects that images associated with nuclear power and waste (i.e., nuclear images) have on people's preference to vacation in Nevada. The analysis was stimulated by a model of imagery and stigma which assumes that information about a potentially hazardous facility generates signals that elicit negative images about the place in which it is located. Individuals give these images negative values (valences) that lessen their desire to vacation, relocate, or retire in that place. The model has been used to argue that the proposed Yucca Mountain high-level nuclear waste repository could elicit images of nuclear waste that would stigmatize Nevada and thus impose substantial economic losses there. This report proposes a revised model that assumes that the acquisition and valuation of images depend on individuals' ideological and cultural predispositions and that the ways in which new images will affect their preferences and behavior partly depend on these predispositions. The report tests these hypotheses: (1) individuals with distinct cultural and ideological predispositions have different propensities for acquiring nuclear images, (2) these people attach different valences to these images, (3) the variations in these valences are important, and (4) the valences of the different categories of images within an individual's image sets for a place correlate very well. The analysis largely confirms these hypotheses, indicating that the stigma model should be revised to (1) consider the relevant ideological and cultural predispositions of the people who will potentially acquire and attach value to the image, (2) specify the kinds of images that previously attracted people to the host state, and (3) consider interactions between the old and potential new images of the place. 37 refs., 18 figs., 17 tabs

  18. Constructing a Computer Model of the Human Eye Based on Tissue Slice Images

    OpenAIRE

    Dai, Peishan; Wang, Boliang; Bao, Chunbo; Ju, Ying

    2010-01-01

    Computer simulation of the biomechanical and biological heat transfer in ophthalmology greatly relies on having a reliable computer model of the human eye. This paper proposes a novel method for the construction of a geometric model of the human eye based on tissue slice images. Slice images were obtained from an in vitro Chinese human eye through an embryo specimen processing method. A level set algorithm was used to extract contour points of eye tissues while a principal component analysi...

  19. Clinical Application of Solid Model Based on Trabecular Tibia Bone CT Images Created by 3D Printer.

    Science.gov (United States)

    Cho, Jaemo; Park, Chan-Soo; Kim, Yeoun-Jae; Kim, Kwang Gi

    2015-07-01

    The aim of this work is to use a 3D solid model to predict the mechanical loads associated with human bone fracture risk under bone disease conditions according to biomechanical engineering parameters. We used specialized image processing tools for image segmentation and three-dimensional (3D) reconstruction to generate the meshes necessary for producing a solid model with a 3D printer from computed tomography (CT) images of the human tibia's trabecular and cortical bones, and we examined defects in the tibia's trabecular bone. Bio-imaging devices (CT and magnetic resonance imaging) can now display and reconstruct 3D anatomical details, and such diagnostics are becoming increasingly vital to the quality of patient treatment planning and clinical care. Furthermore, radiographic images are being used to study biomechanical systems with several aims: to describe and simulate the mechanical behavior of certain anatomical systems, to analyze pathological bone conditions, to study tissue structure and properties, and to create a solid model using a 3D printer to support surgical planning and reduce experimental costs. Research that uses image processing and segmentation techniques to analyze bone structures and produce solid models with a 3D printer is therefore rapidly gaining importance.

  20. RECONSTRUCTION OF HUMAN LUNG MORPHOLOGY MODELS FROM MAGNETIC RESONANCE IMAGES

    Science.gov (United States)

    Reconstruction of Human Lung Morphology Models from Magnetic Resonance Images. T. B. Martonen (Experimental Toxicology Division, U.S. EPA, Research Triangle Park, NC 27709) and K. K. Isaacs (School of Public Health, University of North Carolina, Chapel Hill, NC 27514)

  1. Effects of spatial and spectral frequencies on wide-field functional imaging (wifi) characterization of preclinical breast cancer models

    Science.gov (United States)

    Moy, Austin; Kim, Jae G.; Lee, Eva Y. H. P.; Choi, Bernard

    2010-02-01

    A common strategy to study breast cancer is the use of the preclinical model. These models provide a physiologically relevant and controlled environment in which to study both response to novel treatments and the biology of the cancer. Preclinical models, including the spontaneous tumor model and mammary window chamber model, are very amenable to optical imaging and to this end, we have developed a wide-field functional imaging (WiFI) instrument that is perfectly suited to studying tumor metabolism in preclinical models. WiFI combines two optical imaging modalities, spatial frequency domain imaging (SFDI) and laser speckle imaging (LSI). Our current WiFI imaging protocol consists of multispectral imaging in the near infrared (650-980 nm) spectrum, over a wide (7 cm x 5 cm) field of view. Using SFDI, the spatially-resolved reflectance of sinusoidal patterns projected onto the tissue is assessed, and optical properties of the tissue are determined, which are then used to extract tissue chromophore concentrations in the form of oxy-, deoxy-, and total hemoglobin concentrations, and percentage of lipid and water. In the current study, we employ Monte Carlo simulations of SFDI light propagation in order to characterize the penetration depth of light in both the spontaneous tumor model and mammary window chamber model. Preliminary results suggest that different spatial frequency and wavelength combinations have different penetration depths, suggesting the potential depth sectioning capability of the SFDI component of WiFI.

  2. A Model-Based Approach to Recovering the Structure of a Plant from Images

    KAUST Repository

    Ward, Ben

    2015-03-19

    We present a method for recovering the structure of a plant directly from a small set of widely-spaced images for automated analysis of phenotype. Structure recovery is more complex than shape estimation, but the resulting structure estimate is more closely related to phenotype than is a 3D geometric model. The method we propose is applicable to a wide variety of plants, but is demonstrated on wheat. Wheat is composed of thin elements with few identifiable features, making it difficult to analyse using standard feature matching techniques. Our method instead analyses the structure of plants using only their silhouettes. We employ a generate-and-test method, using a database of manually modelled leaves and a model for their composition to synthesise plausible plant structures which are evaluated against the images. The method is capable of efficiently recovering accurate estimates of plant structure in a wide variety of imaging scenarios, without manual intervention.

  3. A Model-Based Approach to Recovering the Structure of a Plant from Images

    KAUST Repository

    Ward, Ben; Bastian, John; van den Hengel, Anton; Pooley, Daniel; Bari, Rajendra; Berger, Bettina; Tester, Mark A.

    2015-01-01

    We present a method for recovering the structure of a plant directly from a small set of widely-spaced images for automated analysis of phenotype. Structure recovery is more complex than shape estimation, but the resulting structure estimate is more closely related to phenotype than is a 3D geometric model. The method we propose is applicable to a wide variety of plants, but is demonstrated on wheat. Wheat is composed of thin elements with few identifiable features, making it difficult to analyse using standard feature matching techniques. Our method instead analyses the structure of plants using only their silhouettes. We employ a generate-and-test method, using a database of manually modelled leaves and a model for their composition to synthesise plausible plant structures which are evaluated against the images. The method is capable of efficiently recovering accurate estimates of plant structure in a wide variety of imaging scenarios, without manual intervention.

  4. TH-C-18A-06: Combined CT Image Quality and Radiation Dose Monitoring Program Based On Patient Data to Assess Consistency of Clinical Imaging Across Scanner Models

    International Nuclear Information System (INIS)

    Christianson, O; Winslow, J; Samei, E

    2014-01-01

    Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate SSDE. A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9–33% and 15–35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27–45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicated substantial possibility for dose reduction while achieving more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image

  5. TH-C-18A-06: Combined CT Image Quality and Radiation Dose Monitoring Program Based On Patient Data to Assess Consistency of Clinical Imaging Across Scanner Models

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, O; Winslow, J; Samei, E [Duke University Medical Center, Durham, NC (United States)

    2014-06-15

    Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate SSDE. A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9–33% and 15–35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27–45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicated substantial possibility for dose reduction while achieving more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image
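    The SSDE values discussed in these two records are conventionally computed by scaling CTDIvol with a size-dependent conversion factor; a common choice is the exponential fit in effective diameter from AAPM Report 204 (32 cm body phantom), whose published coefficients are used below. The function names and the worked example values are illustrative, not taken from this study.

```python
import math

# Size-specific dose estimate (SSDE) per AAPM Report 204:
# SSDE = CTDIvol * f(D), with f an exponential fit in effective diameter D.
# Coefficients below are the Report-204 fit for the 32 cm body phantom.
A, B = 3.704369, 0.03671937

def effective_diameter(ap_cm, lat_cm):
    """Effective diameter (cm) from AP and lateral patient dimensions."""
    return math.sqrt(ap_cm * lat_cm)

def ssde(ctdi_vol, ap_cm, lat_cm):
    d = effective_diameter(ap_cm, lat_cm)
    return ctdi_vol * A * math.exp(-B * d)

# Example: a 10 mGy CTDIvol scan of a patient with 28 cm AP, 36 cm LAT
print(round(ssde(10.0, 28, 36), 1))
```

    Note how the factor exceeds 1 for patients smaller than the reference phantom: the same CTDIvol deposits a higher dose in a smaller body, which is exactly the size dependence the monitoring program tracks.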

  6. A Bayesian Spatial Model to Predict Disease Status Using Imaging Data From Various Modalities

    Directory of Open Access Journals (Sweden)

    Wenqiong Xue

    2018-03-01

    Full Text Available Relating disease status to imaging data stands to increase the clinical significance of neuroimaging studies. Many neurological and psychiatric disorders involve complex, systems-level alterations that manifest in functional and structural properties of the brain and possibly other clinical and biologic measures. We propose a Bayesian hierarchical model to predict disease status, which is able to incorporate information from both functional and structural brain imaging scans. We consider a two-stage whole brain parcellation, partitioning the brain into 282 subregions, and our model accounts for correlations between voxels from different brain regions defined by the parcellations. Our approach models the imaging data and uses posterior predictive probabilities to perform prediction. The estimates of our model parameters are based on samples drawn from the joint posterior distribution using Markov Chain Monte Carlo (MCMC) methods. We evaluate our method by examining the prediction accuracy rates based on leave-one-out cross validation, and we employ an importance sampling strategy to reduce the computation time. We conduct both whole-brain and voxel-level prediction and identify the brain regions that are highly associated with the disease based on the voxel-level prediction results. We apply our model to multimodal brain imaging data from a study of Parkinson's disease. We achieve extremely high accuracy, in general, and our model identifies key regions contributing to accurate prediction including caudate, putamen, and fusiform gyrus as well as several sensory system regions.

  7. Image Restoration Based on the Hybrid Total-Variation-Type Model

    Directory of Open Access Journals (Sweden)

    Baoli Shi

    2012-01-01

    Full Text Available We propose a hybrid total-variation-type model for the image restoration problem that combines the advantages of the ROF model with those of the LLT model. Since the two L1-norm terms in the proposed model make it difficult to solve directly with classical numerical methods, we first employ the alternating direction method of multipliers (ADMM) to solve a general form of the model. Then, based on the ADMM and Moreau-Yosida decomposition theory, a more efficient method called the proximal point method (PPM) is proposed and its convergence is proved. Numerical results demonstrate the viability and efficiency of the proposed model and methods.
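    The ROF building block of the hybrid model can be illustrated without ADMM: the sketch below minimizes a smoothed ROF energy (total variation plus a quadratic fidelity term) by plain gradient descent, which is far less efficient than the paper's ADMM/PPM solvers but shows what the TV-fidelity trade-off does to a noisy piecewise-constant image. All parameter values are illustrative.

```python
import numpy as np

def rof_denoise(f, lam=1.0, eps=0.1, tau=0.02, n_iter=300):
    """Gradient descent on a smoothed ROF energy:
    min_u  sum |grad u|_eps + (lam/2) * ||u - f||^2,
    with |g|_eps = sqrt(g1^2 + g2^2 + eps^2) to keep TV differentiable."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag              # normalized gradient field
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        u = u + tau * (div - lam * (u - f))      # descend the energy
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                          # piecewise-constant scene
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = rof_denoise(noisy)
err_noisy = np.abs(noisy - clean).mean()
err_denoised = np.abs(denoised - clean).mean()
print(err_denoised < err_noisy)
```

    The hybrid model in the paper adds an LLT-type second-order term to this energy to reduce the staircasing that pure ROF produces in smooth regions.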

  8. Modeling and interpretation of images*

    Directory of Open Access Journals (Sweden)

    Min Michiel

    2015-01-01

    Full Text Available Imaging protoplanetary disks is a challenging but rewarding task. It is challenging because the glare of the central star outshines the weak signal from the disk at shorter wavelengths, and because of the limited spatial resolution at longer wavelengths. It is rewarding because such images contain a wealth of information on the structure of the disks and can directly probe features like gaps and spiral structure. Because it is so challenging, telescopes are often pushed to their limits to get a signal. Proper interpretation of these images therefore requires intimate knowledge of the instrumentation, the detection method, and the image processing steps. In this chapter I give some examples and stress some issues that are important when interpreting images of protoplanetary disks.

  9. Body image concerns in professional fashion models: are they really an at-risk group?

    Science.gov (United States)

    Swami, Viren; Szmigielska, Emilia

    2013-05-15

    Although professional models are thought to be a high-risk group for body image concerns, only a handful of studies have empirically investigated this possibility. The present study sought to overcome this dearth of information by comparing professional models with a matched sample on key indices of body image and appearance-related concerns. A group of 52 professional fashion models was compared with a matched sample of 51 non-models from London, England, on indices of weight discrepancy, body appreciation, social physique anxiety, body dissatisfaction, drive for thinness, internalization of sociocultural messages about appearance, and dysfunctional investment in appearance. Results indicated that professional models evidenced significantly higher drive for thinness and dysfunctional investment in appearance than the control group, but did not differ on the remaining indices. Greater duration of engagement as a professional model was associated with more positive body appreciation but also greater drive for thinness. These results indicate that models, who are already underweight, have a strong desire to maintain their low body mass or become thinner. Taken together, the present results suggest that interventions aimed at promoting healthy body image among fashion models may require different strategies than those aimed at the general population. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. FDTD Modeling of Nano- and Bio-Photonic Imaging

    DEFF Research Database (Denmark)

    Tanev, Stoyan; Tuchin, Valery; Pond, James

    2010-01-01

    In this paper we focus on the discussion of two recent unique applications of the Finite-Difference Time-Domain (FDTD) simulation method to the design and modeling of advanced nano- and bio-photonic problems. The approach adopted here focuses on the potential of the FDTD methodology to address newly emerging problems, and not so much on its mathematical formulation. We first discuss the application of a traditional formulation of the FDTD approach to the modeling of sub-wavelength photonic structures. Next, a modified total/scattered-field FDTD approach is applied to the modeling of biophotonics applications, including Optical Phase Contrast Microscope (OPCM) imaging of cells containing gold nanoparticles (NPs), as well as its potential application as a modality for in vivo flow cytometry configurations.
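    The FDTD method itself reduces, in one dimension and in vacuum, to a leapfrog update of staggered electric and magnetic fields. The toy below is a standard 1D Yee scheme with a soft Gaussian source, meant only to show the update structure; it is not the modified total/scattered-field formulation used in the paper, and all grid and source parameters are illustrative.

```python
import numpy as np

# Minimal 1D FDTD (Yee scheme) in vacuum with a soft Gaussian source.
n_cells, n_steps = 200, 150
ez = np.zeros(n_cells)        # electric field at integer grid points
hy = np.zeros(n_cells - 1)    # magnetic field, staggered half a cell
c = 0.5                       # Courant number (<= 1 for 1D stability)

for t in range(n_steps):
    hy += c * (ez[1:] - ez[:-1])              # update H from curl of E
    ez[1:-1] += c * (hy[1:] - hy[:-1])        # update E from curl of H
    ez[50] += np.exp(-((t - 30) / 10) ** 2)   # soft source at cell 50

# the right-going pulse should by now sit around cell 110
print(round(float(np.abs(ez[95:125]).max()), 3))
```

    The staggering of E and H in both space and time is what gives the scheme its second-order accuracy; the biophotonics simulations in the paper use the same leapfrog core in three dimensions with material dispersion added.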

  11. Improving Sediment Transport Prediction by Assimilating Satellite Images in a Tidal Bay Model of Hong Kong

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2014-03-01

    Full Text Available Numerical models, one of the major tools for sediment dynamics studies in complex coastal waters, are now benefiting from remote sensing images that are easily available as model inputs. The present study explored various methods of integrating remote sensing ocean color data into a numerical model to improve sediment transport prediction in a tide-dominated bay in Hong Kong, Deep Bay. Two sea-surface sediment datasets delineated from satellite images from the Moderate Resolution Imaging Spectroradiometer (MODIS) were assimilated into a coastal ocean model of the bay for one tidal cycle. Validation against in situ measurements showed that the remote sensing sediment information enhanced the model's predictive ability: root mean square errors of the forecast sediment, both at the surface and in the vertical layers, were reduced by at least 36% in the model with satellite sediment assimilation relative to the model without assimilation.

  12. A Stochastic Polygons Model for Glandular Structures in Colon Histology Images.

    Science.gov (United States)

    Sirinukunwattana, Korsuk; Snead, David R J; Rajpoot, Nasir M

    2015-11-01

    In this paper, we present a stochastic model for glandular structures in histology images of tissue slides stained with Hematoxylin and Eosin, choosing colon tissue as an example. The proposed Random Polygons Model (RPM) treats each glandular structure in an image as a polygon made of a random number of vertices, where the vertices represent approximate locations of epithelial nuclei. We formulate the RPM as a Bayesian inference problem by defining a prior for the spatial connectivity and arrangement of neighboring epithelial nuclei and a likelihood for the presence of a glandular structure. The inference is made via a Reversible-Jump Markov chain Monte Carlo simulation. To the best of our knowledge, all existing published algorithms for gland segmentation are designed mainly to work on healthy samples, adenomas, and low-grade adenocarcinomas; at best, one of them has been demonstrated to work on intermediate-grade adenocarcinomas. Our experimental results show that the RPM yields favorable results, both quantitatively and qualitatively, for the extraction of glandular structures in histology images of normal human colon tissues as well as benign and cancerous tissues, excluding undifferentiated carcinomas.

  13. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique that equalizes perceived brightness based on the Heinemann contrast discrimination model. The algorithm modifies and improves upon the previous study by Mokrane in that it has a mathematically proven unique solution and an easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed-luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions with seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization; in fact, the histogram equalization technique is a special case of the proposed mapping.
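
    Since the abstract above names histogram equalization as a special case of the proposed mapping, a minimal sketch of that baseline technique may be useful for comparison (the Heinemann-based mapping itself is not reproduced here):

    ```python
    import numpy as np

    def equalize(img, levels=256):
        """Classic histogram equalization: map each gray level through the
        normalized cumulative distribution of the image histogram."""
        hist = np.bincount(img.ravel(), minlength=levels)
        cdf = np.cumsum(hist).astype(float)
        cdf /= cdf[-1]
        return (cdf[img] * (levels - 1)).astype(np.uint8)

    # Low-contrast test image occupying only gray levels 100-130
    rng = np.random.default_rng(1)
    img = rng.integers(100, 131, size=(64, 64)).astype(np.uint8)
    out = equalize(img)
    print(int(img.max()) - int(img.min()), int(out.max()) - int(out.min()))
    ```

    On this synthetic input, the dynamic range expands from about 30 gray levels to nearly the full 8-bit range.
    
    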

  14. Ultrasound imaging measurement of submerged topography in the muddy water physical model

    International Nuclear Information System (INIS)

    Xiao, Xiongwu; Guo, Bingxuan; Li, Deren; Zhang, Peng; Zang, Yu-fu; Zou, Xianjian; Liu, Jian-chen

    2015-01-01

    The real-time, accurate measurement of submerged topography is vital for the analysis of riverbed erosion and deposition. This paper describes a novel method of measuring submerged topography in the B-scan image obtained using an ultrasound imaging device. Results show that the distribution of gray values in the image exhibits an abrupt transition at the water-sediment interface. This transition can be used to adaptively track the topographic line between riverbed and water, based on the continuity of the topography in the horizontal direction. The extracted topographic lines, of one-pixel width, are processed by a wavelet filtering method. Compared with the actual topography, the measurement accuracy is within 1 mm. The method is suitable for real-time measurement and analysis of all current model topographies, with the advantage of good self-adaptation. In particular, it is visible and intuitive for muddy water in movable-bed model experiments. (paper)
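
    The core idea, locating a sharp gray-value transition column by column in the B-scan, can be sketched as follows. This is a simplified illustration on a noiseless synthetic image; the paper's actual method additionally exploits horizontal continuity and wavelet filtering.

    ```python
    import numpy as np

    def track_topographic_line(bscan):
        """For each column of a B-scan image, locate the row where the gray
        value jumps most sharply (water -> riverbed interface), giving a
        one-pixel-wide topographic line."""
        grad = np.abs(np.diff(bscan.astype(float), axis=0))
        return np.argmax(grad, axis=0)  # row index of the strongest transition, per column

    # Synthetic B-scan: dark water above row r(x), bright bed echo below
    cols = 50
    depth_profile = (30 + 5 * np.sin(np.linspace(0, np.pi, cols))).astype(int)
    bscan = np.zeros((80, cols), dtype=np.uint8)
    for x, r in enumerate(depth_profile):
        bscan[r:, x] = 200
    line = track_topographic_line(bscan)
    print(np.max(np.abs(line - (depth_profile - 1))))  # exact recovery on clean data
    ```
    
    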

  15. The IMAGE model suite used for the OECD Environmental Outlook to 2050

    Energy Technology Data Exchange (ETDEWEB)

    Kram, T.; Stehfest, E.

    2012-03-15

    In the Environmental Outlook to 2050 from the Organisation for Economic Co-operation and Development (OECD), a number of scenarios and projections are used that were calculated with the IMAGE model suite. This document describes the models and modules used and their interconnections.

  16. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-01-01

    Purpose: 3D motion models derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, followed by principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1), the tumor localization errors and intensity differences for the ten patients were smaller using the 4DCT-based motion model, possibly due to its superior image quality. In case 2), the tumor localization error and intensity difference were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT
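
    The PCA motion-model step described above (mean displacement field plus leading modes from phase-to-phase registrations) can be sketched on toy data. The displacement fields here are synthetic rank-1 sinusoidal motion, not real registration output:

    ```python
    import numpy as np

    def build_pca_motion_model(dvfs, n_components=2):
        """PCA motion model from displacement vector fields (one flattened
        DVF per respiratory phase): mean field plus leading eigen-modes."""
        X = np.asarray(dvfs)                     # (n_phases, n_voxels * 3)
        mean = X.mean(axis=0)
        # SVD of the centered data yields the principal motion modes
        _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:n_components]

    def synthesize_dvf(mean, modes, coeffs):
        """Any phase is approximated as mean + sum_k coeff_k * mode_k."""
        return mean + np.asarray(coeffs) @ modes

    # Toy example: 10 phases of sinusoidal motion along a single mode
    t = np.linspace(0, 2 * np.pi, 10, endpoint=False)
    base = np.random.default_rng(2).normal(size=30)
    dvfs = np.outer(np.sin(t), base)             # rank-1 motion
    mean, modes = build_pca_motion_model(dvfs, n_components=1)
    recon = synthesize_dvf(mean, modes, [(dvfs[3] - mean) @ modes[0]])
    print(np.allclose(recon, dvfs[3], atol=1e-8))
    ```

    In the paper's setting, the coefficients would instead be optimized so that simulated cone-beam projections of the synthesized volume match the measured kV projections.
    
    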

  17. Beauty is only photoshop deep: legislating models' BMIs and photoshopping images.

    Science.gov (United States)

    Krawitz, Marilyn

    2014-06-01

    Many women struggle with poor body image and eating disorders due, in part, to images of very thin women and photoshopped bodies in the media and advertisements. In 2013, Israel's Act Limiting Weight in the Modelling Industry, 5772-2012, came into effect. Known as the Photoshop Law, it requires all models in Israel who are over 18 years old to have a body mass index of 18.5 or higher. The Israeli government was the first government in the world to legislate on this issue. Australia has a voluntary Code of Conduct that is similar to the Photoshop Law. This article argues that the Australian government should follow Israel's lead and pass a law similar to the Photoshop Law because the Code is not sufficiently binding.

  18. Theoretical performance model for single image depth from defocus.

    Science.gov (United States)

    Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme

    2014-12-01

    In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence on the performance of the optical parameters of a conventional camera such as the focal length, the aperture, and the position of the in-focus plane (IFP). We derive an approximate analytical expression of the CRB away from the IFP, and we propose an interpretation of the SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.

  19. Contribution to restoration of degraded images by a space-variant system: use of an a priori model of the image

    International Nuclear Information System (INIS)

    Barakat, Valerie

    1998-01-01

    Imaging systems often present shift-variant point spread functions, which are usually approximated by shift-invariant ones in order to simplify the restoration problem. The aim of this thesis is to show that taking this shift-variant degradation into account may strongly increase the quality of restoration. The imaging system is a pinhole, used to acquire images of high-energy beams. Three restoration methods have been studied and compared: Tikhonov-Miller regularization, Markov fields and Maximum Entropy methods. These methods are based on the incorporation of a priori knowledge into the restoration process to achieve stability of the solution. An improved restoration method is proposed: this approach is based on Tikhonov-Miller regularization combined with an a priori model of the solution. The idea of such a model is to express local characteristics to be reconstructed. The concept of parametric models described by a set of parameters (shape of the object, amplitude values, ...) is used. A parametric optimization is used to find the optimal estimate of the parameters, close to the correct a priori information on the expected solution. Several criteria have been proposed to measure the restoration quality. (author) [fr
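
    For reference, the Tikhonov-regularized baseline on which the thesis builds can be sketched in its simplest shift-invariant, Fourier-domain form. The box PSF, test image, and regularization weight below are invented for illustration; the thesis itself treats the harder shift-variant case.

    ```python
    import numpy as np

    def tikhonov_deconvolve(blurred, psf, alpha=1e-2):
        """Tikhonov-regularized deconvolution in the Fourier domain:
        X_hat = conj(H) * Y / (|H|^2 + alpha), where H is the PSF transfer
        function. alpha stabilizes the inversion against noise."""
        H = np.fft.fft2(psf, s=blurred.shape)
        Y = np.fft.fft2(blurred)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + alpha)
        return np.real(np.fft.ifft2(X))

    # Blur a synthetic beam spot with a 3x3 box PSF, then restore it
    img = np.zeros((32, 32)); img[12:20, 12:20] = 1.0
    psf = np.zeros((32, 32)); psf[:3, :3] = 1.0 / 9.0
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
    restored = tikhonov_deconvolve(blurred, psf, alpha=1e-3)
    print(np.mean((blurred - img) ** 2), np.mean((restored - img) ** 2))
    ```
    
    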

  20. Reconstructed image of human heart for total artificial heart implantation, based on MR image and cast silicone model of heart

    International Nuclear Information System (INIS)

    Komoda, Takashi; Maeta, Hajime; Uyama, Chikao.

    1991-01-01

    Based on transverse (TRN) and LV long-axis (LAX) MR images of two cadaver hearts, three-dimensional (3-D) computer models of the connecting interface between the remaining heart and the total artificial heart, i.e., the mitral and tricuspid valvular annuli (MVA and TVA), ascending aorta (Ao) and pulmonary artery (PA), were reconstructed to compare the shape and size of the MVA with those of the TVA, the distance between the centers of the MVA and TVA (D_G), the angle between the plane of the MVA and that of the TVA (R_T), and the angles of the Ao and PA, respectively, to the plane of the MVA (R_A, R_P), with those obtained from cast silicone models. It was found that the MVA and TVA might be more precisely reconstructed from the LAX rather than the TRN MR images. The data obtained from 3-D images of the MVA, TVA, Ao and PA based on silicone models of 32 hearts were as follows: D_G (cm): 4.17±0.43, R_T (degrees): 22.1±11.3, R_A (degrees): 54.9±15.3, R_P (degrees): 30.8±17.1. (author)
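
    The geometric quantities reported above (inter-annular distance D_G, inter-plane angle R_T) reduce to standard vector computations. A minimal sketch with hypothetical annulus geometry, not the study's data:

    ```python
    import numpy as np

    def plane_angle_deg(n1, n2):
        """Angle between two planes, taken as the angle between their unit
        normals (e.g. the mitral vs. tricuspid annulus planes, R_T)."""
        n1 = n1 / np.linalg.norm(n1)
        n2 = n2 / np.linalg.norm(n2)
        return np.degrees(np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))

    def centroid_distance(pts_a, pts_b):
        """Distance between the centroids of two annulus point sets (D_G)."""
        return np.linalg.norm(np.mean(pts_a, axis=0) - np.mean(pts_b, axis=0))

    # Hypothetical annulus planes tilted 22 degrees apart
    theta = np.radians(22.0)
    n_mva = np.array([0.0, 0.0, 1.0])
    n_tva = np.array([np.sin(theta), 0.0, np.cos(theta)])
    print(round(plane_angle_deg(n_mva, n_tva), 1))  # → 22.0

    # Hypothetical annulus ring of 12 points, and a translated copy
    angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
    ring = np.stack([np.cos(angles), np.sin(angles), np.zeros(12)], axis=1)
    shift = np.array([2.0, 0.0, 3.0])
    print(round(centroid_distance(ring, ring + shift), 3))  # → 3.606 (= |shift|)
    ```
    
    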

  1. Comparing planar image quality of rotating slat and parallel hole collimation: influence of system modeling

    International Nuclear Information System (INIS)

    Holen, Roel van; Vandenberghe, Stefaan; Staelens, Steven; Lemahieu, Ignace

    2008-01-01

    The main remaining challenge for a gamma camera is to overcome the existing trade-off between collimator spatial resolution and system sensitivity. This problem, strongly limiting the performance of parallel hole collimated gamma cameras, can be overcome by applying new collimator designs such as rotating slat (RS) collimators which have a much higher photon collection efficiency. The drawback of a RS collimated gamma camera is that, even for obtaining planar images, image reconstruction is needed, resulting in noise accumulation. However, nowadays iterative reconstruction techniques with accurate system modeling can provide better image quality. Because the impact of this modeling on image quality differs from one system to another, an objective assessment of the image quality obtained with a RS collimator is needed in comparison to classical projection images obtained using a parallel hole (PH) collimator. In this paper, a comparative study of image quality, achieved with system modeling, is presented. RS data are reconstructed to planar images using maximum likelihood expectation maximization (MLEM) with an accurate Monte Carlo derived system matrix while PH projections are deconvolved using a Monte Carlo derived point-spread function. Contrast-to-noise characteristics are used to show image quality for cold and hot spots of varying size. Influence of the object size and contrast is investigated using the optimal contrast-to-noise ratio (CNR_o). For a typical phantom setup, results show that cold spot imaging is slightly better for a PH collimator. For hot spot imaging, the CNR_o of the RS images is found to increase with increasing lesion diameter and lesion contrast while it decreases when background dimensions become larger. Only for very large background dimensions in combination with low contrast lesions, the use of a PH collimator could be beneficial for hot spot imaging. In all other cases, the RS collimator scores better. Finally, the simulation of a
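
    The MLEM reconstruction named above is a standard multiplicative update. A minimal sketch on a tiny invented system matrix (the paper's matrix is Monte Carlo derived; this one is random for illustration):

    ```python
    import numpy as np

    def mlem(A, y, n_iters=500):
        """Maximum-likelihood expectation maximization (MLEM) for emission
        data: x <- x * [A^T (y / (A x))] / (A^T 1)."""
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])               # sensitivity image
        for _ in range(n_iters):
            proj = A @ x
            x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
        return x

    # Tiny system: 6 detector bins viewing 4 source voxels
    rng = np.random.default_rng(3)
    A = rng.uniform(0.1, 1.0, size=(6, 4))             # hypothetical system matrix
    x_true = np.array([0.0, 2.0, 5.0, 1.0])
    y = A @ x_true                                     # noiseless projections
    x_hat = mlem(A, y)
    print(np.round(x_hat, 2))
    ```

    The multiplicative form guarantees non-negativity of the estimate, one reason MLEM is favored for emission tomography.
    
    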

  2. Modeling laser speckle imaging of perfusion in the skin (Conference Presentation)

    Science.gov (United States)

    Regan, Caitlin; Hayakawa, Carole K.; Choi, Bernard

    2016-02-01

    Laser speckle imaging (LSI) enables visualization of relative blood flow and perfusion in the skin. It is frequently applied to monitor treatment of vascular malformations such as port wine stain birthmarks, and measure changes in perfusion due to peripheral vascular disease. We developed a computational Monte Carlo simulation of laser speckle contrast imaging to quantify how tissue optical properties, blood vessel depths and speeds, and tissue perfusion affect speckle contrast values originating from coherent excitation. The simulated tissue geometry consisted of multiple layers to simulate the skin, or incorporated an inclusion such as a vessel or tumor at different depths. Our simulation used a 30x30mm uniform flat light source to optically excite the region of interest in our sample to better mimic wide-field imaging. We used our model to simulate how dynamically scattered photons from a buried blood vessel affect speckle contrast at different lateral distances (0-1mm) away from the vessel, and how these speckle contrast changes vary with depth (0-1mm) and flow speed (0-10mm/s). We applied the model to simulate perfusion in the skin, and observed how different optical properties, such as epidermal melanin concentration (1%-50%) affected speckle contrast. We simulated perfusion during a systolic forearm occlusion and found that contrast decreased by 35% (exposure time = 10ms). Monte Carlo simulations of laser speckle contrast give us a tool to quantify what regions of the skin are probed with laser speckle imaging, and measure how the tissue optical properties and blood flow affect the resulting images.
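
    The quantity at the heart of LSI is the local speckle contrast K = sigma/mean, which drops where motion blurs the speckle pattern during the exposure. A minimal sketch on synthetic speckle (fully developed static speckle vs. a temporally averaged "flow" region; all parameters invented):

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def local_speckle_contrast(img, win=7):
        """Spatial speckle contrast K = sigma / mean over a sliding window.
        Lower K indicates more motion blurring, i.e. faster flow."""
        windows = sliding_window_view(img, (win, win))
        mean = windows.mean(axis=(-1, -2))
        std = windows.std(axis=(-1, -2))
        return std / np.maximum(mean, 1e-12)

    rng = np.random.default_rng(4)
    static = rng.exponential(1.0, size=(64, 64))       # fully developed speckle, K ~ 1
    flowing = static.copy()
    # Temporal averaging during the exposure blurs speckle where flow is present
    for _ in range(20):
        flowing = 0.5 * flowing + 0.5 * rng.exponential(1.0, size=(64, 64))
    K_static = local_speckle_contrast(static).mean()
    K_flow = local_speckle_contrast(flowing).mean()
    print(K_static, K_flow)   # contrast drops in the "flow" pattern
    ```
    
    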

  3. A biometric authentication model using hand gesture images.

    Science.gov (United States)

    Fong, Simon; Zhuang, Yan; Fister, Iztok; Fister, Iztok

    2013-10-30

    A novel hand biometric authentication method based on measurements of the user's stationary hand gestures of hand sign language is proposed. The measurements of hand gestures can be sequentially acquired by a low-cost video camera. There may also be another level of contextual information, associated with these hand signs, to be used in biometric authentication. As an analogue, instead of typing a password 'iloveu' in text, which is relatively vulnerable over a communication network, a signer can encode a biometric password using a sequence of hand signs: 'i', 'l', 'o', 'v', 'e', and 'u'. Subsequently, the features from the hand gesture images, which are inherently fuzzy in nature, are extracted and recognized by a classification model that verifies whether the signer is who he claims to be, by examining his hand shape and the postures used in making those signs. It is believed that everybody has certain slight but unique behavioral characteristics in sign language, as well as different hand shape compositions. Simple and efficient image processing algorithms are used in hand sign recognition, including intensity profiling, color histograms and dimensionality analysis, coupled with several popular machine learning algorithms. Computer simulation is conducted to investigate the efficacy of this novel biometric authentication model, which shows up to 93.75% recognition accuracy.

  4. Label fusion based brain MR image segmentation via a latent selective model

    Science.gov (United States)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirements for higher accuracy, faster segmentation, and robustness remain a great challenge. In this paper, we propose a novel algorithm that combines the two branches, using a global weighted fusion strategy based on a patch-based latent selective model, to segment specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function for the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor in segmentation, it is analyzed in the label fusion procedure and treated as an isolated label, giving the background the same standing as the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.
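
    The simplest patch-based label fusion baseline underlying such methods is weighted voting, where each atlas patch votes for its label with a weight decaying in its intensity distance to the target patch. A toy sketch with invented patches and labels (the paper's latent selective model and EM estimation are more elaborate):

    ```python
    import numpy as np

    def weighted_label_fusion(target_patch, atlas_patches, atlas_labels, beta=1.0):
        """Weighted voting: each atlas patch votes for its label with weight
        exp(-beta * ||target - patch||^2); the highest-scoring label wins."""
        dists = np.array([np.sum((target_patch - p) ** 2) for p in atlas_patches])
        w = np.exp(-beta * dists)
        labels = np.unique(atlas_labels)
        scores = {lab: w[np.asarray(atlas_labels) == lab].sum() for lab in labels}
        return max(scores, key=scores.get)

    # Target patch resembles the structure-of-interest patches, not background
    target = np.array([0.9, 0.8, 0.85])
    atlases = [np.array([0.88, 0.82, 0.84]),   # label 1 (structure)
               np.array([0.91, 0.79, 0.86]),   # label 1 (structure)
               np.array([0.10, 0.20, 0.15])]   # label 0 (background)
    print(weighted_label_fusion(target, atlases, [1, 1, 0]))  # → 1
    ```
    
    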

  5. A topo-graph model for indistinct target boundary definition from anatomical images.

    Science.gov (United States)

    Cui, Hui; Wang, Xiuying; Zhou, Jianlong; Gong, Guanzhong; Eberl, Stefan; Yin, Yong; Wang, Lisheng; Feng, Dagan; Fulham, Michael

    2018-06-01

    It can be challenging to delineate the target object in anatomical imaging when the object boundaries are difficult to discern due to low contrast or overlapping intensity distributions from adjacent tissues. We propose a topo-graph model to address this issue. The first step is to extract a topographic representation that reflects multiple levels of topographic information in an input image. We then define two types of node connections: nesting branches (NBs) and geodesic edges (GEs). NBs connect nodes corresponding to initial topographic regions, and GEs link the nodes at a more detailed level. The weights for NBs are defined to measure the similarity of regional appearance, and the weights for GEs are defined with geodesic and local constraints. NBs contribute to the separation of topographic regions, and the GEs assist the delineation of uncertain boundaries. The final segmentation is achieved by calculating the relevance of the unlabeled nodes to the labels through the optimization of a graph-based energy function. We test our model on 47 low-contrast CT studies of patients with non-small cell lung cancer (NSCLC), 10 contrast-enhanced CT liver cases and 50 breast and abdominal ultrasound images. The validation criteria are the Dice similarity coefficient and the Hausdorff distance. Student's t-tests show that our model outperformed graph models with pixel-only, pixel and regional, neighboring and radial connections (p-values <0.05). Our findings show that the topographic representation and topo-graph model provide improved delineation and separation of objects from adjacent tissues compared to the tested models. Copyright © 2018 Elsevier B.V. All rights reserved.
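
    The two validation metrics named above are standard and easy to state precisely. A self-contained sketch on two overlapping 4x4 squares shifted by one pixel:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def hausdorff(pts_a, pts_b):
        """Symmetric Hausdorff distance between two point sets."""
        d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
        return max(d.min(axis=1).max(), d.min(axis=0).max())

    a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True   # 4x4 square
    b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True   # shifted by (1, 1)
    print(round(dice(a, b), 4))                              # → 0.5625
    pa, pb = np.argwhere(a), np.argwhere(b)
    print(round(hausdorff(pa, pb), 4))                       # → 1.4142
    ```
    
    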

  6. Solar resources estimation combining digital terrain models and satellite images techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bosch, J.L.; Batlles, F.J. [Universidad de Almeria, Departamento de Fisica Aplicada, Ctra. Sacramento s/n, 04120-Almeria (Spain); Zarzalejo, L.F. [CIEMAT, Departamento de Energia, Madrid (Spain); Lopez, G. [EPS-Universidad de Huelva, Departamento de Ingenieria Electrica y Termica, Huelva (Spain)

    2010-12-15

    One of the most important steps in making use of any renewable energy is to perform an accurate estimation of the resource to be exploited. In the design process of both active and passive solar energy systems, radiation data are required for the site, with proper spatial resolution. Generally, a network of radiometric stations is used in this evaluation, but when the stations are too dispersed or not available for the study area, satellite images can be utilized as indirect solar radiation measurements. Although satellite images cover wide areas with a good acquisition frequency, they usually have a poor spatial resolution limited by the size of the image pixel, and irradiation must be interpolated to evaluate solar irradiation at a sub-pixel scale. When pixels are located in flat and homogeneous areas, the correlation of solar irradiation is relatively high, and classic interpolation can provide a good estimate. However, in zones of complex topography, data interpolation is not adequate and the use of Digital Terrain Model (DTM) information can be helpful. In this work, daily solar irradiation is estimated for a wide mountainous area using a combination of Meteosat satellite images and a DTM, with the advantage of avoiding the need for ground measurements. The methodology utilizes a modified Heliosat-2 model and applies to all sky conditions; it also introduces a horizon calculation for the DTM points and accounts for the effect of snow cover. Model performance has been evaluated against data measured at 12 radiometric stations, with results in terms of the Root Mean Square Error (RMSE) of 10% and a Mean Bias Error (MBE) of +2%, both expressed as a percentage of the mean measured value. (author)
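
    The two validation statistics quoted above, RMSE and MBE expressed as percentages of the mean measured value, can be written down directly. The irradiation values below are made up for illustration:

    ```python
    import numpy as np

    def rmse_pct(est, meas):
        """Root mean square error as a percentage of the mean measured value."""
        return 100.0 * np.sqrt(np.mean((est - meas) ** 2)) / np.mean(meas)

    def mbe_pct(est, meas):
        """Mean bias error as a percentage of the mean measured value
        (positive = overestimation)."""
        return 100.0 * np.mean(est - meas) / np.mean(meas)

    # Hypothetical daily irradiation (kWh/m^2) at a few stations
    meas = np.array([5.0, 6.2, 4.8, 5.5])
    est = np.array([5.3, 6.0, 5.2, 5.6])
    print(round(rmse_pct(est, meas), 2), round(mbe_pct(est, meas), 2))
    ```
    
    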

  7. Multilayer Markov Random Field models for change detection in optical remote sensing images

    Science.gov (United States)

    Benedek, Csaba; Shadaydeh, Maha; Kato, Zoltan; Szirányi, Tamás; Zerubia, Josiane

    2015-09-01

    In this paper, we give a comparative study of three Multilayer Markov Random Field (MRF) based solutions proposed for change detection in optical remote sensing images: the Multicue MRF, the Conditional Mixed Markov model, and the Fusion MRF. Our purposes are twofold. On the one hand, we highlight the significance of this model family and set it against various state-of-the-art approaches through a thematic analysis and quantitative tests. We discuss the advantages and drawbacks of class-comparison vs. direct approaches, the usage of training data, the various targeted application fields and the different ways of Ground Truth generation, while informing the reader of the roles in which Multilayer MRFs can be efficiently applied. On the other hand, we emphasize the differences between the three models at various levels, considering model structure, feature extraction, layer interpretation, change concept definition, parameter tuning and performance. We provide qualitative and quantitative comparison results, principally using a publicly available change detection database that contains aerial image pairs and Ground Truth change masks. We conclude that the discussed models are competitive with alternative state-of-the-art solutions when used as pre-processing filters in multitemporal optical image analysis. In addition, together they cover a large range of applications, considering the different usage options of the three approaches.

  8. Model-based restoration using light vein for range-gated imaging systems.

    Science.gov (United States)

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Wang, Rui; Guo, Jin; Tian, Yuzhen

    2016-09-10

    The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, platform vibrations, and so on. The characteristics of low illumination, few details, and high noise make state-of-the-art restoration methods fail. In this paper, we present a restoration method designed specifically for range-gated imaging systems. The degradation process is divided into two parts: a static part and a dynamic part. For the static part, we establish a physical model of the imaging system according to laser transmission theory and estimate the static point spread function (PSF). For the dynamic part, a so-called light vein feature extraction method is presented to estimate the blur parameters of the atmospheric disturbance and platform movement, which contribute to the dynamic PSF. Finally, combining the static and dynamic PSFs, an iterative updating framework is used to restore the image. Compared with state-of-the-art methods, the proposed method effectively suppresses ringing artifacts and achieves better performance in range-gated imaging systems.

  9. INTEGRATING SMARTPHONE IMAGES AND AIRBORNE LIDAR DATA FOR COMPLETE URBAN BUILDING MODELLING

    Directory of Open Access Journals (Sweden)

    S. Zhang

    2016-06-01

    Full Text Available A complete building model reconstruction needs data collected from both the air and the ground. The former often has sparse coverage of building façades, while the latter is usually unable to observe building rooftops. To address the missing-data issues of building reconstruction from a single data source, we describe an approach for complete building reconstruction that integrates airborne LiDAR data and ground smartphone imagery. First, by taking advantage of the GPS and digital compass information embedded in the image metadata of smartphones, we are able to find the airborne LiDAR point clouds for the buildings appearing in the images. In the next step, Structure-from-Motion and dense multi-view stereo algorithms are applied to generate a building point cloud from multiple ground images. The third step extracts building outlines from the LiDAR point cloud and the ground-image point cloud, respectively. An automated correspondence between these two sets of building outlines allows us to achieve a precise registration and combination of the two point clouds, which ultimately results in a complete, full-resolution building model. The developed approach overcomes the problem of sparse points on building façades in airborne LiDAR and the lack of rooftops in ground images, such that the merits of both datasets are utilized.

  10. PACS for surgery and interventional radiology: features of a Therapy Imaging and Model Management System (TIMMS).

    Science.gov (United States)

    Lemke, Heinz U; Berliner, Leonard

    2011-05-01

    Appropriate use of information and communication technology (ICT) and mechatronic (MT) systems is viewed by many experts as a means to improve workflow and quality of care in the operating room (OR). This will require a suitable information technology (IT) infrastructure, as well as communication and interface standards, such as specialized extensions of DICOM, to allow data interchange between surgical system components in the OR. A design of such an infrastructure, sometimes referred to as surgical PACS, but better defined as a Therapy Imaging and Model Management System (TIMMS), will be introduced in this article. A TIMMS should support the essential functions that enable and advance image guided therapy, and in the future, a more comprehensive form of patient-model guided therapy. Within this concept, the "image-centric world view" of the classical PACS technology is complemented by an IT "model-centric world view". Such a view is founded in the special patient modelling needs of an increasing number of modern surgical interventions as compared to the imaging intensive working mode of diagnostic radiology, for which PACS was originally conceptualised and developed. The modelling aspects refer to both patient information and workflow modelling. Standards for creating and integrating information about patients, equipment, and procedures are vitally needed when planning for an efficient OR. The DICOM Working Group 24 (WG-24) has been established to develop DICOM objects and services related to image and model guided surgery. To determine these standards, it is important to define step-by-step surgical workflow practices and create interventional workflow models per procedures or per variable cases. As the boundaries between radiation therapy, surgery and interventional radiology are becoming less well-defined, precise patient models will become the greatest common denominator for all therapeutic disciplines. 
In addition to imaging, the focus of WG-24 is to serve

  11. PACS for surgery and interventional radiology: Features of a Therapy Imaging and Model Management System (TIMMS)

    International Nuclear Information System (INIS)

    Lemke, Heinz U.; Berliner, Leonard

    2011-01-01

    Appropriate use of information and communication technology (ICT) and mechatronic (MT) systems is viewed by many experts as a means to improve workflow and quality of care in the operating room (OR). This will require a suitable information technology (IT) infrastructure, as well as communication and interface standards, such as specialized extensions of DICOM, to allow data interchange between surgical system components in the OR. A design of such an infrastructure, sometimes referred to as surgical PACS, but better defined as a Therapy Imaging and Model Management System (TIMMS), will be introduced in this article. A TIMMS should support the essential functions that enable and advance image guided therapy, and in the future, a more comprehensive form of patient-model guided therapy. Within this concept, the 'image-centric world view' of the classical PACS technology is complemented by an IT 'model-centric world view'. Such a view is founded in the special patient modelling needs of an increasing number of modern surgical interventions as compared to the imaging intensive working mode of diagnostic radiology, for which PACS was originally conceptualised and developed. The modelling aspects refer to both patient information and workflow modelling. Standards for creating and integrating information about patients, equipment, and procedures are vitally needed when planning for an efficient OR. The DICOM Working Group 24 (WG-24) has been established to develop DICOM objects and services related to image and model guided surgery. To determine these standards, it is important to define step-by-step surgical workflow practices and create interventional workflow models per procedures or per variable cases. As the boundaries between radiation therapy, surgery and interventional radiology are becoming less well-defined, precise patient models will become the greatest common denominator for all therapeutic disciplines. 
In addition to imaging, the focus of WG-24 is to serve

  12. Nephrus: expert system model in intelligent multilayers for evaluation of urinary system based on scintigraphic image analysis

    International Nuclear Information System (INIS)

    Silva, Jorge Wagner Esteves da; Schirru, Roberto; Boasquevisque, Edson Mendes

    1999-01-01

    Renal function can be measured noninvasively with radionuclides in an extremely safe way compared to other diagnostic techniques. Nevertheless, because radioactive materials are used in this procedure, it is necessary to maximize its benefits, so all efforts are justifiable in the development of data analysis support tools for this diagnostic modality. The objective of this work is to develop a prototype for a system model based on Artificial Intelligence devices able to perform functions related to scintigraphic image analysis of the urinary system. Rules used by medical experts in the analysis of images obtained with 99mTc+DTPA and/or 99mTc+DMSA were modeled, and a Neural Network diagnosis technique was implemented. Special attention was given to the design of the program's user interface: Human Factors Engineering techniques were taken into account, allowing friendliness and robustness. The image segmentation adopts a model based on ideal ROIs, which represent the normal anatomic concept for urinary system organs. Results obtained using Artificial Neural Networks for qualitative image analysis, together with the knowledge model constructed, show the feasibility of an Artificial Intelligence implementation that uses the inherent abilities of each technique in medical diagnostic image analysis. (author)

  13. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). On the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic(®), Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif); for the digital images, the measurement tools of the O3d software (Widialabs, Brazil) were used. The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements obtained using the caliper and O3d were identical, and both differed significantly from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using the O3d software were equivalent.
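The Dahlberg formula used in the statistical comparison above estimates random method error from paired repeat measurements as sqrt(Σd²/2n). A minimal sketch, where the measurement values are hypothetical stand-ins rather than the study's data:

```python
import math

def dahlberg_error(first, second):
    """Dahlberg's double-determination error: sqrt(sum(d_i^2) / (2n)),
    where d_i are the differences between paired repeat measurements."""
    if len(first) != len(second) or not first:
        raise ValueError("need two equal-length, non-empty series")
    sq = sum((a - b) ** 2 for a, b in zip(first, second))
    return math.sqrt(sq / (2 * len(first)))

# Hypothetical repeated tooth-width measurements (mm)
m1 = [8.41, 7.02, 6.55, 9.10]
m2 = [8.38, 7.08, 6.50, 9.05]
print(round(dahlberg_error(m1, m2), 4))  # → 0.0345
```

A small Dahlberg error relative to the measured quantities indicates good method reproducibility, which is how it is used in the study above.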

  14. Establishment of imageable model of T-cell lymphoma growing in syngenic mice

    Czech Academy of Sciences Publication Activity Database

    Větvička, David; Hovorka, Ondřej; Kovář, Lubomír; Říhová, Blanka

    2009-01-01

    Roč. 29, č. 11 (2009), s. 4513-4518 ISSN 0250-7005 R&D Projects: GA AV ČR IAA400200702; GA AV ČR KAN200200651; GA ČR GD310/08/H077 Institutional research plan: CEZ:AV0Z50200510 Keywords: Imageable model * EL-4 T-cell lymphoma * whole body imaging Subject RIV: EC - Immunology Impact factor: 1.428, year: 2009

  15. Classification of bones from MR images in torso PET-MR imaging using a statistical shape model

    International Nuclear Information System (INIS)

    Reza Ay, Mohammad; Akbarzadeh, Afshin; Ahmadian, Alireza; Zaidi, Habib

    2014-01-01

    Hybrid PET/MRI systems offer exclusive features compared with their PET/CT counterparts in terms of reduced radiation exposure, improved soft-tissue contrast, and truly simultaneous and multi-parametric imaging capabilities. However, quantitative imaging on PET/MR is challenged by the attenuation of annihilation photons along their path. Correcting for photon attenuation requires a patient-specific attenuation map, which accounts for the spatial distribution of attenuation coefficients of biological tissues. However, the lack of information on electron density in the MR signal poses an inherent difficulty for deriving the attenuation map from MR images: the MR signal correlates with proton densities and tissue relaxation properties rather than with electron density and, as such, is not directly related to attenuation coefficients. To derive the attenuation map from MR images at 511 keV, various strategies have been proposed and implemented on prototype and commercial PET/MR systems. Segmentation-based methods generate an attenuation map by classifying T1-weighted or high-resolution Dixon MR sequences and then assigning predefined attenuation coefficients to the various tissue types. Intensity-based segmentation approaches fail to include bones in the attenuation map, since segmenting bone from conventional MR sequences is difficult; most MR-guided attenuation correction techniques therefore ignore bones unless specialized MR sequences, such as ultra-short echo time (UTE) sequences, are utilized. In this work, we introduce a new technique based on statistical shape modeling to segment bones and generate a four-class attenuation map. Our segmentation approach requires a torso bone shape model based on principal component analysis (PCA). 
A CT-based training set including clearly segmented bones of the torso region
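A statistical shape model of the kind described rests on principal component analysis of corresponding shape vectors. A minimal sketch of building such a model and synthesizing new shapes from it, using synthetic stand-in training vectors rather than the authors' CT-derived bone shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 20 "bone" shapes, each a flattened vector of
# 50 2D landmarks; in practice these come from segmented CT surfaces.
base = rng.normal(size=100)
mode_a = rng.normal(size=100)
mode_b = rng.normal(size=100)
coef = rng.normal(size=(20, 2))
shapes = base + coef[:, :1] * mode_a + coef[:, 1:] * mode_b \
       + 0.01 * rng.normal(size=(20, 100))

mean = shapes.mean(axis=0)
centered = shapes - mean

# PCA via SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
eigvals = s ** 2 / (len(shapes) - 1)   # variance captured per mode
modes = Vt                              # principal shape modes (rows)

# Keep the modes covering 95% of the total variance.
frac = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(frac, 0.95)) + 1

def synthesize(b):
    """New shape from k mode weights b, in units of standard deviations."""
    return mean + (b * np.sqrt(eigvals[:k])) @ modes[:k]

print(k)  # number of retained modes (2 dominant modes by construction)
```

Fitting such a model to a new image then amounts to searching for the mode weights (plus pose) that best explain the observed bone boundary, which constrains the segmentation to plausible shapes.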

  16. OntoVIP: an ontology for the annotation of object models used for medical image simulation.

    Science.gov (United States)

    Gibaud, Bernard; Forestier, Germain; Benoit-Cattin, Hugues; Cervenansky, Frédéric; Clarysse, Patrick; Friboulet, Denis; Gaignard, Alban; Hugonnard, Patrick; Lartizien, Carole; Liebgott, Hervé; Montagnat, Johan; Tabary, Joachim; Glatard, Tristan

    2014-12-01

    This paper describes the creation of a comprehensive conceptualization of object models used in medical image simulation, suitable for major imaging modalities and simulators. The goal is to create an application ontology that can be used to annotate the models in a repository integrated in the Virtual Imaging Platform (VIP), to facilitate their sharing and reuse. Annotations make the anatomical, physiological and pathophysiological content of the object models explicit. In such an interdisciplinary context we chose to rely on a common integration framework provided by a foundational ontology that facilitates the consistent integration of the various modules extracted from several existing ontologies, i.e., FMA, PATO, MPATH, RadLex and ChEBI. Emphasis is put on the methodology for achieving this extraction and integration. The most salient aspects of the ontology are presented, especially its organization in model layers, as well as its use to browse and query the model repository. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline that takes advantage of a multi-camera configuration and a multi-image matching technique and does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, a calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are then applied to images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums.

  18. Reconstruction of implanted marker trajectories from cone-beam CT projection images using interdimensional correlation modeling

    International Nuclear Information System (INIS)

    Chung, Hyekyun; Poulsen, Per Rugaard; Keall, Paul J.; Cho, Seungryong; Cho, Byungchul

    2016-01-01

    Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior
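The simple linear correlation model above can be illustrated by fitting AP motion as a linear function of the directly observable SI motion via least squares. The trajectory, coupling coefficient and noise level below are synthetic stand-ins, not patient data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic respiratory trajectory: SI motion plus linearly correlated AP motion.
t = np.linspace(0.0, 60.0, 600)                  # 60 s sampled at 10 Hz
si = 8.0 * np.sin(2.0 * np.pi * t / 4.0)         # SI amplitude 8 mm, 4 s period
ap_true = 0.4 * si + 1.5                         # assumed interdimensional coupling
ap_meas = ap_true + 0.2 * rng.normal(size=t.size)  # measurement noise

# Simple linear model AP ≈ a*SI + b, solved by linear least squares.
A = np.column_stack([si, np.ones_like(si)])
(a, b), *_ = np.linalg.lstsq(A, ap_meas, rcond=None)

ap_est = a * si + b
rmse = float(np.sqrt(np.mean((ap_est - ap_true) ** 2)))
print(round(a, 2), round(b, 2))
```

In the paper, the analogous fit is performed on the projected 2D target positions as the gantry rotates, so the unobserved dimension is inferred from the correlation rather than measured directly.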

  19. Reconstruction of implanted marker trajectories from cone-beam CT projection images using interdimensional correlation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hyekyun [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, South Korea and Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 138-736 (Korea, Republic of); Poulsen, Per Rugaard [Department of Oncology, Aarhus University Hospital, Nørrebrogade 44, 8000 Aarhus C (Denmark); Keall, Paul J. [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006 (Australia); Cho, Seungryong [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141 (Korea, Republic of); Cho, Byungchul, E-mail: cho.byungchul@gmail.com, E-mail: bcho@amc.seoul.kr [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505 (Korea, Republic of)

    2016-08-15

    Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior

  20. An image-based skeletal tissue model for the ICRP reference newborn

    Energy Technology Data Exchange (ETDEWEB)

    Pafundi, Deanna; Lee, Choonsik; Bolch, Wesley [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL (United States); Watchman, Christopher; Bourke, Vincent [Department of Radiation Oncology, University of Arizona, Tucson, AZ (United States); Aris, John [Department of Anatomy and Cell Biology, University of Florida, Gainesville, FL (United States); Shagina, Natalia [Urals Research Center for Radiation Medicine, Chelyabinsk (Russian Federation); Harrison, John; Fell, Tim [Radiation Protection Division, Health Protection Agency, Chilton (United Kingdom)], E-mail: wbolch@ufl.edu

    2009-07-21

    Hybrid phantoms represent a third generation of computational models of human anatomy needed for dose assessment in both external and internal radiation exposures. Recently, we presented the first whole-body hybrid phantom of the ICRP reference newborn with a skeleton constructed from both non-uniform rational B-spline and polygon-mesh surfaces (Lee et al 2007 Phys. Med. Biol. 52 3309-33). The skeleton in that model included regions of cartilage and fibrous connective tissue, with the remainder given as a homogeneous mixture of cortical and trabecular bone, active marrow and miscellaneous skeletal tissues. In the present study, we present a comprehensive skeletal tissue model of the ICRP reference newborn to permit a heterogeneous representation of the skeleton in that hybrid phantom set (both male and female) that explicitly includes a delineation of cortical bone, so that marrow shielding effects are correctly modeled for low-energy photons incident upon the newborn skeleton. Data sources for the tissue model were threefold. First, skeletal site-dependent volumes of homogeneous bone were obtained from whole-cadaver CT image analyses. Second, selected newborn bone specimens were acquired at autopsy and subjected to micro-CT image analysis to derive model parameters of the marrow cavity and bone trabecular 3D microarchitecture. Third, data given in ICRP Publications 70 and 89 were selected to match reference values on total skeletal tissue mass. Active marrow distributions were found to be in reasonable agreement with those given previously by the ICRP. However, significant differences were seen in total skeletal and site-specific masses of trabecular and cortical bone between the current and ICRP newborn skeletal tissue models. The latter utilizes an age-independent ratio of 80%/20% cortical to trabecular bone for the reference newborn. In the current study, a ratio closer to 40%/60% is used, based upon newborn CT and micro-CT skeletal image analyses. 
These changes in

  1. Modeling the Process of Color Image Recognition Using ART2 Neural Network

    Directory of Open Access Journals (Sweden)

    Todor Petkov

    2015-09-01

    Full Text Available This paper thoroughly describes the use of an unsupervised adaptive resonance theory (ART2) neural network for color recognition in x-ray images and images taken by nuclear magnetic resonance. To train the network, the RGB pixel values were used as learning vectors with three components: one for red, one for green and one for blue. The trained network was then tested on the pixel values of pictures to determine how to visualize the converted picture; as a result, we obtained the same pictures with colors assigned according to the network. Finally, we use a generalized net to prepare a model that describes the process of color image recognition.
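ART2 involves a full layered architecture with resonance and reset dynamics that is not reproduced here. As a heavily simplified, hypothetical sketch of the core idea only, vigilance-gated prototype matching and update on RGB vectors:

```python
import numpy as np

def art_like_cluster(pixels, vigilance=0.9, lr=0.5):
    """Simplified ART-style clustering: each RGB pixel is matched to its
    most similar prototype; if the cosine similarity passes the vigilance
    test the prototype is updated, otherwise a new category is created.
    This is an illustrative reduction, not the full ART2 dynamics."""
    prototypes, labels = [], []
    for p in pixels:
        p = np.asarray(p, dtype=float)
        best, best_sim = None, -1.0
        for i, w in enumerate(prototypes):
            sim = (p @ w) / (np.linalg.norm(p) * np.linalg.norm(w) + 1e-12)
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and best_sim >= vigilance:
            prototypes[best] = (1 - lr) * prototypes[best] + lr * p  # resonance
            labels.append(best)
        else:
            prototypes.append(p)                                     # new category
            labels.append(len(prototypes) - 1)
    return labels, prototypes

# Reddish and greenish pixels should fall into separate categories.
pix = [(250, 10, 10), (240, 20, 15), (10, 250, 20), (15, 245, 10)]
labels, protos = art_like_cluster(pix)
print(labels, len(protos))  # → [0, 0, 1, 1] 2
```

The vigilance parameter plays the role it does in ART: higher vigilance produces more, finer-grained color categories.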

  2. Myocardial imaging with thallium-201: an experimental model for analysis of the true myocardial and background image components

    International Nuclear Information System (INIS)

    Narahara, K.A.; Hamilton, G.W.; Williams, D.L.; Gould, K.L.

    1977-01-01

    The true myocardial and background components of a resting thallium-201 myocardial image were determined in an experimental dog model. True background was determined by imaging after the heart had been removed and replaced with a water-filled balloon of equal size and shape. In all studies, the background estimated from the region surrounding the heart exceeded true background activity. Furthermore, the relationship between true myocardial background and that estimated from the pericardiac region was inconsistent. Background estimates based on the activity surrounding the heart were not accurate predictors of true background activity

  3. SEMI-AUTOMATIC BUILDING MODELS AND FAÇADE TEXTURE MAPPING FROM MOBILE PHONE IMAGES

    Directory of Open Access Journals (Sweden)

    J. Jeong

    2016-06-01

    Full Text Available Research on 3D urban modelling has been actively carried out for a long time, and the need for it has recently increased rapidly due to improved geo-web services and the popularity of smart devices. Current 3D urban models, such as those provided by Google Earth, use aerial photos for modelling, but there are some limitations: immediate updates after building models change are difficult, many buildings lack a 3D model and texture, and large resources for maintenance and updating are inevitable. To resolve the limitations mentioned above, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images, and analyse the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated with this method were compared with measured values of real buildings by comparing the ratios of model edge lengths to measured edge lengths. The results showed an average length-ratio error of 5.8%. With this method, we could generate a simple building model with fine façade textures without expensive dedicated tools and datasets.

  4. Object recognition in images via a factor graph model

    Science.gov (United States)

    He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu

    2018-04-01

    Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been utilized to address these problems, especially the 2-dimensional CRF (Conditional Random Field) model. In this paper we propose a method based on a general and flexible factor graph model, which can capture long-range correlations in the Bag-of-Words representation by constructing a network learning framework, in contrast to the lattice structure of a CRF. Furthermore, we derive a parameter learning algorithm for the factor graph model based on gradient descent and the Loopy Sum-Product algorithm. Experimental results on the Graz-02 dataset show that the recognition performance of our method, in both precision and recall, is better than a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed method.
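The paper's factor graph and learning algorithm are not reproduced here. As a minimal illustration of the sum-product algorithm that loopy belief propagation generalizes, exact marginalization on a tiny chain-structured factor graph with arbitrary factor values:

```python
import numpy as np
from itertools import product

# Tiny chain factor graph: g0(x0) -- f01(x0,x1) -- f12(x1,x2), binary variables.
g0 = np.array([1.0, 3.0])                   # unary factor on x0
f01 = np.array([[2.0, 1.0], [1.0, 2.0]])    # pairwise factor favouring equality
f12 = np.array([[3.0, 1.0], [1.0, 3.0]])

# Sum-product messages into x1 (exact on a tree-structured graph).
m_left = g0 @ f01               # sum over x0 of g0(x0) * f01(x0, x1)
m_right = f12.sum(axis=1)       # sum over x2 of f12(x1, x2)
belief_x1 = m_left * m_right
belief_x1 /= belief_x1.sum()    # normalized marginal p(x1)

# Brute-force marginal by enumeration, for comparison.
brute = np.zeros(2)
for x0, x1, x2 in product([0, 1], repeat=3):
    brute[x1] += g0[x0] * f01[x0, x1] * f12[x1, x2]
brute /= brute.sum()

print(np.allclose(belief_x1, brute))  # → True
```

On graphs with cycles the same message updates are iterated ("loopy" sum-product) and yield approximate marginals, which is the setting the paper uses for learning.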

  5. New Parametric Imaging Algorithm for Quantification of Binding Parameter in non-reversible compartment model: MLAIR

    International Nuclear Information System (INIS)

    Kim, Su Jin; Lee, Jae Sung; Kim, Yu Kyeong; Lee, Dong Soo

    2007-01-01

    Parametric imaging allows analysis of the entire brain or body image. Graphical approaches are commonly employed to generate parametric images through linear or multilinear regression. However, such linear regression methods have limited accuracy due to bias at high noise levels. Several methods have been proposed to reduce the bias of linear regression estimation, especially for reversible models. In this study, we focus on generating parametric images of the net accumulation rate (Ki), which is related to the binding parameter in brain receptor studies, for an irreversible compartment model using multiple linear analysis. The reliability of the newly developed multiple linear analysis method (MLAIR) was assessed through Monte Carlo simulation, and we applied it to [11C]MeNTI PET of the opioid receptor
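MLAIR itself rearranges the irreversible compartment model into a multiple linear form; as a simpler, related illustration (not the authors' formulation), a Patlak-style graphical estimate of the net accumulation rate Ki from synthetic, noise-free curves constructed to obey the irreversible model's late-time behaviour:

```python
import numpy as np

# Synthetic plasma input function Cp(t) (arbitrary units, t in minutes).
t = np.linspace(0.1, 60.0, 120)
cp = 10.0 * np.exp(-0.1 * t) + 1.0

# Running integral of Cp via the trapezoid rule.
int_cp = np.concatenate([[0.0],
                         np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])

# Tissue curve built to satisfy the Patlak relation exactly:
# Ct(T) = Ki * integral(Cp) + V * Cp(T), with assumed Ki and V.
ki_true, v_true = 0.05, 0.3
ct = ki_true * int_cp + v_true * cp

# Patlak plot: y = Ct/Cp versus x = integral(Cp)/Cp; the slope is Ki.
x = int_cp / cp
y = ct / cp
slope, intercept = np.polyfit(x[60:], y[60:], 1)  # late-time linear fit
print(round(slope, 4), round(intercept, 4))  # → 0.05 0.3
```

Repeating such a linear fit voxel by voxel produces a Ki parametric image; MLAIR's contribution is a reformulation whose estimates are less biased than this kind of graphical fit at realistic noise levels.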

  6. MIDA: A Multimodal Imaging-Based Detailed Anatomical Model of the Human Head and Neck.

    Directory of Open Access Journals (Sweden)

    Maria Ida Iacono

    Full Text Available Computational modeling and simulations are increasingly being used to complement experimental testing in the analysis of the safety and efficacy of medical devices. Multiple voxel- and surface-based whole- and partial-body models have been proposed in the literature, typically with spatial resolution in the range of 1-2 mm and with 10-50 different tissue types resolved. We have developed a multimodal imaging-based detailed anatomical model of the human head and neck, named "MIDA". The model was obtained by integrating three different magnetic resonance imaging (MRI) modalities, the parameters of which were tailored to enhance the signals of specific tissues: (i) structural T1- and T2-weighted MRIs, including a heavily T2-weighted slab with high nerve contrast optimized to enhance the structures of the ear and eye; (ii) magnetic resonance angiography (MRA) data to image the vasculature; and (iii) diffusion tensor imaging (DTI) to obtain information on anisotropy and fiber orientation. The unique multimodal high-resolution approach allowed resolving 153 structures, including several distinct muscles, bones and skull layers, arteries and veins, and nerves, as well as the salivary glands. The model also offers a detailed characterization of the eyes, ears, and deep brain structures. A special automatic atlas-based segmentation procedure was adopted to include a detailed map of the nuclei of the thalamus and midbrain in the head model. The suitability of the model for simulations involving different numerical methods and discretization approaches, as well as DTI-based tensorial electrical conductivity, was examined in a case study in which the electric field was generated by transcranial alternating current stimulation. The voxel- and surface-based versions of the model are freely available to the scientific community.

  7. A Mathematical Model for Storage and Recall of Images using Targeted Synchronization of Coupled Maps.

    Science.gov (United States)

    Palaniyandi, P; Rangarajan, Govindan

    2017-08-21

    We propose a mathematical model for storage and recall of images using coupled maps. We start by theoretically investigating targeted synchronization in coupled map systems, wherein only a desired (partial) subset of the maps is made to synchronize. A simple method is introduced to specify coupling coefficients such that targeted synchronization is ensured. The principle of this method is extended to the storage and recall of images using coupled Rulkov maps. The process of adjusting coupling coefficients between Rulkov maps (often used to model neurons) for the purpose of storing a desired image mimics the process of adjusting synaptic strengths between neurons to store memories. Our method uses both synchronization and synaptic weight modification, as the human brain is thought to do. The stored image can be recalled by providing an initial random pattern to the dynamical system. The storage and recall of the standard image of Lena are explicitly demonstrated.
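As a toy illustration of the synchronization mechanism in coupled maps (using logistic rather than Rulkov maps, with an arbitrary coupling strength that makes the synchronized state stable):

```python
def f(x, r=3.9):
    """Chaotic logistic map, a simple stand-in for the Rulkov neuron map."""
    return r * x * (1.0 - x)

# Two diffusively coupled maps: each unit's next state mixes its own update
# with its partner's. For this coupling strength the difference |x - y|
# contracts at every step, so the pair synchronizes despite chaos.
eps = 0.45
x, y = 0.2, 0.7
for _ in range(100):
    x, y = ((1 - eps) * f(x) + eps * f(y),
            (1 - eps) * f(y) + eps * f(x))
print(abs(x - y) < 1e-9)  # → True
```

In the targeted-synchronization scheme of the paper, coupling coefficients are chosen so that only a selected subset of units contracts onto a common trajectory in this way, which is what encodes the stored pattern.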

  8. Comparison of model and human observer performance for detection and discrimination tasks using dual-energy x-ray images

    International Nuclear Information System (INIS)

    Richard, Samuel; Siewerdsen, Jeffrey H.

    2008-01-01

    Model observer performance, computed theoretically using cascaded systems analysis (CSA), was compared to the performance of human observers in detection and discrimination tasks. Dual-energy (DE) imaging provided a wide range of acquisition and decomposition parameters for which observer performance could be predicted and measured. This work combined previously derived observer models (e.g., Fisher-Hotelling and non-prewhitening) with CSA modeling of the DE image noise-equivalent quanta (NEQ) and imaging task (e.g., sphere detection, shape discrimination, and texture discrimination) to yield theoretical predictions of detectability index (d′) and area under the receiver operating characteristic curve (AZ). Theoretical predictions were compared to human observer performance assessed using 9-alternative forced-choice tests to yield measurements of AZ as a function of DE image acquisition parameters (viz., allocation of dose between the low- and high-energy images) and decomposition technique [viz., three DE image decomposition algorithms: standard log subtraction (SLS), simple-smoothing of the high-energy image (SSH), and anti-correlated noise reduction (ACNR)]. Results showed good agreement between theory and measurements over a broad range of imaging conditions. The incorporation of an eye filter and internal noise in the observer models demonstrated improved correspondence with human observer performance. Optimal acquisition and decomposition parameters were shown to depend on the imaging task; for example, ACNR and SSH yielded the greatest performance in the detection of soft-tissue and bony lesions, respectively. This study provides encouraging evidence that Fourier-based modeling of NEQ computed via CSA and imaging task provides a good approximation to human observer performance for simple imaging tasks, helping to bridge the gap between Fourier metrics of detector performance (e.g., NEQ) and human observer performance.
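Fourier-based observer models of this kind combine a task function with the NEQ in the spatial-frequency domain. A generic numerical sketch of the prewhitening form d′² = ∫|W_task(f)|²·NEQ(f) df, where the NEQ and task functions below are illustrative stand-ins, not the paper's measured DE quantities:

```python
import numpy as np

# Radial spatial-frequency axis (cycles/mm); range and sampling are illustrative.
f = np.linspace(0.01, 2.0, 200)
df = f[1] - f[0]

# Stand-in NEQ: a low-pass detector response (arbitrary units).
neq = 1.0e4 * np.exp(-2.0 * f)

# Stand-in task function: detection of a Gaussian lesion of width sigma (mm),
# i.e. the Fourier magnitude of the difference between the two hypotheses.
sigma = 1.0
w_task = np.exp(-2.0 * (np.pi * sigma * f) ** 2)

# Prewhitening detectability, radially integrated over 2D frequency space:
# d'^2 = ∫ |W_task(f)|^2 * NEQ(f) * 2*pi*f df
d2 = float(np.sum(w_task ** 2 * neq * 2.0 * np.pi * f) * df)
print(np.sqrt(d2))
```

Non-prewhitening variants and eye-filter/internal-noise terms, as used in the paper, modify this integrand but keep the same frequency-domain structure.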

  9. Dual-model automatic detection of nerve-fibres in corneal confocal microscopy images.

    Science.gov (United States)

    Dabbah, M A; Graham, J; Petropoulos, I; Tavakoli, M; Malik, R A

    2010-01-01

    Corneal Confocal Microscopy (CCM) imaging is a non-invasive surrogate for detecting, quantifying and monitoring diabetic peripheral neuropathy. This paper presents an automated method for detecting nerve-fibres from CCM images using a dual-model detection algorithm, and compares its performance to well-established texture and feature detection methods. The algorithm comprises two separate models, one for the background and another for the foreground (nerve-fibres), which work interactively. Our evaluation shows significant improvement (p ≈ 0) in both the error rate and signal-to-noise ratio of this model over the competitor methods. The automatic method is also evaluated against manual ground-truth analysis in assessing diabetic neuropathy on the basis of nerve-fibre length, and shows a strong correlation (r = 0.92). Both analyses significantly separate diabetic patients from control subjects (p ≈ 0).

  10. Image segmentation of overlapping leaves based on Chan–Vese model and Sobel operator

    Directory of Open Access Journals (Sweden)

    Zhibin Wang

    2018-03-01

    Full Text Available To improve the segmentation precision of overlapping crop leaves, this paper presents an effective image segmentation method based on the Chan–Vese model and Sobel operator. The approach consists of three stages. First, a feature that identifies hues with relatively high levels of green is used to extract the region of leaves and remove the background. Second, the Chan–Vese model and improved Sobel operator are implemented to extract the leaf contours and detect the edges, respectively. Third, a target leaf with a complex background and overlapping is extracted by combining the results obtained by the Chan–Vese model and Sobel operator. To verify the effectiveness of the proposed algorithm, a segmentation experiment was performed on 30 images of cucumber leaf. The mean error rate of the proposed method is 0.0428, which is a decrease of 6.54% compared with the mean error rate of the level set method. Experimental results show that the proposed method can accurately extract the target leaf from cucumber leaf images with complex backgrounds and overlapping regions.
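The green-hue feature extraction and Sobel edge steps can be sketched with NumPy alone; the Chan–Vese contour evolution itself is omitted, and the image and threshold below are synthetic and illustrative:

```python
import numpy as np

# Synthetic RGB image: a green "leaf" disc on a soil-coloured background.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
leaf = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
img = np.zeros((h, w, 3))
img[...] = (0.45, 0.35, 0.25)         # background colour
img[leaf] = (0.2, 0.8, 0.2)           # leaf colour

# Excess-green feature (2G - R - B) highlights vegetation pixels.
exg = 2 * img[..., 1] - img[..., 0] - img[..., 2]
mask = exg > 0.2                      # illustrative threshold

# Sobel gradient magnitude (hand-rolled 3x3 kernels) marks the leaf contour.
p = np.pad(exg, 1, mode="edge")
gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]) \
   - (p[:-2, :-2] + 2 * p[1:-1, :-2] + p[2:, :-2])
gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]) \
   - (p[:-2, :-2] + 2 * p[:-2, 1:-1] + p[:-2, 2:])
edges = np.hypot(gx, gy) > 1.0

print(mask.sum() == leaf.sum(), bool(edges.sum() > 0))
```

In the paper's pipeline, a region result like `mask` (from the Chan–Vese model) and an edge result like `edges` (from the improved Sobel operator) are then combined to separate a target leaf from its overlapping neighbours.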

  11. Combining computer modelling and cardiac imaging to understand right ventricular pump function.

    Science.gov (United States)

    Walmsley, John; van Everdingen, Wouter; Cramer, Maarten J; Prinzen, Frits W; Delhaas, Tammo; Lumens, Joost

    2017-10-01

    Right ventricular (RV) dysfunction is a strong predictor of outcome in heart failure and is a key determinant of exercise capacity. Despite these crucial findings, the RV remains understudied in the clinical, experimental, and computer modelling literature. This review outlines how recent advances in using computer modelling and cardiac imaging synergistically help to understand RV function in health and disease. We begin by highlighting the complexity of interactions that make modelling the RV both challenging and necessary, and then summarize the multiscale modelling approaches used to date to simulate RV pump function in the context of these interactions. We go on to demonstrate how these modelling approaches in combination with cardiac imaging have improved understanding of RV pump function in pulmonary arterial hypertension, arrhythmogenic right ventricular cardiomyopathy, dyssynchronous heart failure and cardiac resynchronization therapy, hypoplastic left heart syndrome, and repaired tetralogy of Fallot. We conclude with a perspective on key issues to be addressed by computational models of the RV in the near future. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017. For permissions, please email: journals.permissions@oup.com.

  12. A time dependent zonally averaged energy balance model to be incorporated into IMAGE (Integrated Model to Assess the Greenhouse Effect). Collaborative Paper

    International Nuclear Information System (INIS)

    Jonas, M.; Olendrzynski, K.; Elzen, M. den

    1991-10-01

    The Intergovernmental Panel on Climate Change (IPCC) is placing increasing emphasis on the use of time-dependent impact models that are linked with energy-emission accounting frameworks and models that predict in a time-dependent fashion important variables such as atmospheric concentrations of greenhouse gases, surface temperature and precipitation. Integrating these tools (greenhouse gas emission strategies, atmospheric processes, ecological impacts) into what is called an integrated assessment model will assist policymakers in the IPCC and elsewhere to assess the impacts of a wide variety of emission strategies. The Integrated Model to Assess the Greenhouse Effect (IMAGE; developed at RIVM) represents such an integrated assessment model which already calculates historical and future effects of greenhouse gas emissions on global surface temperature, sea level rise and other ecological and socioeconomic impacts. However, to be linked to environmental impact models such as the Global Vegetation Model and the Timber Assessment Model, both of which are under development at RIVM and IIASA, IMAGE needs to be regionalized in terms of temperature and precipitation output. These key parameters will then enable the above environmental impact models to be run in a time-dependent mode. In this paper we lay the scientific and numerical basis for a two-dimensional Energy Balance Model (EBM) to be integrated into the climate module of IMAGE which will ultimately provide scenarios of surface temperature and precipitation, resolved with respect to latitude and height. This paper will deal specifically with temperature; following papers will deal with precipitation. So far, the relatively simple EBM set up in this paper resolves mean annual surface temperatures on a regional scale defined by 10 deg latitude bands. In addition, we can concentrate on the implementation of the EBM into IMAGE, i.e., on the steering mechanism itself. Both reasons justify the time and effort put into
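In its simplest Budyko-type form, a zonally averaged EBM balances absorbed solar radiation against linearized outgoing radiation plus a transport term coupling each band to the global mean. A minimal sketch with illustrative parameter values (a textbook-style toy, not the IMAGE formulation):

```python
import numpy as np

# 18 zonal bands of 10 degrees; band-centre latitudes in radians.
lat = np.deg2rad(np.arange(-85.0, 90.0, 10.0))

# Crude annual-mean insolation profile (W/m^2), illustrative only.
Q = 1365.0 / 4.0
S = Q * (1.0 - 0.48 * (3.0 * np.sin(lat) ** 2 - 1.0) / 2.0)

alpha = 0.30                  # planetary albedo, taken uniform here
A, B = 203.3, 2.09            # linearized OLR = A + B*T  (T in deg C)
gamma = 3.8                   # transport coupling to global mean (W/m^2/K)

# Area weights proportional to cos(latitude).
w_area = np.cos(lat)
w_area /= w_area.sum()

# Iterate the per-band equilibrium condition to a fixed point:
#   T_i = (S_i*(1 - alpha) - A + gamma*T_mean) / (B + gamma)
T = np.zeros_like(lat)
for _ in range(200):
    T_mean = float(np.sum(w_area * T))
    T = (S * (1.0 - alpha) - A + gamma * T_mean) / (B + gamma)

print(round(float(np.sum(w_area * T)), 1))  # global-mean surface temperature (deg C)
print(bool(T[9] > T[0]))                    # equatorial band warmer than polar band
```

The EBM described in the paper adds, among other things, a seasonal cycle, height resolution and a proper treatment of transport, but the band-wise balance above is the structural core such models share.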

  13. Overview of IMAGE 2.0. An integrated model of climate change and the global environment

    International Nuclear Information System (INIS)

    Alcamo, J.; Battjes, C.; Van den Born, G.J.; Bouwman, A.F.; De Haan, B.J.; Klein Goldewijk, K.; Klepper, O.; Kreileman, G.J.J.; Krol, M.; Leemans, R.; Van Minnen, J.G.; Olivier, J.G.J.; De Vries, H.J.M.; Toet, A.M.C.; Van den Wijngaart, R.A.; Van der Woerd, H.J.; Zuidema, G.

    1995-01-01

The IMAGE 2.0 model is a multi-disciplinary, integrated model, designed to simulate the dynamics of the global society-biosphere-climate system. In this paper the focus is on the scientific aspects of the model, while another paper in this volume emphasizes its political aspects. The objectives of IMAGE 2.0 are to investigate linkages and feedbacks in the global system, and to evaluate consequences of climate policies. Dynamic calculations are performed to the year 2100, with a spatial scale ranging from grid (0.5 x 0.5 degrees latitude-longitude) to world political regions, depending on the sub-model. A total of 13 sub-models make up IMAGE 2.0, and they are organized into three fully linked sub-systems: Energy-Industry, Terrestrial Environment, and Atmosphere-Ocean. The fully linked model has been tested against data from 1970 to 1990, and after calibration it can reproduce the following observed trends: regional energy consumption and energy-related emissions, terrestrial flux of carbon dioxide and emissions of greenhouse gases, concentrations of greenhouse gases in the atmosphere, and transformation of land cover. The model can also simulate current zonal average surface and vertical temperatures. 1 fig., 10 refs

  14. Parametric modelling and segmentation of vertebral bodies in 3D CT and MR spine images

    International Nuclear Information System (INIS)

    Štern, Darko; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2011-01-01

Accurate and objective evaluation of vertebral deformations is of significant importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is focused on three-dimensional (3D) computed tomography (CT) and magnetic resonance (MR) imaging techniques, the established methods for evaluation of vertebral deformations are limited to measuring deformations in two-dimensional (2D) x-ray images. In this paper, we propose a method for quantitative description of vertebral body deformations by efficient modelling and segmentation of vertebral bodies in 3D. The deformations are evaluated from the parameters of a 3D superquadric model, which is initialized as an elliptical cylinder and then gradually deformed by introducing transformations that yield a more detailed representation of the vertebral body shape. After modelling the vertebral body shape with 25 clinically meaningful parameters and the vertebral body pose with six rigid body parameters, the 3D model is aligned to the observed vertebral body in the 3D image. The performance of the method was evaluated on 75 vertebrae from CT and 75 vertebrae from T2-weighted MR spine images, extracted from the thoracolumbar part of normal and pathological spines. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images, as the proposed 3D model is able to describe both normal and pathological vertebral body deformations. The method may therefore be used for initialization of whole vertebra segmentation or for quantitative measurement of vertebral body deformations.

  15. A statistical pixel intensity model for segmentation of confocal laser scanning microscopy images.

    Science.gov (United States)

    Calapez, Alexandre; Rosa, Agostinho

    2010-09-01

Confocal laser scanning microscopy (CLSM) has been widely used in the life sciences for the characterization of cell processes because it allows the recording of the distribution of fluorescence-tagged macromolecules on a section of the living cell. It is in fact the cornerstone of many molecular transport and interaction quantification techniques, where the identification of regions of interest through image segmentation is usually a required step. In many situations, because of the complexity of the recorded cellular structures or because of the amounts of data involved, image segmentation is either too difficult or too inefficient to do by hand, and automated segmentation procedures have to be considered. Given the nature of CLSM images, statistical segmentation methodologies appear as natural candidates. In this work we propose a model to be used for statistical unsupervised CLSM image segmentation. The model is derived from the CLSM image formation mechanics and its performance is compared to the existing alternatives. Results show that it provides a much better description of the data on classes characterized by their mean intensity, making it suitable not only for segmentation methodologies with a known number of classes but also for use with schemes aiming at the estimation of the number of classes through the application of cluster selection criteria.

  16. In Vivo Imaging of Retinal Hypoxia in a Model of Oxygen-Induced Retinopathy.

    Science.gov (United States)

    Uddin, Md Imam; Evans, Stephanie M; Craft, Jason R; Capozzi, Megan E; McCollum, Gary W; Yang, Rong; Marnett, Lawrence J; Uddin, Md Jashim; Jayagopal, Ashwath; Penn, John S

    2016-08-05

Ischemia-induced hypoxia elicits retinal neovascularization and is a major component of several blinding retinopathies such as retinopathy of prematurity (ROP), diabetic retinopathy (DR) and retinal vein occlusion (RVO). Currently, noninvasive imaging techniques capable of detecting and monitoring retinal hypoxia in living systems do not exist. Such techniques would greatly clarify the role of hypoxia in experimental and human retinal neovascular pathogenesis. In this study, we developed and characterized HYPOX-4, a fluorescence-imaging probe capable of detecting retinal hypoxia in living animals. HYPOX-4-dependent in vivo and ex vivo imaging of hypoxia was tested in a mouse model of oxygen-induced retinopathy (OIR). Predicted patterns of retinal hypoxia were imaged by HYPOX-4-dependent fluorescence activity in this animal model. In retinal cells and mouse retinal tissue, pimonidazole-adduct immunostaining confirmed the hypoxia selectivity of HYPOX-4. HYPOX-4 had no effect on retinal cell proliferation as indicated by BrdU assay and exhibited no acute toxicity in retinal tissue as indicated by TUNEL assay and electroretinography (ERG) analysis. Therefore, HYPOX-4 could potentially serve as the basis for in vivo fluorescence-based hypoxia-imaging techniques, providing a tool for investigators to understand the pathogenesis of ischemic retinopathies and for physicians to address unmet clinical needs.

  17. Imaging of Small Animal Peripheral Artery Disease Models: Recent Advancements and Translational Potential

    Directory of Open Access Journals (Sweden)

    Jenny B. Lin

    2015-05-01

Full Text Available Peripheral artery disease (PAD) is a broad disorder encompassing multiple forms of arterial disease outside of the heart. As such, PAD development is a multifactorial process with a variety of manifestations. For example, aneurysms are pathological expansions of an artery that can lead to rupture, while ischemic atherosclerosis reduces blood flow, increasing the risk of claudication, poor wound healing, limb amputation, and stroke. Current PAD treatment is often ineffective or associated with serious risks, largely because these disorders are commonly undiagnosed or misdiagnosed. Active areas of research are focused on detecting and characterizing deleterious arterial changes at early stages using non-invasive imaging strategies, such as ultrasound, as well as emerging technologies like photoacoustic imaging. Earlier disease detection and characterization could improve interventional strategies, leading to better prognosis in PAD patients. While rodents are being used to investigate PAD pathophysiology, imaging of these animal models has been underutilized. This review focuses on structural and molecular information and disease progression revealed by recent imaging efforts of aortic, cerebral, and peripheral vascular disease models in mice, rats, and rabbits. Effective translation to humans involves better understanding of underlying PAD pathophysiology to develop novel therapeutics and apply non-invasive imaging techniques in the clinic.

  18. Efficient scatter model for simulation of ultrasound images from computed tomography data

    Science.gov (United States)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low cost training for healthcare professionals, there is a growing interest in the use of this technology and the development of high fidelity systems that simulate the acquisitions of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering-map generation was revised, with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe some quality and performance metrics to validate these results, where a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state-of-the-art, showing negligible differences in their distributions.
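
The scatter pipeline described above, multiplicative noise followed by convolution with a point spread function, can be sketched as follows. The Gaussian PSF and all parameter values here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_scatter(echo, sigma_psf=1.5, noise_scale=0.5, seed=0):
    """Toy speckle model: multiplicative noise, then PSF convolution.

    sigma_psf and noise_scale are illustrative parameters, not values
    taken from the paper.
    """
    rng = np.random.default_rng(seed)
    # Multiplicative noise term with unit mean
    noise = 1.0 + noise_scale * rng.standard_normal(echo.shape)
    speckled = echo * noise
    # Separable Gaussian as a stand-in point spread function
    return gaussian_filter(speckled, sigma=sigma_psf)

# A uniform tissue patch acquires spatially correlated speckle
patch = np.ones((64, 64))
img = simulate_scatter(patch)
```

Convolving the noise with the PSF is what makes the speckle spatially coherent; without it the noise would be uncorrelated pixel to pixel.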

  19. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation times, graphics-processing-unit multithreading or an increased spacing of control points is adopted to speed up the implementation of the GRBF method. The experiments show that image deconvolution can be efficiently implemented with the continuous GRBF model, which is also of considerable reference value for the study of three-dimensional microscopic image deconvolution.
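
The simplification at the heart of the method relies on the closure of Gaussians under convolution: convolving two Gaussians yields a Gaussian whose variance is the sum of the two variances. A quick numerical check of this property (our own sketch, not the paper's code):

```python
import numpy as np

def gaussian(x, sigma):
    """Unit-area Gaussian."""
    return np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

# Symmetric, odd-length grid keeps np.convolve(..., mode="same") aligned with x
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
g_img = gaussian(x, 1.0)    # stand-in for one continuous GRBF basis function
g_psf = gaussian(x, 0.5)    # stand-in for the Gaussian point spread function

conv = np.convolve(g_img, g_psf, mode="same") * dx

# Closure property: the result is a Gaussian with variance 1.0**2 + 0.5**2
expected = gaussian(x, np.sqrt(1.0**2 + 0.5**2))
err = float(np.max(np.abs(conv - expected)))
```

Because degradation stays inside the Gaussian family, the unknown image can be parameterized entirely by the weights of the GRBF control points, which is what the method solves for.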

  20. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Condition Random Field Model

    Directory of Open Access Journals (Sweden)

    Dan Liu

    2018-04-01

Full Text Available This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN network is firstly used to automatically learn a hierarchical feature representation of the image. To get more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and is used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and that the approach obtains satisfactory results.

  1. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Condition Random Field Model.

    Science.gov (United States)

    Liu, Dan; Liu, Xuejun; Wu, Yiguang

    2018-04-24

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN network is firstly used to automatically learn a hierarchical feature representation of the image. To get more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and is used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and that the approach obtains satisfactory results.

  2. Optimization of an Image-Guided Laser-Induced Choroidal Neovascularization Model in Mice.

    Directory of Open Access Journals (Sweden)

    Yan Gong

Full Text Available The mouse model of laser-induced choroidal neovascularization (CNV) has been used in studies of the exudative form of age-related macular degeneration using both the conventional slit lamp and a new image-guided laser system. A standardized protocol for obtaining consistent results with this model has been lacking. We optimized details of laser-induced CNV using the image-guided laser photocoagulation system. Four lesions with similar size were consistently applied per eye at approximately double the disc diameter away from the optic nerve, using different laser power levels, and mice of various ages and genders. After 7 days, the mice were sacrificed and retinal pigment epithelium/choroid/sclera was flat-mounted, stained with Isolectin B4, and imaged. Quantification of the area of the laser-induced lesions was performed using an established and constant threshold. Exclusion criteria are described that were necessary for reliable data analysis of the laser-induced CNV lesions. The CNV lesion area was proportional to the laser power levels. Mice at 12-16 weeks of age developed more severe CNV than those at 6-8 weeks of age, and the gender difference was only significant in mice at 12-16 weeks of age, but not in those at 6-8 weeks of age. Dietary intake of omega-3 long-chain polyunsaturated fatty acid reduced laser-induced CNV in mice. Taken together, laser-induced CNV lesions can be easily and consistently applied using the image-guided laser platform. Mice at 6-8 weeks of age are ideal for the laser-induced CNV model.

  3. Application of Benchtop-magnetic resonance imaging in a nude mouse tumor model

    Directory of Open Access Journals (Sweden)

    Mäder Karsten

    2011-07-01

Full Text Available Background: MRI plays a key role in the preclinical development of new drugs, diagnostics and their delivery systems. However, very high installation and running costs of existing superconducting MRI machines limit the spread of MRI. The new method of Benchtop-MRI (BT-MRI) has the potential to overcome this limitation due to much lower installation and almost no running costs. However, due to the low field strength and decreased magnet homogeneity it is questionable whether BT-MRI can achieve sufficient image quality to provide useful information for preclinical in vivo studies. It was the aim of the current study to explore the potential of BT-MRI on tumor models in mice. Methods: We used a prototype of an in vivo BT-MRI apparatus to visualise organs and tumors and to analyse tumor progression in nude mouse xenograft models of human testicular germ cell tumor and colon carcinoma. Results: Subcutaneous xenografts were easily identified as relatively hypointense areas in transaxial slices of NMR images. Monitoring of tumor progression evaluated by pixel-extension analyses based on NMR images correlated with increasing tumor volume calculated by calliper measurement. Gd-BOPTA contrast agent injection resulted in better differentiation between parts of the urinary tissues and organs due to fast elimination of the agent via the kidneys. In addition, interior structuring of tumors could be observed. A strong contrast enhancement within a tumor was associated with a central necrotic/fibrotic area. Conclusions: BT-MRI provides satisfactory image quality to visualize organs and tumors and to monitor tumor progression and structure in mouse models.

  4. Image simulation and a model of noise power spectra across a range of mammographic beam qualities

    Energy Technology Data Exchange (ETDEWEB)

    Mackenzie, Alistair, E-mail: alistairmackenzie@nhs.net; Dance, David R.; Young, Kenneth C. [National Coordinating Centre for the Physics of Mammography, Royal Surrey County Hospital, Guildford GU2 7XX, United Kingdom and Department of Physics, University of Surrey, Guildford GU2 7XH (United Kingdom); Diaz, Oliver [Centre for Vision, Speech and Signal Processing, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH, United Kingdom and Computer Vision and Robotics Research Institute, University of Girona, Girona 17071 (Spain)

    2014-12-15

Purpose: The aim of this work is to create a model to predict the noise power spectra (NPS) for a range of mammographic radiographic factors. The noise model was necessary to degrade images acquired on one system to match the image quality of different systems for a range of beam qualities. Methods: Five detectors and x-ray systems [Hologic Selenia (ASEh), Carestream computed radiography CR900 (CRc), GE Essential (CSI), Carestream NIP (NIPc), and Siemens Inspiration (ASEs)] were characterized for this study. The signal transfer property was measured as the pixel value against absorbed energy per unit area (E) at a reference beam quality of 28 kV, Mo/Mo or 29 kV, W/Rh with 45 mm polymethyl methacrylate (PMMA) at the tube head. The contributions of the three noise sources (electronic, quantum, and structure) to the NPS were calculated by fitting a quadratic at each spatial frequency of the NPS against E. A quantum noise correction factor, which was dependent on beam quality, was quantified using a set of images acquired over a range of radiographic factors with different thicknesses of PMMA. The noise model was tested for images acquired at 26 kV, Mo/Mo with 20 mm PMMA and 34 kV, Mo/Rh with 70 mm PMMA for three detectors (ASEh, CRc, and CSI) over a range of exposures. The NPS were modeled with and without the noise correction factor and compared with the measured NPS. A previous method for adapting an image to appear as if acquired on a different system was modified to allow the reference beam quality to be different from the beam quality of the image. The method was validated by adapting the ASEh flat field images with two thicknesses of PMMA (20 and 70 mm) to appear with the imaging characteristics of the CSI and CRc systems. Results: The quantum noise correction factor rises with higher beam qualities, except for CR systems at high spatial frequencies, where a flat response was found against mean photon energy. This is due to the dominance of secondary quantum noise.
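
The decomposition described in the Methods, fitting a quadratic in E at each spatial frequency so that the constant, linear, and quadratic terms correspond to electronic, quantum, and structure noise, can be sketched as follows on synthetic data (not the authors' code):

```python
import numpy as np

def decompose_nps(E, nps):
    """Fit NPS(E) = e + q*E + s*E**2 at one spatial frequency.

    Returns (electronic, quantum, structure) coefficients; a sketch of
    the quadratic decomposition described above, not the authors' code.
    """
    s, q, e = np.polyfit(E, nps, deg=2)   # polyfit returns highest power first
    return e, q, s

# Synthetic check: a known noise mixture is recovered exactly by the fit
E = np.linspace(1.0, 50.0, 20)            # absorbed energy per unit area (arb.)
nps = 2.0 + 0.8 * E + 0.01 * E**2         # electronic + quantum + structure
e, q, s = decompose_nps(E, nps)
```

In practice the fit is repeated independently at every spatial frequency bin of the measured NPS, giving three noise maps over frequency.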

  5. Reconstruction of hyperspectral image using matting model for classification

    Science.gov (United States)

    Xie, Weiying; Li, Yunsong; Ge, Chiru

    2016-05-01

    Although hyperspectral images (HSIs) captured by satellites provide much information in spectral regions, some bands are redundant or have large amounts of noise, which are not suitable for image analysis. To address this problem, we introduce a method for reconstructing the HSI with noise reduction and contrast enhancement using a matting model for the first time. The matting model refers to each spectral band of an HSI that can be decomposed into three components, i.e., alpha channel, spectral foreground, and spectral background. First, one spectral band of an HSI with more refined information than most other bands is selected, and is referred to as an alpha channel of the HSI to estimate the hyperspectral foreground and hyperspectral background. Finally, a combination operation is applied to reconstruct the HSI. In addition, the support vector machine (SVM) classifier and three sparsity-based classifiers, i.e., orthogonal matching pursuit (OMP), simultaneous OMP, and OMP based on first-order neighborhood system weighted classifiers, are utilized on the reconstructed HSI and the original HSI to verify the effectiveness of the proposed method. Specifically, using the reconstructed HSI, the average accuracy of the SVM classifier can be improved by as much as 19%.
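
The compositing step of the matting model, in which each band is recovered as I = alpha*F + (1 - alpha)*B, can be sketched as follows. The values are illustrative, and the paper's estimation of the spectral foreground and background from the alpha channel is not reproduced here:

```python
import numpy as np

def reconstruct_band(alpha, foreground, background):
    """Compositing step of a matting model: I = alpha*F + (1 - alpha)*B."""
    return alpha * foreground + (1.0 - alpha) * background

alpha = np.array([[0.0, 0.5],
                  [1.0, 0.25]])            # alpha channel (illustrative)
F = np.full((2, 2), 100.0)                 # spectral foreground (illustrative)
B = np.full((2, 2), 20.0)                  # spectral background (illustrative)
band = reconstruct_band(alpha, F, B)
# band[0, 1] = 0.5*100 + 0.5*20 = 60.0
```

Noise reduction comes from the decomposition itself: a clean alpha channel and smooth F and B estimates yield a recomposited band with less noise than the original.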

  6. Development of digital phantoms based on a finite element model to simulate low-attenuation areas in CT imaging for pulmonary emphysema quantification.

    Science.gov (United States)

    Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo

    2017-09-01

To develop an innovative finite element (FE) model of lung parenchyma which simulates pulmonary emphysema on CT imaging. The model aims to generate a set of digital phantoms of low-attenuation areas (LAA) images with different grades of emphysema severity. Four individual parameter configurations simulating different grades of emphysema severity were utilized to generate 40 FE models using ten randomizations for each setting. We compared two measures of emphysema severity (relative area (RA) and the exponent D of the cumulative distribution function of LAA cluster size) between the simulated LAA images and those computed directly on the model's output (considered as reference). The LAA images obtained from our model output can simulate CT-LAA images in subjects with different grades of emphysema severity. Both RA and D computed on simulated LAA images were underestimated as compared to those calculated on the model's output, suggesting that measurements in CT imaging may not be accurate in the assessment of real emphysema extent. Our model is able to mimic the cluster size distribution of LAA on CT imaging of subjects with pulmonary emphysema. The model could be useful to generate standard test images and to design physical phantoms of LAA images for the assessment of the accuracy of indexes for the radiologic quantitation of emphysema.
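
The two severity measures, RA and the cluster-size statistics underlying the exponent D, can be computed from a CT slice roughly as follows. The -950 HU threshold and the connectivity used here are common choices in the emphysema literature, not necessarily the paper's settings:

```python
import numpy as np
from scipy import ndimage

def emphysema_metrics(ct_slice, threshold=-950):
    """Relative area (RA) of low-attenuation voxels and LAA cluster sizes.

    -950 HU is a commonly used emphysema threshold; the paper's exact
    settings may differ.
    """
    laa = ct_slice < threshold
    ra = laa.mean()                         # fraction of low-attenuation area
    labels, n = ndimage.label(laa)          # cross-connected LAA clusters
    sizes = ndimage.sum(laa, labels, index=range(1, n + 1))
    return ra, np.asarray(sizes)

ct = np.full((4, 4), -800)
ct[0, 0] = ct[0, 1] = ct[3, 3] = -980      # two LAA clusters (sizes 2 and 1)
ra, sizes = emphysema_metrics(ct)
```

The exponent D is then obtained by fitting a power law to the cumulative distribution of the returned cluster sizes.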

  7. Thick tissue diffusion model with binding to optimize topical staining in fluorescence breast cancer margin imaging

    Science.gov (United States)

    Xu, Xiaochun; Kang, Soyoung; Navarro-Comes, Eric; Wang, Yu; Liu, Jonathan T. C.; Tichauer, Kenneth M.

    2018-03-01

Intraoperative tumor/surgical margin assessment is required to achieve a higher tumor resection rate in breast-conserving surgery. Though current histology provides unmatched accuracy in margin assessment, thin tissue sectioning and the limited field of view of microscopy make histology too time-consuming for intraoperative applications. If thick-tissue, wide-field imaging can provide an acceptable assessment of tumor cells at the surface of resected tissues, an intraoperative protocol can be developed to guide the surgery and provide immediate feedback for surgeons. Topical staining of margins with cancer-targeted molecular imaging agents has the potential to provide the sensitivity needed to see microscopic cancer on a wide-field image; however, diffusion and nonspecific retention of imaging agents in thick tissue can significantly diminish tumor contrast with conventional methods. Here, we present a mathematical model to accurately simulate nonspecific retention, binding, and diffusion of imaging agents in thick-tissue topical staining to guide and optimize future thick-tissue staining and imaging protocols. To verify the accuracy and applicability of the model, diffusion profiles of cancer-targeted and untargeted (control) nanoparticles at different staining times in A431 tumor xenografts were acquired for model comparison and tuning. The initial findings suggest the existence of nonspecific retention in the tissue, especially at the tissue surface. The simulator can be used to compare the effect of nonspecific retention, receptor binding and diffusion under various conditions (tissue type, imaging agent) and provides optimal staining and imaging protocols for targeted and control imaging agents.
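
A minimal one-dimensional version of such a diffusion-with-binding model, using an explicit finite-difference scheme and a first-order binding/retention sink, might look like the sketch below. All parameter values are illustrative, not the paper's fitted constants:

```python
import numpy as np

def stain_profile(depth_um=500.0, nx=101, D=50.0, k_bind=0.01,
                  c_surface=1.0, t_end=60.0):
    """Explicit 1D diffusion of a topical stain with first-order binding:
        dc/dt = D * d2c/dx2 - k_bind * c,   c(0, t) = c_surface.
    D in um^2/s, k_bind in 1/s, time in s, depth in um (illustrative values).
    """
    dx = depth_um / (nx - 1)
    dt = 0.4 * dx**2 / D                    # FTCS stability: D*dt/dx^2 <= 0.5
    c = np.zeros(nx)
    c[0] = c_surface                        # stained surface held constant
    for _ in range(int(t_end / dt)):
        lap = np.zeros(nx)
        lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
        c = c + dt * (D * lap - k_bind * c)
        c[0] = c_surface                    # Dirichlet boundary at the surface
        c[-1] = c[-2]                       # zero-flux at the deep boundary
    return c

profile = stain_profile()                   # concentration vs. depth at t_end
```

Making k_bind depth- or receptor-dependent would be the natural way to distinguish specific binding from the surface-weighted nonspecific retention the paper reports.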

  8. Detecting Weather Radar Clutter by Information Fusion With Satellite Images and Numerical Weather Prediction Model Output

    DEFF Research Database (Denmark)

    Bøvith, Thomas; Nielsen, Allan Aasbjerg; Hansen, Lars Kai

    2006-01-01

A method for detecting clutter in weather radar images by information fusion is presented. Radar data, satellite images, and output from a numerical weather prediction model are combined and the radar echoes are classified using supervised classification. The presented method uses indirect ... information on precipitation in the atmosphere from Meteosat-8 multispectral images and near-surface temperature estimates from the DMI-HIRLAM-S05 numerical weather prediction model. Alternatively, an operational nowcasting product called 'Precipitating Clouds' based on Meteosat-8 input is used. A scale...

  9. A model of destination image promotion with a case study of Nanjing, P. R. China

    Science.gov (United States)

    Xiang Li; Hans Vogelsong

    2003-01-01

    Destination image has long been a popular research topic in tourism studies. However, methods used to integrate image in real marketing practice and evaluating the market performance in a systematic way are still puzzling to practitioners. A destination image promotion model is proposed in this paper as an effort to solve the problem. The roles of some major factors...

  10. PET/SPECT/CT multimodal imaging in a transgenic mouse model of breast cancer

    Energy Technology Data Exchange (ETDEWEB)

    Boisgard, R.; Alberini, J.L.; Jego, B.; Siquier, K.; Theze, B.; Guillermet, S.; Tavitian, B. [Service Hospitalier Frederic Joliot, Institut d' Imagerie BioMedicale, CEA, 91 - Orsay (France); Inserm, U803, 91 - Orsay (France)

    2008-02-15

Background. - In the therapy monitoring of breast cancer, conventional imaging methods include ultrasound, mammography, CT and MRI, which are essentially based on tumor size modifications. However these modifications represent a late consequence of the biological response and fail to differentiate scar or necrotic tissue from residual viable tumoral tissue. Therefore, a current objective is to develop tools able to predict early response to treatment. Positron Emission Tomography (PET) and Single Photon Emission Computerized Tomography (SPECT) are imaging modalities able to provide extremely sensitive quantitative molecular data and are widely used in humans and animals. Results. - Mammary epithelial cells of female transgenic mice expressing the polyoma middle T oncoprotein (Py M.T.) undergo four distinct stages of tumour progression, from premalignant to malignant stages. Stages are identifiable in the mammary tissue and can lead to the development of distant metastases. Longitudinal studies by dynamic whole-body acquisitions with multimodal imaging including PET, SPECT and Computed Tomography (CT) allow following the tumoral evolution in Py M.T. mice in comparison with the histopathological analysis. At four weeks of age, mammary hyperplasia was identified by histopathology, but no abnormalities were found by palpation or detected by PET with 2-deoxy-2-[{sup 18}F]fluoro-D-glucose. As in some human mammary cancers, the sodium iodide symporter (N.I.S.) is expressed in tumoral mammary epithelial cells in this mouse model. In order to investigate the expression of N.I.S. in the Py M.T. mice mammary tumours, [{sup 99m}Tc]TcO{sub 4} imaging was performed with a dedicated SPECT/CT system camera (B.I.O.S.P.A.C.E. Gamma Imager/CT). Local uptake of [{sup 99m}Tc]TcO{sub 4} was detected as early as four weeks of age. The efficacy of chemotherapy was evaluated in this mouse model using a conventional regimen (Doxorubicine, 100 mg/kg) administered weekly from nine to

  11. Imaging techniques for visualizing and phenotyping congenital heart defects in murine models.

    Science.gov (United States)

    Liu, Xiaoqin; Tobita, Kimimasa; Francis, Richard J B; Lo, Cecilia W

    2013-06-01

The mouse model is ideal for investigating the genetic and developmental etiology of congenital heart disease. However, cardiovascular phenotyping for the precise diagnosis of structural heart defects in mice remains challenging. With rapid advances in imaging techniques, there are now high-throughput phenotyping tools available for the diagnosis of structural heart defects. In this review, we discuss the efficacy of four different imaging modalities for congenital heart disease diagnosis in fetal/neonatal mice, including noninvasive fetal echocardiography, micro-computed tomography (micro-CT), micro-magnetic resonance imaging (micro-MRI), and episcopic fluorescence image capture (EFIC) histopathology. The experience we have gained in the use of these imaging modalities in a large-scale mouse mutagenesis screen has validated their efficacy for congenital heart defect diagnosis in the tiny hearts of fetal and newborn mice. These cutting-edge phenotyping tools will be invaluable for furthering our understanding of the developmental etiology of congenital heart disease. Copyright © 2013 Wiley Periodicals, Inc.

  12. An Image-based Micro-continuum Pore-scale Model for Gas Transport in Organic-rich Shale

    Science.gov (United States)

    Guo, B.; Tchelepi, H.

    2017-12-01

Gas production from unconventional source rocks, such as ultra-tight shales, has increased significantly over the past decade. However, due to the extremely small pores (1-100 nm) and the strong material heterogeneity, gas flow in shale is still not well understood and poses challenges for predictive field-scale simulations. In recent years, digital rock analysis has been applied to understand shale gas transport at the pore-scale. An issue with rock images (e.g. FIB-SEM, nano-/micro-CT images) is the so-called "cutoff length", i.e., pores and heterogeneities below the resolution cannot be resolved, which leads to two length scales (resolved features and unresolved sub-resolution features) that are challenging for flow simulations. Here we develop a micro-continuum model, modified from the classic Darcy-Brinkman-Stokes framework, that can naturally couple the resolved pores and the unresolved nano-porous regions. In the resolved pores, gas flow is modeled with the Stokes equation. In the unresolved regions where the pore sizes are below the image resolution, we develop an apparent permeability model considering non-Darcy flow at the nanoscale including slip flow, Knudsen diffusion, adsorption/desorption, surface diffusion, and real gas effects. The end result is a micro-continuum pore-scale model that can simulate gas transport in 3D reconstructed shale images. The model has been implemented in the open-source simulation platform OpenFOAM. In this paper, we present case studies to demonstrate the applicability of the model, where we use 3D segmented FIB-SEM and nano-CT shale images that include four material constituents: organic matter, clay, granular mineral, and pore. In addition to the pore structure and the distribution of the material constituents, we populate the model with experimental measurements (e.g. size distribution of the sub-resolution pores from nitrogen adsorption) and parameters from the literature and identify the relative importance of different
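
As a rough illustration of why sub-resolution pores need an apparent rather than intrinsic permeability, the sketch below computes a Knudsen number from the kinetic-theory mean free path and applies a first-order Klinkenberg-type slip correction. The molecular diameter and all conditions are assumptions for illustration; the full model in the paper additionally includes Knudsen diffusion, sorption, surface diffusion, and real-gas effects:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def knudsen_number(pore_radius_m, T, p, d_molecule=3.8e-10):
    """Kn = lambda / (2r), with the kinetic-theory mean free path.
    d_molecule ~ 0.38 nm roughly approximates methane (an assumption)."""
    mfp = K_B * T / (math.sqrt(2.0) * math.pi * d_molecule**2 * p)
    return mfp / (2.0 * pore_radius_m)

def apparent_permeability(k_darcy, Kn):
    """First-order (Klinkenberg-type) slip correction: k_app = k_D*(1 + 4*Kn).
    A minimal stand-in for the paper's richer apparent-permeability model."""
    return k_darcy * (1.0 + 4.0 * Kn)

Kn = knudsen_number(5e-9, T=350.0, p=2e7)   # 5 nm pore, reservoir-like T, p
k_app = apparent_permeability(1e-21, Kn)    # k_D = 1 nD (illustrative)
```

Even at reservoir pressure, a 5 nm pore sits in the slip-flow regime (Kn of a few percent), so the Darcy permeability alone underestimates transport through the unresolved regions.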

  13. Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.

    Science.gov (United States)

    Guo, Yu; Feng, Yuanming; Sun, Jian; Zhang, Ning; Lin, Wang; Sa, Yu; Wang, Ping

    2014-01-01

    The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information of human tissues, and it has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and manual delineation by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar, and Dice's similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentations can be achieved with this method for lung tumors located near other organs with similar intensities in PET and CT images, such as when the tumors extend into the chest wall or mediastinum.
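
The Dice similarity coefficient (DSC) used above to compare automatic and manual segmentations is a standard overlap metric; a minimal sketch for binary masks (generic, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / total

# Example: two overlapping "tumor" masks on a 4x4 grid
auto = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
manual = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(dice_coefficient(auto, manual))  # 2*3/(4+3) ≈ 0.857
```

A DSC of 1.0 means identical masks; values above roughly 0.7-0.8 are commonly read as good agreement in segmentation studies.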

  14. Automatic Lung Tumor Segmentation on PET/CT Images Using Fuzzy Markov Random Field Model

    Directory of Open Access Journals (Sweden)

    Yu Guo

    2014-01-01

    Full Text Available The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information of human tissues, and it has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and manual delineation by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar, and Dice’s similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentations can be achieved with this method for lung tumors located near other organs with similar intensities in PET and CT images, such as when the tumors extend into the chest wall or mediastinum.

  15. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems.

    Science.gov (United States)

    Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A

    2015-01-01

    Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disks and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
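
A channelized Hotelling observer reduces each image to a small vector of channel outputs and computes detectability from the channelized statistics. The sketch below illustrates the standard Hotelling detectability index on synthetic channel outputs; the channel design (e.g. Gabor functions), internal noise, and the data are all simplified assumptions, not the authors' implementation:

```python
import numpy as np

def cho_detectability(signal_present, signal_absent):
    """Channelized Hotelling observer detectability index.

    Rows are channelized image samples (one row per image,
    one column per channel output)."""
    sp = np.asarray(signal_present, dtype=float)
    sa = np.asarray(signal_absent, dtype=float)
    delta = sp.mean(axis=0) - sa.mean(axis=0)          # mean channel-output difference
    cov = 0.5 * (np.cov(sp, rowvar=False) + np.cov(sa, rowvar=False))
    w = np.linalg.solve(cov, delta)                    # Hotelling template weights
    return float(np.sqrt(delta @ w))                   # DI^2 = delta^T S^-1 delta

# Synthetic example: 3 channels, known mean separation in two of them
rng = np.random.default_rng(0)
absent = rng.normal(0.0, 1.0, size=(500, 3))
present = rng.normal([0.5, 0.3, 0.0], 1.0, size=(500, 3))
print(cho_detectability(present, absent))
```

In a real study the channel outputs would come from applying a bank of Gabor filters to the disk images, and internal noise would be added to the decision variables before computing the DI.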

  16. A generative Bezier curve model for surf-zone tracking in coastal image sequences

    CSIR Research Space (South Africa)

    Burke, Michael G

    2017-09-01

    Full Text Available This work introduces a generative Bezier curve model suitable for surf-zone curve tracking in coastal image sequences. The model combines an adaptive curve parametrised by control points governed by local random walks with a global sinusoidal motion...
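
A Bezier curve of the kind used for surf-zone tracking is fully determined by its control points; a minimal sketch of evaluating one via Bernstein polynomials (illustrative only — the paper's adaptive, randomly perturbed model is more involved):

```python
import numpy as np
from math import comb

def bezier_curve(control_points, num=100):
    """Evaluate a Bezier curve from its control points using the
    Bernstein polynomial form: B(t) = sum_i C(n,i) t^i (1-t)^(n-i) P_i."""
    pts = np.asarray(control_points, dtype=float)
    n = len(pts) - 1
    t = np.linspace(0.0, 1.0, num)[:, None]
    curve = np.zeros((num, pts.shape[1]))
    for i, p in enumerate(pts):
        bernstein = comb(n, i) * t**i * (1 - t)**(n - i)
        curve += bernstein * p
    return curve

# Quadratic curve: endpoints are interpolated, the middle point pulls the curve
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
c = bezier_curve(ctrl, num=5)
print(c[0], c[-1])  # starts at (0, 0), ends at (2, 0)
```

In a tracking setting the control points would be the state being updated (here by local random walks), with the curve re-evaluated against each new frame.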

  17. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    Science.gov (United States)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, the framing of 3D-surgical planning that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of teeth, is essential. In this study, a support system for orthodontic surgery to visualize the changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy has been developed. By integrating the operating portion of a tooth model that is to determine the optimum occlusal position by manipulating the entity tooth model and the 3D-CT skeletal images (3D image display portion) that are simultaneously displayed in real-time, the determination of the mandibular position and posture in which the improvement of skeletal morphology and occlusal condition is considered, is possible. The realistic operation of the entity model and the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  18. Efficient methodologies for system matrix modelling in iterative image reconstruction for rotating high-resolution PET

    Energy Technology Data Exchange (ETDEWEB)

    Ortuno, J E; Kontaxakis, G; Rubio, J L; Santos, A [Departamento de Ingenieria Electronica (DIE), Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid (Spain); Guerra, P [Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid (Spain)], E-mail: juanen@die.upm.es

    2010-04-07

    A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
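
The ordered-subsets reconstruction described above repeatedly applies the MLEM update x ← x · Aᵀ(y / Ax) / Aᵀ1 over subsets of the projection data. A toy dense-matrix sketch of one such pass (a real implementation would store the precalculated system matrix in sparse form and exploit the symmetries described above; the tiny matrix below is hypothetical):

```python
import numpy as np

def osem_iteration(x, A, y, subsets):
    """One OSEM pass: sequential MLEM updates over ordered subsets.

    x: current image estimate; A: system matrix (rows = detector bins,
    columns = image voxels; stored sparse in practice); y: measured counts;
    subsets: list of row-index arrays partitioning the projections."""
    for rows in subsets:
        As = A[rows]
        sens = As.sum(axis=0)                  # subset sensitivity image A^T 1
        proj = As @ x                          # forward projection A x
        ratio = np.divide(y[rows], proj, out=np.ones_like(proj),
                          where=proj > 0)      # measured / estimated counts
        x = x * (As.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Tiny synthetic example: 4 detector bins, 3 voxels
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [0.3, 0.3, 0.3]])
true_x = np.array([2.0, 1.0, 3.0])
y = A @ true_x                                 # noise-free "measurements"
x = np.ones(3)
for _ in range(500):
    x = osem_iteration(x, A, y, [np.arange(4)])  # single subset = plain MLEM
print(np.round(x, 2))
```

Using several smaller subsets per pass accelerates early convergence, which is the practical appeal of OSEM over plain MLEM.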

  19. Improving fault image by determination of optimum seismic survey parameters using ray-based modeling

    Science.gov (United States)

    Saffarzadeh, Sadegh; Javaherian, Abdolrahim; Hasani, Hossein; Talebi, Mohammad Ali

    2018-06-01

    In complex structures such as faults, salt domes and reefs, specifying the survey parameters is more challenging and critical owing to the complicated wave-field behavior involved in such structures. In the petroleum industry, detecting faults has become crucial for assessing reservoir potential, as faults can act as traps for hydrocarbons. In this regard, seismic survey modeling is employed to construct a model close to the real structure and obtain very realistic synthetic seismic data. Seismic modeling software, the velocity model and parameters pre-determined by conventional methods enable a seismic survey designer to run a shot-by-shot virtual survey operation. A reliable velocity model of the structures can be constructed by integrating the 2D seismic data, geological reports and the well information. The effects of various survey designs can be investigated by the analysis of illumination maps and flower plots. In addition, seismic processing of the synthetic data output can describe the target image obtained with different survey parameters. Therefore, seismic modeling is one of the most economical ways to establish and test the optimum acquisition parameters to obtain the best image when dealing with complex geological structures. The primary objective of this study is to design a proper 3D seismic survey orientation to image fault zone structures through ray-tracing seismic modeling. The results prove that a seismic survey designer can enhance the image of fault planes in a seismic section by utilizing the proposed modeling and processing approach.

  20. Simulations, Imaging, and Modeling: A Unique Theme for an Undergraduate Research Program in Biomechanics.

    Science.gov (United States)

    George, Stephanie M; Domire, Zachary J

    2017-07-01

    As the reliance on computational models to inform experiments and evaluate medical devices grows, the demand for students with modeling experience will grow. In this paper, we report on the 3-yr experience of a National Science Foundation (NSF) funded Research Experiences for Undergraduates (REU) based on the theme simulations, imaging, and modeling in biomechanics. While directly applicable to REU sites, our findings also apply to those creating other types of summer undergraduate research programs. The objective of the paper is to examine if a theme of simulations, imaging, and modeling will improve students' understanding of the important topic of modeling, provide an overall positive research experience, and provide an interdisciplinary experience. The structure of the program and the evaluation plan are described. We report on the results from 25 students over three summers from 2014 to 2016. Overall, students reported significant gains in the knowledge of modeling, research process, and graduate school based on self-reported mastery levels and open-ended qualitative responses. This theme provides students with a skill set that is adaptable to other applications illustrating the interdisciplinary nature of modeling in biomechanics. Another advantage is that students may also be able to continue working on their project following the summer experience through network connections. In conclusion, we have described the successful implementation of the theme simulation, imaging, and modeling for an REU site and the overall positive response of the student participants.

  1. Aircraft Segmentation in SAR Images Based on Improved Active Shape Model

    Science.gov (United States)

    Zhang, X.; Xiong, B.; Kuang, G.

    2018-04-01

    In SAR image interpretation, aircraft are important targets that attract much attention. However, it is far from easy to segment an aircraft from the background completely and precisely in SAR images. Because of the complex structure, different kinds of electromagnetic scattering take place on the aircraft surfaces. As a result, aircraft targets usually appear to be inhomogeneous and disconnected. It is a good idea to extract an aircraft target with the active shape model (ASM), since the combination of geometric information controls variations of the shape during the contour evolution. However, the linear dimensionality reduction used in the classic ASM makes the model rigid, which brings much trouble when segmenting different types of aircraft. Aiming at this problem, an improved ASM based on ISOMAP is proposed in this paper. The ISOMAP algorithm is used to extract the shape information of the training set and make the model flexible enough to deal with different aircraft. The experiments based on real SAR data show that the proposed method achieves an obvious improvement in accuracy.

  2. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal

    International Nuclear Information System (INIS)

    Hurwitz, Martina; Williams, Christopher L; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G; Mak, Raymond H; Lewis, John H

    2015-01-01

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes. (paper)

  3. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal

    Science.gov (United States)

    Hurwitz, Martina; Williams, Christopher L.; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G.; Mak, Raymond H.; Lewis, John H.

    2015-01-01

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes.
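
The core of such a correspondence model is a fitted mapping from the external surrogate amplitude to an internal (e.g. tumor) position, trained on the 4DCT phases. A deliberately simplified sketch using a polynomial fit (the paper's patient-specific model is richer; all names and data below are synthetic):

```python
import numpy as np

def fit_surrogate_model(surrogate, tumor_pos, degree=2):
    """Fit a polynomial mapping from external surrogate amplitude to
    internal tumor position (one spatial axis), as in simple
    correspondence models built from 4DCT phases."""
    return np.polyfit(surrogate, tumor_pos, degree)

def predict_tumor_pos(model, surrogate):
    """Predict internal position from a new surrogate reading."""
    return np.polyval(model, surrogate)

# Hypothetical training data: 10 4DCT phases over one breathing cycle
phase = np.linspace(0, 2 * np.pi, 10, endpoint=False)
surrogate = np.cos(phase)                 # external marker amplitude
tumor = 5.0 + 8.0 * np.cos(phase) ** 2    # internal motion, nonlinear in surrogate

model = fit_surrogate_model(surrogate, tumor, degree=2)
pred = predict_tumor_pos(model, 0.5)
print(round(float(pred), 2))  # ≈ 5 + 8*0.25 = 7.0
```

During treatment, only the surrogate trace is measured; the fitted model then drives the deformation of the pre-treatment anatomy to generate the volumetric images.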

  4. Finite magnetic relaxation in x-space magnetic particle imaging: Comparison of measurements and ferrohydrodynamic models.

    Science.gov (United States)

    Dhavalikar, R; Hensley, D; Maldonado-Camargo, L; Croft, L R; Ceron, S; Goodwill, P W; Conolly, S M; Rinaldi, C

    2016-08-03

    Magnetic Particle Imaging (MPI) is an emerging tomographic imaging technology that detects magnetic nanoparticle tracers by exploiting their non-linear magnetization properties. In order to predict the behavior of nanoparticles in an imager, it is possible to use a non-imaging MPI relaxometer or spectrometer to characterize the behavior of nanoparticles in a controlled setting. In this paper we explore the use of ferrohydrodynamic magnetization equations for predicting the response of particles in an MPI relaxometer. These include a magnetization equation developed by Shliomis (Sh) which has a constant relaxation time and a magnetization equation which uses a field-dependent relaxation time developed by Martsenyuk, Raikher and Shliomis (MRSh). We compare the predictions from these models with measurements and with the predictions based on the Langevin function that assumes instantaneous magnetization response of the nanoparticles. The results show good qualitative and quantitative agreement between the ferrohydrodynamic models and the measurements without the use of fitting parameters and provide further evidence of the potential of ferrohydrodynamic modeling in MPI.
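
The Langevin function mentioned above gives the instantaneous (equilibrium) magnetization response against which the ferrohydrodynamic models are compared; a small numerical sketch:

```python
import numpy as np

def langevin(xi):
    """Langevin function L(x) = coth(x) - 1/x: the equilibrium
    magnetization of an ensemble of non-interacting magnetic dipoles,
    normalized to saturation. xi = mu*H / (k_B*T)."""
    xi = np.asarray(xi, dtype=float)
    out = np.empty_like(xi)
    small = np.abs(xi) < 1e-4
    # Series expansion near zero avoids the 0/0 singularity
    out[small] = xi[small] / 3.0 - xi[small] ** 3 / 45.0
    xs = xi[~small]
    out[~small] = 1.0 / np.tanh(xs) - 1.0 / xs
    return out

xi = np.array([0.0, 0.1, 1.0, 10.0])
print(np.round(langevin(xi), 4))
```

The models with finite relaxation (Sh, MRSh) reduce to this curve only in the limit of instantaneous response; the measured lag behind it is what the relaxometer characterizes.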

  5. THE USE OF MOBILE LASER SCANNING DATA AND UNMANNED AERIAL VEHICLE IMAGES FOR 3D MODEL RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    L. Zhu

    2013-08-01

    Full Text Available The increasing availability of multiple data sources acquired by different sensor platforms offers great advantages for achieving the desired results. This paper proposes the use of both mobile laser scanning (MLS) data and Unmanned Aerial Vehicle (UAV) images for 3D model reconstruction. Because no exterior orientation parameters are available for the UAV images, the first task is to georeference these images to 3D points. In order to quickly and accurately acquire 3D points whose corresponding locations are also easy to find in the UAV images, automated pole extraction from the MLS data was developed. After georeferencing the UAV images, building roofs are acquired from those images and building walls are extracted from the MLS data. The roofs and the walls are combined to achieve complete building models.

  6. Correlation between model observer and human observer performance in CT imaging when lesion location is uncertain

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Shuai; Yu, Lifeng; Zhang, Yi; McCollough, Cynthia H. [Department of Radiology, Mayo Clinic, 200 First Street Southwest, Rochester, Minnesota 55905 (United States); Carter, Rickey [Department of Biostatistics, Mayo Clinic, 200 First Street Southwest, Rochester, Minnesota 55905 (United States); Toledano, Alicia Y. [Biostatistics Consulting, LLC, 10606 Wheatley Street, Kensington, Maryland 20895 (United States)

    2013-08-15

    Purpose: The purpose of this study was to investigate the correlation between model observer and human observer performance in CT imaging for the task of lesion detection and localization when the lesion location is uncertain. Methods: Two cylindrical rods (3-mm and 5-mm diameters) were placed in a 35 × 26 cm torso-shaped water phantom to simulate lesions with −15 HU contrast at 120 kV. The phantom was scanned 100 times on a 128-slice CT scanner at each of four dose levels (CTDIvol = 5.7, 11.4, 17.1, and 22.8 mGy). Regions of interest (ROIs) around each lesion were extracted to generate signal-present images, each ROI containing 128 × 128 pixels. Corresponding signal-absent ROIs were generated from images without the lesion-mimicking rods. The location of the lesion (rod) in each ROI was randomly distributed by moving the ROIs around each lesion. Human observer studies were performed by having three trained observers identify the presence or absence of lesions, indicating the lesion location in each image and scoring confidence for the detection task on a 6-point scale. The same image data were analyzed using a channelized Hotelling model observer (CHO) with Gabor channels. Internal noise was added to the decision variables for the model observer study. Area under the curve (AUC) of ROC and localization ROC (LROC) curves were calculated using a nonparametric approach. The Spearman's rank order correlation between the average performance of the human observers and the model observer performance was calculated for the AUC of both ROC and LROC curves for both the 3- and 5-mm diameter lesions. Results: In both ROC and LROC analyses, AUC values for the model observer agreed well with the average values across the three human observers. The Spearman's rank order correlation values for both ROC and LROC analyses for both the 3- and 5-mm diameter lesions were all 1.0, indicating perfect rank ordering agreement of the figures of merit (AUC

  7. A new assessment model for tumor heterogeneity analysis with [18]F-FDG PET images.

    Science.gov (United States)

    Wang, Ping; Xu, Wengui; Sun, Jian; Yang, Chengwen; Wang, Gang; Sa, Yu; Hu, Xin-Hua; Feng, Yuanming

    2016-01-01

    It has been shown that intratumor heterogeneity can be characterized with quantitative analysis of [18]F-FDG PET image data. The existing models employ multiple parameters for feature extraction, which makes them difficult to implement in clinical settings for quantitative characterization. This article reports an easy-to-use, differential-SUV-based model for quantitative assessment of intratumor heterogeneity from 3D [18]F-FDG PET image data. An H index is defined to assess tumor heterogeneity by summing the voxel-wise distribution of differential SUV from the [18]F-FDG PET image data. The summation is weighted by the distance of the SUV difference among neighboring voxels from the center of the tumor and can thus yield increased values for tumors with peripheral sub-regions of high SUV, which often serve as an indicator of augmented malignancy. Furthermore, the sign of the H index is used to differentiate the rate of change of the volume-averaged SUV from the tumor center to its periphery. The new model with the H index has been compared with the widely used gray level co-occurrence matrix (GLCM) model for image texture characterization, using phantoms of different configurations and the [18]F-FDG PET image data of 6 lung cancer patients, to evaluate its effectiveness and feasibility for clinical use. The comparison of the H index and GLCM parameters with the phantoms demonstrates that the H index can characterize the SUV heterogeneity in all 6 of the 2D phantoms, while only 1 GLCM parameter can do so for 1 phantom and fails to differentiate the others. For the 8 3D phantoms, the H index can clearly differentiate all of them, while the 4 GLCM parameters produce complicated patterns in the characterization. A feasibility study with the PET image data from the 6 lung cancer patients shows that the H index provides an effective single-parameter metric to characterize tumor heterogeneity in terms of local SUV variation, and it has a higher correlation with tumor volume change after
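
The abstract describes the H index only qualitatively; the sketch below is one plausible reading of it (a distance-weighted sum of neighboring-voxel SUV differences) and is explicitly not the published definition:

```python
import numpy as np

def h_index_2d(suv, center=None):
    """Illustrative heterogeneity score: sum of SUV differences between
    horizontally/vertically neighboring pixels, each weighted by the
    neighbor pair's distance from the tumor center. This follows the
    qualitative description of the H index only; the published
    definition (including its sign convention) may differ in detail."""
    suv = np.asarray(suv, dtype=float)
    rows, cols = np.indices(suv.shape)
    if center is None:
        center = ((suv.shape[0] - 1) / 2.0, (suv.shape[1] - 1) / 2.0)
    dist = np.hypot(rows - center[0], cols - center[1])
    score = 0.0
    # Horizontal neighbor pairs
    diff = suv[:, 1:] - suv[:, :-1]
    w = 0.5 * (dist[:, 1:] + dist[:, :-1])
    score += np.sum(np.abs(diff) * w)
    # Vertical neighbor pairs
    diff = suv[1:, :] - suv[:-1, :]
    w = 0.5 * (dist[1:, :] + dist[:-1, :])
    score += np.sum(np.abs(diff) * w)
    return score

uniform = np.full((5, 5), 3.0)
hot_rim = uniform.copy()
hot_rim[0, :] = 6.0                      # high-SUV periphery
print(h_index_2d(uniform), h_index_2d(hot_rim))
```

The distance weighting is what makes peripheral SUV variation (as in the `hot_rim` example) contribute more than the same variation near the center.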

  8. Impact of Tourist Perceptions, Destination Image and Tourist Satisfaction on Destination Loyalty: A Conceptual Model

    Directory of Open Access Journals (Sweden)

    R Rajesh

    2013-07-01

    Full Text Available The objective of this research paper is to develop a theoretical model of destination loyalty based on tourist perception, destination image and tourist satisfaction. The study analyses the components, attributes and factors influencing destination image and examines tourist satisfaction and the determinants of destination loyalty. This conceptual paper evaluates recent empirical work on destination image, tourist satisfaction and loyalty. The conceptual framework is developed on the basis of existing theoretical and empirical research in the field of destination marketing. The model includes four constructs. The tourist perception construct is influenced by factors such as historical and cultural attractions, destination affordability, travel environment, natural attractions, entertainment and infrastructure. The destination image construct is influenced by factors such as infrastructure and facilities, heritage attractions, natural and man-made attractions, destination safety and cleanliness, a friendly local community and calm atmosphere, rejuvenation, and service price and affordability. The satisfaction construct is influenced by factors such as entertainment, destination attractions and atmosphere, accommodation, food, transportation services and shopping. The destination loyalty construct is influenced by intention to revisit, word-of-mouth promotion and recommendation to others. Earlier study results reveal that tourist perception, destination image and tourist satisfaction directly influence destination loyalty. The outcomes of the study have significant managerial implications for destination marketing managers.

  9. PROBLEMS AND LIMITATIONS OF SATELLITE IMAGE ORIENTATION FOR DETERMINATION OF HEIGHT MODELS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2017-05-01

    Full Text Available The usual satellite image orientation is based on bias-corrected rational polynomial coefficients (RPC). The RPC describe the direct sensor orientation of the satellite images. The locations of the projection centres today are without problems, but an accuracy limit is caused by the attitudes. Very high resolution satellites today are very agile, able to change the pointed area over 200 km within 10 to 11 seconds. The corresponding fast attitude acceleration of the satellite may cause a jitter which cannot be expressed by the third-order RPC, even if it is recorded by the gyros. Only a correction of the image geometry may help, but usually this will not be done. The first indication of jitter problems is shown by systematic errors of the y-parallaxes (py) for the intersection of corresponding points during the computation of ground coordinates. These y-parallaxes have a limited influence on the ground coordinates, but similar problems can be expected for the x-parallaxes, which directly determine the object height. Systematic y-parallaxes are shown for Ziyuan-3 (ZY3), WorldView-2 (WV2), Pleiades, Cartosat-1, IKONOS and GeoEye. Some of them show clear jitter effects. In addition, linear trends of py can be seen. Linear trends in py and tilts of computed height models may be caused by the limited accuracy of the attitude registration, but also by bias correction with affinity transformation. The bias correction is based on ground control points (GCPs). The accuracy of the GCPs usually does not cause limitations, but the identification of the GCPs in the images may be difficult. With 2-dimensional bias-corrected RPC orientation by affinity transformation, tilts of the generated height models may be caused, but due to large affine image deformations some satellites, such as Cartosat-1, have to be handled with bias correction by affinity transformation. Instead of a 2-dimensional RPC orientation, a 3-dimensional orientation is also possible, respecting the

  10. Problems and Limitations of Satellite Image Orientation for Determination of Height Models

    Science.gov (United States)

    Jacobsen, K.

    2017-05-01

    The usual satellite image orientation is based on bias-corrected rational polynomial coefficients (RPC). The RPC describe the direct sensor orientation of the satellite images. The locations of the projection centres today are without problems, but an accuracy limit is caused by the attitudes. Very high resolution satellites today are very agile, able to change the pointed area over 200 km within 10 to 11 seconds. The corresponding fast attitude acceleration of the satellite may cause a jitter which cannot be expressed by the third-order RPC, even if it is recorded by the gyros. Only a correction of the image geometry may help, but usually this will not be done. The first indication of jitter problems is shown by systematic errors of the y-parallaxes (py) for the intersection of corresponding points during the computation of ground coordinates. These y-parallaxes have a limited influence on the ground coordinates, but similar problems can be expected for the x-parallaxes, which directly determine the object height. Systematic y-parallaxes are shown for Ziyuan-3 (ZY3), WorldView-2 (WV2), Pleiades, Cartosat-1, IKONOS and GeoEye. Some of them show clear jitter effects. In addition, linear trends of py can be seen. Linear trends in py and tilts of computed height models may be caused by the limited accuracy of the attitude registration, but also by bias correction with affinity transformation. The bias correction is based on ground control points (GCPs). The accuracy of the GCPs usually does not cause limitations, but the identification of the GCPs in the images may be difficult. With 2-dimensional bias-corrected RPC orientation by affinity transformation, tilts of the generated height models may be caused, but due to large affine image deformations some satellites, such as Cartosat-1, have to be handled with bias correction by affinity transformation. Instead of a 2-dimensional RPC orientation, a 3-dimensional orientation is also possible, respecting the object height
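
An RPC sensor model maps normalized ground coordinates to normalized image coordinates through ratios of third-order polynomials in latitude, longitude and height; a minimal sketch (the ordering of the 20 cubic terms varies between vendors, and the toy coefficients here are illustrative assumptions, not a real sensor's RPC):

```python
import numpy as np

def rpc_ratio(coeff_num, coeff_den, terms):
    """One rational polynomial: numerator / denominator over the term vector."""
    return (coeff_num @ terms) / (coeff_den @ terms)

def rpc_project(lat, lon, h, num_l, den_l, num_s, den_s):
    """Map normalized ground coordinates to normalized image line/sample
    with third-order rational polynomials (the RPC sensor model).
    The term ordering below is illustrative only."""
    P, L, H = lat, lon, h
    terms = np.array([
        1, L, P, H, L * P, L * H, P * H, L * L, P * P, H * H,
        P * L * H, L**3, L * P * P, L * H * H, L * L * P, P**3,
        P * H * H, L * L * H, P * P * H, H**3])
    line = rpc_ratio(num_l, den_l, terms)
    samp = rpc_ratio(num_s, den_s, terms)
    return line, samp

# Toy coefficients: identity-like mapping (line <- P, sample <- L)
num_l = np.zeros(20); num_l[2] = 1.0   # numerator = P
num_s = np.zeros(20); num_s[1] = 1.0   # numerator = L
den = np.zeros(20); den[0] = 1.0       # denominator = 1
print(rpc_project(0.25, -0.5, 0.1, num_l, den, num_s, den))
```

Bias correction as discussed above adds a shift or affine transformation in image space on top of this mapping, estimated from GCPs; jitter is exactly the kind of high-frequency distortion these smooth cubics cannot absorb.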

  11. Model-based respiratory motion compensation for emission tomography image reconstruction

    International Nuclear Information System (INIS)

    Reyes, M; Malandain, G; Koulibaly, P M; Gonzalez-Ballester, M A; Darcourt, J

    2007-01-01

    In emission tomography imaging, respiratory motion causes artifacts in reconstructed lung and cardiac images, which lead to misinterpretations, imprecise diagnosis, impaired fusion with other modalities, etc. Solutions such as respiratory gating, correlated dynamic PET techniques, list-mode data based techniques and others have been tested; these lead to improvements in the spatial activity distribution of lung lesions, but have the disadvantages of requiring additional instrumentation or the need to discard part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion compensation directly into the image reconstruction process, without any additional acquisition protocol considerations. To this end, we propose an extension to the maximum likelihood expectation maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion as well as from phantom and patient data

  12. Automatic construction of 3D-ASM intensity models by simulating image acquisition: application to myocardial gated SPECT studies.

    Science.gov (United States)

    Tobon-Gomez, Catalina; Butakoff, Constantine; Aguade, Santiago; Sukno, Federico; Moragas, Gloria; Frangi, Alejandro F

    2008-11-01

    Active shape models bear great promise for model-based medical image analysis. Their practical use, though, is undermined by the need to train such models on large image databases. Automatic building of point distribution models (PDMs) has been successfully addressed and a number of autolandmarking techniques are currently available. However, the need for strategies to automatically build intensity models around each landmark has been largely overlooked in the literature. This work demonstrates the potential of creating intensity models automatically by simulating image generation. We show that it is possible to reuse a 3D PDM built from computed tomography (CT) to segment gated single photon emission computed tomography (gSPECT) studies. Training is performed on a realistic virtual population where image acquisition and formation have been modeled using the SIMIND Monte Carlo simulator and ASPIRE image reconstruction software, respectively. The dataset comprised 208 digital phantoms (4D-NCAT) and 20 clinical studies. The evaluation is accomplished by comparing point-to-surface and volume errors against a proper gold standard. Results show that gSPECT studies can be successfully segmented by models trained under this scheme with subvoxel accuracy. The accuracy in estimated LV function parameters, such as end diastolic volume, end systolic volume, and ejection fraction, ranged from 90.0% to 94.5% for the virtual population and from 87.0% to 89.5% for the clinical population.

  13. MR angiography of stenosis and aneurysm models in the pulsatile flow: variation with imaging parameters and concentration of contrast media

    International Nuclear Information System (INIS)

    Park, Kyung Joo; Park, Jae Hyung; Lee, Hak Jong; Won, Hyung Jin; Lee, Dong Hyuk; Min, Byung Goo; Chang, Kee Hyun

    1997-01-01

    The image quality of magnetic resonance angiography (MRA) varies according to the imaging techniques applied and the parameters affected by blood flow patterns, as well as by the shape of the blood vessels. This study was designed to assess the influence of vessel geometry, imaging parameters, and contrast media concentration on signal intensity and its distribution in MRA of stenosis and aneurysm models. MRA was performed on stenosis and aneurysm models made of glass tubes, using pulsatile flow with viscosity and flow profile similar to those of blood. Slice and maximum intensity projection (MIP) images were obtained using various imaging techniques and parameters; repetition time, flip angle, imaging planes, and concentrations of contrast media were varied. On slice images of the three-dimensional (3D) time-of-flight (TOF) technique, flow signal intensity was measured at five locations in the models, and the contrast ratio was calculated as the difference between flow signal intensity (SI) and background signal intensity (SIb) divided by background signal intensity, or (SI-SIb)/SIb. MIP images obtained by various techniques and parameters were also analyzed, with emphasis in the stenosis model on the demonstrated degree of stenosis and the severity of signal void and image distortion, and in the aneurysm model on the degree of visualization, distortion of contour, and distribution of signals. In 3D TOF, the shortest TR (36 msec) and the largest FA (50°) resulted in the highest contrast ratio, but larger flip angles did not effectively demonstrate the peripheral part of the aneurysm. Loss of signal was most prominent in images of the stenosis model obtained with planes parallel or oblique to the flow direction. The two-dimensional TOF technique also caused signal void in stenosis, but precisely demonstrated the aneurysm, with dense opacification of the peripheral part. The phase contrast technique showed some

  14. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
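The trade-off named in this abstract can be checked numerically: averaging M noisy images reduces the pixel noise standard deviation by a factor of √M, while leaving only a single image available for analysis. A minimal stdlib-only sketch (the flat zero test image and unit noise level are arbitrary choices for illustration, not from the paper):

```python
import random

random.seed(0)

def noisy_images(n_images, n_pixels, sigma=1.0):
    """Generate n_images noisy copies of a flat zero image (toy data)."""
    return [[random.gauss(0.0, sigma) for _ in range(n_pixels)]
            for _ in range(n_images)]

def average(images):
    """Pixel-wise average of a list of images."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

def std(values):
    """Population standard deviation."""
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

imgs = noisy_images(16, 10000)
single_noise = std(imgs[0])          # close to sigma = 1.0
averaged_noise = std(average(imgs))  # close to sigma / sqrt(16) = 0.25
```

The averaged image is far less noisy, but 16 images have collapsed into one data point for any subsequent hyper-parameter estimation, which is the deterioration the authors analyze.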

  15. COMPUTER RECONSTRUCTION OF A HUMAN LUNG MORPHOLOGY MODEL FROM MAGNETIC RESONANCE (MR) IMAGES

    Science.gov (United States)

    A mathematical description of the morphological structure of the lung is necessary for modeling and analysis of the deposition of inhaled aerosols. A morphological model of the lung boundary was generated from magnetic resonance (MR) images, with the goal of creating a frame...

  16. Fiducial-based fusion of 3D dental models with magnetic resonance imaging.

    Science.gov (United States)

    Abdi, Amir H; Hannam, Alan G; Fels, Sidney

    2018-04-16

    Magnetic resonance imaging (MRI) is widely used in the study of maxillofacial structures. While MRI is the modality of choice for soft tissues, it fails to capture hard tissues such as bone and teeth. Virtual dental models, acquired by optical 3D scanners, are becoming more accessible in dental practice and are starting to replace conventional dental impressions. The goal of this research is to fuse high-resolution 3D dental models with MRI to enhance the value of imaging for applications where detailed analysis of maxillofacial structures is needed, such as patient examination, surgical planning, and modeling. A subject-specific dental attachment was digitally designed and 3D printed based on the subject's face width and dental anatomy. The attachment contained 19 semi-ellipsoidal concavities in predetermined positions where oil-based ellipsoidal fiducial markers were later placed. The MRI was acquired while the subject bit on the dental attachment. The spatial position of the center of mass of each fiducial in the resultant MR image was calculated by averaging its voxels' spatial coordinates. The rigid transformation to fuse dental models to MRI was calculated based on the least-squares mapping of corresponding fiducials and solved via singular-value decomposition. The target registration error (TRE) of the proposed fusion process, calculated in a leave-one-fiducial-out fashion, was estimated at 0.49 mm. The results suggest that 6-9 fiducials suffice to achieve a TRE equal to half the MRI voxel size. Ellipsoidal oil-based fiducials produce distinguishable intensities in MRI and can be used as registration fiducials. The achieved accuracy of the proposed approach is sufficient to leverage the merged 3D dental models with the MRI data for a finer analysis of maxillofacial structures where complete geometry models are needed.
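The least-squares mapping of corresponding fiducials solved via singular-value decomposition is the standard Kabsch/Umeyama procedure; a NumPy sketch is below. The toy fiducial coordinates and rotation are invented for illustration, and the paper's pipeline (19 markers, leave-one-out TRE) involves more than this core step.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    solved via singular-value decomposition (Kabsch/Umeyama)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # correct an improper rotation (reflection) if one arises
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# toy check: transform invented fiducials and recover the known transform
rng = np.random.default_rng(1)
pts = rng.normal(size=(9, 3))                          # 9 hypothetical fiducials
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R_est, t_est = rigid_transform(pts, pts @ R_true.T + t_true)
```

With noise-free correspondences the estimated rotation and translation match the true transform to machine precision; with noisy fiducial centroids the same code gives the least-squares fit the abstract describes.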

  17. A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.

    Science.gov (United States)

    Tian, Siyu; Huang, Xiaoxia; Li, Hongga

    2017-03-15

    Since Lagrangian model coefficients vary with conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper proposes a new method to calibrate a Lagrangian model with a time series of Envisat ASAR images. Oil slicks extracted from the time series of images form a detected trajectory of the oil slick. The Lagrangian model is calibrated by minimizing the difference between the simulated trajectory and the detected trajectory. The mean center position distance difference (MCPD) and the rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluation measures, and are used to assess the performance of the Lagrangian transport model with different coefficient combinations. The method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with the related satellite observations, suggesting that the new method is effective for calibrating Lagrangian models. Copyright © 2016 Elsevier Ltd. All rights reserved.
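Of the two evaluation measures, the mean center position distance (MCPD) is straightforward: it is the distance between the centroid of the detected slick pixels and the centroid of the simulated particles. A stdlib-only sketch, with invented 2D coordinates standing in for georeferenced positions:

```python
def mean_center(points):
    """Mean center (centroid) of a set of 2D positions."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def mcpd(detected, simulated):
    """Mean center position distance between a detected oil slick
    (e.g. pixels extracted from an ASAR image) and simulated particles."""
    (dx, dy), (sx, sy) = mean_center(detected), mean_center(simulated)
    return ((dx - sx) ** 2 + (dy - sy) ** 2) ** 0.5

detected = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]   # hypothetical slick pixels
simulated = [(3.0, 4.0), (5.0, 4.0), (4.0, 6.0)]  # hypothetical particles
offset = mcpd(detected, simulated)                 # centroids differ by (3, 4) -> 5.0
```

Calibration then amounts to searching the coefficient space for the combination minimizing MCPD (and the SDE rotation difference) over all image times.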

  18. Optically Sectioned Imaging of Microvasculature of In-Vivo and Ex-Vivo Thick Tissue Models with Speckle-illumination HiLo Microscopy and HiLo Image Processing Implementation in MATLAB Architecture

    Science.gov (United States)

    Suen, Ricky Wai

    The work described in this thesis covers the conversion of HiLo image processing into MATLAB architecture and the use of speckle-illumination HiLo microscopy for ex-vivo and in-vivo imaging of thick tissue models. HiLo microscopy is a wide-field fluorescence imaging technique that has been demonstrated to produce optically sectioned images comparable to confocal microscopy in thin samples. The imaging technique was developed by Jerome Mertz and the Boston University Biomicroscopy Lab and has been implemented in our lab as a stand-alone optical setup and as a modification to a conventional fluorescence microscope. Speckle-illumination HiLo microscopy combines two images taken under speckle illumination and standard uniform illumination to generate an optically sectioned image that rejects out-of-focus fluorescence. The evaluated speckle contrast in the images is used as a weighting function, with the contrast of out-of-focus elements decaying to zero. The experiments shown here demonstrate the capability of our HiLo microscopes to produce optically sectioned images of the microvasculature of ex-vivo and in-vivo thick tissue models. The HiLo microscopes were used to image the microvasculature of ex-vivo mouse heart sections prepared for optical histology and the microvasculature of in-vivo rodent dorsal window chamber models. Studies in label-free surface profiling with HiLo microscopy are also presented.

  19. Reprint of Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method

    Science.gov (United States)

    D'Ambra, Pasqua; Tartaglione, Gaetano

    2015-04-01

    Image segmentation addresses the problem of partitioning a given image into its constituent objects and identifying the boundaries of those objects. The problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and scale poorly with image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that non-linear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem cell targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.

  20. Modeling Habitat Suitability of Migratory Birds from Remote Sensing Images Using Convolutional Neural Networks.

    Science.gov (United States)

    Su, Jin-He; Piao, Ying-Chao; Luo, Ze; Yan, Bao-Ping

    2018-04-26

    With the application of various data acquisition devices, large amounts of animal movement data can be used to label presence data in remote sensing images and predict species distribution. In this paper, a two-stage classification approach combining movement data and moderate-resolution remote sensing images is proposed. First, we introduce a new density-based clustering method to identify stopovers from migratory birds' movement data and generate classification samples based on the clustering result. We split the remote sensing images into 16 × 16 patches and label them as positive samples if they overlap with stopovers. Second, a multi-convolution neural network model is proposed for extracting features from temperature data and remote sensing images, respectively. A Support Vector Machine (SVM) model is then used to combine the features and produce the final classification. The experimental analysis was carried out on public Landsat 5 TM images and a GPS dataset collected from 29 birds over three years. The results indicate that the proposed method outperforms the existing baseline methods and achieves good performance in habitat suitability prediction.

  1. Geometric modelling of a male mandible utilising CT imaging

    International Nuclear Information System (INIS)

    Baker, N.; Basu, A.; McLean, A.G.; Jamieson, D.; Jonkman, M.

    1996-01-01

    Full text: The mandible is one of the most important and most frequently used bones in the human body. It is responsible for basic actions such as mastication, communication and swallowing, and it houses and protects the tongue, teeth and salivary glands. The mandible is unique in that it has two anatomically identical articulations, each providing the same function; both articulations, however, rarely have synchronous force and motion characteristics. The mandible is the only moveable bone in the skull and is capable of the following motions: depression - lowering the mandible, as in yawning; elevation - raising the mandible; protraction - thrusting the jaw forward; retraction - withdrawing the jaw posteriorly; and lateral deviation - sideways displacement in the transverse plane. The mandible is an irregular bone comprising a broad U-shaped body with two ascending rami. The rami are quadrilateral plate-like structures with nearly flat lateral sides. The mandible is subjected to repetitive loading and is susceptible to wear at its articulations, cyclic fatigue and dislocation. Despite the importance of the mandible, little is understood about its mechanical properties and loading parameters. The purpose of this study was to create a three-dimensional geometric model of a human mandible based on anatomical data. A 21-year-old male with no history of mandible fracture or temporomandibular joint dysfunction was selected. The mandible was non-invasively imaged by Computed Tomography (CT). The subject was imaged lying on his back with the head supported and immobilised by a U-shaped head rest. Seventeen parallel cross-sectional images oblique to the transverse plane were constructed. Cortical and cancellous bone boundaries were manually digitised for every image using a Science Accessories Corporation GP-9 digitiser linked to an IBM 286 SX personal computer. 
The data were transferred to a global coordinate system and entered into MSC/PATRAN finite element

  2. A model of selective visual attention for a stereo pair of images

    Science.gov (United States)

    Park, Min Chul; Kim, Sung Kyu; Son, Jung-Young

    2005-11-01

    The human visual attention system has a remarkable ability to interpret complex scenes with ease and simplicity by selecting or focusing on a small region of the visual field without scanning the whole image. In this paper, a novel selective visual attention model using a 3D image display system for a stereo pair of images is proposed. It is based on the feature integration theory and locates the ROI (region of interest) or FOA (focus of attention). In our approach, the disparity map obtained from a stereo pair of images is exploited as one of the spatial visual features to form a set of topographic feature maps. Though the true human cognitive mechanism of analysis and integration might differ from our assumption, the proposed attention system matches well with the results found by human observers.

  3. Using human brain imaging studies as a guide towards animal models of schizophrenia

    Science.gov (United States)

    BOLKAN, Scott S.; DE CARVALHO, Fernanda D.; KELLENDONK, Christoph

    2015-01-01

    Schizophrenia is a heterogeneous and poorly understood mental disorder that is presently defined solely by its behavioral symptoms. Advances in genetic, epidemiological and brain imaging techniques in the past half century, however, have significantly advanced our understanding of the underlying biology of the disorder. In spite of these advances clinical research remains limited in its power to establish the causal relationships that link etiology with pathophysiology and symptoms. In this context, animal models provide an important tool for causally testing hypotheses about biological processes postulated to be disrupted in the disorder. While animal models can exploit a variety of entry points towards the study of schizophrenia, here we describe an approach that seeks to closely approximate functional alterations observed with brain imaging techniques in patients. By modeling these intermediate pathophysiological alterations in animals, this approach offers an opportunity to (1) tightly link a single functional brain abnormality with its behavioral consequences, and (2) to determine whether a single pathophysiology can causally produce alterations in other brain areas that have been described in patients. In this review we first summarize a selection of well-replicated biological abnormalities described in the schizophrenia literature. We then provide examples of animal models that were studied in the context of patient imaging findings describing enhanced striatal dopamine D2 receptor function, alterations in thalamo-prefrontal circuit function, and metabolic hyperfunction of the hippocampus. Lastly, we discuss the implications of findings from these animal models for our present understanding of schizophrenia, and consider key unanswered questions for future research in animal models and human patients. PMID:26037801

  4. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    Science.gov (United States)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. Local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has lower computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  5. Ghost imaging and its visibility with partially coherent elliptical Gaussian Schell-model beams

    International Nuclear Information System (INIS)

    Luo, Meilan; Zhu, Weiting; Zhao, Daomu

    2015-01-01

    The performance of ghost imaging and its visibility with partially coherent elliptical Gaussian Schell-model beams have been studied. We have derived the condition under which the goal ghost image is achievable. Furthermore, the visibility is assessed in terms of the parameters of the source: the visibility decreases as the beam size increases, while it is a monotonically increasing function of the transverse coherence length. More specifically, it is found that the inequalities of the source sizes in the x and y directions, as well as of the transverse coherence lengths, play an important role in the ghost image and its visibility. - Highlights: • We studied the ghost image and visibility with partially coherent EGSM beams. • We derived the condition under which the goal ghost image is achievable. • The visibility is assessed in terms of the parameters related to the source. • The source sizes and coherence lengths play a role in the ghost image and visibility.

  6. Recent developments in imaging system assessment methodology, FROC analysis and the search model.

    Science.gov (United States)

    Chakraborty, Dev P

    2011-08-21

    A frequent problem in imaging is assessing whether a new imaging system is an improvement over an existing standard. Observer performance methods, in particular the receiver operating characteristic (ROC) paradigm, are widely used in this context. In ROC analysis lesion location information is not used and consequently scoring ambiguities can arise in tasks, such as nodule detection, involving finding localized lesions. This paper reviews progress in the free-response ROC (FROC) paradigm in which the observer marks and rates suspicious regions and the location information is used to determine whether lesions were correctly localized. Reviewed are FROC data analysis, a search-model for simulating FROC data, predictions of the model and a method for estimating the parameters. The search model parameters are physically meaningful quantities that can guide system optimization.

  7. Recent developments in imaging system assessment methodology, FROC analysis and the search model

    International Nuclear Information System (INIS)

    Chakraborty, Dev P.

    2011-01-01

    A frequent problem in imaging is assessing whether a new imaging system is an improvement over an existing standard. Observer performance methods, in particular the receiver operating characteristic (ROC) paradigm, are widely used in this context. In ROC analysis lesion location information is not used and consequently scoring ambiguities can arise in tasks, such as nodule detection, involving finding localized lesions. This paper reviews progress in the free-response ROC (FROC) paradigm in which the observer marks and rates suspicious regions and the location information is used to determine whether lesions were correctly localized. Reviewed are FROC data analysis, a search model for simulating FROC data, predictions of the model and a method for estimating the parameters. The search model parameters are physically meaningful quantities that can guide system optimization.

  8. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    Science.gov (United States)

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  9. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    Directory of Open Access Journals (Sweden)

    Ying Cai

    2012-09-01

    In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as the original images used in model development, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our
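One plausible reading of the "0.1% index scaling" normalization is a percentile stretch: clip each spectral-index image at its 0.1% and 99.9% extreme pixel values and rescale linearly, so that images from different sensors share a common [0, 1] range. A stdlib-only sketch under that assumption (the exact recipe in the paper may differ):

```python
def percentile(sorted_vals, p):
    """Value at percentile p (0-100) by nearest-rank (assumed convention)."""
    k = int(round(p / 100.0 * (len(sorted_vals) - 1)))
    return sorted_vals[max(0, min(len(sorted_vals) - 1, k))]

def normalize_index(pixels, tail=0.1):
    """Scale a spectral-index image to [0, 1], using the tail-percent
    extreme pixel values as the stretch limits (one interpretation of
    '0.1% index scaling'; values beyond the limits are clipped)."""
    s = sorted(pixels)
    lo, hi = percentile(s, tail), percentile(s, 100.0 - tail)
    span = (hi - lo) or 1.0
    return [min(1.0, max(0.0, (v - lo) / span)) for v in pixels]

vals = list(range(1000))        # toy "index image" as a flat pixel list
norm = normalize_index(vals)
```

Using tail percentiles rather than the raw min/max makes the stretch robust to the few extreme outlier pixels that differ most between sensors, which is why per-sensor thresholds become interchangeable after normalization.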

  10. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of a local descriptor as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed from its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carried out our experiments on the ImageCLEFmed datasets. The results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.
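The reconstruction step above (expressing a descriptor as a weighted combination of its neighboring visual words) can be sketched as a small constrained least-squares problem. The closed form below enforces only a sum-to-one constraint via the local Gram matrix, a simplified stand-in for the paper's full QP formulation (the descriptor and vocabulary values are invented for illustration):

```python
import numpy as np

def reconstruction_weights(x, words, reg=1e-6):
    """Weights reconstructing descriptor x from neighboring visual words,
    minimizing ||x - sum_j w_j v_j||^2 subject to sum_j w_j = 1.
    This is a simplified sketch; the paper's QP assignment may impose
    further constraints (e.g. non-negativity)."""
    V = np.asarray(words, float)                 # k x d matrix of visual words
    diff = x - V                                 # k x d differences
    G = diff @ diff.T                            # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(V))      # regularize for stability
    w = np.linalg.solve(G, np.ones(len(V)))      # solve G w = 1
    return w / w.sum()                           # enforce sum-to-one

x = np.array([0.5, 0.5])                         # toy local descriptor
words = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
w = reconstruction_weights(x, words)             # symmetric case: equal weights
```

The solved weights then act as the contribution functions in the multiple-assignment histogram, replacing the hard single-word assignment of the standard bag-of-features pipeline.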

  11. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of a local descriptor as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed from its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carried out our experiments on the ImageCLEFmed datasets. The results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.

  12. Image processor of model-based vision system for assembly robots

    International Nuclear Information System (INIS)

    Moribe, H.; Nakano, M.; Kuno, T.; Hasegawa, J.

    1987-01-01

    A special-purpose image preprocessor for the visual system of assembly robots has been developed. The main function unit is composed of lookup tables, exploiting the large-scale integration, high speed and low price of semiconductor memory. More than one unit may be operated in parallel, since the preprocessor is designed around the standard IEEE 796 bus. Its operation time for line segment extraction is typically 200 ms per 500 segments, though it varies with the complexity of the scene image. The gray-scale visual system, supported by a model-based analysis program using the extracted line segments, recognizes partially visible or overlapping industrial workpieces and detects their locations and orientations

  13. Hemispherical reflectance model for passive images in an outdoor environment.

    Science.gov (United States)

    Kim, Charles C; Thai, Bea; Yamaoka, Neil; Aboutalib, Omar

    2015-05-01

    We present a hemispherical reflectance model for simulating passive images in an outdoor environment where illumination is provided by natural sources such as the sun and the clouds. While the bidirectional reflectance distribution function (BRDF) accurately produces the radiance from any object under this illumination, using the BRDF to calculate radiance requires a double integration. Replacing the BRDF by the hemispherical reflectance under natural sources transforms the double integration into a multiplication, reducing both storage space and computation time. We present the formalism for the radiance of the scene using hemispherical reflectance instead of the BRDF. This enables us to generate passive images in an outdoor environment while taking advantage of the computational and storage efficiencies. We show some examples for illustration.
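The integration-to-multiplication simplification can be checked numerically in the simplest case: for a Lambertian surface (BRDF = rho/pi) under uniform sky radiance, the hemisphere integral of BRDF times cosine-weighted incident radiance equals the reflectance-times-radiance product exactly. A stdlib sketch under that Lambertian assumption (the reflectance and radiance values are arbitrary):

```python
import math

def lambertian_radiance_by_integration(rho, L_in, n_theta=400):
    """Outgoing radiance of a Lambertian surface (BRDF = rho/pi) under
    uniform incident radiance L_in, by midpoint quadrature of the
    hemisphere integral of (rho/pi) * L_in * cos(theta) * sin(theta)
    over theta in [0, pi/2]; the phi integral contributes a factor 2*pi."""
    dt = (math.pi / 2) / n_theta
    total = sum(math.cos((i + 0.5) * dt) * math.sin((i + 0.5) * dt) * dt
                for i in range(n_theta))
    return (rho / math.pi) * L_in * 2.0 * math.pi * total

rho, L_in = 0.3, 10.0
integrated = lambertian_radiance_by_integration(rho, L_in)  # brute-force BRDF integral
multiplied = rho * L_in                                     # hemispherical-reflectance shortcut
```

The two results agree to quadrature precision, which is the saving the abstract describes: one stored hemispherical reflectance and one multiplication replace a per-pixel double integral over the incident hemisphere.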

  14. Satellite image analysis and a hybrid ESSS/ANN model to forecast solar irradiance in the tropics

    International Nuclear Information System (INIS)

    Dong, Zibo; Yang, Dazhi; Reindl, Thomas; Walsh, Wilfred M.

    2014-01-01

    Highlights: • Satellite image analysis is performed and the cloud cover index is classified using self-organizing maps (SOM). • The ESSS model is used to forecast the cloud cover index. • Solar irradiance is estimated using a multi-layer perceptron (MLP). • The proposed model shows better accuracy than the other investigated models. - Abstract: We forecast hourly solar irradiance time series using satellite image analysis and a hybrid exponential smoothing state space (ESSS) model together with artificial neural networks (ANN). Since cloud cover is the major factor affecting solar irradiance, cloud detection and classification are crucial to forecasting solar irradiance. Geostationary satellite images provide cloud information, allowing a cloud cover index to be derived and analysed using self-organizing maps (SOM). Owing to the stochastic nature of cloud generation in tropical regions, the ESSS model is used to forecast the cloud cover index. Among the different ANN models, we favour the multi-layer perceptron (MLP) to derive solar irradiance from the cloud cover index. This hybrid model has been used to forecast hourly solar irradiance in Singapore, and the technique is found to outperform traditional forecasting models.
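
    The ESSS stage above is a full exponential smoothing state space formulation; as a minimal, hedged illustration of the idea (a toy stand-in, not the paper's model), simple exponential smoothing can produce a one-step-ahead forecast of the cloud cover index from its history:

```python
def ses_forecast(series, alpha=0.5):
    """One-step-ahead forecast by simple exponential smoothing.

    series: past cloud cover index values in [0, 1];
    alpha:  smoothing factor in (0, 1].
    This is a minimal sketch of the smoothing idea, not the full
    ESSS state space model used in the paper."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# A constant history forecasts itself; a rising series forecasts
# between the mean and the last observation.
print(ses_forecast([0.4, 0.4, 0.4]))        # 0.4
print(ses_forecast([0.2, 0.4, 0.6, 0.8]))
```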

  15. On Feature Relevance in Image-Based Prediction Models: An Empirical Study

    DEFF Research Database (Denmark)

    Konukoglu, E.; Ganz, Melanie; Van Leemput, Koen

    2013-01-01

    Determining disease-related variations of the anatomy and function is an important step in better understanding diseases and developing early diagnostic systems. In particular, image-based multivariate prediction models and the “relevant features” they produce are attracting attention from the co...

  16. Variational PDE Models in Image Processing

    National Research Council Canada - National Science Library

    Chan, Tony F; Shen, Jianhong; Vese, Luminita

    2002-01-01

    .... These include astronomy and aerospace exploration, medical imaging, molecular imaging, computer graphics, human and machine vision, telecommunication, auto-piloting, surveillance video, and biometric...

  17. Mid-IR Imaging of Orion BN/KL: Modeling of Physical Conditions and Energy Balance

    Science.gov (United States)

    Gezari, Daniel; Varosi, Frank; Dwek, Eli; Danchi, William C.; Tan, Jonathan; Okumura, Shin-ichiro

    2016-01-01

    We have modeled two mid-infrared imaging photometry data sets to determine the spatial distribution of physical conditions in the BN/KL (Becklin-Neugebauer/Kleinmann-Low) infrared complex. We observed the BN/KL region using the 10-meter Keck I telescope and the Long Wavelength Spectrometer (LWS) in direct imaging mode, over a 13 arcsec by 19 arcsec field. We also modeled images obtained with COMICS (Cooled Mid-Infrared Camera and Spectrometer; Kataza et al. 2000) at the 8.2-meter Subaru telescope, over a total field of view of 31 arcsec by 41 arcsec, in a total of nine bands: 7.8, 8.8, 9.7, 10.5, 11.7, 12.4, 18.5, 20.8 and 24.8 microns, using interference filters of 1-micron bandwidth.

  18. Using optical remote sensing model to estimate oil slick thickness based on satellite image

    International Nuclear Information System (INIS)

    Lu, Y C; Tian, Q J; Lyu, C G; Fu, W X; Han, W C

    2014-01-01

    An optical remote sensing model based on two-beam interference theory has been established to estimate marine oil slick thickness. The extinction coefficient and the normalized reflectance of oil are two important parts of this model. The extinction coefficient is an important inherent optical property and does not vary as the background reflectance changes. The normalized reflectance can be used to eliminate the background differences between in situ measured spectra and the remotely sensed image. Marine oil slick thickness and area can therefore be estimated and mapped from an optical remote sensing image and the extinction coefficient.

  19. Analysis and modeling of electronic portal imaging exit dose measurements

    International Nuclear Information System (INIS)

    Pistorius, S.; Yeboah, C.

    1995-01-01

    In spite of the technical advances in treatment planning and delivery in recent years, it is still unclear whether the recommended accuracy in dose delivery is being achieved. Electronic portal imaging devices, now in routine use in many centres, have the potential for quantitative dosimetry. As part of a project which aims to develop an expert-system-based On-line Dosimetric Verification (ODV) system, we have investigated and modelled the dose deposited in the detector of a video-based portal imaging system. Monte Carlo techniques were used to simulate gamma and x-ray beams in homogeneous slab phantom geometries. Exit doses and energy spectra were scored as a function of (i) slab thickness, (ii) field size and (iii) the air gap between the exit surface and the detector. The results confirm that, in order to accurately calculate the dose in the high-atomic-number Gd2O2S detector for a range of air gaps, field sizes and slab thicknesses, both the magnitudes of the primary and scattered components and their effective energies need to be considered. An analytic, convolution-based model which attempts to do this is proposed. The results of the simulation and the ability of the model to represent these data will be presented and discussed. This model is used to show that, after training, a back-propagation feed-forward cascade-correlation neural network has the ability to identify significant dosimetric errors and recognise their cause.

  20. TU-CD-BRA-05: Atlas Selection for Multi-Atlas-Based Image Segmentation Using Surrogate Modeling

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for a specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics and, based on such a model, to provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate's ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of the fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation and verified with several commonly used surrogates, including MSD, NCC and (N)MI. The derived behaviors of the different surrogates in atlas selection, and their corresponding performance in the ultimate label estimate, were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC of 0.10, with first and third quartiles of (0.83, 0.89) compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize image-based surrogate metrics for atlas selection.
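
    The stated relation between oracle relevance and surrogate similarity can be written compactly. In the hedged notation below (symbols are mine, not the authors'), r_i is the oracle geometric relevance of atlas i and s_i its surrogate image similarity; the abstract's exemplar takes g linear with normally distributed perturbation:

```latex
s_i = g(r_i) + \varepsilon_i,
\qquad g \ \text{monotonically non-decreasing},
\qquad \varepsilon_i \sim \mathcal{N}(0,\sigma^2)
```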

  1. Automated drusen detection in retinal images using analytical modelling algorithms

    Directory of Open Access Journals (Sweden)

    Manivannan Ayyakkannu

    2011-07-01

    Full Text Available Abstract Background Drusen are common features of the ageing macula associated with exudative Age-Related Macular Degeneration (ARMD). They are visible in retinal images, and their quantitative analysis is important in the follow-up of ARMD. However, their evaluation is tedious and difficult to reproduce when performed manually. Methods This article proposes a methodology for Automatic Drusen Deposits Detection and quantification in Retinal Images (AD3RI) using digital image processing techniques. It includes an image pre-processing method to correct the uneven illumination and to normalize the intensity contrast with smoothing splines. The drusen detection uses a gradient-based segmentation algorithm that isolates drusen and provides basic drusen characterization to the modelling stage. The detected drusen are then fitted by modified Gaussian functions, producing a model of the image that is used to evaluate the affected area. Twenty-two images were graded by eight experts, with the aid of custom-made software, and compared with AD3RI. This comparison was based both on the total area and on a pixel-to-pixel analysis. The coefficient of variation, the intraclass correlation coefficient, the sensitivity, the specificity and the kappa coefficient were calculated. Results The ground truth used in this study was the experts' average grading. In order to evaluate the proposed methodology, three indicators were defined: AD3RI compared to the ground truth (A2G); each expert compared to the other experts (E2E); and a standard global-threshold method compared to the ground truth (T2G). The results obtained for the three indicators, A2G, E2E and T2G, were: coefficient of variation 28.8%, 22.5% and 41.1%; intraclass correlation coefficient 0.92, 0.88 and 0.67; sensitivity 0.68, 0.67 and 0.74; specificity 0.96, 0.97 and 0.94; and kappa coefficient 0.58, 0.60 and 0.49, respectively. Conclusions The gradings produced by AD3RI obtained an agreement

  2. Modeling the properties of closed-cell cellular materials from tomography images using finite shell elements

    International Nuclear Information System (INIS)

    Caty, O.; Maire, E.; Youssef, S.; Bouchet, R.

    2008-01-01

    Closed-cell cellular materials exhibit several interesting properties. These properties are, however, very difficult to simulate and understand from knowledge of the cellular microstructure alone. This problem is mostly due to the highly complex organization of the cells and to their very fine walls. X-ray tomography can produce three-dimensional (3-D) images of the structure, enabling one to visualize locally the damage to the cell walls that would cause the structure to collapse. These data can be used to mesh the structure with continuum elements for finite element (FE) calculations. But when the density is very low, the walls are fine, and meshes based on continuum elements cannot represent the structure accurately while preserving the representativeness of the model in terms of cell size. This paper presents a shell FE model obtained from tomographic 3-D images that allows larger volumes of low-density closed-cell cellular materials to be calculated. The model is enriched by direct thickness measurement on the tomographic images; the values measured are ascribed to the shell elements. To validate and use the model, a structure composed of stainless steel hollow spheres is first compressed and scanned to observe local deformations. The tomographic data are also meshed with shells for an FE calculation. The convergence of the model is checked and its performance is compared with a continuum model. The global behavior is compared with the measurements from the compression test. At the local scale, the model allows the local stress and strain fields to be calculated. The calculated deformed shape is compared with the deformed tomographic images.

  3. In vivo bioluminescence imaging using orthotopic xenografts towards patient-derived xenograft Medulloblastoma models.

    Science.gov (United States)

    Asadzadeh, Fatemeh; Ferrucci, Veronica; DE Antonellis, Pasqualino; Zollo, Massimo

    2017-03-01

    Medulloblastoma is a cerebellar neoplasia of the central nervous system. Four molecular subgroups have been identified (MBWNT, MBSHH, MBgroup3 and MBgroup4), with distinct genetics and clinical outcomes. Among these, MBgroup3-4 are highly metastatic, with the worst prognosis. The current standard therapy includes surgery, radiation and chemotherapy. Thus, specific treatments adapted to cure these different molecular subgroups are needed. The use of orthotopic xenograft models, together with non-invasive in vivo bioluminescence imaging (BLI) technology, is emerging in preclinical studies to test novel therapeutics for medulloblastoma treatment. Orthotopic MB xenografts were performed by injection of Daoy-luc cells, which had previously been infected with lentiviral particles to stably express the luciferase gene, into the fourth right ventricle of the cerebellum of ten nude mice. For the implantation, specific stereotactic coordinates were used. Seven days after implantation, the mice were imaged by bioluminescence imaging (BLI) acquisitions using the IVIS 3D Illumina Imaging System (Xenogen). Tumor growth was evaluated by quantifying the bioluminescence signals, using the integrated photon fluxes within each area of interest with the Living Image Software Package 3.2 (Xenogen-PerkinElmer). Finally, histological analysis using hematoxylin-eosin staining was performed to confirm the presence of tumorigenic cells in the cerebellum of the mice. We describe a method for in vivo bioluminescence imaging (BLI) with the potential to be used to investigate the antitumorigenic effects of a drug for in vivo medulloblastoma treatment. We also discuss other studies in which this technology has been applied to obtain a more comprehensive knowledge of medulloblastoma using orthotopic xenograft mouse models.
There is a need to develop patient-derived xenograft (PDX) model systems to test novel drugs for medulloblastoma treatment within each molecular sub

  4. Strategy for magnetic resonance imaging of the head: results of a semi-empirical model. Part 1

    International Nuclear Information System (INIS)

    Droege, R.T.; Wiener, S.N.; Rzeszotarski, M.S.

    1984-01-01

    This paper is an introduction to the lesion detection problems of MR. A mathematical model previously developed for normal anatomy has been extended to predict the appearance of any hypothetical lesion in magnetic resonance (MR) images of the head. The model is applied to selected clinical images to demonstrate the loss of lesion visibility attributable to ''crossover'' and the ''boundary effect.'' The model is also used to explain the origins of these problems, and to demonstrate that appropriate gray-scale manipulations can remedy them.

  5. A biomechanical modeling-guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction

    Science.gov (United States)

    Huang, Xiaokun; Zhang, You; Wang, Jing

    2018-02-01

    Reconstructing four-dimensional cone-beam computed tomography (4D-CBCT) images directly from respiratory phase-sorted traditional 3D-CBCT projections can capture the target motion trajectory, reduce motion artifacts, and reduce imaging dose and time. However, the limited number of projections in each phase after phase-sorting decreases CBCT image quality under traditional reconstruction techniques. To address this problem, we developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm, an iterative method that can reconstruct higher-quality 4D-CBCT images from limited projections using an inter-phase intensity-driven motion model. However, the accuracy of the intensity-driven motion model is limited in regions with fine details, whose quality is degraded by the insufficient number of projections, which consequently degrades the reconstructed image quality in those regions. In this study, we developed a new 4D-CBCT reconstruction algorithm by introducing biomechanical modeling into SMEIR (SMEIR-Bio) to boost the accuracy of the motion model in regions with small, fine structures. The biomechanical modeling uses tetrahedral meshes to model the organs of interest and solves for internal organ motion using tissue elasticity parameters and mesh boundary conditions. This physics-driven approach enhances the accuracy of the solved motion in the organ's fine-structure regions. This study used 11 lung patient cases to evaluate the performance of SMEIR-Bio, making both qualitative and quantitative comparisons between SMEIR-Bio, SMEIR, and the algebraic reconstruction technique with total variation regularization (ART-TV). The reconstruction results suggest that SMEIR-Bio improves the motion model's accuracy in regions containing small, fine details, which consequently enhances the accuracy and quality of the reconstructed 4D-CBCT images.

  6. Hybrid 3D pregnant woman and fetus modeling from medical imaging for dosimetry studies

    Energy Technology Data Exchange (ETDEWEB)

    Bibin, Lazar; Anquez, Jeremie; Angelini, Elsa; Bloch, Isabelle [Telecom ParisTech, CNRS UMR 5141 LTCI, Institut TELECOM, Paris (France)

    2010-01-15

    Numerical simulations studying the interactions between radiations and biological tissues require the use of three-dimensional models of the human anatomy at various ages and in various positions. Several detailed and flexible models exist for adults and children and have been extensively used for dosimetry. On the other hand, progress of simulation studies focusing on pregnant women and the fetus have been limited by the fact that only a small number of models exist with rather coarse anatomical details and a poor representation of the anatomical variability of the fetus shape and its position over the entire gestation. In this paper, we propose a new computational framework to generate 3D hybrid models of pregnant women, composed of fetus shapes segmented from medical images and a generic maternal body envelope representing a synthetic woman scaled to the dimension of the uterus. The computational framework includes the following tasks: image segmentation, contour regularization, mesh-based surface reconstruction, and model integration. A series of models was created to represent pregnant women at different gestational stages and with the fetus in different positions, all including detailed tissues of the fetus and the utero-fetal unit, which play an important role in dosimetry. These models were anatomically validated by clinical obstetricians and radiologists who verified the accuracy and representativeness of the anatomical details, and the positioning of the fetus inside the maternal body. The computational framework enables the creation of detailed, realistic, and representative fetus models from medical images, directly exploitable for dosimetry simulations. (orig.)

  7. Hybrid 3D pregnant woman and fetus modeling from medical imaging for dosimetry studies

    International Nuclear Information System (INIS)

    Bibin, Lazar; Anquez, Jeremie; Angelini, Elsa; Bloch, Isabelle

    2010-01-01

    Numerical simulations studying the interactions between radiations and biological tissues require the use of three-dimensional models of the human anatomy at various ages and in various positions. Several detailed and flexible models exist for adults and children and have been extensively used for dosimetry. On the other hand, progress of simulation studies focusing on pregnant women and the fetus have been limited by the fact that only a small number of models exist with rather coarse anatomical details and a poor representation of the anatomical variability of the fetus shape and its position over the entire gestation. In this paper, we propose a new computational framework to generate 3D hybrid models of pregnant women, composed of fetus shapes segmented from medical images and a generic maternal body envelope representing a synthetic woman scaled to the dimension of the uterus. The computational framework includes the following tasks: image segmentation, contour regularization, mesh-based surface reconstruction, and model integration. A series of models was created to represent pregnant women at different gestational stages and with the fetus in different positions, all including detailed tissues of the fetus and the utero-fetal unit, which play an important role in dosimetry. These models were anatomically validated by clinical obstetricians and radiologists who verified the accuracy and representativeness of the anatomical details, and the positioning of the fetus inside the maternal body. The computational framework enables the creation of detailed, realistic, and representative fetus models from medical images, directly exploitable for dosimetry simulations. (orig.)

  8. Development and validation of a combined phased acoustical radiosity and image source model for predicting sound fields in rooms

    DEFF Research Database (Denmark)

    Marbjerg, Gerd Høy; Brunskog, Jonas; Jeong, Cheol-Ho

    2015-01-01

    A model, combining acoustical radiosity and the image source method, including phase shifts on reflection, has been developed. The model is denoted Phased Acoustical Radiosity and Image Source Method (PARISM), and it has been developed in order to be able to model both specular and diffuse...... radiosity by regarding the model as being stochastic. Three methods of implementation are proposed and investigated, and finally, recommendations are made for their use. Validation of the image source method is done by comparison with finite element simulations of a rectangular room with a porous absorber...

  9. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    International Nuclear Information System (INIS)

    Chen, Hsin-Chen; Yue, Yaofeng; Sun, Mingui; Jia, Wenyan; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D

    2013-01-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method, even when the 3D geometric surface of the food is not completely represented in the input image. (paper)
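
    As a hedged illustration of the shape-model idea (the function name and the cylinder choice are hypothetical, not the authors'), once the plate's known diameter fixes the pixel scale, the volume of a fitted cylinder model follows directly:

```python
import math

def cylinder_food_volume(d_px, h_px, plate_px, plate_cm):
    """Estimate food volume (cm^3) from a cylinder shape model.

    d_px, h_px:  apparent food diameter and height in pixels;
    plate_px:    plate diameter in pixels (the scale reference);
    plate_cm:    known physical plate diameter in cm.
    Hypothetical helper illustrating only the scale-reference idea;
    the paper determines pose and scale by full 3D/2D registration."""
    scale = plate_cm / plate_px      # cm per pixel
    r = 0.5 * d_px * scale           # cylinder radius in cm
    h = h_px * scale                 # cylinder height in cm
    return math.pi * r * r * h

# Food spanning half of a 24 cm plate (1000 px across in the image):
print(round(cylinder_food_volume(500, 120, 1000, 24.0), 1))
```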

  10. Historical Single Image-Based Modeling: The Case of Gobierna Tower, Zamora (Spain)

    Directory of Open Access Journals (Sweden)

    Jesús Garcia-Gago

    2014-01-01

    Full Text Available Historical perspective images have proved very useful for providing a proper dimensional analysis of building façades, or even for generating a pseudo-3D reconstruction of a whole structure based on rectified images. In this paper, the case of the Gobierna Tower (Zamora, Spain) is analyzed using a historical single-image-based modeling approach. In particular, a bottom-up approach, which takes advantage of the perspective of the image, the existence of the three vanishing points, and the usual geometric constraints (i.e., planarity, orthogonality and parallelism), is applied for the dimensional analysis of a destroyed historical building. The results were compared with ground-truth measurements from a historical topographic survey, obtaining deviations of about 1%.

  11. Topology preserving non-rigid image registration using time-varying elasticity model for MRI brain volumes.

    Science.gov (United States)

    Ahmad, Sahar; Khan, Muhammad Faisal

    2015-12-01

    In this paper, we present a new non-rigid image registration method that imposes a topology preservation constraint on the deformation. We propose to incorporate a time-varying elasticity model into the deformable image matching procedure and to constrain the Jacobian determinant of the transformation over the entire image domain. The motion of elastic bodies is governed by a hyperbolic partial differential equation, generally termed the elastodynamics wave equation, which we propose to use as the deformation model. We carried out clinical image registration experiments on 3D magnetic resonance brain scans from the IBSR database. The results of the proposed registration approach, in terms of Kappa index and relative overlap computed over the subcortical structures, were compared against existing topology-preserving non-rigid image registration methods and a non-topology-preserving variant of our proposed registration scheme. The Jacobian determinant maps obtained with our proposed registration method were qualitatively and quantitatively analyzed. The results demonstrate that the proposed scheme provides good registration accuracy with smooth transformations, thereby guaranteeing the preservation of topology. Copyright © 2015 Elsevier Ltd. All rights reserved.
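
    The topology constraint above amounts to keeping the Jacobian determinant of the transformation positive everywhere. A minimal sketch of checking this on a sampled 2D deformation field by finite differences (assuming numpy; this is an illustration, not the paper's code):

```python
import numpy as np

def jacobian_det_2d(phi_x, phi_y):
    """Pointwise Jacobian determinant of a 2D transformation
    phi = (phi_x, phi_y) sampled on a regular grid.
    np.gradient returns derivatives along (rows, cols) = (y, x)."""
    dpx_dy, dpx_dx = np.gradient(phi_x)
    dpy_dy, dpy_dx = np.gradient(phi_y)
    return dpx_dx * dpy_dy - dpx_dy * dpy_dx

# Identity transformation: determinant is 1 everywhere, so a
# positivity check (no folding, topology preserved) passes trivially.
yy, xx = np.mgrid[0:5, 0:5].astype(float)
det = jacobian_det_2d(xx, yy)
print(bool(np.all(det > 0)))  # True
```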

  12. Experimental protocols for behavioral imaging: seeing animal models of drug abuse in a new light.

    Science.gov (United States)

    Aarons, Alexandra R; Talan, Amanda; Schiffer, Wynne K

    2012-01-01

    Behavioral neuroimaging is a rapidly evolving discipline that represents a marriage between the fields of behavioral neuroscience and preclinical molecular imaging. This union highlights the changing role of imaging in translational research. Techniques developed for humans are now widely applied in the study of animal models of brain disorders such as drug addiction. Small-animal, or preclinical, imaging allows us to interrogate core features of addiction from both behavioral and biological endpoints. Snapshots of brain activity allow us to better understand changes in brain function and behavior associated with initial drug exposure, the emergence of drug escalation, and repeated bouts of drug withdrawal and relapse. Here we review the development and validation of new behavioral imaging paradigms, and several clinically relevant radiotracers used to capture dynamic molecular events in behaving animals. We discuss ways in which behavioral imaging protocols can be optimized to increase throughput, along with quantitative methods. Finally, we discuss our experience with the practical aspects of behavioral neuroimaging, so that investigators can utilize effective animal models to better understand the addicted brain and behavior.

  13. Quantitative Assessment of Optical Coherence Tomography Imaging Performance with Phantom-Based Test Methods And Computational Modeling

    Science.gov (United States)

    Agrawal, Anant

    Optical coherence tomography (OCT) is a powerful medical imaging modality that uniquely produces high-resolution cross-sectional images of tissue using low-energy light. Its clinical applications and technological capabilities have grown substantially since its invention about twenty years ago, but limited effort has been made to develop tools for assessing the performance of OCT devices with respect to the quality and content of acquired images. Such tools are important to ensure that information derived from OCT signals and images is accurate and consistent, in order to support further technology development, promote standardization, and benefit public health. The research in this dissertation investigates new physical and computational models which can provide unique insights into specific performance characteristics of OCT devices. Physical models, known as phantoms, are fabricated and evaluated in the interest of establishing standardized test methods to measure several important quantities relevant to image quality. (1) Spatial resolution is measured with a nanoparticle-embedded phantom and model eye which together yield the point spread function under conditions where OCT is commonly used. (2) A multi-layered phantom is constructed to measure the contrast transfer function along the axis of light propagation, relevant to cross-sectional imaging capabilities. (3) Existing and new methods to determine device sensitivity are examined and compared, to better understand the detection limits of OCT. A novel computational model based on the finite-difference time-domain (FDTD) method, which simulates the physics of light behavior at the sub-microscopic level within complex, heterogeneous media, is developed to probe the device and tissue characteristics influencing the information content of an OCT image. This model is first tested in simple geometric configurations to understand its accuracy and limitations, then a highly realistic representation of a biological cell, the retinal

  14. A Spherical Model Based Keypoint Descriptor and Matching Algorithm for Omnidirectional Images

    Directory of Open Access Journals (Sweden)

    Guofeng Tong

    2014-04-01

    Full Text Available Omnidirectional images generally have nonlinear distortion in the radial direction. Unfortunately, traditional algorithms such as the scale-invariant feature transform (SIFT) and Descriptor-Nets (D-Nets) do not work well in matching omnidirectional images, because they are incapable of dealing with this distortion. In order to solve this problem, a new voting algorithm is proposed based on a spherical model and the D-Nets algorithm. Because the spherical keypoint descriptor contains the distortion information of omnidirectional images, the proposed matching algorithm is invariant to distortion. Keypoint matching experiments were performed on three pairs of omnidirectional images, comparing the proposed algorithm, SIFT and D-Nets. The results show that the proposed algorithm is more robust and more precise than SIFT and D-Nets in matching omnidirectional images. Compared with SIFT and D-Nets, the proposed algorithm has two main advantages: (a) there are more real matching keypoints; (b) the coverage range of the matching keypoints is wider, including the seriously distorted areas.

  15. An Emphasis on Perception: Teaching Image Formation Using a Mechanistic Model of Vision.

    Science.gov (United States)

    Allen, Sue; And Others

    An effective way to teach the concept of image is to give students a model of human vision which incorporates a simple mechanism of depth perception. In this study two almost identical versions of a curriculum in geometrical optics were created. One used a mechanistic, interpretive eye model, and in the other the eye was modeled as a passive,…

  16. The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.

    Science.gov (United States)

    Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin

    2015-11-01

    We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on a model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity term using local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where a fast alternating direction method is implemented in the first stage for the recovery of the true intensity and bias field, and simple thresholding is used in the second stage for segmentation. Unlike most existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with state-of-the-art approaches and well-known brain software tools, our model is fast, accurate, and robust to initialization.
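
    Although the abstract does not spell out the functional, the ingredients it names (an L0 gradient regularizer on the true intensity, a smooth bias field, and a data fidelity term) suggest an energy of roughly the following hedged form, with f the observed image, u the true intensity, b the bias field, and lambda, mu weights (all notation assumed here, not the authors' exact model):

```latex
% Sketch only: symbols and weights are assumptions, not the paper's exact model
E(u,b) = \underbrace{\lambda \,\|\nabla u\|_0}_{\text{piecewise-constant } u}
       + \underbrace{\mu \int_\Omega |\nabla b|^2 \,dx}_{\text{smooth bias field}}
       + \underbrace{\int_\Omega \big(f - b\,u\big)^2 \,dx}_{\text{data fidelity, } f \approx b\,u}
```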

  17. Diffraction enhanced imaging: a simple model

    International Nuclear Information System (INIS)

    Zhu Peiping; Yuan Qingxi; Huang Wanxia; Wang Junyue; Shu Hang; Chen Bo; Liu Yijin; Li Enrong; Wu Ziyu

    2006-01-01

    Based on pinhole imaging and conventional x-ray projection imaging, a more general DEI (diffraction enhanced imaging) equation is derived in this paper using simple concepts. The new DEI equation not only explains everything that the DEI equation proposed by Chapman does, but also accounts for effects the old equation cannot, such as the noise background caused by small-angle scattering diffracted by the analyser.

  18. Diffraction enhanced imaging: a simple model

    Energy Technology Data Exchange (ETDEWEB)

    Zhu Peiping; Yuan Qingxi; Huang Wanxia; Wang Junyue; Shu Hang; Chen Bo; Liu Yijin; Li Enrong; Wu Ziyu [Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2006-10-07

    Based on pinhole imaging and conventional x-ray projection imaging, a more general DEI (diffraction enhanced imaging) equation is derived in this paper using simple concepts. The new DEI equation not only explains everything that the DEI equation proposed by Chapman does, but also accounts for effects the old equation cannot, such as the noise background caused by small-angle scattering diffracted by the analyser.
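The Chapman-style DEI scheme that both abstracts build on linearises the analyser rocking curve around two working points, I ≈ I_R·[R(θ) + R′(θ)·Δθ], and solves a 2×2 system per pixel for the apparent-absorption image I_R and the refraction-angle image Δθ. A minimal sketch under that linearisation (function and argument names are illustrative, not from the papers):

```python
import numpy as np

def dei_separate(i_low, i_high, R_low, R_high, dR_low, dR_high):
    """Recover (apparent absorption, refraction angle) from two
    analyser images taken on the low- and high-angle slopes of the
    rocking curve.  R_* and dR_* are the rocking-curve value and
    slope at each working point.  Solves, per pixel, the linear
    system  I = a*R + b*dR  with a = I_R and b = I_R * dtheta.
    """
    A = np.array([[R_low, dR_low], [R_high, dR_high]], float)
    a, b = np.linalg.solve(A, np.array([i_low, i_high], float))
    return a, b / a   # apparent absorption, refraction angle
```

The small-angle-scattering background discussed in these papers is precisely what this two-image linearisation leaves out, which is why the more general equation is needed.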

  19. NEPHRUS: model of intelligent multilayers expert system for evaluation of the renal system based on scintigraphic images analysis

    International Nuclear Information System (INIS)

    Silva, Jose W.E. da; Schirru, Roberto; Boasquevisque, Edson M.

    1997-01-01

    This work develops a prototype of a system model, based on Artificial Intelligence techniques, able to perform functions related to scintigraphic image analysis of the urinary system. Criteria used by medical experts to analyze images obtained with 99mTc-DTPA and/or 99mTc-DMSA were modeled, and a multiresolution diagnosis technique was implemented. Special attention was given to the design of the program's user interface: Human Factors Engineering techniques were applied so as to combine friendliness and robustness. Results obtained using Artificial Neural Networks for qualitative image analysis, together with the knowledge model constructed, show the feasibility of an Artificial Intelligence implementation that uses the inherent abilities of each technique to solve diagnostic image analysis problems. (author). 12 refs., 2 figs., 2 tabs

  20. Monitoring Prostate Tumor Growth in an Orthotopic Mouse Model Using Three-Dimensional Ultrasound Imaging Technique

    Directory of Open Access Journals (Sweden)

    Jie Ni

    2016-02-01

    Prostate cancer (CaP) is the most commonly diagnosed cancer and the second leading cause of cancer death in males in the USA. The prostate orthotopic mouse model has been widely used to study human CaP in preclinical settings. Measuring changes in tumor size from noninvasive diagnostic images is a standard method for monitoring responses to anticancer modalities. This article reports for the first time the use of a three-dimensional (3D) ultrasound system equipped with photoacoustic (PA) imaging to monitor longitudinal prostate tumor growth in a PC-3 orthotopic NOD/SCID mouse model (n = 8). Two-dimensional and 3D ultrasound modes accurately depicted the size and shape of prostate tumors. The PA function on two-dimensional and 3D images showed the average oxygen saturation and average hemoglobin concentration of the tumor. Results showed a good fit to representative exponential tumor growth curves (n = 3; r2 = 0.948, 0.955, and 0.953, respectively) and a good correlation between tumor volumes measured in vivo and at autopsy (n = 8, r = 0.95, P < .001). Three-dimensional ultrasound imaging proved to be a useful modality for monitoring tumor growth in an orthotopic mouse model, with advantages such as high contrast, uncomplicated protocols, economical equipment, and harmlessness to animals. The PA mode also displayed blood oxygenation surrounding the tumor as well as tumor vasculature and angiogenesis, making 3D ultrasound imaging an ideal tool for preclinical cancer research.
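The exponential growth fits reported above (r2 ≈ 0.95) can be reproduced by linear regression on log-volume, V(t) = V0·exp(g·t). A minimal sketch of that kind of fit; the function name and the returned quantities are illustrative, not the authors' code:

```python
import numpy as np

def fit_exponential_growth(days, volumes):
    """Fit V(t) = V0 * exp(g * t) by least squares on log-volume.

    Returns (V0, g, r_squared), where r_squared is computed in
    log space, matching how exponential fits are usually assessed.
    """
    t = np.asarray(days, float)
    logv = np.log(np.asarray(volumes, float))
    g, logv0 = np.polyfit(t, logv, 1)        # slope = growth rate
    pred = g * t + logv0
    ss_res = np.sum((logv - pred) ** 2)
    ss_tot = np.sum((logv - logv.mean()) ** 2)
    return np.exp(logv0), g, 1.0 - ss_res / ss_tot
```

Working in log space turns the exponential model into a straight line, so ordinary least squares suffices and the doubling time follows directly as ln(2)/g.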