WorldWideScience

Sample records for image-based modeling reveals

  1. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes the model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.
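
    As a toy illustration of the inversion idea (not the authors' algorithm, which rests on an analytic solution of the forward elastic problem), the sketch below assumes a uniform axial stress so that a relative Young's modulus map follows directly from the measured axial strain; all data are synthetic.

```python
import numpy as np

# Toy sketch: relative Young's modulus from axial strain under an assumed
# uniform axial stress (a strong simplification of model-based reconstruction).
rng = np.random.default_rng(0)

# Synthetic "true" modulus map: stiff circular inclusion in a soft background.
nx = ny = 64
y, x = np.mgrid[0:ny, 0:nx]
E_true = np.ones((ny, nx))
E_true[(x - 32) ** 2 + (y - 32) ** 2 < 10 ** 2] = 4.0   # 4x stiffer lesion

# Forward model under uniform axial stress: strain = stress / E, plus noise.
stress = 1.0
strain = stress / E_true + rng.normal(0, 0.01, E_true.shape)

# Inversion: E ~ stress / strain, normalized to the background median.
E_est = stress / np.clip(strain, 1e-6, None)
E_rel = E_est / np.median(E_est)

print("estimated inclusion/background contrast:",
      round(float(E_rel[(x - 32) ** 2 + (y - 32) ** 2 < 8 ** 2].mean()), 2))
```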

  2. Epithelial invasion outcompetes hypha development during Candida albicans infection as revealed by an image-based systems biology approach.

    Science.gov (United States)

    Mech, Franziska; Wilson, Duncan; Lehnert, Teresa; Hube, Bernhard; Thilo Figge, Marc

    2014-02-01

    Candida albicans is the most common opportunistic fungal pathogen of the human mucosal flora, frequently causing infections. The fungus is responsible for invasive infections in immunocompromised patients that can lead to sepsis. The yeast to hypha transition and invasion of host-tissue represent major determinants in the switch from benign colonizer to invasive pathogen. A comprehensive understanding of the infection process requires analyses at the quantitative level. Utilizing fluorescence microscopy with differential staining, we obtained images of C. albicans undergoing epithelial invasion during a time course of 6 h. An image-based systems biology approach, combining image analysis and mathematical modeling, was applied to quantify the kinetics of hyphae development, hyphal elongation, and epithelial invasion. The automated image analysis facilitates high-throughput screening and provides quantities that allow for the time-resolved characterization of the morphological and invasive state of fungal cells. The interpretation of these data was supported by two mathematical models, a kinetic growth model and a kinetic transition model, that were developed using differential equations. The kinetic growth model describes the increase in hyphal length and revealed that hyphae undergo mass invasion of epithelial cells following primary hypha formation. We also provide evidence that epithelial cells stimulate the production of secondary hyphae by C. albicans. Based on the kinetic transition model, the route of invasion was quantified in the state space of non-invasive and invasive fungal cells depending on their number of hyphae. This analysis revealed that the initiation of hyphae formation represents an ultimate commitment to invasive growth and suggests that in vivo, the yeast to hypha transition must be under exquisitely tight negative regulation to avoid the transition from commensal to pathogen invading the epithelium.
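
    The paper's kinetic growth model is formulated with differential equations; the sketch below uses a hypothetical logistic elongation law (the rate r and saturation length K are made-up values) integrated with SciPy over the same 6 h window, only to show the form such a model can take.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative kinetic growth model (not the authors' exact equations):
# mean hyphal length L(t) grows at rate r and saturates at carrying value K.
r, K = 1.2, 80.0          # hypothetical rate [1/h] and max length [um]

def dLdt(t, L):
    return r * L * (1.0 - L / K)   # logistic elongation kinetics

sol = solve_ivp(dLdt, t_span=(0.0, 6.0), y0=[2.0],   # 6 h time course
                t_eval=np.linspace(0.0, 6.0, 13))

for t, L in zip(sol.t, sol.y[0]):
    print(f"t = {t:4.1f} h   mean hyphal length = {L:6.1f} um")
```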

  3. Image based 3D city modeling: Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler, and Agisoft Photoscan are the main software packages representing these approaches, respectively, and each offers different methods suitable for image-based 3D city modeling. A literature review shows that, to date, no comprehensive comparative study is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, focusing on data acquisition methods, data processing techniques, and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors, and practical experience, and gives a brief introduction to the strengths and weaknesses of the four image-based techniques. Some observations are also given on what can and cannot be done with each package. Finally, the study concludes that every package has advantages and limitations, and the choice of software depends on the requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good

  4. A 4DCT imaging-based breathing lung model with relative hysteresis

    Energy Technology Data Exchange (ETDEWEB)

    Miyawaki, Shinjiro; Choi, Sanghun [IIHR – Hydroscience & Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Hoffman, Eric A. [Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Medicine, The University of Iowa, Iowa City, IA 52242 (United States); Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Lin, Ching-Long, E-mail: ching-long-lin@uiowa.edu [IIHR – Hydroscience & Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Mechanical and Industrial Engineering, The University of Iowa, 3131 Seamans Center, Iowa City, IA 52242 (United States)

    2016-12-01

    To reproduce realistic airway motion and airflow, we developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce a smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.
    Highlights:
    • We developed a breathing human lung CFD model based on 4D dynamic CT images.
    • The 4DCT-based breathing lung model is able to capture lung relative hysteresis.
    • A new boundary condition for the lung model based on one static CT image was proposed.
    • The difference between lung models based on 4D and static CT images was quantified.
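
    As a rough sketch of the interpolation ingredient only (not the authors' solid-mechanics moving-mesh algorithm), one can fit a periodic spline in time through airway-surface node positions sampled at the 13 4DCT time points; all node data below are synthetic.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative sketch: interpolate airway-surface node positions measured at
# 13 time points of the breathing cycle so that a CFD solver can query a
# smooth position at any instant. Node motion here is synthetic.
n_nodes, n_frames = 5, 13
t_frames = np.linspace(0.0, 1.0, n_frames)             # normalized cycle time

rng = np.random.default_rng(1)
base = rng.uniform(-1, 1, size=(n_nodes, 3))            # reference node positions
amp = 0.05 * rng.uniform(0.5, 1.0, size=(n_nodes, 3))   # per-node motion amplitude
# Synthetic "measured" positions: nodes oscillate once per breathing cycle.
positions = base[None] + amp[None] * np.sin(2 * np.pi * t_frames)[:, None, None]
positions[-1] = positions[0]                            # close the cycle exactly

# Periodic cubic spline in time for every coordinate of every node.
spline = CubicSpline(t_frames, positions, axis=0, bc_type="periodic")

print(spline(0.37).shape)   # interpolated node positions at an arbitrary time
```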

  5. Image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding as an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. Symbols, predicates, and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and figure-ground separation, are special kinds of network transformations. They convert the primary image structure into a set of more abstract structures that represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into a model-based knowledge representation. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity, allowing the creation of intelligent computer vision systems for design and manufacturing.

  6. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

    Evaluating the effects of ionizing radiation and the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct more realistic computational phantoms. However, manual description and verification of models for Monte Carlo (MC) simulation are very tedious, error-prone, and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can automatically convert CT/segmented sectioned images into computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested on several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues that faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose were calculated for Rad-HUMAN. Rad-HUMAN can be applied to predict and evaluate dose distributions in a treatment planning system (TPS), as well as radiation exposure of the human body in radiation protection. (authors)
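
    Independent of MCAM itself, the core step of an image-based phantom can be illustrated by mapping segmented voxel labels to materials and densities for a Monte Carlo transport code; the labels and material table below are hypothetical.

```python
import numpy as np

# Illustrative sketch (not MCAM): turn a segmented image volume, where each
# voxel holds an organ/tissue label, into the material and density arrays a
# Monte Carlo transport code needs for a voxelized phantom.
labels = np.zeros((4, 4, 4), dtype=np.int32)      # tiny synthetic segmentation
labels[1:3, 1:3, 1:3] = 1                          # "soft tissue" block
labels[2, 2, 2] = 2                                # one "bone" voxel

# Hypothetical lookup table: label -> (material name, density in g/cm^3).
materials = {0: ("air", 0.0012), 1: ("soft_tissue", 1.00), 2: ("bone", 1.85)}

density = np.vectorize(lambda lab: materials[lab][1])(labels)
print("voxels per material:",
      {name: int((labels == lab).sum()) for lab, (name, _) in materials.items()})
print("mean phantom density [g/cm^3]:", round(float(density.mean()), 4))
```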

  7. Image-Based Models Using Crowdsourcing Strategy

    Directory of Open Access Journals (Sweden)

    Antonia Spanò

    2016-12-01

    The conservation and valorization of cultural heritage require extensive documentation, both in properly historic-artistic terms and regarding the physical characteristics of position, shape, color, and geometry. With digital photogrammetry, which acquires overlapping images for 3D photo modeling, and with the development of dense and accurate 3D point models, it is possible to obtain high-resolution orthoprojections of surfaces. Recent years have seen a growing interest in crowdsourcing for the protection and dissemination of cultural heritage; in parallel, there is increasing awareness that the immense wealth of images available on the web can contribute to the generation of digital models useful for heritage documentation. In this way, the availability and ease of automation of SfM (Structure from Motion) algorithms enable the generation of digital models of the built heritage, which can be inserted positively into crowdsourcing processes. In fact, non-expert users can handle the technology in the acquisition process, which today is one of the fundamental points for involving the wider public in cultural heritage protection. To present the image-based models and their derivatives that can be made from such a large digital resource, an emblematic case study was selected for which the current approach is useful: little-known or not easily accessible heritage buildings. It is the Vank Cathedral in Isfahan, Iran: the availability of accurate point clouds and reliable orthophotos is very convenient, since this building of the Safavid epoch (17th-18th centuries) is completely frescoed on its internal surfaces, where the architecture and especially the architectural decoration reach their peak. The experimental part of the paper also explores some aspects of the usability of the digital output from the image-based modeling methods. The availability of orthophotos allows and facilitates the iconographic

  8. Image-Based Geometric Modeling and Mesh Generation

    CERN Document Server

    2013-01-01

    As a new interdisciplinary research area, “image-based geometric modeling and mesh generation” integrates image processing, geometric modeling and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine, materials sciences and engineering. It is well known that FEM is currently well-developed and efficient, but mesh generation for complex geometries (e.g., the human body) still takes about 80% of the total analysis time and is the major obstacle to reducing the total computation time. This is mainly because none of the traditional approaches is sufficient to effectively construct finite element meshes for arbitrarily complicated domains, and generally a great deal of manual interaction is involved in mesh generation. This contributed volume, the first for such an interdisciplinary topic, collects the latest research by experts in this area. These papers cover a broad range of topics, including medical imaging, image alignment and segmentation, image-to-mesh conversion,...

  9. Single image interpolation via adaptive nonlocal sparsity-based modeling.

    Science.gov (United States)

    Romano, Yaniv; Protter, Matan; Elad, Michael

    2014-07-01

    Single image interpolation is a central and extensively studied problem in image processing. A common approach toward the treatment of this problem in recent years is to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful in filling-in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, often leading to improved results. In this paper, we propose a novel image interpolation method, which combines these two forces: nonlocal self-similarity and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms, and demonstrated to achieve state-of-the-art results.
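
    As a minimal illustration of the nonlocal ingredient only (the sparse coding stage is omitted), the sketch below performs simple block matching to collect the patches most similar to a reference patch; the image and parameters are synthetic.

```python
import numpy as np

# Illustrative sketch of the "nonlocal" ingredient: for a reference patch, find
# the most similar patches elsewhere in the image (block matching). A sparse
# model would then code these related patches jointly; that part is omitted.
rng = np.random.default_rng(2)
img = rng.random((64, 64))
p = 8                                              # patch size

def patch(image, r, c, size):
    return image[r:r + size, c:c + size].ravel()

ref = patch(img, 20, 20, p)
candidates = [(r, c, np.sum((patch(img, r, c, p) - ref) ** 2))
              for r in range(0, img.shape[0] - p, 4)
              for c in range(0, img.shape[1] - p, 4)]
candidates.sort(key=lambda t: t[2])
print("5 most similar patch locations:", [(r, c) for r, c, _ in candidates[:5]])
```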

  10. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Cai, W; Rottmann, J; Williams, C; Wagar, M; Berbeco, R; Lewis, J H; Mishra, P; Li, R; Ionascu, D

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. (paper)
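
    Motion models of this kind are often built by principal component analysis of phase-wise displacement fields; the sketch below fits such a PCA model to synthetic fields and omits the 2D kV-projection matching step used during treatment, so it should be read as an assumption-laden illustration rather than the authors' method.

```python
import numpy as np

# Minimal sketch of a PCA respiratory motion model. Synthetic displacement
# fields stand in for fields derived from 4DCBCT phases by deformable
# registration; the 2D-to-3D matching against kV projections is omitted.
rng = np.random.default_rng(3)
n_phases, n_voxels = 10, 500
breathing = np.sin(np.linspace(0, 2 * np.pi, n_phases, endpoint=False))
pattern = rng.normal(size=n_voxels)
# Phase-wise displacement fields = breathing trace x spatial pattern + noise.
fields = np.outer(breathing, pattern) + 0.05 * rng.normal(size=(n_phases, n_voxels))

mean = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)   # PCA via SVD
modes = Vt[:2]                                                 # keep 2 modes

# Any breathing state is modeled as mean + weights @ modes; here we check how
# well the leading modes reproduce one of the measured phases.
w = (fields[3] - mean) @ modes.T
recon = mean + w @ modes
print("relative reconstruction error:",
      round(float(np.linalg.norm(recon - fields[3]) / np.linalg.norm(fields[3])), 3))
```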

  11. Image-based model of the spectrin cytoskeleton for red blood cell simulation.

    Science.gov (United States)

    Fai, Thomas G; Leo-Macias, Alejandra; Stokes, David L; Peskin, Charles S

    2017-10-01

    We simulate deformable red blood cells in the microcirculation using the immersed boundary method with a cytoskeletal model that incorporates structural details revealed by tomographic images. The elasticity of red blood cells is known to be supplied by both their lipid bilayer membranes, which resist bending and local changes in area, and their cytoskeletons, which resist in-plane shear. The cytoskeleton consists of spectrin tetramers that are tethered to the lipid bilayer by ankyrin and by actin-based junctional complexes. We model the cytoskeleton as a random geometric graph, with nodes corresponding to junctional complexes and with edges corresponding to spectrin tetramers such that the edge lengths are given by the end-to-end distances between nodes. The statistical properties of this graph are based on distributions gathered from three-dimensional tomographic images of the cytoskeleton by a segmentation algorithm. We show that the elastic response of our model cytoskeleton, in which the spectrin polymers are treated as entropic springs, is in good agreement with the experimentally measured shear modulus. By simulating red blood cells in flow with the immersed boundary method, we compare this discrete cytoskeletal model to an existing continuum model and predict the extent to which dynamic spectrin network connectivity can protect against failure in the case of a red cell subjected to an applied strain. The methods presented here could form the basis of disease- and patient-specific computational studies of hereditary diseases affecting the red cell cytoskeleton.
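
    A minimal sketch of the random geometric graph construction is shown below, with a hypothetical edge-length cutoff and a synthetic node layout standing in for the tomography-derived statistics used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative random geometric graph for the spectrin network: nodes are
# junctional complexes, edges join nodes closer than a cutoff and stand in for
# spectrin tetramers; edge lengths give end-to-end distances.
rng = np.random.default_rng(4)
n_nodes, cutoff = 300, 90.0             # cutoff in nm (hypothetical value)
nodes = rng.uniform(0, 1000, size=(n_nodes, 2))   # 1 um x 1 um membrane patch

pairs = cKDTree(nodes).query_pairs(r=cutoff, output_type="ndarray")
lengths = np.linalg.norm(nodes[pairs[:, 0]] - nodes[pairs[:, 1]], axis=1)

print(f"{len(pairs)} spectrin edges, "
      f"mean end-to-end distance = {lengths.mean():.1f} nm, "
      f"mean degree = {2 * len(pairs) / n_nodes:.1f}")
```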

  12. Validation of Diagnostic Imaging Based on Repeat Examinations. An Image Interpretation Model

    International Nuclear Information System (INIS)

    Isberg, B.; Jorulf, H.; Thorstensen, Oe.

    2004-01-01

    Purpose: To develop an interpretation model, based on repeatedly acquired images, aimed at improving assessments of technical efficacy and diagnostic accuracy in the detection of small lesions. Material and Methods: A theoretical model is proposed. The studied population consists of subjects that develop focal lesions which increase in size in organs of interest during the study period. The imaging modality produces images that can be re-interpreted with high precision, e.g. conventional radiography, computed tomography, and magnetic resonance imaging. At least four repeat examinations are carried out. Results: The interpretation is performed in four or five steps: 1. Independent readers interpret the examinations chronologically without access to previous or subsequent films. 2. Lesions found on images at the last examination are included in the analysis, with interpretation in consensus. 3. By concurrent back-reading in consensus, the lesions are identified on previous images until they are so small that even in retrospect they are undetectable. The earliest examination at which included lesions appear is recorded, and the lesions are verified by their growth (imaging reference standard). Lesion size and other characteristics may be recorded. 4. Records made at step 1 are corrected to those of steps 2 and 3. False positives are recorded. 5. (Optional) Lesion type is confirmed by another diagnostic test. Conclusion: Applied on subjects with progressive disease, the proposed image interpretation model may improve assessments of technical efficacy and diagnostic accuracy in the detection of small focal lesions. The model may provide an accurate imaging reference standard as well as repeated detection rates and false-positive rates for tested imaging modalities. However, potential review bias necessitates a strict protocol

  13. Phenotypic transition maps of 3D breast acini obtained by imaging-guided agent-based modeling

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Jonathan; Enderling, Heiko; Becker-Weimann, Sabine; Pham, Christopher; Polyzos, Aris; Chen, Chen-Yi; Costes, Sylvain V

    2011-02-18

    We introduce an agent-based model of epithelial cell morphogenesis to explore the complex interplay between apoptosis, proliferation, and polarization. By varying the activity levels of these mechanisms we derived phenotypic transition maps of normal and aberrant morphogenesis. These maps identify homeostatic ranges and morphologic stability conditions. The agent-based model was parameterized and validated using novel high-content image analysis of mammary acini morphogenesis in vitro, with a focus on time-dependent cell densities, proliferation and death rates, as well as acini morphologies. Model simulations reveal that apoptosis is necessary and sufficient for initiating lumen formation, whereas cell polarization is the pivotal mechanism for maintaining physiological epithelium morphology and acini sphericity. Furthermore, simulations highlight that acinus growth arrest in normal acini can be achieved by controlling the fraction of proliferating cells. Interestingly, our simulations reveal a synergism between polarization and apoptosis in enhancing growth arrest. After validating the model with experimental data from a normal human breast cell line (MCF10A), the system was challenged to predict the growth of MCF10A where AKT-1 was overexpressed, leading to reduced apoptosis. As previously reported, this led to non-growth-arrested acini, with very large sizes and partially filled lumen. However, surprisingly, image analysis revealed a much lower nuclear density than observed for normal acini. The growth kinetics indicates that these acini grew faster than the cells comprising them. The in silico model could not replicate this behavior, contradicting the classic paradigm that ductal carcinoma in situ is only the result of high proliferation and low apoptosis. Our simulations suggest that overexpression of AKT-1 must also perturb cell-cell and cell-ECM communication, reminding us that extracellular context can dictate cellular behavior.

  14. Range and Image Based Modelling: a way for Frescoed Vault Texturing Optimization

    Science.gov (United States)

    Caroti, G.; Martínez-Espejo Zaragoza, I.; Piemonte, A.

    2015-02-01

    In the restoration of frescoed vaults it is not only important to know the geometric shape of the painted surface; it is also essential to document its chromatic characterization and conservation status. The new techniques of range-based and image-based modelling, each with its limitations and advantages, offer a wide range of methods to obtain the geometric shape. In fact, several studies widely document that laser scanning enables obtaining three-dimensional models with high morphological precision. However, the quality level of the colour obtained with built-in laser scanner cameras is not comparable to that obtained for the shape. It is possible to improve the texture quality by means of a dedicated photographic campaign. This procedure, however, requires calculating the external orientation of each image by identifying control points on it and on the model, a costly post-processing step. With image-based modelling techniques it is possible to obtain models that maintain the colour quality of the original images, but with variable geometric precision, locally lower than that of the laser scanning model. This paper presents a methodology that uses the camera external orientation parameters calculated by image-based modelling techniques to project the same image onto the model obtained from the laser scan. The methodology is tested on an Italian mirror ("a schifo") frescoed vault. The different models, the precision analysis, and the efficiency evaluation of the proposed methodology are presented in the paper.
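
    The projection step can be illustrated with a simple pinhole model: given the external orientation recovered by image-based modelling, scan points are projected into the photo to sample colour per point. The interior orientation values and points below are invented.

```python
import numpy as np

# Minimal pinhole-projection sketch: given the external orientation (R, t) of a
# photo recovered by image-based modelling, project laser-scan points into that
# photo so their texture colour can be sampled. All values are made up.
rng = np.random.default_rng(5)
points = rng.uniform(-1, 1, size=(10, 3)) + np.array([0, 0, 5.0])  # scan points

f, cx, cy = 1200.0, 960.0, 540.0          # hypothetical interior orientation (px)
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
R = np.eye(3)                              # camera looking down +Z
t = np.zeros(3)

cam = (R @ points.T).T + t                 # world -> camera coordinates
uv = (K @ cam.T).T
uv = uv[:, :2] / uv[:, 2:3]                # perspective divide -> pixel coords

visible = (uv[:, 0] >= 0) & (uv[:, 0] < 1920) & (uv[:, 1] >= 0) & (uv[:, 1] < 1080)
print("projected pixel coordinates of visible scan points:")
print(np.round(uv[visible], 1))
```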

  15. Lévy-based modelling in brain imaging

    DEFF Research Database (Denmark)

    Jónsdóttir, Kristjana Ýr; Rønn-Nielsen, Anders; Mouridsen, Kim

    2013-01-01

    example of magnetic resonance imaging scans that are non-Gaussian. For these data, simulations under the fitted models show that traditional methods based on Gaussian random field theory may leave small, but significant changes in signal level undetected, while these changes are detectable under a non...

  16. Muscles of mastication model-based MR image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Ong, S.H. [National Univ. of Singapore (Singapore). Dept. of Electrical and Computer Engineering; National Univ. of Singapore (Singapore). Div. of Bioengineering; Hu, Q.; Nowinski, W.L. [Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); National Univ. of Singapore (Singapore). Dept. of Preventive Dentistry; Goh, P.S. [National Univ. of Singapore (Singapore). Dept. of Diagnostic Radiology

    2006-11-15

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)

  17. Muscles of mastication model-based MR image segmentation

    International Nuclear Information System (INIS)

    Ng, H.P.; Agency for Science Technology and Research, Singapore; Ong, S.H.; National Univ. of Singapore; Hu, Q.; Nowinski, W.L.; Foong, K.W.C.; National Univ. of Singapore; Goh, P.S.

    2006-01-01

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)

  18. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    Science.gov (United States)

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  19. Histological image classification using biologically interpretable shape-based features

    International Nuclear Information System (INIS)

    Kothari, Sonal; Phan, John H; Young, Andrew N; Wang, May D

    2013-01-01

    Automatic cancer diagnostic systems based on histological image classification are important for improving therapeutic decisions. Previous studies propose textural and morphological features for such systems. These features capture patterns in histological images that are useful for both cancer grading and subtyping. However, because many of these features lack a clear biological interpretation, pathologists may be reluctant to adopt these features for clinical diagnosis. We examine the utility of biologically interpretable shape-based features for classification of histological renal tumor images. Using Fourier shape descriptors, we extract shape-based features that capture the distribution of stain-enhanced cellular and tissue structures in each image and evaluate these features using a multi-class prediction model. We compare the predictive performance of the shape-based diagnostic model to that of traditional models, i.e., using textural, morphological and topological features. The shape-based model, with an average accuracy of 77%, outperforms or complements traditional models. We identify the most informative shapes for each renal tumor subtype from the top-selected features. Results suggest that these shapes are not only accurate diagnostic features, but also correlate with known biological characteristics of renal tumors. Shape-based analysis of histological renal tumor images accurately classifies disease subtypes and reveals biologically insightful discriminatory features. This method for shape-based analysis can be extended to other histological datasets to aid pathologists in diagnostic and therapeutic decisions
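
    A minimal example of Fourier shape descriptors for a closed contour is sketched below; a synthetic ellipse stands in for a segmented, stain-enhanced structure, and the classifier stage is omitted.

```python
import numpy as np

# Illustrative Fourier shape descriptors for a closed contour. Magnitudes of
# the low-frequency coefficients, normalized by the first harmonic, give
# features invariant to position, scale and starting point.
theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
contour = 30 * np.cos(theta) + 1j * 18 * np.sin(theta) + (50 + 40j)  # x + iy

coeffs = np.fft.fft(contour)
coeffs[0] = 0                             # drop DC term -> translation invariance
descriptors = np.abs(coeffs[1:11]) / np.abs(coeffs[1])   # scale-normalized

print("first 10 normalized Fourier descriptors:")
print(np.round(descriptors, 3))
```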

  20. A three-dimensional imaging algorithm based on the radiation model of an electric dipole

    International Nuclear Information System (INIS)

    Tian Bo; Zhong Weijun; Tong Chuangming

    2011-01-01

    A three-dimensional imaging algorithm based on the radiation model of an electric dipole (DBP) is presented. Building on the principle of the back projection (BP) algorithm, the relationship between the near-field and far-field imaging models is analyzed on the basis of the scattering model. Firstly, the far-field sampling data are transferred to near-field sampling data by applying the radiation theory of the dipole. The processed sampling data are then projected onto the imaging region to obtain images of the targets. The capability of the new algorithm to detect targets is verified using the finite-difference time-domain (FDTD) method, and the coupling effect on imaging is analyzed. (authors)

  21. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    International Nuclear Information System (INIS)

    Chen, G; Pan, X; Stayman, J; Samei, E

    2014-01-01

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  22. [A Method to Reconstruct Surface Reflectance Spectrum from Multispectral Image Based on Canopy Radiation Transfer Model].

    Science.gov (United States)

    Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li

    2015-07-01

    Due to the limited number of spectral bands of a multi-spectral sensor, it is difficult to reconstruct the surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of pixel heterogeneity in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiation transfer model. This method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from the multi-spectral data using Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectrum revealed different feature information for different surface types. To test the performance of the method, the simulated reflectance spectrum was convolved with the Landsat ETM+ spectral response curves and the Moderate Resolution Imaging Spectrometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated and observed bands, indicating that the simulated reflectance spectrum was reliable.
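
    A greatly simplified sketch of the retrieval-and-reconstruction idea is given below, with linear vegetation/soil mixing standing in for the SLC canopy radiative-transfer model and with invented endmember spectra and band definitions.

```python
import numpy as np

# Simplified sketch: a pixel spectrum is modelled as a mixture of vegetation
# and soil, the mixing (soil ratio) factor is retrieved from a few broad bands
# via a look-up table, and that factor is used to synthesize a continuous
# reflectance spectrum. Linear mixing replaces the SLC model used in the paper.
wl = np.arange(400, 2401, 10)                        # wavelength grid [nm]
veg = 0.05 + 0.45 / (1 + np.exp(-(wl - 720) / 30))   # toy vegetation spectrum
soil = 0.10 + 0.25 * (wl - 400) / 2000               # toy soil spectrum

def band_means(spectrum):
    # Hypothetical broad bands roughly at green, red and NIR wavelengths.
    bands = [(520, 600), (630, 690), (770, 900)]
    return np.array([spectrum[(wl >= lo) & (wl <= hi)].mean() for lo, hi in bands])

true_soil_ratio = 0.35
observed = band_means(true_soil_ratio * soil + (1 - true_soil_ratio) * veg)

# Look-up table over candidate soil ratios; pick the entry closest to the bands.
candidates = np.linspace(0, 1, 101)
errors = [np.sum((band_means(c * soil + (1 - c) * veg) - observed) ** 2)
          for c in candidates]
best = candidates[int(np.argmin(errors))]
reconstructed = best * soil + (1 - best) * veg       # full 400-2400 nm spectrum
print("retrieved soil ratio:", best)
```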

  23. Remote Sensing Image Enhancement Based on Non-subsampled Shearlet Transform and Parameterized Logarithmic Image Processing Model

    Directory of Open Access Journals (Sweden)

    TAO Feixiang

    2015-08-01

    Aiming at parts of remote sensing images with dark brightness and low contrast, a remote sensing image enhancement method based on the non-subsampled Shearlet transform and a parameterized logarithmic image processing model is proposed in this paper to improve the visual effects and interpretability of remote sensing images. Firstly, a remote sensing image is decomposed into a low-frequency component and high-frequency components by the non-subsampled Shearlet transform. Then the low-frequency component is enhanced according to the PLIP (parameterized logarithmic image processing) model, which can improve the contrast of the image, while an improved fuzzy enhancement method is used to enhance the high-frequency components in order to highlight edge and detail information. A large number of experimental results show that, compared with five image enhancement methods such as bidirectional histogram equalization, a method based on the stationary wavelet transform, and a method based on the non-subsampled contourlet transform, the proposed method has advantages in both subjective visual effects and objective quantitative evaluation indexes such as contrast and definition, and can more effectively improve the contrast of remote sensing images and enhance edges and texture details with better visual effects.

  24. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT and nuclear medicine (NM studies. This fluoroscopical examination takes only about 2 seconds for perfusion study with only low radiation dose to patient, involving no preparation, no radioactive isotopes, and no contrast media.

  25. Development and practice for a PACS-based interactive teaching model for CT image

    International Nuclear Information System (INIS)

    Tian Junzhang; Jiang Guihua; Zheng Liyin; Wang Ling; Wenhua; Liang Lianbao

    2002-01-01

    Objective: To explore an interactive teaching model for CT imaging based on PACS, and to provide clinicians and young radiologists with continuing medical education. Methods: A 100 Mbps trunk network was adopted in the PACS, with 10 Mbps switched to the desktop. The teaching model was installed on the browse and diagnosis workstations. Teaching contents were classified by region and managed according to a branch model. Text data were derived from authoritative textbooks, monographs, and periodicals. Imaging data were derived from cases proved by pathology and clinical findings. The data were obtained with a digital camera and scanner or from the PACS. After being edited and converted into standard digital images through a DICOM server, they were saved as files on the hard disk of the PACS image server. Results: The teaching model for CT imaging provided various kinds of cases with CT signs, clinical characteristics, pathology, and differential diagnosis. Normal sectional anatomy, typical images, and their annotations could be browsed in real time. The teaching model for CT imaging could provide a reference for teaching, diagnosis, and reporting. Conclusion: The PACS-based teaching model for CT imaging provides an interactive teaching and scientific research tool and improves work quality and efficiency.

  26. Models for Patch-Based Image Restoration

    Directory of Open Access Journals (Sweden)

    Petrovic Nemanja

    2009-01-01

    We present a supervised learning approach for object-category specific restoration, recognition, and segmentation of images which are blurred using an unknown kernel. The novelty of this work is a multilayer graphical model which unifies the low-level vision task of restoration and the high-level vision task of recognition in a cooperative framework. The graphical model is an interconnected two-layer Markov random field. The restoration layer accounts for the compatibility between sharp and blurred images and models the association between adjacent patches in the sharp image. The recognition layer encodes the entity class and its location in the underlying scene. The potentials are represented using nonparametric kernel densities and are learnt from training data. Inference is performed using nonparametric belief propagation. Experiments demonstrate the effectiveness of our model for the restoration and recognition of blurred license plates as well as face images.

  27. Models for Patch-Based Image Restoration

    Directory of Open Access Journals (Sweden)

    Mithun Das Gupta

    2009-01-01

    We present a supervised learning approach for object-category specific restoration, recognition, and segmentation of images which are blurred using an unknown kernel. The novelty of this work is a multilayer graphical model which unifies the low-level vision task of restoration and the high-level vision task of recognition in a cooperative framework. The graphical model is an interconnected two-layer Markov random field. The restoration layer accounts for the compatibility between sharp and blurred images and models the association between adjacent patches in the sharp image. The recognition layer encodes the entity class and its location in the underlying scene. The potentials are represented using nonparametric kernel densities and are learnt from training data. Inference is performed using nonparametric belief propagation. Experiments demonstrate the effectiveness of our model for the restoration and recognition of blurred license plates as well as face images.

  28. Model-based T2 relaxometry using undersampled magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Sumpf, Tilman

    2013-11-01

    T2 relaxometry refers to the quantitative determination of spin-spin relaxation times in magnetic resonance imaging (MRI). Particularly in clinical diagnostics, the method provides important information about tissue structures and respective pathologic alterations. Unfortunately, it also requires comparatively long measurement times which preclude widespread practical applications. To overcome such limitations, a so-called model-based reconstruction concept has recently been proposed. The method allows for the estimation of spin-density and T2 parameter maps from only a fraction of the usually required data. So far, promising results have been reported for a radial data acquisition scheme. However, due to technical reasons, radial imaging is only available on a very limited number of MRI systems. The present work deals with the realization and evaluation of different model-based T2 reconstruction methods that are applicable to the most widely available Cartesian (rectilinear) acquisition scheme. The initial implementation is based on the conventional assumption of a mono-exponential T2 signal decay. A suitable sampling scheme as well as an automatic scaling procedure are developed, which remove the necessity of manual parameter tuning. As demonstrated for human brain MRI data, the technique allows for a more than 5-fold acceleration of the underlying data acquisition. Furthermore, general limitations and specific error sources are identified and suitable simulation programs are developed for their detailed analysis. In addition to phase variations in image space, the simulations reveal truncation effects as a relevant cause of reconstruction artifacts. To reduce the latter, an alternative model formulation is developed and tested. For noise-free simulated data, the method yields an almost complete suppression of associated artifacts. Residual problems in the reconstruction of experimental MRI data point to the predominant influence of other
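
    The conventional voxel-wise signal model underlying the initial implementation is a mono-exponential T2 decay; the sketch below fits that model to one synthetic voxel with SciPy. The thesis instead estimates the parameter maps directly from undersampled k-space, which is not shown here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Mono-exponential T2 decay fitted to one synthetic voxel (illustrative values).
def decay(te, s0, t2):
    return s0 * np.exp(-te / t2)

te = np.arange(10, 161, 10, dtype=float)             # echo times [ms]
rng = np.random.default_rng(6)
signal = decay(te, 1000.0, 80.0) + rng.normal(0, 5.0, te.size)

(s0_fit, t2_fit), _ = curve_fit(decay, te, signal, p0=(signal[0], 50.0))
print(f"fitted spin density = {s0_fit:.0f} a.u., fitted T2 = {t2_fit:.1f} ms")
```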

  29. Image Restoration Based on the Hybrid Total-Variation-Type Model

    OpenAIRE

    Shi, Baoli; Pang, Zhi-Feng; Yang, Yu-Fei

    2012-01-01

    We propose a hybrid total-variation-type model for the image restoration problem based on combining the advantages of the ROF model with those of the LLT model. Since two L1-norm terms in the proposed model make it difficult to solve directly with classical numerical methods, we first employ the alternating direction method of multipliers (ADMM) to solve a general form of the proposed model. Then, based on the ADMM and the Moreau-Yosida decomposition theory, a more efficient method call...

  30. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    Science.gov (United States)

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveals that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture of Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
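
    The heavier tail of a Poisson mixture relative to a single Poisson of the same mean can be illustrated with a short simulation; the component rates and mixture weight below are made up.

```python
import numpy as np

# Illustrative two-component Poisson mixture: compared with a single Poisson
# of equal mean, the mixture shows the heavier upper tail that the paper
# reports for real sensor noise.
rng = np.random.default_rng(7)
n, w, lam = 200_000, 0.9, (20.0, 60.0)               # weight and component rates

comp = rng.random(n) < w
mixture = np.where(comp, rng.poisson(lam[0], n), rng.poisson(lam[1], n))
single = rng.poisson(w * lam[0] + (1 - w) * lam[1], n)   # same mean, no mixture

for q in (0.5, 0.99, 0.999):
    print(f"quantile {q}: mixture = {np.quantile(mixture, q):.0f}, "
          f"single Poisson = {np.quantile(single, q):.0f}")
```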

  31. Novel Polyurethane Matrix Systems Reveal a Particular Sustained Release Behavior Studied by Imaging and Computational Modeling.

    Science.gov (United States)

    Campiñez, María Dolores; Caraballo, Isidoro; Puchkov, Maxim; Kuentz, Martin

    2017-07-01

    The aim of the present work was to better understand the drug-release mechanism from sustained release matrices prepared with two new polyurethanes, using a novel in silico formulation tool based on 3-dimensional cellular automata. For this purpose, two polymers and theophylline as model drug were used to prepare binary matrix tablets. Each formulation was simulated in silico, and its release behavior was compared to the experimental drug release profiles. Furthermore, the polymer distributions in the tablets were imaged by scanning electron microscopy (SEM) and the changes produced by the tortuosity were quantified and verified using experimental data. The obtained results showed that the polymers exhibited a surprisingly high ability for controlling drug release at low excipient concentrations (only 10% w/w of excipient controlled the release of drug during almost 8 h). The mesoscopic in silico model helped to reveal how the novel biopolymers were controlling drug release. The mechanism was found to be a special geometrical arrangement of the excipient particles, creating an almost continuous barrier surrounding the drug in a very effective way, comparable to lipid or waxy excipients but with the advantages of a much higher compactability, stability, and absence of excipient polymorphism.

  32. Metal artifact reduction algorithm based on model images and spatial information

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jay [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Shih, Cheng-Ting [Department of Biomedical Engineering and Environmental Sciences, National Tsing-Hua University, Hsinchu, Taiwan (China); Chang, Shu-Jun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan (China); Huang, Tzung-Chi [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan (China); Sun, Jing-Yi [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Wu, Tung-Hsin, E-mail: tung@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, No.155, Sec. 2, Linong Street, Taipei 112, Taiwan (China)

    2011-10-01

    Computed tomography (CT) has become one of the most favorable choices for diagnosis of trauma. However, high-density metal implants can induce metal artifacts in CT images, compromising image quality. In this study, we proposed a model-based metal artifact reduction (MAR) algorithm. First, we built a model image using the k-means clustering technique with spatial information and calculated the difference between the original image and the model image. Then, the projection data of these two images were combined using an exponential weighting function. Finally, the corrected image was reconstructed using the filtered back-projection algorithm. Two metal-artifact contaminated images were studied. For the cylindrical water phantom image, the metal artifact was effectively removed. The mean CT number of water was improved from -28.95 ± 97.97 to -4.76 ± 4.28. For the clinical pelvic CT image, the dark band and the metal line were removed, and the continuity and uniformity of the soft tissue were recovered as well. These results indicate that the proposed MAR algorithm is useful for reducing metal artifacts and could improve the diagnostic value of metal-artifact contaminated CT images.
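
    A much simplified sketch of this pipeline is shown below, using intensity-only k-means in place of the clustering with spatial information, an invented weighting strength, and scikit-image's radon/iradon for projection and reconstruction.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from skimage.transform import radon, iradon

# Simplified MAR sketch (illustrative parameters): cluster the image into a
# piecewise-constant "model image", blend the sinograms of the original and
# model images with an exponential weight that favours the model along
# strongly attenuating (metal-corrupted) rays, then reconstruct with FBP.
rng = np.random.default_rng(8)
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0                      # soft tissue
img[60:68, 60:68] = 8.0                      # metal-like insert
img[32:96, 32:96] += 0.05 * rng.normal(size=(64, 64))

# Intensity-only k-means -> model image (spatial features omitted for brevity).
centroids, labels = kmeans2(img.reshape(-1, 1), k=3, seed=0, minit="++")
model = centroids[labels].reshape(img.shape)

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino_img, sino_mod = radon(img, theta=theta), radon(model, theta=theta)

beta = 0.05                                  # hypothetical weighting strength
w = np.exp(-beta * sino_img)                 # low weight for high-attenuation rays
sino_corr = w * sino_img + (1 - w) * sino_mod

corrected = iradon(sino_corr, theta=theta, filter_name="ramp")
print("corrected image shape:", corrected.shape)
```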

  33. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    Science.gov (United States)

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have emerged along with 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), due to its applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn a wide range of researchers' attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in the "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. But existing assessment metrics do not render human judgments faithfully, mainly because geometric distortions are generated by DIBR. To this end, this paper proposes a novel referenceless quality metric of DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after the AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometry distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method as compared with prevailing full-, reduced- and no-reference models.
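
    The AR-based local description can be illustrated by predicting each pixel from its 8 neighbours with globally fitted least-squares weights; the prediction-error map then highlights structures that the local AR model cannot explain. The image and defect below are synthetic, and the saliency weighting is omitted.

```python
import numpy as np

# Minimal autoregressive (AR) local description: each pixel is predicted as a
# linear combination of its 8 neighbours; large prediction errors flag
# structures (e.g. DIBR geometry distortions) local AR models cannot explain.
rng = np.random.default_rng(9)
x = np.linspace(0, 4 * np.pi, 64)
img = np.sin(x)[None, :] * np.cos(x)[:, None] + 0.02 * rng.normal(size=(64, 64))
img[30:34, 30:34] += 1.5                      # synthetic "geometric" defect

offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
center = img[1:-1, 1:-1].ravel()
neigh = np.stack([img[1 + dr:img.shape[0] - 1 + dr,
                      1 + dc:img.shape[1] - 1 + dc].ravel()
                  for dr, dc in offsets], axis=1)

coef, *_ = np.linalg.lstsq(neigh, center, rcond=None)    # AR weights
error = np.abs(neigh @ coef - center).reshape(62, 62)     # prediction-error map
print("mean AR error overall / inside defect:",
      round(float(error.mean()), 4), "/", round(float(error[28:34, 28:34].mean()), 4))
```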

  34. Cardiac magnetic source imaging based on current multipole model

    International Nuclear Information System (INIS)

    Tang Fa-Kuan; Wang Qian; Hua Ning; Lu Hong; Tang Xue-Zheng; Ma Ping

    2011-01-01

    It is widely accepted that the heart current source can be reduced to a current multipole. By adopting three linear inverse methods, cardiac magnetic imaging is achieved in this article based on the current multipole model expanded to first-order terms. The magnetic imaging is realized in a reconstruction plane in the centre of the human heart, where a current dipole array is employed to represent the realistic cardiac current distribution. The current multipole, as the testing source, generates magnetic fields in the measuring plane, serving as inputs to the cardiac magnetic inverse problem. In the heart-torso model constructed by the boundary element method, the current multipole magnetic field distribution is compared with that in homogeneous infinite space, and also with the single current dipole magnetic field distribution. Then the minimum-norm least-squares (MNLS) method, the optimal weighted pseudoinverse method (OWPIM), and the optimal constrained linear inverse method (OCLIM) are selected as algorithms for the inverse computation based on the current multipole model, and the imaging effects of these three inverse methods are compared. In addition, two reconstruction parameters, residual and mean residual, are also discussed, and their trends under MNLS, OWPIM and OCLIM, each as a function of SNR, are obtained and compared. (general)

  15. Model-based satellite image fusion

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Sveinsson, J. R.; Nielsen, Allan Aasbjerg

    2008-01-01

    A method is proposed for pixel-level satellite image fusion derived directly from a model of the imaging sensor. By design, the proposed method is spectrally consistent. It is argued that the proposed method needs regularization, as is the case for any method for this problem. A framework for pixel neighborhood regularization is presented. This framework enables the formulation of the regularization in a way that corresponds well with our prior assumptions of the image data. The proposed method is validated and compared with other approaches on several data sets. Lastly, the intensity-hue-saturation method is revisited in order to gain additional insight of what implications the spectral consistency has for an image fusion method.

  16. Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model

    Science.gov (United States)

    Li, X. L.; Zhao, Q. H.; Li, Y.

    2017-09-01

    Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, some generating points are initialized randomly on the image, and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the probability of a pixel is assumed to follow a Gamma mixture model with parameters corresponding to the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster. The regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in that region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to Voronoi sub-regions, and the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation results are obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of the segmentation results on simulated and real SAR images.
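
    The regional dissimilarity measure described above reduces, for a single Gamma component, to a sum of negative log-likelihoods over a sub-region. A minimal sketch with SciPy follows; the single-component simplification and the example parameters are assumptions.

```python
import numpy as np
from scipy.stats import gamma

def regional_dissimilarity(region_pixels, shape_k, scale_theta):
    """Sum of -log p(pixel | Gamma(k, theta)) over one homogeneous Voronoi sub-region."""
    logpdf = gamma.logpdf(region_pixels, a=shape_k, scale=scale_theta)
    return float(-np.sum(logpdf))

# Example: a bright sub-region matches a "bright" cluster better than a "dark" one.
region = np.random.default_rng(1).gamma(shape=4.0, scale=50.0, size=200)
print(regional_dissimilarity(region, 4.0, 50.0))   # small value -> good fit
print(regional_dissimilarity(region, 2.0, 10.0))   # large value -> poor fit
```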

  17. Polarimetric SAR image classification based on discriminative dictionary learning model

    Science.gov (United States)

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learning an overcomplete dictionary have shown great potential for solving such problems. The overcomplete dictionary plays an important role in PolSAR image classification; however, in complex PolSAR scenes, features shared by different classes weaken the discrimination of the learned dictionary and thus degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model to enhance the discrimination of the dictionary. The overcomplete dictionary learned by the proposed model is more discriminative and well suited to PolSAR classification.

  18. Hubble Images Reveal Jupiter's Auroras

    Science.gov (United States)

    1996-01-01

    These images, taken by the Hubble Space Telescope, reveal changes in Jupiter's auroral emissions and how small auroral spots just outside the emission rings are linked to the planet's volcanic moon, Io. The images represent the most sensitive and sharply-detailed views ever taken of Jovian auroras.The top panel pinpoints the effects of emissions from Io, which is about the size of Earth's moon. The black-and-white image on the left, taken in visible light, shows how Io and Jupiter are linked by an invisible electrical current of charged particles called a 'flux tube.' The particles - ejected from Io (the bright spot on Jupiter's right) by volcanic eruptions - flow along Jupiter's magnetic field lines, which thread through Io, to the planet's north and south magnetic poles. This image also shows the belts of clouds surrounding Jupiter as well as the Great Red Spot.The black-and-white image on the right, taken in ultraviolet light about 15 minutes later, shows Jupiter's auroral emissions at the north and south poles. Just outside these emissions are the auroral spots. Called 'footprints,' the spots are created when the particles in Io's 'flux tube' reach Jupiter's upper atmosphere and interact with hydrogen gas, making it fluoresce. In this image, Io is not observable because it is faint in the ultraviolet.The two ultraviolet images at the bottom of the picture show how the auroral emissions change in brightness and structure as Jupiter rotates. These false-color images also reveal how the magnetic field is offset from Jupiter's spin axis by 10 to 15 degrees. In the right image, the north auroral emission is rising over the left limb; the south auroral oval is beginning to set. The image on the left, obtained on a different date, shows a full view of the north aurora, with a strong emission inside the main auroral oval.The images were taken by the telescope's Wide Field and Planetary Camera 2 between May 1994 and September 1995.This image and other images and data

  19. Quantum dot-based local field imaging reveals plasmon-based interferometric logic in silver nanowire networks.

    Science.gov (United States)

    Wei, Hong; Li, Zhipeng; Tian, Xiaorui; Wang, Zhuoxian; Cong, Fengzi; Liu, Ning; Zhang, Shunping; Nordlander, Peter; Halas, Naomi J; Xu, Hongxing

    2011-02-09

    We show that the local electric field distribution of propagating plasmons along silver nanowires can be imaged by coating the nanowires with a layer of quantum dots, held off the surface of the nanowire by a nanoscale dielectric spacer layer. In simple networks of silver nanowires with two optical inputs, control of the optical polarization and phase of the input fields directs the guided waves to a specific nanowire output. The QD-luminescent images of these structures reveal that a complete family of phase-dependent, interferometric logic functions can be performed on these simple networks. These results show the potential for plasmonic waveguides to support compact interferometric logic operations.

  20. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2016-06-01

    Full Text Available A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. The concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The result shows that acceptable accuracy of the EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  1. Image-based modeling of flow and reactive transport in porous media

    Science.gov (United States)

    Qin, Chao-Zhong; Hoang, Tuong; Verhoosel, Clemens V.; Harald van Brummelen, E.; Wijshoff, Herman M. A.

    2017-04-01

    Due to the availability of powerful computational resources and high-resolution acquisition of material structures, image-based modeling has become an important tool in studying pore-scale flow and transport processes in porous media [Scheibe et al., 2015]. It is also playing an important role in the upscaling study for developing macroscale porous media models. Usually, the pore structure of a porous medium is directly discretized by the voxels obtained from visualization techniques (e.g. micro CT scanning), which can avoid the complex generation of computational mesh. However, this discretization may considerably overestimate the interfacial areas between solid walls and pore spaces. As a result, it could impact the numerical predictions of reactive transport and immiscible two-phase flow. In this work, two types of image-based models are used to study single-phase flow and reactive transport in a porous medium of sintered glass beads. One model is from a well-established voxel-based simulation tool. The other is based on the mixed isogeometric finite cell method [Hoang et al., 2016], which has been implemented in the open source Nutils (http://www.nutils.org). The finite cell method can be used in combination with isogeometric analysis to enable the higher-order discretization of problems on complex volumetric domains. A particularly interesting application of this immersed simulation technique is image-based analysis, where the geometry is smoothly approximated by segmentation of a B-spline level set approximation of scan data [Verhoosel et al., 2015]. Through a number of case studies by the two models, we will show the advantages and disadvantages of each model in modeling single-phase flow and reactive transport in porous media. Particularly, we will highlight the importance of preserving high-resolution interfaces between solid walls and pore spaces in image-based modeling of porous media. References Hoang, T., C. V. Verhoosel, F. Auricchio, E. H. van

  2. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body

  3. Beam-hardening correction in CT based on basis image and TV model

    International Nuclear Information System (INIS)

    Li Qingliang; Yan Bin; Li Lei; Sun Hongsheng; Zhang Feng

    2012-01-01

    In X-ray computed tomography, beam hardening leads to artifacts and reduces image quality. This paper analyzes how beam hardening affects the original projection data. Accordingly, it puts forward a new beam-hardening correction method based on basis images and a TV model. Firstly, according to the physical characteristics of beam hardening, a preliminary correction model with adjustable parameters is set up. Secondly, using different parameters, the original projections are processed by the correction model. Thirdly, the projections are reconstructed to obtain a series of basis images. Finally, the linear combination of the basis images gives the final reconstructed image. Here, with the total variation of the final reconstructed image as the cost function, the linear combination coefficients for the basis images are determined by an iterative method. To verify the effectiveness of the proposed method, experiments were carried out on a real phantom and an industrial part. The results show that the algorithm significantly suppresses cupping and streak artifacts in CT images. (authors)
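
    The last step, choosing combination coefficients by minimizing the total variation of the combined image, can be sketched directly. The snippet below is a simplified illustration (anisotropic TV, SLSQP optimizer, coefficients constrained to sum to one); the paper's own iterative scheme is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def total_variation(img):
    """Anisotropic total variation of a 2D image."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def combine_basis_images(basis_images):
    """basis_images: list of 2D arrays reconstructed under different correction parameters."""
    n = len(basis_images)

    def cost(w):
        combo = sum(wi * bi for wi, bi in zip(w, basis_images))
        return total_variation(combo)

    # Coefficients constrained to sum to one (an assumed normalization).
    cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    res = minimize(cost, x0=np.full(n, 1.0 / n), constraints=cons, method="SLSQP")
    return sum(wi * bi for wi, bi in zip(res.x, basis_images))
```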

  4. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    International Nuclear Information System (INIS)

    Chao Ming; Xie Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing Lei

    2010-01-01

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in quantitative assessment of therapeutics and realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant tumor tissue content changes, similarity-based models are not suitable for describing the process of tumor volume changes. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) Autodetection of homologous tissue features shared by two input images using the scale invariance feature transformation (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, where SIFT features were mapped from template to target images and reversely. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the ''ground truth'' with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy.
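
    A rough 2D sketch of the two modeling steps follows: bidirectional SIFT matching for the homologous features, then thin-plate-spline interpolation of the sparse displacements to a dense field. OpenCV's SIFT and SciPy's RBF interpolator stand in for the authors' implementation, and the ratio-test threshold is an assumption.

```python
import cv2
import numpy as np
from scipy.interpolate import RBFInterpolator

def _ratio_matches(des_q, des_t, bf, ratio=0.75):
    out = {}
    for pair in bf.knnMatch(des_q, des_t, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            out[pair[0].queryIdx] = pair[0].trainIdx
    return out

def bidirectional_sift_matches(img_a, img_b):
    """Step 1: keep only correspondences that agree in both mapping directions."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    bf = cv2.BFMatcher()
    ab = _ratio_matches(des_a, des_b, bf)
    ba = _ratio_matches(des_b, des_a, bf)
    pairs = [(kp_a[i].pt, kp_b[j].pt) for i, j in ab.items() if ba.get(j) == i]
    return np.array([p for p, _ in pairs]), np.array([q for _, q in pairs])

def dense_displacement(pts_a, pts_b, shape):
    """Step 2: thin-plate-spline interpolation of sparse displacements to every pixel."""
    tps = RBFInterpolator(pts_a, pts_b - pts_a, kernel="thin_plate_spline")
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)   # (x, y) like keypoint .pt
    return tps(grid).reshape(shape[0], shape[1], 2)
```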

  5. Modeling susceptibility difference artifacts produced by metallic implants in magnetic resonance imaging with point-based thin-plate spline image registration.

    Science.gov (United States)

    Pauchard, Y; Smith, M; Mintchev, M

    2004-01-01

    Magnetic resonance imaging (MRI) suffers from geometric distortions arising from various sources. One such source is the non-linearities associated with the presence of metallic implants, which can profoundly distort the obtained images. These non-linearities result in pixel shifts and intensity changes in the vicinity of the implant, often precluding any meaningful assessment of the entire image. This paper presents a method for correcting these distortions based on non-rigid image registration techniques. Two images from a modelled three-dimensional (3D) grid phantom were subjected to point-based thin-plate spline registration. The reference image (without distortions) was obtained from a grid model including a spherical implant, and the corresponding test image containing the distortions was obtained using a previously reported technique for spatial modelling of magnetic susceptibility artifacts. After identifying the non-recoverable area in the distorted image, the calculated spline model was able to quantitatively account for the distortions, thus facilitating their compensation. Upon completion of the compensation procedure, the non-recoverable area was removed from the reference image and the latter was compared to the compensated image. A quantitative assessment of the goodness of the proposed compensation technique is presented.

  6. Projection model for flame chemiluminescence tomography based on lens imaging

    Science.gov (United States)

    Wan, Minggang; Zhuang, Jihui

    2018-04-01

    For flame chemiluminescence tomography (FCT) based on lens imaging, the projection model is essential because it formulates the mathematical relation between the flame projections captured by cameras and the chemiluminescence field, and, through this relation, the field is reconstructed. This work proposed the blurry-spot (BS) model, which makes more general assumptions and achieves higher accuracy than the widely applied line-of-sight model. By combining the geometrical camera model and the thin-lens equation, the BS model takes into account the perspective effect of the camera lens; by combining the ray-tracing technique and Monte Carlo simulation, it also considers the inhomogeneous distribution of captured radiance on the image plane. The performance of these two models in FCT was numerically compared, and the results showed that using the BS model can lead to better reconstruction quality over a wider range of applications.

  7. Residual stress distribution analysis of heat treated APS TBC using image based modelling.

    Science.gov (United States)

    Li, Chun; Zhang, Xun; Chen, Ying; Carr, James; Jacques, Simon; Behnsen, Julia; di Michiel, Marco; Xiao, Ping; Cernik, Robert

    2017-08-01

    We carried out a residual stress distribution analysis in an APS TBC throughout the depth of the coatings. The samples were heat treated at 1150 °C for 190 h and the data analysis used image-based modelling built on real 3D images measured by computed tomography (CT). The stress distribution in several 2D slices from the 3D model is included in this paper, as well as the stress distribution along several paths shown on the slices. Our analysis can explain the occurrence of the "jump" features near the interface between the top coat and the bond coat. These features in the residual stress distribution trend were measured (as a function of depth) by high-energy synchrotron XRD (as shown in our related research article entitled 'Understanding the Residual Stress Distribution through the Thickness of Atmosphere Plasma Sprayed (APS) Thermal Barrier Coatings (TBCs) by high energy Synchrotron XRD; Digital Image Correlation (DIC) and Image Based Modelling') (Li et al., 2017) [1].

  8. Structural brain abnormalities in women with subclinical depression, as revealed by voxel-based morphometry and diffusion tensor imaging.

    Science.gov (United States)

    Hayakawa, Yayoi K; Sasaki, Hiroki; Takao, Hidemasa; Mori, Harushi; Hayashi, Naoto; Kunimatsu, Akira; Aoki, Shigeki; Ohtomo, Kuni

    2013-01-25

    Brain structural changes accompany major depressive disorder, but whether subclinical depression is accompanied by similar changes in brain volume and white matter integrity is unknown. By using voxel-based morphometry (VBM) of the gray matter and tract-specific analysis based on diffusion tensor imaging (DTI) of the white matter, we explored the extent to which abnormalities could be identified in specific brain structures of healthy adults with subclinical depression. The subjects were 21 community-dwelling adults with subclinical depression, as measured by their Center for Epidemiologic Studies Depression Scale (CES-D) scores. They were not demented and had no neurological or psychiatric history. We collected brain magnetic resonance images of the patients and of 21 matched control subjects, and we used VBM to analyze the differences in regional gray matter volume between the two groups. Moreover, we examined the white matter integrity by using tract-specific analysis based on the gray matter volume changes revealed by VBM. VBM revealed that the volumes of both anterior cingulate gyri and the right rectal gyrus were smaller in subclinically depressed women than in control women. Calculation of DTI measures in the anterior cingulum bundle revealed a positive correlation between CES-D scale score and radial diffusivity in the right anterior cingulum in subclinically depressed women. The small sample size limits the stability of the reported findings. Gray matter volume reduction and white matter integrity change in specific frontal brain regions may be associated with depressive symptoms in women, even at a subclinical level. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Generation of synthetic Kinect depth images based on empirical noise model

    DEFF Research Database (Denmark)

    Iversen, Thorbjørn Mosekjær; Kraft, Dirk

    2017-01-01

    The development, training and evaluation of computer vision algorithms rely on the availability of a large number of images. The acquisition of these images can be time-consuming if they are recorded using real sensors. An alternative is to rely on synthetic images which can be rapidly generated. This Letter describes a novel method for the simulation of Kinect v1 depth images. The method is based on an existing empirical noise model from the literature. The authors show that their relatively simple method is able to provide depth images which have a high similarity with real depth images.
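
    A minimal sketch of the idea, turning a clean synthetic depth map into a Kinect-like one, is given below. The quadratic growth of axial noise with depth follows published empirical Kinect v1 noise models (e.g. Nguyen et al.); the exact coefficients, the quantization step, and the omission of lateral noise and shadowing are assumptions.

```python
import numpy as np

def add_kinect_like_noise(depth_m, rng=np.random.default_rng()):
    """depth_m: clean depth image in metres (0 = invalid pixel)."""
    valid = depth_m > 0
    # Axial noise grows roughly quadratically with depth (coefficients assumed).
    sigma_axial = 0.0012 + 0.0019 * (depth_m - 0.4) ** 2
    noisy = depth_m + rng.normal(0.0, 1.0, depth_m.shape) * sigma_axial * valid
    # Emulate coarse depth quantization (~5 mm steps, assumed).
    noisy = np.round(noisy / 0.005) * 0.005
    noisy[~valid] = 0.0
    return noisy

clean = np.full((4, 4), 2.0)          # a flat surface 2 m away
print(add_kinect_like_noise(clean))
```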

  10. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    Science.gov (United States)

    Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long

    2012-01-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749
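
    The C1-continuous interpolation across imaged volumes can be illustrated with a periodic cubic spline: three scanned lung volumes (plus the cycle closure) define a smooth volume waveform whose derivative drives the flow-rate fractions. The sample times and volumes below are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

t_knots = np.array([0.0, 1.6, 3.2, 4.0])   # three imaged instants + cycle closure at T = 4 s
v_knots = np.array([2.4, 3.3, 2.7, 2.4])   # lung volume (litres); last repeats the first
spline = CubicSpline(t_knots, v_knots, bc_type="periodic")   # smooth across breathing cycles

t = np.linspace(0.0, 4.0, 200)
volume = spline(t)        # C1-continuous total lung volume waveform
flow = spline(t, 1)       # its time derivative, which drives terminal airway flow rates
```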

  11. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    Energy Technology Data Exchange (ETDEWEB)

    Yin, Youbing, E-mail: youbing-yin@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Choi, Jiwoong, E-mail: jiwoong-choi@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Hoffman, Eric A., E-mail: eric-hoffman@uiowa.edu [Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Internal Medicine, The University of Iowa, Iowa City, IA 52242 (United States); Tawhai, Merryn H., E-mail: m.tawhai@auckland.ac.nz [Auckland Bioengineering Institute, The University of Auckland, Auckland (New Zealand); Lin, Ching-Long, E-mail: ching-long-lin@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2013-07-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C{sub 1} continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung.

  12. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    Directory of Open Access Journals (Sweden)

    J. Jung

    2016-06-01

    Full Text Available In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs of a single image. This model-to-image matching process consists of three steps: 1 feature extraction, 2 similarity measure and matching, and 3 adjustment of EOPs of a single image. For feature extraction, we proposed two types of matching cues, edged corner points representing the saliency of building corner points with associated edges and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both 3D building and a single airborne image. A set of matched corners are found with given proximity measure through geometric hashing and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting EOPs of the single airborne image by the least square method based on co-linearity equations. The result shows that acceptable accuracy of single image's EOP can be achievable by the proposed registration approach as an alternative to labour-intensive manual registration process.

  13. Image-based Modeling of PSF Deformation with Application to Limited Angle PET Data

    Science.gov (United States)

    Matej, Samuel; Li, Yusheng; Panetta, Joseph; Karp, Joel S.; Surti, Suleman

    2016-01-01

    The point-spread-functions (PSFs) of reconstructed images can be deformed due to detector effects such as resolution blurring and parallax error, data acquisition geometry such as insufficient sampling or limited angular coverage in dual-panel PET systems, or reconstruction imperfections/simplifications. PSF deformation decreases quantitative accuracy and its spatial variation lowers consistency of lesion uptake measurement across the imaging field-of-view (FOV). This can be a significant problem with dual panel PET systems even when using TOF data and image reconstruction models of the detector and data acquisition process. To correct for the spatially variant reconstructed PSF distortions we propose to use an image-based resolution model (IRM) that includes such image PSF deformation effects. Originally the IRM was mostly used for approximating data resolution effects of standard PET systems with full angular coverage in a computationally efficient way, but recently it was also used to mitigate effects of simplified geometric projectors. Our work goes beyond this by including into the IRM reconstruction imperfections caused by combination of the limited angle, parallax errors, and any other (residual) deformation effects and testing it for challenging dual panel data with strongly asymmetric and variable PSF deformations. We applied and tested these concepts using simulated data based on our design for a dedicated breast imaging geometry (B-PET) consisting of dual-panel, time-of-flight (TOF) detectors. We compared two image-based resolution models; i) a simple spatially invariant approximation to PSF deformation, which captures only the general PSF shape through an elongated 3D Gaussian function, and ii) a spatially variant model using a Gaussian mixture model (GMM) to more accurately capture the asymmetric PSF shape in images reconstructed from data acquired with the B-PET scanner geometry. Results demonstrate that while both IRMs decrease the overall uptake

  14. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Chao Ming; Xie Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing Lei [Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305-5847 and Department of Radiation Oncology, University of Arkansas for Medical Sciences, 4301 W. Markham Street, Little Rock, Arkansas 72205-1799 (United States); Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, University of Arkansas for Medical Sciences, 4301 W. Markham Street, Little Rock, Arkansas 72205-1799 (United States); Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305-5847 (United States)

    2010-05-15

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in quantitative assessment of therapeutics and realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant tumor tissue content changes, similarity-based models are not suitable for describing the process of tumor volume changes. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) Autodetection of homologous tissue features shared by two input images using the scale invariance feature transformation (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, where SIFT features were mapped from template to target images and reversely. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the ''ground truth'' with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy.

  15. Efficient fully 3D list-mode TOF PET image reconstruction using a factorized system matrix with an image domain resolution model

    International Nuclear Information System (INIS)

    Zhou, Jian; Qi, Jinyi

    2014-01-01

    A factorized system matrix utilizing an image domain resolution model is attractive in fully 3D time-of-flight PET image reconstruction using list-mode data. In this paper, we study a factored model based on sparse matrix factorization that is comprised primarily of a simplified geometrical projection matrix and an image blurring matrix. Besides the commonly used Siddon's ray-tracer, we propose another, more simplified geometrical projector based on the Bresenham ray-tracer, which further reduces the computational cost. We discuss in general how to obtain an image blurring matrix associated with a geometrical projector, and provide theoretical analysis that can be used to inspect the efficiency of the model factorization. In simulation studies, we investigate the performance of the proposed sparse factorization model in terms of spatial resolution, noise properties and computational cost. The quantitative results reveal that the factorization model can be as efficient as a non-factored model, while its computational cost can be much lower. In addition, we conduct Monte Carlo simulations to identify the conditions under which the image resolution model can become more efficient in terms of image contrast recovery. We verify our observations using the provided theoretical analysis. The result offers a general guide to achieving optimal reconstruction performance based on a sparse factorization model with an image domain resolution model. (paper)
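
    The factored operator A = G·B can be sketched as a sparse geometric projector composed with an image-domain Gaussian blur. The toy projector, image size, and blur width below are placeholders for illustration only; the paper's Siddon/Bresenham ray-tracers and fitted blurring kernels are not reproduced.

```python
import numpy as np
import scipy.sparse as sp
from scipy.ndimage import gaussian_filter

def forward_project(x_img, G, sigma_image=1.2):
    """y = G (B x): image-domain resolution model followed by geometric projection."""
    blurred = gaussian_filter(x_img, sigma=sigma_image)     # B x
    return G @ blurred.ravel()                              # G (B x)

def back_project(y, G, img_shape, sigma_image=1.2):
    """A^T y = B^T (G^T y); the isotropic Gaussian blur is self-adjoint."""
    img = (G.T @ y).reshape(img_shape)
    return gaussian_filter(img, sigma=sigma_image)

# Toy geometric projector: each "LOR" sums one image row (stand-in for a ray-tracer).
n = 64
rows, cols, vals = [], [], []
for r in range(n):
    for c in range(n):
        rows.append(r); cols.append(r * n + c); vals.append(1.0)
G = sp.csr_matrix((vals, (rows, cols)), shape=(n, n * n))

x = np.zeros((n, n)); x[20:28, 30:38] = 1.0    # toy emission image
y = forward_project(x, G)                      # simulated data
bp = back_project(y, G, (n, n))                # un-normalized backprojection
```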

  16. Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth.

    Science.gov (United States)

    Jiang, Yutong; Sun, Changming; Zhao, Yu; Yang, Li

    2017-05-03

    In order to estimate fog density correctly and to remove fog from foggy images appropriately, a surrogate model for optical depth is presented in this paper. We comprehensively investigate various fog-relevant features and propose a novel feature based on the hue, saturation, and value color space which correlates well with the perception of fog density. We use a surrogate-based method to learn a refined polynomial regression model for optical depth with informative fog-relevant features such as dark channel, saturation-value, and chroma, which are selected on the basis of sensitivity analysis. Based on the obtained accurate surrogate model for optical depth, an effective method for fog density estimation and image defogging is proposed. The effectiveness of our proposed method is verified quantitatively and qualitatively by the experimental results on both synthetic and real-world foggy images.
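
    The surrogate itself is essentially a polynomial regression from a few fog-relevant scalar features to optical depth. The sketch below uses simplified feature definitions (patch-free dark channel, saturation-value difference, chroma) and random placeholder training data purely to show the shape of such a model.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def fog_features(rgb):                        # rgb in [0, 1], shape (H, W, 3)
    dark = rgb.min(axis=2).mean()             # dark channel (patch size 1 for simplicity)
    v = rgb.max(axis=2)
    s = (v - rgb.min(axis=2)) / (v + 1e-6)
    sat_val = (v - s).mean()                  # saturation-value difference feature
    chroma = (rgb.max(axis=2) - rgb.min(axis=2)).mean()
    return [dark, sat_val, chroma]

rng = np.random.default_rng(0)
training_images = [rng.random((32, 32, 3)) for _ in range(20)]   # placeholder images
training_depths = rng.random(20)                                 # placeholder optical depths

X = np.array([fog_features(img) for img in training_images])
surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X, training_depths)

test_image = rng.random((32, 32, 3))
print(surrogate.predict([fog_features(test_image)])[0])   # estimated optical depth
```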

  17. Edge detection of solid motor CT image based on gravitation model

    International Nuclear Information System (INIS)

    Yu Guanghui; Lu Hongyi; Zhu Min; Liu Xudong; Hou Zhiqiang

    2012-01-01

    In order to detect the edges of solid motor CT images more accurately, a new edge detection operator based on a gravitation model was put forward. The edges of CT images were extracted with the new operator, and its superiority was demonstrated by comparison with the edges obtained by ordinary operators. The comparison among operators of different sizes shows that higher-quality CT images need a smaller operator, while lower-quality images need a larger one. (authors)
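
    The abstract does not spell out the operator, but gravitation-style edge detectors generally treat each neighbour as a mass attracting the centre pixel with a force proportional to its intensity over the squared distance, and take the magnitude of the resultant force as edge strength. The 3x3 sketch below follows that generic recipe and is only an assumed approximation of the operator used here.

```python
import numpy as np
from scipy.ndimage import correlate

def gravitation_edges(img):
    img = img.astype(float)
    fx = np.zeros_like(img)
    fy = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            d2 = float(dx * dx + dy * dy)
            kernel = np.zeros((3, 3))
            kernel[1 + dy, 1 + dx] = 1.0
            neighbour = correlate(img, kernel, mode="nearest")  # intensity at offset (dy, dx)
            # Force magnitude ~ neighbour intensity / d^2, direction (dx, dy) / d.
            fx += neighbour * dx / d2 ** 1.5
            fy += neighbour * dy / d2 ** 1.5
    return np.hypot(fx, fy)   # resultant "gravitational" force magnitude = edge strength

# Example: a step edge produces a strong response along the discontinuity.
step = np.zeros((16, 16)); step[:, 8:] = 100.0
print(gravitation_edges(step)[8, 6:10].round(1))
```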

  18. Image Restoration Based on the Hybrid Total-Variation-Type Model

    Directory of Open Access Journals (Sweden)

    Baoli Shi

    2012-01-01

    Full Text Available We propose a hybrid total-variation-type model for the image restoration problem based on combining the advantages of the ROF model with those of the LLT model. Since the two L1-norm terms in the proposed model make it difficult to solve directly with classical numerical methods, we first employ the alternating direction method of multipliers (ADMM) to solve a general form of the proposed model. Then, based on the ADMM and the Moreau-Yosida decomposition theory, a more efficient method called the proximal point method (PPM) is proposed and the convergence of the proposed method is proved. Some numerical results demonstrate the viability and efficiency of the proposed model and methods.

  19. Model-based VQ for image data archival, retrieval and distribution

    Science.gov (United States)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is the Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception. The inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean, lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.

  20. A generalized logarithmic image processing model based on the gigavision sensor model.

    Science.gov (United States)

    Deng, Guang

    2012-03-01

    The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model being its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing with results using two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.
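
    For context, the two classical LIP operations that the GLIP model generalizes can be written in a few lines (Jourlin-Pinoli form, gray-tone range [0, M)); the GVS-derived generalization and the new parametric model in the paper are not reproduced here.

```python
import numpy as np

M = 256.0   # upper bound of the gray-tone range

def lip_add(f, g):
    """LIP addition: f (+) g = f + g - f*g/M (result stays inside [0, M))."""
    return f + g - f * g / M

def lip_scalar_mul(lam, f):
    """LIP scalar multiplication: lam (x) f = M - M*(1 - f/M)**lam."""
    return M - M * (1.0 - f / M) ** lam

# Example: "doubling" three gray tones in the LIP sense.
print(lip_scalar_mul(2.0, np.array([64.0, 128.0, 192.0])))
```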

  1. Moving object detection using dynamic motion modelling from UAV aerial images.

    Science.gov (United States)

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2014-01-01

    Motion-analysis-based moving object detection from UAV aerial images is still an unsolved issue due to the lack of proper motion estimation. Existing moving object detection approaches for UAV aerial images do not use motion-based pixel intensity measurement to detect moving objects robustly. Besides, current research on moving object detection from UAV aerial images mostly depends on either the frame difference or the segmentation approach separately. There are two main purposes for this research: firstly, to develop a new motion model called DMM (dynamic motion model), and secondly, to apply the proposed segmentation approach SUED (segmentation using edge-based dilation) using frame difference embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only specific areas for moving objects rather than searching the whole area of the frame. At each stage of the proposed scheme, the experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.

  2. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.

  3. Image-Based 3D Face Modeling System

    Directory of Open Access Journals (Sweden)

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2∼3 minutes.

  4. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    Science.gov (United States)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.

  5. Image-based quantification and mathematical modeling of spatial heterogeneity in ESC colonies.

    Science.gov (United States)

    Herberg, Maria; Zerjatke, Thomas; de Back, Walter; Glauche, Ingmar; Roeder, Ingo

    2015-06-01

    Pluripotent embryonic stem cells (ESCs) have the potential to differentiate into cells of all three germ layers. This unique property has been extensively studied on the intracellular, transcriptional level. However, ESCs typically form clusters of cells with distinct size and shape, and establish spatial structures that are vital for the maintenance of pluripotency. Even though it is recognized that the cells' arrangement and local interactions play a role in fate decision processes, the relations between transcriptional and spatial patterns have not yet been studied. We present a systems biology approach which combines live-cell imaging, quantitative image analysis, and multiscale, mathematical modeling of ESC growth. In particular, we develop quantitative measures of the morphology and of the spatial clustering of ESCs with different expression levels and apply them to images of both in vitro and in silico cultures. Using the same measures, we are able to compare model scenarios with different assumptions on cell-cell adhesions and intercellular feedback mechanisms directly with experimental data. Applying our methodology to microscopy images of cultured ESCs, we demonstrate that the emerging colonies are highly variable regarding both morphological and spatial fluorescence patterns. Moreover, we can show that most ESC colonies contain only one cluster of cells with high self-renewing capacity. These cells are preferentially located in the interior of a colony structure. The integrated approach combining image analysis with mathematical modeling allows us to reveal potential transcription factor related cellular and intercellular mechanisms behind the emergence of observed patterns that cannot be derived from images directly. © 2015 International Society for Advancement of Cytometry.

  6. Photometric Modeling of Simulated Surface-Resolved Bennu Images

    Science.gov (United States)

    Golish, D.; DellaGiustina, D. N.; Clark, B.; Li, J. Y.; Zou, X. D.; Bennett, C. A.; Lauretta, D. S.

    2017-12-01

    The Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is a NASA mission to study and return a sample of asteroid (101955) Bennu. Imaging data from the mission will be used to develop empirical surface-resolved photometric models of Bennu at a series of wavelengths. These models will be used to photometrically correct panchromatic and color base maps of Bennu, compensating for variations due to shadows and photometric angle differences, thereby minimizing seams in mosaicked images. Well-corrected mosaics are critical to the generation of a global hazard map and a global 1064-nm reflectance map which predicts LIDAR response. These data products directly feed into the selection of a site from which to safely acquire a sample. We also require photometric correction for the creation of color ratio maps of Bennu. Color ratios maps provide insight into the composition and geological history of the surface and allow for comparison to other Solar System small bodies. In advance of OSIRIS-REx's arrival at Bennu, we use simulated images to judge the efficacy of both the photometric modeling software and the mission observation plan. Our simulation software is based on USGS's Integrated Software for Imagers and Spectrometers (ISIS) and uses a synthetic shape model, a camera model, and an empirical photometric model to generate simulated images. This approach gives us the flexibility to create simulated images of Bennu based on analog surfaces from other small Solar System bodies and to test our modeling software under those conditions. Our photometric modeling software fits image data to several conventional empirical photometric models and produces the best fit model parameters. The process is largely automated, which is crucial to the efficient production of data products during proximity operations. The software also produces several metrics on the quality of the observations themselves, such as surface coverage and the

  7. On Signal Modeling of Moon-Based Synthetic Aperture Radar (SAR) Imaging of Earth

    Directory of Open Access Journals (Sweden)

    Zhen Xu

    2018-03-01

    Full Text Available The Moon-Based Synthetic Aperture Radar (Moon-Based SAR), using the Moon as a platform, has great potential to offer global-scale coverage of the Earth's surface with a high revisit cycle and is able to meet the scientific requirements for climate change study. However, operating in the lunar orbit, Moon-Based SAR imaging is confined within a complex geometry of the Moon-Based SAR, Moon, and Earth, where both rotation and revolution have effects. The extremely long exposure time of Moon-Based SAR produces a curved moving trajectory, and the protracted propagation time-delay makes the "stop-and-go" assumption no longer valid. Consequently, the conventional SAR imaging technique is no longer valid for Moon-Based SAR. This paper develops a Moon-Based SAR theory in which a signal model is derived. The Doppler parameters in the context of lunar revolution with the removal of the 'stop-and-go' assumption are first estimated, and then the characteristics of Moon-Based SAR imaging's azimuthal resolution are analyzed. In addition, a signal model of Moon-Based SAR and its two-dimensional (2-D) spectrum are further derived. Numerical simulation using point targets validates the signal model and enables Doppler parameter estimation for image focusing.

  8. Live cell CRISPR-imaging in plants reveals dynamic telomere movements

    KAUST Repository

    Dreissig, Steven

    2017-05-16

    Elucidating the spatio-temporal organization of the genome inside the nucleus is imperative to understand the regulation of genes and non-coding sequences during development and environmental changes. Emerging techniques of chromatin imaging promise to bridge the long-standing gap between sequencing studies which reveal genomic information and imaging studies that provide spatial and temporal information of defined genomic regions. Here, we demonstrate such an imaging technique based on two orthologues of the bacterial CRISPR-Cas9 system. By fusing eGFP/mRuby2 to the catalytically inactive version of Streptococcus pyogenes and Staphylococcus aureus Cas9, we show robust visualization of telomere repeats in live leaf cells of Nicotiana benthamiana. By tracking the dynamics of telomeres visualized by CRISPR-dCas9, we reveal dynamic telomere movements of up to 2 μm within 30 minutes during interphase. Furthermore, we show that CRISPR-dCas9 can be combined with fluorescence-labelled proteins to visualize DNA-protein interactions in vivo. By simultaneously using two dCas9 orthologues, we pave the way for imaging of multiple genomic loci in live plants cells. CRISPR-imaging bears the potential to significantly improve our understanding of the dynamics of chromosomes in live plant cells.

  9. Model-Based Photoacoustic Image Reconstruction using Compressed Sensing and Smoothed L0 Norm

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-01-01

    Photoacoustic imaging (PAI) is a novel medical imaging modality that combines the advantages of the spatial resolution of ultrasound imaging and the high contrast of pure optical imaging. Analytical algorithms are usually employed to reconstruct the photoacoustic (PA) images because of their simple implementation; however, they provide images of low accuracy. Model-based (MB) algorithms are used to improve the image quality and accuracy while a large number of transducers and data acquisition a...
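
    The smoothed L0 idea referenced in the title is well documented: replace the L0 count with a Gaussian surrogate, sharpen it gradually, and project back onto the data-consistency set after each gradient step. The compact sketch below follows the standard SL0 recipe with a random matrix standing in for the photoacoustic forward model; step sizes and the sigma schedule are assumptions.

```python
import numpy as np

def sl0(A, y, sigma_decrease=0.7, n_sigma=15, inner_iters=3, mu=2.0):
    """Smoothed-L0 sparse recovery of x from y = A x (underdetermined A)."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                              # minimum-norm starting point
    sigma = 2.0 * np.max(np.abs(x))
    for _ in range(n_sigma):
        for _ in range(inner_iters):
            delta = x * np.exp(-x**2 / (2 * sigma**2))   # gradient of the smooth L0 surrogate
            x = x - mu * delta                           # push towards sparsity
            x = x - A_pinv @ (A @ x - y)                 # project back onto {x : A x = y}
        sigma *= sigma_decrease                          # sharpen the surrogate
    return x

# Toy demo: recover a 5-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
support = rng.choice(100, 5, replace=False)
x_true[support] = rng.normal(size=5)
x_hat = sl0(A, A @ x_true)
print(np.round(x_hat[support], 2), np.round(x_true[support], 2))
```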

  10. Model-based estimation of breast percent density in raw and processed full-field digital mammography images from image-acquisition physics and patient-image characteristics

    Science.gov (United States)

    Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina

    2012-03-01

    Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
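
    The kind of model described, image-acquisition physics plus patient and gray-level features feeding a regression trained leave-one-woman-out, can be sketched with scikit-learn. The feature list, the plain linear link, and the random placeholder data below are assumptions for illustration; they are not the study's actual variables or results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n = 81
X = np.column_stack([
    rng.uniform(26, 32, n),     # kVp (placeholder acquisition physics)
    rng.uniform(60, 180, n),    # mAs
    rng.uniform(30, 80, n),     # compressed breast thickness (mm)
    rng.uniform(40, 75, n),     # patient age (years)
    rng.normal(0.4, 0.1, n),    # mean normalized gray level of the breast region
    rng.normal(0.1, 0.03, n),   # gray-level standard deviation
])
pd_truth = rng.uniform(5, 75, n)    # radiologist-assigned PD% (placeholder values)

# Leave-one-woman-out prediction, then Pearson r against the reference reading.
pd_pred = cross_val_predict(LinearRegression(), X, pd_truth, cv=LeaveOneOut())
print(np.corrcoef(pd_pred, pd_truth)[0, 1])   # meaningless here: data are random placeholders
```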

  11. An Improved Physics-Based Model for Topographic Correction of Landsat TM Images

    Directory of Open Access Journals (Sweden)

    Ainong Li

    2015-05-01

    Full Text Available Optical remotely sensed images in mountainous areas are subject to radiometric distortions induced by topographic effects, which need to be corrected before quantitative applications. Based on the Li model and the Sandmeier model, this paper proposes an improved physics-based model for the topographic correction of Landsat Thematic Mapper (TM) images. The model employs Normalized Difference Vegetation Index (NDVI) thresholds to approximately divide land targets into eleven groups, owing to NDVI’s lower sensitivity to topography and its significant role in indicating land cover type. Within each group of terrestrial targets, corresponding MODIS BRDF (Bidirectional Reflectance Distribution Function) products were used to account for the land surface’s BRDF effect, and topographic effects were corrected without the Lambertian assumption. The methodology was tested with two TM scenes of severely rugged mountain areas acquired under different sun elevation angles. Results demonstrated that the reflectance of sun-averted slopes was evidently enhanced, and the overall quality of the images was improved with the topographic effect being effectively suppressed. Correlation coefficients between Near Infra-Red band reflectance and the illumination condition were reduced almost to zero, and coefficients of variation also showed some reduction. By comparison with the other two physics-based models (the Sandmeier model and the Li model), the proposed model showed favorable results on the two tested Landsat scenes. With the almost half-century accumulation of Landsat data and the successive launch and operation of Landsat 8, the improved model in this paper can be potentially helpful for the topographic correction of Landsat and Landsat-like data.
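
    A minimal sketch of the NDVI-threshold grouping step is shown below. The band arithmetic is standard, but the equal-width binning and the exact number of groups are assumptions rather than the thresholds actually used in the paper.

```python
# Illustrative NDVI computation and grouping for TM imagery (thresholds assumed).
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from TM band 4 (NIR) and band 3 (red)."""
    return (nir - red) / (nir + red + eps)

def group_by_ndvi(ndvi_img, n_groups=11):
    """Split pixels into n_groups land-target classes by equal-width NDVI bins."""
    edges = np.linspace(-1.0, 1.0, n_groups + 1)
    return np.digitize(ndvi_img, edges[1:-1])   # integer labels 0 .. n_groups-1
```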

  12. Label fusion based brain MR image segmentation via a latent selective model

    Science.gov (United States)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirement for higher accuracy, faster segmentation, and robustness remains a great challenge. In this paper, we propose a novel algorithm that combines the two branches, using a global weighted fusion strategy based on a patch latent selective model, to perform segmentation of specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function in the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure, and we regard it as an isolated label so that the background and the regions of interest are treated equally. During label fusion with the global weighted fusion scheme, we use Bayesian inference and an expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.

  13. Image contrast enhancement based on a local standard deviation model

    International Nuclear Information System (INIS)

    Chang, Dah-Chung; Wu, Wen-Rong

    1996-01-01

    The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method which needs a contrast gain to adjust the high frequency components of an image. In the literature, the gain is usually either inversely proportional to the local standard deviation (LSD) or a constant. But these choices cause two problems in practical applications, namely noise over-enhancement and ringing artifacts. In this paper a new gain is developed based on Hunt's Gaussian image model to prevent these two defects. The new gain is a nonlinear function of the LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images and the simulations show the effectiveness of the proposed algorithm.
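
    A compact sketch of LSD-based adaptive contrast enhancement follows. The particular nonlinear gain used here is illustrative only and is not the exact function derived from Hunt's Gaussian image model in the paper.

```python
# Minimal adaptive contrast enhancement sketch; the nonlinear gain is illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def ace(img, win=15, gain_max=3.0, sigma_ref=30.0):
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, win)
    local_var = uniform_filter(img**2, win) - local_mean**2
    lsd = np.sqrt(np.clip(local_var, 0, None))      # local standard deviation
    # Bounded, nonlinear gain: peaks at lsd == sigma_ref, falls off for very low
    # (noise-dominated) and very high (already contrasty) LSD regions.
    gain = gain_max * (lsd / sigma_ref) * np.exp(1.0 - lsd / sigma_ref)
    return local_mean + gain * (img - local_mean)
```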

  14. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    Science.gov (United States)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    With the increase in resolution, remote sensing images have the characteristics of increased information load, increased noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to the individual building is realized. Experiments show that this method can restrain the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadow, vegetation and other pseudo-building information; compared with traditional pixel-level image information extraction, it achieves better precision, accuracy and completeness in building extraction.

  15. Model-based imaging of cardiac electrical function in human atria

    Science.gov (United States)

    Modre, Robert; Tilg, Bernhard; Fischer, Gerald; Hanser, Friedrich; Messnarz, Bernd; Schocke, Michael F. H.; Kremser, Christian; Hintringer, Florian; Roithinger, Franz

    2003-05-01

    Noninvasive imaging of electrical function in the human atria is attained by the combination of data from electrocardiographic (ECG) mapping and magnetic resonance imaging (MRI). An anatomical computer model of the individual patient is the basis for our computer-aided diagnosis of cardiac arrhythmias. Three patients suffering from Wolff-Parkinson-White syndrome, from paroxysmal atrial fibrillation, and from atrial flutter underwent an electrophysiological study. After successful treatment of the cardiac arrhythmia with an invasive catheter technique, pacing protocols with stimuli at several anatomical sites (coronary sinus, left and right pulmonary vein, posterior site of the right atrium, right atrial appendage) were performed. Reconstructed activation time (AT) maps were validated with catheter-based electroanatomical data, with invasively determined pacing sites, and with pacing at anatomical markers. The individual complex anatomical model of the atria of each patient, in combination with high-quality mesh optimization, enables accurate AT imaging, resulting in a localization error for the estimated pacing sites within 1 cm. Our findings may have implications for imaging of atrial activity in patients with focal arrhythmias.

  16. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-01-01

    Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1) the tumor localization errors and the intensity differences in the ten patient cases were smaller using the 4DCT-based motion models, possibly due to superior image quality. In case 2) the tumor localization error and intensity differences were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT-based
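
    The PCA motion-model construction described in the Methods can be sketched as follows, assuming the deformable registrations have already produced the phase-to-reference displacement vector fields; array shapes and function names are illustrative.

```python
# Sketch of a PCA respiratory-motion model built from phase-to-reference DVFs.
# The deformable registration step (e.g. with SimpleITK/elastix) is assumed done.
import numpy as np

def build_pca_motion_model(dvfs, n_modes=3):
    """dvfs: array (n_phases, nx, ny, nz, 3) of displacement vector fields."""
    n_phases = dvfs.shape[0]
    flat = dvfs.reshape(n_phases, -1)            # one row per breathing phase
    mean_dvf = flat.mean(axis=0)
    centered = flat - mean_dvf
    # Thin SVD of the (n_phases x n_voxels*3) matrix; tractable for a handful of phases.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                          # principal motion modes
    return mean_dvf, modes

def synthesize_dvf(mean_dvf, modes, coeffs, shape):
    """Reconstruct a DVF from PCA coefficients (the quantities the optimizer adjusts)."""
    return (mean_dvf + np.asarray(coeffs) @ modes).reshape(shape)
```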

  17. Mathematical Foundation Based Inter-Connectivity modelling of Thermal Image processing technique for Fire Protection

    Directory of Open Access Journals (Sweden)

    Sayantan Nath

    2015-09-01

    Full Text Available In this paper, the integration of multiple image processing functions and their statistical parameters for an intelligent alarm-series-based fire detection system is presented. The inter-connectivity mapping between processing elements of imagery, based on a classification factor for temperature monitoring and a multilevel intelligent alarm sequence, is introduced through an abstractive canonical approach. The flow of image processing components between the core implementation of an intelligent alarming system with temperature-wise area segmentation as well as boundary detection has not yet been fully explored in the present era of thermal imaging. In the light of the analytical perspective of convolutive functionalism in thermal imaging, the abstract-algebra-based inter-mapping model between event-calculus-supported DAGSVM classification for step-by-step generation of an alarm series with a gradual monitoring technique, and the segmentation of regions with their affected boundaries in a thermographic image of coal with respect to temperature distinctions, is discussed. The connectedness of the multifunctional image processing operations of a compatible fire protection system with a proper monitoring sequence is investigated here. The mathematical models, expressed in partial-derivative form, that relate the temperature-affected areas to their boundaries in the obtained thermal image are the core contribution of this study. The thermal image of a coal sample was obtained in a real-life scenario with a self-assembled thermographic camera. The amalgamation of area segmentation, boundary detection and the alarm series is described in abstract algebra. The principal objective of this paper is to understand the dependency pattern and working principles of the image processing components and to structure an inter-connected modelling technique for those components with the help of a mathematical foundation.

  18. Registration-based segmentation with articulated model from multipostural magnetic resonance images for hand bone motion animation.

    Science.gov (United States)

    Chen, Hsin-Chen; Jou, I-Ming; Wang, Chien-Kuo; Su, Fong-Chin; Sun, Yung-Nien

    2010-06-01

    The quantitative measurements of hand bones, including volume, surface, orientation, and position, are essential in investigating hand kinematics. Within the measurement stage, bone segmentation is the most important step due to its influence on measurement accuracy. Since hand bones are small and tubular in shape, magnetic resonance (MR) imaging is prone to artifacts such as nonuniform intensity and fuzzy boundaries, so greater care is required to maintain segmentation accuracy. The authors therefore propose a novel registration-based method using an articulated hand model to segment hand bones from multipostural MR images. The proposed method consists of a model construction stage and a registration-based segmentation stage. Given a reference postural image, the first stage constructs a drivable reference model characterized by hand bone shapes, intensity patterns, and an articulated joint mechanism. Applying the reference model in the second stage, the authors first perform a model-based registration driven by intensity distribution similarity, MR bone intensity properties, and constraints of model geometry to align the reference model with the target bone regions of the given postural image. They then refine the resulting surface to improve the superimposition between the registered reference model and the target bone boundaries. For each subject, given a reference postural image, the proposed method can automatically segment all hand bones from all other postural images. Compared to the ground truth from two experts, the resulting surfaces had an average margin of error within 1 mm only. In addition, the proposed method showed good agreement on the overlap of bone segmentations by the Dice similarity coefficient and also demonstrated better segmentation results than conventional methods. The proposed registration-based segmentation method can successfully overcome drawbacks caused by inherent artifacts in MR images and

  19. Quantification of root water uptake in soil using X-ray computed tomography and image-based modelling.

    Science.gov (United States)

    Daly, Keith R; Tracy, Saoirse R; Crout, Neil M J; Mairhofer, Stefan; Pridmore, Tony P; Mooney, Sacha J; Roose, Tiina

    2018-01-01

    Spatially averaged models of root-soil interactions are often used to calculate plant water uptake. Using a combination of X-ray computed tomography (CT) and image-based modelling, we tested the accuracy of this spatial averaging by directly calculating plant water uptake for young wheat plants in two soil types. The root system was imaged using X-ray CT at 2, 4, 6, 8 and 12 d after transplanting. The roots were segmented using semi-automated root tracking for speed and reproducibility. The segmented geometries were converted to a mesh suitable for the numerical solution of Richards' equation. Richards' equation was parameterized using existing pore scale studies of soil hydraulic properties in the rhizosphere of wheat plants. Image-based modelling allows the spatial distribution of water around the root to be visualized and the fluxes into the root to be calculated. By comparing the results obtained through image-based modelling to spatially averaged models, the impact of root architecture and geometry in water uptake was quantified. We observed that the spatially averaged models performed well in comparison to the image-based models with <2% difference in uptake. However, the spatial averaging loses important information regarding the spatial distribution of water near the root system. © 2017 John Wiley & Sons Ltd.

  20. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    International Nuclear Information System (INIS)

    Chen, Hsin-Chen; Yue, Yaofeng; Sun, Mingui; Jia, Wenyan; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D

    2013-01-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image. (paper)
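
    A toy illustration of the plate-as-scale-reference idea follows. It reduces the method to a single cylinder shape model and pixel measurements, whereas the paper performs a full 3D/2D registration of the selected shape model; all parameter names and the plate diameter are assumptions.

```python
# Toy example: convert segmented pixel measurements to physical units using the
# known plate diameter, then apply a simple cylindrical shape model.
def cylinder_volume_ml(food_area_px, food_height_px, plate_diam_px, plate_diam_cm=27.0):
    """Approximate food volume assuming a cylindrical shape model.

    food_area_px   : area of the segmented food region seen from above (pixels)
    food_height_px : apparent food height in the image (pixels)
    plate_diam_px  : measured plate diameter in the image (pixels)
    plate_diam_cm  : real plate diameter used as the scale reference (assumed value)
    """
    cm_per_px = plate_diam_cm / plate_diam_px
    base_area_cm2 = food_area_px * cm_per_px**2
    height_cm = food_height_px * cm_per_px
    return base_area_cm2 * height_cm            # 1 cm^3 == 1 ml
```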

  1. Analyzer-based imaging of spinal fusion in an animal model

    International Nuclear Information System (INIS)

    Kelly, M E; Beavis, R C; Allen, L A; Fiorella, David; Schueltke, E; Juurlink, B H; Chapman, L D; Zhong, Z

    2008-01-01

    Analyzer-based imaging (ABI) utilizes synchrotron radiation sources to create collimated monochromatic x-rays. In addition to x-ray absorption, this technique uses refraction and scatter rejection to create images. ABI provides dramatically improved contrast over standard imaging techniques. Twenty-one adult male Wistar rats were divided into four experimental groups to undergo the following interventions: (1) non-injured control, (2) decortication alone, (3) decortication with iliac crest bone grafting and (4) decortication with iliac crest bone grafting and interspinous wiring. Surgical procedures were performed at the L5-6 level. Animals were killed at 2, 4 and 6 weeks after the intervention and the spine muscle blocks were excised. Specimens were assessed for the presence of fusion by (1) manual testing, (2) conventional absorption radiography and (3) ABI. ABI showed no evidence of bone fusion in groups 1 and 2 and showed solid or possibly solid fusion in subjects from groups 3 and 4 at 6 weeks. Metal artifacts were not present in any of the ABI images. Conventional absorption radiographs did not provide diagnostic quality imaging of either the graft material or fusion masses in any of the specimens in any of the groups. Synchrotron-based ABI represents a novel imaging technique which can be used to assess spinal fusion in a small animal model. ABI produces superior image quality when compared to conventional radiographs

  2. Analyzer-based imaging of spinal fusion in an animal model

    Science.gov (United States)

    Kelly, M. E.; Beavis, R. C.; Fiorella, David; Schültke, E.; Allen, L. A.; Juurlink, B. H.; Zhong, Z.; Chapman, L. D.

    2008-05-01

    Analyzer-based imaging (ABI) utilizes synchrotron radiation sources to create collimated monochromatic x-rays. In addition to x-ray absorption, this technique uses refraction and scatter rejection to create images. ABI provides dramatically improved contrast over standard imaging techniques. Twenty-one adult male Wistar rats were divided into four experimental groups to undergo the following interventions: (1) non-injured control, (2) decortication alone, (3) decortication with iliac crest bone grafting and (4) decortication with iliac crest bone grafting and interspinous wiring. Surgical procedures were performed at the L5-6 level. Animals were killed at 2, 4 and 6 weeks after the intervention and the spine muscle blocks were excised. Specimens were assessed for the presence of fusion by (1) manual testing, (2) conventional absorption radiography and (3) ABI. ABI showed no evidence of bone fusion in groups 1 and 2 and showed solid or possibly solid fusion in subjects from groups 3 and 4 at 6 weeks. Metal artifacts were not present in any of the ABI images. Conventional absorption radiographs did not provide diagnostic quality imaging of either the graft material or fusion masses in any of the specimens in any of the groups. Synchrotron-based ABI represents a novel imaging technique which can be used to assess spinal fusion in a small animal model. ABI produces superior image quality when compared to conventional radiographs.

  3. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    3D city model is a digital representation of the Earth's surface and it's related objects such as building, tree, vegetation, and some manmade feature belonging to urban area. The demand of 3D city modeling is increasing day to day for various engineering and non-engineering applications. Generally three main image based approaches are using for virtual 3D city models generation. In first approach, researchers used Sketch based modeling, second method is Procedural grammar based modeling and third approach is Close range photogrammetry based modeling. Literature study shows that till date, there is no complete solution available to create complete 3D city model by using images. These image based methods also have limitations This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. This approach is divided into three sections. First, data acquisition process, second is 3D data processing, and third is data combination process. In data acquisition process, a multi-camera setup developed and used for video recording of an area. Image frames created from video data. Minimum required and suitable video image frame selected for 3D processing. In second section, based on close range photogrammetric principles and computer vision techniques, 3D model of area created. In third section, this 3D model exported to adding and merging of other pieces of large area. Scaling and alignment of 3D model was done. After applying the texturing and rendering on this model, a final photo-realistic textured 3D model created. This 3D model transferred into walk-through model or in movie form. Most of the processing steps are automatic. So this method is cost effective and less laborious. Accuracy of this model is good. For this research work, study area is the campus of department of civil engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for city. Aerial photography is restricted in many country

  4. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    Science.gov (United States)

    Chao, Ming; Xie, Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing, Lei

    2010-01-01

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in quantitative assessment of therapeutics and realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant tumor tissue content changes, similarity-based models are not suitable for describing the process of tumor volume changes. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) Autodetection of homologous tissue features shared by two input images using the scale-invariant feature transform (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, where SIFT features were mapped from template to target images and in reverse. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the “ground truth” with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy. PMID:20527569
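
    The bidirectional SIFT correspondence step can be sketched with OpenCV as below; a generic cross-check matcher stands in for the paper's bidirectional association procedure, and the thin plate spline interpolation is not shown.

```python
# Sketch of bidirectional ("cross-check") SIFT correspondences between two images.
import cv2

def bidirectional_sift_matches(template, target):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(template, None)
    kp2, des2 = sift.detectAndCompute(target, None)
    # crossCheck=True keeps only matches that agree in both directions,
    # mirroring the idea of mapping features template->target and in reverse.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts_template = [kp1[m.queryIdx].pt for m in matches]
    pts_target = [kp2[m.trainIdx].pt for m in matches]
    return pts_template, pts_target
```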

  5. Defogging of road images using gain coefficient-based trilateral filter

    Science.gov (United States)

    Singh, Dilbag; Kumar, Vijay

    2018-01-01

    Poor weather conditions are responsible for a large share of road accidents every year. Conditions such as fog degrade the visibility of objects, making it difficult for drivers to identify vehicles in a foggy environment. Dark channel prior (DCP)-based defogging techniques have been found to be an efficient way to remove fog from road images. However, the DCP produces poor results when image objects are inherently similar to the airlight and no shadow is cast on them. To eliminate this problem, a modified restoration-model-based DCP is developed to remove fog from road images. The transmission map is also refined by developing a gain coefficient-based trilateral filter. Thus, the proposed technique is able to remove fog from road images in an effective manner. The proposed technique is compared with seven well-known defogging techniques on two benchmark foggy image datasets and five real-time foggy images. The experimental results demonstrate that the proposed approach is able to remove different types of fog from roadside images as well as significantly improve image visibility. They also reveal that the restored image has little or no artifacts.
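
    For reference, the standard dark channel prior computation that such methods build on can be sketched as follows. This is the classical DCP pipeline (He et al.), not the paper's modified restoration model or its gain coefficient-based trilateral filter.

```python
# Standard dark channel prior steps: dark channel, airlight, transmission map.
import numpy as np
import cv2

def dark_channel(img_bgr, patch=15):
    """Per-pixel minimum over color channels, then a min-filter over a patch."""
    min_channel = img_bgr.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_channel, kernel)

def estimate_airlight(img_bgr, dark, top_fraction=0.001):
    """Average the brightest pixels (in the dark channel) to estimate airlight."""
    n = max(1, int(dark.size * top_fraction))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    return img_bgr[idx].mean(axis=0)

def transmission(img_bgr, airlight, omega=0.95, patch=15):
    """Coarse transmission map; refinement (e.g. guided/trilateral filtering) not shown."""
    normed = img_bgr.astype(np.float64) / airlight
    return 1.0 - omega * dark_channel(normed, patch)
```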

  6. MIDA: A Multimodal Imaging-Based Detailed Anatomical Model of the Human Head and Neck.

    Directory of Open Access Journals (Sweden)

    Maria Ida Iacono

    Full Text Available Computational modeling and simulations are increasingly being used to complement experimental testing for analysis of the safety and efficacy of medical devices. Multiple voxel- and surface-based whole- and partial-body models have been proposed in the literature, typically with spatial resolution in the range of 1-2 mm and with 10-50 different tissue types resolved. We have developed a multimodal imaging-based detailed anatomical model of the human head and neck, named "MIDA". The model was obtained by integrating three different magnetic resonance imaging (MRI) modalities, the parameters of which were tailored to enhance the signals of specific tissues: (i) structural T1- and T2-weighted MRIs; a specific heavily T2-weighted MRI slab with high nerve contrast optimized to enhance the structures of the ear and eye; (ii) magnetic resonance angiography (MRA) data to image the vasculature; and (iii) diffusion tensor imaging (DTI) to obtain information on anisotropy and fiber orientation. The unique multimodal high-resolution approach allowed resolving 153 structures, including several distinct muscles, bones and skull layers, arteries and veins, and nerves, as well as salivary glands. The model also offers a detailed characterization of eyes, ears, and deep brain structures. A special automatic atlas-based segmentation procedure was adopted to include a detailed map of the nuclei of the thalamus and midbrain in the head model. The suitability of the model for simulations involving different numerical methods, discretization approaches, and DTI-based tensorial electrical conductivity was examined in a case study, in which the electric field was generated by transcranial alternating current stimulation. The voxel- and surface-based versions of the model are freely available to the scientific community.

  7. A model-based radiography restoration method based on simple scatter-degradation scheme for improving image visibility

    Science.gov (United States)

    Kim, K.; Kang, S.; Cho, H.; Kang, W.; Seo, C.; Park, C.; Lee, D.; Lim, H.; Lee, H.; Kim, G.; Park, S.; Park, J.; Kim, W.; Jeon, D.; Woo, T.; Oh, J.

    2018-02-01

    In conventional planar radiography, image visibility is often limited mainly due to the superimposition of the object structure under investigation and the artifacts caused by scattered x-rays and noise. Several methods, including computed tomography (CT) as a multiplanar imaging modality, air-gap and grid techniques for the reduction of scatter, phase-contrast imaging as another image-contrast modality, etc., have been extensively investigated in an attempt to overcome these difficulties. However, those methods typically require higher x-ray doses or special equipment. In this work, as another approach, we propose a new model-based radiography restoration method based on a simple scatter-degradation scheme where the intensity of scattered x-rays and the transmission function of a given object are estimated from a single x-ray image to restore the original degraded image. We implemented the proposed algorithm and performed an experiment to demonstrate its viability. Our results indicate that the degradation of image characteristics by scattered x-rays and noise was effectively recovered by using the proposed method, which improves the image visibility in radiography considerably.

  8. A Hybrid Probabilistic Model for Unified Collaborative and Content-Based Image Tagging.

    Science.gov (United States)

    Zhou, Ning; Cheung, William K; Qiu, Guoping; Xue, Xiangyang

    2011-07-01

    The increasing availability of large quantities of user contributed images with labels has provided opportunities to develop automatic tools to tag images to facilitate image search and retrieval. In this paper, we present a novel hybrid probabilistic model (HPM) which integrates low-level image features and high-level user provided tags to automatically tag images. For images without any tags, HPM predicts new tags based solely on the low-level image features. For images with user provided tags, HPM jointly exploits both the image features and the tags in a unified probabilistic framework to recommend additional tags to label the images. The HPM framework makes use of the tag-image association matrix (TIAM). However, since the number of images is usually very large and user-provided tags are diverse, TIAM is very sparse, thus making it difficult to reliably estimate tag-to-tag co-occurrence probabilities. We developed a collaborative filtering method based on nonnegative matrix factorization (NMF) for tackling this data sparsity issue. Also, an L1 norm kernel method is used to estimate the correlations between image features and semantic concepts. The effectiveness of the proposed approach has been evaluated using three databases containing 5,000 images with 371 tags, 31,695 images with 5,587 tags, and 269,648 images with 5,018 tags, respectively.
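
    The NMF-based smoothing of the sparse tag-image association matrix might be sketched as below. This is an assumed illustration: the factor rank and the use of scikit-learn's NMF objective are not taken from the paper, and the L1-norm kernel step relating image features to concepts is omitted.

```python
# Illustrative low-rank completion of a sparse tag-image association matrix (TIAM)
# with non-negative matrix factorization, then tag co-occurrence estimation.
import numpy as np
from sklearn.decomposition import NMF

def densify_tiam(tiam, n_factors=50, seed=0):
    """tiam: (n_tags, n_images) binary/count matrix, typically very sparse."""
    model = NMF(n_components=n_factors, init="nndsvda", random_state=seed, max_iter=500)
    W = model.fit_transform(tiam)          # (n_tags, n_factors)
    H = model.components_                  # (n_factors, n_images)
    return W @ H                           # dense low-rank reconstruction

def tag_cooccurrence(tiam_dense):
    """Row-normalized tag-to-tag co-occurrence estimated from the densified matrix."""
    co = tiam_dense @ tiam_dense.T
    return co / np.maximum(co.sum(axis=1, keepdims=True), 1e-12)
```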

  9. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

    Full Text Available The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, optimal linear interpolation thresholding algorithm (OLI-Shrink is employed to guarantee a gentler thresholding effect. The results of comparative experiments conducted indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM index values that are comparable to those of the block-matching 3D transformation (BM3D method.

  10. Fiducial-based fusion of 3D dental models with magnetic resonance imaging.

    Science.gov (United States)

    Abdi, Amir H; Hannam, Alan G; Fels, Sidney

    2018-04-16

    Magnetic resonance imaging (MRI) is widely used in the study of maxillofacial structures. While MRI is the modality of choice for soft tissues, it fails to capture hard tissues such as bone and teeth. Virtual dental models, acquired by optical 3D scanners, are becoming more accessible for dental practice and are starting to replace conventional dental impressions. The goal of this research is to fuse the high-resolution 3D dental models with MRI to enhance the value of imaging for applications where detailed analysis of maxillofacial structures is needed, such as patient examination, surgical planning, and modeling. A subject-specific dental attachment was digitally designed and 3D printed based on the subject's face width and dental anatomy. The attachment contained 19 semi-ellipsoidal concavities in predetermined positions where oil-based ellipsoidal fiducial markers were later placed. The MRI was acquired while the subject bit on the dental attachment. The spatial position of the center of mass of each fiducial in the resultant MR image was calculated by averaging its voxels' spatial coordinates. The rigid transformation to fuse dental models to MRI was calculated based on the least squares mapping of corresponding fiducials and solved via singular-value decomposition. The target registration error (TRE) of the proposed fusion process, calculated in a leave-one-fiducial-out fashion, was estimated at 0.49 mm. The results suggest that 6-9 fiducials suffice to achieve a TRE equal to half the MRI voxel size. Ellipsoidal oil-based fiducials produce distinguishable intensities in MRI and can be used as registration fiducials. The achieved accuracy of the proposed approach is sufficient to leverage the merged 3D dental models with the MRI data for a finer analysis of the maxillofacial structures where complete geometry models are needed.
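
    The SVD-based least-squares rigid fit mentioned in the abstract is the classical Kabsch/Umeyama solution, which can be sketched as follows; this is a generic implementation with assumed variable names, not the authors' code.

```python
# Least-squares rigid (rotation + translation) fit between corresponding fiducials.
import numpy as np

def rigid_fit(src, dst):
    """src, dst: (N, 3) corresponding fiducial centroids (dental model -> MRI)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t   # maps a point p as R @ p + t

def mean_target_registration_error(R, t, src_check, dst_check):
    """Mean Euclidean distance between mapped check points and their MRI positions."""
    mapped = src_check @ R.T + t
    return np.sqrt(((mapped - dst_check) ** 2).sum(axis=1)).mean()
```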

  11. Model-based magnetization retrieval from holographic phase images

    Energy Technology Data Exchange (ETDEWEB)

    Röder, Falk, E-mail: f.roeder@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Vogel, Karin [Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Wolf, Daniel [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Hellwig, Olav [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); AG Magnetische Funktionsmaterialien, Institut für Physik, Technische Universität Chemnitz, D-09126 Chemnitz (Germany); HGST, A Western Digital Company, 3403 Yerba Buena Rd., San Jose, CA 95135 (United States); Wee, Sung Hun [HGST, A Western Digital Company, 3403 Yerba Buena Rd., San Jose, CA 95135 (United States); Wicht, Sebastian; Rellinghaus, Bernd [IFW Dresden, Institute for Metallic Materials, P.O. Box 270116, D-01171 Dresden (Germany)

    2017-05-15

    The phase shift of the electron wave is a useful measure for the projected magnetic flux density of magnetic objects at the nanometer scale. More important for materials science, however, is the knowledge about the magnetization in a magnetic nano-structure. As demonstrated here, a dominating presence of stray fields prohibits a direct interpretation of the phase in terms of magnetization modulus and direction. We therefore present a model-based approach for retrieving the magnetization by considering the projected shape of the nano-structure and assuming a homogeneous magnetization therein. We apply this method to FePt nano-islands epitaxially grown on a SrTiO{sub 3} substrate, which indicates an inclination of their magnetization direction relative to the structural easy magnetic [001] axis. By means of this real-world example, we discuss prospects and limits of this approach. - Highlights: • Retrieval of the magnetization from holographic phase images. • Magnetostatic model constructed for a magnetic nano-structure. • Decomposition into homogeneously magnetized components. • Discretization of each component by elementary cuboids. • Analytic solution for the phase of a magnetized cuboid considered. • Fitting a set of magnetization vectors to experimental phase images.

  12. On Feature Relevance in Image-Based Prediction Models: An Empirical Study

    DEFF Research Database (Denmark)

    Konukoglu, E.; Ganz, Melanie; Van Leemput, Koen

    2013-01-01

    Determining disease-related variations of the anatomy and function is an important step in better understanding diseases and developing early diagnostic systems. In particular, image-based multivariate prediction models and the “relevant features” they produce are attracting attention from the co...

  13. Aircraft Segmentation in SAR Images Based on Improved Active Shape Model

    Science.gov (United States)

    Zhang, X.; Xiong, B.; Kuang, G.

    2018-04-01

    In SAR image interpretation, aircraft are important targets that attract much attention. However, it is far from easy to segment an aircraft from the background completely and precisely in SAR images. Because of their complex structure, different kinds of electromagnetic scattering take place on aircraft surfaces. As a result, aircraft targets usually appear inhomogeneous and disconnected. It is a good idea to extract an aircraft target with the active shape model (ASM), since the incorporated geometric information controls variations of the shape during the contour evolution. However, the linear dimensionality reduction used in the classic ASM makes the model rigid, which causes much trouble when segmenting different types of aircraft. Aiming at this problem, an improved ASM based on ISOMAP is proposed in this paper. The ISOMAP algorithm is used to extract the shape information of the training set and make the model flexible enough to deal with different aircraft. Experiments based on real SAR data show that the proposed method achieves an obvious improvement in accuracy.

  14. TU-CD-BRA-05: Atlas Selection for Multi-Atlas-Based Image Segmentation Using Surrogate Modeling

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for the specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics, and based on such a model, provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate’s ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of the fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed based on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation, and verified with several commonly-used surrogates, including MSD, NCC and (N)MI. The derived behaviors of different surrogates in atlas selection and their corresponding performance in the ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC by 0.10 with the first and third quartiles of (0.83, 0.89), compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric on atlas selection

  15. Remote sensing image ship target detection method based on visual attention model

    Science.gov (United States)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    The traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. Considering this, a method for ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, and improve the detection efficiency of ship targets in remote sensing images.

  16. Live cell imaging reveals a novel view of DNA

    International Nuclear Information System (INIS)

    Moritomi-Yano, Keiko; Yano, Ken-ichi

    2010-01-01

    Non-homologous end-joining (NHEJ) is the major repair pathway for DNA double-strand breaks (DSBs), which are the most severe form of DNA damage. Recently, live cell imaging techniques coupled with laser micro-irradiation were used to analyze the spatio-temporal behavior of the NHEJ core factors upon DSB induction in living cells. Based on the live cell imaging studies, we proposed a novel two-phase model for DSB sensing and protein assembly in the NHEJ pathway. This new model provides a novel view of the dynamic protein behavior at DSBs and has broad implications for the molecular mechanism of NHEJ. (author)

  17. Using optical remote sensing model to estimate oil slick thickness based on satellite image

    International Nuclear Information System (INIS)

    Lu, Y C; Tian, Q J; Lyu, C G; Fu, W X; Han, W C

    2014-01-01

    An optical remote sensing model has been established based on two-beam interference theory to estimate marine oil slick thickness. The extinction coefficient and the normalized reflectance of oil are two important components of this model. The extinction coefficient is an important inherent optical property and does not vary when the background reflectance changes. The normalized reflectance can be used to eliminate the background differences between in situ measured spectra and the remotely sensed image. Therefore, marine oil slick thickness and area can be estimated and mapped based on an optical remote sensing image and the extinction coefficient

  18. A Learning State-Space Model for Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lee Greg C

    2007-01-01

    Full Text Available This paper proposes an approach based on a state-space model for learning the user concepts in image retrieval. We first design a scheme of region-based image representation based on concept units, which are integrated with different types of feature spaces and with different region scales of image segmentation. The design of the concept units aims at describing similar characteristics at a certain perspective among relevant images. We present the details of our proposed approach based on a state-space model for interactive image retrieval, including likelihood and transition models, and we also describe some experiments that show the efficacy of our proposed model. This work demonstrates the feasibility of using a state-space model to estimate the user intuition in image retrieval.

  19. NMDA receptor antagonism by repetitive MK801 administration induces schizophrenia-like structural changes in the rat brain as revealed by voxel-based morphometry and diffusion tensor imaging.

    Science.gov (United States)

    Wu, H; Wang, X; Gao, Y; Lin, F; Song, T; Zou, Y; Xu, L; Lei, H

    2016-05-13

    Animal models of N-methyl-d-aspartate receptor (NMDAR) antagonism have been widely used for schizophrenia research. Less is known about whether these models are associated with macroscopic brain structural changes that resemble those in clinical schizophrenia. Magnetic resonance imaging (MRI) was used to measure brain structural changes in rats subjected to repeated administration of MK801 in a regimen (a daily dose of 0.2 mg/kg for 14 consecutive days) known to induce schizophrenia-like cognitive impairments. Voxel-based morphometry (VBM) revealed significant gray matter (GM) atrophy in the hippocampus, ventral striatum (vStr) and cortex. Diffusion tensor imaging (DTI) demonstrated microstructural impairments in the corpus callosum (cc). Histopathological results corroborated the MRI findings. Treatment-induced behavioral abnormalities were not measured, so a correlation between the observed brain structural changes and schizophrenia-like behaviors could not be established. Chronic MK801 administration induces MRI-observable brain structural changes that are comparable to those observed in schizophrenia patients, supporting the notion that NMDAR hypofunction contributes to the pathology of schizophrenia. Imaging-derived brain structural changes in animal models of NMDAR antagonism may be useful measurements for studying the effects of treatments and interventions targeting schizophrenia. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  20. An automatic image-based modelling method applied to forensic infography.

    Directory of Open Access Journals (Sweden)

    Sandra Zancajo-Blazquez

    Full Text Available This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative for modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives, as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model.

  1. SALIENCY BASED SEGMENTATION OF SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sharma

    2015-03-01

    Full Text Available Saliency reflects the way humans perceive an image, and saliency-based segmentation can ultimately be helpful in psychovisual image interpretation. Keeping this in view, a few saliency models are used along with a segmentation algorithm, and only the salient segments of the image are extracted. The work is carried out for terrestrial images as well as for satellite images. The methodology used in this work extracts those segments of the segmented image whose saliency value is greater than or equal to a threshold value. Salient and non-salient regions of the image become foreground and background, respectively, and thus the image is separated. For carrying out this work, a dataset of terrestrial images and Worldview 2 satellite images (sample data) are used. Results show that those saliency models which work better for terrestrial images are not good enough for satellite images in terms of foreground and background separation. Foreground and background separation in terrestrial images is based on salient objects visible in the images, whereas in satellite images this separation is based on salient areas rather than salient objects.
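
    The "keep only segments above a saliency threshold" idea can be sketched as below. This is an assumed simplification: OpenCV's spectral-residual saliency (from the opencv-contrib module) and fixed grid blocks stand in for the saliency models and segmentation algorithm actually used in the work.

```python
# Minimal saliency-thresholded foreground/background separation sketch.
import cv2
import numpy as np

def salient_mask(img_bgr, thresh=0.5, block=32):
    sal_model = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal = sal_model.computeSaliency(img_bgr)       # saliency map in [0, 1]
    if not ok:
        raise RuntimeError("saliency computation failed")
    h, w = sal.shape
    mask = np.zeros((h, w), np.uint8)
    for y in range(0, h, block):                        # per-"segment" mean saliency
        for x in range(0, w, block):
            if sal[y:y + block, x:x + block].mean() >= thresh:
                mask[y:y + block, x:x + block] = 255
    return mask                                         # foreground = salient segments
```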

  2. Measurable realistic image-based 3D mapping

    Science.gov (United States)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data are obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides the virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is the limitation of detailed coverage, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information of the real world than 3D model-based maps. The image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of stereo images. The panoramic function makes 3D maps more interactive with users and also creates an interesting immersive experience. Actually, unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in terms of photos. The topographic and terrain attributes, such as shapes and heights, are however omitted. This paper also discusses the potential for using a low cost land Mobile Mapping System (MMS) to implement realistic image 3D mapping, and evaluates the positioning accuracy that a measurable

  3. USE OF IMAGE BASED MODELLING FOR DOCUMENTATION OF INTRICATELY SHAPED OBJECTS

    Directory of Open Access Journals (Sweden)

    M. Marčiš

    2016-06-01

    Full Text Available In the documentation of cultural heritage, we can encounter three dimensional shapes and structures which are complicated to measure. Such objects are, for example, spiral staircases, timber roof trusses, historical furniture or folk costume, where it is nearly impossible to effectively use traditional surveying or terrestrial laser scanning due to the shape of the object, its dimensions and the crowded environment. Current methods of digital photogrammetry can be very helpful in such cases, with the emphasis on automated processing of the extensive image data. The created high resolution 3D models and 2D orthophotos are very important for the documentation of architectural elements and they can serve as an ideal base for vectorization and 2D drawing documentation. This contribution describes the various uses of image based modelling in specific interior spaces and for specific objects. The advantages and disadvantages of the photogrammetric measurement of such objects in comparison to other surveying methods are reviewed.

  4. Use of Image Based Modelling for Documentation of Intricately Shaped Objects

    Science.gov (United States)

    Marčiš, M.; Barták, P.; Valaška, D.; Fraštia, M.; Trhan, O.

    2016-06-01

    In the documentation of cultural heritage, we can encounter three dimensional shapes and structures which are complicated to measure. Such objects are, for example, spiral staircases, timber roof trusses, historical furniture or folk costume, where it is nearly impossible to effectively use traditional surveying or terrestrial laser scanning due to the shape of the object, its dimensions and the crowded environment. Current methods of digital photogrammetry can be very helpful in such cases, with the emphasis on automated processing of the extensive image data. The created high resolution 3D models and 2D orthophotos are very important for the documentation of architectural elements and they can serve as an ideal base for vectorization and 2D drawing documentation. This contribution describes the various uses of image based modelling in specific interior spaces and for specific objects. The advantages and disadvantages of the photogrammetric measurement of such objects in comparison to other surveying methods are reviewed.

  5. NSCT BASED LOCAL ENHANCEMENT FOR ACTIVE CONTOUR BASED IMAGE SEGMENTATION APPLICATION

    Directory of Open Access Journals (Sweden)

    Hiren Mewada

    2010-08-01

    Full Text Available Because of their cross-disciplinary nature, active contour modeling techniques have been utilized extensively for image segmentation. In traditional active-contour segmentation techniques based on level set methods, the energy functions are defined in terms of the intensity gradient. This makes them highly sensitive to situations where the underlying image content is characterized by image nonhomogeneities due to illumination and contrast conditions, which is the main obstacle to making them fully automatic image segmentation techniques. This paper introduces an image-enhancement-based approach to this problem. The enhanced image is obtained using the nonsubsampled contourlet transform, which improves the edge strength in directions where the illumination is poor, and then a level-set-based active contour model is used to segment the object. Experimental results demonstrate that the proposed method can be used with existing active-contour segmentation methods in situations characterized by intensity non-homogeneity to make them fully automatic.

  6. Historical Single Image-Based Modeling: The Case of Gobierna Tower, Zamora (Spain

    Directory of Open Access Journals (Sweden)

    Jesús Garcia-Gago

    2014-01-01

    Full Text Available Historical perspective images have proved to be very useful for providing a dimensional analysis of building façades or even for generating a pseudo-3D reconstruction of the whole structure based on rectified images. In this paper, the case of Gobierna Tower (Zamora, Spain) is analyzed from a historical single image-based modeling approach. In particular, a bottom-up approach, which takes advantage of the perspective of the image, the existence of the three vanishing points and the usual geometric constraints (i.e., planarity, orthogonality, and parallelism), is applied for the dimensional analysis of a destroyed historical building. Results were compared with ground truth measurements from a historical topographic survey, obtaining deviations of about 1%.

  7. Deformation Measurements of Gabion Walls Using Image Based Modeling

    Directory of Open Access Journals (Sweden)

    Marek Fraštia

    2014-06-01

    Full Text Available Image-based modeling finds use in applications where it is necessary to reconstruct the 3D surface of the observed object with a high level of detail. Previous experiments show relatively high variability of the results depending on the camera type used, the processing software, or the evaluation process. The authors tested the SfM (Structure from Motion) method to determine the stability of gabion walls. The results of photogrammetric measurements were compared to precise geodetic point measurements.

  8. Constructing a Computer Model of the Human Eye Based on Tissue Slice Images

    OpenAIRE

    Dai, Peishan; Wang, Boliang; Bao, Chunbo; Ju, Ying

    2010-01-01

    Computer simulation of the biomechanical and biological heat transfer in ophthalmology greatly relies on having a reliable computer model of the human eye. This paper proposes a novel method for the construction of a geometric model of the human eye based on tissue slice images. Slice images were obtained from an in vitro Chinese human eye through embryo specimen processing methods. A level set algorithm was used to extract contour points of eye tissues while a principal component analysi...

  9. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed by its neighboring visual words in the vocabulary, we solve the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on the ImageCLEFmed datasets. The experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.

  10. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed by its neighboring visual words in the vocabulary, we solve the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on the ImageCLEFmed datasets. The experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.
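
    The abstracts above do not spell out the exact QP formulation, so the Python sketch below uses a non-negative least-squares reconstruction of a descriptor from its k nearest visual words as a hedged stand-in for the described QP assignment; all names, sizes, and the choice of k are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def soft_assignment(descriptor, vocabulary, k=5):
    """Reconstruct a local descriptor from its k nearest visual words and use
    the normalized reconstruction weights as contribution functions."""
    d = np.linalg.norm(vocabulary - descriptor, axis=1)
    nearest = np.argsort(d)[:k]                  # indices of the k nearest words
    A = vocabulary[nearest].T                    # (dim, k) basis of neighboring words
    w, _ = nnls(A, descriptor)                   # non-negative reconstruction weights
    contrib = np.zeros(len(vocabulary))
    if w.sum() > 0:
        contrib[nearest] = w / w.sum()           # spread the assignment over k words
    return contrib

rng = np.random.default_rng(0)
vocab = rng.random((200, 128))                   # e.g. k-means centers of SIFT descriptors
desc = rng.random(128)
hist_contribution = soft_assignment(desc, vocab) # one descriptor's share of the BoF histogram
```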

  11. Facial motion parameter estimation and error criteria in model-based image coding

    Science.gov (United States)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. However, the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts: the global 3-D rigid motion of the head, the non-rigid translation motion of the jaw area, and the local non-rigid expression motion in the eye and mouth areas. Feature points are automatically selected by a function of edges, brightness, and end-nodes outside the eye and mouth blocks, and the number of feature points is adjusted adaptively. The jaw translation motion is tracked through the changes in the positions of the jaw feature points. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach for estimating the motion parameter error based on the quality of the reconstructed image is suggested, with an area error function and a contour transition-turn rate error function used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.

  12. Sketch-based 3D modeling by aligning outlines of an image

    Directory of Open Access Journals (Sweden)

    Chunxiao Li

    2016-07-01

    Full Text Available In this paper we present an efficient technique for sketch-based 3D modeling using automatically extracted image features. Creating a 3D model often requires a drawing of irregular shapes composed of curved lines as a starting point, but it is difficult to hand-draw such lines without introducing awkward bumps and edges. We propose an automatic alignment of the user's hand-drawn sketch lines to the contour lines of an image, so that the user can sketch freely while the system intelligently snaps the sketch lines to a background image contour, removing the strenuous effort of trying to draw a perfect line during the modeling task. This interactive technique combines the efficiency and perception of the human user with the accuracy of computational power, applied to the domain of 3D modeling, where the need for precise on-screen drawing has made the task a job for highly skilled and careful users. We provide several examples to demonstrate the accuracy and efficiency of the method, with which complex shapes were achieved easily and quickly in the interactive outline drawing task.
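
    A minimal illustration of the snapping idea (not the paper's algorithm): move each hand-drawn point to the strongest image gradient within a small window, here using scikit-image's Sobel filter; the window radius is an assumed parameter.

```python
import numpy as np
from skimage import filters

def snap_to_contour(points, image, radius=8):
    """Move each hand-drawn point (x, y) to the strongest-gradient pixel
    inside a small window of the background image."""
    grad = filters.sobel(image.astype(float))    # edge-strength map
    snapped = []
    for x, y in points:
        r0, r1 = max(0, y - radius), min(image.shape[0], y + radius + 1)
        c0, c1 = max(0, x - radius), min(image.shape[1], x + radius + 1)
        win = grad[r0:r1, c0:c1]
        dy, dx = np.unravel_index(np.argmax(win), win.shape)
        snapped.append((c0 + dx, r0 + dy))
    return snapped
```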

  13. Image-Based Models for Specularity Propagation in Diminished Reality.

    Science.gov (United States)

    Said, Souheil Hadj; Tamaazousti, Mohamed; Bartoli, Adrien

    2018-07-01

    The aim of Diminished Reality (DR) is to remove a target object in a live video stream seamlessly. In our approach, the area of the target object is replaced with new texture that blends with the rest of the image. The result is then propagated to the next frames of the video. One of the important stages of this technique is to update the target region with respect to the illumination change. This is a complex and recurrent problem when the viewpoint changes. We show that the state-of-the-art in DR fails in solving this problem, even under simple scenarios. We then use local illumination models to address this problem. According to these models, the variation in illumination only affects the specular component of the image. In the context of DR, the problem is therefore solved by propagating the specularities in the target area. We list a set of structural properties of specularities which we incorporate in two new models for specularity propagation. Our first model includes the same property as the previous approaches, which is the smoothness of illumination variation, but has a different estimation method based on the Thin-Plate Spline. Our second model incorporates more properties of the specularity's shape on planar surfaces. Experimental results on synthetic and real data show that our strategy substantially improves the rendering quality compared to the state-of-the-art in DR.

  14. A Spherical Model Based Keypoint Descriptor and Matching Algorithm for Omnidirectional Images

    Directory of Open Access Journals (Sweden)

    Guofeng Tong

    2014-04-01

    Full Text Available Omnidirectional images generally have nonlinear distortion in the radial direction. Unfortunately, traditional algorithms such as the scale-invariant feature transform (SIFT) and Descriptor-Nets (D-Nets) do not work well in matching omnidirectional images because they are incapable of dealing with this distortion. In order to solve this problem, a new voting algorithm is proposed based on a spherical model and the D-Nets algorithm. Because the spherical keypoint descriptor contains the distortion information of omnidirectional images, the proposed matching algorithm is invariant to the distortion. Keypoint matching experiments are performed on three pairs of omnidirectional images, and the proposed algorithm is compared with SIFT and D-Nets. The results show that the proposed algorithm is more robust and more precise than SIFT and D-Nets in matching omnidirectional images. Compared with SIFT and D-Nets, the proposed algorithm has two main advantages: (a) there are more real matching keypoints; (b) the coverage range of the matching keypoints is wider, including the seriously distorted areas.

  15. Image-Based Modeling Reveals Dynamic Redistribution of DNA Damage into Nuclear Sub-Domains

    Energy Technology Data Exchange (ETDEWEB)

    Costes, Sylvain V.; Ponomarev, Artem; Chen, James L.; Nguyen, David; Cucinotta, Francis A.; Barcellos-Hoff, Mary Helen

    2007-08-03

    Several proteins involved in the response to DNA double-strand breaks (DSB) form microscopically visible nuclear domains, or foci, after exposure to ionizing radiation. Radiation-induced foci (RIF) are believed to be located where DNA damage occurs. To test this assumption, we analyzed the spatial distribution of 53BP1, phosphorylated ATM, and gammaH2AX RIF in cells irradiated with high linear energy transfer (LET) radiation and low LET. Since energy is randomly deposited along high-LET particle paths, RIF along these paths should also be randomly distributed. The probability to induce DSB can be derived from DNA fragment data measured experimentally by pulsed-field gel electrophoresis. We used this probability in Monte Carlo simulations to predict DSB locations in synthetic nuclei geometrically described by a complete set of human chromosomes, taking into account microscope optics from real experiments. As expected, simulations produced DNA-weighted random (Poisson) distributions. In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern can be further characterized by "relative DNA image measurements." This novel imaging approach shows that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent than predicted in regions with lower DNA density. The same preferential nuclear location was also measured for RIF induced by 1 Gy of low-LET radiation. This deviation from random behavior was evident only 5 min after irradiation for phosphorylated ATM RIF, while gammaH2AX and 53BP1 RIF showed pronounced deviations up to 30 min after exposure. These data suggest that DNA damage induced foci are restricted to certain regions of the nucleus of human epithelial cells. It is possible that DNA lesions are collected in these nuclear sub-domains for more efficient repair.

  16. A robust sub-pixel edge detection method of infrared image based on tremor-based retinal receptive field model

    Science.gov (United States)

    Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang

    2008-03-01

    Because of the complex thermal objects in an infrared image, the prevalent image edge detection operators are often suitable only for a certain scene and sometimes extract edges that are too wide. From a biological point of view, image edge detection operators work reliably when assuming a convolution-based receptive field architecture. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. To handle the blurred edges of an infrared image, orthogonal polynomial interpolation and sub-pixel edge detection are then applied in the neighborhood of the rough edge pixels to locate the edges at sub-pixel level. Numerical simulations show that this method can locate the target edge accurately and robustly.
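
    Two ingredients named in this abstract, the ON-center DoG filter and sub-pixel localization, can be sketched in Python as follows. The tremor modelling and the orthogonal-polynomial interpolation of the paper are not reproduced; a simple parabolic refinement is used as a sub-pixel stand-in, and the sigma values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma_center=1.0, sigma_surround=2.5):
    """ON-center difference-of-Gaussians response (center minus surround)."""
    img = image.astype(float)
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

def subpixel_peak(profile, i):
    """Parabolic refinement of a response peak at integer index i of a 1-D profile."""
    if 0 < i < len(profile) - 1:
        a, b, c = profile[i - 1], profile[i], profile[i + 1]
        denom = a - 2 * b + c
        if denom != 0:
            return i + 0.5 * (a - c) / denom
    return float(i)
```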

  17. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    International Nuclear Information System (INIS)

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

    In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem, and the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it can produce higher quality images than the algorithms based on the parallel or series models alone for the cases tested in this paper. It provides a new algorithm for ECT applications.
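
    A hedged Python/NumPy sketch of the two pieces this abstract combines: a normalized capacitance mixing the parallel and series models (the adaptive combination coefficient from the paper is replaced by a fixed alpha here), and a Tikhonov-regularized linear reconstruction with an assumed sensitivity matrix S. All parameter values are illustrative.

```python
import numpy as np

def normalize_combined(C, C_low, C_high, alpha=0.5):
    """Normalized capacitance mixing the parallel and series models.
    alpha stands in for the adaptive combination coefficient of the paper."""
    lam_parallel = (C - C_low) / (C_high - C_low)
    lam_series = (1.0 / C - 1.0 / C_low) / (1.0 / C_high - 1.0 / C_low)
    return alpha * lam_parallel + (1.0 - alpha) * lam_series

def tikhonov_reconstruction(S, lam_norm, mu=1e-3):
    """Regularized linear ECT reconstruction: g = (S^T S + mu I)^-1 S^T lambda."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + mu * np.eye(n), S.T @ lam_norm)
```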

  18. Relationship model among sport event image, destination image, and tourist satisfaction of Tour de Singkarak in West Sumatera

    Directory of Open Access Journals (Sweden)

    Ratni Prima Lita

    2015-06-01

    Full Text Available The sport event Tour de Singkarak (TDS) can increase tourist arrivals to West Sumatera; at the time of the event, the majority of participants and supporting teams (sports tourists) bring their families. Although there are claims about the arrival of tourists, the impact of the TDS sport event on the image of West Sumatera as a tourist destination (destination image) and its effect on tourist satisfaction need to be examined comprehensively and over the long term. This study re-conceptualizes the interconnectedness among sport event image, tourist destination image, perception, and the effect on tourist satisfaction. The investigation of this interconnection is expected to reveal an empirically tested model. Explanatory in nature, this study uses an explanatory survey and cross-sectional data. In total, 100 spectators of Tour de Singkarak in West Sumatera were involved in the survey, selected by a convenience sampling technique. The data were analyzed using variance-based structural equation modeling. It was found that sport event image and destination image significantly affect the satisfaction of spectators of Tour de Singkarak.

  19. Model-based normalization for iterative 3D PET image

    International Nuclear Information System (INIS)

    Bai, B.; Li, Q.; Asma, E.; Leahy, R.M.; Holdsworth, C.H.; Chatziioannou, A.; Tai, Y.C.

    2002-01-01

    We describe a method for normalization in 3D PET for use with maximum a posteriori (MAP) or other iterative model-based image reconstruction methods. This approach is an extension of previous factored normalization methods in which we include separate factors for detector sensitivity, geometric response, block effects and deadtime. Since our MAP reconstruction approach already models some of the geometric factors in the forward projection, the normalization factors must be modified to account only for effects not already included in the model. We describe a maximum likelihood approach to joint estimation of the count-rate independent normalization factors, which we apply to data from a uniform cylindrical source. We then compute block-wise and block-profile deadtime correction factors using singles and coincidence data, respectively, from a multiframe cylindrical source. We have applied this method for reconstruction of data from the Concorde microPET P4 scanner. Quantitative evaluation of this method using well-counter measurements of activity in a multicompartment phantom compares favourably with normalization based directly on cylindrical source measurements. (author)

  20. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters are applied to all images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant character, the whole 3D modeling pipeline can be performed completely automatically once the system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The objects imaged include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums.

  1. Image processor of model-based vision system for assembly robots

    International Nuclear Information System (INIS)

    Moribe, H.; Nakano, M.; Kuno, T.; Hasegawa, J.

    1987-01-01

    A special purpose image preprocessor for the visual system of assembly robots has been developed. Its main functional unit is composed of lookup tables, exploiting the advantages of semiconductor memory: large-scale integration, high speed, and low price. More than one unit may be operated in parallel, since the design is based on the standard IEEE 796 bus. The operation time of the preprocessor for line segment extraction is usually 200 ms per 500 segments, though it varies with the complexity of the scene image. The gray-scale visual system, supported by a model-based analysis program using the extracted line segments, recognizes partially visible or overlapping industrial workpieces and detects their locations and orientations.

  2. Model-based crosstalk compensation for simultaneous 99mTc/123I dual-isotope brain SPECT imaging.

    Science.gov (United States)

    Du, Yong; Tsui, Benjamin M W; Frey, Eric C

    2007-09-01

    In this work, we developed a model-based method to estimate and compensate for the crosstalk contamination in simultaneous 123I and 99mTc dual-isotope brain single photon emission computed tomography imaging. The model-based crosstalk compensation (MBCC) includes detailed modeling of photon interactions inside both the object and the detector system. In the method, scatter in the object is modeled using the effective source scatter estimation technique, including contributions from all the photon emissions. The effects of the collimator-detector response, including the penetration and scatter components due to high-energy 123I photons, are modeled using precalculated tables of Monte Carlo simulated point-source response functions obtained from sources in air at various distances from the face of the collimator. The model-based crosstalk estimation method was combined with iterative reconstruction based compensation to reduce contamination due to crosstalk. The MBCC method was evaluated using Monte Carlo simulated and physical phantom experimentally acquired simultaneous dual-isotope data. Results showed that, for both experimental and simulation studies, the model-based method provided crosstalk estimates that were in good agreement with the true crosstalk. Compensation using MBCC improved image contrast and removed the artifacts for both Monte Carlo simulated and experimentally acquired data. The results were in good agreement with images acquired without any crosstalk contamination.

  3. Model-based crosstalk compensation for simultaneous Tc99m∕I123 dual-isotope brain SPECT imaging.

    Science.gov (United States)

    Du, Yong; Tsui, Benjamin M W; Frey, Eric C

    2007-09-01

    In this work, we developed a model-based method to estimate and compensate for the crosstalk contamination in simultaneous I123 and Tc99m dual-isotope brain single photon emission computed tomography imaging. The model-based crosstalk compensation (MBCC) includes detailed modeling of photon interactions inside both the object and the detector system. In the method, scatter in the object is modeled using the effective source scatter estimation technique, including contributions from all the photon emissions. The effects of the collimator-detector response, including the penetration and scatter components due to high-energy I123 photons, are modeled using pre-calculated tables of Monte Carlo simulated point-source response functions obtained from sources in air at various distances from the face of the collimator. The model-based crosstalk estimation method was combined with iterative reconstruction based compensation to reduce contamination due to crosstalk. The MBCC method was evaluated using Monte Carlo simulated and physical phantom experimentally acquired simultaneous dual-isotope data. Results showed that, for both experimental and simulation studies, the model-based method provided crosstalk estimates that were in good agreement with the true crosstalk. Compensation using MBCC improved image contrast and removed the artifacts for both Monte Carlo simulated and experimentally acquired data. The results were in good agreement with images acquired without any crosstalk contamination. © 2007 American Association of Physicists in Medicine.

  4. Reconstructed image of human heart for total artificial heart implantation, based on MR image and cast silicone model of heart

    International Nuclear Information System (INIS)

    Komoda, Takashi; Maeta, Hajime; Uyama, Chikao.

    1991-01-01

    Based on transverse (TRN) and LV long axis (LAX) MR images of two cadaver hearts, three-dimensional (3-D) computer models of the connecting interface between the remaining heart and the total artificial heart, i.e., the mitral and tricuspid valvular annuli (MVA and TVA), ascending aorta (Ao), and pulmonary artery (PA), were reconstructed to compare the shape and size of the MVA and TVA, the distance between the centers of the MVA and TVA (D_G), the angle between the plane of the MVA and that of the TVA (R_T), and the angles of the Ao and PA, respectively, to the plane of the MVA (R_A, R_P), with those obtained in cast silicone models. It was found that the MVA and TVA might be more precisely reconstructed from LAX rather than TRN MR images. The data obtained from the 3-D images of the MVA, TVA, Ao and PA based on silicone models of 32 hearts were as follows: D_G (cm): 4.17±0.43, R_T (degrees): 22.1±11.3, R_A (degrees): 54.9±15.3, R_P (degrees): 30.8±17.1. (author)

  5. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    Science.gov (United States)

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  6. Low contrast detectability and spatial resolution with model-based iterative reconstructions of MDCT images: a phantom and cadaveric study

    Energy Technology Data Exchange (ETDEWEB)

    Millon, Domitille; Coche, Emmanuel E. [Universite Catholique de Louvain, Department of Radiology and Medical Imaging, Cliniques Universitaires Saint Luc, Brussels (Belgium); Vlassenbroek, Alain [Philips Healthcare, Brussels (Belgium); Maanen, Aline G. van; Cambier, Samantha E. [Universite Catholique de Louvain, Statistics Unit, King Albert II Cancer Institute, Brussels (Belgium)

    2017-03-15

    To compare the image quality [low contrast (LC) detectability, noise, contrast-to-noise ratio (CNR) and spatial resolution (SR)] of MDCT images reconstructed with an iterative reconstruction (IR) algorithm and a filtered back projection (FBP) algorithm. The experimental study was performed on a 256-slice MDCT. LC detectability, noise, CNR and SR were measured on a Catphan phantom scanned with decreasing doses (48.8 down to 0.7 mGy) and parameters typical of a chest CT examination. Images were reconstructed with FBP and a model-based IR algorithm. Additionally, human chest cadavers were scanned and reconstructed using the same technical parameters, and the images were analyzed to illustrate the phantom results. LC detectability and noise were statistically significantly different between the techniques, in favour of the model-based IR algorithm (p < 0.0001). At low doses, the noise in FBP images only enabled SR measurements of high-contrast objects. The superior CNR of the model-based IR algorithm enabled measurements at lower doses, which showed that SR was dose and contrast dependent. Cadaver images reconstructed with model-based IR illustrated that the visibility and delineation of anatomical structure edges could deteriorate at low doses. Model-based IR improved LC detectability and enabled dose reduction. At low dose, SR became dose and contrast dependent. (orig.)

  7. Feature-based Alignment of Volumetric Multi-modal Images

    Science.gov (United States)

    Toews, Matthew; Zöllei, Lilla; Wells, William M.

    2014-01-01

    This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to poor repeatability of feature detection in different image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm, that iteratively alternates between estimating a feature-based model from feature data, then realigning feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955

  8. Infrared image background modeling based on improved Susan filtering

    Science.gov (United States)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model the background of an infrared image, the Gaussian filter lacks the ability to filter directionally. After filtering, the edge information of the image is not preserved well, so there are many edge singular points in the difference image, which increases the difficulty of target detection. To solve these problems, an anisotropic algorithm is introduced in this paper, and an anisotropic Gaussian filter is used instead of the Gaussian filter in the SUSAN filter operator. Firstly, an anisotropic gradient operator is used to calculate the horizontal and vertical gradients at a pixel, to determine the direction of the filter's long axis. Secondly, the local area of the pixel and the neighborhood smoothness are used to calculate the filter length and short-axis variance. Then, the first-order norm of the difference between the local gray levels and their mean is calculated to determine the threshold of the SUSAN filter. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. The background modeling performance on infrared images is evaluated by Mean Squared Error (MSE), Structural Similarity (SSIM) and local Signal-to-noise Ratio Gain (GSNR). Compared with the traditional filtering algorithm, the improved SUSAN filter achieves a better background modeling effect: it effectively preserves the edge information in the image, and dim small targets are effectively enhanced in the difference image, which greatly reduces the false alarm rate.

  9. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model, so image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computing times, graphics processing unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by this method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
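
    The key observation behind the GRBF method is that the convolution of two Gaussians is again a Gaussian with summed variances, so deblurring reduces to solving for basis weights. The 1-D Python sketch below illustrates that principle with a ridge-regularized least-squares fit; it is an illustration of the idea under simplifying assumptions, not the paper's 2-D implementation, and all parameter values are arbitrary.

```python
import numpy as np

def gaussian_basis(x, centers, sigma):
    """Matrix of Gaussian RBFs evaluated at x (rows) for each control point (columns)."""
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)

def grbf_deconvolve_1d(blurred, sigma_img, sigma_psf, n_ctrl=64, ridge=1e-6):
    """Fit the blurred signal with Gaussians of the *combined* width, then
    re-render the same weights with the narrower image width."""
    x = np.arange(len(blurred), dtype=float)
    centers = np.linspace(0, len(blurred) - 1, n_ctrl)
    sigma_comb = np.hypot(sigma_img, sigma_psf)   # Gaussian convolved with Gaussian stays Gaussian
    A = gaussian_basis(x, centers, sigma_comb)
    w = np.linalg.solve(A.T @ A + ridge * np.eye(n_ctrl), A.T @ blurred)
    return gaussian_basis(x, centers, sigma_img) @ w   # deblurred estimate
```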

  10. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    Science.gov (United States)

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method

  11. Active vision and image/video understanding with decision structures based on the network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. The ability of the human brain to emulate knowledge structures in the form of network-symbolic models has been identified, and this implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze by higher-level knowledge structures; higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an Image Understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotic and defense industries.

  12. Novel personalized pathway-based metabolomics models reveal key metabolic pathways for breast cancer diagnosis

    DEFF Research Database (Denmark)

    Huang, Sijia; Chong, Nicole; Lewis, Nathan

    2016-01-01

    diagnosis. We applied this method to predict breast cancer occurrence, in combination with correlation feature selection (CFS) and classification methods. Results: The resulting all-stage and early-stage diagnosis models are highly accurate in two sets of testing blood samples, with average AUCs (Area Under.......993. Moreover, important metabolic pathways, such as taurine and hypotaurine metabolism and the alanine, aspartate, and glutamate pathway, are revealed as critical biological pathways for early diagnosis of breast cancer. Conclusions: We have successfully developed a new type of pathway-based model to study...... metabolomics data for disease diagnosis. Applying this method to blood-based breast cancer metabolomics data, we have discovered crucial metabolic pathway signatures for breast cancer diagnosis, especially early diagnosis. Further, this modeling approach may be generalized to other omics data types for disease...

  13. Application of Finite Element Modeling Methods in Magnetic Resonance Imaging-Based Research and Clinical Management

    Science.gov (United States)

    Fwu, Peter Tramyeon

    The medical image is very complex by its nature. Modeling built upon the medical image is challenging due to the lack of analytical solutions. The finite element method (FEM) is a numerical technique which can be used to solve partial differential equations; it relies on the transformation of a continuous domain into solvable discrete sub-domains. In three-dimensional space, FEM can deal with complicated structures and heterogeneous interiors, which makes it an ideal tool for medical-image-based modeling problems. In this study, I address three modeling problems: (1) photon transport inside the human breast, by implementing the radiative transfer equation to simulate diffuse optical spectroscopy imaging (DOSI) in order to measure the percent density (PD), which has been proven to be a cancer risk factor in mammography. Our goal is to use MRI as the ground truth to optimize the DOSI scanning protocol to obtain a consistent measurement of PD. Our results show that the DOSI measurement is position and depth dependent and that a proper scanning scheme and body configuration are needed; (2) heat flow in the prostate, by implementing the Pennes bioheat equation to evaluate the cooling performance of regional hypothermia during robot-assisted radical prostatectomy for the individual patient in order to achieve the optimal cooling setting. Four factors are taken into account in the simulation: blood abundance, artery perfusion, cooling balloon temperature, and the anatomical distance. The results show that blood abundance, prostate size, and anatomical distance are significant factors for the equilibrium temperature of the neurovascular bundle; (3) shape analysis of the hippocampus, using radial distance mapping and two registration methods to find the correlation between sub-regional change and age and cognitive performance, which might not be revealed by volumetric analysis. The results give a fundamental knowledge of the normal distribution in young

  14. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which presents an improvement over the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of gray scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.

  15. Image Structure-Preserving Denoising Based on Difference Curvature Driven Fractional Nonlinear Diffusion

    Directory of Open Access Journals (Sweden)

    Xuehui Yin

    2015-01-01

    Full Text Available The traditional integer-order partial differential equation and gradient regularization based image denoising techniques often suffer from staircase effects, speckle artifacts, and the loss of image contrast and texture details. To address these issues, in this paper a difference curvature driven fractional anisotropic diffusion for image noise removal is presented, which uses two new techniques, fractional calculus and difference curvature, to describe the intensity variations in images. The fractional-order derivative information of an image deals well with textures and achieves a good tradeoff between eliminating speckle artifacts and restraining the staircase effect. The difference curvature, constructed from the second-order derivatives along the gradient direction and perpendicular to the gradient, can effectively distinguish between ramps and edges. A Fourier transform technique is also used to compute the fractional-order derivatives. Experimental results demonstrate that the proposed denoising model avoids speckle artifacts and the staircase effect and preserves important features such as curvy edges, straight edges, ramps, corners, and textures, and is clearly superior to traditional integer-order methods. The experimental results also reveal that the proposed model yields a good visual effect and better MSSIM and PSNR values.

  16. The model of illumination-transillumination for image enhancement of X-ray images

    Energy Technology Data Exchange (ETDEWEB)

    Lyu, Kwang Yeul [Shingu College, Sungnam (Korea, Republic of); Rhee, Sang Min [Kangwon National Univ., Chuncheon (Korea, Republic of)

    2001-06-01

    In digital image processing, the homomorphic filtering approach is derived from an illumination-reflectance model of the image. It can also be used with an illumination-transillumination model of X-ray film. Several X-ray images were enhanced with histogram equalization and with a homomorphic filter based on an illumination-transillumination model. The homomorphic filter confirmed the theoretical claims of image density range compression and balanced contrast enhancement, and was also found to be a valuable tool for processing analog X-ray images into digital images.
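
    A compact Python/NumPy sketch of homomorphic filtering under such a multiplicative model: the image is taken to the log domain, low frequencies (illumination) are compressed and high frequencies (detail) are boosted with a Gaussian high-emphasis filter, and the result is exponentiated back. The cutoff and gain values are illustrative, not taken from the paper.

```python
import numpy as np

def homomorphic_filter(image, cutoff=0.05, gamma_low=0.5, gamma_high=1.5):
    """Compress the illumination (low-frequency) term and boost the
    detail (high-frequency) term of a multiplicative image model."""
    log_img = np.log1p(image.astype(float))          # multiplicative -> additive
    F = np.fft.fft2(log_img)
    u = np.fft.fftfreq(image.shape[0])[:, None]
    v = np.fft.fftfreq(image.shape[1])[None, :]
    d2 = u ** 2 + v ** 2                             # squared normalized frequency
    H = gamma_low + (gamma_high - gamma_low) * (1.0 - np.exp(-d2 / (2.0 * cutoff ** 2)))
    return np.expm1(np.real(np.fft.ifft2(H * F)))    # back to intensity domain
```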

  17. Empirical study of travel mode forecasting improvement for the combined revealed preference/stated preference data–based discrete choice model

    Directory of Open Access Journals (Sweden)

    Yanfu Qiao

    2016-01-01

    Full Text Available The combined revealed preference/stated preference data–based discrete choice model captures actual choice-making constraints and reduces prediction errors, but the differing random error variances of alternatives belonging to the two data sources can limit its universality. In this article, we studied the traffic corridor between Chengdu and Longquan, using the revealed preference/stated preference joint model and the single stated preference data model separately to predict the choice probability of each mode. We found that the revealed preference/stated preference joint model is universal only when there is a significant difference between the random error terms of the two data sources. The single stated preference data amplify travelers' preferences and cause prediction error. We propose a universal approach that uses revealed preference data to modify the parameter estimates obtained from the single stated preference data, achieving a composite utility and reducing the prediction error. The results suggest that predictions based on the composite utility are more reasonable than those based on the single stated preference data alone, especially when forecasting the mode share of the bus. The future metro line will be the main travel mode in this corridor, and 45% of passenger flow will transfer to the metro.

  18. Multiscale image analysis reveals structural heterogeneity of the cell microenvironment in homotypic spheroids.

    Science.gov (United States)

    Schmitz, Alexander; Fischer, Sabine C; Mattheyer, Christian; Pampaloni, Francesco; Stelzer, Ernst H K

    2017-03-03

    Three-dimensional multicellular aggregates such as spheroids provide reliable in vitro substitutes for tissues. Quantitative characterization of spheroids at the cellular level is fundamental. We present the first pipeline that provides three-dimensional, high-quality images of intact spheroids at cellular resolution together with a comprehensive image analysis that complements traditional image segmentation with algorithms from other fields. The pipeline combines light sheet-based fluorescence microscopy of optically cleared spheroids with automated nuclei segmentation (F score: 0.88) and concepts from graph analysis and computational topology. Incorporating cell graphs and alpha shapes provided more than 30 features of individual nuclei, the cellular neighborhood and the spheroid morphology. The application of our pipeline to a set of breast carcinoma spheroids revealed two concentric layers of different cell density for more than 30,000 cells. The thickness of the outer cell layer depends on a spheroid's size and varies between 50% and 75% of its radius. In differently-sized spheroids, we detected patches of different cell densities ranging from 5 × 10^5 to 1 × 10^6 cells/mm^3. Since cell density affects cell behavior in tissues, structural heterogeneities need to be incorporated into existing models. Our image analysis pipeline provides a multiscale approach to obtain the relevant data for a system-level understanding of tissue architecture.

  19. A Waterline Extraction Method from Remote Sensing Image Based on Quad-tree and Multiple Active Contour Model

    Directory of Open Access Journals (Sweden)

    YU Jintao

    2016-09-01

    Full Text Available After analyzing the characteristics of the geodesic active contour model (GAC), the Chan-Vese model (CV), and the local binary fitting model (LBF), and combining the region- and edge-based active contour model with a quad-tree image segmentation method, a waterline extraction method based on the quad-tree and a multiple active contour model is proposed in this paper. Firstly, the method provides an initial contour according to the quad-tree segmentation. Secondly, a new signed pressure force (SPF) function based on the global image statistics of the CV model and the local image statistics of the LBF model is defined, and the edge stopping function (ESF) is replaced by the proposed SPF function, which solves problems such as the evolution stopping prematurely or evolving excessively. Finally, the selective binary and Gaussian filtering level set method is used to avoid re-initialization and regularization and to improve the evolution efficiency. The experimental results show that this method can effectively extract weak edges and seriously concave edges, and offers sub-pixel accuracy, high efficiency, and reliability for waterline extraction.

  20. Model-based respiratory motion compensation for emission tomography image reconstruction

    International Nuclear Information System (INIS)

    Reyes, M; Malandain, G; Koulibaly, P M; Gonzalez-Ballester, M A; Darcourt, J

    2007-01-01

    In emission tomography imaging, respiratory motion causes artifacts in reconstructed lung and cardiac images, which lead to misinterpretations, imprecise diagnosis, impaired fusion with other modalities, etc. Solutions such as respiratory gating, correlated dynamic PET techniques, list-mode data based techniques and others have been tested; they improve the spatial activity distribution in lung lesions, but have the disadvantages of requiring additional instrumentation or discarding part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion compensation directly into the image reconstruction process, without any additional acquisition protocol considerations. To this end, we propose an extension to the maximum likelihood expectation maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion as well as from phantom and patient data.
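
    For reference, the core MLEM update that such an approach extends can be sketched in a few lines of Python. In the motion-compensated variant, a (known) warping operator for each respiratory state would be composed into the system matrix A, which is assumed generic here; this sketch is not the authors' implementation.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Basic MLEM: x_{k+1} = x_k / (A^T 1) * A^T ( y / (A x_k) ).
    A: system matrix (n_bins x n_voxels), y: measured projections."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0]) + eps     # sensitivity image
    for _ in range(n_iter):
        proj = A @ x + eps                     # forward projection of current estimate
        x *= (A.T @ (y / proj)) / sens         # multiplicative update
    return x
```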

  1. Reconstruction and simplification of urban scene models based on oblique images

    Science.gov (United States)

    Liu, J.; Guo, B.

    2014-08-01

    We describe multi-view stereo reconstruction and simplification algorithms for urban scene models based on oblique images. The complexity, diversity, and density within the urban scene increase the difficulty of building city models from oblique images, but many flat surfaces exist in the urban scene. One of our key contributions is a dense matching algorithm based on Self-Adaptive Patches tailored to the urban scene. The basic idea of match propagation based on Self-Adaptive Patches is to build patches centred on seed points which are already matched. The extent and shape of the patches adapt to the objects of the urban scene automatically: when the surface is flat, the patch becomes larger; when the surface is very rough, the patch becomes smaller. The other contribution is that the mesh generated by Graph Cuts is a 2-manifold surface satisfying the half-edge data structure; this is achieved by clustering and re-marking tetrahedrons in the s-t graph. The purpose of obtaining a 2-manifold surface is to simplify the mesh with an edge-collapse algorithm which preserves and emphasizes the features of buildings.

  2. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    Science.gov (United States)

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software, with a genetic algorithm used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to serve as an accurate, realistic, and widespread tool, and is of great benefit to virtual face modeling.

  3. Image based Monument Recognition using Graph based Visual Saliency

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Triantafyllidis, Georgios

    2013-01-01

    This article presents an image-based application aiming at simple image classification of well-known monuments in the area of Heraklion, Crete, Greece. This classification takes place by utilizing Graph Based Visual Saliency (GBVS) and employing Scale Invariant Feature Transform (SIFT) or Speeded......, the images have been previously processed according to the Graph Based Visual Saliency model in order to keep either SIFT or SURF features corresponding to the actual monuments while the background “noise” is minimized. The application is then able to classify these images, helping the user to better...

  4. Image-optimized Coronal Magnetic Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M., E-mail: shaela.i.jones-mecholsky@nasa.gov, E-mail: shaela.i.jonesmecholsky@nasa.gov [NASA Goddard Space Flight Center, Code 670, Greenbelt, MD 20771 (United States)

    2017-08-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.

  5. Image-Optimized Coronal Magnetic Field Models

    Science.gov (United States)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.

    2017-01-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work we presented early tests of the method which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and the effect on the outcome of the optimization of errors in localization of constraints. We find that substantial improvement in the model field can be achieved with this type of constraints, even when magnetic features in the images are located outside of the image plane.

  6. Image-based compound profiling reveals a dual inhibitor of tyrosine kinase and microtubule polymerization.

    Science.gov (United States)

    Tanabe, Kenji

    2016-04-27

    Small-molecule compounds are widely used as biological research tools and therapeutic drugs. Therefore, uncovering novel targets of these compounds should provide insights that are valuable in both basic and clinical studies. I developed a method for image-based compound profiling by quantitating the effects of compounds on signal transduction and vesicle trafficking of epidermal growth factor receptor (EGFR). Using six signal transduction molecules and two markers of vesicle trafficking, 570 image features were obtained and subjected to multivariate analysis. Fourteen compounds that affected EGFR or its pathways were classified into four clusters, based on their phenotypic features. Surprisingly, one EGFR inhibitor (CAS 879127-07-8) was classified into the same cluster as nocodazole, a microtubule depolymerizer. In fact, this compound directly depolymerized microtubules. These results indicate that CAS 879127-07-8 could be used as a chemical probe to investigate both the EGFR pathway and microtubule dynamics. The image-based multivariate analysis developed herein has potential as a powerful tool for discovering unexpected drug properties.

  7. Nephrus: expert system model in intelligent multilayers for evaluation of urinary system based on scintigraphic image analysis

    International Nuclear Information System (INIS)

    Silva, Jorge Wagner Esteves da; Schirru, Roberto; Boasquevisque, Edson Mendes

    1999-01-01

    Renal function can be measured noninvasively with radionuclides in an extremely safe way compared to other diagnostic techniques. Nevertheless, because radioactive materials are used in this procedure, it is necessary to maximize its benefits, and all efforts are therefore justified in the development of data analysis support tools for this diagnostic modality. The objective of this work is to develop a prototype of a system model based on Artificial Intelligence devices able to perform functions related to the scintigraphic image analysis of the urinary system. Rules used by medical experts in the analysis of images obtained with 99mTc+DTPA and/or 99mTc+DMSA were modeled, and a Neural Network diagnosis technique was implemented. Special attention was given to the design of the program's user interface: Human Factors Engineering techniques were taken into account, allowing friendliness and robustness. The image segmentation adopts a model based on Ideal ROIs, which represent the normal anatomic concept for urinary system organs. The results obtained using Artificial Neural Networks for qualitative image analysis, together with the knowledge model constructed, show the feasibility of an Artificial Intelligence implementation that uses the inherent abilities of each technique in medical diagnostic image analysis. (author)

  8. Image-Based Modeling Techniques for Architectural Heritage 3d Digitalization: Limits and Potentialities

    Science.gov (United States)

    Santagati, C.; Inzerillo, L.; Di Paola, F.

    2013-07-01

    3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), the different techniques of image matching, feature extraction and mesh optimization form an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing users to fulfill other tasks on their computers, whereas desktop systems require considerable processing time and heavier approaches. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to Architectural Heritage Documentation. Our approach to this challenging problem is to compare the 3D models from Autodesk 123D Catch with 3D models from terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.

  9. WE-G-207-06: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Physical Phantom and Clinical Patient Images

    International Nuclear Information System (INIS)

    Dhou, S; Cai, W; Hurwitz, M; Rottmann, J; Myronakis, M; Cifter, F; Berbeco, R; Lewis, J; Williams, C; Mishra, P; Ionascu, D

    2015-01-01

    Purpose: Respiratory-correlated cone-beam CT (4DCBCT) images acquired immediately prior to treatment have the potential to represent patient motion patterns and anatomy during treatment, including both intra- and inter-fractional changes. We develop a method to generate patient-specific motion models based on 4DCBCT images acquired with existing clinical equipment, which are then used to generate time-varying volumetric images (3D fluoroscopic images) representing motion during treatment delivery. Methods: Motion models are derived by deformably registering each 4DCBCT phase to a reference phase and performing principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated by iteratively optimizing the resulting PCA coefficients through comparison of cone-beam projections simulating kV treatment imaging with digitally reconstructed radiographs generated from the motion model. Patient and physical phantom datasets are used to evaluate the method in terms of tumor localization error compared to manually defined ground truth positions. Results: 4DCBCT-based motion models were derived and used to generate 3D fluoroscopic images at treatment time. For the patient datasets, the average tumor localization error and the 95th percentile were 1.57 and 3.13, respectively, in subsets of four patient datasets. For the physical phantom datasets, the average tumor localization error and the 95th percentile were 1.14 and 2.78, respectively, in two datasets. 4DCBCT motion models are shown to perform well in the context of generating 3D fluoroscopic images due to their ability to reproduce anatomical changes at treatment time. Conclusion: This study showed the feasibility of deriving 4DCBCT-based motion models and using them to generate 3D fluoroscopic images at treatment time in real clinical settings. 4DCBCT-based motion models were found to account for the 3D non-rigid motion of the patient anatomy during treatment and have the potential
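
    The PCA step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes the displacement vector fields (DVFs) from the deformable registrations have already been flattened into rows of an array, and all array shapes and variable names are placeholders.

        import numpy as np

        def build_pca_motion_model(dvfs, n_components=2):
            """Build a PCA motion model from displacement vector fields (DVFs).

            dvfs : array of shape (n_phases, n_voxels * 3); each row is the DVF
                   mapping the reference phase to one 4DCBCT phase (illustrative layout).
            Returns the mean DVF, the leading principal components and the per-phase scores.
            """
            mean_dvf = dvfs.mean(axis=0)
            centered = dvfs - mean_dvf
            # SVD of the centered data gives the principal components (eigen-motions).
            u, s, vt = np.linalg.svd(centered, full_matrices=False)
            components = vt[:n_components]
            scores = centered @ components.T          # per-phase PCA coefficients
            return mean_dvf, components, scores

        def synthesize_dvf(mean_dvf, components, coeffs):
            """Reconstruct a DVF for arbitrary PCA coefficients (e.g. coefficients
            found by matching simulated projections to a measured kV projection)."""
            return mean_dvf + np.asarray(coeffs) @ components

        # Toy usage with random data standing in for registration output.
        rng = np.random.default_rng(0)
        dvfs = rng.normal(size=(10, 3000))            # 10 phases, 1000 voxels x 3
        mean_dvf, comps, scores = build_pca_motion_model(dvfs)
        dvf_t = synthesize_dvf(mean_dvf, comps, scores[0])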

  10. Fourier-Mellin moment-based intertwining map for image encryption

    Science.gov (United States)

    Kaur, Manjit; Kumar, Vijay

    2018-03-01

    In this paper, a robust image encryption technique that utilizes Fourier-Mellin moments and intertwining logistic map is proposed. Fourier-Mellin moment-based intertwining logistic map has been designed to overcome the issue of low sensitivity of an input image. Multi-objective Non-Dominated Sorting Genetic Algorithm (NSGA-II) based on Reinforcement Learning (MNSGA-RL) has been used to optimize the required parameters of intertwining logistic map. Fourier-Mellin moments are used to make the secret keys more secure. Thereafter, permutation and diffusion operations are carried out on input image using secret keys. The performance of proposed image encryption technique has been evaluated on five well-known benchmark images and also compared with seven well-known existing encryption techniques. The experimental results reveal that the proposed technique outperforms others in terms of entropy, correlation analysis, a unified average changing intensity and the number of changing pixel rate. The simulation results reveal that the proposed technique provides high level of security and robustness against various types of attacks.

  11. Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images

    Science.gov (United States)

    Yao, Shoukui; Qin, Xiaojuan

    2018-02-01

    Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable. How to recognize ships with fuzzy features remains an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated, and the GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is assigned to the trained GMMs for recognition. Because of the scale, rotation and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMM-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large simulated image set show that our approach is effective in distinguishing different ship types and obtains a satisfactory ship recognition performance.
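
    As a rough illustration of the two steps described above (moment extraction, then GMM-based recognition), the sketch below uses OpenCV's Hu moments and scikit-learn's GaussianMixture, training one GMM per ship class and assigning an image chip to the class with the highest likelihood. The class labels, chip arrays and component counts are placeholders; this is not the authors' code.

        import cv2
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def hu_features(chip):
            """Seven Hu moment invariants of a grayscale/binary ship chip,
            log-scaled for numerical stability."""
            m = cv2.moments(chip.astype(np.float32))
            hu = cv2.HuMoments(m).ravel()
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

        def train_class_gmms(chips_by_class, n_components=3):
            """Fit one GMM per ship class on the Hu-moment features.
            Each class needs at least n_components training chips (illustrative)."""
            gmms = {}
            for label, chips in chips_by_class.items():
                feats = np.array([hu_features(c) for c in chips])
                gmms[label] = GaussianMixture(n_components=n_components,
                                              covariance_type='full',
                                              random_state=0).fit(feats)
            return gmms

        def classify(chip, gmms):
            """Assign the chip to the class whose GMM gives the highest log-likelihood."""
            f = hu_features(chip).reshape(1, -1)
            return max(gmms, key=lambda label: gmms[label].score(f))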

  12. Application of REVEAL-W to risk-based configuration control

    International Nuclear Information System (INIS)

    Dezfuli, H.; Meyer, J.; Modarres, M.

    1994-01-01

    Over the past two years, the concept of risk-based configuration control has been introduced to the US Nuclear Regulatory Commission and the nuclear industry. Converting much of the current, deterministically based regulation of nuclear power plants to risk-based regulation can result in lower levels of risk while relieving unnecessary burdens on power plant operators and regulatory staff. To achieve the potential benefits of risk-based configuration control, the risk models developed for nuclear power plants should be (1) flexible enough to effectively support necessary risk calculations, and (2) transparent enough to encourage their use by all parties. To address these needs, SCIENTECH, Inc., has developed the PC-based REVEAL W (formerly known as SMART). This graphic-oriented and user-friendly application software allows the user to develop transparent complex logic models based on the concept of the master plant logic diagram. The logic model is success-oriented and compact. The analytical capability built into REVEAL W is generic, so the software can support different types of risk-based evaluations, such as probabilistic safety assessment, accident sequence precursor analysis, design evaluation and configuration management. In this paper, we focus on the application of REVEAL W to support risk-based configuration control of nuclear power plants. (author)

  13. Spiking cortical model-based nonlocal means method for speckle reduction in optical coherence tomography images

    Science.gov (United States)

    Zhang, Xuming; Li, Liu; Zhu, Fei; Hou, Wenguang; Chen, Xinjian

    2014-06-01

    Optical coherence tomography (OCT) images are usually degraded by significant speckle noise, which will strongly hamper their quantitative analysis. However, speckle noise reduction in OCT images is particularly challenging because of the difficulty in differentiating between noise and the information components of the speckle pattern. To address this problem, the spiking cortical model (SCM)-based nonlocal means method is presented. The proposed method explores self-similarities of OCT images based on rotation-invariant features of image patches extracted by SCM and then restores the speckled images by averaging the similar patches. This method can provide sufficient speckle reduction while preserving image details very well due to its effectiveness in finding reliable similar patches under high speckle noise contamination. When applied to the retinal OCT image, this method provides signal-to-noise ratio improvements of >16 dB with a small 5.4% loss of similarity.
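
    The SCM-derived rotation-invariant patch features are specific to the paper, but the underlying nonlocal-means patch averaging can be illustrated with scikit-image's standard implementation. The sketch below is a generic stand-in: a log transform is applied first on the common assumption that speckle is approximately multiplicative, and the filter parameters are illustrative rather than the authors' settings.

        import numpy as np
        from skimage.restoration import denoise_nl_means, estimate_sigma

        def despeckle_oct(oct_image, patch_size=5, patch_distance=7):
            """Generic nonlocal-means despeckling of an OCT B-scan.

            The log transform treats speckle as approximately multiplicative noise;
            this is a common simplification, not the SCM-based patch matching
            described in the paper."""
            img = oct_image.astype(np.float64)
            log_img = np.log1p(img)
            sigma = estimate_sigma(log_img)           # rough noise level estimate
            den = denoise_nl_means(log_img,
                                   patch_size=patch_size,
                                   patch_distance=patch_distance,
                                   h=0.8 * sigma,
                                   fast_mode=True)
            return np.expm1(den)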

  14. Unified modeling language and design of a case-based retrieval system in medical imaging.

    Science.gov (United States)

    LeBozec, C; Jaulent, M C; Zapletal, E; Degoulet, P

    1998-01-01

    One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM--Images and Diagnosis from Examples in Medicine--a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspect of this approach was selecting the relevant objects of the system according to user requirements and enabling visualization of cases and of the components of the case retrieval process. Further evaluation of the expressiveness of the design document is required, but UML seems to be a promising formalism that improves communication between developers and users.

  15. Image degradation characteristics and restoration based on regularization for diffractive imaging

    Science.gov (United States)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and the corresponding image restoration methods have received little study. In this paper, the image quality degradation model for the diffractive imaging system is first derived mathematically from diffraction theory, and the degradation characteristics are analyzed. On this basis, a novel regularization model for image restoration that contains multiple prior constraints is established. An approach is then presented for solving the resulting equation with coexisting multiple norms and multiple regularization (prior) parameters. Subsequently, a space-variant PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a block-wise treatment of isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and preservation of image detail, and produces satisfactory visual quality. This can provide a scientific basis for applications and has potential prospects for future space applications of diffractive membrane imaging technology.

  16. SU-F-J-178: A Computer Simulation Model Observer for Task-Based Image Quality Assessment in Radiation Therapy

    International Nuclear Information System (INIS)

    Dolly, S; Mutic, S; Anastasio, M; Li, H; Yu, L

    2016-01-01

    Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a singular value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose). Framework flexibility allows for incorporation

  17. GPU based Monte Carlo for PET image reconstruction: detector modeling

    International Nuclear Information System (INIS)

    Légrády; Cserkaszky, Á; Lantos, J.; Patay, G.; Bükki, T.

    2011-01-01

    Given the similarities between visible light transport and neutral particle trajectories, Graphics Processing Units (GPUs) are almost like dedicated hardware designed for Monte Carlo (MC) particle transport calculations. A GPU-based MC gamma transport code has been developed for Positron Emission Tomography iterative image reconstruction, calculating the projection from unknowns to data at each iteration step while taking into account the full physics of the system. This paper describes the simplified scintillation detector modeling and its effect on convergence. (author)

  18. Task-Based Modeling of a 5k Ultra-High-Resolution Medical Imaging System for Digital Breast Tomosynthesis.

    Science.gov (United States)

    Zhao, Chumin; Kanicki, Jerzy

    2017-09-01

    High-resolution, low-noise X-ray detectors based on CMOS active pixel sensor (APS) technology have demonstrated superior imaging performance for digital breast tomosynthesis (DBT). This paper presents a task-based model for a high-resolution medical imaging system to evaluate its ability to detect simulated microcalcifications and masses as lesions for breast cancer. A 3-D cascaded system analysis for a 50- [Formula: see text] pixel pitch CMOS APS X-ray detector was integrated with an object task function, a medical imaging display model, and the human eye contrast sensitivity function to calculate the detectability index and area under the ROC curve (AUC). It was demonstrated that the display pixel pitch and zoom factor should be optimized to improve the AUC for detecting small microcalcifications. In addition, detector electronic noise of smaller than 300 e- and a high display maximum luminance (>1000 cd/cm2) are desirable to distinguish microcalcifications of [Formula: see text] in size. For low contrast mass detection, a medical imaging display with a minimum of 12-bit gray levels is recommended to realize accurate luminance levels. A wide projection angle range of greater than ±30° in combination with the image gray level magnification could improve the mass detectability, especially when the anatomical background noise is high. On the other hand, a narrower projection angle range below ±20° can improve the detection of small, high contrast objects. Due to the low mass contrast and luminance, the ambient luminance should be controlled below 5 cd/ [Formula: see text]. Task-based modeling provides important firsthand imaging performance of the high-resolution CMOS-based medical imaging system that is still at an early stage of development for DBT. The modeling results could guide the prototype design and clinical studies in the future.

  19. Pc-Based Floating Point Imaging Workstation

    Science.gov (United States)

    Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin

    1989-07-01

    The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems and as these systems become available, more and more applications are discovered and more demands are made. From the designer's perspective the challenge to meet these demands forces him to attack the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal to provide powerful and flexible, floating point processing capabilities, along with graphics functions in an affordable package suitable for diverse environments and many applications.

  20. Featured Image: Revealing Hidden Objects with Color

    Science.gov (United States)

    Kohler, Susanna

    2018-02-01

    Stunning color astronomical images can often be the motivation for astronomers to continue slogging through countless data files, calculations, and simulations as we seek to understand the mysteries of the universe. But sometimes the stunning images can, themselves, be the source of scientific discovery. This is the case with the image of Lynds Dark Nebula 673, located in the Aquila constellation, that was captured with the Mayall 4-meter telescope at Kitt Peak National Observatory by a team of scientists led by Travis Rector (University of Alaska Anchorage). After creating the image with a novel color-composite imaging method that reveals faint Hα emission (visible in red in both images here), Rector and collaborators identified the presence of a dozen new Herbig-Haro objects: small cloud patches that are caused when material is energetically flung out from newly born stars. The adapted image shows three of the new objects, HH 118789, aligned with two previously known objects, HH 32 and 332, suggesting they are driven by the same source. For more beautiful images and insight into the authors' discoveries, check out the article linked below. Full view of Lynds Dark Nebula 673 [T. A. Rector (University of Alaska Anchorage) and H. Schweiker (WIYN and NOAO/AURA/NSF)]. Citation: T. A. Rector et al. 2018 ApJ 852 13. doi:10.3847/1538-4357/aa9ce1

  1. Quantitative imaging reveals heterogeneous growth dynamics and treatment-dependent residual tumor distributions in a three-dimensional ovarian cancer model

    Science.gov (United States)

    Celli, Jonathan P.; Rizvi, Imran; Evans, Conor L.; Abu-Yousif, Adnan O.; Hasan, Tayyaba

    2010-09-01

    Three-dimensional tumor models have emerged as valuable in vitro research tools, though the power of such systems as quantitative reporters of tumor growth and treatment response has not been adequately explored. We introduce an approach combining a 3-D model of disseminated ovarian cancer with high-throughput processing of image data for quantification of growth characteristics and cytotoxic response. We developed custom MATLAB routines to analyze longitudinally acquired dark-field microscopy images containing thousands of 3-D nodules. These data reveal a reproducible bimodal log-normal size distribution. Growth behavior is driven by migration and assembly, causing an exponential decay in spatial density concomitant with increasing mean size. At day 10, cultures are treated with either carboplatin or photodynamic therapy (PDT). We quantify size-dependent cytotoxic response for each treatment on a nodule by nodule basis using automated segmentation combined with ratiometric batch-processing of calcein and ethidium bromide fluorescence intensity data (indicating live and dead cells, respectively). Both treatments reduce viability, though carboplatin leaves micronodules largely structurally intact with a size distribution similar to untreated cultures. In contrast, PDT treatment disrupts micronodular structure, causing punctate regions of toxicity, shifting the distribution toward smaller sizes, and potentially increasing vulnerability to subsequent chemotherapeutic treatment.

  2. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    Using the supervised classification technique, both simulated and empirical satellite remote sensing data are used to train and test the Gaussian mixture model algorithm. For the purpose of validating the experiment, the resulting classified satellite image is compared with the ground truth data. For the simulated modelling, ...

  3. A classification model of Hyperion image base on SAM combined decision tree

    Science.gov (United States)

    Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin

    2009-10-01

    Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications of remote sensing. A very high dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space. On the other hand, as the input dimensionality increases the hypothesis space grows exponentially, which makes the classification performance highly unreliable. Classification of hyperspectral images with traditional algorithms is therefore challenging, and new algorithms have to be developed for hyperspectral data classification. The Spectral Angle Mapper (SAM) is a physically based spectral classification that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space whose dimensionality equals the number of bands. The key difficulty is that the SAM threshold must be defined manually, and the classification precision depends on how reasonably that threshold is chosen. To resolve this problem, this paper proposes a new automatic classification model for remote sensing images using SAM combined with a decision tree. It can automatically choose an appropriate SAM threshold and improve the classification precision of SAM based on the analysis of field spectra. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests, and was classified into four different vegetation and soil types. The results show that this method chooses an appropriate SAM threshold and effectively eliminates the disturbance and influence of unwanted objects, thereby improving the classification precision. Compared with the likelihood classification by field survey data, the classification precision of this model
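
    The spectral-angle computation at the core of SAM is compact; the sketch below computes the angle between every pixel spectrum and a set of reference spectra and assigns each pixel to the class with the smallest angle below a threshold. The automatic, decision-tree-based selection of that threshold is the paper's contribution and is not reproduced here; the fixed threshold and array shapes are placeholders.

        import numpy as np

        def spectral_angles(cube, references):
            """Angle (radians) between every pixel spectrum and each reference.

            cube       : (rows, cols, bands) hyperspectral image
            references : (n_classes, bands) reference spectra
            """
            pixels = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
            refs = references.astype(np.float64)
            cos = (pixels @ refs.T) / (
                np.linalg.norm(pixels, axis=1, keepdims=True)
                * np.linalg.norm(refs, axis=1) + 1e-12)
            angles = np.arccos(np.clip(cos, -1.0, 1.0))
            return angles.reshape(cube.shape[0], cube.shape[1], -1)

        def classify_sam(cube, references, threshold=0.1):
            """Assign each pixel to the reference with the smallest angle;
            pixels whose best angle exceeds the threshold stay unclassified (-1)."""
            ang = spectral_angles(cube, references)
            labels = ang.argmin(axis=-1)
            labels[ang.min(axis=-1) > threshold] = -1
            return labels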

  4. A 4D global respiratory motion model of the thorax based on CT images: A proof of concept.

    Science.gov (United States)

    Fayad, Hadi; Gilles, Marlene; Pan, Tinsu; Visvikis, Dimitris

    2018-05-17

    Respiratory motion reduces the sensitivity and specificity of medical images, especially in the thoracic and abdominal areas. It may affect applications such as cancer diagnostic imaging and/or radiation therapy (RT). Solutions to this issue include modeling of the respiratory motion in order to optimize both diagnostic and therapeutic protocols. Personalized motion modeling requires patient-specific four-dimensional (4D) imaging, which in the case of 4D computed tomography (4D CT) acquisition is associated with an increased dose. The goal of this work was to develop a global respiratory motion model capable of relating external patient surface motion to internal structure motion without the need for a patient-specific 4D CT acquisition. The proposed global model is based on principal component analysis and can be adjusted to a given patient anatomy using only one or two static CT images in conjunction with respiration-synchronized motion of the patient's external surface. It is based on the relation between the internal motion, described using deformation fields obtained by registering 4D CT images, and patient surface maps obtained either from optical imaging devices or extracted from CT image-based patient skin segmentation. 4D CT images of six patients were used to generate the global motion model, which was validated by adapting it to four different patients having skin-segmented surfaces and two other patients having surfaces acquired with a time-of-flight camera. The reproducibility of the proposed model was also assessed on two patients with two 4D CT series acquired within 2 weeks of each other. Profile comparison shows the efficacy of the global respiratory motion model and an improvement when using two CT images to adapt the model. This was confirmed by the correlation coefficient, with a mean correlation of 0.9 and 0.95 when using one or two CT images, respectively, comparing acquired with model-generated 4D CT images. For the four patients with segmented

  5. Superpixel-Based Feature for Aerial Image Scene Recognition

    Directory of Open Access Journals (Sweden)

    Hongguang Li

    2018-01-01

    Full Text Available Image scene recognition is a core technology for many aerial remote sensing applications. Different landforms are inputted as different scenes in aerial imaging, and all landform information is regarded as valuable for aerial image scene recognition. However, the conventional features of the Bag-of-Words model are designed using local points or other related information and thus are unable to fully describe landform areas. This limitation cannot be ignored when the aim is to ensure accurate aerial scene recognition. A novel superpixel-based feature is proposed in this study to characterize aerial image scenes. Then, based on the proposed feature, a scene recognition method of the Bag-of-Words model for aerial imaging is designed. The proposed superpixel-based feature that utilizes landform information links top-level superpixel extraction of landforms to bottom-level expression of feature vectors. This characterization technique comprises the following steps: simple linear iterative clustering based superpixel segmentation, adaptive filter bank construction, Lie group-based feature quantification, and visual saliency model-based feature weighting. Experiments on image scene recognition are carried out using real image data captured by an unmanned aerial vehicle (UAV). The recognition accuracy of the proposed superpixel-based feature is 95.1%, which is higher than those of scene recognition algorithms based on other local features.

  6. Image segmentation of overlapping leaves based on Chan–Vese model and Sobel operator

    Directory of Open Access Journals (Sweden)

    Zhibin Wang

    2018-03-01

    Full Text Available To improve the segmentation precision of overlapping crop leaves, this paper presents an effective image segmentation method based on the Chan–Vese model and Sobel operator. The approach consists of three stages. First, a feature that identifies hues with relatively high levels of green is used to extract the region of leaves and remove the background. Second, the Chan–Vese model and improved Sobel operator are implemented to extract the leaf contours and detect the edges, respectively. Third, a target leaf with a complex background and overlapping is extracted by combining the results obtained by the Chan–Vese model and Sobel operator. To verify the effectiveness of the proposed algorithm, a segmentation experiment was performed on 30 images of cucumber leaf. The mean error rate of the proposed method is 0.0428, which is a decrease of 6.54% compared with the mean error rate of the level set method. Experimental results show that the proposed method can accurately extract the target leaf from cucumber leaf images with complex backgrounds and overlapping regions.
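
    A rough sketch of the three stages described above, using scikit-image: a green-excess index as a stand-in for the paper's green-hue feature, morphological Chan-Vese for region extraction, and a Sobel edge map. The combination rule and the parameter values are simplified assumptions, not the authors' exact pipeline.

        import numpy as np
        from skimage import color, filters
        from skimage.segmentation import morphological_chan_vese

        def segment_leaf(rgb):
            """Rough leaf segmentation: green preselection, Chan-Vese region
            extraction, and a Sobel edge map for boundary candidates."""
            rgb = rgb.astype(np.float64) / 255.0
            # Excess-green index, a crude stand-in for the paper's hue feature.
            exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
            gray = color.rgb2gray(rgb)

            # Chan-Vese evolved from an initial level set where green dominates.
            init = exg > 0
            region = morphological_chan_vese(gray, 100, init_level_set=init)

            edges = filters.sobel(gray)                      # edge strength map
            boundary = edges > edges.mean() + 2 * edges.std()
            return region.astype(bool), boundary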

  7. IMAGE-BASED MODELING TECHNIQUES FOR ARCHITECTURAL HERITAGE 3D DIGITALIZATION: LIMITS AND POTENTIALITIES

    Directory of Open Access Journals (Sweden)

    C. Santagati

    2013-07-01

    Full Text Available 3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), the different techniques of image matching, feature extraction and mesh optimization form an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing users to fulfill other tasks on their computers, whereas desktop systems require considerable processing time and heavier approaches. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to Architectural Heritage Documentation. Our approach to this challenging problem is to compare the 3D models from Autodesk 123D Catch with 3D models from terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.

  8. 1H NMR-based metabolic profiling reveals inherent biological variation in yeast and nematode model systems

    International Nuclear Information System (INIS)

    Szeto, Samuel S. W.; Reinke, Stacey N.; Lemire, Bernard D.

    2011-01-01

    The application of metabolomics to human and animal model systems is poised to provide great insight into our understanding of disease etiology and the metabolic changes that are associated with these conditions. However, metabolomic studies have also revealed that there is significant, inherent biological variation in human samples and even in samples from animal model systems where the animals are housed under carefully controlled conditions. This inherent biological variability is an important consideration for all metabolomics analyses. In this study, we examined the biological variation in 1H NMR-based metabolic profiling of two model systems, the yeast Saccharomyces cerevisiae and the nematode Caenorhabditis elegans. Using relative standard deviations (RSD) as a measure of variability, our results reveal that both model systems have significant amounts of biological variation. The C. elegans metabolome possesses greater metabolic variance with average RSD values of 29 and 39%, depending on the food source that was used. The S. cerevisiae exometabolome RSD values ranged from 8% to 12% for the four strains examined. We also determined whether biological variation occurs between pairs of phenotypically identical yeast strains. Multivariate statistical analysis allowed us to discriminate between pair members based on their metabolic phenotypes. Our results highlight the variability of the metabolome that exists even for less complex model systems cultured under defined conditions. We also highlight the efficacy of metabolic profiling for defining these subtle metabolic alterations.

  9. Snake Model Based on Improved Genetic Algorithm in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mingying Zhang

    2016-12-01

    Full Text Available Automatic fingerprint identification technology is a quite mature research field in biometric identification technology. As the preprocessing step in fingerprint identification, fingerprint segmentation can improve the accuracy of fingerprint feature extraction, and also reduce the time of fingerprint preprocessing, which has a great significance in improving the performance of the whole system. Based on the analysis of the commonly used methods of fingerprint segmentation, the existing segmentation algorithm is improved in this paper. The snake model is used to segment the fingerprint image. Additionally, it is improved by using the global optimization of the improved genetic algorithm. Experimental results show that the algorithm has obvious advantages both in the speed of image segmentation and in the segmentation effect.

  10. Fast single image dehazing based on image fusion

    Science.gov (United States)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often have faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method assumes that the haze-induced degradation level within each region is the same, which is similar to the Retinex theory, and uses a simple Gaussian filter to get the coarse medium transmission. Then, pixel-level fusion is achieved between the initial medium transmission and the coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and the complexity of the proposed method is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity compared to some state-of-the-art methods.
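
    The sketch below illustrates the ingredients named above: a dark-channel-prior transmission estimate, a Gaussian-smoothed coarse transmission, a simple pixel-level fusion, and inversion of the haze imaging model. The fusion rule (a plain average here) and all parameter values are simplified assumptions rather than the authors' method.

        import numpy as np
        from scipy.ndimage import minimum_filter, gaussian_filter

        def dark_channel(img, patch=15):
            """Per-pixel minimum over color channels and a local window."""
            return minimum_filter(img.min(axis=2), size=patch)

        def dehaze(img, omega=0.95, t0=0.1, patch=15, sigma=10):
            """Simplified single-image dehazing based on the dark channel prior."""
            img = img.astype(np.float64) / 255.0
            dark = dark_channel(img, patch)
            # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
            idx = dark >= np.quantile(dark, 0.999)
            A = img[idx].mean(axis=0)

            t_init = 1.0 - omega * dark_channel(img / A, patch)   # initial transmission
            t_coarse = gaussian_filter(t_init, sigma)             # coarse transmission
            t = np.maximum(0.5 * (t_init + t_coarse), t0)         # fused and floored

            # Invert the haze imaging model I = J*t + A*(1 - t).
            return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)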

  11. Normal and Pathological NCAT Image and Phantom Data Based on Physiologically Realistic Left Ventricle Finite-Element Models

    International Nuclear Information System (INIS)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui, Benjamin M.W.; Gullberg, Grant T.

    2006-01-01

    The 4D NURBS-based Cardiac-Torso (NCAT) phantom, which provides a realistic model of the normal human anatomy and cardiac and respiratory motions, is used in medical imaging research to evaluate and improve imaging devices and techniques, especially dynamic cardiac applications. One limitation of the phantom is that it lacks the ability to accurately simulate altered functions of the heart that result from cardiac pathologies such as coronary artery disease (CAD). The goal of this work was to enhance the 4D NCAT phantom by incorporating a physiologically based, finite-element (FE) mechanical model of the left ventricle (LV) to simulate both normal and abnormal cardiac motions. The geometry of the FE mechanical model was based on gated high-resolution x-ray multi-slice computed tomography (MSCT) data of a healthy male subject. The myocardial wall was represented as transversely isotropic hyperelastic material, with the fiber angle varying from -90 degrees at the epicardial surface, through 0 degrees at the mid-wall, to 90 degrees at the endocardial surface. A time varying elastance model was used to simulate fiber contraction, and physiological intraventricular systolic pressure-time curves were applied to simulate the cardiac motion over the entire cardiac cycle. To demonstrate the ability of the FE mechanical model to accurately simulate the normal cardiac motion as well as abnormal motions indicative of CAD, a normal case and two pathologic cases were simulated and analyzed. In the first pathologic model, a subendocardial anterior ischemic region was defined. A second model was created with a transmural ischemic region defined in the same location. The FE based deformations were incorporated into the 4D NCAT cardiac model through the control points that define the cardiac structures in the phantom which were set to move according to the predictions of the mechanical model. A simulation study was performed using the FE-NCAT combination to investigate how the

  12. An image-based skeletal tissue model for the ICRP reference newborn

    Energy Technology Data Exchange (ETDEWEB)

    Pafundi, Deanna; Lee, Choonsik; Bolch, Wesley [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL (United States); Watchman, Christopher; Bourke, Vincent [Department of Radiation Oncology, University of Arizona, Tucson, AZ (United States); Aris, John [Department of Anatomy and Cell Biology, University of Florida, Gainesville, FL (United States); Shagina, Natalia [Urals Research Center for Radiation Medicine, Chelyabinsk (Russian Federation); Harrison, John; Fell, Tim [Radiation Protection Division, Health Protection Agency, Chilton (United Kingdom)], E-mail: wbolch@ufl.edu

    2009-07-21

    Hybrid phantoms represent a third generation of computational models of human anatomy needed for dose assessment in both external and internal radiation exposures. Recently, we presented the first whole-body hybrid phantom of the ICRP reference newborn with a skeleton constructed from both non-uniform rational B-spline and polygon-mesh surfaces (Lee et al 2007 Phys. Med. Biol. 52 3309-33). The skeleton in that model included regions of cartilage and fibrous connective tissue, with the remainder given as a homogenous mixture of cortical and trabecular bone, active marrow and miscellaneous skeletal tissues. In the present study, we present a comprehensive skeletal tissue model of the ICRP reference newborn to permit a heterogeneous representation of the skeleton in that hybrid phantom set, both male and female, that explicitly includes a delineation of cortical bone so that marrow shielding effects are correctly modeled for low-energy photons incident upon the newborn skeleton. Data sources for the tissue model were threefold. First, skeletal site-dependent volumes of homogeneous bone were obtained from whole-cadaver CT image analyses. Second, selected newborn bone specimens were acquired at autopsy and subjected to micro-CT image analysis to derive model parameters of the marrow cavity and bone trabecular 3D microarchitecture. Third, data given in ICRP Publications 70 and 89 were selected to match reference values on total skeletal tissue mass. Active marrow distributions were found to be in reasonable agreement with those given previously by the ICRP. However, significant differences were seen in total skeletal and site-specific masses of trabecular and cortical bone between the current and ICRP newborn skeletal tissue models. The latter utilizes an age-independent ratio of 80%/20% cortical and trabecular bone for the reference newborn. In the current study, a ratio closer to 40%/60% is used based upon newborn CT and micro-CT skeletal image analyses. These changes in

  13. Neutron Imaging Reveals Internal Plant Hydraulic Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Warren, Jeffrey [ORNL; Bilheux, Hassina Z [ORNL; Kang, Misun [ORNL; Voisin, Sophie [ORNL; Cheng, Chu-Lin [ORNL; Horita, Jusuke [ORNL; Perfect, Edmund [ORNL

    2013-01-01

    Many terrestrial ecosystem processes are constrained by water availability and transport within the soil. Knowledge of plant water fluxes is thus critical for assessing mechanistic processes linked to biogeochemical cycles, yet resolution of root structure and xylem water transport dynamics has been a particularly daunting task for the ecologist. Through neutron imaging, we demonstrate the ability to non-invasively monitor individual root functionality and water fluxes within Zea mays L. (maize) and Panicum virgatum L. (switchgrass) seedlings growing in a sandy medium. Root structure and growth were readily imaged by neutron radiography and neutron computed tomography. Seedlings were irrigated with water or deuterium oxide and imaged through time as a growth lamp was cycled on to alter leaf demand for water. Sub-millimeter scale resolution reveals timing and magnitudes of root water uptake, redistribution within the roots, and root-shoot hydraulic linkages, relationships not well characterized by other techniques.

  14. Normal and Pathological NCAT Image and PhantomData Based onPhysiologically Realistic Left Ventricle Finite-Element Models

    Energy Technology Data Exchange (ETDEWEB)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui,Benjamin M.W.; Gullberg, Grant T.

    2006-08-02

    The 4D NURBS-based Cardiac-Torso (NCAT) phantom, which provides a realistic model of the normal human anatomy and cardiac and respiratory motions, is used in medical imaging research to evaluate and improve imaging devices and techniques, especially dynamic cardiac applications. One limitation of the phantom is that it lacks the ability to accurately simulate altered functions of the heart that result from cardiac pathologies such as coronary artery disease (CAD). The goal of this work was to enhance the 4D NCAT phantom by incorporating a physiologically based, finite-element (FE) mechanical model of the left ventricle (LV) to simulate both normal and abnormal cardiac motions. The geometry of the FE mechanical model was based on gated high-resolution x-ray multi-slice computed tomography (MSCT) data of a healthy male subject. The myocardial wall was represented as transversely isotropic hyperelastic material, with the fiber angle varying from -90 degrees at the epicardial surface, through 0 degrees at the mid-wall, to 90 degrees at the endocardial surface. A time varying elastance model was used to simulate fiber contraction, and physiological intraventricular systolic pressure-time curves were applied to simulate the cardiac motion over the entire cardiac cycle. To demonstrate the ability of the FE mechanical model to accurately simulate the normal cardiac motion as well as abnormal motions indicative of CAD, a normal case and two pathologic cases were simulated and analyzed. In the first pathologic model, a subendocardial anterior ischemic region was defined. A second model was created with a transmural ischemic region defined in the same location. The FE based deformations were incorporated into the 4D NCAT cardiac model through the control points that define the cardiac structures in the phantom which were set to move according to the predictions of the mechanical model. A simulation study was performed using the FE-NCAT combination to investigate how the differences in contractile function

  15. Model-based image reconstruction for four-dimensional PET

    International Nuclear Information System (INIS)

    Li Tianfang; Thorndyke, Brian; Schreibmann, Eduard; Yang Yong; Xing Lei

    2006-01-01

    Positron emission tomography (PET) is useful in diagnosis and radiation treatment planning for a variety of cancers. For patients with cancers in the thoracic or upper abdominal region, respiratory motion produces large distortions in the tumor shape and size, affecting the accuracy of both diagnosis and treatment. Four-dimensional (4D) (gated) PET aims to reduce the motion artifacts and to provide accurate measurement of the tumor volume and the tracer concentration. A major issue in 4D PET is the lack of statistics. Since the collected photons are divided into several frames in the 4D PET scan, the quality of each reconstructed frame degrades as the number of frames increases. The increased noise in each frame heavily degrades the quantitative accuracy of PET imaging. In this work, we propose a method to enhance the performance of 4D PET by developing a new technique of 4D PET reconstruction that incorporates an organ motion model derived from 4D-CT images. The method is based on the well-known maximum-likelihood expectation-maximization (ML-EM) algorithm. During the forward- and backward-projection steps of the ML-EM iterations, all projection data acquired at different phases are combined together to update the emission map with the aid of the deformable model; the statistics are therefore greatly improved. The proposed algorithm was first evaluated with computer simulations using a mathematical dynamic phantom. An experiment with a moving physical phantom was then carried out to demonstrate the accuracy of the proposed method and the increase in signal-to-noise ratio over three-dimensional PET. Finally, the 4D PET reconstruction was applied to a patient case
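
    The method builds on the standard ML-EM update; the sketch below shows that update for a generic system matrix, without the 4D-CT-derived deformation model that the paper incorporates into the forward and backward projections. Matrix sizes and the toy data are placeholders.

        import numpy as np

        def mlem(A, y, n_iter=20, eps=1e-12):
            """Basic ML-EM reconstruction for emission tomography.

            A : (n_bins, n_voxels) system matrix (detection probabilities)
            y : (n_bins,) measured counts
            Returns the estimated emission map x."""
            x = np.ones(A.shape[1])
            sens = A.sum(axis=0) + eps          # sensitivity image (back-projection of ones)
            for _ in range(n_iter):
                proj = A @ x + eps              # forward projection of current estimate
                x *= (A.T @ (y / proj)) / sens  # multiplicative EM update
            return x

        # Toy usage: random system matrix and a Poisson-noisy measurement.
        rng = np.random.default_rng(1)
        A = rng.random((200, 50))
        x_true = rng.random(50)
        y = rng.poisson(A @ x_true)
        x_hat = mlem(A, y)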

  16. Diffraction enhanced imaging of a rat model of gastric acid aspiration pneumonitis.

    Science.gov (United States)

    Connor, Dean M; Zhong, Zhong; Foda, Hussein D; Wiebe, Sheldon; Parham, Christopher A; Dilmanian, F Avraham; Cole, Elodia B; Pisano, Etta D

    2011-12-01

    Diffraction-enhanced imaging (DEI) is a type of phase contrast x-ray imaging that has improved image contrast at a lower dose than conventional radiography for many imaging applications, but no studies have been done to determine if DEI might be useful for diagnosing lung injury. The goals of this study were to determine if DEI could differentiate between healthy and injured lungs for a rat model of gastric aspiration and to compare diffraction-enhanced images with chest radiographs. Radiographs and diffraction-enhanced chest images of adult Sprague Dawley rats were obtained before and 4 hours after the aspiration of 0.4 mL/kg of 0.1 mol/L hydrochloric acid. Lung damage was confirmed with histopathology. The radiographs and diffraction-enhanced peak images revealed regions of atelectasis in the injured rat lung. The diffraction-enhanced peak images revealed the full extent of the lung with improved clarity relative to the chest radiographs, especially in the portion of the lower lobe that extended behind the diaphragm on the anteroposterior projection. For a rat model of gastric acid aspiration, DEI is capable of distinguishing between a healthy and an injured lung and more clearly than radiography reveals the full extent of the lung and the lung damage. Copyright © 2011 AUR. All rights reserved.

  17. Software for medical image based phantom modelling

    International Nuclear Information System (INIS)

    Possani, R.G.; Massicano, F.; Coelho, T.S.; Yoriyaz, H.

    2011-01-01

    The latest treatment planning systems depend strongly on CT images, so the tendency is for dosimetry procedures in nuclear medicine therapy to also be based on images, such as magnetic resonance imaging (MRI) or computed tomography (CT), to extract anatomical and histological information, as well as on functional imaging or activity maps such as PET or SPECT. This information, together with radiation transport simulation software, is used to estimate the internal dose in patients undergoing treatment in nuclear medicine. This work aims to re-engineer the SCMS software, an interface between the Monte Carlo code MCNP and the medical images that carry information about the patient under treatment. In other words, the necessary information contained in the images is interpreted and presented in a specific format to the Monte Carlo code MCNP to perform the simulation of radiation transport. Therefore, the user does not need to understand the complex process of inputting data to MCNP, as the SCMS is responsible for automatically constructing the anatomical data of the patient as well as the radioactive source data. The SCMS was originally developed in Fortran-77. In this work it was rewritten in an object-oriented language (JAVA). New features and data options have also been incorporated into the software. Thus, the new software has a number of improvements, such as an intuitive GUI and a menu for the selection of the energy spectrum corresponding to a specific radioisotope stored in an XML data bank. The new version also supports new materials, and the user can specify an image region of interest for the calculation of absorbed dose. (author)

  18. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs together with blind image restoration, simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally, both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorization, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
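
    The low-rank property can be illustrated with a plain truncated SVD: corresponding local color samples from several photographs of the same scene are stacked row-wise and projected onto the leading singular vectors. The paper's robust separation of sparse errors is replaced here by this simpler approximation, and the toy data are placeholders.

        import numpy as np

        def low_rank_approx(M, rank=2):
            """Truncated-SVD low-rank approximation of a stacked color matrix.

            M : (n_images, n_samples) matrix whose rows are corresponding local
                color samples taken from the same scene region in each photograph."""
            u, s, vt = np.linalg.svd(M, full_matrices=False)
            s[rank:] = 0.0                      # keep only the leading singular values
            return (u * s) @ vt

        # Toy usage: three "exposures" of the same patch plus gross corruption.
        rng = np.random.default_rng(2)
        base = rng.random(100)
        M = np.vstack([0.8 * base, 1.0 * base, 1.2 * base])
        M[1, :5] = 1.0                          # a few corrupted pixels
        M_clean = low_rank_approx(M, rank=1)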

  19. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state of the art and new trends in image analysis and biomechanics. It covers a wide range of scientific and cultural topics, from remodeling of bone tissue under mechanical stimulus to optimizing the performance of sports equipment, through patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image-based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease.   The book will be of interest to researchers, PhD students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, and experimental analysis.

  20. Image-based reflectance conversion of ASTER and IKONOS ...

    African Journals Online (AJOL)

    Spectral signatures derived from different image-based models for ASTER and IKONOS were inspected visually as first departure. This was followed by comparison of the total accuracy and Kappa index computed from supervised classification of images that were derived from different image-based atmospheric correction ...

  1. AUGUSTO'S Sundial: Image-Based Modeling for Reverse Engeneering Purposes

    Science.gov (United States)

    Baiocchi, V.; Barbarella, M.; Del Pizzo, S.; Giannone, F.; Troisi, S.; Piccaro, C.; Marcantonio, D.

    2017-02-01

    A photogrammetric survey of a unique archaeological site is reported in this paper. The survey was performed using both a panoramic image-based solution and a classical procedure. The panoramic image-based solution was carried out employing a commercial system, the Trimble V10 Imaging Rover (IR). This instrument is an integrated camera system that captures 360-degree digital panoramas, composed of 12 images, with a single push. The direct comparison of the point clouds obtained with the traditional photogrammetric procedure and with the V10 stations, using the same GCP coordinates, was carried out in Cloud Compare, an open-source software package that compares two point clouds and supplies all the main statistical data. The site is a portion of the dial plate of the "Horologium Augusti", inaugurated in 9 B.C.E. in the area of Campo Marzio and still intact in the same position, in a cellar of a building in Rome, around 7 meters below the present ground level.

  2. Imaging mass spectrometry reveals elevated nigral levels of dynorphin neuropeptides in L-DOPA-induced dyskinesia in rat model of Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Anna Ljungdahl

    Full Text Available L-DOPA-induced dyskinesia is a troublesome complication of L-DOPA pharmacotherapy of Parkinson's disease and has been associated with disturbed brain opioid transmission. However, so far the results of clinical and preclinical studies on the effects of opioids agonists and antagonists have been contradictory at best. Prodynorphin mRNA levels correlate well with the severity of dyskinesia in animal models of Parkinson's disease; however the identities of the actual neuroactive opioid effectors in their target basal ganglia output structures have not yet been determined. For the first time MALDI-TOF imaging mass spectrometry (IMS was used for unbiased assessment and topographical elucidation of prodynorphin-derived peptides in the substantia nigra of a unilateral rat model of Parkinson's disease and L-DOPA induced dyskinesia. Nigral levels of dynorphin B and alpha-neoendorphin strongly correlated with the severity of dyskinesia. Even if dynorphin peptide levels were elevated in both the medial and lateral part of the substantia nigra, MALDI IMS analysis revealed that the most prominent changes were localized to the lateral part of the substantia nigra. MALDI IMS is advantageous compared with traditional molecular methods, such as radioimmunoassay, in that neither the molecular identity analyzed, nor the specific localization needs to be predetermined. Indeed, MALDI IMS revealed that the bioconverted metabolite leu-enkephalin-arg also correlated positively with severity of dyskinesia. Multiplexing DynB and leu-enkephalin-arg ion images revealed small (0.25 by 0.5 mm nigral subregions with complementing ion intensities, indicating localized peptide release followed by bioconversion. The nigral dynorphins associated with L-DOPA-induced dyskinesia were not those with high affinity to kappa opioid receptors, but consisted of shorter peptides, mainly dynorphin B and alpha-neoendorphin that are known to bind and activate mu and delta opioid receptors

  3. An Efficient Evolutionary Based Method For Image Segmentation

    OpenAIRE

    Aslanzadeh, Roohollah; Qazanfari, Kazem; Rahmati, Mohammad

    2017-01-01

    The goal of this paper is to present a new efficient image segmentation method based on evolutionary computation, a model inspired by human behavior. Based on this model, a four-layer process for image segmentation is proposed using the split/merge approach. In the first layer, an image is split into numerous regions using the watershed algorithm. In the second layer, a co-evolutionary process is applied to form the centers of final segments by merging similar primary regions. In the t...

  4. StatSTEM: An efficient approach for accurate and precise model-based quantification of atomic resolution electron microscopy images

    Energy Technology Data Exchange (ETDEWEB)

    De Backer, A.; Bos, K.H.W. van den [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Van den Broek, W. [AG Strukturforschung/Elektronenmikroskopie, Institut für Physik, Humboldt-Universität zu Berlin, Newtonstraße 15, 12489 Berlin (Germany); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Van Aert, S., E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)

    2016-12-15

    An efficient model-based estimation algorithm is introduced to quantify the atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns, fully accounting for overlap between neighbouring columns and enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which measurements of the atomic column positions and scattering cross-sections from annular dark field (ADF) STEM images can be estimated have been investigated. The highest attainable precision is reached even for low dose images. Furthermore, the advantages of the model-based approach taking into account overlap between neighbouring columns are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their distance and for the estimation of the scattering cross-section, which is compared to the integrated intensity from a Voronoi cell. To provide end-users with this well-established quantification method, a user-friendly program, StatSTEM, has been developed, which is freely available under a GNU public license. - Highlights: • An efficient model-based method for quantitative electron microscopy is introduced. • Images are modelled as a superposition of 2D Gaussian peaks. • Overlap between neighbouring columns is taken into account. • Structure parameters can be obtained with the highest precision and accuracy. • StatSTEM, a user-friendly program (GNU public license), is developed.
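
    As the highlights note, the images are modelled as a superposition of 2D Gaussian peaks fitted by least squares. The sketch below fits a single, isolated 2D Gaussian peak to an image segment with SciPy; it is only an illustration of that building block (StatSTEM additionally handles overlap between neighbouring columns) and does not use the StatSTEM code.

```python
# Minimal sketch of least-squares fitting of one 2D Gaussian peak to a small
# image segment, the basic building block of Gaussian-superposition models.
import numpy as np
from scipy.optimize import least_squares

def gaussian_2d(params, x, y):
    x0, y0, height, width, background = params
    return background + height * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * width ** 2))

def fit_column(segment):
    """Fit one atomic column in a small image segment (2D NumPy array)."""
    ny, nx = segment.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Crude initial guess: peak position, amplitude and width from the data.
    y0, x0 = np.unravel_index(np.argmax(segment), segment.shape)
    p0 = [float(x0), float(y0), segment.max() - segment.min(), 2.0, segment.min()]
    residuals = lambda p: (gaussian_2d(p, x, y) - segment).ravel()
    return least_squares(residuals, p0).x

# Example with a synthetic 15 x 15 pixel column image.
yy, xx = np.mgrid[0:15, 0:15]
synthetic = 5.0 + 100.0 * np.exp(-((xx - 7.3) ** 2 + (yy - 6.8) ** 2) / (2 * 2.5 ** 2))
print(fit_column(synthetic))   # should recover approximately [7.3, 6.8, 100, 2.5, 5]
```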

  5. A CCA+ICA based model for multi-task brain imaging data fusion and its application to schizophrenia.

    Science.gov (United States)

    Sui, Jing; Adali, Tülay; Pearlson, Godfrey; Yang, Honghui; Sponheim, Scott R; White, Tonya; Calhoun, Vince D

    2010-05-15

    Collection of multiple-task brain imaging data from the same subject has now become common practice in medical imaging studies. In this paper, we propose a simple yet effective model, "CCA+ICA", as a powerful tool for multi-task data fusion. This joint blind source separation (BSS) model takes advantage of two multivariate methods, canonical correlation analysis and independent component analysis, to achieve both high estimation accuracy and the correct connection between two datasets in which sources can have either common or distinct between-dataset correlation. In both simulated and real fMRI applications, we compare the proposed scheme with other joint BSS models and examine the different modeling assumptions. The contrast images of two tasks, sensorimotor (SM) and Sternberg working memory (SB), derived from a general linear model (GLM), were chosen to provide real multi-task fMRI data, both of which were collected from 50 schizophrenia patients and 50 healthy controls. When examining the relationship with duration of illness, CCA+ICA revealed a significant negative correlation with temporal lobe activation. Furthermore, CCA+ICA located the sensorimotor cortex as the group-discriminative region for both tasks and identified the superior temporal gyrus in SM and the prefrontal cortex in SB as task-specific group-discriminative brain networks. In summary, we compared the new approach to several competitive methods with different assumptions, and found consistent results regarding each of their hypotheses on connecting the two tasks. Such an approach fills a gap in existing multivariate methods for identifying biomarkers from brain imaging data.
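
    A minimal sketch of the canonical correlation step of such a fusion scheme is shown below, using scikit-learn's CCA on random placeholder matrices standing in for the two tasks' subject-by-feature contrast images; the ICA stage of the actual CCA+ICA method is omitted.

```python
# Minimal sketch of the CCA step of a CCA+ICA style fusion.  X and Y stand for
# subject-by-feature matrices of two tasks (e.g. sensorimotor and Sternberg
# contrast images); here they are random placeholders.  The full method would
# further unmix the canonical variates with ICA, which is not shown.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))   # 100 subjects x 50 features, task 1
Y = rng.standard_normal((100, 50))   # 100 subjects x 50 features, task 2

cca = CCA(n_components=5)
X_c, Y_c = cca.fit_transform(X, Y)

# Correlation between paired canonical variates of the two datasets.
for k in range(5):
    r = np.corrcoef(X_c[:, k], Y_c[:, k])[0, 1]
    print(f"component {k}: canonical correlation = {r:.3f}")
```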

  6. Physics-based shape matching for intraoperative image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Suwelack, Stefan, E-mail: suwelack@kit.edu; Röhl, Sebastian; Bodenstedt, Sebastian; Reichard, Daniel; Dillmann, Rüdiger; Speidel, Stefanie [Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Adenauerring 2, Karlsruhe 76131 (Germany); Santos, Thiago dos; Maier-Hein, Lena [Computer-assisted Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg 69120 (Germany); Wagner, Martin; Wünscher, Josephine; Kenngott, Hannes; Müller, Beat P. [General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 110, Heidelberg 69120 (Germany)

    2014-11-01

    Purpose: Soft-tissue deformations can severely degrade the validity of preoperative planning data during computer assisted interventions. Intraoperative imaging such as stereo endoscopic, time-of-flight, or laser range scanner data can be used to compensate for these movements. In this context, the intraoperative surface has to be matched to the preoperative model. The shape matching is especially challenging in the intraoperative setting due to noisy sensor data, only partially visible surfaces, ambiguous shape descriptors, and real-time requirements. Methods: A novel physics-based shape matching (PBSM) approach to register intraoperatively acquired surface meshes to preoperative planning data is proposed. The key idea of the method is to describe the nonrigid registration process as an electrostatic–elastic problem, where an elastic body (preoperative model) that is electrically charged slides into an oppositely charged rigid shape (intraoperative surface). It is shown that the corresponding energy functional can be efficiently solved using the finite element (FE) method. It is also demonstrated how PBSM can be combined with rigid registration schemes for robust nonrigid registration of arbitrarily aligned surfaces. Furthermore, it is shown how the approach can be combined with landmark-based methods, and its application to image guidance in laparoscopic interventions is outlined. Results: A thorough analysis of the PBSM scheme based on in silico and phantom data is presented. Simulation studies on several liver models show that the approach is robust to the initial rigid registration and to parameter variations. The studies also reveal that the method achieves submillimeter registration accuracy (mean error between 0.32 and 0.46 mm). An unoptimized, single core implementation of the approach achieves near real-time performance (2 TPS, 7–19 s total registration time). It outperforms established methods in terms of speed and accuracy. Furthermore, it is shown that the

  7. pyBSM: A Python package for modeling imaging systems

    Science.gov (United States)

    LeMaster, Daniel A.; Eismann, Michael T.

    2017-05-01

    There are components that are common to all electro-optical and infrared imaging system performance models. The purpose of the Python Based Sensor Model (pyBSM) is to provide open source access to these functions for other researchers to build upon. Specifically, pyBSM implements much of the capability found in the ERIM Image Based Sensor Model (IBSM) V2.0 along with some improvements. The paper also includes two use-case examples. First, performance of an airborne imaging system is modeled using the General Image Quality Equation (GIQE). The results are then decomposed into factors affecting noise and resolution. Second, pyBSM is paired with openCV to evaluate performance of an algorithm used to detect objects in an image.
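
    As one illustration of the kind of calculation such a sensor model automates, the sketch below evaluates a General Image Quality Equation (GIQE) version 4 NIIRS estimate. The coefficients are the commonly published GIQE-4 values; this is not a call into the pyBSM API, and the input values are hypothetical.

```python
# Hedged sketch of a GIQE version 4 NIIRS estimate, illustrating the kind of
# image-quality calculation a sensor performance model automates.  The
# coefficients below are the commonly published GIQE-4 values.
import math

def giqe4_niirs(gsd_inches, rer, overshoot_h, gain_g, snr):
    """GSD in inches; RER and overshoot H are edge-response metrics; G/SNR is noise gain over SNR."""
    if rer >= 0.9:
        a, b = 3.32, 1.559
    else:
        a, b = 3.16, 2.817
    return (10.251
            - a * math.log10(gsd_inches)
            + b * math.log10(rer)
            - 0.656 * overshoot_h
            - 0.344 * gain_g / snr)

# Example (hypothetical inputs): ~0.5 m GSD (about 19.7 in), sharp, low-noise system.
print(giqe4_niirs(gsd_inches=19.7, rer=0.92, overshoot_h=1.0, gain_g=10.0, snr=50.0))
```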

  8. Achilles tendons from decorin- and biglycan-null mouse models have inferior mechanical and structural properties predicted by an image-based empirical damage model.

    Science.gov (United States)

    Gordon, J A; Freedman, B R; Zuskov, A; Iozzo, R V; Birk, D E; Soslowsky, L J

    2015-07-16

    Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure-function relationships. Small leucine rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs, either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn(-/-)) and biglycan-null (Bgn(-/-)) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image-based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent, and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Applying an animal model to quantify the uncertainties of an image-based 4D-CT algorithm

    International Nuclear Information System (INIS)

    Pierce, Greg; Battista, Jerry; Wang, Kevin; Lee, Ting-Yim

    2012-01-01

    The purpose of this paper is to use an animal model to quantify the spatial displacement uncertainties and test the fundamental assumptions of an image-based 4D-CT algorithm in vivo. Six female Landrace cross pigs were ventilated and imaged using a 64-slice CT scanner (GE Healthcare) operating in axial cine mode. The breathing amplitude pattern of the pigs was varied by periodically crimping the ventilator gas return tube during the image acquisition. The image data were used to determine the displacement uncertainties that result from matching CT images at the same respiratory phase using normalized cross correlation (NCC) as the matching criterion. Additionally, the ability to match the respiratory phase of a 4.0 cm subvolume of the thorax to a reference subvolume using only a single overlapping 2D slice from the two subvolumes was tested by varying the location of the overlapping matching image within the subvolume and examining the effect this had on the displacement relative to the reference volume. The displacement uncertainty resulting from matching two respiratory images using NCC ranged from 0.54 ± 0.10 mm per match to 0.32 ± 0.16 mm per match in the lung of the animal. The uncertainty was found to propagate in quadrature, increasing with the number of NCC matches performed. In comparison, the minimum displacement achievable if two respiratory images were matched perfectly in phase ranged from 0.77 ± 0.06 to 0.93 ± 0.06 mm in the lung. The assumption that subvolumes from separate cine scans can be matched by matching a single overlapping 2D image between the two subvolumes was validated. An in vivo animal model was developed to test an image-based 4D-CT algorithm. The uncertainties associated with using NCC to match the respiratory phase of two images were quantified, and the assumption that a 4.0 cm 3D subvolume can be matched in respiratory phase by matching a single 2D image from the 3D subvolume was validated. The work in this paper shows the image-based 4D
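
    The matching criterion used throughout is normalized cross correlation between two images at nominally the same respiratory phase. A minimal NumPy sketch of that score, on placeholder arrays, is given below.

```python
# Minimal sketch of the normalized cross correlation (NCC) score used as a
# matching criterion between two CT images at (approximately) the same
# respiratory phase.  `slice_a` and `slice_b` are placeholder 2D arrays.
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally shaped images, in [-1, 1]."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom)

rng = np.random.default_rng(1)
slice_a = rng.random((128, 128))
slice_b = slice_a + 0.05 * rng.random((128, 128))    # nearly identical phase
print(f"NCC = {ncc(slice_a, slice_b):.4f}")          # close to 1 for well matched phases
```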

  10. Despeckling Polsar Images Based on Relative Total Variation Model

    Science.gov (United States)

    Jiang, C.; He, X. F.; Yang, L. J.; Jiang, J.; Wang, D. Y.; Yuan, Y.

    2018-04-01

    The relative total variation (RTV) algorithm, which can effectively separate structure information from texture in an image, is employed to extract the main structures of the image. However, applying RTV directly to polarimetric SAR (PolSAR) image filtering does not preserve the polarimetric information. A new RTV approach based on the complex Wishart distribution is therefore proposed, taking into account the polarimetric properties of PolSAR data. The proposed polarization RTV (PolRTV) algorithm can be used for PolSAR image filtering. The L-band Airborne SAR (AIRSAR) San Francisco data set is used to demonstrate the effectiveness of the proposed algorithm in speckle suppression, structural information preservation, and polarimetric property preservation.

  11. Live imaging-based model selection reveals periodic regulation of the stochastic G1/S phase transition in vertebrate axial development.

    Directory of Open Access Journals (Sweden)

    Mayu Sugiyama

    2014-12-01

    Full Text Available In multicellular organism development, a stochastic cellular response is observed even when a population of cells is exposed to the same environmental conditions. Retrieving the spatiotemporal regulatory mode hidden in the heterogeneous cellular behavior is a challenging task. The G1/S transition observed in cell cycle progression is a highly stochastic process. By taking advantage of a fluorescent cell cycle indicator, Fucci technology, we aimed to unveil a hidden regulatory mode of cell cycle progression in developing zebrafish. Fluorescence live imaging of Cecyil, a zebrafish line genetically expressing Fucci, demonstrated that newly formed notochordal cells from the posterior tip of the embryonic mesoderm exhibited the red (G1) fluorescence signal in the developing notochord. Prior to their initial vacuolation, these cells showed a fluorescence color switch from red to green, indicating G1/S transitions. This G1/S transition did not occur in a synchronous manner, but rather exhibited a stochastic process, since a mixed population of red and green cells was always inserted between newly formed red (G1) notochordal cells and vacuolating green cells. We termed this mixed population of notochordal cells the G1/S transition window. We first performed quantitative analyses of live imaging data and a numerical estimation of the probability of the G1/S transition, which demonstrated the existence of a posteriorly traveling regulatory wave of the G1/S transition window. To obtain a better understanding of this regulatory mode, we constructed a mathematical model and performed a model selection by comparing the results obtained from the models with those from the experimental data. Our analyses demonstrated that the stochastic G1/S transition window in the notochord travels posteriorly in a periodic fashion, with double the periodicity of the neighboring paraxial mesoderm segmentation. This approach may have implications for the characterization of

  12. Joint model of motion and anatomy for PET image reconstruction

    International Nuclear Information System (INIS)

    Qiao Feng; Pan Tinsu; Clark, John W. Jr.; Mawlawi, Osama

    2007-01-01

    Anatomy-based positron emission tomography (PET) image enhancement techniques have been shown to have the potential for improving PET image quality. However, these techniques assume an accurate alignment between the anatomical and the functional images, which is not always valid when imaging the chest due to respiratory motion. In this article, we present a joint model of both motion and anatomical information by integrating a motion-incorporated PET imaging system model with an anatomy-based maximum a posteriori image reconstruction algorithm. The mismatched anatomical information due to motion can thus be effectively utilized through this joint model. A computer simulation and a phantom study were conducted to assess the efficacy of the joint model, whereby motion and anatomical information were either modeled separately or combined. The reconstructed images in each case were compared to corresponding reference images obtained using a quadratic image prior based maximum a posteriori reconstruction algorithm for quantitative accuracy. Results of these studies indicated that while modeling anatomical information or motion alone improved the PET image quantitation accuracy, a larger improvement in accuracy was achieved when using the joint model. In the computer simulation study and using similar image noise levels, the improvement in quantitation accuracy compared to the reference images was 5.3% and 19.8% when using anatomical or motion information alone, respectively, and 35.5% when using the joint model. In the phantom study, these results were 5.6%, 5.8%, and 19.8%, respectively. These results suggest that motion compensation is important in order to effectively utilize anatomical information in chest imaging using PET. The joint motion-anatomy model presented in this paper provides a promising solution to this problem

  13. FITTING OF PARAMETRIC BUILDING MODELS TO OBLIQUE AERIAL IMAGES

    Directory of Open Access Journals (Sweden)

    U. S. Panday

    2012-09-01

    Full Text Available In the literature and in photogrammetric workstations, many approaches and systems to automatically reconstruct buildings from remote sensing data are described and available. Those building models are used, for instance, in city modeling or in a cadastre context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which has a negative influence on visual impression, but more seriously also represents a wrong legal boundary in the cadaster. Oblique aerial images, as opposed to nadir-view images, reveal greater detail and show different views of an object taken from different directions. Building walls are directly visible in oblique images, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find roof parameters of simple buildings. It uses a least squares algorithm to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected based on the intersection of the viewing ray and the planes formed by the building, whereas occlusion from other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes, respectively. Experimental results are verified with high resolution ortho-images, a field survey, and ALS data. A planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while building orientations were accurate to a mean of 0.23° and a standard deviation of 0.96° with respect to the ortho-image. Overhang parameters agreed to approximately 10 cm with the field survey. The ground and roof heights were accurate to means of −9 cm and 8 cm with standard deviations of 16 cm and 8 cm relative to ALS, respectively. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for

  14. Non-rigid image registration using bone growth model

    DEFF Research Database (Denmark)

    Bro-Nielsen, Morten; Gramkow, Claus; Kreiborg, Sven

    1997-01-01

    Non-rigid registration has traditionally used physical models like elasticity and fluids. These models are very seldom valid models of the difference between the registered images. This paper presents a non-rigid registration algorithm, which uses a model of bone growth as a model of the change between time sequence images of the human mandible. By being able to register the images, this paper at the same time contributes to the validation of the growth model, which is based on the currently available medical theories and knowledge...

  15. Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Yidong Tang

    2016-01-01

    Full Text Available The sparse representation based classifier (SRC) and its kernel version (KSRC) have been employed for hyperspectral image (HSI) classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixture in smooth scenes and assumes that the number of classes is given. Considering small targets with complex backgrounds, a sparse representation based binary hypothesis (SRBBH) model is established in this paper. In this model, a query pixel is represented in two ways: by a background dictionary and by a union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by using the kernel-based orthogonal matching pursuit (KOMP) algorithm. Then the query pixel can be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts that the background dictionary and union dictionary have on reconstruction are used for validation and classification. This enhances the discrimination and hence improves the performance.
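
    The coding step relies on orthogonal matching pursuit. The sketch below implements the plain (non-kernel) OMP algorithm in NumPy on placeholder data; the kernel variant (KOMP) used in the SRBBH model applies the same greedy selection in a kernel feature space.

```python
# Minimal sketch of (non-kernel) orthogonal matching pursuit: greedily select
# dictionary atoms and re-fit the coefficients by least squares.  `D` (atoms
# as columns) and `y` are placeholders.
import numpy as np

def omp(D, y, n_nonzero):
    residual = y.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        support.append(idx)
        # Re-estimate all selected coefficients jointly by least squares.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs, residual

rng = np.random.default_rng(2)
D = rng.standard_normal((50, 200))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
y = 2.0 * D[:, 3] - 1.5 * D[:, 17]       # sparse combination of two atoms
coeffs, res = omp(D, y, n_nonzero=2)
print(np.nonzero(coeffs)[0], np.linalg.norm(res))
```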

  16. Satellite-based ET estimation using Landsat 8 images and SEBAL model

    Directory of Open Access Journals (Sweden)

    Bruno Bonemberger da Silva

    Full Text Available Estimation of evapotranspiration is a key factor in achieving sustainable water management in irrigated agriculture, because it represents the water use of crops. Satellite-based estimates provide advantages over direct methods such as lysimeters, especially when the objective is to calculate evapotranspiration at a regional scale. The present study aimed to estimate actual evapotranspiration (ET) at a regional scale, using Landsat 8 - OLI/TIRS images and complementary data collected from a weather station. The SEBAL model was used in South-West Paraná, a region composed of irrigated and dry agricultural areas, native vegetation and urban areas. Five Landsat 8 images, row 223 and path 78, DOY 336/2013, 19/2014, 35/2014, 131/2014 and 195/2014, were used, from which ET at a daily scale was estimated as a residual of the surface energy balance to produce ET maps. The steps to obtain ET using SEBAL include radiometric calibration and calculation of reflectance, surface albedo, vegetation indices (NDVI, SAVI and LAI) and emissivity. These parameters were obtained from the reflective bands of the orbital sensor, with surface temperature estimated from the thermal band. The estimated ET values in agricultural areas, native vegetation and urban areas using the SEBAL algorithm were compatible with those reported in the literature, and ET errors between the SEBAL estimates and the Penman-Monteith FAO 56 equation were less than or equal to 1.00 mm day-1.
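
    Two of the intermediate SEBAL products, NDVI and SAVI, follow directly from the red and near-infrared reflectance bands (Landsat 8 OLI bands 4 and 5). A minimal sketch with placeholder reflectance values is shown below; the soil-brightness factor L = 0.5 is the commonly used default.

```python
# Minimal sketch of two SEBAL intermediate products, NDVI and SAVI, computed
# from surface reflectance (Landsat 8 OLI: band 4 = red, band 5 = NIR).
# `red` and `nir` are placeholder reflectance arrays scaled to [0, 1].
import numpy as np

def ndvi(red, nir):
    return (nir - red) / (nir + red)

def savi(red, nir, L=0.5):
    # Soil Adjusted Vegetation Index with the usual soil-brightness factor L = 0.5.
    return (1.0 + L) * (nir - red) / (nir + red + L)

red = np.array([[0.08, 0.12], [0.20, 0.05]])
nir = np.array([[0.45, 0.40], [0.25, 0.50]])
print(ndvi(red, nir))
print(savi(red, nir))
```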

  17. Detection of the Typical Pulse Condition on Cun-Guan-Chi Based on Image Sensor

    Directory of Open Access Journals (Sweden)

    Aihua ZHANG

    2014-02-01

    Full Text Available In order to simulate diagnosis by pulse palpation in Traditional Chinese Medicine, a CCD-based device was designed to detect the pulse image at Cun-Guan-Chi. Using the MM-3 pulse model as the experimental subject, synchronous pulse image data for several typical pulse conditions were collected with this device at Cun-Guan-Chi. The typical pulses include the normal pulse, the slippery pulse, the slow pulse and the soft pulse. According to the lens imaging principle, the pulse waves were extracted using the area method; the 3D pulse condition image was then reconstructed and features were extracted, including the period, the frequency, the width, and the length. Slippery pulse data from pregnant women were also collected with this device and the pulse images were analyzed. Comparing the features of the slippery pulse model with the slippery pulse of pregnant women gave consistent results. This study overcame shortcomings of existing detection devices, such as the small number of detection sites and the limited information obtained, so that more comprehensive 3D pulse condition information could be acquired. This work lays a foundation for objective diagnosis and for revealing comprehensive pulse information.

  18. NEPHRUS: model of intelligent multilayers expert system for evaluation of the renal system based on scintigraphic images analysis

    International Nuclear Information System (INIS)

    Silva, Jose W.E. da; Schirru, Roberto; Boasquevisque, Edson M.

    1997-01-01

    This work develops a prototype of a system model based on Artificial Intelligence techniques able to perform functions related to scintigraphic image analysis of the urinary system. Criteria used by medical experts to analyze images obtained with 99mTc+DTPA and/or 99mTc+DMSA were modeled and a multi-resolution diagnosis technique was implemented. Special attention was given to the program's user interface design; Human Factors Engineering techniques were considered so as to combine friendliness and robustness. Results obtained using Artificial Neural Networks for the qualitative image analysis, together with the knowledge model constructed, show the feasibility of an Artificial Intelligence implementation that uses the 'inherent' abilities of each technique in solving diagnostic image analysis problems. (author). 12 refs., 2 figs., 2 tabs

  19. Methodological challenges of optical tweezers-based X-ray fluorescence imaging of biological model organisms at synchrotron facilities.

    Science.gov (United States)

    Vergucht, Eva; Brans, Toon; Beunis, Filip; Garrevoet, Jan; Bauters, Stephen; De Rijcke, Maarten; Deruytter, David; Janssen, Colin; Riekel, Christian; Burghammer, Manfred; Vincze, Laszlo

    2015-07-01

    Recently, a radically new synchrotron radiation-based elemental imaging approach for the analysis of biological model organisms and single cells in their natural in vivo state was introduced. The methodology combines optical tweezers (OT) technology for non-contact laser-based sample manipulation with synchrotron radiation confocal X-ray fluorescence (XRF) microimaging for the first time at ESRF-ID13. The optical manipulation possibilities and limitations of biological model organisms, the OT setup developments for XRF imaging and the confocal XRF-related challenges are reported. In general, the applicability of the OT-based setup is extended with the aim of introducing the OT XRF methodology in all research fields where highly sensitive in vivo multi-elemental analysis is of relevance at the (sub)micrometre spatial resolution level.

  20. Innovative biomagnetic imaging sensors for breast cancer: A model-based study

    International Nuclear Information System (INIS)

    Deng, Y.; Golkowski, M.

    2012-01-01

    Breast cancer is a serious potential health problem for all women and is the second leading cause of cancer deaths in the United States. The current screening procedures and imaging techniques, including x-ray mammography, clinical biopsy, ultrasound imaging, and magnetic resonance imaging, provide only 73% accuracy in detecting breast cancer. This gives the impetus to explore alternative techniques for imaging the breast and detecting early stage tumors. Among the complementary methods, noninvasive biomagnetic breast imaging is attractive and promising, because it avoids both the ionizing radiation and the breast compression that prevalent x-ray mammography requires. It furthermore offers very high contrast because of the significant differences in electromagnetic properties between cancerous, benign, and normal breast tissues. In this paper, a hybrid and accurate modeling tool for biomagnetic breast imaging is developed, which couples the electromagnetic and ultrasonic energies, and initial validations between the model predictions and experimental findings are conducted.

  1. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    Directory of Open Access Journals (Sweden)

    F. Alidoost

    2015-12-01

    Full Text Available Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects among complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.
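
    The meshing step mentioned above, a 2.5D Delaunay triangulation, amounts to triangulating the points in a projection plane while keeping the third coordinate as a vertex attribute. A minimal SciPy sketch on a random placeholder cloud is given below; for a vertical façade the projection plane would be the façade plane rather than the x-y plane used here.

```python
# Minimal sketch of 2.5D Delaunay meshing: triangulate the points in a
# projection plane (here x-y) and keep z as a per-vertex attribute.
# `points` is a placeholder for a dense point cloud from image matching.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
points = rng.random((500, 3))          # N x 3 dense point cloud (x, y, z)

tri = Delaunay(points[:, :2])          # triangulate on x-y only
faces = tri.simplices                  # M x 3 vertex indices of the mesh triangles

print(f"{len(points)} vertices, {len(faces)} triangles")
```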

  2. Voxel-based model construction from colored tomographic images

    International Nuclear Information System (INIS)

    Loureiro, Eduardo Cesar de Miranda

    2002-07-01

    This work presents a new approach to the construction of voxel-based phantoms that was implemented to simplify the segmentation of organs and tissues, reducing the time required for this procedure. The segmentation is performed by painting tomographic images, assigning a different color to each organ or tissue. A voxel-based head and neck phantom was built using this new approach. The way the data are stored allows an increase in the performance of the radiation transport code. The program that calculates the radiation transport also works with image files. This capability allows image reconstruction showing isodose areas, from several points of view, increasing the information available to the user. Virtual X-ray photographs can also be obtained, allowing studies aimed at optimizing radiographic techniques while simultaneously assessing the doses in organs and tissues. The accuracy of the program presented here, called MCvoxEL, which implements this new approach, was tested by comparison with results from two modern and well-supported Monte Carlo codes. Dose conversion factors for parallel X-ray exposure were also calculated. (author)

  3. New Details of the Human Corneal Limbus Revealed With Second Harmonic Generation Imaging.

    Science.gov (United States)

    Park, Choul Yong; Lee, Jimmy K; Zhang, Cheng; Chuck, Roy S

    2015-09-01

    To report novel findings on the human corneal limbus obtained using second harmonic generation (SHG) imaging. The corneal limbus was imaged using an inverted two-photon excitation fluorescence microscope. A Ti:Sapphire laser was tuned to 850 nm for two-photon excitation. Backscatter signals of SHG and autofluorescence (AF) were collected through a 425/30-nm emission filter and a 525/45-nm emission filter, respectively. Multiple, consecutive, and overlapping image stacks (z-stacks) were acquired for the corneal limbal area. Two novel collagen structures were revealed by SHG imaging at the limbus: an anterior limbal cribriform layer and presumed anchoring fibers. The anterior limbal cribriform layer is an intertwined reticular collagen architecture just beneath the limbal epithelial niche and is located between the peripheral cornea and Tenon's/scleral tissue. Autofluorescence imaging revealed high vascularity in this structure. Central to the anterior limbal cribriform layer, radial strands of collagen were found to connect the peripheral cornea to the limbus. These presumed anchoring fibers contain both collagen and elastin, were found more extensively in the superficial layers than in the deep layers, and were absent in the very deep limbus near Schlemm's canal. Using SHG imaging, new details of the collagen architecture of the human corneal limbal area were elucidated. High resolution images with volumetric analysis revealed two novel collagen structures.

  4. Graphene-based ultrasonic detector for photoacoustic imaging

    Science.gov (United States)

    Yang, Fan; Song, Wei; Zhang, Chonglei; Fang, Hui; Min, Changjun; Yuan, Xiaocong

    2018-03-01

    Taking advantage of optical absorption imaging contrast, photoacoustic imaging technology is able to map the volumetric distribution of the optical absorption properties within biological tissues. Unfortunately, the traditional piezoceramic-based transducers used in most photoacoustic imaging setups have inadequate frequency response, resulting in both poor depth resolution and inaccurate quantification of the optical absorption information. Instead of the piezoelectric ultrasonic transducer, we develop a graphene-based optical sensor for detecting photoacoustic pressure. The refractive index in the coupling medium is modulated by the photoacoustic pressure perturbation, which varies the polarization-sensitive optical absorption of the graphene. As a result, photoacoustic detection is realized by recording the reflectance intensity difference of polarized light. The graphene-based detector possesses an estimated noise-equivalent pressure (NEP) sensitivity of 550 Pa over a 20-MHz bandwidth with a nearly linear pressure response from 11.0 kPa to 53.0 kPa. Further, a graphene-based photoacoustic microscope is built that non-invasively and label-freely reveals the microvascular anatomy in mouse ears.

  5. Non-Rigid Contour-Based Registration of Cell Nuclei in 2-D Live Cell Microscopy Images Using a Dynamic Elasticity Model.

    Science.gov (United States)

    Sorokin, Dmitry V; Peterlik, Igor; Tektonidis, Marco; Rohr, Karl; Matula, Pavel

    2018-01-01

    The analysis of the pure motion of subnuclear structures without influence of the cell nucleus motion and deformation is essential in live cell imaging. In this paper, we propose a 2-D contour-based image registration approach for compensation of nucleus motion and deformation in fluorescence microscopy time-lapse sequences. The proposed approach extends our previous approach, which uses a static elasticity model to register cell images. Compared with that scheme, the new approach employs a dynamic elasticity model for the forward simulation of nucleus motion and deformation based on the motion of its contours. The contour matching process is embedded as a constraint into the system of equations describing the elastic behavior of the nucleus. This results in better performance in terms of the registration accuracy. Our approach was successfully applied to real live cell microscopy image sequences of different types of cells including image data that was specifically designed and acquired for evaluation of cell image registration methods. An experimental comparison with the existing contour-based registration methods and an intensity-based registration method has been performed. We also studied the dependence of the results on the choice of method parameters.

  6. Trajectory-based morphological operators: a model for efficient image processing.

    Science.gov (United States)

    Jimeno-Morenilla, Antonio; Pujol, Francisco A; Molina-Carmona, Rafael; Sánchez-Romero, José L; Pujol, Mar

    2014-01-01

    Mathematical morphology has been an area of intensive research over the last few years. Although many remarkable advances have been achieved throughout these years, there is still great interest in accelerating morphological operations so that they can be implemented in real-time systems. In this work, we present a new model for computing mathematical morphology operations, the so-called morphological trajectory model (MTM), in which a morphological filter is divided into a sequence of basic operations. A trajectory-based morphological operation (such as dilation or erosion) is then defined as the set of points resulting from the ordered application of these instant basic operations. The MTM approach allows working with different structuring elements, such as disks, and the experiments show that the method is independent of the structuring element size and can be easily applied to industrial systems and high-resolution images.
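
    For reference, the basic operations that the MTM decomposes into a trajectory are standard dilation and erosion with a disk structuring element. The sketch below computes them directly with scipy.ndimage on a synthetic binary image; it is a baseline illustration, not an implementation of the MTM.

```python
# Baseline sketch of morphological dilation and erosion with an explicit disk
# structuring element, using scipy.ndimage.  This is not the MTM itself.
import numpy as np
from scipy import ndimage

def disk(radius):
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x ** 2 + y ** 2 <= radius ** 2

image = np.zeros((64, 64), dtype=bool)
image[20:40, 25:45] = True                        # simple binary test object

se = disk(3)
dilated = ndimage.binary_dilation(image, structure=se)
eroded = ndimage.binary_erosion(image, structure=se)

print(image.sum(), dilated.sum(), eroded.sum())   # erosion shrinks, dilation grows
```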

  7. ¹H NMR-based metabolic profiling reveals inherent biological variation in yeast and nematode model systems

    Energy Technology Data Exchange (ETDEWEB)

    Szeto, Samuel S. W.; Reinke, Stacey N.; Lemire, Bernard D., E-mail: bernard.lemire@ualberta.ca [University of Alberta, Department of Biochemistry, School of Molecular and Systems Medicine (Canada)

    2011-04-15

    The application of metabolomics to human and animal model systems is poised to provide great insight into our understanding of disease etiology and the metabolic changes that are associated with these conditions. However, metabolomic studies have also revealed that there is significant, inherent biological variation in human samples and even in samples from animal model systems where the animals are housed under carefully controlled conditions. This inherent biological variability is an important consideration for all metabolomics analyses. In this study, we examined the biological variation in ¹H NMR-based metabolic profiling of two model systems, the yeast Saccharomyces cerevisiae and the nematode Caenorhabditis elegans. Using relative standard deviations (RSD) as a measure of variability, our results reveal that both model systems have significant amounts of biological variation. The C. elegans metabolome possesses greater metabolic variance with average RSD values of 29 and 39%, depending on the food source that was used. The S. cerevisiae exometabolome RSD values ranged from 8% to 12% for the four strains examined. We also determined whether biological variation occurs between pairs of phenotypically identical yeast strains. Multivariate statistical analysis allowed us to discriminate between pair members based on their metabolic phenotypes. Our results highlight the variability of the metabolome that exists even for less complex model systems cultured under defined conditions. We also highlight the efficacy of metabolic profiling for defining these subtle metabolic alterations.
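
    The variability measure used above, the relative standard deviation, is simply the per-metabolite standard deviation divided by the mean, expressed as a percentage across replicates. A minimal sketch on placeholder intensity data follows.

```python
# Minimal sketch of the relative standard deviation (RSD) as a measure of
# biological variability: per metabolite, RSD = std / mean x 100 % across
# replicate samples.  `intensities` is a placeholder matrix of peak
# intensities (samples x metabolites).
import numpy as np

rng = np.random.default_rng(4)
intensities = rng.lognormal(mean=1.0, sigma=0.3, size=(20, 5))   # 20 replicates, 5 metabolites

rsd = 100.0 * intensities.std(axis=0, ddof=1) / intensities.mean(axis=0)
for i, value in enumerate(rsd):
    print(f"metabolite {i}: RSD = {value:.1f} %")
```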

  8. A Novel 3D Imaging Method for Airborne Downward-Looking Sparse Array SAR Based on Special Squint Model

    Directory of Open Access Journals (Sweden)

    Xiaozhen Ren

    2014-01-01

    Full Text Available Three-dimensional (3D imaging technology based on antenna array is one of the most important 3D synthetic aperture radar (SAR high resolution imaging modes. In this paper, a novel 3D imaging method is proposed for airborne down-looking sparse array SAR based on the imaging geometry and the characteristic of echo signal. The key point of the proposed algorithm is the introduction of a special squint model in cross track processing to obtain accurate focusing. In this special squint model, point targets with different cross track positions have different squint angles at the same range resolution cell, which is different from the conventional squint SAR. However, after theory analysis and formulation deduction, the imaging procedure can be processed with the uniform reference function, and the phase compensation factors and algorithm realization procedure are demonstrated in detail. As the method requires only Fourier transform and multiplications and thus avoids interpolations, it is computationally efficient. Simulations with point scatterers are used to validate the method.

  9. A dynamic model-based approach to motion and deformation tracking of prosthetic valves from biplane x-ray images.

    Science.gov (United States)

    Wagner, Martin G; Hatt, Charles R; Dunkerley, David A P; Bodart, Lindsay E; Raval, Amish N; Speidel, Michael A

    2018-04-16

    Transcatheter aortic valve replacement (TAVR) is a minimally invasive procedure in which a prosthetic heart valve is placed and expanded within a defective aortic valve. The device placement is commonly performed using two-dimensional (2D) fluoroscopic imaging. Within this work, we propose a novel technique to track the motion and deformation of the prosthetic valve in three dimensions based on biplane fluoroscopic image sequences. The tracking approach uses a parameterized point cloud model of the valve stent which can undergo rigid three-dimensional (3D) transformation and different modes of expansion. Rigid elements of the model are individually rotated and translated in three dimensions to approximate the motions of the stent. Tracking is performed using an iterative 2D-3D registration procedure which estimates the model parameters by minimizing the mean-squared image values at the positions of the forward-projected model points. Additionally, an initialization technique is proposed, which locates clusters of salient features to determine the initial position and orientation of the model. The proposed algorithms were evaluated based on simulations using a digital 4D CT phantom as well as experimentally acquired images of a prosthetic valve inside a chest phantom with anatomical background features. The target registration error was 0.12 ± 0.04 mm in the simulations and 0.64 ± 0.09 mm in the experimental data. The proposed algorithm could be used to generate 3D visualization of the prosthetic valve from two projections. In combination with soft-tissue sensitive-imaging techniques like transesophageal echocardiography, this technique could enable 3D image guidance during TAVR procedures. © 2018 American Association of Physicists in Medicine.

  10. Device model for pixelless infrared image up-converters based on polycrystalline graphene heterostructures

    Science.gov (United States)

    Ryzhii, V.; Shur, M. S.; Ryzhii, M.; Karasik, V. E.; Otsuji, T.

    2018-01-01

    We developed a device model for pixelless converters of far/mid-infrared radiation (FIR/MIR) images into near-infrared/visible (NIR/VIR) images. These converters use polycrystalline graphene layers (PGLs) immersed in the van der Waals materials integrated with a light emitting diode (LED). The PGL serves as an element of the PGL infrared photodetector (PGLIP) sensitive to the incoming FIR/MIR due to the interband absorption. The spatially non-uniform photocurrent generated in the PGLIP repeats (mimics) the non-uniform distribution (image) created by the incident FIR/MIR. The injection of the nonuniform photocurrent into the LED active layer results in the nonuniform NIR/VIR image reproducing the FIR/MIR image. The PGL and the entire layer structure are not deliberately partitioned into pixels. We analyze the characteristics of such pixelless PGLIP-LED up-converters and show that their image contrast transfer function and the up-conversion efficiency depend on the PGL lateral resistivity. The up-converter exhibits high photoconductive gain and conversion efficiency when the lateral resistivity is sufficiently high. Several teams have successfully demonstrated the large area PGLs with the resistivities varying in a wide range. Such layers can be used in the pixelless PGLIP-LED image up-converters. The PGLIP-LED image up-converters can substantially surpass the image up-converters based on the quantum-well infrared photodetector integrated with the LED. These advantages are due to the use of the interband FIR/NIR absorption and a high photoconductive gain in the GLIPs.

  11. Visual guidance of forward flight in hummingbirds reveals control based on image features instead of pattern velocity.

    Science.gov (United States)

    Dakin, Roslyn; Fellows, Tyee K; Altshuler, Douglas L

    2016-08-02

    Information about self-motion and obstacles in the environment is encoded by optic flow, the movement of images on the eye. Decades of research have revealed that flying insects control speed, altitude, and trajectory by a simple strategy of maintaining or balancing the translational velocity of images on the eyes, known as pattern velocity. It has been proposed that birds may use a similar algorithm but this hypothesis has not been tested directly. We examined the influence of pattern velocity on avian flight by manipulating the motion of patterns on the walls of a tunnel traversed by Anna's hummingbirds. Contrary to prediction, we found that lateral course control is not based on regulating nasal-to-temporal pattern velocity. Instead, birds closely monitored feature height in the vertical axis, and steered away from taller features even in the absence of nasal-to-temporal pattern velocity cues. For vertical course control, we observed that birds adjusted their flight altitude in response to upward motion of the horizontal plane, which simulates vertical descent. Collectively, our results suggest that birds avoid collisions using visual cues in the vertical axis. Specifically, we propose that birds monitor the vertical extent of features in the lateral visual field to assess distances to the side, and vertical pattern velocity to avoid collisions with the ground. These distinct strategies may derive from greater need to avoid collisions in birds, compared with small insects.

  12. Interactive classification and content-based retrieval of tissue images

    Science.gov (United States)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues in pixel, region and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.

  13. LINE-BASED MULTI-IMAGE MATCHING FOR FAÇADE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    T. A. Teo

    2012-07-01

    Full Text Available This research integrates existing LOD 2 building models and multiple close-range images for façade structural line extraction. The major tasks are orientation determination and multiple image matching. In the orientation determination, Speeded Up Robust Features (SURF) is applied to extract tie points automatically. Then, tie points and control points are combined for block adjustment. An object-based multi-image matching approach is proposed to extract the façade structural lines. The 2D lines in image space are extracted by the Canny operator followed by the Hough transform. The role of the LOD 2 building models is to correct the tilt displacement of images from different views. The walls of the LOD 2 model are also used to generate hypothesis planes for similarity measurement. Finally, the average normalized cross correlation is calculated to obtain the best location in object space. The test images were acquired with a nonmetric Nikon D2X camera; the total number of images is 33. The experimental results indicate that the accuracy of orientation determination is about 1 pixel from 2515 tie points and 4 control points. They also indicate that line-based matching is more flexible than point-based matching.
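
    The 2D line extraction step, Canny edge detection followed by a Hough transform, can be sketched with OpenCV as below; the synthetic test image and the thresholds are placeholders that would be tuned for the actual façade images.

```python
# Minimal sketch of 2D line extraction: Canny edge detection followed by a
# probabilistic Hough transform.  A synthetic image with a bright rectangle
# stands in for a real façade photograph (which would be loaded with cv2.imread).
import numpy as np
import cv2

image = np.zeros((200, 300), dtype=np.uint8)
cv2.rectangle(image, (60, 40), (240, 160), color=200, thickness=-1)

edges = cv2.Canny(image, threshold1=50, threshold2=150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180.0, threshold=50,
                        minLineLength=40, maxLineGap=5)

print(f"{0 if lines is None else len(lines)} line segments extracted")
```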

  14. Crowdsourcing Based 3d Modeling

    Science.gov (United States)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g. in location-based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For this particular investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived using photogrammetric processing software, simply by using images from the community, without visiting the site.

  15. A Modified Active Appearance Model Based on an Adaptive Artificial Bee Colony

    Science.gov (United States)

    Othman, Zulaiha Ali

    2014-01-01

    Active appearance model (AAM) is one of the most popular model-based approaches that have been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods are applicable to this problem, but applying optimization brings difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of the AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 images dataset. The results reveal that the proposed face recognition technique performs effectively in terms of recognition accuracy. PMID:25165748

  16. Lung region extraction based on the model information and the inversed MIP method by using chest CT images

    International Nuclear Information System (INIS)

    Tomita, Toshihiro; Miguchi, Ryosuke; Okumura, Toshiaki; Yamamoto, Shinji; Matsumoto, Mitsuomi; Tateno, Yukio; Iinuma, Takeshi; Matsumoto, Toru.

    1997-01-01

    We developed a lung region extraction method based on model information and the inversed MIP method for Lung Cancer Screening CT (LSCT). The original model is composed of typical 3-D lung contour lines, a body axis, an apical point, and a convex hull. First, the body axis, the apical point, and the convex hull are automatically extracted from the input image. Next, the model is transformed by an affine transformation to fit those of the input image. Using the same affine transformation coefficients, the typical lung contour lines are also transformed, yielding rough contour lines for the input image. Experimental results for 68 samples showed this method to be quite promising. (author)

  17. Predictive modeling of outcomes following definitive chemoradiotherapy for oropharyngeal cancer based on FDG-PET image characteristics

    Science.gov (United States)

    Folkert, Michael R.; Setton, Jeremy; Apte, Aditya P.; Grkovski, Milan; Young, Robert J.; Schöder, Heiko; Thorstad, Wade L.; Lee, Nancy Y.; Deasy, Joseph O.; Oh, Jung Hun

    2017-07-01

    In this study, we investigate the use of imaging feature-based outcomes research ('radiomics') combined with machine learning techniques to develop robust predictive models for the risk of all-cause mortality (ACM), local failure (LF), and distant metastasis (DM) following definitive chemoradiation therapy (CRT). One hundred seventy-four patients with stage III-IV oropharyngeal cancer (OC) treated at our institution with CRT with retrievable pre- and post-treatment 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) scans were identified. From pre-treatment PET scans, 24 representative imaging features of FDG-avid disease regions were extracted. Using machine learning-based feature selection methods, multiparameter logistic regression models were built incorporating clinical factors and imaging features. All model building methods were tested by cross validation to avoid overfitting, and final outcome models were validated on an independent dataset from a collaborating institution. Multiparameter models were statistically significant on 5-fold cross validation with the area under the receiver operating characteristic curve (AUC) = 0.65 (p = 0.004), 0.73 (p = 0.026), and 0.66 (p = 0.015) for ACM, LF, and DM, respectively. The model for LF retained significance on the independent validation cohort with AUC = 0.68 (p = 0.029), whereas the models for ACM and DM did not reach statistical significance but resulted in comparable predictive power to the 5-fold cross validation with AUC = 0.60 (p = 0.092) and 0.65 (p = 0.062), respectively. In the largest study of its kind to date, predictive features including increasing metabolic tumor volume, increasing image heterogeneity, and increasing tumor surface irregularity significantly correlated to mortality, LF, and DM on 5-fold cross validation in a relatively uniform single-institution cohort. The LF model also retained
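
    The modelling and evaluation pattern described above, a multiparameter logistic regression scored by cross-validated AUC, can be sketched with scikit-learn as follows; the feature matrix and outcome labels are random placeholders, not patient data.

```python
# Minimal sketch of the model-building and evaluation pattern: a logistic
# regression on clinical + imaging ("radiomic") features, scored by the area
# under the ROC curve with 5-fold cross validation.  The data are random
# placeholders standing in for 174 patients with 24 PET-derived features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.standard_normal((174, 24))        # 174 patients x 24 imaging features
y = rng.integers(0, 2, size=174)          # binary outcome, e.g. local failure

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"5-fold cross-validated AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
```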

  18. New variational image decomposition model for simultaneously denoising and segmenting optical coherence tomography images

    International Nuclear Information System (INIS)

    Duan, Jinming; Bai, Li; Tench, Christopher; Gottlob, Irene; Proudlock, Frank

    2015-01-01

    Optical coherence tomography (OCT) imaging plays an important role in clinical diagnosis and monitoring of diseases of the human retina. Automated analysis of optical coherence tomography images is a challenging task as the images are inherently noisy. In this paper, a novel variational image decomposition model is proposed to decompose an OCT image into three components: the first component is the original image but with the noise completely removed; the second contains the set of edges representing the retinal layer boundaries present in the image; and the third is an image of noise, or in image decomposition terms, the texture, or oscillatory patterns of the original image. In addition, a fast Fourier transform based split Bregman algorithm is developed to improve computational efficiency of solving the proposed model. Extensive experiments are conducted on both synthesised and real OCT images to demonstrate that the proposed model outperforms the state-of-the-art speckle noise reduction methods and leads to accurate retinal layer segmentation. (paper)

  19. Modeling human faces with multi-image photogrammetry

    Science.gov (United States)

    D'Apuzzo, Nicola

    2002-03-01

    Modeling and measurement of the human face have been increasing in importance for various purposes. Laser scanning, coded light range digitizers, image-based approaches and digital stereo photogrammetry are the methods currently employed in medical applications, computer animation, video surveillance, teleconferencing and virtual reality to produce three-dimensional computer models of the human face. The requirements differ depending on the application; ours are primarily high measurement accuracy and automation of the process. The method presented in this paper is based on multi-image photogrammetry. The equipment, the method and the results achieved with this technique are depicted here. The process is composed of five steps: acquisition of multi-images, calibration of the system, establishment of corresponding points in the images, computation of their 3-D coordinates and generation of a surface model. The images captured by five CCD cameras arranged in front of the subject are digitized by a frame grabber. The complete system is calibrated using a reference object with coded target points, which can be measured fully automatically. To facilitate the establishment of correspondences in the images, texture in the form of random patterns can be projected from two directions onto the face. The multi-image matching process, based on a geometrically constrained least squares matching algorithm, produces a dense set of corresponding points in the five images. Neighborhood filters are then applied to the matching results to remove errors. After filtering the data, the three-dimensional coordinates of the matched points are computed by forward intersection using the results of the calibration process; the achieved mean accuracy is about 0.2 mm in the sagittal direction and about 0.1 mm in the lateral direction. The last step of the data processing is the generation of a surface model from the point cloud and the application of smoothing filters. Moreover, a

  20. Novel active contour model based on multi-variate local Gaussian distribution for local segmentation of MR brain images

    Science.gov (United States)

    Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong

    2017-12-01

    The active contour model (ACM) has been one of the most widely utilized methods in magnetic resonance (MR) brain image segmentation because of its ability to capture topology changes. However, most existing ACMs only consider single-slice information in MR brain image data, i.e., the information used in ACM-based segmentation is extracted from a single slice of the MR brain image, which cannot take full advantage of the information in adjacent slices and cannot support local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve this problem; it is based on a multi-variate local Gaussian distribution and combines information from adjacent slices in the MR brain image data. The segmentation is finally achieved by maximizing the likelihood estimation. Experiments demonstrate the advantages of the proposed ACM over the single-slice ACM in local segmentation of MR brain image series.

  1. WE-E-17A-01: Characterization of An Imaging-Based Model of Tumor Angiogenesis

    International Nuclear Information System (INIS)

    Adhikarla, V; Jeraj, R

    2014-01-01

    Purpose: Understanding the transient dynamics of tumor oxygenation is important when evaluating tumor-vasculature response to anti-angiogenic therapies. An imaging-based tumor-vasculature model was used to elucidate factors that affect these dynamics. Methods: Tumor growth depends on its doubling time (Td). Hypoxia increases the pro-angiogenic factor (VEGF) concentration, which is modeled to reduce vessel perfusion, reflecting its effect of increasing vascular permeability. Perfused vessel recruitment depends on the existing perfused vasculature, the VEGF concentration and the maximum VEGF concentration (VEGFmax) for vessel dysfunction. A convolution-based algorithm couples the tumor to the normal tissue vessel density (VD-nt). The parameters are benchmarked against published pre-clinical data, and a sensitivity study evaluating the changes in the peak and time to peak tumor oxygenation characterizes them. The model is used to simulate changes in hypoxia and proliferation PET imaging data obtained using [Cu-61]Cu-ATSM and [F-18]FLT, respectively. Results: Td and VD-nt were found to be the most influential on peak tumor pO2, while VEGFmax was marginally influential. A +20% change in Td, VD-nt and VEGFmax resulted in +50%, +25% and +5% increases in peak pO2. In contrast, Td was the most influential on the time to peak oxygenation, with VD-nt and VEGFmax playing marginal roles. A +20% change in Td, VD-nt and VEGFmax increased the time to peak pO2 by +50%, +5% and +0%. A −20% change in the above parameters resulted in comparable decreases in the peak and time to peak pO2. Model application to the PET data was able to demonstrate the voxel-specific changes in hypoxia of the imaged tumor. Conclusion: Tumor-specific doubling time and vessel density are important parameters to be considered when evaluating hypoxia transients. While the current model simulates the oxygen dynamics of an untreated tumor, incorporation of therapeutic effects can make the model a potent tool for analyzing

  2. Mixture model-based clustering and logistic regression for automatic detection of microaneurysms in retinal images

    Science.gov (United States)

    Sánchez, Clara I.; Hornero, Roberto; Mayo, Agustín; García, María

    2009-02-01

    Diabetic Retinopathy is one of the leading causes of blindness and vision defects in developed countries. Early detection and diagnosis are crucial to avoid visual complications. Microaneurysms are the first ocular signs of this disease, and their detection is of paramount importance for the development of a computer-aided diagnosis technique which permits a prompt diagnosis of the disease. However, the detection of microaneurysms in retinal images is a difficult task due to the wide variability that these images usually present in screening programs. We propose a statistical approach based on mixture model-based clustering and logistic regression which is robust to changes in the appearance of retinal fundus images. The method is evaluated on the public database of the Retinopathy Online Challenge in order to obtain an objective performance measure and to allow a comparative study with other proposed algorithms.
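
    A minimal sketch of the two-stage idea (mixture-model clustering to propose candidates, then logistic regression to classify them) using scikit-learn; the feature choices, component count and variable names are assumptions, not the authors' exact pipeline.

      import numpy as np
      from sklearn.mixture import GaussianMixture
      from sklearn.linear_model import LogisticRegression

      def candidate_mask(green_channel, n_components=3):
          # Cluster pixel intensities of the fundus green channel with a Gaussian
          # mixture and keep the darkest component as microaneurysm candidates.
          X = green_channel.reshape(-1, 1).astype(np.float64)
          gmm = GaussianMixture(n_components=n_components, random_state=0).fit(X)
          darkest = int(np.argmin(gmm.means_.ravel()))
          return gmm.predict(X).reshape(green_channel.shape) == darkest

      def train_candidate_classifier(candidate_features, labels):
          # candidate_features: one row per candidate region (e.g. area, contrast,
          # compactness); labels: 1 for a true microaneurysm, 0 otherwise.
          return LogisticRegression(max_iter=1000).fit(candidate_features, labels)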

  3. Anti-cancer agents in Saudi Arabian herbals revealed by automated high-content imaging

    KAUST Repository

    Hajjar, Dina

    2017-06-13

    Natural products have been used for medical applications since ancient times. Commonly, natural products are structurally complex chemical compounds that efficiently interact with their biological targets, making them useful drug candidates in cancer therapy. Here, we used cell-based phenotypic profiling and image-based high-content screening to study the mode of action and potential cellular targets of plants historically used in Saudi Arabia's traditional medicine. We compared the cytological profiles of fractions taken from Juniperus phoenicea (Arar), Anastatica hierochuntica (Kaff Maryam), and Citrullus colocynthis (Hanzal) with a set of reference compounds with established modes of action. Cluster analyses of the cytological profiles of the tested compounds suggested that these plants contain possible topoisomerase inhibitors that could be effective in cancer treatment. Using histone H2AX phosphorylation as a marker for DNA damage, we discovered that some of the compounds induced double-strand DNA breaks. Furthermore, chemical analysis of the active fraction isolated from Juniperus phoenicea revealed possible anti-cancer compounds. Our results demonstrate the usefulness of cell-based phenotypic screening of natural products to reveal their biological activities.

  4. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    Science.gov (United States)

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

  5. Physics-based deformable organisms for medical image analysis

    Science.gov (United States)

    Hamarneh, Ghassan; McIntosh, Chris

    2005-04-01

    Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.

  6. Pixel-based meshfree modelling of skeletal muscles.

    Science.gov (United States)

    Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu

    2016-01-01

    This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for the construction of a simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transitions. A multiphase multichannel level set based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods to modeling the human lower leg is demonstrated.

  7. Some practical considerations in finite element-based digital image correlation

    KAUST Repository

    Wang, Bo

    2015-04-20

    As an alternative to subset-based digital image correlation (DIC), the finite element-based (FE-based) DIC method has gained increasing attention in the experimental mechanics community. However, a literature survey reveals that some important issues have not been well addressed in the published literature. This work therefore aims to point out a few important considerations in the practical algorithm implementation of the FE-based DIC method, along with simple but effective solutions that can effectively tackle these issues. First, to better accommodate the intensity variations of the deformed images that practically occur in real experiments, a robust zero-mean normalized sum of squared difference criterion, instead of the commonly used sum of squared difference criterion, is introduced to quantify the similarity between reference and deformed elements in FE-based DIC. Second, to reduce the bias error induced by image noise and imperfect intensity interpolation, low-pass filtering of the speckle images with a 5×5 pixel Gaussian filter prior to correlation analysis is presented. Third, to ensure that the iterative calculation of FE-based DIC converges correctly and rapidly, an efficient subset-based DIC method, instead of simple integer-pixel displacement searching, is used to provide an accurate initial guess of deformation for each calculation point. Also, the effects of various convergence criteria on the efficiency and accuracy of FE-based DIC are carefully examined, and a proper convergence criterion is recommended. The efficacy of these solutions is verified by numerical and real experiments. The results reveal that the improved FE-based DIC offers evident advantages over the existing FE-based DIC method in terms of accuracy and efficiency. © 2015 Elsevier Ltd. All rights reserved.
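
    The zero-mean normalized criterion recommended above can be written in a few lines; the sketch below computes ZNSSD between a reference and a deformed subset (or element), which is insensitive to affine intensity changes. Function and variable names are illustrative.

      import numpy as np

      def znssd(ref_subset, def_subset):
          # Zero-mean normalised sum of squared differences: 0 = perfect match.
          f = ref_subset.astype(np.float64).ravel()
          g = def_subset.astype(np.float64).ravel()
          f = f - f.mean()                      # remove intensity offset
          g = g - g.mean()
          f /= np.linalg.norm(f) + 1e-12        # normalise out intensity scale
          g /= np.linalg.norm(g) + 1e-12
          return float(np.sum((f - g) ** 2))

    The low-pass prefiltering mentioned in the abstract could, for instance, be approximated with scipy.ndimage.gaussian_filter(image, sigma=1) applied to both images before correlation.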

  8. Comparing orbiter and rover image-based mapping of an ancient sedimentary environment, Aeolis Palus, Gale crater, Mars

    Science.gov (United States)

    Stack, Kathryn M.; Edwards, Christopher; Grotzinger, J. P.; Gupta, S.; Sumner, D.; Edgar, Lauren; Fraeman, A.; Jacob, S.; LeDeit, L.; Lewis, K.W.; Rice, M.S.; Rubin, D.; Calef, F.; Edgett, K.; Williams, R.M.E.; Williford, K.H.

    2016-01-01

    This study provides the first systematic comparison of orbital facies maps with detailed ground-based geology observations from the Mars Science Laboratory (MSL) Curiosity rover to examine the validity of geologic interpretations derived from orbital image data. Orbital facies maps were constructed for the Darwin, Cooperstown, and Kimberley waypoints visited by the Curiosity rover using High Resolution Imaging Science Experiment (HiRISE) images. These maps, which represent the most detailed orbital analysis of these areas to date, were compared with rover image-based geologic maps and stratigraphic columns derived from Curiosity’s Mast Camera (Mastcam) and Mars Hand Lens Imager (MAHLI). Results show that bedrock outcrops can generally be distinguished from unconsolidated surficial deposits in high-resolution orbital images and that orbital facies mapping can be used to recognize geologic contacts between well-exposed bedrock units. However, process-based interpretations derived from orbital image mapping are difficult to infer without known regional context or observable paleogeomorphic indicators, and layer-cake models of stratigraphy derived from orbital maps oversimplify depositional relationships as revealed from a rover perspective. This study also shows that fine-scale orbital image-based mapping of current and future Mars landing sites is essential for optimizing the efficiency and science return of rover surface operations.

  9. A foreground object features-based stereoscopic image visual comfort assessment model

    Science.gov (United States)

    Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.

    2014-11-01

    Since stereoscopic images can provide observers with both realistic and uncomfortable viewing experiences, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most of the attention when humans observe stereoscopic images, this paper proposes a new foreground object based visual comfort assessment (VCA) metric. First, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one having the largest average disparity. Second, three visual features, namely the average disparity, average width and spatial complexity of the foreground object, are computed from the perspective of visual attention. Nevertheless, the object's width and complexity do not influence the perception of visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, we divide the images into four categories on the basis of disparity and width and, third, apply four different models to predict visual comfort more precisely. Experimental results show that the proposed VCA metric outperforms other existing metrics and can achieve a high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.

  10. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  11. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  12. Interpretation of medical images by model guided analysis

    International Nuclear Information System (INIS)

    Karssemeijer, N.

    1989-01-01

    Progress in the development of digital pictorial information systems stimulates a growing interest in the use of image analysis techniques in medicine. Especially when precise quantitative information is required, the use of fast and reproducible computer analysis may be more appropriate than relying on visual judgement only. Such quantitative information can be valuable, for instance, in diagnostics or in irradiation therapy planning. As medical images are mostly recorded in a prescribed way, human anatomy guarantees a common image structure for each particular type of exam. In this thesis it is investigated how to make use of this a priori knowledge to guide image analysis. For that purpose models are developed which are suited to capture common image structure. The first part of this study is devoted to an analysis of nuclear medicine images of myocardial perfusion. In ch. 2 a model of these images is designed in order to represent characteristic image properties. It is shown that for these relatively simple images a compact symbolic description can be achieved, without significant loss of diagnostically important image properties. Possibilities for automatic interpretation of more complex images are investigated in the following chapters. The central topic is segmentation of organs. Two methods are proposed and tested on a set of abdominal X-ray CT scans. Ch. 3 describes a serial approach based on a semantic network and the use of search areas. Relational constraints are used to guide the image processing and to classify detected image segments. In chs. 4 and 5 a more general parallel approach is utilized, based on a Markov random field image model. A stochastic model used to represent prior knowledge about the spatial arrangement of organs is implemented as an external field. (author). 66 refs.; 27 figs.; 6 tabs

  13. Image Re-Ranking Based on Topic Diversity.

    Science.gov (United States)

    Qian, Xueming; Lu, Dan; Wang, Yaxiong; Zhu, Li; Tang, Yuan Yan; Wang, Meng

    2017-08-01

    Social media sharing websites allow users to annotate images with free tags, which significantly contributes to the development of web image retrieval. Tag-based image search is an important method of finding images shared by users in social networks. However, making the top-ranked results both relevant and diverse is challenging. In this paper, we propose a topic-diverse ranking approach for tag-based image retrieval with the aim of promoting topic coverage performance. First, we construct a tag graph based on the similarity between tags. Then, a community detection method is applied to mine the topic community of each tag. After that, inter-community and intra-community ranking are introduced to obtain the final retrieval results. In the inter-community ranking process, an adaptive random walk model is employed to rank the communities based on the multiple information sources of each topic community. In addition, we build an inverted index structure for images to accelerate the searching process. Experimental results on the Flickr and NUS-WIDE data sets show the effectiveness of the proposed approach.
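
    To make the community-ranking step concrete, the sketch below runs a generic random walk with restart over a similarity graph and returns stationary relevance scores. It does not reproduce the adaptive, multi-information weighting of the paper, and all names and parameter values are assumptions.

      import numpy as np

      def random_walk_scores(W, restart=0.15, n_iter=100):
          # W: non-negative similarity matrix between communities (or images),
          # with every row having at least one positive entry.
          P = W / W.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
          n = W.shape[0]
          r = np.full(n, 1.0 / n)
          for _ in range(n_iter):
              r = restart / n + (1.0 - restart) * (P.T @ r)
          return r                                      # higher score = higher rank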

  14. Cryptanalysis of a chaos-based image encryption algorithm

    International Nuclear Information System (INIS)

    Cokal, Cahit; Solak, Ercan

    2009-01-01

    A chaos-based image encryption algorithm was proposed in [Z.-H. Guan, F. Huang, W. Guan, Phys. Lett. A 346 (2005) 153]. In this Letter, we analyze the security weaknesses of the proposal. By applying chosen-plaintext and known-plaintext attacks, we show that all the secret parameters can be revealed

  15. Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.

    Science.gov (United States)

    Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B

    2018-02-01

    Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study was carried out for utilizing simultaneous PET/MR imaging for five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared
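
    The Dice coefficients reported above compare tissue-class masks from the pseudo CT with those from the acquired CT; a minimal version of that overlap measure is sketched below (generic formula, not the authors' evaluation code).

      import numpy as np

      def dice_coefficient(mask_a, mask_b):
          # Dice overlap of two binary masks: 2|A ∩ B| / (|A| + |B|).
          a = np.asarray(mask_a, dtype=bool)
          b = np.asarray(mask_b, dtype=bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0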

  16. CROWDSOURCING BASED 3D MODELING

    Directory of Open Access Journals (Sweden)

    A. Somogyi

    2016-06-01

    Full Text Available Web-based photo albums that support organizing and viewing the users’ images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images to enable their use, e.g. in location-based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used a laser scanner as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived by applying photogrammetric processing software, simply by using images from the community, without visiting the site.

  17. Functional Brain Imaging Synthesis Based on Image Decomposition and Kernel Modeling: Application to Neurodegenerative Diseases

    Directory of Open Access Journals (Sweden)

    Francisco J. Martinez-Murcia

    2017-11-01

    Full Text Available The rise of neuroimaging in research and clinical practice, together with the development of new machine learning techniques, has strongly encouraged the Computer Aided Diagnosis (CAD) of different diseases and disorders. However, these algorithms are often tested on proprietary datasets to which access is limited and, therefore, a direct comparison between CAD procedures is not possible. Furthermore, the sample size is often small for developing accurate machine learning methods. Multi-center initiatives are currently a very useful, although limited, tool for the recruitment of large populations and the standardization of CAD evaluation. As an alternative, we propose a brain image synthesis procedure intended to generate a new image set that shares characteristics with an original one. Our system focuses on nuclear imaging modalities such as PET or SPECT brain images. We analyze the dataset by applying PCA to the original images, and then model the distribution of samples in the projected eigenbrain space using a Probability Density Function (PDF) estimator. Once the model has been built, we can generate new coordinates in the eigenbrain space belonging to the same class, which can then be projected back to the image space. The system has been evaluated on different functional neuroimaging datasets, assessing the resemblance of the synthetic images to the original ones, the differences between them, their generalization ability, and the independence of the synthetic dataset with respect to the original. The synthetic images maintain the differences between groups found in the original dataset, with no significant differences when comparing them to real-world samples. Furthermore, they featured a similar performance and generalization capability to that of the original dataset. These results prove that these images are suitable for standardizing the evaluation of CAD pipelines, and providing data augmentation in machine learning systems -e.g. in deep
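
    The PCA-plus-density-estimation synthesis loop can be sketched with scikit-learn as below: project the images onto an eigenbrain space, fit a kernel density estimate to the scores of one class, draw new scores, and project them back. The estimator choice (Gaussian KDE), component count and bandwidth are assumptions; the paper evaluates several PDF estimators.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KernelDensity

      def synthesize_class_images(images, n_components=16, n_new=50, bandwidth=1.0):
          # images: array (n_subjects, ...) of co-registered scans of a single class;
          # n_components must not exceed the number of subjects.
          X = images.reshape(len(images), -1).astype(np.float64)
          pca = PCA(n_components=n_components).fit(X)
          scores = pca.transform(X)                        # coordinates in eigenbrain space
          kde = KernelDensity(bandwidth=bandwidth).fit(scores)
          new_scores = kde.sample(n_new, random_state=0)   # new class-consistent coordinates
          new_images = pca.inverse_transform(new_scores)   # back to image space
          return new_images.reshape((n_new,) + images.shape[1:])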

  18. Construction of In Vivo Fluorescent Imaging of Echinococcus granulosus in a Mouse Model.

    Science.gov (United States)

    Wang, Sibo; Yang, Tao; Zhang, Xuyong; Xia, Jie; Guo, Jun; Wang, Xiaoyi; Hou, Jixue; Zhang, Hongwei; Chen, Xueling; Wu, Xiangwei

    2016-06-01

    Human hydatid disease (cystic echinococcosis, CE) is a chronic parasitic infection caused by the larval stage of the cestode Echinococcus granulosus. As the disease mainly affects the liver, approximately 70% of all identified CE cases are detected in this organ. Optical molecular imaging (OMI), a noninvasive imaging technique, has never been used in vivo with the specific molecular markers of CE. Thus, we aimed to construct an in vivo fluorescent imaging mouse model of CE to locate and quantify the presence of the parasites within the liver noninvasively. Drug-treated protoscolices were monitored after labeling with JC-1 dye in in vitro and in vivo studies. This work describes for the first time the successful construction of an in vivo model of E. granulosus in a small living experimental animal to achieve dynamic monitoring and observation at multiple time points of the infection course. Using this model, we quantified and analyzed labeled protoscolices based on the intensities of their red and green fluorescence. Interestingly, the ratio of red to green fluorescence intensity not only revealed the location of protoscolices but also determined the viability of the parasites in both in vitro and in vivo tests. The noninvasive imaging model proposed in this work will be further studied for long-term detection and observation and may potentially be widely utilized in susceptibility testing and therapeutic effect evaluation.

  19. Validation of model-based brain shift correction in neurosurgery via intraoperative magnetic resonance imaging: preliminary results

    Science.gov (United States)

    Luo, Ma; Frisken, Sarah F.; Weis, Jared A.; Clements, Logan W.; Unadkat, Prashin; Thompson, Reid C.; Golby, Alexandra J.; Miga, Michael I.

    2017-03-01

    The quality of brain tumor resection surgery is dependent on the spatial agreement between the preoperative image and the intraoperative anatomy. However, brain shift compromises this alignment. Currently, the clinical standard for monitoring brain shift is intraoperative magnetic resonance (iMR). While iMR provides a better understanding of brain shift, its cost and encumbrance are a consideration for medical centers. Hence, we are developing a model-based method that can be a complementary technology to address brain shift in standard resections, with resource-intensive cases as referrals for iMR facilities. Our strategy constructs a deformation `atlas' containing potential deformation solutions derived from a biomechanical model that accounts for variables such as cerebrospinal fluid drainage and mannitol effects. Volumetric deformation is estimated with an inverse approach that determines the optimal combination of `atlas' solutions that best matches the measured surface deformation. Accordingly, the preoperative image is updated based on the computed deformation field. This study is the latest development to validate our methodology with iMR. Briefly, preoperative and intraoperative MR images of 2 patients were acquired. Homologous surface points were selected on the preoperative and intraoperative scans as measurements of surface deformation and used to drive the inverse problem. To assess the model accuracy, the subsurface shift of targets between the preoperative and intraoperative states was measured and compared to the model prediction. Considering subsurface shift above 3 mm, the proposed strategy provides an average shift correction of 59% across the 2 cases. While further improvements in both the model and the ability to validate with iMR are desired, the results reported are encouraging.

  20. Cryptanalysis on an image block encryption algorithm based on spatiotemporal chaos

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; He Guo-Xiang

    2012-01-01

    An image block encryption scheme based on spatiotemporal chaos has been proposed recently. In this paper, we analyse the security weakness of the proposal. The main problem of the original scheme is that the generated keystream remains unchanged for encrypting every image. Based on the flaws, we demonstrate a chosen plaintext attack for revealing the equivalent keys with only 6 pairs of plaintext/ciphertext used. Finally, experimental results show the validity of our attack. (general)

  1. Voxel-based statistical analysis of cerebral glucose metabolism in the rat cortical deafness model by 3D reconstruction of brain from autoradiographic images

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Sung; Park, Kwang Suk [Seoul National University College of Medicine, Department of Nuclear Medicine, 28 Yungun-Dong, Chongno-Ku, Seoul (Korea); Seoul National University College of Medicine, Department of Biomedical Engineering, Seoul (Korea); Ahn, Soon-Hyun; Oh, Seung Ha; Kim, Chong Sun; Chung, June-Key; Lee, Myung Chul [Seoul National University College of Medicine, Department of Otolaryngology, Head and Neck Surgery, Seoul (Korea); Lee, Dong Soo; Jeong, Jae Min [Seoul National University College of Medicine, Department of Nuclear Medicine, 28 Yungun-Dong, Chongno-Ku, Seoul (Korea)

    2005-06-01

    Animal models of cortical deafness are essential for investigation of the cerebral glucose metabolism in congenital or prelingual deafness. Autoradiographic imaging is mainly used to assess the cerebral glucose metabolism in rodents. In this study, procedures for the 3D voxel-based statistical analysis of autoradiographic data were established to enable investigations of the within-modal and cross-modal plasticity through entire areas of the brain of sensory-deprived animals without lumping together heterogeneous subregions within each brain structure into a large region of interest. Thirteen 2-[1-14C]-deoxy-D-glucose autoradiographic images were acquired from six deaf and seven age-matched normal rats (age 6-10 weeks). The deafness was induced by surgical ablation. For the 3D voxel-based statistical analysis, brain slices were extracted semiautomatically from the autoradiographic images, which contained the coronal sections of the brain, and were stacked into 3D volume data. Using principal axes matching and mutual information maximization algorithms, the adjacent coronal sections were co-registered using a rigid body transformation, and all sections were realigned to the first section. A study-specific template was composed and the realigned images were spatially normalized onto the template. Following count normalization, voxel-wise t tests were performed to reveal the areas with significant differences in cerebral glucose metabolism between the deaf and the control rats. Continuous and clear edges were detected in each image after registration between the coronal sections, and the internal and external landmarks extracted from the spatially normalized images were well matched, demonstrating the reliability of the spatial processing procedures. Voxel-wise t tests showed that the glucose metabolism in the bilateral auditory cortices of the deaf rats was significantly (P<0.001) lower than that in the controls. There was no significantly reduced metabolism in

  2. Voxel-based statistical analysis of cerebral glucose metabolism in the rat cortical deafness model by 3D reconstruction of brain from autoradiographic images

    International Nuclear Information System (INIS)

    Lee, Jae Sung; Park, Kwang Suk; Ahn, Soon-Hyun; Oh, Seung Ha; Kim, Chong Sun; Chung, June-Key; Lee, Myung Chul; Lee, Dong Soo; Jeong, Jae Min

    2005-01-01

    Animal models of cortical deafness are essential for investigation of the cerebral glucose metabolism in congenital or prelingual deafness. Autoradiographic imaging is mainly used to assess the cerebral glucose metabolism in rodents. In this study, procedures for the 3D voxel-based statistical analysis of autoradiographic data were established to enable investigations of the within-modal and cross-modal plasticity through entire areas of the brain of sensory-deprived animals without lumping together heterogeneous subregions within each brain structure into a large region of interest. Thirteen 2-[1-14C]-deoxy-D-glucose autoradiographic images were acquired from six deaf and seven age-matched normal rats (age 6-10 weeks). The deafness was induced by surgical ablation. For the 3D voxel-based statistical analysis, brain slices were extracted semiautomatically from the autoradiographic images, which contained the coronal sections of the brain, and were stacked into 3D volume data. Using principal axes matching and mutual information maximization algorithms, the adjacent coronal sections were co-registered using a rigid body transformation, and all sections were realigned to the first section. A study-specific template was composed and the realigned images were spatially normalized onto the template. Following count normalization, voxel-wise t tests were performed to reveal the areas with significant differences in cerebral glucose metabolism between the deaf and the control rats. Continuous and clear edges were detected in each image after registration between the coronal sections, and the internal and external landmarks extracted from the spatially normalized images were well matched, demonstrating the reliability of the spatial processing procedures. Voxel-wise t tests showed that the glucose metabolism in the bilateral auditory cortices of the deaf rats was significantly (P<0.001) lower than that in the controls. There was no significantly reduced metabolism in any

  3. Video-based noncooperative iris image segmentation.

    Science.gov (United States)

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
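
    The direct least-squares ellipse fit used for the pupil and limbic boundaries is available off the shelf in OpenCV; the sketch below fits an ellipse to the strongest edge contour of a grayscale eye frame. It omits the quality filter, coarse-to-fine search and noise-removal stages of the paper, and the thresholds and names are assumptions.

      import cv2

      def fit_boundary_ellipse(gray):
          # gray: 8-bit grayscale eye image (one video frame)
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)
          edges = cv2.Canny(blurred, 50, 150)
          # OpenCV 4.x return signature (older versions also return the input image)
          contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
          contours = [c for c in contours if len(c) >= 5]   # fitEllipse needs >= 5 points
          if not contours:
              return None
          largest = max(contours, key=cv2.contourArea)
          (cx, cy), (major, minor), angle = cv2.fitEllipse(largest)  # direct LS ellipse fit
          return cx, cy, major, minor, angle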

  4. Predicting 3D pose in partially overlapped X-ray images of knee prostheses using model-based Roentgen stereophotogrammetric analysis (RSA).

    Science.gov (United States)

    Hsu, Chi-Pin; Lin, Shang-Chih; Shih, Kao-Shang; Huang, Chang-Hung; Lee, Chian-Her

    2014-12-01

    After total knee replacement, the model-based Roentgen stereophotogrammetric analysis (RSA) technique has been used to monitor the status of prosthetic wear, misalignment, and even failure. However, the overlap of the prosthetic outlines inevitably increases errors in the estimation of prosthetic poses due to the limited amount of available outlines. In the literature, quite a few studies have investigated the problems induced by the overlapped outlines, and manual adjustment is still the mainstream. This study proposes two methods to automate the image processing of overlapped outlines prior to the pose registration of prosthetic models. The outline-separated method defines the intersected points and segments the overlapped outlines. The feature-recognized method uses the point and line features of the remaining outlines to initiate registration. Overlap percentage is defined as the ratio of overlapped to non-overlapped outlines. The simulated images with five overlapping percentages are used to evaluate the robustness and accuracy of the proposed methods. Compared with non-overlapped images, overlapped images reduce the number of outlines available for model-based RSA calculation. The maximum and root mean square errors for a prosthetic outline are 0.35 and 0.04 mm, respectively. The mean translation and rotation errors are 0.11 mm and 0.18°, respectively. The errors of the model-based RSA results are increased when the overlap percentage is beyond about 9%. In conclusion, both outline-separated and feature-recognized methods can be seamlessly integrated to automate the calculation of rough registration. This can significantly increase the clinical practicability of the model-based RSA technique.

  5. Gallbladder shape extraction from ultrasound images using active contour models.

    Science.gov (United States)

    Ciecholewski, Marcin; Chochołowicz, Jakub

    2013-12-01

    Gallbladder function is routinely assessed using ultrasonographic (USG) examinations. In clinical practice, doctors very often analyse the gallbladder shape when diagnosing selected disorders, e.g. if there are turns or folds of the gallbladder, so extracting its shape from USG images using supporting software can simplify a diagnosis that is often difficult to make. The paper describes two active contour models: the edge-based model and the region-based model making use of a morphological approach, both designed for extracting the gallbladder shape from USG images. The active contour models were applied to USG images without lesions and to those showing specific disease units, namely, anatomical changes like folds and turns of the gallbladder as well as polyps and gallstones. This paper also presents modifications of the edge-based model, such as the method for removing self-crossings and loops or the method of dampening the inflation force which moves nodes if they approach the edge being determined. The user is also able to add a fragment of the approximated edge beyond which neither active contour model will move if this edge is incomplete in the USG image. The modifications of the edge-based model presented here allow more precise results to be obtained when extracting the shape of the gallbladder from USG images than if the morphological model is used. © 2013 Elsevier Ltd. Published by Elsevier Ltd. All rights reserved.

  6. A neural network detection model of spilled oil based on the texture analysis of SAR image

    Science.gov (United States)

    An, Jubai; Zhu, Lisong

    2006-01-01

    A Radial Basis Function Neural Network (RBFNN) model is investigated for the detection of spilled oil based on texture analysis of SAR imagery. In this paper, to take advantage of the abundant texture information in SAR imagery, texture features are extracted by both the wavelet transform and the gray level co-occurrence matrix. The RBFNN model is fed with a vector of these texture features, trained and tested on the sample data set of feature vectors, and finally used to classify a SAR image. The classification results for a SAR image of spilled oil show that the classification accuracy for the oil spill is 86.2% with the RBFNN model using both wavelet and gray-level texture, while it is 78.0% with the same model using only wavelet texture as input. The model using both the wavelet transform and the gray level co-occurrence matrix is thus more effective than the one using only wavelet texture; it also preserves the complicated proximity relations and achieves good classification performance.
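
    A rough sketch of the two ingredients named above: gray-level co-occurrence texture features (here via scikit-image; the wavelet features are omitted) and a small radial basis function network with k-means centres and least-squares output weights. This is a generic stand-in, not the authors' network, and the parameter values are assumptions.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.cluster import KMeans

      def glcm_features(patch):
          # patch: 2-D uint8 SAR image patch; returns four co-occurrence statistics
          glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          props = ("contrast", "homogeneity", "energy", "correlation")
          return np.array([graycoprops(glcm, p).mean() for p in props])

      class RBFN:
          # minimal RBF network: k-means centres, Gaussian units, linear read-out
          def __init__(self, n_centers=10, sigma=1.0):
              self.n_centers, self.sigma = n_centers, sigma
          def _phi(self, X):
              d = np.linalg.norm(X[:, None, :] - self.centers_[None, :, :], axis=2)
              return np.exp(-d ** 2 / (2.0 * self.sigma ** 2))
          def fit(self, X, y):
              km = KMeans(n_clusters=self.n_centers, n_init=10, random_state=0).fit(X)
              self.centers_ = km.cluster_centers_
              self.w_, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
              return self
          def predict(self, X):
              return self._phi(X) @ self.w_

    Note that graycomatrix/graycoprops are spelled greycomatrix/greycoprops in older scikit-image releases.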

  7. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the standpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated using scalar diffraction theory, and depth estimation is redescribed on the basis of physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the differences between imaging analyses based on geometric optics and physical optics are also shown in the simulations. (paper)

  8. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter

    Directory of Open Access Journals (Sweden)

    Qingjiao Sun

    2016-01-01

    Full Text Available Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and the guided image filter (GIF). First, preprocessing including stain normalization and wavelet denoising is performed on the Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light on the high-frequency part of the image and to correct the intensity inhomogeneity and detail discontinuity of the image. Next, the HDR pathological image is generated by a least squares method using the low dynamic range (LDR) image and the H and E channel images. Finally, the fine enhanced image is acquired after a detail enhancement process. Experiments with 140 pathological images demonstrate the performance advantages of our proposed method as compared with related work.
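
    For reference, a compact gray-guide implementation of the guided image filter (He et al.) that serves as the edge-preserving smoother in pipelines like the one above; box filtering is done with scipy, and the radius and epsilon values are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def guided_filter(guide, src, radius=8, eps=1e-3):
          # guide, src: 2-D float images in [0, 1]; returns the filtered src
          I = guide.astype(np.float64)
          p = src.astype(np.float64)
          box = lambda x: uniform_filter(x, size=2 * radius + 1)
          mean_I, mean_p = box(I), box(p)
          cov_Ip = box(I * p) - mean_I * mean_p
          var_I = box(I * I) - mean_I * mean_I
          a = cov_Ip / (var_I + eps)            # local linear coefficients
          b = mean_p - a * mean_I
          return box(a) * I + box(b)            # q = mean(a) * I + mean(b)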

  9. Mapping Fire Severity Using Imaging Spectroscopy and Kernel Based Image Analysis

    Science.gov (United States)

    Prasad, S.; Cui, M.; Zhang, Y.; Veraverbeke, S.

    2014-12-01

    Improved spatial representation of within-burn heterogeneity after wildfires is paramount to effective land management decisions and more accurate fire emissions estimates. In this work, we demonstrate the feasibility and efficacy of airborne imaging spectroscopy (hyperspectral imagery) for quantifying wildfire burn severity, using kernel based image analysis techniques. Two different airborne hyperspectral datasets, acquired over the 2011 Canyon and 2013 Rim fires in California using the Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) sensor, were used in this study. The Rim Fire, covering parts of Yosemite National Park, started on August 17, 2013, and was the third largest fire in California's history. The Canyon Fire occurred in the Tehachapi mountains and started on September 4, 2011. In addition to post-fire data for both fires, half of the Rim fire was also covered with pre-fire images. Fire severity was measured in the field using the Geo Composite Burn Index (GeoCBI). The field data were utilized to train and validate our models; the trained models, in conjunction with imaging spectroscopy data, were then used for GeoCBI estimation over wide geographical regions. This work presents an approach for using remotely sensed imagery combined with GeoCBI field data to map fire scars based on a non-linear (kernel based) epsilon-Support Vector Regression (e-SVR), which was used to learn the relationship between spectra and GeoCBI in a kernel-induced feature space. Classification of healthy vegetation versus fire-affected areas based on morphological multi-attribute profiles was also studied. The availability of pre- and post-fire imaging spectroscopy data over the Rim Fire provided a unique opportunity to evaluate the performance of bi-temporal imaging spectroscopy for assessing post-fire effects. This type of data is currently constrained because of limited airborne acquisitions before a fire, but will become widespread with future spaceborne sensors such as those on
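
    A minimal scikit-learn rendering of the epsilon-SVR step: learn the mapping from reflectance spectra to field GeoCBI on the plot data, then apply it per pixel. The hyperparameters, array shapes and synthetic data are assumptions for illustration only.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      def fit_geocbi_regressor(plot_spectra, plot_geocbi):
          # plot_spectra: (n_plots, n_bands) reflectance; plot_geocbi: field scores
          model = make_pipeline(StandardScaler(),
                                SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma="scale"))
          return model.fit(plot_spectra, plot_geocbi)

      # toy usage with random stand-ins for AVIRIS spectra and GeoCBI field plots
      rng = np.random.default_rng(0)
      regressor = fit_geocbi_regressor(rng.random((60, 224)), rng.uniform(0, 3, 60))
      severity = regressor.predict(rng.random((1000, 224)))   # one value per pixel spectrum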

  10. POLARIZATION IMAGING AND SCATTERING MODEL OF CANCEROUS LIVER TISSUES

    Directory of Open Access Journals (Sweden)

    DONGZHI LI

    2013-07-01

    Full Text Available We apply different polarization imaging techniques to cancerous liver tissues and compare the relative contrasts of difference polarization imaging (DPI), degree of polarization imaging (DOPI) and rotating linear polarization imaging (RLPI). Experimental results show that a number of polarization imaging parameters are capable of differentiating cancerous cells in isotropic liver tissues. To analyze the contrast mechanism of the cancer-sensitive polarization imaging parameters, we propose a scattering model containing two types of spherical scatterers and carry out Monte Carlo simulations based on this bi-component model. Both the experimental and Monte Carlo simulated results show that the RLPI technique can provide good imaging contrast of cancerous tissues. The bi-component scattering model provides a useful tool for analyzing the contrast mechanism of polarization imaging of cancerous tissues.

  11. Solid models for CT/MR image display

    International Nuclear Information System (INIS)

    ManKovich, N.J.; Yue, A.; Kioumehr, F.; Ammirati, M.; Turner, S.

    1991-01-01

    Medical imaging can now take wider advantage of Computer-Aided Manufacturing through rapid prototyping technologies (RPT) such as stereolithography, laser sintering, and laminated object manufacturing to directly produce solid models of patient anatomy from processed CT and MR images. While conventional surgical planning relies on consultation with the radiologist combined with direct reading and measurement of CT and MR studies, 3-D surface and volumetric display workstations are providing a more easily interpretable view of patient anatomy. RPT can provide the surgeon with a life-size model of patient anatomy constructed layer by layer with full internal detail. The authors have developed a prototype image processing and model fabrication system based on stereolithography, which provides the neurosurgeon with models of the skull base. Parallel comparison of the model with the original thresholded CT data and with a CRT-displayed surface rendering showed that both have an accuracy of >99.6 percent. The measurements on the surface-rendered display proved more difficult to locate exactly and yielded a standard deviation of 2.37 percent. This paper presents an accuracy study and discusses ways of assessing the quality of neurosurgical plans when 3-D models are made available as planning tools

  12. Multilevel binomial logistic prediction model for malignant pulmonary nodules based on texture features of CT image

    International Nuclear Information System (INIS)

    Wang Huan; Guo Xiuhua; Jia Zhongwei; Li Hongkai; Liang Zhigang; Li Kuncheng; He Qian

    2010-01-01

    Purpose: To introduce a multilevel binomial logistic prediction model-based computer-aided diagnosis (CAD) method for small solitary pulmonary nodules (SPNs) that combines patient characteristics with textural features of the CT image. Materials and methods: Fourteen gray level co-occurrence matrix textural features were obtained from 2171 benign and malignant small solitary pulmonary nodules belonging to 185 patients. A multilevel binomial logistic model was applied to gain initial insights. Results: Five texture features (Inertia, Entropy, Correlation, Difference-mean and Sum-Entropy) and patient age show an aggregating character at the patient level and are statistically different (P < 0.05) between benign and malignant small solitary pulmonary nodules. Conclusion: Some gray level co-occurrence matrix textural features are efficient descriptive features of CT images of small solitary pulmonary nodules and, combined with patient-level characteristics, can to some extent benefit the diagnosis of early-stage lung cancer.

  13. Distinguishing Computer-Generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning

    Directory of Open Access Journals (Sweden)

    Ye Yao

    2018-04-01

    Full Text Available Computer-generated graphics (CGs) are images generated by computer software. The rapid development of computer graphics technologies has made it easier to generate photorealistic computer graphics, and these graphics are quite difficult to distinguish from natural images (NIs) with the naked eye. In this paper, we propose a method based on sensor pattern noise (SPN) and deep learning to distinguish CGs from NIs. Before being fed into our convolutional neural network (CNN)-based model, these images (both CGs and NIs) are clipped into image patches. Furthermore, three high-pass filters (HPFs) are used to remove low-frequency signals, which represent the image content. These filters are also used to reveal the residual signal as well as the SPN introduced by the digital camera device. Different from the traditional methods of distinguishing CGs from NIs, the proposed method utilizes a five-layer CNN to classify the input image patches. Based on the classification results of the image patches, we deploy a majority vote scheme to obtain the classification results for the full-size images. The experiments have demonstrated that (1) the proposed method with three HPFs can achieve better results than that with only one HPF or no HPF and that (2) the proposed method with three HPFs achieves 100% accuracy, even when the NIs undergo JPEG compression with a quality factor of 75.

  14. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    To solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger adaptability. Experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high-quality image signals under under-sampling conditions.

  15. Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images

    Science.gov (United States)

    Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.

    2018-04-01

    A novel ship detection method which aims to make full use of both the spatial and spectral information in hyperspectral images is proposed. First, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with the Otsu threshold segmentation method. Second, multiple features, including spectral and texture features, are extracted from the hyperspectral images: principal component analysis (PCA) is used to extract spectral features, and the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing a single feature with different combinations of multiple features. Compared with the traditional single-feature method and a Support Vector Machine (SVM) model, the proposed method reliably detects ships against complex backgrounds and effectively improves detection accuracy.
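
    A condensed sketch of the per-pixel feature fusion and random forest step described above, using scikit-learn; the sea mask, GLCM texture stack and training labels are assumed to have been prepared already, and all names are illustrative.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.ensemble import RandomForestClassifier

      def detect_ships(cube, texture_stack, train_idx, train_labels, n_pcs=5):
          # cube: (rows, cols, bands) sea-masked hyperspectral image
          # texture_stack: (rows, cols, n_texture) GLCM features per pixel
          # train_idx: flat pixel indices with labels (1 = ship, 0 = background)
          rows, cols, bands = cube.shape
          spectral = PCA(n_components=n_pcs).fit_transform(cube.reshape(-1, bands))
          features = np.hstack([spectral, texture_stack.reshape(rows * cols, -1)])
          rf = RandomForestClassifier(n_estimators=200, random_state=0)
          rf.fit(features[train_idx], train_labels)
          return rf.predict(features).reshape(rows, cols)   # ship / non-ship map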

  16. Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging

    Science.gov (United States)

    Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2016-02-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the first implementation of the algorithm and shows reconstruction results for synthetically generated data.
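
    In its generic form, MBIR poses the reconstruction as a maximum a posteriori estimate; a commonly used cost function (written here in LaTeX, with the specific acoustic forward model and regularizer of the paper left unspecified) is

      \hat{x} = \arg\min_{x} \left\{ \tfrac{1}{2}\,(y - A x)^{\top} \Lambda\, (y - A x) + R(x) \right\},

    where y is the measured ultrasound data, A the acoustic forward model, \Lambda an inverse noise-covariance weighting, and R(x) a regularizing prior on the image x.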

  17. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2016-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the first implementation of the algorithm and shows reconstruction results for synthetically generated data.

  18. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2015-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the first implementation of the algorithm and shows reconstruction results for synthetically generated data.

  19. Image Quality Improvement on OpenGL-Based Animations by Using CUDA Architecture

    Directory of Open Access Journals (Sweden)

    Taner UÇKAN

    2016-04-01

    Full Text Available 2D or 3D rendering technology is used to graphically model many physical phenomena occurring in real life by means of computers. At the same time, the ever-increasing intensity of graphics applications requires that the image quality of such models be enhanced and that they be rendered more quickly. In this direction, a new software- and hardware-based architecture called CUDA was introduced by Nvidia at the end of 2006. Thanks to this architecture, a larger number of graphics processors has started contributing to the parallel solution of general-purpose problems. In this study, this new parallel computing architecture is taken into consideration and an animation application consisting of humanoid robots with different behavioral characteristics is developed using the OpenGL library in C++. The animation is initially implemented on a single serial CPU and then parallelized using the CUDA architecture. Finally, the serial and parallel versions of the same animation are compared against each other on the basis of the number of image frames per second. The results reveal that the parallel application performs by far the best while yielding high-quality images.

  20. CT radiation dose and image quality optimization using a porcine model.

    Science.gov (United States)

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2013-01-01

    To evaluate potential radiation dose savings and resultant image quality effects with regard to optimization of commonly performed computed tomography (CT) studies derived from imaging a porcine (pig) model. Imaging protocols for 4 clinical CT suites were developed based on the lowest milliamperage and kilovoltage, the highest pitch that could be set from current imaging protocol parameters, or both. This occurred before significant changes in noise, contrast, and spatial resolution were measured objectively on images produced from a quality assurance CT phantom. The current and derived phantom protocols were then applied to scan a porcine model for head, abdomen, and chest CT studies. Further optimized protocols were developed based on the same methodology as in the phantom study. The optimization achieved with respect to radiation dose and image quality was evaluated following data collection of radiation dose recordings and image quality review. Relative visual grading analysis of image quality criteria adapted from the European guidelines on radiology quality criteria for CT were used for studies completed with both the phantom-based or porcine-derived imaging protocols. In 5 out of 16 experimental combinations, the current clinical protocol was maintained. In 2 instances, the phantom protocol reduced radiation dose by 19% to 38%. In the remaining 9 instances, the optimization based on the porcine model further reduced radiation dose by 17% to 38%. The porcine model closely reflects anatomical structures in humans, allowing the grading of anatomical criteria as part of image quality review without radiation risks to human subjects. This study demonstrates that using a porcine model to evaluate CT optimization resulted in more radiation dose reduction than when imaging protocols were tested solely on quality assurance phantoms.

  1. IMAGE DESCRIPTIONS FOR SKETCH BASED IMAGE RETRIEVAL

    OpenAIRE

    SAAVEDRA RONDO, JOSE MANUEL

    2008-01-01

    Due to the massive use of the Internet together with the proliferation of media devices, content based image retrieval has become an active discipline in computer science. A common content based image retrieval approach requires that the user gives a regular image (e.g., a photo) as a query. However, having a regular image as query may be a serious problem. Indeed, people commonly use an image retrieval system because they do not count on the desired image. An easy alternative way t...

  2. Vocal Tract Images Reveal Neural Representations of Sensorimotor Transformation During Speech Imitation

    Science.gov (United States)

    Carey, Daniel; Miquel, Marc E.; Evans, Bronwen G.; Adank, Patti; McGettigan, Carolyn

    2017-01-01

    Abstract Imitating speech necessitates the transformation from sensory targets to vocal tract motor output, yet little is known about the representational basis of this process in the human brain. Here, we address this question by using real-time MR imaging (rtMRI) of the vocal tract and functional MRI (fMRI) of the brain in a speech imitation paradigm. Participants trained on imitating a native vowel and a similar nonnative vowel that required lip rounding. Later, participants imitated these vowels and an untrained vowel pair during separate fMRI and rtMRI runs. Univariate fMRI analyses revealed that regions including left inferior frontal gyrus were more active during sensorimotor transformation (ST) and production of nonnative vowels, compared with native vowels; further, ST for nonnative vowels activated somatomotor cortex bilaterally, compared with ST of native vowels. Using test representational similarity analysis (RSA) models constructed from participants’ vocal tract images and from stimulus formant distances, we found that RSA searchlight analyses of fMRI data showed either type of model could be represented in somatomotor, temporal, cerebellar, and hippocampal neural activation patterns during ST. We thus provide the first evidence of widespread and robust cortical and subcortical neural representation of vocal tract and/or formant parameters, during prearticulatory ST. PMID:28334401

  3. A prospective gating method to acquire a diverse set of free-breathing CT images for model-based 4DCT

    Science.gov (United States)

    O'Connell, D.; Ruan, D.; Thomas, D. H.; Dou, T. H.; Lewis, J. H.; Santhanam, A.; Lee, P.; Low, D. A.

    2018-02-01

    Breathing motion modeling requires observation of tissues at sufficiently distinct respiratory states for proper 4D characterization. This work proposes a method to improve sampling of the breathing cycle with limited imaging dose. We designed and tested a prospective free-breathing acquisition protocol with a simulation using datasets from five patients imaged with a model-based 4DCT technique. Each dataset contained 25 free-breathing fast helical CT scans with simultaneous breathing surrogate measurements. Tissue displacements were measured using deformable image registration. A correspondence model related tissue displacement to the surrogate. Model residual was computed by comparing predicted displacements to image registration results. To determine a stopping criteria for the prospective protocol, i.e. when the breathing cycle had been sufficiently sampled, subsets of N scans where 5  ⩽  N  ⩽  9 were used to fit reduced models for each patient. A previously published metric was employed to describe the phase coverage, or ‘spread’, of the respiratory trajectories of each subset. Minimum phase coverage necessary to achieve mean model residual within 0.5 mm of the full 25-scan model was determined and used as the stopping criteria. Using the patient breathing traces, a prospective acquisition protocol was simulated. In all patients, phase coverage greater than the threshold necessary for model accuracy within 0.5 mm of the 25 scan model was achieved in six or fewer scans. The prospectively selected respiratory trajectories ranked in the (97.5  ±  4.2)th percentile among subsets of the originally sampled scans on average. Simulation results suggest that the proposed prospective method provides an effective means to sample the breathing cycle with limited free-breathing scans. One application of the method is to reduce the imaging dose of a previously published model-based 4DCT protocol to 25% of its original value while
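
    The correspondence model referred to above relates registration-derived tissue displacement to the breathing surrogate; a common choice is a linear model in the surrogate amplitude and its rate. The sketch below fits such a model per voxel and reports its residual; the actual model form used in the cited protocol may differ.

      import numpy as np

      def fit_correspondence(surrogate_amp, surrogate_rate, displacements):
          """displacements: (n_scans, 3) registration-measured motion of one voxel."""
          X = np.column_stack([surrogate_amp, surrogate_rate, np.ones_like(surrogate_amp)])
          coeffs, *_ = np.linalg.lstsq(X, displacements, rcond=None)   # (3, 3) model parameters
          predicted = X @ coeffs
          residual = np.linalg.norm(predicted - displacements, axis=1).mean()  # mean error (mm)
          return coeffs, residual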

  4. Image based SAR product simulation for analysis

    Science.gov (United States)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR image as input for the simulation. This can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit; the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  5. WE-D-303-02: Applications of Volumetric Images Generated with a Respiratory Motion Model Based On An External Surrogate Signal

    International Nuclear Information System (INIS)

    Hurwitz, M; Williams, C; Dhou, S; Lewis, J; Mishra, P

    2015-01-01

    Purpose: Respiratory motion can vary significantly over the course of simulation and treatment. Our goal is to use volumetric images generated with a respiratory motion model to improve the definition of the internal target volume (ITV) and the estimate of delivered dose. Methods: Ten irregular patient breathing patterns spanning 35 seconds each were incorporated into a digital phantom. Ten images over the first five seconds of breathing were used to emulate a 4DCT scan, build the ITV, and generate a patient-specific respiratory motion model which correlated the measured trajectories of markers placed on the patients’ chests with the motion of the internal anatomy. This model was used to generate volumetric images over the subsequent thirty seconds of breathing. The increase in the ITV taking into account the full 35 seconds of breathing was assessed with ground-truth and model-generated images. For one patient, a treatment plan based on the initial ITV was created and the delivered dose was estimated using images from the first five seconds as well as ground-truth and model-generated images from the next 30 seconds. Results: The increase in the ITV ranged from 0.2 cc to 6.9 cc for the ten patients based on ground-truth information. The model predicted this increase in the ITV with an average error of 0.8 cc. The delivered dose to the tumor (D95) changed significantly from 57 Gy to 41 Gy when estimated using 5 seconds and 30 seconds, respectively. The model captured this effect, giving an estimated D95 of 44 Gy. Conclusion: A respiratory motion model generating volumetric images of the internal patient anatomy could be useful in estimating the increase in the ITV due to irregular breathing during simulation and in assessing delivered dose during treatment. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc. and Radiological Society of North America Research Scholar Grant #RSCH1206

  6. Modeling Image Patches with a Generic Dictionary of Mini-Epitomes

    Science.gov (United States)

    Papandreou, George; Chen, Liang-Chieh; Yuille, Alan L.

    2015-01-01

    The goal of this paper is to question the necessity of features like SIFT in categorical visual recognition tasks. As an alternative, we develop a generative model for the raw intensity of image patches and show that it can support image classification performance on par with optimized SIFT-based techniques in a bag-of-visual-words setting. Key ingredient of the proposed model is a compact dictionary of mini-epitomes, learned in an unsupervised fashion on a large collection of images. The use of epitomes allows us to explicitly account for photometric and position variability in image appearance. We show that this flexibility considerably increases the capacity of the dictionary to accurately approximate the appearance of image patches and support recognition tasks. For image classification, we develop histogram-based image encoding methods tailored to the epitomic representation, as well as an “epitomic footprint” encoding which is easy to visualize and highlights the generative nature of our model. We discuss in detail computational aspects and develop efficient algorithms to make the model scalable to large tasks. The proposed techniques are evaluated with experiments on the challenging PASCAL VOC 2007 image classification benchmark. PMID:26321859

  7. SQL based cardiovascular ultrasound image classification.

    Science.gov (United States)

    Nandagopalan, S; Suryanarayana, Adiga B; Sudarshan, T S B; Chandrashekar, Dhanalakshmi; Manjunath, C N

    2013-01-01

    This paper proposes a novel method to analyze and classify cardiovascular ultrasound echocardiographic images using a Naïve-Bayesian model via database OLAP-SQL. Efficient data mining algorithms based on a tightly-coupled model are used to extract features. Three algorithms are proposed for classification, namely Naïve-Bayesian Classifier for Discrete variables (NBCD) with SQL, NBCD with OLAP-SQL, and Naïve-Bayesian Classifier for Continuous variables (NBCC) using OLAP-SQL. The proposed model is trained with 207 patient images containing normal and abnormal categories. Of the three proposed algorithms, the highest classification accuracy of 96.59% was achieved with NBCC, which is better than earlier methods.
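
    The paper's contribution is computing the Naïve-Bayes statistics inside OLAP/SQL; the sketch below only illustrates the equivalent continuous-variable classifier (NBCC) with scikit-learn, using placeholder feature vectors in place of the echo-derived measurements.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.random((207, 8))         # placeholder features for the 207 patient images
      y = rng.integers(0, 2, 207)      # placeholder labels: 0 = normal, 1 = abnormal

      clf = GaussianNB()               # Gaussian class-conditional density per feature
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())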

  8. A simple method for detecting tumor in T2-weighted MRI brain images. An image-based analysis

    International Nuclear Information System (INIS)

    Lau, Phooi-Yee; Ozawa, Shinji

    2006-01-01

    The objective of this paper is to present a decision support system which uses a computer-based procedure to detect tumor blocks or lesions in digitized medical images. The authors developed a simple method with a low computation effort to detect tumors on T2-weighted Magnetic Resonance Imaging (MRI) brain images, focusing on the connection between the spatial pixel value and tumor properties from four different perspectives: cases having minuscule differences between two images using a fixed block-based method, tumor shape and size using the edge and binary images, tumor properties based on texture values using spatial pixel intensity distribution controlled by a global discriminate value, and the occurrence of content-specific tumor pixel for threshold images. Measurements of the following medical datasets were performed: different time interval images, and different brain disease images on single and multiple slice images. Experimental results have revealed that our proposed technique incurred an overall error smaller than those in other proposed methods. In particular, the proposed method allowed decrements of false alarm and missed alarm errors, which demonstrate the effectiveness of our proposed technique. In this paper, we also present a prototype system, known as PCB, to evaluate the performance of the proposed methods by actual experiments, comparing the detection accuracy and system performance. (author)

  9. VIP-Man: An image-based whole-body adult male model constructed from color photographs of the visible human project for multi-particle Monte Carlo calculations

    International Nuclear Information System (INIS)

    Xu, X.G.; Chao, T.C.; Bozkurt, A.

    2000-01-01

    Human anatomical models have been indispensable to radiation protection dosimetry using Monte Carlo calculations. Existing MIRD-based mathematical models are easy to compute and standardize, but they are simplified and crude compared to human anatomy. This article describes the development of an image-based whole-body model, called VIP-Man, using transversal color photographic images obtained from the National Library of Medicine's Visible Human Project for Monte Carlo organ dose calculations involving photons, electrons, neutrons, and protons. As the first of a series of papers on dose calculations based on VIP-Man, this article provides detailed information about how to construct an image-based model, as well as how to adopt it into well-tested Monte Carlo codes, EGS4, MCNP4B, and MCNPX.

  10. Polarization-dependent Imaging Contrast (PIC) mapping reveals nanocrystal orientation patterns in carbonate biominerals

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, Pupa U.P.A., E-mail: pupa@physics.wisc.edu [University of Wisconsin-Madison, Departments of Physics and Chemistry, Madison, WI 53706 (United States)

    2012-10-15

    Highlights: • Nanocrystal orientation shown by Polarization-dependent Imaging Contrast (PIC) maps. • PIC-mapping of carbonate biominerals reveals their ultrastructure at the nanoscale. • The formation mechanisms of biominerals are discovered by PIC-mapping using PEEM. -- Abstract: Carbonate biominerals are one of the most interesting systems a physicist can study. They play a major role in the CO₂ cycle, and they master templation, self-assembly, nanofabrication, phase transitions, space filling, crystal nucleation and growth mechanisms. A new imaging modality was introduced in the last 5 years that enables direct observation of the orientation of carbonate single crystals, at the nano- and micro-scale. This is Polarization-dependent Imaging Contrast (PIC) mapping, which is based on X-ray linear dichroism and uses PhotoElectron Emission spectroMicroscopy (PEEM). Here we present PIC-mapping results from biominerals, including the nacre and prismatic layers of mollusk shells, and sea urchin teeth. We describe various PIC-mapping approaches and show that these lead to fundamental discoveries on the formation mechanisms of biominerals.

  11. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    Science.gov (United States)

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.

  12. A new level set model for cell image segmentation

    Science.gov (United States)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. In addition, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and yields satisfactory results in cell image processing.

  13. RESEARCH ON FOREST FLAME RECOGNITION ALGORITHM BASED ON IMAGE FEATURE

    Directory of Open Access Journals (Sweden)

    Z. Wang

    2017-09-01

    Full Text Available In recent years, fire recognition based on image features has become a hotspot in fire monitoring. However, due to the complexity of the forest environment, the accuracy of forest fire recognition based on image features is low. Based on this, this paper proposes a feature extraction algorithm based on the YCrCb color space and K-means clustering. Firstly, the paper prepares and analyzes the color characteristics of a large number of forest fire image samples. Using the K-means clustering algorithm, the forest flame model is obtained by comparing the two commonly used color spaces, and the suspected flame area is discriminated and extracted. The experimental results show that the extraction accuracy of the flame area based on the YCrCb color model is higher than that based on the HSI color model; the method can be applied to forest fire identification in different scenes and is feasible in practice.
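
    A sketch of the color-clustering step: pixels are mapped to YCrCb space and K-means groups them so that clusters whose centers are bright and strongly red can be flagged as candidate flame regions. The cluster count and the flame criterion below are illustrative choices, not the paper's values.

      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      def candidate_flame_mask(bgr_image, k=4):
          """Return a binary mask of suspected flame pixels in a BGR forest image."""
          ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).reshape(-1, 3).astype(float)
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(ycrcb)
          centers = np.array([ycrcb[labels == i].mean(axis=0) for i in range(k)])
          # Flame-like clusters: bright (high Y) and strongly red (Cr well above Cb).
          flame_clusters = [i for i, (y, cr, cb) in enumerate(centers) if y > 128 and cr > cb + 20]
          mask = np.isin(labels, flame_clusters).reshape(bgr_image.shape[:2])
          return mask.astype(np.uint8) * 255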

  14. Simulation of seagrass bed mapping by satellite images based on the radiative transfer model

    Science.gov (United States)

    Sagawa, Tatsuyuki; Komatsu, Teruhisa

    2015-06-01

    Seagrass and seaweed beds play important roles in coastal marine ecosystems. They are food sources and habitats for many marine organisms, and influence the physical, chemical, and biological environment. They are sensitive to human impacts such as reclamation and pollution. Therefore, their management and preservation are necessary for a healthy coastal environment. Satellite remote sensing is a useful tool for mapping and monitoring seagrass beds. The efficiency of seagrass mapping, seagrass bed classification in particular, has been evaluated by mapping accuracy using an error matrix. However, mapping accuracies are influenced by coastal environments such as seawater transparency, bathymetry, and substrate type. Coastal management requires sufficient accuracy and an understanding of mapping limitations for monitoring coastal habitats including seagrass beds. Previous studies are mainly based on case studies in specific regions and seasons. Extensive data are required to generalise assessments of classification accuracy from case studies, which has proven difficult. This study aims to build a simulator based on a radiative transfer model to produce modelled satellite images and assess the visual detectability of seagrass beds under different transparencies and seagrass coverages, as well as to examine mapping limitations and classification accuracy. Our simulations led to the development of a model of water transparency and the mapping of depth limits and indicated the possibility for seagrass density mapping under certain ideal conditions. The results show that modelling satellite images is useful in evaluating the accuracy of classification and that establishing seagrass bed monitoring by remote sensing is a reliable tool.

  15. Development and tests of a mouse voxel model for MCNPX based on Digimouse images

    Energy Technology Data Exchange (ETDEWEB)

    Melo M, B.; Ferreira F, C. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Pte. Antonio Carlos No. 6627, Belo Horizonte 31270-901, Minas Gerais (Brazil); Garcia de A, I.; Machado T, B.; Passos Ribeiro de C, T., E-mail: bmm@cdtn.br [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Pte. Antonio Carlos 6627, Belo Horizonte 31270-901, Minas Gerais (Brazil)

    2015-10-15

    Mice have been widely used in experimental protocols involving ionizing radiation. Biological effects (Be) induced by radiation can compromise study results. Good estimates of mouse whole-body and organ absorbed doses could provide valuable information to researchers. The aim of this study was to create and test a new voxel phantom for mouse dosimetry from 'Digimouse' project images. Micro-CT images from the Digimouse project were used in this work. Corel PHOTOPAINT software was utilized in the segmentation process. The three-dimensional (3-D) model assembly and its voxel size manipulation were performed with Image J. SISCODES was used to adapt the model to run in the MCNPX Monte Carlo code. The resulting model was called DM_BRA. The volume and mass of the segmented organs were compared with data available in the literature. For the preliminary tests the heart was considered the source organ. Photons of diverse energies were simulated and SAF values obtained through the F6:p and +F6 MCNPX tallies. The results were compared with reference data. 3-D pictures of absorbed dose patterns and relative error distributions were generated by an in-house C++ program and visualized through the Amide software. The organ masses of DM_BRA correlated well with two models that were based on the same set of images. However, some organs, like the eyes, adrenals, skeleton and brain, showed large discrepancies. Segmentation of an identical image set by different persons and/or methods can result in significant organ mass variations. We believe that the main causes of these differences were: i) operator-dependent subjectivity in the definition of organ limits during the segmentation process; and ii) distinct voxel dimensions between the evaluated models. A lack of reference data for mouse model construction and dosimetry was detected. Comparison with other models originating from different mouse strains also demonstrated that the anatomical and size variability can be significant. Use of the +F6 tally for mouse

  16. Development and tests of a mouse voxel model for MCNPX based on Digimouse images

    International Nuclear Information System (INIS)

    Melo M, B.; Ferreira F, C.; Garcia de A, I.; Machado T, B.; Passos Ribeiro de C, T.

    2015-10-01

    Mice have been widely used in experimental protocols involving ionizing radiation. Biological effects (Be) induced by radiation can compromise study results. Good estimates of mouse whole-body and organ absorbed doses could provide valuable information to researchers. The aim of this study was to create and test a new voxel phantom for mouse dosimetry from 'Digimouse' project images. Micro-CT images from the Digimouse project were used in this work. Corel PHOTOPAINT software was utilized in the segmentation process. The three-dimensional (3-D) model assembly and its voxel size manipulation were performed with Image J. SISCODES was used to adapt the model to run in the MCNPX Monte Carlo code. The resulting model was called DM_BRA. The volume and mass of the segmented organs were compared with data available in the literature. For the preliminary tests the heart was considered the source organ. Photons of diverse energies were simulated and SAF values obtained through the F6:p and +F6 MCNPX tallies. The results were compared with reference data. 3-D pictures of absorbed dose patterns and relative error distributions were generated by an in-house C++ program and visualized through the Amide software. The organ masses of DM_BRA correlated well with two models that were based on the same set of images. However, some organs, like the eyes, adrenals, skeleton and brain, showed large discrepancies. Segmentation of an identical image set by different persons and/or methods can result in significant organ mass variations. We believe that the main causes of these differences were: i) operator-dependent subjectivity in the definition of organ limits during the segmentation process; and ii) distinct voxel dimensions between the evaluated models. A lack of reference data for mouse model construction and dosimetry was detected. Comparison with other models originating from different mouse strains also demonstrated that the anatomical and size variability can be significant. Use of the +F6 tally for mouse phantoms

  17. Contrast-based sensorless adaptive optics for retinal imaging.

    Science.gov (United States)

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew

    2015-09-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.

  18. A review of anisotropic conductivity models of brain white matter based on diffusion tensor imaging.

    Science.gov (United States)

    Wu, Zhanxiong; Liu, Yang; Hong, Ming; Yu, Xiaohui

    2018-06-01

    The conductivity of brain tissues is not only essential for electromagnetic source estimation (ESI), but is also a key reflector of brain functional changes. Unlike the other brain tissues, the conductivity of white matter (WM) is highly anisotropic and a tensor is needed to describe it. Traditional electrical property imaging methods, such as electrical impedance tomography (EIT) and magnetic resonance electrical impedance tomography (MREIT), usually fail to image the anisotropic conductivity tensor of WM with high spatial resolution. Diffusion tensor imaging (DTI) is a newly developed technique that can fulfill this purpose. This paper reviews the existing anisotropic conductivity models of WM based on DTI and discusses their advantages and disadvantages, as well as identifying opportunities for future research on this subject. It is crucial to obtain the linear conversion coefficient between the eigenvalues of the anisotropic conductivity tensor and the diffusion tensor, since they share the same eigenvectors. We conclude that the electrochemical model is suitable for ESI analysis, because the conversion coefficient can be directly obtained from the concentration of ions in the extracellular liquid, and that the volume fraction model is appropriate for studying the influence of WM structural changes on electrical conductivity.

  19. The Relevance Voxel Machine (RVoxM): A Self-Tuning Bayesian Model for Informative Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2012-01-01

    This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially...

  20. Quantitative Assessment of Optical Coherence Tomography Imaging Performance with Phantom-Based Test Methods And Computational Modeling

    Science.gov (United States)

    Agrawal, Anant

    Optical coherence tomography (OCT) is a powerful medical imaging modality that uniquely produces high-resolution cross-sectional images of tissue using low energy light. Its clinical applications and technological capabilities have grown substantially since its invention about twenty years ago, but efforts have been limited to develop tools to assess performance of OCT devices with respect to the quality and content of acquired images. Such tools are important to ensure information derived from OCT signals and images is accurate and consistent, in order to support further technology development, promote standardization, and benefit public health. The research in this dissertation investigates new physical and computational models which can provide unique insights into specific performance characteristics of OCT devices. Physical models, known as phantoms, are fabricated and evaluated in the interest of establishing standardized test methods to measure several important quantities relevant to image quality. (1) Spatial resolution is measured with a nanoparticle-embedded phantom and model eye which together yield the point spread function under conditions where OCT is commonly used. (2) A multi-layered phantom is constructed to measure the contrast transfer function along the axis of light propagation, relevant for cross-sectional imaging capabilities. (3) Existing and new methods to determine device sensitivity are examined and compared, to better understand the detection limits of OCT. A novel computational model based on the finite-difference time-domain (FDTD) method, which simulates the physics of light behavior at the sub-microscopic level within complex, heterogeneous media, is developed to probe device and tissue characteristics influencing the information content of an OCT image. This model is first tested in simple geometric configurations to understand its accuracy and limitations, then a highly realistic representation of a biological cell, the retinal

  1. Ground-based infrared surveys: imaging the thermal fields at volcanoes and revealing the controlling parameters.

    Science.gov (United States)

    Pantaleo, Michele; Walter, Thomas

    2013-04-01

    Temperature monitoring is a widespread procedure in the frame of volcano hazard monitoring. Indeed temperature changes are expected to reflect changes in volcanic activity. We propose a new approach, within the thermal monitoring, which is meant to shed light on the parameters controlling the fluid pathways and the fumarole sites by using infrared measurements. Ground-based infrared cameras allow one to remotely image the spatial distribution, geometric pattern and amplitude of fumarole fields on volcanoes at metre to centimetre resolution. Infrared mosaics and time series are generated and interpreted, by integrating geological field observations and modeling, to define the setting of the volcanic degassing system at shallow level. We present results for different volcano morphologies and show that lithology, structures and topography control the appearance of fumarole field by the creation of permeability contrasts. We also show that the relative importance of those parameters is site-dependent. Deciphering the setting of the degassing system is essential for hazard assessment studies because it would improve our understanding on how the system responds to endogenous or exogenous modification.

  2. Hyperspectral Imaging Coupled with Random Frog and Calibration Models for Assessment of Total Soluble Solids in Mulberries

    Directory of Open Access Journals (Sweden)

    Yan-Ru Zhao

    2015-01-01

    Full Text Available Chemometrics methods coupled with hyperspectral imaging technology in the visible and near infrared (Vis/NIR) region (380–1030 nm) were introduced to assess total soluble solids (TSS) in mulberries. Hyperspectral images of 310 mulberries were acquired by a hyperspectral reflectance imaging system (512 bands) and their corresponding TSS contents were measured by a Brix meter. The random frog (RF) method was used to select important wavelengths from the full wavelengths. TSS values in mulberry fruits were predicted by partial least squares regression (PLSR) and least-squares support vector machine (LS-SVM) models based on the full wavelengths and the selected important wavelengths. The optimal PLSR model with 23 important wavelengths was employed to visualise the spatial distribution of TSS in the tested samples, and TSS concentrations in mulberries were revealed through the TSS spatial distribution. The results indicate that hyperspectral imaging is promising for determining the spatial distribution of TSS content in mulberry fruits, which provides a reference for detecting the internal quality of fruits.
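
    The calibration step can be sketched as a partial least squares regression relating mean reflectance spectra of the fruit samples to the measured TSS (°Brix). Placeholder arrays stand in for the real spectra and reference values, and a simple index list stands in for the 23 random-frog-selected wavelengths.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import r2_score

      # spectra: (n_samples, n_bands) mean reflectance per fruit; brix: measured TSS values
      rng = np.random.default_rng(0)
      spectra = rng.random((310, 512))                 # placeholder data
      brix = 8 + 4 * rng.random(310)                   # placeholder reference TSS
      selected = np.arange(0, 512, 22)[:23]            # stand-in for 23 selected wavelengths

      X_tr, X_te, y_tr, y_te = train_test_split(spectra[:, selected], brix, random_state=0)
      pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
      print("R^2 on held-out fruits:", r2_score(y_te, pls.predict(X_te)))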

  3. Time series modeling of live-cell shape dynamics for image-based phenotypic profiling.

    Science.gov (United States)

    Gordonov, Simon; Hwang, Mun Kyung; Wells, Alan; Gertler, Frank B; Lauffenburger, Douglas A; Bathe, Mark

    2016-01-01

    Live-cell imaging can be used to capture spatio-temporal aspects of cellular responses that are not accessible to fixed-cell imaging. As the use of live-cell imaging continues to increase, new computational procedures are needed to characterize and classify the temporal dynamics of individual cells. For this purpose, here we present the general experimental-computational framework SAPHIRE (Stochastic Annotation of Phenotypic Individual-cell Responses) to characterize phenotypic cellular responses from time series imaging datasets. Hidden Markov modeling is used to infer and annotate morphological state and state-switching properties from image-derived cell shape measurements. Time series modeling is performed on each cell individually, making the approach broadly useful for analyzing asynchronous cell populations. Two-color fluorescent cells simultaneously expressing actin and nuclear reporters enabled us to profile temporal changes in cell shape following pharmacological inhibition of cytoskeleton-regulatory signaling pathways. Results are compared with existing approaches conventionally applied to fixed-cell imaging datasets, and indicate that time series modeling captures heterogeneous dynamic cellular responses that can improve drug classification and offer additional important insight into mechanisms of drug action. The software is available at http://saphire-hcs.org.

  4. Discrete imaging models for three-dimensional optoacoustic tomography using radially symmetric expansion functions.

    Science.gov (United States)

    Wang, Kun; Schoonover, Robert W; Su, Richard; Oraevsky, Alexander; Anastasio, Mark A

    2014-05-01

    Optoacoustic tomography (OAT), also known as photoacoustic tomography, is an emerging computed biomedical imaging modality that exploits optical contrast and ultrasonic detection principles. Iterative image reconstruction algorithms that are based on discrete imaging models are actively being developed for OAT due to their ability to improve image quality by incorporating accurate models of the imaging physics, instrument response, and measurement noise. In this work, we investigate the use of discrete imaging models based on Kaiser-Bessel window functions for iterative image reconstruction in OAT. A closed-form expression for the pressure produced by a Kaiser-Bessel function is calculated, which facilitates accurate computation of the system matrix. Computer-simulation and experimental studies are employed to demonstrate the potential advantages of Kaiser-Bessel function-based iterative image reconstruction in OAT.
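
    The radially symmetric expansion functions referred to in the title are commonly taken to be generalized Kaiser-Bessel window functions ("blobs"); a standard form, which is not necessarily the exact parametrization used by the authors, is

      b(r) = \frac{1}{I_m(\alpha)} \left[ \sqrt{1 - (r/a)^2} \right]^{m} I_m\!\left( \alpha \sqrt{1 - (r/a)^2} \right), \qquad 0 \le r \le a,

    with b(r) = 0 for r > a, where I_m is the modified Bessel function of order m, a is the support radius, and \alpha controls the taper. Closed-form expressions of this kind for the resulting pressure are what make accurate computation of the system matrix tractable.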

  5. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot acquire satisfying results. Methods: In this paper, according to the existed and stated experiments and conclusions, the lifting step approach is used for wavelet decomposition. The physical and anatomic structure of human vision is combined and the contrast sensitivity function (CSF) is introduced as the main research issue in human vision system (HVS), and then the main designing points of HVS model are presented. On the basis of multi-resolution analyses of wavelet transform, the paper applies HVS including the CSF characteristics to the inner correlation-removed transform and quantization in image and proposes a new HVS-based medical image compression model. Results: The experiments are done on the medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm. But the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than that of SPIHT in the aspects of compression ratios and coding/decoding time.

  6. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot acquire satisfying results. Methods: In this paper, according to the existed and stated experiments and conclusions, the lifting step approach is used for wavelet decomposition. The physical and anatomic structure of human vision is combined and the contrast sensitivity function (CSF) is introduced as the main research issue in human vision system (HVS), and then the main designing points of HVS model are presented. On the basis of multi-resolution analyses of wavelet transform, the paper applies HVS including the CSF characteristics to the inner correlation-removed transform and quantization in image and proposes a new HVS-based medical image compression model. Results: The experiments are done on the medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm. But the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than that of SPIHT in the aspects of compression ratios and coding/decoding time

  7. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    Full Text Available This article describes a method for constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image, which requires finding corresponding points between the image and the point cloud. Before the search for corresponding points, a quasi image of the point cloud is generated. The SIFT algorithm is then applied to the quasi image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image. Spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
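
    A sketch of the first step described above: SIFT matching between the real photograph and a quasi image rendered from the point cloud, followed by exterior-orientation estimation from the resulting 2D-3D correspondences. The quasi-image rendering and the per-pixel 3D lookup (quasi_xyz) are assumed to exist and are not the authors' implementation.

      import cv2
      import numpy as np

      def exterior_orientation(photo_gray, quasi_gray, quasi_xyz, K):
          """quasi_xyz: (H, W, 3) 3D point for each quasi-image pixel; K: camera matrix."""
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(photo_gray, None)
          kp2, des2 = sift.detectAndCompute(quasi_gray, None)
          matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
          good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe ratio test
          img_pts = np.float32([kp1[m.queryIdx].pt for m in good])
          obj_pts = np.float32([quasi_xyz[int(kp2[m.trainIdx].pt[1]),
                                          int(kp2[m.trainIdx].pt[0])] for m in good])
          ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
          return rvec, tvec  # exterior orientation (rotation, translation) of the photo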

  8. Complex adaptation-based LDR image rendering for 3D image reconstruction

    Science.gov (United States)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  9. Band co-registration modeling of LAPAN-A3/IPB multispectral imager based on satellite attitude

    Science.gov (United States)

    Hakim, P. R.; Syafrudin, A. H.; Utama, S.; Jayani, A. P. S.

    2018-05-01

    One significant geometric distortion in images from the LAPAN-A3/IPB multispectral imager is the co-registration error between the color channel detectors. Band co-registration distortion can usually be corrected using several approaches: a manual method, an image matching algorithm, or a sensor modeling and calibration approach. This paper develops another approach to minimize band co-registration distortion in LAPAN-A3/IPB multispectral images by using supervised modeling of image matching with respect to satellite attitude. The modeling results show that band co-registration error in the across-track axis is strongly influenced by the yaw angle, while error in the along-track axis is fairly influenced by both the pitch and roll angles. The accuracy of the obtained models is fairly good, lying between 1 and 3 pixels of error for each axis of each band co-registration pair. This means that the model can be used to correct the distorted images without the need for a slower image matching algorithm, or the laborious effort needed in the manual approach and sensor calibration. Since the calculation can be executed in the order of seconds, this approach can be used for real-time quick-look image processing in the ground station or even in satellite on-board image processing.

  10. Image-based compound profiling reveals a dual inhibitor of tyrosine kinase and microtubule polymerization

    OpenAIRE

    Tanabe, Kenji

    2016-01-01

    Small-molecule compounds are widely used as biological research tools and therapeutic drugs. Therefore, uncovering novel targets of these compounds should provide insights that are valuable in both basic and clinical studies. I developed a method for image-based compound profiling by quantitating the effects of compounds on signal transduction and vesicle trafficking of epidermal growth factor receptor (EGFR). Using six signal transduction molecules and two markers of vesicle trafficking, 570...

  11. CT/FMT dual-model imaging of breast cancer based on peptide-lipid nanoparticles

    Science.gov (United States)

    Xu, Guoqiang; Lin, Qiaoya; Lian, Lichao; Qian, Yuan; Lu, Lisen; Zhang, Zhihong

    2016-03-01

    Breast cancer is one of the most harmful cancers in humans. Its early diagnosis is expected to improve patients' survival rates. X-ray computed tomography (CT) has been widely used in tumor detection for obtaining three-dimensional information. Fluorescence Molecular Tomography (FMT) imaging combined with near-infrared fluorescent dyes provides a powerful tool for the acquisition of molecular biodistribution information in deep tissues. Thus, the combination of CT and FMT imaging modalities allows us to better differentiate diseased tissues from normal tissues. Here we developed a tumor-targeting nanoparticle for dual-modality imaging based on a biocompatible HDL-mimicking peptide-phospholipid scaffold (HPPS) nanocarrier. By incorporation of CT contrast agents (iodinated oil) and far-infrared fluorescent dyes (DiR-BOA) into the hydrophobic core of HPPS, we obtained the FMT and CT signals simultaneously. Increased accumulation of the nanoparticles in the tumor lesions was achieved through the effect of the tumor-targeting peptide on the nanoparticle surface. This resulted in excellent contrast between lesions and normal tissues. Together, the ability to sensitively separate the lesions from adjacent normal tissues with the aid of a FMT/CT dual-model imaging approach makes the targeting nanoparticles a useful tool for the diagnostics of breast cancer.

  12. A New Approach to Image-Based Estimation of Food Volume

    Directory of Open Access Journals (Sweden)

    Hamid Hassannejad

    2017-06-01

    Full Text Available A balanced diet is the key to a healthy lifestyle and is crucial for preventing or dealing with many chronic diseases such as diabetes and obesity. Therefore, monitoring diet can be an effective way of improving people’s health. However, manual reporting of food intake has been shown to be inaccurate and often impractical. This paper presents a new approach to food intake quantity estimation using image-based modeling. The modeling method consists of three steps: firstly, a short video of the food is taken by the user’s smartphone. From such a video, six frames are selected based on the pictures’ viewpoints as determined by the smartphone’s orientation sensors. Secondly, the user marks one of the frames to seed an interactive segmentation algorithm. Segmentation is based on a Gaussian Mixture Model alongside the graph-cut algorithm. Finally, a customized image-based modeling algorithm generates a point-cloud to model the food. At the same time, a stochastic object-detection method locates a checkerboard used as size/ground reference. The modeling algorithm is optimized such that the use of six input images still results in an acceptable computation cost. In our evaluation procedure, we achieved an average accuracy of 92 % on a test set that includes images of different kinds of pasta and bread, with an average processing time of about 23 s.
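
    The segmentation step combines a Gaussian Mixture Model with graph cuts, which is the same combination implemented by OpenCV's GrabCut. The sketch below illustrates that idea, seeded by a user-marked rectangle on one frame; it is an illustration of the approach, not the authors' exact implementation.

      import cv2
      import numpy as np

      def segment_food(frame_bgr, seed_rect):
          """seed_rect: (x, y, w, h) drawn by the user around the food item."""
          mask = np.zeros(frame_bgr.shape[:2], np.uint8)
          bgd_model = np.zeros((1, 65), np.float64)   # internal GMM state (background)
          fgd_model = np.zeros((1, 65), np.float64)   # internal GMM state (foreground)
          mask, bgd_model, fgd_model = cv2.grabCut(frame_bgr, mask, seed_rect,
                                                   bgd_model, fgd_model, 5,
                                                   cv2.GC_INIT_WITH_RECT)
          # Pixels marked definite or probable foreground form the food mask.
          food = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
          return np.where(food, 255, 0).astype(np.uint8)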

  13. Pixel-based meshfree modelling of skeletal muscles

    OpenAIRE

    Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu

    2015-01-01

    This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for construction of simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transition. A ...

  14. Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging.

    Science.gov (United States)

    Li, Yusheng; Matej, Samuel; Karp, Joel S; Metzler, Scott D

    2017-05-01

    Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each LOR based on the measured, expected and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing the two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within a feasible scan time.
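
    Read per line of response (LOR), the least-squares step above amounts to fitting a single coefficient that scales the expected sinogram onto the scatter-corrected measurement across the rotation views. The NumPy sketch below illustrates that reading with synthetic stand-in sinograms; it is not the authors' code, and the sizes and noise model are assumptions.

      # Per-LOR least-squares normalization: n_i minimizes
      # sum over views of (measured - scattered - n_i * expected)^2
      import numpy as np

      rng = np.random.default_rng(0)
      n_views, n_lors = 8, 10000                               # assumed sizes
      expected = rng.uniform(50, 100, size=(n_views, n_lors))  # model-predicted true sinograms
      scattered = 0.1 * expected                               # single-scatter estimate
      true_norm = rng.uniform(0.8, 1.2, size=n_lors)           # unknown coefficients
      measured = true_norm * expected + scattered + rng.normal(0, 2, size=(n_views, n_lors))

      num = np.sum(expected * (measured - scattered), axis=0)
      den = np.sum(expected ** 2, axis=0)
      norm = num / den                                         # closed-form least-squares solution
      print("max abs error:", np.max(np.abs(norm - true_norm)))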

  15. Dictionary-based image reconstruction for superresolution in integrated circuit imaging.

    Science.gov (United States)

    Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim

    2015-06-01

    Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.

  16. Cloud top structure of Venus revealed by Subaru/COMICS mid-infrared images

    Science.gov (United States)

    Sato, T. M.; Sagawa, H.; Kouyama, T.; Mitsuyama, K.; Satoh, T.; Ohtsuki, S.; Ueno, M.; Kasaba, Y.; Nakamura, M.; Imamura, T.

    2014-11-01

    We have investigated the cloud top structure of Venus by analyzing ground-based images taken at the mid-infrared wavelengths of 8.66 μm and 11.34 μm. Venus at a solar phase angle of ∼90°, with the morning terminator in view, was observed by the Cooled Mid-Infrared Camera and Spectrometer (COMICS), mounted on the 8.2-m Subaru Telescope, during the period October 25-29, 2007. The disk-averaged brightness temperatures for the observation period are ∼230 K and ∼238 K at 8.66 μm and 11.34 μm, respectively. The obtained images with good signal-to-noise ratio and with high spatial resolution (∼200 km at the sub-observer point) provide several important findings. First, we present observational evidence, for the first time, of the possibility that the westward rotation of the polar features (the hot polar spots and the surrounding cold collars) is synchronized between the northern and southern hemispheres. Second, after high-pass filtering, the images reveal that streaks and mottled and patchy patterns are distributed over the entire disk, with typical amplitudes of ∼0.5 K, and vary from day to day. The detected features, some of which are similar to those seen in past UV images, result from inhomogeneities of both the temperature and the cloud top altitude. Third, the equatorial center-to-limb variations of brightness temperatures have a systematic day-night asymmetry, except those on October 25, in that the dayside brightness temperatures are higher than the nightside brightness temperatures by 0-4 K under the same viewing geometry. Such an asymmetry would be caused by the propagation of the migrating semidiurnal tide. Finally, by applying the lapse rates deduced from previous studies, we demonstrate that the equatorial center-to-limb curves in the two spectral channels give access to two parameters: the cloud scale height H and the cloud top altitude zc. The acceptable models for data on October 25 are obtained at H = 2.4-4.3 km and zc = 66-69 km; this supports

  17. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular for searching for similar objects in the user's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images in the database and highlighting the visual information they have in common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
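
    A server-side bag-of-visual-words pipeline of the kind described can be sketched as: extract local descriptors, quantize them against a learned vocabulary, and rank database images by histogram similarity. The sketch below uses ORB descriptors and mini-batch k-means as stand-ins; the feature type, vocabulary size and file paths are assumptions, and the paper's state-of-the-art extensions are omitted.

      # Minimal bag-of-visual-words retrieval sketch (illustrative, not the paper's pipeline)
      import cv2
      import numpy as np
      from sklearn.cluster import MiniBatchKMeans

      orb = cv2.ORB_create(nfeatures=500)

      def orb_descriptors(path):
          img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          _, desc = orb.detectAndCompute(img, None)
          return desc if desc is not None else np.empty((0, 32), np.uint8)

      database = ["db_img_%d.jpg" % i for i in range(100)]     # hypothetical image paths
      all_desc = np.vstack([orb_descriptors(p) for p in database]).astype(np.float32)

      k = 256                                                  # vocabulary size (assumption)
      vocab = MiniBatchKMeans(n_clusters=k, random_state=0).fit(all_desc)

      def bow_histogram(path):
          desc = orb_descriptors(path).astype(np.float32)
          words = vocab.predict(desc)
          hist = np.bincount(words, minlength=k).astype(np.float32)
          return hist / (np.linalg.norm(hist) + 1e-9)

      index = np.array([bow_histogram(p) for p in database])
      query = bow_histogram("query.jpg")                       # hypothetical query image
      ranking = np.argsort(-index @ query)                     # cosine-similarity ranking
      print([database[i] for i in ranking[:5]])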

  18. Virtual Reality Model of the Three-Dimensional Anatomy of the Cavernous Sinus Based on a Cadaveric Image and Dissection.

    Science.gov (United States)

    Qian, Zeng-Hui; Feng, Xu; Li, Yang; Tang, Ke

    2018-01-01

    Studying the three-dimensional (3D) anatomy of the cavernous sinus is essential for treating lesions in this region with skull base surgeries. Cadaver dissection is a conventional method that has insurmountable flaws with regard to understanding spatial anatomy. The authors' research aimed to build an image model of the cavernous sinus region in a virtual reality system to precisely, individually and objectively elucidate the complete and local stereo-anatomy. Computed tomography and magnetic resonance imaging scans were performed on 5 adult cadaver heads. Latex mixed with contrast agent was injected into the arterial system and then into the venous system. Computed tomography scans were performed again following the 2 injections. Magnetic resonance imaging scans were performed again after the cranial nerves were exposed. Image data were input into a virtual reality system to establish a model of the cavernous sinus. Observation results of the image models were compared with those of the cadaver heads. Visualization of the cavernous sinus region models built using the virtual reality system was good for all the cadavers. High resolutions were achieved for the images of different tissues. The observed results were consistent with those of the cadaver head. The spatial architecture and modality of the cavernous sinus were clearly displayed in the 3D model by rotating the model and conveniently changing its transparency. A 3D virtual reality model of the cavernous sinus region is helpful for globally and objectively understanding anatomy. The observation procedure was accurate, convenient, noninvasive, and time and specimen saving.

  19. Canny edge-based deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Huang, Yihui; Mao, Weihua; Yuan, Baohong; Tang, Liping

    2017-02-07

    This work focuses on developing a 2D Canny edge-based deformable image registration (Canny DIR) algorithm to register in vivo white light images taken at various time points. This method uses a sparse interpolation deformation algorithm to sparsely register regions of the image with strong edge information. A stability criterion is enforced which removes regions of edges that do not deform in a smooth uniform manner. Using a synthetic mouse surface ground truth model, the accuracy of the Canny DIR algorithm was evaluated under axial rotation in the presence of deformation. The accuracy was also tested using fluorescent dye injections, which were then used for gamma analysis to establish a second ground truth. The results indicate that the Canny DIR algorithm performs better than rigid registration, intensity-corrected Demons, and distinctive features for all evaluation metrics and ground truth scenarios. In conclusion, Canny DIR performs well in the presence of the unique lighting and shading variations associated with white-light-based image registration.
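
    The core idea (sparse registration at strong-edge regions followed by interpolation to a dense deformation) can be approximated as below. This is a simplified sketch, not the published algorithm: the stability criterion is omitted, displacements are estimated by plain template matching, and the file names, window sizes and subsampling are assumptions.

      # Simplified Canny edge-based sparse-to-dense registration sketch
      import cv2
      import numpy as np
      from scipy.interpolate import griddata

      fixed = cv2.imread("day0.png", cv2.IMREAD_GRAYSCALE)    # hypothetical inputs
      moving = cv2.imread("day7.png", cv2.IMREAD_GRAYSCALE)

      edges = cv2.Canny(fixed, 50, 150)
      ys, xs = np.nonzero(edges)
      pts = list(zip(ys[::50], xs[::50]))                      # subsample edge points

      half, search = 10, 20
      src, disp = [], []
      for y, x in pts:
          if y - half < 0 or x - half < 0 or y + half >= fixed.shape[0] or x + half >= fixed.shape[1]:
              continue
          patch = fixed[y - half:y + half + 1, x - half:x + half + 1]
          y0, y1 = max(0, y - search), min(moving.shape[0], y + search + 1)
          x0, x1 = max(0, x - search), min(moving.shape[1], x + search + 1)
          res = cv2.matchTemplate(moving[y0:y1, x0:x1], patch, cv2.TM_CCOEFF_NORMED)
          _, _, _, loc = cv2.minMaxLoc(res)
          src.append((y, x))
          disp.append(((y0 + loc[1] + half) - y, (x0 + loc[0] + half) - x))

      # Dense deformation field via interpolation of the sparse edge displacements.
      src, disp = np.array(src), np.array(disp, dtype=float)
      gy, gx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]]
      dense_dy = griddata(src, disp[:, 0], (gy, gx), method="linear", fill_value=0.0)
      dense_dx = griddata(src, disp[:, 1], (gy, gx), method="linear", fill_value=0.0)
      print("deformation field shapes:", dense_dy.shape, dense_dx.shape)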

  20. Superpixel-based segmentation of glottal area from videolaryngoscopy images

    Science.gov (United States)

    Turkmen, H. Irem; Albayrak, Abdulkadir; Karsligil, M. Elif; Kocak, Ismail

    2017-11-01

    Segmentation of the glottal area with high accuracy is one of the major challenges for the development of systems for computer-aided diagnosis of vocal-fold disorders. We propose a hybrid model combining conventional methods with a superpixel-based segmentation approach. We first employed a superpixel algorithm to reveal the glottal area by eliminating the local variances of pixels caused by bleeding, blood vessels, and light reflections from the mucosa. Then, the glottal area was detected by exploiting a seeded region-growing algorithm in a fully automatic manner. The experiments were conducted on videolaryngoscopy images obtained from both patients with pathologic vocal folds and healthy subjects. Finally, the proposed hybrid approach was compared with conventional region-growing and active-contour model-based glottal area segmentation algorithms. The performance of the proposed method was evaluated in terms of segmentation accuracy and elapsed time. The F-measure, true negative rate, and Dice coefficients of the hybrid method were calculated as 82%, 93%, and 82%, respectively, which are superior to those of state-of-the-art glottal-area segmentation methods. The proposed hybrid model achieved high success rates and robustness, making it suitable for developing a computer-aided diagnosis system that can be used in clinical routines.
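
    A rough Python sketch of the two stages, superpixel decomposition followed by seeded region growing over superpixels, is given below using scikit-image SLIC and a region adjacency graph. The frame path, seed location, superpixel count and merging threshold are assumptions, not values from the paper.

      # Superpixel-based seeded region growing (illustrative parameters)
      import numpy as np
      from skimage import io, segmentation, graph   # in older scikit-image: skimage.future.graph

      image = io.imread("videolaryngoscopy_frame.png")          # hypothetical frame
      labels = segmentation.slic(image, n_segments=400, compactness=10, start_label=0)
      rag = graph.rag_mean_color(image, labels)

      seed = labels[image.shape[0] // 2, image.shape[1] // 2]    # assume glottis near center
      region = {int(seed)}
      threshold = 30.0                                           # mean-color distance (assumption)

      grew = True
      while grew:
          grew = False
          for node in list(region):
              for nbr in rag.neighbors(node):
                  if nbr in region:
                      continue
                  d = np.linalg.norm(np.asarray(rag.nodes[node]["mean color"]) -
                                     np.asarray(rag.nodes[nbr]["mean color"]))
                  if d < threshold:
                      region.add(nbr)
                      grew = True

      glottal_mask = np.isin(labels, list(region))               # candidate glottal area
      print("glottal pixel fraction:", glottal_mask.mean())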

  1. Deep learning based classification for head and neck cancer detection with hyperspectral imaging in an animal model

    Science.gov (United States)

    Ma, Ling; Lu, Guolan; Wang, Dongsheng; Wang, Xu; Chen, Zhuo Georgia; Muller, Susan; Chen, Amy; Fei, Baowei

    2017-03-01

    Hyperspectral imaging (HSI) is an emerging imaging modality that can provide a noninvasive tool for cancer detection and image-guided surgery. HSI acquires high-resolution images at hundreds of spectral bands, providing rich data for differentiating different types of tissue. We propose a deep learning-based method for the detection of head and neck cancer in hyperspectral images. Since a deep learning algorithm learns features hierarchically, the learned features are more discriminative and concise than handcrafted features. In this study, we adopt convolutional neural networks (CNN) to learn deep features of pixels for classifying each pixel as tumor or normal tissue. We evaluated the proposed classification method on a dataset containing hyperspectral images from 12 tumor-bearing mice. Experimental results show that our method achieved an average accuracy of 91.36%. This preliminary study demonstrates that our deep learning method can be applied to hyperspectral images for detecting head and neck tumors in animal models.
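
    As a concrete (but hypothetical) illustration of per-pixel spectral classification, the PyTorch sketch below trains a small 1D CNN to label individual pixel spectra as tumor or normal; the band count, architecture and toy data are assumptions and do not reproduce the network used in the study.

      # Toy 1D CNN classifying pixel spectra into tumor vs. normal tissue
      import torch
      import torch.nn as nn

      n_bands = 91                          # number of spectral bands (assumption)

      model = nn.Sequential(
          nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
          nn.MaxPool1d(2),
          nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
          nn.AdaptiveAvgPool1d(1),
          nn.Flatten(),
          nn.Linear(32, 2),                 # two classes: tumor / normal
      )

      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()

      # Toy training loop on random stand-in data (real data: pixel spectra + labels).
      spectra = torch.randn(64, 1, n_bands)
      labels = torch.randint(0, 2, (64,))
      for _ in range(10):
          optimizer.zero_grad()
          loss = loss_fn(model(spectra), labels)
          loss.backward()
          optimizer.step()
      print("final toy loss:", float(loss))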

  2. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    Science.gov (United States)

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a target height for the layers. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Unlike previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tools for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.

  3. A novel airport extraction model based on saliency region detection for high spatial resolution remote sensing images

    Science.gov (United States)

    Lv, Wen; Zhang, Libao; Zhu, Yongchun

    2017-06-01

    The airport is one of the most crucial traffic facilities in the military and civil fields. Automatic airport extraction from high spatial resolution remote sensing images has many applications, such as regional planning and military reconnaissance. Traditional airport extraction strategies are usually based on prior knowledge and locate the airport target by template matching and classification, which causes high computational complexity and large computing-resource costs for high spatial resolution remote sensing images. In this paper, we propose a novel automatic airport extraction model based on saliency region detection, airport runway extraction and adaptive threshold segmentation. In saliency region detection, we choose the frequency-tuned (FT) model for computing airport saliency using low-level color and luminance features; it is easy and fast to implement and provides full-resolution saliency maps. In airport runway extraction, the Hough transform is adopted to count the number of parallel line segments. In adaptive threshold segmentation, the Otsu threshold segmentation algorithm is applied to obtain more accurate airport regions. The experimental results demonstrate that the proposed model outperforms existing saliency analysis models and shows good performance in airport extraction.
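
    The frequency-tuned saliency and Otsu steps are simple enough to sketch directly: FT saliency is the per-pixel distance between a Gaussian-blurred Lab image and the global mean Lab color, and Otsu thresholding then isolates salient regions. The sketch below omits the Hough-based runway check, and the input path and blur width are placeholders.

      # Frequency-tuned (FT) saliency followed by Otsu segmentation (illustrative)
      import numpy as np
      from skimage import io, color, filters

      rgb = io.imread("remote_sensing_scene.png")[:, :, :3] / 255.0   # placeholder path
      lab = color.rgb2lab(rgb)

      blurred = np.stack([filters.gaussian(lab[:, :, c], sigma=3) for c in range(3)], axis=-1)
      mean_lab = lab.reshape(-1, 3).mean(axis=0)

      # FT saliency: distance of the blurred image from the global mean Lab color.
      saliency = np.linalg.norm(blurred - mean_lab, axis=-1)

      # Otsu threshold segments the salient (candidate airport) regions.
      mask = saliency > filters.threshold_otsu(saliency)
      print("salient fraction:", mask.mean())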

  4. Compressive sensing based ptychography image encryption

    Science.gov (United States)

    Rawat, Nitin

    2015-09-01

    A compressive sensing (CS) based ptychography combined with optical image encryption is proposed. The diffraction pattern is recorded using the ptychography technique and further compressed by non-uniform sampling within the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern, together with the reduced set of measurements of the encrypted samples, serves as a secret key, which makes intruder attacks more difficult. Furthermore, CS shows that the few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of the proposed technique compared with existing techniques. The retrieved images do not reveal any information about the original image. In addition, the proposed system remains robust even with partial encryption and under brute-force attacks.

  5. A new level set model for cell image segmentation

    International Nuclear Information System (INIS)

    Ma Jing-Feng; Chen Chun; Hou Kai; Bao Shang-Lian

    2011-01-01

    In this paper we first identify three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. In addition, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and produces satisfactory results in cell image processing. (cross-disciplinary physics and related areas of science and technology)
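
    Only the Otsu-based initialization lends itself to a short sketch; the three-phase variational evolution itself is not reproduced here. Under the common convention of initializing the level set as a signed distance function of the Otsu foreground, a minimal version might look as follows (the file path is a placeholder).

      # Otsu-based initialization of a level set function (illustrative only)
      import numpy as np
      from scipy import ndimage
      from skimage import io, filters

      cell = io.imread("cell_image.png", as_gray=True)       # placeholder path
      init_mask = cell > filters.threshold_otsu(cell)        # coarse foreground from Otsu

      # Signed distance function: positive inside the Otsu foreground, negative outside.
      phi0 = (ndimage.distance_transform_edt(init_mask)
              - ndimage.distance_transform_edt(~init_mask))
      print("initial level set range:", phi0.min(), phi0.max())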

  6. Integral equation models for image restoration: high accuracy methods and fast algorithms

    International Nuclear Information System (INIS)

    Lu, Yao; Shen, Lixin; Xu, Yuesheng

    2010-01-01

    Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence, inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms having high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images

  7. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    A method that combines an edge detection technique, Markov Random Fields (MRF), watershed segmentation and merging techniques is presented for performing image segmentation and edge detection tasks. It first applies an edge detection technique to obtain a Difference In Strength (DIS) map. An initial segmentation is obtained based on K-means clustering and the minimum-distance rule, and the region process is then modeled by an MRF to obtain an image that contains regions of different intensity. Gradient values are calculated and the watershed technique is then applied. The DIS is computed for each pixel to identify all edges (weak or strong) in the image, yielding the DIS map, which serves as prior knowledge about likely region boundaries for the subsequent MRF step; this produces an image that contains both edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of neighboring pixels. The segmentation results are improved by using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated. The edge map is obtained using a merging process based on averaged region intensity means. Common edge detectors applied to the MRF-segmented image are also used and the results are compared. The segmentation and edge detection result is one closed boundary per actual region in the image.

  8. Fast dictionary-based reconstruction for diffusion spectrum imaging.

    Science.gov (United States)

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar

    2013-11-01

    Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm.
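
    The second reconstruction route described above (a Tikhonov-regularized pseudoinverse with respect to a dictionary) reduces to precomputing one matrix and applying it per voxel. The NumPy sketch below illustrates the linear algebra with random stand-ins for the dictionary, q-space encoding and undersampling pattern; all sizes and the regularization weight are assumptions.

      # Tikhonov-regularized pseudoinverse reconstruction against a dictionary (illustrative)
      import numpy as np

      rng = np.random.default_rng(0)
      n_q, n_pdf, n_atoms = 515, 512, 300           # illustrative sizes only

      D = rng.standard_normal((n_pdf, n_atoms))     # dictionary columns (e.g., training pdfs)
      F = rng.standard_normal((n_q, n_pdf))         # q-space encoding operator (stand-in)
      keep = rng.random(n_q) < 0.3                  # ~3x undersampling mask in q-space
      E = F[keep] @ D                               # undersampled encoding of dictionary atoms

      lam = 0.1                                     # Tikhonov regularization weight (assumption)
      # Precompute the regularized pseudoinverse once; apply it per voxel afterwards.
      pinv = np.linalg.solve(E.T @ E + lam * np.eye(n_atoms), E.T)

      pdf_true = D @ rng.standard_normal(n_atoms)   # synthetic ground-truth pdf
      y = F[keep] @ pdf_true                        # undersampled q-space measurements
      alpha = pinv @ y                              # dictionary coefficients
      pdf_rec = D @ alpha
      print("relative error:", np.linalg.norm(pdf_rec - pdf_true) / np.linalg.norm(pdf_true))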

  9. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task.

    Science.gov (United States)

    Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary

    2013-01-16

    Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time.

  10. Molecular–Genetic Imaging: A Nuclear Medicine–Based Perspective

    Directory of Open Access Journals (Sweden)

    Ronald G. Blasberg

    2002-07-01

    Full Text Available Molecular imaging is a relatively new discipline, which developed over the past decade, initially driven by in situ reporter imaging technology. Noninvasive in vivo molecular–genetic imaging developed more recently and is based on nuclear (positron emission tomography [PET], gamma camera, autoradiography) imaging as well as magnetic resonance (MR) and in vivo optical imaging. Molecular–genetic imaging has its roots in both molecular biology and cell biology, as well as in new imaging technologies. The focus of this presentation will be nuclear-based molecular–genetic imaging, but it will comment on the value and utility of combining different imaging modalities. Nuclear-based molecular imaging can be viewed in terms of three different imaging strategies: (1) "indirect" reporter gene imaging; (2) "direct" imaging of endogenous molecules; or (3) "surrogate" or "bio-marker" imaging. Examples of each imaging strategy will be presented and discussed. The rapid growth of in vivo molecular imaging is due to the established base of in vivo imaging technologies, the established programs in molecular and cell biology, and the convergence of these disciplines. The development of versatile and sensitive assays that do not require tissue samples will be of considerable value for monitoring molecular–genetic and cellular processes in animal models of human disease, as well as for studies in human subjects in the future. Noninvasive imaging of molecular–genetic and cellular processes will complement established ex vivo molecular–biological assays that require tissue sampling, and will provide a spatial as well as a temporal dimension to our understanding of various diseases and disease processes.

  11. Wavelength-Adaptive Dehazing Using Histogram Merging-Based Classification for UAV Images

    Directory of Open Access Journals (Sweden)

    Inhye Yoon

    2015-03-01

    Full Text Available Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results.

  12. Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.

    Science.gov (United States)

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-03-19

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results.
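
    The paper's wavelength-adaptive degradation model is not reproduced here, but the general transmission-map style of dehazing it builds on can be sketched with a dark-channel-prior estimate of the transmission t and inversion of I = J*t + A*(1 - t). The file path, patch size and haze weight below are assumptions.

      # Generic transmission-map dehazing sketch (dark-channel-prior style)
      import numpy as np
      from scipy.ndimage import minimum_filter
      from skimage import io

      hazy = io.imread("uav_frame.png")[:, :, :3].astype(np.float64) / 255.0   # placeholder path
      patch = 15

      dark = minimum_filter(hazy.min(axis=2), size=patch)            # dark channel
      # Airlight estimate: brightest values among the haziest (top dark-channel) pixels.
      A = hazy.reshape(-1, 3)[np.argsort(dark.ravel())[-100:]].max(axis=0)

      t = 1.0 - 0.95 * minimum_filter((hazy / A).min(axis=2), size=patch)      # transmission map
      t = np.clip(t, 0.1, 1.0)

      J = (hazy - A) / t[:, :, None] + A                             # recovered scene radiance
      J = np.clip(J, 0.0, 1.0)
      io.imsave("uav_frame_dehazed.png", (J * 255).astype(np.uint8))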

  13. Automated quantification and sizing of unbranched filamentous cyanobacteria by model based object oriented image analysis

    OpenAIRE

    Zeder, M; Van den Wyngaert, S; Köster, O; Felder, K M; Pernthaler, J

    2010-01-01

    Quantification and sizing of filamentous cyanobacteria in environmental samples or cultures are time-consuming and are often performed by using manual or semiautomated microscopic analysis. Automation of conventional image analysis is difficult because filaments may exhibit great variations in length and patchy autofluorescence. Moreover, individual filaments frequently cross each other in microscopic preparations, as deduced by modeling. This paper describes a novel approach based on object-...

  14. TH-C-18A-06: Combined CT Image Quality and Radiation Dose Monitoring Program Based On Patient Data to Assess Consistency of Clinical Imaging Across Scanner Models

    International Nuclear Information System (INIS)

    Christianson, O; Winslow, J; Samei, E

    2014-01-01

    Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA-compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate SSDE. A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9–33% and 15–35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27–45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicated substantial possibility for dose reduction while achieving more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image

  15. TH-C-18A-06: Combined CT Image Quality and Radiation Dose Monitoring Program Based On Patient Data to Assess Consistency of Clinical Imaging Across Scanner Models

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, O; Winslow, J; Samei, E [Duke University Medical Center, Durham, NC (United States)

    2014-06-15

    Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA-compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate SSDE. A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9–33% and 15–35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27–45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicated substantial possibility for dose reduction while achieving more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image

  16. Fingerprint Image Enhancement Based on Second Directional Derivative of the Digital Image

    Directory of Open Access Journals (Sweden)

    Onnia Vesa

    2002-01-01

    Full Text Available This paper presents a novel approach to fingerprint image enhancement that relies on detecting the fingerprint ridges as image regions where the second directional derivative of the digital image is positive. A facet model is used in order to approximate the derivatives at each image pixel based on the intensity values of pixels located in a certain neighborhood. We note that the size of this neighborhood has a critical role in achieving accurate enhancement results. Using neighborhoods of various sizes, the proposed algorithm determines several candidate binary representations of the input fingerprint pattern. Subsequently, an output binary ridge-map image is created by selecting image zones, from the available binary image candidates, according to a MAP selection rule. Two public-domain collections of fingerprint images are used in order to objectively assess the performance of the proposed fingerprint image enhancement approach.

  17. Construction of 3D MR image-based computer models of pathologic hearts, augmented with histology and optical fluorescence imaging to characterize action potential propagation.

    Science.gov (United States)

    Pop, Mihaela; Sermesant, Maxime; Liu, Garry; Relan, Jatin; Mansi, Tommaso; Soong, Alan; Peyrat, Jean-Marc; Truong, Michael V; Fefer, Paul; McVeigh, Elliot R; Delingette, Herve; Dick, Alexander J; Ayache, Nicholas; Wright, Graham A

    2012-02-01

    Cardiac computer models can help us understand and predict the propagation of excitation waves (i.e., action potential, AP) in healthy and pathologic hearts. Our broad aim is to develop accurate 3D MR image-based computer models of electrophysiology in large hearts (translatable to clinical applications) and to validate them experimentally. The specific goals of this paper were to match models with maps of the propagation of optical AP on the epicardial surface using large porcine hearts with scars, estimating several parameters relevant to macroscopic reaction-diffusion electrophysiological models. We used voltage-sensitive dyes to image AP in large porcine hearts with scars (three specimens had chronic myocardial infarct, and three had radiofrequency (RF) acute scars). We first analyzed the main AP waves' characteristics: duration (APD) and propagation under controlled pacing locations and frequencies as recorded from 2D optical images. We further built 3D MR image-based computer models that have information derived from the optical measures, as well as morphologic MRI data (i.e., myocardial anatomy, fiber directions and scar definition). The scar morphology from MR images was validated against corresponding whole-mount histology. We also compared the measured 3D isochronal maps of depolarization to simulated isochrones (the latter replicating precisely the experimental conditions), performing model customization and 3D volumetric adjustments of the local conductivity. Our results demonstrated that mean APD in the border zone (BZ) of the infarct scars was reduced by ~13% (compared to ~318 ms measured in the normal zone, NZ), but APD did not change significantly in the thin BZ of the ablation scars. A generic value for velocity ratio (1:2.7) in healthy myocardial tissue was derived from measured values of transverse and longitudinal conduction velocities relative to the fiber direction (22 cm/s and 60 cm/s, respectively). The model customization and 3D volumetric

  18. A medical imaging analysis system for trigger finger using an adaptive texture-based active shape model (ATASM) in ultrasound images.

    Directory of Open Access Journals (Sweden)

    Bo-I Chuang

    Full Text Available Trigger finger has become a prevalent disease that greatly affects occupational activity and daily life. Ultrasound imaging is commonly used for the clinical diagnosis of trigger finger severity. Due to image property variations, traditional methods cannot effectively segment the finger joint's tendon structure. In this study, an adaptive texture-based active shape model method is used for segmenting the tendon and synovial sheath. Adapted weights are applied in the segmentation process to adjust the contribution of energy terms depending on image characteristics at different positions. The pathology is then determined according to the wavelet and co-occurrence texture features of the segmented tendon area. In the experiments, the segmentation results have fewer errors, with respect to the ground truth, than contours drawn by regular users. The mean values of the absolute segmentation difference of the tendon and synovial sheath are 3.14 and 4.54 pixels, respectively. The average accuracy of pathological determination is 87.14%. The segmentation results are acceptable for all 74 images, covering both clear- and fuzzy-boundary cases, and the symptom classifications of 42 cases provide a good reference for diagnosis according to the expert clinicians' opinions.

  19. Horror Image Recognition Based on Context-Aware Multi-Instance Learning.

    Science.gov (United States)

    Li, Bing; Xiong, Weihua; Wu, Ou; Hu, Weiming; Maybank, Stephen; Yan, Shuicheng

    2015-12-01

    Horror content sharing on the Web is a growing phenomenon that can interfere with our daily life and affect the mental health of those involved. As an important form of expression, horror images have their own characteristics that can evoke extreme emotions. In this paper, we present a novel context-aware multi-instance learning (CMIL) algorithm for horror image recognition. The CMIL algorithm identifies horror images and picks out the regions that cause the sensation of horror in these horror images. It obtains contextual cues among adjacent regions in an image using a random walk on a contextual graph. Borrowing the strength of the fuzzy support vector machine (FSVM), we define a heuristic optimization procedure based on the FSVM to search for the optimal classifier for the CMIL. To improve the initialization of the CMIL, we propose a novel visual saliency model based on the tensor analysis. The average saliency value of each segmented region is set as its initial fuzzy membership in the CMIL. The advantage of the tensor-based visual saliency model is that it not only adaptively selects features, but also dynamically determines fusion weights for saliency value combination from different feature subspaces. The effectiveness of the proposed CMIL model is demonstrated by its use in horror image recognition on two large-scale image sets collected from the Internet.

  20. Segmentation of laser range radar images using hidden Markov field models

    International Nuclear Information System (INIS)

    Pucar, P.

    1993-01-01

    Segmentation of images using model-based stochastic techniques is associated with high, often impractical, computational complexity. The objective of this thesis is to take the models used in model-based image processing, simplify them, and use them in suboptimal but computationally undemanding algorithms. Algorithms that are essentially one-dimensional, and their extensions to two dimensions, are given. The model used in this thesis is the well-known hidden Markov model. Estimation of the number of hidden states from observed data is a problem that is addressed. The state order estimation problem is of general interest and is not specifically connected to image processing. An investigation of three state order estimation techniques for hidden Markov models is given. 76 refs

  1. Comparison of Subset-Based Local and Finite Element-Based Global Digital Image Correlation

    KAUST Repository

    Pan, Bing; Wang, B.; Lubineau, Gilles; Moussawi, Ali

    2015-01-01

    Digital image correlation (DIC) techniques require an image matching algorithm to register the same physical points represented in different images. Subset-based local DIC and finite element-based (FE-based) global DIC are the two primary image matching methods that have been extensively investigated and regularly used in the field of experimental mechanics. Due to its straightforward implementation and high efficiency, subset-based local DIC has been used in almost all commercial DIC packages. However, it is argued by some researchers that FE-based global DIC offers better accuracy because of the enforced continuity between element nodes. We propose a detailed performance comparison between these different DIC algorithms both in terms of measurement accuracy and computational efficiency. Then, by measuring displacements of the same calculation points using the same calculation algorithms (e.g., correlation criterion, initial guess estimation, subpixel interpolation, optimization algorithm and convergence conditions) and identical calculation parameters (e.g., subset or element size), the performances of subset-based local DIC and two FE-based global DIC approaches are carefully compared in terms of measurement error and computational efficiency using both numerical tests and real experiments. A detailed examination of the experimental results reveals that, when subset (element) size is not very small and the local deformation within a subset (element) can be well approximated by the shape function used, standard subset-based local DIC approach not only provides better results in measured displacements, but also demonstrates much higher computation efficiency. However, several special merits of FE-based global DIC approaches are indicated.

  2. Comparison of Subset-Based Local and Finite Element-Based Global Digital Image Correlation

    KAUST Repository

    Pan, Bing

    2015-02-12

    Digital image correlation (DIC) techniques require an image matching algorithm to register the same physical points represented in different images. Subset-based local DIC and finite element-based (FE-based) global DIC are the two primary image matching methods that have been extensively investigated and regularly used in the field of experimental mechanics. Due to its straightforward implementation and high efficiency, subset-based local DIC has been used in almost all commercial DIC packages. However, it is argued by some researchers that FE-based global DIC offers better accuracy because of the enforced continuity between element nodes. We propose a detailed performance comparison between these different DIC algorithms both in terms of measurement accuracy and computational efficiency. Then, by measuring displacements of the same calculation points using the same calculation algorithms (e.g., correlation criterion, initial guess estimation, subpixel interpolation, optimization algorithm and convergence conditions) and identical calculation parameters (e.g., subset or element size), the performances of subset-based local DIC and two FE-based global DIC approaches are carefully compared in terms of measurement error and computational efficiency using both numerical tests and real experiments. A detailed examination of the experimental results reveals that, when subset (element) size is not very small and the local deformation within a subset (element) can be well approximated by the shape function used, standard subset-based local DIC approach not only provides better results in measured displacements, but also demonstrates much higher computation efficiency. However, several special merits of FE-based global DIC approaches are indicated.

  3. Content-Based Image Retrieval Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

    Full Text Available Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval method based on the Hadoop distributed framework is proposed. First, a database of image features is built using the Speeded-Up Robust Features (SURF) algorithm and Locality-Sensitive Hashing; the search is then performed on the Hadoop platform in a specially designed parallel way. Experimental results show that the system can effectively retrieve images by content on large-scale clusters and image sets.

  4. Multi-band Image Registration Method Based on Fourier Transform

    Institute of Scientific and Technical Information of China (English)

    庹红娅; 刘允才

    2004-01-01

    This paper presents a registration method based on the Fourier transform for multi-band images involving translation and small rotation. Although images from different bands differ considerably in intensity and features, they contain common information that can be exploited. A model is given in which the multi-band images are linearly correlated in the least-squares sense, and it is proved that the correlation coefficients have no effect on the registration process if the two images are linearly correlated. Finally, the steps of the registration method are presented. Experiments show that the model is reasonable and the results are satisfactory.
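
    For the translation component, a standard Fourier-domain tool consistent with the linear-correlation argument is phase correlation: a linear intensity scaling between bands changes the magnitude but not the location of the correlation peak. The sketch below recovers a known synthetic shift; handling of small rotations, as in the paper, is omitted.

      # Phase correlation: translation between two bands from the cross-power spectrum
      import numpy as np

      def phase_correlation(a, b):
          Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
          cross = Fa * np.conj(Fb)
          cross /= np.abs(cross) + 1e-12                    # keep phase only
          corr = np.real(np.fft.ifft2(cross))
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # Map peak location to signed shifts.
          if dy > a.shape[0] // 2: dy -= a.shape[0]
          if dx > a.shape[1] // 2: dx -= a.shape[1]
          return dy, dx

      rng = np.random.default_rng(0)
      band1 = rng.random((256, 256))
      # Second band: shifted and linearly rescaled in intensity.
      band2 = 0.5 * np.roll(np.roll(band1, 7, axis=0), -12, axis=1) + 0.1
      print(phase_correlation(band2, band1))                # expect approximately (7, -12)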

  5. A Decision Mixture Model-Based Method for Inshore Ship Detection Using High-Resolution Remote Sensing Images.

    Science.gov (United States)

    Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun

    2017-06-22

    With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex and the gray-level and texture features of docked ships and their connected dock regions are indistinguishable, most popular detection methods are limited in their computational efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimensional scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve the robustness regarding the diversity of ships, a deformable part model (DPM) is employed to train a key part sub-model and a whole ship sub-model. Furthermore, to improve the identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency.

  6. Least-squares model-based halftoning

    Science.gov (United States)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction, by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be solved with the Viterbi algorithm. Unfortunately, no closed-form solution can be found in two dimensions. The two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high-quality documents using high-fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach

  7. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.

  8. Modelling of chromatic contrast for retrieval of wallpaper images

    OpenAIRE

    Gao, Xiaohong W.; Wang, Yuanlei; Qian, Yu; Gao, Alice

    2015-01-01

    Colour remains one of the key factors in presenting an object and consequently has been widely applied in the retrieval of images based on their visual contents. However, colour appearance changes with the viewing surroundings, a phenomenon that has not yet received attention in colour-based image retrieval. To account for this effect, in this paper, a chromatic contrast model, CAMcc, is developed for the retrieval of colour-intensive images, cementing t...

  9. Liver Segmentation Based on Snakes Model and Improved GrowCut Algorithm in Abdominal CT Image

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2013-01-01

    Full Text Available A novel method based on the Snakes model and the GrowCut algorithm is proposed to segment the liver region in abdominal CT images. First, following the traditional GrowCut method, a pretreatment step using the K-means algorithm is applied to reduce the running time. Then, the segmentation result of our improved GrowCut approach is used as an initial contour for the subsequent precise segmentation based on the Snakes model. Finally, several experiments are carried out to demonstrate the performance of the proposed approach, and comparisons are made with the traditional GrowCut algorithm. Experimental results show that the improved approach not only has better robustness and precision but is also more efficient than the traditional GrowCut method.

  10. Detecting ship targets in spaceborne infrared image based on modeling radiation anomalies

    Science.gov (United States)

    Wang, Haibo; Zou, Zhengxia; Shi, Zhenwei; Li, Bo

    2017-09-01

    Using infrared imaging sensors to detect ship targets in the ocean environment has many advantages compared to other sensor modalities, such as better thermal sensitivity and all-weather detection capability. We propose a new ship detection method that models radiation anomalies in spaceborne infrared images. The proposed method can be decomposed into two stages: in the first stage, a test infrared image is densely divided into a set of image patches and the radiation anomaly of each patch is estimated by a Gaussian Mixture Model (GMM), so that target candidates are obtained from anomalous image patches. In the second stage, target candidates are further checked by a more discriminative criterion to obtain the final detection result. The main innovation of the proposed method is inspired by the biological mechanism that human eyes are sensitive to unusual and anomalous patches against a complex background. Experimental results on the short-wavelength infrared band (1.560-2.300 μm) and long-wavelength infrared band (10.30-12.50 μm) of the Landsat-8 satellite show that the proposed method achieves the desired ship detection accuracy with higher recall than other classical ship detection methods.
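
    The first stage can be pictured as fitting a Gaussian Mixture Model to simple per-patch statistics and flagging low-likelihood patches as candidates. The scikit-learn sketch below uses synthetic data and arbitrary feature choices (patch mean and standard deviation); it illustrates the idea only and is not the paper's model.

      # GMM-based radiation anomaly scoring of image patches (illustrative)
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      image = rng.normal(300.0, 2.0, size=(512, 512))        # synthetic sea background radiance
      image[200:208, 300:316] += 15.0                        # synthetic warm ship target

      patch = 8
      h, w = image.shape[0] // patch, image.shape[1] // patch
      blocks = image[:h * patch, :w * patch].reshape(h, patch, w, patch).swapaxes(1, 2)
      feats = np.stack([blocks.mean(axis=(2, 3)).ravel(),
                        blocks.std(axis=(2, 3)).ravel()], axis=1)   # per-patch mean and std

      gmm = GaussianMixture(n_components=3, random_state=0).fit(feats)
      scores = gmm.score_samples(feats)                      # log-likelihood per patch

      candidates = np.where(scores < np.percentile(scores, 1))[0]   # lowest 1% = anomalies
      print("candidate patch indices:", candidates[:10])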

  11. Agent-based modeling of autophagy reveals emergent regulatory behavior of spatio-temporal autophagy dynamics.

    Science.gov (United States)

    Börlin, Christoph S; Lang, Verena; Hamacher-Brady, Anne; Brady, Nathan R

    2014-09-10

    Autophagy is a vesicle-mediated pathway for lysosomal degradation, essential under basal and stressed conditions. Various cellular components, including specific proteins, protein aggregates, organelles and intracellular pathogens, are targets for autophagic degradation. Thereby, autophagy controls numerous vital physiological and pathophysiological functions, including cell signaling, differentiation, turnover of cellular components and pathogen defense. Moreover, autophagy enables the cell to recycle cellular components to metabolic substrates, thereby permitting prolonged survival under low nutrient conditions. Due to the multi-faceted roles for autophagy in maintaining cellular and organismal homeostasis and responding to diverse stresses, malfunction of autophagy contributes to both chronic and acute pathologies. We applied a systems biology approach to improve the understanding of this complex cellular process of autophagy. All autophagy pathway vesicle activities, i.e. creation, movement, fusion and degradation, are highly dynamic, temporally and spatially, and under various forms of regulation. We therefore developed an agent-based model (ABM) to represent individual components of the autophagy pathway, subcellular vesicle dynamics and metabolic feedback with the cellular environment, thereby providing a framework to investigate spatio-temporal aspects of autophagy regulation and dynamic behavior. The rules defining our ABM were derived from literature and from high-resolution images of autophagy markers under basal and activated conditions. Key model parameters were fit with an iterative method using a genetic algorithm and a predefined fitness function. From this approach, we found that accurate prediction of spatio-temporal behavior required increasing model complexity by implementing functional integration of autophagy with the cellular nutrient state. The resulting model is able to reproduce short-term autophagic flux measurements (up to 3

  12. Brain MR image segmentation based on an improved active contour model.

    Directory of Open Access Journals (Sweden)

    Xiangrui Meng

    Full Text Available It is often a difficult task to accurately segment brain magnetic resonance (MR) images with intensity inhomogeneity and noise. This paper introduces a novel level set method for simultaneous brain MR image segmentation and intensity inhomogeneity correction. To reduce the effect of noise, novel anisotropic spatial information, which can preserve more details of edges and corners, is proposed by incorporating the inner relationships among neighboring pixels. Then the proposed energy function uses the multivariate Student's t-distribution to fit the distribution of the intensities of each tissue. Furthermore, the proposed model utilizes hidden Markov random fields to model the spatial correlation between neighboring pixels/voxels. The means of the multivariate Student's t-distribution can be adaptively estimated by multiplying a bias field to reduce the effect of intensity inhomogeneity. Finally, we reconstruct the energy function to be convex and solve it using the Split Bregman method, which allows random initialization of our framework and thereby fully automated application. Our method can obtain the final result in less than 1 second for a 2D image of size 256 × 256 and in less than 300 seconds for a 3D image of size 256 × 256 × 171. The proposed method was compared to other state-of-the-art segmentation methods using both synthetic and clinical brain MR images and increased the accuracy of the results by more than 3%.

  13. Hyperspectral imaging technology for revealing the original handwritings covered by the same inks

    Directory of Open Access Journals (Sweden)

    Yuanyuan Lian

    2017-01-01

    Full Text Available This manuscript presents a preliminary investigation of the applicability of hyperspectral imaging technology for nondestructive and rapid analysis to reveal covered original handwritings. The hyperspectral imager Nuance-Macro was used to collect the reflected light signature of inks from the overlapping parts. The software Nuance1p46 was used to analyze the reflected light signature of inks, which shows the covered original handwritings. Different types of black/blue ballpoint pen inks and black/blue gel pen inks were chosen for sample preparation. From the hyperspectral images examined, the covered original handwritings were revealed in 90.5%, 69.1%, 49.5%, and 78.6% of the cases. Further, the correlation between the revealing effect and the spectral characteristics of the reflected light of inks at the overlapping parts was interpreted through theoretical analysis and experimental verification. The results indicated that when the spectral characteristics of the reflected light of inks at the overlapping parts were the same as, or very similar to, those of the ink used to cover the original handwriting, the original handwriting could not be shown. Conversely, when the spectral characteristics of the reflected light of inks at the overlapping parts differed from those of the ink used to cover the original handwriting, the original handwriting was revealed.

  14. Using a method based on Potts Model to segment a micro-CT image stack of trabecular bones of femoral region

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Pedro H.A. de; Cabral, Manuela O.M., E-mail: andrade.pha@gmail.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Engenharia Nuclear; Vieira, Jose W.; Correia, Filipe L. de B., E-mail: jose.wilson59@uol.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil); Lima, Fernando R. De A., E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)]

    2015-07-01

    Exposure Computational Models are composed basically of an anthropomorphic phantom, a Monte Carlo (MC) code, and an algorithm simulating the radioactive source. Tomographic phantoms are developed from medical images and must be pre-processed and segmented before being coupled to a MC code (which simulates the interaction of radiation with matter). This work presents a methodology used for the treatment of a micro-CT image stack of a femur, obtained from a 30 year old female skeleton provided by the Imaging Laboratory for Anthropology of the University of Bristol, UK. These images have a resolution of 60 micrometers, and from them a block containing only 160 x 60 x 160 pixels of trabecular tissue and bone marrow was cut and saved as a *.sgi file (header + *.raw file). The Grupo de Dosimetria Numerica (Recife-PE, Brazil) developed a software named Digital Image Processing (DIP), in which a segmentation method based on a physical model for particle interaction known as the Potts Model (or q-Ising) was implemented. This model analyzes the statistical dependence between sites in a network. In the Potts Model, when the values of the spin variables at neighboring sites are identical, an 'energy of interaction' is assigned between them. Otherwise, it is said that the sites do not interact. Making an analogy between network sites and the pixels of a digital image and, moreover, between the spin variables and the intensity of the gray scale, it was possible to apply this model to obtain texture descriptors and segment the image. (author)
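
    To make the Potts (q-Ising) analogy concrete, the following sketch assigns an interaction energy between neighboring pixels that share a label and greedily relabels pixels to lower the total energy; the data term, the value of the coupling constant beta, and the ICM-style update are illustrative assumptions rather than the DIP implementation.

```python
import numpy as np

def potts_segment(image, n_labels=3, beta=1.5, n_iter=10):
    """Greedy (ICM-style) minimization of a Potts energy on a grayscale image."""
    centers = np.linspace(image.min(), image.max(), n_labels)    # per-label gray levels
    labels = np.abs(image[..., None] - centers).argmin(-1)       # initial labeling
    for _ in range(n_iter):
        energies = []
        for lab in range(n_labels):
            data = (image - centers[lab]) ** 2                   # data-fidelity term
            agree = np.zeros_like(image)
            for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
                agree += (np.roll(labels, shift, axis=axis) == lab)
            energies.append(data - beta * agree)                 # reward agreeing neighbors
        labels = np.argmin(np.stack(energies, axis=-1), axis=-1)
        centers = np.array([image[labels == lab].mean() if np.any(labels == lab)
                            else centers[lab] for lab in range(n_labels)])
    return labels

# Toy usage: segment a noisy synthetic slice into three classes (e.g. bone, marrow, background).
rng = np.random.default_rng(1)
slice_ = np.clip(np.kron(rng.integers(0, 3, (8, 8)) / 2.0, np.ones((8, 8)))
                 + rng.normal(0, 0.1, (64, 64)), 0, 1)
seg = potts_segment(slice_)
print("labels present:", np.unique(seg))
```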

  15. Using a method based on Potts Model to segment a micro-CT image stack of trabecular bones of femoral region

    International Nuclear Information System (INIS)

    Andrade, Pedro H.A. de; Cabral, Manuela O.M.; Lima, Fernando R. De A.

    2015-01-01

    Exposure Computational Models are composed basically of an anthropomorphic phantom, a Monte Carlo (MC) code, and an algorithm simulating the radioactive source. Tomographic phantoms are developed from medical images and must be pre-processed and segmented before being coupled to a MC code (which simulates the interaction of radiation with matter). This work presents a methodology used for the treatment of a micro-CT image stack of a femur, obtained from a 30 year old female skeleton provided by the Imaging Laboratory for Anthropology of the University of Bristol, UK. These images have a resolution of 60 micrometers, and from them a block containing only 160 x 60 x 160 pixels of trabecular tissue and bone marrow was cut and saved as a *.sgi file (header + *.raw file). The Grupo de Dosimetria Numerica (Recife-PE, Brazil) developed a software named Digital Image Processing (DIP), in which a segmentation method based on a physical model for particle interaction known as the Potts Model (or q-Ising) was implemented. This model analyzes the statistical dependence between sites in a network. In the Potts Model, when the values of the spin variables at neighboring sites are identical, an 'energy of interaction' is assigned between them. Otherwise, it is said that the sites do not interact. Making an analogy between network sites and the pixels of a digital image and, moreover, between the spin variables and the intensity of the gray scale, it was possible to apply this model to obtain texture descriptors and segment the image. (author)

  16. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    International Nuclear Information System (INIS)

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-01

    Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches thus represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, has been introduced. The method is showcased for the case of cylindrical symmetries by using polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries is anticipated.

  17. Remote sensing image segmentation based on Hadoop cloud platform

    Science.gov (United States)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

    To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies a remote sensing image segmentation method based on the Hadoop platform. Based on an analysis of the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, it proposes an image segmentation method that combines OpenCV with the Hadoop cloud platform. Firstly, the MapReduce image processing model of the Hadoop cloud platform is designed, the image input and output formats are customized, and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment is carried out on a remote sensing image and, for comparison, the same Mean Shift segmentation is implemented in MATLAB. The experimental results show that, while maintaining good segmentation quality, the segmentation speed of remote sensing image segmentation based on the Hadoop cloud platform is greatly improved compared with single-machine MATLAB segmentation, and the effectiveness of image segmentation is also improved.
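
    As a rough illustration of the Mean Shift step that such a MapReduce job would apply to each image split, the sketch below runs OpenCV's pyrMeanShiftFiltering on a single tile; the spatial and color bandwidths, the tile file name, and the label quantization are assumptions, and the Hadoop input/output plumbing described in the abstract is omitted.

```python
import cv2
import numpy as np

def segment_tile(tile_bgr, spatial_radius=21, color_radius=30):
    """Mean Shift filtering of one image tile, as a mapper might do for each split."""
    filtered = cv2.pyrMeanShiftFiltering(tile_bgr, spatial_radius, color_radius)
    # Quantize the filtered colors into discrete segment labels for a later reduce stage.
    flat = filtered.reshape(-1, 3)
    _, labels = np.unique(flat // 16, axis=0, return_inverse=True)
    return labels.reshape(filtered.shape[:2])

tile = cv2.imread("tile_0001.png")        # hypothetical split produced by the job
if tile is not None:
    labels = segment_tile(tile)
    print("segments in tile:", labels.max() + 1)
```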

  18. A method based on IHS cylindrical transform model for quality assessment of image fusion

    Science.gov (United States)

    Zhu, Xiaokun; Jia, Yonghong

    2005-10-01

    Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for quality assessment of image fusion in remote sensing have also become a research issue at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects: on the one hand, most indexes lack theoretical support for comparing different fusion methods; on the other hand, there is no uniform preference among most of the quantitative assessment indexes when they are applied to estimate fusion effects. That is, spatial resolution and spectral features cannot be analyzed simultaneously by these indexes, and there is no general method to unify spatial and spectral feature assessment. So in this paper, on the basis of the approximate general model of four traditional fusion methods, including Intensity Hue Saturation (IHS) triangle transform fusion, High Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only obtain evaluation results for spatial and spectral features on the basis of a uniform preference, but can also compare source and fused images and reveal differences among fusion methods. Compared with traditional assessment methods, the new method is more intuitive and in accord with subjective estimation.

  19. Routine magnetic resonance imaging for idiopathic olfactory loss: a modeling-based economic evaluation.

    Science.gov (United States)

    Rudmik, Luke; Smith, Kristine A; Soler, Zachary M; Schlosser, Rodney J; Smith, Timothy L

    2014-10-01

    Idiopathic olfactory loss is a common clinical scenario encountered by otolaryngologists. While trying to allocate limited health care resources appropriately, the decision to obtain a magnetic resonance imaging (MRI) scan to investigate for a rare intracranial abnormality can be difficult. To evaluate the cost-effectiveness of ordering routine MRI in patients with idiopathic olfactory loss. We performed a modeling-based economic evaluation with a time horizon of less than 1 year. Patients included in the analysis had idiopathic olfactory loss defined by no preceding viral illness or head trauma and negative findings of a physical examination and nasal endoscopy. Routine MRI vs no-imaging strategies. We developed a decision tree economic model from the societal perspective. Effectiveness, probability, and cost data were obtained from the published literature. Litigation rates and costs related to a missed diagnosis were obtained from the Physicians Insurers Association of America. A univariate threshold analysis and multivariate probabilistic sensitivity analysis were performed to quantify the degree of certainty in the economic conclusion of the reference case. The comparative groups included those who underwent routine MRI of the brain with contrast alone and those who underwent no brain imaging. The primary outcome was the cost per correct diagnosis of idiopathic olfactory loss. The mean (SD) cost for the MRI strategy totaled $2400.00 ($1717.54) and was effective 100% of the time, whereas the mean (SD) cost for the no-imaging strategy totaled $86.61 ($107.40) and was effective 98% of the time. The incremental cost-effectiveness ratio for the MRI strategy compared with the no-imaging strategy was $115 669.50, which is higher than most acceptable willingness-to-pay thresholds. The threshold analysis demonstrated that when the probability of having a treatable intracranial disease process reached 7.9%, the incremental cost-effectiveness ratio for MRI vs no
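
    The reported incremental cost-effectiveness ratio follows directly from the stated costs and effectiveness values; the short check below reproduces that arithmetic.

```python
# ICER = (cost_MRI - cost_no_imaging) / (effectiveness_MRI - effectiveness_no_imaging)
cost_mri, eff_mri = 2400.00, 1.00        # routine MRI strategy
cost_none, eff_none = 86.61, 0.98        # no-imaging strategy

icer = (cost_mri - cost_none) / (eff_mri - eff_none)
print(f"ICER: ${icer:,.2f} per additional correct diagnosis")  # -> $115,669.50
```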

  20. Anthropometric body measurements based on multi-view stereo image reconstruction.

    Science.gov (United States)

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home-based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.

  1. A GPU based high-resolution multilevel biomechanical head and neck model for validating deformable image registration

    Energy Technology Data Exchange (ETDEWEB)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Qi, X.; Sheng, K.; Low, D. A.; Kupelian, P.; Santhanam, A. [Department of Radiation Oncology, University of California Los Angeles, 200 Medical Plaza, #B265, Los Angeles, California 90095 (United States); Staton, R.; Pukala, J.; Manon, R. [Department of Radiation Oncology, M.D. Anderson Cancer Center, Orlando, 1440 South Orange Avenue, Orlando, Florida 32808 (United States)

    2015-01-15

    Purpose: Validating the usage of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as rigid body, the muscle and soft tissue structures were modeled as mass–spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly. Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may
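
    The soft-tissue dynamics described above reduce to integrating mass-spring-damper equations for networks of voxel nodes; the sketch below advances a single chain of nodes with a semi-implicit Euler step (simpler than the fully implicit, two-substep scheme the authors describe), with stiffness, damping, rest length, and time step chosen purely for illustration.

```python
import numpy as np

# Hypothetical 1D chain of soft-tissue nodes connected to their neighbors by springs.
n = 10
mass = np.ones(n)                     # normalized masses (per voxel, from HU in the paper)
k, c, dt = 50.0, 2.0, 1e-3            # stiffness, damping, time step (illustrative)
rest = 1.0                            # rest length between adjacent nodes
x = np.arange(n, dtype=float)         # positions
v = np.zeros(n)                       # velocities
x[0] = -0.3                           # "skeletal actuation": displace the anchored end

for _ in range(20000):
    f = np.zeros(n)
    stretch = np.diff(x) - rest                   # spring elongation between neighbors
    f[:-1] += k * stretch                         # force on the left node of each spring
    f[1:]  -= k * stretch                         # equal and opposite on the right node
    f -= c * v                                    # viscous damping
    a = f / mass
    v[1:] += dt * a[1:]                           # node 0 stays fixed (actuated boundary)
    x[1:] += dt * v[1:]                           # semi-implicit Euler position update

print("final spacing between nodes:", np.round(np.diff(x), 3))
```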

  2. A GPU based high-resolution multilevel biomechanical head and neck model for validating deformable image registration

    International Nuclear Information System (INIS)

    Neylon, J.; Qi, X.; Sheng, K.; Low, D. A.; Kupelian, P.; Santhanam, A.; Staton, R.; Pukala, J.; Manon, R.

    2015-01-01

    Purpose: Validating the usage of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as rigid body, the muscle and soft tissue structures were modeled as mass–spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly. Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may

  3. Image quality of iterative reconstruction in cranial CT imaging: comparison of model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASiR).

    Science.gov (United States)

    Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S

    2015-01-01

    The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to depiction of different parenchymal structures and the impact of artefacts on IQ using a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR, and MBIR images were rated with significantly higher IQ scores than ASiR. As CCT is an examination that is frequently required, the use of MBIR may allow for substantial reduction of radiation exposure caused by medical diagnostics. • Model-based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR reconstructed images were rated with significantly higher scores for image quality. • Model-based iterative reconstruction may allow reduced-dose diagnostic examination protocols.

  4. GPR Imaging for Deeply Buried Objects: A Comparative Study Based on FDTD Models and Field Experiments

    Science.gov (United States)

    Tilley, Roger; Dowla, Farid; Nekoogar, Faranak; Sadjadpour, Hamid

    2012-01-01

    Conventional use of Ground Penetrating Radar (GPR) is hampered by variations in background environmental conditions, such as water content in soil, resulting in poor repeatability of results over long periods of time when the radar pulse characteristics are kept the same. Target object types might include voids, tunnels, unexploded ordnance, etc. The long-term objective of this work is to develop methods that would extend the use of GPR under various environmental and soil conditions, provided an optimal set of radar parameters (such as frequency, bandwidth, and sensor configuration) is adaptively employed based on the ground conditions. Towards that objective, developing Finite Difference Time Domain (FDTD) GPR models, verified by experimental results, would allow us to develop analytical and experimental techniques to control radar parameters and obtain consistent GPR images under changing ground conditions. Reported here is an attempt at developing 2D and 3D FDTD models of buried targets verified by two different radar systems capable of operating over different soil conditions. Experimental radar data employed were from a custom-designed high-frequency (200 MHz) multi-static sensor platform capable of producing 3-D images, and a longer-wavelength (25 MHz) COTS radar (Pulse EKKO 100) capable of producing 2-D images. Our results indicate that different types of radar can produce consistent images.
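
    A minimal one-dimensional FDTD update of the kind such GPR models are built on is sketched below; the grid size, soil permittivity, and Ricker-style source are illustrative assumptions, not the parameters of the reported 2D and 3D models.

```python
import numpy as np

# 1D FDTD (Yee) update for a pulse entering a lossless soil half-space.
nz, nt = 400, 800
c0, dz = 3e8, 0.05                        # free-space speed, 5 cm cells
dt = dz / (2 * c0)                        # Courant-stable time step
eps_r = np.ones(nz)
eps_r[200:] = 9.0                         # soil with relative permittivity 9 below cell 200

Ex = np.zeros(nz)
Hy = np.zeros(nz)
for n in range(nt):
    Hy[:-1] += (dt / (4e-7 * np.pi * dz)) * (Ex[1:] - Ex[:-1])          # update H from curl E
    Ex[1:] += (dt / (8.854e-12 * eps_r[1:] * dz)) * (Hy[1:] - Hy[:-1])  # update E from curl H
    t = (n - 60) * dt * 2e9
    Ex[20] += (1 - 2 * t**2) * np.exp(-t**2)                            # Ricker-like soft source

print("peak field in soil:", np.abs(Ex[200:]).max())
```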

  5. An atlas-based multimodal registration method for 2D images with discrepancy structures.

    Science.gov (United States)

    Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng

    2018-06-04

    An atlas-based multimodal registration method for 2-dimensional images with discrepancy structures is proposed in this paper. The atlas is utilized to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured the registration performance by the sum of squared intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: a schematic diagram of the atlas-based multimodal registration method.

  6. Three-dimensional digital imaging based on shifted point-array encoding.

    Science.gov (United States)

    Tian, Jindong; Peng, Xiang

    2005-09-10

    An approach to three-dimensional (3D) imaging based on shifted point-array encoding is presented. A point-array structured light pattern is projected sequentially onto the reference plane and onto the object surface under test, forming a pair of point-array images. A mathematical model is established to formulate the imaging process with the pair of point arrays. This formulation allows for a description of the relationship between the range image of the object surface and the lateral displacement of each point in the point-array image. Based on this model, one can reconstruct each 3D range image point by computing the lateral displacement of the corresponding point in the two point-array images. The encoded point array can be shifted digitally along both the lateral and longitudinal directions step by step to achieve high spatial resolution. Experimental results show good agreement with the theoretical predictions. This method is applicable to 3D imaging of object surfaces with complex topology or large height discontinuities.

  7. Changes in Male Rat Sexual Behavior and Brain Activity Revealed by Functional Magnetic Resonance Imaging in Response to Chronic Mild Stress.

    Science.gov (United States)

    Chen, Guotao; Yang, Baibing; Chen, Jianhuai; Zhu, Leilei; Jiang, Hesong; Yu, Wen; Zang, Fengchao; Chen, Yun; Dai, Yutian

    2018-02-01

    Non-organic erectile dysfunction (noED) at functional imaging has been related to abnormal brain activity and requires animal models for further research on the associated molecular mechanisms. To develop a noED animal model based on chronic mild stress and investigate brain activity changes. We used 6 weeks of chronic mild stress to induce depression. The sucrose consumption test was used to assess the hedonic state. The apomorphine test and sexual behavior test were used to select male rats with ED. Rats with depression and ED were considered to have noED. Blood oxygen level-dependent resting-state functional magnetic resonance imaging (fMRI) studies were conducted on these rats, and the amplitude of low-frequency fluctuations and functional connectivity were analyzed to determine brain activity changes. The sexual behavior test and resting-state fMRI were used as outcome measures. The induction of depression was confirmed by the sucrose consumption test. A low intromission ratio and increased mount and intromission latencies were observed in male rats with depression. No erection was observed in male rats with depression during the apomorphine test. Male rats with depression and ED were considered to have noED. The possible central pathologic mechanism shown by fMRI involved the amygdaloid body, dorsal thalamus, hypothalamus, caudate-putamen, cingulate gyrus, insular cortex, visual cortex, sensory cortex, motor cortex, and cerebellum. Similar findings have been found in humans. The present study provided a novel noED rat model for further research on the central mechanism of noED. The present study developed a novel noED rat model and analyzed brain activity changes based on fMRI. The observed brain activity alterations might not extend to humans. The present study developed a novel noED rat model with brain activity alterations related to sexual arousal and erection, which will be helpful for further research involving the central mechanism of noED. Chen

  8. OCML-based colour image encryption

    International Nuclear Information System (INIS)

    Rhouma, Rhouma; Meherzi, Soumaya; Belghith, Safya

    2009-01-01

    The chaos-based cryptographic algorithms have suggested some new ways to develop efficient image-encryption schemes. While most of these schemes are based on low-dimensional chaotic maps, it has been proposed recently to use high-dimensional chaos, namely spatiotemporal chaos, which is modelled by one-way coupled-map lattices (OCML). Owing to their hyperchaotic behaviour, such systems are assumed to enhance the cryptosystem security. In this paper, we propose an OCML-based colour image encryption scheme with a stream cipher structure. We use a 192-bit-long external key to generate the initial conditions and the parameters of the OCML. We have made several tests to check the security of the proposed cryptosystem, namely statistical tests including histogram analysis and calculation of the correlation coefficients of adjacent pixels, security tests against differential attack including calculation of the number of pixels change rate (NPCR) and unified average changing intensity (UACI), and entropy calculation. The cryptosystem speed is analyzed and tested as well.
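
    The NPCR and UACI measures mentioned above have standard definitions that are straightforward to compute; the sketch below applies them to two stand-in cipher images (random arrays, since the OCML cipher itself is not reproduced here), for which values near 99.6% and 33.5% are expected.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """Number of Pixels Change Rate and Unified Average Changing Intensity (percent)."""
    c1 = c1.astype(np.int16)
    c2 = c2.astype(np.int16)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)
    return npcr, uaci

# Stand-ins for two cipher images produced from plaintexts differing in a single pixel.
rng = np.random.default_rng(42)
cipher_a = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
cipher_b = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
print("NPCR %.2f%%, UACI %.2f%%" % npcr_uaci(cipher_a, cipher_b))
```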

  9. Model-based microwave image reconstruction: simulations and experiments

    International Nuclear Information System (INIS)

    Ciocan, Razvan; Jiang Huabei

    2004-01-01

    We describe an integrated microwave imaging system that can provide spatial maps of dielectric properties of heterogeneous media from tomographically collected data. The hardware system (800-1200 MHz) was built based on a lock-in amplifier with 16 fixed antennas. The reconstruction algorithm was implemented using a Newton iterative method with combined Marquardt-Tikhonov regularizations. System performance was evaluated using heterogeneous media mimicking human breast tissue. The finite element method coupled with the Bayliss and Turkel radiation boundary conditions was applied to compute the electric field distribution in the heterogeneous media of interest. The results show that inclusions embedded in a 76 mm diameter background medium can be quantitatively reconstructed from both simulated and experimental data. Quantitative analysis of the microwave images obtained suggests that an inclusion of 14 mm in diameter is the smallest object that can presently be fully characterized using experimental data, while objects as small as 10 mm in diameter can be quantitatively resolved with simulated data.
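
    The Newton iterative reconstruction with combined Marquardt-Tikhonov regularizations amounts to repeatedly solving a damped, penalized normal-equation update; the sketch below shows such a step for a generic linear forward model, with the Jacobian, data, and regularization weights as placeholders rather than the authors' finite-element quantities.

```python
import numpy as np

def regularized_newton_step(jacobian, residual, x, lam_marquardt=1e-2, lam_tikhonov=1e-3):
    """One Gauss-Newton update with Marquardt damping and a Tikhonov penalty on x.

    Solves (J^T J + (lam_M + lam_T) I) dx = J^T r - lam_T x  and returns x + dx.
    """
    JtJ = jacobian.T @ jacobian
    n = JtJ.shape[0]
    lhs = JtJ + (lam_marquardt + lam_tikhonov) * np.eye(n)
    rhs = jacobian.T @ residual - lam_tikhonov * x
    return x + np.linalg.solve(lhs, rhs)

# Toy usage with a random linear forward model y = A @ x_true.
rng = np.random.default_rng(0)
A = rng.normal(size=(120, 40))
x_true = rng.normal(size=40)
y = A @ x_true
x = np.zeros(40)
for _ in range(20):
    x = regularized_newton_step(A, y - A @ x, x)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```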

  10. Validation of an imageable surgical resection animal model of Glioblastoma (GBM).

    Science.gov (United States)

    Sweeney, Kieron J; Jarzabek, Monika A; Dicker, Patrick; O'Brien, Donncha F; Callanan, John J; Byrne, Annette T; Prehn, Jochen H M

    2014-08-15

    Glioblastoma (GBM) is the most common and malignant primary brain tumour, having a median survival of just 12-18 months following standard therapy protocols. Local recurrence following resection and adjuvant therapy occurs in most cases. U87MG-luc2-bearing GBM xenografts underwent 4.5 mm craniectomy and tumour resection using microsurgical techniques. The cranial defect was repaired using a novel modified cranial window technique consisting of a circular microscope coverslip held in place with glue. Immediate post-operative bioluminescence imaging (BLI) revealed a gross total resection rate of 75%. At the censor point 4 weeks post-resection, Kaplan-Meier survival analysis revealed 100% survival in the surgical group compared to 0% in the non-surgical cohort (p=0.01). No neurological deficits or infections were observed in the surgical group. GBM recurrence was reliably imaged using facile non-invasive optical BLI, with recurrence observed at week 4. For the first time, we have used a novel cranial defect repair method to extend and improve intracranial surgical resection methods for application in translational GBM rodent disease models. Combining BLI and the cranial window technique described herein facilitates non-invasive serial imaging follow-up. Within the current context we have developed a robust methodology for establishing a clinically relevant imageable GBM surgical resection model that appropriately mimics GBM recurrence post-resection in patients. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Individualized directional microphone optimization in hearing aids based on reconstructing the 3D geometry of the head and ear from 2D images

    DEFF Research Database (Denmark)

    Harder, Stine

    head model based on 2D images, the second step is to simulate individual head related transfer functions (HRTFs) based on the estimated 3D head model and the final step is to calculate optimal directional filters based on the simulated HRTFs. The pipeline is employed on a Behind-The-Ear (BTE) hearing...... against non-individual directional filters revealed equally high Articulation-Index weighted Directivity Index (AI-DI) values for our specific test subject. However, measurements on other individuals indicate that the performance of the non-individual filters vary among subjects, and in particular...

  12. Fisheye image rectification using spherical and digital distortion models

    Science.gov (United States)

    Li, Xin; Pi, Yingdong; Jia, Yanling; Yang, Yuhui; Chen, Zhiyong; Hou, Wenguang

    2018-02-01

    Fisheye cameras have been widely used in many applications, including close-range visual navigation and observation and cyber city reconstruction, because their field of view is much larger than that of a common pinhole camera. This means that a fisheye camera can capture more information than a pinhole camera in the same scenario. However, fisheye images contain severe distortion, which can make it difficult for human observers to recognize the objects within them. Therefore, in most practical applications, the fisheye image should be rectified to a pinhole perspective projection image to conform to human cognitive habits. Traditional mathematical-model-based methods cannot effectively remove the distortion, whereas the digital distortion model reduces the image resolution to some extent. Considering these defects, this paper proposes a new method that combines the physical spherical model and the digital distortion model. The distortion of fisheye images can be effectively removed with the proposed approach. Many experiments validate its feasibility and effectiveness.
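
    As a minimal sketch of rectifying a fisheye image to a pinhole projection under a simple spherical (equidistant, r = f*theta) model, the code below builds a remap table and resamples the image with OpenCV; the focal lengths, image centers, and file names are assumptions, and the combined spherical-plus-digital distortion model of the paper is not reproduced.

```python
import cv2
import numpy as np

def rectify_equidistant(fisheye_img, f_fish, f_pin, out_size):
    """Remap a fisheye image (equidistant model r = f*theta) to a pinhole projection."""
    h, w = out_size
    cx_out, cy_out = w / 2.0, h / 2.0
    cx_in, cy_in = fisheye_img.shape[1] / 2.0, fisheye_img.shape[0] / 2.0

    u, v = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    dx, dy = u - cx_out, v - cy_out
    r_pin = np.hypot(dx, dy)
    theta = np.arctan2(r_pin, f_pin)                 # ray angle from the optical axis
    r_fish = f_fish * theta                          # equidistant fisheye radius
    scale = np.where(r_pin > 0, r_fish / np.maximum(r_pin, 1e-9), 0.0)
    map_x = (cx_in + dx * scale).astype(np.float32)
    map_y = (cy_in + dy * scale).astype(np.float32)
    return cv2.remap(fisheye_img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

img = cv2.imread("fisheye.jpg")                      # hypothetical input image
if img is not None:
    rectified = rectify_equidistant(img, f_fish=320.0, f_pin=500.0, out_size=(720, 960))
    cv2.imwrite("rectified.jpg", rectified)
```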

  13. Implicit prosody mining based on the human eye image capture technology

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2013-08-01

    The technology of eye tracking has become one of the main methods for analyzing recognition issues in human-computer interaction. Human eye image capture is the key problem in eye tracking. Based on further research, a new human-computer interaction method is introduced to enrich the forms of speech synthesis. We propose a method of implicit prosody mining based on human eye image capture technology: parameters are extracted from images of the eyes during reading, prosody generation in speech synthesis is controlled and driven by these parameters, and a prosodic model with high simulation accuracy is established. The duration model is a key issue for prosody generation. For the duration model, this paper puts forward a new idea of obtaining the gaze duration of the eyes during reading based on eye image capture technology, and of synchronously controlling this duration and the pronunciation duration in speech synthesis. The movement of the eyes during reading is a comprehensive, multi-factor interactive process involving fixations, saccades, and regressions. Therefore, how to extract the appropriate information from eye images needs to be considered, and the gaze regularity of the eyes needs to be obtained as a reference for modeling. Based on an analysis of three current eye movement control models and the characteristics of implicit prosody in reading, the relative independence between the text speech processing system and the eye movement control system is discussed. It is shown that, under the same text familiarity condition, the gaze duration of the eyes during reading and the internal voice pronunciation duration are synchronous. An eye gaze duration model based on the prosodic structure of the Chinese language is presented, replacing previous methods of machine learning and probability forecasting, to obtain readers' real internal reading rhythm and to synthesize voice with personalized rhythm. This research will enrich human-computer interaction forms and has practical significance and application prospects in terms of

  14. Space-based infrared sensors of space target imaging effect analysis

    Science.gov (United States)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    Target identification is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of studying target detection and recognition. This paper first establishes a space-based infrared sensor imaging model of a point-source ballistic target above the planet's atmosphere; then, from the two aspects of space-based sensor camera parameters and target characteristics, it simulates the infrared imaging of the exo-atmospheric ballistic target and analyzes the effects of camera line-of-sight jitter, camera system noise, and different wavebands on the imaged target.

  15. A Feasibility Study of Photoacoustic Detection of Hidden Dental Caries Using a Fiber-Based Imaging System

    Directory of Open Access Journals (Sweden)

    Takuya Koyama

    2018-04-01

    Full Text Available In this paper, the feasibility of an optical fiber-based photoacoustic imaging system for detecting caries lesions inside a tooth is examined. Models of hidden caries were prepared using a pigment with an absorption spectrum similar to that of real caries lesions, and the occlusal surface of the model teeth containing the pigment was irradiated with laser pulses with a wavelength of 532 nm. An examination of the frequency spectra of the emitted photoacoustic waves revealed that the spectra from simulated caries lesions included frequency components in the range of 0.5–1.2 MHz that were not seen in the spectra from healthy parts of the teeth. This indicates that hidden caries can be detected via a photoacoustic imaging technique. Accordingly, an imaging system for clinical applications was fabricated. It consists of a bundle of hollow-optical fibers for laser radiation and an acoustic probe that is attached to the tooth surface. Results of ex vivo imaging experiments using model teeth and an extracted tooth with hidden caries lesions show that relatively large caries lesions inside teeth that are not seen in visual inspections can be detected by focusing on the above frequency components of the photoacoustic waves.

  16. An automated approach for segmentation of intravascular ultrasound images based on parametric active contour models

    International Nuclear Information System (INIS)

    Vard, Alireza; Jamshidi, Kamal; Movahhedinia, Naser

    2012-01-01

    This paper presents a fully automated approach to detect the intima and media-adventitia borders in intravascular ultrasound images based on parametric active contour models. To detect the intima border, we compute a new image feature applying a combination of short-term autocorrelations calculated for the contour pixels. These feature values are employed to define an energy function of the active contour called normalized cumulative short-term autocorrelation. Exploiting this energy function, the intima border is separated accurately from the blood region contaminated by high speckle noise. To extract media-adventitia boundary, we define a new form of energy function based on edge, texture and spring forces for the active contour. Utilizing this active contour, the media-adventitia border is identified correctly even in presence of branch openings and calcifications. Experimental results indicate accuracy of the proposed methods. In addition, statistical analysis demonstrates high conformity between manual tracing and the results obtained by the proposed approaches.

  17. Multiscale image-based modeling and simulation of gas flow and particle transport in the human lungs

    Science.gov (United States)

    Tawhai, Merryn H; Hoffman, Eric A

    2013-01-01

    Improved understanding of structure and function relationships in the human lungs in individuals and sub-populations is fundamentally important to the future of pulmonary medicine. Image-based measures of the lungs can provide sensitive indicators of localized features, however to provide a better prediction of lung response to disease, treatment and environment, it is desirable to integrate quantifiable regional features from imaging with associated value-added high-level modeling. With this objective in mind, recent advances in computational fluid dynamics (CFD) of the bronchial airways - from a single bifurcation symmetric model to a multiscale image-based subject-specific lung model - will be reviewed. The interaction of CFD models with local parenchymal tissue expansion - assessed by image registration - allows new understanding of the interplay between environment, hot spots where inhaled aerosols could accumulate, and inflammation. To bridge ventilation function with image-derived central airway structure in CFD, an airway geometrical modeling method that spans from the model ‘entrance’ to the terminal bronchioles will be introduced. Finally, the effects of turbulent flows and CFD turbulence models on aerosol transport and deposition will be discussed. CFD simulation of airflow and particle transport in the human lung has been pursued by a number of research groups, whose interest has been in studying flow physics and airways resistance, improving drug delivery, or investigating which populations are most susceptible to inhaled pollutants. The three most important factors that need to be considered in airway CFD studies are lung structure, regional lung function, and flow characteristics. Their correct treatment is important because the transport of therapeutic or pollutant particles is dependent on the characteristics of the flow by which they are transported; and the airflow in the lungs is dependent on the geometry of the airways and how ventilation

  18. A Joint Land Cover Mapping and Image Registration Algorithm Based on a Markov Random Field Model

    Directory of Open Access Journals (Sweden)

    Apisit Eiumnoh

    2013-10-01

    Full Text Available Traditionally, image registration of multi-modal and multi-temporal images is performed satisfactorily before land cover mapping. However, since multi-modal and multi-temporal images are likely to be obtained from different satellite platforms and/or acquired at different times, perfect alignment is very difficult to achieve. As a result, a proper land cover mapping algorithm must be able to correct registration errors as well as perform an accurate classification. In this paper, we propose a joint classification and registration technique based on a Markov random field (MRF model to simultaneously align two or more images and obtain a land cover map (LCM of the scene. The expectation maximization (EM algorithm is employed to solve the joint image classification and registration problem by iteratively estimating the map parameters and approximate posterior probabilities. Then, the maximum a posteriori (MAP criterion is used to produce an optimum land cover map. We conducted experiments on a set of four simulated images and one pair of remotely sensed images to investigate the effectiveness and robustness of the proposed algorithm. Our results show that, with proper selection of a critical MRF parameter, the resulting LCMs derived from an unregistered image pair can achieve an accuracy that is as high as when images are perfectly aligned. Furthermore, the registration error can be greatly reduced.

  19. Multispectral imaging reveals biblical-period inscription unnoticed for half a century.

    Directory of Open Access Journals (Sweden)

    Shira Faigenbaum-Golovin

    Full Text Available Most surviving biblical period Hebrew inscriptions are ostraca: ink-on-clay texts. They are poorly preserved and, once unearthed, fade rapidly. Therefore, proper and timely documentation of ostraca is essential. Here we show a striking example of a hitherto invisible text on the back side of an ostracon revealed via multispectral imaging. This ostracon, found at the desert fortress of Arad and dated to ca. 600 BCE (the eve of Judah's destruction by Nebuchadnezzar), has been on display for half a century. Its front side has been thoroughly studied, while its back side was considered blank. Our research revealed three lines of text on the supposedly blank side and four "new" lines on the front side. Our results demonstrate the need for multispectral image acquisition for both sides of all ancient ink ostraca. Moreover, in certain cases we recommend employing multispectral techniques for screening newly unearthed ceramic potsherds prior to disposal.

  20. Nanoplatform-based molecular imaging

    National Research Council Canada - National Science Library

    Chen, Xiaoyuan

    2011-01-01

    "Nanoplathform-Based Molecular Imaging provides rationale for using nanoparticle-based probes for molecular imaging, then discusses general strategies for this underutilized, yet promising, technology...

  1. Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging

    Science.gov (United States)

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar

    2015-01-01

    Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm. PMID:23846466
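
    The second reconstruction route described above, a pseudoinverse with Tikhonov regularization with respect to a dictionary, has a closed-form solution that can be precomputed once and then applied voxel by voxel; the sketch below shows this for a generic undersampling operator and dictionary, with the matrix sizes and regularization weight chosen arbitrarily.

```python
import numpy as np

def dictionary_tikhonov_reconstructor(A, D, lam=0.05):
    """Precompute the linear map y -> pdf for x = D a, minimizing ||A D a - y||^2 + lam ||a||^2."""
    AD = A @ D
    coeff_map = np.linalg.solve(AD.T @ AD + lam * np.eye(D.shape[1]), AD.T)
    return D @ coeff_map                       # applies directly to undersampled q-space samples

# Toy dimensions: 515-point pdf grid, 170 q-space samples kept, 64 dictionary atoms.
rng = np.random.default_rng(0)
A = rng.normal(size=(170, 515))                # stand-in undersampled q-space encoding
D = rng.normal(size=(515, 64))                 # stand-in trained (or raw training) dictionary
R = dictionary_tikhonov_reconstructor(A, D)

y = rng.normal(size=170)                       # one voxel's undersampled measurements
pdf_estimate = R @ y                           # analytical reconstruction, no iterations
print(pdf_estimate.shape)                      # (515,)
```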

  2. Content-based Image Hiding Method for Secure Network Biometric Verification

    Directory of Open Access Journals (Sweden)

    Xiangjiu Che

    2011-08-01

    Full Text Available For secure biometric verification, most existing methods embed biometric information directly into the cover image, but content correlation analysis between the biometric image and the cover image is often ignored. In this paper, we propose a novel biometric image hiding approach based on the content correlation analysis to protect the network-based transmitted image. By using principal component analysis (PCA, the content correlation between the biometric image and the cover image is firstly analyzed. Then based on particle swarm optimization (PSO algorithm, some regions of the cover image are selected to represent the biometric image, in which the cover image can carry partial content of the biometric image. As a result of the correlation analysis, the unrepresented part of the biometric image is embedded into the cover image by using the discrete wavelet transform (DWT. Combined with human visual system (HVS model, this approach makes the hiding result perceptually invisible. The extensive experimental results demonstrate that the proposed hiding approach is robust against some common frequency and geometric attacks; it also provides an effective protection for the secure biometric verification.

  3. Model-based restoration using light vein for range-gated imaging systems.

    Science.gov (United States)

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Wang, Rui; Guo, Jin; Tian, Yuzhen

    2016-09-10

    The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, platform vibrations, and so on. The characteristics of low illumination, few details, and high noise make state-of-the-art restoration methods fail. In this paper, we present a restoration method designed specifically for range-gated imaging systems. The degradation process is divided into two parts: a static part and a dynamic part. For the static part, we establish a physical model of the imaging system according to laser transmission theory and estimate the static point spread function (PSF). For the dynamic part, a so-called light-vein feature extraction method is presented to estimate the blur parameters of the atmospheric disturbance and platform movement, which contribute to the dynamic PSF. Finally, combining the static and dynamic PSFs, an iterative updating framework is used to restore the image. Compared with state-of-the-art methods, the proposed method can effectively suppress ringing artifacts and achieves better performance for range-gated imaging systems.

  4. Point spread function modeling and image restoration for cone-beam CT

    International Nuclear Information System (INIS)

    Zhang Hua; Shi Yikai; Huang Kuidong; Xu Zhe

    2015-01-01

    X-ray cone-beam computed tomography (CT) has such notable features as high efficiency and precision, and is widely used in the fields of medical imaging and industrial non-destructive testing, but the inherent imaging degradation reduces the quality of CT images. To address the problems of projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is first proposed. The general PSF model of cone-beam CT is established, and based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without additional measurements, which greatly improves the practical convenience of cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which makes the edge contours in projection images and slice images clearer after restoration and keeps the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods. (authors)
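
    As a generic illustration of restoring a projection image once a PSF has been modeled, the sketch below runs a plain Richardson-Lucy deconvolution with an assumed Gaussian PSF on a synthetic phantom; this is not the pre-filtering and pre-segmentation algorithm of the paper, and the PSF width and iteration count are arbitrary.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-7):
    """Iterative deblurring of a projection image given a modeled PSF."""
    estimate = np.full_like(blurred, blurred.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy demonstration: blur a synthetic edge phantom with the assumed PSF, then restore it.
phantom = np.zeros((128, 128)); phantom[32:96, 32:96] = 1.0
psf = gaussian_psf()
blurred = fftconvolve(phantom, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print("edge sharpness (max gradient) blurred vs restored:",
      np.abs(np.diff(blurred[64])).max(), np.abs(np.diff(restored[64])).max())
```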

  5. Model-based cartilage thickness measurement in the submillimeter range

    International Nuclear Information System (INIS)

    Streekstra, G. J.; Strackee, S. D.; Maas, M.; Wee, R. ter; Venema, H. W.

    2007-01-01

    Current methods of image-based thickness measurement in thin sheet structures utilize second derivative zero crossings to locate the layer boundaries. It is generally acknowledged that the nonzero width of the point spread function (PSF) limits the accuracy of this measurement procedure. We propose a model-based method that strongly reduces PSF-induced bias by incorporating the PSF into the thickness estimation method. We estimated the bias in thickness measurements in simulated thin sheet images as obtained from second derivative zero crossings. To gain insight into the range of sheet thickness where our method is expected to yield improved results, sheet thickness was varied between 0.15 and 1.2 mm with an assumed PSF as present in the high-resolution modes of current computed tomography (CT) scanners [full width at half maximum (FWHM) 0.5-0.8 mm]. Our model-based method was evaluated in practice by measuring layer thickness from CT images of a phantom mimicking two parallel cartilage layers in an arthrography procedure. CT arthrography images of cadaver wrists were also evaluated, and thickness estimates were compared to those obtained from high-resolution anatomical sections that served as a reference. The thickness estimates from the simulated images reveal that the method based on second derivative zero crossings shows considerable bias for layers in the submillimeter range. This bias is negligible for sheet thickness larger than 1 mm, where the size of the sheet is more than twice the FWHM of the PSF but can be as large as 0.2 mm for a 0.5 mm sheet. The results of the phantom experiments show that the bias is effectively reduced by our method. The deviations from the true thickness, due to random fluctuations induced by quantum noise in the CT images, are of the order of 3% for a standard wrist imaging protocol. In the wrist the submillimeter thickness estimates from the CT arthrography images correspond within 10% to those estimated from the anatomical

  6. Wind Statistics Offshore based on Satellite Images

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Mouche, Alexis; Badger, Merete

    2009-01-01

    -based observations become available. At present preliminary results are obtained using the routine methods. The first step in the process is to retrieve raw SAR data, calibrate the images and use a priori wind direction as input to the geophysical model function. From this process the wind speed maps are produced.... The wind maps are geo-referenced. The second process is the analysis of a series of geo-referenced SAR-based wind maps. Previous research has shown that a relatively large number of images are needed for achieving certain accuracies on mean wind speed, Weibull A and k (scale and shape parameters......Ocean wind maps from satellites are routinely processed both at Risø DTU and CLS based on the European Space Agency Envisat ASAR data. At Risø the a priori wind direction is taken from the atmospheric model NOGAPS (Navy Operational Global Atmospheric Prediction System) provided by the U.S. Navy...

  7. Cardiac CT for planning redo cardiac surgery: effect of knowledge-based iterative model reconstruction on image quality

    International Nuclear Information System (INIS)

    Oda, Seitaro; Weissman, Gaby; Weigold, W. Guy; Vembar, Mani

    2015-01-01

    The purpose of this study was to investigate the effects of knowledge-based iterative model reconstruction (IMR) on image quality in cardiac CT performed for the planning of redo cardiac surgery by comparing IMR images with images reconstructed with filtered back-projection (FBP) and hybrid iterative reconstruction (HIR). We studied 31 patients (23 men, 8 women; mean age 65.1 ± 16.5 years) referred for redo cardiac surgery who underwent cardiac CT. Paired image sets were created using three types of reconstruction: FBP, HIR, and IMR. Quantitative parameters including CT attenuation, image noise, and contrast-to-noise ratio (CNR) of each cardiovascular structure were calculated. The visual image quality - graininess, streak artefact, margin sharpness of each cardiovascular structure, and overall image quality - was scored on a five-point scale. The mean image noise of FBP, HIR, and IMR images was 58.3 ± 26.7, 36.0 ± 12.5, and 14.2 ± 5.5 HU, respectively; there were significant differences in all comparison combinations among the three methods. The CNR of IMR images was better than that of FBP and HIR images in all evaluated structures. The visual scores were significantly higher for IMR than for the other images in all evaluated parameters. IMR can provide significantly improved qualitative and quantitative image quality in cardiac CT for planning of reoperative cardiac surgery. (orig.)

  8. Biofilm growth program and architecture revealed by single-cell live imaging

    Science.gov (United States)

    Yan, Jing; Sabass, Benedikt; Stone, Howard; Wingreen, Ned; Bassler, Bonnie

    Biofilms are surface-associated bacterial communities. Little is known about biofilm structure at the level of individual cells. We image living, growing Vibrio cholerae biofilms from founder cells to ten thousand cells at single-cell resolution, and discover the forces underpinning the architectural evolution of the biofilm. Mutagenesis, matrix labeling, and simulations demonstrate that surface-adhesion-mediated compression causes V. cholerae biofilms to transition from a two-dimensional branched morphology to a dense, ordered three-dimensional cluster. We discover that directional proliferation of rod-shaped bacteria plays a dominant role in shaping the biofilm architecture, and this growth pattern is controlled by a single gene. Competition analyses reveal the advantages of the dense growth mode in providing the biofilm with superior mechanical properties. We will further present continuum theory to model the three-dimensional growth of biofilms at the solid-liquid interface as well as solid-air interface.

  9. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of different numbers of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed, and the overall result of the analysis is summarized for the prototype imaging platform.
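
    The silhouette-based reconstruction can be summarized as carving a voxel grid with the silhouettes seen from each camera angle; the sketch below does this for a turntable setup under an orthographic-projection assumption, which is a simplification of the calibrated perspective setup used in the study.

```python
import numpy as np

def carve(silhouettes, angles_deg, grid=64, radius=1.0):
    """Keep voxels whose projection falls inside every silhouette (orthographic turntable model)."""
    lin = np.linspace(-radius, radius, grid)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
    occupied = np.ones((grid, grid, grid), dtype=bool)
    h, w = silhouettes[0].shape
    for sil, ang in zip(silhouettes, np.radians(angles_deg)):
        # Rotate the grid into the camera frame, then drop the depth axis (orthographic).
        u = X * np.cos(ang) + Y * np.sin(ang)          # horizontal image coordinate
        v = Z                                          # vertical image coordinate
        col = np.clip(((u + radius) / (2 * radius) * (w - 1)).astype(int), 0, w - 1)
        row = np.clip(((radius - v) / (2 * radius) * (h - 1)).astype(int), 0, h - 1)
        occupied &= sil[row, col] > 0
    return occupied

# Toy usage: disk silhouettes seen from every angle carve out an approximately spherical hull.
sil = np.zeros((64, 64), dtype=np.uint8)
yy, xx = np.ogrid[:64, :64]
sil[(yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2] = 1
hull = carve([sil] * 8, angles_deg=np.linspace(0, 180, 8, endpoint=False))
print("occupied voxels:", hull.sum())
```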

  10. A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation

    Directory of Open Access Journals (Sweden)

    Liming Tang

    2014-01-01

    Full Text Available The fuzzy C-means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and sufficient robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which obtains smooth cluster boundaries and closed cluster regions thanks to the level set scheme; however, it is very sensitive to noise since it is essentially a hard C-means clustering model. In this paper, based on Samson's work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model obtains smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. In addition, a block-based energy is incorporated into the energy functional, which makes the proposed model more robust to noise than FCMS clustering and Samson's model. Experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model performs better on images contaminated by different noise levels.
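
    For orientation, a widely used form of the spatially constrained fuzzy C-means objective (not necessarily the exact functional of FCMS or of the proposed level set model) is:

      J_m(U, V) = \sum_{i=1}^{C} \sum_{k=1}^{N} u_{ik}^{m} \, \lVert x_k - v_i \rVert^2
                + \frac{\alpha}{N_R} \sum_{i=1}^{C} \sum_{k=1}^{N} u_{ik}^{m} \sum_{r \in \mathcal{N}_k} \lVert x_r - v_i \rVert^2,
      \qquad \text{subject to } \sum_{i=1}^{C} u_{ik} = 1,

    where \mathcal{N}_k is the neighbourhood of pixel k, N_R its size, and \alpha weights the spatial regularisation of the memberships u_{ik} with respect to the cluster centres v_i.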

  11. High-speed image analysis reveals chaotic vibratory behaviors of pathological vocal folds

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yu, E-mail: yuzhang@xmu.edu.c [Key Laboratory of Underwater Acoustic Communication and Marine Information Technology of the Ministry of Education, Xiamen University, Xiamen Fujian 361005 (China); Shao Jun [Shanghai EENT Hospital of Fudan University, Shanghai (China); Krausert, Christopher R. [Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI 53792-7375 (United States); Zhang Sai [Key Laboratory of Underwater Acoustic Communication and Marine Information Technology of the Ministry of Education, Xiamen University, Xiamen Fujian 361005 (China); Jiang, Jack J. [Shanghai EENT Hospital of Fudan University, Shanghai (China); Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI 53792-7375 (United States)

    2011-01-15

    Research highlights: Low-dimensional human glottal area data. Evidence of chaos in human laryngeal activity from high-speed digital imaging. Traditional perturbation analysis should be cautiously applied to aperiodic high speed image signals. Nonlinear dynamic analysis may be helpful for understanding disordered behaviors in pathological laryngeal systems. - Abstract: Laryngeal pathology is usually associated with irregular dynamics of laryngeal activity. High-speed imaging facilitates direct observation and measurement of vocal fold vibrations. However, chaotic dynamic characteristics of aperiodic high-speed image data have not yet been investigated in previous studies. In this paper, we will apply nonlinear dynamic analysis and traditional perturbation methods to quantify high-speed image data from normal subjects and patients with various laryngeal pathologies including vocal fold nodules, polyps, bleeding, and polypoid degeneration. The results reveal the low-dimensional dynamic characteristics of human glottal area data. In comparison to periodic glottal area series from a normal subject, aperiodic glottal area series from pathological subjects show complex reconstructed phase space, fractal dimension, and positive Lyapunov exponents. The estimated positive Lyapunov exponents provide the direct evidence of chaos in pathological human vocal folds from high-speed digital imaging. Furthermore, significant differences between the normal and pathological groups are investigated for nonlinear dynamic and perturbation analyses. Jitter in the pathological group is significantly higher than in the normal group, but shimmer does not show such a difference. This finding suggests that the traditional perturbation analysis should be cautiously applied to high speed image signals. However, the correlation dimension and the maximal Lyapunov exponent reveal a statistically significant difference between normal and pathological groups. Nonlinear dynamic analysis is capable of
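
    The nonlinear measures mentioned above (largest Lyapunov exponent, correlation dimension) can be estimated from a scalar glottal area series with standard algorithms; below is a generic Rosenstein-style sketch, where the embedding dimension, delay, Theiler window and sampling rate are illustrative assumptions rather than the authors' settings:

      import numpy as np

      def largest_lyapunov_exponent(x, dim=3, tau=1, fs=4000.0, t_fit=20, theiler=10):
          """Rosenstein-style estimate of the largest Lyapunov exponent of a
          scalar time series x (e.g. a glottal area waveform sampled at fs Hz).
          Generic sketch only; parameters are illustrative."""
          x = np.asarray(x, dtype=float)
          n = len(x) - (dim - 1) * tau
          emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
          # pairwise distances, excluding temporally close points (Theiler window)
          d2 = np.sum((emb[:, None, :] - emb[None, :, :]) ** 2, axis=-1)
          for i in range(n):
              d2[i, max(0, i - theiler):i + theiler + 1] = np.inf
          nn = np.argmin(d2, axis=1)
          # mean log divergence between each point and its nearest neighbour over time
          divergence = []
          for k in range(1, t_fit):
              idx = np.arange(n - k)
              valid = nn[idx] + k < n
              d = np.linalg.norm(emb[idx[valid] + k] - emb[nn[idx[valid]] + k], axis=1)
              divergence.append(np.mean(np.log(d[d > 0])))
          slope = np.polyfit(np.arange(1, t_fit), divergence, 1)[0]  # nats per sample
          return slope * fs                                          # convert to 1/s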

  12. High-speed image analysis reveals chaotic vibratory behaviors of pathological vocal folds

    International Nuclear Information System (INIS)

    Zhang Yu; Shao Jun; Krausert, Christopher R.; Zhang Sai; Jiang, Jack J.

    2011-01-01

    Research highlights: → Low-dimensional human glottal area data. → Evidence of chaos in human laryngeal activity from high-speed digital imaging. → Traditional perturbation analysis should be cautiously applied to aperiodic high speed image signals. → Nonlinear dynamic analysis may be helpful for understanding disordered behaviors in pathological laryngeal systems. - Abstract: Laryngeal pathology is usually associated with irregular dynamics of laryngeal activity. High-speed imaging facilitates direct observation and measurement of vocal fold vibrations. However, chaotic dynamic characteristics of aperiodic high-speed image data have not yet been investigated in previous studies. In this paper, we will apply nonlinear dynamic analysis and traditional perturbation methods to quantify high-speed image data from normal subjects and patients with various laryngeal pathologies including vocal fold nodules, polyps, bleeding, and polypoid degeneration. The results reveal the low-dimensional dynamic characteristics of human glottal area data. In comparison to periodic glottal area series from a normal subject, aperiodic glottal area series from pathological subjects show complex reconstructed phase space, fractal dimension, and positive Lyapunov exponents. The estimated positive Lyapunov exponents provide the direct evidence of chaos in pathological human vocal folds from high-speed digital imaging. Furthermore, significant differences between the normal and pathological groups are investigated for nonlinear dynamic and perturbation analyses. Jitter in the pathological group is significantly higher than in the normal group, but shimmer does not show such a difference. This finding suggests that the traditional perturbation analysis should be cautiously applied to high speed image signals. However, the correlation dimension and the maximal Lyapunov exponent reveal a statistically significant difference between normal and pathological groups. Nonlinear dynamic

  13. A distribution-based parametrization for improved tomographic imaging of solute plumes

    Science.gov (United States)

    Pidlisecky, Adam; Singha, K.; Day-Lewis, F. D.

    2011-01-01

    Difference geophysical tomography (e.g. radar, resistivity and seismic) is used increasingly for imaging fluid flow and mass transport associated with natural and engineered hydrologic phenomena, including tracer experiments, in situ remediation and aquifer storage and recovery. Tomographic data are collected over time, inverted and differenced against a background image to produce 'snapshots' revealing changes to the system; these snapshots readily provide qualitative information on the location and morphology of plumes of injected tracer, remedial amendment or stored water. In principle, geometric moments (i.e. total mass, centres of mass, spread, etc.) calculated from difference tomograms can provide further quantitative insight into the rates of advection, dispersion and mass transfer; however, recent work has shown that moments calculated from tomograms are commonly biased, as they are strongly affected by the subjective choice of regularization criteria. Conventional approaches to regularization (Tikhonov) and parametrization (image pixels) result in tomograms which are subject to artefacts such as smearing or pixel estimates taking on the sign opposite to that expected for the plume under study. Here, we demonstrate a novel parametrization for imaging plumes associated with hydrologic phenomena. Capitalizing on the mathematical analogy between moment-based descriptors of plumes and the moment-based parameters of probability distributions, we design an inverse problem that (1) is overdetermined and computationally efficient because the image is described by only a few parameters, (2) produces tomograms consistent with expected plume behaviour (e.g. changes of one sign relative to the background image), (3) yields parameter estimates that are readily interpreted for plume morphology and offer direct insight into hydrologic processes and (4) requires comparatively few data to achieve reasonable model estimates. We demonstrate the approach in a series of
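
    For reference, the geometric moments referred to above are conventionally defined from the imaged change field c(x) (e.g. a concentration or conductivity change) as:

      M_0 = \int_{\Omega} c(\mathbf{x}) \, d\mathbf{x}, \qquad
      \mathbf{x}_c = \frac{1}{M_0} \int_{\Omega} \mathbf{x} \, c(\mathbf{x}) \, d\mathbf{x}, \qquad
      \sigma_{j}^{2} = \frac{1}{M_0} \int_{\Omega} \left( x_j - x_{c,j} \right)^{2} c(\mathbf{x}) \, d\mathbf{x},

    giving the total mass, centre of mass and directional spread that are used to characterise plume morphology.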

  14. a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    Science.gov (United States)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite images such as Landsat 8 OLI. However, as more and more image data become available from various sensors, the accurate sensor calibration parameters and atmospheric conditions needed by physics-based topographic correction models are sometimes not obtainable. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained from DN data; the model was tested and verified with image data from the Chinese HJ and GF satellites. The results show that the correlation factor was reduced by almost 85 % for near-infrared bands and the overall classification accuracy increased by 14 % after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was reduced after correction.
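
    The paper's own semi-empirical model is not reproduced here, but a commonly used semi-empirical scheme of the same family (the C-correction) illustrates the idea; the sketch below assumes per-pixel reflectance, the cosine of the local solar incidence angle derived from a DEM, and the solar zenith angle as inputs:

      import numpy as np

      def c_correction(reflectance, cos_i, cos_sza):
          """Semi-empirical (C-correction style) topographic normalisation sketch.

          reflectance: observed band values on slopes
          cos_i:       cosine of the local solar incidence angle (DEM + sun geometry)
          cos_sza:     cosine of the solar zenith angle (flat-terrain reference)
          """
          # empirical slope/intercept of reflectance vs cos(i)
          a, b = np.polyfit(cos_i, reflectance, 1)
          c = b / a
          return reflectance * (cos_sza + c) / (cos_i + c)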

  15. Mosaicing of single plane illumination microscopy images using groupwise registration and fast content-based image fusion

    Science.gov (United States)

    Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel

    2008-03-01

    Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four- and eight-angle acquisitions.
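
    The content-based fusion idea described above, down-weighting blurred regions using Gaussian-filtered local contrast, can be sketched as follows; the specific weighting function is an assumption for illustration, not the authors' exact algorithm:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def fuse_views(views, sigma=2.0, eps=1e-6):
          """Content-based fusion sketch: each co-registered view is weighted by
          a local-contrast measure obtained with Gaussian filtering, so blurred
          regions contribute less to the mosaic. 'views' is a list of arrays of
          identical shape."""
          weights = []
          for img in views:
              smooth = gaussian_filter(img, sigma)
              local_energy = gaussian_filter((img - smooth) ** 2, sigma)  # high where detail is sharp
              weights.append(local_energy + eps)
          weights = np.stack(weights)
          return np.sum(weights * np.stack(views), axis=0) / np.sum(weights, axis=0)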

  16. Integrated Experimental and Model-based Analysis Reveals the Spatial Aspects of EGFR Activation Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Shankaran, Harish; Zhang, Yi; Chrisler, William B.; Ewald, Jonathan A.; Wiley, H. S.; Resat, Haluk

    2012-10-02

    The epidermal growth factor receptor (EGFR) belongs to the ErbB family of receptor tyrosine kinases, and controls a diverse set of cellular responses relevant to development and tumorigenesis. ErbB activation is a complex process involving receptor-ligand binding, receptor dimerization, phosphorylation, and trafficking (internalization, recycling and degradation), which together dictate the spatio-temporal distribution of active receptors within the cell. The ability to predict this distribution, and elucidation of the factors regulating it, would help to establish a mechanistic link between ErbB expression levels and the cellular response. Towards this end, we constructed mathematical models for deconvolving the contributions of receptor dimerization and phosphorylation to EGFR activation, and to examine the dependence of these processes on sub-cellular location. We collected experimental datasets for EGFR activation dynamics in human mammary epithelial cells, with the specific goal of model parameterization, and used the data to estimate parameters for several alternate models. Model-based analysis indicated that: 1) signal termination via receptor dephosphorylation in late endosomes, prior to degradation, is an important component of the response, 2) less than 40% of the receptors in the cell are phosphorylated at any given time, even at saturating ligand doses, and 3) receptor dephosphorylation rates at the cell surface and early endosomes are comparable. We validated the last finding by measuring EGFR dephosphorylation rates at various times following ligand addition both in whole cells, and in endosomes using ELISAs and fluorescent imaging. Overall, our results provide important information on how EGFR phosphorylation levels are regulated within cells. Further, the mathematical model described here can be extended to determine receptor dimer abundances in cells co-expressing various levels of ErbB receptors. This study demonstrates that an iterative cycle of
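
    As a schematic of the kind of compartmental description discussed above, a toy two-compartment model of phosphorylated receptor at the cell surface and in endosomes can be written and integrated as below; the structure and rate constants are illustrative placeholders, not the authors' calibrated model:

      import numpy as np
      from scipy.integrate import solve_ivp

      def egfr_toy_model(t, y, k_phos, k_dephos_surf, k_dephos_endo, k_int, k_deg):
          """Toy compartmental sketch (not the study's parameterized model):
          y = [phosphorylated receptor at the surface, in endosomes]."""
          r_surf, r_endo = y
          d_surf = k_phos - k_dephos_surf * r_surf - k_int * r_surf
          d_endo = k_int * r_surf - k_dephos_endo * r_endo - k_deg * r_endo
          return [d_surf, d_endo]

      # integrate over 60 minutes with arbitrary rate constants
      sol = solve_ivp(egfr_toy_model, (0, 60), [0.0, 0.0],
                      args=(1.0, 0.1, 0.1, 0.05, 0.02), dense_output=True)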

  17. An Image-Based Finite Element Approach for Simulating Viscoelastic Response of Asphalt Mixture

    Directory of Open Access Journals (Sweden)

    Wenke Huang

    2016-01-01

    Full Text Available This paper presents an image-based micromechanical modeling approach to predict the viscoelastic behavior of asphalt mixture. An improved image analysis technique based on the OTSU thresholding operation was employed to reduce the beam hardening effect in X-ray CT images. We developed a voxel-based 3D digital reconstruction model of asphalt mixture with the CT images after being processed. In this 3D model, the aggregate phase and air void were considered as elastic materials while the asphalt mastic phase was considered as linear viscoelastic material. The viscoelastic constitutive model of asphalt mastic was implemented in a finite element code using the ABAQUS user material subroutine (UMAT. An experimental procedure for determining the parameters of the viscoelastic constitutive model at a given temperature was proposed. To examine the capability of the model and the accuracy of the parameter, comparisons between the numerical predictions and the observed laboratory results of bending and compression tests were conducted. Finally, the verified digital sample of asphalt mixture was used to predict the asphalt mixture viscoelastic behavior under dynamic loading and creep-recovery loading. Simulation results showed that the presented image-based digital sample may be appropriate for predicting the mechanical behavior of asphalt mixture when all the mechanical properties for different phases became available.
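
    The OTSU thresholding step mentioned above maximises the between-class variance of the grey-level histogram; a plain implementation of the classic criterion (the paper's improved variant for beam-hardening is not reproduced) looks like this:

      import numpy as np

      def otsu_threshold(image, nbins=256):
          """Classic Otsu threshold: pick the grey level that maximises the
          between-class variance of the histogram."""
          hist, edges = np.histogram(image.ravel(), bins=nbins)
          p = hist.astype(float) / hist.sum()
          centers = 0.5 * (edges[:-1] + edges[1:])
          w0 = np.cumsum(p)                                   # class probabilities
          w1 = 1.0 - w0
          mu0 = np.cumsum(p * centers) / np.clip(w0, 1e-12, None)
          mu_total = np.sum(p * centers)
          mu1 = (mu_total - np.cumsum(p * centers)) / np.clip(w1, 1e-12, None)
          between = w0 * w1 * (mu0 - mu1) ** 2                # between-class variance
          return centers[np.argmax(between[:-1])]             # ignore the degenerate last bin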

  18. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    Science.gov (United States)

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photo absorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom ¹⁷⁷Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery corrections (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations. Therefore, the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was
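
    For context, the OSEM update that such a Monte Carlo projector plugs into has the standard multiplicative form; a minimal dense-matrix sketch is given below (the real implementation replaces the explicit system matrix with GPU Monte Carlo forward and back projections):

      import numpy as np

      def osem_update(x, A, y, subsets):
          """One OSEM iteration sketch: A is the (measurements x voxels) system
          matrix, y the measured projections, 'subsets' a list of index arrays
          selecting blocks of projection rows."""
          for s in subsets:
              A_s, y_s = A[s], y[s]
              forward = A_s @ x                                   # forward projection
              ratio = y_s / np.clip(forward, 1e-12, None)         # measured / estimated
              sens = A_s.T @ np.ones_like(y_s)                    # subset sensitivity image
              x = x * (A_s.T @ ratio) / np.clip(sens, 1e-12, None)
          return x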

  19. Association of Trans-theoretical Model (TTM based Exercise Behavior Change with Body Image Evaluation among Female Iranian Students

    Directory of Open Access Journals (Sweden)

    Sahar Rostami

    2017-03-01

    Full Text Available Background: Body image is a determinant of individual attractiveness and physical activity among young people. This study aimed to assess the association of Trans-theoretical model based exercise behavior change with body image evaluation among female Iranian students. Materials and Methods: This cross-sectional study was conducted in Sanandaj city, Iran in 2016. Using a multistage sampling method, a total of 816 high school female students were included in the study. They completed a three-section questionnaire, including demographic information, Trans-theoretical model constructs and body image evaluation. The obtained data were fed into SPSS version 21.0. Results: The results showed that more than 60% of participants were in the pre-contemplation and contemplation stages of exercise behavior. The means of perceived self-efficacy, barriers and benefits were found to have a statistically significant difference across the stages of exercise behavior change (P

  20. NETWORK DESIGN IN CLOSE-RANGE PHOTOGRAMMETRY WITH SHORT BASELINE IMAGES

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2017-08-01

    Full Text Available The availability of automated software for image-based 3D modelling has changed the way people acquire images for photogrammetric applications. Short baseline images are required to match image points with SIFT-like algorithms, obtaining more images than those necessary for “old fashioned” photogrammetric projects based on manual measurements. This paper describes some considerations on network design for short baseline image sequences, especially on the precision and reliability of bundle adjustment. Simulated results reveal that the large number of 3D points used for image orientation has very limited impact on network precision.

  1. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    Science.gov (United States)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners and does not accept impostors carrying authorized cards. Therefore face recognition attracts more interest in security markets than IC cards. However, in security markets where low-cost ACSs exist, price competition is important, and the quality of available cameras and image control is limited. ACSs using face recognition are therefore required to handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image quality problems we developed a face recognition algorithm based on a probabilistic model which combines a variety of image-difference features trained by Real AdaBoost with their prior probability distributions. It evaluates and utilizes only the reliable features among the trained ones during each authentication, and achieves high recognition performance rates. A field evaluation using a pseudo Access Control System installed in our office shows that the proposed system achieves a consistently high recognition performance rate independent of face image quality, that is, an EER (Equal Error Rate) about four times lower under a variety of image conditions than that of a system without prior probability distributions. In contrast, using image-difference features without prior probabilities is sensitive to image quality. We also evaluated PCA, which has worse but constant performance rates because of its general optimization over all data. Compared with PCA, Real AdaBoost without any prior distribution performs twice as well under good image conditions, but degrades to PCA-level performance under poor image conditions.

  2. Image annotation based on positive-negative instances learning

    Science.gov (United States)

    Zhang, Kai; Hu, Jiwei; Liu, Quan; Lou, Ping

    2017-07-01

    Automatic image annotation is a difficult task in computer vision; its main purpose is to manage the massive number of images on the Internet and assist intelligent retrieval. This paper designs a new image annotation model based on a visual bag of words, using low-level features such as color and texture information as well as mid-level features such as SIFT, and combining the pic2pic, label2pic and label2label correlations to measure the degree of correlation between labels and images. We aim to prune the specific features for each single label and formalize the annotation task as a learning process based on Positive-Negative Instances Learning. Experiments are performed on the Corel5K dataset and provide quite promising results compared with other existing methods.

  3. 3-D model-based vehicle tracking.

    Science.gov (United States)

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.

  4. Airflow in Tracheobronchial Tree of Subjects with Tracheal Bronchus Simulated Using CT Image Based Models and CFD Method.

    Science.gov (United States)

    Qi, Shouliang; Zhang, Baihua; Yue, Yong; Shen, Jing; Teng, Yueyang; Qian, Wei; Wu, Jianlin

    2018-03-01

    Tracheal Bronchus (TB) is a rare congenital anomaly characterized by the presence of an abnormal bronchus originating from the trachea or main bronchi and directed toward the upper lobe. The airflow pattern in tracheobronchial trees of TB subjects is critical, but has not been systematically studied. This study proposes to simulate the airflow using CT image-based models and the computational fluid dynamics (CFD) method. Six TB subjects and three healthy controls (HC) are included. After the geometric model of the tracheobronchial tree is extracted from CT images, the spatial distribution of velocity, wall pressure, and wall shear stress (WSS) is obtained through CFD simulation, and the lobar distribution of air, flow pattern and global pressure drop are investigated. Compared with HC subjects, the main bronchus angle of TB subjects and the variation of volume are large, while the cross-sectional growth rate is small. High airflow velocity, wall pressure, and WSS are observed locally at the tracheal bronchus, but the global patterns of these measures are still similar to those of HC. The ratio of airflow into the tracheal bronchus accounts for 6.6-15.6% of the inhaled airflow, decreasing the ratio to the right upper lobe from 15.7-21.4% (HC) to 4.9-13.6%. The air entering the tracheal bronchus originates from the right dorsal near-wall region of the trachea. Tracheal bronchus does not change the global pressure drop, which depends on multiple variables. Though the tracheobronchial trees of TB subjects present individualized features, several commonalities in structural and airflow characteristics can be revealed. The observed local alterations might provide new insight into the reasons for recurrent local infections, cough and acute respiratory distress related to TB.

  5. Color Image Evaluation for Small Space Based on FA and GEP

    Directory of Open Access Journals (Sweden)

    Li Deng

    2014-01-01

    Full Text Available Aiming at the problem that color image is difficult to quantify, this paper proposes an evaluation method of color image for small spaces based on factor analysis (FA) and gene expression programming (GEP), and constructs a correlation model between color image factors and the comprehensive color image. The basic color samples of small spaces and color images are evaluated by the semantic differential (SD) method, color image factors are selected via dimension reduction in FA, a factor score function is established, the weight of each factor is determined with the entropy weight method, and finally the comprehensive color image score is calculated. The best fitting function between the color image factors and the comprehensive color image is obtained by the GEP algorithm, which can predict users’ color image values. A color image evaluation system for small spaces is developed based on this model. The color evaluation of a control room on an AC frequency conversion rig is taken as an example, verifying the effectiveness of the proposed method. The model can also assist designers in other color designs and provide a fast evaluation tool for testing users’ color image.

  6. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    Science.gov (United States)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with less sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative
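
    The paper derives the sampling locations in image space via the cross-ratio; a simpler way to obtain the same qualitative behaviour, sampling sweep planes uniformly in inverse depth so that density is high near the camera and low far away, is sketched below as an illustration (the near/far depths and plane count are arbitrary assumptions):

      import numpy as np

      def inverse_depth_planes(d_near, d_far, n_planes):
          """Sample sweep-plane depths uniformly in inverse depth, concentrating
          planes near the camera and spacing them out at distance."""
          inv = np.linspace(1.0 / d_near, 1.0 / d_far, n_planes)
          return 1.0 / inv

      # e.g. compare inverse_depth_planes(2.0, 100.0, 64) with np.linspace(2.0, 100.0, 64)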

  7. DETERMINING PLANE-SWEEP SAMPLING POINTS IN IMAGE SPACE USING THE CROSS-RATIO FOR IMAGE-BASED DEPTH ESTIMATION

    Directory of Open Access Journals (Sweden)

    B. Ruf

    2017-08-01

    Full Text Available With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with less sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that

  8. Computed Tomography Imaging of a Hip Prosthesis Using Iterative Model-Based Reconstruction and Orthopaedic Metal Artefact Reduction: A Quantitative Analysis.

    Science.gov (United States)

    Wellenberg, Ruud H H; Boomsma, Martijn F; van Osch, Jochen A C; Vlassenbroek, Alain; Milles, Julien; Edens, Mireille A; Streekstra, Geert J; Slump, Cornelis H; Maas, Mario

    To quantify the combined use of iterative model-based reconstruction (IMR) and orthopaedic metal artefact reduction (O-MAR) in reducing metal artefacts and improving image quality in a total hip arthroplasty phantom. Scans acquired at several dose levels and kVps were reconstructed with filtered back-projection (FBP), iterative reconstruction (iDose) and IMR, with and without O-MAR. Computed tomography (CT) numbers, noise levels, signal-to-noise-ratios and contrast-to-noise-ratios were analysed. Iterative model-based reconstruction results in overall improved image quality compared to iDose and FBP (P < 0.001). Orthopaedic metal artefact reduction is most effective in reducing severe metal artefacts improving CT number accuracy by 50%, 60%, and 63% (P < 0.05) and reducing noise by 1%, 62%, and 85% (P < 0.001) whereas improving signal-to-noise-ratios by 27%, 47%, and 46% (P < 0.001) and contrast-to-noise-ratios by 16%, 25%, and 19% (P < 0.001) with FBP, iDose, and IMR, respectively. The combined use of IMR and O-MAR strongly improves overall image quality and strongly reduces metal artefacts in the CT imaging of a total hip arthroplasty phantom.

  9. Illumination compensation in ground based hyperspectral imaging

    Science.gov (United States)

    Wendel, Alexander; Underwood, James

    2017-07-01

    Hyperspectral imaging has emerged as an important tool for analysing vegetation data in agricultural applications. Recently, low altitude and ground based hyperspectral imaging solutions have come to the fore, providing very high resolution data for mapping and studying large areas of crops in detail. However, these platforms introduce a unique set of challenges that need to be overcome to ensure consistent, accurate and timely acquisition of data. One particular problem is dealing with changes in environmental illumination while operating with natural light under cloud cover, which can have considerable effects on spectral shape. In the past this has been commonly achieved by imaging known reference targets at the time of data acquisition, direct measurement of irradiance, or atmospheric modelling. While capturing a reference panel continuously or very frequently allows accurate compensation for illumination changes, this is often not practical with ground based platforms, and impossible in aerial applications. This paper examines the use of an autonomous unmanned ground vehicle (UGV) to gather high resolution hyperspectral imaging data of crops under natural illumination. A process of illumination compensation is performed to extract the inherent reflectance properties of the crops, despite variable illumination. This work adapts a previously developed subspace model approach to reflectance and illumination recovery. Though tested on a ground vehicle in this paper, it is applicable to low altitude unmanned aerial hyperspectral imagery also. The method uses occasional observations of reference panel training data from within the same or other datasets, which enables a practical field protocol that minimises in-field manual labour. This paper tests the new approach, comparing it against traditional methods. Several illumination compensation protocols for high volume ground based data collection are presented based on the results. The findings in this paper are

  10. Seismic Full Waveform Modeling & Imaging in Attenuating Media

    Science.gov (United States)

    Guo, Peng

    Seismic attenuation strongly affects seismic waveforms by amplitude loss and velocity dispersion. Without proper inclusion of Q parameters, errors can be introduced for seismic full waveform modeling and imaging. Three different (Carcione's, Robertsson's, and the generalized Robertsson's) isotropic viscoelastic wave equations based on the generalized standard linear solid (GSLS) are evaluated. The second-order displacement equations are derived, and used to demonstrate that, with the same stress relaxation times, these viscoelastic formulations are equivalent. By introducing separate memory variables for P and S relaxation functions, Robertsson's formulation is generalized to allow different P and S wave stress relaxation times, which improves the physical consistency of the Qp and Qs modelled in the seismograms. The three formulations have comparable computational cost. 3D seismic finite-difference forward modeling is applied to anisotropic viscoelastic media. The viscoelastic T-matrix (a dynamic effective medium theory) relates frequency-dependent anisotropic attenuation and velocity to reservoir properties in fractured HTI media, based on the meso-scale fluid flow attenuation mechanism. The seismic signatures resulting from changing viscoelastic reservoir properties are easily visible. Analysis of 3D viscoelastic seismograms suggests that anisotropic attenuation is a potential tool for reservoir characterization. To compensate for the Q effects during reverse-time migration (RTM) in viscoacoustic and viscoelastic media, amplitudes need to be compensated during wave propagation; the propagation velocity of the Q-compensated wavefield needs to be the same as in the attenuating wavefield, to restore the phase information. Both amplitude and phase can be compensated when the velocity dispersion and the amplitude loss are decoupled. For wave equations based on the GSLS, because Q effects are coupled in the memory variables, Q-compensated wavefield propagates faster than
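
    For reference, the GSLS relaxation function and the quality factor of a single standard-linear-solid element, to which formulations of this kind appeal, are conventionally written as follows (notation follows common usage and may differ in detail from the thesis):

      \psi(t) = M_R \left[ 1 - \sum_{l=1}^{L} \left( 1 - \frac{\tau_{\varepsilon l}}{\tau_{\sigma l}} \right) e^{-t/\tau_{\sigma l}} \right] H(t),
      \qquad
      Q(\omega) = \frac{\operatorname{Re} M(\omega)}{\operatorname{Im} M(\omega)}
                = \frac{1 + \omega^{2} \tau_{\varepsilon} \tau_{\sigma}}{\omega \, (\tau_{\varepsilon} - \tau_{\sigma})}
      \quad \text{(single SLS element)},

    where M_R is the relaxed modulus, \tau_{\varepsilon l} and \tau_{\sigma l} are strain and stress relaxation times, and H(t) is the Heaviside step function.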

  11. Image sequence analysis in nuclear medicine: (1) Parametric imaging using statistical modelling

    International Nuclear Information System (INIS)

    Liehn, J.C.; Hannequin, P.; Valeyre, J.

    1989-01-01

    This is a review of parametric imaging methods in Nuclear Medicine. A Parametric Image is an image in which each pixel value is a function of the value of the same pixel of an image sequence. The Local Model Method is the fitting of each pixel's time-activity curve by a model whose parameter values form the Parametric Images. The Global Model Method is the modelling of the changes between two images. It is applied to image comparison. For both methods, the different models, the identification criterion, the optimization methods and the statistical properties of the images are discussed. The analysis of one or more Parametric Images is performed using 1D or 2D histograms. The statistically significant Parametric Images (Images of significant Variances, Amplitudes and Differences) are also proposed [fr

  12. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    Science.gov (United States)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for GPU hardware architecture is of great significance. In order to solve the problem of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is also used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain after the optimization of data access and the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data to ensure a transmission rate that works around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  13. New second order Mumford-Shah model based on Γ-convergence approximation for image processing

    Science.gov (United States)

    Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li

    2016-05-01

    In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneously image denoising and segmentation, which combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations and instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.

  14. Documenting Bronze Age Akrotiri on Thera Using Laser Scanning, Image-Based Modelling and Geophysical Prospection

    Science.gov (United States)

    Trinks, I.; Wallner, M.; Kucera, M.; Verhoeven, G.; Torrejón Valdelomar, J.; Löcker, K.; Nau, E.; Sevara, C.; Aldrian, L.; Neubauer, E.; Klein, M.

    2017-02-01

    The excavated architecture of the exceptional prehistoric site of Akrotiri on the Greek island of Thera/Santorini is endangered by gradual decay, damage due to accidents, and seismic shocks, being located on an active volcano in an earthquake-prone area. Therefore, in 2013 and 2014 a digital documentation project has been conducted with support of the National Geographic Society in order to generate a detailed digital model of Akrotiri's architecture using terrestrial laser scanning and image-based modeling. Additionally, non-invasive geophysical prospection has been tested in order to investigate its potential to explore and map yet buried archaeological remains. This article describes the project and the generated results.

  15. Wavelet-Based Bayesian Methods for Image Analysis and Automatic Target Recognition

    National Research Council Canada - National Science Library

    Nowak, Robert

    2001-01-01

    .... We have developed two new techniques. First, we developed a wavelet-based approach to image restoration and deconvolution problems using Bayesian image models and an alternating-maximization method...

  16. Comprehensive fluence model for absolute portal dose image prediction

    International Nuclear Information System (INIS)

    Chytyk, K.; McCurdy, B. M. C.

    2009-01-01

    Amorphous silicon (a-Si) electronic portal imaging devices (EPIDs) continue to be investigated as treatment verification tools, with a particular focus on intensity modulated radiation therapy (IMRT). This verification could be accomplished through a comparison of measured portal images to predicted portal dose images. A general fluence determination tailored to portal dose image prediction would be a great asset in order to model the complex modulation of IMRT. A proposed physics-based parameter fluence model was commissioned by matching predicted EPID images to corresponding measured EPID images of multileaf collimator (MLC) defined fields. The two-source fluence model was composed of a focal Gaussian and an extrafocal Gaussian-like source. Specific aspects of the MLC and secondary collimators were also modeled (e.g., jaw and MLC transmission factors, MLC rounded leaf tips, tongue and groove effect, interleaf leakage, and leaf offsets). Several unique aspects of the model were developed based on the results of detailed Monte Carlo simulations of the linear accelerator including (1) use of a non-Gaussian extrafocal fluence source function, (2) separate energy spectra used for focal and extrafocal fluence, and (3) different off-axis energy spectra softening used for focal and extrafocal fluences. The predicted energy fluence was then convolved with Monte Carlo generated, EPID-specific dose kernels to convert incident fluence to dose delivered to the EPID. Measured EPID data were obtained with an a-Si EPID for various MLC-defined fields (from 1×1 to 20×20 cm²) over a range of source-to-detector distances. These measured profiles were used to determine the fluence model parameters in a process analogous to the commissioning of a treatment planning system. The resulting model was tested on 20 clinical IMRT plans, including ten prostate and ten oropharyngeal cases. The model predicted the open-field profiles within 2%, 2 mm, while a mean of 96.6% of pixels over all

  17. Supervised variational model with statistical inference and its application in medical image segmentation.

    Science.gov (United States)

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  18. A Model-Based Approach to Recovering the Structure of a Plant from Images

    KAUST Repository

    Ward, Ben

    2015-03-19

    We present a method for recovering the structure of a plant directly from a small set of widely-spaced images for automated analysis of phenotype. Structure recovery is more complex than shape estimation, but the resulting structure estimate is more closely related to phenotype than is a 3D geometric model. The method we propose is applicable to a wide variety of plants, but is demonstrated on wheat. Wheat is composed of thin elements with few identifiable features, making it difficult to analyse using standard feature matching techniques. Our method instead analyses the structure of plants using only their silhouettes. We employ a generate-and-test method, using a database of manually modelled leaves and a model for their composition to synthesise plausible plant structures which are evaluated against the images. The method is capable of efficiently recovering accurate estimates of plant structure in a wide variety of imaging scenarios, without manual intervention.

  19. A Model-Based Approach to Recovering the Structure of a Plant from Images

    KAUST Repository

    Ward, Ben; Bastian, John; van den Hengel, Anton; Pooley, Daniel; Bari, Rajendra; Berger, Bettina; Tester, Mark A.

    2015-01-01

    We present a method for recovering the structure of a plant directly from a small set of widely-spaced images for automated analysis of phenotype. Structure recovery is more complex than shape estimation, but the resulting structure estimate is more closely related to phenotype than is a 3D geometric model. The method we propose is applicable to a wide variety of plants, but is demonstrated on wheat. Wheat is composed of thin elements with few identifiable features, making it difficult to analyse using standard feature matching techniques. Our method instead analyses the structure of plants using only their silhouettes. We employ a generate-and-test method, using a database of manually modelled leaves and a model for their composition to synthesise plausible plant structures which are evaluated against the images. The method is capable of efficiently recovering accurate estimates of plant structure in a wide variety of imaging scenarios, without manual intervention.

  20. Non-model-based correction of respiratory motion using beat-to-beat 3D spiral fat-selective imaging.

    Science.gov (United States)

    Keegan, Jennifer; Gatehouse, Peter D; Yang, Guang-Zhong; Firmin, David N

    2007-09-01

    To demonstrate the feasibility of retrospective beat-to-beat correction of respiratory motion, without the need for a respiratory motion model. A high-resolution three-dimensional (3D) spiral black-blood scan of the right coronary artery (RCA) of six healthy volunteers was acquired over 160 cardiac cycles without respiratory gating. One spiral interleaf was acquired per cardiac cycle, prior to each of which a complete low-resolution fat-selective 3D spiral dataset was acquired. The respiratory motion (3D translation) on each cardiac cycle was determined by cross-correlating a region of interest (ROI) in the fat around the artery in the low-resolution datasets with that on a reference end-expiratory dataset. The measured translations were used to correct the raw data of the high-resolution spiral interleaves. Beat-to-beat correction provided consistently good results, with the image quality being better than that obtained with a fixed superior-inferior tracking factor of 0.6 and better than (N = 5) or equal to (N = 1) that achieved using a subject-specific retrospective 3D translation motion model. Non-model-based correction of respiratory motion using 3D spiral fat-selective imaging is feasible, and in this small group of volunteers produced better-quality images than a subject-specific retrospective 3D translation motion model. (c) 2007 Wiley-Liss, Inc.

  1. [Study on modeling method of total viable count of fresh pork meat based on hyperspectral imaging system].

    Science.gov (United States)

    Wang, Wei; Peng, Yan-Kun; Zhang, Xiao-Li

    2010-02-01

    Once the total viable count (TVC) of bacteria in fresh pork meat exceeds a certain level, the bacteria become pathogenic. The present paper explores the feasibility of hyperspectral imaging technology combined with an appropriate modeling method for predicting the TVC in fresh pork meat. For problems that have pronounced nonlinear characteristics and few samples, together with the large amount of data used to express the spectral and spatial information, it is crucial to choose a suitable modeling method in order to achieve a good prediction result. Based on a comparison of partial least-squares regression (PLSR), artificial neural networks (ANNs) and least-squares support vector machines (LS-SVM), the authors found that the PLSR method could not handle the nonlinear regression problem and the ANNs method could not achieve satisfactory prediction results with few samples, whereas prediction models based on LS-SVM can balance a small training error with favorable generalization ability. Therefore LS-SVM was adopted as the modeling method to predict the TVC of pork meat. The TVC prediction model was then constructed using all 512 wavelengths acquired by the hyperspectral imaging system. The determination coefficient between the TVC obtained with the standard plate count method for bacterial colonies and the LS-SVM prediction was 0.987 2 and 0.942 6 for the calibration and prediction sets respectively, and the root mean square error of calibration (RMSEC) and of prediction (RMSEP) was 0.207 1 and 0.217 6, respectively; these results were considerably better than those of the MLR, PLSR and ANNs methods. This research demonstrates that using the hyperspectral imaging system coupled with the LS-SVM modeling method is a valid means for quick and nondestructive determination of TVC of pork
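
    LS-SVM regression of the kind used above reduces to solving a single linear system; a compact sketch with an RBF kernel is given below (the kernel choice and hyperparameters are placeholders, not the values tuned in the study):

      import numpy as np

      def lssvm_train(X, y, gamma=10.0, sigma=1.0):
          """Least-squares SVM regression sketch (RBF kernel), solved via the
          usual (n+1)x(n+1) linear system; returns a prediction function."""
          def rbf(A, B):
              d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
              return np.exp(-d2 / (2 * sigma**2))
          n = len(y)
          K = rbf(X, X)
          top = np.hstack([[0.0], np.ones(n)])
          bottom = np.hstack([np.ones((n, 1)), K + np.eye(n) / gamma])
          A = np.vstack([top, bottom])
          sol = np.linalg.solve(A, np.hstack([[0.0], y]))
          b, alpha = sol[0], sol[1:]
          return lambda Xnew: rbf(Xnew, X) @ alpha + b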

  2. Image-based in vivo assessment of targeting accuracy of stereotactic brain surgery in experimental rodent models

    Science.gov (United States)

    Rangarajan, Janaki Raman; Vande Velde, Greetje; van Gent, Friso; de Vloo, Philippe; Dresselaers, Tom; Depypere, Maarten; van Kuyck, Kris; Nuttin, Bart; Himmelreich, Uwe; Maes, Frederik

    2016-11-01

    Stereotactic neurosurgery is used in pre-clinical research of neurological and psychiatric disorders in experimental rat and mouse models to engraft a needle or electrode at a pre-defined location in the brain. However, inaccurate targeting may confound the results of such experiments. In contrast to the clinical practice, inaccurate targeting in rodents remains usually unnoticed until assessed by ex vivo end-point histology. We here propose a workflow for in vivo assessment of stereotactic targeting accuracy in small animal studies based on multi-modal post-operative imaging. The surgical trajectory in each individual animal is reconstructed in 3D from the physical implant imaged in post-operative CT and/or its trace as visible in post-operative MRI. By co-registering post-operative images of individual animals to a common stereotaxic template, targeting accuracy is quantified. Two commonly used neuromodulation regions were used as targets. Target localization errors showed not only variability, but also inaccuracy in targeting. Only about 30% of electrodes were within the subnucleus structure that was targeted and a-specific adverse effects were also noted. Shifting from invasive/subjective 2D histology towards objective in vivo 3D imaging-based assessment of targeting accuracy may benefit a more effective use of the experimental data by excluding off-target cases early in the study.

  3. A kernel-based multi-feature image representation for histopathology image classification

    International Nuclear Information System (INIS)

    Moreno J; Caicedo J Gonzalez F

    2010-01-01

    This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined in a unique image representation space using kernel functions. This feature space is further enhanced by the application of latent semantic analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, support vector machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.
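
    The kernel-combination step described above can be sketched as a weighted sum of per-feature kernels fed to an SVM with a precomputed kernel; the histogram-intersection kernel and the weighting shown here are assumptions for illustration, not necessarily the kernels used in the paper:

      import numpy as np
      from sklearn.svm import SVC

      def histogram_intersection(A, B):
          """Histogram intersection kernel, a common choice for colour/texture/edge
          histograms."""
          return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

      def combined_kernel(features_a, features_b, weights):
          """Combine per-feature kernels (colour, texture, edges, ...) into a
          single similarity matrix by a weighted sum."""
          return sum(w * histogram_intersection(fa, fb)
                     for w, fa, fb in zip(weights, features_a, features_b))

      # K_train = combined_kernel(train_feats, train_feats, weights)
      # clf = SVC(kernel='precomputed').fit(K_train, labels)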

  4. A KERNEL-BASED MULTI-FEATURE IMAGE REPRESENTATION FOR HISTOPATHOLOGY IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    J Carlos Moreno

    2010-09-01

    Full Text Available This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined together in a unique image representation space using kernel functions. This feature space is further enhanced by the application of Latent Semantic Analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, Support Vector Machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.

  5. FDTD based model of ISOCT imaging for validation of nanoscale sensitivity (Conference Presentation)

    Science.gov (United States)

    Eid, Aya; Zhang, Di; Yi, Ji; Backman, Vadim

    2017-02-01

    Many of the earliest structural changes associated with neoplasia occur on the micro and nanometer scale, and thus appear histologically normal. Our group has established Inverse Spectroscopic OCT (ISOCT), a spectral based technique to extract nanoscale sensitive metrics derived from the OCT signal. Thus, there is a need to model light transport through relatively large volumes (< 50 um^3) of media with nanoscale level resolution. Finite Difference Time Domain (FDTD) is an iterative approach which directly solves Maxwell's equations to robustly estimate the electric and magnetic fields propagating through a sample. The sample's refractive index for every spatial voxel and wavelength are specified upon a grid with voxel sizes on the order of λ/20, making it an ideal modelling technique for nanoscale structure analysis. Here, we utilize the FDTD technique to validate the nanoscale sensing ability of ISOCT. The use of FDTD for OCT modelling requires three components: calculating the source beam as it propagates through the optical system, computing the sample's scattered field using FDTD, and finally propagating the scattered field back through the optical system. The principles of Fourier optics are employed to focus this interference field through a 4f optical system and onto the detector. Three-dimensional numerical samples are generated from a given refractive index correlation function with known parameters, and subsequent OCT images and mass density correlation function metrics are computed. We show that while the resolvability of the OCT image remains diffraction limited, spectral analysis allows nanoscale sensitive metrics to be extracted.
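
    FDTD itself is a standard leap-frog update of the coupled Maxwell curl equations on a staggered grid. The one-dimensional, free-space sketch below (normalized units, Courant number 0.5) only illustrates that update loop; the full ISOCT model operates on 3D refractive-index volumes and includes the optical system, which is not shown here.

```python
import numpy as np

# Minimal 1-D FDTD leap-frog loop in free space (normalized units).
nx, nt = 400, 600
ez = np.zeros(nx)   # electric field
hy = np.zeros(nx)   # magnetic field

for t in range(nt):
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])              # update H from spatial derivative of E
    ez[1:]  += 0.5 * (hy[1:] - hy[:-1])              # update E from spatial derivative of H
    ez[nx // 2] += np.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source at the centre
```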

  6. 3D MODEL GENERATION USING OBLIQUE IMAGES ACQUIRED BY UAV

    Directory of Open Access Journals (Sweden)

    A. Lingua

    2017-07-01

    Full Text Available In recent years, many studies have revealed the advantages of using airborne oblique images for obtaining improved 3D city models (including façades and building footprints). Here the acquisition and use of oblique images from a low-cost and open-source Unmanned Aerial Vehicle (UAV) for the high-level-of-detail 3D reconstruction of historical architecture is evaluated. The critical issues of such acquisitions (flight planning strategies, ground control point distribution, etc.) are described. Several problems should be considered in the flight planning: the best approach to cover the whole object with the minimum flight time; visibility of vertical structures; occlusions due to the context; acquisition of all parts of the object (the closest and the farthest) with similar resolution; suitable camera inclination, and so on. In this paper a solution is proposed in order to acquire oblique images with only one flight. The data processing was realized using a Structure-from-Motion-based approach for point cloud generation with dense image-matching algorithms implemented in open source software. The achieved results are analysed against check points and reference LiDAR data. The system was tested by surveying a historical architectural complex: the "Sacro Monte di Varallo Sesia" in the north-west of Italy. This study demonstrates that the use of oblique images acquired from a low-cost UAV system and processed with open source software is an effective methodology for surveying cultural heritage sites characterized by limited accessibility, the need for detail, rapid acquisition, and often reduced budgets.

  7. Fuzzy object models for newborn brain MR image segmentation

    Science.gov (United States)

    Kobashi, Syoji; Udupa, Jayaram K.

    2013-03-01

    Newborn brain MR image segmentation is a challenging problem because of the variety of size, shape and MR signal, although it is fundamental for quantitative radiology of brain MR images. Because of the large difference between the adult brain and the newborn brain, it is difficult to directly apply conventional methods to the newborn brain. Inspired by the original fuzzy object model introduced by Udupa et al. at SPIE Medical Imaging 2011, called the fuzzy shape object model (FSOM) here, this paper introduces the fuzzy intensity object model (FIOM), and proposes a new image segmentation method which combines the FSOM and FIOM into fuzzy connected (FC) image segmentation. The fuzzy object models are built from training datasets in which the cerebral parenchyma is delineated by experts. After registering the FSOM with the image under evaluation, the proposed method roughly recognizes the cerebral parenchyma region based on prior knowledge of location, shape, and MR signal given by the registered FSOM and FIOM. Then, FC image segmentation delineates the cerebral parenchyma using the fuzzy object models. The proposed method has been evaluated on 9 newborn brain MR images using the leave-one-out strategy. The revised age was between -1 and 2 months. Quantitative evaluation using false positive volume fraction (FPVF) and false negative volume fraction (FNVF) has been conducted; a FPVF of 0.75% and an FNVF of 3.75% were achieved. More data collection and testing are underway.

  8. Three-dimensional model-based object recognition and segmentation in cluttered scenes.

    Science.gov (United States)

    Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn

    2006-10-01

    Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprised of 55 models and 610 scenes and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.

  9. Characterization of lens based photoacoustic imaging system

    Directory of Open Access Journals (Sweden)

    Kalloor Joseph Francis

    2017-12-01

    Full Text Available Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include the limited view of the target tissue, low signal-to-noise ratio and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics of a PA camera in the mathematical framework of an imaging system and derive a closed-form expression for the point spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over the 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.

  10. Characterization of lens based photoacoustic imaging system.

    Science.gov (United States)

    Francis, Kalloor Joseph; Chinni, Bhargava; Channappayya, Sumohana S; Pachamuthu, Rajalakshmi; Dogra, Vikram S; Rao, Navalgund

    2017-12-01

    Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include the limited view of the target tissue, low signal-to-noise ratio and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics of a PA camera in the mathematical framework of an imaging system and derive a closed-form expression for the point spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over the 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.
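
    When a system PSF is evaluated over the imaging volume, a common summary is its full width at half maximum along a profile through a point target. The sketch below is a generic, hypothetical FWHM estimate on a synthetic Gaussian profile; it is not the closed-form PSF expression derived in the paper.

```python
import numpy as np

def fwhm(profile, dx):
    """Full width at half maximum of a 1-D profile sampled at spacing dx."""
    p = profile - profile.min()
    half = 0.5 * p.max()
    above = np.where(p >= half)[0]
    return (above[-1] - above[0]) * dx   # coarse estimate, no sub-sample interpolation

# Hypothetical lateral profile through a point-target image (spacing in mm).
x = np.linspace(-5, 5, 201)
profile = np.exp(-x**2 / (2 * 0.8**2))
print(f"lateral FWHM ~ {fwhm(profile, x[1] - x[0]):.2f} mm")
```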

  11. Toward efficient biomechanical-based deformable image registration of lungs for image-guided radiotherapy

    Science.gov (United States)

    Al-Mayah, Adil; Moseley, Joanne; Velec, Mike; Brock, Kristy

    2011-08-01

    Both accuracy and efficiency are critical for the implementation of biomechanical model-based deformable registration in clinical practice. The focus of this investigation is to evaluate the potential of improving the efficiency of deformable image registration of the human lungs without loss of accuracy. Three-dimensional finite element models have been developed using image data of 14 lung cancer patients. Each model consists of two lungs, tumor and external body. Sliding of the lungs inside the chest cavity is modeled using a frictionless surface-based contact model. The effect of element type, finite deformation and elasticity on accuracy and computing time is investigated. Linear and quadratic tetrahedral elements are used with linear and nonlinear geometric analysis. Two types of material properties are applied, namely elastic and hyperelastic. The accuracy of each of the four models is examined using a number of anatomical landmarks representing vessel bifurcation points distributed across the lungs. The registration error is not significantly affected by the element type or linearity of analysis, with an average vector error of around 2.8 mm. The displacement differences between linear and nonlinear analysis methods are calculated for all lung nodes, and a maximum value of 3.6 mm is found in one of the nodes near the entrance of the bronchial tree into the lungs. The 95th percentile of the displacement difference ranges between 0.4 and 0.8 mm. However, the time required for the analysis is reduced from 95 min for the quadratic-element, nonlinear-geometry model to 3.4 min for the linear-element, linear-geometry model. Therefore, using linear tetrahedral elements with linear elastic materials and linear geometry is preferable for modeling the breathing motion of lungs for image-guided radiotherapy applications.
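
    The comparison between linear and nonlinear analyses boils down to per-node displacement differences and their summary statistics (maximum and 95th percentile). A minimal sketch of that bookkeeping is shown below, with synthetic displacement differences standing in for the finite-element output.

```python
import numpy as np

# Hypothetical per-node displacement differences (mm) between the linear and
# nonlinear finite-element solutions, one value per lung mesh node.
diff = np.abs(np.random.normal(0.0, 0.3, size=50000))

print("maximum difference :", diff.max())
print("95th percentile    :", np.percentile(diff, 95))
```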

  12. Angle-independent measure of motion for image-based gating in 3D coronary angiography

    International Nuclear Information System (INIS)

    Lehmann, Glen C.; Holdsworth, David W.; Drangova, Maria

    2006-01-01

    compared to an ECG-based gating strategy in a porcine model. The image-based gating strategy selected 61 projection images, compared to 45 selected by the ECG-gating strategy. Qualitative comparison revealed that although both the SIC-based and ECG-gated reconstructions decreased motion artifact compared to reconstruction with no gating, the SIC-based gating technique increased the conspicuity of smaller vessels when compared to ECG gating in maximum intensity projections of the reconstructions and increased the sharpness of a vessel cross section in multi-planar reformats of the reconstruction

  13. a Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual Lbp Feature and Visual Attention Model

    Science.gov (United States)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. The model first selects salient candidate regions across large-scale images using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is fast and helps focus on suspected ship areas, avoiding a separate land/sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using the visual attention model and detail signatures using LBP features, consistent with the sparse distribution of ships in the images. These features are then employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After obtaining the suspicious areas, some false alarms remain, such as small waves and small ribbon clouds, so simple shape and texture analyses are adopted to distinguish between ships and non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
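
    The chip-level detail signature and SVM classification stage can be illustrated with a uniform LBP histogram per image chip. The sketch below uses scikit-image and scikit-learn with random chips and labels as placeholders; the saliency/visual-attention branch and the CVLBP combination of the paper are not reproduced.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def chip_lbp_histogram(chip, p=8, r=1.0):
    """Uniform LBP histogram of one image chip (2-D uint8 array)."""
    lbp = local_binary_pattern(chip, P=p, R=r, method="uniform")
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    return hist

# Hypothetical training chips and labels (1 = contains ship, 0 = background).
chips = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(100)]
labels = np.random.randint(0, 2, 100)

X = np.array([chip_lbp_histogram(c) for c in chips])
clf = SVC(kernel="rbf").fit(X, labels)
is_ship = clf.predict(X[:1])   # classify one chip
```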

  14. Evidence-based cancer imaging

    Energy Technology Data Exchange (ETDEWEB)

    Shinagare, Atul B.; Khorasani, Ramin [Dept. of Radiology, Brigham and Women's Hospital, Boston (Korea, Republic of)

    2017-01-15

    With the advances in the field of oncology, imaging is increasingly used in the follow-up of cancer patients, leading to concerns about over-utilization. Therefore, it has become imperative to make imaging more evidence-based, efficient, cost-effective and equitable. This review explores the strategies and tools to make diagnostic imaging more evidence-based, mainly in the context of follow-up of cancer patients.

  15. MR-based imaging of neural stem cells

    Energy Technology Data Exchange (ETDEWEB)

    Politi, Letterio S. [San Raffaele Scientific Institute, Neuroradiology Department, Milano (Italy)

    2007-06-15

    The efficacy of therapies based on neural stem cells (NSC) has been demonstrated in preclinical models of several central nervous system (CNS) diseases. Before any potential human application of such promising therapies can be envisaged, there are some important issues that need to be solved. The most relevant one is the requirement for a noninvasive technique capable of monitoring NSC delivery, homing to target sites and trafficking. Knowledge of the location and temporospatial migration of either transplanted or genetically modified NSC is of the utmost importance in analyzing mechanisms of correction and cell distribution. Further, such a technique may represent a crucial step toward clinical application of NSC-based approaches in humans, for both designing successful protocols and monitoring their outcome. Among the diverse imaging approaches available for noninvasive cell tracking, such as nuclear medicine techniques, fluorescence and bioluminescence, magnetic resonance imaging (MRI) has unique advantages. Its high temporospatial resolution, high sensitivity and specificity render MRI one of the most promising imaging modalities available, since it allows dynamic visualization of migration of transplanted cells in animal models and patients during clinically useful time periods. Different cellular and molecular labeling approaches for MRI depiction of NSC are described and discussed in this review, as well as the most relevant issues to be considered in optimizing molecular imaging techniques for clinical application. (orig.)

  16. MR-based imaging of neural stem cells

    International Nuclear Information System (INIS)

    Politi, Letterio S.

    2007-01-01

    The efficacy of therapies based on neural stem cells (NSC) has been demonstrated in preclinical models of several central nervous system (CNS) diseases. Before any potential human application of such promising therapies can be envisaged, there are some important issues that need to be solved. The most relevant one is the requirement for a noninvasive technique capable of monitoring NSC delivery, homing to target sites and trafficking. Knowledge of the location and temporospatial migration of either transplanted or genetically modified NSC is of the utmost importance in analyzing mechanisms of correction and cell distribution. Further, such a technique may represent a crucial step toward clinical application of NSC-based approaches in humans, for both designing successful protocols and monitoring their outcome. Among the diverse imaging approaches available for noninvasive cell tracking, such as nuclear medicine techniques, fluorescence and bioluminescence, magnetic resonance imaging (MRI) has unique advantages. Its high temporospatial resolution, high sensitivity and specificity render MRI one of the most promising imaging modalities available, since it allows dynamic visualization of migration of transplanted cells in animal models and patients during clinically useful time periods. Different cellular and molecular labeling approaches for MRI depiction of NSC are described and discussed in this review, as well as the most relevant issues to be considered in optimizing molecular imaging techniques for clinical application. (orig.)

  17. Imaged-Based Visual Servo Control for a VTOL Aircraft

    Directory of Open Access Journals (Sweden)

    Liying Zou

    2017-01-01

    Full Text Available This paper presents a novel control strategy to force a vertical take-off and landing (VTOL) aircraft to accomplish a pinpoint landing task. The control development is based on the image-based visual servoing method and the back-stepping technique; its design differs from existing methods because the controller maps the image errors onto the actuator space via a visual model which does not contain the depth information of the feature point. The novelty of the proposed method is to extend the image-based visual servoing technique to VTOL aircraft control. In addition, Lyapunov theory is used to prove the asymptotic stability of the VTOL aircraft visual servoing system, with the image error converging to zero. Furthermore, simulations have been conducted to demonstrate the performance of the proposed method.
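
    For background only, the classical image-based visual servoing law maps the image-plane feature error to a camera velocity through the pseudo-inverse of an interaction matrix. The sketch below shows that textbook law for a single point feature; note it requires a depth estimate Z, whereas the controller proposed in the paper is specifically designed to avoid depth information, so this is generic context rather than the paper's method.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a single normalized point feature at depth Z."""
    return np.array([[-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
                     [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x]])

x, y, Z = 0.10, -0.05, 1.5            # current feature position and assumed depth
e = np.array([x - 0.0, y - 0.0])      # error to the desired feature position
L = interaction_matrix(x, y, Z)
v = -0.5 * np.linalg.pinv(L) @ e      # 6-DOF camera velocity command, gain 0.5
```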

  18. G0-WISHART Distribution Based Classification from Polarimetric SAR Images

    Science.gov (United States)

    Hu, G. C.; Zhao, Q. H.

    2017-09-01

    Enormous scientific and technical developments have further improved remote sensing for decades, particularly the Polarimetric Synthetic Aperture Radar (PolSAR) technique, so classification methods based on PolSAR images have received much attention from scholars and related departments around the world. The multilook polarimetric G0-Wishart model is a more flexible model which describes homogeneous, heterogeneous and extremely heterogeneous regions in the image. Moreover, the polarimetric G0-Wishart distribution does not include the modified Bessel function of the second kind; it is a simple statistical distribution model with fewer parameters. To prove its feasibility, a classification procedure was tested on a fully polarimetric Synthetic Aperture Radar (SAR) image using this method. First, multilook polarimetric SAR data processing and speckle filtering are applied to reduce the influence of speckle on the classification result. The image is initially classified into sixteen classes by H/A/α decomposition, and the ICM algorithm is then used to classify features based on the G0-Wishart distance. Qualitative and quantitative results show that the proposed method can classify polarimetric SAR data effectively and efficiently.
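
    As background, the widely used complex Wishart classifier assigns each pixel's covariance (or coherency) matrix to the class centre minimizing ln|Σ_m| + Tr(Σ_m⁻¹C). The sketch below implements only that standard distance; the G0-Wishart distance used in the paper is a different, more flexible measure and is not reproduced here, and the matrices are placeholders.

```python
import numpy as np

def wishart_distance(C, sigma_m):
    """Classic complex Wishart class distance: ln|Sigma_m| + Tr(Sigma_m^-1 C)."""
    _, logdet = np.linalg.slogdet(sigma_m)
    return logdet + np.real(np.trace(np.linalg.inv(sigma_m) @ C))

# Hypothetical 3x3 coherency matrix of one pixel and two class centres.
C = np.eye(3, dtype=complex)
classes = [np.eye(3, dtype=complex), 2.0 * np.eye(3, dtype=complex)]
label = int(np.argmin([wishart_distance(C, S) for S in classes]))
```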

  19. A model of primate visual cortex based on category-specific redundancies in natural images

    Science.gov (United States)

    Malmir, Mohsen; Shiry Ghidary, S.

    2010-12-01

    Neurophysiological and computational studies have proposed that properties of natural images have a prominent role in shaping selectivity of neurons in the visual cortex. An important property of natural images that has been studied extensively is the inherent redundancy in these images. In this paper, the concept of category-specific redundancies is introduced to describe the complex pattern of dependencies between responses of linear filters to natural images. It is proposed that structural similarities between images of different object categories result in dependencies between responses of linear filters in different spatial scales. It is also proposed that the brain gradually removes these dependencies in different areas of the ventral visual hierarchy to provide a more efficient representation of its sensory input. The authors proposed a model to remove these redundancies and trained it with a set of natural images using general learning rules that are developed to remove dependencies between responses of neighbouring neurons. Results of experiments demonstrate the close resemblance of neuronal selectivity between different layers of the model and their corresponding visual areas.

  20. Image Encryption Scheme Based on Balanced Two-Dimensional Cellular Automata

    Directory of Open Access Journals (Sweden)

    Xiaoyan Zhang

    2013-01-01

    Full Text Available Cellular automata (CA) are simple models of computation which exhibit fascinatingly complex behavior. Due to the universality of the CA model, it has been widely applied in traditional cryptography and image processing. The aim of this paper is to present a new image encryption scheme based on balanced two-dimensional cellular automata. In this scheme, a random image with the same size as the plain image to be encrypted is first generated by a pseudo-random number generator with a seed. Then, the random image is evolved alternately under two balanced two-dimensional CA rules. Finally, the cipher image is obtained by a bitwise XOR of the final evolved image and the plain image. The proposed scheme possesses several advantages such as a very large key space, high randomness, a complex cryptographic structure, and fast encryption/decryption speed. Simulation results obtained from some classical images from the USC-SIPI database demonstrate the strong performance of the proposed image encryption scheme.
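
    The overall structure, a seeded pseudo-random key image evolved by a 2D CA and then XOR-ed with the plain image, can be sketched as below. The neighbour-parity rule and the number of evolution steps are illustrative stand-ins, not the balanced CA rules of the paper; because XOR is an involution, applying the same keystream again decrypts.

```python
import numpy as np

def ca_step(grid):
    """One step of a toy binary 2-D CA: each cell becomes the XOR (parity)
    of its four direct neighbours on a toroidal grid. Illustrative rule only."""
    return (np.roll(grid, 1, 0) ^ np.roll(grid, -1, 0) ^
            np.roll(grid, 1, 1) ^ np.roll(grid, -1, 1))

def encrypt(plain, seed, steps=8):
    rng = np.random.default_rng(seed)
    key = rng.integers(0, 256, plain.shape, dtype=np.uint8)  # seeded random image
    bits = np.unpackbits(key[..., None], axis=-1)            # evolve its bit planes
    for _ in range(steps):
        bits = ca_step(bits)
    key = np.packbits(bits, axis=-1)[..., 0]
    return plain ^ key                                        # bitwise XOR

plain = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cipher = encrypt(plain, seed=1234)
restored = encrypt(cipher, seed=1234)   # same keystream decrypts
assert np.array_equal(plain, restored)
```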

  1. Adaptive wiener filter based on Gaussian mixture distribution model for denoising chest X-ray CT image

    International Nuclear Information System (INIS)

    Tabuchi, Motohiro; Yamane, Nobumoto; Morikawa, Yoshitaka

    2008-01-01

    In recent decades, X-ray CT imaging has become more important as a result of its high-resolution performance. However, it is well known that the X-ray dose is insufficient in techniques that use low-dose imaging in health screening or thin-slice imaging in work-up. Therefore, the degradation of CT images caused by the streak artifact frequently becomes problematic. In this study, we applied a Wiener filter (WF) using the universal Gaussian mixture distribution model (UNI-GMM) as a statistical model to remove the streak artifact. In designing the WF, it is necessary to estimate the statistical model and the precise covariances of the original image. In the proposed method, we obtained a variety of chest X-ray CT images using a phantom simulating a chest organ, and we estimated the statistical information using these images for training. The simulation results showed that it is possible to fit the UNI-GMM to the chest X-ray CT images and reduce the specific noise. (author)
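
    For orientation, a plain local-statistics Wiener filter (single Gaussian assumption) is available in SciPy and can serve as a simple baseline; the method described above replaces that assumption with a universal Gaussian mixture model trained on phantom CT images, which is not shown here.

```python
import numpy as np
from scipy.signal import wiener

# Stand-in for a noisy CT slice; in practice this would be the reconstructed image.
noisy_slice = np.random.rand(512, 512)

# Local adaptive Wiener filtering over a 5x5 neighbourhood (baseline only).
denoised = wiener(noisy_slice, mysize=5)
```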

  2. Practical considerations for image-based PSF and blobs reconstruction in PET

    International Nuclear Information System (INIS)

    Stute, Simon; Comtat, Claude

    2013-01-01

    Iterative reconstructions in positron emission tomography (PET) need a model relating the recorded data to the object/patient being imaged, called the system matrix (SM). The more realistic this model, the better the spatial resolution in the reconstructed images. However, a serious concern when using a SM that accurately models the resolution properties of the PET system is the undesirable edge artefact, visible through oscillations near sharp discontinuities in the reconstructed images. This artefact is a natural consequence of solving an ill-conditioned inverse problem, where the recorded data are band-limited. In this paper, we focus on practical aspects when considering image-based point-spread function (PSF) reconstructions. To remove the edge artefact, we propose to use a particular case of the method of sieves (Grenander 1981 Abstract Inference New York: Wiley), which simply consists in performing a standard PSF reconstruction, followed by a post-smoothing using the PSF as the convolution kernel. Using analytical simulations, we investigate the impact of different reconstruction and PSF modelling parameters on the edge artefact and its suppression, in the case of noise-free data and an exactly known PSF. Using Monte-Carlo simulations, we assess the proposed method of sieves with respect to the choice of the geometric projector and the PSF model used in the reconstruction. When the PSF model is accurately known, we show that the proposed method of sieves succeeds in completely suppressing the edge artefact, though after a number of iterations higher than typically used in practice. When applying the method to realistic data (i.e. unknown true SM and noisy data), we show that the choice of the geometric projector and the PSF model does not impact the results in terms of noise and contrast recovery, as long as the PSF has a width close to the true PSF one. Equivalent results were obtained using either blobs or voxels in the same conditions (i.e. the blob
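
    The particular method of sieves described above amounts to convolving the PSF-based reconstruction with the PSF itself. Assuming an isotropic Gaussian PSF specified by its FWHM, the post-smoothing step can be sketched as follows; the reconstruction array is a random placeholder.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

recon = np.random.rand(128, 128)              # stand-in for a PSF-reconstructed PET slice
fwhm_vox = 4.0                                # assumed PSF width in voxels
sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))

# Post-smooth the PSF reconstruction with the PSF kernel (the "sieve").
sieved = gaussian_filter(recon, sigma=sigma)
```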

  3. A new pattern associative memory model for image recognition based on Hebb rules and dot product

    Science.gov (United States)

    Gao, Mingyue; Deng, Limiao; Wang, Yanjiang

    2018-04-01

    A great number of associative memory models have been proposed in the last few years to realize information storage and retrieval inspired by the human brain. However, there is still much room for improvement in those models. In this paper, we extend a binary pattern associative memory model to accomplish real-world image recognition. The learning process is based on the fundamental Hebb rules and retrieval is implemented by a normalized dot product operation. Our proposed model not only fulfills rapid memory storage and retrieval of visual information but also supports incremental learning without destroying previously learned information. Experimental results demonstrate that our model outperforms the existing Self-Organizing Incremental Neural Network (SOINN) and Back-Propagation Neural Network (BPNN) in recognition accuracy and time efficiency.
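
    A minimal sketch of Hebbian storage with normalized dot-product retrieval is given below: weights accumulate as outer products of stored pattern vectors, a noisy query is recalled through the weights, and the best match is ranked by cosine similarity. The binary patterns, dimensions and noise level are illustrative; the incremental-learning machinery of the proposed model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical +1/-1 pattern vectors standing in for image features.
patterns = rng.choice([-1.0, 1.0], size=(10, 256))

# Hebbian learning: one outer-product trace per pattern; adding a new pattern
# simply adds another trace without altering the existing ones.
W = sum(np.outer(p, p) for p in patterns)

def retrieve(query, stored):
    """Recall through the weights, then rank stored patterns by normalized dot product."""
    recall = W @ query
    recall /= np.linalg.norm(recall)
    s = stored / np.linalg.norm(stored, axis=1, keepdims=True)
    return int(np.argmax(s @ recall))

noisy = patterns[3].copy()
noisy[rng.choice(256, 20, replace=False)] *= -1   # flip 20 of 256 entries
print(retrieve(noisy, patterns))                   # expected: 3
```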

  4. Statistical model for OCT image denoising

    KAUST Repository

    Li, Muxingzi

    2017-08-01

    Optical coherence tomography (OCT) is a non-invasive technique with a large array of applications in clinical imaging and biological tissue visualization. However, the presence of speckle noise affects the analysis of OCT images and their diagnostic utility. In this article, we introduce a new OCT denoising algorithm. The proposed method is founded on a numerical optimization framework based on maximum-a-posteriori estimate of the noise-free OCT image. It combines a novel speckle noise model, derived from local statistics of empirical spectral domain OCT (SD-OCT) data, with a Huber variant of total variation regularization for edge preservation. The proposed approach exhibits satisfying results in terms of speckle noise reduction as well as edge preservation, at reduced computational cost.

  5. Model-based Iterative Reconstruction: Effect on Patient Radiation Dose and Image Quality in Pediatric Body CT

    Science.gov (United States)

    Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.

    2014-01-01

    Purpose To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and those images were reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using the volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results In the 25 patients who underwent the reduced-dose protocol, the mean decrease in CTDIvol was 46% (range, 19%-65%) and the mean decrease in SSDE was 44% (range, 19%-64%). Reduced-dose MBIR images had less noise (P < .004). Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues. Conclusion CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality. © RSNA, 2013 Online supplemental material is available for this article. PMID:24091359

  6. Edge-based correlation image registration for multispectral imaging

    Science.gov (United States)

    Nandy, Prabal [Albuquerque, NM

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be derived by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
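
    A compact way to combine the two ingredients named above, edge filtering followed by phase correlation, is sketched below with NumPy/SciPy; the Sobel gradient magnitude and the synthetic (3, 5)-pixel offset are illustrative choices, not details from the patent.

```python
import numpy as np
from scipy.ndimage import sobel

def phase_correlation(a, b):
    """Estimate the translation of b relative to a from the normalized cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx        # circular shift; large values wrap around the image size

def edge_filter(im):
    """Gradient-magnitude edge image (Sobel), applied before phase correlation."""
    return np.hypot(sobel(im, axis=0), sobel(im, axis=1))

# Hypothetical pair of bands of the same target, offset by (3, 5) pixels.
band1 = np.random.rand(128, 128)
band2 = np.roll(np.roll(band1, 3, axis=0), 5, axis=1)

print(phase_correlation(edge_filter(band1), edge_filter(band2)))   # ~ (3, 5)
```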

  7. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    Science.gov (United States)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

    The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by variations in the ground topography. Most previous studies modeling microwave backscattering signatures of forest areas have been carried out over relatively flat terrain. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, something not achieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over Kalimantan, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  8. Brain involvement in patients with inflammatory bowel disease: a voxel-based morphometry and diffusion tensor imaging study.

    Science.gov (United States)

    Zikou, Anastasia K; Kosmidou, Maria; Astrakas, Loukas G; Tzarouchi, Loukia C; Tsianos, Epameinondas; Argyropoulou, Maria I

    2014-10-01

    To investigate structural brain changes in inflammatory bowel disease (IBD). Brain magnetic resonance imaging (MRI) was performed on 18 IBD patients (aged 45.16 ± 14.71 years) and 20 age-matched control subjects. The imaging protocol consisted of a sagittal-FLAIR, a T1-weighted high-resolution three-dimensional spoiled gradient-echo sequence, and a multisession spin-echo echo-planar diffusion-weighted sequence. Differences between patients and controls in brain volume and diffusion indices were evaluated using the voxel-based morphometry (VBM) and tract-based spatial statistics (TBSS) methods, respectively. The presence of white-matter hyperintensities (WMHIs) was evaluated on FLAIR images. VBM revealed decreased grey matter (GM) volume in patients in the fusiform and the inferior temporal gyrus bilaterally, the right precentral gyrus, the right supplementary motor area, the right middle frontal gyrus and the left superior parietal gyrus. • Diffusion tensor imaging detects microstructural brain abnormalities in IBD. • Voxel-based morphometry reveals brain atrophy in IBD.

  9. QR code based noise-free optical encryption and decryption of a gray scale image

    Science.gov (United States)

    Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-03-01

    In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.

  10. Imaging infrared: Scene simulation, modeling, and real image tracking; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    Science.gov (United States)

    Triplett, Milton J.; Wolverton, James R.; Hubert, August J.

    1989-09-01

    Various papers on scene simulation, modeling, and real image tracking using IR imaging are presented. Individual topics addressed include: tactical IR scene generator, dynamic FLIR simulation in flight training research, high-speed dynamic scene simulation in UV to IR spectra, development of an IR sensor calibration facility, IR celestial background scene description, transmission measurement of optical components at cryogenic temperatures, diffraction model for a point-source generator, silhouette-based tracking for tactical IR systems, use of knowledge in electrooptical trackers, detection and classification of target formations in IR image sequences, SMPRAD: simplified three-dimensional cloud radiance model, IR target generator, recent advances in testing of thermal imagers, generic IR system models with dynamic image generation, modeling realistic target acquisition using IR sensors in multiple-observer scenarios, and novel concept of scene generation and comprehensive dynamic sensor test.

  11. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation

    Institute of Scientific and Technical Information of China (English)

    Tian Dongping

    2017-01-01

    In recent years, the multimedia annotation problem has been attracting significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Then, different image features with different magnitudes will result in different performance for automatic image annotation. To this end, a Gaussian normalization method is utilized to normalize different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.

  12. Superpixel-based classification of gastric chromoendoscopy images

    Science.gov (United States)

    Boschetto, Davide; Grisan, Enrico

    2017-03-01

    Chromoendoscopy (CH) is a gastroenterology imaging modality that involves the staining of tissues with methylene blue, which reacts with the internal walls of the gastrointestinal tract, improving the visual contrast of mucosal surfaces and thus enhancing a doctor's ability to screen for precancerous lesions or early cancer. This technique helps identify areas that can be targeted for biopsy or treatment, and in this work we focus on gastric cancer detection. Gastric chromoendoscopy for cancer detection has several taxonomies available, one of which classifies CH images into three classes (normal, metaplasia, dysplasia) based on color, shape and regularity of pit patterns. Computer-assisted diagnosis is desirable to help improve the reliability of tissue classification and abnormality detection. However, traditional computer vision methodologies, mainly segmentation, do not translate well to the specific visual characteristics of a gastroenterology imaging scenario. We propose the exploitation of a first unsupervised segmentation via superpixels, which group pixels into perceptually meaningful atomic regions used to replace the rigid structure of the pixel grid. For each superpixel, a set of features is extracted and then fed to a random forest based classifier, which computes a model used to predict the class of each superpixel. The average general accuracy of our model is 92.05% in the pixel domain (86.62% in the superpixel domain), while detection accuracies on the normal and abnormal classes are 85.71% and 95%, respectively. Finally, the class of the whole image can be predicted through a majority vote over the superpixels' predicted classes.
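
    The pipeline (superpixel segmentation, per-superpixel features, random forest prediction, image-level majority vote) can be sketched as follows with scikit-image and scikit-learn. The random image, the mean-colour feature, and the label assignments are placeholders; the paper's actual descriptor set is richer and is not reproduced.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

# Hypothetical RGB chromoendoscopy image; superpixels replace the pixel grid.
image = np.random.rand(256, 256, 3)
segments = slic(image, n_segments=300, compactness=10, start_label=0)

# One simple feature vector per superpixel (mean colour) for illustration only.
n_sp = segments.max() + 1
feats = np.array([image[segments == s].mean(axis=0) for s in range(n_sp)])

# Stand-in annotations: 0 = normal, 1 = metaplasia, 2 = dysplasia.
labels = np.random.randint(0, 3, n_sp)
clf = RandomForestClassifier(n_estimators=100).fit(feats, labels)
sp_pred = clf.predict(feats)

# Image-level class by majority vote over the superpixel predictions.
image_class = np.bincount(sp_pred).argmax()
```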

  13. Image super-resolution reconstruction based on regularization technique and guided filter

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Gu, Pei-ting; Liu, Pei-zhong; Luo, Yan-min

    2017-06-01

    In order to improve the accuracy of sparse representation coefficients and the quality of reconstructed images, an improved image super-resolution algorithm based on sparse representation is presented. In the sparse coding stage, the autoregressive (AR) regularization and the non-local (NL) similarity regularization are introduced to improve the sparse coding objective function. A group of AR models which describe the image local structures are pre-learned from the training samples, and one or several suitable AR models can be adaptively selected for each image patch to regularize the solution space. Then, the image non-local redundancy is obtained by the NL similarity regularization to preserve edges. In the process of computing the sparse representation coefficients, the feature-sign search algorithm is utilized instead of the conventional orthogonal matching pursuit algorithm to improve the accuracy of the sparse coefficients. To restore image details further, a global error compensation model based on weighted guided filter is proposed to realize error compensation for the reconstructed images. Experimental results demonstrate that compared with Bicubic, L1SR, SISR, GR, ANR, NE + LS, NE + NNLS, NE + LLE and A + (16 atoms) methods, the proposed approach has remarkable improvement in peak signal-to-noise ratio, structural similarity and subjective visual perception.

  14. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    International Nuclear Information System (INIS)

    Zhou Jinghao; Kim, Sung; Jabbour, Salma; Goyal, Sharad; Haffty, Bruce; Chen, Ting; Levinson, Lydia; Metaxas, Dimitris; Yue, Ning J.

    2010-01-01

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidian distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to

  15. Multi-Modal Imaging in a Mouse Model of Orthotopic Lung Cancer.

    Science.gov (United States)

    Patel, Priya; Kato, Tatsuya; Ujiie, Hideki; Wada, Hironobu; Lee, Daiyoon; Hu, Hsin-Pei; Hirohashi, Kentaro; Ahn, Jin Young; Zheng, Jinzi; Yasufuku, Kazuhiro

    2016-01-01

    Investigation of CF800, a novel PEGylated nano-liposomal imaging agent containing indocyanine green (ICG) and iohexol, for real-time near infrared (NIR) fluorescence and computed tomography (CT) image-guided surgery in an orthotopic lung cancer model in nude mice. CF800 was intravenously administered into 13 mice bearing the H460 orthotopic human lung cancer. At 48 h post-injection (the peak imaging agent accumulation time point), ex vivo NIR and CT imaging was performed. A clinical NIR imaging system (SPY®, Novadaq) was used to measure the fluorescence intensity of tumor and lung. Tumor-to-background ratios (TBR) were calculated in inflated and deflated states. The mean Hounsfield unit (HU) of the lung tumor was quantified using the CT data set and a semi-automated threshold-based method. Histological evaluation using H&E, the macrophage marker F4/80 and the endothelial cell marker CD31 was performed and compared to the liposomal fluorescence signal obtained from adjacent tissue sections. The fluorescence TBR measured when the lung is in the inflated state (2.0 ± 0.58) was significantly greater than in the deflated state (1.42 ± 0.380) (n = 7, p < 0.003). Mean fluorescent signal in tumor was highly variable across samples (49.0 ± 18.8 AU). CT image analysis revealed greater contrast enhancement in lung tumors (a mean increase of 110 ± 57 HU) when CF800 is administered compared with non-contrast-enhanced tumors (p = 0.0002). Preliminary data suggest that the high fluorescence TBR and CT tumor contrast enhancement provided by CF800 may have clinical utility in localization of lung cancer during CT and NIR image-guided surgery.

  16. Fuzzy modeling of electrical impedance tomography images of the lungs

    International Nuclear Information System (INIS)

    Tanaka, Harki; Ortega, Neli Regina Siqueira; Galizia, Mauricio Stanzione; Borges, Joao Batista; Amato, Marcelo Britto Passos

    2008-01-01

    Objectives: Aiming to improve the anatomical resolution of electrical impedance tomography images, we developed a fuzzy model based on electrical impedance tomography's high temporal resolution and on the functional pulmonary signals of perfusion and ventilation. Introduction: Electrical impedance tomography images carry information about both ventilation and perfusion. However, these images are difficult to interpret because of insufficient anatomical resolution, such that it becomes almost impossible to distinguish the heart from the lungs. Methods: Electrical impedance tomography data from an experimental animal model were collected during normal ventilation and apnoea while an injection of hypertonic saline was administered. The fuzzy model was elaborated in three parts: a modeling of the heart, the pulmonary ventilation map and the pulmonary perfusion map. Image segmentation was performed using a threshold method, and a ventilation/perfusion map was generated. Results: Electrical impedance tomography images treated by the fuzzy model were compared with the hypertonic saline injection method and computed tomography scan images, presenting good results. The average accuracy index was 0.80 when comparing the fuzzy modeled lung maps and the computed tomography scan lung mask. The average ROC curve area comparing a saline injection image and a fuzzy modeled pulmonary perfusion image was 0.77. Discussion: The innovative aspects of our work are the use of temporal information for the delineation of the heart structure and the use of two pulmonary functions for lung structure delineation. However, robustness of the method should be tested for the imaging of abnormal lung conditions. Conclusions: These results showed the adequacy of the fuzzy approach in treating the anatomical resolution uncertainties in electrical impedance tomography images. (author)

  17. Voxel-based MRI intensitometry reveals extent of cerebral white matter pathology in amyotrophic lateral sclerosis.

    Directory of Open Access Journals (Sweden)

    Viktor Hartung

    Full Text Available Amyotrophic lateral sclerosis (ALS) is characterized by progressive loss of upper and lower motor neurons. Advanced MRI techniques such as diffusion tensor imaging have shown great potential in capturing a common white matter pathology. However the sensitivity is variable and diffusion tensor imaging is not yet applicable to the routine clinical environment. Voxel-based morphometry (VBM) has revealed grey matter changes in ALS, but the bias-reducing algorithms inherent to traditional VBM are not optimized for the assessment of the white matter changes. We have developed a novel approach to white matter analysis, namely voxel-based intensitometry (VBI). High resolution T1-weighted MRI was acquired at 1.5 Tesla in 30 ALS patients and 37 age-matched healthy controls. VBI analysis at the group level revealed widespread white matter intensity increases in the corticospinal tracts, corpus callosum, sub-central, frontal and occipital white matter tracts and cerebellum. VBI results correlated with disease severity (ALSFRS-R) and patterns of cerebral involvement differed between bulbar- and limb-onset. VBI would be easily translatable to the routine clinical environment, and once optimized for individual analysis offers significant biomarker potential in ALS.

  18. Optimization of an Image-Based Talking Head System

    Directory of Open Access Journals (Sweden)

    Kang Liu

    2009-01-01

    Full Text Available This paper presents an image-based talking head system, which includes two parts: analysis and synthesis. The audiovisual analysis part creates a face model of a recorded human subject, which is composed of a personalized 3D mask as well as a large database of mouth images and their related information. The synthesis part generates natural looking facial animations from phonetic transcripts of text. A critical issue of the synthesis is the unit selection which selects and concatenates these appropriate mouth images from the database such that they match the spoken words of the talking head. Selection is based on lip synchronization and the similarity of consecutive images. The unit selection is refined in this paper, and Pareto optimization is used to train the unit selection. Experimental results of subjective tests show that most people cannot distinguish our facial animations from real videos.

  19. Learning-based deformable image registration for infant MR images in the first year of life.

    Science.gov (United States)

    Hu, Shunbo; Wei, Lifang; Gao, Yaozong; Guo, Yanrong; Wu, Guorong; Shen, Dinggang

    2017-01-01

    Many brain development studies have been devoted to investigating dynamic structural and functional changes in the first year of life. To quantitatively measure brain development in such a dynamic period, accurate image registration for different infant subjects with possibly large age gaps is in high demand. Although many state-of-the-art image registration methods have been proposed for young and elderly brain images, very few registration methods work for infant brain images acquired in the first year of life, because of (a) large anatomical changes due to fast brain development and (b) dynamic appearance changes due to white-matter myelination. To address these two difficulties, we propose a learning-based registration method to not only align the anatomical structures but also alleviate the appearance differences between two arbitrary infant MR images (with a large age gap) by leveraging a regression forest to predict both the initial displacement vector and appearance changes. Specifically, in the training stage, two regression models are trained separately, with (a) one model learning the relationship between local image appearance (of one development phase) and its displacement toward the template (of another development phase) and (b) another model learning the local appearance changes between the two brain development phases. Then, in the testing stage, to register a new infant image to the template, we first predict both its voxel-wise displacement and appearance changes using the two learned regression models. Since these initializations can alleviate significant appearance and shape differences between the new infant image and the template, a conventional registration method can then be used to refine the remaining registration. We apply our proposed registration method to align 24 infant subjects at five different time points (i.e., 2-week-old, 3-month-old, 6-month-old, 9-month-old, and 12-month-old), and achieve more accurate and robust registration

  20. Knowledge-based iterative model reconstruction: comparative image quality and radiation dose with a pediatric computed tomography phantom

    International Nuclear Information System (INIS)

    Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Kim, Woo Sun; Kim, In-One; Ha, Seongmin

    2016-01-01

    CT of pediatric phantoms can provide useful guidance to the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year anthropomorphic phantom at seven levels of radiation. We then reconstructed CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose4, levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolution and detectability using low-contrast targets, and subjective and objective spatial resolution using line pairs and wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise, contrast-to-noise and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose4 levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose4 level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed image quality comparable with filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by the reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed similar image quality to routine-dose filtered back-projection obtained at 3.64 mSv (100%) and half-dose iDose4 obtained at 1.81 mSv. (orig.)

  1. Knowledge-based iterative model reconstruction: comparative image quality and radiation dose with a pediatric computed tomography phantom.

    Science.gov (United States)

    Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Ha, Seongmin; Kim, Woo Sun; Kim, In-One

    2016-03-01

    CT of pediatric phantoms can provide useful guidance to the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year anthropomorphic phantom at seven levels of radiation. We then reconstructed CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose(4), levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolutions and detectability by low-contrast targets and subjective and objective spatial resolutions by the line pairs and wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise, contrast-to-noise and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose(4) levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose(4) level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed comparable image quality with filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed similar image quality to routine-dose filtered back-projection obtained at 3.64 mSv (100%), and half-dose iDose(4) obtained at 1.81 mSv.

  2. Content Based Medical Image Retrieval for Histopathological, CT and MRI Images

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-09-01

    Full Text Available A content-based approach is followed for medical images. The purpose of this study is to assess the stability of these methods for medical image retrieval. The methods used in color-based retrieval for histopathological images are the color co-occurrence matrix (CCM) and a histogram with meta features. For texture-based retrieval, the gray-level co-occurrence matrix (GLCM) and local binary patterns (LBP) were used. For shape-based retrieval, Canny edge detection and Otsu's method with a multivariable threshold were used. Texture- and shape-based retrieval were implemented using magnetic resonance (MR) images. The most remarkable characteristic of the article is its content-based approach for each medical imaging modality. Our efforts were focused on the initial visual search. In our experiments, the histogram with meta features in color-based retrieval for histopathological images shows a precision of 60% and recall of 30%, whereas the GLCM in texture-based retrieval for MR images shows a precision of 70% and recall of 20%. Shape-based retrieval for MR images shows a precision of 50% and recall of 25%. The retrieval results show that this simple approach is successful.

  3. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, with the least-squares estimation method employed to estimate the parameters. The given input query image is segmented into various regions according to the structure of the image. The color and texture features are extracted from each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is built on these features, form a vector called the feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images. The F-measure is applied to evaluate the performance of the proposed technique. The obtained results show that the proposed technique is comparable to other existing techniques.
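
    The fusion step described above amounts to a least-squares fit whose coefficients become the descriptor, followed by a Canberra-distance comparison. The sketch below illustrates that idea with NumPy and SciPy on synthetic per-region features; the paper's segmentation and colour/texture feature extraction are not reproduced, and all variable names and feature dimensions are illustrative assumptions.

    ```python
    # Minimal sketch: fuse per-region features by multiple linear regression and
    # compare the coefficient vectors with the Canberra distance.
    import numpy as np
    from scipy.spatial.distance import canberra

    def fused_feature_vector(color_feats, texture_feats):
        """Fit texture on color features by least squares; the estimated
        coefficients serve as the fused feature vector."""
        X = np.column_stack([np.ones(len(color_feats)), color_feats])  # add intercept
        beta, *_ = np.linalg.lstsq(X, texture_feats, rcond=None)       # least-squares fit
        return beta.ravel()

    # Hypothetical per-region feature matrices for a query and a target image
    rng = np.random.default_rng(0)
    q_color, q_tex = rng.random((12, 3)), rng.random(12)
    t_color, t_tex = rng.random((12, 3)), rng.random(12)

    fq = fused_feature_vector(q_color, q_tex)
    ft = fused_feature_vector(t_color, t_tex)
    print("Canberra distance between query and target:", canberra(fq, ft))
    ```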

  4. Fluorescence based molecular in vivo imaging

    International Nuclear Information System (INIS)

    Ebert, Bernd

    2008-01-01

    Molecular imaging is a modern research area that allows the in vivo study of the kinetics of molecular biological processes using appropriate probes and visualization methods. Apart from the injection of contrast media, the methodology may be regarded as non-invasive. To image molecular processes in vivo as accurately as possible, the probes used should not perturb the biological system too strongly. Contrast media, as an important part of molecular imaging, can contribute significantly to the understanding of molecular processes and to the development of tailored diagnostics and therapies. For more than 15 years, PTB has been developing optical imaging systems that can be used for fluorescence-based visualization of tissue phantoms and small animal models, for the localization of tumors and their precursors, and for the early recognition of inflammatory processes in clinical trials. Because cellular changes occur in many diseases, molecular imaging may be important for the early diagnosis of chronic inflammatory diseases. Fluorescent dyes can be used as unspecific or specific contrast media, which allows enhanced detection sensitivity

  5. [18F]DPA 714 PET Imaging Reveals Global Neuroinflammation in Zika Virus Infected Mice

    Science.gov (United States)

    2017-09-12

    [18F]DPA-714 PET Imaging Reveals Global Neuroinflammation in Zika Virus-Infected Mice. Kyle Kuszpit, Bradley S. Hollidge, Xiankun Zeng, et al. Running head: PET Imaging of Zika Virus-Induced Neuroinflammation. Keywords: Zika virus, animal models. The record concerns infection with neurotropic viruses and the evaluation of therapeutics being developed for the treatment of infectious diseases.

  6. a Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation

    Science.gov (United States)

    Hu, J.; Lu, L.; Xu, J.; Zhang, J.

    2017-09-01

    For island coastline segmentation, this paper proposes a fast segmentation algorithm for the Chan-Vese (C-V) model based on exponential image sequence generation. An exponential multi-scale C-V model with level-set inheritance and boundary inheritance is developed. The main contributions are as follows: 1) the problems of "holes" and "gaps" during coastline extraction are solved through small-scale shrinkage, low-pass filtering and area-based sorting of regions; 2) the initial signed distance function (SDF) and level set are given by Otsu segmentation, exploiting the difference in SAR reflection between land and sea, so that the initialization lies close to the coastline; 3) the computational cost of the transition between different scales is reduced by inheriting the SDF and the level set. Experimental results show that the method accelerates the formation of the initial level set, shortens the coastline extraction time, removes non-coastline bodies, and improves the identification precision of the main coastline, automating the coastline segmentation process.
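
    As a rough illustration of the core ingredients named above (an Otsu-derived initialization feeding a Chan-Vese level-set evolution), the following sketch uses scikit-image on a sample grayscale image; the exponential multi-scale sequence, SDF and level-set inheritance, and SAR-specific preprocessing of the paper are not reproduced, and the parameter values are illustrative.

    ```python
    # Minimal sketch: Otsu threshold provides the initial level set for Chan-Vese.
    import numpy as np
    from skimage import data, img_as_float
    from skimage.filters import threshold_otsu
    from skimage.segmentation import chan_vese

    image = img_as_float(data.coins())           # stand-in for a SAR coastline image

    # Otsu threshold gives a rough foreground/background split used as initialization
    thr = threshold_otsu(image)
    init = np.where(image > thr, 1.0, -1.0)      # signed initial level-set function

    # Chan-Vese (C-V) evolution starting from the Otsu-derived level set
    # (the keyword is max_num_iter in recent scikit-image releases)
    segmentation = chan_vese(image, mu=0.25, lambda1=1, lambda2=1,
                             tol=1e-3, max_num_iter=200, dt=0.5,
                             init_level_set=init)
    print("Segmented foreground fraction:", segmentation.mean())
    ```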

  7. Structural assessment of aerospace components using image processing algorithms and Finite Element models

    DEFF Research Database (Denmark)

    Stamatelos, Dimtrios; Kappatos, Vassilios

    2017-01-01

    Purpose – This paper presents the development of an advanced structural assessment approach for aerospace components (metallic and composites). This work focuses on developing an automatic image processing methodology based on Non Destructive Testing (NDT) data and numerical models, for predicting...... the residual strength of these components. Design/methodology/approach – An image processing algorithm, based on the threshold method, has been developed to process and quantify the geometric characteristics of damages. Then, a parametric Finite Element (FE) model of the damaged component is developed based...... on the inputs acquired from the image processing algorithm. The analysis of the metallic structures is employing the Extended FE Method (XFEM), while for the composite structures the Cohesive Zone Model (CZM) technique with Progressive Damage Modelling (PDM) is used. Findings – The numerical analyses...

  8. UAV remote sensing atmospheric degradation image restoration based on multiple scattering APSF estimation

    Science.gov (United States)

    Qiu, Xiang; Dai, Ming; Yin, Chuan-li

    2017-09-01

    Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images suffer from low contrast, complex texture and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmospheric point spread function (APSF) estimation to recover the remote sensing image. According to Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. The APSF blur kernel is then estimated using L0-norm sparse priors on the gradient and the dark channel, and the fast Fourier transform is used to recover the original clear image by Wiener filtering. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes atmospheric degradation, preserves image detail and increases the quality evaluation indexes.
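
    The final restoration step, Wiener deconvolution with a known blur kernel, can be sketched with scikit-image as below; the APSF estimation from L0 gradient and dark-channel priors is not reproduced here, so a simple Gaussian kernel stands in for the estimated APSF and the balance parameter is an illustrative choice.

    ```python
    # Minimal sketch: blur a test image with a known kernel, then recover it with
    # Wiener deconvolution (the kernel stands in for the estimated APSF).
    import numpy as np
    from scipy.signal import convolve2d
    from skimage import data, img_as_float
    from skimage.restoration import wiener

    def gaussian_psf(size=9, sigma=2.0):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return k / k.sum()

    clean = img_as_float(data.camera())
    psf = gaussian_psf()                               # stand-in for the estimated APSF
    blurred = convolve2d(clean, psf, mode="same", boundary="symm")

    # balance trades off deconvolution sharpness against noise amplification
    restored = wiener(blurred, psf, balance=0.01)
    print("RMSE blurred:", np.sqrt(((blurred - clean) ** 2).mean()),
          "RMSE restored:", np.sqrt(((restored - clean) ** 2).mean()))
    ```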

  9. Tracking boundary movement and exterior shape modelling in lung EIT imaging

    International Nuclear Information System (INIS)

    Biguri, A; Soleimani, M; Grychtol, B; Adler, A

    2015-01-01

    Electrical impedance tomography (EIT) has shown significant promise for lung imaging. One key challenge for EIT in this application is the movement of electrodes during breathing, which introduces artefacts in reconstructed images. Various approaches have been proposed to compensate for electrode movement, but no comparison of these approaches is available. This paper analyses boundary model mismatch and electrode movement in lung EIT. The aim is to evaluate the extent to which various algorithms tolerate movement, and to determine if a patient specific model is required for EIT lung imaging. Movement data are simulated from a CT-based model, and image analysis is performed using quantitative figures of merit. The electrode movement is modelled based on expected values of chest movement and an extended Jacobian method is proposed to make use of exterior boundary tracking. Results show that a dynamical boundary tracking is the most robust method against any movement, but is computationally more expensive. Simultaneous electrode movement and conductivity reconstruction algorithms show increased robustness compared to only conductivity reconstruction. The results of this comparative study can help develop a better understanding of the impact of shape model mismatch and electrode movement in lung EIT. (paper)

  10. Tracking boundary movement and exterior shape modelling in lung EIT imaging.

    Science.gov (United States)

    Biguri, A; Grychtol, B; Adler, A; Soleimani, M

    2015-06-01

    Electrical impedance tomography (EIT) has shown significant promise for lung imaging. One key challenge for EIT in this application is the movement of electrodes during breathing, which introduces artefacts in reconstructed images. Various approaches have been proposed to compensate for electrode movement, but no comparison of these approaches is available. This paper analyses boundary model mismatch and electrode movement in lung EIT. The aim is to evaluate the extent to which various algorithms tolerate movement, and to determine if a patient specific model is required for EIT lung imaging. Movement data are simulated from a CT-based model, and image analysis is performed using quantitative figures of merit. The electrode movement is modelled based on expected values of chest movement and an extended Jacobian method is proposed to make use of exterior boundary tracking. Results show that a dynamical boundary tracking is the most robust method against any movement, but is computationally more expensive. Simultaneous electrode movement and conductivity reconstruction algorithms show increased robustness compared to only conductivity reconstruction. The results of this comparative study can help develop a better understanding of the impact of shape model mismatch and electrode movement in lung EIT.

  11. Experimental Component Characterization, Monte-Carlo-Based Image Generation and Source Reconstruction for the Neutron Imaging System of the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, C A; Moran, M J

    2007-08-21

    The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are: EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. The simulations show that all LOS

  12. Spatiotemporal processing of gated cardiac SPECT images using deformable mesh modeling

    International Nuclear Information System (INIS)

    Brankov, Jovan G.; Yang Yongyi; Wernick, Miles N.

    2005-01-01

    In this paper we present a spatiotemporal processing approach, based on deformable mesh modeling, for noise reduction in gated cardiac single-photon emission computed tomography images. Because of the partial volume effect (PVE), clinical cardiac-gated perfusion images exhibit a phenomenon known as brightening--the myocardium appears to become brighter as the heart wall thickens. Although brightening is an artifact, it serves as an important diagnostic feature for assessment of wall thickening in clinical practice. Our proposed processing algorithm aims to preserve this important diagnostic feature while reducing the noise level in the images. The proposed algorithm is based on the use of a deformable mesh for modeling the cardiac motion in a gated cardiac sequence, based on which the images are processed by smoothing along space-time trajectories of object points while taking into account the PVE. Our experiments demonstrate that the proposed algorithm can yield significantly more-accurate results than several existing methods

  13. IMAGE SEGMENTATION BASED ON MARKOV RANDOM FIELD AND WATERSHED TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    This paper presents a method that incorporates Markov random fields (MRF), watershed segmentation and merging techniques for performing image segmentation and edge detection tasks. The MRF is used to obtain an initial estimate of the regions in the image under processing; in the MRF model, the gray level x at pixel location i in an image X depends on the gray levels of the neighboring pixels. The process needs an initial segmented result, which is obtained by K-means clustering with the minimum-distance rule; the region process is then modeled by the MRF to obtain an image containing regions of different intensity. Starting from this, the gradient of that image is calculated and a watershed technique is employed. The MRF stage yields an image with distinct intensity regions that carries all the edge and region information; the watershed algorithm then improves the segmentation by superimposing a closed, accurate boundary on each region. After all pixels of the segmented regions have been processed, a map of primitive regions with edges is generated. Finally, a merging process based on averaged mean values is employed. The final segmentation and edge detection result is one closed boundary per actual region in the image.
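
    A minimal sketch of the initialization and watershed stages is given below, using scikit-learn and scikit-image; the MRF relabelling and the final region-merging step are not reproduced, and the seeding tolerance is an illustrative assumption rather than a value from the paper.

    ```python
    # Minimal sketch: K-means intensity clustering seeds a gradient-based watershed.
    import numpy as np
    from sklearn.cluster import KMeans
    from skimage import data, img_as_float
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    image = img_as_float(data.coins())

    # K-means on gray levels gives the initial region estimate
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(image.reshape(-1, 1))
    labels = km.labels_.reshape(image.shape)
    centers = km.cluster_centers_.ravel()

    # Seed the watershed only with pixels whose intensity is close to a cluster
    # centre; ambiguous pixels are assigned by flooding the gradient image
    confident = np.abs(image - centers[labels]) < 0.05
    markers = np.where(confident, labels + 1, 0)

    segmented = watershed(sobel(image), markers=markers)
    print("Number of regions:", len(np.unique(segmented)))
    ```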

  14. Biomechanical modeling constrained surface-based image registration for prostate MR guided TRUS biopsy

    NARCIS (Netherlands)

    Ven, W.J.M. van de; Hu, Y.; Barentsz, J.O.; Karssemeijer, N.; Barratt, D.; Huisman, H.J.

    2015-01-01

    Adding magnetic resonance (MR)-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected on ultrasound (US) by using MR-US registration. A common approach is to use surface-based

  15. Imaging-genomics reveals driving pathways of MRI derived volumetric tumor phenotype features in Glioblastoma

    International Nuclear Information System (INIS)

    Grossmann, Patrick; Gutman, David A.; Dunn, William D. Jr; Holder, Chad A.; Aerts, Hugo J. W. L.

    2016-01-01

    Glioblastoma (GBM) tumors exhibit strong phenotypic differences that can be quantified using magnetic resonance imaging (MRI), but the underlying biological drivers of these imaging phenotypes remain largely unknown. An imaging-genomics analysis was performed to reveal the mechanistic associations between MRI-derived quantitative volumetric tumor phenotype features and molecular pathways. One hundred forty-one patients with presurgery MRI and survival data were included in our analysis. Volumetric features were defined, including the necrotic core (NE), contrast-enhancement (CE), abnormal tumor volume assessed by post-contrast T1w (tumor bulk or TB), tumor-associated edema based on T2-FLAIR (ED), and total tumor volume (TV), as well as ratios of these tumor components. Based on gene expression where available (n = 91), pathway associations were assessed using a preranked gene set enrichment analysis. These results were put into the context of molecular subtypes in GBM and prognostication. Volumetric features were significantly associated with diverse sets of biological processes (FDR < 0.05). While NE and TB were enriched for immune response pathways and apoptosis, CE was associated with signal transduction and protein folding processes. ED was mainly enriched for homeostasis and cell cycling pathways. ED was also the strongest predictor of molecular GBM subtypes (AUC = 0.61). CE was the strongest predictor of overall survival (C-index = 0.6; Noether test, p = 4×10⁻⁴). GBM volumetric features extracted from MRI are significantly enriched for information about the biological state of a tumor that impacts patient outcomes. Clinical decision-support systems could exploit this information to develop personalized treatment strategies on the basis of noninvasive imaging. The online version of this article (doi:10.1186/s12885-016-2659-5) contains supplementary material, which is available to authorized users.

  16. Anomaly detection for medical images based on a one-class classification

    Science.gov (United States)

    Wei, Qi; Ren, Yinhao; Hou, Rui; Shi, Bibo; Lo, Joseph Y.; Carin, Lawrence

    2018-02-01

    Detecting an anomaly such as a malignant tumor or a nodule from medical images including mammogram, CT or PET images is still an ongoing research problem drawing a lot of attention with applications in medical diagnosis. A conventional way to address this is to learn a discriminative model using training datasets of negative and positive samples. The learned model can be used to classify a testing sample into a positive or negative class. However, in medical applications, the high unbalance between negative and positive samples poses a difficulty for learning algorithms, as they will be biased towards the majority group, i.e., the negative one. To address this imbalanced data issue as well as leverage the huge amount of negative samples, i.e., normal medical images, we propose to learn an unsupervised model to characterize the negative class. To make the learned model more flexible and extendable for medical images of different scales, we have designed an autoencoder based on a deep neural network to characterize the negative patches decomposed from large medical images. A testing image is decomposed into patches and then fed into the learned autoencoder to reconstruct these patches themselves. The reconstruction error of one patch is used to classify this patch into a binary class, i.e., a positive or a negative one, leading to a one-class classifier. The positive patches highlight the suspicious areas containing anomalies in a large medical image. The proposed method has been tested on InBreast dataset and achieves an AUC of 0.84. The main contribution of our work can be summarized as follows. 1) The proposed one-class learning requires only data from one class, i.e., the negative data; 2) The patch-based learning makes the proposed method scalable to images of different sizes and helps avoid the large scale problem for medical images; 3) The training of the proposed deep convolutional neural network (DCNN) based auto-encoder is fast and stable.
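
    The one-class logic described above (train a reconstruction model on normal patches only and flag patches with large reconstruction error) can be sketched as follows; a shallow scikit-learn MLP autoencoder and synthetic patches stand in for the paper's deep convolutional autoencoder and mammography patches, and the decision threshold is an illustrative choice.

    ```python
    # Minimal sketch: reconstruction error from an autoencoder trained only on
    # "normal" patches acts as a one-class anomaly score.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    patch_dim = 8 * 8
    normal = rng.normal(0.0, 0.1, size=(2000, patch_dim))       # negative (normal) patches
    anomalous = rng.normal(0.8, 0.3, size=(20, patch_dim))       # unseen positive patches

    # Bottleneck MLP trained to reproduce its input (a shallow autoencoder)
    ae = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    ae.fit(normal, normal)

    def recon_error(x):
        return np.mean((ae.predict(x) - x) ** 2, axis=1)

    threshold = np.percentile(recon_error(normal), 99)           # set from normal data only
    print("Flagged anomalies:", int(np.sum(recon_error(anomalous) > threshold)),
          "of", len(anomalous))
    ```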

  17. Unified and Modular Modeling and Functional Verification Framework of Real-Time Image Signal Processors

    Directory of Open Access Journals (Sweden)

    Abhishek Jain

    2016-01-01

    Full Text Available In the VLSI industry, image signal processing algorithms are developed and evaluated using software models before implementation of RTL and firmware. After the finalization of the algorithm, software models are used as a golden reference model for the image signal processor (ISP) RTL and firmware development. In this paper, we describe a unified and modular modeling framework for image signal processing algorithms used for different applications such as ISP algorithm development, reference for hardware (HW) implementation, reference for firmware (FW) implementation, and bit-true certification. The universal verification methodology (UVM)-based functional verification framework for image signal processors using software reference models is described. Further, IP-XACT-based tools for automatic generation of functional verification environment files and model map files are described. The proposed framework is developed both with a host interface and with a core using the virtual register interface (VRI) approach. This modeling and functional verification framework is used in real-time image signal processing applications including cellphones, smart cameras, and image compression. The main motivation behind this work is to propose an efficient, reusable, and automated framework for modeling and verification of image signal processor (ISP) designs. The proposed framework shows better results, and significant improvement is observed in product verification time, verification cost, and quality of the designs.

  18. Detail Enhancement for Infrared Images Based on Propagated Image Filter

    Directory of Open Access Journals (Sweden)

    Yishu Peng

    2016-01-01

    Full Text Available For displaying high-dynamic-range images acquired by thermal camera systems, 14-bit raw infrared data must be mapped to 8-bit gray values. This paper presents a new method for detail enhancement of infrared images that displays the image with satisfactory contrast and brightness, rich detail information, and no artifacts caused by the processing. We first adopt a propagated image filter to smooth the input image and separate it into a base layer and a detail layer. Then, we refine the base layer by using modified histogram projection for compression. Meanwhile, the adaptive weights derived from the layer decomposition are used as strict gain control for the detail layer. The final display result is obtained by recombining the two modified layers. Experimental results on both cooled and uncooled infrared data verify that the proposed method outperforms methods based on log-power histogram modification and bilateral-filter-based detail enhancement in both detail enhancement and visual effect.
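
    The base/detail layer-decomposition idea can be sketched as below; a bilateral filter stands in for the propagated image filter, a plain linear rescale stands in for the modified histogram projection, and the detail gain and filter parameters are illustrative assumptions, so this is a sketch of the general pipeline rather than the paper's method.

    ```python
    # Minimal sketch: edge-preserving base/detail split of 14-bit IR data, compress
    # the base layer, boost the detail layer, and recombine into an 8-bit display image.
    import numpy as np
    from skimage.restoration import denoise_bilateral

    def enhance_ir(raw14, detail_gain=3.0):
        img = raw14.astype(np.float64) / (2 ** 14 - 1)            # normalise 14-bit input
        base = denoise_bilateral(img, sigma_color=0.05, sigma_spatial=5)
        detail = img - base                                        # high-frequency layer
        base8 = (base - base.min()) / (base.max() - base.min() + 1e-12)  # compress base
        out = np.clip(base8 + detail_gain * detail, 0, 1)
        return (out * 255).astype(np.uint8)

    raw = (np.random.default_rng(0).random((128, 128)) * (2 ** 14 - 1)).astype(np.uint16)
    out = enhance_ir(raw)
    print(out.dtype, out.shape, int(out.min()), int(out.max()))
    ```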

  19. Image-based Modeling of Biofilm-induced Calcium Carbonate Precipitation

    Science.gov (United States)

    Connolly, J. M.; Rothman, A.; Jackson, B.; Klapper, I.; Cunningham, A. B.; Gerlach, R.

    2013-12-01

    Pore scale biological processes in the subsurface environment are important to understand in relation to many engineering applications including environmental contaminant remediation, geologic carbon sequestration, and petroleum production. Specifically, biofilm induced calcium carbonate precipitation has been identified as an attractive option to reduce permeability in a lasting way in the subsurface. This technology may be able to replace typical cement-based grouting in some circumstances; however, pore-scale processes must be better understood for it to be applied in a controlled manner. The work presented will focus on efforts to observe biofilm growth and ureolysis-induced mineral precipitation in micro-fabricated flow cells combined with finite element modelling as a tool to predict local chemical gradients of interest (see figure). We have been able to observe this phenomenon over time using a novel model organism that is able to hydrolyse urea and express a fluorescent protein allowing for non-invasive observation over time with confocal microscopy. The results of this study show the likely existence of a wide range of local saturation indices even in a small (1 cm length scale) experimental system. Interestingly, the locations of high predicted index do not correspond to the locations of higher precipitation density, highlighting the need for further understanding. Figure 1 - A micro-fabricated flow cell containing biofilm-induced calcium carbonate precipitation. (A) Experimental results: Active biofilm is in green and dark circles are calcium carbonate crystals. Note the channeling behavior in the top of the image, leaving a large hydraulically inactive area in the biofilm mass. (B) Finite element model: The prediction of relative saturation of calcium carbonate (as calcite). Fluid enters the system at a low saturation state (blue) but areas of high supersaturation (red) are predicted within the hydraulically inactive area in the biofilm. If only effluent

  20. Imaging cerebral haemorrhage with magnetic induction tomography: numerical modelling.

    Science.gov (United States)

    Zolgharni, M; Ledger, P D; Armitage, D W; Holder, D S; Griffiths, H

    2009-06-01

    Magnetic induction tomography (MIT) is a new electromagnetic imaging modality which has the potential to image changes in the electrical conductivity of the brain due to different pathologies. In this study the feasibility of detecting haemorrhagic cerebral stroke with a 16-channel MIT system operating at 10 MHz was investigated. The finite-element method combined with a realistic, multi-layer, head model comprising 12 different tissues, was used for the simulations in the commercial FE package, Comsol Multiphysics. The eddy-current problem was solved and the MIT signals computed for strokes of different volumes occurring at different locations in the brain. The results revealed that a large, peripheral stroke (volume 49 cm³) produced phase changes that would be detectable with our currently achievable instrumentation phase noise level (17 millidegrees) in 70 (27%) of the 256 exciter/sensor channel combinations. However, reconstructed images showed that a lower noise level than this, of 1 millidegree, was necessary to obtain good visualization of the strokes. The simulated MIT measurements were compared with those from an independent transmission-line-matrix model in order to give confidence in the results.

  1. PROCESSING OF UAV BASED RANGE IMAGING DATA TO GENERATE DETAILED ELEVATION MODELS OF COMPLEX NATURAL STRUCTURES

    Directory of Open Access Journals (Sweden)

    T. K. Kohoutek

    2012-07-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) are more and more used in civil areas like geomatics. Autonomous navigated platforms have a great flexibility in flying and manoeuvring in complex environments to collect remote sensing data. In contrast to standard technologies such as aerial manned platforms (airplanes and helicopters), UAVs are able to fly closer to the object and in small-scale areas of high-risk situations such as landslides, volcano and earthquake areas and floodplains. Thus, UAVs are sometimes the only practical alternative in areas where access is difficult and where no manned aircraft is available or even no flight permission is given. Furthermore, compared to terrestrial platforms, UAVs are not limited to specific view directions and could overcome occlusions from trees, houses and terrain structures. Equipped with image sensors and/or laser scanners they are able to provide elevation models, rectified images, textured 3D-models and maps. In this paper we will describe a UAV platform, which can carry a range imaging (RIM) camera including power supply and data storage for the detailed mapping and monitoring of complex structures, such as alpine riverbed areas. The UAV platform NEO from Swiss UAV was equipped with the RIM camera CamCube 2.0 by PMD Technologies GmbH to capture the surface structures. Its navigation system includes an autopilot. To validate the UAV-trajectory a 360° prism was installed and tracked by a total station. Within the paper a workflow for the processing of UAV-RIM data is proposed, which is based on the processing of differential GNSS data in combination with the acquired range images. Subsequently, the obtained results for the trajectory are compared and verified with a track of a UAV (Falcon 8, Ascending Technologies) carried out with a total station simultaneously to the GNSS data acquisition. The results showed that the UAV's position using differential GNSS could be determined in the centimetre to the decimetre

  2. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    Full Text Available The image segmentation method based on two-dimensional histogram segments the image according to the thresholds of the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainties when labeling the pixels around the threshold, the hard-decision method can easily get the wrong segmentation result. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership function to model the uncertainties on each color channel of the color image. Then, we segment the color image according to the fuzzy reasoning. The experiment results show that our proposed method can get better segmentation results both on the natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for the information extraction of optical remote sensing images and polarization SAR images.

  3. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    Science.gov (United States)

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.

  4. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    Science.gov (United States)

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye

    2017-10-01

    Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task in order to quantify arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side-branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Robust Image Regression Based on the Extended Matrix Variate Power Exponential Distribution of Dependent Noise.

    Science.gov (United States)

    Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu

    2017-09-01

    Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix must be stretched into a vector and each element is assumed to be independently corrupted, which ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate that follows the extended matrix variate power exponential distribution. This distribution has heavy tails and can describe a matrix pattern of l×m-dimensional observations that are not independent. The paper reveals the essence of the proposed distribution: it alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with Lq regularization. The alternating direction method of multipliers is applied to solve this model. To obtain a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm is much more robust than some existing regression-based methods.

  6. Fluid flow in porous media using image-based modelling to parametrize Richards' equation.

    Science.gov (United States)

    Cooper, L J; Daly, K R; Hallett, P D; Naveed, M; Koebernick, N; Bengough, A G; George, T S; Roose, T

    2017-11-01

    The parameters in Richards' equation are usually calculated from experimentally measured values of the soil-water characteristic curve and saturated hydraulic conductivity. The complex pore structures that often occur in porous media complicate such parametrization due to hysteresis between wetting and drying and the effects of tortuosity. Rather than estimate the parameters in Richards' equation from these indirect measurements, image-based modelling is used to investigate the relationship between the pore structure and the parameters. A three-dimensional, X-ray computed tomography image stack of a soil sample with voxel resolution of 6 μm has been used to create a computational mesh. The Cahn-Hilliard-Stokes equations for two-fluid flow, in this case water and air, were applied to this mesh and solved using the finite-element method in COMSOL Multiphysics. The upscaled parameters in Richards' equation are then obtained via homogenization. The effect on the soil-water retention curve due to three different contact angles, 0°, 20° and 60°, was also investigated. The results show that the pore structure affects the properties of the flow on the large scale, and different contact angles can change the parameters for Richards' equation.

  7. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured or non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets...

  8. Simultaneous observation of auroral substorm onset in Polar satellite global images and ground-based all-sky images

    Science.gov (United States)

    Ieda, Akimasa; Kauristie, Kirsti; Nishimura, Yukitoshi; Miyashita, Yukinaga; Frey, Harald U.; Juusola, Liisa; Whiter, Daniel; Nosé, Masahito; Fillingim, Matthew O.; Honary, Farideh; Rogers, Neil C.; Miyoshi, Yoshizumi; Miura, Tsubasa; Kawashima, Takahiro; Machida, Shinobu

    2018-05-01

    Substorm onset has originally been defined as a longitudinally extended sudden auroral brightening (Akasofu initial brightening: AIB) followed a few minutes later by an auroral poleward expansion in ground-based all-sky images (ASIs). In contrast, such clearly marked two-stage development has not been evident in satellite-based global images (GIs). Instead, substorm onsets have been identified as localized sudden brightenings that expand immediately poleward. To resolve these differences, optical substorm onset signatures in GIs and ASIs are compared in this study for a substorm that occurred on December 7, 1999. For this substorm, the Polar satellite ultraviolet global imager was operated with a fixed-filter (170 nm) mode, enabling a higher time resolution (37 s) than usual to resolve the possible two-stage development. These data were compared with 20-s resolution green-line (557.7 nm) ASIs at Muonio in Finland. The ASIs revealed the AIB at 2124:50 UT and the subsequent poleward expansion at 2127:50 UT, whereas the GIs revealed only an onset brightening that started at 2127:49 UT. Thus, the onset in the GIs was delayed relative to the AIB and in fact agreed with the poleward expansion in the ASIs. The fact that the AIB was not evident in the GIs may be attributed to the limited spatial resolution of GIs for thin auroral arc brightenings. The implications of these results for the definition of substorm onset are discussed herein.

  9. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    Science.gov (United States)

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function for the Wiener filter is employed to efficiently remove blur in the sparse component, while Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. The recovered CT image sequence is then obtained by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed in numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was the best for heavily noisy CT images. PMID:24023764
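
    A minimal sketch of the sparse plus low-rank split is given below, using a simple alternating singular-value-thresholding scheme in the spirit of RPCA; the Wiener-filter deblurring of each component and the LADMAP/GoDec variants compared in the paper are not reproduced, and the synthetic test matrix is purely illustrative.

    ```python
    # Minimal sketch: split a data matrix D (each column = one vectorized frame)
    # into a low-rank component L and a sparse component S.
    import numpy as np

    def shrink(X, tau):                 # elementwise soft-thresholding
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svd_threshold(X, tau):          # singular-value soft-thresholding
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(shrink(s, tau)) @ Vt

    def rpca(D, n_iter=100, tol=1e-6):
        lam = 1.0 / np.sqrt(max(D.shape))
        mu = D.size / (4.0 * np.abs(D).sum() + 1e-12)
        L = np.zeros_like(D)
        S = np.zeros_like(D)
        for _ in range(n_iter):
            L = svd_threshold(D - S, 1.0 / mu)
            S = shrink(D - L, lam / mu)
            if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
                break
        return L, S

    rng = np.random.default_rng(0)
    low_rank = np.outer(rng.random(64), rng.random(20))            # shared background
    sparse = (rng.random((64, 20)) < 0.05) * rng.random((64, 20))  # frame-specific detail
    L, S = rpca(low_rank + sparse)
    print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-3),
          " nnz(S) =", int((np.abs(S) > 1e-3).sum()))
    ```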

  10. Improving fault image by determination of optimum seismic survey parameters using ray-based modeling

    Science.gov (United States)

    Saffarzadeh, Sadegh; Javaherian, Abdolrahim; Hasani, Hossein; Talebi, Mohammad Ali

    2018-06-01

    In complex structures such as faults, salt domes and reefs, specifying the survey parameters is more challenging and critical owing to the complicated wave field behavior involved in such structures. In the petroleum industry, detecting faults has become crucial for reservoir potential where faults can act as traps for hydrocarbon. In this regard, seismic survey modeling is employed to construct a model close to the real structure, and obtain very realistic synthetic seismic data. Seismic modeling software, the velocity model and parameters pre-determined by conventional methods enable a seismic survey designer to run a shot-by-shot virtual survey operation. A reliable velocity model of structures can be constructed by integrating the 2D seismic data, geological reports and the well information. The effects of various survey designs can be investigated by the analysis of illumination maps and flower plots. Also, seismic processing of the synthetic data output can describe the target image using different survey parameters. Therefore, seismic modeling is one of the most economical ways to establish and test the optimum acquisition parameters to obtain the best image when dealing with complex geological structures. The primary objective of this study is to design a proper 3D seismic survey orientation to achieve fault zone structures through ray-tracing seismic modeling. The results prove that a seismic survey designer can enhance the image of fault planes in a seismic section by utilizing the proposed modeling and processing approach.

  11. Perona Malik anisotropic diffusion model using Peaceman Rachford scheme on digital radiographic image

    International Nuclear Information System (INIS)

    Halim, Suhaila Abd; Razak, Rohayu Abd; Ibrahim, Arsmah; Manurung, Yupiter HP

    2014-01-01

    In image processing, it is important to remove noise without affecting the image structure as well as preserving all the edges. Perona Malik Anisotropic Diffusion (PMAD) is a PDE-based model which is suitable for image denoising and edge detection problems. In this paper, the Peaceman Rachford scheme is applied on PMAD to remove unwanted noise as the scheme is efficient and unconditionally stable. The capability of the scheme to remove noise is evaluated on several digital radiography weld defect images computed using MATLAB R2009a. Experimental results obtained show that the Peaceman Rachford scheme improves the image quality substantially well based on the Peak Signal to Noise Ratio (PSNR). The Peaceman Rachford scheme used in solving the PMAD model successfully removes unwanted noise in digital radiographic image

  12. Perona Malik anisotropic diffusion model using Peaceman Rachford scheme on digital radiographic image

    Energy Technology Data Exchange (ETDEWEB)

    Halim, Suhaila Abd; Razak, Rohayu Abd; Ibrahim, Arsmah [Center of Mathematics Studies, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, 40450 Shah Alam. Selangor DE (Malaysia); Manurung, Yupiter HP [Advanced Manufacturing Technology Excellence Center (AMTEx), Faculty of Mechanical Engineering, Universiti Teknologi MARA, 40450 Shah Alam. Selangor DE (Malaysia)

    2014-06-19

    In image processing, it is important to remove noise without affecting the image structure as well as preserving all the edges. Perona Malik Anisotropic Diffusion (PMAD) is a PDE-based model which is suitable for image denoising and edge detection problems. In this paper, the Peaceman Rachford scheme is applied on PMAD to remove unwanted noise as the scheme is efficient and unconditionally stable. The capability of the scheme to remove noise is evaluated on several digital radiography weld defect images computed using MATLAB R2009a. Experimental results obtained show that the Peaceman Rachford scheme improves the image quality substantially well based on the Peak Signal to Noise Ratio (PSNR). The Peaceman Rachford scheme used in solving the PMAD model successfully removes unwanted noise in digital radiographic image.
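
    A minimal sketch of Perona-Malik anisotropic diffusion is given below, using the simple explicit scheme; the unconditionally stable Peaceman-Rachford (ADI) discretisation evaluated in these two records is not reproduced, so the time step must stay small, and the parameter values are illustrative.

    ```python
    # Minimal sketch: explicit Perona-Malik diffusion with the exponential
    # edge-stopping function g(|∇u|) = exp(-(|∇u|/kappa)^2).
    import numpy as np

    def perona_malik(img, n_iter=50, kappa=0.1, dt=0.2):
        u = img.astype(np.float64).copy()
        for _ in range(n_iter):
            # differences to the four neighbours
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u,  1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u,  1, axis=1) - u
            # edge-stopping coefficients: small near strong edges, ~1 in flat areas
            cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
            ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
            u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
        return u

    noisy = np.clip(np.random.default_rng(0).normal(0.5, 0.1, (128, 128)), 0, 1)
    print("std before/after:", noisy.std(), perona_malik(noisy).std())
    ```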

  13. Monte Carlo simulation of grating-based neutron phase contrast imaging at CPHS

    International Nuclear Information System (INIS)

    Zhang Ran; Chen Zhiqiang; Huang Zhifeng; Xiao Yongshun; Wang Xuewu; Wie Jie; Loong, C.-K.

    2011-01-01

    Since the launching of the Compact Pulsed Hadron Source (CPHS) project of Tsinghua University in 2009, works have begun on the design and engineering of an imaging/radiography instrument for the neutron source provided by CPHS. The instrument will perform basic tasks such as transmission imaging and computerized tomography. Additionally, we include in the design the utilization of coded-aperture and grating-based phase contrast methodology, as well as the options of prompt gamma-ray analysis and neutron-energy selective imaging. Previously, we had implemented the hardware and data-analysis software for grating-based X-ray phase contrast imaging. Here, we investigate Geant4-based Monte Carlo simulations of neutron refraction phenomena and then model the grating-based neutron phase contrast imaging system according to the classic-optics-based method. The simulated experimental results of the retrieving phase shift gradient information by five-step phase-stepping approach indicate the feasibility of grating-based neutron phase contrast imaging as an option for the cold neutron imaging instrument at the CPHS.

  14. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    Science.gov (United States)

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. The maximum a posteriori (MAP) estimation and the maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.

  15. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging

    Directory of Open Access Journals (Sweden)

    Shuanghui Zhang

    2016-04-01

    Full Text Available This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. The maximum a posteriori (MAP) estimation and the maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.

  16. Multimodality Imaging with Silica-Based Targeted Nanoparticle Platforms

    International Nuclear Information System (INIS)

    Lewis, Jason S.

    2012-01-01

    Objectives: To synthesize and characterize a C-Dot silica-based nanoparticle containing 'clickable' groups for the subsequent attachment of targeting moieties (e.g., peptides) and multiple contrast agents (e.g., radionuclides with high specific activity) (1,2). These new constructs will be tested in suitable tumor models in vitro and in vivo to ensure maintenance of target-specificity and high specific activity. Methods: Cy5 dye molecules are cross-linked to a silica precursor which is reacted to form a dye-rich core particle. This core is then encapsulated in a layer of pure silica to create the core-shell C-Dot (Figure 1) (2). A 'click' chemistry approach has been used to functionalize the silica shell with radionuclides conferring high contrast and specific activity (e.g. 64Cu and 89Zr) and peptides for tumor targeting (e.g. cRGD and octreotate) (3). Based on the selective Diels-Alder reaction between tetrazine and norbornene, the reaction is bioorthogonal, high-yielding, rapid, and water-compatible. This radiolabeling approach has already been employed successfully with both short peptides (e.g. octreotate) and antibodies (e.g. trastuzumab) as model systems for the ultimate labeling of the nanoparticles (1). Results: PEGylated C-Dots with a Cy5 core and labeled with tetrazine have been synthesized (d = 55 nm, zeta potential = -3 mV) reliably and reproducibly and have been shown to be stable under physiological conditions for up to 1 month. Characterization of the nanoparticles revealed that the immobilized Cy5 dye within the C-Dots exhibited fluorescence intensities over twice that of the fluorophore alone. The nanoparticles were successfully radiolabeled with Cu-64. Efforts toward the conjugation of targeting peptides (e.g. cRGD) are underway. In vitro stability, specificity, and uptake studies as well as in vivo imaging and biodistribution investigations will be presented. Conclusions: C-Dot silica-based nanoparticles offer a robust, versatile, and multi

  17. A fast combinatorial enhancement technique for earthquake damage identification based on remote sensing image

    Science.gov (United States)

    Dou, Aixia; Wang, Xiaoqing; Ding, Xiang; Du, Zecheng

    2010-11-01

    On the basis of a study of enhancement methods for remote sensing images obtained after several earthquakes, this paper designs a new, optimized image enhancement model implemented by combining different single methods. The patterns of elementary model units and the combined types of model are defined. Based on the enhancement model database, the algorithm for the combinatorial model was implemented in C++. The combined model was tested by processing aerial remote sensing images obtained after the 1976 Tangshan earthquake. The results show that the definition and implementation of the combined enhancement model can efficiently improve the capability and flexibility of image enhancement algorithms.

  18. Clinical Application of Solid Model Based on Trabecular Tibia Bone CT Images Created by 3D Printer.

    Science.gov (United States)

    Cho, Jaemo; Park, Chan-Soo; Kim, Yeoun-Jae; Kim, Kwang Gi

    2015-07-01

    The aim of this work is to use a 3D solid model to predict the mechanical loads of human bone fracture risk associated with bone disease conditions according to biomechanical engineering parameters. We used special image processing tools for image segmentation and three-dimensional (3D) reconstruction to generate meshes, which are necessary for the production of a solid model with a 3D printer from computed tomography (CT) images of the human tibia's trabecular and cortical bones. We examined the defects of the mechanism for the tibia's trabecular bones. Image processing tools and segmentation techniques were used to analyze bone structures and produce a solid model with a 3D printer. These days, bio-imaging (CT and magnetic resonance imaging) devices are able to display and reconstruct 3D anatomical details, and diagnostics are becoming increasingly vital to the quality of patient treatment planning and clinical treatment. Furthermore, radiographic images are being used to study biomechanical systems with several aims, namely, to describe and simulate the mechanical behavior of certain anatomical systems, to analyze pathological bone conditions, to study tissues structure and properties, and to create a solid model using a 3D printer to support surgical planning and reduce experimental costs. These days, research using image processing tools and segmentation techniques to analyze bone structures to produce a solid model with a 3D printer is rapidly becoming very important.

  19. Model-based segmentation of abdominal aortic aneurysms in CTA images

    Science.gov (United States)

    de Bruijne, Marleen; van Ginneken, Bram; Niessen, Wiro J.; Loog, Marco; Viergever, Max A.

    2003-05-01

    Segmentation of thrombus in abdominal aortic aneurysms is complicated by regions of low boundary contrast and by the presence of many neighboring structures in close proximity to the aneurysm wall. We present an automated method that is similar to the well known Active Shape Models (ASM), combining a three-dimensional shape model with a one-dimensional boundary appearance model. Our contribution is twofold: we developed a non-parametric appearance modeling scheme that effectively deals with a highly varying background, and we propose a way of generalizing models of curvilinear structures from small training sets. In contrast with the conventional ASM approach, the new appearance model trains on both true and false examples of boundary profiles. The probability that a given image profile belongs to the boundary is obtained using k nearest neighbor (kNN) probability density estimation. The performance of this scheme is compared to that of original ASMs, which minimize the Mahalanobis distance to the average true profile in the training set. The generalizability of the shape model is improved by modeling the object's axis deformation independently of its cross-sectional deformation. A leave-one-out experiment was performed on 23 datasets. Segmentation using the kNN appearance model significantly outperformed the original ASM scheme; average volume errors were 5.9% and 46%, respectively.
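
    The kNN appearance model can be sketched as follows: candidate boundary profiles are classified as true or false boundary examples and the class probability drives the model fit; the one-dimensional intensity profiles below are synthetic stand-ins for CTA data, and the three-dimensional shape model itself is not reproduced.

    ```python
    # Minimal sketch: kNN probability that a candidate intensity profile crosses
    # the true boundary, learned from labelled true/false profiles.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    n, length = 500, 15
    # true profiles: a step edge plus noise; false profiles: flat background plus noise
    edge = (np.arange(length) >= length // 2).astype(float)
    true_profiles = edge + rng.normal(0, 0.2, (n, length))
    false_profiles = rng.normal(0.5, 0.2, (n, length))

    X = np.vstack([true_profiles, false_profiles])
    y = np.r_[np.ones(n), np.zeros(n)]

    knn = KNeighborsClassifier(n_neighbors=15).fit(X, y)

    candidate = edge + rng.normal(0, 0.2, (1, length))
    print("P(boundary) for a step-like candidate profile:",
          knn.predict_proba(candidate)[0, 1])
    ```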

  20. Restoration of motion-blurred image based on border deformation detection: a traffic sign restoration model.

    Directory of Open Access Journals (Sweden)

    Yiliang Zeng

    Full Text Available Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, degradation of traffic sign images captured by computer vision is unavoidable while the vehicle is moving. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and the width of the border is then measured in all directions. From the measured widths and the corresponding directions, both the motion direction and the scale of the blur can be determined, and this information is used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared to traditional restoration approaches based on the blind deconvolution and Lucy-Richardson methods, our method substantially restores motion-blurred images and improves the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently.
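
    The record does not spell out the gray mean grads (GMG) ratio, so the sketch below assumes a plausible form: the mean gradient magnitude of the restored image divided by that of the blurred input, so that sharper restorations score higher. The test images are synthetic.

      # Hedged sketch of a gray-mean-gradient (GMG) style quality measure; the
      # exact formula is not given in the record, so this assumes the mean
      # gradient magnitude of the restored image relative to that of the
      # blurred input (sharper images score higher).
      import numpy as np

      def gray_mean_gradient(img):
          """Mean magnitude of the gray-level gradient."""
          gy, gx = np.gradient(img.astype(np.float64))
          return np.mean(np.hypot(gx, gy))

      def gmg_ratio(restored, blurred):
          return gray_mean_gradient(restored) / gray_mean_gradient(blurred)

      # Usage: a textured image versus a horizontally motion-blurred copy.
      rng = np.random.default_rng(7)
      sharp = rng.random((64, 64)) * 255.0
      kernel = np.ones(9) / 9.0  # 9-pixel horizontal motion blur
      blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, sharp)
      print(gmg_ratio(sharp, blurred))  # > 1: the sharp image has larger mean gradients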

  1. 3-D Image Encryption Based on Rubik's Cube and RC6 Algorithm

    Science.gov (United States)

    Helmy, Mai; El-Rabaie, El-Sayed M.; Eldokany, Ibrahim M.; El-Samie, Fathi E. Abd

    2017-12-01

    A novel encryption algorithm based on the 3-D Rubik's cube is proposed in this paper to achieve 3D encryption of a group of images. The proposed algorithm begins with RC6 as a first step to encrypt the multiple images separately. The resulting encrypted images are then further encrypted with the 3-D Rubik's cube, with the RC6-encrypted images used as the faces of the cube. In image-encryption terms, the RC6 algorithm adds a degree of diffusion, while the Rubik's cube algorithm adds a degree of permutation. The simulation results demonstrate that the proposed encryption algorithm is efficient, and it exhibits strong robustness and security. The encrypted images are further transmitted over a wireless Orthogonal Frequency Division Multiplexing (OFDM) system and decrypted at the receiver side. Evaluation of the quality of the decrypted images at the receiver side reveals good results.
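
    A simplified sketch of the Rubik's-cube permutation stage is given below: rows and columns of an image (which in the full scheme would already be RC6-encrypted; that stage is omitted here) are circularly shifted by key-dependent amounts, and the shifts are undone in reverse order for decryption. The key generation and shift rule are illustrative assumptions rather than the authors' exact scheme.

      # Simplified sketch of the Rubik's-cube style permutation stage only; the
      # RC6 diffusion stage is omitted. Keys and shift rules are illustrative.
      import numpy as np

      def rubik_permute(img, row_key, col_key):
          """Circularly shift every row and column by its key value."""
          out = img.copy()
          for i, s in enumerate(row_key):
              out[i, :] = np.roll(out[i, :], s)
          for j, s in enumerate(col_key):
              out[:, j] = np.roll(out[:, j], s)
          return out

      def rubik_unpermute(img, row_key, col_key):
          """Invert the permutation by undoing the shifts in reverse order."""
          out = img.copy()
          for j, s in enumerate(col_key):
              out[:, j] = np.roll(out[:, j], -s)
          for i, s in enumerate(row_key):
              out[i, :] = np.roll(out[i, :], -s)
          return out

      # Usage on a toy 8-bit image.
      rng = np.random.default_rng(42)
      img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
      row_key = rng.integers(0, 128, size=128)
      col_key = rng.integers(0, 128, size=128)
      scrambled = rubik_permute(img, row_key, col_key)
      restored = rubik_unpermute(scrambled, row_key, col_key)
      assert np.array_equal(restored, img)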

  2. Feasibility of low-dose CT with model-based iterative image reconstruction in follow-up of patients with testicular cancer

    International Nuclear Information System (INIS)

    Murphy, Kevin P.; Crush, Lee; O’Neill, Siobhan B.; Foody, James; Breen, Micheál; Brady, Adrian; Kelly, Paul J.; Power, Derek G.; Sweeney, Paul; Bye, Jackie; O’Connor, Owen J.; Maher, Michael M.; O’Regan, Kevin N.

    2016-01-01

    •Radiologists should endeavour to minimise radiation exposure to patients with testicular cancer. •Iterative reconstruction algorithms permit CT imaging at lower radiation doses. •Image quality for reduced-dose CT–MBIR is at least comparable to conventional dose. •No loss of diagnostic accuracy apparent with reduced-dose CT–MBIR. We examine the performance of pure model-based iterative reconstruction (MBIR) with reduced-dose CT in follow-up of patients with early-stage testicular cancer. Sixteen patients (mean age 35.6 ± 7.4 years) with stage I or II testicular cancer underwent conventional-dose (CD) and low-dose (LD) CT acquisition during CT surveillance. LD data were reconstructed with model-based iterative reconstruction (LD–MBIR). Datasets were objectively and subjectively analysed at 8 anatomical levels. Two blinded clinical reads were compared to gold-standard assessment for diagnostic accuracy. A mean radiation dose reduction of 67.1% was recorded. Mean dose measurements for LD–MBIR were: thorax – 66 ± 11 mGy cm (DLP), 1.0 ± 0.2 mSv (ED), 2.0 ± 0.4 mGy (SSDE); abdominopelvic – 128 ± 38 mGy cm (DLP), 1.9 ± 0.6 mSv (ED), 3.0 ± 0.6 mGy (SSDE). Objective noise and signal-to-noise ratio values were comparable between the CD and LD–MBIR images. LD–MBIR images were superior (p < 0.001) with regard to subjective noise, streak artefact, 2-plane contrast resolution, 2-plane spatial resolution and diagnostic acceptability. All patients were correctly categorised as positive, indeterminate or negative for metastatic disease by 2 readers on both the LD–MBIR and CD datasets. MBIR facilitated a 67% reduction in radiation dose whilst preserving image quality and diagnostic accuracy.

  3. Model-based traction force microscopy reveals differential tension in cellular actin bundles.

    Science.gov (United States)

    Soiné, Jérôme R D; Brand, Christoph A; Stricker, Jonathan; Oakes, Patrick W; Gardel, Margaret L; Schwarz, Ulrich S

    2015-03-01

    Adherent cells use forces at the cell-substrate interface to sense and respond to the physical properties of their environment. These cell forces can be measured with traction force microscopy, which inverts the equations of elasticity theory to calculate the forces from the deformations of soft polymer substrates. We introduce a new type of traction force microscopy that, in contrast to traditional methods, uses additional image data for the cytoskeleton and adhesion structures together with a biophysical model; this improves the robustness of the inverse procedure and removes the need for regularization. We use this method to demonstrate that ventral stress fibers of U2OS cells are typically under higher mechanical tension than dorsal stress fibers or transverse arcs.

  4. Imaging noradrenergic influence on amyloid pathology in mouse models of Alzheimer's disease

    International Nuclear Information System (INIS)

    Winkeler, A.; Waerzeggers, Y.; Klose, A.; Monfared, P.; Thomas, A.V.; Jacobs, A.H.; Schubert, M.; Heneka, M.T.

    2008-01-01

    Molecular imaging aims towards the non-invasive characterization of disease-specific molecular alterations in the living organism in vivo. In doing so, molecular imaging opens a new dimension in our understanding of disease pathogenesis, as it allows the non-invasive determination of the dynamics of changes at the molecular level. The imaging technologies employed include magnetic resonance imaging (MRI) and nuclear imaging as well as optical imaging. These imaging modalities are used together or alone for disease phenotyping, development of imaging-guided therapeutic strategies, and basic and translational research. In this study, we review recent investigations employing positron emission tomography and MRI for phenotyping mouse models of Alzheimer's disease by imaging. We demonstrate that imaging has an important role in the characterization of mouse models of neurodegenerative diseases. (orig.)

  5. Imaging in tuberculosis of the skull and skull-base: case report

    International Nuclear Information System (INIS)

    Sencer, S.; Aydin, K.; Poyanli, A.; Minareci, O.; Sencer, A.; Hepguel, K.

    2003-01-01

    We report a 19-year-old girl, who presented with headache and tonic/clonic seizures. Imaging revealed a lytic parietal skull lesion with an adjacent epidural mass, masses in the right parietal lobe and a posterior skull-base mass. The diagnosis of tuberculosis was made after resection of the extradural mass and later verified with culture of Mycobacterium tuberculosis. The parenchymal and skull-base lesions resolved following antituberculous treatment. We present CT, scintigraphic, angiographic and MRI findings. (orig.)

  6. Optimizing Global Coronal Magnetic Field Models Using Image-Based Constraints

    Science.gov (United States)

    Jones-Mecholsky, Shaela I.; Davila, Joseph M.; Uritskiy, Vadim

    2016-01-01

    The coronal magnetic field directly or indirectly affects a majority of the phenomena studied in the heliosphere. It provides energy for coronal heating, controls the release of coronal mass ejections, and drives heliospheric and magnetospheric activity, yet the coronal magnetic field itself has proven difficult to measure. This difficulty has prompted a decades-long effort to develop accurate, timely models of the field, an effort that continues today. We have developed a method for improving global coronal magnetic field models by incorporating the type of morphological constraints that could be derived from coronal images. Here we report promising initial tests of this approach on two theoretical problems, and discuss opportunities for application.

  7. Automatic medical image annotation and keyword-based image retrieval using relevance feedback.

    Science.gov (United States)

    Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal

    2012-08-01

    This paper presents a novel multiple-keyword annotation method for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center-symmetric local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses a confidence score that is assigned to each annotated keyword by combining the random forest probabilities with a predefined body relation graph. To overcome the limitations of keyword-based image retrieval, we combine our image retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.
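
    The confidence-score idea can be sketched as follows: classifier (e.g., random forest) keyword probabilities are re-weighted by a predefined body-relation graph so that anatomically compatible keywords reinforce each other. The keywords, graph weights, and combination rule below are illustrative assumptions, not the paper's actual graph or formula.

      # Hedged sketch of a keyword confidence score: classifier probabilities
      # boosted by support from anatomically related keywords. All values and
      # the combination rule are illustrative assumptions.
      import numpy as np

      # Hypothetical keywords and classifier (e.g., random forest) probabilities.
      keywords = ["chest", "lung", "abdomen", "liver"]
      rf_prob = np.array([0.55, 0.30, 0.10, 0.05])

      # Hypothetical body-relation graph: relation[i, j] is high when keyword j
      # is anatomically consistent with keyword i (chest-lung, abdomen-liver).
      relation = np.array([
          [1.0, 0.9, 0.1, 0.1],
          [0.9, 1.0, 0.1, 0.1],
          [0.1, 0.1, 1.0, 0.9],
          [0.1, 0.1, 0.9, 1.0],
      ])

      # Confidence = classifier probability weighted by related-keyword support.
      support = relation @ rf_prob
      confidence = rf_prob * support
      confidence /= confidence.sum()

      for kw, c in sorted(zip(keywords, confidence), key=lambda t: -t[1]):
          print(f"{kw}: {c:.3f}")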

  8. Image inpainting based on stacked autoencoders

    International Nuclear Information System (INIS)

    Shcherbakov, O; Batishcheva, V

    2014-01-01

    Recently we proposed an algorithm for the problem of image inpainting (filling in occluded or damaged parts of images). That algorithm was based on a spectrum entropy criterion and showed promising results despite using a hand-crafted image representation. In this paper, we present a method for solving the image inpainting task based on learning an image representation. Some results are shown to illustrate the quality of the image reconstruction.
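
    A minimal sketch of inpainting with a learned representation is shown below, implemented here with PyTorch: a small stacked (fully connected) autoencoder is trained to reconstruct toy patches from randomly masked inputs and is then used to fill an occluded region. The architecture, patch size, toy data, and training settings are assumptions for illustration, not the authors' configuration.

      # Minimal sketch: a small stacked (fully connected) autoencoder learns a
      # representation of toy patches from randomly masked inputs, then fills
      # an occluded region. Architecture, data, and settings are illustrative
      # assumptions, not the authors' configuration.
      import torch
      import torch.nn as nn

      patch = 16 * 16  # flattened patch size

      model = nn.Sequential(                # stacked encoder-decoder
          nn.Linear(patch, 128), nn.ReLU(),
          nn.Linear(128, 64), nn.ReLU(),    # bottleneck representation
          nn.Linear(64, 128), nn.ReLU(),
          nn.Linear(128, patch), nn.Sigmoid(),
      )
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      # Toy training patches: ramps with random slopes, values in [0, 1].
      t = torch.linspace(0.0, 1.0, patch)
      x = torch.rand(512, 1) * t

      for _ in range(300):                  # train to reconstruct from masked inputs
          mask = (torch.rand_like(x) > 0.3).float()   # drop roughly 30% of pixels
          optimizer.zero_grad()
          loss = loss_fn(model(x * mask), x)
          loss.backward()
          optimizer.step()

      # Inpainting: occlude the first half of an unseen patch and fill it in,
      # keeping the network's prediction only inside the damaged region.
      test = 0.7 * t.unsqueeze(0)
      hole = torch.ones(1, patch)
      hole[:, : patch // 2] = 0.0
      with torch.no_grad():
          filled = test * hole + model(test * hole) * (1.0 - hole)
      print(loss_fn(filled, test).item())   # reconstruction error of the filled patch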

  9. Evidence based medical imaging (EBMI)

    International Nuclear Information System (INIS)

    Smith, Tony

    2008-01-01

    Background: The evidence based paradigm was first described about a decade ago. Previous authors have described a framework for the application of evidence based medicine which can be readily adapted to medical imaging practice. Purpose: This paper promotes the application of the evidence based framework in both the justification of the choice of examination type and the optimisation of the imaging technique used. Methods: The framework includes five integrated steps: framing a concise clinical question; searching for evidence to answer that question; critically appraising the evidence; applying the evidence in clinical practice; and, evaluating the use of revised practices. Results: This paper illustrates the use of the evidence based framework in medical imaging (that is, evidence based medical imaging) using the examples of two clinically relevant case studies. In doing so, a range of information technology and other resources available to medical imaging practitioners are identified with the intention of encouraging the application of the evidence based paradigm in radiography and radiology. Conclusion: There is a perceived need for radiographers and radiologists to make greater use of valid research evidence from the literature to inform their clinical practice and thus provide better quality services

  10. Computationally-optimized bone mechanical modeling from high-resolution structural images.

    Directory of Open Access Journals (Sweden)

    Jeremy F Magland

    Full Text Available Image-based mechanical modeling of the complex micro-structure of human bone has shown promise as a non-invasive method for characterizing bone strength and fracture risk in vivo. In particular, elastic moduli obtained from image-derived micro-finite element (μFE) simulations have been shown to correlate well with results obtained by mechanical testing of cadaveric bone. However, most existing large-scale finite-element simulation programs require significant computing resources, which hamper their use in common laboratory and clinical environments. In this work, we theoretically derive and computationally evaluate the resources needed to perform such simulations (in terms of computer memory and computation time), which depend on the number of finite elements in the image-derived bone model. A detailed description of our approach is provided, which is specifically optimized for μFE modeling of the complex three-dimensional architecture of trabecular bone. Our implementation includes domain decomposition for parallel computing, a novel stopping criterion, and a system for speeding up convergence by pre-iterating on coarser grids. The performance of the system is demonstrated on dual quad-core Xeon 3.16 GHz CPUs equipped with 40 GB of RAM. A model of the distal tibia derived from 3D in vivo MR images of a patient, comprising 200,000 elements, required less than 30 seconds (and 40 MB of RAM) to converge. To illustrate the system's potential for large-scale μFE simulations, axial stiffness was estimated from high-resolution micro-CT images of the human proximal femur, a voxel array of 90 million elements, in seven hours of CPU time. In conclusion, the system described should enable image-based finite-element bone simulations in practical computation times on high-end desktop computers, with applications to laboratory studies and clinical imaging.
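
    The "pre-iterating on coarser grids" idea can be illustrated with a toy 1-D Poisson problem standing in for the μFE system: a cheap coarse-grid conjugate-gradient solve is interpolated onto the fine grid and used as the starting guess there, so the expensive fine-grid solve begins very close to the answer. Grid sizes and tolerances are arbitrary, and nothing below reproduces the paper's solver.

      # Toy illustration of pre-iterating on a coarser grid: a 1-D Poisson
      # problem stands in for the micro-FE system; none of this reproduces the
      # authors' solver.
      import numpy as np

      def poisson_system(n):
          """Scaled 1-D Poisson system (Dirichlet boundaries) on n interior nodes."""
          h = 1.0 / (n + 1)
          A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
          return A, np.ones(n)  # right-hand side f = 1

      def cg(A, b, x0, tol=1e-8, maxiter=20000):
          """Plain conjugate-gradient solver (dense, for illustration only)."""
          x, r = x0.copy(), b - A @ x0
          p = r.copy()
          rs = r @ r
          for _ in range(maxiter):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      n_fine, n_coarse = 511, 127
      A_f, b_f = poisson_system(n_fine)
      A_c, b_c = poisson_system(n_coarse)

      x_fine = cg(A_f, b_f, np.zeros(n_fine))      # reference fine-grid solution
      x_coarse = cg(A_c, b_c, np.zeros(n_coarse))  # cheap coarse pre-iteration

      # Prolong the coarse solution to the fine grid as the starting guess.
      nodes_f = np.linspace(0.0, 1.0, n_fine + 2)[1:-1]
      nodes_c = np.linspace(0.0, 1.0, n_coarse + 2)
      x0 = np.interp(nodes_f, nodes_c, np.concatenate(([0.0], x_coarse, [0.0])))

      # Relative error of the warm start: tiny, versus 100% for a zero guess.
      print(np.linalg.norm(x0 - x_fine) / np.linalg.norm(x_fine))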

  11. Delayed contrast enhancement imaging of a murine model for ischemia reperfusion with carbon nanotube micro-CT.

    Directory of Open Access Journals (Sweden)

    Laurel M Burk

    Full Text Available We aim to demonstrate the application of free-breathing, prospectively gated carbon nanotube (CNT) micro-CT by evaluating a myocardial infarction model with a delayed contrast enhancement technique. Evaluation of murine cardiac models using micro-CT imaging has historically been limited by extreme imaging requirements. Newly developed CNT-based x-ray sources offer precise temporal resolution, allowing elimination of physiological motion through prospective gating. Using free-breathing, cardiac-gated CNT micro-CT, a myocardial infarction model can be studied non-invasively and at high resolution. Myocardial infarction was induced in eight male C57BL/6 mice aged 8-12 weeks. The ischemia-reperfusion model was achieved by surgically occluding the LAD artery for 30 minutes followed by 24 hours of reperfusion. Tail vein catheters were placed for contrast administration. Iohexol (300 mgI/mL) was administered, followed by images obtained in diastole. An iodinated lipid blood-pool contrast agent was then administered, followed by images at systole and diastole. Respiratory and cardiac signals were monitored externally and used to gate the scans of free-breathing subjects. Seven control animals were scanned using the same imaging protocol. After imaging, the heart was harvested, cut into 1 mm slices and stained with TTC. Post-processing analysis was performed using ITK-Snap and MATLAB. All animals demonstrated obvious delayed contrast enhancement in the left ventricular wall following the Iohexol injection. The blood-pool contrast agent revealed significant changes in cardiac function quantified by 3-D volume ejection fractions. All subjects demonstrated areas of myocardial infarct in the LAD distribution on both TTC staining and micro-CT imaging. The CNT micro-CT system enables straightforward, free-breathing, prospectively gated 3-D murine cardiac imaging. Delayed contrast enhancement allows identification of infarcted myocardium after a myocardial ischemic event.

  12. Microultrasound Molecular Imaging of Vascular Endothelial Growth Factor Receptor 2 in a Mouse Model of Tumor Angiogenesis

    Directory of Open Access Journals (Sweden)

    Joshua J. Rychak

    2007-09-01

    Full Text Available High-frequency microultrasound imaging of tumor progression in mice enables noninvasive anatomic and functional imaging at excellent spatial and temporal resolution, although microultrasonography alone does not offer molecular-scale data. In the current study, we investigated the use of microbubble ultrasound contrast agents bearing targeting ligands specific for molecular markers of tumor angiogenesis using high-frequency microultrasound imaging. A xenograft tumor model in the mouse was used to image vascular endothelial growth factor receptor 2 (VEGFR-2) expression with microbubbles conjugated to an anti-VEGFR-2 monoclonal antibody or an isotype control. Microultrasound imaging was performed at a center frequency of 40 MHz, which provided lateral and axial resolutions of 40 and 90 μm, respectively. The B-mode (two-dimensional) acoustic signal from microbubbles bound to the molecular target was determined by an ultrasound-based destruction-subtraction scheme. Quantification of the adherent microbubble fraction in nine tumor-bearing mice revealed significant retention of VEGFR-2-targeted microbubbles relative to control-targeted microbubbles. These data demonstrate that contrast-enhanced microultrasound imaging is a useful method for assessing molecular expression of tumor angiogenesis in mice at high resolution.
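
    The destruction-subtraction quantification can be sketched simply: the mean contrast signal in a tumor region of interest is compared before and after a destructive pulse, and the difference is attributed to microbubbles bound to the molecular target. Frame counts, the ROI, and the synthetic signal levels below are placeholders.

      # Minimal sketch of a destruction-subtraction measurement: mean contrast
      # signal in a tumor ROI before minus after a destructive pulse is
      # attributed to microbubbles bound to the molecular target.
      import numpy as np

      def bound_bubble_signal(pre_frames, post_frames, roi_mask):
          """Mean pre-destruction minus post-destruction intensity inside the ROI."""
          pre = np.mean([f[roi_mask].mean() for f in pre_frames])
          post = np.mean([f[roi_mask].mean() for f in post_frames])
          return pre - post

      # Synthetic B-mode frames: bound bubbles add signal only before destruction.
      rng = np.random.default_rng(0)
      roi = np.zeros((128, 128), dtype=bool)
      roi[40:90, 40:90] = True
      background = lambda: 50.0 + 5.0 * rng.standard_normal((128, 128))
      pre = [background() + 20.0 * roi for _ in range(10)]   # bound + circulating signal
      post = [background() for _ in range(10)]               # bound bubbles destroyed
      print(bound_bubble_signal(pre, post, roi))             # close to 20 (targeted signal)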

  13. TU-AB-202-05: GPU-Based 4D Deformable Image Registration Using Adaptive Tetrahedral Mesh Modeling

    International Nuclear Information System (INIS)

    Zhong, Z; Zhuang, L; Gu, X; Wang, J; Chen, H; Zhen, X

    2016-01-01

    Purpose: Deformable image registration (DIR) is employed today as an automated and effective segmentation method to transfer tumor or organ contours from the planning image to daily images, instead of manual segmentation. However, the computational time and accuracy of current DIR approaches are still insufficient for online adaptive radiation therapy (ART), which requires real-time, high-quality image segmentation, especially for large 4D-CT datasets. The objective of this work is to propose a new DIR algorithm with fast computational speed and high accuracy, using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the adaptive tetrahedral mesh based on the image features of a reference phase of the 4D-CT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. Subsequently, the deformation vector fields (DVF) and the other phases of the 4D-CT can be obtained by matching each phase of the target 4D-CT images with the correspondingly deformed reference phase. The proposed 4D DIR method is implemented on GPU, significantly increasing computational efficiency through parallel computing. Results: A 4D NCAT digital phantom was used to test the efficiency and accuracy of our method. Both the image and DVF results show that the fine structures and shapes of the lung are well preserved and the tumor position is well captured, with a 3D distance error of 1.14 mm. Compared to a previous voxel-based CPU implementation of DIR, such as demons, the proposed method is about 160x faster for registering a 10-phase 4D-CT with a phase dimension of 256×256×150. Conclusion: The proposed 4D DIR method uses a feature-based mesh and GPU-based parallelism, demonstrating the capability to compute both high-quality image and motion results with a significant improvement in computational speed.
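
    One building block of the mesh-based approach, interpolating the deformation from tetrahedron vertices to an interior voxel with barycentric coordinates, is sketched below. The mesh, vertex displacements, and query point are toy values, and no GPU parallelization is shown.

      # Sketch of diffusing a deformation vector field (DVF) from the four
      # vertices of a tetrahedron to an interior voxel via barycentric weights.
      import numpy as np

      def barycentric_coords(p, tet):
          """Barycentric coordinates of point p in tetrahedron tet (4x3 vertices)."""
          T = np.column_stack([tet[0] - tet[3], tet[1] - tet[3], tet[2] - tet[3]])
          l123 = np.linalg.solve(T, p - tet[3])
          return np.append(l123, 1.0 - l123.sum())

      def interpolate_dvf(p, tet, vertex_dvf):
          """Displacement at voxel p as a barycentric blend of vertex displacements."""
          w = barycentric_coords(p, tet)
          return w @ vertex_dvf

      # Toy tetrahedron, vertex displacements (mm), and a voxel at its centroid.
      tet = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
      vertex_dvf = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
      voxel = np.array([2.5, 2.5, 2.5])
      print(interpolate_dvf(voxel, tet, vertex_dvf))  # [0.5 0.5 0.5]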

  14. TU-AB-202-05: GPU-Based 4D Deformable Image Registration Using Adaptive Tetrahedral Mesh Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhong, Z; Zhuang, L [Wayne State University, Detroit, MI (United States); Gu, X; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States); Chen, H; Zhen, X [Southern Medical University, Guangzhou, Guangdong (China)

    2016-06-15

    Purpose: Deformable image registration (DIR) is employed today as an automated and effective segmentation method to transfer tumor or organ contours from the planning image to daily images, instead of manual segmentation. However, the computational time and accuracy of current DIR approaches are still insufficient for online adaptive radiation therapy (ART), which requires real-time, high-quality image segmentation, especially for large 4D-CT datasets. The objective of this work is to propose a new DIR algorithm with fast computational speed and high accuracy, using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the adaptive tetrahedral mesh based on the image features of a reference phase of the 4D-CT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. Subsequently, the deformation vector fields (DVF) and the other phases of the 4D-CT can be obtained by matching each phase of the target 4D-CT images with the correspondingly deformed reference phase. The proposed 4D DIR method is implemented on GPU, significantly increasing computational efficiency through parallel computing. Results: A 4D NCAT digital phantom was used to test the efficiency and accuracy of our method. Both the image and DVF results show that the fine structures and shapes of the lung are well preserved and the tumor position is well captured, with a 3D distance error of 1.14 mm. Compared to a previous voxel-based CPU implementation of DIR, such as demons, the proposed method is about 160x faster for registering a 10-phase 4D-CT with a phase dimension of 256×256×150. Conclusion: The proposed 4D DIR method uses a feature-based mesh and GPU-based parallelism, demonstrating the capability to compute both high-quality image and motion results with a significant improvement in computational speed.

  15. An Image-based Micro-continuum Pore-scale Model for Gas Transport in Organic-rich Shale

    Science.gov (United States)

    Guo, B.; Tchelepi, H.

    2017-12-01

    Gas production from unconventional source rocks, such as ultra-tight shales, has increased significantly over the past decade. However, due to the extremely small pores (1-100 nm) and the strong material heterogeneity, gas flow in shale is still not well understood and poses challenges for predictive field-scale simulations. In recent years, digital rock analysis has been applied to understand shale gas transport at the pore scale. An issue with rock images (e.g., FIB-SEM, nano-/micro-CT images) is the so-called "cutoff length": pores and heterogeneities below the resolution cannot be resolved, which leads to two length scales (resolved features and unresolved sub-resolution features) that are challenging for flow simulations. Here we develop a micro-continuum model, modified from the classic Darcy-Brinkman-Stokes framework, that naturally couples the resolved pores and the unresolved nano-porous regions. In the resolved pores, gas flow is modeled with the Stokes equation. In the unresolved regions, where the pore sizes are below the image resolution, we develop an apparent permeability model accounting for non-Darcy flow at the nanoscale, including slip flow, Knudsen diffusion, adsorption/desorption, surface diffusion, and real-gas effects. The end result is a micro-continuum pore-scale model that can simulate gas transport in 3D reconstructed shale images. The model has been implemented in the open-source simulation platform OpenFOAM. In this paper, we present case studies to demonstrate the applicability of the model, in which we use 3D segmented FIB-SEM and nano-CT shale images that include four material constituents: organic matter, clay, granular mineral, and pore. In addition to the pore structure and the distribution of the material constituents, we populate the model with experimental measurements (e.g., the size distribution of the sub-resolution pores from nitrogen adsorption) and parameters from the literature, and identify the relative importance of the different transport mechanisms.
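
    The record does not give the apparent-permeability formula, so the sketch below uses a generic Knudsen-number-dependent correction of the Beskok-Karniadakis type as a stand-in for the sub-resolution (nano-porous) regions; all parameter values are assumptions taken loosely from the literature, not from this work.

      # Hedged sketch of an apparent-permeability correction for sub-resolution
      # nanopores: intrinsic (Darcy) permeability scaled by a Knudsen-number
      # dependent factor for slip flow and Knudsen diffusion. The correction
      # form and all parameters are generic assumptions, not this paper's model.
      import numpy as np

      KB = 1.380649e-23  # Boltzmann constant, J/K

      def knudsen_number(pore_diameter, pressure, temperature, molecule_diameter=0.38e-9):
          """Kn = mean free path / pore diameter (ideal-gas mean free path)."""
          mfp = KB * temperature / (np.sqrt(2.0) * np.pi * molecule_diameter**2 * pressure)
          return mfp / pore_diameter

      def apparent_permeability(k_darcy, kn, alpha=1.0):
          """Slip/Knudsen-corrected permeability (Beskok-Karniadakis-type factor)."""
          return k_darcy * (1.0 + alpha * kn) * (1.0 + 4.0 * kn / (1.0 + kn))

      # Usage: a 10 nm pore at reservoir-like conditions (methane-sized molecules).
      k_darcy = 1e-21  # m^2, assumed intrinsic permeability of the nanoporous region
      kn = knudsen_number(pore_diameter=10e-9, pressure=20e6, temperature=350.0)
      print(kn, apparent_permeability(k_darcy, kn))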

  16. General filtering method for electronic speckle pattern interferometry fringe images with various densities based on variational image decomposition.

    Science.gov (United States)

    Li, Biyuan; Tang, Chen; Gao, Guannan; Chen, Mingming; Tang, Shuwei; Lei, Zhenkun

    2017-06-01

    Filtering off speckle noise from a fringe image is one of the key tasks in electronic speckle pattern interferometry (ESPI). In general, ESPI fringe images can be divided into three categories: low-density fringe images, high-density fringe images, and variable-density fringe images. In this paper, we first present a general filtering method based on variational image decomposition that can filter speckle noise for ESPI fringe images with various densities. In our method, a variable-density ESPI fringe image is decomposed into low-density fringes, high-density fringes, and noise. A low-density fringe image is decomposed into low-density fringes and noise. A high-density fringe image is decomposed into high-density fringes and noise. We give some suitable function spaces to describe low-density fringes, high-density fringes, and noise, respectively. Then we construct several models and numerical algorithms for ESPI fringe images with various densities. And we investigate the performance of these models via our extensive experiments. Finally, we compare our proposed models with the windowed Fourier transform method and coherence enhancing diffusion partial differential equation filter. These two methods may be the most effective filtering methods at present. Furthermore, we use the proposed method to filter a collection of the experimentally obtained ESPI fringe images with poor quality. The experimental results demonstrate the performance of our proposed method.

  17. Computer model for harmonic ultrasound imaging.

    Science.gov (United States)

    Li, Y; Zagzebski, J A

    2000-01-01

    Harmonic ultrasound imaging has received great attention from ultrasound scanner manufacturers and researchers. In this paper, we present a computer model that can generate realistic harmonic images. In this model, the incident ultrasound is modeled using the "KZK" equation, and the echo signal is modeled using linear propagation theory because the echo signal is much weaker than the incident pulse. Both time-domain and frequency-domain numerical solutions to the "KZK" equation were studied. Realistic harmonic images of spherical lesion phantoms were generated for scans by a circular transducer. This model can be a very useful tool for studying the harmonic buildup and dissipation processes in a nonlinear medium, and it can be used to investigate a wide variety of topics related to B-mode harmonic imaging.

  18. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
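
    A toy version of hierarchical (coarse-to-fine) matching is sketched below: a constant offset between two images is estimated at progressively finer pyramid levels. The real algorithm estimates relative RPC bias with sub-pixel matching in object space; this sketch recovers only an integer pixel shift in image space and is purely illustrative.

      # Toy coarse-to-fine matching: estimate a constant shift between two
      # images by brute-force search at each pyramid level, refining the
      # estimate from the previous (coarser) level.
      import numpy as np

      def downsample(img, f):
          h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
          return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

      def best_shift(ref, tgt, center, radius):
          """Brute-force search for the shift minimizing mean squared difference."""
          best, best_err = center, np.inf
          for dy in range(center[0] - radius, center[0] + radius + 1):
              for dx in range(center[1] - radius, center[1] + radius + 1):
                  shifted = np.roll(np.roll(tgt, dy, axis=0), dx, axis=1)
                  err = np.mean((ref - shifted) ** 2)
                  if err < best_err:
                      best, best_err = (dy, dx), err
          return best

      def hierarchical_offset(ref, tgt, levels=(8, 4, 2, 1), radius=2):
          shift = (0, 0)
          for f in levels:
              r, t = downsample(ref, f), downsample(tgt, f)
              coarse = best_shift(r, t, (shift[0] // f, shift[1] // f), radius)
              shift = (coarse[0] * f, coarse[1] * f)
          return shift

      # Usage: recover a known relative offset between an image and a shifted copy.
      rng = np.random.default_rng(3)
      ref = rng.random((128, 128))
      tgt = np.roll(np.roll(ref, -11, axis=0), 7, axis=1)   # simulated relative bias
      print(hierarchical_offset(ref, tgt))                  # (11, -7)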

  19. A novel modeling method for manufacturing hearing aid using 3D medical images

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyeong Gyun [Dept of Radiological Science, Far East University, Eumseong (Korea, Republic of)

    2016-06-15

    This study aimed to suggest a novel method of modeling a hearing aid ear shell based on Digital Imaging and Communications in Medicine (DICOM) data, as part of a hearing aid ear shell manufacturing method using a 3D printer. In the experiment, the 3D external auditory meatus was extracted using critical (threshold) values in the DICOM volume images, and the resulting modeling surface structures were compared in standard STL (STereoLithography) files that can be recognized by a 3D printer. In the conventional 3D modeling method, an ear model was prepared and the gaps between adjacent isograms produced by a 3D scanner were filled with 3D surface fragments to express the modeling structure. In this study, the same type of triangular surface structures were prepared using the DICOM images. The results showed that the modeling surface structure based on the DICOM images provides the same environment that conventional 3D printers can recognize, ultimately making it possible to print the hearing aid ear shell shape.
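
    A hedged sketch of the threshold-to-STL route is given below: a volume image is thresholded at a chosen critical value, a triangular surface is extracted with marching cubes, and the triangles are written to an ASCII STL file that 3D-printer toolchains can read. The synthetic volume, threshold, and file name stand in for a real DICOM series and the paper's actual workflow.

      # Hedged sketch: threshold a volume at a critical value, extract a
      # triangular surface with marching cubes, and write it as ASCII STL.
      # The synthetic volume and threshold are placeholders for DICOM data.
      import numpy as np
      from skimage import measure

      # Synthetic volume: a dense sphere standing in for the segmented anatomy.
      z, y, x = np.mgrid[-32:32, -32:32, -32:32]
      volume = np.where(x**2 + y**2 + z**2 < 24**2, 700.0, -100.0)

      # Surface extraction at a chosen critical (threshold) value.
      verts, faces, normals, _ = measure.marching_cubes(volume, level=300.0)

      def write_ascii_stl(path, verts, faces):
          """Write a triangle mesh as ASCII STL (facet normals left as zero)."""
          with open(path, "w") as f:
              f.write("solid model\n")
              for tri in faces:
                  f.write("  facet normal 0 0 0\n    outer loop\n")
                  for idx in tri:
                      f.write("      vertex {:.4f} {:.4f} {:.4f}\n".format(*verts[idx]))
                  f.write("    endloop\n  endfacet\n")
              f.write("endsolid model\n")

      write_ascii_stl("ear_shell_model.stl", verts, faces)
      print(len(verts), "vertices,", len(faces), "triangles written")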

  20. A novel modeling method for manufacturing hearing aid using 3D medical images

    International Nuclear Information System (INIS)

    Kim, Hyeong Gyun

    2016-01-01

    This study aimed to suggest a novel method of modeling a hearing aid ear shell based on Digital Imaging and Communications in Medicine (DICOM) data, as part of a hearing aid ear shell manufacturing method using a 3D printer. In the experiment, the 3D external auditory meatus was extracted using critical (threshold) values in the DICOM volume images, and the resulting modeling surface structures were compared in standard STL (STereoLithography) files that can be recognized by a 3D printer. In the conventional 3D modeling method, an ear model was prepared and the gaps between adjacent isograms produced by a 3D scanner were filled with 3D surface fragments to express the modeling structure. In this study, the same type of triangular surface structures were prepared using the DICOM images. The results showed that the modeling surface structure based on the DICOM images provides the same environment that conventional 3D printers can recognize, ultimately making it possible to print the hearing aid ear shell shape.