WorldWideScience

Sample records for computer image analysis

  1. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements of clinical routine. In this focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models into the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present the latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images such as radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body.

  2. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I: Introduction to Digital Image Processing and Analysis. Digital Image Processing and Analysis: Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading. Computer Imaging Systems: Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading. Section II: Digital Image Analysis and Computer Vision. Introduction to Digital Image Analysis: Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Reading…

  3. From Digital Imaging to Computer Image Analysis of Fine Art

    Science.gov (United States)

    Stork, David G.

    An expanding range of techniques from computer vision, pattern recognition, image analysis, and computer graphics are being applied to problems in the history of art. The success of these efforts is enabled by the growing corpus of high-resolution multi-spectral digital images of art (primarily paintings and drawings), sophisticated computer vision methods, and most importantly the engagement of some art scholars who bring questions that may be addressed through computer methods. This paper outlines some general problem areas and opportunities in this new inter-disciplinary research program.

  4. Computed image analysis of neutron radiographs

    International Nuclear Information System (INIS)

    Dinca, M.; Anghel, E.; Preda, M.; Pavelescu, M.

    2008-01-01

    Similar to X-radiography, but using neutrons as the penetrating particles, there is in practice a nondestructive technique named neutron radiology. When the information is registered on a film with the help of a conversion foil (with a high cross section for neutrons) that emits secondary radiation (β, γ) creating a latent image, the technique is named neutron radiography. A radiographic industrial film that contains the image of the internal structure of an object, obtained by neutron radiography, must be subsequently analyzed to obtain qualitative and quantitative information about the structural integrity of that object. It is possible to perform a computed analysis of a film using a facility with the following main components: a film illuminator, a CCD video camera and a computer (PC) with suitable software. The qualitative analysis aims to reveal possible anomalies of the structure due to manufacturing processes or induced by working processes (for example, the irradiation activity in the case of nuclear fuel). The quantitative determination is based on measurements of some image parameters: dimensions and optical densities. The illuminator has been built specially for this application but can also be used for simple visual observation. The illuminated area is 9x40 cm. The frame of the system is a comparator of the Abbe Carl Zeiss Jena type, which has been adapted for this application. The video camera captures the image, which is stored and processed by the computer. A special program, SIMAG-NG, has been developed at INR Pitesti which, alongside the program SMTV II of the special acquisition module SM 5010, can analyze the images of a film. The major application of the system was the quantitative analysis of a film containing the images of some nuclear fuel pins next to a dimensional standard. The system was used to measure the length of the pellets of the TRIGA nuclear fuel. (authors)
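    A minimal sketch of the kind of dimensional measurement described above (not the SIMAG-NG software itself, whose internals the record does not describe): threshold a one-dimensional intensity profile taken along the pin axis and convert the run lengths of 'pellet' pixels to millimetres, with the scale calibrated against the dimensional standard imaged next to the pins. All names and values are illustrative; Python is used purely for exposition.

      import numpy as np

      def pellet_lengths(profile, threshold, mm_per_px):
          """Estimate pellet lengths from a 1-D intensity profile taken
          along the fuel pin axis. Pellets appear as contiguous runs of
          pixels darker than `threshold`; the profile is assumed to
          start and end outside a pellet."""
          inside = profile < threshold                  # pellet pixels
          edges = np.diff(inside.astype(int))           # +1 = entry, -1 = exit
          starts = np.flatnonzero(edges == 1) + 1
          ends = np.flatnonzero(edges == -1) + 1
          return [(e - s) * mm_per_px for s, e in zip(starts, ends)]

      # mm_per_px would be calibrated on the dimensional standard,
      # e.g. a 40 mm standard spanning 800 pixels gives 0.05 mm/px.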

  5. Computer-based quantitative computed tomography image analysis in idiopathic pulmonary fibrosis: A mini review.

    Science.gov (United States)

    Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio

    2018-01-01

    Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Computer-based image analysis methods applied to chest computed tomography (CT) in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, the density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary function, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologists for correlation with pulmonary function and prediction of mortality. In this mini review, we summarize the current literature on computer-based CT image analysis of IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
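    As an illustration of the simpler of the methods named above, a hedged sketch of whole-lung CT densitometry: the mean CT value, a density histogram, and a density-mask percentage of high-attenuation areas (which increase with fibrosis). The -700 HU cutoff and the bin width are illustrative assumptions, not values taken from the review.

      import numpy as np

      def quantitative_ct(hu, lung_mask, haa_cutoff=-700):
          """Whole-lung densitometry from a CT volume `hu` (Hounsfield
          units) and a boolean lung segmentation `lung_mask`."""
          lung = hu[lung_mask]
          mean_ct = lung.mean()                              # mean CT value
          hist, edges = np.histogram(lung, bins=np.arange(-1024, 101, 25))
          haa_percent = 100.0 * np.mean(lung >= haa_cutoff)  # density mask
          return mean_ct, (hist, edges), haa_percent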

  6. Medical image computing and computer-assisted intervention - MICCAI 2005. Proceedings; Pt. 1

    International Nuclear Information System (INIS)

    Duncan, J.S.; Gerig, G.

    2005-01-01

    The two-volume set LNCS 3749 and LNCS 3750 constitutes the refereed proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2005, held in Palm Springs, CA, USA, in October 2005. Based on rigorous peer reviews the program committee selected 237 carefully revised full papers from 632 submissions for presentation in two volumes. The first volume includes all the contributions related to image analysis and validation, vascular image segmentation, image registration, diffusion tensor image analysis, image segmentation and analysis, clinical applications - validation, imaging systems - visualization, computer assisted diagnosis, cellular and molecular image analysis, physically-based modeling, robotics and intervention, medical image computing for clinical applications, and biological imaging - simulation and modeling. The second volume collects the papers related to robotics, image-guided surgery and interventions, image registration, medical image computing, structural and functional brain analysis, model-based image analysis, image-guided intervention: simulation, modeling and display, and image segmentation and analysis. (orig.)

  7. Medical image computing and computer-assisted intervention. MICCAI 2005. Pt. 2. Proceedings

    International Nuclear Information System (INIS)

    Duncan, J.S. (Yale Univ., New Haven, CT); Gerig, G.

    2005-01-01

    The two-volume set LNCS 3749 and LNCS 3750 constitutes the refereed proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2005, held in Palm Springs, CA, USA, in October 2005. Based on rigorous peer reviews the program committee selected 237 carefully revised full papers from 632 submissions for presentation in two volumes. The first volume includes all the contributions related to image analysis and validation, vascular image segmentation, image registration, diffusion tensor image analysis, image segmentation and analysis, clinical applications - validation, imaging systems - visualization, computer assisted diagnosis, cellular and molecular image analysis, physically-based modeling, robotics and intervention, medical image computing for clinical applications, and biological imaging - simulation and modeling. The second volume collects the papers related to robotics, image-guided surgery and interventions, image registration, medical image computing, structural and functional brain analysis, model-based image analysis, image-guided intervention: simulation, modeling and display, and image segmentation and analysis. (orig.)

  8. Medical image computing and computer-assisted intervention - MICCAI 2005. Proceedings; Pt. 1

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, J.S. [Yale Univ., New Haven, CT (United States). Dept. of Biomedical Engineering and Diagnostic Radiology; Gerig, G. (eds.) [North Carolina Univ., Chapel Hill (United States). Dept. of Computer Science

    2005-07-01

    The two-volume set LNCS 3749 and LNCS 3750 constitutes the refereed proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2005, held in Palm Springs, CA, USA, in October 2005. Based on rigorous peer reviews the program committee selected 237 carefully revised full papers from 632 submissions for presentation in two volumes. The first volume includes all the contributions related to image analysis and validation, vascular image segmentation, image registration, diffusion tensor image analysis, image segmentation and analysis, clinical applications - validation, imaging systems - visualization, computer assisted diagnosis, cellular and molecular image analysis, physically-based modeling, robotics and intervention, medical image computing for clinical applications, and biological imaging - simulation and modeling. The second volume collects the papers related to robotics, image-guided surgery and interventions, image registration, medical image computing, structural and functional brain analysis, model-based image analysis, image-guided intervention: simulation, modeling and display, and image segmentation and analysis. (orig.)

  9. Medical image computing and computer-assisted intervention. MICCAI 2005. Pt. 2. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, J.S. [Yale Univ., New Haven, CT (United States). Dept. of Biomedical Engineering; Dept. of Diagnostic Radiology]; Gerig, G. (eds.) [North Carolina Univ., Chapel Hill, NC (United States). Dept. of Computer Science]

    2005-07-01

    The two-volume set LNCS 3749 and LNCS 3750 constitutes the refereed proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2005, held in Palm Springs, CA, USA, in October 2005. Based on rigorous peer reviews the program committee selected 237 carefully revised full papers from 632 submissions for presentation in two volumes. The first volume includes all the contributions related to image analysis and validation, vascular image segmentation, image registration, diffusion tensor image analysis, image segmentation and analysis, clinical applications - validation, imaging systems - visualization, computer assisted diagnosis, cellular and molecular image analysis, physically-based modeling, robotics and intervention, medical image computing for clinical applications, and biological imaging - simulation and modeling. The second volume collects the papers related to robotics, image-guided surgery and interventions, image registration, medical image computing, structural and functional brain analysis, model-based image analysis, image-guided intervention: simulation, modeling and display, and image segmentation and analysis. (orig.)

  10. Computer-Assisted Digital Image Analysis of Plus Disease in Retinopathy of Prematurity.

    Science.gov (United States)

    Kemp, Pavlina S; VanderVeen, Deborah K

    2016-01-01

    The objective of this study is to review the current state and role of computer-assisted analysis in diagnosis of plus disease in retinopathy of prematurity. Diagnosis and documentation of retinopathy of prematurity are increasingly being supplemented by digital imaging. The incorporation of computer-aided techniques has the potential to add valuable information and standardization regarding the presence of plus disease, an important criterion in deciding the necessity of treatment of vision-threatening retinopathy of prematurity. A review of literature found that several techniques have been published examining the process and role of computer aided analysis of plus disease in retinopathy of prematurity. These techniques use semiautomated image analysis techniques to evaluate retinal vascular dilation and tortuosity, using calculated parameters to evaluate presence or absence of plus disease. These values are then compared with expert consensus. The study concludes that computer-aided image analysis has the potential to use quantitative and objective criteria to act as a supplemental tool in evaluating for plus disease in the setting of retinopathy of prematurity.
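    The dilation and tortuosity parameters mentioned above are typically computed from traced vessel centerlines; a common tortuosity index (illustrative, not necessarily the one used by any specific system reviewed) is the arc-to-chord ratio:

      import numpy as np

      def tortuosity(points):
          """Arc-to-chord ratio for a vessel centerline given as an
          ordered (N, 2) array of pixel coordinates; a straight vessel
          gives 1.0, and plus disease is associated with higher values."""
          seg = np.diff(points, axis=0)
          arc = np.hypot(seg[:, 0], seg[:, 1]).sum()     # path length
          chord = np.hypot(*(points[-1] - points[0]))    # endpoint distance
          return arc / chord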

  11. The application of computer image analysis in life sciences and environmental engineering

    Science.gov (United States)

    Mazur, R.; Lewicki, A.; Przybył, K.; Zaborowicz, M.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.

    2014-04-01

    The main aim of the article was to present research on the application of computer image analysis in Life Science and Environmental Engineering. The authors used different methods of computer image analysis in the development of an innovative biotest for modern biomonitoring of water quality. The tools created were based on living organisms, namely the bioindicators Lemna minor L. and Hydra vulgaris Pallas, and on computer image analysis methods for assessing negative reactions during the exposure of the organisms to selected water toxicants. All of these methods belong to acute toxicity tests and are particularly essential in the ecotoxicological assessment of water pollutants. The developed bioassays can be used not only in scientific research but are also applicable in environmental engineering and agriculture in the study of the adverse effects on water quality of various compounds used in agriculture and industry.

  12. Medical image computing for computer-supported diagnostics and therapy. Advances and perspectives.

    Science.gov (United States)

    Handels, H; Ehrhardt, J

    2009-01-01

    Medical image computing has become one of the most challenging fields in medical informatics. In the image-based diagnostics of the future, software assistance will become more and more important, and image analysis systems integrating advanced image computing methods are needed to extract quantitative image parameters to characterize the state and changes of image structures of interest (e.g. tumors, organs, vessels, bones etc.) in a reproducible and objective way. Furthermore, in the field of software-assisted and navigated surgery, medical image computing methods play a key role and have opened up new perspectives for patient treatment. However, further developments are needed to increase the degree of automation, accuracy, reproducibility and robustness. Moreover, the systems developed have to be integrated into the clinical workflow. For the development of advanced image computing systems, methods of different scientific fields have to be adapted and used in combination. The principal methodologies in medical image computing are the following: image segmentation, image registration, image analysis for quantification and computer assisted image interpretation, modeling and simulation, as well as visualization and virtual reality. Especially, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients, and will gain importance in the diagnostics and therapy of the future. From a methodical point of view, the authors identify the following future trends and perspectives in medical image computing: development of optimized application-specific systems and integration into the clinical workflow, enhanced computational models for image analysis and virtual reality training systems, integration of different image computing methods, further integration of multimodal image data and biosignals, and advanced methods for 4D medical image computing. The development of image analysis systems for diagnostic support or …

  13. Computer image analysis of etched tracks from ionizing radiation

    Science.gov (United States)

    Blanford, George E.

    1994-01-01

    I proposed to continue a cooperative research project with Dr. David S. McKay concerning image analysis of tracks. Last summer we showed that we could measure track densities using the Oxford Instruments eXL computer and software that is attached to an ISI scanning electron microscope (SEM) located in building 31 at JSC. To reduce the dependence on JSC equipment, we proposed to transfer the SEM images to UHCL for analysis. Last summer we developed techniques to use digitized scanning electron micrographs and computer image analysis programs to measure track densities in lunar soil grains. Tracks were formed by highly ionizing solar energetic particles and cosmic rays during near surface exposure on the Moon. The track densities are related to the exposure conditions (depth and time). Distributions of the number of grains as a function of their track densities can reveal the modality of soil maturation. As part of a consortium effort to better understand the maturation of lunar soil and its relation to its infrared reflectance properties, we worked on lunar samples 67701,205 and 61221,134. These samples were etched for a shorter time (6 hours) than last summer's sample and this difference has presented problems for establishing the correct analysis conditions. We used computer counting and measurement of area to obtain preliminary track densities and a track density distribution that we could interpret for sample 67701,205. This sample is a submature soil consisting of approximately 85 percent mature soil mixed with approximately 15 percent immature, but not pristine, soil.
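    A sketch of the counting step described above (the actual analysis conditions were, as noted, still being established): threshold the digitized micrograph, label connected components, reject specks below a minimum size, and normalize by the imaged area. Threshold, minimum size and scale are sample- and instrument-specific assumptions.

      import numpy as np
      from scipy import ndimage

      def track_density(image, threshold, um_per_px, min_size=5):
          """Tracks per cm^2 from a digitized SEM image in which etched
          tracks appear darker than `threshold`."""
          binary = image < threshold
          labels, n = ndimage.label(binary)
          sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
          n_tracks = int(np.sum(sizes >= min_size))      # reject specks
          area_cm2 = image.size * (um_per_px * 1e-4) ** 2
          return n_tracks / area_cm2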

  14. Analysis of cardiac images of radionuclide ventriculography in AT-Type personal computer

    International Nuclear Information System (INIS)

    Lillo, R.; Gonzalez, P.; Ehijo, A.; Otarola, T.M.S.; Ortiz, M.; Silva, A.M.; Ortiz, M.

    1990-01-01

    The goal of this research was to produce software for the processing of cardiac phase images on personal computers. The results of standard radionuclide ventriculography and Fourier analysis, obtained on an Ohio Nuclear 410 Sygma gamma camera and a Digital PDP 11/34 computer, were coded into an ASCII file and then transferred via Smarterm 220/Kermit to an Accel 900 AT PC. After decoding, the images were processed with a program developed in the C language, obtaining the values of the phase angles both over the whole phase images and in regions of interest drawn around the cardiac chambers. The images and values were the same as those obtained by conventional processing on the PDP 11/34 computer. This is considered a first stage in the use of PCs for nuclear medicine imaging studies. (author)
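    The phase images referred to here come from first-harmonic Fourier analysis of the gated image sequence. The original software was written in C on the PC; the sketch below is a purely illustrative restatement of the computation.

      import numpy as np

      def phase_image(frames):
          """First-harmonic phase analysis of a gated radionuclide
          ventriculogram. `frames` is a (T, H, W) array of counts over
          one averaged cardiac cycle; the phase angle of the first
          harmonic at each pixel indicates when that pixel's
          time-activity curve peaks within the cycle."""
          spectrum = np.fft.fft(frames, axis=0)
          first = spectrum[1]                          # k = 1 harmonic
          phase = np.degrees(np.angle(first)) % 360.0  # 0..360 degrees
          amplitude = np.abs(first)
          return phase, amplitude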

  15. Digital tomosynthesis parallel imaging computational analysis with shift and add and back projection reconstruction algorithms.

    Science.gov (United States)

    Chen, Ying; Balla, Apuroop; Rayford II, Cleveland E; Zhou, Weihua; Fang, Jian; Cong, Linlin

    2010-01-01

    Digital tomosynthesis is a novel technology that has been developed for various clinical applications. The parallel imaging configuration is utilised in a few tomosynthesis imaging areas such as digital chest tomosynthesis. Recently, parallel imaging configurations for breast tomosynthesis have begun to appear as well. In this paper, we present an investigation of the computational analysis of impulse response characterisation as the starting point of our research efforts to optimise parallel imaging configurations. Results suggest that impulse response computational analysis is an effective method to compare and optimise imaging configurations.
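    For context, a sketch of the shift-and-add reconstruction named in the title: each projection is shifted by the in-plane displacement its acquisition geometry implies for the plane being reconstructed, then all projections are averaged, so structures in that plane reinforce while off-plane structures blur out. The shifts are assumed here to be precomputed integer pixel offsets; wrap-around at the borders is ignored for simplicity.

      import numpy as np

      def shift_and_add(projections, shifts_px):
          """Reconstruct one tomosynthesis plane from a list of 2-D
          projections and their plane-specific integer shifts."""
          plane = np.zeros_like(projections[0], dtype=float)
          for proj, s in zip(projections, shifts_px):
              plane += np.roll(proj, s, axis=1)   # shift along sweep axis
          return plane / len(projections)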

  16. Seventh Medical Image Computing and Computer Assisted Intervention Conference (MICCAI 2012)

    CERN Document Server

    Miller, Karol; Nielsen, Poul

    2013-01-01

    One of the greatest challenges for mechanical engineers is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. This book is an opportunity for computational biomechanics specialists to present and exchange opinions on applying their techniques to computer-integrated medicine. Computational Biomechanics for Medicine: Models, Algorithms and Implementation collects the papers from the Seventh Computational Biomechanics for Medicine Workshop, held in Nice in conjunction with the Medical Image Computing and Computer Assisted Intervention conference. The topics covered include: medical image analysis, image-guided surgery, surgical simulation, surgical intervention planning, disease prognosis and diagnostics, injury mechanism analysis, implant and prostheses design, and medical robotics.

  17. Computational intelligence in biomedical imaging

    CERN Document Server

    2014-01-01

    This book provides a comprehensive overview of state-of-the-art computational intelligence research and technologies in biomedical imaging, with emphasis on biomedical decision making. Biomedical imaging offers useful information on patients’ medical conditions and clues to the causes of their symptoms and diseases. It produces, however, a large number of images which physicians must interpret. Therefore, computer aids are demanded and have become indispensable in physicians’ decision making. This book discusses major technical advancements and research findings in the field of computational intelligence in biomedical imaging, for example, computational intelligence in computer-aided diagnosis for breast cancer, prostate cancer, and brain disease, in lung function analysis, and in radiation therapy. The book examines technologies and studies that have reached the practical level, and those technologies that are becoming available in clinical practices in hospitals rapidly such as computational inte...

  18. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to:
    • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities
    • Molecular Imaging Enhancement
    • Data Analysis of Clinical & Pre-clinical Molecular Imaging
    • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.)
    • Machine Learning and Data Mining in Molecular Imaging
    Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  19. Application of computer intensive data analysis methods to the analysis of digital images and spatial data

    DEFF Research Database (Denmark)

    Windfeld, Kristian

    1992-01-01

    Computer-intensive methods for data analysis in a traditional setting have developed rapidly in the last decade. The application and adaptation of some of these methods to the analysis of multivariate digital images and spatial data are explored, evaluated and compared to well established classical … into the projection pursuit is presented. Examples from remote sensing are given. The ACE algorithm for computing non-linear transformations for maximizing correlation is extended and applied to obtain a non-linear transformation that maximizes autocorrelation or 'signal' in a multivariate image. … This is a generalization of the minimum/maximum autocorrelation factors (MAFs), which is a linear method. The non-linear method is compared to the linear method when analyzing a multivariate TM image from Greenland. The ACE method is shown to give a more detailed decomposition of the image than the MAF transformation …

  20. Computational image analysis of Suspension Plasma Sprayed YSZ coatings

    Directory of Open Access Journals (Sweden)

    Michalak Monika

    2017-01-01

    The paper presents computational studies of microstructure- and topography-related features of suspension plasma sprayed (SPS) coatings of yttria-stabilized zirconia (YSZ). The study mainly covers the porosity assessment, provided by ImageJ software analysis. The influence of the boundary conditions, defined by (i) circularity and (ii) size limits, on the computed values of porosity is also investigated. Additionally, a digital topography evaluation is performed: a confocal laser scanning microscope (CLSM) and a scanning electron microscope (SEM) operating in Shape from Shading (SFS) mode measure the surface roughness of the deposited coatings. The computed values of porosity and roughness are related to the variables of the spraying process, which influence the morphology of the coatings and determine the possible fields of their applications.
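    A sketch of a porosity computation with the circularity and size limits discussed above, following ImageJ's Analyze Particles conventions (circularity = 4πA/P²); the binary pore mask and the limit values are assumed inputs, not the study's actual settings.

      import numpy as np
      from skimage import measure

      def porosity(pore_mask, min_area=10, min_circ=0.0, max_circ=1.0):
          """Percent porosity as the area fraction of pores passing the
          size and circularity limits; `pore_mask` is a boolean image
          with True on pore pixels."""
          labels = measure.label(pore_mask)
          kept = 0
          for region in measure.regionprops(labels):
              if region.perimeter == 0:          # single-pixel artifacts
                  continue
              circ = 4.0 * np.pi * region.area / region.perimeter ** 2
              if region.area >= min_area and min_circ <= circ <= max_circ:
                  kept += region.area
          return 100.0 * kept / pore_mask.size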

  1. Computer analysis of gallbladder ultrasonic images towards recognition of pathological lesions

    Science.gov (United States)

    Ogiela, M. R.; Bodzioch, S.

    2011-06-01

    This paper presents a new approach to gallbladder ultrasonic image processing and analysis aimed at the automatic detection and interpretation of disease symptoms on processed US images. First, the paper presents a new heuristic method of filtering gallbladder contours from the images. A major stage in this filtration is segmenting and sectioning off the areas occupied by the organ. The paper provides an inventive algorithm for the holistic extraction of gallbladder image contours, based on rank filtration as well as on the analysis of line profile sections of the examined organs. The second part concerns detecting the most important lesion symptoms of the gallbladder. Automating a process of diagnosis always comes down to developing algorithms that analyze the object of such diagnosis and verify the occurrence of symptoms related to a given affection. The methodology of computer analysis of US gallbladder images presented here is clearly utilitarian in nature and, after standardisation, can be used as a technique for supporting the diagnostics of selected gallbladder disorders using images of this organ.

  2. Multimedia Image Technology and Computer Aided Manufacturing Engineering Analysis

    Science.gov (United States)

    Nan, Song

    2018-03-01

    Since the reform and opening up, with the continuous development of science and technology in China, more and more advanced technologies have emerged under the trend of diversification. Multimedia imaging technology, for example, has had a significant and positive impact on computer aided manufacturing engineering in China. From the perspective of scientific and technological advancement, multimedia image technology has a very positive influence on the application and development of computer-aided manufacturing engineering, both in its functionality and in how that functionality is brought into play. Therefore, this paper starts from the concept of multimedia image technology to analyze its application in computer aided manufacturing engineering.

  3. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental in building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications. Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience, as contributions were written by both clinicians and researchers, which reflects the inte...

  4. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state of the art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from the remodeling of bone tissue under mechanical stimulus to optimizing the performance of sports equipment, through patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image-based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of the compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease. The book will be of interest to researchers, Ph.D. students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, and experimental analysis.

  5. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art computational image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  6. Medical image computing and computer-assisted intervention - MICCAI 2006. Pt. 1. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, R. [Technical Univ. of Denmark, Lyngby (Denmark). Informatics and Mathematical Modelling; Nielsen, M. [IT Univ. of Copenhagen (Denmark); Sporring, J. (eds.) [Copenhagen Univ. (Denmark). Dept. of Computer Science

    2006-07-01

    The two-volume set LNCS 4190 and LNCS 4191 constitutes the refereed proceedings of the 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006, held in Copenhagen, Denmark in October 2006. The program committee carefully selected 39 revised full papers and 193 revised poster papers from 578 submissions for presentation in two volumes, based on rigorous peer review. The first volume includes 114 contributions related to bone shape analysis, robotics and tracking, segmentation, analysis of diffusion tensor MRI, shape analysis and morphometry, simulation and interaction, robotics and intervention, cardio-vascular applications, image analysis in oncology, brain atlases and segmentation, cardiac motion analysis, clinical applications, and registration. The second volume collects 118 papers related to segmentation, validation and quantitative image analysis, brain image processing, motion in image formation, image guided clinical applications, registration, as well as brain analysis and registration. (orig.)

  7. Medical image computing and computer-assisted intervention - MICCAI 2006. Pt. 2. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, R. [Technical Univ. of Denmark, Lyngby (Denmark). Informatics and Mathematical Modelling; Nielsen, M. [IT Univ. of Copenhagen (Denmark); Sporring, J. (eds.) [Copenhagen Univ. (Denmark). Dept. of Computer Science

    2006-07-01

    The two-volume set LNCS 4190 and LNCS 4191 constitutes the refereed proceedings of the 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006, held in Copenhagen, Denmark in October 2006. The program committee carefully selected 39 revised full papers and 193 revised poster papers from 578 submissions for presentation in two volumes, based on rigorous peer review. The first volume includes 114 contributions related to bone shape analysis, robotics and tracking, segmentation, analysis of diffusion tensor MRI, shape analysis and morphometry, simulation and interaction, robotics and intervention, cardio-vascular applications, image analysis in oncology, brain atlases and segmentation, cardiac motion analysis, clinical applications, and registration. The second volume collects 118 papers related to segmentation, validation and quantitative image analysis, brain image processing, motion in image formation, image guided clinical applications, registration, as well as brain analysis and registration. (orig.)

  8. Medical image computing and computer-assisted intervention - MICCAI 2006. Pt. 2. Proceedings

    International Nuclear Information System (INIS)

    Larsen, R.; Sporring, J.

    2006-01-01

    The two-volume set LNCS 4190 and LNCS 4191 constitutes the refereed proceedings of the 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006, held in Copenhagen, Denmark in October 2006. The program committee carefully selected 39 revised full papers and 193 revised poster papers from 578 submissions for presentation in two volumes, based on rigorous peer review. The first volume includes 114 contributions related to bone shape analysis, robotics and tracking, segmentation, analysis of diffusion tensor MRI, shape analysis and morphometry, simulation and interaction, robotics and intervention, cardio-vascular applications, image analysis in oncology, brain atlases and segmentation, cardiac motion analysis, clinical applications, and registration. The second volume collects 118 papers related to segmentation, validation and quantitative image analysis, brain image processing, motion in image formation, image guided clinical applications, registration, as well as brain analysis and registration. (orig.)

  9. Medical image computing and computer-assisted intervention - MICCAI 2006. Pt. 1. Proceedings

    International Nuclear Information System (INIS)

    Larsen, R.; Sporring, J.

    2006-01-01

    The two-volume set LNCS 4190 and LNCS 4191 constitutes the refereed proceedings of the 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006, held in Copenhagen, Denmark in October 2006. The program committee carefully selected 39 revised full papers and 193 revised poster papers from 578 submissions for presentation in two volumes, based on rigorous peer review. The first volume includes 114 contributions related to bone shape analysis, robotics and tracking, segmentation, analysis of diffusion tensor MRI, shape analysis and morphometry, simulation and interaction, robotics and intervention, cardio-vascular applications, image analysis in oncology, brain atlases and segmentation, cardiac motion analysis, clinical applications, and registration. The second volume collects 118 papers related to segmentation, validation and quantitative image analysis, brain image processing, motion in image formation, image guided clinical applications, registration, as well as brain analysis and registration. (orig.)

  10. Computational medical imaging and hemodynamics framework for functional analysis and assessment of cardiovascular structures.

    Science.gov (United States)

    Wong, Kelvin K L; Wang, Defeng; Ko, Jacky K L; Mazumdar, Jagannath; Le, Thu-Thao; Ghista, Dhanjoo

    2017-03-21

    Cardiac dysfunction constitutes a common cardiovascular health issue in society, and has been an investigation topic of strong focus for researchers in the medical imaging community. Diagnostic modalities based on echocardiography, magnetic resonance imaging, chest radiography and computed tomography are common techniques that provide cardiovascular structural information to diagnose heart defects. However, functional information on cardiovascular flow, which can in fact be used to support the diagnosis of many cardiovascular diseases with a myriad of hemodynamic performance indicators, remains unexplored to its full potential. Some of these indicators constitute important cardiac functional parameters affecting cardiovascular abnormalities. With the advancement of computer technology that facilitates high-speed computational fluid dynamics, the realization of a diagnostic support platform for hemodynamics quantification and analysis can be achieved. This article reviews the state-of-the-art medical imaging and high-fidelity multi-physics computational analyses that together enable the reconstruction of cardiovascular structures and the hemodynamic flow patterns within them, such as of the left ventricle (LV) and carotid bifurcations. The combined medical imaging and hemodynamic analysis enables us to study the mechanisms of cardiovascular disease-causing dysfunctions, such as how (1) cardiomyopathy causes left ventricular remodeling and loss of contractility leading to heart failure, and (2) modeling of LV construction and simulation of intra-LV hemodynamics can enable us to determine the optimum procedure of surgical ventriculation to restore its contractility and health. This combined medical imaging and hemodynamics framework can potentially extend medical knowledge of cardiovascular defects and the associated hemodynamic behavior and their surgical restoration, by means of an integrated medical image diagnostics and hemodynamic performance analysis framework.

  11. Texture analysis of computed tomography images of acute ischemic stroke patients

    International Nuclear Information System (INIS)

    Oliveira, M.S.; Castellano, G.; Fernandes, P.T.; Avelar, W.M.; Santos, S.L.M.; Li, L.M.

    2009-01-01

    Computed tomography (CT) images are routinely used to assess ischemic brain stroke in the acute phase. They can provide important clues about whether to treat the patient by thrombolysis with tissue plasminogen activator. However, in the acute phase, the lesions may be difficult to detect in the images using standard visual analysis. The objective of the present study was to determine if texture analysis techniques applied to CT images of stroke patients could differentiate between normal tissue and affected areas that usually go unnoticed under visual analysis. We performed a pilot study in which texture analysis, based on the gray level co-occurrence matrix, was applied to the CT brain images of 5 patients and of 5 control subjects and the results were compared by discriminant analysis. Thirteen regions of interest, regarding areas that may be potentially affected by ischemic stroke, were selected for the calculation of texture parameters. All regions of interest for all subjects were classified as lesional or non-lesional tissue by an expert neuroradiologist. Visual assessment of the discriminant analysis graphs showed differences in the values of texture parameters between patients and controls, and also between texture parameters for lesional and non-lesional tissue of the patients. This suggests that texture analysis can indeed be a useful tool to help neurologists in the early assessment of ischemic stroke and the quantification of the extent of the affected areas. (author)
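    A sketch of gray level co-occurrence matrix (GLCM) texture parameters of the kind used in the study, written with scikit-image; the quantization level, distance and direction set are assumptions, since the record does not state the exact configuration.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops  # 'greyco...' in skimage < 0.19

      def glcm_features(roi, levels=32):
          """Direction-averaged GLCM features for one 2-D region of
          interest, requantized to `levels` gray levels."""
          edges = np.linspace(roi.min(), roi.max() + 1e-6, levels)
          q = (np.digitize(roi, edges) - 1).astype(np.uint8)
          glcm = graycomatrix(q, distances=[1],
                              angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                              levels=levels, symmetric=True, normed=True)
          return {p: graycoprops(glcm, p).mean()
                  for p in ('contrast', 'homogeneity', 'energy', 'correlation')}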

  12. Mobile Imaging and Computing for Intelligent Structural Damage Inspection

    Directory of Open Access Journals (Sweden)

    ZhiQiang Chen

    2014-01-01

    Optical imaging is a commonly used technique in civil engineering for aiding the archival of damage scenes and, more recently, for image analysis-based damage quantification. However, its limitations are evident when applying optical imaging in the field; the most significant one is the lack of real-time computing and processing capability. The advancement of mobile imaging and computing technologies provides a promising opportunity to change this norm. This paper first provides a timely introduction to the state-of-the-art mobile imaging and computing technologies for the purpose of engineering application development. Further, we propose a mobile imaging and computing (MIC) framework for conducting intelligent condition assessment of constructed objects, which features in situ imaging and real-time damage analysis. This framework synthesizes advanced mobile technologies with three innovative features: (i) context-enabled image collection, (ii) interactive image preprocessing, and (iii) real-time image analysis and analytics. Through performance evaluation and field experiments, this paper demonstrates the feasibility and efficiency of the proposed framework.

  13. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable alternative mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  14. Computer assisted analysis of medical x-ray images

    Science.gov (United States)

    Bengtsson, Ewert

    1996-01-01

    X-rays were originally used to expose film. The early computers did not have enough capacity to handle images with useful resolution. The rapid development of computer technology over the last few decades has, however, led to the introduction of computers into radiology. In this overview paper, the various possible roles of computers in radiology are examined. The state of the art is briefly presented, and some predictions about the future are made.

  15. Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization

    Science.gov (United States)

    Dominguez, Jesus A.; Klinko, Steve

    2011-01-01

    This slide presentation reviews the use of "soft computing", which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and to achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. The NASA applications reviewed are: Real-Time (RT) Anomaly Detection, Real-Time (RT) Moving Debris Detection, and the Columbia investigation. The RT anomaly detection reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial usage: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection and X-ray image enhancement.
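    The internals of FRED/FRAT are not given in the presentation; as a hedged illustration of the underlying idea only, a fuzzy edge detector can map gradient magnitudes through a membership function so each pixel receives a degree of 'edgeness' in [0, 1] instead of a hard binary decision.

      import numpy as np
      from scipy import ndimage

      def fuzzy_edges(image, low, high):
          """Toy fuzzy-reasoning edge detector (not NASA's FRED):
          Sobel gradient magnitude mapped through a piecewise-linear
          membership function (0 below `low`, 1 above `high`)."""
          gx = ndimage.sobel(image.astype(float), axis=1)
          gy = ndimage.sobel(image.astype(float), axis=0)
          grad = np.hypot(gx, gy)
          return np.clip((grad - low) / (high - low), 0.0, 1.0)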

  16. Automatic computer aided analysis algorithms and system for adrenal tumors on CT images.

    Science.gov (United States)

    Chai, Hanchao; Guo, Yi; Wang, Yuanyuan; Zhou, Guohui

    2017-12-04

    An adrenal tumor disturbs the secretory function of adrenocortical cells, leading to many diseases, and different kinds of adrenal tumors require different therapeutic schedules. In practical diagnosis, judging the tumor type relies heavily on the doctor's experience in reading hundreds of CT images. This paper proposes an automatic computer aided analysis method for adrenal tumor detection and classification. It consists of automatic segmentation algorithms, feature extraction and classification algorithms. These algorithms were integrated into a system operated through a graphic interface built with the MATLAB graphical user interface (GUI) tools. The accuracy of the automatic computer aided segmentation and classification reached 90% on 436 CT images. The experiments proved the stability and reliability of this automatic computer aided analytic system.
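    The record describes a MATLAB GUI system; purely as a language-neutral sketch of the segmentation-features-classification pipeline, with a feature set and classifier that are assumptions rather than the authors' choices:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      def tumor_features(roi):
          """Toy intensity features for one segmented tumor region."""
          return [roi.mean(), roi.std(), roi.size,
                  np.percentile(roi, 10), np.percentile(roi, 90)]

      def evaluate(X, y):
          """Cross-validated accuracy: X holds one feature vector per
          segmented tumor, y the corresponding tumor-type labels."""
          clf = RandomForestClassifier(n_estimators=200, random_state=0)
          return cross_val_score(clf, X, y, cv=5).mean()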

  17. Improving Eastern Bluebird nest box performance using computer analysis of satellite images

    Directory of Open Access Journals (Sweden)

    Sarah Svatora

    2012-06-01

    Bird conservationists have been introducing man-made nest boxes in an effort to increase the bluebird population. In this study we use computer analysis of satellite images to show that the performance of the boxes used by Eastern Bluebirds (Sialia sialis) in Michigan can be improved by about 48%. The analysis is based on a strong correlation found between the edge directionality measured in the satellite image of the area around a box and the preferences of the birds when selecting their nesting site. The method is based on satellite images taken from Google Earth, and can be used by conservationists to select a box placement strategy that will optimize the efficacy of the boxes deployed in a given area.
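    The record does not define 'edge directionality' precisely; one plausible reading, sketched below, is the peakedness of a magnitude-weighted histogram of local gradient orientations in the image patch around a box site.

      import numpy as np
      from scipy import ndimage

      def edge_directionality(patch, bins=36):
          """Dominance of edge orientation in an image patch: 1.0 for
          perfectly uniform orientations, larger values when a single
          direction dominates."""
          gx = ndimage.sobel(patch.astype(float), axis=1)
          gy = ndimage.sobel(patch.astype(float), axis=0)
          mag = np.hypot(gx, gy)
          ang = np.arctan2(gy, gx) % np.pi        # orientation in [0, pi)
          hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
          hist = hist / (hist.sum() + 1e-12)
          return float(hist.max() * bins)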

  18. 16th International Conference on Medical Image Computing and Computer Assisted Intervention

    CERN Document Server

    Klinder, Tobias; Li, Shuo

    2014-01-01

    This book contains the full papers presented at the MICCAI 2013 workshop Computational Methods and Clinical Applications for Spine Imaging. The workshop brought together researchers representing several fields, such as Biomechanics, Engineering, Medicine, Mathematics, Physics and Statistics. The works included in this book present and discuss new trends in those fields, using several methods and techniques in order to address more efficiently different and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image-based modelling, simulation and surgical planning, image-guided robot-assisted surgery, and image-based diagnosis.

  19. Ultrasonic image analysis and image-guided interventions.

    Science.gov (United States)

    Noble, J Alison; Navab, Nassir; Becher, H

    2011-08-06

    The fields of medical image analysis and computer-aided interventions deal with reducing the large volume of digital images (X-ray, computed tomography, magnetic resonance imaging (MRI), positron emission tomography and ultrasound (US)) to more meaningful clinical information using software algorithms. US is a core imaging modality employed in these areas, both in its own right and used in conjunction with the other imaging modalities. It is receiving increased interest owing to the recent introduction of three-dimensional US, significant improvements in US image quality, and better understanding of how to design algorithms which exploit the unique strengths and properties of this real-time imaging modality. This article reviews the current state of the art in US image analysis and its application in image-guided interventions. The article concludes by giving a perspective from clinical cardiology, which is one of the most advanced areas of clinical application of US image analysis, and describing some probable future trends in this important area of ultrasonic imaging research.

  20. Computational Methods for Nanoscale X-ray Computed Tomography Image Analysis of Fuel Cell and Battery Materials

    Science.gov (United States)

    Kumar, Arjun S.

    Over the last fifteen years, there has been a rapid growth in the use of high resolution X-ray computed tomography (HRXCT) imaging in material science applications. We use it at nanoscale resolutions down to 50 nm (nano-CT) for key research problems in the large-scale operation of polymer electrolyte membrane fuel cells (PEMFC) and lithium-ion (Li-ion) batteries in automotive applications. PEMFC are clean energy sources that electrochemically react with hydrogen gas to produce water and electricity. To reduce their costs, capturing their electrode nanostructure has become significant in modeling and optimizing their performance. For Li-ion batteries, a key challenge in increasing their scope for the automotive industry is Li metal dendrite growth. Li dendrites are structures of lithium with 100 nm features of interest that can grow chaotically within a battery and eventually lead to a short-circuit. HRXCT imaging is an effective diagnostics tool for such applications as it is a non-destructive method of capturing the 3D internal X-ray absorption coefficient of materials from a large series of 2D X-ray projections. Despite a recent push to use HRXCT for quantitative information on material samples, there is a relative dearth of computational tools in nano-CT image processing and analysis. Hence, we focus on developing computational methods for nano-CT image analysis of fuel cell and battery materials as required by the limitations in material samples and the imaging environment. The first problem we address is the segmentation of nano-CT Zernike phase contrast images. Nano-CT instruments are equipped with Zernike phase contrast optics to distinguish materials with a low difference in X-ray absorption coefficient by phase shifting the X-ray wave that is not diffracted by the sample. However, this creates image artifacts that hinder the use of traditional image segmentation techniques. To restore such images, we set up an inverse problem by modeling the X-ray phase contrast …

  1. Progression Analysis and Stage Discovery in Continuous Physiological Processes Using Image Computing

    Directory of Open Access Journals (Sweden)

    Ferrucci Luigi

    2010-01-01

    We propose an image computing-based method for quantitative analysis of continuous physiological processes that can be sensed by medical imaging and demonstrate its application to the analysis of morphological alterations of the bone structure, which correlate with the progression of osteoarthritis (OA). The purpose of the analysis is to quantitatively estimate OA progression in a fashion that can assist in understanding the pathophysiology of the disease. Ultimately, the texture analysis will be able to provide an alternative OA scoring method, which can potentially reflect the progression of the disease in a more direct fashion compared to the existing clinically utilized classification schemes based on radiology. This method can be useful not just for studying the nature of OA, but also for developing and testing the effect of drugs and treatments. While in this paper we demonstrate the application of the method to osteoarthritis, its generality makes it suitable for the analysis of other progressive clinical conditions that can be diagnosed and prognosed by using medical imaging.

  2. Tolerance analysis through computational imaging simulations

    Science.gov (United States)

    Birch, Gabriel C.; LaCasse, Charles F.; Stubbs, Jaclynn J.; Dagel, Amber L.; Bradley, Jon

    2017-11-01

    The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.

  3. Computational morphology of the lung and its virtual imaging

    International Nuclear Information System (INIS)

    Kitaoka, Hiroko

    2002-01-01

    The author proposes an entirely new approach called 'virtual imaging' of an organ, based on 'computational morphology'. Computational morphology describes mathematically the design principles of an organ structure in order to generate the organ model via computer, which can be called a virtual organ. Virtual imaging simulates image data using the virtual organ. The virtual organ is divided into cubic voxels, and the CT value or other intensity value for each voxel is calculated according to the tissue properties within the voxel. The validity of the model is examined by comparing virtual images with clinical images. Computational image analysis methods can be developed based on validated models. In this paper, the computational anatomy of the lung and its virtual X-ray imaging are introduced.

  4. Computer-aided pulmonary image analysis in small animal models

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J. [Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, Maryland 32892 (United States); Bagci, Ulas, E-mail: ulasbagci@gmail.com [Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, Florida 32816 (United States); Kramer-Marek, Gabriela [The Institute of Cancer Research, London SW7 3RP (United Kingdom); Luna, Brian [Microfluidic Laboratory Automation, University of California-Irvine, Irvine, California 92697-2715 (United States); Kubler, Andre [Department of Medicine, Imperial College London, London SW7 2AZ (United Kingdom); Dey, Bappaditya; Jain, Sanjay [Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, Maryland 21231 (United States); Foster, Brent [Department of Biomedical Engineering, University of California-Davis, Davis, California 95817 (United States); Papadakis, Georgios Z. [Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, Maryland 32892 (United States); Camp, Jeremy V. [Department of Microbiology and Immunology, University of Louisville, Louisville, Kentucky 40202 (United States); Jonsson, Colleen B. [National Institute for Mathematical and Biological Synthesis, University of Tennessee, Knoxville, Tennessee 37996 (United States); Bishai, William R. [Howard Hughes Medical Institute, Chevy Chase, Maryland 20815 and Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, Maryland 21231 (United States); Udupa, Jayaram K. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2015-07-15

    Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors’ system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology and invokes a machine-learning-based abnormal imaging pattern detection system. The final stage of the proposed framework is the automatic extraction of the airway tree, for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method by using the publicly available EXACT’09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute to advances in preclinical research on pulmonary diseases.
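
    The first stage, estimating an expected lung volume from rib cage volume and flagging large deficits, can be sketched as follows. This is a minimal illustration with synthetic numbers and a hypothetical tolerance, not the authors' fitted model:

```python
import numpy as np

def fit_lung_volume_model(rib_cage_volumes, lung_volumes):
    """Fit a linear regression: expected lung volume vs. rib cage volume."""
    slope, intercept = np.polyfit(rib_cage_volumes, lung_volumes, deg=1)
    return slope, intercept

def severe_pathology_suspected(rib_cage_volume, segmented_volume,
                               slope, intercept, rel_tol=0.25):
    """Flag when the initial segmentation falls far below expectation.
    rel_tol is an illustrative threshold, not from the paper."""
    expected = slope * rib_cage_volume + intercept
    return (expected - segmented_volume) / expected > rel_tol

# Toy training data (all numbers invented for illustration).
rng = np.random.default_rng(0)
ribs = rng.uniform(40.0, 80.0, size=30)        # approximated rib cage volumes
lungs = 0.6 * ribs + rng.normal(0.0, 1.0, 30)  # healthy lung volumes
m, b = fit_lung_volume_model(ribs, lungs)
print(severe_pathology_suspected(60.0, 20.0, m, b))  # True: large deficit
```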

  5. A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners.

    Science.gov (United States)

    Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh

    2013-01-01

    Digital foot scanners have been developed in recent years to provide anthropometrists with digital insole images showing pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining a gray level spatial correlation (GLSC) histogram and Shanbhag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC and the transform function of the membership values. The resulting thresholded binary images then undergo anthropometric measurements, taking into account the scale factor from pixel size to metric units. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, the experimental setup described in the paper. Computation time and the effects of the GLSC parameters are investigated in the simulation results.
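
    The paper's thresholding combines a GLSC histogram with Shanbhag entropy and evolutionary parameter tuning; none of that is reproduced here. As a simpler stand-in that illustrates entropy-based threshold selection, the sketch below implements Kapur's maximum-entropy criterion on a plain gray-level histogram:

```python
import numpy as np

def kapur_threshold(image, n_bins=256):
    """Pick the threshold maximizing the summed Shannon entropies of the
    background and foreground gray-level distributions (Kapur's method,
    a stand-in for the paper's GLSC/Shanbhag criterion)."""
    hist, _ = np.histogram(image.ravel(), bins=n_bins, range=(0, n_bins))
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, n_bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0])) \
            - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Toy bimodal image: dark background, bright "foot" region.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])
img = np.clip(img, 0, 255).astype(np.uint8)
print(kapur_threshold(img))  # lands between the two modes
```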

  6. Advances in medical image computing.

    Science.gov (United States)

    Tolxdorff, T; Deserno, T M; Handels, H; Meinzer, H-P

    2009-01-01

    Medical image computing has become a key technology in high-tech applications in medicine and a ubiquitous part of modern imaging systems and the related processes of clinical diagnosis and intervention. Over the past years, significant progress has been made in the field, both at the methodological and at the application level. Despite this progress, there are still major challenges to meet in order to establish image processing routinely in health care. In this issue, selected contributions of the German Conference on Medical Image Processing (BVM) are assembled to present the latest advances in the field of medical image computing. The winners of scientific awards of the German Conference on Medical Image Processing (BVM) 2008 were invited to submit a manuscript on their latest developments and results for possible publication in Methods of Information in Medicine. Finally, seven excellent papers were selected to describe important aspects of recent advances in the field of medical image processing. The selected papers give an impression of the breadth and heterogeneity of new developments. New methods for improved image segmentation, non-linear image registration, and modeling of organs are presented, together with applications of image analysis methods in different medical disciplines. Furthermore, state-of-the-art tools and techniques to support the development and evaluation of medical image processing systems in practice are described. The selected articles describe different aspects of the intense development in medical image computing. The image processing methods presented enable new insights into the patient's image data and have the future potential to improve medical diagnostics and patient treatment.

  7. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  8. A sampler of useful computational tools for applied geometry, computer graphics, and image processing foundations for computer graphics, vision, and image processing

    CERN Document Server

    Cohen-Or, Daniel; Ju, Tao; Mitra, Niloy J; Shamir, Ariel; Sorkine-Hornung, Olga; Zhang, Hao (Richard)

    2015-01-01

    A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science areas. The book discusses fundamental tools in analytical geometry and linear algebra. It covers a wide range of topics, from matrix decomposition to curvature analysis and principal component analysis to dimensionality reduction. Written by a team of highly respected professors, the book can be used in a one-semester, intermediate-level course in computer science. It

  9. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormality detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided diagnosis; shape-based medical navigation; and benchmarking and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  10. Computer-aided breast MR image feature analysis for prediction of tumor response to chemotherapy

    International Nuclear Information System (INIS)

    Aghaei, Faranak; Tan, Maxine; Liu, Hong; Zheng, Bin; Hollingsworth, Alan B.; Qian, Wei

    2015-01-01

    Purpose: To identify a new clinical marker based on quantitative kinetic image feature analysis and assess its feasibility for predicting tumor response to neoadjuvant chemotherapy. Methods: The authors assembled a dataset involving breast MR images acquired from 68 cancer patients before undergoing neoadjuvant chemotherapy. Among them, 25 patients had complete response (CR) and 43 had partial and nonresponse (NR) to chemotherapy based on the response evaluation criteria in solid tumors. The authors developed a computer-aided detection scheme to segment breast areas and tumors depicted on the breast MR images and computed a total of 39 kinetic image features from both tumor and background parenchymal enhancement regions. The authors then applied and tested two approaches to classify between CR and NR cases. The first one analyzed each individual feature and applied a simple feature fusion method that combines classification results from multiple features. The second approach tested an attribute selected classifier that integrates an artificial neural network (ANN) with a wrapper subset evaluator, which was optimized using a leave-one-case-out validation method. Results: In the pool of 39 features, 10 yielded relatively higher classification performance with the areas under receiver operating characteristic curves (AUCs) ranging from 0.61 to 0.78 to classify between CR and NR cases. Using a feature fusion method, the maximum AUC = 0.85 ± 0.05. Using the ANN-based classifier, the AUC value significantly increased to 0.96 ± 0.03 (p < 0.01). Conclusions: This study demonstrated that quantitative analysis of kinetic image features computed from breast MR images acquired prechemotherapy has the potential to generate a useful clinical marker for predicting tumor response to chemotherapy.

  11. Computer-aided breast MR image feature analysis for prediction of tumor response to chemotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Aghaei, Faranak; Tan, Maxine; Liu, Hong; Zheng, Bin, E-mail: Bin.Zheng-1@ou.edu [School of Electrical and Computer Engineering, University of Oklahoma, Norman, Oklahoma 73019 (United States); Hollingsworth, Alan B. [Mercy Women’s Center, Mercy Health Center, Oklahoma City, Oklahoma 73120 (United States); Qian, Wei [Department of Electrical and Computer Engineering, University of Texas, El Paso, Texas 79968 (United States)

    2015-11-15

    Purpose: To identify a new clinical marker based on quantitative kinetic image feature analysis and assess its feasibility for predicting tumor response to neoadjuvant chemotherapy. Methods: The authors assembled a dataset involving breast MR images acquired from 68 cancer patients before undergoing neoadjuvant chemotherapy. Among them, 25 patients had complete response (CR) and 43 had partial and nonresponse (NR) to chemotherapy based on the response evaluation criteria in solid tumors. The authors developed a computer-aided detection scheme to segment breast areas and tumors depicted on the breast MR images and computed a total of 39 kinetic image features from both tumor and background parenchymal enhancement regions. The authors then applied and tested two approaches to classify between CR and NR cases. The first one analyzed each individual feature and applied a simple feature fusion method that combines classification results from multiple features. The second approach tested an attribute selected classifier that integrates an artificial neural network (ANN) with a wrapper subset evaluator, which was optimized using a leave-one-case-out validation method. Results: In the pool of 39 features, 10 yielded relatively higher classification performance with the areas under receiver operating characteristic curves (AUCs) ranging from 0.61 to 0.78 to classify between CR and NR cases. Using a feature fusion method, the maximum AUC = 0.85 ± 0.05. Using the ANN-based classifier, the AUC value significantly increased to 0.96 ± 0.03 (p < 0.01). Conclusions: This study demonstrated that quantitative analysis of kinetic image features computed from breast MR images acquired prechemotherapy has the potential to generate a useful clinical marker for predicting tumor response to chemotherapy.

  12. Computational surgery and dual training computing, robotics and imaging

    CERN Document Server

    Bass, Barbara; Berceli, Scott; Collet, Christophe; Cerveri, Pietro

    2014-01-01

    This critical volume focuses on the use of medical imaging, medical robotics, simulation, and information technology in surgery. It offers a road map for computational surgery success, discusses the computer-assisted management of disease and surgery, and provides a rationale for image processing and diagnostics. The book also presents advances in image-driven intervention and robotics, and evaluates models and simulations for a broad spectrum of cancers as well as cardiovascular, neurological, and bone diseases. Training and performance analysis in surgery assisted by robotic systems is also covered. This book also:
    · Provides a comprehensive overview of the use of computational surgery and disease management
    · Discusses the design and use of medical robotic tools for orthopedic surgery, endoscopic surgery, and prostate surgery
    · Provides practical examples and case studies in the areas of image processing, virtual surgery, and simulation traini...

  13. Computer-assisted image analysis assay of human neutrophil chemotaxis in vitro

    DEFF Research Database (Denmark)

    Jensen, P; Kharazmi, A

    1991-01-01

    We have developed a computer-based image analysis system to measure in-filter migration of human neutrophils in the Boyden chamber. This method is compared with conventional manual counting techniques. Neutrophils from healthy individuals and from patients with reduced chemotactic activity were ... Another advantage of the assay is that it can be used to show the migration pattern of different populations of neutrophils from both healthy individuals and patients.

  14. Micro-computed tomography imaging and analysis in developmental biology and toxicology.

    Science.gov (United States)

    Wise, L David; Winkelmann, Christopher T; Dogdas, Belma; Bagchi, Ansuman

    2013-06-01

    Micro-computed tomography (micro-CT) is a high resolution imaging technique that has expanded and strengthened in use since it was last reviewed in this journal in 2004. The technology has expanded to include more detailed analysis of bone, as well as soft tissues, by use of various contrast agents. It is increasingly applied to questions in developmental biology and developmental toxicology. Relatively high-throughput protocols now provide a powerful and efficient means to evaluate embryos and fetuses subjected to genetic manipulations or chemical exposures. This review provides an overview of the technology, including scanning, reconstruction, visualization, segmentation, and analysis of micro-CT generated images. This is followed by a review of more recent applications of the technology in some common laboratory species that highlight the diverse issues that can be addressed. Copyright © 2013 Wiley Periodicals, Inc.

  15. A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners

    Science.gov (United States)

    Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh

    2013-01-01

    Digital foot scanners have been developed in recent years to provide anthropometrists with digital insole images showing pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining a gray level spatial correlation (GLSC) histogram and Shanbhag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC and the transform function of the membership values. The resulting thresholded binary images then undergo anthropometric measurements, taking into account the scale factor from pixel size to metric units. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, the experimental setup described in the paper. Computation time and the effects of the GLSC parameters are investigated in the simulation results. PMID:24083133

  16. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  17. Computer Vision and Image Processing: A Paper Review

    Directory of Open Access Journals (Sweden)

    victor - wiley

    2018-02-01

    Full Text Available Computer vision has been studied from many perspectives. It expands from raw data recording into techniques and ideas combining digital image processing, pattern recognition, machine learning, and computer graphics. Its wide usage has attracted many scholars to integrate it with many disciplines and fields. This paper provides a survey of the recent technologies and theoretical concepts explaining the development of computer vision, especially as it relates to image processing, across different areas of application. Computer vision helps scholars analyze images and video to obtain necessary information, understand information about events or descriptions, and recognize scenic patterns. It uses methods spanning a wide range of application domains with massive data analysis. This paper contributes a review of recent developments in computer vision, image processing, and related studies. We categorize the computer vision mainstream into four groups, e.g., image processing, object recognition, and machine learning. We also provide a brief explanation of up-to-date information about the techniques and their performance.

  18. Computer-Based Image Analysis for Plus Disease Diagnosis in Retinopathy of Prematurity: Performance of the "i-ROP" System and Image Features Associated With Expert Diagnosis.

    Science.gov (United States)

    Ataer-Cansizoglu, Esra; Bolon-Canedo, Veronica; Campbell, J Peter; Bozkurt, Alican; Erdogmus, Deniz; Kalpathy-Cramer, Jayashree; Patel, Samir; Jonas, Karyn; Chan, R V Paul; Ostmo, Susan; Chiang, Michael F

    2015-11-01

    We developed and evaluated the performance of a novel computer-based image analysis system for grading plus disease in retinopathy of prematurity (ROP), and identified the image features, shapes, and sizes that best correlate with expert diagnosis. A dataset of 77 wide-angle retinal images from infants screened for ROP was collected. A reference standard diagnosis was determined for each image by combining image grading from 3 experts with the clinical diagnosis from ophthalmoscopic examination. Manually segmented images were cropped into a range of shapes and sizes, and a computer algorithm was developed to extract tortuosity and dilation features from arteries and veins. Each feature was fed into our system to identify the set of characteristics that yielded the highest-performing system compared to the reference standard, which we refer to as the "i-ROP" system. Among the tested crop shapes, sizes, and measured features, point-based measurements of arterial and venous tortuosity (combined), and a large circular cropped image (with radius 6 times the disc diameter), provided the highest diagnostic accuracy. The i-ROP system achieved 95% accuracy for classifying preplus and plus disease compared to the reference standard. This was comparable to the performance of the 3 individual experts (96%, 94%, 92%), and significantly higher than the mean performance of 31 nonexperts (81%). This comprehensive analysis of computer-based plus disease diagnosis suggests that it may be feasible to develop a fully automated system based on wide-angle retinal images that performs comparably to expert graders at three-level plus disease discrimination. Computer-based image analysis, using objective and quantitative retinal vascular features, has the potential to complement clinical ROP diagnosis by ophthalmologists.
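
    Vessel tortuosity, one of the features the i-ROP system measures, is commonly defined as the ratio of a centerline's arc length to its chord length. The i-ROP point-based features are more elaborate, so treat this as a generic sketch of the concept:

```python
import numpy as np

def tortuosity(points):
    """Arc length over chord length for an ordered vessel centerline."""
    pts = np.asarray(points, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    chord = np.linalg.norm(pts[-1] - pts[0])
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0)]
wiggly = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
print(tortuosity(straight))  # 1.0 for a straight vessel
print(tortuosity(wiggly))    # > 1.0 for a tortuous vessel
```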

  19. System Matrix Analysis for Computed Tomography Imaging

    Science.gov (United States)

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography (CT) imaging, it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesirable. These issues demand that high-quality CT images be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate the elements of the matrix, and we present results based on real projection data. PMID:26575482
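
    Each row of the system matrix stores how much a given ray intersects each pixel. Siddon's method computes these lengths exactly from parametric plane crossings; the sketch below only approximates one row by dense point sampling (not Siddon's algorithm), which is enough to show what the matrix encodes:

```python
import numpy as np

def ray_row(p0, p1, nx, ny, n_samples=100000):
    """Approximate per-pixel intersection lengths of the ray p0 -> p1
    through an nx-by-ny grid of unit pixels (sampling, not Siddon)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    ts = (np.arange(n_samples) + 0.5) / n_samples
    pts = p0 + ts[:, None] * (p1 - p0)              # points along the ray
    step_len = np.linalg.norm(p1 - p0) / n_samples  # length per sample
    ij = np.floor(pts).astype(int)                  # pixel index per sample
    ok = (ij[:, 0] >= 0) & (ij[:, 0] < nx) & (ij[:, 1] >= 0) & (ij[:, 1] < ny)
    row = np.zeros((nx, ny))
    np.add.at(row, (ij[ok, 0], ij[ok, 1]), step_len)
    return row.ravel()  # one row of the system matrix

# A diagonal ray through a 4x4 grid: each crossed pixel gets length ~sqrt(2).
print(ray_row((0.0, 0.0), (4.0, 4.0), 4, 4).reshape(4, 4).round(2))
```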

  20. Automatic analysis of digitized TV-images by a computer-driven optical microscope

    International Nuclear Information System (INIS)

    Rosa, G.; Di Bartolomeo, A.; Grella, G.; Romano, G.

    1997-01-01

    New methods of image analysis and three-dimensional pattern recognition were developed in order to perform the automatic scan of nuclear emulsion pellicles. An optical microscope, with a motorized stage, was equipped with a CCD camera and an image digitizer, and interfaced to a personal computer. Selected software routines inspired the design of a dedicated hardware processor. Fast operation, high efficiency and accuracy were achieved. First applications to high-energy physics experiments are reported. Further improvements are in progress, based on a high-resolution fast CCD camera and on programmable digital signal processors. Applications to other research fields are envisaged. (orig.)

  1. Processing computed tomography images by using personal computer

    International Nuclear Information System (INIS)

    Seto, Kazuhiko; Fujishiro, Kazuo; Seki, Hirofumi; Yamamoto, Tetsuo.

    1994-01-01

    Processing of CT images was attempted by using a popular personal computer. The image-processing program was written in C. The original images, acquired with a CT scanner (TCT-60A, Toshiba), were transferred to the computer on 8-inch flexible diskettes. Many fundamental image-processing operations were performed, such as displaying the image on the monitor, calculating CT values, and drawing profile curves. The results showed that a popular personal computer has the ability to process CT images. It seemed that the 8-inch flexible diskette was still a useful medium for transferring image data. (author)

  2. Knowledge-based image analysis: some aspects on the analysis of images using other types of information

    Energy Technology Data Exchange (ETDEWEB)

    Eklundh, J O

    1982-01-01

    The computer vision approach to image analysis is discussed from two aspects. First, this approach is contrasted with the pattern recognition approach. Second, it is discussed how external knowledge, information, and models from other fields of science and engineering can be used for image and scene analysis. In particular, the connections between computer vision and computer graphics are pointed out.

  3. Foundations of computer vision computational geometry, visual image structures and object shape detection

    CERN Document Server

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, classification of chapter problems with the symbols (easily solved) and (challenging) and its extensive glossary of key words, examples and connections with the fabric of C...

  4. Image processing by computer analysis--potential use and application in civil and criminal litigation.

    Science.gov (United States)

    Wilson, T W

    1990-01-01

    Image processing by computer analysis has established a database of applications in the industrial world. Testing has proved that the same systems can provide documentation and evidence in all facets of modern-day life. The medicolegal aspects of civil and criminal litigation are no exception. The primary function of the image processing system is to derive all of the information available from the image being processed. The process will extract this information in an unbiased manner, based solely on the physics of reflected light energy. The computer will analyze this information and present it in pictorial form, with mathematical data to support the form presented. This information can be presented in the courtroom with full credibility as an unbiased, reliable witness. New scientific techniques shown in the courtroom are subject to having their validity proven. Past imaging techniques shown in the courtroom have made the conventional rules of evidence more difficult to apply because of the different informational content and format required for presentation of these data. I believe the manner in which the evidence can now be presented in pictorial form will simplify its acceptance. Everyone, including the layman, the judge, and the jury, will be able to identify and understand the implications of the before and after changes to the image being presented. In this article, I have mentioned just some of the ways in which image processing by computer analysis can be useful in civil and criminal litigation areas: existing photographic evidence; forensic reconstruction; correlation of effect evidence with cause of evidence; medical records as legal protection; providing evidence of circumstance of death; child abuse, with tracking over time to prevent death; investigation of operating room associated deaths; detection of blood at the scene of the crime and on suspected objects; use of scales at the scene of the crime; providing medicolegal evidence beyond today

  5. Information granules in image histogram analysis.

    Science.gov (United States)

    Wieclawek, Wojciech

    2018-04-01

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this term in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). Like the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensities and is controlled by two parameters. Performance is tested on anonymized clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.
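
    For reference, the baseline the granular method is contrasted with, global histogram equalization, fits in a few lines. This is a standard textbook implementation, not the paper's granular algorithm:

```python
import numpy as np

def equalize(image):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0][0]                  # first non-zero CDF value
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return np.clip(lut, 0, 255).astype(np.uint8)[image]

# Low-contrast toy image squeezed into [100, 140); output spans [0, 255].
img = np.random.default_rng(0).integers(100, 140, (64, 64), dtype=np.uint8)
out = equalize(img)
print(img.min(), img.max(), out.min(), out.max())
```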

  6. Independent component analysis of dynamic contrast-enhanced computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Koh, T S [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Yang, X [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Bisdas, S [Department of Diagnostic and Interventional Radiology, Johann Wolfgang Goethe University Hospital, Theodor-Stern-Kai 7, D-60590 Frankfurt (Germany); Lim, C C T [Department of Neuroradiology, National Neuroscience Institute, 11 Jalan Tan Tock Seng, Singapore 308433 (Singapore)

    2006-10-07

    Independent component analysis (ICA) was applied on dynamic contrast-enhanced computed tomography images of cerebral tumours to extract spatial component maps of the underlying vascular structures, which correspond to different haemodynamic phases as depicted by the passage of the contrast medium. The locations of arteries, veins and tumours can be separately identified on these spatial component maps. As the contrast enhancement behaviour of the cerebral tumour differs from the normal tissues, ICA yields a tumour component map that reveals the location and extent of the tumour. Tumour outlines can be generated using the tumour component maps, with relatively simple segmentation methods. (note)
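
    A minimal sketch of spatial ICA on a dynamic series, using scikit-learn's FastICA with random stand-in data (real input would be the registered contrast-enhanced CT frames; sizes and component count are assumptions for illustration):

```python
import numpy as np
from sklearn.decomposition import FastICA

T, H, W = 40, 32, 32                      # frames, height, width (toy sizes)
series = np.random.default_rng(0).random((T, H, W))  # stand-in CT frames

X = series.reshape(T, H * W).T            # rows: pixels, columns: time points
ica = FastICA(n_components=3, random_state=0, max_iter=500)
maps = ica.fit_transform(X)               # (H*W, 3) spatial component maps
component_maps = maps.T.reshape(3, H, W)  # e.g., arteries / veins / tumour
time_courses = ica.mixing_                # (T, 3) enhancement curve per map
print(component_maps.shape, time_courses.shape)
```

    Treating pixels as samples yields statistically independent spatial maps, matching the paper's use of component maps to localize arteries, veins, and tumour.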

  7. Image sequence analysis

    CERN Document Server

    1981-01-01

    The processing of image sequences has a broad spectrum of important applications including target tracking, robot navigation, bandwidth compression of TV conferencing video signals, studying the motion of biological cells using microcinematography, cloud tracking, and highway traffic monitoring. Image sequence processing involves a large amount of data. However, because of the progress in computer, LSI, and VLSI technologies, we have now reached a stage when many useful processing tasks can be done in a reasonable amount of time. As a result, research and development activities in image sequence analysis have recently been growing at a rapid pace. An IEEE Computer Society Workshop on Computer Analysis of Time-Varying Imagery was held in Philadelphia, April 5-6, 1979. A related special issue of the IEEE Transactions on Pattern Analysis and Machine Intelligence was published in November 1980. The IEEE Computer magazine has also published a special issue on the subject in 1981. The purpose of this book ...

  8. Legal issues of computer imaging in plastic surgery: a primer.

    Science.gov (United States)

    Chávez, A E; Dagum, P; Koch, R J; Newman, J P

    1997-11-01

    Although plastic surgeons are increasingly incorporating computer imaging techniques into their practices, many fear the possibility of legally binding themselves to achieve surgical results identical to those reflected in computer images. Computer imaging allows surgeons to manipulate digital photographs of patients to project possible surgical outcomes. Some of the many benefits imaging techniques pose include improving doctor-patient communication, facilitating the education and training of residents, and reducing administrative and storage costs. Despite the many advantages computer imaging systems offer, however, surgeons understandably worry that imaging systems expose them to immense legal liability. The possible exploitation of computer imaging by novice surgeons as a marketing tool, coupled with the lack of consensus regarding the treatment of computer images, adds to the concern of surgeons. A careful analysis of the law, however, reveals that surgeons who use computer imaging carefully and conservatively, and adopt a few simple precautions, substantially reduce their vulnerability to legal claims. In particular, surgeons face possible claims of implied contract, failure to instruct, and malpractice from their use or failure to use computer imaging. Nevertheless, legal and practical obstacles frustrate each of those causes of actions. Moreover, surgeons who incorporate a few simple safeguards into their practice may further reduce their legal susceptibility.

  9. Survival Prediction in Pancreatic Ductal Adenocarcinoma by Quantitative Computed Tomography Image Analysis.

    Science.gov (United States)

    Attiyeh, Marc A; Chakraborty, Jayasree; Doussot, Alexandre; Langdon-Embry, Liana; Mainarich, Shiana; Gönen, Mithat; Balachandran, Vinod P; D'Angelica, Michael I; DeMatteo, Ronald P; Jarnagin, William R; Kingham, T Peter; Allen, Peter J; Simpson, Amber L; Do, Richard K

    2018-04-01

    Pancreatic cancer is a highly lethal cancer with no established a priori markers of survival. Existing nomograms rely mainly on post-resection data and are of limited utility in directing surgical management. This study investigated the use of quantitative computed tomography (CT) features to preoperatively assess survival for pancreatic ductal adenocarcinoma (PDAC) patients. A prospectively maintained database identified consecutive chemotherapy-naive patients with CT angiography and resected PDAC between 2009 and 2012. Variation in CT enhancement patterns was extracted from the tumor region using texture analysis, a quantitative image analysis tool previously described in the literature. Two continuous survival models were constructed, with 70% of the data (training set) using Cox regression, first based only on preoperative serum cancer antigen (CA) 19-9 levels and image features (model A), and then on CA19-9, image features, and the Brennan score (composite pathology score; model B). The remaining 30% of the data (test set) were reserved for independent validation. A total of 161 patients were included in the analysis. Training and test sets contained 113 and 48 patients, respectively. Quantitative image features combined with CA19-9 achieved a c-index of 0.69 [integrated Brier score (IBS) 0.224] on the test data, while combining CA19-9, imaging, and the Brennan score achieved a c-index of 0.74 (IBS 0.200) on the test data. We present two continuous survival prediction models for resected PDAC patients. Quantitative analysis of CT texture features is associated with overall survival. Further work includes applying the model to an external dataset to increase the sample size for training and to determine its applicability.
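
    A hedged sketch of the modeling step: fitting a Cox proportional hazards model to CA19-9 plus a texture feature, here with the `lifelines` library and synthetic data (feature names, effect sizes, and the absence of censoring are all invented for illustration):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 150
df = pd.DataFrame({
    "ca19_9": rng.lognormal(3.0, 1.0, n),        # serum marker (toy values)
    "texture_entropy": rng.normal(0.0, 1.0, n),  # one CT texture feature
})
# Synthetic survival times whose hazard rises with the texture feature.
df["months"] = rng.exponential(24.0 * np.exp(-0.7 * df["texture_entropy"]))
df["event"] = 1                                  # no censoring in this toy

cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
print(round(cph.concordance_index_, 2))          # analogous to the c-index
```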

  10. Statistical-techniques-based computer-aided diagnosis (CAD) using texture feature analysis: application in computed tomography (CT) imaging to fatty liver disease

    Science.gov (United States)

    Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae

    2012-09-01

    This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 × 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver (p < 0.05). Furthermore, the F-value, which was used as a scale for the difference in recognition rates, was highest in the average gray level, relatively high in the skewness and the entropy, and relatively low in the uniformity, the relative smoothness, and the average contrast. The recognition rate for a fatty liver had the same scale as that for the F-value, showing 100% (average gray level) at the maximum and 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for automatic detection and computer-aided diagnosis (CAD) using the texture feature value. Nevertheless, further study on various diseases and singular diseases will be needed in the future.
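
    Assuming the six parameters follow the standard histogram-based (first-order) texture definitions, they can be computed as below; the paper's ROA extraction and wavelet step are not reproduced:

```python
import numpy as np

def texture_descriptors(roa, n_bins=256):
    """Six first-order texture descriptors of a region of analysis (ROA),
    using the classic histogram-based definitions (assumed, not verified
    against the paper's exact formulas)."""
    hist, _ = np.histogram(roa.ravel(), bins=n_bins, range=(0, n_bins))
    p = hist / hist.sum()
    z = np.arange(n_bins, dtype=float)
    mean = (z * p).sum()                    # average gray level
    var = ((z - mean) ** 2 * p).sum()
    contrast = np.sqrt(var)                 # average contrast (std. dev.)
    smoothness = 1.0 - 1.0 / (1.0 + var)    # relative smoothness
    skewness = ((z - mean) ** 3 * p).sum()  # third moment
    uniformity = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return mean, contrast, smoothness, skewness, uniformity, entropy

roa = np.random.default_rng(0).integers(0, 256, (50, 50))  # toy 50x50 ROA
print(texture_descriptors(roa))
```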

  11. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT) and nuclear medicine (NM) studies. This fluoroscopic examination takes only about 2 seconds for a perfusion study, with only a low radiation dose to the patient, and involves no preparation, no radioactive isotopes, and no contrast media.

  12. Multi-scale analysis of lung computed tomography images

    CERN Document Server

    Gori, I; Fantacci, M E; Preite Martinez, A; Retico, A; De Mitri, I; Donadio, S; Fulcheri, C

    2007-01-01

    A computer-aided detection (CAD) system for the identification of lung internal nodules in low-dose multi-detector helical Computed Tomography (CT) images was developed in the framework of the MAGIC-5 project. The three modules of our lung CAD system, a segmentation algorithm for lung internal region identification, a multi-scale dot-enhancement filter for nodule candidate selection and a multi-scale neural technique for false positive finding reduction, are described. The results obtained on a dataset of low-dose and thin-slice CT scans are shown in terms of free response receiver operating characteristic (FROC) curves and discussed.

  13. Multifractal Analysis of Seismically Induced Soft-Sediment Deformation Structures Imaged by X-Ray Computed Tomography

    Science.gov (United States)

    Nakashima, Yoshito; Komatsubara, Junko

    Unconsolidated soft sediments deform and mix complexly by seismically induced fluidization. Such geological soft-sediment deformation structures (SSDSs) recorded in boring cores were imaged by X-ray computed tomography (CT), which enables visualization of the inhomogeneous spatial distribution of iron-bearing mineral grains as strong X-ray absorbers in the deformed strata. Multifractal analysis was applied to the two-dimensional (2D) CT images with various degrees of deformation and mixing. The results show that the distribution of the iron-bearing mineral grains is multifractal for less deformed/mixed strata and almost monofractal for fully mixed (i.e. almost homogenized) strata. Computer simulations of deformation of real and synthetic digital images were performed using the egg-beater flow model. The simulations successfully reproduced the transformation from the multifractal spectra into almost monofractal spectra (i.e. almost convergence on a single point) with an increase in deformation/mixing intensity. The present study demonstrates that multifractal analysis coupled with X-ray CT and the mixing flow model is useful to quantify the complexity of seismically induced SSDSs, standing as a novel method for the evaluation of cores for seismic risk assessment.
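
    A minimal moment-based box-counting sketch of the idea: estimate generalized dimensions D_q of a 2D density (here random stand-in data; in the study, grain distributions from CT slices). A multifractal measure shows D_q varying with q, while a monofractal (homogenized) one does not:

```python
import numpy as np

def generalized_dimensions(measure, qs=(0.0, 1.0, 2.0),
                           box_sizes=(2, 4, 8, 16)):
    """Estimate generalized dimensions D_q of a nonnegative 2D measure
    by moment-based box counting (a minimal sketch, not the paper's code)."""
    measure = measure / measure.sum()
    dims = {}
    for q in qs:
        xs, ys = [], []
        for s in box_sizes:
            h, w = (measure.shape[0] // s) * s, (measure.shape[1] // s) * s
            m = measure[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
            m = m[m > 0]                               # occupied boxes only
            if abs(q - 1.0) < 1e-9:
                y = (m * np.log(m)).sum()              # information dimension
            else:
                y = np.log((m ** q).sum()) / (q - 1.0)
            xs.append(np.log(s))
            ys.append(y)
        dims[q] = np.polyfit(xs, ys, 1)[0]             # slope = D_q
    return dims

measure = np.random.default_rng(0).random((128, 128))
print(generalized_dimensions(measure))  # all near 2.0: homogeneous measure
```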

  14. Recent advances in computational methods and clinical applications for spine imaging

    CERN Document Server

    Glocker, Ben; Klinder, Tobias; Li, Shuo

    2015-01-01

    This book contains the full papers presented at the MICCAI 2014 workshop on Computational Methods and Clinical Applications for Spine Imaging. The workshop brought together scientists and clinicians in the field of computational spine imaging. The chapters included in this book present and discuss the new advances and challenges in these fields, using several methods and techniques in order to address more efficiently different and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image based modeling, simulation and surgical planning, image guided robot assisted surgery, and image based diagnosis. The book also includes papers and reports from the first challenge on vertebra segmentation held at the workshop.

  15. A Stochastic Approach for Blurred Image Restoration and Optical Flow Computation on Field Image Sequence

    Institute of Scientific and Technical Information of China (English)

    Gao Wen; Chen Xilin

    1997-01-01

    The blur in target images caused by camera vibration due to robot motion or hand shaking, and by object(s) moving in the background scene, is difficult to deal with in computer vision systems. In this paper, the authors study the relation model between motion and blur in the case of object motion existing in a video image sequence, and work out a practical computation algorithm for both motion analysis and blurred image restoration. Combining general optical flow and stochastic processes, the paper presents an approach by which the motion velocity can be calculated from blurred images. On the other hand, the blurred image can also be restored using the obtained motion information. To overcome the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. For restoring the blurred image, an iteration algorithm and the obtained motion velocity are used. The experiments show that the proposed approach works well for both motion velocity computation and blurred image restoration.
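
    The authors' MAP-based multiresolution algorithm is not reproduced here; the sketch below only illustrates the multiresolution (pyramidal) idea it builds on, using OpenCV's dense Farneback flow, where `levels > 1` relaxes the small-motion limitation of single-scale optical flow:

```python
import numpy as np
import cv2

# Stand-in frames; in practice, consecutive frames of the video sequence.
prev_frame = np.zeros((64, 64), np.uint8)
next_frame = np.zeros((64, 64), np.uint8)
cv2.rectangle(prev_frame, (10, 10), (20, 20), 255, -1)
cv2.rectangle(next_frame, (14, 10), (24, 20), 255, -1)  # moved 4 px right

# Pyramidal dense optical flow (Farneback), coarse-to-fine over 3 levels.
flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
print(flow[15, 15])  # approx (4, 0): estimated horizontal motion
```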

  16. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...... boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so...

  17. High performance computing environment for multidimensional image analysis.

    Science.gov (United States)

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction, and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large-scale experiments with massive datasets.
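
    The paper maps image segments to Blue Gene/L's torus with nearest-neighbour exchange; as a minimal shared-memory analogue (`multiprocessing` instead of MPI, slab decomposition with halos, sizes assumed for illustration), the same pattern looks like this:

```python
import numpy as np
from multiprocessing import Pool
from scipy.ndimage import median_filter

RADIUS = 1  # 3x3x3 median filter

def filter_slab(args):
    """Median-filter one z-slab; the halo rows carry neighbour data."""
    slab, lo_halo, hi_halo = args
    out = median_filter(slab, size=2 * RADIUS + 1)
    return out[lo_halo:slab.shape[0] - hi_halo]  # drop halo rows

def parallel_median(volume, n_workers=4):
    bounds = np.linspace(0, volume.shape[0], n_workers + 1).astype(int)
    jobs = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        a, b = max(lo - RADIUS, 0), min(hi + RADIUS, volume.shape[0])
        jobs.append((volume[a:b], lo - a, b - hi))
    with Pool(n_workers) as pool:
        return np.concatenate(pool.map(filter_slab, jobs), axis=0)

if __name__ == "__main__":
    vol = np.random.default_rng(0).random((32, 64, 64))
    # Matches the single-process result exactly.
    assert np.allclose(parallel_median(vol), median_filter(vol, size=3))
```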

  18. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussions on research involving the use of data and digital images as an understanding approach for analysis and visualization of phenomena and experiments. The emphasis is put not only on graphically representing data as a way of increasing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented in the International Conference on Advanced Computational Engineering and Experimenting -ACE-X conferences 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  19. Imaging and computational considerations for image computed permeability: Operating envelope of Digital Rock Physics

    Science.gov (United States)

    Saxena, Nishank; Hows, Amie; Hofmann, Ronny; Alpak, Faruk O.; Freeman, Justin; Hunter, Sander; Appel, Matthias

    2018-06-01

    This study defines the optimal operating envelope of the Digital Rock technology from the perspective of imaging and numerical simulations of transport properties. Imaging larger volumes of rocks for Digital Rock Physics (DRP) analysis improves the chances of achieving a Representative Elementary Volume (REV) at which flow-based simulations (1) do not vary with change in rock volume, and (2) is insensitive to the choice of boundary conditions. However, this often comes at the expense of image resolution. This trade-off exists due to the finiteness of current state-of-the-art imaging detectors. Imaging and analyzing digital rocks that sample the REV and still sufficiently resolve pore throats is critical to ensure simulation quality and robustness of rock property trends for further analysis. We find that at least 10 voxels are needed to sufficiently resolve pore throats for single phase fluid flow simulations. If this condition is not met, additional analyses and corrections may allow for meaningful comparisons between simulation results and laboratory measurements of permeability, but some cases may fall outside the current technical feasibility of DRP. On the other hand, we find that the ratio of field of view and effective grain size provides a reliable measure of the REV for siliciclastic rocks. If this ratio is greater than 5, the coefficient of variation for single-phase permeability simulations drops below 15%. These imaging considerations are crucial when comparing digitally computed rock flow properties with those measured in the laboratory. We find that the current imaging methods are sufficient to achieve both REV (with respect to numerical boundary conditions) and required image resolution to perform digital core analysis for coarse to fine-grained sandstones.
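
    The two rules of thumb reported here translate directly into a pre-imaging feasibility check; the function below encodes them (names and example numbers are illustrative, not from the study's data):

```python
def drp_operating_envelope(voxel_size_um, pore_throat_um,
                           field_of_view_um, grain_size_um):
    """Check the study's two rules of thumb: >= 10 voxels across a pore
    throat, and field of view / effective grain size > 5 (toward REV)."""
    throat_voxels = pore_throat_um / voxel_size_um
    rev_ratio = field_of_view_um / grain_size_um
    return {
        "throat_resolved": throat_voxels >= 10,
        "rev_reached": rev_ratio > 5,
        "throat_voxels": throat_voxels,
        "fov_to_grain": rev_ratio,
    }

print(drp_operating_envelope(2.0, 15.0, 2000.0, 250.0))
# throat_voxels = 7.5 -> under-resolved; fov/grain = 8 -> REV reached
```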

  20. Front-end vision and multi-scale image analysis multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross fertilization between human visual perception and multi-scale computer vision (`scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective

  1. A comparative study of three-dimensional reconstructive images of temporomandibular joint using computed tomogram

    International Nuclear Information System (INIS)

    Lim, Suk Young; Koh, Kwang Joon

    1993-01-01

    The purpose of this study was to clarify the spatial relationships of the temporomandibular joint (TMJ) and to aid in the diagnosis of temporomandibular disorders. For this study, three-dimensional images of normal temporomandibular joints were reconstructed by a computer image analysis system and by the three-dimensional reconstruction program integrated into the computed tomography system. The obtained results were as follows: 1. Two-dimensional computed tomograms had better resolution than three-dimensional computed tomograms in the evaluation of the bone structure and the disk of the TMJ. 2. Direct sagittal computed tomograms and coronal computed tomograms had better resolution in the evaluation of the disk of the TMJ. 3. The positional relationship of the disk could be visualized, but the configuration of the disk could not be clearly visualized, on three-dimensional reconstructive CT images. 4. Three-dimensional reconstructive CT images had smoother margins than three-dimensional images reconstructed by the computer image analysis system, but the images of the latter had better perspective. 5. Three-dimensional reconstructive images showed the spatial relationships of the TMJ articulation better, and the joint space was more clearly visualized on dissection images.

  2. [Computer-aided Prognosis for Breast Cancer Based on Hematoxylin & Eosin Histopathology Image].

    Science.gov (United States)

    Chen, Jiamei; Qu, Aiping; Liu, Wenlou; Wang, Linwei; Yuan, Jingping; Liu, Juan; Li, Yan

    2016-06-01

    Quantitatively analyzing hematoxylin & eosin (H&E) histopathology images is an emerging field that has attracted increasing attention in recent years. This paper reviews the application of computer-aided image analysis in breast cancer prognosis. The traditional prognosis for breast cancer based on H&E histopathology images is first sketched, followed by a detailed description of the workflow of computer-aided prognosis, including image acquisition, image preprocessing, regions-of-interest detection and object segmentation, feature extraction, and computer-aided prognosis. In the end, major technical challenges and future directions in this field are summarized.

  3. Teaching image analysis at DIKU

    DEFF Research Database (Denmark)

    Johansen, Peter

    2010-01-01

    The early development of computer vision at the Department of Computer Science at the University of Copenhagen (DIKU) is briefly described. The different disciplines in computer vision are introduced, and the principles for teaching two courses, an image analysis course and a robot lab class, are outlined....

  4. Dental computed tomographic imaging as age estimation: morphological analysis of the third molar of a group of Turkish population.

    Science.gov (United States)

    Cantekin, Kenan; Sekerci, Ahmet Ercan; Buyuk, Suleyman Kutalmis

    2013-12-01

    Computed tomography (CT) is capable of providing accurate and measurable 3-dimensional images of the third molar. The aims of this study were to analyze the development of the mandibular third molar and its relation to chronological age, and to create new reference data for a group of Turkish participants aged 9 to 25 years on the basis of cone-beam CT images. All data were obtained from the patients' records, including medical, social, and dental anamnesis, and cone-beam CT images of 752 patients. Linear regression analysis was performed to obtain regression formulas for calculating dental age from chronological age and to determine the coefficient of determination (r2) for each sex. Statistical analysis showed a strong correlation between age and third-molar development for the males (r2 = 0.80) and the females (r2 = 0.78). Computed tomographic images are clinically useful for accurate and reliable estimation of the dental ages of children and youths.
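
    The statistical step is ordinary linear regression of chronological age on third-molar developmental stage, fitted per sex. A sketch with invented stage/age pairs (the paper's formulas and data are not reproduced):

```python
import numpy as np

# Hypothetical development stages (e.g., Demirjian-style, 1-8) and ages;
# purely illustrative numbers, not the study's data.
stages = np.array([3, 4, 4, 5, 6, 6, 7, 8, 8])
ages = np.array([12.1, 13.0, 13.6, 15.2, 16.8, 17.1, 19.0, 21.5, 22.3])

slope, intercept = np.polyfit(stages, ages, 1)      # regression formula
r2 = np.corrcoef(stages, ages)[0, 1] ** 2           # coeff. of determination
print(f"age ~ {slope:.2f} * stage + {intercept:.2f}, r^2 = {r2:.2f}")
```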

  5. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    Directory of Open Access Journals (Sweden)

    Hua KL

    2015-08-01

    Full Text Available Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scans is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and tuning of performance in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. Keywords: nodule classification, deep learning, deep belief network, convolutional neural network
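
    As a generic illustration of the deep learning side (not the authors' deep belief network or their trained CNN), a minimal PyTorch classifier for 32×32 nodule patches might look like this:

```python
import torch
import torch.nn as nn

class NoduleCNN(nn.Module):
    """Minimal 2D CNN for two-class nodule patch classification
    (illustrative architecture, not the paper's model)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)

    def forward(self, x):                   # x: (batch, 1, 32, 32) CT patches
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = NoduleCNN()
logits = model(torch.randn(4, 1, 32, 32))  # four random stand-in patches
print(logits.shape)                         # torch.Size([4, 2])
```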

  6. Image Post-Processing and Analysis. Chapter 17

    Energy Technology Data Exchange (ETDEWEB)

    Yushkevich, P. A. [University of Pennsylvania, Philadelphia (United States)

    2014-09-15

    For decades, scientists have used computers to enhance and analyse medical images. At first, they developed simple computer algorithms to enhance the appearance of interesting features in images, helping humans read and interpret them better. Later, they created more advanced algorithms, where the computer would not only enhance images but also participate in facilitating understanding of their content. Segmentation algorithms were developed to detect and extract specific anatomical objects in images, such as malignant lesions in mammograms. Registration algorithms were developed to align images of different modalities and to find corresponding anatomical locations in images from different subjects. These algorithms have made computer aided detection and diagnosis, computer guided surgery and other highly complex medical technologies possible. Nowadays, the field of image processing and analysis is a complex branch of science that lies at the intersection of applied mathematics, computer science, physics, statistics and biomedical sciences. This chapter will give a general overview of the most common problems in this field and the algorithms that address them.

  7. Statistical-techniques-based computer-aided diagnosis (CAD) using texture feature analysis: application in computed tomography (CT) imaging to fatty liver disease

    International Nuclear Information System (INIS)

    Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae

    2012-01-01

    This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 x 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver (p < 0.05). Furthermore, the F-value, which was used as a scale for the difference in recognition rates, was highest for the average gray level, relatively high for the skewness and the entropy, and relatively low for the uniformity, the relative smoothness and the average contrast. The recognition rate for a fatty liver followed the same ordering as the F-value, ranging from 100% (average gray level) at the maximum to 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for automatic detection and computer-aided diagnosis (CAD) based on texture feature values. Nevertheless, further study on various diseases and singular diseases will be needed in the future.
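
    The six first-order parameters listed above have standard histogram-based definitions (following Gonzalez and Woods); the paper's exact formulas may differ in normalization. A sketch for a single 50 x 50 ROA:

    ```python
    import numpy as np

    def first_order_texture(patch):
        """The six histogram-based descriptors named in the abstract."""
        hist, _ = np.histogram(patch, bins=256, range=(0, 256), density=True)
        levels = np.arange(256)
        mean = np.sum(levels * hist)                     # average gray level
        var = np.sum((levels - mean) ** 2 * hist)
        contrast = np.sqrt(var)                          # average contrast
        smoothness = 1 - 1 / (1 + var / 255.0 ** 2)      # relative smoothness
        skew = np.sum((levels - mean) ** 3 * hist) / (var ** 1.5 + 1e-12)
        uniformity = np.sum(hist ** 2)
        p = hist[hist > 0]
        entropy = -np.sum(p * np.log2(p))
        return mean, contrast, smoothness, skew, uniformity, entropy

    roa = np.random.default_rng(0).integers(0, 256, (50, 50))  # stand-in ROA
    print(first_order_texture(roa))
    ```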

  8. Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography.

    Science.gov (United States)

    Sidky, Emil Y; Kraemer, David N; Roth, Erin G; Ullberg, Christer; Reiser, Ingrid S; Pan, Xiaochuan

    2014-10-03

    One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden on IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data.
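
    The essential structure of such an IIR scheme is a gradient iteration on a data-fidelity term. The toy sketch below uses a plain least-squares fidelity ||Ax - g||^2 on a random system matrix; the paper's derivative-matched fidelity term and fan-beam geometry are more elaborate:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy linear imaging model g = A x: 80 ray measurements, 50 unknowns.
    m, n = 80, 50
    A = rng.standard_normal((m, n))
    x_true = rng.random(n)
    g = A @ x_true  # ideal, noiseless data

    # Gradient descent on the least-squares fidelity ||A x - g||^2.
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(500):
        x -= step * (A.T @ (A @ x - g))

    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small residual
    ```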

  9. KAMEDIN: a telemedicine system for computer supported cooperative work and remote image analysis in radiology.

    Science.gov (United States)

    Handels, H; Busch, C; Encarnação, J; Hahn, C; Kühn, V; Miehe, J; Pöppl, S I; Rinast, E; Rossmanith, C; Seibert, F; Will, A

    1997-03-01

    The software system KAMEDIN (Kooperatives Arbeiten und MEdizinische Diagnostik auf Innovativen Netzen) is a multimedia telemedicine system for exchange, cooperative diagnostics, and remote analysis of digital medical image data. It provides components for visualisation, processing, and synchronised audio-visual discussion of medical images. Techniques of computer supported cooperative work (CSCW) synchronise user interactions during a teleconference. Visibility of both local and remote cursors on the conference workstations facilitates telepointing and reinforces the conference partner's telepresence. Audio communication during teleconferences is supported by an integrated audio component. Furthermore, brain tissue segmentation with artificial neural networks can be performed on an external supercomputer as a remote image analysis procedure. KAMEDIN is designed as a low-cost CSCW tool for ISDN-based telecommunication, but it can be used on any TCP/IP-supporting network. In a field test, KAMEDIN was installed in 15 clinics and medical departments to validate the system's usability. The telemedicine system KAMEDIN has been developed, tested, and evaluated within a research project sponsored by German Telekom.

  10. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Although many electron microscopes produce digital images, not all of them are equipped with a supporting unit to process and analyse the image data quantitatively. Generally, the analysis of an image has to be made visually and measurements are performed manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis with a computer. Image processing programs can be used to analyse periodic texture and structure through the application of the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties such as mechanical strength, stress, heat conductivity, resistance, capacitance, and other electric and magnetic properties. This paper presents the application of digital image processing to the characterization and analysis of microscopic images.
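
    The Fourier-based periodicity measurement described here can be sketched in a few lines: the location of the dominant peak in the 2-D spectrum gives the period and orientation of the texture. A synthetic example:

    ```python
    import numpy as np

    # Synthetic micrograph with a periodic texture (period 16 px, tilted).
    y, x = np.mgrid[0:256, 0:256]
    img = np.sin(2 * np.pi * (x * np.cos(0.3) + y * np.sin(0.3)) / 16.0)

    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    spec[128, 128] = 0.0                      # suppress the DC term
    py, px = np.unravel_index(np.argmax(spec), spec.shape)
    fy, fx = (py - 128) / 256.0, (px - 128) / 256.0
    print(f"period = {1.0 / np.hypot(fx, fy):.1f} px, "
          f"orientation = {np.degrees(np.arctan2(fy, fx)):.1f} deg")
    ```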

  11. Image Analysis Based on Soft Computing and Applied on Space Shuttle During the Liftoff Process

    Science.gov (United States)

    Dominquez, Jesus A.; Klinko, Steve J.

    2007-01-01

    Imaging techniques based on Soft Computing (SC) and developed at Kennedy Space Center (KSC) have been implemented on a variety of prototype applications related to the safe operation of the Space Shuttle during the liftoff process. These SC-based prototype applications include detection and tracking of moving Foreign Object Debris (FOD) during the Space Shuttle liftoff, visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad, and visual detection of distant birds approaching the Space Shuttle launch pad. This SC-based image analysis capability developed at KSC was also used to analyze images acquired during the accident of the Space Shuttle Columbia and to estimate the trajectory and velocity of the foam that caused the accident.

  12. SU-F-R-18: Updates to the Computational Environment for Radiological Research for Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Apte, Aditya P.; Deasy, Joseph O. [Memorial Sloan Kettering Cancer Center, New York, NY (United States)

    2016-06-15

    Purpose: To present new tools in CERR for texture analysis and visualization. Method: (1) Quantitative image analysis: We added the ability to compute Haralick texture features based on a local neighborhood. The texture features depend on many parameters used in their derivation, for example: (a) directionality, (b) quantization of the image, (c) patch size for the neighborhood, (d) handling of the edge voxels within the region of interest, (e) averaging the co-occurrence matrix versus averaging the texture features over different directions. A graphical user interface was built to set these parameters and then visualize their impact on the resulting texture maps. The entire functionality was written in Matlab. Array indexing was used to speed up the texture calculation. The computation speed is very competitive with the ITK library. Moreover, our implementation works with multiple CPUs and the computation time can be further reduced by using multiple processor threads. In order to reduce the Haralick texture maps into scalar features, we propose the use of texture volume histograms. This lets users make use of the entire distribution of texture values within the region of interest rather than using just the mean and the standard deviation. (2) Qualitative/visualization tools: The derived texture maps are stored as a new (derived) scan within CERR's planC data structure. A display that compares various scans was built to show the raw image and the derived texture maps side-by-side. These images are positionally linked and can be navigated together. CERR's graphics handling was updated and sped up to be compatible with the newer Matlab versions. As a result, users can (a) use different window levels and colormaps for different viewports, and (b) click-and-drag or use the mouse scroll-wheel to navigate slices. Results: The new features and updates are available via https://www.github.com/adityaapte/cerr . Conclusion: Features added to CERR increase its utility in radiomics and outcomes research.
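
    Outside Matlab, the same co-occurrence computation is available in scikit-image (version 0.19 or later uses the graycomatrix/graycoprops spelling). A sketch illustrating the direction-averaging choice mentioned in (e), with an illustrative 32-level quantization:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(0)
    patch = rng.integers(0, 32, (64, 64), dtype=np.uint8)  # 32-level image

    # Co-occurrence matrices at distance 1 for four directions.
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=32, symmetric=True, normed=True)

    # Option (e): average the per-direction features (alternatively, average
    # the co-occurrence matrices before deriving features).
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        print(prop, graycoprops(glcm, prop).mean())
    ```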

  13. Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography

    Science.gov (United States)

    Sidky, Emil Y.; Kraemer, David N.; Roth, Erin G.; Ullberg, Christer; Reiser, Ingrid S.; Pan, Xiaochuan

    2014-01-01

    One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden on IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data. PMID:25685824

  14. Architectures for single-chip image computing

    Science.gov (United States)

    Gove, Robert J.

    1992-04-01

    This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new-generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that the use of a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.

  15. An IBM PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  16. Computational Ghost Imaging for Remote Sensing

    Science.gov (United States)

    Erkmen, Baris I.

    2012-01-01

    This work relates to the generic problem of remote active imaging; that is, a source illuminates a target of interest and a receiver collects the scattered light off the target to obtain an image. Conventional imaging systems consist of an imaging lens and a high-resolution detector array [e.g., a CCD (charge coupled device) array] to register the image. However, conventional imaging systems for remote sensing require high-quality optics and need to support large detector arrays and associated electronics. This results in suboptimal size, weight, and power consumption. Computational ghost imaging (CGI) is a computational alternative to this traditional imaging concept that has a very simple receiver structure. In CGI, the transmitter illuminates the target with a modulated light source. A single-pixel (bucket) detector collects the scattered light. Then, via computation (i.e., postprocessing), the receiver can reconstruct the image using the knowledge of the modulation that was projected onto the target by the transmitter. This way, one can construct a very simple receiver that, in principle, requires no lens to image a target. Ghost imaging is a transverse imaging modality that has been receiving much attention owing to a rich interconnection of novel physical characteristics and novel signal processing algorithms suitable for active computational imaging. The original ghost imaging experiments consisted of two correlated optical beams traversing distinct paths and impinging on two spatially-separated photodetectors: one beam interacts with the target and then illuminates on a single-pixel (bucket) detector that provides no spatial resolution, whereas the other beam traverses an independent path and impinges on a high-resolution camera without any interaction with the target. The term ghost imaging was coined soon after the initial experiments were reported, to emphasize the fact that by cross-correlating two photocurrents, one generates an image of the target.
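
    The cross-correlation reconstruction at the heart of CGI is compact enough to sketch directly: correlate the single-pixel (bucket) signal with the known modulation patterns. A simulation with random patterns and a synthetic target:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Ground-truth transmission: a bright square on a dark background.
    target = np.zeros((32, 32))
    target[10:22, 10:22] = 1.0

    K = 20000                               # number of projected patterns
    corr = np.zeros_like(target)
    bucket_sum, pattern_sum = 0.0, np.zeros_like(target)

    for _ in range(K):
        pattern = rng.random(target.shape)  # known structured illumination
        bucket = np.sum(pattern * target)   # single-pixel measurement
        corr += bucket * pattern
        bucket_sum += bucket
        pattern_sum += pattern

    # <B*I(x,y)> - <B><I(x,y)>: the cross-correlation recovers the target.
    ghost = corr / K - (bucket_sum / K) * (pattern_sum / K)
    print(ghost[16, 16] > ghost[0, 0])      # True: interior brighter than edge
    ```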

  17. Computational information geometry for image and signal processing

    CERN Document Server

    Critchley, Frank; Dodson, Christopher

    2017-01-01

    This book focuses on the application and development of information geometric methods in the analysis, classification and retrieval of images and signals. It provides introductory chapters to help those new to information geometry and applies the theory to several applications. This area has developed rapidly over recent years, propelled by the major theoretical developments in information geometry, efficient data and image acquisition and the desire to process and interpret large databases of digital information. The book addresses both the transfer of methodology to practitioners involved in database analysis and in its efficient computational implementation.

  18. Detection of Organophosphorus Pesticides with Colorimetry and Computer Image Analysis.

    Science.gov (United States)

    Li, Yanjie; Hou, Changjun; Lei, Jincan; Deng, Bo; Huang, Jing; Yang, Mei

    2016-01-01

    Organophosphorus pesticides (OPs) represent a very important class of pesticides that are widely used in agriculture because of their relatively high performance and moderate environmental persistence; hence, the sensitive and specific detection of OPs is highly significant. Based on the inhibitory effect of OPs and carbamates on acetylcholinesterase (AChE), a colorimetric analysis was used for detection of OPs, with computer image analysis of color density in the CMYK (cyan, magenta, yellow and black) color space and non-linear modeling. The results showed that yellow intensity weakened gradually as the concentration of dichlorvos increased. The quantitative analysis of dichlorvos was achieved by Artificial Neural Network (ANN) modeling, and the results showed that the established model had good predictive ability between training sets and predictive sets. Real cabbage samples containing dichlorvos were detected by colorimetry and gas chromatography (GC), respectively. The results showed that there was no significant difference between colorimetry and GC (P > 0.05). Experiments on accuracy, precision and repeatability revealed good performance for detection of OPs. AChE can also be inhibited by carbamates, and therefore this method has potential applications to real samples containing OPs and carbamates because of its high selectivity and sensitivity.
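
    The CMYK color-density step can be sketched as a plain RGB-to-CMYK conversion followed by averaging the yellow component; the conversion below is the standard formula, and the test patch values are illustrative:

    ```python
    import numpy as np

    def yellow_density(rgb):
        """Mean Y component of an RGB image converted to CMYK (0-1 scale)."""
        rgb = rgb.astype(float) / 255.0
        k = 1.0 - rgb.max(axis=-1)
        denom = np.where(1.0 - k > 1e-6, 1.0 - k, 1.0)  # guard pure black
        y = (1.0 - rgb[..., 2] - k) / denom
        return float(y.mean())

    # A strongly yellow patch (high R and G, low B) scores close to 1.
    patch = np.full((10, 10, 3), (230, 220, 40), dtype=np.uint8)
    print(round(yellow_density(patch), 2))
    ```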

  19. Affective Computing used in an imaging interaction paradigm

    DEFF Research Database (Denmark)

    Schultz, Nette

    2003-01-01

    This paper combines affective computing with an imaging interaction paradigm. An imaging interaction paradigm means that human and computer communicate primarily by images. Images evoke emotions in humans, so the computer must be able to behave in an emotionally intelligent way. An affective image selection...

  20. Quantitative image analysis of vertebral body architecture - improved diagnosis in osteoporosis based on high-resolution computed tomography

    International Nuclear Information System (INIS)

    Mundinger, A.; Wiesmeier, B.; Dinkel, E.; Helwig, A.; Beck, A.; Schulte Moenting, J.

    1993-01-01

    71 women, 64 of them post-menopausal, were examined by single-energy quantitative computed tomography (SEQCT) and by high-resolution computed tomography (HRCT) scans through the middle of lumbar vertebral bodies. Computer-assisted image analysis of the high-resolution images assessed trabecular morphometry of the vertebral spongiosa texture. Texture parameters differed in women with and without age-reduced bone density and, in the former group, also in patients with and without vertebral fractures. Discriminating parameters were the total number, diameter and variance of trabecular and intertrabecular spaces as well as the trabecular surface (p < 0.05). A texture index based on these statistically selected morphometric parameters identified a subgroup of patients suffering from fractures due to abnormal spongiosal architecture but with a bone mineral content not indicative of increased fracture risk. The combination of osteodensitometry and trabecular morphometry improves the diagnosis of osteoporosis and may contribute to the prediction of individual fracture risk. (author)

  1. VELOCITY FIELD COMPUTATION IN VIBRATED GRANULAR MEDIA USING AN OPTICAL FLOW BASED MULTISCALE IMAGE ANALYSIS METHOD

    Directory of Open Access Journals (Sweden)

    Johan Debayle

    2011-05-01

    Full Text Available An image analysis method has been developed in order to compute the velocity field of a granular medium (sand grains, mean diameter 600 μm) submitted to different kinds of mechanical stresses. The differential method based on optical flow conservation consists in describing a dense motion field with vectors associated to each pixel. A multiscale, coarse-to-fine, analytical approach through tailor-sized windows yields the best compromise between accuracy and robustness of the results, while enabling an acceptable computation time. The corresponding algorithm is presented and its validation discussed through different tests. The results of the validation tests of the proposed approach show that the method is satisfactory when attributing specific values to parameters in association with the size of the image analysis window. An application in the case of vibrated sand has been studied. An instrumented laboratory device provides sinusoidal vibrations and enables external optical observations of sand motion in 3D transparent boxes. At 50 Hz, by increasing the relative acceleration G, the onset and development of two convective rolls can be observed. An ultra-fast camera records the grain avalanches, and several pairs of images are analysed by the proposed method. The vertical velocity profiles are deduced and allow to precisely quantify the dimensions of the fluidized region as a function of G.
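
    As a point of comparison, a dense multiscale optical flow of the kind described can be computed with OpenCV's Farneback implementation (not the authors' algorithm); the window and pyramid parameters below are illustrative:

    ```python
    import cv2
    import numpy as np

    # Two synthetic frames: a bright grain cluster moving 3 px downward.
    prev = np.zeros((128, 128), np.uint8)
    curr = np.zeros((128, 128), np.uint8)
    cv2.circle(prev, (64, 40), 10, 255, -1)
    cv2.circle(curr, (64, 43), 10, 255, -1)

    flow = cv2.calcOpticalFlowFarneback(prev, curr, None, pyr_scale=0.5,
                                        levels=4, winsize=21, iterations=3,
                                        poly_n=5, poly_sigma=1.1, flags=0)

    # Mean vertical velocity (px/frame) inside the moving cluster:
    # positive and close to the imposed 3 px shift.
    print(round(float(flow[32:52, 54:74, 1].mean()), 1))
    ```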

  2. Automatic system for quantification and visualization of lung aeration on chest computed tomography images: the Lung Image System Analysis - LISA

    International Nuclear Information System (INIS)

    Felix, John Hebert da Silva; Cortez, Paulo Cesar; Holanda, Marcelo Alcantara

    2010-01-01

    High Resolution Computed Tomography (HRCT) is the exam of choice for the diagnostic evaluation of lung parenchyma diseases. There is an increasing interest in computational systems able to automatically analyze the radiological densities of the lungs in CT images. The main objective of this study is to present a system for the automatic quantification and visualization of the lung aeration in HRCT images of different degrees of aeration, called Lung Image System Analysis (LISA). The secondary objective is to compare LISA to the Osiris system and to a specific lung segmentation algorithm (ALS) with respect to the accuracy of lung segmentation. The LISA system automatically extracts the following image attributes: lung perimeter, cross-sectional area, volume, the radiological density histograms, the mean lung density (MLD) in Hounsfield units (HU), the relative area of the lungs with voxels with density values lower than -950 HU (RA950) and the 15th percentile of the least dense voxels (PERC15). Furthermore, LISA has a colored mask algorithm that applies pseudo-colors to the lung parenchyma according to the pre-defined radiological density chosen by the system user. The lung segmentations of 102 images of 8 healthy volunteers and 141 images of 11 patients with Chronic Obstructive Pulmonary Disease (COPD) were compared for accuracy and concordance among the three methods. LISA was more effective at lung segmentation than the other two methods. LISA's color mask tool improves the spatial visualization of the degrees of lung aeration, and the various attributes of the image that can be extracted may help physicians and researchers to better assess lung aeration both quantitatively and qualitatively. LISA may have important clinical and research applications in the assessment of global and regional lung aeration and therefore deserves further development and validation studies. (author)
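
    The three density attributes named above (MLD, RA950, PERC15) are direct statistics of the segmented lung voxels' HU values. A minimal sketch with synthetic voxels:

    ```python
    import numpy as np

    def aeration_metrics(lung_hu):
        """MLD, RA950 and PERC15 from segmented lung voxel HU values."""
        lung_hu = np.asarray(lung_hu, dtype=float)
        mld = lung_hu.mean()                       # mean lung density (HU)
        ra950 = 100.0 * np.mean(lung_hu < -950.0)  # % voxels below -950 HU
        perc15 = np.percentile(lung_hu, 15)        # 15th percentile (HU)
        return mld, ra950, perc15

    # Synthetic stand-in for the segmented lung voxels of one subject.
    voxels = np.random.default_rng(0).normal(-850.0, 60.0, 100000)
    print(aeration_metrics(voxels))
    ```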

  3. Automatic system for quantification and visualization of lung aeration on chest computed tomography images: the Lung Image System Analysis - LISA

    Energy Technology Data Exchange (ETDEWEB)

    Felix, John Hebert da Silva; Cortez, Paulo Cesar, E-mail: jhsfelix@gmail.co [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil). Dept. de Engenharia de Teleinformatica; Holanda, Marcelo Alcantara [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil). Hospital Universitario Walter Cantidio. Dept. de Medicina Clinica

    2010-12-15

    High Resolution Computed Tomography (HRCT) is the exam of choice for the diagnostic evaluation of lung parenchyma diseases. There is an increasing interest in computational systems able to automatically analyze the radiological densities of the lungs in CT images. The main objective of this study is to present a system for the automatic quantification and visualization of the lung aeration in HRCT images of different degrees of aeration, called Lung Image System Analysis (LISA). The secondary objective is to compare LISA to the Osiris system and to a specific lung segmentation algorithm (ALS) with respect to the accuracy of lung segmentation. The LISA system automatically extracts the following image attributes: lung perimeter, cross-sectional area, volume, the radiological density histograms, the mean lung density (MLD) in Hounsfield units (HU), the relative area of the lungs with voxels with density values lower than -950 HU (RA950) and the 15th percentile of the least dense voxels (PERC15). Furthermore, LISA has a colored mask algorithm that applies pseudo-colors to the lung parenchyma according to the pre-defined radiological density chosen by the system user. The lung segmentations of 102 images of 8 healthy volunteers and 141 images of 11 patients with Chronic Obstructive Pulmonary Disease (COPD) were compared for accuracy and concordance among the three methods. LISA was more effective at lung segmentation than the other two methods. LISA's color mask tool improves the spatial visualization of the degrees of lung aeration, and the various attributes of the image that can be extracted may help physicians and researchers to better assess lung aeration both quantitatively and qualitatively. LISA may have important clinical and research applications in the assessment of global and regional lung aeration and therefore deserves further development and validation studies. (author)

  4. Medical image registration for analysis

    International Nuclear Information System (INIS)

    Petrovic, V.

    2006-01-01

    Full text: Image registration techniques represent a rich family of image processing and analysis tools that aim to provide spatial correspondences across sets of medical images of similar and disparate anatomies and modalities. Image registration is a fundamental and usually the first step in medical image analysis and this paper presents a number of advanced techniques as well as demonstrates some of the advanced medical image analysis techniques they make possible. A number of both rigid and non-rigid medical image alignment algorithms of equivalent and merely consistent anatomical structures respectively are presented. The algorithms are compared in terms of their practical aims, inputs, computational complexity and level of operator (e.g. diagnostician) interaction. In particular, the focus of the methods discussion is placed on the applications and practical benefits of medical image registration. Results of medical image registration on a number of different imaging modalities and anatomies are presented demonstrating the accuracy and robustness of their application. Medical image registration is quickly becoming ubiquitous in medical imaging departments with the results of such algorithms increasingly used in complex medical image analysis and diagnostics. This paper aims to demonstrate at least part of the reason why
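
    As one concrete member of the rigid family discussed here, translation-only registration by phase correlation can be sketched with scikit-image (this is a generic method, not one of the paper's algorithms):

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage.registration import phase_cross_correlation

    # Simulate two misaligned images of the same structure.
    fixed = np.zeros((128, 128))
    fixed[40:80, 50:90] = 1.0
    moving = ndimage.shift(fixed, (6.0, -3.5))

    # Recover the translation and re-align the moving image.
    shift, error, _ = phase_cross_correlation(fixed, moving, upsample_factor=10)
    registered = ndimage.shift(moving, shift)

    print(shift)  # approximately [-6.0, 3.5]
    ```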

  5. Analysis of Craniofacial Images using Computational Atlases and Deformation Fields

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur

    2008-01-01

    purposes. The basis for most of the applications is non-rigid image registration. This approach brings one image into the coordinate system of another resulting in a deformation field describing the anatomical correspondence between the two images. A computational atlas representing the average anatomy...... of asymmetry. The analyses are applied to the study of three different craniofacial anomalies. The craniofacial applications include studies of Crouzon syndrome (in mice), unicoronal synostosis plagiocephaly and deformational plagiocephaly. Using the proposed methods, the thesis reveals novel findings about...... the craniofacial morphology and asymmetry of Crouzon mice. Moreover, a method to plan and evaluate treatment of children with deformational plagiocephaly, based on asymmetry assessment, is established. Finally, asymmetry in children with unicoronal synostosis is automatically assessed, confirming previous results...

  6. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    Science.gov (United States)

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
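
    The server-side batching pattern described can be reduced to a short sketch; `process_tile` below is a hypothetical stand-in for the FARSIGHT pipeline calls, and the directory layout and worker count are assumptions:

    ```python
    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    def process_tile(path: Path) -> str:
        # ... read the tile, correct artifacts, segment, extract features ...
        return f"done {path.name}"

    def main():
        tiles = sorted(Path("tiles").glob("*.tif"))   # assumed layout
        with ProcessPoolExecutor(max_workers=40) as pool:
            for message in pool.map(process_tile, tiles):
                print(message)                        # stands in for the log

    if __name__ == "__main__":
        main()
    ```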

  7. Dancing Styles of Collective Cell Migration: Image-Based Computational Analysis of JRAB/MICAL-L2.

    Science.gov (United States)

    Sakane, Ayuko; Yoshizawa, Shin; Yokota, Hideo; Sasaki, Takuya

    2018-01-01

    Collective cell migration is observed during morphogenesis, angiogenesis, and wound healing, and this type of cell migration also contributes to efficient metastasis in some kinds of cancers. Because collectively migrating cells are much better organized than a random assemblage of individual cells, there seems to be a kind of order in migrating clusters. Extensive research has identified a large number of molecules involved in collective cell migration, and these factors have been analyzed using dramatic advances in imaging technology. To date, however, it remains unclear how myriad cells are integrated as a single unit. Recently, we observed unbalanced collective cell migrations that can be likened to either precision dancing or awa-odori , Japanese traditional dancing similar to the style at Rio Carnival, caused by the impairment of the conformational change of JRAB/MICAL-L2. This review begins with a brief history of image-based computational analyses on cell migration, explains why quantitative analysis of the stylization of collective cell behavior is difficult, and finally introduces our recent work on JRAB/MICAL-L2 as a successful example of the multidisciplinary approach combining cell biology, live imaging, and computational biology. In combination, these methods have enabled quantitative evaluations of the "dancing style" of collective cell migration.

  8. Use of personal computer image for processing a magnetic resonance image (MRI)

    International Nuclear Information System (INIS)

    Yamamoto, Tetsuo; Tanaka, Hitoshi

    1988-01-01

    Image processing of MR images was attempted using a popular 16-bit personal computer. The computer processed the images on 256 x 256 and 512 x 512 matrices. The image-processing software was written in Macro-Assembler and run under MS-DOS. The original images, acquired with a 0.5 T superconducting machine (VISTA MR 0.5 T, Picker International), were transferred to the computer on flexible diskette. Image-processing operations such as display of an image on the monitor, contrast enhancement, unsharp-mask contrast enhancement, various filters, edge detection, and color histograms were completed in 1.6 to 67 seconds, indicating that a commercial personal computer has the capability for routine clinical MRI processing. (author)
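
    Of the operations timed above, unsharp-mask contrast enhancement is the easiest to sketch: subtract a blurred copy and add the difference back. A NumPy/SciPy version on a synthetic 256 x 256 slice:

    ```python
    import numpy as np
    from scipy import ndimage

    def unsharp_mask(img, sigma=2.0, amount=1.0):
        """Add back the difference between the image and a blurred copy."""
        img = img.astype(float)
        blurred = ndimage.gaussian_filter(img, sigma)
        return np.clip(img + amount * (img - blurred), 0, 255)

    slice_img = np.random.default_rng(0).integers(0, 256, (256, 256))
    print(unsharp_mask(slice_img, sigma=2.0, amount=1.5).shape)
    ```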

  9. Soil structure characterized using computed tomographic images

    Science.gov (United States)

    Zhanqi Cheng; Stephen H. Anderson; Clark J. Gantzer; J. W. Van Sambeek

    2003-01-01

    Fractal analysis of soil structure is a relatively new method for quantifying the effects of management systems on soil properties and quality. The objective of this work was to explore several methods of studying images to describe and quantify structure of soils under forest management. This research uses computed tomography and a topological method called Multiple...
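
    Fractal analysis of a binary structure image is commonly done by box counting: count occupied boxes at successively coarser grids and fit a line in log-log space. A minimal sketch (the striped test pattern is illustrative):

    ```python
    import numpy as np

    def box_counting_dimension(binary):
        """Box-counting fractal dimension of a square, power-of-two image."""
        n = binary.shape[0]
        sizes, counts = [], []
        size = n
        while size >= 1:
            s = n // size
            # boxes of edge `size` containing at least one foreground pixel
            occupied = binary.reshape(s, size, s, size).any(axis=(1, 3)).sum()
            sizes.append(size)
            counts.append(occupied)
            size //= 2
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope

    pores = np.zeros((256, 256), dtype=bool)
    pores[::4, :] = True                     # a simple striped test pattern
    print(round(box_counting_dimension(pores), 2))
    ```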

  10. Application of automatic image analysis in wood science

    Science.gov (United States)

    Charles W. McMillin

    1982-01-01

    In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...

  11. [Computational medical imaging (radiomics) and potential for immuno-oncology].

    Science.gov (United States)

    Sun, R; Limkin, E J; Dercle, L; Reuzé, S; Zacharaki, E I; Chargari, C; Schernberg, A; Dirand, A S; Alexis, A; Paragios, N; Deutsch, É; Ferté, C; Robert, C

    2017-10-01

    The arrival of immunotherapy has profoundly changed the management of multiple cancers, obtaining unexpected tumour responses. However, until now, the majority of patients do not respond to these new treatments. The identification of biomarkers that can identify responding patients early is a major challenge. Computational medical imaging (also known as radiomics) is a promising and rapidly growing discipline. This new approach consists in the analysis of high-dimensional data extracted from medical imaging to further describe tumour phenotypes. This approach has the advantages of being non-invasive, capable of evaluating the tumour and its microenvironment in their entirety, thus characterising spatial heterogeneity, and being easily repeatable over time. The end goal of radiomics is to establish imaging biomarkers as decision support tools for clinical practice and to facilitate a better understanding of cancer biology, allowing assessment of the changes throughout the evolution of the disease and the therapeutic sequence. This review develops the process of computational imaging analysis and presents its potential in immuno-oncology. Copyright © 2017 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  12. Comparison of Volumes between Four-Dimensional Computed Tomography and Cone-Beam Computed Tomography Images using Dynamic Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seong Eun; Won, Hui Su; Hong, Joo Wan; Chang, Nam Jun; Jung, Woo Hyun; Choi, Byeong Don [Dept. of Radiation Oncology, Seoul National University Bundang Hospital, Sungnam (Korea, Republic of)

    2016-12-15

    The aim of this study was to compare the differences between the volumes acquired with four-dimensional computed tomography (4DCT) images with a reconstruction image-filtering algorithm and cone-beam computed tomography (CBCT) images with a dynamic phantom. The 4DCT images were obtained from the Computerized Imaging Reference Systems (CIRS) phantom using a computed tomography (CT) simulator. We analyzed the volumes for the maximum intensity projection (MIP), minimum intensity projection (MinIP) and average intensity projection (AVG) of the images obtained with the 4DCT scanner against those acquired from CBCT images with CT ranger tools. Differences in volume for nodes of 1, 2 and 3 cm between CBCT and 4DCT were 0.54-2.33, 5.16-8.06 and 9.03-20.11 ml in MIP, respectively; 0.00-1.48, 0.00-8.47 and 1.42-24.85 ml in MinIP, respectively; and 0.00-1.17, 0.00-2.19 and 0.04-3.35 ml in AVG, respectively. After a comparative analysis of the volumes for each nodal size, it was apparent that the CBCT images were similar to the AVG images acquired using 4DCT.
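
    The three projection types compared here are per-voxel reductions across the respiratory phases of the 4-D stack. A sketch with a synthetic stack and a hypothetical -300 HU nodule threshold:

    ```python
    import numpy as np

    # Synthetic 4-D stack: 10 respiratory phases of a 64^3 volume (HU).
    phases = np.random.default_rng(0).normal(-800.0, 50.0, (10, 64, 64, 64))

    mip = phases.max(axis=0)    # maximum intensity projection across phases
    minip = phases.min(axis=0)  # minimum intensity projection
    avg = phases.mean(axis=0)   # average intensity projection

    # Volume above a hypothetical -300 HU nodule threshold, 1 mm^3 voxels.
    voxel_ml = 1.0 / 1000.0
    print((mip > -300.0).sum() * voxel_ml, (avg > -300.0).sum() * voxel_ml)
    ```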

  13. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    This book is a result of a collaboration between DTU Informatics at the Technical University of Denmark and the Laboratory of Computer Vision and Media Technology at Aalborg University. It is partly based on the book ”Image and Video Processing”, second edition by Thomas Moeslund. The aim...... of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  14. Facial Image Analysis in Anthropology: A Review

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

    Roč. 49, č. 2 (2011), s. 141-153 ISSN 0323-1119 Institutional support: RVO:67985807 Keywords: face * computer-assisted methods * template matching * geometric morphometrics * robust image analysis Subject RIV: IN - Informatics, Computer Science

  15. Construction of computational models for the stress analysis of the bones using CT imaging: application in the gleno-humeral joint

    International Nuclear Information System (INIS)

    Cisilino, Adrian; D'Amico, Diego; Buroni, Federico; Commisso, Pablo; Sammartino, Mario; Capiel, Carlos

    2008-01-01

    A methodology for the construction of computational models from CT images is presented in this work. The computational models serve for the stress analysis of bones using the Finite Element Method. The elastic constants of the bone tissue are calculated using the density data obtained from the CTs. The proposed methodology is demonstrated in the construction of a model for the gleno-humeral joint. (authors)

  16. Metasurface optics for full-color computational imaging.

    Science.gov (United States)

    Colburn, Shane; Zhan, Alan; Majumdar, Arka

    2018-02-01

    Conventional imaging systems comprise large and expensive optical components that successively mitigate aberrations. Metasurface optics offers a route to miniaturize imaging systems by replacing bulky components with flat and compact implementations. The diffractive nature of these devices, however, induces severe chromatic aberrations, and current multiwavelength and narrowband achromatic metasurfaces cannot support full visible spectrum imaging (400 to 700 nm). We combine principles of both computational imaging and metasurface optics to build a system with a single metalens of numerical aperture ~0.45, which generates in-focus images under white light illumination. Our metalens exhibits a spectrally invariant point spread function that enables computational reconstruction of captured images with a single digital filter. This work connects computational imaging and metasurface optics and demonstrates the capabilities of combining these disciplines by simultaneously reducing aberrations and downsizing imaging systems using simpler optics.
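
    A spectrally invariant point spread function permits reconstruction with one linear filter; Wiener deconvolution is the textbook choice and is sketched below with a synthetic Gaussian PSF (the paper's filter design may differ):

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, k=1e-2):
        """One-filter reconstruction for a spatially invariant PSF."""
        H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
        G = np.fft.fft2(blurred)
        return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

    # Synthetic scene blurred by a centered Gaussian PSF, then restored.
    yy, xx = np.mgrid[-64:64, -64:64]
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))
    psf /= psf.sum()
    scene = np.zeros((128, 128))
    scene[50:78, 40:90] = 1.0
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))
    restored = wiener_deconvolve(blurred, psf)
    print(np.abs(restored - scene).mean() < np.abs(blurred - scene).mean())
    ```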

  17. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python

    Directory of Open Access Journals (Sweden)

    Nicolas eRey-Villamizar

    2014-04-01

    Full Text Available In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral brain tissue images surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels and 6,000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analytics for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1 TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between compute and storage servers, logs all processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.

  18. Dynamic time-dependent analysis and static three-dimensional imaging procedures for computer-assisted CNS studies

    International Nuclear Information System (INIS)

    Budinger, T.F.; DeLand, F.H.; Duggan, H.E.; Bouz, J.J.; Hoop, B. Jr.; McLaughlin, W.T.; Weber, P.M.

    1975-01-01

    Two-dimensional computer image-processing techniques have not proved to be of importance in diagnostic nuclear medicine primarily because the radionuclide distribution represents a three-dimensional problem. More recent developments in three-dimensional reconstruction from multiple views or multiple detectors promise to overcome the major limitations in previous work with digital computers. These techniques are now in clinical use for static imaging; however, speed limitations have prevented application to dynamic imaging. The future development of these methods will require innovations in patient positioning and multiple-view devices for either single-gamma or positron annihilation detection

  19. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  20. Correction for polychromatic aberration in computed tomography images

    International Nuclear Information System (INIS)

    Naparstek, A.

    1979-01-01

    A method and apparatus for correcting a computed tomography image for polychromatic aberration caused by the non-linear interaction (i.e. the energy dependent attenuation characteristics) of different body constituents, such as bone and soft tissue, with a polychromatic X-ray beam are described in detail. An initial image is conventionally computed from path measurements made as source and detector assembly scan a body section. In the improvement, each image element of the initial computed image representing attenuation is recorded in a store and is compared with two thresholds, one representing bone and the other soft tissue. Depending on the element value relative to the thresholds, a proportion of the respective constituent is allocated to that element location and corresponding bone and soft tissue projections are determined and stored. An error projection generator calculates projections of polychromatic aberration errors in the raw image data from recalled bone and tissue projections using a multidimensional polynomial function which approximates the non-linear interaction involved. After filtering, these are supplied to an image reconstruction computer to compute image element correction values which are subtracted from raw image element values to provide a corrected reconstructed image for display. (author)
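
    The described two-constituent scheme can be sketched in simplified form: threshold the initial image into bone and soft-tissue maps, project each, form a polynomial error projection, and subtract its reconstruction. The thresholds and polynomial coefficients below are placeholders, and skimage's radon/iradon (version 0.19+) stand in for the scanner geometry:

    ```python
    import numpy as np
    from skimage.transform import radon, iradon

    theta = np.linspace(0.0, 180.0, 180, endpoint=False)

    def polychromatic_correction(recon, t_soft, t_bone, c):
        """Threshold into constituents, project, build a polynomial error
        projection, and subtract its reconstruction (simplified)."""
        bone = np.where(recon >= t_bone, recon, 0.0)
        soft = np.where((recon >= t_soft) & (recon < t_bone), recon, 0.0)
        pb, ps = radon(bone, theta), radon(soft, theta)
        err = c[0] * pb ** 2 + c[1] * pb * ps + c[2] * ps ** 2
        return recon - iradon(err, theta, filter_name="ramp")

    phantom = np.zeros((128, 128))
    phantom[30:100, 30:100] = 1.0   # "soft tissue"
    phantom[55:75, 55:75] = 2.0     # "bone"
    print(polychromatic_correction(phantom, 0.5, 1.5, (1e-5, 1e-5, 1e-5)).shape)
    ```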

  1. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote Sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. The high cost of this equipment places many of these techniques beyond the means of most users. A new, more economical, video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. Processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost give the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed, including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat.

  2. Computational scalability of large size image dissemination

    Science.gov (United States)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of image pyramid building needed for dissemination of very large image data. The sources of large images include high-resolution microscopes and telescopes, remote sensing and airborne imaging, and high-resolution scanners. The term 'large' is understood from a user perspective, meaning either larger than a display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with the total number around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th century (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
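
    Pyramid building itself is a loop of successive downsamplings. A Pillow sketch that writes Seadragon-style levels (the input path is a placeholder, and real DZI output also tiles each level):

    ```python
    from pathlib import Path
    from PIL import Image

    def build_pyramid(src, out_dir):
        """Write successively halved copies of `src` down to a 256-px level."""
        out = Path(out_dir)
        out.mkdir(exist_ok=True)
        img = Image.open(src)
        level = 0
        while True:
            img.save(out / f"level_{level}.jpg", quality=90)
            if max(img.size) <= 256:
                break
            img = img.resize((max(1, img.width // 2), max(1, img.height // 2)),
                             Image.LANCZOS)
            level += 1

    build_pyramid("scan.jpg", "pyramid")  # "scan.jpg" is a placeholder path
    ```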

  3. A Versatile Image Processor For Digital Diagnostic Imaging And Its Application In Computed Radiography

    Science.gov (United States)

    Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.

    1986-06-01

    In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality, distributed and parallel processing, and aims at real-time response. Parallel processing and real-time response are facilitated in part by a dual bus system: a VME control bus and a high-speed image data bus consisting of 8 independent parallel 16-bit busses, capable of handling combined data rates of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video-rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic workstation. Several hardware modules are described in detail. For illustrating the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.

  4. Recent advances in transient imaging: A computer graphics and vision perspective

    Directory of Open Access Journals (Sweden)

    Adrian Jarabo

    2017-03-01

    Full Text Available Transient imaging has recently made a huge impact in the computer graphics and computer vision fields. By capturing, reconstructing, or simulating light transport at extreme temporal resolutions, researchers have proposed novel techniques to show movies of light in motion, see around corners, detect objects in highly-scattering media, or infer material properties from a distance, to name a few. The key idea is to leverage the wealth of information in the temporal domain at the pico or nanosecond resolution, information usually lost during the capture-time temporal integration. This paper presents recent advances in this field of transient imaging from a graphics and vision perspective, including capture techniques, analysis, applications and simulation. Keywords: Transient imaging, Ultrafast imaging, Time-of-flight

  5. Semivariogram Analysis of Bone Images Implemented on FPGA Architectures.

    Science.gov (United States)

    Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang

    2017-03-01

    Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive, thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. A semivariance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current
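
    The software reference against which such FPGA designs are compared is easy to state: for each lag h, average the squared differences of all pixel pairs at that offset. A per-lag vectorized NumPy sketch along one axis:

    ```python
    import numpy as np

    def semivariogram(img, max_lag):
        """gamma(h) = 0.5 * E[(Z(x) - Z(x + h))^2] for lags h = 1..max_lag."""
        img = img.astype(float)
        gamma = np.empty(max_lag)
        for h in range(1, max_lag + 1):
            diff = img[:, h:] - img[:, :-h]     # all pixel pairs at lag h
            gamma[h - 1] = 0.5 * np.mean(diff ** 2)
        return gamma

    bone = np.random.default_rng(0).normal(128.0, 20.0, (64, 64))
    print(semivariogram(bone, 8))
    ```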

  6. A comparative study of 2 computer-assisted methods of quantifying brightfield microscopy images.

    Science.gov (United States)

    Tse, George H; Marson, Lorna P

    2013-10-01

    Immunohistochemistry continues to be a powerful tool for the detection of antigens. There are several commercially available software packages that allow image analysis; however, these can be complex, require a relatively high level of computer skills, and can be expensive. We compared 2 commonly available software packages, Adobe Photoshop CS6 and ImageJ, in their ability to quantify percentage positive area after picrosirius red (PSR) staining and 3,3'-diaminobenzidine (DAB) staining. On analysis of DAB-stained B cells in the mouse spleen, with a biotinylated primary rat anti-mouse-B220 antibody, there was no significant difference between converting brightfield microscopy images to binary images to measure black and white pixels using ImageJ and measuring a range of brown pixels with Photoshop (Student t test, P=0.243, correlation r=0.985). When analyzing mouse kidney allografts stained with PSR, Photoshop achieved a greater interquartile range while maintaining a lower 10th percentile value compared with analysis with ImageJ. A lower 10th percentile reflects that Photoshop analysis is better at analyzing tissues with low levels of positive pixels, which is particularly relevant for control tissues or negative controls, whereas after ImageJ analysis the same images would result in spuriously high levels of positivity. Furthermore, comparing the 2 methods by Bland-Altman plot revealed that the 2 methodologies did not agree when measuring images with a higher percentage of positive staining, and correlation was poor (r=0.804). We conclude that for computer-assisted analysis of images of DAB-stained tissue there is no difference between using Photoshop or ImageJ. However, for analysis of color images where differentiation into a binary pattern is not easy, such as with PSR, Photoshop is superior at identifying higher levels of positivity while maintaining differentiation of low levels of positive staining.
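
    Both workflows reduce to the same primitive: count the pixels whose color falls inside a positive range and divide by the total. A NumPy sketch with an illustrative "DAB brown" range:

    ```python
    import numpy as np

    def percent_positive(rgb, low, high):
        """Percent of pixels whose RGB values lie inside an inclusive range."""
        rgb = np.asarray(rgb)
        mask = np.all((rgb >= low) & (rgb <= high), axis=-1)
        return 100.0 * mask.mean()

    img = np.random.default_rng(0).integers(0, 256, (100, 100, 3))
    print(percent_positive(img, (60, 20, 0), (200, 140, 90)))  # % "brown"
    ```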

  7. Computer-aided diagnosis and artificial intelligence in clinical imaging.

    Science.gov (United States)

    Shiraishi, Junji; Li, Qiang; Appelbaum, Daniel; Doi, Kunio

    2011-11-01

    Computer-aided diagnosis (CAD) is rapidly entering the radiology mainstream. It has already become a part of routine clinical work for the detection of breast cancer with mammograms. The computer output is used as a "second opinion" in assisting radiologists' image interpretations. The computer algorithm generally consists of several steps that may include image processing, image feature analysis, and data classification via the use of tools such as artificial neural networks (ANN). In this article, we will explore these and other current processes that have come to be referred to as "artificial intelligence." One element of CAD, temporal subtraction, has been applied for enhancing interval changes and for suppressing unchanged structures (eg, normal structures) between 2 successive radiologic images. To reduce misregistration artifacts on the temporal subtraction images, a nonlinear image warping technique for matching the previous image to the current one has been developed. Development of the temporal subtraction method originated with chest radiographs, with the method subsequently being applied to chest computed tomography (CT) and nuclear medicine bone scans. The usefulness of the temporal subtraction method for bone scans was demonstrated by an observer study in which reading times and diagnostic accuracy improved significantly. An additional prospective clinical study verified that the temporal subtraction image could be used as a "second opinion" by radiologists with negligible detrimental effects. ANN was first used in 1990 for computerized differential diagnosis of interstitial lung diseases in CAD. Since then, ANN has been widely used in CAD schemes for the detection and diagnosis of various diseases in different imaging modalities, including the differential diagnosis of lung nodules and interstitial lung diseases in chest radiography, CT, and positron emission tomography/CT. It is likely that CAD will be integrated into picture archiving and
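
    Temporal subtraction, as described above, registers the previous image to the current one and subtracts, so that unchanged anatomy cancels and interval changes stand out. The published method uses nonlinear warping; the sketch below substitutes a simple translation estimated by phase cross-correlation (scikit-image), so treat it as a toy stand-in rather than the authors' algorithm:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def temporal_subtraction(previous: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Align the previous radiograph to the current one, then subtract,
    so interval changes stand out while unchanged structures cancel."""
    # Estimate the translation that maps `previous` onto `current`.
    offset, _, _ = phase_cross_correlation(current, previous)
    registered = nd_shift(previous, offset)
    return current - registered
```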

  8. Computer-aided diagnosis in radiological imaging: current status and future challenges

    Science.gov (United States)

    Doi, Kunio

    2009-10-01

    Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. Many different types of CAD schemes are being developed for detection and/or characterization of various lesions in medical imaging, including conventional projection radiography, CT, MRI, and ultrasound imaging. Commercial systems for detection of breast lesions on mammograms have been developed and have received FDA approval for clinical use. CAD may be defined as a diagnosis made by a physician who takes into account the computer output as a "second opinion". The purpose of CAD is to improve the quality and productivity of physicians in their interpretation of radiologic images. The quality of their work can be improved in terms of the accuracy and consistency of their radiologic diagnoses. In addition, the productivity of radiologists is expected to be improved by a reduction in the time required for their image readings. The computer output is derived from quantitative analysis of radiologic images by use of various methods and techniques in computer vision, artificial intelligence, and artificial neural networks (ANNs). The computer output may indicate a number of important parameters, for example, the locations of potential lesions such as lung cancer and breast cancer, the likelihood of malignancy of detected lesions, and the likelihood of various diseases based on differential diagnosis in a given image and clinical parameters. In this review article, the basic concept of CAD is first defined, and the current status of CAD research is then described. In addition, the potential of CAD in the future is discussed and predicted.

  9. Three-dimensional analysis and display of medical images

    International Nuclear Information System (INIS)

    Bajcsy, R.

    1985-01-01

    Until recently, the most common medical images were X-rays on film analyzed by an expert, usually a radiologist, who used, in addition to his/her visual perceptual abilities, knowledge obtained through medical studies and experience. Today, however, with the advent of various imaging techniques, X-ray computerized axial tomographs (CAT), positron emission tomographs (PET), ultrasound tomographs, and nuclear magnetic resonance tomographs (NMR), just to mention a few, the images are generated by computers and displayed on computer-controlled devices; so it is appropriate to think about more quantitative and perhaps automated ways of data analysis. Furthermore, since the data are generated by computer, it is only natural to take advantage of the computer for analysis purposes. In addition, using the computer, one can analyze more data and relate different modalities from the same subject, such as, for example, comparing the CAT images with PET images from the same subject. In the next section (The PET Scanner) the authors only briefly mention, with appropriate references, the modeling of the positron emission tomographic scanner, since this imaging technique is not as widely described in the literature as the CAT scanner. The modeling of the interpreter is not mentioned, since it is a topic that by itself deserves a full paper; see, for example, Pizer [1981]. The thrust of this chapter is on modeling the organs that are being imaged and the matching techniques between the model and the data. The image data are from CAT and PET scans. Although the authors believe that their techniques are applicable to any organ of the human body, the examples are only from the brain

  10. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    Science.gov (United States)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system has been developed for real-time imaging of lymphatic propulsion by intradermal injection of a microdose of a NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] is underdeveloped, requiring extensive time and effort to analyze a NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.

  11. Jet-images: computer vision inspired techniques for jet tagging

    Energy Technology Data Exchange (ETDEWEB)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel [SLAC National Accelerator Laboratory,Menlo Park, CA 94028 (United States)

    2015-02-18

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.

  12. Jet-images: computer vision inspired techniques for jet tagging

    International Nuclear Information System (INIS)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel

    2015-01-01

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.
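
    As a rough illustration of the classification step (synthetic Poisson "jet-images" stand in for simulated calorimeter data; the 25x25 grid and sample counts are arbitrary), the Fisher discriminant can be realized by fitting a linear discriminant on flattened pixel vectors:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Toy stand-ins for preprocessed jet-images: each 25x25 tower grid is
# flattened to a pixel vector, labelled 1 (W jet) or 0 (QCD jet).
signal = rng.poisson(2.0, size=(500, 25 * 25)).astype(float)
background = rng.poisson(1.5, size=(500, 25 * 25)).astype(float)
X = np.vstack([signal, background])
y = np.array([1] * 500 + [0] * 500)

# Fisher discriminant: project each jet-image onto the single direction
# that best separates the two classes.
fisher = LinearDiscriminantAnalysis().fit(X, y)
scores = fisher.decision_function(X)   # per-jet discriminant values
```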

  13. Image viewing station for MR and SPECT : using personal computer

    International Nuclear Information System (INIS)

    Yim, Byung Il; Jeong, Eun Kee; Suh, Jin Suck; Kim, Myeong Joon

    1996-01-01

    A macro language was programmed to analyze and process, on Macintosh personal computers, GE MR images digitally transferred from the MR main computer, with special interest in the interpretation of information such as patient data and imaging parameters under each image header. By this method, raw data (files) of certain patients may be digitally stored on a hard disk or CD-ROM, and quantitative analysis, interpretation and display become possible. Patients and images were randomly selected. 4.X MR images were transferred through FTP using the Ethernet network; 5.X and SPECT images were transferred using floppy diskettes. To process the transferred images, a freely distributed Macintosh program, namely NIH Image, with its macro language, was used to import images and translate header information. To identify the necessary information, a separate window named 'info-txt' was made for each image series. MacLC, Centris650, and PowerMac6100/CD, 7100/CD, 8100/CD models with 256 colors and RAM over 8 Mbytes were used. Different versions of MR images and SPECT images were displayed simultaneously, and the 'info-txt' window was used to show all necessary information (name of the patient, unit number, date, TR, TE, FOV, etc.). Additional information (diagnosis, pathologic report, etc.) was stored in another text box in 'info-txt'. The size of the file for each image plane was about 149 Kbytes and the images were stored in step-like file folders. 4.X and 5.X GE Signa 1.5T images were successfully processed with a Macintosh computer and NIH Image. This result may be applied to many fields, and there is hope of a broader area of application with the linkage of NIH Image and a database program

  14. Real-time computer treatment of THz passive device images with the high image quality

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device, and to active THz imaging systems as well. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows the number of pixels in processed images to be increased without noticeable reduction of image quality, and its performance can be increased many times by using parallel algorithms for processing the image. We developed original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which capture images of objects hidden under opaque clothes. For images with high noise, we developed an approach which suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects, and they are a very promising solution for the security problem.

  15. Multimodality image analysis work station

    International Nuclear Information System (INIS)

    Ratib, O.; Huang, H.K.

    1989-01-01

    The goal of this project is to design and implement a PACS (picture archiving and communication system) workstation for quantitative analysis of multimodality images. The Macintosh II personal computer was selected for its friendly user interface, its popularity among the academic and medical community, and its low cost. The Macintosh operates as a stand-alone workstation where images are imported from a central PACS server through a standard Ethernet network and saved on a local magnetic or optical disk. A video digitizer board allows for direct acquisition of images from sonograms or from digitized cine angiograms. The authors have focused their project on the exploration of new means of communicating quantitative data and information through the use of an interactive and symbolic user interface. The software developed includes a variety of image analysis algorithms for digitized angiograms, sonograms, scintigraphic images, MR images, and CT scans

  16. Transfer function analysis of radiographic imaging systems

    International Nuclear Information System (INIS)

    Metz, C.E.; Doi, K.

    1979-01-01

    The theoretical and experimental aspects of the techniques of transfer function analysis used in radiographic imaging systems are reviewed. The mathematical principles of transfer function analysis are developed for linear, shift-invariant imaging systems, for the relation between object and image, and for the image due to a sinusoidal plane-wave object. The other basic mathematical principle discussed is Fourier analysis and its application to an input function. Other aspects of transfer function analysis covered are alternative expressions for the 'optical transfer function' of imaging systems, and expressions are derived for both serial and parallel transfer image sub-systems. The applications of transfer function analysis to radiographic imaging systems are discussed in relation to the linearisation of the radiographic imaging system, the object, geometrical unsharpness, screen-film system unsharpness, other unsharpness effects and, finally, noise analysis. It is concluded that extensive theoretical, computer simulation and experimental studies have demonstrated that the techniques of transfer function analysis provide an accurate and reliable means for predicting and understanding the effects of various radiographic imaging system components in most practical diagnostic medical imaging situations. (U.K.)
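
    As a small worked example of the transfer-function machinery the review covers (the function name and the Gaussian test input are illustrative, not from the paper), the modulation transfer function can be obtained as the normalized Fourier magnitude of a measured line spread function:

```python
import numpy as np

def mtf_from_lsf(lsf: np.ndarray, pixel_pitch_mm: float):
    """MTF as the normalized magnitude of the Fourier transform of a
    line spread function sampled at `pixel_pitch_mm` intervals."""
    lsf = lsf / lsf.sum()                 # unit area
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                         # MTF(0) = 1 by convention
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)  # cycles/mm
    return freqs, mtf

# Example: Gaussian LSF (sigma = 0.3 mm) sampled at 0.1 mm pitch
x = np.arange(-64, 64) * 0.1
freqs, mtf = mtf_from_lsf(np.exp(-x**2 / (2 * 0.3**2)), pixel_pitch_mm=0.1)
```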

  17. Wave-equation Migration Velocity Analysis Using Plane-wave Common Image Gathers

    KAUST Repository

    Guo, Bowen; Schuster, Gerard T.

    2017-01-01

    Wave-equation migration velocity analysis (WEMVA) based on subsurface-offset, angle domain or time-lag common image gathers (CIGs) requires significant computational and memory resources because it computes higher dimensional migration images

  18. Computed tomography and three-dimensional imaging

    International Nuclear Information System (INIS)

    Harris, L.D.; Ritman, E.L.; Robb, R.A.

    1987-01-01

    Presented here is a brief introduction to two-, three-, and four-dimensional computed tomography. More detailed descriptions of the mathematics of reconstruction and of CT scanner operation are presented elsewhere. The complementary tomographic imaging methods of single-photon-emission tomography (SPECT), positron-emission tomography (PET), nuclear magnetic resonance (NMR) imaging, ultrasound sector scanning, and ultrasound computer-assisted tomography (UCAT) are only named here. Each imaging modality 'probes' the body with a different energy form, yielding unique and useful information about tomographic sections through the body

  19. Computational acceleration for MR image reconstruction in partially parallel imaging.

    Science.gov (United States)

    Ye, Xiaojing; Chen, Yunmei; Huang, Feng

    2011-05-01

    In this paper, we present a fast numerical algorithm for solving total variation and L1 (TVL1) based image reconstruction with application in partially parallel magnetic resonance imaging. Our algorithm uses a variable splitting method to reduce computational cost. Moreover, the Barzilai-Borwein step size selection method is adopted in our algorithm for much faster convergence. Experimental results on clinical partially parallel imaging data demonstrate that the proposed algorithm requires far fewer iterations and/or less computational cost than recently developed operator splitting and Bregman operator splitting methods, which can deal with a general sensing matrix in the reconstruction framework, to achieve similar or even better quality of reconstructed images.
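
    The Barzilai-Borwein rule picks the step size from the two most recent iterates and gradients: alpha_k = (s^T s)/(s^T y) with s = x_k - x_{k-1} and y = g_k - g_{k-1}. A minimal sketch on a plain least-squares problem (not the paper's TVL1 objective or its variable-splitting scheme) shows the rule in isolation:

```python
import numpy as np

def bb_gradient_descent(A, b, x0, iters=100):
    """Gradient descent for 0.5*||Ax - b||^2 with Barzilai-Borwein steps."""
    x = x0.copy()
    g = A.T @ (A @ x - b)                 # gradient at x
    alpha = 1e-3                          # conservative first step
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1e-3   # BB step size
        x, g = x_new, g_new
    return x

# Example: recover a vector from an overdetermined linear system
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 50))
x_true = rng.normal(size=50)
x_hat = bb_gradient_descent(A, A @ x_true, np.zeros(50))
```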

  20. Computers are stepping stones to improved imaging.

    Science.gov (United States)

    Freiherr, G

    1991-02-01

    Never before has the radiology industry embraced the computer with such enthusiasm. Graphics supercomputers as well as UNIX- and RISC-based computing platforms are turning up in every digital imaging modality and especially in systems designed to enhance and transmit images, says author Greg Freiherr on assignment for Computers in Healthcare at the Radiological Society of North America conference in Chicago.

  1. Micro-computer system for quantitative image analysis of damage microstructure

    International Nuclear Information System (INIS)

    Kohyama, A.; Kohno, Y.; Satoh, K.; Igata, N.

    1984-01-01

    Quantitative image analysis of radiation-induced damage microstructure is very important in evaluating material behavior in radiation environments, yet few improvements in the quantitative analysis of damage microstructure have been seen in recent decades. The objective of this work is to develop a new system for quantitative image analysis of damage microstructure which improves the accuracy and efficiency of data sampling and processing, and makes it possible to obtain new information about the mutual relations among dislocations, precipitates, cavities, grain boundaries, etc. In this system, data sampling is done with an X-Y digitizer. The cavity microstructure in dual-ion irradiated 316 SS is analyzed and the effectiveness of this system is discussed. (orig.)

  2. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest was developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  3. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-06-21

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

  4. Image analysis in modern ophthalmology: from acquisition to computer assisted diagnosis and telemedicine

    Science.gov (United States)

    Marrugo, Andrés G.; Millán, María S.; Cristóbal, Gabriel; Gabarda, Salvador; Sorel, Michal; Sroubek, Filip

    2012-06-01

    Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation, a permanent record for the patients, and, most importantly, the ability to extract information about many diseases. Modern ophthalmology thrives and develops on the advances in digital imaging and computing power. In this work we present an overview of recent image processing techniques proposed by the authors in the area of digital eye fundus photography. Our applications range from retinal image quality assessment to image restoration via blind deconvolution and visualization of structural changes in time between patient visits. All are proposed within a framework for improving and assisting the medical practice and the forthcoming scenario of the information chain in telemedicine.

  5. Knowledge-based low-level image analysis for computer vision systems

    Science.gov (United States)

    Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.

    1988-01-01

    Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.

  6. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  7. Application of computer-assisted imaging technology in human musculoskeletal joint research

    Directory of Open Access Journals (Sweden)

    Xudong Liu

    2014-01-01

    Full Text Available Computer-assisted imaging analysis technology has been widely used in musculoskeletal joint biomechanics research in recent years. Imaging techniques can accurately reconstruct the anatomic features of the target joint and reproduce its in vivo motion characteristics. These data have greatly improved our understanding of normal joint function, joint injury mechanisms, and surgical treatment, and can provide a foundation for using reverse-engineering methods to develop biomimetic artificial joints. In this paper, we systematically review the investigation of in vivo kinematics of the human knee, shoulder, lumbar spine, and ankle using advanced imaging technologies, especially those using a dual fluoroscopic imaging system (DFIS). We also briefly discuss future developments of imaging analysis technology in musculoskeletal joint research.

  8. Sensory analysis for magnetic resonance-image analysis: Using human perception and cognition to segment and assess the interior of potatoes

    DEFF Research Database (Denmark)

    Martens, Harald; Thybo, A.K.; Andersen, H.J.

    2002-01-01

    were developed by the panel during preliminary training sessions, and consisted of definitions of various biological compartments inside the tubers. The results from the sensory and the computer-assisted image analyses of the shape and interior structure of the tubers were related to the experimental... able to detect differences between varieties as well as storage times. The sensory image analysis gave better discrimination between varieties than the computer-assisted image analysis presently employed, and was easier to interpret. Some sensory descriptors could be predicted from the computer-assisted image analysis. The present results offer new information about using sensory analysis of MR images not only for food science but also for medical applications, for analysing MR and X-ray images, and for training of personnel such as radiologists and radiographers. (C) 2002 Elsevier Science Ltd.

  9. Incorporating Colour Information for Computer-Aided Diagnosis of Melanoma from Dermoscopy Images: A Retrospective Survey and Critical Analysis

    Directory of Open Access Journals (Sweden)

    Ali Madooei

    2016-01-01

    Full Text Available Cutaneous melanoma is the most life-threatening form of skin cancer. Although advanced melanoma is often considered incurable, if detected and excised early, the prognosis is promising. Today, clinicians use computer vision in an increasing number of applications to aid early detection of melanoma through dermatological image analysis (dermoscopy images, in particular). Colour assessment is essential for the clinical diagnosis of skin cancers. Due to this diagnostic importance, many studies have either focused on or employed colour features as a constituent part of their skin lesion analysis systems. These studies range from using low-level colour features, such as simple statistical measures of colours occurring in the lesion, to availing themselves of high-level semantic features such as the presence of blue-white veil, globules, or colour variegation in the lesion. This paper provides a retrospective survey and critical analysis of contributions in this research direction.

  10. Radiomic features analysis in computed tomography images of lung nodule classification.

    Directory of Open Access Journals (Sweden)

    Chia-Hung Chen

    Full Text Available Radiomics, which extracts a large number of quantitative image features from diagnostic medical images, has been widely used for prognostication, treatment response prediction and cancer detection. The treatment options for lung nodules depend on their diagnosis, benign or malignant. Conventionally, lung nodule diagnosis is based on invasive biopsy. Recently, radiomics features, a non-invasive method based on clinical images, have shown high potential in lesion classification and treatment outcome prediction. Lung nodule classification using radiomics based on Computed Tomography (CT) image data was investigated and a 4-feature signature was introduced for lung nodule classification. Retrospectively, 72 patients with 75 pulmonary nodules were collected. Radiomics feature extraction was performed on non-enhanced CT images with contours delineated by an experienced radiation oncologist. Among the 750 image features in each case, 76 features were found to have significant differences between benign and malignant lesions. A radiomics signature was composed of the best 4 features, which included Laws_LSL_min, Laws_SLL_energy, Laws_SSL_skewness and Laws_EEL_uniformity. The accuracy using the signature in benign or malignant classification was 84%, with a sensitivity of 92.85% and a specificity of 72.73%. The classification signature based on radiomics features demonstrated very good accuracy and high potential in clinical application.
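
    The reported accuracy, sensitivity and specificity follow directly from the confusion matrix of the binary benign/malignant decision. A hedged sketch of that computation (the function name is mine; labels use 1 for malignant, 0 for benign):

```python
import numpy as np

def diagnostic_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Accuracy, sensitivity, specificity, PPV and NPV for binary labels."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # fraction of malignant found
        "specificity": tn / (tn + fp),   # fraction of benign cleared
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```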

  11. Computational model of lightness perception in high dynamic range imaging

    Science.gov (United States)

    Krawczyk, Grzegorz; Myszkowski, Karol; Seidel, Hans-Peter

    2006-02-01

    An anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of the human visual system, such as lightness constancy and its spectacular failures, which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the gestalt theorists, are defined by regions of common illumination. The key aspect of image perception is the estimation of lightness within each framework through anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of real-world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception.

  12. Breast Cancer Image Segmentation Using K-Means Clustering Based on GPU Cuda Parallel Computing

    Directory of Open Access Journals (Sweden)

    Andika Elok Amalia

    2018-02-01

    Full Text Available Image processing technology is now widely used in the health area; one example is helping the radiologist to analyze the results of MRI (Magnetic Resonance Imaging), CT scans and mammography. Image segmentation is a process intended to obtain the objects contained in an image by dividing the image into several areas that share the attributes of an object, with the aim of facilitating the analysis process. The increasing amount of patient data and larger image sizes pose new challenges for the segmentation process: using time efficiently while still maintaining the quality of the process. Research on the segmentation of medical images has been done, but little of it has been combined with parallel computing. In this research, K-Means clustering of mammography images is implemented using two modes of computation, serial and parallel. The results show that parallel computing gives up to twofold faster average execution.
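
    In its serial form, the segmentation amounts to clustering pixel intensities and reshaping the cluster labels back into an image. A minimal sketch (scikit-learn's KMeans on a synthetic image, not the paper's GPU/CUDA implementation):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image: np.ndarray, k: int = 3) -> np.ndarray:
    """Segment a grayscale image by clustering pixel intensities into k
    groups; returns a label map with the same shape as the image."""
    pixels = image.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(image.shape)

# Example: 3-cluster segmentation of a synthetic 128x128 image
segmented = kmeans_segment(np.random.rand(128, 128), k=3)
```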

  13. Computer processing of microscopic images of bacteria : morphometry and fluorimetry

    NARCIS (Netherlands)

    Wilkinson, Michael H.F.; Jansen, Gijsbert J.; Waaij, Dirk van der

    1994-01-01

    Several techniques that use computer analysis of microscopic images have been developed to study the complicated microbial flora in the human intestine, including measuring the shape and fluorescence intensity of bacteria. These techniques allow rapid assessment of changes in the intestinal flora

  14. A numerical analysis of a semi-dry coupling configuration in photoacoustic computed tomography for infant brain imaging

    Directory of Open Access Journals (Sweden)

    Najme Meimani

    2017-09-01

    Full Text Available In the application of photoacoustic human infant brain imaging, debubbled ultrasound gel or water is commonly used as a couplant for ultrasonic transducers due to its acoustic properties. The main challenge in using such a couplant is its discomfort for the patient. In this study, we explore the feasibility of a semi-dry coupling configuration to be used in photoacoustic computed tomography (PACT) systems. The coupling system includes an inflatable container consisting of a thin layer of Aqualene with ultrasound gel or water inside of it. The finite element method (FEM) is used for static and dynamic structural analysis of the proposed configuration to be used in PACT for infant brain imaging. The outcome of the analysis is an optimum thickness of Aqualene that meets the weight tolerance requirement with the least attenuation and the best impedance match, to be recommended for an experimental setting.

  15. Particle Image Velocimetry and Computational Fluid Dynamics Analysis of Fuel Cell Manifold

    DEFF Research Database (Denmark)

    Lebæk, Jesper; Blazniak Andreasen, Marcin; Andresen, Henrik Assenholm

    2010-01-01

    The inlet effect on the manifold flow in a fuel cell stack was investigated by means of numerical methods (computational fluid dynamics) and experimental methods (particle image velocimetry). At a simulated high current density situation the flow field was mapped on a 70 cell simulated cathode...

  16. Quantum Computation-Based Image Representation, Processing Operations and Their Applications

    Directory of Open Access Journals (Sweden)

    Fei Yan

    2014-10-01

    Full Text Available A flexible representation of quantum images (FRQI) was proposed to facilitate the extension of classical (non-quantum) image processing applications to the quantum computing domain. The representation encodes a quantum image in the form of a normalized state, which captures information about colors and their corresponding positions in the image. Since its conception, a handful of processing transformations have been formulated, among which are the geometric transformations on quantum images (GTQI) and the CTQI transformations that are focused on the color information of the images. In addition, extensions and applications of the FRQI representation have also been suggested, such as a multi-channel representation for quantum images (MCQI), quantum image data searching, watermarking strategies for quantum images, a framework to produce movies on quantum computers, and a blueprint for quantum video encryption and decryption. These proposals extend classical-like image and video processing applications to the quantum computing domain and offer a significant speed-up with low computational resources in comparison to performing the same tasks on traditional computing devices. Each of the algorithms and the mathematical foundations for their execution were simulated using classical computing resources, and their results were analyzed alongside other classical computing equivalents. The work presented in this review is intended to serve as the epitome of advances made in FRQI quantum image processing over the past five years and to stimulate further interest geared towards the realization of secure and efficient image and video processing applications on quantum computers.
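
    The FRQI state of an image with N = 2^(2n) pixels is (1/2^n) Σ_i (cos θ_i|0> + sin θ_i|1>) ⊗ |i>, with θ_i ∈ [0, π/2] encoding the color of pixel i. A hedged classical (state-vector) sketch of that encoding, assuming an 8-bit gray mapping and placing the color qubit as the most significant qubit:

```python
import numpy as np

def frqi_state(image: np.ndarray) -> np.ndarray:
    """Build the FRQI state vector of a grayscale image classically:
    amplitudes cos(theta_i)/sqrt(N) for |0>|i> and sin(theta_i)/sqrt(N)
    for |1>|i>, where N is the pixel count (a power of two)."""
    n_pix = image.size
    assert n_pix & (n_pix - 1) == 0, "pixel count must be a power of two"
    theta = (image.ravel().astype(float) / 255.0) * (np.pi / 2)
    amp0 = np.cos(theta) / np.sqrt(n_pix)   # color-qubit |0> amplitudes
    amp1 = np.sin(theta) / np.sqrt(n_pix)   # color-qubit |1> amplitudes
    return np.concatenate([amp0, amp1])     # ordering: |0>|i>, then |1>|i>

state = frqi_state(np.random.randint(0, 256, (4, 4)))
assert np.isclose(np.linalg.norm(state), 1.0)  # a valid normalized state
```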

  17. Imaging in hematology. Part 2: Computed tomography, magnetic resonance imaging and nuclear imaging

    International Nuclear Information System (INIS)

    Zhechev, Y.

    2003-01-01

    A dramatic increase in the role of imaging in the diagnosis of blood diseases occurred with the development of computed tomography (CT) and magnetic resonance imaging (MRI). At present, CT of the chest, abdomen, and pelvis is routinely employed in diagnostic and staging evaluation. The bone marrow may be imaged by one of several methods, including scintigraphy, CT and MRI. Nuclear imaging at diagnosis can clarify findings of uncertain significance on conventional staging and may be very useful in the setting of large masses, to follow responses to therapy and to evaluate the residual tumor in a large mass that has responded to treatment. Recent developments such as helical CT, single photon emission computed tomography (SPECT) and positron emission tomography (PET) have continued to advance diagnosis and therapy

  18. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  19. Computerized nipple identification for multiple image analysis in computer-aided diagnosis

    International Nuclear Information System (INIS)

    Zhou Chuan; Chan Heangping; Paramagul, Chintana; Roubidoux, Marilyn A.; Sahiner, Berkman; Hadjiiski, Labomir M.; Petrick, Nicholas

    2004-01-01

    Correlation of information from multiple-view mammograms (e.g., MLO and CC views, bilateral views, or current and prior mammograms) can improve the performance of breast cancer diagnosis by radiologists or by computer. The nipple is a reliable and stable landmark on mammograms for the registration of multiple mammograms. However, accurate identification of nipple location on mammograms is challenging because of the variations in image quality and in the nipple projections, resulting in some nipples being nearly invisible on the mammograms. In this study, we developed a computerized method to automatically identify the nipple location on digitized mammograms. First, the breast boundary was obtained using a gradient-based boundary tracking algorithm, and then the gray level profiles along the inside and outside of the boundary were identified. A geometric convergence analysis was used to limit the nipple search to a region of the breast boundary. A two-stage nipple detection method was developed to identify the nipple location using the gray level information around the nipple, the geometric characteristics of nipple shapes, and the texture features of glandular tissue or ducts which converge toward the nipple. At the first stage, a rule-based method was designed to identify the nipple location by detecting significant changes of intensity along the gray level profiles inside and outside the breast boundary and the changes in the boundary direction. At the second stage, a texture orientation-field analysis was developed to estimate the nipple location based on the convergence of the texture pattern of glandular tissue or ducts towards the nipple. The nipple location was finally determined from the detected nipple candidates by a rule-based confidence analysis. In this study, 377 and 367 randomly selected digitized mammograms were used for training and testing the nipple detection algorithm, respectively. Two experienced radiologists identified the nipple locations

  20. Computational anatomy based on whole body imaging basic principles of computer-assisted diagnosis and therapy

    CERN Document Server

    Masutani, Yoshitaka

    2017-01-01

    This book deals with computational anatomy, an emerging discipline recognized in medical science as a derivative of conventional anatomy. It is also a completely new research area on the boundaries of several sciences and technologies, such as medical imaging, computer vision, and applied mathematics. Computational Anatomy Based on Whole Body Imaging highlights the underlying principles, basic theories, and fundamental techniques in computational anatomy, which are derived from conventional anatomy, medical imaging, computer vision, and applied mathematics, in addition to various examples of applications in clinical data. The book covers topics on the basics and applications of the new discipline. Drawing from areas in multidisciplinary fields, it provides comprehensive, integrated coverage of innovative approaches to computational anatomy. As well, Computational Anatomy Based on Whole Body Imaging serves as a valuable resource for researchers including graduate students in the field and a connection with ...

  1. A specialized plug-in software module for computer-aided quantitative measurement of medical images.

    Science.gov (United States)

    Wang, Q; Zeng, Y J; Huo, P; Hu, J L; Zhang, J H

    2003-12-01

    This paper presents a specialized system for quantitative measurement of medical images. Using Visual C++, we developed computer-aided software based on Image-Pro Plus (IPP), a software development platform. When transferred to the hard disk of a computer by an MVPCI-V3A frame grabber, medical images can be automatically processed by our own IPP plug-in for immunohistochemical analysis, cytomorphological measurement and blood vessel segmentation. In 34 clinical studies, the system has shown high stability, reliability and ease of use.

  2. Sparse Image Reconstruction in Computed Tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer

    In recent years, increased focus on the potentially harmful effects of x-ray computed tomography (CT) scans, such as radiation-induced cancer, has motivated research on new low-dose imaging techniques. Sparse image reconstruction methods, as studied for instance in the field of compressed sensing...... applications. This thesis takes a systematic approach toward establishing quantitative understanding of conditions for sparse reconstruction to work well in CT. A general framework for analyzing sparse reconstruction methods in CT is introduced and two sets of computational tools are proposed: 1...... contributions to a general set of computational characterization tools. Thus, the thesis contributions help advance sparse reconstruction methods toward routine use in...

  3. Three dimensional analysis of coelacanth body structure by computer graphics and X-ray CT images

    International Nuclear Information System (INIS)

    Suzuki, Naoki; Hamada, Takashi.

    1990-01-01

    Three-dimensional imaging processes were applied to the structural and functional analysis of the modern coelacanth (Latimeria chalumnae). Visualization of the obtained images is performed with computer graphics on the basis of serial images from an X-ray CT scanning method. Reconstruction of three-dimensional images of the body structure of the coelacanth using volume rendering and surface rendering methods provides a variety of information about the external and internal shapes of this exquisite fish. (author)

  4. Imaging of the hip joint. Computed tomography versus magnetic resonance imaging

    Science.gov (United States)

    Lang, P.; Genant, H. K.; Jergesen, H. E.; Murray, W. R.

    1992-01-01

    The authors reviewed the applications and limitations of computed tomography (CT) and magnetic resonance (MR) imaging in the assessment of the most common hip disorders. Magnetic resonance imaging is the most sensitive technique in detecting osteonecrosis of the femoral head. Magnetic resonance reflects the histologic changes associated with osteonecrosis very well, which may ultimately help to improve staging. Computed tomography can more accurately identify subchondral fractures than MR imaging and thus remains important for staging. In congenital dysplasia of the hip, the position of the nonossified femoral head in children less than six months of age can only be inferred by indirect signs on CT. Magnetic resonance imaging demonstrates the cartilaginous femoral head directly without ionizing radiation. Computed tomography remains the imaging modality of choice for evaluating fractures of the hip joint. In some patients, MR imaging demonstrates the fracture even when it is not apparent on radiography. In neoplasm, CT provides better assessment of calcification, ossification, and periosteal reaction than MR imaging. Magnetic resonance imaging, however, represents the most accurate imaging modality for evaluating intramedullary and soft-tissue extent of the tumor and identifying involvement of neurovascular bundles. Magnetic resonance imaging can also be used to monitor response to chemotherapy. In osteoarthrosis and rheumatoid arthritis of the hip, both CT and MR provide more detailed assessment of the severity of disease than conventional radiography because of their tomographic nature. Magnetic resonance imaging is unique in evaluating cartilage degeneration and loss, and in demonstrating soft-tissue alterations such as inflammatory synovial proliferation.

  5. Computer vision applications for coronagraphic optical alignment and image processing.

    Science.gov (United States)

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  6. An Algorithm for Fast Computation of 3D Zernike Moments for Volumetric Images

    Directory of Open Access Journals (Sweden)

    Khalid M. Hosny

    2012-01-01

    Full Text Available An algorithm was proposed for very fast and low-complexity computation of three-dimensional Zernike moments. The 3D Zernike moments are expressed in terms of exact 3D geometric moments, where the latter are computed exactly through mathematical integration of the monomial terms over the digital image/object voxels. A new symmetry-based method was proposed to compute 3D Zernike moments with an 87% reduction in computational complexity. A fast 1D cascade algorithm was also employed to reduce complexity further. A comparison with existing methods was performed, where the numerical experiments and the complexity analysis confirmed the efficiency of the proposed method, especially with images and objects of large size.
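
    The building block of the method is the 3D geometric moment m_pqr = Σ_x Σ_y Σ_z x^p y^q z^r f(x, y, z). Because the sum is separable, each axis can be contracted in turn, which is the spirit of the 1D cascade mentioned in the abstract. A hedged sketch (voxel-center sampling rather than the paper's exact monomial integration; the function name is mine):

```python
import numpy as np

def geometric_moments_3d(volume: np.ndarray, order: int = 2) -> np.ndarray:
    """All 3D geometric moments m[p, q, r] with p, q, r <= order for a
    voxel image, via separable per-axis contractions (a 1D cascade)."""
    nx, ny, nz = volume.shape
    # Per-axis coordinate powers x^0..x^order, computed once and reused.
    px = np.vander(np.arange(nx, dtype=float), order + 1, increasing=True)
    py = np.vander(np.arange(ny, dtype=float), order + 1, increasing=True)
    pz = np.vander(np.arange(nz, dtype=float), order + 1, increasing=True)
    return np.einsum('xyz,xp,yq,zr->pqr', volume.astype(float), px, py, pz)

moments = geometric_moments_3d(np.random.rand(16, 16, 16), order=2)
total_mass = moments[0, 0, 0]   # zeroth moment = summed intensity
```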

  7. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  8. A hyperspectral image analysis workbench for environmental science applications

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-10-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including image "texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.

  9. Size and asymmetry of the planum temporale. A new three-dimensional method for analysis of the supratemporal plane using MR imaging and computer-aided graphics

    International Nuclear Information System (INIS)

    Utsunomiya, H.; Nawata, M.; Ogasawara, T.; Okazaki, M.; Miyoshi, M.

    1996-01-01

    The planum temporale of the supratemporal plane is important for language function and shows left-right asymmetry in most brains. To estimate the size and allow side comparison of the planum temporale, we developed a new technique for 3-D MR analysis of the supratemporal plane using a personal computer and computer-aided graphics. The temporal lobes of 5 human cadavers were imaged by MR in the sagittal plane, at a slice thickness of 3 mm. The images of the supratemporal plane were entered into a personal computer using original software to determine the positions of anatomic landmarks and the size of the planum temporale. The data were then transferred to a supercomputer to reconstruct the 3-D surface image of the supratemporal plane. Computer images of the supratemporal plane agreed with macroscopic observations. The positions of anatomic landmarks and the size of the planum temporale also agreed with macroscopic measurements. Thus, the present technique provides valuable anatomic data on the supratemporal plane which should be useful for further clarification of the anatomic basis of language function. (orig.)

  10. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  11. Prospective Electrocardiogram-Gated Multidetector Row Computed Tomography Coronary Angiography. Analysis of Quality Image and Radiation Dose

    International Nuclear Information System (INIS)

    Carrascosa, P.; Capunay, C.; Deviggiano, A.; Tajer, C.D.; Vallejos, J.; Goldsmit, A.; Garcia, M.J.

    2009-01-01

    Multidetector row computed tomography coronary angiography (MDCT-CA) has become a useful diagnostic tool for the direct quantification of coronary stenosis, for identifying coronary anomalies and for the assessment of coronary artery bypass grafts. Although its clinical value has been questioned due to the effective radiation dose (ERD) received by each patient, radiation exposure is similar to that of other studies. Nevertheless, different strategies are constantly being tested in order to reduce the ERD while maintaining adequate, diagnostic image quality. Objectives: To determine the image quality and effective radiation dose (ERD) of prospective electrocardiogram-gated multidetector row computed tomography coronary angiography (PMDCT-CA), in which the x-ray beam is turned on for only a short portion of diastole, compared to retrospective ECG gating (RMDCT-CA), in which the x-ray beam is turned on throughout the cardiac cycle, and to make a preliminary assessment of its diagnostic accuracy compared to digital invasive coronary angiography (CA). Material and Methods: Fifty consecutive patients with suspected coronary artery disease and sinus rhythm were evaluated with PMDCT-CA and compared to a control group who underwent RMDCT-CA. Image quality was analyzed by two reviewers. Interobserver concordance and ERD were determined. The diagnostic accuracy of PMDCT-CA compared to CA in detecting coronary artery stenosis > 50% was assessed in 30 patients. Results: There were no significant differences in image quality between the two groups. Agreement between the reviewers for segment image quality scores was k = 0.92. Mean ERD was 3.5 mSv for PMDCT-CA compared to 9.7 and 12.9 mSv for RMDCT-CA with and without tube current modulation, respectively. Individual analysis including all segments showed that the sensitivity, specificity, positive predictive value and negative predictive value of PMDCT-CA for the detection of coronary stenosis were 94.74%, 81.82%, 90% and 90%, respectively. Conclusion: Our initial experience

  12. Constructing a Computer Model of the Human Eye Based on Tissue Slice Images

    OpenAIRE

    Dai, Peishan; Wang, Boliang; Bao, Chunbo; Ju, Ying

    2010-01-01

    Computer simulation of biomechanical and biological heat transfer in ophthalmology greatly relies on having a reliable computer model of the human eye. This paper proposes a novel method for the construction of a geometric model of the human eye based on tissue slice images. Slice images were obtained from an in vitro Chinese human eye through embryo specimen processing methods. A level set algorithm was used to extract contour points of eye tissues, while a principal component analysi...

  13. An integrated compact airborne multispectral imaging system using embedded computer

    Science.gov (United States)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls camera parameter setting, filter wheel and stabilized platform operation, and image and POS data acquisition, and stores the images and data. The system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. Imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  14. Classification of dried vegetables using computer image analysis and artificial neural networks

    Science.gov (United States)

    Koszela, K.; Łukomski, M.; Mueller, W.; Górna, K.; Okoń, P.; Boniecki, P.; Zaborowicz, M.; Wojcieszak, D.

    2017-07-01

    In recent years, there has been a continuously increasing demand for vegetables and dried vegetables. This trend affects the growth of the dehydration industry in Poland, helping to exploit excess production. More and more often, dried vegetables are used in various sectors of the food industry, both due to their high nutritional qualities and to changes in consumers' food preferences. As consumer awareness of healthy lifestyles grows and health food booms, the consumption of such food also increases, which means that production and crop area can grow further. Among dried vegetables, dried carrots play a strategic role due to their wide application range and high nutritional value. They contain high concentrations of carotene and sugar, the latter present in the form of crystals. Carrots are also the vegetables most often subjected to a wide range of dehydration processes; this makes it difficult to perform a reliable qualitative assessment and classification of the dried product. The many qualitative properties of dried carrots determining their positive or negative quality assessment include colour and shape. The aim of the research was to develop and implement the model of a computer system for the recognition and classification of freeze-dried, convection-dried and microwave vacuum dried products using the methods of computer image analysis and artificial neural networks.
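
    A hedged sketch of such a recognition pipeline, not the authors' implementation: simple colour and shape features per carrot piece feed a small neural network; "dataset", "labels" and the class encoding are placeholders.

    import numpy as np
    from skimage.measure import label, regionprops
    from sklearn.neural_network import MLPClassifier

    def features(rgb, mask):
        """Mean R, G, B of the piece plus area and eccentricity of its shape."""
        r, g, b = rgb[mask].mean(axis=0)
        shape = regionprops(label(mask.astype(int)))[0]
        return [r, g, b, shape.area, shape.eccentricity]

    # dataset: (image, mask) pairs; labels: 0 = freeze-dried, 1 = convection-dried,
    # 2 = microwave-vacuum-dried (encoding assumed for illustration).
    X = np.array([features(img, msk) for img, msk in dataset])
    y = np.array(labels)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)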

  15. Computer-Aided Diagnosis Systems for Brain Diseases in Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Yasuo Yamashita

    2009-07-01

    Full Text Available This paper reviews the basics and recent research on computer-aided diagnosis (CAD) systems for assisting neuroradiologists in the detection of brain diseases, e.g., asymptomatic unruptured aneurysms, Alzheimer's disease, vascular dementia, and multiple sclerosis (MS), in magnetic resonance (MR) images. The CAD systems consist of image feature extraction based on image processing techniques and machine learning classifiers such as linear discriminant analysis, artificial neural networks, and support vector machines. We introduce useful examples of CAD systems in neuroradiology, and conclude with future possibilities for CAD systems for brain diseases in MR images.

  16. A hyperspectral image analysis workbench for environmental science applications

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-01-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including "image texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.

  17. Computer-aided global breast MR image feature analysis for prediction of tumor response to chemotherapy: performance assessment

    Science.gov (United States)

    Aghaei, Faranak; Tan, Maxine; Hollingsworth, Alan B.; Zheng, Bin; Cheng, Samuel

    2016-03-01

    Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) has been used increasingly in breast cancer diagnosis and assessment of cancer treatment efficacy. In this study, we applied a computer-aided detection (CAD) scheme to automatically segment breast regions depicted on MR images and used the kinetic image features computed from the global breast MR images acquired before neoadjuvant chemotherapy to build a new quantitative model to predict the response of breast cancer patients to the chemotherapy. To assess the performance and robustness of this new prediction model, an image dataset involving breast MR images acquired from 151 cancer patients before undergoing neoadjuvant chemotherapy was retrospectively assembled and used. Among them, 63 patients had "complete response" (CR) to chemotherapy, in which the enhanced contrast levels inside the tumor volume (pre-treatment) were reduced to the level of the normal enhanced background parenchymal tissues (post-treatment), while 88 patients had "partial response" (PR), in which high contrast enhancement remained in the tumor regions after treatment. We analyzed the correlation among the 22 global kinetic image features and then selected a set of 4 optimal features. Applying an artificial neural network trained with the fusion of these 4 kinetic image features, the prediction model yielded an area under the ROC curve (AUC) of 0.83+/-0.04. This study demonstrated that, by avoiding tumor segmentation, which is often difficult and unreliable, fusion of kinetic image features computed from global breast MR images can also generate a useful clinical marker for predicting the efficacy of chemotherapy.
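
    A minimal sketch of the evaluation step described (four kinetic features, a small neural network, AUC from cross-validated predictions); the feature values below are random placeholders, and only the 63/88 class split is taken from the abstract.

    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict
    from sklearn.neural_network import MLPClassifier

    X = np.random.rand(151, 4)             # stand-in for the 4 kinetic features
    y = np.r_[np.ones(63), np.zeros(88)]   # 63 CR vs 88 PR cases

    scores = cross_val_predict(
        MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000),
        X, y, cv=5, method="predict_proba")[:, 1]
    print("AUC:", roc_auc_score(y, scores))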

  18. Image Visual Realism: From Human Perception to Machine Computation.

    Science.gov (United States)

    Fan, Shaojing; Ng, Tian-Tsong; Koenig, Bryan L; Herberg, Jonathan S; Jiang, Ming; Shen, Zhiqi; Zhao, Qi

    2017-08-30

    Visual realism is defined as the extent to which an image appears to people as a photo rather than computer generated. Assessing visual realism is important in applications like computer graphics rendering and photo retouching. However, current realism evaluation approaches use either labor-intensive human judgments or automated algorithms largely dependent on comparing renderings to reference images. We develop a reference-free computational framework for visual realism prediction to overcome these constraints. First, we construct a benchmark dataset of 2520 images with comprehensive human annotated attributes. From statistical modeling on this data, we identify image attributes most relevant for visual realism. We propose both empirically-based (guided by our statistical modeling of human data) and CNN-learned features to predict visual realism of images. Our framework has the following advantages: (1) it creates an interpretable and concise empirical model that characterizes human perception of visual realism; (2) it links computational features to latent factors of human image perception.

  19. Patient Dose From Megavoltage Computed Tomography Imaging

    International Nuclear Information System (INIS)

    Shah, Amish P.; Langen, Katja M.; Ruchala, Kenneth J.; Cox, Andrea; Kupelian, Patrick A.; Meeks, Sanford L.

    2008-01-01

    Purpose: Megavoltage computed tomography (MVCT) can be used daily for imaging with a helical tomotherapy unit for patient alignment before treatment delivery. The purpose of this investigation was to show that the MVCT dose can be computed in phantoms, and further, that the dose can be reported for actual patients from MVCT on a helical tomotherapy unit. Methods and Materials: An MVCT beam model was commissioned and verified through a series of absorbed dose measurements in phantoms. This model was then used to retrospectively calculate the imaging doses to the patients. The MVCT dose was computed for five clinical cases: prostate, breast, head/neck, lung, and craniospinal axis. Results: Validation measurements in phantoms verified that the computed dose can be reported to within 5% of the measured dose delivered at the helical tomotherapy unit. The imaging dose scaled inversely with changes to the CT pitch. Relative to a normal pitch of 2.0, the organ dose can be scaled by 0.67 and 2.0 for scans done with a pitch of 3.0 and 1.0, respectively. Typical doses were in the range of 1.0-2.0 cGy, if imaged with a normal pitch. The maximal organ dose calculated was 3.6 cGy in the neck region of the craniospinal patient, if imaged with a pitch of 1.0. Conclusion: Calculation of the MVCT dose has shown that the typical imaging dose is approximately 1.5 cGy per image. The uniform MVCT dose delivered using helical tomotherapy is greatest when the anatomic thickness is the smallest and the pitch is set to the lowest value
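
    The reported factors (0.67 at pitch 3.0 and 2.0 at pitch 1.0, relative to pitch 2.0) are consistent with dose scaling inversely with pitch; a one-line helper makes the arithmetic explicit.

    def scaled_dose(dose_at_ref, ref_pitch, new_pitch):
        """Scale an MVCT organ dose assuming dose is proportional to 1/pitch."""
        return dose_at_ref * ref_pitch / new_pitch

    print(scaled_dose(1.5, 2.0, 3.0))  # 1.0 cGy (factor 0.67)
    print(scaled_dose(1.5, 2.0, 1.0))  # 3.0 cGy (factor 2.0)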

  20. Three dimensional reconstruction of computed tomographic images by computer graphics method

    International Nuclear Information System (INIS)

    Kashiwagi, Toru; Kimura, Kazufumi.

    1986-01-01

    A three dimensional computer reconstruction system for CT images has been developed in a commonly used radionuclide data processing system using a computer graphics technique. The three dimensional model was constructed from organ surface information of CT images (slice thickness: 5 or 10 mm). Surface contours of the organs were extracted manually from a set of parallel transverse CT slices in serial order and stored in the computer memory. Interpolation was made between a set of the extracted contours by cubic spline functions, then three dimensional models were reconstructed. The three dimensional images were displayed as wire-frame and/or solid models on the color CRT. Solid model images were obtained as follows. The organ surface constructed from contours was divided into many triangular patches. The intensity of light on each patch was calculated from the direction of the incident light, the eye position and the normal to the triangular patch. This system was first applied to a liver phantom; the reconstructed images coincided with the actual object. The system has also been applied to various human organs such as the brain, lung and liver. The anatomical organ surface could be viewed realistically from any direction, and the images make it easier to understand the location and configuration of organs in vivo than the original CT images. Furthermore, the spatial relationship among organs and/or lesions was clearly shown by superimposition of wire-frame and/or differently colored solid models. Therefore, it is expected that this system will be clinically useful for evaluating patho-morphological changes in a broad perspective. (author)
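
    The shading step lends itself to a short worked example: under Lambert's law, the intensity of a triangular patch is the cosine between its unit normal and the direction of the incident light. A minimal sketch with toy coordinates:

    import numpy as np

    def patch_intensity(v0, v1, v2, light_dir):
        """Lambertian intensity of the triangle (v0, v1, v2)."""
        n = np.cross(v1 - v0, v2 - v0)
        n = n / np.linalg.norm(n)
        l = light_dir / np.linalg.norm(light_dir)
        return max(0.0, float(n @ l))  # patches facing away receive no light

    tri = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
    print(patch_intensity(*tri, light_dir=np.array([0.0, 0.0, 1.0])))  # 1.0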

  1. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    Science.gov (United States)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-12-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.
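
    A hedged sketch of the two statistical steps named for the plaque example, ANOVA feature ranking followed by two-class clustering; KMeans stands in here for the paper's fuzzy c-means, and the feature matrix and labels are random placeholders.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_selection import f_classif

    X = np.random.rand(100, 20)        # plaques x texture/motion features (assumed)
    y = np.random.randint(0, 2, 100)   # symptomatic/asymptomatic labels (assumed)

    F, p = f_classif(X, y)             # ANOVA F-score per feature
    robust = np.argsort(F)[::-1][:5]   # keep the most discriminative features

    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X[:, robust])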

  2. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    International Nuclear Information System (INIS)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-01-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis

  3. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Stoitsis, John [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece)]. E-mail: stoitsis@biosim.ntua.gr; Valavanis, Ioannis [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Mougiakakou, Stavroula G. [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Golemati, Spyretta [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Nikita, Alexandra [University of Athens, Medical School 152 28 Athens (Greece); Nikita, Konstantina S. [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece)

    2006-12-20

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.

  4. Automated parasite faecal egg counting using fluorescence labelling, smartphone image capture and computational image analysis.

    Science.gov (United States)

    Slusarewicz, Paul; Pagano, Stefanie; Mills, Christopher; Popa, Gabriel; Chow, K Martin; Mendenhall, Michael; Rodgers, David W; Nielsen, Martin K

    2016-07-01

    Intestinal parasites are a concern in veterinary medicine worldwide and for human health in the developing world. Infections are identified by microscopic visualisation of parasite eggs in faeces, which is time-consuming, requires technical expertise and is impractical for use on-site. For these reasons, recommendations for parasite surveillance are not widely adopted and parasite control is based on administration of rote prophylactic treatments with anthelmintic drugs. This approach is known to promote anthelmintic resistance, so there is a pronounced need for a convenient egg counting assay to promote good clinical practice. Using a fluorescent chitin-binding protein, we show that this structural carbohydrate is present and accessible in shells of ova of strongyle, ascarid, trichurid and coccidian parasites. Furthermore, we show that a cellular smartphone can be used as an inexpensive device to image fluorescent eggs and, by harnessing the computational power of the phone, to perform image analysis to count the eggs. Strongyle egg counts generated by the smartphone system had a significant linear correlation with manual McMaster counts (R² = 0.98), but with a significantly lower coefficient of variation (P=0.0177). Furthermore, the system was capable of differentiating equine strongyle and ascarid eggs similar to the McMaster method, but with significantly lower coefficients of variation (P<0.0001). This demonstrates the feasibility of a simple, automated on-site test to detect and/or enumerate parasite eggs in mammalian faeces without the need for a laboratory microscope, and highlights the potential of smartphones as relatively sophisticated, inexpensive and portable medical diagnostic devices. Copyright © 2016 Australian Society for Parasitology. Published by Elsevier Ltd. All rights reserved.
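
    A minimal sketch of the counting step such a system could perform: threshold the bright fluorescent eggs, then count connected components above a size cut. The file name and thresholds are illustrative assumptions, not the authors' parameters.

    from skimage import io
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    img = io.imread("fluorescent_eggs.png", as_gray=True)  # hypothetical image
    mask = img > threshold_otsu(img)

    eggs = [r for r in regionprops(label(mask)) if r.area > 50]  # size cut in pixels
    print("egg count:", len(eggs))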

  5. Computational ghost imaging using deep learning

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Endo, Yutaka; Nishitsuji, Takashi; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Shiraki, Atsushi; Ito, Tomoyoshi

    2018-04-01

    Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is reduced by noise due to the reconstruction of images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
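
    A short sketch of conventional CGI reconstruction, i.e. the noisy input that the paper's network then denoises: the image is recovered as the covariance between the bucket signal and the illumination patterns.

    import numpy as np

    rng = np.random.default_rng(0)
    obj = np.zeros((32, 32)); obj[8:24, 8:24] = 1.0        # toy object

    M = 2000                                               # number of random patterns
    patterns = rng.random((M, 32, 32))
    bucket = (patterns * obj).sum(axis=(1, 2))             # single-pixel measurements

    # <B*P> - <B><P> over the pattern ensemble recovers the object (noisily).
    recon = (bucket[:, None, None] * patterns).mean(0) - bucket.mean() * patterns.mean(0)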

  6. Traffic analysis and control using image processing

    Science.gov (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.

    2017-11-01

    This paper reviews work to date on traffic analysis and control and presents an approach to traffic regulation using image processing and MATLAB. Reference images of the street are compared with captured images to determine the traffic level percentage, and the traffic signal timing is set accordingly to reduce stoppage at traffic lights. The concept addresses real-life street scenarios by augmenting traffic lights with image receivers such as HD cameras and image processors. The captured input is imported into MATLAB and used to calculate the traffic on the roads; the results are then used to adjust the traffic light timings on a particular street. Compared with other similar proposals, the added value lies in solving a real, large-scale instance.
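
    A hedged sketch of the comparison step, assuming a reference image of the empty street is available: the traffic level is the fraction of pixels that differ noticeably from the reference, and green time can be scheduled in proportion to it. Threshold and timing values are illustrative.

    import numpy as np

    def traffic_level(frame, empty_reference, diff_threshold=30):
        """Percentage of pixels that differ noticeably from the empty road."""
        diff = np.abs(frame.astype(int) - empty_reference.astype(int))
        return 100.0 * (diff > diff_threshold).mean()

    def green_seconds(level, t_min=10, t_max=60):
        """Allocate green time in proportion to the measured traffic level."""
        return t_min + (t_max - t_min) * level / 100.0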

  7. Development of a Reference Image Collection Library for Histopathology Image Processing, Analysis and Decision Support Systems Research.

    Science.gov (United States)

    Kostopoulos, Spiros; Ravazoula, Panagiota; Asvestas, Pantelis; Kalatzis, Ioannis; Xenogiannopoulos, George; Cavouras, Dionisis; Glotsos, Dimitris

    2017-06-01

    Histopathology image processing, analysis and computer-aided diagnosis have been shown to be effective assisting tools towards reliable and intra-/inter-observer invariant decisions in traditional pathology. Especially for cancer patients, decisions need to be as accurate as possible in order to increase the probability of optimal treatment planning. In this study, we propose a new image collection library (HICL-Histology Image Collection Library) comprising 3831 histological images of three different diseases, for fostering research in histopathology image processing, analysis and computer-aided diagnosis. Raw data comprised 93, 116 and 55 cases of brain, breast and laryngeal cancer, respectively, collected from the archives of the University Hospital of Patras, Greece. The 3831 images were generated from the most representative regions of the pathology, specified by an experienced histopathologist. The HICL Image Collection is free for access under an academic license at http://medisp.bme.teiath.gr/hicl/ . Potential exploitations of the proposed library may span a broad spectrum, such as image processing to improve visualization, segmentation for nuclei detection, decision support systems for second opinion consultations, statistical analysis for investigation of potential correlations between clinical annotations and imaging findings and, generally, fostering research on histopathology image processing and analysis. To the best of our knowledge, the HICL constitutes the first attempt towards the creation of a reference image collection library in the field of traditional histopathology, publicly and freely available to the scientific community.

  8. Computer assisted visualization of digital mammography images

    International Nuclear Information System (INIS)

    Funke, M.; Breiter, N.; Grabbe, E.; Netsch, T.; Biehl, M.; Peitgen, H.O.

    1999-01-01

    Purpose: In a clinical study, the feasibility of using a mammography workstation for the display and interpretation of digital mammography images was evaluated and the results were compared with the corresponding laser film hard copies. Materials and Methods: Digital phosphorous plate radiographs of the entire breast were obtained in 30 patients using a direct magnification mammography system. The images were displayed for interpretation on the computer monitor of a dedicated mammography workstation and also presented as laser film hard copies on a film view box for comparison. The images were evaluated with respect to image handling, image quality and the visualization of relevant structures by three readers. Results: Handling and contrast of the monitor-displayed images were found to be superior to the film hard copies. Image noise was found in some cases but did not compromise the interpretation of the monitor images. The visualization of relevant structures was equal with both modalities. Altogether, image interpretation with the mammography workstation was considered easy, quick and reliable. Conclusions: Computer-assisted visualization and interpretation of digital mammography images using a dedicated workstation can be performed with sufficiently high diagnostic accuracy. (orig.) [de

  9. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing a software package, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter further comprising character and face recognition. Various image enhancement techniques including negative imaging, contrast stretching, dynamic range compression, neon, diffuse, emboss, etc. have been studied. Segmentation techniques including point detection, line detection and edge detection have been studied, and some of the smoothing and sharpening filters have been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the perceptron model have been applied for face and character recognition. (author)

  10. Computer-based image analysis in radiological diagnostics and image-guided therapy: 3D-Reconstruction, contrast medium dynamics, surface analysis, radiation therapy and multi-modal image fusion

    International Nuclear Information System (INIS)

    Beier, J.

    2001-01-01

    This book deals with substantial subjects of postprocessing and analysis of radiological image data, with particular emphasis on pulmonary themes. For a multitude of purposes, the developed methods and procedures can be transferred directly to other, non-pulmonary applications. The work presented here is structured in 14 chapters, each describing a selected complex of research. The chapter order reflects the sequence of the processing steps, starting from artefact reduction, segmentation, visualization, analysis, therapy planning and image fusion up to multimedia archiving. In particular, this includes virtual endoscopy with three different scene viewers (Chap. 6), visualizations of the lung disease bronchiectasis (Chap. 7), surface structure analysis of pulmonary tumors (Chap. 8), quantification of contrast medium dynamics from temporal 2D and 3D image sequences (Chap. 9) as well as multimodality image fusion of arbitrary tomographical data using several visualization techniques (Chap. 12). Thus, the software systems presented cover the majority of image processing applications necessary in radiology and were entirely developed, implemented and validated in the clinical routine of a university medical school. (orig.) [de

  11. Computer-aided detection of basal cell carcinoma through blood content analysis in dermoscopy images

    Science.gov (United States)

    Kharazmi, Pegah; Kalia, Sunil; Lui, Harvey; Wang, Z. Jane; Lee, Tim K.

    2018-02-01

    Basal cell carcinoma (BCC) is the most common type of skin cancer, which is highly damaging to the skin at its advanced stages and imposes substantial costs on the healthcare system. However, most types of BCC are easily curable if detected at an early stage. Due to limited access to dermatologists and expert physicians, non-invasive computer-aided diagnosis is a viable option for skin cancer screening. A clinical biomarker of cancerous tumors is increased vascularization and excess blood flow. In this paper, we present a computer-aided technique to differentiate cancerous skin tumors from benign lesions based on the vascular characteristics of the lesions. The dermoscopy image of the lesion is first decomposed using independent component analysis of the RGB channels to derive melanin and hemoglobin maps. A novel set of clinically inspired features and ratiometric measurements is then extracted from each map to characterize the vascular properties and blood content of the lesion. The feature set is then fed into a random forest classifier. Over a dataset of 664 skin lesions, the proposed method achieved an area under the ROC curve of 0.832 in 10-fold cross-validation for differentiating basal cell carcinomas from benign lesions.
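
    A minimal sketch of the decomposition step, assuming FastICA as the ICA implementation: two independent components are extracted from the RGB pixel values and reshaped into source maps. Which component corresponds to melanin versus hemoglobin is data-dependent and would need to be identified per image.

    import numpy as np
    from sklearn.decomposition import FastICA

    rgb = np.random.rand(256, 256, 3)        # stand-in dermoscopy image
    pixels = rgb.reshape(-1, 3)

    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(pixels)      # (n_pixels, 2)
    map_a = sources[:, 0].reshape(256, 256)  # candidate melanin map
    map_b = sources[:, 1].reshape(256, 256)  # candidate hemoglobin map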

  12. Computer aided analysis of prostate histopathology images to support a refined Gleason grading system

    Science.gov (United States)

    Ren, Jian; Sadimin, Evita; Foran, David J.; Qi, Xin

    2017-02-01

    The Gleason grading system used to render prostate cancer diagnosis has recently been updated to allow more accurate grade stratification and higher prognostic discrimination when compared to the traditional grading system. In spite of progress made in trying to standardize the grading process, there still remains approximately a 30% grading discrepancy between the score rendered by general pathologists and those provided by experts while reviewing needle biopsies for Gleason pattern 3 and 4, which account for more than 70% of daily prostate tissue slides at most institutions. We propose a new computational imaging method for Gleason pattern 3 and 4 classification, which better matches the newly established prostate cancer grading system. The computer-aided analysis method includes two phases. First, the boundary of each glandular region is automatically segmented using a deep convolutional neural network. Second, color, shape and texture features are extracted from superpixels corresponding to the outer and inner glandular regions and are subsequently forwarded to a random forest classifier to give a gradient score between 3 and 4 for each delineated glandular region. The F1 score for glandular segmentation is 0.8460 and the classification accuracy is 0.83+/-0.03.

  13. Low Computational-Cost Footprint Deformities Diagnosis Sensor through Angles, Dimensions Analysis and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    J. Rodolfo Maestre-Rendon

    2017-11-01

    Full Text Available Manual measurements of foot anthropometry can lead to errors, since this task involves the experience of the specialist who performs them, resulting in different subjective measures from the same footprint. Moreover, some of the diagnoses given to classify a footprint deformity are based on a qualitative interpretation by the physician; there is no quantitative interpretation of the footprint. The importance of providing a correct and accurate diagnosis lies in the need to ensure that an appropriate treatment is provided for the improvement of the patient without risking his or her health. Therefore, this article presents a smart sensor that integrates the capture of the footprint, a low computational-cost analysis of the image and the interpretation of the results through a quantitative evaluation. The smart sensor implemented required the use of a camera (Logitech C920) connected to a Raspberry Pi 3, on which a graphical interface was made for the capture and processing of the image, and it was adapted to a podoscope conventionally used by specialists such as orthopedists, physiotherapists and podiatrists. The footprint diagnosis smart sensor (FPDSS) has proven to be robust to different types of deformity, precise, sensitive and correlated at 0.99 with the measurements from the digitalized image of the ink mat.

  14. Multi-Detector Computed Tomography Imaging Techniques in Arterial Injuries

    Directory of Open Access Journals (Sweden)

    Cameron Adler

    2018-04-01

    Full Text Available Cross-sectional imaging has become a critical aspect of the evaluation of arterial injuries. In particular, angiography using computed tomography (CT) is the imaging modality of choice. A variety of techniques and options are available when evaluating for arterial injuries. Techniques involve the contrast bolus, various phases of contrast enhancement, multiplanar reconstruction, volume rendering, and maximum intensity projection. After the images are rendered, a variety of features may be seen that diagnose the injury. This article provides a general overview of the techniques, important findings, and pitfalls in cross-sectional imaging of arterial injuries, particularly in relation to computed tomography. In addition, future directions of computed tomography, including a few techniques in the process of development, are also discussed.

  15. Computer-assisted assessment of the Human Epidermal Growth Factor Receptor 2 immunohistochemical assay in imaged histologic sections using a membrane isolation algorithm and quantitative analysis of positive controls

    International Nuclear Information System (INIS)

    Hall, Bonnie H; Ianosi-Irimie, Monica; Javidian, Parisa; Chen, Wenjin; Ganesan, Shridar; Foran, David J

    2008-01-01

    Breast cancers that overexpress the human epidermal growth factor receptor 2 (HER2) are eligible for effective biologically targeted therapies, such as trastuzumab. However, accurately determining HER2 overexpression, especially in immunohistochemically equivocal cases, remains a challenge. Manual analysis of HER2 expression depends on the assessment of membrane staining as well as comparisons with positive controls. In spite of the strides that have been made to standardize the assessment process, intra- and inter-observer discrepancies in scoring are not uncommon. In this manuscript we describe a pathologist-assisted, computer-based continuous scoring approach for increasing the precision and reproducibility of assessing imaged breast tissue specimens. Computer-assisted analysis of HER2 IHC is compared with manual scoring and fluorescence in situ hybridization results on a test set of 99 digitally imaged breast cancer cases enriched with equivocally scored (2+) cases. Image features are generated based on the staining profile of the positive control tissue and pixels delineated by a newly developed Membrane Isolation Algorithm. Evaluation of results was performed using Receiver Operating Characteristic (ROC) analysis. A computer-aided diagnostic approach has been developed using a membrane isolation algorithm and quantitative use of positive immunostaining controls. By incorporating internal positive controls into feature analysis, a greater Area Under the Curve (AUC) in ROC analysis was achieved than with feature analysis without positive controls. Evaluation of HER2 immunostaining that utilized membrane pixels, controls, and percent area stained showed a significantly greater AUC than manual scoring, and a significantly lower false positive rate when used to evaluate immunohistochemically equivocal cases. It has been shown that by incorporating both a membrane isolation algorithm and analysis of known positive controls a computer-assisted diagnostic algorithm was

  16. Fractal analysis in radiological and nuclear medicine perfusion imaging: a systematic review

    Energy Technology Data Exchange (ETDEWEB)

    Michallek, Florian; Dewey, Marc [Humboldt-Universitaet zu Berlin, Freie Universitaet Berlin, Charite - Universitaetsmedizin Berlin, Medical School, Department of Radiology, Berlin (Germany)

    2014-01-15

    To provide an overview of recent research in fractal analysis of tissue perfusion imaging, using standard radiological and nuclear medicine imaging techniques including computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET) and single-photon emission computed tomography (SPECT) and to discuss implications for different fields of application. A systematic review of fractal analysis for tissue perfusion imaging was performed by searching the databases MEDLINE (via PubMed), EMBASE (via Ovid) and ISI Web of Science. Thirty-seven eligible studies were identified. Fractal analysis was performed on perfusion imaging of tumours, lung, myocardium, kidney, skeletal muscle and cerebral diseases. Clinically, different aspects of tumour perfusion and cerebral diseases were successfully evaluated including detection and classification. In physiological settings, it was shown that perfusion under different conditions and in various organs can be properly described using fractal analysis. Fractal analysis is a suitable method for quantifying heterogeneity from radiological and nuclear medicine perfusion images under a variety of conditions and in different organs. Further research is required to exploit physiologically proven fractal behaviour in the clinical setting. (orig.)

  17. Fractal analysis in radiological and nuclear medicine perfusion imaging: a systematic review

    International Nuclear Information System (INIS)

    Michallek, Florian; Dewey, Marc

    2014-01-01

    To provide an overview of recent research in fractal analysis of tissue perfusion imaging, using standard radiological and nuclear medicine imaging techniques including computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET) and single-photon emission computed tomography (SPECT) and to discuss implications for different fields of application. A systematic review of fractal analysis for tissue perfusion imaging was performed by searching the databases MEDLINE (via PubMed), EMBASE (via Ovid) and ISI Web of Science. Thirty-seven eligible studies were identified. Fractal analysis was performed on perfusion imaging of tumours, lung, myocardium, kidney, skeletal muscle and cerebral diseases. Clinically, different aspects of tumour perfusion and cerebral diseases were successfully evaluated including detection and classification. In physiological settings, it was shown that perfusion under different conditions and in various organs can be properly described using fractal analysis. Fractal analysis is a suitable method for quantifying heterogeneity from radiological and nuclear medicine perfusion images under a variety of conditions and in different organs. Further research is required to exploit physiologically proven fractal behaviour in the clinical setting. (orig.)

  18. [A computer-aided image diagnosis and study system].

    Science.gov (United States)

    Li, Zhangyong; Xie, Zhengxiang

    2004-08-01

    The revolution in information processing, particularly the digitization of medicine, has changed medical study, work and management. This paper reports a method for designing a system for computer-aided image diagnosis and study. Combining ideas from graph-text systems and picture archiving and communication systems (PACS), the system was implemented and used for "prescription through computer", "managing images" and "reading images under computer to help diagnosis". Typical examples were also stored in a database and used to teach beginners. The system was developed with visual development tools based on object-oriented programming (OOP) and runs on the Windows 9X platform. The system provides a friendly man-machine interface.

  19. Noise simulation in cone beam CT imaging with parallel computing

    International Nuclear Information System (INIS)

    Tu, S.-J.; Shaw, Chris C; Chen, Lingyun

    2006-01-01

    We developed a computer noise simulation model for cone beam computed tomography imaging using a general purpose PC cluster. This model uses a mono-energetic x-ray approximation and allows us to investigate three primary performance components, specifically quantum noise, detector blurring and additive system noise. A parallel random number generator based on the Weyl sequence was implemented in the noise simulation and a visualization technique was accordingly developed to validate the quality of the parallel random number generator. In our computer simulation model, three-dimensional (3D) phantoms were mathematically modelled and used to create 450 analytical projections, which were then sampled into digital image data. Quantum noise was simulated and added to the analytical projection image data, which were then filtered to incorporate flat panel detector blurring. Additive system noise was generated and added to form the final projection images. The Feldkamp algorithm was implemented and used to reconstruct the 3D images of the phantoms. A 24 dual-Xeon PC cluster was used to compute the projections and reconstructed images in parallel with each CPU processing 10 projection views for a total of 450 views. Based on this computer simulation system, simulated cone beam CT images were generated for various phantoms and technique settings. Noise power spectra for the flat panel x-ray detector and reconstructed images were then computed to characterize the noise properties. As an example among the potential applications of our noise simulation model, we showed that images of low contrast objects can be produced and used for image quality evaluation
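
    A minimal sketch of the projection noise model described, under the mono-energetic approximation: Beer-Lambert attenuation, Poisson quantum noise, then Gaussian additive system noise. Flux and noise levels are illustrative.

    import numpy as np

    rng = np.random.default_rng(42)

    def noisy_projection(line_integrals, incident_photons=1e5, sigma_add=2.0):
        """Attenuate, apply Poisson counting noise, then additive electronic noise."""
        expected = incident_photons * np.exp(-np.asarray(line_integrals))
        counts = rng.poisson(expected).astype(float)
        return counts + rng.normal(0.0, sigma_add, counts.shape)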

  20. Computer image analysis in caryopses quality evaluation as exemplified by malting barley

    Science.gov (United States)

    Koszela, K.; Raba, B.; Zaborowicz, M.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Przybylak, A.; Boniecki, P.; Przybył, J.

    2015-07-01

    One of the purposes of employing modern technologies in the agricultural and food industry is to increase the efficiency and automation of production processes, which helps improve the productive effectiveness of business enterprises, thus making them more competitive. Nowadays, this branch of the economy faces the challenge of producing agricultural and food products characterized by the best quality parameters while maintaining optimum production and distribution costs of the processed biological material. Thus, several scientific centers seek to devise new and improved methods and technologies in this field that will meet these expectations. A new solution, under constant development, is to employ so-called machine vision to replace human work in both quality and quantity evaluation processes. An indisputable advantage of this method is that it keeps the evaluation unbiased while increasing its speed and, importantly, eliminating expert fatigue. This paper elaborates on the topic of quality evaluation by marking the contamination in malting barley grains using computer image analysis and selected methods of artificial intelligence [4-5].

  1. FDTD parallel computational analysis of grid-type scattering filter characteristics for medical X-ray image diagnosis

    International Nuclear Information System (INIS)

    Takahashi, Koichi; Miyazaki, Yasumitsu; Goto, Nobuo

    2007-01-01

    X-ray diagnosis depends on the intensity of transmitted and scattered waves in X-ray propagation in biomedical media. X-rays are scattered and absorbed by tissues such as fat, bone and internal organs. However, image processing for medical diagnosis based on the scattering and absorption characteristics of these tissues in the X-ray spectrum has received relatively little study. To obtain precise information on tissues in a living body, accurate characteristics of scattering and absorption are required. In this paper, X-ray scattering and absorption in biomedical media are studied using the 2-dimensional finite difference time domain (FDTD) method. In the FDTD method, the size of the analysis space is strongly limited by the performance of available computers. To overcome this limitation, a parallel and successive FDTD method is introduced. As a result of computer simulation, the amplitudes of transmitted and scattered waves are presented numerically. The fundamental filtering characteristics of the grid-type filter are also shown numerically. (author)
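
    For orientation, a minimal 2-D TM-mode FDTD update loop in vacuum (unitless fields, Courant factor 0.5) is sketched below; it illustrates the field updates that such a computation parallelizes, not the authors' X-ray-scale code.

    import numpy as np

    nx = ny = 200
    Ez = np.zeros((nx, ny)); Hx = np.zeros((nx, ny)); Hy = np.zeros((nx, ny))
    S = 0.5  # Courant factor

    for t in range(300):
        # Update magnetic field components from the curl of Ez.
        Hx[:, :-1] -= S * (Ez[:, 1:] - Ez[:, :-1])
        Hy[:-1, :] += S * (Ez[1:, :] - Ez[:-1, :])
        # Update Ez from the curl of H, then inject a soft point source.
        Ez[1:, 1:] += S * ((Hy[1:, 1:] - Hy[:-1, 1:]) - (Hx[1:, 1:] - Hx[1:, :-1]))
        Ez[nx // 2, ny // 2] += np.sin(0.2 * t)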

  2. Image matrix processor for fast multi-dimensional computations

    Science.gov (United States)

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  3. Image sequence analysis workstation for multipoint motion analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze frame display, and digital image enhancement; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  4. Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface

    Science.gov (United States)

    Ratib, Osman M.; Huang, H. K.

    1989-05-01

    A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for the display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system and networking, and the availability of a very large variety of commercial software packages. In the current configuration, the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using a standard TCP-IP protocol and stored locally on magnetic disk. The use of high-resolution screens (1024x768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images and scintigraphic images.

  5. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  6. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    Directory of Open Access Journals (Sweden)

    Malia A. Gehan

    2017-12-01

    Full Text Available Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  7. Low-level processing for real-time image analysis

    Science.gov (United States)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.

  8. Assessment of Feasibility to Use Computer Aided Texture Analysis Based Tool for Parametric Images of Suspicious Lesions in DCE-MR Mammography

    Directory of Open Access Journals (Sweden)

    Mehmet Cemil Kale

    2013-01-01

    Full Text Available Our aim was to analyze the feasibility of computer-aided malignant tumor detection using traditional texture analysis applied to two-compartment-based parameter pseudo-images of dynamic contrast-enhanced magnetic resonance (DCE-MR) breast image data. A major contribution of this research is the through-plane assessment capability. Texture analysis was performed on two-compartment-based pseudo-images of DCE-MRI datasets of breast data from eight subjects. The resulting texture parameter pseudo-images were input to a feedforward neural network classification system, which uses the manual segmentations of a primary radiologist as a gold standard, and each voxel was assigned as malignant or nonmalignant. The classification results were compared with the lesions manually segmented by a second radiologist. Results show that the mean true positive fraction (TPF) and false positive fraction (FPF) performance of the classifier vs. the primary radiologist is statistically as good as the mean TPF and FPF performance of the second radiologist vs. the primary radiologist with a confidence interval of 95%, using a one-sample t-test. In the experiment implemented on all eight subjects, all malignant tumors marked by the primary radiologist were classified as malignant by the computer classifier. Our results have shown that neural network classification using textural parameters for automated screening of two-compartment-based parameter pseudo-images of DCE-MRI as input data can be a supportive tool for radiologists in the pre-assessment stage, to show possible cancerous regions, and in the post-assessment stage, to review the segmentations, especially in analyzing complex DCE-MRI cases.

  9. Wave-equation Migration Velocity Analysis Using Plane-wave Common Image Gathers

    KAUST Repository

    Guo, Bowen

    2017-06-01

    Wave-equation migration velocity analysis (WEMVA) based on subsurface-offset, angle domain or time-lag common image gathers (CIGs) requires significant computational and memory resources because it computes higher dimensional migration images in the extended image domain. To mitigate this problem, a WEMVA method using plane-wave CIGs is presented. Plane-wave CIGs reduce the computational cost and memory storage because they are directly calculated from prestack plane-wave migration, and the number of plane waves is often much smaller than the number of shots. In the case of an inaccurate migration velocity, the moveout of plane-wave CIGs is automatically picked by a semblance analysis method, which is then linked to the migration velocity update by a connective function. Numerical tests on two synthetic datasets and a field dataset validate the efficiency and effectiveness of this method.

  10. Noise and contrast detection in computed tomography images

    International Nuclear Information System (INIS)

    Faulkner, K.; Moores, B.M.

    1984-01-01

    A discrete representation of the reconstruction process is used in an analysis of noise in computed tomography (CT) images. This model is consistent with the method of data collection in actual machines. An expression is derived which predicts the variance of the measured linear attenuation coefficient of a single pixel in an image. The dependence of the variance on various CT scanner design parameters such as pixel size, slice width, scan time, number of detectors, etc., is then described. The variation of noise with sampling area is explained theoretically. These predictions are in good agreement with a set of experimental measurements made on a range of CT scanners. The equivalent sampling aperture of the CT process is determined, and the effect of the reconstruction filter on the variance of the linear attenuation coefficient is also noted, in particular the choice of filter and its consequences for reconstructed images and noise behaviour. The theory has been extended to include contrast-detail behaviour, and these predictions compare favourably with experimental measurements. The theory predicts that image smoothing will have little effect on the contrast-detail detectability behaviour of reconstructed images. (author)

  11. Application of Image Texture Analysis for Evaluation of X-Ray Images of Fungal-Infected Maize Kernels

    DEFF Research Database (Denmark)

    Orina, Irene; Manley, Marena; Kucheryavskiy, Sergey V.

    2018-01-01

    The feasibility of image texture analysis to evaluate X-ray images of fungal-infected maize kernels was investigated. X-ray images of maize kernels infected with Fusarium verticillioides and control kernels were acquired using high-resolution X-ray micro-computed tomography. After image acquisition, first-order statistical features and grey-level co-occurrence matrix (GLCM) features (including homogeneity and contrast) were extracted from the side, front and top views of each kernel and used as inputs for principal component analysis (PCA). The first-order statistical image features gave a better separation of the control from infected kernels on day 8 post-inoculation. Classification models were developed using partial least squares discriminant analysis (PLS-DA), and accuracies of 67 and 73% were achieved using first-order statistical features and GLCM extracted features, respectively. This work provides information on the possible application of image texture as a method for analysing X-ray images of fungal-infected maize kernels.
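
    A hedged sketch of a comparable GLCM-plus-PLS-DA pipeline, using scikit-image and scikit-learn rather than the authors' tools. The images, labels, GLCM distances and angles, and the number of PLS components are illustrative assumptions.

```python
# Sketch: GLCM texture features per image, then PLS-DA classification
# (regress class labels on features and threshold the prediction).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)

def glcm_features(img):
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("energy", "homogeneity", "contrast")]

# Synthetic stand-ins for control vs. infected kernel X-ray views.
images = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(40)]
X = np.array([glcm_features(im) for im in images])
y = rng.integers(0, 2, size=40)  # 1 = infected

pls = PLSRegression(n_components=2).fit(X, y)
pred = (pls.predict(X).ravel() > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```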

  12. Correlation of the clinical and physical image quality in chest radiography for average adults with a computed radiography imaging system.

    Science.gov (United States)

    Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R

    2013-07-01

    The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson's correlation coefficient. Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R = 0.87) and between VGAS and eDE were found for chest CR images acquired without an antiscatter grid. A statistically significant correlation has thus been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography.
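
    The physical metric used above is straightforward to compute. The sketch below measures the contrast-to-noise ratio between a detail ROI and a background ROI on a simulated uniform phantom image; the ROI positions and noise parameters are invented for illustration.

```python
# Minimal sketch of the CNR metric: |mean(detail) - mean(background)| / std(background).
import numpy as np

def cnr(image, roi_detail, roi_bg):
    """ROIs are (row slice, col slice) tuples; background noise is the denominator."""
    detail = image[roi_detail]
    bg = image[roi_bg]
    return abs(detail.mean() - bg.mean()) / bg.std()

img = np.random.default_rng(4).normal(100.0, 5.0, size=(256, 256))
img[100:140, 100:140] += 12.0  # simulated low-contrast detail
print("CNR:", cnr(img, (slice(100, 140), slice(100, 140)),
                  (slice(10, 60), slice(10, 60))))
```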

  13. Image processing. Volumetric analysis with a digital image processing system

    Energy Technology Data Exchange (ETDEWEB)

    Kindler, M; Radtke, F; Demel, G

    1986-01-01

    The book is arranged in seven sections, describing various applications of volumetric analysis using image processing systems, and various methods of diagnostic evaluation of images obtained by gamma scintigraphy, cardiac catheterisation, and echocardiography. A dynamic ventricular phantom is described that was developed for checking and calibration to ensure safe examination of patients; the phantom allows extensive simulation of the volumetric and hemodynamic conditions of the human heart. One section discusses the program development for image processing, referring to a number of different computer systems. The equipment described includes a small, inexpensive PC system, as well as a standardized nuclear medicine diagnostic system, and a computer system especially suited to image processing.

  14. Computational multispectral video imaging [Invited].

    Science.gov (United States)

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera made by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information into a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrate a spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
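
    The inversion step described above can be illustrated with a toy Tikhonov-regularized least-squares solve: given a calibrated system matrix mapping spectra to coded sensor readings, recover the spectrum from a noisy measurement. The matrix, noise level, and regularization weight are all assumptions; the authors' actual calibration and solver may differ.

```python
# Toy sketch of regularization-based inversion: min ||Ax - b||^2 + lam^2 ||x||^2,
# solved as an augmented least-squares system.
import numpy as np

rng = np.random.default_rng(5)
n_sensor, n_spectral = 120, 30
A = rng.normal(size=(n_sensor, n_spectral))   # calibration matrix (measured in practice)
x_true = rng.uniform(size=n_spectral)
b = A @ x_true + 0.01 * rng.normal(size=n_sensor)

lam = 0.1
A_aug = np.vstack([A, lam * np.eye(n_spectral)])
b_aug = np.concatenate([b, np.zeros(n_spectral)])
x_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```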

  15. Breast cancer histopathology image analysis: a review.

    Science.gov (United States)

    Veta, Mitko; Pluim, Josien P W; van Diest, Paul J; Viergever, Max A

    2014-05-01

    This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology slide digitization, and which aim at replacing the optical microscope as the primary tool used by pathologists. Breast cancer is the most prevalent form of cancer among women, and image analysis methods that target this disease have a huge potential to reduce the workload in a typical pathology lab and to improve the quality of the interpretation. This paper is meant as an introduction for nonexperts. It starts with an overview of the tissue preparation, staining, and slide digitization processes, followed by a discussion of the different image processing techniques and applications, ranging from analysis of tissue staining to computer-aided diagnosis and prognosis of breast cancer patients.

  16. Chromatic Image Analysis For Quantitative Thermal Mapping

    Science.gov (United States)

    Buck, Gregory M.

    1995-01-01

    Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.
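
    A minimal sketch of the underlying two-wavelength principle: the ratio of fluorescence brightness at two wavelengths is mapped to temperature through a calibration curve. The calibration points below are invented for illustration and do not correspond to any real phosphor.

```python
# Sketch: ratio-based temperature mapping from two brightness images.
import numpy as np

# Hypothetical calibration: intensity ratio I(lambda1)/I(lambda2) vs. temperature (K).
cal_ratio = np.array([0.5, 0.8, 1.2, 1.7, 2.3])
cal_temp = np.array([300.0, 350.0, 400.0, 450.0, 500.0])

def temperature_map(img1, img2):
    ratio = img1 / np.clip(img2, 1e-6, None)      # avoid division by zero
    return np.interp(ratio, cal_ratio, cal_temp)  # per-pixel lookup on the curve

rng = np.random.default_rng(6)
i1 = rng.uniform(0.6, 2.0, size=(4, 4))
i2 = np.ones((4, 4))
print(temperature_map(i1, i2))
```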

  17. Unconventional methods of imaging: computational microscopy and compact implementations

    Science.gov (United States)

    McLeod, Euan; Ozcan, Aydogan

    2016-07-01

    In the past two decades or so, there has been a renaissance of optical microscopy research and development. Much work has been done in an effort to improve the resolution and sensitivity of microscopes, while at the same time introducing new imaging modalities and making existing imaging systems more efficient and more accessible. In this review, we look at two particular aspects of this renaissance: computational imaging techniques and compact imaging platforms. In many cases, these aspects go hand-in-hand because the use of computational techniques can simplify the demands placed on optical hardware in obtaining a desired imaging performance. In the first main section, we cover lens-based computational imaging, in particular, light-field microscopy, structured illumination, synthetic aperture, Fourier ptychography, and compressive imaging. In the second main section, we review lensfree holographic on-chip imaging, including how images are reconstructed, phase recovery techniques, and integration with smart substrates for more advanced imaging tasks. In the third main section we describe how these and other microscopy modalities have been implemented in compact and field-portable devices, often based around smartphones. Finally, we conclude with some comments about opportunities and demand for better results, and where we believe the field is heading.

  18. An Imaging And Graphics Workstation For Image Sequence Analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of modern graphics-oriented workstations with digital image acquisition, processing, and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missiles, stores, and other flying objects in various flight regimes, including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study the three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display, combined with digital image sharpening, noise reduction, contrast enhancement, and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence database generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  19. Cone beam computed tomography: A boon for maxillofacial imaging

    Directory of Open Access Journals (Sweden)

    Sreenivas Rao Ghali

    2017-01-01

    In day-to-day practice, the radiographic techniques used individually or in combination suffer from some inherent limits of all planar two-dimensional (2D) projections, such as magnification, distortion, superimposition, and misrepresentation of anatomic structures. The introduction of cone-beam computed tomography (CBCT), specifically dedicated to imaging the maxillofacial region, heralds a major shift from the 2D to the three-dimensional (3D) approach. It provides a complete 3D view of the maxilla, mandible, teeth, and supporting structures with relatively high resolution, allowing more accurate diagnosis, treatment planning, monitoring, and analysis of outcomes than conventional 2D images, along with low radiation exposure to the patient. CBCT has opened up new vistas for the use of 3D imaging as a diagnostic and treatment planning tool in dentistry. This paper provides an overview of the imaging principles, underlying technology, and dental applications, and in particular focuses on the emerging role of CBCT in dentistry.

  20. Quantum computation and analysis of Wigner and Husimi functions: toward a quantum image treatment.

    Science.gov (United States)

    Terraneo, M; Georgeot, B; Shepelyansky, D L

    2005-06-01

    We study the efficiency of quantum algorithms which aim at obtaining phase-space distribution functions of quantum systems. Wigner and Husimi functions are considered. Different quantum algorithms are envisioned to build these functions, and compared with the classical computation. Different procedures to extract information more efficiently from the final wave function of these algorithms are studied, including coarse-grained measurements, amplitude amplification, and measurement of the wavelet-transformed wave function. The algorithms are analyzed and numerically tested on a complex quantum system showing different behavior depending on parameters, namely the kicked rotator. The results for the Wigner function show in particular that the use of the quantum wavelet transform gives a polynomial gain over classical computation. For the Husimi distribution, the gain is much larger than for the Wigner function, and is larger still with the help of amplitude amplification and wavelet transforms. We discuss the generalization of these results to the simulation of other quantum systems. We also apply the same set of techniques to the analysis of real images. The results show that the use of the quantum wavelet transform allows one to lower dramatically the number of measurements needed, but at the cost of a large loss of information.

  1. Molecular Imaging : Computer Reconstruction and Practice - Proceedings of the NATO Advanced Study Institute on Molecular Imaging from Physical Principles to Computer Reconstruction and Practice

    CERN Document Server

    Lemoigne, Yves

    2008-01-01

    This volume collects the lectures presented at the ninth ESI School held at Archamps (FR) in November 2006, dedicated to nuclear physics applications in molecular imaging. The lectures focus on the multiple facets of image reconstruction, processing, and management, and illustrate the role of digital imaging in clinical practice. Medical computing and image reconstruction are introduced by analysing the underlying physics principles and their implementation, relevant quality aspects, clinical performance, and recent advancements in the field. Several stages of the imaging process are specifically addressed, e.g. optimisation of data acquisition and storage, distributed computing, physiology and detector modelling, and computer algorithms for image reconstruction and measurement in tomography, for both clinical and biomedical research applications. All topics are presented in a didactic language and style, making this book an appropriate reference for students and professionals seeking a comprehensive introduction to the field.

  2. Analysis of live cell images: Methods, tools and opportunities.

    Science.gov (United States)

    Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens

    2017-02-15

    Advances in optical microscopy, biosensors, and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking, and data visualisation methods. The potential opportunities offered by recent advances in machine learning, especially deep learning, and computer vision are discussed. This review also includes an overview of the different available software packages and toolkits.

  3. Multimodal imaging analysis of single-photon emission computed tomography and magnetic resonance tomography for improving diagnosis of Parkinson's disease

    International Nuclear Information System (INIS)

    Barthel, H.; Georgi, P.; Slomka, P.; Dannenberg, C.; Kahn, T.

    2000-01-01

    Parkinson's disease (PD) is characterized by a degeneration of nigrostriatal dopaminergic neurons, which can be imaged with 123I-labeled 2β-carbomethoxy-3β-(4-iodophenyl)tropane ([123I]β-CIT) and single-photon emission computed tomography (SPECT). However, the quality of the region-of-interest (ROI) technique used for quantitative analysis of SPECT data is compromised by the limited anatomical information in the images. We investigated whether the diagnosis of PD can be improved by combining SPECT images with morphological image data from magnetic resonance imaging (MRI)/computed tomography (CT). We examined 27 patients (8 men, 19 women; aged 55±13 years) with PD (Hoehn and Yahr stage 2.1±0.8) by high-resolution [123I]β-CIT SPECT (185-200 MBq, Ceraspect camera). SPECT images were analyzed both by a unimodal technique (ROIs defined directly within the SPECT studies) and a multimodal technique (ROIs defined within individual MRI/CT studies and transferred to the corresponding interactively coregistered SPECT studies). [123I]β-CIT binding ratios (cerebellum as reference), obtained for the heads of the caudate nuclei (CA), the putamina (PU), and global striatal structures, were compared with clinical parameters. Differences between contralateral and ipsilateral (relative to symptom dominance) striatal [123I]β-CIT binding ratios proved to be larger with the multimodal ROI technique than with the unimodal approach (e.g., for PU: 1.2*** vs. 0.7**). Binding ratios obtained by the unimodal ROI technique were significantly correlated with those of the multimodal technique (e.g., for CA: y = 0.97x + 2.8; r = 0.70), and the multimodally derived ratios showed a stronger correlation with the clinical motor subscore (r = -0.49* vs. -0.32). These results show that the impact of [123I]β-CIT SPECT for diagnosing PD is affected by the method used to analyze the SPECT images. The described multimodal approach, which is based on coregistration of SPECT and morphological imaging data, leads to improved determination of the degree of this dopaminergic disorder.
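
    The quantification at the heart of this record reduces to ROI arithmetic on the coregistered SPECT volume. Below is a toy sketch of a specific binding ratio with the cerebellum as reference; the volume and ROI masks are synthetic stand-ins for the MRI/CT-derived segmentations.

```python
# Sketch: specific binding ratio = (ROI mean - reference mean) / reference mean.
import numpy as np

rng = np.random.default_rng(7)
spect = rng.normal(10.0, 1.0, size=(64, 64, 64))  # synthetic SPECT volume

roi_putamen = np.zeros_like(spect, dtype=bool)
roi_putamen[30:34, 30:34, 30:34] = True           # placeholder for MRI-derived ROI
roi_cereb = np.zeros_like(spect, dtype=bool)
roi_cereb[10:20, 10:20, 10:20] = True             # reference region

spect[roi_putamen] += 25.0                        # simulated specific uptake

ratio = (spect[roi_putamen].mean() - spect[roi_cereb].mean()) / spect[roi_cereb].mean()
print("putamen binding ratio:", round(ratio, 2))
```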

  4. Computer-assisted stereology and automated image analysis for quantification of tumor infiltrating lymphocytes in colon cancer.

    Science.gov (United States)

    Eriksen, Ann C; Andersen, Johnnie B; Kristensson, Martin; dePont Christensen, René; Hansen, Torben F; Kjær-Frifeldt, Sanne; Sørensen, Flemming B

    2017-08-29

    Precise prognostic and predictive variables allowing improved post-operative treatment stratification are missing in patients treated for stage II colon cancer (CC). Investigation of tumor infiltrating lymphocytes (TILs) may be rewarding, but the lack of a standardized analytic technique is a major concern. Manual stereological counting is considered the gold standard, but digital pathology with image analysis is preferred due to time efficiency. The purpose of this study was to compare manual stereological estimates of TILs with automatic counts obtained by image analysis, and at the same time investigate the heterogeneity of TILs. From 43 patients treated for stage II CC in 2002, three paraffin-embedded, tumor-containing tissue blocks were selected, one of them representing the deepest invasive tumor front. Serial sections from each of the 129 blocks were immunohistochemically stained for CD3 and CD8, and the slides were scanned. Stereological estimates of the numerical density and area fraction of TILs were obtained using the computer-assisted newCAST stereology system. For the image analysis approach, an app-based algorithm was developed using Visiopharm Integrator System software. For both methods the tumor areas of interest (invasive front and central area) were manually delineated by the observer. Based on all sections, Spearman's correlation coefficients for the density estimates varied from 0.9457 to 0.9638 (all statistically significant). Regarding heterogeneity, intra-class correlation coefficients (ICC) for CD3+ TILs varied from 0.615 to 0.746 in the central area, and from 0.686 to 0.746 in the invasive area. ICC for CD8+ TILs varied from 0.724 to 0.775 in the central area, and from 0.746 to 0.765 in the invasive area. Exact, objective, and time-efficient estimates of numerical densities and area fractions of CD3+ and CD8+ TILs in stage II colon cancer can be obtained by image analysis and are highly correlated to the corresponding estimates obtained by the gold standard based on stereology.

  5. PIZZARO: Forensic analysis and restoration of image and video data

    Czech Academy of Sciences Publication Activity Database

    Kamenický, Jan; Bartoš, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozámský, Adam; Saic, Stanislav; Šroubek, Filip; Šorel, Michal; Zita, Aleš; Zitová, Barbara; Šíma, Z.; Švarc, P.; Hořínek, J.

    2016-01-01

    Vol. 264, No. 1 (2016), pp. 153-166 ISSN 0379-0738 R&D Projects: GA MV VG20102013064; GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords: Image forensic analysis * Image restoration * Image tampering detection * Image source identification Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.989, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/kamenicky-0459504.pdf

  6. Computer-assisted detection of epileptiform focuses on SPECT images

    Science.gov (United States)

    Grzegorczyk, Dawid; Dunin-Wąsowicz, Dorota; Mulawka, Jan J.

    2010-09-01

    Epilepsy is a common nervous system disease, often related to consciousness disturbances and muscular spasms, which affects about 1% of the human population. Despite major technological advances in medicine in recent years, there has been insufficient progress towards overcoming it. The application of advanced statistical methods and computer image analysis offers hope for accurate detection and later removal of the epileptiform focuses that cause some types of epilepsy. The aim of this work was to create a computer system that would help to find and diagnose disorders of blood circulation in the brain. This may be helpful for diagnosing the onset of epileptic seizures in the brain.

  7. GANALYZER: A TOOL FOR AUTOMATIC GALAXY IMAGE ANALYSIS

    International Nuclear Information System (INIS)

    Shamir, Lior

    2011-01-01

    We describe Ganalyzer, a model-based tool that can automatically analyze and classify galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ∼10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large data sets of galaxy images collected by autonomous sky surveys such as SDSS, LSST, or DES. The software is available for free download at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer, and the data used in the experiment are available at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer/GalaxyImages.zip.
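
    A rough sketch of the kind of radial intensity measurement Ganalyzer performs: sample brightness around circles of increasing radius, locate the peak at each radius, and fit the drift of the peak angle, a proxy for spirality. Ganalyzer's actual algorithm and parameters are not reproduced here; everything below is an illustrative assumption.

```python
# Sketch: radial intensity plot and peak-drift (spirality proxy) for a galaxy image.
import numpy as np

def radial_intensity(img, cx, cy, radii, n_theta=360):
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    plot = np.empty((len(radii), n_theta))
    for i, r in enumerate(radii):
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, img.shape[0] - 1)
        plot[i] = img[ys, xs]
    return thetas, plot

rng = np.random.default_rng(8)
galaxy = rng.uniform(size=(128, 128))            # stand-in for a real galaxy image
thetas, plot = radial_intensity(galaxy, 64, 64, radii=range(5, 60, 5))

peak_angle = thetas[np.argmax(plot, axis=1)]     # angular peak position per radius
slope = np.polyfit(range(len(peak_angle)), np.unwrap(peak_angle), 1)[0]
print("peak drift per radial step (rad):", slope)
```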

  9. Retinal Imaging and Image Analysis

    Science.gov (United States)

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:22275207

  10. Technical considerations on scanning and image analysis for amyloid PET in dementia

    International Nuclear Information System (INIS)

    Akamatsu, Go; Ohnishi, Akihito; Aita, Kazuki; Ikari, Yasuhiko; Senda, Michio; Yamamoto, Yasuji

    2017-01-01

    Brain imaging techniques, such as computed tomography (CT), magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT), and positron emission tomography (PET), can provide essential and objective information for the early and differential diagnosis of dementia. Amyloid PET is especially useful to evaluate the amyloid-β pathological process as a biomarker of Alzheimer's disease. This article reviews critical points about technical considerations on the scanning and image analysis methods for amyloid PET. Each amyloid PET agent has its own proper administration instructions and recommended uptake time, scan duration, and the method of image display and interpretation. In addition, we have introduced general scanning information, including subject positioning, reconstruction parameters, and quantitative and statistical image analysis. We believe that this article could make amyloid PET a more reliable tool in clinical study and practice. (author)

  12. Speeding up image reconstruction in computed tomography

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Computed tomography (CT) is a technique for imaging cross-sections of an object using X-ray measurements taken from different angles. In recent decades significant progress has been made in this field: today, advanced algorithms allow fast image reconstruction and high-quality images even with missing or dirty data, modern detectors provide high resolution without increasing the radiation dose, and high-performance multi-core computing devices are available to help solve such tasks even faster. I will start with CT basics, then briefly present the existing classes of reconstruction algorithms and their differences. After that I will proceed to employing distinctive architectural features of modern multi-core devices (CPUs and GPUs) and popular program interfaces (OpenMP, MPI, CUDA, OpenCL) for developing effective parallel realizations of image reconstruction algorithms. Decreasing full reconstruction time from long hours to minutes or even seconds has a revolutionary impact in diagnostic medicine and industrial applications.

  13. Spatial image modulation to improve performance of computed tomography imaging spectrometer

    Science.gov (United States)

    Bearman, Gregory H. (Inventor); Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor)

    2010-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having patterns for imposing spatial structure are provided. The pattern may be imposed either directly on the object scene being imaged or at the field stop aperture. The use of the pattern improves the accuracy of the captured spatial and spectral information.

  14. SU-F-I-43: A Software-Based Statistical Method to Compute Low Contrast Detectability in Computed Tomography Images

    Energy Technology Data Exchange (ETDEWEB)

    Chacko, M; Aldoohan, S [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States)

    2016-06-15

    Purpose: The low contrast detectability (LCD) of a CT scanner is its ability to detect and display faint lesions. The current approach to quantifying LCD uses vendor-specific methods and phantoms, typically by subjectively observing the smallest object at a contrast level above the phantom background. However, this approach does not yield clinically applicable values for LCD. The current study proposes a statistical LCD metric using software tools not only to assess scanner performance, but also to quantify the key factors affecting LCD. This approach was developed using uniform QC phantoms, and its applicability was then extended under simulated clinical conditions. Methods: MATLAB software was developed to compute LCD using a uniform image of a QC phantom. For a given virtual object size, the software randomly samples the image within a selected area, and uses statistical analysis based on Student's t-distribution to compute the LCD as the minimal Hounsfield units that can be distinguished from the background at the 95% confidence level. Its validity was assessed by comparison with the behavior of a known QC phantom under various scan protocols and a tissue-mimicking phantom. The contributions of beam quality and scattered radiation to the computed LCD were quantified by using various external beam-hardening filters and phantom lengths. Results: As expected, the LCD was inversely related to object size under all scan conditions. The type of image reconstruction kernel filter and tissue/organ type strongly influenced the background noise characteristics and, therefore, the computed LCD for the associated image. Conclusion: The proposed metric and its associated software tools are vendor-independent and can be used to analyze the LCD performance of any scanner. Furthermore, the method employed can be used in conjunction with the relationships established in this study between LCD and tissue type to extend these concepts to patients' clinical CT images.
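
    The statistical idea in the Methods can be sketched in a few lines: sample a virtual object of n pixels from a uniform image many times and take the Student's t quantile of the distribution of the mean as the minimal detectable contrast. The noise level, object sizes, and sample counts below are assumptions, not the authors' MATLAB implementation.

```python
# Sketch of a t-distribution-based LCD estimate on a uniform phantom image.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
image = rng.normal(0.0, 8.0, size=(256, 256))  # uniform phantom, sigma = 8 HU (made up)

for n_pix in (4, 16, 64, 256):
    # Distribution of the mean over many random virtual objects of n_pix pixels.
    samples = rng.choice(image.ravel(), size=(1000, n_pix))
    means = samples.mean(axis=1)
    # One-sided detectability threshold from Student's t at 95% confidence.
    lcd = stats.t.ppf(0.95, df=n_pix - 1) * means.std(ddof=1)
    print(f"object of {n_pix:3d} pixels: LCD ~= {lcd:.2f} HU")
```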

  15. Review methods for image segmentation from computed tomography images

    International Nuclear Information System (INIS)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-01-01

    Image segmentation is a challenging process in terms of achieving accuracy, automation, and robustness, especially in medical images. Many segmentation methods can be applied to medical images, but not all methods are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths, and the problems incurred in them are defined and explained. It is necessary to know the suitable segmentation method in order to obtain accurate segmentation. This paper can serve as a guide for researchers in choosing the suitable segmentation method, especially for segmenting images from CT scans.

  16. The ImageJ ecosystem: An open platform for biomedical image analysis.

    Science.gov (United States)

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available, from commercial to academic, special-purpose to Swiss army knife, small to large, but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly, due largely to being freely available and to its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technologies easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is itself influenced by coevolving projects within the ImageJ ecosystem.

  17. Computer-assisted assessment of the Human Epidermal Growth Factor Receptor 2 immunohistochemical assay in imaged histologic sections using a membrane isolation algorithm and quantitative analysis of positive controls

    Directory of Open Access Journals (Sweden)

    Ianosi-Irimie Monica

    2008-06-01

    Background: Breast cancers that overexpress the human epidermal growth factor receptor 2 (HER2) are eligible for effective biologically targeted therapies, such as trastuzumab. However, accurately determining HER2 overexpression, especially in immunohistochemically equivocal cases, remains a challenge. Manual analysis of HER2 expression is dependent on the assessment of membrane staining as well as comparisons with positive controls. In spite of the strides that have been made to standardize the assessment process, intra- and inter-observer discrepancies in scoring are not uncommon. In this manuscript we describe a pathologist-assisted, computer-based continuous scoring approach for increasing the precision and reproducibility of assessing imaged breast tissue specimens. Methods: Computer-assisted analysis of HER2 IHC is compared with manual scoring and fluorescence in situ hybridization results on a test set of 99 digitally imaged breast cancer cases enriched with equivocally scored (2+) cases. Image features are generated based on the staining profile of the positive control tissue and pixels delineated by a newly developed Membrane Isolation Algorithm. Evaluation of results was performed using Receiver Operator Characteristic (ROC) analysis. Results: A computer-aided diagnostic approach has been developed using a membrane isolation algorithm and quantitative use of positive immunostaining controls. By incorporating internal positive controls into feature analysis, a greater area under the curve (AUC) in ROC analysis was achieved than with feature analysis without positive controls. Evaluation of HER2 immunostaining that utilized membrane pixels, controls, and percent area stained showed a significantly greater AUC than manual scoring, and a significantly lower false positive rate when used to evaluate immunohistochemically equivocal cases. Conclusion: It has been shown that by incorporating both a membrane isolation algorithm and analysis of known positive controls, a more precise and reproducible assessment of HER2 status can be achieved.

  18. Computing Homology Group Generators of Images Using Irregular Graph Pyramids

    OpenAIRE

    Peltier, Samuel; Ion, Adrian; Haxhimusa, Yll; Kropatsch, Walter; Damiand, Guillaume

    2007-01-01

    We introduce a method for computing homology groups and their generators of a 2D image, using a hierarchical structure, i.e., an irregular graph pyramid. Starting from an image, a hierarchy of the image is built by two operations that preserve the homology of each region. Instead of computing homology generators in the base level, where the number of entities (cells) is large, we first reduce the number of cells using a graph pyramid. Then homology generators are computed efficiently on the reduced top level of the pyramid.

  19. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM]

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
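
    An illustrative sketch of wavelet-based spatial compression in Python (PyWavelets): transform the image, keep only the largest-magnitude coefficients, and reconstruct. The wavelet choice, decomposition level, and 5% keep-rate are arbitrary assumptions; the details of the patented algorithm may differ.

```python
# Sketch: wavelet transform, coefficient thresholding, and reconstruction.
import numpy as np
import pywt

rng = np.random.default_rng(10)
image = rng.normal(size=(256, 256))  # stand-in for one band of a multivariate image

coeffs = pywt.wavedec2(image, "db2", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

# Retain the 5% largest-magnitude coefficients; zero the rest.
thresh = np.quantile(np.abs(arr), 0.95)
arr[np.abs(arr) < thresh] = 0.0

recon = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                      "db2")
recon = recon[:image.shape[0], :image.shape[1]]  # trim possible padding
print("kept 5% of coefficients; relative error:",
      np.linalg.norm(recon - image) / np.linalg.norm(image))
```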

  20. Quantitative analysis of bowel gas by plain abdominal radiograph combined with computer image processing

    International Nuclear Information System (INIS)

    Gao Yan; Peng Kewen; Zhang Houde; Shen Bixian; Xiao Hanxin; Cai Juan

    2003-01-01

    Objective: To establish a method for quantitative analysis of bowel gas by plain abdominal radiograph and computer graphics. Methods: Plain abdominal radiographs in the supine position from 25 patients with irritable bowel syndrome (IBS) and 20 healthy controls were studied. A gastroenterologist and a radiologist independently conducted the following procedure on each radiograph. After the outline of the bowel gas was traced with a wax pen, the radiograph was digitized by a digital camera and transmitted to a computer with Histogram software. The total gas area was determined as the pixel value on the images. The ratio of the bowel gas quantity to the pixel value in the region surrounded by a horizontal line tangential to the superior pubic symphysis margin, a horizontal line tangential to the tenth dorsal vertebra inferior margin, and the lateral lines tangential to the right and left anterosuperior iliac crests, was defined as the gas volume score (GVS). To examine the sequential reproducibility, a second plain abdominal radiograph was performed in 5 normal controls 1 week later, and the GVS values were compared. Results: Bowel gas was easily identified on the plain abdominal radiographs. Both the large and small intestine were located within the selected region. Both observers could finish one radiographic measurement in less than 10 minutes. The correlation coefficient between the two observers was 0.986. There was no statistical difference in GVS between the two sequential radiographs in the 5 healthy controls. Conclusion: Quantification of bowel gas based on plain abdominal radiographs and computer image processing is simple, rapid, and reliable.
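
    The gas volume score reduces to pixel counting once the outlines exist. The sketch below computes a GVS from placeholder masks; in the study the gas outline was traced manually and the region was bounded by the anatomical landmarks described, whereas here a simple threshold stands in.

```python
# Minimal sketch of the gas volume score: gas pixels inside the delineated
# abdominal region divided by the region's total pixel count.
import numpy as np

rng = np.random.default_rng(11)
radiograph = rng.uniform(size=(512, 512))       # stand-in for a digitized radiograph

region_mask = np.zeros((512, 512), dtype=bool)  # bounded by the anatomical landmarks
region_mask[100:450, 80:430] = True

gas_mask = (radiograph > 0.9) & region_mask     # manual tracing replaced by a threshold

gvs = gas_mask.sum() / region_mask.sum()
print(f"gas volume score: {gvs:.3f}")
```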

  1. Secure thin client architecture for DICOM image analysis

    Science.gov (United States)

    Mogatala, Harsha V. R.; Gallet, Jacqueline

    2005-04-01

    This paper presents a concept of a Secure Thin Client (STC) Architecture for Digital Imaging and Communications in Medicine (DICOM) image analysis over the Internet. The STC Architecture provides in-depth analysis and design of customized reports for DICOM images using drag-and-drop and data warehouse technology. Using a personal computer and a common set of browsing software, STC can be used for analyzing and reporting detailed patient information, type of examinations, date, computed tomography (CT) dose index, and other relevant information stored within the images' header files as well as in the hospital databases. The STC Architecture is a three-tier architecture. The first tier consists of a drag-and-drop web-based interface and web server, which provide customized analysis and reporting ability to the users. The second tier consists of an online analytical processing (OLAP) server and database system, which serves fast, real-time, aggregated multi-dimensional data using OLAP technology. The third tier consists of a smart algorithm-based software program which extracts DICOM tags from CT images in this particular application, irrespective of the CT vendor, and transfers these tags into a secure database system. This architecture provides the Winnipeg Regional Health Authority (WRHA) with quality indicators for CT examinations in the hospitals. It also provides health care professionals with an analytical tool to optimize radiation dose and image quality parameters. The information is provided to the user by way of a secure socket layer (SSL) and role-based security criteria over the Internet. Although this particular application has been developed for the WRHA, this paper also discusses the effort to extend the Architecture to other hospitals in the region. Any DICOM tag from any imaging modality could be tracked with this software.

  2. Computer-based image studies on tumor nests mathematical features of breast cancer and their clinical prognostic value.

    Science.gov (United States)

    Wang, Lin-Wei; Qu, Ai-Ping; Yuan, Jing-Ping; Chen, Chuang; Sun, Sheng-Rong; Hu, Ming-Bai; Liu, Juan; Li, Yan

    2013-01-01

    The expanding and invasive features of tumor nests can reflect the malignant biological behavior of breast invasive ductal carcinoma. Useful information on cancer invasiveness hidden within tumor nests can be extracted and analyzed by computer image processing and big data analysis. Tissue microarrays from invasive ductal carcinoma (n = 202) were first stained with cytokeratin by an immunohistochemical method to clearly demarcate the tumor nests. Then an expert-aided computer analysis system was developed to study the mathematical and geometrical features of the tumor nests. The computer recognition system and image analysis software extracted tumor nest information, and mathematical features of the tumor nests were calculated. The relationship between tumor nest mathematical parameters and patients' 5-year disease-free survival was studied. There were 8 mathematical parameters extracted by the expert-aided computer analysis system. Three mathematical parameters (number, circularity, and total perimeter) had an area under the curve above 0.5, and 4 mathematical parameters (average area, average perimeter, total area/total perimeter, average area/perimeter) had an area under the curve below 0.5. The mathematical features of tumor nests could be a useful parameter to predict the prognosis of early-stage breast invasive ductal carcinoma.
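
    Several of the listed parameters (area, perimeter, circularity) are standard region properties. The sketch below computes them per labeled nest with scikit-image; the mask is synthetic, and the circularity formula 4πA/P² is the usual definition, not necessarily the authors' exact one.

```python
# Sketch: per-nest shape features (area, perimeter, circularity) from a binary mask.
import numpy as np
from skimage.measure import label, regionprops

rng = np.random.default_rng(12)
mask = rng.uniform(size=(256, 256)) > 0.995
mask[50:90, 50:90] = True                 # one synthetic "nest"

for region in regionprops(label(mask)):
    if region.area < 20:                  # skip speckle
        continue
    circ = 4 * np.pi * region.area / max(region.perimeter ** 2, 1e-9)
    print(f"nest: area={region.area}, perimeter={region.perimeter:.1f}, "
          f"circularity={circ:.2f}")
```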

  3. Computed and conventional chest radiography: a comparison of image quality and radiation dose

    International Nuclear Information System (INIS)

    Ramli, K.; Abdullah, B.J.J.; Ng, K-H.; Hussain, A.F.; Mahmud, R.

    2005-01-01

    The aim of this study was to compare the image quality and entrance skin dose (ESD) for film-screen and computed chest radiography. Analysis of image quality and dose in chest radiography was carried out on a conventional X-ray unit using film-screen, storage phosphor plates, and selenium drum direct chest radiography. For each receptor, ESD was measured in 60 patients using thermoluminescent dosemeters. Images were printed on 35 x 43 cm films. Image quality was assessed subjectively by evaluation of anatomic features and estimation of the image quality, following the guidelines established by the protocols of the Commission of the European Communities. There was no statistically significant difference noted between the computed and conventional images (Wilcoxon rank sum test, P > 0.05). The mediastinum and peripheral lung structures were better visualized with the storage phosphor and selenium drum techniques than with the film-screen combination. The patients' mean ESD for chest radiography using the storage phosphor, film-screen combination, and selenium drum was 0.20, 0.20, and 0.25 mGy, respectively, with no statistically significant difference (P > 0.05, χ2 tests).

  4. Synchrotron Imaging Computations on the Grid without the Computing Element

    International Nuclear Information System (INIS)

    Curri, A; Pugliese, R; Borghes, R; Kourousias, G

    2011-01-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is that of the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding scientific imaging tasks in an online manner. Instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occur during the job submission and queuing phases. Moreover, the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  5. Parallel multiple instance learning for extremely large histopathology image analysis.

    Science.gov (United States)

    Xu, Yan; Li, Yeshu; Shen, Zhengyang; Wu, Ziwei; Gao, Teng; Fan, Yubo; Lai, Maode; Chang, Eric I-Chao

    2017-08-03

    Histopathology images are critical for medical diagnosis, e.g., of cancer and its treatment. A standard histopathology slide can easily be scanned at a high resolution of, say, 200,000×200,000 pixels. These high-resolution images can make most existing image processing tools infeasible or less effective when operated on a single machine with limited memory, disk space, and computing power. In this paper, we propose an algorithm tackling this newly emerging "big data" problem utilizing parallel computing on High-Performance-Computing (HPC) clusters. Experimental results on a large-scale data set (1318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. The proposed framework provides an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation for weakly supervised learning for image classification, segmentation, and clustering. When a max-margin concept is adopted for different clusters, we obtain further improvement in clustering performance.
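
    The divide-and-parallelize pattern for huge slides can be sketched with Python's multiprocessing; the per-tile function below is a trivial placeholder for the paper's multiple instance learning computations, and the tile size and image size are arbitrary.

```python
# Sketch: split a large image into tiles and process them across worker processes.
import numpy as np
from multiprocessing import Pool

def tile_feature(tile):
    return float(tile.mean())  # placeholder for per-tile MIL inference

def tiles(img, size):
    for i in range(0, img.shape[0], size):
        for j in range(0, img.shape[1], size):
            yield img[i:i + size, j:j + size]

if __name__ == "__main__":
    slide = np.random.default_rng(13).uniform(size=(4096, 4096))  # toy "slide"
    with Pool() as pool:
        feats = pool.map(tile_feature, tiles(slide, 512))
    print(len(feats), "tiles processed")
```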

  6. Hepatocellular adenoma: findings at state-of-the-art magnetic resonance imaging, ultrasound, computed tomography and pathologic analysis

    International Nuclear Information System (INIS)

    Hussain, Shahid M.; Bos, Indra C. van den; Dwarkasing, Roy S.; Kuiper, Jan-Willem; Hollander, Jan den

    2006-01-01

    The purpose of this paper is to describe the most recent concepts and pertinent findings of hepatocellular adenomas, including clinical presentation, gross pathology and histology, pathogenesis and transformation into hepatocellular carcinoma (HCC), and imaging findings at ultrasound (US), computed tomography (CT), and magnetic resonance (MR) imaging. (orig.)

  7. Impulse-response analysis of planar computed tomography for nondestructive test

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dae Cheon; Kim, Seung Ho; Kim, Ho Kyung [KAERI, Daejeon (Korea, Republic of)]

    2016-05-15

    It has been reported that the use of radiation imaging, such as digital radiography, computed tomography (CT), and digital tomosynthesis (DTS), for nondestructive testing (NDT) is spreading widely. These methods have merits and demerits of their own in terms of image quality and inspection speed. Therefore, images from these methods should have acceptable quality for NDT and be acquired at high speed. In this study, we quantitatively evaluate the impulse responses of images reconstructed by filtered backprojection (FBP), which is most widely used in planar computed tomography (pCT) systems. We first evaluate image performance metrics related to contrast and depth resolution, and then we design a figure of merit that includes image performance and system parameters, such as tube load and reconstruction speed. The final goal of this study is the application of these methods to nondestructive testing. To accomplish this, further study is needed. First, the ASF results for various numbers of views must be examined. Second, the modulation transfer function, noise power spectrum, and detective quantum efficiency must be analyzed for various angular ranges and numbers of views.

  8. Preliminaries on core image analysis using fault drilling samples

    Energy Technology Data Exchange (ETDEWEB)

    Miyazaki, T; Ito, H [Geological Survey of Japan, Tsukuba (Japan)]

    1996-05-01

    This paper introduces examples of image data analysis on fault drilling samples. The paper describes the following matters: the core samples used in the analysis are those obtained from wells drilled piercing the Nojima fault, which moved in the Hyogoken-Nanbu Earthquake; the CORESCAN system made by DMT Corporation, Germany, used in acquiring the image data, consists of a CCD camera, a light source and core rotation mechanism, and a personal computer, its resolution being about 5 pixels/mm in both axial and circumferential directions, and 24-bit full color; with respect to the opening fractures in core samples collected by using constant azimuth coring, it was possible to derive values of the opening width, inclination angle, and travel from the image data by using commercially available software for the personal computer; and comparison of this core image with the BHTV record and the hydrophone VSP record (travel and inclination obtained from the BHTV record agree well with those obtained from the core image). 4 refs., 4 figs.

  9. Fiji: an open-source platform for biological-image analysis.

    Science.gov (United States)

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.

  10. Computer analysis of the X-ray images of renal concretions

    International Nuclear Information System (INIS)

    Naumov, N.; Zozikov, B.; Yanakiev, I.; Varlev, H.; Dimitrov, I.; Baltadjiev, D.; Dimitrova, St.; Shimanov, V.; Sultanov, A.; Pazderov, R.

    1997-01-01

    An investigation aimed at assessing the possibilities of computerized analysis of renal concretions is described. The results of a comparative study of digitized X-ray images of concretions and data retrieved from radio-spectral microprobe analysis are presented. The obtained data confirm the authors' hypothesis that it is possible to define the composition and structure of renal concretions using specially developed software (Videoexpert 2.0). Excellent results are obtained even from native X-rays where the concretion is still within the patient's body. Computerized roentgen analysis is recommended for making decisions on the therapeutic approach towards calculi in urological and radiographic practice. 5 refs., 5 figs

  11. Computer assisted treatments for image pattern data of laser plasma experiments

    International Nuclear Information System (INIS)

    Yaoita, Akira; Matsushima, Isao

    1987-01-01

    An image data processing system for laser-plasma experiments has been constructed. The image data are two-dimensional images taken by X-ray, UV, infrared, and visible light television cameras, as well as by streak cameras. They are digitized by frame memories. The digitized image data are stored in disk memories with the aid of a microcomputer. The data are processed by a host computer and stored in the files of the host computer and on magnetic tapes. In this paper, an overview of the image data processing system and some software for data handling in the host computer are reported. (author)

  12. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques and is currently used in many different industries. In several CT systems, detection has been performed by a combination of an X-ray image intensifier and a charge-coupled device (CCD) camera, or by using a line array detector. The recent development of X-ray flat panel detectors has made fast CT imaging feasible and practical. This paper therefore explains the arrangement of a new detection system, which uses the existing high-resolution (127 μm pixel size) flat panel detector at MINT, and the image reconstruction technique developed. The aim of the project is to develop a prototype flat panel detector-based CT imaging system for NDE. The prototype consists of an X-ray tube, a flat panel detector system, a rotation table, and a computer system to control the sample motion and image acquisition. Hence, the project is divided into two major tasks: firstly, to develop the image reconstruction algorithm, and secondly, to integrate the X-ray imaging components into one CT system. An image reconstruction algorithm using the filtered back-projection method is developed and compared to other techniques. MATLAB is the tool used for the simulations and computations in this project. (Author)
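
    A hedged sketch of the reconstruction step using scikit-image in place of the authors' MATLAB code: forward-project a phantom with the Radon transform, then reconstruct by filtered back-projection. The projection angles and the ramp filter are illustrative choices.

```python
# Sketch: Radon transform of a phantom, then filtered back-projection.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(phantom, theta=angles)                  # simulated projections
recon = iradon(sinogram, theta=angles, filter_name="ramp")  # FBP reconstruction

print("RMS error:", np.sqrt(np.mean((recon - phantom) ** 2)))
```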

  13. Computationally-optimized bone mechanical modeling from high-resolution structural images.

    Directory of Open Access Journals (Sweden)

    Jeremy F Magland

    Full Text Available Image-based mechanical modeling of the complex micro-structure of human bone has shown promise as a non-invasive method for characterizing bone strength and fracture risk in vivo. In particular, elastic moduli obtained from image-derived micro-finite element (μFE) simulations have been shown to correlate well with results obtained by mechanical testing of cadaveric bone. However, most existing large-scale finite-element simulation programs require significant computing resources, which hamper their use in common laboratory and clinical environments. In this work, we theoretically derive and computationally evaluate the resources needed to perform such simulations (in terms of computer memory and computation time), which are dependent on the number of finite elements in the image-derived bone model. A detailed description of our approach is provided, which is specifically optimized for μFE modeling of the complex three-dimensional architecture of trabecular bone. Our implementation includes domain decomposition for parallel computing, a novel stopping criterion, and a system for speeding up convergence by pre-iterating on coarser grids. The performance of the system is demonstrated on a machine with dual quad-core 3.16 GHz Xeon CPUs and 40 GB of RAM. A distal tibia model comprising 200,000 elements, derived from 3D in-vivo MR images of a patient, required less than 30 seconds (and 40 MB of RAM) to converge. To illustrate the system's potential for large-scale μFE simulations, axial stiffness was estimated from high-resolution micro-CT images of the human proximal femur, a voxel array of 90 million elements, in seven hours of CPU time. In conclusion, the system described should enable image-based finite-element bone simulations in practical computation times on high-end desktop computers, with applications to laboratory studies and clinical imaging.
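    As a rough illustration of the resource accounting the paper performs, the two figures quoted above (200,000 elements in about 40 MB; 90 million elements solved on a 40 GB machine) are consistent with memory that scales roughly linearly in element count. In the sketch below, the bytes-per-element constant is inferred from those figures and is an assumption, not a value taken from the paper.

```python
# Back-of-the-envelope RAM estimate for a voxel-based microFE solve, assuming
# linear scaling in element count. The constant (~210 bytes/element) is derived
# from the abstract's figures (200,000 elements -> 40 MB), not from the paper.
BYTES_PER_ELEMENT = 40 * 1024**2 / 200_000

def estimated_ram_gb(n_elements: int) -> float:
    """Estimated solver RAM (GB) for a model with n_elements voxel elements."""
    return n_elements * BYTES_PER_ELEMENT / 1024**3

for n in (200_000, 10_000_000, 90_000_000):
    print(f"{n:>11,d} elements -> ~{estimated_ram_gb(n):5.1f} GB")
```

    Under this assumption, the 90-million-element femur model comes out at roughly 18 GB, comfortably within the 40 GB quoted above.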

  14. A software package for biomedical image processing and analysis

    International Nuclear Information System (INIS)

    Goncalves, J.G.M.; Mealha, O.

    1988-01-01

    The decreasing cost of computing power and the introduction of low-cost imaging boards justify the increasing number of applications of digital image processing techniques in biomedicine. There is, however, a large software gap to be filled between the application and the equipment. The requirements for bridging this gap are twofold: good knowledge of the hardware provided and its interface to the host computer, and expertise in digital image processing and analysis techniques. A software package incorporating these two requirements was developed using the C programming language, in order to create a user-friendly image processing programming environment. The software package can be considered in two different ways: as a data structure adapted to image processing and analysis, which acts as the backbone and the standard of communication for all the software; and as a set of routines implementing the basic algorithms used in image processing and analysis. Hardware dependency is restricted to a single module upon which all hardware calls are based. The data structure that was built has four main features: it is hierarchical, open, object-oriented, and has object-dependent dimensions. Considering the vast amount of memory needed by imaging applications and the memory available in small imaging systems, an effective image memory management scheme was implemented. This software package has been in use for more than a year and a half by users with different applications. It proved to be an excellent tool for helping people become acquainted with the system, and for standardizing and exchanging software, while preserving the flexibility to accommodate users' specific implementations. The philosophy of the software package is discussed and the data structure that was built is described in detail

  15. FAST: framework for heterogeneous medical image computing and visualization.

    Science.gov (United States)

    Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-11-01

    Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphics processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the Insight Toolkit (ITK) and the Visualization Toolkit (VTK) and show that the presented framework is faster, with up to a 20-times speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.

  16. Fractal-Based Image Analysis In Radiological Applications

    Science.gov (United States)

    Dellepiane, S.; Serpico, S. B.; Vernazza, G.; Viviani, R.

    1987-10-01

    We present some preliminary results of a study aimed at assessing the actual effectiveness of fractal theory, and at defining its limitations, in the area of medical image analysis for texture description, in particular in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) was performed on synthetically generated images of known fractal dimensions. Moreover, we analyzed some radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity was related to the D value, i.e., the texture behaviour. Results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (i.e., the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values, and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.
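    For context, the fractal dimension of a grayscale block can be estimated by box counting. The sketch below uses differential box counting, which is related to, but not the same as, the Pentland and "blanket" methods the authors compared; the box sizes are illustrative.

```python
# Differential box-counting estimate of the fractal dimension of a grayscale block.
import numpy as np

def fractal_dimension_dbc(block: np.ndarray, sizes=(2, 3, 6)) -> float:
    m = block.shape[0]                      # assume a square m x m block
    counts = []
    for s in sizes:
        h = s * 255.0 / m                   # box height in gray levels
        n_boxes = 0
        for i in range(0, m, s):
            for j in range(0, m, s):
                cell = block[i:i + s, j:j + s]
                # number of gray-level boxes spanned by this cell
                n_boxes += int(np.ceil(cell.max() / h) - np.ceil(cell.min() / h)) + 1
        counts.append(n_boxes)
    scales = np.array(sizes, dtype=float) / m
    slope, _ = np.polyfit(np.log(1.0 / scales), np.log(counts), 1)
    return slope                            # estimated fractal dimension D

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(6, 6)).astype(float)   # one 6x6 block, as in the paper
print(f"estimated D = {fractal_dimension_dbc(block):.2f}")
```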

  17. Progress in computer aided diagnosis for medical images by information technology

    International Nuclear Information System (INIS)

    Mekada, Yoshito

    2007-01-01

    This paper describes the history, present state and future prospects of computer-aided diagnosis (CAD) based on the processing, recognition and visualization of chest and abdominal images. A primitive form of CAD for lung cancer detection can be seen as early as the 1960s. Today, advances in multi-dimensional medical imaging by CT, MRI, single photon emission computed tomography (SPECT) and positron emission tomography (PET) require doctors to read vast amounts of information, making the necessity of CAD evident. At present, simultaneous CAD for multiple organs and multiple diseases is in progress; the interaction between images and medical doctors is leading to the development of newer systems such as virtual endoscopy; and objective evaluation of CAD systems using receiver operating characteristic analysis is necessary for approval by authorities such as the Food and Drug Administration (FDA). Cooperation between the medical and technological fields is therefore more and more important. In future, CAD should account for individual differences and for changes in disease state, be usable across time and space, be more widely recognized for its importance by doctors, and be more useful as a participant in therapeutic practice. (R.T.)

  18. Computer simulation of radiographic images sharpness in several system of image record

    International Nuclear Information System (INIS)

    Silva, Marcia Aparecida; Schiable, Homero; Frere, Annie France; Marques, Paulo M.A.; Oliveira, Henrique J.Q. de; Alves, Fatima F.R.; Medeiros, Regina B.

    1996-01-01

    A method to predict, by computer simulation, the influence of the recording system on radiographic image sharpness is studied. The method is intended to show in advance the image that would be obtained with each type of film or screen-film combination used during the exposure

  19. Physical image quality of computed radiography in mammography system

    International Nuclear Information System (INIS)

    Norriza Mohd Isa; Muhammad Jamal Isa; Wan Muhamad Saridan Wan Hassan; Fatimah Othman

    2013-01-01

    Full-text: Mammography is a screening procedure mostly used for early detection of breast cancer. In digital imaging, Computed Radiography is a cost-effective technology that applies an indirect-conversion detector. The paper presents physical image quality parameter measurements, namely the modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE), of a Computed Radiography mammography system. The MTF was calculated from two different orientations of slanted images of an edge test device, and the NNPS was estimated using a flat-field image. Both images were acquired using a standard mammography beam quality. The DQE was determined by applying the MTF and NNPS values in our developed software program. Both orientations have similar DQE characteristics. (author)
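    As an illustration of the NNPS estimation step, the sketch below applies a simplified ROI-averaged periodogram to a synthetic flat-field image. Real measurements follow standard practice (e.g. polynomial detrending and overlapping ROIs per IEC 62220), which is omitted here, and the pixel size and noise values are made up.

```python
# Simplified NNPS estimate from a flat-field image via ROI-averaged periodograms.
import numpy as np

def nnps(flat: np.ndarray, pixel_mm: float, roi: int = 128) -> np.ndarray:
    rois = []
    for i in range(0, flat.shape[0] - roi + 1, roi):
        for j in range(0, flat.shape[1] - roi + 1, roi):
            r = flat[i:i + roi, j:j + roi].astype(float)
            rois.append(r - r.mean())               # crude detrend: remove ROI mean
    spectra = [np.abs(np.fft.fft2(r)) ** 2 for r in rois]
    nps = np.mean(spectra, axis=0) * pixel_mm**2 / roi**2   # noise power spectrum
    return nps / flat.mean() ** 2                   # normalize by (large-area signal)^2

flat = np.random.default_rng(1).normal(1000.0, 5.0, size=(512, 512))
print(nnps(flat, pixel_mm=0.05).shape)
```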

  20. Image analysis and machine learning in digital pathology: Challenges and opportunities.

    Science.gov (United States)

    Madabhushi, Anant; Lee, George

    2016-10-01

    With the rise of whole-slide scanner technology, large numbers of tissue slides are being scanned, represented and archived digitally. While digital pathology has substantial implications for telepathology, second opinions, and education, there are also huge research opportunities in image computing with this new source of "big data". It is well known that fundamental prognostic data are embedded in pathology images. The ability to mine "sub-visual" image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome. However, the compelling opportunities in precision medicine offered by big digital pathology data come with their own set of computational challenges. Image analysis and computer-assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate for the data density of high-resolution digitized whole-slide images. Additionally, there has been substantial recent interest in combining and fusing radiologic imaging and proteomics- and genomics-based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again, there is a paucity of powerful tools for combining disease-specific features that manifest across multiple different length scales. The purpose of this review is to discuss developments in computational image analysis tools for predictive modeling of digital pathology images from a detection, segmentation, feature extraction, and tissue classification perspective. We discuss the emergence of new handcrafted feature approaches for improved predictive modeling of tissue appearance and also review the emergence of deep learning schemes for both object detection and tissue classification

  1. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer’s disease

    International Nuclear Information System (INIS)

    Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong

    2016-01-01

    Computer-based diagnosis of Alzheimer’s disease can be performed through analysis of the functional and structural changes in the brain. Multispectral image fusion combines the complementary information of multiple source images, while discarding redundant information, to produce a single image containing both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT, followed by dimensionality reduction using a modified Principal Component Analysis algorithm on the low-frequency coefficients. Further, the high-frequency coefficients are enhanced using a non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: phase congruency for the low-frequency coefficients, and a combination of directive contrast and normalized Shannon entropy for the high-frequency coefficients. The superiority of the fusion response is demonstrated by comparisons with other state-of-the-art fusion approaches (in terms of various fusion metrics).

  2. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer’s disease

    Energy Technology Data Exchange (ETDEWEB)

    Bhateja, Vikrant, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn; Moin, Aisha; Srivastava, Anuja [Shri Ramswaroop Memorial Group of Professional Colleges (SRMGPC), Lucknow, Uttar Pradesh 226028 (India); Bao, Le Nguyen [Duytan University, Danang 550000 (Viet Nam); Lay-Ekuakille, Aimé [Department of Innovation Engineering, University of Salento, Lecce 73100 (Italy); Le, Dac-Nhuong, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn [Duytan University, Danang 550000 (Viet Nam); Haiphong University, Haiphong 180000 (Viet Nam)

    2016-07-15

    Computer-based diagnosis of Alzheimer’s disease can be performed through analysis of the functional and structural changes in the brain. Multispectral image fusion combines the complementary information of multiple source images, while discarding redundant information, to produce a single image containing both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT, followed by dimensionality reduction using a modified Principal Component Analysis algorithm on the low-frequency coefficients. Further, the high-frequency coefficients are enhanced using a non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: phase congruency for the low-frequency coefficients, and a combination of directive contrast and normalized Shannon entropy for the high-frequency coefficients. The superiority of the fusion response is demonstrated by comparisons with other state-of-the-art fusion approaches (in terms of various fusion metrics).
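    NSCT implementations are not part of the standard Python scientific stack, so the sketch below illustrates the same coefficient-level fusion pattern with a discrete wavelet transform as a stand-in (PyWavelets assumed available): average the low-frequency bands and keep the larger-magnitude high-frequency coefficients. The paper's actual rules, phase congruency and directive contrast with normalized Shannon entropy, are considerably more elaborate.

```python
# Coefficient-level image fusion with a wavelet transform standing in for NSCT.
import numpy as np
import pywt  # PyWavelets, assumed installed

def fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    wa = pywt.wavedec2(img_a, "db2", level=2)
    wb = pywt.wavedec2(img_b, "db2", level=2)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = [0.5 * (wa[0] + wb[0])]          # low-pass band: simple average
    for (ha, va, da), (hb, vb, db) in zip(wa[1:], wb[1:]):
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, "db2")

rng = np.random.default_rng(0)
a, b = rng.random((64, 64)), rng.random((64, 64))   # stand-ins for two modalities
print(fuse(a, b).shape)
```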

  3. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward

  4. Haemodynamic imaging of thoracic stent-grafts by computational fluid dynamics (CFD): presentation of a patient-specific method combining magnetic resonance imaging and numerical simulations.

    Science.gov (United States)

    Midulla, Marco; Moreno, Ramiro; Baali, Adil; Chau, Ming; Negre-Salvayre, Anne; Nicoud, Franck; Pruvo, Jean-Pierre; Haulon, Stephan; Rousseau, Hervé

    2012-10-01

    In the last decade, there has been increasing interest in finding imaging techniques able to provide functional vascular imaging of the thoracic aorta. The purpose of this paper is to present an imaging method combining magnetic resonance imaging (MRI) and computational fluid dynamics (CFD) to obtain a patient-specific haemodynamic analysis of patients treated by thoracic endovascular aortic repair (TEVAR). MRI was used to obtain boundary conditions. MR angiography (MRA) was followed by cardiac-gated cine sequences covering the whole thoracic aorta. Phase-contrast imaging provided the inlet and outlet profiles. A CFD mesh generator was used to model the arterial morphology, and wall movements were imposed according to the cine imaging. CFD runs were processed using the finite volume (FV) method, assuming blood to be a homogeneous Newtonian fluid. Twenty patients (14 men; mean age 62.2 years) with different aortic lesions were evaluated. Four-dimensional maps of velocity and wall shear stress were obtained, depicting different patterns of flow (laminar, turbulent, stenosis-like) and local alterations of parietal stress in-stent and along the native aorta. A computational method using a combined approach with MRI appears feasible and seems promising for providing detailed functional analysis of the thoracic aorta after stent-graft implantation. • Functional vascular imaging of the thoracic aorta offers new diagnostic opportunities • CFD can model vascular haemodynamics for clinical aortic problems • Combining CFD with MRI offers a patient-specific method of aortic analysis • Haemodynamic analysis of stent-grafts could improve clinical management and follow-up.

  5. Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.

    Science.gov (United States)

    Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William

    2018-05-08

    Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.

  6. The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    Directory of Open Access Journals (Sweden)

    Wojtek James eGoscinski

    2014-03-01

    Full Text Available The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) is a national imaging and visualisation facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research.

  7. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  8. Performance management of high performance computing for medical image processing in Amazon Web Services

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
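    The closing cost/benefit comparison can be captured in a few lines. The rule and every number below are illustrative assumptions, not the paper's actual formulae.

```python
# Toy cost/benefit comparison: local serial execution vs. cloud execution.
# All parameters (job count, per-job time, node price, overhead) are made up.
def cloud_vs_local(n_jobs: int, t_job_hr: float, n_nodes: int,
                   usd_per_node_hr: float, overhead_hr: float = 0.2):
    t_local = n_jobs * t_job_hr                            # one machine, serial
    t_cloud = overhead_hr + (n_jobs / n_nodes) * t_job_hr  # perfect scaling assumed
    cost = t_cloud * n_nodes * usd_per_node_hr
    return t_local, t_cloud, cost

t_local, t_cloud, usd = cloud_vs_local(n_jobs=500, t_job_hr=0.25,
                                       n_nodes=20, usd_per_node_hr=0.10)
print(f"local: {t_local:.1f} h, cloud: {t_cloud:.1f} h for ${usd:.2f}")
```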

  9. Computer-aided analysis of CT images for the differentiation of cerebral tumors

    International Nuclear Information System (INIS)

    Michalik, M.; Michalik, S.; Bornholdt, F.

    1988-01-01

    For the integration of CT imaging into the differential diagnostics of intracranial space-occupying lesions, the selection and description of characteristics facilitating a good discrimination of several classes of tumors becomes a very important task. From images of 93 patients with the most frequent brain tumors, the optimal set of characteristics was determined. The four most significant characteristics for the differentiation of brain tumors are 'uptake of contrast medium by the tumor', 'delineation of the tumor contours', 'progression of the tumor' and 'average tumor density after administration of contrast media'. Very good results were obtained for the differentiation of meningiomas with and without anaplasia and for the differentiation of meningiomas from all other tumors examined. The differentiation of the degree of malignancy for various gliomatous tumors was difficult. An accurate reclassification with the computer program was obtained for 83.4% of all tumors. (author)

  10. Medical imaging in clinical applications algorithmic and computer-based approaches

    CERN Document Server

    Bhateja, Vikrant; Hassanien, Aboul

    2016-01-01

    This volume comprises 21 selected chapters, including two overview chapters: one devoted to abdominal imaging in clinical applications supported by computer-aided diagnosis approaches, and one on different techniques for solving the pectoral muscle extraction problem in the preprocessing part of CAD systems for detecting breast cancer in its early stage using digital mammograms. The aim of this book is to stimulate further research in medical imaging applications based on algorithmic and computer-based approaches, and to utilize them in real-world clinical applications. The book is divided into four parts. Part I: Clinical Applications of Medical Imaging; Part II: Classification and Clustering; Part III: Computer Aided Diagnosis (CAD) Tools and Case Studies; and Part IV: Bio-inspired Computer Aided Diagnosis Techniques.

  11. Three-Dimensional Segmentation of the Tumor in Computed Tomographic Images of Neuroblastoma

    OpenAIRE

    Deglint, Hanford J.; Rangayyan, Rangaraj M.; Ayres, Fábio J.; Boag, Graham S.; Zuffo, Marcelo K.

    2006-01-01

    Segmentation of the tumor in neuroblastoma is complicated by the fact that the mass is almost always heterogeneous in nature; furthermore, viable tumor, necrosis, and normal tissue are often intermixed. Tumor definition and diagnosis require the analysis of the spatial distribution and Hounsfield unit (HU) values of voxels in computed tomography (CT) images, coupled with a knowledge of normal anatomy. Segmentation and analysis of the tissue composition of the tumor can assist in quantitative ...

  12. Progress in analysis of computed tomography (CT) images of hardwood logs for defect detection

    Science.gov (United States)

    Erol Sarigul; A. Lynn Abbott; Daniel L. Schmoldt

    2003-01-01

    This paper addresses the problem of automatically detecting internal defects in logs using computed tomography (CT) images. The overall purpose is to assist in breakdown optimization. Several studies have shown that the commercial value of resulting boards can be increased substantially if defect locations are known in advance, and if this information is used to make...

  13. Image processing and computer graphics in radiology. Pt. A

    International Nuclear Information System (INIS)

    Toennies, K.D.

    1993-01-01

    The reports give a full review of all aspects of digital imaging in radiology that are of significance to image processing and the subsequent picture archiving and communication techniques. The review is strongly oriented towards practice and illustrates the various contributions from specialized areas of computer science, such as computer vision, computer graphics, database systems, information and communication systems, man-machine interaction, and software engineering. The methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG) [de]

  14. Image processing and computer graphics in radiology. Pt. B

    International Nuclear Information System (INIS)

    Toennies, K.D.

    1993-01-01

    The reports give a full review of all aspects of digital imaging in radiology that are of significance to image processing and the subsequent picture archiving and communication techniques. The review is strongly oriented towards practice and illustrates the various contributions from specialized areas of computer science, such as computer vision, computer graphics, database systems, information and communication systems, man-machine interaction, and software engineering. The methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG) [de]

  15. Thermal Infrared Imaging-Based Computational Psychophysiology for Psychometrics.

    Science.gov (United States)

    Cardone, Daniela; Pinti, Paola; Merla, Arcangelo

    2015-01-01

    Thermal infrared imaging has been proposed as a potential system for the computational assessment of human autonomic nervous activity and psychophysiological states in a contactless and noninvasive way. Through bioheat modeling of facial thermal imagery, several vital signs can be extracted, including localized blood perfusion, cardiac pulse, breath rate, and sudomotor response, since all these parameters impact the cutaneous temperature. The obtained physiological information can then be used to draw inferences about a variety of psychophysiological or affective states, as proved by the increasing number of psychophysiological studies using thermal infrared imaging. This paper therefore presents a review of the principal achievements of thermal infrared imaging in computational physiology with regard to its capability for monitoring psychophysiological activity.

  16. Computational movement analysis

    CERN Document Server

    Laube, Patrick

    2014-01-01

    This SpringerBrief discusses the characteristics of spatiotemporal movement data, including uncertainty and scale. It investigates three core aspects of Computational Movement Analysis: conceptual modeling of movement and movement spaces, spatiotemporal analysis methods aiming at a better understanding of movement processes (with a focus on data mining for movement patterns), and the use of decentralized spatial computing methods in movement analysis. The author presents Computational Movement Analysis as an interdisciplinary umbrella for analyzing movement processes with methods from a range of fields

  17. Software for 3D diagnostic image reconstruction and analysis

    International Nuclear Information System (INIS)

    Taton, G.; Rokita, E.; Sierzega, M.; Klek, S.; Kulig, J.; Urbanik, A.

    2005-01-01

    Recent advances in computer technologies have opened new frontiers in medical diagnostics. Interesting possibilities include the use of three-dimensional (3D) imaging and the combination of images from different modalities. Software developed in our laboratories for 3D image reconstruction and analysis from computed tomography and ultrasonography is presented. In developing our software, it was assumed that it should be applicable in standard medical practice, i.e. it should work effectively with a PC. An additional feature is the possibility of combining 3D images from different modalities. The reconstruction and data processing can be conducted using a standard PC, so low investment costs result in the introduction of advanced and useful diagnostic possibilities. The program was tested on a PC using DICOM data from computed tomography and TIFF files obtained from a 3D ultrasound system. Both anthropomorphic phantom and patient data were taken into consideration. A new approach was used to achieve spatial correlation of two independently obtained 3D images. The method relies on the use of four pairs of markers within the regions under consideration. The user selects the markers manually and the computer calculates the transformations necessary for coupling the images. The main software feature is the possibility of 3D image reconstruction from a series of two-dimensional (2D) images. The reconstructed 3D image can be: (1) viewed with the most popular methods of 3D image viewing, (2) filtered and processed to improve image quality, (3) analyzed quantitatively (geometrical measurements), and (4) coupled with another, independently acquired 3D image. The reconstructed and processed 3D image can be stored at every stage of image processing. The overall software performance was good considering the relatively low costs of the hardware used and the huge data sets processed. The program can be freely used and tested (source code and program available at
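    The abstract describes coupling two independently acquired 3D images via four user-selected marker pairs. A minimal sketch of the underlying least-squares rigid alignment (the Kabsch/Procrustes solution) follows; whether the authors' software uses exactly this estimator is an assumption.

```python
# Least-squares rigid transform (rotation R, translation t) from marker pairs.
import numpy as np

def rigid_from_markers(src: np.ndarray, dst: np.ndarray):
    """R, t minimizing sum ||R @ src_i + t - dst_i||^2 (Kabsch algorithm)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    h = (src - c_src).T @ (dst - c_dst)            # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))         # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, c_dst - r @ c_src

# four synthetic marker pairs related by a known rotation and translation
src = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)
ang = np.deg2rad(10.0)
r_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
dst = src @ r_true.T + np.array([5.0, -3.0, 12.0])
r, t = rigid_from_markers(src, dst)
print(np.allclose(r, r_true), np.round(t, 2))
```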

  18. Optical encryption with selective computational ghost imaging

    International Nuclear Information System (INIS)

    Zafari, Mohammad; Kheradmand, Reza; Ahmadi-Kandjani, Sohrab

    2014-01-01

    Selective computational ghost imaging (SCGI) is a technique which enables the reconstruction of an N-pixel image from N measurements or fewer. In this paper we propose an optical encryption method based on SCGI and experimentally demonstrate that this method provides much higher security against eavesdropping and unauthorized access than previously reported methods. (paper)
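    For background, the core of computational ghost imaging is a correlation between computed illumination patterns and single-pixel (bucket) measurements. The toy sketch below reconstructs an object that way; the selective-measurement and encryption layers that the paper adds are not reproduced.

```python
# Toy computational ghost imaging: reconstruct via <B*I> - <B><I> correlation.
import numpy as np

rng = np.random.default_rng(42)
n = 16                                         # image side; N = n*n pixels
obj = np.zeros((n, n)); obj[4:12, 6:10] = 1.0  # simple rectangular object

m = 4 * n * n                                  # number of measurements
patterns = rng.random((m, n * n))              # computed speckle patterns
bucket = patterns @ obj.ravel()                # single-pixel (bucket) signals

recon = (bucket[:, None] * patterns).mean(axis=0) - bucket.mean() * patterns.mean(axis=0)
recon = recon.reshape(n, n)
print(f"correlation with object: {np.corrcoef(recon.ravel(), obj.ravel())[0, 1]:.2f}")
```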

  19. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    Science.gov (United States)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

    Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, render the majority of standard image analysis methods suboptimal. Furthermore, validation of adapted computer vision methods proves difficult due to missing ground truth information. There is no widely accepted software phantom in the community, and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy of the resulting textures.
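    A generic illustration of the kind of speckle simulation at issue: multiplicative Rayleigh noise on a two-region echogenicity map. The authors' phantom supports tissue-specific speckle models, so the distribution and all values below are assumptions.

```python
# Fully developed speckle simulation on a synthetic two-tissue map.
import numpy as np

rng = np.random.default_rng(7)
tissue = np.ones((128, 128)); tissue[32:96, 32:96] = 2.0  # two echogenicity levels

speckle = rng.rayleigh(scale=1.0, size=tissue.shape)      # multiplicative noise
image = tissue * speckle

# statistical check: fully developed speckle has SNR (mean/std) of about 1.91
roi = image[40:88, 40:88]
print(f"ROI SNR = {roi.mean() / roi.std():.2f} (theory ~1.91)")
```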

  20. Analysis of the nebulosities near T Tauri using digital computer image processing

    International Nuclear Information System (INIS)

    Lorre, J.J.

    1975-01-01

    Direct plates of T Tauri taken with the 120-inch (3 m) and 36-inch (91 cm) Lick reflectors were digitized and analyzed using digital computer image processing techniques. Luminous emission protrusions extending as far as 13'' from T Tauri at position angles 170°, 210°, and 330° are shown. These features are variable and may contain layered structure. The complex reflection nebula west of T Tauri (NGC 1555) appears to be an illuminated portion of a much larger dark nebula whose variability is due to obscuring material near the star

  1. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2016-04-01

    Full Text Available With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially the GPU based methods. In the classical GPU based imaging algorithm, GPU is employed to accelerate image processing by massive parallel computing, and CPU is only used to perform the auxiliary work such as data input/output (IO). However, the computing capability of CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPU/GPU is proposed to achieve real-time SAR imaging. Through the proposed tasks partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the part of CPU parallel imaging, the advanced vector extension (AVX) method is firstly introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only the bottlenecks of memory limitation and frequent data transferring are broken, but also kinds of optimized strategies are applied, such as streaming, parallel pipeline and so on. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging on single-core CPU by 270 times and realizes the real-time imaging in that the imaging rate outperforms the raw data generation rate.

  2. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    Science.gov (United States)

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially the GPU based methods. In the classical GPU based imaging algorithm, GPU is employed to accelerate image processing by massive parallel computing, and CPU is only used to perform the auxiliary work such as data input/output (IO). However, the computing capability of CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPU/GPU is proposed to achieve real-time SAR imaging. Through the proposed tasks partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the part of CPU parallel imaging, the advanced vector extension (AVX) method is firstly introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only the bottlenecks of memory limitation and frequent data transferring are broken, but also kinds of optimized strategies are applied, such as streaming, parallel pipeline and so on. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging on single-core CPU by 270 times and realizes the real-time imaging in that the imaging rate outperforms the raw data generation rate.

  3. Efficient 2-D DCT Computation from an Image Representation Point of View

    OpenAIRE

    Papakostas, G.A.; Koulouriotis, D.E.; Karakasis, E.G.

    2009-01-01

    A novel methodology that ensures the computation of 2-D DCT coefficients in gray-scale images as well as in binary ones, with high computation rates, was presented in the previous sections. Through a new image representation scheme, called ISR (Image Slice Representation), the 2-D DCT coefficients can be computed in significantly reduced time, with the same accuracy.
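    For reference, the baseline that the ISR scheme is measured against is the standard separable 2-D DCT: a 1-D DCT along rows, then along columns. A minimal SciPy sketch:

```python
# Separable 2-D DCT and inverse, with an orthonormal round-trip check.
import numpy as np
from scipy.fft import dct, idct

def dct2(block: np.ndarray) -> np.ndarray:
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs: np.ndarray) -> np.ndarray:
    return idct(idct(coeffs, axis=1, norm="ortho"), axis=0, norm="ortho")

img = np.random.default_rng(0).random((8, 8))
assert np.allclose(idct2(dct2(img)), img)   # exact reconstruction (orthonormal)
```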

  4. Precision of quantitative computed tomography texture analysis using image filtering: A phantom study for scanner variability.

    Science.gov (United States)

    Yasaka, Koichiro; Akai, Hiroyuki; Mackin, Dennis; Court, Laurence; Moros, Eduardo; Ohtomo, Kuni; Kiryu, Shigeru

    2017-05-01

    Quantitative computed tomography (CT) texture analyses of images with and without filtration are gaining attention as a means to capture the heterogeneity of tumors. The aim of this study was to investigate how quantitative texture parameters using image filtering vary among different CT scanners using a phantom developed for radiomics studies. A phantom consisting of 10 different cartridges with various textures was scanned under 6 different scanning protocols using four CT scanners from four different vendors. CT texture analyses were performed for both unfiltered images and filtered images (using a Laplacian of Gaussian spatial band-pass filter) featuring fine, medium, and coarse textures. Forty-five regions of interest were placed for each cartridge (x) in a specific scan image set (y), and the average of the texture values (T(x,y)) was calculated. The interquartile range (IQR) of T(x,y) among the 6 scans was calculated for a specific cartridge (IQR(x)), while the IQR of T(x,y) among the 10 cartridges was calculated for a specific scan (IQR(y)), and the median IQR(y) was then calculated over the 6 scans (as the control IQR, IQRc). The median of their quotient (IQR(x)/IQRc) among the 10 cartridges was defined as the variability index (VI). The VI was relatively small for the mean in unfiltered images (0.011) and for the standard deviation (0.020-0.044) and entropy (0.040-0.044) in filtered images. Skewness and kurtosis in filtered images featuring medium and coarse textures were relatively variable across different CT scanners, with VIs of 0.638-0.692 and 0.430-0.437, respectively. In conclusion, quantitative CT texture parameters range from robust to highly variable among different scanners, and the behavior of each parameter should be taken into consideration.
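    The variability index defined above translates directly into code; below, T is a 10 × 6 array of mean texture values (cartridge × scan protocol) filled with random toy data rather than the study's measurements.

```python
# Variability index (VI) as defined in the abstract: median over cartridges of
# IQR(x) / IQRc, where IQRc is the median per-scan IQR across cartridges.
import numpy as np

def variability_index(t: np.ndarray) -> float:
    def iqr(a, axis):
        q75, q25 = np.percentile(a, [75, 25], axis=axis)
        return q75 - q25
    iqr_x = iqr(t, axis=1)        # per-cartridge IQR across the 6 scans
    iqr_y = iqr(t, axis=0)        # per-scan IQR across the 10 cartridges
    iqr_c = np.median(iqr_y)      # control IQR
    return float(np.median(iqr_x / iqr_c))

t = np.random.default_rng(3).normal(50.0, 5.0, size=(10, 6))   # toy T(x, y)
print(f"VI = {variability_index(t):.3f}")
```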

  5. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  6. Cloud computing in medical imaging.

    Science.gov (United States)

    Kagadis, George C; Kloukinas, Christos; Moore, Kevin; Philbin, Jim; Papadimitroulas, Panagiotis; Alexakos, Christos; Nagy, Paul G; Visvikis, Dimitris; Hendee, William R

    2013-07-01

    Over the past century technology has played a decisive role in defining, driving, and reinventing procedures, devices, and pharmaceuticals in healthcare. Cloud computing has been introduced only recently but is already one of the major topics of discussion in research and clinical settings. The provision of extensive, easily accessible, and reconfigurable resources such as virtual systems, platforms, and applications with low service cost has caught the attention of many researchers and clinicians. Healthcare researchers are moving their efforts to the cloud, because they need adequate resources to process, store, exchange, and use large quantities of medical data. This Vision 20/20 paper addresses major questions related to the applicability of advanced cloud computing in medical imaging. The paper also considers security and ethical issues that accompany cloud computing.

  7. Distribution quantification on dermoscopy images for computer-assisted diagnosis of cutaneous melanomas.

    Science.gov (United States)

    Liu, Zhao; Sun, Jiuai; Smith, Lyndon; Smith, Melvyn; Warr, Robert

    2012-05-01

    Computerised analysis of skin lesion images has been reported to be helpful in achieving an objective and reproducible diagnosis of melanoma. In particular, asymmetry in shape, colour and structure reflects the irregular growth of melanin under the skin and is of great importance for diagnosing the malignancy of skin lesions. This paper proposes a novel asymmetry analysis based on a newly developed pigmentation elevation model and global point signatures (GPSs). Specifically, the pigmentation elevation model was first constructed by computer-based analysis of dermoscopy images for the identification of melanin and haemoglobin. Asymmetry of skin lesions was then assessed by quantifying distributions of the pigmentation elevation model using the GPSs, derived from a Laplace-Beltrami operator. This new approach allows quantifying the shape and pigmentation distributions of cutaneous lesions simultaneously. Algorithm performance was tested on 351 dermoscopy images, including 88 malignant melanomas and 263 benign naevi, employing a support vector machine (SVM) with a tenfold cross-validation strategy. Competitive diagnostic results were achieved using the proposed asymmetry descriptor alone: 86.36% sensitivity, 82.13% specificity and 83.43% overall accuracy. In addition, the proposed GPS-based asymmetry analysis can work on dermoscopy images from different databases and proved to be inherently robust to external imaging variations. These advantages suggest that the proposed method has good potential for follow-up treatment.
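    The evaluation protocol described above (an SVM with tenfold cross-validation over 88 melanomas and 263 naevi) has a standard scikit-learn form; the feature matrix below is a random stand-in for the GPS-based asymmetry descriptor, and the kernel choice is an assumption.

```python
# Tenfold cross-validated SVM evaluation with stand-in features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
x = rng.random((351, 12))                   # 351 lesions, toy feature vectors
y = np.r_[np.ones(88), np.zeros(263)]       # 88 melanomas, 263 benign naevi

clf = SVC(kernel="rbf", class_weight="balanced")
scores = cross_val_score(clf, x, y, cv=10)  # stratified 10-fold by default
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```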

  8. Design and applications of Computed Industrial Tomographic Imaging System (CITIS)

    International Nuclear Information System (INIS)

    Ramakrishna, G.S.; Umesh Kumar; Datta, S.S.; Rao, S.M.

    1996-01-01

    Computed tomographic imaging is an advanced technique for nondestructive testing (NDT) and examination. For the first time in India, a computer-aided tomography system for testing industrial components has been indigenously developed at BARC and successfully demonstrated. In addition to Computed Tomography (CT), the system can also perform Digital Radiography (DR), making it a powerful tool for NDT applications in the nuclear, space and allied fields. The authors have developed a computed industrial tomographic imaging system with a Cesium-137 gamma radiation source for nondestructive examination of engineering and industrial specimens. This presentation highlights the design and development of a prototype system and its software for image reconstruction, simulation and display. The paper also describes results obtained with several test specimens, current developments, and the possibility of using neutrons as well as high-energy X-rays in computed tomography. (author)

  9. Morphometric image analysis of giant vesicles

    DEFF Research Database (Denmark)

    Husen, Peter Rasmussen; Arriaga, Laura; Monroy, Francisco

    2012-01-01

    We have developed a strategy to determine the lengths and orientations of tie lines in the coexistence region of liquid-ordered and liquid-disordered phases of cholesterol-containing ternary lipid mixtures. The method combines confocal-fluorescence-microscopy image stacks of giant unilamellar vesicles (GUVs), a dedicated 3D-image analysis, and a quantitative analysis based on equilibrium thermodynamic considerations. This approach was tested on GUVs composed of 1,2-dioleoyl-sn-glycero-3-phosphocholine/1,2-palmitoyl-sn-glycero-3-phosphocholine/cholesterol. In general, our results show a reasonable agreement with previously reported data obtained by other methods. For example, our computed tie lines were found to be nonhorizontal, indicating a difference in cholesterol content in the coexisting phases. This new, to our knowledge, analytical strategy offers a way to further exploit fluorescence

  10. Cardiac imaging: working towards fully-automated machine analysis & interpretation.

    Science.gov (United States)

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-03-01

    Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.

  11. Image analysis

    International Nuclear Information System (INIS)

    Berman, M.; Bischof, L.M.; Breen, E.J.; Peden, G.M.

    1994-01-01

    This paper provides an overview of modern image analysis techniques pertinent to materials science. The usual approach in image analysis contains two basic steps: first, the image is segmented into its constituent components (e.g. individual grains), and second, measurement and quantitative analysis are performed. Usually, the segmentation part of the process is the harder of the two. Consequently, much of the paper concentrates on this aspect, reviewing both fundamental segmentation tools (commonly found in commercial image analysis packages) and more advanced segmentation tools. There is also a review of the most widely used quantitative analysis methods for measuring the size, shape and spatial arrangements of objects. Many of the segmentation and analysis methods are demonstrated using complex real-world examples. Finally, there is a discussion of hardware and software issues. 42 refs., 17 figs

  12. Computer model for harmonic ultrasound imaging.

    Science.gov (United States)

    Li, Y; Zagzebski, J A

    2000-01-01

    Harmonic ultrasound imaging has received great attention from ultrasound scanner manufacturers and researchers. In this paper, we present a computer model that can generate realistic harmonic images. In this model, the incident ultrasound is modeled after the "KZK" equation, and the echo signal is modeled using linear propagation theory because the echo signal is much weaker than the incident pulse. Both time domain and frequency domain numerical solutions to the "KZK" equation were studied. Realistic harmonic images of spherical lesion phantoms were generated for scans by a circular transducer. This model can be a very useful tool for studying the harmonic buildup and dissipation processes in a nonlinear medium, and it can be used to investigate a wide variety of topics related to B-mode harmonic imaging.

  13. Colour vision and computer-generated images

    International Nuclear Information System (INIS)

    Ramek, Michael

    2010-01-01

    Colour vision deficiencies affect approximately 8% of the male and approximately 0.4% of the female population. In this work, it is demonstrated that computer-generated images often pose unnecessary problems for colour-deficient viewers. Three examples, the visualization of molecular structures, graphs of mathematical functions, and colour-coded images from numerical data, are used to identify problematic colour combinations: red/black, green/black, red/yellow, yellow/white, fuchsia/white, and aqua/white. Alternatives for these combinations are discussed.

  14. Investigation into diagnostic agreement using automated computer-assisted histopathology pattern recognition image analysis

    Directory of Open Access Journals (Sweden)

    Joshua D Webster

    2012-01-01

    Full Text Available The extent to which histopathology pattern recognition image analysis (PRIA) agrees with microscopic assessment has not been established. Thus, a commercial PRIA platform was evaluated in two applications using whole-slide images. Substantial agreement, lacking significant constant or proportional errors, between PRIA and manual morphometric image segmentation was obtained for pulmonary metastatic cancer areas (Passing/Bablok regression). Bland-Altman analysis indicated heteroscedastic measurements and tendency toward increasing variance with increasing tumor burden, but no significant trend in mean bias. The average between-methods percent tumor content difference was -0.64. Analysis of between-methods measurement differences relative to the percent tumor magnitude revealed that method disagreement had an impact primarily in the smallest measurements (tumor burden 0.988, indicating high reproducibility for both methods, yet PRIA reproducibility was superior (C.V.: PRIA = 7.4, manual = 17.1). Evaluation of PRIA on morphologically complex teratomas led to diagnostic agreement with pathologist assessments of pluripotency on subsets of teratomas. Accommodation of the diversity of teratoma histologic features frequently resulted in detrimental trade-offs, increasing PRIA error elsewhere in images. PRIA error was nonrandom and influenced by variations in histomorphology. File-size limitations encountered while training algorithms and consequences of spectral image processing dominance contributed to diagnostic inaccuracies experienced for some teratomas. PRIA appeared better suited for tissues with limited phenotypic diversity. Technical improvements may enhance diagnostic agreement, and consistent pathologist input will benefit further development and application of PRIA.

  15. Investigation into diagnostic agreement using automated computer-assisted histopathology pattern recognition image analysis.

    Science.gov (United States)

    Webster, Joshua D; Michalowski, Aleksandra M; Dwyer, Jennifer E; Corps, Kara N; Wei, Bih-Rong; Juopperi, Tarja; Hoover, Shelley B; Simpson, R Mark

    2012-01-01

    The extent to which histopathology pattern recognition image analysis (PRIA) agrees with microscopic assessment has not been established. Thus, a commercial PRIA platform was evaluated in two applications using whole-slide images. Substantial agreement, lacking significant constant or proportional errors, between PRIA and manual morphometric image segmentation was obtained for pulmonary metastatic cancer areas (Passing/Bablok regression). Bland-Altman analysis indicated heteroscedastic measurements and tendency toward increasing variance with increasing tumor burden, but no significant trend in mean bias. The average between-methods percent tumor content difference was -0.64. Analysis of between-methods measurement differences relative to the percent tumor magnitude revealed that method disagreement had an impact primarily in the smallest measurements (tumor burden 0.988, indicating high reproducibility for both methods, yet PRIA reproducibility was superior (C.V.: PRIA = 7.4, manual = 17.1). Evaluation of PRIA on morphologically complex teratomas led to diagnostic agreement with pathologist assessments of pluripotency on subsets of teratomas. Accommodation of the diversity of teratoma histologic features frequently resulted in detrimental trade-offs, increasing PRIA error elsewhere in images. PRIA error was nonrandom and influenced by variations in histomorphology. File-size limitations encountered while training algorithms and consequences of spectral image processing dominance contributed to diagnostic inaccuracies experienced for some teratomas. PRIA appeared better suited for tissues with limited phenotypic diversity. Technical improvements may enhance diagnostic agreement, and consistent pathologist input will benefit further development and application of PRIA.

  16. Decision forests for computer vision and medical image analysis

    CERN Document Server

    Criminisi, A

    2013-01-01

    This practical and easy-to-follow text explores the theoretical underpinnings of decision forests, organizing the vast existing literature on the field within a new, general-purpose forest model. Topics and features: with a foreword by Prof. Y. Amit and Prof. D. Geman, recounting their participation in the development of decision forests; introduces a flexible decision forest model, capable of addressing a large and diverse set of image and video analysis tasks; investigates both the theoretical foundations and the practical implementation of decision forests; discusses the use of decision for

  17. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to the advancement in sensor technology, growing large medical image datasets make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. But in the meantime, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383
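
    The pipeline described (dimension reduction, then randomized feature selection for clustering) can be illustrated with off-the-shelf tools. The sketch below substitutes plain PCA for the paper's two-dimensional FPCA and scores random feature subsets by silhouette; this is only a plausible stand-in for the authors' randomized algorithm, and all data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Stand-in for vectorized histology image patches: 200 samples, 32x32 pixels
X = rng.random((200, 32 * 32))

# Step 1: dimension reduction (plain PCA here; the paper extends FPCA to 2D)
scores = PCA(n_components=50).fit_transform(X)

# Step 2: randomized feature selection -- draw random component subsets and
# keep the one whose k-means clustering scores best by silhouette
best_subset, best_score = None, -np.inf
for _ in range(100):
    subset = rng.choice(scores.shape[1], size=10, replace=False)
    labels = KMeans(n_clusters=2, n_init=10,
                    random_state=0).fit_predict(scores[:, subset])
    s = silhouette_score(scores[:, subset], labels)
    if s > best_score:
        best_subset, best_score = subset, s

print("selected components:", sorted(best_subset),
      "silhouette:", round(best_score, 3))
```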

  18. Design and validation of Segment - freely available software for cardiovascular image analysis

    International Nuclear Information System (INIS)

    Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan

    2010-01-01

    Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page (http://segment.heiberg.se). Segment

  19. A new strategic neurosurgical planning tool for brainstem cavernous malformations using interactive computer graphics with multimodal fusion images.

    Science.gov (United States)

    Kin, Taichi; Nakatomi, Hirofumi; Shojima, Masaaki; Tanaka, Minoru; Ino, Kenji; Mori, Harushi; Kunimatsu, Akira; Oyama, Hiroshi; Saito, Nobuhito

    2012-07-01

    In this study, the authors used preoperative simulation employing 3D computer graphics (interactive computer graphics) to fuse all imaging data for brainstem cavernous malformations. The authors evaluated whether interactive computer graphics or 2D imaging correlated better with the actual operative field, particularly in identifying a developmental venous anomaly (DVA). The study population consisted of 10 patients scheduled for surgical treatment of brainstem cavernous malformations. Data from preoperative imaging (MRI, CT, and 3D rotational angiography) were automatically fused using a normalized mutual information method, and then reconstructed by a hybrid method combining surface rendering and volume rendering methods. With surface rendering, multimodality and multithreshold techniques for 1 tissue were applied. The completed interactive computer graphics were used for simulation of surgical approaches and assumed surgical fields. Preoperative diagnostic rates for a DVA associated with brainstem cavernous malformation were compared between conventional 2D imaging and interactive computer graphics employing receiver operating characteristic (ROC) analysis. The time required for reconstruction of 3D images was 3-6 hours for interactive computer graphics. Observation in interactive mode required approximately 15 minutes. Detailed anatomical information for operative procedures, from the craniotomy to microsurgical operations, could be visualized and simulated three-dimensionally as 1 computer graphic using interactive computer graphics. Virtual surgical views were consistent with actual operative views. This technique was very useful for examining various surgical approaches. Mean (±SEM) area under the ROC curve for rate of DVA diagnosis was significantly better for interactive computer graphics (1.000±0.000) than for 2D imaging (0.766±0.091), indicating that DVAs were identified more reliably with interactive computer graphics than with 2D images. Interactive computer graphics was also useful in helping to plan the surgical

  20. MDA-image: an environment of networked desktop computers for teleradiology/pathology.

    Science.gov (United States)

    Moffitt, M E; Richli, W R; Carrasco, C H; Wallace, S; Zimmerman, S O; Ayala, A G; Benjamin, R S; Chee, S; Wood, P; Daniels, P

    1991-04-01

    MDA-Image, a project of The University of Texas M. D. Anderson Cancer Center, is an environment of networked desktop computers for teleradiology/pathology. Radiographic film is digitized with a film scanner and histopathologic slides are digitized using a red, green, and blue (RGB) video camera connected to a microscope. Digitized images are stored on a data server connected to the institution's computer communication network (Ethernet) and can be displayed from authorized desktop computers connected to Ethernet. Images are digitized for cases presented at the Bone Tumor Management Conference, a multidisciplinary conference in which treatment options are discussed among clinicians, surgeons, radiologists, pathologists, radiotherapists, and medical oncologists. These radiographic and histologic images are shown on a large screen computer monitor during the conference. They are available for later review for follow-up or representation.

  1. A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging.

    Science.gov (United States)

    Bagci, Ulas; Foster, Brent; Miller-Jaster, Kirsten; Luna, Brian; Dey, Bappaditya; Bishai, William R; Jonsson, Colleen B; Jain, Sanjay; Mollura, Daniel J

    2013-07-23

    Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers' understanding of infectious diseases. We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were

  2. Students’ needs of Computer Science: learning about image processing

    Directory of Open Access Journals (Sweden)

    Juana Marlen Tellez Reinoso

    2009-12-01

    Full Text Available Learning image processing, specifically in the Adobe Photoshop application, is one of the objectives of the Bachelor of Education degree in Computer Science, intended to guarantee the preparation of students as future professionals and to help every citizen of our country attain a comprehensive general culture. For that purpose, a tutorial-type computer application entitled "Learning Treatment to Image" is proposed.

  3. Neural computation of visual imaging based on Kronecker product in the primary visual cortex

    Directory of Open Access Journals (Sweden)

    Guozheng Yao

    2010-03-01

    Full Text Available Abstract Background What kind of neural computation is actually performed by the primary visual cortex, and how is this represented mathematically at the system level? It is an important problem in visual information processing, but has not been well answered. In this paper, according to our understanding of retinal organization and parallel multi-channel topographical mapping between retina and primary visual cortex V1, we divide an image into an orthogonal and orderly array of image primitives (or patches), in which each patch will evoke activities of simple cells in V1. From the viewpoint of information processing, this activation process essentially involves optimal detection and optimal matching of receptive fields of simple cells with features contained in image patches. For the reconstruction of the visual image in the visual cortex V1 based on the principle of minimum mean-square error, it is natural to use the inner-product expression in neural computation, which is then transformed into matrix form. Results The inner product is carried out using the Kronecker product between patches and the functional architecture (or functional columns) in localized and oriented neural computing. Compared with the Fourier transform, the mathematical description of the Kronecker product is simple and intuitive, making the algorithm more suitable for the neural computation of visual cortex V1. Results of computer simulation based on two-dimensional Gabor pyramid wavelets show that the theoretical analysis and the proposed model are reasonable. Conclusions Our results are: 1. The neural computation of the retinal image in cortex V1 can be expressed as a Kronecker product operation in matrix form; this algorithm is implemented by the inner-product operation between retinal image primitives and the primary visual cortex's functional columns. It is simple, efficient and robust, and is therefore a neural algorithm that biological vision could implement. 2. It is more suitable
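
    The patch/inner-product structure sketched in this abstract is easy to prototype: split an image into non-overlapping primitives, take inner products with a small bank of Gabor-like receptive fields, and reconstruct by least squares. The NumPy toy below only illustrates that structure (the image, filter parameters and sizes are arbitrary choices); it is not the authors' model.

```python
import numpy as np

def gabor(k, theta, sigma=2.0, lam=4.0):
    """A k x k Gabor-like receptive field (toy model of a V1 simple cell)."""
    half = k // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g / np.linalg.norm(g)

k = 8
img = np.random.default_rng(1).random((64, 64))   # stand-in retinal image

# Orthogonal, orderly array of k x k image primitives (patches)
patches = img.reshape(64 // k, k, 64 // k, k).swapaxes(1, 2).reshape(-1, k * k)

# Bank of oriented receptive fields (4 orientations)
bank = np.stack([gabor(k, t).ravel()
                 for t in np.linspace(0, np.pi, 4, endpoint=False)])

# The block-wise transform of the whole image is kron(I, bank) applied to the
# stacked patch vectors, so it collapses to a single matrix product:
responses = patches @ bank.T                      # (n_patches, n_orientations)

# Minimum mean-square error reconstruction of each patch from its responses
recon = responses @ np.linalg.pinv(bank).T
print("responses:", responses.shape,
      "reconstruction MSE:", float(np.mean((recon - patches) ** 2)))
```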

  4. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly a half-million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  5. Image segmentation and particles classification using texture analysis method

    Directory of Open Access Journals (Sweden)

    Mayar Aly Atteya

    Full Text Available Introduction: Ingredients of oily fish include a large amount of polyunsaturated fatty acids, which are important elements in various metabolic processes of humans, and have also been used to prevent diseases. However, in an attempt to reduce cost, recent developments are starting to replace the ingredients of fish oil with products of microalgae, which also produce polyunsaturated fatty acids. To do so, it is important to closely monitor morphological changes in algae cells and monitor their age in order to achieve the best results. This paper aims to describe an advanced vision-based system to automatically detect, classify, and track the organic cells using a recently developed SOPAT system (Smart On-line Particle Analysis Technology), a photo-optical image acquisition device combined with innovative image analysis software. Methods: The proposed method includes image de-noising, binarization and enhancement, as well as object recognition, localization and classification based on the analysis of particles' size and texture. Results: The method allowed for correctly computing cell size for each particle separately. By computing an area histogram for the input images (1 h, 18 h, and 42 h), the variation could be observed, showing a clear increase in cell size. Conclusion: The proposed method allows algae particles to be correctly identified with accuracies up to 99% and classified correctly with accuracies up to 100%.
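
    The size-analysis portion of such a pipeline (binarization, particle labeling, and the area histogram mentioned in the results) can be sketched with SciPy. Texture features are omitted, and the synthetic "micrograph" below merely stands in for real SOPAT images.

```python
import numpy as np
from scipy import ndimage

def particle_areas(image, threshold):
    """Binarize an image, label connected particles, and return their areas."""
    binary = image > threshold                 # simple global binarization
    labels, n = ndimage.label(binary)          # connected-component labeling
    return ndimage.sum(np.ones_like(image), labels, index=range(1, n + 1))

rng = np.random.default_rng(2)
frame = ndimage.gaussian_filter(rng.random((256, 256)), sigma=3)  # stand-in

areas = particle_areas(frame, threshold=frame.mean() + frame.std())
hist, edges = np.histogram(areas, bins=20)     # area histogram, as in the paper
print(f"{len(areas)} particles, median area {np.median(areas):.0f} px")
```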

  6. Realization of the FPGA-based reconfigurable computing environment by the example of morphological processing of a grayscale image

    Science.gov (United States)

    Shatravin, V.; Shashev, D. V.

    2018-05-01

    Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. Results of research worldwide prove the efficiency of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available in the robotic device. The paper describes the results of applying an original approach for image processing on reconfigurable computing environments, using morphological operations over grayscale images as an example. This approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.

  7. Computer-simulated images of icosahedral, pentagonal and decagonal clusters of atoms

    International Nuclear Information System (INIS)

    Peng JuLin; Bursill, L.A.

    1989-01-01

    The aim of this work was to assess, by computer simulation, the sensitivity of high-resolution electron microscopy (HREM) images for a set of icosahedral and decagonal clusters containing 50-400 atoms. An experimental study of both crystalline and quasi-crystalline alloys of Al(Si)Mn is presented, in which carefully chosen electron optical conditions were established by computer simulation and then used to obtain high quality images. It was concluded that while a very significant degree of model sensitivity is available, direct inversion from image to structure is not a realistic possibility. A reasonable procedure would be to record experimental images of known complex icosahedral alloys in a crystalline phase, then use the computer simulations to identify fingerprint imaging conditions whereby certain structural elements could be identified in images of quasi-crystalline or amorphous specimens. 27 refs., 12 figs., 1 tab

  8. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    Science.gov (United States)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  9. Classification of bacterial contamination using image processing and distributed computing.

    Science.gov (United States)

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B

    2013-01-01

    Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enables us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light scatter-based approach to bacterial classification.
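
    A compact stand-in for the described pipeline is univariate ANOVA F-scoring (similar in spirit to the Fisher's discriminant ranking used by the authors) followed by a linear-kernel SVM with cross-validation, as sketched below. The feature matrix is random placeholder data, not actual Zernike/Chebyshev moments or Haralick textures.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Placeholder for 1000 scatter patterns x 120 features, 10 bacterial strains
X = rng.random((1000, 120))
y = rng.integers(0, 10, size=1000)

# Feature selection + linear SVM, mirroring the pipeline described above
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=20),
                    SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")  # ~chance on random stand-in data
```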

  10. RATIO_TOOL - SOFTWARE FOR COMPUTING IMAGE RATIOS

    Science.gov (United States)

    Yates, G. L.

    1994-01-01

    Geological studies analyze spectral data in order to gain information on surface materials. RATIO_TOOL is an interactive program for viewing and analyzing large multispectral image data sets that have been created by an imaging spectrometer. While the standard approach to classification of multispectral data is to match the spectrum for each input pixel against a library of known mineral spectra, RATIO_TOOL uses ratios of spectral bands in order to spot significant areas of interest within a multispectral image. Each image band can be viewed iteratively, or a selected image band of the data set can be requested and displayed. When the image ratios are computed, the result is displayed as a gray scale image. At this point a histogram option helps in viewing the distribution of values. A thresholding option can then be used to segment the ratio image result into two to four classes. The segmented image is then color coded to indicate threshold classes and displayed alongside the gray scale image. RATIO_TOOL is written in C language for Sun series computers running SunOS 4.0 and later. It requires the XView toolkit and the OpenWindows window manager (version 2.0 or 3.0). The XView toolkit is distributed with Open Windows. A color monitor is also required. The standard distribution medium for RATIO_TOOL is a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation is included on the program media. RATIO_TOOL was developed in 1992 and is a copyrighted work with all copyright vested in NASA. Sun, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
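
    RATIO_TOOL's core operations (band ratioing, gray-scale display, histogramming, and thresholding into two to four classes) are simple to express. Below is a small NumPy sketch of that workflow on a synthetic multispectral cube; the band choices and thresholds are arbitrary placeholders.

```python
import numpy as np

def ratio_image(band_a, band_b, eps=1e-6):
    """Ratio of two spectral bands, rescaled to 0-255 gray levels."""
    r = band_a / (band_b + eps)
    r = (r - r.min()) / (r.max() - r.min() + eps)
    return (255 * r).astype(np.uint8)

def segment(gray, thresholds):
    """Segment a gray-scale ratio image into len(thresholds)+1 classes."""
    return np.digitize(gray, np.asarray(thresholds))

rng = np.random.default_rng(4)
cube = rng.random((5, 128, 128))          # stand-in multispectral cube (5 bands)

gray = ratio_image(cube[3], cube[1])      # ratio of two chosen bands
classes = segment(gray, [64, 128, 192])   # two-to-four classes via thresholds
print("class counts:", np.bincount(classes.ravel()))
```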

  11. Digital imaging of root traits (DIRT): a high-throughput computing and collaboration platform for field-based root phenomics.

    Science.gov (United States)

    Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander

    2015-01-01

    Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate the progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual low-throughput. Here, we present an open-source phenomics platform "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons" enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots

  12. Positron emission tomography: Physics, instrumentation, and image analysis

    International Nuclear Information System (INIS)

    Porenta, G.

    1994-01-01

    Positron emission tomography (PET) is a noninvasive diagnostic technique that permits reconstruction of cross-sectional images of the human body which depict the biodistribution of PET tracer substances. A large variety of physiological PET tracers, mostly based on isotopes of carbon, nitrogen, oxygen, and fluorine, is available and allows the in vivo investigation of organ perfusion, metabolic pathways and biomolecular processes in normal and diseased states. PET cameras utilize the physical characteristics of positron decay to derive quantitative measurements of tracer concentrations, a capability that has so far been elusive for conventional SPECT (single photon emission computed tomography) imaging techniques. Due to the short half-lives of most PET isotopes, an on-site cyclotron and a radiochemistry unit are necessary to provide an adequate supply of PET tracers. While operating a PET center was in the past a complex procedure restricted to a few academic centers with ample resources, PET technology has rapidly advanced in recent years and has entered the commercial nuclear medicine market. To date, the availability of compact cyclotrons with remote computer control, automated synthesis units for PET radiochemistry, high-performance PET cameras, and user-friendly analysis workstations permits installation of a clinical PET center within most nuclear medicine facilities. This review provides simple descriptions of important aspects concerning physics, instrumentation, and image analysis in PET imaging which should be understood by medical personnel involved in the clinical operation of a PET imaging center. (author)

  13. Comparison of computer workstation with light box for detecting setup errors from portal images

    International Nuclear Information System (INIS)

    Boxwala, Aziz A.; Chaney, Edward L.; Fritsch, Daniel S.; Raghavan, Suraj; Coffey, Christopher S.; Major, Stacey A.; Muller, Keith E.

    1999-01-01

    Purpose: Observer studies were conducted to test the hypothesis that radiation oncologists using a computer workstation for portal image analysis can detect setup errors at least as accurately as when following standard clinical practice of inspecting portal films on a light box. Methods and Materials: In a controlled observer study, nine radiation oncologists used a computer workstation, called PortFolio, to detect setup errors in 40 realistic digitally reconstructed portal radiograph (DRPR) images. PortFolio is a prototype workstation for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools for image enhancement; alignment of crosshairs, field edges, and anatomic structures on reference and acquired images; measurement of distances and angles; and viewing registered images superimposed on one another. The test DRPRs contained known in-plane translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Test images used in the study were also printed on film for observers to view on a light box and interpret using standard clinical practice. The mean accuracy for error detection for each approach was measured and the results were compared using repeated measures analysis of variance (ANOVA) with the Geisser-Greenhouse test statistic. Results: The results indicate that radiation oncologists participating in this study could detect and quantify in-plane rotation and translation errors more accurately with PortFolio compared to standard clinical practice. Conclusions: Based on the results of this limited study, it is reasonable to conclude that workstations similar to PortFolio can be used efficaciously in clinical practice

  14. Dense image correspondences for computer vision

    CERN Document Server

    Liu, Ce

    2016-01-01

    This book describes the fundamental building-block of many new computer vision systems: dense and robust correspondence estimation. Dense correspondence estimation techniques are now successfully being used to solve a wide range of computer vision problems, very different from the traditional applications such techniques were originally developed to solve. This book introduces the techniques used for establishing correspondences between challenging image pairs, the novel features used to make these techniques robust, and the many problems dense correspondences are now being used to solve. The book provides information to anyone attempting to utilize dense correspondences in order to solve new or existing computer vision problems. The editors describe how to solve many computer vision problems by using dense correspondence estimation. Finally, it surveys resources, code, and data necessary for expediting the development of effective correspondence-based computer vision systems.

  15. Optoelectronic Computer Architecture Development for Image Reconstruction

    National Research Council Canada - National Science Library

    Forber, Richard

    1996-01-01

    .... Specifically, we collaborated with UCSD and ERIM on the development of an optically augmented electronic computer for high speed inverse transform calculations to enable real time image reconstruction...

  16. Myths versus reality in computed radiography image quality

    International Nuclear Information System (INIS)

    Mango, Steve; Castro, Luiz

    2009-01-01

    As NDE operations - particularly radiographic testing - transition from analog to digital technologies such as computed radiography (CR), users are learning that there is more to digital image quality than meets the eye. In fact, there are multiple factors that determine the final perceived image quality of a computed radiograph. Many of these factors are misunderstood, and some are touted as the ''key parameter'' or ''magic bullet'' in producing optimum image quality. In reality, such claims are oversimplified, and are more marketing hype than reality. The truth? Perceived image quality results from the cascaded effects of many factors - such as sharpness, system noise, spot size and pixel size, subject contrast, bit depth, radiographic technique, and so on. Many of these factors are within the control of radiographers or designers of equipment and media. This paper will explain some of these key factors, dispel some of the myths surrounding them, and show that qualities such as bigger, smaller, more, or less are not always better when it comes to CR image quality. (authors)

  17. X-ray image processing software for computing object size and object location coordinates from acquired optical and x-ray images

    International Nuclear Information System (INIS)

    Tiwari, Akash; Tiwari, Shyam Sunder; Tiwari, Railesha; Panday, Lokesh; Panday, Jeet; Suri, Nitin

    2004-01-01

    X-ray and visible image data processing software has been developed in Visual Basic for real-time online and offline image information processing for NDT and medical applications. The software computes two-dimensional image size parameters from sharp boundary lines by raster scanning the image contrast data. The code accepts bitmap image data and hunts for multiple tumors of different sizes that may be present in the image, then computes the size of each tumor and locates its approximate center to register its location coordinates. The presence of foreign metal and glass balls in industrial products such as chocolate and other food items, imaged using the x-ray technique, is detected by the software, and their sizes and position coordinates are computed. The paper discusses ways and means to compute the size and coordinates of air-bubble-like objects present in x-ray and optical images, and their multiple occurrences in the image of interest. (author)

  18. Imaging biofilm in porous media using X-ray computed microtomography.

    Science.gov (United States)

    Davit, Y; Iltis, G; Debenest, G; Veran-Tissoires, S; Wildenschild, D; Gerino, M; Quintard, M

    2011-04-01

    In this study, a new technique for three-dimensional imaging of biofilm within porous media using X-ray computed microtomography is presented. Due to the similarity in X-ray absorption coefficients for the porous media (plastic), biofilm and aqueous phase, an X-ray contrast agent is required to image biofilm within the experimental matrix using X-ray computed tomography. The presented technique utilizes a medical suspension of barium sulphate to differentiate between the aqueous phase and the biofilm. Potassium iodide is added to the suspension to aid in delineation between the biofilm and the experimental porous medium. The iodide readily diffuses into the biofilm while the barium sulphate suspension remains in the aqueous phase. This allows for effective differentiation of the three phases within the experimental systems utilized in this study. The behaviour of the two contrast agents, in particular of the barium sulphate, is addressed by comparing two-dimensional images of biofilm within a pore network obtained by (1) optical visualization and (2) X-ray absorption radiography. We show that the contrast mixture provides contrast between the biofilm, the aqueous-phase and the solid-phase (beads). The imaging method is then applied to two three-dimensional packed-bead columns within which biofilm was grown. Examples of reconstructed images are provided to illustrate the effectiveness of the method. Limitations and applications of the technique are discussed. A key benefit, associated with the presented method, is that it captures a substantial amount of information regarding the topology of the pore-scale transport processes. For example, the quantification of changes in porous media effective parameters, such as dispersion or permeability, induced by biofilm growth, is possible using specific upscaling techniques and numerical analysis. We emphasize that the results presented here serve as a first test of this novel approach; issues with accurate segmentation of
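
    With the two contrast agents separating the gray-value ranges of the three phases, segmentation reduces, in the simplest reading, to two intensity thresholds. The sketch below is a deliberately simplified, hypothetical version of that step on a synthetic volume; the threshold values and the phase ordering are assumptions, not the authors' calibration.

```python
import numpy as np

def three_phase_segmentation(volume, t_low, t_high):
    """Label voxels as solid (0), biofilm (1), or aqueous phase (2).

    Assumes the plastic beads absorb least, the iodide-stained biofilm is
    intermediate, and the barium-laden aqueous phase absorbs most.
    """
    labels = np.full(volume.shape, 1, dtype=np.uint8)  # biofilm by default
    labels[volume < t_low] = 0                         # low-absorbing solid
    labels[volume >= t_high] = 2                       # aqueous phase
    return labels

rng = np.random.default_rng(7)
ct = rng.normal(100, 40, size=(64, 64, 64))            # stand-in reconstruction
phases = three_phase_segmentation(ct, t_low=70, t_high=140)
print("voxel fractions:", np.bincount(phases.ravel()) / phases.size)
```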

  19. Multicenter study of quantitative computed tomography analysis using a computer-aided three-dimensional system in patients with idiopathic pulmonary fibrosis.

    Science.gov (United States)

    Iwasawa, Tae; Kanauchi, Tetsu; Hoshi, Toshiko; Ogura, Takashi; Baba, Tomohisa; Gotoh, Toshiyuki; Oba, Mari S

    2016-01-01

    To evaluate the feasibility of automated quantitative analysis of computed tomography (CT) images from different scanners using a three-dimensional (3D) computer-aided system (i.e., Gaussian histogram normalized correlation, GHNC). Each institution's review board approved the research protocol. Informed patient consent was not required. The participants in this multicenter prospective study were 80 patients (65 men, 15 women) with idiopathic pulmonary fibrosis. Their mean age was 70.6 years. CT images were obtained by four different scanners set at different exposures. We measured the extent of fibrosis using GHNC, and used Pearson's correlation analysis, Bland-Altman plots, and kappa analysis to directly compare the GHNC results with manual scoring by radiologists. Multiple linear regression analysis was performed to determine the association between the CT data and forced vital capacity (FVC). For each scanner, the extent of fibrosis as determined by GHNC was significantly correlated with the radiologists' score. In multivariate analysis, the extent of fibrosis as determined by GHNC was significantly correlated with FVC (p < 0.001). There was no significant difference between the results obtained using different CT scanners. Gaussian histogram normalized correlation was feasible, irrespective of the type of CT scanner used.
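
    The statistical comparisons named here are one-liners in SciPy. The sketch below runs a Pearson correlation between automated and manual fibrosis scores and a simple linear regression against FVC (the study used multiple regression with further covariates); every value is synthetic, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical per-patient values: GHNC fibrosis extent (%), a radiologist
# score, and forced vital capacity (% predicted)
ghnc   = rng.uniform(5, 45, size=80)
manual = ghnc + rng.normal(0, 4, size=80)
fvc    = 100 - 1.2 * ghnc + rng.normal(0, 8, size=80)

# Agreement between GHNC and the radiologists' score
r, p = stats.pearsonr(ghnc, manual)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")

# Association between fibrosis extent and FVC
slope, intercept, rval, p_fvc, se = stats.linregress(ghnc, fvc)
print(f"FVC ~ {intercept:.1f} + {slope:.2f} x fibrosis (p = {p_fvc:.1e})")
```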

  20. Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning.

    Science.gov (United States)

    van Ginneken, Bram

    2017-03-01

    Half a century ago, the term "computer-aided diagnosis" (CAD) was introduced in the scientific literature. Pulmonary imaging, with chest radiography and computed tomography, has always been one of the focus areas in this field. In this study, I describe how machine learning became the dominant technology for tackling CAD in the lungs, generally producing better results than do classical rule-based approaches, and how the field is now rapidly changing: in the last few years, we have seen how even better results can be obtained with deep learning. The key differences among rule-based processing, machine learning, and deep learning are summarized and illustrated for various applications of CAD in the chest.

  1. Computational analysis of cerebral cortex

    Energy Technology Data Exchange (ETDEWEB)

    Takao, Hidemasa; Abe, Osamu; Ohtomo, Kuni [University of Tokyo, Department of Radiology, Graduate School of Medicine, Tokyo (Japan)

    2010-08-15

    Magnetic resonance imaging (MRI) has been used in many in vivo anatomical studies of the brain. Computational neuroanatomy is an expanding field of research, and a number of automated, unbiased, objective techniques have been developed to characterize structural changes in the brain using structural MRI without the need for time-consuming manual measurements. Voxel-based morphometry is one of the most widely used automated techniques to examine patterns of brain changes. Cortical thickness analysis is also becoming increasingly used as a tool for the study of cortical anatomy. Both techniques can be relatively easily used with freely available software packages. MRI data quality is important in order for the processed data to be accurate. In this review, we describe MRI data acquisition and preprocessing for morphometric analysis of the brain and present a brief summary of voxel-based morphometry and cortical thickness analysis. (orig.)

  2. Computational analysis of cerebral cortex

    International Nuclear Information System (INIS)

    Takao, Hidemasa; Abe, Osamu; Ohtomo, Kuni

    2010-01-01

    Magnetic resonance imaging (MRI) has been used in many in vivo anatomical studies of the brain. Computational neuroanatomy is an expanding field of research, and a number of automated, unbiased, objective techniques have been developed to characterize structural changes in the brain using structural MRI without the need for time-consuming manual measurements. Voxel-based morphometry is one of the most widely used automated techniques to examine patterns of brain changes. Cortical thickness analysis is also becoming increasingly used as a tool for the study of cortical anatomy. Both techniques can be relatively easily used with freely available software packages. MRI data quality is important in order for the processed data to be accurate. In this review, we describe MRI data acquisition and preprocessing for morphometric analysis of the brain and present a brief summary of voxel-based morphometry and cortical thickness analysis. (orig.)

  3. Design and applications of Computed Industrial Tomographic Imaging System (CITIS)

    Energy Technology Data Exchange (ETDEWEB)

    Ramakrishna, G S; Kumar, Umesh; Datta, S S [Bhabha Atomic Research Centre, Bombay (India). Isotope Div.

    1994-12-31

    This paper highlights the design and development of a prototype Computed Tomographic (CT) imaging system and its software for image reconstruction, simulation and display. It also describes results obtained with several test specimens, including a Dhruva reactor uranium fuel assembly, and the possibility of using neutrons as well as high-energy x-rays in computed tomography. 5 refs., 4 figs.

  4. The diffractive achromat: full spectrum computational imaging with diffractive optics

    KAUST Repository

    Peng, Yifan

    2016-07-11

    Diffractive optical elements (DOEs) have recently drawn great attention in computational imaging because they can drastically reduce the size and weight of imaging devices compared to their refractive counterparts. However, the inherent strong dispersion is a tremendous obstacle that limits the use of DOEs in full spectrum imaging, causing unacceptable loss of color fidelity in the images. In particular, metamerism introduces a data dependency in the image blur, which has been neglected in computational imaging methods so far. We introduce both a diffractive achromat based on computational optimization, as well as a corresponding algorithm for correction of residual aberrations. Using this approach, we demonstrate high-fidelity color diffractive-only imaging over the full visible spectrum. In the optical design, the height profile of a diffractive lens is optimized to balance the focusing contributions of different wavelengths for a specific focal length. The spectral point spread functions (PSFs) become nearly identical to each other, creating approximately spectrally invariant blur kernels. This property guarantees good color preservation in the captured image and facilitates the correction of residual aberrations in our fast two-step deconvolution without additional color priors. We demonstrate our design of the diffractive achromat on a 0.5 mm ultrathin substrate by photolithography techniques. Experimental results show that our achromatic diffractive lens produces high color fidelity and better image quality in the full visible spectrum. © 2016 ACM.
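
    Because the optimized height profile makes the spectral PSFs nearly identical, residual blur can be removed with a single spectrally invariant kernel. The sketch below uses plain Wiener deconvolution as a stand-in for the paper's two-step method, with a synthetic image and a Gaussian PSF in place of the real optics.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution with one blur kernel."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)        # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

rng = np.random.default_rng(6)
img = rng.random((128, 128))                        # stand-in scene
y, x = np.mgrid[-64:64, -64:64]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
print("residual MSE:", float(np.mean((restored - img) ** 2)))
```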

  5. Interpretation of medical images by model guided analysis

    International Nuclear Information System (INIS)

    Karssemeijer, N.

    1989-01-01

    Progress in the development of digital pictorial information systems stimulates a growing interest in the use of image analysis techniques in medicine. Especially when precise quantitative information is required, the use of fast and reproducible computer analysis may be more appropriate than relying on visual judgement only. Such quantitative information can be valuable, for instance, in diagnostics or in irradiation therapy planning. As medical images are mostly recorded in a prescribed way, human anatomy guarantees a common image structure for each particular type of exam. In this thesis it is investigated how to make use of this a priori knowledge to guide image analysis. For that purpose, models are developed which are suited to capture common image structure. The first part of this study is devoted to an analysis of nuclear medicine images of myocardial perfusion. In ch. 2 a model of these images is designed in order to represent characteristic image properties. It is shown that for these relatively simple images a compact symbolic description can be achieved without significant loss of the diagnostic importance of several image properties. Possibilities for automatic interpretation of more complex images are investigated in the following chapters. The central topic is segmentation of organs. Two methods are proposed and tested on a set of abdominal X-ray CT scans. Ch. 3 describes a serial approach based on a semantic network and the use of search areas. Relational constraints are used to guide the image processing and to classify detected image segments. In chs. 4 and 5 a more general parallel approach is utilized, based on a Markov random field image model. A stochastic model used to represent prior knowledge about the spatial arrangement of organs is implemented as an external field. (author). 66 refs.; 27 figs.; 6 tabs

  6. Computing a Non-trivial Lower Bound on the Joint Entropy between Two Images

    Energy Technology Data Exchange (ETDEWEB)

    Perumalla, Kalyan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-03-01

    In this report, a non-trivial lower bound on the joint entropy of two non-identical images is developed, which is greater than the individual entropies of the images. The lower bound is the least joint entropy possible among all pairs of images that have the same histograms as those of the given images. New algorithms are presented to compute the joint entropy lower bound with a computation time proportional to S log S, where S is the number of histogram bins of the images. This is faster than the traditional methods of computing the exact joint entropy, whose computation time is quadratic in S.
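
    One plausible O(S log S) scheme consistent with the report's complexity claim is a greedy minimum-entropy coupling of the two histograms: repeatedly pair the largest remaining probability masses and assign their minimum to one joint cell. The heap-based sketch below implements that greedy scheme; it is an assumption for illustration, not necessarily the report's exact algorithm.

```python
import heapq
import numpy as np

def greedy_coupling_entropy(p, q, eps=1e-12):
    """Joint entropy (bits) of a greedy coupling of two histograms p, q."""
    hp = [(-v, i) for i, v in enumerate(p) if v > 0]   # max-heaps via negation
    hq = [(-v, j) for j, v in enumerate(q) if v > 0]
    heapq.heapify(hp)
    heapq.heapify(hq)
    h = 0.0
    while hp and hq:
        pv, i = heapq.heappop(hp)
        qv, j = heapq.heappop(hq)
        m = min(-pv, -qv)              # joint mass assigned to cell (i, j)
        h -= m * np.log2(m)
        if -pv - m > eps:              # push back the unmatched remainder
            heapq.heappush(hp, (pv + m, i))
        if -qv - m > eps:
            heapq.heappush(hq, (qv + m, j))
    return h

p = np.array([0.5, 0.3, 0.2])          # normalized histogram of image A
q = np.array([0.6, 0.25, 0.15])        # normalized histogram of image B
print("greedy joint entropy:", round(greedy_coupling_entropy(p, q), 4))
print("marginal entropies:",
      round(float(-(p * np.log2(p)).sum()), 4),
      round(float(-(q * np.log2(q)).sum()), 4))
```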

  7. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    Science.gov (United States)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade

  8. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

    Full text of the publication follows. The evaluation of the effects of ionizing radiation and the risk of radiation exposure on the human body has become one of the most important issues in the radiation protection and radiotherapy fields, helping to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct more realistic computational phantoms. However, manual description and verification of the models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested on several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Plan System (TPS), as well as radiation exposure of the human body in radiation protection. (authors)

  9. A projection graphic display for the computer aided analysis of bubble chamber images

    International Nuclear Information System (INIS)

    Solomos, E.

    1979-01-01

    A projection graphic display for aiding the analysis of bubble chamber photographs has been developed by the Instrumentation Group of EF Division at CERN. The display image is generated on a very high brightness cathode ray tube and projected on to the table of the scanning-measuring machines as a superposition to the image of the bubble chamber. The display can send messages to the operator and aid the measurement by indicating directly on the chamber image the tracks which are measured correctly or not. (orig.)

  10. Three-dimensional analysis and classification of arteries in the skin and subcutaneous adipofascial tissue by computer graphics imaging.

    Science.gov (United States)

    Nakajima, H; Minabe, T; Imanishi, N

    1998-09-01

    To develop new types of surgical flaps that utilize portions of the skin and subcutaneous tissue (e.g., a thin flap or an adipofascial flap), three-dimensional investigation of the vasculature in the skin and subcutaneous tissue has been anticipated. In the present study, total-body arterial injection and three-dimensional imaging of the arteries by computer graphics were performed. The full-thickness skin and subcutaneous adipofascial tissue samples, which were obtained from fresh human cadavers injected with radio-opaque medium, were divided into three distinct layers. Angiograms of each layer were introduced into a personal computer to construct three-dimensional images. On a computer monitor, each artery was shown color-coded according to the three portions: the deep adipofascial layer, superficial adipofascial layer, and dermis. Three-dimensional computerized images of each artery in the skin and subcutaneous tissue revealed the components of each vascular plexus and permitted their classification into six types. The distribution of types in the body correlated with the tissue mobility of each area. Clinically, appreciation of the three-dimensional structure of the arteries allowed the development of several new kinds of flaps.

  11. PyElph - a software tool for gel images analysis and phylogenetics

    Directory of Open Access Journals (Sweden)

    Pavel Ana Brânduşa

    2012-01-01

    Full Text Available Abstract Background This paper presents PyElph, a software tool which automatically extracts data from gel images, computes the molecular weights of the analyzed molecules or fragments, compares DNA patterns which result from experiments with molecular genetic markers and also generates phylogenetic trees computed by five clustering methods, using the information extracted from the analyzed gel image. The software can be successfully used for population genetics, phylogenetics, taxonomic studies and other applications which require gel image analysis. Researchers and students working in molecular biology and genetics would benefit greatly from the proposed software because it is free, open source, easy to use, has a friendly Graphical User Interface and does not depend on specific image acquisition devices like other commercial programs with similar functionalities do. Results PyElph is entirely implemented in Python, which is a very popular programming language in the bioinformatics community. It provides a very friendly Graphical User Interface, designed in six steps that gradually lead to the results. The user is guided through the following steps: image loading and preparation, lane detection, band detection, molecular weight computation based on a molecular weight marker, band matching and, finally, the computation and visualization of phylogenetic trees. A strong point of the software is the visualization component for the processed data. The Graphical User Interface provides operations for image manipulation and highlights lanes, bands and band matching in the analyzed gel image. All the data and images generated in each step can be saved. The software has been tested on several DNA patterns obtained from experiments with different genetic markers. Examples of genetic markers which can be analyzed using PyElph are RFLP (Restriction Fragment Length Polymorphism), AFLP (Amplified Fragment Length Polymorphism) and RAPD
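
    The molecular-weight step typically exploits the near-linear relationship between migration distance and the logarithm of molecular weight. Below is a minimal sketch of that calibration with hypothetical marker positions and fragment sizes; PyElph's actual implementation may differ.

```python
import numpy as np

def molecular_weights(marker_pos, marker_wt, band_pos):
    """Estimate band molecular weights from a marker (ladder) lane.

    Fits log10(weight) as a linear function of migration distance and
    evaluates the fit at each detected band position.
    """
    coeffs = np.polyfit(marker_pos, np.log10(marker_wt), deg=1)
    return 10 ** np.polyval(coeffs, band_pos)

# Hypothetical ladder: pixel positions and known fragment sizes (base pairs)
marker_pos = np.array([120, 180, 260, 350, 450])
marker_bp  = np.array([3000, 1500, 700, 300, 100])

bands = np.array([200, 310])            # detected bands in a sample lane
print(np.round(molecular_weights(marker_pos, marker_bp, bands)))
```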

  12. A computer code to simulate X-ray imaging techniques

    International Nuclear Information System (INIS)

    Duvauchelle, Philippe; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-01-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests
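
    The physics at the heart of such a simulation is the X-ray attenuation (Beer-Lambert) law applied along each traced ray. A minimal sketch, with illustrative attenuation coefficients rather than tabulated values:

```python
import numpy as np

def transmitted_fraction(mu, lengths):
    """Attenuation law for one ray: I/I0 = exp(-sum(mu_i * L_i)).

    mu: linear attenuation coefficients (1/cm) of the materials crossed;
    lengths: path length of the ray through each material (cm).
    """
    return float(np.exp(-np.dot(mu, lengths)))

# A ray crossing 2 cm of an aluminium-like material and a 0.3 cm air gap
mu      = np.array([0.55, 0.0002])   # illustrative values, not tabulated data
lengths = np.array([2.0, 0.3])
print(f"transmitted fraction: {transmitted_fraction(mu, lengths):.3f}")
```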

  13. A computer code to simulate X-ray imaging techniques

    Energy Technology Data Exchange (ETDEWEB)

    Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-09-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects, and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can easily be taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual-energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the order of magnitude of the computation time can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC with a 400 MHz microprocessor. Our simulation tool proves useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.

  14. Computer-assisted instruction; MR imaging of congenital heart disease

    International Nuclear Information System (INIS)

    Choi, Young Hi; Yu, Pil Mun; Lee, Sang Hoon; Choe, Yeon Hyeon; Kim, Yang Min

    1996-01-01

    To develop a software program for computer-assisted instruction on MR imaging of congenital heart disease, for medical students and residents to achieve repetitive and effective self-learning. We used a film scanner (Scan Maker 35t) and an IBM-PC (486 DX-2, 60 MHz) for acquisition and storage of image data. The accessories attached to the main processor were a CD-ROM drive (Sony), sound card (SoundBlaster Pro), and speaker. We used Adobe Photoshop (v 3.0) and Paint Shop Pro (v 3.0) for preprocessing image data, and Paintbrush from Microsoft Windows 3.1 for labelling. The programming language was Visual Basic (v 3.0) from Microsoft Corporation. We developed a software program for computer-assisted instruction on MR imaging of congenital heart disease as an effective educational tool

  15. Measurement of facial soft tissues thickness using 3D computed tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Ho Gul; Kim, Kee Deog; Shin, Dong Won; Hu, Kyung Seok; Lee, Jae Bum; Park, Hyok; Park, Chang Seo [Yonsei Univ. Hospital, Seoul (Korea, Republic of); Han, Seung Ho [Catholic Univ. of Korea, Seoul (Korea, Republic of)

    2006-03-15

    To evaluate the accuracy and reliability of a program for measuring facial soft tissue thickness on 3D computed tomographic images, by comparison with direct measurement. One cadaver was scanned with a helical CT scanner at 3 mm slice thickness and 3 mm/sec table speed. The acquired data were reconstructed at a 1.5 mm reconstruction interval and the images were transferred to a personal computer. The facial soft tissue thicknesses were measured on the 3D images using a newly developed program. For direct measurement, the cadaver was cut with a bone cutter and a ruler was placed above the cut surface. The procedure was followed by taking pictures of the facial soft tissues with a high-resolution digital camera. The measurements were then made on the photographic images and repeated ten times. A repeated-measures analysis of variance was adopted to compare and analyze the measurements resulting from the two different methods. Comparison according to the areas was analyzed by the Mann-Whitney test. There were no statistically significant differences between the direct measurements and those using the 3D images (p>0.05). There were statistical differences in the measurements at 17 points, but all points except 2 showed a mean difference of 0.5 mm or less. The developed software program for measuring facial soft tissue thickness on 3D images was accurate enough to allow facial soft tissue thickness to be measured more easily in forensic science and anthropology.

  16. Measurement of facial soft tissues thickness using 3D computed tomographic images

    International Nuclear Information System (INIS)

    Jeong, Ho Gul; Kim, Kee Deog; Shin, Dong Won; Hu, Kyung Seok; Lee, Jae Bum; Park, Hyok; Park, Chang Seo; Han, Seung Ho

    2006-01-01

    To evaluate the accuracy and reliability of a program for measuring facial soft tissue thickness on 3D computed tomographic images, by comparison with direct measurement. One cadaver was scanned with a helical CT scanner at 3 mm slice thickness and 3 mm/sec table speed. The acquired data were reconstructed at a 1.5 mm reconstruction interval and the images were transferred to a personal computer. The facial soft tissue thicknesses were measured on the 3D images using a newly developed program. For direct measurement, the cadaver was cut with a bone cutter and a ruler was placed above the cut surface. The procedure was followed by taking pictures of the facial soft tissues with a high-resolution digital camera. The measurements were then made on the photographic images and repeated ten times. A repeated-measures analysis of variance was adopted to compare and analyze the measurements resulting from the two different methods. Comparison according to the areas was analyzed by the Mann-Whitney test. There were no statistically significant differences between the direct measurements and those using the 3D images (p>0.05). There were statistical differences in the measurements at 17 points, but all points except 2 showed a mean difference of 0.5 mm or less. The developed software program for measuring facial soft tissue thickness on 3D images was accurate enough to allow facial soft tissue thickness to be measured more easily in forensic science and anthropology

  17. Anniversary Paper: History and status of CAD and quantitative image analysis: The role of Medical Physics and AAPM

    International Nuclear Information System (INIS)

    Giger, Maryellen L.; Chan, Heang-Ping; Boone, John

    2008-01-01

    The roles of physicists in medical imaging have expanded over the years, from the study of imaging systems (sources and detectors) and dose to the assessment of image quality and perception, the development of image processing techniques, and the development of image analysis methods to assist in detection and diagnosis. The latter is a natural extension of medical physicists' goals in developing imaging techniques to help physicians acquire diagnostic information and improve clinical decisions. Studies indicate that radiologists do not detect all abnormalities on images that are visible on retrospective review, and they do not always correctly characterize the abnormalities that are found. The potential use of computers for analysis of radiographic abnormalities has been considered since the 1950s. In the mid-1980s, however, medical physicists and radiologists began major research efforts on computer-aided detection or computer-aided diagnosis (CAD), that is, using the computer output as an aid to radiologists--as opposed to a completely automatic computer interpretation--focusing initially on methods for the detection of lesions on chest radiographs and mammograms. Since then, extensive investigations of computerized image analysis for detection or diagnosis of abnormalities in a variety of 2D and 3D medical images have been conducted. The growth of CAD over the past 20 years has been tremendous--from the early days of time-consuming film digitization and CPU-intensive computations on a limited number of cases to its current status, in which developed CAD approaches are evaluated rigorously on large clinically relevant databases. CAD research by medical physicists includes many aspects--collecting relevant normal and pathological cases; developing computer algorithms appropriate for the medical interpretation task, including those for segmentation, feature extraction, and classifier design; developing methodology for assessing CAD performance; validating the

  18. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

    This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours). Illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics. Emphasis on algorithmic advances that will allow re-application in other...

  19. Insight into dynamic genome imaging: Canonical framework identification and high-throughput analysis.

    Science.gov (United States)

    Ronquist, Scott; Meixner, Walter; Rajapakse, Indika; Snyder, John

    2017-07-01

    The human genome is dynamic in structure, complicating researchers' attempts at fully understanding it. Time-series fluorescence in situ hybridization (FISH) imaging has increased our ability to observe genome structure, but due to cell type and experimental variability these data are often noisy and difficult to analyze. Furthermore, computational analysis techniques are needed for homolog discrimination and canonical framework detection in the case of time-series images. In this paper we introduce novel ideas for nucleus imaging analysis, present findings extracted using dynamic genome imaging, and propose an objective algorithm for high-throughput, time-series FISH imaging. While a canonical framework could not be detected beyond statistical significance in the analyzed dataset, a mathematical framework for detection has been outlined, with extension to 3D image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Morphologic analysis of Japanese adult sacroiliac joint using computed tomographic images

    International Nuclear Information System (INIS)

    Pan, Xuanchao; Takayama, Akinori; Shibata, Yasuaki; Ito, Hiromoto

    2003-01-01

    The purpose of this study was to examine the relationship of angles in adult sacroiliac joints (SJ) at different locations with laterality, age, gender, degeneration and childbearing. The study was performed in 92 healthy Japanese adult volunteers (46 males and 46 females, aged 21∼86 years) who had no low back complaints. Axial computed tomographic (CT) images were obtained using an X-VIGOR apparatus (Toshiba Medical Inc., Japan). The angle measurements were taken directly using the software NIH (National Institutes of Health) Image 1.61 (Scion Inc., USA). We examined the possible influencing factors. Statistical evaluation was performed with the t-test using the software SPSS (SPSS Inc., Japan). Our findings indicated that SJ angles had no relationship with laterality or gender. However, from the upper part to the lower part, the average SJ angle was 7.61 deg±8.7 deg, 5.16 deg±7.3 deg and -0.85 deg±7.3 deg respectively on the left, and 6.56 deg±9.4 deg, 4.10 deg±7.2 deg and -2.30 deg±7.0 deg on the right. The difference between the lower part and the upper-middle parts is significant (P<0.05). Our results provide new anatomic and morphological data for a better understanding of the SJ in clinical work. (author)

  1. STUDY OF IMAGE SEGMENTATION TECHNIQUES ON RETINAL IMAGES FOR HEALTH CARE MANAGEMENT WITH FAST COMPUTING

    Directory of Open Access Journals (Sweden)

    Srikanth Prabhu

    2012-02-01

    Full Text Available The role of segmentation in image processing is to separate foreground from background. In this process, the features become clearly visible when appropriate filters are applied to the image. In this paper emphasis has been laid on segmentation of biometric retinal images to filter out the vessels explicitly, for evaluating the bifurcation points and features for diabetic retinopathy. Segmentation of the images is performed by calculating ridges or by morphology. Ridges are those areas in an image where there is sharp contrast in features. Morphology targets the features using structuring elements; structuring elements have different shapes, such as a disk or a line, and are used for extracting features of those shapes. When segmentation was performed on retinal images, problems were encountered during the image pre-processing stage. Edge detection techniques have also been deployed to find the contours of the retinal images. After segmentation was performed, it was seen that artifacts in the retinal images were minimal when the ridge-based segmentation technique was deployed. In the field of Health Care Management, image segmentation has an important role to play, as it helps determine whether a person is healthy or has a disease, especially diabetes. During the process of segmentation, diseased features are classified as diseased ones or artifacts. The problem arises when artifacts are classified as diseased ones. This results in misclassification, which is discussed in the analysis section. We have achieved fast computing with better performance, in terms of speed, for non-repeating features when compared to repeating features.
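    The morphological route mentioned above can be illustrated with a generic sketch (not the authors' exact pipeline, and the parameter values are assumptions): retinal vessels are dark, elongated structures, so a black top-hat with a disk-shaped structuring element wider than the vessel calibre enhances them, and Otsu thresholding then yields a rough vessel mask.

```python
from skimage import color, filters, io
from skimage.morphology import black_tophat, disk

def extract_vessels(path, selem_radius=8):
    """Rough vessel mask from an RGB fundus image (assumed input format).

    Vessels are darker than the retinal background, so a black top-hat
    with a disk wider than the vessels enhances them; a global Otsu
    threshold then separates vessels from background.
    """
    gray = color.rgb2gray(io.imread(path))
    enhanced = black_tophat(gray, disk(selem_radius))
    return enhanced > filters.threshold_otsu(enhanced)
```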

  2. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and imaging processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and the geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  3. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    Science.gov (United States)

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
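    The weighting step at the heart of this correction is a small linear problem: given basis images B_k reconstructed by filtered backprojection from the raw projection data raised to different powers, the weights w_k are chosen to minimize the difference between the weighted sum and the low-scatter prior. A least-squares sketch of just that step (the FBP reconstructions are assumed to be given):

```python
import numpy as np

def fit_basis_weights(basis_images, prior_image):
    """Least-squares weights w minimizing || sum_k w_k * B_k - prior ||_2.

    basis_images: list of 2-D arrays, each reconstructed (e.g. by FBP)
    from the raw projection data raised to a different power.
    """
    A = np.stack([b.ravel() for b in basis_images], axis=1)
    w, *_ = np.linalg.lstsq(A, prior_image.ravel(), rcond=None)
    return w

# The corrected image is then the weighted sum of the basis images:
# corrected = sum(wk * bk for wk, bk in zip(w, basis_images))
```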

  4. Encoded diffractive optics for full-spectrum computational imaging

    KAUST Repository

    Heide, Felix; Fu, Qiang; Peng, Yifan; Heidrich, Wolfgang

    2016-01-01

    Diffractive optical elements can be realized as ultra-thin plates that offer significantly reduced footprint and weight compared to refractive elements. However, such elements introduce severe chromatic aberrations and are not variable, unless used in combination with other elements in a larger, reconfigurable optical system. We introduce numerically optimized encoded phase masks in which different optical parameters such as focus or zoom can be accessed through changes in the mechanical alignment of an ultra-thin stack of two or more masks. Our encoded diffractive designs are combined with a new computational approach for self-calibrating imaging (blind deconvolution) that can restore high-quality images several orders of magnitude faster than the state of the art, without pre-calibration of the optical system. This co-design of optics and computation enables tunable, full-spectrum imaging using thin diffractive optics.

  5. Encoded diffractive optics for full-spectrum computational imaging

    KAUST Repository

    Heide, Felix

    2016-09-16

    Diffractive optical elements can be realized as ultra-thin plates that offer significantly reduced footprint and weight compared to refractive elements. However, such elements introduce severe chromatic aberrations and are not variable, unless used in combination with other elements in a larger, reconfigurable optical system. We introduce numerically optimized encoded phase masks in which different optical parameters such as focus or zoom can be accessed through changes in the mechanical alignment of an ultra-thin stack of two or more masks. Our encoded diffractive designs are combined with a new computational approach for self-calibrating imaging (blind deconvolution) that can restore high-quality images several orders of magnitude faster than the state of the art, without pre-calibration of the optical system. This co-design of optics and computation enables tunable, full-spectrum imaging using thin diffractive optics.

  6. Image communication scheme based on dynamic visual cryptography and computer generated holography

    Science.gov (United States)

    Palevicius, Paulius; Ragulskis, Minvydas

    2015-01-01

    Computer generated holograms are often exploited to implement optical encryption schemes. This paper proposes the integration of dynamic visual cryptography (an optical technique based on the interplay of visual cryptography and time-averaging geometric moiré) with the Gerchberg-Saxton algorithm. A stochastic moiré grating is used to embed the secret into a single cover image. The secret can be visually decoded by the naked eye only if the amplitude of harmonic oscillations corresponds to an accurately preselected value. The proposed visual image encryption scheme is based on computer generated holography, optical time-averaging moiré and the principles of dynamic visual cryptography. Dynamic visual cryptography is used both for the initial encryption of the secret image and for the final decryption. Phase data of the encrypted image are computed using the Gerchberg-Saxton algorithm. The optical image is decrypted using the computationally reconstructed field of amplitudes.
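    The Gerchberg-Saxton step recovers a phase mask whose transform amplitude matches a target image by alternating projections between the hologram and image planes. A minimal NumPy sketch of the classic iteration (not the authors' exact implementation; FFTs stand in for the optical propagation):

```python
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=100, seed=0):
    """Classic Gerchberg-Saxton iteration: alternate between the hologram
    plane (unit amplitude, free phase) and the image plane (target
    amplitude), keeping only the phase at each constraint step."""
    rng = np.random.default_rng(seed)
    field = target_amplitude * np.exp(2j * np.pi * rng.random(target_amplitude.shape))
    for _ in range(n_iter):
        hologram = np.fft.ifft2(field)                    # back to hologram plane
        hologram = np.exp(1j * np.angle(hologram))        # keep phase only
        field = np.fft.fft2(hologram)                     # forward to image plane
        field = target_amplitude * np.exp(1j * np.angle(field))  # impose target amplitude
    return np.angle(hologram)  # phase mask whose far field approximates the target
```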

  7. Utility of the computed tomography indices on cone beam computed tomography images in the diagnosis of osteoporosis in women

    International Nuclear Information System (INIS)

    Koh, Kwang Joon; Kim, Kyung A

    2011-01-01

    This study evaluated the potential use of computed tomography indices (CTI) on cone beam CT (CBCT) images for assessment of bone mineral density (BMD) in postmenopausal osteoporotic women. Twenty-one postmenopausal osteoporotic women and 21 postmenopausal healthy women were enrolled as subjects. The BMD of the lumbar vertebrae and femur was calculated by dual energy X-ray absorptiometry (DXA). The CBCT images were obtained from the unilateral mental foramen region using a PSR-9000N Dental CT system. The axial, sagittal, and coronal images were reconstructed from the block images using OnDemand3D. The new term 'CTI' on CBCT images was proposed. The relationship between the CT measurements and BMDs was assessed and the intra-observer agreement was determined. There were significant differences between the normal and osteoporotic groups in the computed tomography mandibular index superior (CTI(S)), computed tomography mandibular index inferior (CTI(I)), and computed tomography cortical index (CTCI). On the other hand, there was no difference between the groups in the computed tomography mental index (CTMI: inferior cortical width). CTI(S), CTI(I), and CTCI on CBCT images can be used to assess osteoporotic women.

  8. Salt distribution in dry-cured ham measured by computed tomography and image analysis

    DEFF Research Database (Denmark)

    Vestergaard, Christian Sylvest; Erbou, Søren G.; Thauland, T.

    2005-01-01

    Forty-seven hams were scanned four times by computed tomography (CT) while being manufactured into dry-cured hams. An image-processing algorithm measured CT values in the lean part of the hams and provided line profiles reflecting the magnitude and spatial location of salt gradients. At the end...... of manufacturing, seven entire hams were dissected and the salt content of the lean part determined. Likewise, in the remaining 40 hams, the lean meat of the slices corresponding to the CT images was dissected, analyzed chemically for NaCl and compared to the CT value. The salt content of entire dry-cured hams...... section were demonstrated. Line profiles illustrating the combined salt distribution and dehydration within a ham related to the physical characteristics of the ham as well as to the manufacturing process. These findings reveal that the effects of altered manufacturing practices can be followed non...

  9. Parotid lymphomas - clinical and computed tomographic imaging ...

    African Journals Online (AJOL)

    Parotid lymphomas - clinical and computed tomographic imaging features. ... South African Journal of Surgery ... Lymphoma has a clinical presentation similar ... CT scanning is a useful adjunctive investigation to determine the site and extent of ...

  10. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  11. Integrating Remote Sensing Data, Hybrid-Cloud Computing, and Event Notifications for Advanced Rapid Imaging & Analysis (Invited)

    Science.gov (United States)

    Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Fielding, E. J.; Agram, P.; Manipon, G.; Stough, T. M.; Simons, M.; Rosen, P. A.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.

    2013-12-01

    Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Continuous Global Positioning System (CGPS) are now important elements in our toolset for monitoring earthquake-generating faults, volcanic eruptions, hurricane damage, landslides, reservoir subsidence, and other natural and man-made hazards. Geodetic imaging's unique ability to capture surface deformation with high spatial and temporal resolution has revolutionized both earthquake science and volcanology. Continuous monitoring of surface deformation and surface change before, during, and after natural hazards improves decision-making from better forecasts, increased situational awareness, and more informed recovery. However, analyses of InSAR and GPS data sets are currently handcrafted following events and are not generated rapidly and reliably enough for use in operational response to natural disasters. Additionally, the sheer data volumes needed to handle a continuous stream of InSAR data sets also present a bottleneck. It has been estimated that continuous processing of InSAR coverage of California alone over 3 years would reach PB-scale data volumes. Our Advanced Rapid Imaging and Analysis for Monitoring Hazards (ARIA-MH) science data system enables both science and decision-making communities to monitor areas of interest with derived geodetic data products via seamless data preparation, processing, discovery, and access. We will present our findings on the use of hybrid-cloud computing to improve the timely processing and delivery of geodetic data products, integrating event notifications from USGS to improve the timeliness of processing for response, as well as providing browse results for quick looks with other tools for integrative analysis.

  12. Image analysis and machine learning for detecting malaria.

    Science.gov (United States)

    Poostchi, Mahdieh; Silamut, Kamolrat; Maude, Richard J; Jaeger, Stefan; Thoma, George

    2018-04-01

    Malaria remains a major burden on global health, with roughly 200 million cases worldwide and more than 400,000 deaths per year. Besides biomedical research and political efforts, modern information technology is playing a key role in many attempts at fighting the disease. One of the barriers toward a successful mortality reduction has been inadequate malaria diagnosis in particular. To improve diagnosis, image analysis software and machine learning methods have been used to quantify parasitemia in microscopic blood slides. This article gives an overview of these techniques and discusses the current developments in image analysis and machine learning for microscopic malaria diagnosis. We organize the different approaches published in the literature according to the techniques used for imaging, image preprocessing, parasite detection and cell segmentation, feature computation, and automatic cell classification. Readers will find the different techniques listed in tables, with the relevant articles cited next to them, for both thin and thick blood smear images. We also discuss the latest developments in sections devoted to deep learning and smartphone technology for future malaria diagnosis. Published by Elsevier Inc.

  13. Digi-Clima Grid: image processing and distributed computing for recovering historical climate data

    Directory of Open Access Journals (Sweden)

    Sergio Nesmachnow

    2015-12-01

    Full Text Available This article describes the Digi-Clima Grid project, whose main goals are to design and implement semi-automatic techniques for digitizing and recovering historical climate records by applying parallel computing techniques over distributed computing infrastructures. The specific tool developed for image processing is described, and the implementation over grid and cloud infrastructures is reported. An experimental analysis over institutional and volunteer-based grid/cloud distributed systems demonstrates that the proposed approach is an efficient tool for recovering historical climate data. The parallel implementations allow the processing load to be distributed, achieving good speedup values.

  14. An approach for quantitative image quality analysis for CT

    Science.gov (United States)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis tool kit to analyze CT-generated images of the phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates PCA components with sparse loadings, as compared to standard principal component analysis (PCA), used in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
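    As one illustration of how such metrics can be computed, the presampled MTF is conventionally estimated as the normalized Fourier magnitude of a measured line spread function; a minimal sketch under that assumption (phantom acquisition and LSF extraction are taken as given, and the normalization is simplified):

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm):
    """MTF estimate: magnitude of the Fourier transform of a
    background-subtracted, area-normalized line spread function."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf - lsf.min()          # crude background subtraction
    lsf = lsf / lsf.sum()          # normalize so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles/mm
    return freqs, mtf / mtf[0]
```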

  15. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques as more than simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  16. Telemetry Timing Analysis for Image Reconstruction of Kompsat Spacecraft

    Directory of Open Access Journals (Sweden)

    Jin-Ho Lee

    2000-06-01

    Full Text Available The KOMPSAT (KOrea Multi-Purpose SATellite) has two optical imaging instruments called EOC (Electro-Optical Camera) and OSMI (Ocean Scanning Multispectral Imager). The image data of these instruments are transmitted to the ground station and restored correctly after post-processing with the telemetry data transferred from the KOMPSAT spacecraft. The major timing information of the KOMPSAT is the OBT (On-Board Time), which is formatted by the on-board computer of the spacecraft based on the 1 Hz sync pulse coming from the GPS receiver involved. The OBT is transmitted to the ground station with the housekeeping telemetry data of the spacecraft, while it is distributed to the instruments via a 1553B data bus for synchronization during imaging and formatting. The timing information contained in the spacecraft telemetry data has a direct relation to the image data of the instruments, and must be well understood to obtain a more accurate image. This paper addresses the timing analysis of the KOMPSAT spacecraft and instruments, including the gyro data timing analysis, for the correct restoration of the EOC and OSMI image data at the ground station.

  17. Comparison of existing digital image analysis systems for the analysis of Thematic Mapper data

    Science.gov (United States)

    Likens, W. C.; Wrigley, R. C.

    1984-01-01

    Most existing image analysis systems were designed with the Landsat Multi-Spectral Scanner in mind, leaving open the question of whether or not these systems could adequately process Thematic Mapper data. In this report, both hardware and software systems have been evaluated for compatibility with TM data. Lack of spectral analysis capability was not found to be a problem, though techniques for spatial filtering and texture varied. Computer processing speed and data storage of currently existing mini-computer based systems may be less than adequate. Upgrading to more powerful hardware may be required for many TM applications.

  18. Morphometric Analysis of the Internal Auditory Canal by Computed Tomography Imaging

    International Nuclear Information System (INIS)

    Marques, Sergio Ricardo; Ajzen, Sergio; D'Ippolito, Giuseppe; Alonso, Luis; Isotani, Sadao; Lederman, Henrique

    2012-01-01

    Many clinical and experimental studies have been conducted to analyze the anatomical and functional aspects of the internal auditory canal (IAC) in human beings, since there is great inter-individual variability and structural variation may occur with respect to the adjacent structures. The purpose of this study was to characterize the morphology of the internal auditory canal (IAC) during development using high resolution computed tomography (CT) and to analyze its dimensions, determined by measuring the nearby areas and structures using a system of digital image processing. CT images of the IAC of 110 normal subjects aged 1 to 92 years (mean age, 46.5 years) of both genders were reviewed to determine the shape, area, opening width (OW), longitudinal length (LL), vertical diameter (VD) and distance from the vestibular aqueduct. The shapes observed in children and adults were funnel-shaped (74% and 58.3%, respectively), cylindrical (22% and 30.9%, respectively) and bud-shaped (4% and 10.8%, respectively). The measurements by CT in children were: area = 50.30 mm², OW = 7.53 mm, length = 11.17 mm, VD = 4.82 mm and distance between the IAC and the vestibular aqueduct (VA) = 12.63 mm. In adults, the measurements were: area = 44.64 mm², OW = 7.10 mm, length = 9.84 mm, VD = 4.47 mm and distance between the IAC and VA = 11.17 mm. CT images showed that the IAC has different shapes, and when the measurements obtained for children were compared with those of adults, the parameters that presented statistically significant differences in either gender were length and diameter

  19. Vanderbilt University Institute of Imaging Science Center for Computational Imaging XNAT: A multimodal data archive and processing environment.

    Science.gov (United States)

    Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A

    2016-01-01

    The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database built on XNAT housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large-scale batch processing of images and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high performance computing center. All software is made available as open source for use in combining portable batch scripting (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. High-performance floating-point image computing workstation for medical applications

    Science.gov (United States)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications, with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), as a multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple-monitor display capability and large, fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel-selectable region of interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e

  1. Survey of using GPU CUDA programming model in medical image analysis

    Directory of Open Access Journals (Sweden)

    T. Kalaiselvi

    2017-01-01

    Full Text Available With the technological development of the medical industry, the data to be processed is expanding rapidly and computation time also increases due to many factors, such as 3D and 4D treatment planning, the increasing sophistication of MRI pulse sequences and the growing complexity of algorithms. The graphics processing unit (GPU) addresses these problems and provides solutions through features such as high computation throughput, high memory bandwidth, support for floating-point arithmetic and low cost. Compute unified device architecture (CUDA) is a popular GPU programming model introduced by NVIDIA for parallel computing. This review paper briefly discusses the need for GPU CUDA computing in medical image analysis. The GPU performances of existing algorithms are analyzed and the computational gains are discussed. A few open issues, hardware configurations and optimization principles of existing methods are discussed. The survey concludes with a few optimization techniques applied to medical imaging algorithms on the GPU. Finally, the limitations and future scope of GPU programming are discussed.

  2. Fast Virtual Fractional Flow Reserve Based Upon Steady-State Computational Fluid Dynamics Analysis

    Directory of Open Access Journals (Sweden)

    Paul D. Morris, PhD

    2017-08-01

    Full Text Available Fractional flow reserve (FFR)-guided percutaneous intervention is superior to standard assessment but remains underused. The authors have developed a novel “pseudotransient” analysis protocol for computing virtual fractional flow reserve (vFFR) based upon angiographic images and steady-state computational fluid dynamics. This protocol generates vFFR results in 189 s (cf. >24 h for transient analysis) using a desktop PC, with <1% error relative to full-transient computational fluid dynamics analysis. Sensitivity analysis demonstrated that physiological lesion significance was influenced less by coronary or lesion anatomy (33%) and more by microvascular physiology (59%). If coronary microvascular resistance can be estimated, vFFR can be accurately computed in less time than it takes to make invasive measurements.

  3. Comparative study of ultrasound imaging, computed tomography and magnetic resonance imaging in gynecology

    International Nuclear Information System (INIS)

    Ishii, Kenji; Kobayashi, Hisaaki; Hoshihara, Takayuki; Kobayashi, Mitsunao; Suda, Yoshio; Takenaka, Eiichi; Sasa, Hidenori.

    1989-01-01

    We studied 18 patients who were operated on at the National Defense Medical College Hospital and whose diagnoses were confirmed pathologically. We compared ultrasound imaging, computed tomography (CT) and magnetic resonance imaging (MRI) of the patients. MRI was useful for diagnosing enlargement of the uterine cavity and a small amount of ascites, and for understanding the orientation of the pelvic organs. Ultrasound imaging is the most useful examination for diagnosing gynecological diseases, but when diagnosis by ultrasound imaging alone is difficult, we should employ either CT or MRI, or preferably both. (author)

  4. Quantitative imaging biomarkers: the application of advanced image processing and analysis to clinical and preclinical decision making.

    Science.gov (United States)

    Prescott, Jeffrey William

    2013-02-01

    The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings through quantitative imaging biomarkers, which can improve the sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.

  5. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  6. Image acquisitions, processing and analysis in the process of obtaining characteristics of horse navicular bone

    Science.gov (United States)

    Zaborowicz, M.; Włodarek, J.; Przybylak, A.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Boniecki, P.; Koszela, K.; Przybył, J.; Skwarcz, J.

    2015-07-01

    The aim of this study was to investigate the possibility of using methods of computer image analysis for the assessment and classification of morphological variability and the state of health of the horse navicular bone. The assumption was that classification could be based on the information contained in two-dimensional digital images of the navicular bone together with information on the horse's health. The first step in the research was to define the classes of analyzed bones, and then to use methods of computer image analysis to obtain characteristics from these images. These characteristics were correlated with data concerning the animal, such as: side of hooves, number of navicular syndrome (scale 0-3), type, sex, age, weight, information about lace, information about heel. This paper is an introduction to the study of the use of neural image analysis in the diagnosis of navicular bone syndrome. The prepared method can provide an introduction to the study of a non-invasive way to assess the condition of the horse navicular bone.

  7. Analysis on imaging features of mammography in computer radiography and investigation on gray scale transform and energy subtraction

    International Nuclear Information System (INIS)

    Feng Shuli

    2003-01-01

    In this dissertation, a novel transform method based on human visual response features for gray scale mammographic imaging in computer radiography (CR) is presented. The parameters for imaging quality of CR imaging for mammography were investigated experimentally. In addition, methods for image energy subtraction and a novel method of image registration for mammography in CR imaging are presented. Because the images are viewed and investigated by humans, displaying differences in gray scale images is more effective if the gray scale differences are displayed in a manner commensurate with human visual response principles. Through transformation of the image gray scale with this method, the contrast of the image is enhanced and the capability of humans to extract useful information from the image is increased. Tumors and microcalcifications are displayed in a form that is simpler for humans to view after transforming the image. The method is investigated theoretically and experimentally. Through measurement of the parameters of geometric image blur, MTF, DQE, and ROC for CR imaging, and comparison with the imaging quality of screen-film systems, the results indicate that CR imaging qualities in DQE and ROC are better than those of screen-film systems. In geometric blur of the image and MTF, the differences in image quality between CR and the screen-film system are very small. The results suggest that the CR system can replace the screen-film system for mammography imaging. In addition, the results show that the optimal imaging energy for CR mammography is about 24 kV. This indicates that the imaging energy of the CR system is lower than that of the screen-film system and, therefore, the x-ray dose to the patient for mammography with the CR system is lower than that with the screen-film system. Based on the difference in penetrability of x rays with different wavelengths, and the fact that the part of the x-ray beam will pass

  8. Optimal steel thickness combined with computed radiography for portal imaging of nasopharyngeal cancer patients

    International Nuclear Information System (INIS)

    Wu Shixiu; Jin Xiance; Xie Congying; Cao Guoquan

    2005-01-01

    The poor image quality of conventional metal screen-film portal imaging systems has long been of concern, and various methods have been investigated in an attempt to enhance the quality of portal images. Computed radiography (CR) used in combination with a steel plate yields image enhancement. The optimal thickness of the steel plate was studied by measuring the modulation transfer function (MTF) characteristics. Portal images of nasopharyngeal carcinoma patients were taken both with a conventional metal screen-film system and with this optimal steel and CR plate combination system. Compared with a conventional metal screen-film system, the CR-metal screen system achieves much higher image contrast. The measured modulation transfer function (MTF) of the CR combination is greater than that of conventional film-screen portal imaging systems and also results in superior image performance, as demonstrated by receiver operating characteristic (ROC) analysis. This optimal steel-CR plate combination portal imaging system is capable of conveniently producing high contrast portal images

  9. Meta-Analysis of Stress Myocardial Perfusion Imaging

    Science.gov (United States)

    2017-06-06

    Coronary Disease; Echocardiography; Fractional Flow Reserve, Myocardial; Hemodynamics; Humans; Magnetic Resonance Imaging; Myocardial Perfusion Imaging; Perfusion; Predictive Value of Tests; Single Photon Emission Computed Tomography; Positron Emission Tomography; Multidetector Computed Tomography; Echocardiography, Stress; Coronary Angiography

  10. Ultra-high performance, solid-state, autoradiographic image digitization and analysis system

    International Nuclear Information System (INIS)

    Lear, J.L.; Pratt, J.P.; Ackermann, R.F.; Plotnick, J.; Rumley, S.

    1990-01-01

    We developed a Macintosh II-based, charge-coupled device (CCD), image digitization and analysis system for high-speed, high-resolution quantification of autoradiographic image data. A linear CCD array with 3,500 elements was attached to a precision drive assembly and mounted behind a high-uniformity lens. The drive assembly was used to sweep the array perpendicularly to its axis so that an entire 20 x 25-cm autoradiographic image-containing film could be digitized into 256 gray levels at 50-micron resolution in less than 30 sec. The scanner was interfaced to a Macintosh II computer through a specially constructed NuBus circuit board, and software was developed for autoradiographic data analysis. The system was evaluated by scanning individual films multiple times, then measuring the variability of the digital data between the different scans. Image data were found to be virtually noise free. The coefficient of variation averaged less than 1%, a value significantly exceeding the accuracy of both high-speed, low-resolution video camera (VC) systems and low-speed, high-resolution rotating drum densitometers (RDD). Thus, the CCD scanner-Macintosh computer analysis system offers the advantage over VC systems of the ability to digitize entire films containing many autoradiograms, but with much greater speed and accuracy than achievable with RDD scanners
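    The reported stability figure corresponds to a simple pixelwise statistic over repeated scans of the same film; a minimal sketch of that evaluation (the array layout is an assumption):

```python
import numpy as np

def pixelwise_cv(scans):
    """Coefficient of variation per pixel across repeated scans of the
    same film (scans: n_scans x rows x cols array)."""
    scans = np.asarray(scans, dtype=float)
    return scans.std(axis=0) / scans.mean(axis=0)

# A mean CV below 1% across the film would match the stability reported above.
```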

  11. Comparison of image features calculated in different dimensions for computer-aided diagnosis of lung nodules

    Science.gov (United States)

    Xu, Ye; Lee, Michael C.; Boroczky, Lilla; Cann, Aaron D.; Borczuk, Alain C.; Kawut, Steven M.; Powell, Charles A.

    2009-02-01

    Features calculated from different dimensions of images capture quantitative information about lung nodules through one or multiple image slices. Previously published computer-aided diagnosis (CADx) systems have used either two-dimensional (2D) or three-dimensional (3D) features, though there has been little systematic analysis of the relevance of the different dimensions and of the impact of combining them. The aim of this study is to determine the importance of combining features calculated in different dimensions. We performed CADx experiments on 125 pulmonary nodules imaged using multi-detector row CT (MDCT). The CADx system computed 192 2D, 2.5D, and 3D image features of the lesions. Leave-one-out experiments were performed using five different combinations of features from different dimensions: 2D, 3D, 2.5D, 2D+3D, and 2D+3D+2.5D. The experiments were performed ten times for each group. Accuracy, sensitivity and specificity were used to evaluate the performance. Wilcoxon signed-rank tests were applied to compare the classification results from these five combinations of features. Our results showed that 3D image features generated the best results compared with the other combinations of features. This suggests one approach to potentially reducing the dimensionality of the CADx data space and the computational complexity of the system while maintaining diagnostic accuracy.
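    The statistical comparison described above pairs the per-run results of two feature groups; a minimal sketch with hypothetical accuracy values (not the study's data):

```python
from scipy.stats import wilcoxon

# Hypothetical per-repeat accuracies (10 repeats) for two feature groups
acc_3d = [0.81, 0.79, 0.82, 0.80, 0.81, 0.83, 0.80, 0.79, 0.82, 0.81]
acc_2d = [0.78, 0.77, 0.80, 0.79, 0.78, 0.80, 0.77, 0.78, 0.79, 0.78]

# Paired test: do the two groups' repeated results differ systematically?
stat, p = wilcoxon(acc_3d, acc_2d)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.4f}")
```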

  12. Image reconstruction from projections and its application in emission computer tomography

    International Nuclear Information System (INIS)

    Kuba, Attila; Csernay, Laszlo

    1989-01-01

    Computed tomography is an imaging technique for producing cross-sectional images by reconstruction from projections. Its two main branches are called transmission and emission computed tomography, TCT and ECT, resp. After an overview of the theory and practice of TCT and ECT, the first Hungarian ECT-type MB 9300 SPECT, consisting of a gamma camera and a Ketronic Medax N computer, is described, and its applications to radiological patient observations are discussed briefly. (R.P.) 28 refs.; 4 figs

  13. Real-time computation of parameter fitting and image reconstruction using graphical processing units

    Science.gov (United States)

    Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin

    2017-06-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at the Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify the parts of the algorithms in need of optimization. Efficient GPU kernels were created to allow the applications to use a GPU and to speed up the previously identified parts. Benchmarking tests were performed to measure the achieved speedup. During this work, we focused on single-GPU systems to show that real-time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the currently used application for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU. The speedup may vary depending on the size and complexity of the problem. For PET image analysis, the obtained speedups of the GPU version were more than 40x compared to a single-core CPU implementation. The achieved results show that it is possible to improve the execution time by orders of magnitude.

  14. Computer-aided diagnosis of liver tumors on computed tomography images.

    Science.gov (United States)

    Chang, Chin-Chen; Chen, Hong-Hao; Chang, Yeun-Chung; Yang, Ming-Yang; Lo, Chung-Ming; Ko, Wei-Chun; Lee, Yee-Fan; Liu, Kao-Lang; Chang, Ruey-Feng

    2017-07-01

    Liver cancer is the tenth most common cancer in the USA, and its incidence has been increasing for several decades. Early detection, diagnosis, and treatment of the disease are very important. Computed tomography (CT) is one of the most common and robust imaging techniques for the detection of liver cancer. CT scanners can provide multiple-phase sequential scans of the whole liver. In this study, we proposed a computer-aided diagnosis (CAD) system to diagnose liver cancer using features of tumors obtained from multiphase CT images. A total of 71 histologically proven liver tumors, including 49 benign and 22 malignant lesions, were evaluated with the proposed CAD system to assess its performance. Tumors were identified by the user and then segmented using a region growing algorithm. After tumor segmentation, three kinds of features were obtained for each tumor: texture, shape, and kinetic curve. The texture was quantified using three-dimensional (3-D) texture data of the tumor based on the grey level co-occurrence matrix (GLCM). Compactness, margin, and an elliptic model were used to describe the 3-D shape of the tumor. The kinetic curve was established for each tumor and represented the variation in density between phases. Backward elimination was used to select the best combination of features, and binary logistic regression analysis was used to classify the tumors with leave-one-out cross validation. The accuracy and sensitivity for the texture features were 71.82% and 68.18%, respectively, which were better than for the shape and kinetic curve features at comparable specificity. Combining all of the features achieved the highest accuracy (58/71, 81.69%), sensitivity (18/22, 81.82%), and specificity (40/49, 81.63%). The Az value for the combination of all features was 0.8713. Combining texture, shape, and kinetic curve features may be able to differentiate benign from malignant tumors in the liver using our proposed CAD system.
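
    The GLCM texture step can be sketched in 2D with scikit-image (the study used 3-D GLCMs; the patch, grey-level count, and offsets here are illustrative assumptions):

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      rng = np.random.default_rng(1)
      patch = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)  # stand-in tumour ROI

      # Co-occurrence counts for pixel pairs at distance 1, horizontal and vertical.
      glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                          levels=64, symmetric=True, normed=True)
      for prop in ("contrast", "homogeneity", "energy", "correlation"):
          print(prop, graycoprops(glcm, prop).mean())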

  15. Role of image analysis in quantitative characterisation of nuclear fuel materials

    International Nuclear Information System (INIS)

    Dubey, J.N.; Rao, T.S.; Pandey, V.D.; Majumdar, S.

    2005-01-01

    Image analysis is one of the important techniques widely used for materials characterization. It provides quantitative estimates of the microstructural features present in a material; this information is valuable for establishing the criteria for taking a fuel to high burnup. The Radiometallurgy Division has been carrying out development and fabrication of plutonium-bearing fuels for different types of reactors, viz. Purnima, the Fast Breeder Test Reactor (FBTR), the Prototype Fast Breeder Reactor (PFBR), the Boiling Water Reactor (BWR), the Advanced Heavy Water Reactor (AHWR), the Pressurised Heavy Water Reactor (PHWR) and the KAMINI reactor. Image analysis has been carried out on microstructures of PHWR, AHWR, FBTR and KAMINI fuels. Samples were prepared as per the standard ASTM metallographic procedure. Digital images of the microstructures of these specimens were obtained using a CCD camera attached to the optical microscope. These images are stored on a computer and used for detection and analysis of features of interest with image analysis software. The quantitative image analysis technique has been standardised and used to determine the type of porosity and its size, shape and distribution in the above sintered oxide and carbide fuels. The technique has also been used for quantitative estimation of the different phases present in KAMINI fuel. Image analysis results are summarised and presented in this paper. (author)
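
    A minimal sketch of this kind of porosity quantification (the synthetic micrograph, Otsu threshold, and reported shape measures are illustrative assumptions): threshold the image, label the pores, and report the area fraction and per-pore statistics.

      import numpy as np
      from skimage.filters import threshold_otsu
      from skimage.measure import label, regionprops

      rng = np.random.default_rng(2)
      micrograph = rng.normal(0.7, 0.05, size=(256, 256))  # bright matrix
      micrograph[60:80, 100:130] = 0.20                    # dark regions mimic pores
      micrograph[150:170, 40:55] = 0.25

      pores = micrograph < threshold_otsu(micrograph)      # pores are dark
      print(f"porosity (area fraction): {pores.mean():.2%}")
      for region in regionprops(label(pores)):
          print(f"pore area={region.area} px, eccentricity={region.eccentricity:.2f}")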

  16. Optical computed tomography for imaging the breast: first look

    Science.gov (United States)

    Grable, Richard J.; Ponder, Steven L.; Gkanatsios, Nikolaos A.; Dieckmann, William; Olivier, Patrick F.; Wake, Robert H.; Zeng, Yueping

    2000-07-01

    The purpose of the study is to compare computed tomography optical imaging with traditional breast imaging techniques. Images produced by the computed tomography laser mammography (CTLM™) scanner are compared with images obtained from mammography and, in some cases, ultrasound and/or magnetic resonance imaging (MRI). During the CTLM procedure, a near-infrared laser irradiates the breast and an array of photodiode detectors records light scattered through the breast tissue. The laser and detectors rotate synchronously around the breast to acquire a series of slice data along the coronal plane. The procedure is performed without any breast compression or optical matching fluid. Cross-sectional slices of the breast are reconstructed using an algorithm based on diffusion theory. Multiple slice images are combined to produce a three-dimensional volumetric array of the imaged breast. This array is used to derive axial and sagittal images of the breast corresponding to the cranio-caudal and medio-lateral images used in mammography. Over 200 women and 3 men have been scanned in clinical trials. The most obvious features seen in images produced by the optical tomography scanner are vascularization and significant lesions. Breast features caused by fibrocystic changes and cysts are less obvious. Breast density does not appear to be a significant factor in the quality of the image. We see correlation of the optical image structure with that seen with traditional breast imaging techniques. Further testing is being conducted to explore the sensitivity and specificity of optical tomography of the breast.

  17. The image database management system of teaching file using personal computer

    International Nuclear Information System (INIS)

    Shin, M. J.; Kim, G. W.; Chun, T. J.; Ahn, W. H.; Baik, S. K.; Choi, H. Y.; Kim, B. G.

    1995-01-01

    For the systematic management and easy use of teaching files in a radiology department, the authors set up a database management system for teaching files using a personal computer. We used a personal computer (IBM PC compatible, 486DX2) with an image capture card (Window vision, Dooin Elect, Seoul, Korea) and a video camera recorder (8 mm, CCD-TR105, Sony, Tokyo, Japan) for the acquisition and storage of images. We developed the database program using FoxPro for Windows 2.6 (Microsoft, Seattle, USA) running under Windows 3.1 (Microsoft, Seattle, USA). Each record consisted of hospital number, name, sex, age, examination date, keyword, radiologic examination modalities, final diagnosis, radiologic findings, references and representative images. The images were acquired and stored in bitmap format (8-bit, 540 X 390 ∼ 545 X 414, 256 gray scale) and displayed on a 17-inch flat monitor (1024 X 768, Samtron, Seoul, Korea). Image acquisition and storage could be done simply on the reading viewbox without special devices. The image quality on the computer monitor was lower than that of the original film on the viewbox, but in general the characteristics of each lesion could be differentiated. Easy retrieval of data was possible for the purposes of a teaching file system. Without high-cost appliances, we could complete an image database system for teaching files using a personal computer by a relatively inexpensive method

  18. Value of image fusion using single photon emission computed tomography with integrated low dose computed tomography in comparison with a retrospective voxel-based method in neuroendocrine tumours

    International Nuclear Information System (INIS)

    Amthauer, H.; Denecke, T.; Ruf, J.; Gutberlet, M.; Felix, R.; Lemke, A.J.; Rohlfing, T.; Boehmig, M.; Ploeckinger, U.

    2005-01-01

    The objective was the evaluation of single photon emission computed tomography (SPECT) with integrated low dose computed tomography (CT) in comparison with a retrospective fusion of SPECT and high-resolution CT and a side-by-side analysis for lesion localisation in patients with neuroendocrine tumours. Twenty-seven patients were examined by multidetector CT. Additionally, as part of somatostatin receptor scintigraphy (SRS), an integrated SPECT-CT was performed. SPECT and CT data were fused using software with a registration algorithm based on normalised mutual information. The reliability of the topographic assignment of lesions in SPECT-CT, retrospective fusion and side-by-side analysis was evaluated by two blinded readers. Two patients were not enrolled in the final analysis because of misregistrations in the retrospective fusion. Eighty-seven foci were included in the analysis. For the anatomical assignment of foci, SPECT-CT and retrospective fusion revealed overall accuracies of 91 and 94% (side-by-side analysis 86%). The correct identification of foci as lymph node manifestations (n=25) was more accurate by retrospective fusion (88%) than from SPECT-CT images (76%) or by side-by-side analysis (60%). Both modalities of image fusion appear to be well suited for the localisation of SRS foci and are superior to side-by-side analysis of non-fused images especially concerning lymph node manifestations. (orig.)
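
    The registration similarity measure named above, normalised mutual information, can be estimated from a joint histogram; a minimal sketch (the synthetic image pair, bin count, and the (H(X)+H(Y))/H(X,Y) form are illustrative assumptions):

      import numpy as np

      def normalised_mutual_information(a, b, bins=32):
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          def entropy(p):
              p = p[p > 0]
              return -(p * np.log(p)).sum()
          # NMI = (H(X) + H(Y)) / H(X, Y); 1.0 means no shared information.
          return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

      rng = np.random.default_rng(3)
      spect = rng.random((64, 64))
      ct_aligned = spect + 0.05 * rng.random((64, 64))   # well-registered pair
      ct_shifted = np.roll(ct_aligned, 10, axis=0)       # misregistered pair
      print(normalised_mutual_information(spect, ct_aligned))   # higher
      print(normalised_mutual_information(spect, ct_shifted))   # lower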

  19. Computational assessment of visual search strategies in volumetric medical images.

    Science.gov (United States)

    Wen, Gezheng; Aizenman, Avigael; Drew, Trafton; Wolfe, Jeremy M; Haygood, Tamara Miner; Markey, Mia K

    2016-01-01

    When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: "drilling" (restrict eye movements to a small region of the image while quickly scrolling through slices), or "scanning" (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either "drilling" or "scanning" when searching for target T's in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers' gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers' fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus "drilling" may be more efficient than "scanning."

  20. Computer-based endoscopic image-processing technology for endourology and laparoscopic surgery

    International Nuclear Information System (INIS)

    Igarashi, Tatsuo; Suzuki, Hiroyoshi; Naya, Yukio

    2009-01-01

    Endourology and laparoscopic surgery are evolving in accordance with developments in instrumentation and progress in surgical technique. Recent advances in computer and image-processing technology have enabled novel images to be created from conventional endoscopic and laparoscopic video images. Such technology harbors the potential to advance endourology and laparoscopic surgery by adding new value and function to the endoscope. The panoramic and three-dimensional images created by computer processing are two outstanding features that can address the shortcomings of conventional endoscopy and laparoscopy, such as a narrow field of view, lack of depth cues, and discontinuous information. The wide panoramic images show an 'anatomical map' of the abdominal cavity and hollow organs with high brightness and resolution, as the images are collected from video taken in a close-up manner. To assist in laparoscopic surgery, especially in suturing, a three-dimensional movie can be obtained by enhancing movement parallax using a conventional monocular laparoscope. In tubular organs such as the prostatic urethra, reconstruction of the three-dimensional structure can be achieved, suggesting the possibility of a fluid-dynamic model for assessing local urethral resistance during urination. Computer-based processing of endoscopic images will establish new tools for endourology and laparoscopic surgery in the near future. (author)

  1. Image quality in coronary computed tomography angiography

    DEFF Research Database (Denmark)

    Precht, Helle; Gerke, Oke; Thygesen, Jesper

    2018-01-01

    Background: Computed tomography (CT) technology is rapidly evolving, and software solutions are developed to optimize image quality and/or lower radiation dose. Purpose: To investigate the influence of adaptive statistical iterative reconstruction (ASIR) at different radiation doses in coronary CT...

  2. Computer vision for biomedical image applications. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yanxi [Carnegie Mellon Univ., Pittsburgh, PA (United States). School of Computer Science, The Robotics Institute; Jiang, Tianzi [Chinese Academy of Sciences, Beijing (China). National Lab. of Pattern Recognition, Inst. of Automation; Zhang, Changshui (eds.) [Tsinghua Univ., Beijing, BJ (China). Dept. of Automation

    2005-07-01

    This book constitutes the refereed proceedings of the First International Workshop on Computer Vision for Biomedical Image Applications: Current Techniques and Future Trends, CVBIA 2005, held in Beijing, China, in October 2005 within the scope of ICCV 2005. (orig.)

  3. Quantification of the radionuclide image: Theoretical concepts and the role of the computer

    International Nuclear Information System (INIS)

    Rabinowitz, A.; Wexler, J.P.; Blaufox, M.D.

    1984-01-01

    The purpose of this chapter is to provide the reader with the basic fundamentals for understanding dynamic and quantitative imaging studies. The computer, which is a basic requirement for the optimum generation and analysis of these data, is discussed here. These studies require an understanding of physiologic and mathematic principles and of the workings of the machine that is used to record them

  4. Image analysis of ocular fundus for retinopathy characterization

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela; Cuadros, Jorge

    2010-02-05

    Automated analysis of ocular fundus images is a common procedure in countries such as England, covering both non-emergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (the DRIVE database) and in hundreds of images with non-macula-centric and non-uniformly illuminated views of the eye fundus from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2) using deformable contours. Preliminary results show accurate segmentation of vessels and a high true-positive rate for microaneurysms.
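
    One common morphological enhancement of this kind is a black top-hat, which highlights thin dark vessels against the brighter fundus background; a minimal sketch using scikit-image's bundled retina sample (the structuring-element radius is an illustrative assumption):

      import matplotlib.pyplot as plt
      from skimage import data
      from skimage.color import rgb2gray
      from skimage.morphology import black_tophat, disk

      fundus = rgb2gray(data.retina())          # sample fundus photograph
      vessels = black_tophat(fundus, disk(8))   # dark structures thinner than ~8 px
      plt.imshow(vessels, cmap="gray"); plt.axis("off"); plt.show()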

  5. Practical considerations of image analysis and quantification of signal transduction IHC staining.

    Science.gov (United States)

    Grunkin, Michael; Raundahl, Jakob; Foged, Niels T

    2011-01-01

    The dramatic increase in computer processing power, in combination with the availability of high-quality digital cameras during the last 10 years, has fertilized the ground for quantitative microscopy based on digital image analysis. With the present introduction of robust scanners for whole-slide imaging in both research and routine use, the benefits of automation and objectivity in the analysis of tissue sections will be even more obvious. For in situ studies of signal transduction, the combination of tissue microarrays, immunohistochemistry, digital imaging, and quantitative image analysis will be central operations. However, immunohistochemistry is a multistep procedure with many technical pitfalls, leading to intra- and interlaboratory variability in its outcome. The resulting variations in staining intensity and disruption of the original morphology are an extra challenge for the image analysis software, which therefore should preferably be dedicated to the detection and quantification of histomorphometrical end points.

  6. Nuclear imaging using Fuji Computed Radiography

    International Nuclear Information System (INIS)

    Yodono, Hiraku; Tarusawa, Nobuko; Katto, Keiichi; Miyakawa, Takayoshi; Watanabe, Sadao; Shinozaki, Tatsuyo

    1988-01-01

    We studied the feasibility of the Fuji Computed Radiography system (FCR) for nuclear medicine. The basic principle of the system is the conversion of the X-ray energy pattern into digital signals utilizing scanning laser stimulated luminescence. A Rollo phantom filled with 12 mCi of Tc-99m pertechnetate was used in this study. For imaging with the FCR, a low-energy high-resolution parallel-hole collimator for a gamma camera was placed over the phantom, and photons passing through the collimator were recorded on a single imaging plate (IP) or on 3 IPs covered by a 0.3-mm-thick lead plate. Imaging took 30 minutes with a single IP and 20 minutes with 3 IPs plus the lead plate. Each phantom image from the FCR was compared with that obtained by a gamma camera. The image from a single IP was inferior in quality to that from a gamma camera. However, using 3 IPs with the lead plate, an image of the same quality as the gamma camera image was obtained. The image from 3 IPs alone is similar to that from 3 IPs with the lead plate. Based on these results, we performed liver and lung imaging by FCR using 3 IPs, with an imaging time of twenty minutes. The images obtained with FCR are as good as scintillation camera images. However, the method has two major flaws: the sensitivity is poor and the imaging time is long. Furthermore, at present this method can only be employed for static imaging. We feel that future improvements in the FCR system will overcome these problems. (author)

  7. A systematic review of computer-assisted diagnosis in diagnostic cancer imaging

    International Nuclear Information System (INIS)

    Eadie, Leila H.; Taylor, Paul; Gibson, Adam P.

    2012-01-01

    Objectives: This study reviews the evidence for the effectiveness of computer-assisted diagnosis (CAD) in cancer imaging. Diagnostic applications were studied to estimate the impact of CAD on radiologists’ detection and diagnosis of cancer lesions. Methods: Online databases were searched and 48 studies from 1992 to 2010 were included: 16 with radiologists using CAD to detect lesions (CADe) and 32 with radiologists using CAD to classify or diagnose lesions (CADx). Weighted means, statistics, summary receiver operating characteristics (SROC) curves, and related measures were used for analysis. Results: There is evidence that CADx significantly improves diagnosis in mammography and breast ultrasound. In contrast, studies of CADx applied to lung CT and dermatologic imaging show an adverse impact on diagnosis. Overall, there is no evidence of a benefit due to the use of CADe. The area under the SROC curves was not significantly increased for radiologists using either CADe or CADx. Conclusions: From this analysis it seems CADx can offer some benefit to radiologists in specific imaging applications for breast cancer diagnosis. There is no evidence of a beneficial effect in other applications of CAD and some evidence of a detrimental one.

  8. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    Science.gov (United States)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of an analysis of the cosine light field with a determined analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of this algorithm for computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm for compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and on this basis simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR values of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of images reconstructed by the FGI algorithm decreases slowly, while the PSNR of images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoised reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can thus improve the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
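
    The core reconstruction idea can be sketched in a few lines (the tiny sizes, binary object, and direct use of the complex DFT matrix are illustrative assumptions; a physical system would realise real-valued cosine patterns): each row of the measurement matrix is one illumination pattern, each measurement is one bucket value, and the image is recovered with the pseudo-inverse.

      import numpy as np

      n = 16                                     # object of n*n pixels
      obj = np.zeros((n, n)); obj[4:12, 6:10] = 1.0
      x = obj.ravel()

      m = n * n                                  # full sampling: m measurements
      A = np.fft.fft(np.eye(m))                  # DFT measurement matrix, one pattern per row
      y = A @ x                                  # one bucket value per pattern
      x_hat = np.real(np.linalg.pinv(A) @ y)     # pseudo-inverse reconstruction

      print("max reconstruction error:", np.abs(x_hat - x).max())  # ~ numerical zero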

  9. A review of image quality assessment methods with application to computational photography

    Science.gov (United States)

    Maître, Henri

    2015-12-01

    Image quality assessment has long been of major importance for several domains of the imaging industry, for instance restoration or communication and coding. New application fields are opening up today with the increase of embedded computing power in cameras and the emergence of computational photography: automatic tuning, image selection, image fusion, image database building, etc. We review the literature on image quality evaluation. We pay attention to the very different underlying hypotheses and results of the existing methods to approach the problem. We explain why they differ and for which applications they may be beneficial. We also underline their limits, especially for a possible use in the novel domain of computational photography. Having been developed to address different objectives, they propose answers on different aspects, which makes them sometimes complementary. However, they all remain limited in their capability to challenge the human expert, the said or unsaid ultimate goal. We consider the methods that are based on retrieving the parameters of a signal, mostly in spectral analysis; then we explore the more global methods that qualify image quality in terms of noticeable defects or degradation, as popular in the compression domain; in a third field, the image acquisition process is considered as a channel between the source and the receiver, allowing the tools of information theory to be used and the system to be qualified in terms of entropy and information capacity. However, these different approaches hardly attack the most difficult part of the task, which is to measure the quality of a photograph in terms of aesthetic properties. To help address this problem, at the crossroads of philosophy, biology and psychology, we propose a brief review of the literature on qualifying beauty, present attempts to adapt these concepts to visual patterns, and initiate a reflection on what could be done in the field of photography.

  10. Comparison of clinical and physical measures of image quality in chest and pelvis computed radiography at different tube voltages

    International Nuclear Information System (INIS)

    Sandborg, Michael; Tingberg, Anders; Ullman, Gustaf; Dance, David R.; Alm Carlsson, Gudrun

    2006-01-01

    The aim of this work was to study the dependence of image quality in digital chest and pelvis radiography on tube voltage, and to explore correlations between clinical and physical measures of image quality. The effect on image quality of tube voltage in these two examinations was assessed using two methods. The first method relies on radiologists' observations of images of an anthropomorphic phantom, and the second method was based on computer modeling of the imaging system using an anthropomorphic voxel phantom. The tube voltage was varied within a broad range (50-150 kV), including those values typically used with screen-film radiography. The tube charge was altered so that the same effective dose was achieved for each projection. Two x-ray units were employed using a computed radiography (CR) image detector with standard tube filtration and antiscatter device. Clinical image quality was assessed by a group of radiologists using a visual grading analysis (VGA) technique based on the revised CEC image criteria. Physical image quality was derived from a Monte Carlo computer model in terms of the signal-to-noise ratio, SNR, of anatomical structures corresponding to the image criteria. Both the VGAS (visual grading analysis score) and SNR decrease with increasing tube voltage in both chest PA and pelvis AP examinations, indicating superior performance if lower tube voltages are employed. Hence, a positive correlation between clinical and physical measures of image quality was found. The pros and cons of using lower tube voltages with CR digital radiography than typically used in analog screen-film radiography are discussed, as well as the relevance of using VGAS and quantum-noise SNR as measures of image quality in pelvis and chest radiography

  11. Intraoperative cone-beam computed tomography and multi-slice computed tomography in temporal bone imaging for surgical treatment.

    Science.gov (United States)

    Erovic, Boban M; Chan, Harley H L; Daly, Michael J; Pothier, David D; Yu, Eugene; Coulson, Chris; Lai, Philip; Irish, Jonathan C

    2014-01-01

    Conventional computed tomography (CT) imaging is the standard imaging technique for temporal bone diseases, whereas cone-beam CT (CBCT) imaging is a very fast imaging tool with a significantly lower radiation dose than conventional CT. We hypothesize that a system for intraoperative cone-beam CT provides image quality comparable to diagnostic CT for identifying temporal bone anatomical landmarks in cadaveric specimens. Cross-sectional study. University tertiary care facility. Twenty cadaveric temporal bones were affixed into a head phantom and scanned with both a prototype cone-beam CT C-arm and multislice helical CT. Imaging performance was evaluated by 3 otologic surgeons and 1 head and neck radiologist. Participants were presented images in a randomized order and completed landmark identification questionnaires covering 21 structures. CBCT and multislice CT had comparable performance in identifying temporal bone structures. The three otologic surgeons indicated that CBCT provided statistically equivalent performance for 19 of 21 landmarks, with CBCT superior to CT for the chorda tympani and inferior for the crura of the stapes. Subgroup analysis showed that CBCT performed better than CT for temporal bone structures. The radiologist rated CBCT and CT as statistically equivalent for 18 of 21 landmarks, with CT superior to CBCT for the crura of the stapes, chorda tympani, and sigmoid sinus. CBCT provides image quality comparable to conventional CT for temporal bone anatomical sites in cadaveric specimens. Clinical applications of low-dose CBCT imaging in surgical planning, intraoperative guidance, and postoperative assessment are promising but require further investigation.

  12. Registration and analysis for images couple : application to mammograms

    OpenAIRE

    Boucher, Arnaud

    2014-01-01

    Advisor: Nicole Vincent. Date and location of PhD thesis defense: 10 January 2013, University of Paris Descartes. In this thesis, the problem addressed is the development of a computer-aided diagnosis system (CAD) based on the conjoint analysis of several images, and therefore on the comparison of these medical images. The particularity of our approach is to look for evolutions or aberrant new tissues in a given set, rather than attempting to characterize, with a strong a priori, the type of ti...

  13. Medical Imaging Informatics: Towards a Personalized Computational Patient.

    Science.gov (United States)

    Ayache, N

    2016-05-20

    Medical Imaging Informatics has become a fast evolving discipline at the crossing of Informatics, Computational Sciences, and Medicine that is profoundly changing medical practices, for the patients' benefit.

  14. SimpleITK Image-Analysis Notebooks: a Collaborative Environment for Education and Reproducible Research.

    Science.gov (United States)

    Yaniv, Ziv; Lowekamp, Bradley C; Johnson, Hans J; Beare, Richard

    2018-06-01

    Modern scientific endeavors increasingly require team collaborations to construct and interpret complex computational workflows. This work describes an image-analysis environment that supports the use of computational tools facilitating reproducible research and supporting scientists with varying levels of software development skills. The Jupyter notebook web application is the basis of an environment that enables flexible, well-documented, and reproducible workflows via literate programming. Image-analysis software development is made accessible to scientists with varying levels of programming experience via the SimpleITK toolkit, a simplified interface to the Insight Segmentation and Registration Toolkit. Additional features of the development environment include user-friendly data sharing using online data repositories and a testing framework that facilitates code maintenance. SimpleITK provides a large number of examples illustrating educational and research-oriented image-analysis workflows, available for free download from GitHub under an Apache 2.0 license: github.com/InsightSoftwareConsortium/SimpleITK-Notebooks.
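
    A minimal sketch of the kind of workflow the notebooks demonstrate (the file names, smoothing sigma, and threshold settings are illustrative assumptions): load a volume, smooth it, segment by Otsu thresholding, and report a label statistic with SimpleITK.

      import SimpleITK as sitk

      image = sitk.ReadImage("ct_volume.nii.gz", sitk.sitkFloat32)  # placeholder path
      smoothed = sitk.SmoothingRecursiveGaussian(image, sigma=1.0)
      mask = sitk.OtsuThreshold(smoothed, 0, 1, 200)  # inside (background) 0, outside 1

      stats = sitk.LabelStatisticsImageFilter()
      stats.Execute(image, mask)
      print("foreground mean intensity:", stats.GetMean(1))
      sitk.WriteImage(mask, "ct_mask.nii.gz")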

  15. WE-G-207-05: Relationship Between CT Image Quality, Segmentation Performance, and Quantitative Image Feature Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J; Nishikawa, R [University of Pittsburgh, Pittsburgh, PA (United States); Reiser, I [The University of Chicago, Chicago, IL (United States); Boone, J [UC Davis Medical Center, Sacramento, CA (United States)

    2015-06-15

    Purpose: Segmentation quality can affect quantitative image feature analysis. The objective of this study is to examine the relationship between computed tomography (CT) image quality, segmentation performance, and quantitative image feature analysis. Methods: A total of 90 pathology-proven breast lesions in 87 dedicated breast CT images were considered. An iterative image reconstruction (IIR) algorithm was used to obtain CT images of different quality. With different combinations of 4 variables in the algorithm, a total of 28 differently reconstructed CT image sets were obtained. Two imaging tasks/objectives were considered: 1) segmentation and 2) classification of the lesion as benign or malignant. Twenty-three image features were extracted after segmentation using a semi-automated algorithm, and 5 of them were selected via a feature selection technique. Logistic regression was trained and tested using leave-one-out cross-validation, and its area under the ROC curve (AUC) was recorded. The standard deviation of a homogeneous portion and the gradient of a parenchymal portion of an example breast were used as estimates of image noise and sharpness. The DICE coefficient was computed using a radiologist’s drawing on the lesion. Mean DICE and AUC were used as performance metrics for each of the 28 reconstructions. The relationships between segmentation and classification performance under different reconstructions were compared. Distributions (median, 95% confidence interval) of DICE and AUC for each reconstruction were also compared. Results: A moderate correlation (Pearson’s rho = 0.43, p-value = 0.02) between DICE and AUC values was found. However, the variation between DICE and AUC values for each reconstruction increased as the image sharpness increased. There was a combination of IIR parameters that resulted in the best segmentation with the worst classification performance. Conclusion: There are certain images that yield better segmentation or classification
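
    The segmentation metric used here, the DICE coefficient, is twice the overlap divided by the sum of the two region sizes; a minimal sketch with toy masks (shapes and sizes are illustrative):

      import numpy as np

      def dice(seg, ref):
          seg, ref = seg.astype(bool), ref.astype(bool)
          intersection = np.logical_and(seg, ref).sum()
          return 2.0 * intersection / (seg.sum() + ref.sum())

      seg = np.zeros((64, 64)); seg[20:40, 20:40] = 1   # automated segmentation
      ref = np.zeros((64, 64)); ref[22:42, 22:42] = 1   # reference drawing
      print(f"DICE = {dice(seg, ref):.3f}")             # 1.0 means perfect overlap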

  16. Analysis and improvement of a chaos-based image encryption algorithm

    International Nuclear Information System (INIS)

    Xiao Di; Liao Xiaofeng; Wei Pengcheng

    2009-01-01

    The security of digital images has attracted much attention recently. In Guan et al. [Guan Z, Huang F, Guan W. Chaos-based image encryption algorithm. Phys Lett A 2005;346:153-7], a chaos-based image encryption algorithm was proposed. In this paper, the cause of potential flaws in the original algorithm is analyzed in detail, and corresponding enhancement measures are proposed. Both theoretical analysis and computer simulation indicate that the improved algorithm can overcome these flaws and maintain all the merits of the original one.
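
    For orientation, a generic cipher of this family (not the Guan et al. scheme or the improved version; the map parameters and byte quantisation are illustrative assumptions) XORs pixel bytes with a keystream generated by the chaotic logistic map x ← r·x·(1−x); weak keystream designs of this type are what such cryptanalyses exploit.

      import numpy as np

      def logistic_keystream(x0, r, n, burn_in=1000):
          x, out = x0, np.empty(n, dtype=np.uint8)
          for _ in range(burn_in):                  # discard the transient
              x = r * x * (1.0 - x)
          for i in range(n):
              x = r * x * (1.0 - x)
              out[i] = int(x * 256) & 0xFF          # quantise the state to a byte
          return out

      rng = np.random.default_rng(4)
      image = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
      ks = logistic_keystream(x0=0.3141, r=3.9999, n=image.size).reshape(image.shape)
      cipher = image ^ ks                           # encryption
      plain = cipher ^ ks                           # decryption (XOR is symmetric)
      assert np.array_equal(plain, image)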

  17. Spiral Computed Tomographic Imaging Related to Computerized Ultrasonographic Images of Carotid Plaque Morphology and Histology

    DEFF Research Database (Denmark)

    Grønholdt, Marie-Louise M.; Wagner, Aase; Wiebe, Britt M.

    2001-01-01

    Echolucency of carotid atherosclerotic plaques, as evaluated by computerized B-mode ultrasonographic images, has been associated with an increased incidence of brain infarcts on cerebral computed tomographic scans. We tested the hypotheses that characterization of carotid plaques on spiral comput...

  18. Image processing with personal computer

    International Nuclear Information System (INIS)

    Hara, Hiroshi; Handa, Madoka; Watanabe, Yoshihiko

    1990-01-01

    A method of automating judgement work on photographs in radiographic nondestructive inspection, using a simple commercially available image processor, was examined. Software for defect extraction and binarization and software for automatic judgement were developed on a trial basis, and their accuracy and problematic points were tested using various photographs on which the judgement had already been made. Depending on the state of the photographed objects and the inspection conditions, judgement accuracies from 100% down to 45% were obtained. The criteria for judgement conformed to the collection of reference photographs compiled by the Japan Cast Steel Association. In nondestructive inspection by radiography, the number and size of defect images in photographs are judged visually, the results are collated with the standard, and the quality is decided. Recently, image processing technology on personal computers has advanced; by utilizing this technology, automation of the judgement of photographs was attempted in order to improve accuracy, increase inspection efficiency and save labor. (K.I.)

  19. Paediatric x-ray radiation dose reduction and image quality analysis.

    Science.gov (United States)

    Martin, L; Ruddlesden, R; Makepeace, C; Robinson, L; Mistry, T; Starritt, H

    2013-09-01

    Collaboration of multiple staff groups has resulted in significant reduction in the risk of radiation-induced cancer from radiographic x-ray exposure during childhood. In this study at an acute NHS hospital trust, a preliminary audit identified initial exposure factors. These were compared with European and UK guidance, leading to the introduction of new factors that were in compliance with European guidance on x-ray tube potentials. Image quality was assessed using standard anatomical criteria scoring, and visual grading characteristics analysis assessed the impact on image quality of changes in exposure factors. This analysis determined the acceptability of gradual radiation dose reduction below the European and UK guidance levels. Chest and pelvis exposures were optimised, achieving dose reduction for each age group, with 7%-55% decrease in critical organ dose. Clinicians confirmed diagnostic image quality throughout the iterative process. Analysis of images acquired with preliminary and final exposure factors indicated an average visual grading analysis result of 0.5, demonstrating equivalent image quality. The optimisation process and final radiation doses are reported for Carestream computed radiography to aid other hospitals in minimising radiation risks to children.

  20. Paediatric x-ray radiation dose reduction and image quality analysis

    International Nuclear Information System (INIS)

    Martin, L; Ruddlesden, R; Mistry, T; Starritt, H; Makepeace, C; Robinson, L

    2013-01-01

    Collaboration of multiple staff groups has resulted in significant reduction in the risk of radiation-induced cancer from radiographic x-ray exposure during childhood. In this study at an acute NHS hospital trust, a preliminary audit identified initial exposure factors. These were compared with European and UK guidance, leading to the introduction of new factors that were in compliance with European guidance on x-ray tube potentials. Image quality was assessed using standard anatomical criteria scoring, and visual grading characteristics analysis assessed the impact on image quality of changes in exposure factors. This analysis determined the acceptability of gradual radiation dose reduction below the European and UK guidance levels. Chest and pelvis exposures were optimised, achieving dose reduction for each age group, with 7%–55% decrease in critical organ dose. Clinicians confirmed diagnostic image quality throughout the iterative process. Analysis of images acquired with preliminary and final exposure factors indicated an average visual grading analysis result of 0.5, demonstrating equivalent image quality. The optimisation process and final radiation doses are reported for Carestream computed radiography to aid other hospitals in minimising radiation risks to children. (paper)

  1. Computational imaging using lightweight diffractive-refractive optics

    KAUST Repository

    Peng, Yifan

    2015-11-23

    Diffractive optical elements (DOE) show great promise for imaging optics that are thinner and more lightweight than conventional refractive lenses while preserving their light efficiency. Unfortunately, severe spectral dispersion currently limits the use of DOEs in consumer-level lens design. In this article, we jointly design lightweight diffractive-refractive optics and post-processing algorithms to enable imaging under white light illumination. Using the Fresnel lens as a general platform, we show three phase-plate designs, including a super-thin stacked plate design, a diffractive-refractive-hybrid lens, and a phase coded-aperture lens. Combined with cross-channel deconvolution algorithm, both spherical and chromatic aberrations are corrected. Experimental results indicate that using our computational imaging approach, diffractive-refractive optics is an alternative candidate to build light efficient and thin optics for white light imaging.
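
    The post-processing half of this approach is deconvolution; a hedged single-channel sketch with scikit-image's Wiener filter (the sample image, box PSF, and regularisation weight are illustrative assumptions, and the paper's cross-channel method additionally shares edge information between colour channels):

      import numpy as np
      from scipy.signal import convolve2d
      from skimage import data, restoration
      from skimage.color import rgb2gray

      image = rgb2gray(data.astronaut())
      psf = np.ones((5, 5)) / 25                  # stand-in blur kernel
      blurred = convolve2d(image, psf, mode="same", boundary="symm")
      deconvolved = restoration.wiener(blurred, psf, balance=0.01)
      print(deconvolved.shape)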

  2. Computational imaging using lightweight diffractive-refractive optics

    KAUST Repository

    Peng, Yifan; Fu, Qiang; Amata, Hadi; Su, Shuochen; Heide, Felix; Heidrich, Wolfgang

    2015-01-01

    Diffractive optical elements (DOE) show great promise for imaging optics that are thinner and more lightweight than conventional refractive lenses while preserving their light efficiency. Unfortunately, severe spectral dispersion currently limits the use of DOEs in consumer-level lens design. In this article, we jointly design lightweight diffractive-refractive optics and post-processing algorithms to enable imaging under white light illumination. Using the Fresnel lens as a general platform, we show three phase-plate designs, including a super-thin stacked plate design, a diffractive-refractive-hybrid lens, and a phase coded-aperture lens. Combined with cross-channel deconvolution algorithm, both spherical and chromatic aberrations are corrected. Experimental results indicate that using our computational imaging approach, diffractive-refractive optics is an alternative candidate to build light efficient and thin optics for white light imaging.

  3. Object-Based Image Analysis Beyond Remote Sensing - the Human Perspective

    Science.gov (United States)

    Blaschke, T.; Lang, S.; Tiede, D.; Papadakis, M.; Györi, A.

    2016-06-01

    We introduce a prototypical methodological framework for a place-based GIS-RS system for the spatial delineation of place while incorporating spatial analysis and mapping techniques using methods from different fields such as environmental psychology, geography, and computer science. The methodological lynchpin for this to happen - when aiming to delineate place in terms of objects - is object-based image analysis (OBIA).

  4. Superresolution Imaging Using Resonant Multiples and Plane-wave Migration Velocity Analysis

    KAUST Repository

    Guo, Bowen

    2017-08-28

    Seismic imaging is a technique that uses seismic echoes to map and detect underground geological structures. The conventional seismic image has a resolution limit of λ/2, where λ is the wavelength associated with the seismic waves propagating in the subsurface. To exceed this resolution limit, this thesis develops a new imaging method using resonant multiples, which produces superresolution images with twice or more the spatial resolution of the conventional primary reflection image. A resonant multiple is defined as a seismic reflection that revisits the same subsurface location along a coincident reflection raypath. This reverberated raypath is the reason for superresolution imaging, because it increases the differences in reflection times associated with subtle changes in the spatial location of the reflector. For the practical implementation of superresolution imaging, I develop a post-stack migration technique that first enhances the signal-to-noise ratios (SNRs) of resonant multiples by a moveout-correction stacking method, and then migrates the post-stacked resonant multiples with the associated Kirchhoff or wave-equation migration formula. I show with synthetic and field data examples that the first-order resonant multiple image has about twice the spatial resolution of the primary reflection image. Besides resolution, a correct estimate of the subsurface velocity is crucial for determining the correct depth of reflectors. Towards this goal, wave-equation migration velocity analysis (WEMVA) is an image-domain method that inverts for the velocity model maximizing the similarity of common image gathers (CIGs). Conventional WEMVA based on subsurface-offset, angle-domain or time-lag CIGs requires significant computational and memory resources because it computes higher-dimensional migration images in the extended image domain. To mitigate this problem, I present a new WEMVA method using plane-wave CIGs. Plane-wave CIGs reduce the

  5. Development of a Multi-Centre Clinical Trial Data Archiving and Analysis Platform for Functional Imaging

    Science.gov (United States)

    Driscoll, Brandon; Jaffray, David; Coolens, Catherine

    2014-03-01

    Purpose: To provide clinicians and researchers participating in multi-centre clinical trials with a central repository for large-volume dynamic imaging data as well as a set of tools providing end-to-end testing and image analysis standards of practice. Methods: There are three main pieces to the data archiving and analysis system: the PACS server, the data analysis computer(s), and the high-speed networks that connect them. Each clinical trial is anonymized using a customizable anonymizer and is stored on a PACS accessible only by AE title access control. The remote analysis station consists of a single virtual machine per trial running on a powerful PC supporting multiple simultaneous instances. Imaging data management and analysis is performed within ClearCanvas Workstation® using custom designed plug-ins for kinetic modelling (The DCE-Tool®), quality assurance (The DCE-QA Tool) and RECIST. Results: A framework has been set up, currently serving seven clinical trials spanning five hospitals, with three more trials to be added over the next six months. After initial rapid image transfer (+ 2 MB/s), all data analysis is done server side, making it robust and rapid. This has provided the ability to perform computationally expensive operations such as voxel-wise kinetic modelling on very large data archives (+20 GB/50k images/patient) remotely with minimal end-user hardware. Conclusions: This system is currently at the proof-of-concept stage but has been used successfully to send and analyze data from remote hospitals. Next steps will involve scaling up the system with a more powerful PACS and multiple high-powered analysis machines as well as adding real-time review capabilities.

  6. Development of a Multi-Centre Clinical Trial Data Archiving and Analysis Platform for Functional Imaging

    International Nuclear Information System (INIS)

    Driscoll, Brandon; Jaffray, David; Coolens, Catherine

    2014-01-01

    Purpose: To provide clinicians and researchers participating in multi-centre clinical trials with a central repository for large-volume dynamic imaging data as well as a set of tools providing end-to-end testing and image analysis standards of practice. Methods: There are three main pieces to the data archiving and analysis system: the PACS server, the data analysis computer(s), and the high-speed networks that connect them. Each clinical trial is anonymized using a customizable anonymizer and is stored on a PACS accessible only by AE title access control. The remote analysis station consists of a single virtual machine per trial running on a powerful PC supporting multiple simultaneous instances. Imaging data management and analysis is performed within ClearCanvas Workstation® using custom designed plug-ins for kinetic modelling (The DCE-Tool®), quality assurance (The DCE-QA Tool) and RECIST. Results: A framework has been set up, currently serving seven clinical trials spanning five hospitals, with three more trials to be added over the next six months. After initial rapid image transfer (+ 2 MB/s), all data analysis is done server side, making it robust and rapid. This has provided the ability to perform computationally expensive operations such as voxel-wise kinetic modelling on very large data archives (+20 GB/50k images/patient) remotely with minimal end-user hardware. Conclusions: This system is currently at the proof-of-concept stage but has been used successfully to send and analyze data from remote hospitals. Next steps will involve scaling up the system with a more powerful PACS and multiple high-powered analysis machines as well as adding real-time review capabilities.

  7. Artificial Intelligence Methods in Analysis of Morphology of Selected Structures in Medical Images

    Directory of Open Access Journals (Sweden)

    Ryszard Tadeusiewicz

    2001-01-01

    The goal of this paper is to present the possibilities of applying syntactic methods of computer image analysis to the recognition of local stenoses of the coronary artery lumen and the detection of pathological signs in the upper parts of the ureter ducts and renal calyxes. Analysis of the correct morphology of these structures is possible thanks to the application of sequence and tree methods from the group of syntactic methods of pattern recognition. In the case of analysis of coronary artery images, the main objective is computer-aided early diagnosis of different forms of ischemic cardiovascular disease. Such diseases may be revealed in the form of stable or unstable disturbances of heart rhythm or infarction. In the analysis of kidney radiograms, the main goal is recognition of local irregularities in ureter lumens and examination of the morphology of the renal pelvis and calyxes.

  8. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich

    2014-01-01

    Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton–Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) the center of gravity; (d) the global inhomogeneity index and (e) the ventilation delay in mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal–Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that have been validated for images from one reconstruction algorithm are also valid for other reconstruction algorithms. (paper)
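
    One of these indices, the global inhomogeneity (GI) index, is the summed absolute deviation of the tidal impedance variation from its median over lung pixels, normalised by the total; a minimal sketch (the tidal image and lung mask are synthetic stand-ins):

      import numpy as np

      def global_inhomogeneity(tidal_image, lung_mask):
          di = tidal_image[lung_mask]               # tidal impedance variation, lung pixels
          return np.abs(di - np.median(di)).sum() / di.sum()

      rng = np.random.default_rng(5)
      tidal = rng.random((32, 32))                  # stand-in tidal impedance map
      mask = np.zeros((32, 32), dtype=bool); mask[8:24, 4:28] = True
      print(f"GI = {global_inhomogeneity(tidal, mask):.3f}")  # higher = less homogeneous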

  9. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) the center of gravity; (d) the global inhomogeneity index and (e) the ventilation delay in mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that have been validated for images from one reconstruction algorithm are also valid for other reconstruction algorithms.

  10. Development of computer-controlled ultrasonic image processing system for severe accidents research

    International Nuclear Information System (INIS)

    Koo, Kil Mo; Kang, Kyung Ho; Kim, Jong Tai; Kim, Jong Whan; Cho, Young Ro; Ha, Kwang Soon; Park, Rae Jun; Kim, Sang Baik; Kim, Hee Dong; Sim, Chul Moo

    2000-07-01

    In order to verify the in-vessel corium cooling mechanism, the LAVA (Lower-plenum Arrested Vessel Attack) experiment is being performed as a first-stage proof-of-principle test. The aims of this study are to find a gap formation between the corium melt and the reactor lower head vessel, to verify the principle of the gap formation, and to analyze the effect of the gap formation on the thermal behavior of the corium melt and the lower plenum. This report describes the development of a computer-controlled image signal processing system that improves visualization and measures the gap distribution as a 3-dimensional planar image, using a time-domain signal analysis method (one of the ultrasonic pulse-echo methods) together with a computerized position control system. The image signal processing system was developed by independently developing an ultrasonic image signal processing technique and a PC-controlled position control system and then combining both systems

  11. Development of computer-controlled ultrasonic image processing system for severe accidents research

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Kil Mo; Kang, Kyung Ho; Kim, Jong Tai; Kim, Jong Whan; Cho, Young Ro; Ha, Kwang Soon; Park, Rae Jun; Kim, Sang Baik; Kim, Hee Dong; Sim, Chul Moo

    2000-07-01

    In order to verify the in-vessel corium cooling mechanism, the LAVA (Lower-plenum Arrested Vessel Attack) experiment is being performed as a first-stage proof-of-principle test. The aims of this study are to find a gap formation between the corium melt and the reactor lower head vessel, to verify the principle of the gap formation, and to analyze the effect of the gap formation on the thermal behavior of the corium melt and the lower plenum. This report describes the development of a computer-controlled image signal processing system that improves visualization and measures the gap distribution as a 3-dimensional planar image, using a time-domain signal analysis method (one of the ultrasonic pulse-echo methods) together with a computerized position control system. The image signal processing system was developed by independently developing an ultrasonic image signal processing technique and a PC-controlled position control system and then combining both systems.

  12. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2008-03-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using a helical CT scanner for lung cancer mass screening. Functions for observing suspicious shadows in detail are provided in a computer-aided diagnosis workstation incorporating these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission and with biometric fingerprint and face authentication systems. Biometric face authentication used on the telemedicine site makes "encryption of files" and "successful login" effective, so patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation and our telemedicine network, can increase diagnostic speed and accuracy and improve the security of medical information.

  13. VoxelStats: A MATLAB Package for Multi-Modal Voxel-Wise Brain Image Analysis.

    Science.gov (United States)

    Mathotaarachchi, Sulantha; Wang, Seqian; Shin, Monica; Pascoal, Tharick A; Benedet, Andrea L; Kang, Min Su; Beaudry, Thomas; Fonov, Vladimir S; Gauthier, Serge; Labbe, Aurélie; Rosa-Neto, Pedro

    2016-01-01

    In healthy individuals, behavioral outcomes are highly associated with variability in regional brain structure or neurochemical phenotypes. Similarly, in the context of neurodegenerative conditions, neuroimaging reveals that cognitive decline is linked to the magnitude of atrophy, neurochemical declines, or concentrations of abnormal protein aggregates across brain regions. However, modeling the effects of multiple regional abnormalities as determinants of cognitive decline at the voxel level remains largely unexplored by multimodal imaging research, given the high computational cost of estimating regression models for every single voxel from various imaging modalities. VoxelStats is a voxel-wise computational framework to overcome these computational limitations and to perform statistical operations on multiple scalar variables and imaging modalities at the voxel level. The VoxelStats package has been developed in MATLAB® and supports imaging formats such as NIfTI-1, ANALYZE, and MINC v2. Prebuilt functions in VoxelStats enable the user to perform voxel-wise general and generalized linear models and mixed effect models with multiple volumetric covariates. Importantly, VoxelStats can recognize scalar values or image volumes as response variables and can accommodate volumetric statistical covariates as well as their interaction effects with other variables. Furthermore, this package includes built-in functionality to perform voxel-wise receiver operating characteristic analysis and paired and unpaired group contrast analysis. Validation of VoxelStats was conducted by comparing the linear regression functionality with existing toolboxes such as glim_image and RMINC. The validation results were identical to existing methods and the additional functionality was demonstrated by generating feature case assessments (t-statistics, odds ratio, and true positive rate maps). In summary, VoxelStats expands the current methods for multimodal imaging analysis by allowing the
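
    VoxelStats itself is a MATLAB package; the sketch below only illustrates the underlying idea of voxel-wise regression with a volumetric covariate, using NumPy and synthetic data. All shapes, variable names, and the toy model are assumptions for illustration, not the package's API.

    ```python
    import numpy as np

    # Synthetic data: y is a behavioral score per subject; the model has
    # an imaging covariate that varies voxel by voxel plus a scalar one.
    n_subj, nx, ny, nz = 40, 8, 8, 8
    rng = np.random.default_rng(0)
    atrophy = rng.normal(size=(n_subj, nx, ny, nz))  # volumetric covariate
    age = rng.normal(60, 8, size=n_subj)             # scalar covariate
    y = rng.normal(size=n_subj)                      # response variable

    V = nx * ny * nz
    A = atrophy.reshape(n_subj, V)
    tvals = np.empty(V)
    for v in range(V):                 # one regression per voxel
        X = np.column_stack([np.ones(n_subj), A[:, v], age])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        dof = n_subj - X.shape[1]
        sigma2 = res[0] / dof          # residual variance
        cov = sigma2 * np.linalg.inv(X.T @ X)
        tvals[v] = beta[1] / np.sqrt(cov[1, 1])   # t-stat for atrophy

    t_map = tvals.reshape(nx, ny, nz)  # voxel-wise t-map
    print(t_map.shape, float(np.abs(t_map).max()))
    ```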

  14. Imaging of Anal Fistulas: Comparison of Computed Tomographic Fistulography and Magnetic Resonance Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Changhu [Shandong Medical Imaging Research Institute, Shandong University, Jinan 250021 (China); Lu, Yongchao [Traditional Chinese Medicine Department, Provincial Hospital Affiliated to Shandong University, Jinan 250021 (China); Zhao, Bin [Shandong Medical Imaging Research Institute, Shandong University, Jinan 250021 (China); Du, Yinglin [Shandong Provincial Center for Disease Control and Prevention, Public Health Institute, Jinan 250014 (China); Wang, Cuiyan [Shandong Medical Imaging Research Institute, Shandong University, Jinan 250021 (China); Jiang, Wanli [Department of Radiology, Taishan Medical University, Taian 271000 (China)

    2014-07-01

    The primary importance of magnetic resonance (MR) imaging in evaluating anal fistulas lies in its ability to demonstrate hidden areas of sepsis and secondary extensions in patients with fistula in ano. MR imaging is relatively expensive, however, and in many healthcare systems worldwide access to it remains restricted. Until recently, computed tomography (CT) has played a limited role in imaging fistula in ano, largely owing to its poor soft-tissue resolution. In this article, the imaging features of CT and MRI are compared to demonstrate the relative accuracy of CT fistulography for the preoperative assessment of fistula in ano. CT fistulography and MR imaging have their own advantages for preoperative evaluation of perianal fistula and can be applied to complement one another when necessary.

  15. SMART USE OF COMPUTER-AIDED SPERM ANALYSIS (CASA) TO CHARACTERIZE SPERM MOTION

    Science.gov (United States)

    Computer-aided sperm analysis (CASA) has evolved over the past fifteen years to provide an objective, practical means of measuring and characterizing the velocity and pattern of sperm motion. CASA instruments use video frame-grabber boards to capture multiple images of spermato...
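
    The kinematic quantities CASA instruments derive from frame-grabbed centroid tracks can be illustrated with a small sketch. This is a simplified illustration of curvilinear velocity (VCL), straight-line velocity (VSL), and linearity (LIN); real instruments apply additional smoothing and parameter definitions that are not reproduced here, and all values below are synthetic.

    ```python
    import numpy as np

    def casa_kinematics(track, fps, um_per_px):
        """Basic CASA-style motion parameters from a sperm-head track.

        track: (N, 2) array of centroid positions in pixels, one per frame.
        Returns VCL and VSL in um/s plus the LIN ratio.
        """
        xy = np.asarray(track, dtype=float) * um_per_px
        steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
        duration = (len(xy) - 1) / fps
        vcl = steps.sum() / duration                      # curvilinear velocity
        vsl = np.linalg.norm(xy[-1] - xy[0]) / duration   # straight-line velocity
        lin = vsl / vcl if vcl > 0 else 0.0               # linearity index
        return vcl, vsl, lin

    # Example track: 30 frames of a loosely oscillating forward path.
    t = np.linspace(0, 1, 30)
    track = np.column_stack([100 + 20 * t + 5 * np.sin(12 * t),
                             80 + 5 * np.cos(12 * t)])
    print(casa_kinematics(track, fps=30, um_per_px=0.5))
    ```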

  16. Kimura's disease: imaging patterns on computed tomography

    International Nuclear Information System (INIS)

    Gopinathan, Anil; Tan, T.Y.

    2009-01-01

    Aim: To define the role of computed tomography (CT) in identifying and classifying the imaging patterns of diagnostic value in Kimura's disease of the head and neck. Methods: A retrospective study was undertaken comprising 13 patients with histopathological evidence of Kimura's disease. The patients' clinical and pathological records were reviewed against a detailed analysis of their CT images, performed from the base of the skull to the arch of the aorta. Results: Both well-defined nodular masses and ill-defined, plaque-like infiltrative masses were seen in the subcutaneous tissue of the head and neck region. All patients had lesions adjacent to the major salivary glands. The parotid gland was affected in 10 of the 13 cases and the submandibular gland in the rest. Contrast enhancement was variable. More than half of the cases had associated lymphadenopathy. Some showed atrophy of the skin and subcutaneous fat overlying the subcutaneous masses. Blood eosinophilia was a consistent feature in all cases. Conclusion: The patterns of distribution, morphology, and enhancement of the lesions in Kimura's disease that can be demonstrated at CT enable a confident, non-invasive diagnosis of this condition in an appropriate clinical context.

  17. Quantitative Analysis in Nuclear Medicine Imaging

    CERN Document Server

    2006-01-01

    This book provides a review of image analysis techniques as they are applied in the field of diagnostic and therapeutic nuclear medicine. Driven in part by the remarkable increase in computing power and its ready and inexpensive availability, this is a relatively new yet rapidly expanding field. Likewise, although the use of radionuclides for diagnosis and therapy has origins dating back almost to the discovery of natural radioactivity itself, radionuclide therapy and, in particular, targeted radionuclide therapy has only recently emerged as a promising approach for therapy of cancer and, to a lesser extent, other diseases. An effort has, therefore, been made to place the reviews provided in this book in a broader context. This effort is reflected by the inclusion of introductory chapters that address basic principles of nuclear medicine imaging, followed by an overview of issues that are closely related to quantitative nuclear imaging and its potential role in diagnostic and therapeutic applications. ...

  18. Accelerated Synchrotron X-ray Diffraction Data Analysis on a Heterogeneous High Performance Computing System

    Energy Technology Data Exchange (ETDEWEB)

    Qin, J; Bauer, M A, E-mail: qin.jinhui@gmail.com, E-mail: bauer@uwo.ca [Computer Science Department, University of Western Ontario, London, ON N6A 5B7 (Canada)

    2010-11-01

    The analysis of synchrotron X-ray Diffraction (XRD) data has been used by scientists and engineers to understand and predict properties of materials. However, the large volume of XRD image data and the intensive computations involved in the data analysis make it hard for researchers to quickly reach any conclusions about the images from an experiment when using conventional XRD data analysis software. Synchrotron time is valuable, and delays in XRD data analysis can affect decisions about subsequent experiments or about the materials being investigated. In order to improve data analysis performance, ideally achieving near-real-time analysis during an XRD experiment, we designed and implemented software for accelerated XRD data analysis. The software has been developed for a heterogeneous high performance computing (HPC) system comprising IBM PowerXCell 8i processors and Intel quad-core Xeon processors. This paper describes the software and reports on the improved performance. The results indicate that it is possible for XRD data to be analyzed at the rate it is being produced.
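
    The paper targets a PowerXCell/Xeon cluster; as a much simpler illustration of the same principle, analyzing detector frames concurrently so that analysis keeps pace with acquisition, the sketch below farms a toy radial-profile reduction out to worker processes. The reduction function and the synthetic frame source are stand-ins, not the authors' pipeline.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def radial_profile(image):
        """Collapse a 2-D diffraction image to a 1-D radial profile.
        A stand-in for the real XRD reduction step, which is far richer."""
        ny, nx = image.shape
        yy, xx = np.indices((ny, nx))
        r = np.hypot(xx - nx / 2, yy - ny / 2).astype(int)
        sums = np.bincount(r.ravel(), weights=image.ravel())
        counts = np.bincount(r.ravel())
        return sums / np.maximum(counts, 1)

    def analyze(frame_id):
        # In a real pipeline this would load a detector frame from disk.
        rng = np.random.default_rng(frame_id)
        image = rng.poisson(5.0, size=(256, 256)).astype(float)
        return frame_id, radial_profile(image)

    if __name__ == "__main__":
        with Pool(processes=4) as pool:     # one worker per core
            for frame_id, profile in pool.imap_unordered(analyze, range(16)):
                print(frame_id, profile[:3])
    ```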

  19. Accelerated Synchrotron X-ray Diffraction Data Analysis on a Heterogeneous High Performance Computing System

    International Nuclear Information System (INIS)

    Qin, J; Bauer, M A

    2010-01-01

    The analysis of synchrotron X-ray Diffraction (XRD) data has been used by scientists and engineers to understand and predict properties of materials. However, the large volume of XRD image data and the intensive computations involved in the data analysis make it hard for researchers to quickly reach any conclusions about the images from an experiment when using conventional XRD data analysis software. Synchrotron time is valuable, and delays in XRD data analysis can affect decisions about subsequent experiments or about the materials being investigated. In order to improve data analysis performance, ideally achieving near-real-time analysis during an XRD experiment, we designed and implemented software for accelerated XRD data analysis. The software has been developed for a heterogeneous high performance computing (HPC) system comprising IBM PowerXCell 8i processors and Intel quad-core Xeon processors. This paper describes the software and reports on the improved performance. The results indicate that it is possible for XRD data to be analyzed at the rate it is being produced.

  20. A digital imaging teaching file by using the internet, HTML and personal computers

    International Nuclear Information System (INIS)

    Chun, Tong Jin; Jeon, Eun Ju; Baek, Ho Gil; Kang, Eun Joo; Baik, Seung Kug; Choi, Han Yong; Kim, Bong Ki

    1996-01-01

    A film-based teaching file takes up space, and the need to search through such a file places limits on the extent to which it is likely to be used. Furthermore, it is not easy for doctors in a medium-sized hospital to encounter a variety of cases, so for these reasons we created an easy-to-use digital imaging teaching file with HTML (Hypertext Markup Language) and images downloaded via World Wide Web (WWW) services on the Internet. This file was suitable for use by computer novices. We used WWW Internet services as a resource for various images and three different IBM-PC compatible computers (386DX, 486DX-II, and Pentium) for downloading the images and developing the digitized teaching file. These computers were connected to the Internet through a high-speed dial-up modem (28.8 Kbps); Twinsock and Netscape were used to navigate the Internet. Korean word-processing software (version 3.0) was used to create the HTML files, and the downloaded images were linked to them. In this way, a digital imaging teaching file program was created. Access to a Web service via the Internet required a fast computer (at least a 486DX-II with 8 MB RAM) for comfortable use; this also ensured that the quality of downloaded images was not degraded during downloading and that they were good enough to use in a teaching file. The time needed to retrieve the text and related images depends on the size of the file, the speed of the network, and the network traffic at the time of connection. For computer novices, a digital image teaching file using HTML is easy to use. Our method of creating a digital imaging teaching file with the Internet and HTML is easy to implement, and radiologists with little computer experience who want to study various digital radiologic imaging cases should find it easy to use.

  1. DE-BLURRING SINGLE PHOTON EMISSION COMPUTED TOMOGRAPHY IMAGES USING WAVELET DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Neethu M. Sasi

    2016-02-01

    Single photon emission computed tomography imaging is a popular nuclear medicine imaging technique which generates images by detecting radiation emitted by radioactive isotopes injected into the human body. Scattering of the emitted radiation introduces blur into this type of image. This paper proposes an image processing technique to enhance cardiac single photon emission computed tomography images by reducing the blur in the image. The algorithm works in two main stages. In the first stage, a maximum likelihood estimate of the point spread function and the true image is obtained. In the second stage, the Lucy-Richardson algorithm is applied to selected wavelet coefficients of the true image estimate. The significant contribution of this paper is that the processing of images is done in the wavelet domain. Pre-filtering is also performed as a sub-stage to avoid unwanted ringing effects. Real cardiac images are used for the quantitative and qualitative evaluation of the algorithm. Blur metric, peak signal-to-noise ratio, and the Tenengrad criterion are used as quantitative measures. Comparison against other existing de-blurring algorithms is also done. The simulation results indicate that the proposed method effectively reduces the blur present in the image.
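
    A minimal sketch of the two-stage idea, Richardson-Lucy deconvolution applied to wavelet coefficients, is shown below using PyWavelets and scikit-image. It restores only the approximation band and omits the paper's PSF-estimation and pre-filtering stages; the wavelet, level, PSF, and iteration count are illustrative assumptions.

    ```python
    import numpy as np
    import pywt
    from skimage import restoration

    def deblur_wavelet_rl(image, psf, wavelet="db4", level=2, num_iter=30):
        """Richardson-Lucy deconvolution of the wavelet approximation band.
        `num_iter` is the keyword in recent scikit-image releases; older
        versions call it `iterations`."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        approx = np.clip(coeffs[0], 0, None)   # RL expects non-negative data
        scale = float(approx.max()) or 1.0
        deblurred = restoration.richardson_lucy(approx / scale, psf,
                                                num_iter=num_iter)
        coeffs[0] = deblurred * scale
        return pywt.waverec2(coeffs, wavelet)

    # Toy usage with a small normalized Gaussian PSF (values illustrative).
    rng = np.random.default_rng(1)
    img = rng.random((64, 64))
    x = np.arange(-3, 4)
    g = np.exp(-x**2 / 2.0)
    psf = np.outer(g, g)
    psf /= psf.sum()
    print(deblur_wavelet_rl(img, psf).shape)
    ```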

  2. Image formation and image analysis in electron microscopy

    International Nuclear Information System (INIS)

    Heel, M. van.

    1981-01-01

    This thesis covers various aspects of image formation and image analysis in electron microscopy. The imaging of relatively strong objects in partially coherent illumination, the coherence properties of thermionic emission sources, and the detection of objects in quantum-noise-limited images are considered. IMAGIC, a fast, flexible, and friendly image analysis software package, is described. Intelligent averaging of molecular images is discussed. (C.F.)

  3. A review of parallel computing for large-scale remote sensing image mosaicking

    OpenAIRE

    Chen, Lajiao; Ma, Yan; Liu, Peng; Wei, Jingbo; Jie, Wei; He, Jijun

    2015-01-01

    Interest in image mosaicking has been spurred by a wide variety of research and management needs. However, for large-scale applications, remote sensing image mosaicking usually requires significant computational capabilities. Several studies have attempted to apply parallel computing to improve image mosaicking algorithms and to speed up the calculation process. The state of the art of this field has not yet been summarized, which is, however, essential for a better understanding and for further ...

  4. Computing volume potentials for noninvasive imaging of cardiac excitation.

    Science.gov (United States)

    van der Graaf, A W Maurits; Bhagirath, Pranav; van Driel, Vincent J H M; Ramanna, Hemanth; de Hooge, Jacques; de Groot, Natasja M S; Götte, Marco J W

    2015-03-01

    In noninvasive imaging of cardiac excitation, the use of body surface potentials (BSP) rather than body volume potentials (BVP) has been favored due to enhanced computational efficiency and reduced modeling effort. Nowadays, increased computational power and the availability of open source software enable the calculation of BVP for clinical purposes. In order to illustrate the possible advantages of this approach, the explanatory power of BVP is investigated using a rectangular tank filled with an electrolytic conductor and a patient-specific three-dimensional model. MRI images of the tank and of a patient were obtained in three orthogonal directions using a turbo spin echo MRI sequence. The MRI images were segmented in three dimensions using custom-written software. Gmsh software was used for mesh generation. BVP were computed using a transfer matrix and FEniCS software. The solution for 240,000 nodes, corresponding to a resolution of 5 mm throughout the thorax volume, was computed in 3 minutes. The tank experiment revealed that an increased electrode surface renders the position of the 4 V equipotential plane insensitive to mesh cell size and reduces simulated deviations. In the patient-specific model, the impact of assigning a different conductivity to lung tissue on the distribution of volume potentials could be visualized. Generation of high-quality volume meshes and computation of BVP at a resolution of 5 mm is feasible using generally available software and hardware. Estimation of BVP may lead to an improved understanding of the genesis of BSP and of sources of local inaccuracies. © 2014 Wiley Periodicals, Inc.
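
    Once the transfer matrix has been assembled (in the paper, via a finite-element solve with FEniCS), computing the volume potentials at every node reduces to a matrix-vector product per time instant. The sketch below shows just that final step with random stand-in data; in reality the matrix comes from the finite-element model, and the node count is reduced here to keep the toy example light.

    ```python
    import numpy as np

    # Stand-in transfer matrix: the paper used 240,000 nodes; a smaller
    # count is used here purely for illustration.
    n_nodes, n_sources = 24_000, 64
    rng = np.random.default_rng(7)
    transfer = rng.normal(size=(n_nodes, n_sources))  # assumed precomputed
    source = rng.normal(size=n_sources)               # equivalent cardiac sources

    bvp = transfer @ source        # potential at every volume node
    print(bvp.shape)               # (24000,)

    # An equipotential surface is a level set of this field, e.g. the
    # nodes lying near a 4 V plane as in the tank experiment:
    near_plane = np.abs(bvp - 4.0) < 0.05
    print(int(near_plane.sum()), "nodes near the 4 V level")
    ```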

  5. Parotid lymphomas - clinical and computed tomographic imaging ...

    African Journals Online (AJOL)

    Objective. To review the clinical presentation and computed tomography (CT) imaging characteristics of all parotid lymphomas diagnosed at the study institution over a 7-year period. Design. Retrospective chart review of parotid lymphomas diagnosed between 1997 and 2004. Subjects. A total of 121 patients with parotid ...

  6. Automated daily quality control analysis for mammography in a multi-unit imaging center.

    Science.gov (United States)

    Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli

    2018-01-01

    Background: The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose: To develop automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods: An automated image quality analysis software package using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results: The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion: Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
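
    The paper's detection step builds on the discrete wavelet transform; a crude stand-in for that multiresolution step is sketched below, computing per-level detail-band energies in which phantom features of different sizes concentrate. The wavelet choice, level count, and synthetic phantom are assumptions; the actual ACR-phantom detection logic is more involved.

    ```python
    import numpy as np
    import pywt

    def detail_energy(image, wavelet="haar", level=3):
        """Energy of the wavelet detail bands at each decomposition level,
        ordered coarse to fine. Features of different physical sizes
        (fibers, masses, specks) contribute at different scales."""
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        energies = []
        for cH, cV, cD in coeffs[1:]:
            energies.append(float(np.sum(cH**2 + cV**2 + cD**2)))
        return energies

    # Synthetic flat-field "phantom" with one small low-contrast insert.
    rng = np.random.default_rng(2)
    phantom = rng.normal(100, 2, size=(128, 128))
    phantom[60:68, 60:68] += 5.0
    print(detail_energy(phantom))
    ```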

  7. Prior image constrained image reconstruction in emerging computed tomography applications

    Science.gov (United States)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation
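
    For reference, the PICCS objective in its commonly published form (as introduced by Chen et al.; the dissertation may use variants) trades off sparsity of the difference from the prior image against sparsity of the image itself:

    ```latex
    \min_{x}\;\; \alpha \,\lVert \Psi_1 (x - x_{\mathrm{p}}) \rVert_1
              + (1-\alpha)\,\lVert \Psi_2\, x \rVert_1
    \qquad \text{subject to} \qquad A x = y
    ```

    Here x_p is the prior image, A the system matrix, y the measured projection data, Psi_1 and Psi_2 sparsifying transforms, and alpha the prior-image weight.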

  8. Sixth Computational Biomechanics for Medicine Workshop

    CERN Document Server

    Nielsen, Poul MF; Miller, Karol; Computational Biomechanics for Medicine : Deformation and Flow

    2012-01-01

    One of the greatest challenges for mechanical engineers is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, biomedical sciences, and medicine. This book is an opportunity for computational biomechanics specialists to present and exchange opinions on the opportunities of applying their techniques to computer-integrated medicine. Computational Biomechanics for Medicine: Deformation and Flow collects the papers from the Sixth Computational Biomechanics for Medicine Workshop held in Toronto in conjunction with the Medical Image Computing and Computer Assisted Intervention conference. The topics covered include: medical image analysis, image-guided surgery, surgical simulation, surgical intervention planning, disease prognosis and diagnostics, injury mechanism analysis, implant and prostheses design, and medical robotics.

  9. Complete chromogen separation and analysis in double immunohistochemical stains using Photoshop-based image analysis.

    Science.gov (United States)

    Lehr, H A; van der Loos, C M; Teeling, P; Gown, A M

    1999-01-01

    Simultaneous detection of two different antigens on paraffin-embedded and frozen tissues can be accomplished by double immunohistochemistry. However, many double chromogen systems suffer from signal overlap, precluding definite signal quantification. To separate and quantitatively analyze the different chromogens, we imported images into a Macintosh computer using a CCD camera attached to a diagnostic microscope and used Photoshop software for the recognition, selection, and separation of colors. We show here that Photoshop-based image analysis allows complete separation of chromogens not only on the basis of their RGB spectral characteristics, but also on the basis of information concerning saturation, hue, and luminosity intrinsic to the digitized images. We demonstrate that Photoshop-based image analysis provides superior results compared to color separation using bandpass filters. Quantification of the individual chromogens is then provided by Photoshop using the Histogram command, which supplies information on the luminosity (corresponding to gray levels of black-and-white images) and on the number of pixels as a measure of spatial distribution. (J Histochem Cytochem 47:119-125, 1999)
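
    The hue/saturation-based selection the authors perform interactively in Photoshop can be approximated programmatically. The sketch below selects one chromogen by a hue window plus a saturation floor using scikit-image; the thresholds and toy colors are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np
    from skimage import color

    def separate_chromogen(rgb, hue_lo, hue_hi, sat_min=0.15):
        """Select one chromogen by hue range plus a saturation floor.
        Returns a spatial mask and the per-pixel value channel, which
        plays the role of the 'luminosity' measured in the paper."""
        hsv = color.rgb2hsv(rgb)
        h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
        mask = (h >= hue_lo) & (h <= hue_hi) & (s >= sat_min)
        return mask, v[mask]

    # Toy image: a brown-ish (DAB-like) and a red-ish patch on white.
    img = np.ones((64, 64, 3))
    img[10:30, 10:30] = (0.45, 0.30, 0.15)   # brown chromogen
    img[40:60, 40:60] = (0.80, 0.15, 0.20)   # red chromogen
    mask, lum = separate_chromogen(img, hue_lo=0.04, hue_hi=0.12)
    print(mask.sum(), lum.mean())            # pixel count, mean intensity
    ```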

  10. Computer tomography, magnetic resonance imaging, and positron emission tomography or positron emission tomography/computer tomography for detection of metastatic lymph nodes in patients with ovarian cancer: A meta-analysis

    International Nuclear Information System (INIS)

    Yuan Ying; Gu Zhaoxiang; Tao Xiaofeng; Liu Shiyuan

    2012-01-01

    Objectives: To compare the diagnostic performances of computed tomography (CT), magnetic resonance (MR) imaging, and positron emission tomography (PET or PET/CT) for the detection of metastatic lymph nodes in patients with ovarian cancer. Methods: Relevant studies were identified in MEDLINE and EMBASE from January 1990 to July 2010. We estimated the weighted summary sensitivities, specificities, odds ratios (OR), and summary receiver operating characteristic (sROC) curves of each imaging technique and conducted pair-wise comparisons using the two-sample Z-test. Meta-regression, subgroup analysis, and funnel plots were also performed to explain the between-study heterogeneity. Results: Eighteen eligible studies were included, with a total of 882 patients. PET or PET/CT was the more accurate modality (sensitivity, 73.2%; specificity, 96.7%; OR, 90.32). No significant difference was detected between CT (sensitivity, 42.6%; specificity, 95.0%; OR, 19.87) and MR imaging (sensitivity, 54.7%; specificity, 88.3%; OR, 12.38). Meta-regression analyses and subgroup analyses revealed no statistical difference. Funnel plots with marked asymmetry suggested a publication bias. Conclusion: FDG-PET or FDG-PET/CT is more accurate than CT and MR imaging in the detection of lymph node metastasis in patients with ovarian cancer.

  11. Image-Analysis Based on Seed Phenomics in Sesame

    Directory of Open Access Journals (Sweden)

    Prasad R.

    2014-10-01

    The seed coat (testa) structure of twenty-three cultivated (Sesamum indicum L.) and six wild sesame (S. occidentale Regel & Heer., S. mulayanum Nair, S. prostratum Retz., S. radiatum Schumach. & Thonn., S. angustifolium (Oliv.) Engl., and S. schinzianum Asch.) germplasm accessions was analyzed from digital and Scanning Electron Microscopy (SEM) images with dedicated software, using descriptors for computer-based seed image analysis, to understand the diversity of seed morphometric traits, which can later be extended to screen and evaluate improved genotypes of sesame. Seeds of wild sesame species could conveniently be distinguished from cultivated varieties based on shape and architectural analysis. The results indicated discrete 'cut-off' values to identify a definite shape and contour of seed for a desirable sesame genotype, complementing the conventional practice of selecting lighter-colored testa.

  12. Computerized analysis of brain perfusion parameter images

    International Nuclear Information System (INIS)

    Turowski, B.; Haenggi, D.; Wittsack, H.J.; Beck, A.; Aurich, V.

    2007-01-01

    Purpose: The development of a computerized method which allows a direct quantitative comparison of perfusion parameters. The display should allow a clear, direct comparison of brain perfusion parameters in different vascular territories and over the course of time. The analysis is intended to be the basis for further evaluation of cerebral vasospasm after subarachnoid hemorrhage (SAH), and the method should permit early diagnosis of cerebral vasospasm. Materials and Methods: The Angiotux 2D-ECCET software was developed in close cooperation between computer scientists and clinicians. Starting from parameter images of brain perfusion, the cortex was marked, segmented, and assigned to definite vascular territories. The underlying values were averaged for each segment and displayed in a graph. If a follow-up was available, the mean values of the perfusion parameters were displayed as a function of time. The method was developed with CT perfusion values in mind but is applicable to other methods of perfusion imaging. Results: Computerized analysis of brain perfusion parameter images allows an immediate comparison of these parameters and follow-up of mean values in a clear and concise manner. Values are related to definite vascular territories. The tabular output facilitates further statistical evaluation. The computerized analysis is precisely reproducible, i.e., repetitions result in exactly the same output. (orig.)
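
    The core aggregation step, averaging a perfusion parameter over each vascular territory, is simple to express. The sketch below uses synthetic data and arbitrary territory labels; tracked over repeated examinations, such per-territory means are what the described graph display plots over time.

    ```python
    import numpy as np

    # Synthetic perfusion parameter map (e.g., CBF) and a label image
    # assigning each pixel to a vascular territory (0 = background).
    rng = np.random.default_rng(3)
    cbf = rng.normal(50, 10, size=(128, 128))
    territory = np.zeros((128, 128), dtype=int)
    territory[:, :40] = 1      # e.g., ACA territory (labels illustrative)
    territory[:, 40:90] = 2    # e.g., MCA territory
    territory[:, 90:] = 3      # e.g., PCA territory

    for label in (1, 2, 3):
        mean_val = cbf[territory == label].mean()
        print(f"territory {label}: mean CBF {mean_val:.1f}")
    ```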

  13. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    Science.gov (United States)

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with a built-in graphic capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density, and (iii) of postischemic leakage of FITC-labeled high-molecular-weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive, commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  14. Low-dose computed tomographic imaging in orbital trauma

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, A.; Whitehouse, R.W. (Manchester Univ. (United Kingdom). Dept. of Diagnostic Radiology)

    1993-08-01

    The authors review findings in 75 computed tomographic (CT) examinations of 66 patients with orbital trauma who were imaged using a low-radiation-dose CT technique. Imaging was performed using a dynamic scan mode and exposure factors of 120 kVp and 80 mAs resulting in a skin dose of 11 mGy with an effective dose-equivalent of 0.22 mSv. Image quality was diagnostic in all cases and excellent in 73 examinations. Soft-tissue abnormalities within the orbit including muscle adhesions were well demonstrated both on primary axial and reconstructed multiplanar images. The benefits of multiplanar reconstructions are stressed and the contribution of soft-tissue injuries to symptomatic diplopia examined. (author).

  15. Improved diagnostic performance of exercise thallium-201 single photon emission computed tomography over planar imaging in the diagnosis of coronary artery disease: a receiver operating characteristic analysis

    International Nuclear Information System (INIS)

    Fintel, D.J.; Links, J.M.; Brinker, J.A.; Frank, T.L.; Parker, M.; Becker, L.C.

    1989-01-01

    Qualitative interpretation of tomographic and planar scintigrams, a five-point rating scale, and receiver operating characteristic analysis were used to compare single photon emission computed tomography and conventional planar imaging of myocardial thallium-201 uptake with respect to accuracy in the diagnosis of coronary artery disease and of individual vessel involvement. One hundred twelve patients undergoing cardiac catheterization and 23 normal volunteers performed symptom-limited treadmill exercise, followed by stress and redistribution imaging with both tomographic and planar techniques, with the order determined randomly. Paired receiver operating characteristic curves revealed that single photon emission computed tomography was more accurate than planar imaging over the entire range of decision thresholds for the overall detection and exclusion of coronary artery disease and for involvement of the left anterior descending and left circumflex coronary arteries. Tomography offered relatively greater advantages in male patients and in patients with milder forms of coronary artery disease, who had no prior myocardial infarction, only single-vessel involvement, or no lesion greater than or equal to 50 to 69%. Tomography did not appear to provide improved diagnosis in women or in the detection of disease in the right coronary artery. Although overall detection of coronary artery disease was not improved in patients with prior myocardial infarction, tomography provided improved identification of normal and abnormal vascular regions. These results indicate that single photon emission computed tomography provides improved diagnostic performance compared with planar imaging in many clinical subgroups.
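
    An ROC comparison from five-point rating data of the kind described can be computed directly with scikit-learn. The ratings below are synthetic and purely illustrative; with per-patient rating data, each threshold on the scale yields one operating point on the curve.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, auc

    # Synthetic 5-point confidence ratings for a normal/diseased cohort.
    truth = np.array([0] * 23 + [1] * 40)   # 0 = normal, 1 = diseased
    rng = np.random.default_rng(4)
    spect = np.clip(np.round(truth * 2.0 + rng.normal(2, 1.0, truth.size)), 1, 5)
    planar = np.clip(np.round(truth * 1.2 + rng.normal(2, 1.2, truth.size)), 1, 5)

    for name, rating in [("SPECT", spect), ("planar", planar)]:
        fpr, tpr, _ = roc_curve(truth, rating)   # thresholds sweep the scale
        print(f"{name}: AUC = {auc(fpr, tpr):.2f}")
    ```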

  16. Image-based surveillance and security systems using personal computers for device aiming and digital image comparison

    International Nuclear Information System (INIS)

    Quiett, S.; Axtell, L.H.

    1987-01-01

    A detection-type security system using enhanced-capability cameras or other imaging devices can aid in maintaining security from long distance and/or for large areas. To do so requires that the imaging device(s) be repeatedly and accurately positioned so that no areas are overlooked. Digital control using personal computers is the simplest method of achieving positional accuracy. The monitoring of large areas and/or a large number of areas also requires that a substantial quantity of visual information be catalogued and evaluated for potential security problems. While security personnel alone are typically used for such monitoring, as the quantity of visual information increases, the likelihood that potential security threats will be missed also increases. The ability of an image-based security system to detect potential security problems can be further increased with the use of selected image processing techniques. Utilizing personal computers for both imaging device position control and image processing, surveillance of large areas can be performed by a limited number of individuals with a high level of system confidence.
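
    The digital image comparison at the heart of such a system can be as simple as thresholded frame differencing against a stored reference image for each preset camera position. The sketch below is a minimal version of that idea; the threshold and minimum changed-pixel count are placeholder parameters.

    ```python
    import numpy as np

    def changed_region(reference, current, threshold=25, min_pixels=50):
        """Flag a scene change by differencing the current frame against
        a stored reference. Returns an alarm flag and the change mask."""
        diff = np.abs(current.astype(int) - reference.astype(int))
        changed = diff > threshold
        return changed.sum() >= min_pixels, changed

    # Toy 8-bit frames: an "intruder" brightens a 10x10 patch.
    ref = np.full((120, 160), 100, dtype=np.uint8)
    cur = ref.copy()
    cur[50:60, 70:80] = 180
    alarm, mask = changed_region(ref, cur)
    print(alarm, mask.sum())     # True 100
    ```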

  17. Pictorial review: Electron beam computed tomography and multislice spiral computed tomography for cardiac imaging

    International Nuclear Information System (INIS)

    Lembcke, Alexander; Hein, Patrick A.; Dohmen, Pascal M.; Klessen, Christian; Wiese, Till H.; Hoffmann, Udo; Hamm, Bernd; Enzweiler, Christian N.H.

    2006-01-01

    Electron beam computed tomography (EBCT) revolutionized cardiac imaging by combining a constant high temporal resolution with prospective ECG triggering. For years, EBCT was the primary technique for some non-invasive diagnostic cardiac procedures such as calcium scoring and non-invasive angiography of the coronary arteries. Multislice spiral computed tomography (MSCT), on the other hand, significantly advanced cardiac imaging through high volume coverage, improved spatial resolution, and retrospective ECG gating. This pictorial review illustrates the basic differences between the two modalities with special emphasis on their image quality. Several experimental and clinical examples demonstrate the strengths and limitations of both imaging modalities in an intraindividual comparison for a broad range of diagnostic applications such as coronary artery calcium scoring, coronary angiography including stent visualization, and functional assessment of the cardiac ventricles and valves. In general, our examples indicate that EBCT suffers from a number of shortcomings, such as limited spatial resolution and a low contrast-to-noise ratio. Thus, EBCT should now only be used in selected cases where a constant high temporal resolution is a crucial issue, such as dynamic (cine) imaging. Due to isotropic submillimeter spatial resolution and retrospective data selection, MSCT seems to be the non-invasive method of choice for cardiac imaging in general, and for assessment of the coronary arteries in particular. However, technical developments are still needed to further improve the temporal resolution of MSCT and to reduce the substantial radiation exposure.

  18. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    Science.gov (United States)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  19. A qualitative and quantitative analysis of radiation dose and image quality of computed tomography images using adaptive statistical iterative reconstruction.

    Science.gov (United States)

    Hussain, Fahad Ahmed; Mail, Noor; Shamy, Abdulrahman M; Suliman, Alghamdi; Saoudi, Abdelhamid

    2016-05-08

    Image quality is a key issue in radiology, particularly in a clinical setting where it is important to achieve accurate diagnoses while minimizing radiation dose. Some computed tomography (CT) manufacturers have introduced algorithms that claim significant dose reduction. In this study, we assessed CT image quality produced by two reconstruction algorithms provided with GE Healthcare's Discovery 690 Elite positron emission tomography (PET) CT scanner. Image quality was measured for images obtained at various doses with both the conventional filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR) algorithms. A standard CT dose index (CTDI) phantom and a pencil ionization chamber were used to measure the CT dose at 120 kVp and an exposure of 260 mAs. Image quality was assessed using two phantoms. CT images of both phantoms were acquired at a tube voltage of 120 kV with exposures ranging from 25 mAs to 400 mAs. Images were reconstructed using FBP and ASIR blending levels ranging from 10% to 100%, then analyzed for noise, low-contrast detectability, contrast-to-noise ratio (CNR), and modulation transfer function (MTF). Noise was 4.6 HU in water phantom images acquired at 260 mAs/FBP 120 kV and at 130 mAs/50% ASIR 120 kV. Large objects (frequency < 7 lp/cm) remained well visualized at 130 mAs/50% ASIR compared to 260 mAs/FBP. The application of ASIR for small objects (frequency > 7 lp/cm) showed poor visibility compared to FBP at 260 mAs, and even worse visibility for images acquired at less than 130 mAs. ASIR blending of more than 50% at low dose tends to reduce the contrast of small objects (frequency > 7 lp/cm). We concluded that dose reduction and ASIR should be applied with close attention if the objects to be detected or diagnosed are small (frequency > 7 lp/cm). Further investigations are required to correlate small-object (frequency > 7 lp/cm) findings with patient anatomy and clinical diagnosis.

  20. Ectopic pregnancy: pictorial essay focusing on computed tomography and magnetic resonance imaging findings

    International Nuclear Information System (INIS)

    Febronio, Eduardo Miguel; Rosas, George de Queiroz; D'Ippolito, Giuseppe

    2012-01-01

    The objective of the present study is to describe key computed tomography and magnetic resonance imaging findings in patients with acute abdominal pain caused by ectopic pregnancy. For this purpose, two radiologists consensually selected and analyzed computed tomography and magnetic resonance imaging studies performed in female patients with acute abdominal pain caused by proven ectopic pregnancy in the period between January 2010 and December 2011. The imaging diagnosis of ectopic pregnancy is usually obtained by ultrasonography. However, with the increasing use of computed tomography and magnetic resonance imaging in the assessment of patients with acute abdomen of gynecological origin, it is necessary for the radiologist to become familiar with the main findings observed with these diagnostic methods. (author)

  1. Ectopic pregnancy: pictorial essay focusing on computed tomography and magnetic resonance imaging findings

    Energy Technology Data Exchange (ETDEWEB)

    Febronio, Eduardo Miguel; Rosas, George de Queiroz; D'Ippolito, Giuseppe [Escola Paulista de Medicina - Universidade Federal de Sao Paulo (EPM-Unifesp), Sao Paulo, SP (Brazil). Dept. of Imaging Diagnosis; Cardia, Patricia Prando, E-mail: giuseppe_dr@uol.com.br [Centro Radiologico Campinas, Campinas, SP (Brazil)

    2012-09-15

    The objective of the present study is to describe key computed tomography and magnetic resonance imaging findings in patients with acute abdominal pain caused by ectopic pregnancy. For this purpose, two radiologists consensually selected and analyzed computed tomography and magnetic resonance imaging studies performed in female patients with acute abdominal pain caused by proven ectopic pregnancy in the period between January 2010 and December 2011. The imaging diagnosis of ectopic pregnancy is usually obtained by ultrasonography. However, with the increasing use of computed tomography and magnetic resonance imaging in the assessment of patients with acute abdomen of gynecological origin, it is necessary for the radiologist to become familiar with the main findings observed with these diagnostic methods. (author)

  2. Computational Music Analysis

    DEFF Research Database (Denmark)

    This book provides an in-depth introduction and overview of current research in computational music analysis. Its seventeen chapters, written by leading researchers, collectively represent the diversity as well as the technical and philosophical sophistication of the work being done today ... on well-established theories in music theory and analysis, such as Forte's pitch-class set theory, Schenkerian analysis, the methods of semiotic analysis developed by Ruwet and Nattiez, and Lerdahl and Jackendoff's Generative Theory of Tonal Music. The book is divided into six parts, covering ... music analysis, the book provides an invaluable resource for researchers, teachers and students in music theory and analysis, computer science, music information retrieval and related disciplines. It also provides a state-of-the-art reference for practitioners in the music technology industry. ...

  3. Noninvasive detection of hepatic lipidosis in dairy cows with calibrated ultrasonographic image analysis.

    NARCIS (Netherlands)

    Starke, A.; Haudum, A.; Weijers, G.; Herzog, K.; Wohlsein, P.; Beyerbach, M.; Korte, C.L. de; Thijssen, J.M.; Rehage, J.

    2010-01-01

    The aim was to test the accuracy of calibrated digital analysis of ultrasonographic hepatic images for diagnosing fatty liver in dairy cows. Digital analysis was performed by means of a novel method, computer-aided ultrasound diagnosis (CAUS), previously published by the authors. This method implies

  4. Accurate measurement of surface areas of anatomical structures by computer-assisted triangulation of computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Allardice, J.T.; Jacomb-Hood, J.; Abulafi, A.M.; Williams, N.S. (Royal London Hospital (United Kingdom)); Cookson, J.; Dykes, E.; Holman, J. (London Hospital Medical College (United Kingdom))

    1993-05-01

    There is a need for accurate surface area measurement of internal anatomical structures in order to define light dosimetry in adjunctive intraoperative photodynamic therapy (AIOPDT). The authors investigated whether computer-assisted triangulation of serial sections generated by computed tomography (CT) scanning can give an accurate assessment of the surface area of the walls of the true pelvis after anterior resection and before colorectal anastomosis. They show that the technique of paper density tessellation is an acceptable method of measuring the surface areas of phantom objects, with a maximum error of 0.5%, and it is used as the gold standard. Computer-assisted triangulation of CT images of standard geometric objects and accurately constructed pelvic phantoms gives a surface area assessment with a maximum error of 2.5% compared with the gold standard. The CT images of 20 patients' pelves have been analysed by computer-assisted triangulation, which shows that the surface area of the walls varies from 143 cm² to 392 cm². (Author).
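
    The geometric core of the triangulation method, summing triangle areas over the reconstructed mesh, is sketched below; building the mesh from CT contours is the hard part and is omitted. The cube example checks the arithmetic against a known answer.

    ```python
    import numpy as np

    def mesh_area(vertices, triangles):
        """Surface area of a triangulated surface: half the norm of the
        cross product of two edge vectors, summed over all triangles."""
        v = np.asarray(vertices, dtype=float)
        t = np.asarray(triangles, dtype=int)
        a = v[t[:, 1]] - v[t[:, 0]]
        b = v[t[:, 2]] - v[t[:, 0]]
        return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

    # Unit cube from 12 triangles: area should be exactly 6.0.
    verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    tris = [(0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5),
            (0, 4, 5), (0, 5, 1), (2, 3, 7), (2, 7, 6),
            (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)]
    print(mesh_area(verts, tris))   # 6.0
    ```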

  5. Contribute to quantitative identification of casting defects based on computer analysis of X-ray images

    Directory of Open Access Journals (Sweden)

    Z. Ignaszak

    2007-12-01

    The forecasting of the structure and properties of a casting is based on the results of computer simulation of the physical processes occurring during casting. For effective use of a simulation system it is necessary to validate the mathematical-physical models describing the formation of the casting and the creation of local discontinuities, which determine the casting properties. In the paper, a proposition for quantitative validation of a virtual prototyping (VP) system using solidification casting defects detected by information sources of the second group (non-destructive testing methods) is introduced; it is named VP/RT validation (virtual prototyping/radiographic testing validation). At present, identification of casting defects visible on X-ray images is based on comparison of the X-ray image of a casting with ASTM reference radiographs. The results of this comparison are often inconclusive because they rely on the operator's subjective assessment. In the paper, a system for quantitative identification of iron casting defects on X-ray images and classification of these defects into ASTM classes is presented. Methods of pattern recognition and machine learning were applied.

  6. Computed tomography with selectable image resolution

    International Nuclear Information System (INIS)

    Dibianca, F.A.; Dallapiazza, D.G.

    1981-01-01

    A computed tomography system x-ray detector has a central group of half-width detector elements and groups of full-width elements on each side of the central group. To obtain x-ray attenuation data for whole body layers, the half-width elements are switched effectively into paralleled pairs so all elements act like full-width elements and an image of normal resolution is obtained. For narrower head layers, the elements in the central group are used as half-width elements so resolution which is twice as great as normal is obtained. The central group is also used in the half-width mode and the outside groups are used in the full-width mode to obtain a high resolution image of a body zone within a full body layer. In one embodiment data signals from the detector are switched by electronic multiplexing and in another embodiment a processor chooses the signals for the various kinds of images that are to be reconstructed. (author)

  7. Digital transplantation pathology: combining whole slide imaging, multiplex staining and automated image analysis.

    Science.gov (United States)

    Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J

    2012-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. ©Copyright 2011 The American Society of Transplantation and the American Society of Transplant Surgeons.

  8. Image quality analysis to reduce dental artifacts in head and neck imaging with dual-source computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Ketelsen, D.; Werner, M.K.; Thomas, C.; Tsiflikas, I.; Reimann, A.; Claussen, C.D.; Heuschmid, M. [Tuebingen Univ. (Germany). Abt. fuer Diagnostische und Interventionelle Radiologie; Koitschev, A. [Tuebingen Univ. (Germany). Abt. fuer Hals-Nasen-Ohrenheilkunde

    2009-01-15

    Purpose: Important oropharyngeal structures can be superimposed by metallic artifacts due to dental implants. The aim of this study was to compare the image quality of multiplanar reconstructions and an angulated spiral in dual-source computed tomography (DSCT) of the neck. Materials and Methods: Sixty-two patients were included for neck imaging with DSCT. MPRs from an axial dataset and an additional short spiral parallel to the mouth floor were acquired. Leading anatomical structures were then evaluated with respect to the extent to which they were affected by dental artifacts, using a visual scale ranging from 1 (least artifacts) to 4 (most artifacts). Results: In the MPRs, 87.1% of anatomical structures had significant artifacts (3.12 ± 0.86), while in the angulated slices the leading anatomical structures of the oropharynx showed negligible artifacts (1.28 ± 0.46). The diagnostic gain of the primarily angulated slices with respect to artifact severity was significant (p < 0.01). Conclusion: MPRs are not capable of reducing dental artifacts sufficiently. In patients with dental artifacts overlying the anatomical structures of the oropharynx, an additional short angulated spiral parallel to the floor of the mouth is recommended and should be applied in daily routine. Because of the static gantry design of DSCT, the use of a flexible head holder is essential. (orig.)

  9. The impact of reorienting cone-beam computed tomographic images in varied head positions on the coordinates of anatomical landmarks

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Hun; Jeong, Ho Gul; Hwang, Jae Joon; Lee, Jung Hee; Han, Sang Sun [Dept. of Oral and Maxillofacial Radiology, Yonsei University, College of Dentistry, Seoul (Korea, Republic of)

    2016-06-15

    The aim of this study was to compare the coordinates of anatomical landmarks on cone-beam computed tomographic (CBCT) images in varied head positions before and after reorientation using image analysis software. CBCT images were taken in a normal position and four varied head positions using a dry skull marked with 3 points where gutta percha was fixed. In each of the five radiographic images, reference points were set, 20 anatomical landmarks were identified, and each set of coordinates was calculated. Coordinates in the images from the normally positioned head were compared with those in the images obtained from varied head positions using statistical methods. Post-reorientation coordinates calculated using a three-dimensional image analysis program were also compared to the reference coordinates. In the original images, statistically significant differences were found between coordinates in the normal-position and varied-position images. However, post-reorientation, no statistically significant differences were found between coordinates in the normal-position and varied-position images. The changes in head position impacted the coordinates of the anatomical landmarks in three-dimensional images. However, reorientation using image analysis software allowed accurate superimposition onto the reference positions.
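
    Reorientation amounts to applying the inverse of the head's rigid rotation to each landmark coordinate. The sketch below illustrates this with a known rotation about one axis; real software must first estimate the rotation from fiducials (such as the gutta-percha markers used here), a step not shown, and the angle and coordinates are arbitrary.

    ```python
    import numpy as np

    def rotation_z(deg):
        """Rotation matrix about the z axis (a head turn); angle assumed."""
        r = np.radians(deg)
        c, s = np.cos(r), np.sin(r)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    # Landmark coordinates in the normal head position (illustrative).
    landmarks_normal = np.array([[10.0, 5.0, 2.0],
                                 [-8.0, 4.0, 1.5],
                                 [0.0, 12.0, -3.0]])

    R = rotation_z(15)                        # simulated 15-degree head turn
    landmarks_tilted = landmarks_normal @ R.T # coordinates as scanned

    # Reorientation: apply the inverse rotation (R is orthogonal, so the
    # inverse is its transpose; for row vectors that is a product with R).
    reoriented = landmarks_tilted @ R
    print(np.allclose(reoriented, landmarks_normal))   # True
    ```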

  10. Toward Confirming a Framework for Securing the Virtual Machine Image in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Raid Khalid Hussein

    2017-04-01

    The concept of cloud computing has arisen thanks to academic work in the fields of utility computing, distributed computing, virtualisation, and web services. By using cloud computing, which can be accessed from anywhere, newly launched businesses can minimise their start-up costs. Among the most important notions in the construction of cloud computing is virtualisation. While this concept brings its own security risks, these risks are not necessarily related to the cloud. The main disadvantage of using cloud computing is linked to safety and security, because anyone who chooses to employ cloud computing will use someone else's hard disk and CPU to sort and store data. In cloud environments, a great deal of importance is placed on guaranteeing that the virtual machine image is safe and secure. Indeed, a previous study has put forth a framework with which to protect the virtual machine image in cloud computing. As such, the present study is primarily concerned with confirming this theoretical framework so as to ultimately secure the virtual machine image in cloud computing. This is achieved by carrying out interviews with experts in the field of cloud security.

  11. Enhancements to the Image Analysis Tool for Core Punch Experiments and Simulations (vs. 2014)

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, John Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Unal, Cetin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-06

    A previous paper (Hogden & Unal, 2012, Image Analysis Tool for Core Punch Experiments and Simulations) described an image processing computer program developed at Los Alamos National Laboratory. This program has proven useful, so development has continued. In this paper we describe enhancements to the program as of 2014.

  12. The construction and evaluation of a prototype system for an image intensifier-based volume computed tomography imager

    International Nuclear Information System (INIS)

    Ning, R.

    1989-01-01

    A volumetric reconstruction of a three-dimensional (3-D) object has long been at the forefront of exploration in medical applications. To achieve this goal, a prototype system for an image intensifier (II)-based volume computed tomography (CT) imager has been constructed. This research has been concerned with constructing and evaluating such a prototype system by phantom studies. The prototype system consists of a fixed x-ray tube, a specially designed aluminum filter that reduces the dynamic range of the projection data, an antiscatter grid, a conventional image intensifier optically coupled to a charge-coupled device (CCD) camera, a computer-controlled turntable on which phantoms are placed, a digital computer including an A/D converter, and a graphics station that displays the reconstructed images. In this study, three different phantoms were used: a vascular phantom, a resolution phantom, and a Humanoid® chest phantom. The direct 3-D reconstruction from the projections was performed using a cone beam algorithm and vascular reconstruction algorithms. The image performance of the system for direct 3-D reconstruction was evaluated. The spatial resolution limits of the system were estimated by observing the reconstructed images of the resolution phantom. From the images reconstructed from the projections, it can be determined that the image performance of the prototype system for direct 3-D reconstruction is reasonably good and that the vascular reconstruction algorithms work very well. The results also indicate that the 3-D reconstructions obtained with the II-based volume CT imager have nearly equally good resolution in the x, y, and z directions and are superior to conventional CT in resolution along the z direction.

  13. The analysis of image feature robustness using CometCloud

    Directory of Open Access Journals (Sweden)

    Xin Qi

    2012-01-01

    Full Text Available The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, assessed while simulating different imaging challenges including defocus, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features that were evaluated include the co-occurrence matrix, center-symmetric auto-correlation, the texture feature coding method, the local binary pattern, and textons. Due to the independence of each transformation and texture descriptor, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperformed the other four texture descriptors but also required the longest computational time, roughly 10 times slower than the local binary pattern and textons. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval.
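
    As a rough illustration of how the robustness of one of the descriptors named above can be probed, the sketch below computes local binary pattern (LBP) histograms before and after a rotation and compares them with a chi-square distance; the LBP parameters, test image, and distance measure are illustrative assumptions, not the paper's protocol.

    ```python
    # Sketch: probing texture-feature robustness under a rotation challenge.
    # Small chi-square distances indicate a descriptor robust to the change.
    import numpy as np
    from skimage import data, transform
    from skimage.feature import local_binary_pattern

    def lbp_histogram(image, radius=1, n_points=8):
        """Uniform LBP histogram, normalised to sum to 1."""
        codes = local_binary_pattern(image, n_points, radius, method="uniform")
        n_bins = n_points + 2  # uniform patterns plus one non-uniform bin
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        return hist

    original = data.camera()
    rotated = transform.rotate(original, angle=15, preserve_range=True)

    h0, h1 = lbp_histogram(original), lbp_histogram(rotated)
    chi2 = 0.5 * np.sum((h0 - h1) ** 2 / (h0 + h1 + 1e-10))
    print(f"chi-square distance under 15-degree rotation: {chi2:.4f}")
    ```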

  14. Parallel and Efficient Sensitivity Analysis of Microscopy Image Segmentation Workflows in Hybrid Systems.

    Science.gov (United States)

    Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C M A; Saltz, Joel

    2017-09-01

    We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very computationally demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node with smart task assignment strategies resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies.
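
    The computation-reuse idea described above can be miniaturised as follows: when sweep runs share a parameter for an early pipeline stage, that stage's output is cached and reused rather than recomputed. The two-stage pipeline and its parameters below are hypothetical placeholders, not the authors' workflow.

    ```python
    # Sketch: memoising a shared pipeline stage during a sensitivity-analysis
    # parameter sweep, so runs differing only in later-stage parameters reuse
    # earlier results. Both stages are stand-ins for real image operators.
    from functools import lru_cache
    import itertools

    @lru_cache(maxsize=None)
    def normalize(stain_param: float) -> tuple:
        print(f"  computing normalization for stain_param={stain_param}")
        return (stain_param,)  # hashable placeholder for an image

    def segment(normalized: tuple, threshold: float) -> int:
        return hash((normalized, threshold)) % 1000

    for s, t in itertools.product([0.1, 0.2], [0.4, 0.5, 0.6]):
        print(f"stain={s}, threshold={t} -> output={segment(normalize(s), t)}")
    # normalize() executes only twice (once per stain parameter), not six times.
    ```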

  15. A novel method to measure conspicuous facial pores using computer analysis of digital-camera-captured images: the effect of glycolic acid chemical peeling.

    Science.gov (United States)

    Kakudo, Natsuko; Kushida, Satoshi; Tanaka, Nobuko; Minakata, Tatsuya; Suzuki, Kenji; Kusumoto, Kenji

    2011-11-01

    Chemical peeling is becoming increasingly popular for skin rejuvenation in dermatological esthetic surgery. Conspicuous facial pores are one of the most frequently encountered skin problems in women of all ages. This study was performed to analyze the effectiveness of reducing conspicuous facial pores using glycolic acid chemical peeling (GACP) based on a novel computer analysis of digital-camera-captured images. GACP was performed a total of five times at 2-week intervals in 22 healthy women. Computerized image analysis of conspicuous, open, and darkened facial pores was performed using the Robo Skin Analyzer CS 50. The number of conspicuous facial pores decreased significantly in 19 (86%) of the 22 subjects, with a mean improvement rate of 34.6%. The number of open pores decreased significantly in 16 (72%) of the subjects, with a mean improvement rate of 11.0%. The number of darkened pores decreased significantly in 18 (81%) of the subjects, with a mean improvement rate of 34.3%. GACP significantly reduces the number of conspicuous facial pores. The Robo Skin Analyzer CS 50 is useful for the quantification and analysis of 'pore enlargement', a subtle finding in dermatological esthetic surgery. © 2011 John Wiley & Sons A/S.

  16. Parameters related to the image quality in computed tomography -CT

    International Nuclear Information System (INIS)

    Alonso, T.C.; Silva, T.A.; Mourão, A.P.; Silva, T.A.

    2015-01-01

    Quality control programs in computed tomography (CT) should be continuously reviewed to ensure the best image quality with the lowest possible dose to the patient in the diagnostic process. Quality control in CT aims to design and implement a set of procedures that allows verification of the equipment's operating conditions within the requirements specified for its use. In Brazil, the Ministry of Health technical rule (Resolution RE 1016) on medical radiology, 'Equipment Performance and Safety', establishes a reference for the analysis of tests on CT. A large number of factors, such as image noise, slice thickness (resolution along the z axis), low-contrast resolution, high-contrast resolution and radiation dose, can be affected by the selection of technical parameters in examinations. The purpose of this study was to investigate how changes in image acquisition protocols modify image quality, and to determine the advantages and disadvantages among the different aspects of image quality, especially the reduction of patient radiation dose. As a preliminary procedure to check the operating conditions of the CT system, measurements were performed on a 64-MDCT scanner (GE Healthcare BrightSpeed) in the Molecular Imaging Center (Cimol) of the Federal University of Minas Gerais (UFMG). For the image quality tests we used a Catphan-600 phantom; this device has five modules, in each of which a series of tests can be performed. Different medical imaging practices have different requirements for acceptable image quality. The results of the quality control tests showed that the analyzed equipment complies with the requirements established by current regulations. [pt
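
    Among the factors listed above, image noise is the simplest to quantify. A minimal sketch of such a quality-control measurement follows: noise is taken as the standard deviation of HU values inside a circular ROI placed in a uniform phantom region; the synthetic slice and ROI placement are assumptions, not the study's protocol.

    ```python
    # Sketch: CT noise measurement as the HU standard deviation in a circular
    # ROI of a uniformity phantom image (here synthesised with ~5 HU noise).
    import numpy as np

    def roi_noise(slice_hu: np.ndarray, center: tuple, radius: int) -> float:
        yy, xx = np.ogrid[:slice_hu.shape[0], :slice_hu.shape[1]]
        mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
        return float(slice_hu[mask].std())

    rng = np.random.default_rng(0)
    phantom = rng.normal(loc=0.0, scale=5.0, size=(512, 512))  # water-like HUs
    print(f"ROI noise: {roi_noise(phantom, center=(256, 256), radius=40):.2f} HU")
    ```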

  17. Mathematics of shape description a morphological approach to image processing and computer graphics

    CERN Document Server

    Ghosh, Pijush K

    2009-01-01

    Image processing problems are often not well defined because real images are contaminated with noise and other uncertain factors. In Mathematics of Shape Description, the authors take a mathematical approach to address these problems using the morphological and set-theoretic approach to image processing and computer graphics, presenting a simple shape model using two basic shape operators called Minkowski addition and decomposition. This book is ideal for professional researchers and engineers in information processing, image measurement, shape description, shape representation and computer graphics. Post-graduate and advanced undergraduate students in pure and applied mathematics, computer science, robotics and engineering will also benefit from this book. Key features: explains the fundamental and advanced relationships between algebraic systems and shape description through the set-theoretic approach; promotes interaction of image processing, geochronology and mathematics in the field of algebraic geometry; P...

  18. Computer-assisted imaging of the fetus with magnetic resonance imaging.

    Science.gov (United States)

    Colletti, P M

    1996-01-01

    The purpose of this paper is to review the use of magnetic resonance imaging (MRI) of the fetus and to propose future techniques and applications. Institutional review board-approved MR images of the fetus were acquired in 66 patients with sonographically suspected fetal abnormalities. Axial, coronal, and sagittal short-TR, short-TE images were obtained. In addition, 12 studies were performed with rapid scans requiring 700-1200 ms using either GRASS or spoiled GRASS techniques. Sequential studies demonstrating fetal motion were also performed. Three studies with 3D IR-prepped GRASS were performed; these allowed for orthogonal and non-orthogonal reformatted views and 3D display. Normal fetal structures were shown with MRI, including the brain, heart, liver, stomach, intestines, and bladder. Gross fetal anomalies could generally be demonstrated with MRI. MRI may give additional information to that of sonography in fetal anomalies, particularly those involving the central nervous system, and in the detection of fat, blood, and meconium. MRI of the fetus can demonstrate normal and abnormal structures. Newer techniques with faster imaging will allow for greater possibility of computer-assisted manipulation of data.

  19. Comparison of personal computer with CT workstation in the evaluation of 3-dimensional CT image of the skull

    International Nuclear Information System (INIS)

    Kang, Bok Hee; Kim, Kee Deog; Park, Chang Seo

    2001-01-01

    To evaluate the usefulness of 3-dimensional images reconstructed on a personal computer in comparison with those on a CT workstation by quantitative comparison and analysis. The spiral CT data obtained from 27 persons were transferred from the CT workstation to a personal computer, where they were reconstructed as 3-dimensional images using V-works 2.0™. One observer obtained 14 measurements on the reconstructed 3-dimensional images on both the CT workstation and the personal computer. A paired t-test was used to evaluate the intraobserver difference and the mean value of each measurement on the CT workstation and the personal computer. Pearson correlation analysis and % incongruence were also computed. I-Gn, N-Gn, N-A, N-Ns, B-A and G-Op did not show any statistically significant difference (p>0.05), whereas B-O, B-N, Eu-Eu, Zy-Zy, Biw, D-D, and Orbrd R and L showed statistically significant differences (p<0.05); however, the mean differences of all measurements were below 2 mm, except for D-D. The correlation coefficient γ was greater than 0.95 at I-Gn, N-Gn, N-A, N-Ns, B-A, B-N, G-Op, Eu-Eu, Zy-Zy, and Biw, and it was 0.75 at B-O, 0.78 at D-D, and 0.82 at both Orbrb R and L. The % incongruence was below 4% at I-Gn, N-Gn, N-A, N-Ns, B-A, B-N, G-Op, Eu-Eu, Zy-Zy, and Biw, and 7.18%, 10.78%, 4.97%, and 5.89% at B-O, D-D, and Orbrb R and L, respectively. The personal computer can be considered highly useful for reconstruction of 3-dimensional images in terms of economy, accessibility and convenience, except for thin bones and landmarks that are difficult to locate.
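
    The statistical comparison reported above can be sketched as a paired t-test over the same distances measured on both platforms; the numbers below are fabricated for illustration and are not the study's data.

    ```python
    # Sketch: paired comparison of the same measurements taken on a CT
    # workstation and on a personal computer, plus a % incongruence figure.
    import numpy as np
    from scipy import stats

    workstation = np.array([118.2, 101.5, 55.3, 48.9, 96.1, 112.7])  # mm, hypothetical
    personal_pc = np.array([118.0, 101.9, 55.6, 48.5, 96.4, 112.3])  # mm, hypothetical

    t_stat, p_value = stats.ttest_rel(workstation, personal_pc)
    incongruence = 100.0 * np.abs(workstation - personal_pc) / workstation
    print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")
    print(f"% incongruence per landmark: {np.round(incongruence, 2)}")
    ```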

  20. Proton computed tomography images with algebraic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Bruzzi, M. [Physics and Astronomy Department, University of Florence, Florence (Italy); Civinini, C.; Scaringella, M. [INFN - Florence Division, Florence (Italy); Bonanno, D. [INFN - Catania Division, Catania (Italy); Brianzi, M. [INFN - Florence Division, Florence (Italy); Carpinelli, M. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Chemistry and Pharmacy Department, University of Sassari, Sassari (Italy); Cirrone, G.A.P.; Cuttone, G. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Presti, D. Lo [INFN - Catania Division, Catania (Italy); Physics and Astronomy Department, University of Catania, Catania (Italy); Maccioni, G. [INFN – Cagliari Division, Cagliari (Italy); Pallotta, S. [INFN - Florence Division, Florence (Italy); Department of Biomedical, Experimental and Clinical Sciences, University of Florence, Florence (Italy); SOD Fisica Medica, Azienda Ospedaliero-Universitaria Careggi, Firenze (Italy); Randazzo, N. [INFN - Catania Division, Catania (Italy); Romano, F. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Sipala, V. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Chemistry and Pharmacy Department, University of Sassari, Sassari (Italy); Talamonti, C. [INFN - Florence Division, Florence (Italy); Department of Biomedical, Experimental and Clinical Sciences, University of Florence, Florence (Italy); SOD Fisica Medica, Azienda Ospedaliero-Universitaria Careggi, Firenze (Italy); Vanzi, E. [Fisica Sanitaria, Azienda Ospedaliero-Universitaria Senese, Siena (Italy)

    2017-02-11

    A prototype proton computed tomography (pCT) system for hadron therapy has been manufactured and tested in a 175 MeV proton beam with a non-homogeneous phantom designed to simulate high-contrast material. BI-SART reconstruction algorithms have been implemented with GPU parallelism, taking into account the most likely paths of protons in matter. Reconstructed tomography images with r.m.s. density resolutions down to ~1% and spatial resolutions <1 mm, achieved within processing times of ~15 minutes for a 512×512 pixel image, prove that this technique will be beneficial if used instead of X-CT in hadron therapy.
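
    The core update of the algebraic reconstruction family used here can be sketched on a toy linear system, where rows of the system matrix play the role of proton path integrals; the most-likely-path modelling and GPU parallelism of the actual system are omitted, and the update is a generic SART-style iteration rather than the authors' BI-SART variant.

    ```python
    # Sketch: a generic SART-style iteration x <- x + relax * A^T W (b - A x),
    # with per-ray and per-voxel normalisations, on a toy system A x = b.
    import numpy as np

    def sart(A, b, n_iter=100, relax=0.5):
        x = np.zeros(A.shape[1])
        row_sums = np.maximum(A.sum(axis=1), 1e-12)  # per-ray normalisation
        col_sums = np.maximum(A.sum(axis=0), 1e-12)  # per-voxel normalisation
        for _ in range(n_iter):
            residual = (b - A @ x) / row_sums
            x = x + relax * (A.T @ residual) / col_sums
        return x

    rng = np.random.default_rng(1)
    A = rng.random((200, 64))          # 200 "proton paths", an 8x8 image
    x_true = rng.random(64)
    b = A @ x_true                     # noiseless path integrals
    x_rec = sart(A, b)
    err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
    print(f"relative reconstruction error: {err:.3f}")
    ```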

  1. Computing Challenges in Coded Mask Imaging

    Science.gov (United States)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to create the telescope (i.e., when there are wide fields of view, energies too high for focusing optics or too low for Compton/tracker techniques, and very good angular resolution is required). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern, and the correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.
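
    The correlation step of coded-mask image recovery mentioned above can be condensed into a few lines: the detector records the sky convolved with the mask pattern, and correlating the shadowgram with the mask recovers the source. The random mask below stands in for a real URA pattern, and the centering conventions are those of scipy's "same" mode.

    ```python
    # Sketch: coded-mask imaging by cross-correlation. A point source casts a
    # shifted mask shadow; correlating with the mean-subtracted mask pattern
    # produces a peak at the source position.
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(2)
    mask = rng.integers(0, 2, size=(63, 63)).astype(float)  # open/closed cells
    sky = np.zeros((63, 63))
    sky[20, 40] = 1.0                                       # one point source

    detector = fftconvolve(sky, mask, mode="same")          # shadowgram
    decoded = fftconvolve(detector, (mask - mask.mean())[::-1, ::-1], mode="same")
    peak = np.unravel_index(np.argmax(decoded), decoded.shape)
    print(f"true source at (20, 40); decoded peak at {tuple(int(i) for i in peak)}")
    ```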

  2. Development of computational small animal models and their applications in preclinical imaging and therapy research.

    Science.gov (United States)

    Xie, Tianwu; Zaidi, Habib

    2016-01-01

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models, with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of the categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.

  3. Development of computational small animal models and their applications in preclinical imaging and therapy research

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Tianwu [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211 (Switzerland); Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211 (Switzerland); Geneva Neuroscience Center, Geneva University, Geneva CH-1205 (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen 9700 RB (Netherlands)

    2016-01-15

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models, with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of the categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.

  4. Development of computational small animal models and their applications in preclinical imaging and therapy research

    International Nuclear Information System (INIS)

    Xie, Tianwu; Zaidi, Habib

    2016-01-01

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models, with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of the categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.

  5. Fluorescence In Situ Hybridization (FISH) Signal Analysis Using Automated Generated Projection Images

    Directory of Open Access Journals (Sweden)

    Xingwei Wang

    2012-01-01

    Full Text Available Fluorescence in situ hybridization (FISH) tests provide promising molecular imaging biomarkers to more accurately and reliably detect and diagnose cancers and genetic disorders. Since current manual FISH signal analysis is inefficient and inconsistent, which limits its clinical utility, developing automated FISH image scanning systems and computer-aided detection (CAD) schemes has been attracting research interest. To acquire high-resolution FISH images in a multi-spectral scanning mode, a huge amount of image data, comprising a stack of multiple three-dimensional (3-D) image slices, is generated from a single specimen. Automated preprocessing of these scanned images to eliminate non-useful and redundant data is important to make automated FISH tests acceptable in clinical applications. In this study, a dual-detector fluorescence image scanning system was applied to scan four specimen slides with FISH-probed chromosome X. A CAD scheme was developed to detect analyzable interphase cells and map the FISH-probed signals recorded in the multiple imaging slices into 2-D projection images. The CAD scheme was then applied to each projection image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm, identify FISH-probed signals using a top-hat transform, and compute the ratios between normal and abnormal cells. To assess CAD performance, the FISH-probed signals were also independently detected visually by an observer. The Kappa coefficients for agreement between CAD and the observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots in the four testing samples. The study demonstrated the feasibility of automated FISH signal analysis by applying a CAD scheme to the automatically generated 2-D projection images.
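
    Two of the steps named above, collapsing the 3-D stack into a 2-D projection and picking out bright FISH-like spots with a top-hat transform, can be sketched as follows; the synthetic stack, footprint size, and intensity cut are illustrative assumptions (and a recent scikit-image with the footprint keyword is assumed).

    ```python
    # Sketch: maximum-intensity projection of a slice stack followed by
    # white top-hat spot detection and a simple intensity threshold.
    import numpy as np
    from skimage.morphology import white_tophat, disk

    rng = np.random.default_rng(3)
    stack = rng.normal(10, 2, size=(11, 128, 128))  # 11 background slices
    stack[5, 40, 60] += 50                          # bright spot on slice 5
    stack[8, 90, 30] += 50                          # bright spot on slice 8

    projection = stack.max(axis=0)                  # 2-D projection image
    spots = white_tophat(projection, footprint=disk(3))
    coords = np.argwhere(spots > 25)                # crude intensity cut
    print(f"detected spot pixels at: {coords.tolist()}")
    ```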

  6. The microcomputer workstation - An alternate hardware architecture for remotely sensed image analysis

    Science.gov (United States)

    Erickson, W. K.; Hofman, L. B.; Donovan, W. E.

    1984-01-01

    Difficulties in the digital image analysis of remotely sensed imagery can arise from the extensive calculations required. In the past, an expensive medium-to-large mainframe computer system was needed to perform these calculations. For image-processing applications, smaller minicomputer-based systems are now used by many organizations; the costs for such systems are still in the range of $100K to $300K. Recently, as a result of new developments, the use of low-cost microcomputers for image processing and display systems has become feasible. These developments are related to the advent of the 16-bit microprocessor and the concept of the microcomputer workstation. Earlier 8-bit microcomputer-based image processing systems are briefly examined, and a computer workstation architecture is discussed. Attention is given to a microcomputer workstation developed by Stanford University, and the design and implementation of a workstation network.

  7. A low-cost vector processor boosting compute-intensive image processing operations

    Science.gov (United States)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation is presented of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
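
    For readers wanting to reproduce the flavour of the restoration step on a modern machine, the Richardson-Lucy part is available off the shelf; the sketch below uses scikit-image (assuming a recent version where the parameter is named num_iter) on a synthetically blurred image rather than HST data.

    ```python
    # Sketch: Richardson-Lucy deconvolution of a synthetically blurred image.
    import numpy as np
    from scipy.signal import fftconvolve
    from skimage import data, img_as_float
    from skimage.restoration import richardson_lucy

    image = img_as_float(data.camera())
    psf = np.ones((5, 5)) / 25.0                   # simple box blur as the PSF
    blurred = fftconvolve(image, psf, mode="same")
    restored = richardson_lucy(blurred, psf, num_iter=30)
    print(f"restored image range: {restored.min():.3f}..{restored.max():.3f}")
    ```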

  8. Quantitative co-localization and pattern analysis of endo-lysosomal cargo in subcellular image cytometry and validation on synthetic image sets

    DEFF Research Database (Denmark)

    Lund, Frederik W.; Wüstner, Daniel

    2017-01-01

    /LYSs. Analysis of endocytic trafficking relies heavily on quantitative fluorescence microscopy, but evaluation of the huge image data sets is challenging and demands computer-assisted statistical tools. Here, we describe how to use SpatTrack (www.sdu.dk/bmb/spattrack), an imaging toolbox, which we developed...... such synthetic vesicle patterns as “ground truth” for validation of two-channel analysis tools in SpatTrack, revealing their high reliability. An improved version of SpatTrack for microscopy-based quantification of cargo transport through the endo-lysosomal system accompanies this protocol....

  9. The first step toward diagnosing female genital schistosomiasis by computer image analysis

    DEFF Research Database (Denmark)

    Holmen, Sigve Dhondup; Kleppa, Elisabeth; Lillebø, Kristine

    2015-01-01

    Schistosoma haematobium causes female genital schistosomiasis (FGS), which is a poverty-related disease in sub-Saharan Africa. Furthermore, it is co-endemic with human immunodeficiency virus (HIV), and biopsies from genital lesions may expose the individual to increased risk of HIV infection...... statistics, we estimate that the computer color analysis yields a sensitivity of 80.5% and a specificity of 66.2% for the diagnosis of FGS....

  10. Data of NODDI diffusion metrics in the brain and computer simulation of hybrid diffusion imaging (HYDI) acquisition scheme

    Directory of Open Access Journals (Sweden)

    Chandana Kodiweera

    2016-06-01

    Full Text Available This article provides NODDI diffusion metrics in the brains of 52 healthy participants and computer simulation data to support the compatibility of the hybrid diffusion imaging (HYDI) acquisition scheme (“Hybrid diffusion imaging” [1]) for fitting the neurite orientation dispersion and density imaging (NODDI) model (“NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain” [2]). HYDI is an extremely versatile diffusion magnetic resonance imaging (dMRI) technique that enables various analysis methods using a single diffusion dataset. One of the diffusion data analysis methods is the NODDI computation, which models the brain tissue with three compartments: fast isotropic diffusion (e.g., cerebrospinal fluid), anisotropic hindered diffusion (e.g., extracellular space), and anisotropic restricted diffusion (e.g., intracellular space). The NODDI model produces microstructural metrics in the developing brain, the aging brain, or the brain with neurologic disorders. The first dataset provided here consists of the means and standard deviations of NODDI metrics in 48 white matter regions of interest (ROIs), averaged across the 52 healthy participants. The second dataset is a computer simulation with initial conditions guided by the first dataset as inputs and gold standards for model fitting. The computer simulation data provide a direct comparison of NODDI indices computed from the HYDI acquisition [1] to NODDI indices computed from the originally proposed acquisition [2]. These data are related to the accompanying research article “Age Effects and Sex Differences in Human Brain White Matter of Young to Middle-Aged Adults: A DTI, NODDI, and q-Space Study” [3].

  11. Performance of computer-aided diagnosis for detection of lacunar infarcts on brain MR images: ROC analysis of radiologists' detection

    International Nuclear Information System (INIS)

    Uchiyama, Y.; Yokoyama, R.; Hara, T.; Fujita, H.; Asano, T.; Kato, H.; Hoshi, H.; Yamakawa, H.; Iwama, T.; Ando, H.; Yamakawa, H.

    2007-01-01

    The detection and management of asymptomatic lacunar infarcts on magnetic resonance (MR) images are important tasks for radiologists to ensure the prevention of severe cerebral infarctions. However, accurate identification of lacunar infarcts is difficult. Therefore, we developed a computer-aided diagnosis (CAD) scheme for the detection of lacunar infarcts. The purpose of this study was to evaluate radiologists' performance in the detection of lacunar infarcts without and with use of the CAD scheme. 30 T1- and 30 T2-weighted images obtained from 30 patients were used for an observer study, consisting of 15 cases with a single lacunar infarct and 15 cases without any lacunar infarct. Six radiologists participated in the observer study. They interpreted the images first without and then with use of the scheme. For all six observers, the average area under the receiver operating characteristic curve increased from 0.920 to 0.965 when they used the computer output. This CAD scheme may have the potential to improve the accuracy of radiologists' performance in the detection of lacunar infarcts on MR images. (orig.)
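
    The figure of merit used above, the area under the receiver operating characteristic curve, can be computed in a few lines; the reader scores below are fabricated solely to show the mechanics and are not the study's data.

    ```python
    # Sketch: comparing unaided vs. CAD-assisted reading with ROC AUC on a
    # 15-positive / 15-negative case mix like the one described above.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(8)
    truth = np.array([1] * 15 + [0] * 15)           # infarct vs. normal cases
    scores_unaided = truth * 1.2 + rng.normal(0, 1.0, 30)
    scores_with_cad = truth * 1.8 + rng.normal(0, 1.0, 30)

    print(f"AUC without CAD: {roc_auc_score(truth, scores_unaided):.3f}")
    print(f"AUC with CAD:    {roc_auc_score(truth, scores_with_cad):.3f}")
    ```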

  12. Impact of Computed Tomography Image Quality on Image-Guided Radiation Therapy Based on Soft Tissue Registration

    International Nuclear Information System (INIS)

    Morrow, Natalya V.; Lawton, Colleen A.; Qi, X. Sharon; Li, X. Allen

    2012-01-01

    Purpose: In image-guided radiation therapy (IGRT), different computed tomography (CT) modalities with varying image quality are being used to correct for interfractional variations in patient set-up and anatomy changes, thereby reducing clinical target volume to the planning target volume (CTV-to-PTV) margins. We explore how CT image quality affects patient repositioning and CTV-to-PTV margins in soft tissue registration-based IGRT for prostate cancer patients. Methods and Materials: Four CT-based IGRT modalities used for prostate RT were considered in this study: MV fan beam CT (MVFBCT) (Tomotherapy), MV cone beam CT (MVCBCT) (MVision; Siemens), kV fan beam CT (kVFBCT) (CTVision, Siemens), and kV cone beam CT (kVCBCT) (Synergy; Elekta). Daily shifts were determined by manual registration to achieve the best soft tissue agreement. Effect of image quality on patient repositioning was determined by statistical analysis of daily shifts for 136 patients (34 per modality). Inter- and intraobserver variability of soft tissue registration was evaluated based on the registration of a representative scan for each CT modality with its corresponding planning scan. Results: Superior image quality with the kVFBCT resulted in reduced uncertainty in soft tissue registration during IGRT compared with other image modalities for IGRT. The largest interobserver variations of soft tissue registration were 1.1 mm, 2.5 mm, 2.6 mm, and 3.2 mm for kVFBCT, kVCBCT, MVFBCT, and MVCBCT, respectively. Conclusions: Image quality adversely affects the reproducibility of soft tissue-based registration for IGRT and necessitates a careful consideration of residual uncertainties in determining different CTV-to-PTV margins for IGRT using different image modalities.

  13. Impact of Computed Tomography Image Quality on Image-Guided Radiation Therapy Based on Soft Tissue Registration

    Energy Technology Data Exchange (ETDEWEB)

    Morrow, Natalya V.; Lawton, Colleen A. [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin (United States); Qi, X. Sharon [Department of Radiation Oncology, University of Colorado Denver, Denver, Colorado (United States); Li, X. Allen, E-mail: ali@mcw.edu [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin (United States)

    2012-04-01

    Purpose: In image-guided radiation therapy (IGRT), different computed tomography (CT) modalities with varying image quality are being used to correct for interfractional variations in patient set-up and anatomy changes, thereby reducing clinical target volume to the planning target volume (CTV-to-PTV) margins. We explore how CT image quality affects patient repositioning and CTV-to-PTV margins in soft tissue registration-based IGRT for prostate cancer patients. Methods and Materials: Four CT-based IGRT modalities used for prostate RT were considered in this study: MV fan beam CT (MVFBCT) (Tomotherapy), MV cone beam CT (MVCBCT) (MVision; Siemens), kV fan beam CT (kVFBCT) (CTVision, Siemens), and kV cone beam CT (kVCBCT) (Synergy; Elekta). Daily shifts were determined by manual registration to achieve the best soft tissue agreement. Effect of image quality on patient repositioning was determined by statistical analysis of daily shifts for 136 patients (34 per modality). Inter- and intraobserver variability of soft tissue registration was evaluated based on the registration of a representative scan for each CT modality with its corresponding planning scan. Results: Superior image quality with the kVFBCT resulted in reduced uncertainty in soft tissue registration during IGRT compared with other image modalities for IGRT. The largest interobserver variations of soft tissue registration were 1.1 mm, 2.5 mm, 2.6 mm, and 3.2 mm for kVFBCT, kVCBCT, MVFBCT, and MVCBCT, respectively. Conclusions: Image quality adversely affects the reproducibility of soft tissue-based registration for IGRT and necessitates a careful consideration of residual uncertainties in determining different CTV-to-PTV margins for IGRT using different image modalities.

  14. Computer image analysis of seed shape and seed color for flax cultivar description

    Czech Academy of Sciences Publication Activity Database

    Wiesnerová, Dana; Wiesner, Ivo

    2008-01-01

    Roč. 61, č. 2 (2008), s. 126-135 ISSN 0168-1699 R&D Projects: GA ČR GA521/03/0019 Institutional research plan: CEZ:AV0Z50510513 Keywords : image analysis * cultivar description * flax Subject RIV: EA - Cell Biology Impact factor: 1.273, year: 2008

  15. Correlative neuroanatomy of computed tomography and magnetic resonance imaging

    International Nuclear Information System (INIS)

    Groot, J.

    1984-01-01

    Since the development of computed tomography (CT) more than a decade ago, yet another form of imaging has become available that provides displays of normal and abnormal human structures. Magnetic resonance imaging is given complete coverage in this book, which describes both CT and MR anatomy and explains basic principles and the current status of imaging the brain and spine. The author uses three-dimensional concepts to provide the reader with a simple means to compare the main structures of the brain, skull and spine. Combining normal gross neuroanatomic illustrations with CT and MR images of normal and abnormal conditions, the book provides diagnostic guidance. Drawings, photographs and radiologic images are used to help

  16. Analysis of 3D Prints by X-ray Computed Microtomography and Terahertz Pulsed Imaging

    DEFF Research Database (Denmark)

    Markl, Daniel; Zeitler, J Axel; Rasch, Cecilie

    2017-01-01

    PURPOSE: A 3D printer was used to realise compartmental dosage forms containing multiple active pharmaceutical ingredient (API) formulations. This work demonstrates the microstructural characterisation of 3D printed solid dosage forms using X-ray computed microtomography (XμCT) and terahertz pulsed imaging (TPI). METHODS: Printing was performed with either polyvinyl alcohol (PVA) or polylactic acid (PLA). The structures were examined by XμCT and TPI. Liquid self-nanoemulsifying drug delivery system (SNEDDS) formulations containing saquinavir and halofantrine were incorporated into the 3D printed... was characterised by XμCT and TPI on the basis of the computer-aided design (CAD) models of the dosage form (compartmentalised PVA structures were 7.5 ± 0.75% larger than designed; n = 3). CONCLUSIONS: The 3D printer can reproduce specific structures very accurately, whereas the 3D prints can deviate from...

  17. Computerized image analysis: estimation of breast density on mammograms

    Science.gov (United States)

    Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.

    2000-06-01

    An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm X 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve the accuracy and reproducibility in comparison with the subjective visual assessment by radiologists.
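
    The final stage of the pipeline described above reduces to a masked threshold count; a minimal sketch follows, in which the image, breast mask, and threshold are synthetic placeholders rather than outputs of the boundary-tracking and histogram-classification stages.

    ```python
    # Sketch: percent mammographic density = dense pixels / breast pixels,
    # given a breast mask and a gray-level threshold.
    import numpy as np

    def percent_density(image: np.ndarray, breast_mask: np.ndarray,
                        threshold: float) -> float:
        dense = (image >= threshold) & breast_mask
        return 100.0 * dense.sum() / breast_mask.sum()

    rng = np.random.default_rng(4)
    mammogram = rng.normal(100, 30, size=(256, 256))
    mask = np.zeros((256, 256), dtype=bool)
    mask[32:224, 32:160] = True                     # crude breast region
    print(f"percent dense tissue: {percent_density(mammogram, mask, 130):.1f}%")
    ```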

  18. Morphometric analysis - Cone beam computed tomography to predict bone quality and quantity.

    Science.gov (United States)

    Hohlweg-Majert, B; Metzger, M C; Kummer, T; Schulze, D

    2011-07-01

    Modified quantitative computed tomography is a method used to predict bone quality and quantify the bone mass of the jaw. The aim of this study was to determine whether bone quantity or quality can be detected by cone beam computed tomography (CBCT) combined with image analysis. MATERIALS AND PROCEDURES: Different measurements recorded on two phantoms (Siemens phantom, Comac phantom) were evaluated on images taken with the Somatom VolumeZoom (Siemens Medical Solutions, Erlangen, Germany) and the NewTom 9000 (NIM s.r.l., Verona, Italy) in order to calculate a calibration curve. The spatial relationships of six sample cylinders and the repositioning of four pig skull halves relative to adjacent defined anatomical structures were assessed by means of three-dimensional visualization software. The calibration curves for computed tomography (CT) and cone beam computed tomography (CBCT) using the Siemens phantom showed a linear correlation in both modalities between Hounsfield units (HU) and bone morphology. A correction factor for CBCT was calculated. Exact information about the micromorphology of the bone cylinders was only available using micro-computed tomography. Cone beam computed tomography is a suitable choice for analysing bone mass, but it does not give any information about bone quality. 2010 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  19. Image Guided Radiation Therapy Using Synthetic Computed Tomography Images in Brain Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Price, Ryan G. [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan (United States); Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan (United States); Kim, Joshua P.; Zheng, Weili [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan (United States); Chetty, Indrin J. [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan (United States); Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan (United States); Glide-Hurst, Carri, E-mail: churst2@hfhs.org [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan (United States); Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan (United States)

    2016-07-15

    Purpose: The development of synthetic computed tomography (CT) (synCT) derived from magnetic resonance (MR) images supports MR-only treatment planning. We evaluated the accuracy of synCT and synCT-generated digitally reconstructed radiographs (DRRs) relative to CT and determined their performance for image guided radiation therapy (IGRT). Methods and Materials: Magnetic resonance simulation (MR-SIM) and CT simulation (CT-SIM) images were acquired of an anthropomorphic skull phantom and 12 patient brain cancer cases. SynCTs were generated using fluid attenuation inversion recovery, ultrashort echo time, and Dixon data sets through a voxel-based weighted summation of 5 tissue classifications. The DRRs were generated from the phantom synCT, and geometric fidelity was assessed relative to CT-generated DRRs through bounding box and landmark analysis. An offline retrospective analysis was conducted to register cone beam CTs (n=34) to synCTs and CTs using automated rigid registration in the treatment planning system. Planar MV and KV images (n=37) were rigidly registered to synCT and CT DRRs using an in-house script. Planar and volumetric registration reproducibility was assessed and margin differences were characterized by the van Herk formalism. Results: Bounding box and landmark analysis of phantom synCT DRRs were within 1 mm of CT DRRs. Absolute planar registration shift differences ranged from 0.0 to 0.7 mm for phantom DRRs on all treatment platforms and from 0.0 to 0.4 mm for volumetric registrations. For patient planar registrations, the mean shift differences were 0.4 ± 0.5 mm (range, −0.6 to 1.6 mm), 0.0 ± 0.5 mm (range, −0.9 to 1.2 mm), and 0.1 ± 0.3 mm (range, −0.7 to 0.6 mm) for the superior-inferior (S-I), left-right (L-R), and anterior-posterior (A-P) axes, respectively. The mean shift differences in volumetric registrations were 0.6 ± 0.4 mm (range, −0.2 to 1.6 mm), 0.2 ± 0.4 mm (range, −0.3 to 1.2 mm), and 0.2 ± 0
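
    The van Herk formalism referenced above gives the widely used CTV-to-PTV margin recipe M = 2.5Σ + 0.7σ, with Σ the standard deviation of systematic errors and σ that of random errors. A one-function sketch, with illustrative inputs rather than the study's values:

    ```python
    # Sketch: van Herk CTV-to-PTV margin, M = 2.5 * Sigma + 0.7 * sigma (mm).
    def van_herk_margin(systematic_sd_mm: float, random_sd_mm: float) -> float:
        return 2.5 * systematic_sd_mm + 0.7 * random_sd_mm

    for sigma_sys, sigma_rand in [(0.4, 0.5), (0.6, 0.4)]:
        m = van_herk_margin(sigma_sys, sigma_rand)
        print(f"Sigma={sigma_sys} mm, sigma={sigma_rand} mm -> margin {m:.2f} mm")
    ```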

  20. Image Guided Radiation Therapy Using Synthetic Computed Tomography Images in Brain Cancer

    International Nuclear Information System (INIS)

    Price, Ryan G.; Kim, Joshua P.; Zheng, Weili; Chetty, Indrin J.; Glide-Hurst, Carri

    2016-01-01

    Purpose: The development of synthetic computed tomography (CT) (synCT) derived from magnetic resonance (MR) images supports MR-only treatment planning. We evaluated the accuracy of synCT and synCT-generated digitally reconstructed radiographs (DRRs) relative to CT and determined their performance for image guided radiation therapy (IGRT). Methods and Materials: Magnetic resonance simulation (MR-SIM) and CT simulation (CT-SIM) images were acquired of an anthropomorphic skull phantom and 12 patient brain cancer cases. SynCTs were generated using fluid attenuation inversion recovery, ultrashort echo time, and Dixon data sets through a voxel-based weighted summation of 5 tissue classifications. The DRRs were generated from the phantom synCT, and geometric fidelity was assessed relative to CT-generated DRRs through bounding box and landmark analysis. An offline retrospective analysis was conducted to register cone beam CTs (n=34) to synCTs and CTs using automated rigid registration in the treatment planning system. Planar MV and KV images (n=37) were rigidly registered to synCT and CT DRRs using an in-house script. Planar and volumetric registration reproducibility was assessed and margin differences were characterized by the van Herk formalism. Results: Bounding box and landmark analysis of phantom synCT DRRs were within 1 mm of CT DRRs. Absolute planar registration shift differences ranged from 0.0 to 0.7 mm for phantom DRRs on all treatment platforms and from 0.0 to 0.4 mm for volumetric registrations. For patient planar registrations, the mean shift differences were 0.4 ± 0.5 mm (range, −0.6 to 1.6 mm), 0.0 ± 0.5 mm (range, −0.9 to 1.2 mm), and 0.1 ± 0.3 mm (range, −0.7 to 0.6 mm) for the superior-inferior (S-I), left-right (L-R), and anterior-posterior (A-P) axes, respectively. The mean shift differences in volumetric registrations were 0.6 ± 0.4 mm (range, −0.2 to 1.6 mm), 0.2 ± 0.4 mm (range, −0.3 to 1.2 mm), and 0.2 ± 0

  1. Off-line data processing and display for computed tomographic images (EMI brain)

    International Nuclear Information System (INIS)

    Takizawa, Masaomi; Maruyama, Kiyoshi; Yano, Kesato; Takenaka, Eiichi.

    1978-01-01

    Processing and multi-format display of CT (EMI) scan data have been tried using an off-line small computer and an analog memory. Four or six processed CT images are displayed on the CRT by a small computer with a 16-kiloword core memory and an analog memory. The multi-format display of CT images can be selected as follows: multi-slice display, continuative multi-window display, separate multi-window display, and multi-window level display. Electronic zooming for real-size viewing can provide a magnified CT image of any one of the displayed images if necessary. Image subtraction, edge enhancement, smoothing, non-linear gray scale display, and a synthesized plane-tomography image reconstructed from the normal CT scan data have been tried by off-line data processing. These trials showed the possibility of an effective application of a database of CT images. (auth.)

  2. Image Analysis for X-ray Imaging of Food

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur

    for quality and safety evaluation of food products. In this effort the fields of statistics, image analysis and statistical learning are combined, to provide analytical tools for determining the aforementioned food traits. The work demonstrated includes a quantitative analysis of heat induced changes......X-ray imaging systems are increasingly used for quality and safety evaluation both within food science and production. They offer non-invasive and nondestructive penetration capabilities to image the inside of food. This thesis presents applications of a novel grating-based X-ray imaging technique...... and defect detection in food. Compared to the complex three dimensional analysis of microstructure, here two dimensional images are considered, making the method applicable for an industrial setting. The advantages obtained by grating-based imaging are compared to conventional X-ray imaging, for both foreign...

  3. CMEIAS color segmentation: an improved computing technology to process color images for quantitative microbial ecology studies at single-cell resolution.

    Science.gov (United States)

    Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B

    2010-02-01

    Quantitative microscopy and digital image analysis are underutilized in microbial ecology largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. Performance of the color segmentation algorithm, evaluated on 26 complex micrographs at single-pixel resolution, had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/. This improved computing technology opens new opportunities for imaging applications where discriminating colors really matter most.
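
    The color-segmentation idea, classifying each pixel by its relation to user-selected foreground examples, can be caricatured with a nearest-mean rule in RGB space; this is a simplified stand-in for CMEIAS's actual algorithms, with a synthetic image and sample patches.

    ```python
    # Sketch: label a pixel foreground if it is closer (Euclidean, RGB) to the
    # mean of user-picked foreground samples than to the mean background sample.
    import numpy as np

    def segment_by_color(image, fg_samples, bg_samples):
        fg_mean = fg_samples.reshape(-1, 3).mean(axis=0)
        bg_mean = bg_samples.reshape(-1, 3).mean(axis=0)
        d_fg = np.linalg.norm(image - fg_mean, axis=-1)
        d_bg = np.linalg.norm(image - bg_mean, axis=-1)
        return d_fg < d_bg

    rng = np.random.default_rng(5)
    img = rng.integers(0, 90, size=(64, 64, 3)).astype(float)  # dark background
    img[20:30, 20:30] = [200.0, 60.0, 60.0]                    # reddish "cells"
    mask = segment_by_color(img, img[22:28, 22:28], img[:10, :10])
    print(f"foreground pixels: {int(mask.sum())} (expected 100)")
    ```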

  4. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation

    International Nuclear Information System (INIS)

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-01-01

    Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response; the cross-dependencies of image voxels are thereby reduced, which in turn results in an improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated a practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients, exploiting the same in-plane and axial symmetries used in the pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and therefore improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization yields further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical
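
    The interleaved TOF-bin partitioning proposed above is simple to state in code: with n subsets, subset k collects every n-th TOF bin starting at k, so neighbouring bins land in different subsets. A sketch of just the partitioning step (the surrounding OSEM update is omitted):

    ```python
    # Sketch: interleaved subsetization of TOF bins for OSEM-style updates.
    def interleaved_tof_subsets(n_tof_bins: int, n_subsets: int) -> list:
        return [list(range(k, n_tof_bins, n_subsets)) for k in range(n_subsets)]

    for k, subset in enumerate(interleaved_tof_subsets(n_tof_bins=9, n_subsets=3)):
        print(f"subset {k}: TOF bins {subset}")
    # subset 0: TOF bins [0, 3, 6]
    # subset 1: TOF bins [1, 4, 7]
    # subset 2: TOF bins [2, 5, 8]
    ```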

  5. Trident: scalable compute archives: workflows, visualization, and analysis

    Science.gov (United States)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Kotulla, Ralf; Henschel, Robert; Harbeck, Daniel

    2016-08-01

    The Astronomy scientific community has embraced Big Data processing challenges, e.g. those associated with time-domain astronomy, and come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era require new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise, even for novice computing users. The Trident project at Indiana University offers a comprehensive web and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems, including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling/visualizing their domain's data products and about executing their pipelines and application workflows. Trident's microservices architecture is made up of lightweight services connected by a REST API and/or a message bus; web interface elements are built using NodeJS, AngularJS, and HighCharts JavaScript libraries, among others, while backend services are written in NodeJS, PHP/Zend, and Python. The software suite currently consists of (1) a simple workflow execution framework to integrate, deploy, and execute pipelines and applications, (2) a progress service to monitor workflows and sub

  6. Quantitative analysis of computed tomography images and early detection of cerebral edema for pediatric traumatic brain injury patients: retrospective study.

    Science.gov (United States)

    Kim, Hakseung; Kim, Gwang-dong; Yoon, Byung C; Kim, Keewon; Kim, Byung-Jo; Choi, Young Hun; Czosnyka, Marek; Oh, Byung-Mo; Kim, Dong-Joo

    2014-10-22

    The purpose of this study was to identify whether the distribution of Hounsfield Unit (HU) values across the intracranial area in computed tomography (CT) images can be used as an effective diagnostic tool for determining the severity of cerebral edema in pediatric traumatic brain injury (TBI) patients. CT images, medical records and radiology reports on 70 pediatric patients were collected. Based on radiology reports and the Marshall classification, the patients were grouped as mild edema patients (n=37) or severe edema patients (n=33). Automated quantitative analysis using unenhanced CT images was applied to eliminate artifacts and identify the difference in HU value distribution across the intracranial area between these groups. The proportion of pixels with HU=17 to 24 was highly correlated with the existence of severe cerebral edema (P<0.01). This proportion was also able to differentiate patients who developed delayed cerebral edema from mild TBI patients. A significant difference between deceased patients and surviving patients in terms of the HU distribution came from the proportion of pixels with HU=19 to HU=23 (P<0.01). The proportion of pixels with an HU value of 17 to 24 in the entire cerebral area of a non-enhanced CT image can be an effective basis for evaluating the severity of cerebral edema. Based on this result, we propose a novel approach for the early detection of severe cerebral edema.
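
    The proposed marker, the proportion of intracranial pixels whose HU value falls in the 17-24 band, is straightforward to compute once a brain mask is available; the sketch below uses a synthetic slice, whereas real use would read DICOM data and apply the rescale slope and intercept first.

    ```python
    # Sketch: fraction of intracranial pixels with HU in a given band.
    import numpy as np

    def hu_band_fraction(slice_hu, mask, lo=17, hi=24):
        vals = slice_hu[mask]
        return float(((vals >= lo) & (vals <= hi)).mean())

    rng = np.random.default_rng(6)
    ct = rng.normal(30, 8, size=(256, 256))         # brain-tissue-like HUs
    brain = np.zeros((256, 256), dtype=bool)
    brain[40:216, 40:216] = True                    # crude intracranial mask
    print(f"fraction of pixels with HU 17-24: {hu_band_fraction(ct, brain):.3f}")
    ```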

  7. Advanced 3-D analysis, client-server systems, and cloud computing-Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement.

    Science.gov (United States)

    Schoenhagen, Paul; Zimmermann, Mathis; Falkner, Juergen

    2013-06-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management requiring a complex IT infrastructure, spanning multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR.

  8. Computer processing of image captured by the passive THz imaging device as an effective tool for its de-noising

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.; Zhang, Cun-lin; Deng, Chao; Zhao, Yuan-meng; Zhang, Xin

    2012-12-01

    Passive THz imaging devices are well known to have great potential for security applications. However, one of the main obstacles to their use is the low image quality of current passive THz cameras. To change this situation, it is necessary either to improve the engineering characteristics (resolution, sensitivity, and so on) of the THz camera or to apply computer processing to the image. In our opinion, the latter is preferable because it is less expensive. Below we illustrate the possibility of suppressing noise in images captured by three passive THz cameras developed at CNU (Beijing, China). After computer processing, image quality improves many times over. In many cases the achieved quality is sufficient to detect objects hidden under opaque clothing. We stress that the developed computer code is fast enough not to limit the performance of the passive THz imaging device. The obtained results demonstrate the high efficiency of our approach for the detection of hidden objects and represent a promising solution to the security problem. Nevertheless, the development of new spatial filters for THz image processing remains an open problem.
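
    The abstract does not disclose the actual de-noising algorithm; as a simple, hedged illustration of spatial filtering for noisy passive THz images, a median filter is sketched below.

    ```python
    # Illustrative spatial filtering of a noisy THz image; the median filter
    # is an assumption here, not the authors' published method.
    import numpy as np
    from scipy.ndimage import median_filter

    def denoise_thz_image(image: np.ndarray, size: int = 3) -> np.ndarray:
        """Suppress impulsive noise while preserving edges of hidden objects."""
        return median_filter(image, size=size)
    ```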

  9. Processed images in human perception: A case study in ultrasound breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Moi Hoon [Department of Computer Science, Loughborough University, FH09, Ergonomics and Safety Research Institute, Holywell Park (United Kingdom)], E-mail: M.H.Yap@lboro.ac.uk; Edirisinghe, Eran [Department of Computer Science, Loughborough University, FJ.05, Garendon Wing, Holywell Park, Loughborough LE11 3TU (United Kingdom); Bez, Helmut [Department of Computer Science, Loughborough University, Room N.2.26, Haslegrave Building, Loughborough University, Loughborough LE11 3TU (United Kingdom)

    2010-03-15

    Two main research efforts in early detection of breast cancer include the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups on the perceptual and cognitive tasks. From a Receiver Operating Characteristic (ROC) analysis, we conclude that providing computer-processed images alongside the original ultrasound images significantly improves the performance of non-radiologists in the perceptual tasks, while only marginal improvements are seen in the perceptual and cognitive tasks of the expert radiologists.
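
    A minimal sketch of the kind of ROC analysis described above, using scikit-learn; the confidence scores and ground-truth labels below are hypothetical, not data from the study.

    ```python
    # Hedged ROC example: reader confidence scores vs. confirmed labels.
    from sklearn.metrics import roc_auc_score, roc_curve

    y_true = [0, 0, 1, 1, 1, 0, 1, 0]                    # 1 = malignant (hypothetical)
    y_score = [0.2, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.1]   # reader confidence (hypothetical)

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    print("AUC =", roc_auc_score(y_true, y_score))
    ```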

  11. Intranasal dexmedetomidine for sedation for pediatric computed tomography imaging.

    Science.gov (United States)

    Mekitarian Filho, Eduardo; Robinson, Fay; de Carvalho, Werther Brunow; Gilio, Alfredo Elias; Mason, Keira P

    2015-05-01

    This prospective observational pilot study evaluated the aerosolized intranasal route for dexmedetomidine as a safe, effective, and efficient option for infant and pediatric sedation for computed tomography imaging. The mean time to sedation was 13.4 minutes, with excellent image quality and no failed sedations or significant adverse events. Registered with ClinicalTrials.gov: NCT01900405. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Validation of tumor protein marker quantification by two independent automated immunofluorescence image analysis platforms

    Science.gov (United States)

    Peck, Amy R; Girondo, Melanie A; Liu, Chengbao; Kovatich, Albert J; Hooke, Jeffrey A; Shriver, Craig D; Hu, Hai; Mitchell, Edith P; Freydin, Boris; Hyslop, Terry; Chervoneva, Inna; Rui, Hallgeir

    2016-01-01

    Protein marker levels in formalin-fixed, paraffin-embedded tissue sections traditionally have been assayed by chromogenic immunohistochemistry and evaluated visually by pathologists. Pathologist scoring of chromogen staining intensity is subjective and generates low-resolution ordinal or nominal data rather than continuous data. Emerging digital pathology platforms now allow quantification of chromogen or fluorescence signals by computer-assisted image analysis, providing continuous immunohistochemistry values. Fluorescence immunohistochemistry offers greater dynamic signal range than chromogen immunohistochemistry, and combined with image analysis holds the promise of enhanced sensitivity and analytic resolution, and consequently more robust quantification. However, commercial fluorescence scanners and image analysis software differ in features and capabilities, and claims of objective quantitative immunohistochemistry are difficult to validate as pathologist scoring is subjective and there is no accepted gold standard. Here we provide the first side-by-side validation of two technologically distinct commercial fluorescence immunohistochemistry analysis platforms. We document highly consistent results by (1) concordance analysis of fluorescence immunohistochemistry values and (2) agreement in outcome predictions, both for objective, data-driven cutpoint dichotomization with Kaplan–Meier analyses and for use of continuous marker values to compute receiver operating characteristic curves. The two platforms examined rely on distinct fluorescence immunohistochemistry imaging hardware, microscopy vs. line scanning, and functionally distinct image analysis software. Fluorescence immunohistochemistry values for nuclear-localized and tyrosine-phosphorylated Stat5a/b computed by each platform on a cohort of 323 breast cancer cases revealed high concordance after linear calibration, a finding confirmed on an independent 382 case cohort, with concordance correlation coefficients >0
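
    As a hedged sketch of the agreement statistic referred to above, Lin's concordance correlation coefficient (CCC) for paired platform measurements can be computed as follows; this implementation is illustrative, not the platforms' software.

    ```python
    # Lin's concordance correlation coefficient between two vectors of
    # immunofluorescence values measured on the same cases by two platforms.
    import numpy as np

    def concordance_ccc(x: np.ndarray, y: np.ndarray) -> float:
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return 2 * cov / (vx + vy + (mx - my) ** 2)
    ```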

  13. Metal Artifact Suppression in Dental Cone Beam Computed Tomography Images Using Image Processing Techniques.

    Science.gov (United States)

    Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh

    2018-01-01

    Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable for use. Streaking artifacts and cavities around teeth are the main causes of degradation. In this article, we propose a new artifact reduction algorithm with three parallel components. The first component extracts teeth based on modeling the image histogram with a Gaussian mixture model. The streak artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Finally, the results of these three components are combined in a fusion step to create a visually good image that is more compatible with the human visual system. Results show that the proposed algorithm reduces artifacts in dental CBCT images and produces clean images.
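
    An illustrative sketch of the first component described above: modeling the image histogram with a Gaussian mixture and keeping the brightest mode as a rough tooth mask. The component count and mode-selection logic are assumptions, not the published algorithm.

    ```python
    # Hedged sketch: histogram-based tooth extraction via a Gaussian mixture.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def tooth_mask(image: np.ndarray, n_components: int = 3) -> np.ndarray:
        intensities = image.reshape(-1, 1).astype(float)
        gmm = GaussianMixture(n_components=n_components, random_state=0).fit(intensities)
        labels = gmm.predict(intensities).reshape(image.shape)
        brightest = int(np.argmax(gmm.means_.ravel()))  # teeth: high-intensity mode
        return labels == brightest
    ```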

  14. Fractal analyses of osseous healing using Tuned Aperture Computed Tomography images

    International Nuclear Information System (INIS)

    Nair, M.K.; Nair, U.P.; Seyedain, A.; Webber, R.L.; Piesco, N.P.; Agarwal, S.; Mooney, M.P.; Groendahl, H.G.

    2001-01-01

    The aim of this study was to evaluate osseous healing in mandibular defects using fractal analyses of conventional radiographs and tuned aperture computed tomography (TACT; OrthoTACT, Instrumentarium Imaging, Helsinki, Finland) images. Eighty test sites on the inferior margins of rabbit mandibles were subjected to lesion induction and treated with one of the following: no treatment (controls); osteoblasts only; polymer matrix only; or an osteoblast-polymer matrix (OPM) combination. Images were acquired using conventional radiography and TACT, including unprocessed TACT (TACT-U) and iteratively restored TACT (TACT-IR). Healing was followed over time, with images acquired at 3, 6, 9, and 12 weeks post-surgery. Fractal dimension (FD) was computed within regions of interest in the defects using the TACT workbench. Results were analyzed for effects produced by imaging modality, treatment modality, time after surgery, and lesion location. Histomorphometric data were available to assess ground truth. Significant differences (p<0.0001) were noted based on imaging modality, with TACT-IR recording the highest mean fractal dimension (MFD), followed by TACT-U and conventional images, in that order. Sites treated with OPM recorded the highest MFDs among all treatment modalities (p<0.0001). The highest MFD over time was recorded at 3 weeks and differed significantly from that at 12 weeks (p<0.035). Correlation of FD with the histomorphometric data was high (r=0.79; p<0.001). The FD computed on TACT-IR showed the highest correlation with the histomorphometric data, establishing that TACT is a more efficient and accurate imaging modality for quantification of osseous changes within healing bony defects. (orig.)
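
    The study computed FD with the TACT workbench; as a hedged illustration, a standard box-counting estimate of fractal dimension for a binarized region of interest is sketched below (the box sizes are arbitrary choices, and a non-empty ROI is assumed).

    ```python
    # Illustrative box-counting fractal dimension for a binary trabecular ROI.
    import numpy as np

    def box_counting_fd(binary_roi: np.ndarray) -> float:
        sizes, counts = [2, 4, 8, 16, 32], []
        for s in sizes:
            h, w = binary_roi.shape
            # tile the ROI into s-by-s boxes and count boxes with foreground
            boxes = binary_roi[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
            counts.append(boxes.any(axis=(1, 3)).sum())
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope  # FD is the negative slope of log(count) vs. log(size)
    ```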

  15. Application of computed tomography virtual noncontrast spectral imaging in evaluation of hepatic metastases: a preliminary study.

    Science.gov (United States)

    Tian, Shi-Feng; Liu, Ai-Lian; Liu, Jing-Hong; Sun, Mei-Yu; Wang, He-Qing; Liu, Yi-Jun

    2015-03-05

    The objective was to qualitatively and quantitatively evaluate hepatic metastases using computed tomography (CT) virtual noncontrast (VNC) spectral imaging in a retrospective analysis. Forty patients with hepatic metastases underwent CT scans including the conventional true noncontrast (TNC) and the tri-phasic contrast-enhanced dual-energy spectral scans in the hepatic arterial, portal venous, and equilibrium phases. The tri-phasic spectral CT images were used to obtain three groups of VNC images in the arterial (VNCa), venous (VNCv), and equilibrium (VNCe) phases by the material decomposition process using water and iodine as a base material pair. The image quality and the contrast-to-noise ratio (CNR) of metastasis in the four groups were compared with ANOVA analysis. The metastasis detection rates with the four nonenhanced image groups were calculated and compared using the Chi-square test. There were no significant differences in image quality among TNC, VNCa and VNCv images (P > 0.05). The quality of VNCe images was significantly worse than that of the other three groups (P < 0.05). The metastasis detection rates of the four nonenhanced groups showed no statistically significant difference (P > 0.05). The quality of VNCa and VNCv images is identical to that of TNC images, and the metastasis detection rate in VNC images is similar to that in TNC images. VNC images obtained from the arterial phase show metastases more clearly. Thus, VNCa imaging may be a surrogate for TNC imaging in hepatic metastasis diagnosis.

  16. Application of Computed Tomography Virtual Noncontrast Spectral Imaging in Evaluation of Hepatic Metastases: A Preliminary Study

    Directory of Open Access Journals (Sweden)

    Shi-Feng Tian

    2015-01-01

    Full Text Available Objective: The objective was to qualitatively and quantitatively evaluate hepatic metastases using computed tomography (CT) virtual noncontrast (VNC) spectral imaging in a retrospective analysis. Methods: Forty patients with hepatic metastases underwent CT scans including the conventional true noncontrast (TNC) and the tri-phasic contrast-enhanced dual-energy spectral scans in the hepatic arterial, portal venous, and equilibrium phases. The tri-phasic spectral CT images were used to obtain three groups of VNC images in the arterial (VNCa), venous (VNCv), and equilibrium (VNCe) phases by the material decomposition process using water and iodine as a base material pair. The image quality and the contrast-to-noise ratio (CNR) of metastasis in the four groups were compared with ANOVA analysis. The metastasis detection rates with the four nonenhanced image groups were calculated and compared using the Chi-square test. Results: There were no significant differences in image quality among TNC, VNCa and VNCv images (P > 0.05). The quality of VNCe images was significantly worse than that of the other three groups (P < 0.05). The metastasis detection rates of the four nonenhanced groups showed no statistically significant difference (P > 0.05). Conclusions: The quality of VNCa and VNCv images is identical to that of TNC images, and the metastasis detection rate in VNC images is similar to that in TNC images. VNC images obtained from the arterial phase show metastases more clearly. Thus, VNCa imaging may be a surrogate for TNC imaging in hepatic metastasis diagnosis.
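
    The CNR compared across image groups in these two records is, in its usual form, the contrast between lesion and surrounding parenchyma divided by image noise; a hedged one-line sketch is given below (the input names are assumptions).

    ```python
    # Hedged sketch of a contrast-to-noise ratio (CNR) measurement.
    def cnr(mean_lesion: float, mean_liver: float, sd_noise: float) -> float:
        """CNR of a metastasis against surrounding liver parenchyma."""
        return abs(mean_lesion - mean_liver) / sd_noise
    ```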

  17. Study of Image Analysis Algorithms for Segmentation, Feature Extraction and Classification of Cells

    Directory of Open Access Journals (Sweden)

    Margarita Gamarra

    2017-08-01

    Full Text Available Recent advances in microscopy and improvements in image processing algorithms have allowed the development of computer-assisted analytical approaches to cell identification. Several applications can be mentioned in this field: cellular phenotype identification, disease detection and treatment, identification of virus entry into cells, and virus classification; these applications can help to complement the opinion of medical experts. Although many surveys have been presented on medical image analysis, they focus mainly on tissues and organs, and none of the surveys on cell images considers an analysis that follows the stages of a typical image processing pipeline: segmentation, feature extraction, and classification. The goal of this study is to provide a comprehensive and critical analysis of the trends in each stage of cell image processing. In this paper, we present a literature survey on cell identification using different image processing techniques.
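
    As a hedged sketch of the three-stage pipeline the survey is organized around (segmentation, feature extraction, classification), the fragment below uses Otsu thresholding, region properties, and a random forest; these specific choices are illustrative assumptions, not the survey's recommendations.

    ```python
    # Illustrative cell-image pipeline: segmentation -> features -> classifier.
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops
    from sklearn.ensemble import RandomForestClassifier

    def segment_cells(image: np.ndarray) -> np.ndarray:
        """Label connected foreground regions above an Otsu threshold."""
        return label(image > threshold_otsu(image))

    def extract_features(labeled: np.ndarray) -> np.ndarray:
        """Simple per-cell shape features (area, eccentricity, solidity)."""
        return np.array([[p.area, p.eccentricity, p.solidity]
                         for p in regionprops(labeled)])

    # Classification stage (given annotated training cells):
    # clf = RandomForestClassifier().fit(train_features, train_labels)
    ```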

  18. Large-scale computations on histology images reveal grade-differentiating parameters for breast cancer

    Directory of Open Access Journals (Sweden)

    Katsinis Constantine

    2006-10-01

    Full Text Available Abstract Background Tumor classification is inexact and largely dependent on the qualitative pathological examination of images of tumor tissue slides. In this study, our aim was to develop an automated computational method to classify Hematoxylin and Eosin (H&E) stained tissue sections based on cancer tissue texture features. Methods Image processing of histology slide images was used to detect and identify adipose tissue, extracellular matrix, morphologically distinct cell nuclei types, and the tubular architecture. The texture parameters derived from image analysis were then applied to classify images in a supervised classification scheme, using the histologic grade of a testing set as guidance. Results The histologic grade assigned by pathologists to invasive breast carcinoma images strongly correlated with both the presence and extent of cell nuclei with dispersed chromatin and the architecture, specifically the extent of tubular cross sections present. The two parameters that differentiated tumor grade in this study were (1) the number density of cell nuclei with dispersed chromatin and (2) the number density of tubular cross sections, identified through image processing as white blobs surrounded by a continuous string of cell nuclei. Classification based on subdivisions of a whole-slide image containing a high concentration of cancer cell nuclei consistently agreed with the grade classification of the entire slide. Conclusion The automated image analysis and classification presented in this study demonstrate the feasibility of developing clinically relevant classification of histology images based on micro-texture. This method provides pathologists an invaluable quantitative tool for evaluating the components of the Nottingham system for breast tumor grading and avoids intra-observer variability, thus increasing the consistency of the decision-making process.
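
    As a hedged illustration of micro-texture quantification of the kind described above, the sketch below computes a few gray-level co-occurrence matrix (GLCM) statistics for a histology tile; the distances, angles, and properties chosen are assumptions, not the study's parameters.

    ```python
    # Illustrative GLCM texture features for an 8-bit grayscale histology tile.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def texture_features(gray_tile: np.ndarray) -> dict:
        glcm = graycomatrix(gray_tile, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return {prop: float(graycoprops(glcm, prop).mean())
                for prop in ("contrast", "homogeneity", "energy", "correlation")}
    ```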

  19. Promises and challenges for the implementation of computational medical imaging (radiomics) in oncology.

    Science.gov (United States)

    Limkin, E J; Sun, R; Dercle, L; Zacharaki, E I; Robert, C; Reuzé, S; Schernberg, A; Paragios, N; Deutsch, E; Ferté, C

    2017-06-01

    Medical image processing and analysis (also known as Radiomics) is a rapidly growing discipline that maps digital medical images into quantitative data, with the end goal of generating imaging biomarkers as decision support tools for clinical practice. The use of imaging data from routine clinical work-up has tremendous potential in improving cancer care by heightening understanding of tumor biology and aiding in the implementation of precision medicine. As a noninvasive method of assessing the tumor and its microenvironment in their entirety, radiomics allows the evaluation and monitoring of tumor characteristics such as temporal and spatial heterogeneity. One can observe a rapid increase in the number of computational medical imaging publications, a milestone that highlights the utility of imaging biomarkers in oncology. Nevertheless, the use of radiomics as a clinical biomarker still requires refinement and standardization in order to achieve routine clinical adoption. This Review addresses the critical issues to ensure the proper development of radiomics as a biomarker and facilitate its implementation in clinical practice. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  20. Introduction of computed radiography in two mammography services: image quality and dose analysis

    International Nuclear Information System (INIS)

    Jakubiak, Rosangela R.; Gamba, Humberto R.; Ramos, Maria M.; Faversani, Gislene G.; Peixoto, Joao E.

    2008-01-01

    This study evaluated the impact of the introduction of CR technology in the routine of two mammography services operating with Lorad Affinity X-ray units and the CR Fuji Profect One system. Mean glandular dose (MGD) was determined to set Automatic Exposure Control (AEC) parameters according to optimal image quality. Reject analysis of patient images was performed to establish the overall impact on routine work. Regarding dose evaluation, the results show that in 44% of mammography exams the MGD was above action levels. This is due to inadequate AEC adjustment and insufficient training of the staff in operating CR systems. In addition, errors common with screen/film systems, such as wrong breast positioning and incorrect selection of exposure parameters, as well as X-ray unit failures such as insufficient anti-scatter grid movement, still occur. This indicates that ongoing efforts should be concentrated on the constancy of AEC adjustment for CR image plates and on staff training. Until now, Brazilian health authorities have not implemented any program to evaluate the performance of digital systems, but efforts are being made to provide guidelines for breast screening aimed at controlling breast dose while maintaining optimal image quality. (author)