WorldWideScience

Sample records for automated medical image

  1. Automated semantic indexing of imaging reports to support retrieval of medical images in the multimedia electronic medical record.

    Science.gov (United States)

    Lowe, H J; Antipov, I; Hersh, W; Smith, C A; Mailhot, M

    1999-12-01

    This paper describes preliminary work evaluating automated semantic indexing of radiology imaging reports to represent images stored in the Image Engine multimedia medical record system at the University of Pittsburgh Medical Center. The authors used the SAPHIRE indexing system to automatically identify important biomedical concepts within radiology reports and represent these concepts with terms from the 1998 edition of the U.S. National Library of Medicine's Unified Medical Language System (UMLS) Metathesaurus. This automated UMLS indexing was then compared with manual UMLS indexing of the same reports. Human indexing identified appropriate UMLS Metathesaurus descriptors for 81% of the important biomedical concepts contained in the report set. SAPHIRE automatically identified UMLS Metathesaurus descriptors for 64% of the important biomedical concepts contained in the report set. The overall conclusions of this pilot study were that the UMLS metathesaurus provided adequate coverage of the majority of the important concepts contained within the radiology report test set and that SAPHIRE could automatically identify and translate almost two thirds of these concepts into appropriate UMLS descriptors. Further work is required to improve both the recall and precision of this automated concept extraction process.
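
    A minimal, hedged sketch of the kind of dictionary-based concept indexing described above: scan a report for known surface forms, return Metathesaurus-style descriptors, and score recall against a manual index. The concept table, codes, and report text are invented placeholders, not SAPHIRE's actual algorithm or real UMLS content.

        import re

        # Hypothetical surface-form -> (code, preferred name) table; not real UMLS data.
        CONCEPTS = {
            "pleural effusion": ("CUI-0001", "Pleural effusion"),
            "cardiomegaly":     ("CUI-0002", "Cardiomegaly"),
            "pneumothorax":     ("CUI-0003", "Pneumothorax"),
        }

        def index_report(text):
            """Return the set of descriptors whose surface forms occur in the report."""
            text = text.lower()
            return {desc for form, desc in CONCEPTS.items()
                    if re.search(r"\b" + re.escape(form) + r"\b", text)}

        def concept_recall(automatic, manual):
            """Fraction of manually assigned descriptors that were also found automatically."""
            return len(automatic & manual) / len(manual) if manual else 1.0

        report = "Chest film shows mild cardiomegaly; no pleural effusion or pneumothorax."
        auto = index_report(report)
        manual = set(CONCEPTS.values())     # pretend the manual indexer assigned all three
        print(auto, concept_recall(auto, manual))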

  2. Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking

    Science.gov (United States)

    Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward

    2011-01-01

    To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk
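
    The batch pattern described above (pull exam parameters from a database, run an external dose tool, write dose and risk back) can be illustrated with a short, hedged sketch. The real system was MATLAB code using the Microsoft .NET Framework against a NASA database; the sqlite schema, the dose_calc executable and its flags, and the risk coefficient below are all hypothetical stand-ins.

        import sqlite3
        import subprocess

        def run_dose_tool(params):
            """Call an external calculator (stand-in for PCXMC) and return the dose in mSv."""
            out = subprocess.run(["dose_calc", "--kvp", str(params["kvp"]),
                                  "--mas", str(params["mas"]),
                                  "--projection", params["projection"]],
                                 capture_output=True, text=True, check=True)
            return float(out.stdout.strip())       # assume the tool prints a single number

        def process_pending_exams(db_path="exams.db"):
            con = sqlite3.connect(db_path)
            rows = con.execute("SELECT exam_id, kvp, mas, projection FROM exams WHERE dose IS NULL")
            for exam_id, kvp, mas, projection in rows.fetchall():
                dose = run_dose_tool({"kvp": kvp, "mas": mas, "projection": projection})
                risk = dose * 5.5e-5               # illustrative lifetime-risk coefficient per mSv
                con.execute("UPDATE exams SET dose = ?, risk = ? WHERE exam_id = ?",
                            (dose, risk, exam_id))
            con.commit()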

  3. The Automated Medical Office

    OpenAIRE

    1990-01-01

    With shock and surprise, many physicians learned in the 1980s that they must change the way they do business. Competition for patients, increasing government regulation, and the rapidly escalating risk of litigation force physicians to seek modern remedies in office management. The author describes a medical clinic that strives to be paperless, using electronic innovation to solve the problems of medical practice management. A computer software program to automate information management in a c...

  4. The automated medical office.

    Science.gov (United States)

    Petreman, M

    1990-08-01

    With shock and surprise, many physicians learned in the 1980s that they must change the way they do business. Competition for patients, increasing government regulation, and the rapidly escalating risk of litigation force physicians to seek modern remedies in office management. The author describes a medical clinic that strives to be paperless, using electronic innovation to solve the problems of medical practice management. A computer software program to automate information management in a clinic shows that practical thinking linked to advanced technology can greatly improve office efficiency.

  5. Automating the segmentation of medical images for the production of voxel tomographic computational models.

    Science.gov (United States)

    Caon, M; Mohyla, J

    2001-12-01

    Radiation dosimetry for the diagnostic medical imaging procedures performed on humans requires anatomically accurate, computational models. These may be constructed from medical images as voxel-based tomographic models. However, they are time-consuming to produce and, as a consequence, few are available. This paper discusses the emergence of semi-automatic segmentation techniques and describes an application (iRAD) written in Microsoft Visual Basic that allows the bitmap of a medical image to be segmented interactively and semi-automatically while displayed in Microsoft Excel. iRAD will decrease the time required to construct voxel models.

  6. Integrating spatial fuzzy clustering with level set methods for automated medical image segmentation.

    Science.gov (United States)

    Li, Bing Nan; Chui, Chee Kong; Chang, Stephen; Ong, S H

    2011-01-01

    The performance of level set segmentation is subject to appropriate initialization and optimal configuration of controlling parameters, which require substantial manual intervention. A new fuzzy level set algorithm is proposed in this paper to facilitate medical image segmentation. It is able to evolve directly from the initial segmentation obtained by spatial fuzzy clustering. The controlling parameters of level set evolution are also estimated from the results of fuzzy clustering. Moreover, the fuzzy level set algorithm is enhanced with locally regularized evolution. Such improvements facilitate level set manipulation and lead to more robust segmentation. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities. The results confirm its effectiveness for medical image segmentation.
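
    A compact sketch of the initialization idea, under simplifying assumptions: plain fuzzy c-means on image intensities produces a membership map, and its thresholded version seeds the level set function. This illustrates only the coupling, not the authors' full algorithm (no spatial weighting of the memberships and no level set evolution are shown).

        import numpy as np

        def fcm_memberships(img, c=2, m=2.0, n_iter=50):
            """Intensity-only fuzzy c-means; returns memberships of shape (c, H, W)."""
            x = img.astype(float).ravel()
            centers = np.linspace(x.min(), x.max(), c)
            for _ in range(n_iter):
                d = np.abs(x[None, :] - centers[:, None]) + 1e-9          # (c, N) distances
                u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
                centers = (u ** m @ x) / np.sum(u ** m, axis=1)
            return u.reshape(c, *img.shape)

        def initial_level_set(img, cluster=1, thresh=0.5):
            """Signed initial level set: positive inside the selected fuzzy cluster."""
            u = fcm_memberships(img)[cluster]
            return np.where(u > thresh, 1.0, -1.0)

        img = np.random.rand(64, 64)          # stand-in for a medical image slice
        phi0 = initial_level_set(img)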

  7. [Value of automated medical indexing of an image database and a digital radiological library].

    Science.gov (United States)

    Duvauferrier, R; Le Beux, P; Pouliquen, B; Seka, L P; Morcet, N; Rolland, Y

    1997-06-01

    We indexed the contents of a radiology server on the web to facilitate access to research documents and to link reference texts to images contained in radiology databases. Indexation also allows case reports to be transformed with no supplementary work into formats compatible with computer-assisted training. Indexation was performed automatically by ADM-Index, the aim being to identify the medical concepts expressed within each medical text. Two types of texts were indexed: medical imaging reference books (Edicerf) and case reports with illustrations and captions (Iconocerf). These documents are now available on a web server with HTML format for Edicerf and on an Oracle database for Iconocerf. When the user consults a chapter of a book or a case report, the indexed terms are displayed in the heading; all reference texts and case reports containing the indexed terms can then be called up instantaneously. The user can express his search in natural language. Indexation follows the same process allowing instantaneous recall of all reference texts and case reports where the same concept appears in the diagnosis or clinical context. By using the context of the case reports as the search index, all case reports involving a common medical concept can be found. The context is interpreted as a question. When the user responds to this question, ADM-Index compares this response with the answer furnished by the reference texts and case reports. Correct or erroneous responses can thus be identified, converting the system into a computer-assisted training tool.

  8. Medical imaging

    CERN Document Server

    Townsend, David W

    1996-01-01

    Since the introduction of the X-ray scanner into radiology almost 25 years ago, non-invasive imaging has become firmly established as an essential tool in the diagnosis of disease. Fully three-dimensional imaging of internal organs is now possible, both for anatomical studies and for studies which explore the functional status of the body. Powerful techniques to correlate anatomy and function are available, and scanners which combine anatomical and functional imaging in a single device are under development. Such techniques have been made possible through recent technological and mathematical advances. This series of lectures will review both the physical basis of medical imaging techniques using X-rays, gamma and positron emitting radioisotopes, and nuclear magnetic resonance, and the mathematical methods used to reconstruct three-dimensional distributions from projection data. The lectures will trace the development of medical imaging from simple radiographs to the present-day non-invasive measurement of in vivo biochemistry. They ...

  9. Automated Medical Literature Retrieval

    Directory of Open Access Journals (Sweden)

    David Hawking

    2012-09-01

    Background The constantly growing publication rate of medical research articles puts increasing pressure on medical specialists who need to be aware of the recent developments in their field. The currently used literature retrieval systems allow researchers to find specific papers; however, the search task is still repetitive and time-consuming. Aims In this paper we describe a system that retrieves medical publications by automatically generating queries based on data from an electronic patient record. This allows the doctor to focus on medical issues and provide an improved service to the patient, with higher confidence that it is underpinned by current research. Method Our research prototype automatically generates query terms based on the patient record and adds weight factors for each term. Currently the patient’s age is taken into account with a fuzzy logic derived weight, and terms describing blood-related anomalies are derived from recent blood test results. Conditionally selected homonyms are used for query expansion. The query retrieves matching records from a local index of PubMed publications and displays results in descending relevance for the given patient. Recent publications are clearly highlighted for instant recognition by the researcher. Results Nine medical specialists from the Royal Adelaide Hospital evaluated the system and submitted pre-trial and post-trial questionnaires. Throughout the study we received positive feedback: doctors felt the support provided by the prototype was useful and said they would like to use it in their daily routine. Conclusion By supporting the time-consuming task of query formulation and iterative modification, as well as by presenting the search results in order of relevance for the specific patient, literature retrieval becomes part of the daily workflow of busy professionals.
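
    A hedged sketch of the query-generation step: terms are derived from the patient record and given weights, with a fuzzy membership used for the age term. The record fields, weights, age bands, and the boost syntax (term^weight, as in Lucene-style engines) are illustrative assumptions, not the authors' implementation.

        def age_weight(age, band_centre, width=15.0):
            """Triangular fuzzy membership of the patient's age in an age band."""
            return max(0.0, 1.0 - abs(age - band_centre) / width)

        def build_query(record):
            terms = []
            for problem in record["problems"]:                 # e.g. coded diagnoses
                terms.append((problem, 1.0))
            for test, value, upper in record["blood_tests"]:   # flag out-of-range results
                if value > upper:
                    terms.append((f"elevated {test}", 0.8))
            band = "aged" if record["age"] >= 65 else "adult"
            terms.append((band, age_weight(record["age"], 80 if band == "aged" else 40)))
            # weighted boolean query, ordered by weight (exact syntax depends on the engine)
            return " OR ".join(f"{t}^{w:.2f}" for t, w in sorted(terms, key=lambda x: -x[1]))

        record = {"age": 72, "problems": ["anaemia"], "blood_tests": [("ferritin", 400, 300)]}
        print(build_query(record))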

  10. A new kernel-based fuzzy level set method for automated segmentation of medical images in the presence of intensity inhomogeneity.

    Science.gov (United States)

    Rastgarpour, Maryam; Shanbehzadeh, Jamshid

    2014-01-01

    Researchers have recently applied integrative approaches to automated medical image segmentation, combining the benefits of available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area which has received comparatively little attention from such approaches, even though it has considerable effects on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm following an integrative approach to deal with this problem. It can directly evolve from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM). The controlling parameters of level set evolution are also estimated from the results of GKFCM. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of an image. Such improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm has valuable benefits including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities. The results confirm its effectiveness for medical image segmentation.
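
    To complement the description above, here is a small sketch of the Gaussian-kernel fuzzy c-means (GKFCM) update that this family of methods uses to seed the level set: Euclidean distances are replaced by the kernel-induced distance 1 - K(x, v). Parameter values and the 1-D intensity model are illustrative assumptions, and the spatial and bias-field terms of the full method are omitted.

        import numpy as np

        def gkfcm_step(x, centers, m=2.0, sigma=0.5):
            """One membership/centre update of kernel FCM on 1-D intensities x."""
            K = np.exp(-((x[None, :] - centers[:, None]) ** 2) / (2 * sigma ** 2))  # (c, N)
            d = 1.0 - K + 1e-9                                                      # kernel distance
            u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (1 / (m - 1)), axis=1)
            w = (u ** m) * K
            centers = (w @ x) / np.sum(w, axis=1)
            return u, centers

        x = np.random.rand(1000)                  # flattened image intensities
        centers = np.array([0.2, 0.8])
        for _ in range(30):
            u, centers = gkfcm_step(x, centers)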

  11. A System For Automated Medical Photography

    Science.gov (United States)

    Tivattanasuk, Eva S.; Kaczoroski, Anthony J.; Rhodes, Michael L.

    1988-06-01

    A system is described that electronically controls the medical photography for a computed tomography (CT) scanner system. Multiple CT exams can be photographed with each image automatically adjusted to a specific gamma table presentation and positioned to any film location within a given film format. Our approach uses a library that can store 24 CT exam photography protocols. Library entries can be added, deleted, or edited. Mixed film formats, multiple image types, and automated annotation capabilities allow all CT exams to be filmed at our clinic cost-effectively and unattended. Using this automated approach to CT exam photography, one full-time equivalent CT technologist has been saved from the operational cost of our center. We outline the film protocol database, illustrate protocol options and by example, show the flexibility of this approach. Features of this system illustrate essential components of any such approach.

  12. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  13. Automating Shallow Seismic Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Steeples, Don W.

    2004-12-09

    This seven-year, shallow-seismic reflection research project had the aim of improving geophysical imaging of possible contaminant flow paths. Thousands of chemically contaminated sites exist in the United States, including at least 3,700 at Department of Energy (DOE) facilities. Imaging technologies such as shallow seismic reflection (SSR) and ground-penetrating radar (GPR) sometimes are capable of identifying geologic conditions that might indicate preferential contaminant-flow paths. Historically, SSR has been used very little at depths shallower than 30 m, and even more rarely at depths of 10 m or less. Conversely, GPR is rarely useful at depths greater than 10 m, especially in areas where clay or other electrically conductive materials are present near the surface. Efforts to image the cone of depression around a pumping well using seismic methods were only partially successful (for complete references of all research results, see the full Final Technical Report, DOE/ER/14826-F), but peripheral results included development of SSR methods for depths shallower than one meter, a depth range that had not been achieved before. Imaging at such shallow depths, however, requires geophone intervals of the order of 10 cm or less, which makes such surveys very expensive in terms of human time and effort. We also showed that SSR and GPR could be used in a complementary fashion to image the same volume of earth at very shallow depths. The primary research focus of the second three-year period of funding was to develop and demonstrate an automated method of conducting two-dimensional (2D) shallow-seismic surveys with the goal of saving time, effort, and money. Tests involving the second generation of the hydraulic geophone-planting device dubbed the ''Autojuggie'' showed that large numbers of geophones can be placed quickly and automatically and can acquire high-quality data, although not under rough topographic conditions. In some easy

  14. Medical linguistics: automated indexing into SNOMED.

    Science.gov (United States)

    Wingert, F

    1988-01-01

    This paper reviews the state of the art in processing medical language data. The area is divided into the topics: (1) morphologic analysis, (2) syntactic analysis, (3) semantic analysis, and (4) pragmatics. Additional attention is given to medical nomenclatures and classifications as the bases of (automated) indexing procedures which are required whenever medical information is formalized. These topics are completed by an evaluation of related data structures and methods used to organize language-based medical knowledge.

  15. Automated Orientation of Aerial Images

    DEFF Research Database (Denmark)

    Høhle, Joachim

    2002-01-01

    Methods for automated orientation of aerial images are presented. They are based on the use of templates, which are derived from existing databases, and area-based matching. The characteristics of available database information and the accuracy requirements for map compilation and orthoimage...

  16. Generative Interpretation of Medical Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2004-01-01

    This thesis describes, proposes and evaluates methods for automated analysis and quantification of medical images. A common theme is the usage of generative methods, which draw inference from unknown images by synthesising new images having shape, pose and appearance similar to the analysed image...... fraction from 4D cardiac cine MRI, myocardial perfusion in bolus passage cardiac perfusion MRI, corpus callosum shape and area in mid-sagittal brain MRI, and finally, lung, heart, clavicle location and cardiothoracic ratio in anterior-posterior chest radiographs....

  17. Color Medical Image Analysis

    CERN Document Server

    Schaefer, Gerald

    2013-01-01

    Since the early 20th century, medical imaging has been dominated by monochrome imaging modalities such as x-ray, computed tomography, ultrasound, and magnetic resonance imaging. As a result, color information has been overlooked in medical image analysis applications. Recently, various medical imaging modalities that involve color information have been introduced. These include cervicography, dermoscopy, fundus photography, gastrointestinal endoscopy, microscopy, and wound photography. However, in comparison to monochrome images, the analysis of color images is a relatively unexplored area. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for monochrome images are often not directly applicable to multichannel images. The goal of this volume is to summarize the state-of-the-art in the utilization of color information in medical image analysis.

  18. Medical Image Fusion

    Directory of Open Access Journals (Sweden)

    Mitra Rafizadeh

    2007-08-01

    Technological advances in medical imaging in the past two decades have enabled radiologists to create images of the human body with unprecedented resolution. MRI, PET and other imaging devices can quickly acquire 3D images. Image fusion establishes an anatomical correlation between corresponding images derived from different examinations. This fusion is applied either to combine images of different modalities (CT, MRI) or of a single modality (PET-PET). Image fusion is performed in two steps: (1) Registration: spatial modification (e.g. translation) of the model image relative to the reference image in order to arrive at an ideal matching of both images; registration methods comprise feature-based and intensity-based approaches. (2) Visualization: the goal is to depict the spatial relationship between the model image and the reference image. A notable clinical application is in nuclear medicine (PET/CT).
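
    The registration step can be sketched, under toy assumptions, as an exhaustive search over small integer translations of the model image, scoring each candidate against the reference image with mutual information (an intensity-based similarity often used for multi-modality fusion). A clinical pipeline would add subpixel optimization, rotation or deformation, and proper resampling.

        import numpy as np

        def mutual_information(a, b, bins=32):
            h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = h / h.sum()
            px, py = p.sum(axis=1), p.sum(axis=0)
            nz = p > 0
            return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))

        def register_translation(reference, model, search=5):
            """Return the (dy, dx) shift of `model` that best aligns it with `reference`."""
            best, best_shift = -np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    shifted = np.roll(np.roll(model, dy, axis=0), dx, axis=1)
                    mi = mutual_information(reference, shifted)
                    if mi > best:
                        best, best_shift = mi, (dy, dx)
            return best_shift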

  19. Medical imaging systems

    Science.gov (United States)

    Frangioni, John V

    2013-06-25

    A medical imaging system provides simultaneous rendering of visible light and diagnostic or functional images. The system may be portable, and may include adapters for connecting various light sources and cameras in open surgical environments or laparoscopic or endoscopic environments. A user interface provides control over the functionality of the integrated imaging system. In one embodiment, the system provides a tool for surgical pathology.

  20. Medical ultrasound imaging

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2007-01-01

    The paper gives an introduction to current medical ultrasound imaging systems. The basics of anatomic and blood flow imaging are described. The properties of medical ultrasound and its focusing are described, and the various methods for two- and three-dimensional imaging of the human anatomy...... are shown. Both systems using linear and non-linear propagation of ultrasound are described. The blood velocity can also be non-invasively visualized using ultrasound and the basic signal processing for doing this is introduced. Examples for spectral velocity estimation, color flow imaging and the new vector...

  1. Automated image enhancement using power law transformations

    Indian Academy of Sciences (India)

    S P Vimal; P K Thiruvikraman

    2012-12-01

    We propose a scheme for automating power law transformations which are used for image enhancement. The scheme we propose does not require the user to choose the exponent in the power law transformation. This method works well for images having poor contrast, especially for those images in which the peaks corresponding to the background and the foreground are not widely separated.
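
    A short illustration of power-law (gamma) enhancement with an automatically chosen exponent. The selection rule below (choose gamma so the mean normalized intensity maps to mid-grey) is a simple heuristic assumed for demonstration; it is not necessarily the scheme proposed by the authors.

        import numpy as np

        def auto_gamma(img):
            """Choose gamma so that the mean normalized intensity maps to 0.5."""
            r = img.astype(float) / 255.0
            mean = np.clip(r.mean(), 1e-6, 1 - 1e-6)
            return np.log(0.5) / np.log(mean)

        def power_law_enhance(img):
            r = img.astype(float) / 255.0
            gamma = auto_gamma(img)
            return np.clip(255.0 * r ** gamma, 0, 255).astype(np.uint8)

        low_contrast = (np.random.rand(128, 128) * 60 + 20).astype(np.uint8)   # dark test image
        enhanced = power_law_enhance(low_contrast)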

  2. Automated imaging system for single molecules

    Science.gov (United States)

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.
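
    One of the corrections mentioned above, compensation for uneven illumination, is commonly done by flat-field division; a hedged sketch follows. The synthetic vignetting profile and the absence of a measured dark frame are simplifying assumptions, not details of the patented system.

        import numpy as np

        def flat_field_correct(raw, flat, dark=None):
            """Correct uneven illumination: (raw - dark) / normalized (flat - dark)."""
            dark = np.zeros_like(raw, dtype=float) if dark is None else dark.astype(float)
            gain = flat.astype(float) - dark
            gain /= gain.mean()
            return (raw.astype(float) - dark) / np.maximum(gain, 1e-6)

        yy, xx = np.mgrid[0:256, 0:256]
        flat = 1.0 - 0.4 * ((xx - 128) ** 2 + (yy - 128) ** 2) / 128 ** 2   # vignetting-like profile
        raw = flat * 100.0 + np.random.poisson(5, flat.shape)               # toy molecule-free frame
        corrected = flat_field_correct(raw, flat)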

  3. Classification in Medical Imaging

    DEFF Research Database (Denmark)

    Chen, Chen

    Classification is extensively used in the context of medical image analysis for the purpose of diagnosis or prognosis. In order to classify image content correctly, one needs to extract efficient features with discriminative properties and build classifiers based on these features. In addition...... to segment breast tissue and pectoral muscle area from the background in mammograms. The second focus is the choice of metric and its influence on the feasibility of a classifier, especially for the k-nearest neighbors (k-NN) algorithm, with medical applications in breast cancer prediction and calcification...

  4. Automated image analysis techniques for cardiovascular magnetic resonance imaging

    NARCIS (Netherlands)

    Geest, Robertus Jacobus van der

    2011-01-01

    The introductory chapter provides an overview of various aspects related to quantitative analysis of cardiovascular MR (CMR) imaging studies. Subsequently, the thesis describes several automated methods for quantitative assessment of left ventricular function from CMR imaging studies. Several novel

  5. Comparison of automated and manual segmentation of hippocampus MR images

    Science.gov (United States)

    Haller, John W.; Christensen, Gary E.; Miller, Michael I.; Joshi, Sarang C.; Gado, Mokhtar; Csernansky, John G.; Vannier, Michael W.

    1995-05-01

    The precision and accuracy of area estimates from magnetic resonance (MR) brain images obtained using manual and automated segmentation methods are determined. Areas of the human hippocampus were measured to compare a new automatic method of segmentation with regions of interest drawn by an expert. MR images of nine normal subjects and nine schizophrenic patients were acquired with a 1.5-T unit (Siemens Medical Systems, Inc., Iselin, New Jersey). From each individual MPRAGE 3D volume image, a single comparable 2-D slice (matrix = 256 x 256) was chosen which corresponds to the same coronal slice of the hippocampus. The hippocampus was first manually segmented, then segmented using high dimensional transformations of a digital brain atlas to individual brain MR images. The repeatability of a trained rater was assessed by comparing two measurements from each individual subject. Variability was also compared within and between subject groups of schizophrenics and normal subjects. Finally, the precision and accuracy of automated segmentation of hippocampal areas were determined by comparing automated measurements to manual segmentation measurements made by the trained rater on MR and brain slice images. The results demonstrate the high repeatability of area measurement from MR images of the human hippocampus. Automated segmentation using high dimensional transformations from a digital brain atlas provides repeatability superior to that of manual segmentation. Furthermore, the validity of automated measurements was demonstrated by a high correlation with manual segmentation measurements made by a trained rater. Quantitative morphometry of brain substructures (e.g. the hippocampus) is feasible by use of a high dimensional transformation of a digital brain atlas to an individual MR image. This method automates the search for neuromorphological correlates of schizophrenia by a new mathematically robust method with unprecedented sensitivity to small local and regional differences.
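
    The two kinds of comparison reported above can be illustrated with a small sketch: the Dice similarity coefficient between automated and manual masks, and a simple repeatability measure between two measurements by the same rater. The toy masks below stand in for real hippocampus segmentations.

        import numpy as np

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def area_repeatability(area1, area2):
            """Absolute area difference as a fraction of the mean area (lower is better)."""
            return abs(area1 - area2) / ((area1 + area2) / 2.0)

        manual = np.zeros((64, 64), dtype=bool); manual[20:40, 22:42] = True
        auto   = np.zeros((64, 64), dtype=bool); auto[21:41, 23:43] = True
        print(dice(manual, auto), area_repeatability(manual.sum(), auto.sum()))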

  6. Analyzing and mining automated imaging experiments.

    Science.gov (United States)

    Berlage, Thomas

    2007-04-01

    Image mining is the application of computer-based techniques that extract and exploit information from large image sets to support human users in generating knowledge from these sources. This review focuses on biomedical applications of this technique, in particular automated imaging at the cellular level. Due to increasing automation and the availability of integrated instruments, biomedical users are becoming increasingly confronted with the problem of analyzing such data. Image database applications need to combine data management, image analysis and visual data mining. The main point of such a system is a software layer that represents objects within an image and the ability to use a large spectrum of quantitative and symbolic object features. Image analysis needs to be adapted to each particular experiment; therefore, 'end user programming' will be desired to make the technology more widely applicable.

  7. Wavelets in medical imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H. [Sharda University, SET, Department of Electronics and Communication, Knowledge Park 3rd, Gr. Noida (India); University of Kocaeli, Department of Mathematics, 41380 Kocaeli (Turkey); Istanbul Aydin University, Department of Computer Engineering, 34295 Istanbul (Turkey); Sharda University, SET, Department of Mathematics, 32-34 Knowledge Park 3rd, Greater Noida (India)

    2012-07-17

    The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, by applying the Fourier transform and the wavelet transform, analysis and denoising of one of the important biomedical signals, the EEG, is carried out. The presence of rhythm, template matching, and correlation is discussed using various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by the stationary wavelet transform (SWT).
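
    A hedged sketch of the SWT denoising step mentioned above, using the PyWavelets package: stationary wavelet decomposition, soft thresholding of the detail coefficients, and reconstruction. The wavelet, decomposition level, and universal-threshold rule are common defaults, not necessarily those used in the chapter.

        import numpy as np
        import pywt

        def swt_denoise(signal, wavelet="db4", level=3):
            n = len(signal) - len(signal) % (2 ** level)        # SWT needs a multiple of 2**level
            x = signal[:n]
            coeffs = pywt.swt(x, wavelet, level=level)          # list of (approx, detail) pairs
            sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # noise estimate from finest details
            thr = sigma * np.sqrt(2 * np.log(n))                # universal threshold
            denoised = [(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs]
            return pywt.iswt(denoised, wavelet)

        t = np.linspace(0, 2, 1024)
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # toy 10 Hz rhythm + noise
        clean = swt_denoise(eeg)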

  8. Medical Image Analysis Facility

    Science.gov (United States)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  9. Semantic annotation of medical images

    Science.gov (United States)

    Seifert, Sascha; Kelm, Michael; Moeller, Manuel; Mukherjee, Saikat; Cavallaro, Alexander; Huber, Martin; Comaniciu, Dorin

    2010-03-01

    Diagnosis and treatment planning for patients can be significantly improved by comparing with clinical images of other patients with similar anatomical and pathological characteristics. This requires the images to be annotated using common vocabulary from clinical ontologies. Current approaches to such annotation are typically manual, consuming extensive clinician time, and cannot be scaled to large amounts of imaging data in hospitals. On the other hand, automated image analysis, while very scalable, does not leverage standardized semantics and thus cannot be used across specific applications. In our work, we describe an automated and context-sensitive workflow based on an image parsing system complemented by an ontology-based context-sensitive annotation tool. A unique characteristic of our framework is that it brings together the diverse paradigms of machine learning based image analysis and ontology based modeling for accurate and scalable semantic image annotation.

  10. Automated Image Data Exploitation Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, C; Poland, D; Sengupta, S K; Futterman, J H

    2004-01-26

    The automated production of maps of human settlement from recent satellite images is essential to detailed studies of urbanization, population movement, and the like. Commercial satellite imagery is becoming available with sufficient spectral and spatial resolution to apply computer vision techniques previously considered only for laboratory (high resolution, low noise) images. In this project, we extracted the boundaries of human settlements from IKONOS 4-band and panchromatic images using spectral segmentation together with a form of generalized second-order statistics and detection of edges and corners.

  11. Plenoptic Imager for Automated Surface Navigation

    Science.gov (United States)

    Zollar, Byron; Milder, Andrew; Mayo, Michael

    2010-01-01

    An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved the feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprised of a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.

  12. Medical alert bracelet (image)

    Science.gov (United States)

    People with diabetes should always wear a medical alert bracelet or necklace that emergency medical workers will be able to find. Medical identification products can help ensure proper treatment in an ...

  13. Automated spectral imaging for clinical diagnostics

    Science.gov (United States)

    Breneman, John; Heffelfinger, David M.; Pettipiece, Ken; Tsai, Chris; Eden, Peter; Greene, Richard A.; Sorensen, Karen J.; Stubblebine, Will; Witney, Frank

    1998-04-01

    Bio-Rad Laboratories supplies imaging equipment for many applications in the life sciences. As part of our effort to offer more flexibility to the investigator, we are developing a microscope-based imaging spectrometer for the automated detection and analysis of either conventionally or fluorescently labeled samples. Immediate applications will include the use of fluorescence in situ hybridization (FISH) technology. The field of cytogenetics has benefited greatly from the increased sensitivity of FISH, which simplifies the analysis of complex chromosomal rearrangements. FISH methods for identification lend themselves to automation more easily than the current cytogenetics industry standard of G-banding; however, the methods are complementary. Several technologies have been demonstrated successfully for analyzing the signals from labeled samples, including filter exchanging and interferometry. The detection system lends itself to other fluorescent applications including the display of labeled tissue sections, DNA chips, capillary electrophoresis or any other system using color as an event marker. Enhanced displays of conventionally stained specimens will also be possible.

  14. Prehospital digital photography and automated image transmission in an emergency medical service – an ancillary retrospective analysis of a prospective controlled trial

    Directory of Open Access Journals (Sweden)

    Bergrath Sebastian

    2013-01-01

    Background Still picture transmission was performed using a telemedicine system in an Emergency Medical Service (EMS) during a prospective, controlled trial. In this ancillary, retrospective study the quality and content of the transmitted pictures and the possible influences of this application on prehospital time requirements were investigated. Methods A digital camera was used with a telemedicine system enabling encrypted audio and data transmission between an ambulance and a remotely located physician. By default, images were compressed (jpeg, 640 x 480 pixels). On occasion, this compression was deactivated (3648 x 2736 pixels). Two independent investigators assessed all transmitted pictures according to predefined criteria. In cases of different ratings, a third investigator made the final decision. Patient characteristics and time intervals were extracted from the EMS protocol sheets and dispatch centre reports. Results Overall 314 pictures (mean 2.77 ± 2.42 pictures/mission) were transmitted during 113 missions (group 1). Pictures were not taken for 151 missions (group 2). Regarding picture quality, the content of 240 (76.4%) pictures was clearly identifiable; 45 (14.3%) pictures were considered “limited quality” and 29 (9.2%) pictures were deemed “not useful” due to not/hardly identifiable content. For pictures with file compression (n = 84 missions) and without (n = 17 missions), the content was clearly identifiable in 74% and 97% of the pictures, respectively (p = 0.003). Medical reports (n = 98, 32.8%), medication lists (n = 49, 16.4%) and 12-lead ECGs (n = 28, 9.4%) were most frequently photographed. The patient characteristics of group 1 vs. 2 were as follows: median age – 72.5 vs. 56.5 years, p = 0.001; frequency of acute coronary syndrome – 24/113 vs. 15/151, p = 0.014. The NACA scores and gender distribution were comparable. Median on-scene times were longer with picture

  15. Automated Image Retrieval of Chest CT Images Based on Local Grey Scale Invariant Features.

    Science.gov (United States)

    Arrais Porto, Marcelo; Cordeiro d'Ornellas, Marcos

    2015-01-01

    Text-based tools are regularly employed to retrieve medical images for reading and interpretation using current retrieval Picture Archiving and Communication Systems (PACS), but they pose some drawbacks. All-purpose content-based image retrieval (CBIR) systems are limited when dealing with medical images and do not fit well into PACS workflow and clinical practice. This paper presents an automated image retrieval approach for chest CT images based on local grey scale invariant features from a local database. Performance was measured in terms of precision and recall, average retrieval precision (ARP), and average retrieval rate (ARR). Preliminary results have shown the effectiveness of the proposed approach. The prototype is also a useful tool for radiology research and education, providing valuable information to the medical and broader healthcare community.
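
    The retrieval metrics quoted above can be sketched as follows for ranked result lists; `results` maps each query to its ranked retrieved image ids and `relevant` to the ground-truth relevant ids. The data are invented, and ARR would be computed analogously from the recall values.

        def precision_recall(ranked, relevant, k=10):
            retrieved = ranked[:k]
            hits = [r for r in retrieved if r in relevant]
            precision = len(hits) / max(len(retrieved), 1)
            recall = len(hits) / len(relevant) if relevant else 0.0
            return precision, recall

        def average_retrieval_precision(results, relevant, k=10):
            """Mean precision-at-k over all queries (ARP)."""
            values = [precision_recall(results[q], relevant[q], k)[0] for q in results]
            return sum(values) / len(values)

        results  = {"q1": ["a", "b", "c", "d"], "q2": ["e", "f", "g", "h"]}
        relevant = {"q1": {"a", "c", "x"}, "q2": {"f"}}
        print(average_retrieval_precision(results, relevant, k=4))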

  16. Desktop supercomputers. Advance medical imaging.

    Science.gov (United States)

    Frisiello, R S

    1991-02-01

    Medical imaging tools that radiologists as well as a wide range of clinicians and healthcare professionals have come to depend upon are emerging into the next phase of functionality. The strides being made in supercomputing technologies--including reduction of size and price--are pushing medical imaging to a new level of accuracy and functionality.

  17. Distributed Object Medical Imaging Model

    CERN Document Server

    Noor, Ahmad Shukri Mohd

    2009-01-01

    Digital medical informatics and images are commonly used in hospitals today. Because of the interrelatedness of the radiology department and other departments, especially the intensive care unit and emergency department, the transmission and sharing of medical images has become a critical issue. Our research group has developed a Java-based Distributed Object Medical Imaging Model (DOMIM) to facilitate the rapid development and deployment of medical imaging applications in a distributed environment that can be shared and used by related departments and mobile physicians. DOMIM is a unique suite of multimedia telemedicine applications developed for use by medical-related organizations. The applications support realtime patients' data, image files, audio and video diagnosis annotation exchanges. The DOMIM enables joint collaboration between radiologists and physicians while they are at distant geographical locations. The DOMIM environment consists of heterogeneous, autonomous, and legacy resources. The Common...

  18. Automated morphological analysis approach for classifying colorectal microscopic images

    Science.gov (United States)

    Marghani, Khaled A.; Dlay, Satnam S.; Sharif, Bayan S.; Sims, Andrew J.

    2003-10-01

    Automated medical image diagnosis using quantitative measurements is extremely helpful for cancer prognosis, allowing a high degree of accuracy to be reached and thus reliable decisions to be made. In this paper, six morphological features based on texture analysis were studied in order to categorize normal and cancerous colon mucosa. They were derived after a series of pre-processing steps to generate a set of different shape measurements. Based on shape and size, six features known as Euler Number, Equivalent Diameter, Solidity, Extent, Elongation, and Shape Factor AR were extracted. Mathematical morphology is used firstly to remove background noise from segmented images and then to obtain different morphological measures to describe the shape, size, and texture of colon glands. The proposed automated system was tested by classifying 102 microscopic samples of colorectal tissue, consisting of 44 normal colon mucosa and 58 cancerous samples. The results were first statistically evaluated using the one-way ANOVA method in order to examine the significance of each extracted feature. Significant features were then selected in order to classify the dataset into two categories. Finally, using two discrimination methods, a linear method and k-means clustering, important classification factors were estimated. In brief, this study demonstrates that abnormalities in low-power tissue morphology can be distinguished using quantitative image analysis. This investigation shows the potential of an automated vision system in histopathology. Furthermore, it has the advantage of being objective and, more importantly, of being a valuable diagnostic decision support tool.
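
    A sketch of extracting the six morphology features named above from a binary mask of segmented glands, assuming scikit-image's regionprops is available. The toy mask stands in for the paper's full pre-processing, and the elongation and shape-factor formulas follow common conventions rather than the authors' exact definitions.

        import numpy as np
        from skimage import measure

        def gland_features(mask):
            """Per-region Euler number, equivalent diameter, solidity, extent, elongation, shape factor."""
            labels = measure.label(mask)
            feats = []
            for r in measure.regionprops(labels):
                equiv_diameter = np.sqrt(4.0 * r.area / np.pi)
                elongation = r.major_axis_length / max(r.minor_axis_length, 1e-6)
                shape_factor = 4.0 * np.pi * r.area / max(r.perimeter, 1e-6) ** 2
                feats.append([r.euler_number, equiv_diameter, r.solidity,
                              r.extent, elongation, shape_factor])
            return np.array(feats)

        mask = np.zeros((100, 100), dtype=bool)
        mask[20:60, 30:80] = True                  # toy "gland"
        print(gland_features(mask))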

  19. PS-022 Complex automated medication systems reduce medication administration error rates in an acute medical ward

    DEFF Research Database (Denmark)

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2017-01-01

    Background Medication errors have received extensive attention in recent decades and are of significant concern to healthcare organisations globally. Medication errors occur frequently, and adverse events associated with medications are one of the largest causes of harm to hospitalised patients....... Reviews have suggested that up to 50% of the adverse events in the medication process may be preventable. Thus the medication process is an important means to improve safety. Purpose The objective of this study was to evaluate the effectiveness of two automated medication systems in reducing...... the medication administration error rate in comparison with current practice. Material and methods This was a controlled before and after study with follow-up after 7 and 14 months. The study was conducted in two acute medical hospital wards. Two automated medication systems were tested: (1) automated dispensing...

  20. Automated assessment of medical training evaluation text.

    Science.gov (United States)

    Zhang, Rui; Pakhomov, Serguei; Gladding, Sophia; Aylward, Michael; Borman-Shoap, Emily; Melton, Genevieve B

    2012-01-01

    Medical post-graduate residency training and medical student training increasingly utilize electronic systems to evaluate trainee performance based on defined training competencies with quantitative and qualitative data, the latter of which typically consists of text comments. Medical education is concomitantly becoming a growing area of clinical research. While electronic systems have proliferated in number, little work has been done to help manage and analyze qualitative data from these evaluations. We explored the use of text-mining techniques to assist medical education researchers in sentiment analysis and topic analysis of residency evaluations with a sample of 812 evaluation statements. While comments were predominantly positive, sentiment analysis improved the ability to discriminate statements with 93% accuracy. Similar to other domains, Latent Dirichlet Allocation and Information Gain revealed groups of core subjects and appear to be useful for identifying topics from these data.
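
    The topic-analysis side of this kind of study can be sketched with scikit-learn's Latent Dirichlet Allocation over a bag-of-words representation of the comments; the comments and the topic count below are invented for illustration.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        comments = [
            "excellent communication with patients and families",
            "needs to improve differential diagnosis and reading around cases",
            "strong procedural skills, confident with lumbar puncture",
            "communicates clearly, empathetic with anxious patients",
        ]

        vec = CountVectorizer(stop_words="english")
        X = vec.fit_transform(comments)
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

        terms = vec.get_feature_names_out()
        for k, topic in enumerate(lda.components_):
            top = [terms[i] for i in topic.argsort()[-5:][::-1]]   # strongest words per topic
            print(f"topic {k}: {', '.join(top)}")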

  1. Medical hyperspectral imaging: a review.

    Science.gov (United States)

    Lu, Guolan; Fei, Baowei

    2014-01-01

    Hyperspectral imaging (HSI) is an emerging imaging modality for medical applications, especially in disease diagnosis and image-guided surgery. HSI acquires a three-dimensional dataset called hypercube, with two spatial dimensions and one spectral dimension. Spatially resolved spectral imaging obtained by HSI provides diagnostic information about the tissue physiology, morphology, and composition. This review paper presents an overview of the literature on medical hyperspectral imaging technology and its applications. The aim of the survey is threefold: an introduction for those new to the field, an overview for those working in the field, and a reference for those searching for literature on a specific application.

  2. Automated landmark-guided deformable image registration

    Science.gov (United States)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small-volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the subsequent Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultrafast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and on data from six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity-corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.
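
    A rough 2-D sketch of the landmark-mapping idea: for a landmark in the planning image, search a small neighbourhood of the daily image for the best-matching patch, here scored with normalized cross-correlation. The published method works on 3-D volumes with a gradient-based similarity and GPU acceleration; this toy version (which assumes the landmark lies away from the image border) only conveys the pattern.

        import numpy as np

        def ncc(a, b):
            a = a - a.mean(); b = b - b.mean()
            return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

        def match_landmark(fixed, moving, point, patch=7, search=5):
            """Map `point` (y, x) in `fixed` to the best-matching location in `moving`."""
            y, x = point
            ref = fixed[y - patch:y + patch + 1, x - patch:x + patch + 1]
            best, best_pt = -np.inf, point
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = moving[y + dy - patch:y + dy + patch + 1,
                                  x + dx - patch:x + dx + patch + 1]
                    score = ncc(ref, cand)
                    if score > best:
                        best, best_pt = score, (y + dy, x + dx)
            return best_pt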

  3. Medical imaging technology and applications

    CERN Document Server

    Iniewski, Krzysztof

    2014-01-01

    The book has two intentions. First, it assembles the latest research in the field of medical imaging technology in one place. Detailed descriptions of current state-of-the-art medical imaging systems (comprised of x-ray CT, MRI, ultrasound, and nuclear medicine) and data processing techniques are discussed. Information is provided that will give interested engineers and scientists a solid foundation from which to build with additional resources. Secondly, it exposes the reader to myriad applications that medical imaging technology has enabled.

  4. Automated Quality Assurance Applied to Mammographic Imaging

    Directory of Open Access Journals (Sweden)

    Anne Davis

    2002-07-01

    Quality control in mammography is based upon subjective interpretation of the image quality of a test phantom. In order to suppress subjectivity due to the human observer, automated computer analysis of the Leeds TOR(MAM) test phantom is investigated. Texture analysis via grey-level co-occurrence matrices is used to detect structures in the test object. Scoring of the substructures in the phantom is based on grey-level differences between regions and information from grey-level co-occurrence matrices. The results from scoring groups of particles within the phantom are presented.
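
    The texture measure named above can be sketched with a minimal grey-level co-occurrence matrix for a single horizontal offset, from which contrast and homogeneity are derived. The quantisation to 16 levels, the single offset, and the random stand-in for a phantom region are simplifying assumptions.

        import numpy as np

        def glcm_horizontal(img, levels=16):
            """Normalized GLCM for the (0, 1) offset: each pixel paired with its right neighbour."""
            q = (img.astype(float) / max(img.max(), 1) * (levels - 1)).astype(int)
            a, b = q[:, :-1].ravel(), q[:, 1:].ravel()
            m = np.zeros((levels, levels))
            np.add.at(m, (a, b), 1)
            return m / m.sum()

        def texture_features(img):
            p = glcm_horizontal(img)
            i, j = np.indices(p.shape)
            contrast = np.sum(p * (i - j) ** 2)
            homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
            return contrast, homogeneity

        phantom_roi = np.random.randint(0, 256, (64, 64))      # stand-in for a phantom region
        print(texture_features(phantom_roi))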

  5. Medical image processing

    CERN Document Server

    Dougherty, Geoff

    2011-01-01

    This book is designed for end users in the field of digital imaging who wish to update their skills and understanding with the latest techniques in image analysis. This book emphasizes the conceptual framework of image analysis and the effective use of image processing tools. It uses applications in a variety of fields to demonstrate and consolidate both specific and general concepts, and to build intuition, insight and understanding. Although the chapters are essentially self-contained, they reference other chapters to form an integrated whole. Each chapter employs a pedagogical approach to e

  6. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
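
    The core idea, reconstructing from fewer measurements by exploiting sparsity, can be illustrated with a toy iterative soft-thresholding (ISTA) recovery of a sparse signal; the random measurement matrix is a stand-in for a real CT or MRI forward model, and the regularization weight is arbitrary.

        import numpy as np

        def ista(A, y, lam=0.05, n_iter=200):
            """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
            L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = x - A.T @ (A @ x - y) / L
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
            return x

        rng = np.random.default_rng(0)
        n, m, k = 200, 80, 8                              # signal length, measurements, sparsity
        x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        y = A @ x_true
        x_hat = ista(A, y)
        print("recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))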

  7. Machine Learning for Medical Imaging.

    Science.gov (United States)

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy L

    2017-01-01

    Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. (©)RSNA, 2017.
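
    The workflow the review describes (compute image features, pick a classifier, estimate performance) can be sketched in a few lines with scikit-learn; the synthetic feature matrix stands in for features computed from real images, and the model choice is arbitrary.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.standard_normal((120, 20))                  # 120 image regions, 20 features
        y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(120) > 0).astype(int)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
        print("cross-validated AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))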

  8. Automated 3D renal segmentation based on image partitioning

    Science.gov (United States)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan, and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.

  9. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    The aim of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  10. Distributed Object Medical Imaging Model

    Directory of Open Access Journals (Sweden)

    Ahmad Shukri Mohd Noor

    2009-09-01

    Digital medical informatics and images are commonly used in hospitals today. Because of the interrelatedness of the radiology department and other departments, especially the intensive care unit and emergency department, the transmission and sharing of medical images has become a critical issue. Our research group has developed a Java-based Distributed Object Medical Imaging Model (DOMIM) to facilitate the rapid development and deployment of medical imaging applications in a distributed environment that can be shared and used by related departments and mobile physicians. DOMIM is a unique suite of multimedia telemedicine applications developed for use by medical-related organizations. The applications support realtime patients' data, image files, audio and video diagnosis annotation exchanges. The DOMIM enables joint collaboration between radiologists and physicians while they are at distant geographical locations. The DOMIM environment consists of heterogeneous, autonomous, and legacy resources. The Common Object Request Broker Architecture (CORBA), Java Database Connectivity (JDBC), and the Java language provide the capability to combine the DOMIM resources into an integrated, interoperable, and scalable system. The underlying technology, including IDL, ORB, Event Service, IIOP, JDBC/ODBC, legacy system wrapping and Java implementation, is explored. This paper explores a distributed collaborative CORBA/JDBC based framework that will enhance medical information management requirements and development. It encompasses a new paradigm for the delivery of health services that requires process reengineering, cultural changes, as well as organizational changes.

  11. Image processing in medical ultrasound

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian

    This Ph.D. project addresses image processing in medical ultrasound and seeks to achieve two major scientific goals: first, to develop an understanding of the most significant factors influencing image quality in medical ultrasound, and secondly, to use this knowledge to develop image processing...... multiple imaging setups. This makes the system well suited for development of new processing methods and for clinical evaluations, where acquisition of the exact same scan location for multiple methods is important. The second project addressed implementation, development and evaluation of SASB using...... phantom and in-vivo measurements. The technical performance was compared to conventional beamforming and gave motivation to continue to phase two. The second phase evaluated the clinical performance of abdominal imaging in a pre-clinical trial in comparison with conventional imaging, and was conducted...

  12. Automated vertebra identification in CT images

    Science.gov (United States)

    Ehm, Matthias; Klinder, Tobias; Kneser, Reinhard; Lorenz, Cristian

    2009-02-01

    In this paper, we describe and compare methods for automatically identifying individual vertebrae in arbitrary CT images. The identification is an essential precondition for a subsequent model-based segmentation, which is used in a wide field of orthopedic, neurological, and oncological applications, e.g., spinal biopsies or the insertion of pedicle screws. Since adjacent vertebrae show similar characteristics, automated labeling of the spine column is a very challenging task, especially if no surrounding reference structures can be taken into account. Furthermore, vertebra identification is complicated by the fact that many images are restricted to a very limited field of view and may contain only a few vertebrae. We propose and evaluate two methods for automatically labeling the spine column by evaluating similarities between given models and vertebral objects. In one method, object boundary information is taken into account by applying a Generalized Hough Transform (GHT) for each vertebral object. In the other method, appearance models containing mean gray value information are registered to each vertebral object using cross and local correlation as similarity measures for the optimization function. The GHT is advantageous in terms of computational performance but falls short in terms of identification rate. A correct labeling of the vertebral column was successfully performed on 93% of the test set consisting of 63 disparate input images using rigid image registration with local correlation as the similarity measure.
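    As an illustration of the correlation-based matching described above, the following sketch scores a candidate vertebral region against mean-gray-value appearance models with a normalized cross-correlation; the paper's full registration and optimization pipeline is not reproduced here.

      import numpy as np

      def normalized_cross_correlation(model: np.ndarray, patch: np.ndarray) -> float:
          """Return NCC in [-1, 1] between two equally sized gray-value arrays."""
          m = model.astype(float) - model.mean()
          p = patch.astype(float) - patch.mean()
          denom = np.sqrt((m ** 2).sum() * (p ** 2).sum())
          return float((m * p).sum() / denom) if denom > 0 else 0.0

      def best_label(patch: np.ndarray, models: dict) -> str:
          """Assign the vertebra label whose appearance model correlates best."""
          scores = {label: normalized_cross_correlation(mdl, patch)
                    for label, mdl in models.items()}
          return max(scores, key=scores.get)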

  13. Distributed Automated Medical Robotics to Improve Medical Field Operations

    Science.gov (United States)

    2010-04-01

    and through animal and human cadaveric studies in collaboration with anesthesiologists and trauma surgeons at the Massachusetts General Hospital...

  14. Automated electronic medical record sepsis detection in the emergency department

    OpenAIRE

    Su Q. Nguyen; Edwin Mwakalindile; Booth, James S.; Vicki Hogan; Jordan Morgan; Prickett, Charles T; Donnelly, John P; Wang, Henry E.

    2014-01-01

    Background. While often first treated in the emergency department (ED), identification of sepsis is difficult. Electronic medical record (EMR) clinical decision tools offer a novel strategy for identifying patients with sepsis. The objective of this study was to test the accuracy of an EMR-based, automated sepsis identification system. Methods. We tested an EMR-based sepsis identification tool at a major academic, urban ED with 64,000 annual visits. The EMR system collected vital sign and lab...

  15. Classification of Medical Brain Images

    Institute of Scientific and Technical Information of China (English)

    Pan Haiwei(潘海为); Li Jianzhong; Zhang Wei

    2003-01-01

    Since brain tumors endanger people's quality of life and even their lives, classification accuracy becomes all the more important. Conventional classification techniques are designed for datasets consisting of characters and numbers. It is difficult, however, to apply them to datasets that include both brain images and medical history (alphanumeric data), especially while guaranteeing accuracy. For such datasets, this paper combines knowledge from the medical field with an improved version of the traditional decision tree. The new classification algorithm, guided by medical knowledge, not only adds interaction with the doctors but also enhances the quality of classification. The algorithm has been applied to real brain CT images, and a valuable rule was derived from the experiments. This paper shows that the algorithm works well for real CT data.
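    Outside the authors' knowledge-guided, interactive algorithm, a plain decision tree over mixed image-derived and alphanumeric features can be sketched as follows; the features, categories and labels below are illustrative assumptions only.

      import numpy as np
      from sklearn.compose import ColumnTransformer
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import OneHotEncoder
      from sklearn.tree import DecisionTreeClassifier

      # columns 0-1: numeric image features; column 2: categorical history field
      X = np.array([[0.82, 14.0, "headache"],
                    [0.15, 3.0, "none"],
                    [0.91, 18.0, "seizure"],
                    [0.20, 4.0, "none"]], dtype=object)
      y = np.array([1, 0, 1, 0])                 # 1 = tumour suspected (toy labels)

      pre = ColumnTransformer([("hist", OneHotEncoder(), [2])], remainder="passthrough")
      clf = Pipeline([("pre", pre), ("tree", DecisionTreeClassifier(max_depth=3))])
      clf.fit(X, y)
      print(clf.predict(np.array([[0.88, 16.0, "seizure"]], dtype=object)))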

  16. Toward Automated Feature Detection in UAVSAR Images

    Science.gov (United States)

    Parker, J. W.; Donnellan, A.; Glasscoe, M. T.

    2014-12-01

    Edge detection identifies seismic or aseismic fault motion, as demonstrated in repeat-pass interferograms obtained by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) program. But this identification is not robust at present: it requires a flattened background image, interpolation into missing data (holes) and outliers, and background noise that is either sufficiently small or roughly white Gaussian. Identification and mitigation of non-Gaussian background image noise is essential to creating a robust, automated system to search for such features. Clearly a robust method is needed for machine scanning of the thousands of UAVSAR repeat-pass interferograms for evidence of fault slip, landslides, and other local features. Empirical examination of detrended noise based on 20 km east-west profiles through desert terrain with little tectonic deformation for a suite of flight interferograms shows non-Gaussian characteristics. Statistical measurement of curvature with varying length scale (Allan variance) shows nearly white behavior (Allan variance slope with spatial distance from roughly -1.76 to -2) from 25 to 400 meters; deviations from -2 suggest that short-range differences (such as those used in detecting edges) are often freer of noise than longer-range differences. At distances longer than 400 m the Allan variance flattens out without consistency from one interferogram to another. We attribute this additional noise afflicting difference estimates at longer distances to atmospheric water vapor and uncompensated aircraft motion. Paradoxically, California interferograms made with increasing time intervals before and after the El Mayor-Cucapah earthquake (2010, M7.2, Mexico) show visually stronger and more interesting edges, but edge detection methods developed for the first year do not produce reliable results over the first two years, because longer time spans suffer reduced coherence in the interferogram. The changes over time are reflecting fault slip and block
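    The Allan-variance characterization of profile noise mentioned above can be sketched as follows (a minimal illustration assuming a detrended 1-D deformation profile with uniform sample spacing; it is not the UAVSAR team's code).

      import numpy as np

      def allan_variance(profile, spacing_m, lags_m):
          """Non-overlapping Allan variance of a detrended profile vs. length scale."""
          y = np.asarray(profile, dtype=float)
          result = {}
          for lag in lags_m:
              m = max(1, int(round(lag / spacing_m)))   # samples per averaging block
              nblocks = y.size // m
              if nblocks < 2:
                  continue
              block_means = y[:nblocks * m].reshape(nblocks, m).mean(axis=1)
              result[lag] = 0.5 * np.mean(np.diff(block_means) ** 2)
          return result

      # Example: white noise gives an Allan variance that falls off with lag length.
      rng = np.random.default_rng(0)
      profile = rng.normal(size=4000)                   # 4000 samples at 5 m spacing
      print(allan_variance(profile, spacing_m=5.0, lags_m=[25, 50, 100, 200, 400]))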

  17. Fundamental mathematics and physics of medical imaging

    CERN Document Server

    Lancaster, Jack

    2016-01-01

    Authored by a leading educator, this book is ideal for medical imaging courses. Rather than focus on imaging modalities, the book delves into the mechanisms of image formation and image quality common to all imaging systems: contrast mechanisms, noise, and spatial and temporal resolution. This is an extensively revised new edition of The Physics of Medical X-Ray Imaging by Bruce Hasegawa (Medical Physics Publishing, 1991). A wide range of modalities are covered, including X-ray CT, MRI and SPECT.

  18. Image analysis and platform development for automated phenotyping in cytomics

    NARCIS (Netherlands)

    Yan, Kuan

    2013-01-01

    This thesis is dedicated to the empirical study of image analysis in HT/HC screening. Often an HT/HC screen produces extensive amounts of data that cannot be analyzed manually. Thus, an automated image analysis solution is a prerequisite for an objective understanding of the raw image data. Compared to general a

  19. Medical imaging, PACS, and imaging informatics: retrospective.

    Science.gov (United States)

    Huang, H K

    2014-01-01

    Historical reviews of PACS (picture archiving and communication system) and imaging informatics development from different points of view have been published in the past (Huang in Euro J Radiol 78:163-176, 2011; Lemke in Euro J Radiol 78:177-183, 2011; Inamura and Jong in Euro J Radiol 78:184-189, 2011). This retrospective attempts to look at the topic from a different angle by identifying certain basic medical imaging inventions in the 1960s and 1970s which had conceptually defined basic components of PACS guiding its course of development in the 1980s and 1990s, as well as subsequent imaging informatics research in the 2000s. In medical imaging, the emphasis was on the innovations at Georgetown University in Washington, DC, in the 1960s and 1970s. During the 1980s and 1990s, research and training support from US government agencies and public and private medical imaging manufacturers became available for training of young talents in biomedical physics and for developing the key components required for PACS development. In the 2000s, computer hardware and software as well as communication networks advanced by leaps and bounds, opening the door for medical imaging informatics to flourish. Because many key components required for the PACS operation were developed by the UCLA PACS Team and its collaborative partners in the 1980s, this presentation is centered on that aspect. During this period, substantial collaborative research efforts by many individual teams in the US and in Japan were highlighted. Credits are due particularly to the Pattern Recognition Laboratory at Georgetown University, and the computed radiography (CR) development at the Fuji Electric Corp. in collaboration with Stanford University in the 1970s; the Image Processing Laboratory at UCLA in the 1980s-1990s; as well as the early PACS development at the Hokkaido University, Sapporo, Japan, in the late 1970s, and film scanner and digital radiography developed by Konishiroku Photo Ind. Co. Ltd

  20. Archimedes, an archive of medical images.

    Science.gov (United States)

    Tahmoush, Dave; Samet, Hanan

    2006-01-01

    We present a medical image and medical record database for the storage, research, transmission, and evaluation of medical images. Medical images from any source that supports the DICOM standard can be stored and accessed, as well as associated analyses and annotations. Retrieval is based on patient information, date, doctors' annotations, features in the images, or a spatial combination of these. This database supports the secure transmission of sensitive data for tele-medicine and follows all HIPAA regulations.

  1. Automated Segmentation of Cardiac Magnetic Resonance Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Nilsson, Jens Chr.; Grønning, Bjørn A.

    2001-01-01

    is based on determination of the left-ventricular endocardial and epicardial borders. Since manual border detection is laborious, automated segmentation is highly desirable as a fast, objective and reproducible alternative. Automated segmentation will thus enhance comparability between and within cardiac...... studies and increase accuracy by allowing acquisition of thinner MRI-slices. This abstract demonstrates that statistical models of shape and appearance, namely the deformable models: Active Appearance Models, can successfully segment cardiac MRIs....

  2. Content-based retrieval based on binary vectors for 2-D medical images

    Institute of Scientific and Technical Information of China (English)

    龚鹏; 邹亚东; 洪海

    2003-01-01

    In medical research and clinical diagnosis, automated or computer-assisted classification and retrieval methods are highly desirable to offset the high cost of manual classification and manipulation by medical experts. To facilitate the decision-making in the health-care and the related areas, in this paper, a two-step content-based medical image retrieval algorithm is proposed. Firstly, in the preprocessing step, the image segmentation is performed to distinguish image objects, and on the basis of the ...

  3. Medical image retrieval based on plaque appearance and image registration.

    Science.gov (United States)

    Amores, Jaume; Radeva, Petia

    2005-01-01

    The increasing amount of medical images produced and stored daily in hospitals needs a database management system that organizes them in a meaningful way, without the need for time-consuming textual annotation of each image. One of the basic ways to organize medical images in taxonomies is to cluster them by plaque appearance (for example, intravascular ultrasound images). Although there has lately been a great deal of research in the field of Content-Based Image Retrieval systems, these systems are mostly designed to deal with a wide range of images rather than with medical images. Medical image retrieval by content is still an emerging field, and few works have been presented in spite of the obvious applications and the complexity of the images, which demands dedicated research. In this chapter, we overview the work on medical image retrieval and present a general framework of medical image retrieval based on plaque appearance. We stress two basic features of medical image retrieval based on plaque appearance: plaque medical images contain complex information requiring not only local and global descriptors but also context determined by image features and their spatial relations. Additionally, given that most objects in medical images usually have high intra- and inter-patient shape variance, retrieval based on plaque should be invariant to a family of transformations predetermined by the application domain. To illustrate medical image retrieval based on plaque appearance, we consider a specific image modality, intravascular ultrasound images, and present extensive results on retrieval performance.

  4. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation.

    Directory of Open Access Journals (Sweden)

    Oscar Beijbom

    Full Text Available Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey-images captured at four Pacific coral reefs. Inter- and intra- annotator variability among six human experts was quantified and compared to semi- and fully- automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys.

  5. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation

    Science.gov (United States)

    Beijbom, Oscar; Edmunds, Peter J.; Roelfsema, Chris; Smith, Jennifer; Kline, David I.; Neal, Benjamin P.; Dunlap, Matthew J.; Moriarty, Vincent; Fan, Tung-Yung; Tan, Chih-Jui; Chan, Stephen; Treibitz, Tali; Gamst, Anthony; Mitchell, B. Greg; Kriegman, David

    2015-01-01

    Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey-images captured at four Pacific coral reefs. Inter- and intra- annotator variability among six human experts was quantified and compared to semi- and fully- automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys. PMID:26154157

  6. A machine learning approach to quantifying noise in medical images

    Science.gov (United States)

    Chowdhury, Aritra; Sevinsky, Christopher J.; Yener, Bülent; Aggour, Kareem S.; Gustafson, Steven M.

    2016-03-01

    As advances in medical imaging technology are resulting in significant growth of biomedical image data, new techniques are needed to automate the process of identifying images of low quality. Automation is needed because it is very time consuming for a domain expert such as a medical practitioner or a biologist to manually separate good images from bad ones. While there are plenty of de-noising algorithms in the literature, their focus is on designing filters which are necessary but not sufficient for determining how useful an image is to a domain expert. Thus a computational tool is needed to assign a score to each image based on its perceived quality. In this paper, we introduce a machine learning-based score and call it the Quality of Image (QoI) score. The QoI score is computed by combining the confidence values of two popular classification techniques—support vector machines (SVMs) and Naïve Bayes classifiers. We test our technique on clinical image data obtained from cancerous tissue samples. We used 747 tissue samples that are stained by four different markers (abbreviated as CK15, pck26, E_cad and Vimentin) leading to a total of 2,988 images. The results show that images can be classified as good (high QoI), bad (low QoI) or ugly (intermediate QoI) based on their QoI scores. Our automated labeling is in agreement with the domain experts with a bi-modal classification accuracy of 94%, on average. Furthermore, ugly images can be recovered and forwarded for further post-processing.
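    A minimal sketch of the score-fusion idea described above, combining the class probabilities of an SVM and a Naive Bayes classifier into a single quality score; the feature extraction, the training data and the triage thresholds are assumptions, not the authors' exact choices.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.svm import SVC

      def train_qoi(features, labels):
          """labels: 1 for good-quality images, 0 for bad-quality images."""
          svm = SVC(probability=True).fit(features, labels)
          nb = GaussianNB().fit(features, labels)
          return svm, nb

      def qoi_score(svm, nb, features):
          """Average the two classifiers' P(good) to obtain a score in [0, 1]."""
          p_svm = svm.predict_proba(features)[:, 1]
          p_nb = nb.predict_proba(features)[:, 1]
          return 0.5 * (p_svm + p_nb)

      def triage(scores, low=0.3, high=0.7):
          """Bin images into 'bad', intermediate ('ugly') and 'good' by QoI score."""
          return np.where(scores < low, "bad",
                          np.where(scores > high, "good", "ugly"))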

  7. Cloud computing in medical imaging.

    Science.gov (United States)

    Kagadis, George C; Kloukinas, Christos; Moore, Kevin; Philbin, Jim; Papadimitroulas, Panagiotis; Alexakos, Christos; Nagy, Paul G; Visvikis, Dimitris; Hendee, William R

    2013-07-01

    Over the past century technology has played a decisive role in defining, driving, and reinventing procedures, devices, and pharmaceuticals in healthcare. Cloud computing has been introduced only recently but is already one of the major topics of discussion in research and clinical settings. The provision of extensive, easily accessible, and reconfigurable resources such as virtual systems, platforms, and applications with low service cost has caught the attention of many researchers and clinicians. Healthcare researchers are moving their efforts to the cloud, because they need adequate resources to process, store, exchange, and use large quantities of medical data. This Vision 20/20 paper addresses major questions related to the applicability of advanced cloud computing in medical imaging. The paper also considers security and ethical issues that accompany cloud computing.

  8. Machine learning approaches in medical image analysis

    DEFF Research Database (Denmark)

    de Bruijne, Marleen

    2016-01-01

    Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols...

  9. Medical Image Retrieval: A Multimodal Approach.

    Science.gov (United States)

    Cao, Yu; Steffey, Shawn; He, Jianbiao; Xiao, Degui; Tao, Cui; Chen, Ping; Müller, Henning

    2014-01-01

    Medical imaging is becoming a vital component of the war on cancer. Tremendous amounts of medical image data are captured and recorded in digital format during cancer care and cancer research. Facing such an unprecedented volume of image data with heterogeneous image modalities, it is necessary to develop effective and efficient content-based medical image retrieval systems for cancer clinical practice and research. While substantial progress has been made in different areas of content-based image retrieval (CBIR) research, direct application of existing CBIR techniques to medical images has produced unsatisfactory results because of the unique characteristics of medical images. In this paper, we develop a new multimodal medical image retrieval approach based on recent advances in statistical graphical models and deep learning. Specifically, we first investigate a new extended probabilistic Latent Semantic Analysis model to integrate the visual and textual information from medical images and bridge the semantic gap. We then develop a new deep Boltzmann machine-based multimodal learning model to learn the joint density from multimodal information in order to derive a missing modality. Experimental results with a large volume of real-world medical images have shown that our new approach is a promising solution for the next-generation medical imaging indexing and retrieval system.

  10. User Oriented Platform for Data Analytics in Medical Imaging Repositories.

    Science.gov (United States)

    Valerio, Miguel; Godinho, Tiago Marques; Costa, Carlos

    2016-01-01

    The production of medical imaging studies and associated data has been growing in the last decades. Their primary use is to support medical diagnosis and treatment processes. However, the secondary use of the tremendous amount of stored data is generally more limited. Nowadays, medical imaging repositories have turned into rich databanks holding not only the images themselves, but also a wide range of metadata related to the medical practice. Exploring these repositories through data analysis and business intelligence techniques has the potential of increasing the efficiency and quality of the medical practice. Nevertheless, the continuous production of tremendous amounts of data makes their analysis difficult by conventional approaches. This article proposes a novel automated methodology to derive knowledge from medical imaging repositories that does not disrupt the regular medical practice. Our method is able to apply statistical analysis and business intelligence techniques directly on top of live institutional repositories. It is a Web-based solution that provides extensive dashboard capabilities, including complete charting and reporting options, combined with data mining components. Moreover, it enables the operator to set a wide multitude of query parameters and operators through the use of an intuitive graphical interface.

  11. Automated Real-Time Conjunctival Microvasculature Image Stabilization.

    Science.gov (United States)

    Felder, Anthony E; Mercurio, Cesare; Wanek, Justin; Ansari, Rashid; Shahidi, Mahnaz

    2016-07-01

    The bulbar conjunctiva is a thin, vascularized membrane covering the sclera of the eye. Non-invasive imaging techniques have been utilized to assess the conjunctival vasculature as a means of studying microcirculatory hemodynamics. However, eye motion often confounds quantification of these hemodynamic properties. In the current study, we present a novel optical imaging system for automated stabilization of conjunctival microvasculature images by real-time eye motion tracking and realignment of the optical path. The ability of the system to stabilize conjunctival images acquired over time by reducing image displacements and maintaining the imaging area was demonstrated.

  12. Automated de-identification of free-text medical records

    Directory of Open Access Journals (Sweden)

    Long William J

    2008-07-01

    Full Text Available Abstract Background Text-based patient medical records are a vital resource in medical research. In order to preserve patient confidentiality, however, the U.S. Health Insurance Portability and Accountability Act (HIPAA requires that protected health information (PHI be removed from medical records before they can be disseminated. Manual de-identification of large medical record databases is prohibitively expensive, time-consuming and prone to error, necessitating automatic methods for large-scale, automated de-identification. Methods We describe an automated Perl-based de-identification software package that is generally usable on most free-text medical records, e.g., nursing notes, discharge summaries, X-ray reports, etc. The software uses lexical look-up tables, regular expressions, and simple heuristics to locate both HIPAA PHI, and an extended PHI set that includes doctors' names and years of dates. To develop the de-identification approach, we assembled a gold standard corpus of re-identified nursing notes with real PHI replaced by realistic surrogate information. This corpus consists of 2,434 nursing notes containing 334,000 words and a total of 1,779 instances of PHI taken from 163 randomly selected patient records. This gold standard corpus was used to refine the algorithm and measure its sensitivity. To test the algorithm on data not used in its development, we constructed a second test corpus of 1,836 nursing notes containing 296,400 words. The algorithm's false negative rate was evaluated using this test corpus. Results Performance evaluation of the de-identification software on the development corpus yielded an overall recall of 0.967, precision value of 0.749, and fallout value of approximately 0.002. On the test corpus, a total of 90 instances of false negatives were found, or 27 per 100,000 word count, with an estimated recall of 0.943. Only one full date and one age over 89 were missed. No patient names were missed in either
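    The published package is Perl-based; the following is a minimal Python sketch of the same lexical look-up and regular-expression idea, with illustrative patterns and a stand-in name list (real de-identification needs far broader coverage).

      import re

      KNOWN_NAMES = {"smith", "jones", "garcia"}     # stand-in lexical look-up table

      PATTERNS = {
          "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
          "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
          "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
      }

      def deidentify(text: str) -> str:
          for tag, pattern in PATTERNS.items():
              text = pattern.sub(f"[{tag}]", text)
          # simple heuristic: redact tokens found in the name look-up table
          tokens = ["[NAME]" if t.lower().strip(".,") in KNOWN_NAMES else t
                    for t in text.split()]
          return " ".join(tokens)

      print(deidentify("Pt Smith seen 03/14/2007, call 555-123-4567, MRN: 0012345."))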

  13. Automation of Cassini Support Imaging Uplink Command Development

    Science.gov (United States)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  14. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Full Text Available Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  15. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    Full Text Available With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  16. Fuzzy emotional semantic analysis and automated annotation of scene images.

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  17. Automated Localization of Optic Disc in Retinal Images

    Directory of Open Access Journals (Sweden)

    Deepali A.Godse

    2013-03-01

    Full Text Available Efficient detection of the optic disc (OD) in colour retinal images is a significant task in an automated retinal image analysis system. Most of the algorithms developed for OD detection are applicable mainly to normal, healthy retinal images. It is a challenging task to detect the OD in all types of retinal images, that is, in normal, healthy images as well as in abnormal images affected by disease. This paper presents an automated system to locate the OD and its centre in all types of retinal images. The ensemble of steps based on different criteria produces more accurate results. The proposed algorithm gives excellent results and avoids false OD detection. The technique was developed and tested on standard databases available to researchers on the internet, Diaretdb0 (130 images), Diaretdb1 (89 images), Drive (40 images) and a local database (194 images). The local database images were collected from ophthalmic clinics. The system is able to locate the OD and its centre in 98.45% of all tested cases. The results achieved by different algorithms can be compared when the algorithms are applied to the same standard databases. This comparison is also discussed in this paper and shows that the proposed algorithm is more efficient.

  18. Automated image registration for FDOPA PET studies

    Science.gov (United States)

    Lin, Kang-Ping; Huang, Sung-Cheng; Yu, Dan-Chu; Melega, William; Barrio, Jorge R.; Phelps, Michael E.

    1996-12-01

    In this study, various image registration methods are investigated for their suitability for registration of L-6-[18F]-fluoro-DOPA (FDOPA) PET images. Five different optimization criteria including sum of absolute difference (SAD), mean square difference (MSD), cross-correlation coefficient (CC), standard deviation of pixel ratio (SDPR), and stochastic sign change (SSC) were implemented and Powell's algorithm was used to optimize the criteria. The optimization criteria were calculated either unidirectionally (i.e. only evaluating the criteria for comparing the resliced image 1 with the original image 2) or bidirectionally (i.e. averaging the criteria for comparing the resliced image 1 with the original image 2 and those for the sliced image 2 with the original image 1). Monkey FDOPA images taken at various known orientations were used to evaluate the accuracy of different methods. A set of human FDOPA dynamic images was used to investigate the ability of the methods for correcting subject movement. It was found that a large improvement in performance resulted when bidirectional rather than unidirectional criteria were used. Overall, the SAD, MSD and SDPR methods were found to be comparable in performance and were suitable for registering FDOPA images. The MSD method gave more adequate results for frame-to-frame image registration for correcting subject movement during a dynamic FDOPA study. The utility of the registration method is further demonstrated by registering FDOPA images in monkeys before and after amphetamine injection to reveal more clearly the changes in spatial distribution of FDOPA due to the drug intervention.
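    A hedged sketch of the kind of intensity-based criteria listed above (SAD, MSD and cross-correlation) driven by Powell's method, here reduced to a simple in-plane shift between two frames; the study's full 3-D registration and the SDPR/SSC criteria are not reproduced.

      import numpy as np
      from scipy.ndimage import shift as nd_shift
      from scipy.optimize import minimize

      def sad(a, b):
          return np.abs(a - b).mean()                  # mean absolute difference

      def msd(a, b):
          return ((a - b) ** 2).mean()                 # mean square difference

      def neg_cc(a, b):                                # negated so it can be minimized
          a0, b0 = a - a.mean(), b - b.mean()
          return -(a0 * b0).sum() / np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum())

      def register_shift(moving, fixed, criterion=msd):
          """Find the (dy, dx) shift of `moving` that best matches `fixed`."""
          def cost(p):
              resliced = nd_shift(moving, p, order=1, mode="nearest")
              return criterion(resliced, fixed)
          res = minimize(cost, x0=[0.0, 0.0], method="Powell")
          return res.x, res.fun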

  19. Morphological Techniques for Medical Images: A Review

    Directory of Open Access Journals (Sweden)

    Isma Irum

    2012-08-01

    Full Text Available Image processing plays a very important role in medical imaging, with versatile applications and features contributing to the development of computer-aided diagnostic systems, automatic detection of abnormalities, and enhancement of ultrasound, computed tomography, magnetic resonance and many other images. Medical image morphology is a field of study in which medical images are observed and processed on the basis of geometrical and changing structures. This study reviews morphological techniques for medical images, covering images of several human organs, the associated diseases, and the processing techniques used to address anatomical problem detection. Images of the human brain, bone, heart, carotid artery, iris, lesions, liver and lung are discussed in this study.
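    As an illustration of the basic operators such a review covers, the sketch below applies morphological opening and closing to a binary mask with scipy.ndimage; the organ-specific pipelines discussed in the paper are not reproduced.

      import numpy as np
      from scipy import ndimage as ndi

      def clean_mask(mask: np.ndarray, radius: int = 2) -> np.ndarray:
          """Opening followed by closing: remove speckle, then fill small gaps."""
          structure = ndi.generate_binary_structure(mask.ndim, 1)
          structure = ndi.iterate_structure(structure, radius)
          opened = ndi.binary_opening(mask, structure=structure)
          return ndi.binary_closing(opened, structure=structure)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          noisy = rng.random((128, 128)) > 0.7        # speckle noise
          noisy[40:90, 40:90] = True                  # a solid "organ" region
          print(clean_mask(noisy).sum())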

  20. Watermarking patient data in encrypted medical images

    Indian Academy of Sciences (India)

    A Lavanya; V Natarajan

    2012-12-01

    In this paper, we propose a method for watermarking medical images for data integrity, which consists of image encryption, data embedding and image-recovery phases. The embedded data can be completely recovered from the watermarked image after the watermark has been extracted. In the proposed method, we utilize a standard stream cipher for image encryption and select a non-region-of-interest tile in which to embed the patient data. We show that the lower bound of the PSNR (peak signal-to-noise ratio) values for medical images is about 48 dB. Experimental results demonstrate that the proposed scheme can embed a large amount of data while keeping high visual quality of the test images.
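    The PSNR figure of merit cited above can be checked with a short sketch (assuming 8-bit grayscale images; the encryption and embedding scheme itself is not implemented here).

      import numpy as np

      def psnr(original: np.ndarray, watermarked: np.ndarray, peak: float = 255.0) -> float:
          mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
          return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          img = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
          marked = img.copy()
          marked[:32, :32] ^= 1                       # flip LSBs in one non-ROI tile
          print(f"PSNR: {psnr(img, marked):.1f} dB")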

  1. Automated model-based calibration of imaging spectrographs

    Science.gov (United States)

    Kosec, Matjaž; Bürmen, Miran; Tomaževič, Dejan; Pernuš, Franjo; Likar, Boštjan

    2012-03-01

    Hyper-spectral imaging has gained recognition as an important non-invasive research tool in the field of biomedicine. Among the variety of available hyperspectral imaging systems, systems comprising an imaging spectrograph, lens, wideband illumination source and a corresponding camera stand out for the short acquisition time and good signal to noise ratio. The individual images acquired by imaging spectrograph-based systems contain full spectral information along one spatial dimension. Due to the imperfections in the camera lens and in particular the optical components of the imaging spectrograph, the acquired images are subjected to spatial and spectral distortions, resulting in scene dependent nonlinear spectral degradations and spatial misalignments which need to be corrected. However, the existing correction methods require complex calibration setups and a tedious manual involvement, therefore, the correction of the distortions is often neglected. Such simplified approach can lead to significant errors in the analysis of the acquired hyperspectral images. In this paper, we present a novel fully automated method for correction of the geometric and spectral distortions in the acquired images. The method is based on automated non-rigid registration of the reference and acquired images corresponding to the proposed calibration object incorporating standardized spatial and spectral information. The obtained transformation was successfully used for sub-pixel correction of various hyperspectral images, resulting in significant improvement of the spectral and spatial alignment. It was found that the proposed calibration is highly accurate and suitable for routine use in applications involving either diffuse reflectance or transmittance measurement setups.

  2. PERFORMANCE EVALUATION OF CONTENT BASED IMAGE RETRIEVAL FOR MEDICAL IMAGES

    Directory of Open Access Journals (Sweden)

    SASI KUMAR. M

    2013-04-01

    Full Text Available Content-based image retrieval (CBIR) technology benefits not only the management of large image collections, but also clinical care, biomedical research, and education. Digital images such as X-rays, MRI and CT scans are used for diagnosis and treatment planning. Visual information management is therefore challenging, as the quantity of data available is huge. Currently, utilization of the available medical databases is limited by image retrieval issues. Retrieval of archived digital medical images is always challenging and is being researched ever more actively, as images are of great importance in patient diagnosis, therapy, medical reference, and medical training. In this paper, an image matching scheme using the Discrete Sine Transform for relevant feature extraction is presented. The efficiency of different algorithms for classifying the features to retrieve medical images is investigated.
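    A hedged sketch of a Discrete Sine Transform descriptor in the spirit of the scheme above; the block size, number of retained coefficients and the distance measure are illustrative assumptions rather than the paper's exact choices.

      import numpy as np
      from scipy.fft import dstn

      def dst_features(image: np.ndarray, keep: int = 8) -> np.ndarray:
          """Keep the low-frequency keep x keep corner of the 2-D DST as a descriptor."""
          coeffs = dstn(image.astype(float), type=2, norm="ortho")
          feat = coeffs[:keep, :keep].ravel()
          return feat / (np.linalg.norm(feat) + 1e-12)

      def retrieve(query: np.ndarray, database: list, top_k: int = 5) -> np.ndarray:
          """Rank database images by Euclidean distance between DST descriptors."""
          q = dst_features(query)
          dists = [np.linalg.norm(q - dst_features(img)) for img in database]
          return np.argsort(dists)[:top_k]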

  3. Automated morphometry of transgenic mouse brains in MR images

    NARCIS (Netherlands)

    Scheenstra, Alize Elske Hiltje

    2011-01-01

    Quantitative and local morphometry of mouse brain MRI is a relatively new field of research, where automated methods can be exploited to rapidly provide accurate and repeatable results. In this thesis we reviewed several existing methods and applications of quantitative morphometry to brain MR image

  4. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Kanstrup, Anne-Marie Fiehn; Kristensson, Martin; Engel, Ulla

    2016-01-01

    PURPOSE: The aim of this study was to develop an automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...

  5. Medication administration errors in nursing homes using an automated medication dispensing system.

    Science.gov (United States)

    van den Bemt, Patricia M L A; Idzinga, Jetske C; Robertz, Hans; Kormelink, Dennis Groot; Pels, Neske

    2009-01-01

    OBJECTIVE To identify the frequency of medication administration errors as well as their potential risk factors in nursing homes using a distribution robot. DESIGN The study was a prospective, observational study conducted within three nursing homes in the Netherlands caring for 180 individuals. MEASUREMENTS Medication errors were measured using the disguised observation technique. Types of medication errors were described. The correlation between several potential risk factors and the occurrence of medication errors was studied to identify potential causes for the errors. RESULTS In total 2,025 medication administrations to 127 clients were observed. In these administrations 428 errors were observed (21.2%). The most frequently occurring types of errors were use of wrong administration techniques (especially incorrect crushing of medication and not supervising the intake of medication) and wrong time errors (administering the medication at least 1 h early or late).The potential risk factors female gender (odds ratio (OR) 1.39; 95% confidence interval (CI) 1.05-1.83), ATC medication class antibiotics (OR 11.11; 95% CI 2.66-46.50), medication crushed (OR 7.83; 95% CI 5.40-11.36), number of dosages/day/client (OR 1.03; 95% CI 1.01-1.05), nursing home 2 (OR 3.97; 95% CI 2.86-5.50), medication not supplied by distribution robot (OR 2.92; 95% CI 2.04-4.18), time classes "7-10 am" (OR 2.28; 95% CI 1.50-3.47) and "10 am-2 pm" (OR 1.96; 1.18-3.27) and day of the week "Wednesday" (OR 1.46; 95% CI 1.03-2.07) are associated with a higher risk of administration errors. CONCLUSIONS Medication administration in nursing homes is prone to many errors. This study indicates that the handling of the medication after removing it from the robot packaging may contribute to this high error frequency, which may be reduced by training of nurse attendants, by automated clinical decision support and by measures to reduce workload.

  6. Automated quality assessment in three-dimensional breast ultrasound images.

    Science.gov (United States)

    Schwaab, Julia; Diez, Yago; Oliver, Arnau; Martí, Robert; van Zelst, Jan; Gubern-Mérida, Albert; Mourri, Ahmed Bensouda; Gregori, Johannes; Günther, Matthias

    2016-04-01

    Automated three-dimensional breast ultrasound (ABUS) is a valuable adjunct to x-ray mammography for breast cancer screening of women with dense breasts. High image quality is essential for proper diagnostics and computer-aided detection. We propose an automated image quality assessment system for ABUS images that detects artifacts at the time of acquisition. Therefore, we study three aspects that can corrupt ABUS images: the nipple position relative to the rest of the breast, the shadow caused by the nipple, and the shape of the breast contour on the image. Image processing and machine learning algorithms are combined to detect these artifacts based on 368 clinical ABUS images that have been rated manually by two experienced clinicians. At a specificity of 0.99, 55% of the images that were rated as low quality are detected by the proposed algorithms. The areas under the ROC curves of the single classifiers are 0.99 for the nipple position, 0.84 for the nipple shadow, and 0.89 for the breast contour shape. The proposed algorithms work fast and reliably, which makes them adequate for online evaluation of image quality during acquisition. The presented concept may be extended to further image modalities and quality aspects.

  7. Automated Archiving of Archaeological Aerial Images

    Directory of Open Access Journals (Sweden)

    Michael Doneus

    2016-03-01

    Full Text Available The main purpose of any aerial photo archive is to allow quick access to images based on content and location. Therefore, next to a description of technical parameters and depicted content, georeferencing of every image is of vital importance. This can be done either by identifying the main photographed object (georeferencing of the image content) or by mapping the centre point and/or the outline of the image footprint. The paper proposes a new image archiving workflow. The new pipeline is based on the parameters that are logged by a commercial but cost-effective GNSS/IMU solution and processed with in-house-developed software. Together, these components allow one to automatically geolocate and rectify the (oblique) aerial images (by a simple planar rectification using the exterior orientation parameters) and to retrieve their footprints with reasonable accuracy, which are automatically stored as a vector file. The data of three test flights were used to determine the accuracy of the device, which turned out to be better than 1° for roll and pitch (mean between 0.0 and 0.21 with a standard deviation of 0.17–0.46) and better than 2.5° for yaw angles (mean between 0.0 and −0.14 with a standard deviation of 0.58–0.94). This turned out to be sufficient to enable fast and almost automatic GIS-based archiving of all of the imagery.

  8. [Medical imaging: its medical economics and recent situation in Japan.].

    Science.gov (United States)

    Imai, Keiko

    2006-01-01

    Two fields of radiology, medical imaging and radiation therapy, are coded separately in the medical fee system, and the health care statistics of 2003 show that expenditure on the former was 5.2% of the whole medical cost and on the latter 0.28%. Introduction of DPC, an abbreviation of Diagnostic Procedure Combination, was carried out in 2003; this was an essential reform of the medical fee payment system, which had been managed entirely on a fee-for-service basis, and 22% of beds for acute patient care are under the control of DPC payment in 2006. As medical imaging procedures are basically classified under inclusive payment in the DPC system, accurate statistics for them cannot be compiled because individual procedures are not described in DPC bills. Policy-making in medical economics will suffer greatly from the deficiency of detailed data in the published statistics. The important role of CT and MR in clinical diagnosis has resulted in an increase in the fees paid for them, up to more than half of the total expenditure on medical imaging. Consequently, examination fees have been reduced substantially for MR imaging, especially in 2002, to reduce the total cost of medical imaging. The following could be featured as major topics of medical imaging in the health insurance system: (a) a fee is newly assigned for the electronic handling of CT and MR images and of nuclear medicine, and (b) there is still a mismatch between actual payment and the quality of medical facilities. As matters related to medical imaging, the following should be stressed: (a) the numbers of CT and MR units per population are dominantly high among OECD countries, but those controlled by qualified radiologists are at the average level of those countries, (b) there is a big difference in MR examination quality among medical facilities, and (c) 76% of newly installed high-end MR units are supplied by foreign industries. Hopefully, there will be an increase in the concern for the medical fee payment system and health care costs because they possibly

  9. Automated image-based tracking and its application in ecology.

    Science.gov (United States)

    Dell, Anthony I; Bender, John A; Branson, Kristin; Couzin, Iain D; de Polavieja, Gonzalo G; Noldus, Lucas P J J; Pérez-Escudero, Alfonso; Perona, Pietro; Straw, Andrew D; Wikelski, Martin; Brose, Ulrich

    2014-07-01

    The behavior of individuals determines the strength and outcome of ecological interactions, which drive population, community, and ecosystem organization. Bio-logging, such as telemetry and animal-borne imaging, provides essential individual viewpoints, tracks, and life histories, but requires capture of individuals and is often impractical to scale. Recent developments in automated image-based tracking offer opportunities to remotely quantify and understand individual behavior at scales and resolutions not previously possible, providing an essential supplement to other tracking methodologies in ecology. Automated image-based tracking should continue to advance the field of ecology by enabling better understanding of the linkages between individual and higher-level ecological processes, via high-throughput quantitative analysis of complex ecological patterns and processes across scales, including analysis of environmental drivers.

  10. A cloud solution for medical image processing

    Directory of Open Access Journals (Sweden)

    Ali Mirarab,

    2014-07-01

    Full Text Available The rapid growth in the use of Electronic Health Records across the globe, along with the rich mix of multimedia held within an EHR and the increasing level of detail due to advances in diagnostic medical imaging, means that increasing amounts of data can be stored for each patient. The lack of image processing and analysis tools for handling large image datasets has also compromised researchers' and practitioners' outcomes. Migrating medical imaging applications and data to the cloud can allow healthcare organizations to realize significant cost savings relating to hardware, software, buildings, power and staff, in addition to greater scalability, higher performance and resilience. This paper reviews medical image processing and its challenges, and describes cloud computing and its benefits for medical image processing. The paper also introduces tools and methods for processing medical images using the cloud. Finally, a method is provided for medical image processing based on the Eucalyptus cloud infrastructure with the image processing software ImageJ, using an improved genetic algorithm for the allocation and distribution of resources. Based on the simulations conducted and the experimental results, the proposed method brings high scalability, simplicity, flexibility and full customizability, in addition to a 40% cost reduction and a twofold increase in speed.

  11. A survey of medical diagnostic imaging technologies

    Energy Technology Data Exchange (ETDEWEB)

    Heese, V.; Gmuer, N.; Thomlinson, W.

    1991-10-01

    The fields of medical imaging and medical imaging instrumentation are increasingly important. The state-of-the-art continues to advance at a very rapid pace. In fact, various medical imaging modalities are under development at the National Synchrotron Light Source (such as MECT and Transvenous Angiography.) It is important to understand how these techniques compare with today's more conventional imaging modalities. The purpose of this report is to provide some basic information about the various medical imaging technologies currently in use and their potential developments as a basis for this comparison. This report is by no means an in-depth study of the physics and instrumentation of the various imaging modalities; instead, it is an attempt to provide an explanation of the physical bases of these techniques and their principal clinical and research capabilities.

  13. Automated Pointing of Cardiac Imaging Catheters.

    Science.gov (United States)

    Loschak, Paul M; Brattain, Laura J; Howe, Robert D

    2013-12-31

    Intracardiac echocardiography (ICE) catheters enable high-quality ultrasound imaging within the heart, but their use in guiding procedures is limited due to the difficulty of manually pointing them at structures of interest. This paper presents the design and testing of a catheter steering model for robotic control of commercial ICE catheters. The four actuated degrees of freedom (4-DOF) are two catheter handle knobs to produce bi-directional bending in combination with rotation and translation of the handle. An extra degree of freedom in the system allows the imaging plane (dependent on orientation) to be directed at an object of interest. A closed form solution for forward and inverse kinematics enables control of the catheter tip position and the imaging plane orientation. The proposed algorithms were validated with a robotic test bed using electromagnetic sensor tracking of the catheter tip. The ability to automatically acquire imaging targets in the heart may improve the efficiency and effectiveness of intracardiac catheter interventions by allowing visualization of soft tissue structures that are not visible using standard fluoroscopic guidance. Although the system has been developed and tested for manipulating ICE catheters, the methods described here are applicable to any long thin tendon-driven tool (with single or bi-directional bending) requiring accurate tip position and orientation control.

  14. Automated vasculature extraction from placenta images

    Science.gov (United States)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.

  15. Automated thresholding in radiographic image for welded joints

    Science.gov (United States)

    Yazid, Haniza; Arof, Hamzah; Yazid, Hafizal

    2012-03-01

    Automated detection of welding defects in radiographic images becomes non-trivial when uneven illumination, poor contrast and noise are present. In this paper, a new surface thresholding method is introduced to detect defects in radiographic images of welding joints. In the first stage, several image processing techniques, namely fuzzy c-means clustering, region filling, mean filtering, edge detection, Otsu's thresholding and morphological operations, are utilised to locate the area in which defects might exist. This is followed in the second stage by inverse surface thresholding with a partial differential equation to locate isolated areas that represent the defects. The proposed method obtained promising results with high precision.
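    Otsu's thresholding is one of the standard first-stage steps named above. A self-contained version is sketched below for reference; it is not the paper's inverse surface thresholding method.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Grey level that maximises the between-class variance (Otsu's method)."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    hist = hist.astype(float)
    w_bg = np.cumsum(hist)                      # pixels at or below each candidate level
    w_fg = hist.sum() - w_bg                    # pixels above it
    cum = np.cumsum(hist * centers)
    mean_bg = cum / np.maximum(w_bg, 1e-12)
    mean_fg = (cum[-1] - cum) / np.maximum(w_fg, 1e-12)
    between_var = w_bg * w_fg * (mean_bg - mean_fg) ** 2
    return centers[np.argmax(between_var)]

# Example use on a weld radiograph held in a 2-D array `radiograph`:
# binary = radiograph > otsu_threshold(radiograph)
```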

  16. SAND: Automated VLBI imaging and analyzing pipeline

    Science.gov (United States)

    Zhang, Ming

    2016-05-01

    The Search And Non-Destroy (SAND) is a VLBI data reduction pipeline composed of a set of Python programs based on the AIPS interface provided by ObitTalk. It is designed for the massive data reduction of multi-epoch VLBI monitoring research. It can automatically investigate calibrated visibility data, search all the radio emissions above a given noise floor and do the model fitting either on the CLEANed image or directly on the uv data. It then digests the model-fitting results, intelligently identifies the multi-epoch jet component correspondence, and recognizes linear or non-linear proper motion patterns. The outputs include a CLEANed image catalogue with polarization maps, an animation cube, proper motion fits and core light curves. For uncalibrated data, a user can easily add inline modules to do the calibration and self-calibration in a batch for a specific array.

  17. Medical imaging in new drug clinical development.

    Science.gov (United States)

    Wang, Yi-Xiang; Deng, Min

    2010-12-01

    Medical imaging can help answer key questions that arise during the drug development process. The role of medical imaging in new drug clinical trials includes identification of likely responders; detection and diagnosis of lesions and evaluation of their severity; and therapy monitoring and follow-up. Nuclear imaging techniques such as PET can be used to monitor drug pharmacokinetics and distribution and to study specific molecular endpoints. In assessing drug efficacy, imaging biomarkers and imaging surrogate endpoints can be more objective and faster to measure than clinical outcomes, and allow small group sizes, quick results and good statistical power. Imaging also has an important role in drug safety monitoring, particularly when no other suitable biomarker is available. Despite the long history of the radiological sciences, their application to the drug development process is relatively recent. This review highlights the processes, opportunities, and challenges of medical imaging in new drug development.

  18. Automated delineation of stroke lesions using brain CT images

    Directory of Open Access Journals (Sweden)

    Céline R. Gillebert

    2014-01-01

    Full Text Available Computed tomographic (CT images are widely used for the identification of abnormal brain tissue following infarct and hemorrhage in stroke. Manual lesion delineation is currently the standard approach, but is both time-consuming and operator-dependent. To address these issues, we present a method that can automatically delineate infarct and hemorrhage in stroke CT images. The key elements of this method are the accurate normalization of CT images from stroke patients into template space and the subsequent voxelwise comparison with a group of control CT images for defining areas with hypo- or hyper-intense signals. Our validation, using simulated and actual lesions, shows that our approach is effective in reconstructing lesions resulting from both infarct and hemorrhage and yields lesion maps spatially consistent with those produced manually by expert operators. A limitation is that, relative to manual delineation, there is reduced sensitivity of the automated method in regions close to the ventricles and the brain contours. However, the automated method presents a number of benefits in terms of offering significant time savings and the elimination of the inter-operator differences inherent to manual tracing approaches. These factors are relevant for the creation of large-scale lesion databases for neuropsychological research. The automated delineation of stroke lesions from CT scans may also enable longitudinal studies to quantify changes in damaged tissue in an objective and reproducible manner.
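    The voxelwise comparison step can be summarised as computing, for each voxel, how far the patient's intensity deviates from the control distribution. The sketch below assumes the patient and control CTs have already been normalised to template space; the z-score threshold is illustrative, not a value taken from the paper.

```python
import numpy as np

def lesion_map(patient, controls, z_thresh=3.0):
    """Voxelwise comparison of one normalised patient CT with a group of control CTs.

    patient  : 3-D array already warped to template space.
    controls : 4-D array (n_controls, x, y, z) in the same space.
    Returns signed z-scores plus binary masks of hypo- and hyper-intense voxels
    (candidate infarct and haemorrhage, respectively).
    """
    mu = controls.mean(axis=0)
    sigma = controls.std(axis=0) + 1e-6        # avoid division by zero
    z = (patient - mu) / sigma
    hypo = z < -z_thresh                       # darker than controls (infarct-like)
    hyper = z > z_thresh                       # brighter than controls (bleed-like)
    return z, hypo, hyper
```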

  19. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information that is being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6 month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  20. Practical approach to apply range image sensors in machine automation

    Science.gov (United States)

    Moring, Ilkka; Paakkari, Jussi

    1993-10-01

    In this paper we propose a practical approach to apply range imaging technology in machine automation. The applications we are especially interested in are industrial heavy-duty machines like paper roll manipulators in harbor terminals, harvesters in forests and drilling machines in mines. Characteristic of these applications is that the sensing system has to be fast, mid-ranging, compact, robust, and relatively cheap. On the other hand the sensing system is not required to be generic with respect to the complexity of scenes and objects or number of object classes. The key in our approach is that just a limited range data set or as we call it, a sparse range image is acquired and analyzed. This makes both the range image sensor and the range image analysis process more feasible and attractive. We believe that this is the way in which range imaging technology will enter the large industrial machine automation market. In the paper we analyze as a case example one of the applications mentioned and, based on that, we try to roughly specify the requirements for a range imaging based sensing system. The possibilities to implement the specified system are analyzed based on our own work on range image acquisition and interpretation.

  1. Digital transplantation pathology: combining whole slide imaging, multiplex staining and automated image analysis.

    Science.gov (United States)

    Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J

    2012-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes.

  2. New strategies for medical data mining, part 3: automated workflow analysis and optimization.

    Science.gov (United States)

    Reiner, Bruce

    2011-02-01

    The practice of evidence-based medicine calls for the creation of "best practice" guidelines, leading to improved clinical outcomes. One of the primary factors limiting evidence-based medicine in radiology today is the relative paucity of standardized databases. The creation of standardized medical imaging databases offers the potential to enhance radiologist workflow and diagnostic accuracy through objective data-driven analytics, which can be categorized in accordance with specific variables relating to the individual examination, patient, provider, and technology being used. In addition to this "global" database analysis, "individual" radiologist workflow can be analyzed through the integration of electronic auditing tools into the PACS. The combination of these individual and global analyses can ultimately identify best practice patterns, which can be adapted to the individual attributes of end users and ultimately used in the creation of automated evidence-based medicine workflow templates.

  3. Image registration method for medical image sequences

    Science.gov (United States)

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.

  4. Medical image libraries: ICoS project

    Science.gov (United States)

    Honniball, John; Thomas, Peter

    1999-08-01

    The use of digital techniques for the production, manipulation and storage of images has resulted in the creation of digital image libraries. These libraries often store many thousands of images. While provision of storage media for such large amounts of data has been straightforward, provision of effective searching and retrieval tools has not. Medicine relies heavily on images as a diagnostic tool. The most obvious example is the x-ray, but many other image forms are in everyday use. Advances in technology are affecting the ways medical images are generated, stored and retrieved. The paper describes the work of the Image COding and Segmentation to Support Variable Rate Transmission Channels and Variable Resolution Platforms (ICoS) research project currently under way in Bristol, UK. ICoS is a joint project of the University of the West of England and Hewlett-Packard Research Laboratories Europe. Funding is provided by the Engineering and Physical Sciences Research Council. The aim of the ICoS project is to demonstrate the practical application of computer networking to medical image libraries. Work at the University of the West of England concentrates on user interface and indexing issues. Metadata is used to organize the images, coded using the WWW Consortium standard Resource Description Framework. We are investigating the application of such standards to medical images, one outcome being to implement a metadata-based image library. This paper describes the ICoS project in detail and discusses both the metadata system and user interfaces in the context of medical applications.

  5. An efficient medical image compression scheme.

    Science.gov (United States)

    Li, Xiaofeng; Shen, Yi; Ma, Jiachen

    2005-01-01

    In this paper, a fast lossless compression scheme is presented for medical images. This scheme consists of two stages. In the first stage, Differential Pulse Code Modulation (DPCM) is used to decorrelate the raw image data, thereby increasing the compressibility of the medical image. In the second stage, an effective scheme based on the Huffman coding method is developed to encode the residual image. This newly proposed scheme reduces the cost of the Huffman coding table while achieving a high compression ratio. With this algorithm, a compression ratio higher than that of the lossless JPEG method can be obtained. At the same time, this method is quicker than lossless JPEG2000. In other words, the newly proposed algorithm provides a good means for lossless medical image compression.
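    The two stages can be illustrated with a left-neighbour DPCM predictor followed by a standard Huffman code over the residuals, as sketched below. This does not reproduce the paper's reduced-cost coding table; it only shows the structure of such a scheme.

```python
import heapq
import numpy as np
from collections import Counter

def dpcm_residual(image):
    """Stage 1: left-neighbour DPCM; each pixel is predicted from its left neighbour."""
    img = image.astype(np.int32)
    res = img.copy()
    res[:, 1:] = img[:, 1:] - img[:, :-1]       # first column is stored verbatim
    return res

def huffman_table(symbols):
    """Stage 2: build a standard Huffman code {symbol: bitstring} from symbol frequencies."""
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, next_id, merged])
        next_id += 1
    return {s: (code or "0") for s, code in heap[0][2].items()}

def encoded_size_bits(image):
    """Total number of bits needed to Huffman-encode the DPCM residuals."""
    res = dpcm_residual(image).ravel().tolist()
    table = huffman_table(res)
    return sum(len(table[s]) for s in res)
```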

  6. Blind integrity verification of medical images.

    Science.gov (United States)

    Huang, Hui; Coatrieux, Gouenou; Shu, Huazhong; Luo, Limin; Roux, Christian

    2012-11-01

    This work presents the first method of digital blind forensics within the medical imaging field with the objective to detect whether an image has been modified by some processing (e.g. filtering, lossy compression and so on). It compares two image features: the Histogram statistics of Reorganized Block-based Discrete cosine transform coefficients (HRBD), originally proposed for steganalysis purposes, and the Histogram statistics of Reorganized Block-based Tchebichef moments (HRBT). Both features serve as input of a set of SVM classifiers built in order to discriminate tampered images from original ones as well as to identify the nature of the global modification one image may have undergone. Performance evaluation, conducted in application to different medical image modalities, shows that these image features can help, independently or jointly, to blindly distinguish image processing or modifications with a detection rate greater than 70%. They also underline the complementarity of these features.

  7. Medical imaging technology reviews and computational applications

    CERN Document Server

    Dewi, Dyah

    2015-01-01

    This book presents the latest research findings and reviews in the field of medical imaging technology, covering ultrasound diagnostics approaches for detecting osteoarthritis, breast carcinoma and cardiovascular conditions, image guided biopsy and segmentation techniques for detecting lung cancer, image fusion, and simulating fluid flows for cardiovascular applications. It offers a useful guide for students, lecturers and professional researchers in the fields of biomedical engineering and image processing.

  8. Adaptive Beamforming for Medical Ultrasound Imaging

    DEFF Research Database (Denmark)

    Holfort, Iben Kraglund

    This dissertation investigates the application of adaptive beamforming for medical ultrasound imaging. The investigations have been concentrated primarily on the Minimum Variance (MV) beamformer. A broadband implementation of the MV beamformer is described, and simulated data have been used...... to demonstrate the performance. The MV beamformer has been applied to different sets of ultrasound imaging sequences: synthetic aperture ultrasound imaging and plane wave ultrasound imaging. And an approach for applying MV optimized apodization weights on both the transmitting and the receiving apertures...
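    For reference, the narrowband Minimum Variance (Capon) weights for a single image point take the form w = R^{-1}a / (a^H R^{-1} a). The sketch below assumes the channel data have already been delay-focused (so the steering vector is a vector of ones) and uses simple diagonal loading; the broadband and subarray details of the dissertation are omitted.

```python
import numpy as np

def mv_weights(snapshots, loading=1e-2):
    """Minimum Variance apodization weights for one focused image point.

    snapshots : (n_elements, n_samples) complex channel data after focusing delays,
                so the steering vector of the focal point is a vector of ones.
    """
    n = snapshots.shape[0]
    a = np.ones(n, dtype=complex)                               # steering vector
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]     # sample covariance
    R += loading * np.trace(R).real / n * np.eye(n)             # diagonal loading
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Beamformed output for sample k:  y = np.vdot(mv_weights(snapshots), snapshots[:, k])
```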

  9. Image mosaicing for automated pipe scanning

    Science.gov (United States)

    Summan, Rahul; Dobie, Gordon; Guarato, Francesco; MacLeod, Charles; Marshall, Stephen; Forrester, Cailean; Pierce, Gareth; Bolton, Gary

    2015-03-01

    Remote visual inspection (RVI) is critical for the inspection of the interior condition of pipelines, particularly in the nuclear and oil and gas industries. Conventional RVI equipment produces a video which is analysed online by a trained inspector employing expert knowledge. Due to the potentially disorientating nature of the footage, this is a time-intensive and difficult activity. In this paper a new probe for such visual inspections is presented. The device employs a catadioptric lens coupled with feature based structure from motion to create a 3D model of the interior surface of a pipeline. Reliance upon the availability of image features is mitigated through orientation and distance estimates from an inertial measurement unit and encoder respectively. Such a model affords a global view of the data thus permitting a greater appreciation of the nature and extent of defects. Furthermore, the technique estimates the 3D position and orientation of the probe thus providing information to direct remedial action. Results are presented for both synthetic and real pipe sections. The former enables the accuracy of the generated model to be assessed while the latter demonstrates the efficacy of the technique in practice.

  10. ENVISION, from particle detectors to medical imaging

    CERN Multimedia

    2013-01-01

    Technologies developed for particle physics detectors are increasingly used in medical imaging tools like Positron Emission Tomography (PET). Produced by: CERN KT/Life Sciences and ENVISION Project Management: Manuela Cirilli 3D animation: Jeroen Huijben, Nymus3d

  11. AUTOMATED IMAGE MATCHING WITH CODED POINTS IN STEREOVISION MEASUREMENT

    Institute of Scientific and Technical Information of China (English)

    Dong Mingli; Zhou Xiaogang; Zhu Lianqing; Lü Naiguang; Sun Yunan

    2005-01-01

    A coding-based method to solve the image matching problem in stereovision measurement is presented. The solution is to append an identity code (ID) to each retro-reflective point so that it can be identified efficiently under complicated circumstances; the code is invariant to rotation, zooming, and deformation. The design architecture and implementation process are described in detail, based on the theory of stereovision measurement. Experiments show that the method is effective in reducing data processing time and in improving the accuracy of image matching and the automation of the measuring system.

  12. Special Issue on “Medical Imaging and Image Processing”

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2014-12-01

    Full Text Available Over the last decade, Medical Imaging has become an essential component in many fields of bio-medical research and clinical practice. Biologists study cells and generate 3D confocal microscopy data sets, virologists generate 3D reconstructions of viruses from micrographs, radiologists identify and quantify tumors from MRI and CT scans, and neuroscientists detect regional metabolic brain activity from PET and functional MRI scans. On the other hand, Image Processing includes the analysis, enhancement, and display of images captured via various medical imaging technologies. Image reconstruction and modeling techniques allow instant processing of 2D signals to create 3D images. In addition, image processing and analysis can be used to determine the diameter, volume, and vasculature of a tumor or organ, flow parameters of blood or other fluids, and microscopic changes that have not previously been discernible.[...

  13. GPU Accelerated Automated Feature Extraction From Satellite Images

    Directory of Open Access Journals (Sweden)

    K. Phani Tejaswi

    2013-04-01

    Full Text Available The availability of large volumes of remote sensing data insists on a higher degree of automation in feature extraction, making it a need of the hour. Fusing data from multiple sources, such as panchromatic, hyperspectral and LiDAR sensors, enhances the probability of identifying and extracting features such as buildings, vegetation or bodies of water by using a combination of spectral and elevation characteristics. Utilizing the aforementioned features in remote sensing is impracticable in the absence of automation. While efforts are underway to reduce human intervention in data processing, this attempt alone may not suffice. The huge quantum of data that needs to be processed entails accelerated processing to be enabled. GPUs, which were originally designed to provide efficient visualization, are being massively employed for computation-intensive parallel processing environments. Image processing in general, and hence automated feature extraction, is highly computation intensive, where performance improvements have a direct impact on societal needs. In this context, an algorithm has been formulated for automated feature extraction from a panchromatic or multispectral image based on image processing techniques. Two Laplacian of Gaussian (LoG) masks were applied on the image individually, followed by detection of zero-crossing points and extraction of pixels based on their standard deviation with respect to the surrounding pixels. The two images extracted with the different LoG masks were combined together, which resulted in an image with the extracted features and edges. Finally, the user is at liberty to apply an image smoothing step depending on the noise content in the extracted image. The image is passed through a hybrid median filter to remove salt and pepper noise from the image. This paper discusses the aforesaid algorithm for automated feature extraction, the necessity of deploying GPUs for the same, and system-level challenges, and quantifies the benefits of integrating GPUs in such an environment. The
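    The core of the pipeline (zero crossings of two LoG responses, combination of the two results, and median filtering) can be sketched as follows. The sigma values are arbitrary, and a plain median filter stands in for the hybrid median filter used in the paper; the GPU acceleration itself is not shown.

```python
import numpy as np
from scipy import ndimage

def zero_crossings(arr):
    """Boolean mask of sign changes between 4-connected neighbours."""
    zc = np.zeros(arr.shape, dtype=bool)
    zc[:, :-1] |= np.signbit(arr[:, :-1]) != np.signbit(arr[:, 1:])
    zc[:-1, :] |= np.signbit(arr[:-1, :]) != np.signbit(arr[1:, :])
    return zc

def log_edges(image, sigmas=(1.0, 2.5), median_size=3):
    """Combine zero crossings of two LoG responses, then median-filter the result."""
    img = image.astype(float)
    combined = np.zeros(img.shape, dtype=bool)
    for s in sigmas:
        response = ndimage.gaussian_laplace(img, sigma=s)
        combined |= zero_crossings(response)
    # A plain median filter approximates the salt-and-pepper removal step.
    return ndimage.median_filter(combined.astype(np.uint8), size=median_size)
```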

  14. Applied medical image processing a basic course

    CERN Document Server

    Birkfellner, Wolfgang

    2014-01-01

    A widely used, classroom-tested text, Applied Medical Image Processing: A Basic Course delivers an ideal introduction to image processing in medicine, emphasizing the clinical relevance and special requirements of the field. Avoiding excessive mathematical formalisms, the book presents key principles by implementing algorithms from scratch and using simple MATLAB®/Octave scripts with image data and illustrations on an accompanying CD-ROM or companion website. Organized as a complete textbook, it provides an overview of the physics of medical image processing and discusses image formats and data storage, intensity transforms, filtering of images and applications of the Fourier transform, three-dimensional spatial transforms, volume rendering, image registration, and tomographic reconstruction.

  15. Medical image segmentation by MDP model

    Science.gov (United States)

    Lu, Yisu; Chen, Wufan

    2011-11-01

    The MDP (Dirichlet Process Mixtures) model is applied to segment medical images in this paper. Segmentation can be done automatically without initializing the number of segmentation classes. The MDP model segmentation algorithm is used to segment natural images and MR (Magnetic Resonance) images in the paper. To demonstrate the accuracy of the MDP model segmentation algorithm, comparative experiments with the EM (Expectation Maximization), K-means and MRF (Markov Random Field) image segmentation algorithms have been performed on medical MR images. All the methods are also analyzed quantitatively using the DSC (Dice Similarity Coefficient). The experimental results show that the DSC of the MDP model segmentation algorithm exceeds 90% for all slices, which shows that the proposed method is robust and accurate.
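    The Dice Similarity Coefficient used for the quantitative comparison is simply DSC = 2|A intersect B| / (|A| + |B|) for two binary masks A and B; a minimal implementation is given below.

```python
import numpy as np

def dice_similarity(seg, ref):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0                      # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, ref).sum() / denom
```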

  16. MULTIWAVELET TRANSFORM IN COMPRESSION OF MEDICAL IMAGES

    Directory of Open Access Journals (Sweden)

    V. K. Sudha

    2013-05-01

    Full Text Available This paper analyses the performance of multiwavelets (a variant of the wavelet transform) on the compression of medical images. To do so, two processes, namely transformation for decorrelation and encoding, are performed. In the transformation stage, medical images are subjected to the multiwavelet transform using multiwavelets such as Geronimo-Hardin-Massopust, Chui-Lian, Cardinal 2 Balanced (Cardbal2) and the orthogonal symmetric/antisymmetric multiwavelet (SA4). The Set Partitioned Embedded Block Coder is used as a common platform for encoding the transformed coefficients. Peak signal-to-noise ratio, bit rate and the Structural Similarity Index are used as metrics for performance analysis. For the experiments we have used various medical images such as magnetic resonance, computed tomography and X-ray images.

  17. I2Cnet medical image annotation service.

    Science.gov (United States)

    Chronaki, C E; Zabulis, X; Orphanoudakis, S C

    1997-01-01

    I2Cnet (Image Indexing by Content network) aims to provide services related to the content-based management of images in healthcare over the World-Wide Web. Each I2Cnet server maintains an autonomous repository of medical images and related information. The annotation service of I2Cnet allows specialists to interact with the contents of the repository, adding comments or illustrations to medical images of interest. I2Cnet annotations may be communicated to other users via e-mail or posted to I2Cnet for inclusion in its local repositories. This paper discusses the annotation service of I2Cnet and argues that such services pave the way towards the evolution of active digital medical image libraries.

  18. Physics for Medical Imaging Applications

    CERN Document Server

    Caner, Alesssandra; Rahal, Ghita

    2007-01-01

    The book introduces the fundamental aspects of digital imaging and covers four main themes: Ultrasound techniques and imaging applications; Magnetic resonance and MRI in hospital; Digital imaging with X-rays; and Emission tomography (PET and SPECT). Each of these topics is developed by analysing the underlying physics principles and their implementation, quality and safety aspects, clinical performance and recent advancements in the field. Some issues specific to the individual techniques are also treated, e.g. choice of radioisotopes or contrast agents, optimisation of data acquisition and st

  19. Medical imaging principles and practices

    CERN Document Server

    Bronzino, Joseph D; Peterson, Donald R

    2013-01-01

    This book offers a selective review of key imaging modalities, focusing on modalities with established clinical utilization. It provides a detailed overview of x-ray imaging and computed tomography, and fundamental concepts in signal acquisition and processing, followed by an overview of functional MRI (fMRI) and chemical shift imaging. It also covers topics in Magnetic Resonance Microscopy, the physics of instrumentation and signal collection, and their application in clinical practice. The selection of topics provides readers with an appreciation of the depth and breadth of the field and the challenges ahead of the technical and clinical community of researchers and practitioners.

  20. Intuitionistic fuzzy segmentation of medical images.

    Science.gov (United States)

    Chaira, Tamalika

    2010-06-01

    This paper proposes a novel method, probably the first, using Atanassov's intuitionistic fuzzy set theory to segment blood vessels and also blood cells in pathological images. This type of segmentation is very important in detecting different types of human disease; e.g., an increase in the number of vessels may lead to cancer in the prostate, mammary glands, etc. The medical images are often not properly illuminated, and segmentation in that case becomes very difficult. A novel image segmentation approach using intuitionistic fuzzy set theory and a new membership function, built using restricted equivalence functions from automorphisms, is proposed for finding the membership values of the pixels of the image. An intuitionistic fuzzy image is constructed using a Sugeno-type intuitionistic fuzzy generator. Local thresholding is applied to threshold the medical images. The results showed much better performance on poor-contrast medical images, where almost all the blood vessels and blood cells are properly visible. There are several fuzzy and intuitionistic fuzzy thresholding methods, but these methods are not tailored to medical images. To compare the proposed method with other thresholding methods, it is compared with six nonfuzzy, fuzzy, and intuitionistic fuzzy methods.

  1. Segmentation of elongated structures in medical images

    NARCIS (Netherlands)

    Staal, Jozef Johannes

    2004-01-01

    The research described in this thesis concerns the automatic detection, recognition and segmentation of elongated structures in medical images. For this purpose techniques have been developed to detect subdimensional pointsets (e.g. ridges, edges) in images of arbitrary dimension. These pointsets ar

  2. Multi-channel medical imaging system

    Science.gov (United States)

    Frangioni, John V.

    2016-05-03

    A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.

  3. Multi-channel medical imaging system

    Energy Technology Data Exchange (ETDEWEB)

    Frangioni, John V.

    2016-05-03

    A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.

  4. Multi-channel medical imaging system

    Science.gov (United States)

    Frangioni, John V

    2013-12-31

    A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in the subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.

  5. An automated scanning system for particle physics and medical applications

    Energy Technology Data Exchange (ETDEWEB)

    De Lellis, Giovanni [Dipartimento di Fisica, Universita ' Federico II' di Napoli, Complesso Universitario Monte Sant' Angelo, via Cintia, 80126 Naples (Italy)], E-mail: giovanni.de.lellis@cern.ch

    2007-10-01

    In this paper we present the performance of a fully automated microscope aimed at very precise spatial and angular measurements with nuclear emulsion technology. We show in particular its application to the study of the fragmentation of carbon ions used in oncological hadrontherapy.

  6. Automated curved planar reformation of 3D spine images

    Energy Technology Data Exchange (ETDEWEB)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo [University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2005-10-07

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
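    The reformatting step amounts to resampling the volume along the curve that represents the vertebral column. The sketch below samples one curved, sagittal-like plane given polynomial fits x(z) and y(z) of the spine curve; it does not reproduce the paper's full spine-based coordinate system, vertebral rotation model or optimization framework.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def curved_planar_reformation(volume, poly_x, poly_y, half_width=40):
    """Sample a sagittal-like curved plane following a spine curve.

    volume          : 3-D array indexed as (z, y, x).
    poly_x, poly_y  : coefficient arrays (highest degree first, as used by np.polyval)
                      giving the curve x(z), y(z) of the vertebral column.
    half_width      : number of voxels sampled to each side of the curve along x.
    """
    nz = volume.shape[0]
    z = np.arange(nz, dtype=float)
    cx = np.polyval(poly_x, z)                  # curve position in each slice
    cy = np.polyval(poly_y, z)
    offsets = np.arange(-half_width, half_width + 1, dtype=float)
    # For each slice, sample a line through (cx, cy) running along the x axis.
    zz = np.repeat(z, offsets.size)
    yy = np.repeat(cy, offsets.size)
    xx = (cx[:, None] + offsets[None, :]).ravel()
    coords = np.vstack([zz, yy, xx])
    cpr = map_coordinates(volume.astype(float), coords, order=1, mode="nearest")
    return cpr.reshape(nz, offsets.size)
```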

  7. Improved Interactive Medical-Imaging System

    Science.gov (United States)

    Ross, Muriel D.; Twombly, Ian A.; Senger, Steven

    2003-01-01

    An improved computational-simulation system for interactive medical imaging has been invented. The system displays high-resolution, three-dimensional-appearing images of anatomical objects based on data acquired by such techniques as computed tomography (CT) and magnetic-resonance imaging (MRI). The system enables users to manipulate the data to obtain a variety of views, for example, to display cross sections in specified planes or to rotate images about specified axes. Relative to prior such systems, this system offers enhanced capabilities for synthesizing images of surgical cuts and for collaboration by users at multiple, remote computing sites.

  8. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS

    OpenAIRE

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-01-01

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most results reported previously perform registration using a fixed parameter setting and u...

  9. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-03-09

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement. Expected final online publication date for the Annual Review of Biomedical Engineering Volume 19 is June 4, 2017. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.

  10. A lossless encryption method for medical images using edge maps.

    Science.gov (United States)

    Zhou, Yicong; Panetta, Karen; Agaian, Sos

    2009-01-01

    Image encryption is an effective approach for providing security and privacy protection for medical images. This paper introduces a new lossless approach, called EdgeCrypt, to encrypt medical images using the information contained within an edge map. The algorithm can fully protect the selected objects/regions within medical images or the entire medical images. It can also encrypt other types of images such as grayscale images or color images. The algorithm can be used for privacy protection in the real-time medical applications such as wireless medical networking and mobile medical services.

  11. Use of mobile devices for medical imaging.

    Science.gov (United States)

    Hirschorn, David S; Choudhri, Asim F; Shih, George; Kim, Woojin

    2014-12-01

    Mobile devices have fundamentally changed personal computing, with many people forgoing the desktop and even laptop computer altogether in favor of a smaller, lighter, and cheaper device with a touch screen. Doctors and patients are beginning to expect medical images to be available on these devices for consultative viewing, if not actual diagnosis. However, this raises serious concerns with regard to the ability of existing mobile devices and networks to quickly and securely move these images. Medical images often come in large sets, which can bog down a network if not conveyed in an intelligent manner, and downloaded data on a mobile device are highly vulnerable to a breach of patient confidentiality should that device become lost or stolen. Some degree of regulation is needed to ensure that the software used to view these images allows all relevant medical information to be visible and manipulated in a clinically acceptable manner. There also needs to be a quality control mechanism to ensure that a device's display accurately conveys the image content without loss of contrast detail. Furthermore, not all mobile displays are appropriate for all types of images. The smaller displays of smart phones, for example, are not well suited for viewing entire chest radiographs, no matter how small and numerous the pixels of the display may be. All of these factors should be taken into account when deciding where, when, and how to use mobile devices for the display of medical images.

  12. Automated localization of vertebra landmarks in MRI images

    Science.gov (United States)

    Pai, Akshay; Narasimhamurthy, Anand; Rao, V. S. Veeravasarapu; Vaidya, Vivek

    2011-03-01

    The identification of key landmark points in an MR spine image is an important step for tasks such as vertebra counting. In this paper, we propose a template matching based approach for automatic detection of two key landmark points, namely the second cervical vertebra (C2) and the sacrum, from sagittal MR images. The approach comprises an approximate localization of the vertebral column followed by matching with appropriate templates in order to detect/localize the landmarks. A straightforward extension of the work described here is an automated classification of spine section(s). It also serves as a useful building block for further automatic processing, such as extraction of regions of interest for subsequent image processing, and in aiding the counting of vertebrae.
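    The matching step itself can be illustrated with normalised cross-correlation between a mid-sagittal slice and a landmark template, as sketched below using skimage.feature.match_template; the templates and the preceding column-localization step are assumed to be available.

```python
import numpy as np
from skimage.feature import match_template

def locate_landmark(sagittal_slice, template):
    """Return the (row, col) of the best normalised cross-correlation match and its score."""
    ncc = match_template(sagittal_slice.astype(float), template.astype(float),
                         pad_input=True)
    peak = np.unravel_index(np.argmax(ncc), ncc.shape)
    return peak, float(ncc[peak])

# e.g. c2_position, score = locate_landmark(mid_sagittal_slice, c2_template)
```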

  13. Automated computational aberration correction method for broadband interferometric imaging techniques.

    Science.gov (United States)

    Pande, Paritosh; Liu, Yuan-Zhi; South, Fredrick A; Boppart, Stephen A

    2016-07-15

    Numerical correction of optical aberrations provides an inexpensive and simpler alternative to the traditionally used hardware-based adaptive optics techniques. In this Letter, we present an automated computational aberration correction method for broadband interferometric imaging techniques. In the proposed method, the process of aberration correction is modeled as a filtering operation on the aberrant image using a phase filter in the Fourier domain. The phase filter is expressed as a linear combination of Zernike polynomials with unknown coefficients, which are estimated through an iterative optimization scheme based on maximizing an image sharpness metric. The method is validated on both simulated data and experimental data obtained from a tissue phantom, an ex vivo tissue sample, and an in vivo photoreceptor layer of the human retina.
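    A stripped-down version of the idea is shown below: apply a Fourier-domain phase filter to the complex image and pick the coefficient that maximises a sharpness metric. Only a single quadratic (defocus-like) term searched over a grid is used here, whereas the paper optimises a set of Zernike coefficients with an iterative scheme; the metric and search range are illustrative choices.

```python
import numpy as np

def sharpness(img):
    """Normalised intensity-squared sharpness metric (one common choice)."""
    p = np.abs(img) ** 2
    return np.sum(p ** 2) / (np.sum(p) ** 2)

def correct_defocus(complex_image, coeffs=np.linspace(-10, 10, 81)):
    """Grid-search a single quadratic (defocus-like) phase term that maximises sharpness."""
    ny, nx = complex_image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    rho2 = fx ** 2 + fy ** 2                         # squared normalised spatial frequency
    spectrum = np.fft.fft2(complex_image)
    best_score, best_img = -np.inf, complex_image
    for c in coeffs:
        phase = np.exp(-1j * 2 * np.pi * c * rho2)   # candidate correction filter
        corrected = np.fft.ifft2(spectrum * phase)
        score = sharpness(corrected)
        if score > best_score:
            best_score, best_img = score, corrected
    return best_img, best_score
```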

  14. Medical Image Feature, Extraction, Selection And Classification

    Directory of Open Access Journals (Sweden)

    M.VASANTHA,

    2010-06-01

    Full Text Available Breast cancer is the most common type of cancer found in women. It is the most frequent form of cancer and one in 22 women in India is likely to suffer from breast cancer. This paper proposes an image classifier to classify mammogram images. Mammogram images are classified into normal, benign and malignant images. A total of 26 features, including histogram intensity features and GLCM features, are extracted from each mammogram image. A hybrid approach to feature selection is proposed in this paper, which reduces the number of features by 75%. Decision tree algorithms are applied to mammography classification using these reduced features. Experimental results have been obtained for a data set of 113 images of different types taken from the MIAS database. This technique of classification has not been attempted before and it reveals the potential of data mining in medical treatment.

  15. An open data mining framework for the analysis of medical images: application on obstructive nephropathy microscopy images.

    Science.gov (United States)

    Doukas, Charalampos; Goudas, Theodosis; Fischer, Simon; Mierswa, Ingo; Chatziioannou, Aristotle; Maglogiannis, Ilias

    2010-01-01

    This paper presents an open image-mining framework that provides access to tools and methods for the characterization of medical images. Several image processing and feature extraction operators have been implemented and exposed through Web Services. RapidMiner, an open source data mining system, has been utilized for applying classification operators and creating the essential processing workflows. The proposed framework has been applied to the detection of salient objects in obstructive nephropathy microscopy images. Initial classification results are quite promising, demonstrating the feasibility of automated characterization of kidney biopsy images.

  16. Photoacoustic Imaging: Opening New Frontiers in Medical Imaging

    Directory of Open Access Journals (Sweden)

    Keerthi S Valluru

    2011-01-01

    Full Text Available In today's world, technology is advancing at an exponential rate and medical imaging is no exception. During the last hundred years, the field of medical imaging has seen tremendous technological growth with the invention of imaging modalities including but not limited to X-ray, ultrasound, computed tomography, magnetic resonance imaging, positron emission tomography, and single-photon emission computed tomography. These tools have led to better diagnosis and improved patient care. However, each of these modalities has its advantages as well as disadvantages and none of them can reveal all the information a physician would like to have. In the last decade, a new diagnostic technology called photoacoustic imaging has evolved which is moving rapidly from the research phase to the clinical trial phase. This article outlines the basics of photoacoustic imaging and describes our hands-on experience in developing a comprehensive photoacoustic imaging system to detect tissue abnormalities.

  17. Automated blood vessel extraction using local features on retinal images

    Science.gov (United States)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) features on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relations between neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are not robust to image rotation, so the method was improved by additionally computing HLAC features on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features based on 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and the four output values of the first ANN, the Gabor filter, the double-ring filter and the black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC features output clearly white values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. The AUC of ANN2 was 0.960 in our study. These results can be used for the quantitative analysis of the blood vessels.
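    The AUC evaluation against the manual annotations can be reproduced in outline as below; the field-of-view mask argument reflects the usual practice with the DRIVE images, and scikit-learn's roc_auc_score stands in for whatever ROC implementation the authors used.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def vessel_auc(probability_map, manual_mask, fov_mask=None):
    """Area under the ROC curve of a soft vessel map against a manual tracing."""
    if fov_mask is None:
        fov_mask = np.ones_like(manual_mask, dtype=bool)   # evaluate every pixel
    y_true = manual_mask[fov_mask].astype(int)
    y_score = probability_map[fov_mask]
    return roc_auc_score(y_true, y_score)
```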

  18. Health Track System—An Automated Occupational Medical System

    OpenAIRE

    Compton, Jack E.; Hartridge, Anne D.; Maluish, Andrew G.

    1980-01-01

    The development of an automated occupational health and hazards system is being undertaken at the Department of Energy by Electronic Data Systems. This system, called the Health Track System (HTS), involves the integration and collection of data from the fields of occupational medicine, industrial hygiene, health physics, safety and personnel. This in itself is an exciting prospect, however, the scope of the system calls for it to be installed throughout DOE and contractor organizations acros...

  19. Nonreference Medical Image Edge Map Measure

    Directory of Open Access Journals (Sweden)

    Karen Panetta

    2014-01-01

    Full Text Available Edge detection is a key step in medical image processing. It is widely used to extract features, perform segmentation, and further assist in diagnosis. A poor quality edge map can result in false alarms and misses in cancer detection algorithms. Therefore, it is necessary to have a reliable edge measure to assist in selecting the optimal edge map. Existing reference-based edge measures require a ground truth edge map to evaluate the similarity between the generated edge map and the ground truth. However, ground truth images are not available for medical images. Therefore, a nonreference edge measure is ideal for medical image processing applications. In this paper, a nonreference reconstruction-based edge map evaluation (NREM) is proposed. The theoretical basis is that a good edge map keeps the structure and details of the original image and thus would yield a good reconstructed image. The NREM is based on comparing the similarity between the reconstructed image and the original image using this concept. The edge measure is used for selecting the optimal edge detection algorithm and the optimal parameters for the algorithm. Experimental results show that the quantitative evaluations given by the edge measure have good correlations with human visual analysis.

  20. Radiation biology of medical imaging

    CERN Document Server

    Kelsey, Charles A; Sandoval, Daniel J; Chambers, Gregory D; Adolphi, Natalie L; Paffett, Kimberly S

    2014-01-01

    This book provides a thorough yet concise introduction to quantitative radiobiology and radiation physics, particularly the practical and medical application. Beginning with a discussion of the basic science of radiobiology, the book explains the fast processes that initiate damage in irradiated tissue and the kinetic patterns in which such damage is expressed at the cellular level. The final section is presented in a highly practical handbook style and offers application-based discussions in radiation oncology, fractionated radiotherapy, and protracted radiation among others. The text is also supplemented by a Web site.

  1. Automated monitoring of activated sludge using image analysis

    OpenAIRE

    Motta, Maurício da; M. N. Pons; Roche, N; A.L. Amaral; Ferreira, E. C.; Alves, M.M.; Mota, M.; Vivier, H.

    2000-01-01

    An automated procedure for the characterisation by image analysis of the morphology of activated sludge has been used to monitor in a systematic manner the biomass in wastewater treatment plants. Over a period of one year, variations in terms mainly of the fractal dimension of flocs and of the amount of filamentous bacteria could be related to rain events affecting the plant influent flow rate and composition. Grand Nancy Council. Météo-France. Brasil. Ministério da Ciênc...

  2. Quantitative imaging features: extension of the oncology medical image database

    Science.gov (United States)

    Patel, M. N.; Looney, P. T.; Young, K. C.; Halling-Brown, M. D.

    2015-03-01

    Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. With the advent of digital imaging modalities and the rapid growth in both diagnostic and therapeutic imaging, the ability to harness this large influx of data is of paramount importance. The Oncology Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, and annotations and, where applicable, expert-determined ground truths describing features of interest. Medical imaging provides the ability to detect and localize many changes that are important to determine whether a disease is present or a therapy is effective by depicting alterations in anatomic, physiologic, biochemical or molecular processes. Quantitative imaging features are sensitive, specific, accurate and reproducible imaging measures of these changes. Here, we describe an extension to the OMI-DB whereby a range of imaging features and descriptors are pre-calculated using a high throughput approach. The ability to calculate multiple imaging features and data from the acquired images would be valuable and facilitate further research applications investigating detection, prognosis, and classification. The resultant data store contains more than 10 million quantitative features as well as features derived from CAD predictions. These data can be used to build predictive models to aid image classification and treatment response assessment, as well as to identify prognostic imaging biomarkers.

  3. Medical Image Registration and Surgery Simulation

    DEFF Research Database (Denmark)

    Bro-Nielsen, Morten

    1996-01-01

    This thesis explores the application of physical models in medical image registration and surgery simulation. The continuum models of elasticity and viscous fluids are described in detail, and this knowledge is used as a basis for most of the methods described here. Real-time deformable models...... growth is also presented. Using medical knowledge about the growth processes of the mandibular bone, a registration algorithm for time sequence images of the mandible is developed. Since this registration algorithm models the actual development of the mandible, it is possible to simulate the development......, and the use of selective matrix vector multiplication. Fluid medical image registration A new and faster algorithm for non-rigid registration using viscous fluid models is presented. This algorithm replaces the core part of the original algorithm with multi-resolution convolution using a new filter, which...

  4. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as, anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormity detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided-diagnosis; shape-based medical navigation; benchmark and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  5. Medical Image Steganography: Study of Medical Image Quality Degradation when Embedding Data in the Frequency Domain

    Directory of Open Access Journals (Sweden)

    M.I.Khalil

    2017-02-01

    Full Text Available Steganography is the discipline of invisible communication, hiding exchanged secret information (the message) in another digital information medium (image, video or audio). The existence of the message is kept indiscernible in the sense that no one other than the intended recipient suspects its existence. The majority of steganography techniques are implemented either in the spatial domain or in the frequency domain of the digital images, while the embedded information can be in the form of a plain or cipher message. Medical image steganography is a distinctive case of image steganography, in which both the image and the embedded information have special requirements, such as preserving the utmost clarity of the medical images and of the embedded messages. There is a trade-off between the amount of hidden information and the detectable distortion it causes in the image. The current paper studies the degradation of the medical image when it undergoes the steganography process in the frequency domain.
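    A generic example of frequency-domain embedding of the kind studied here is sketched below: one message bit is forced into the least significant bit of a quantised mid-frequency DCT coefficient of each 8x8 block. The block size, coefficient index and quantisation step are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bits_dct(image, bits, block=8, coef=(4, 3), step=16.0):
    """Hide one bit per block in the LSB of one quantised mid-frequency DCT coefficient."""
    img = image.astype(float).copy()
    h, w = img.shape
    k = 0
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if k >= len(bits):
                return img
            d = dctn(img[r:r + block, c:c + block], norm="ortho")
            q = int(np.round(d[coef] / step))
            q = (q & ~1) | int(bits[k])            # force the LSB to the message bit
            d[coef] = q * step
            img[r:r + block, c:c + block] = idctn(d, norm="ortho")
            k += 1
    return img

# e.g. stego = embed_bits_dct(ct_slice, "010011")
```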

  6. The impact of an automated dose-dispensing scheme on user compliance, medication understanding, and medication stockpiles

    DEFF Research Database (Denmark)

    Larsen, Anna Bira; Haugbølle, Lotte Stig

    2007-01-01

    BACKGROUND: It has been assumed that a new health technology, automated dose-dispensing (ADD), would result in benefits for medication users, including increased compliance, enhanced medication understanding, and improved safety. However, it was legislators and health professionals who pinpointed......' handling and consumption of medication in terms of compliance behavior, and how does the assumption of user benefits made by health professionals and legislators measure up to users' experiences with ADD? METHODS: The results built on a secondary analysis of 9 qualitative interviews with a varied selection...... the more frequent type of behavior. After switching to ADD, most users experienced no change in understanding of their medications. ADD did not lead to automatic removal of old medications in users' homes; in fact for some users, ADD led to even larger medication stockpiles. Overall, reports from patients...

  7. Granulometric profiling of aeolian dust deposits by automated image analysis

    Science.gov (United States)

    Varga, György; Újvári, Gábor; Kovács, János; Jakab, Gergely; Kiss, Klaudia; Szalai, Zoltán

    2016-04-01

    Determination of granulometric parameters is of growing interest in the Earth sciences. Particle size data of sedimentary deposits provide insights into the physicochemical environment of transport, accumulation and post-depositional alteration of sedimentary particles, and are important proxies applied in paleoclimatic reconstructions. This is especially true for aeolian dust deposits, whose fairly narrow grain size range is a consequence of the extremely selective nature of wind sediment transport. Therefore, various aspects of aeolian sedimentation (wind strength, distance to source(s), possible secondary source regions, and modes of sedimentation and transport) can be reconstructed only from precise grain size data. As terrestrial wind-blown deposits are among the most important archives of past environmental changes, proper interpretation of the proxy data is essential. Automated imaging provides a unique technique to gather direct information on the granulometric characteristics of sedimentary particles. Automated image analysis with the Malvern Morphologi G3-ID is a rarely applied, relatively new technique for particle size and shape analysis in sedimentary geology. In this study, size and shape data of several hundred thousand (or even a million) individual particles were automatically recorded from 15 loess and paleosoil samples using the captured high-resolution images. Several size parameters (e.g. circle-equivalent diameter, major axis, length, width, area) and shape parameters (e.g. elongation, circularity, convexity) were calculated by the instrument software. At the same time, the mean light intensity after transmission through each particle is automatically collected by the system as a proxy for the optical properties of the material. Intensity values depend on the chemical composition and/or thickness of the particles. The results of the automated imaging were compared to particle size data determined by three different laser diffraction instruments
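
    The size and shape parameters listed above can be approximated for a segmented binary particle image with scikit-image, as in the hedged sketch below; this is not the Morphologi instrument software, and solidity is used as a simple stand-in for convexity.

    # A hedged sketch of per-particle size and shape descriptors from a binary mask.
    import numpy as np
    from skimage.measure import label, regionprops

    def particle_descriptors(binary_mask):
        """Return per-particle size and shape parameters from a binary mask."""
        rows = []
        for p in regionprops(label(binary_mask)):
            circularity = 4.0 * np.pi * p.area / (p.perimeter ** 2) if p.perimeter > 0 else 0.0
            rows.append({
                'ce_diameter': p.equivalent_diameter,   # circle-equivalent diameter
                'major_axis': p.major_axis_length,
                'width': p.minor_axis_length,
                'area': p.area,
                'elongation': 1.0 - p.minor_axis_length / max(p.major_axis_length, 1e-9),
                'circularity': circularity,
                'convexity': p.area / p.convex_area,    # solidity used as a convexity proxy
            })
        return rows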

  8. Automated classification of colon polyps in endoscopic image data

    Science.gov (United States)

    Gross, Sebastian; Palm, Stephan; Tischendorf, Jens J. W.; Behrens, Alexander; Trautwein, Christian; Aach, Til

    2012-03-01

    Colon cancer is the third most commonly diagnosed type of cancer in the US. In recent years, however, early diagnosis and treatment have caused a significant rise in the five-year survival rate. Preventive screening is often performed by colonoscopy (endoscopic inspection of the colon mucosa). Narrow Band Imaging (NBI) is a novel diagnostic approach highlighting blood vessel structures on polyps, which are an indicator of future cancer risk. In this paper, we review our inter- and intra-observer independent system for the automated classification of polyps into hyperplasias and adenomas based on vessel structures, with the aim of further improving classification performance. To surpass previous performance limitations we derive a novel vessel segmentation approach, extract 22 features to describe complex vessel topologies, and apply three feature selection strategies. Tests are conducted on 286 NBI images with diagnostically important and challenging polyps (10 mm or smaller) taken from our representative polyp database. Evaluations are based on ground truth data determined by histopathological analysis. Feature selection by simulated annealing yields the best result, with a prediction accuracy of 96.2% (sensitivity: 97.6%, specificity: 94.2%) using eight features. Future development aims at implementing a demonstrator platform to begin clinical trials at University Hospital Aachen.
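
    The best configuration above uses simulated annealing for feature selection; the sketch below is a generic, illustrative version of that idea, not the authors' implementation (the classifier, cooling schedule and scoring are assumptions), scoring candidate feature subsets by cross-validated accuracy.

    # Illustrative simulated-annealing feature selection over a feature matrix X and labels y.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def sa_feature_selection(X, y, n_iter=500, start_temp=0.05, seed=0):
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        mask = rng.random(n_features) < 0.3          # random initial subset
        if not mask.any():
            mask[rng.integers(n_features)] = True

        def score(m):
            clf = KNeighborsClassifier(n_neighbors=5)   # assumed classifier
            return cross_val_score(clf, X[:, m], y, cv=5).mean()

        best_mask, best_score = mask.copy(), score(mask)
        cur_mask, cur_score = best_mask.copy(), best_score
        for i in range(n_iter):
            temp = start_temp * (1.0 - i / n_iter)   # linear cooling schedule
            cand = cur_mask.copy()
            cand[rng.integers(n_features)] ^= True   # flip one feature in or out
            if not cand.any():
                continue
            cand_score = score(cand)
            delta = cand_score - cur_score
            if delta > 0 or rng.random() < np.exp(delta / max(temp, 1e-9)):
                cur_mask, cur_score = cand, cand_score
                if cur_score > best_score:
                    best_mask, best_score = cur_mask.copy(), cur_score
        return best_mask, best_score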

  9. Automated Image Processing for the Analysis of DNA Repair Dynamics

    CERN Document Server

    Riess, Thorsten; Tomas, Martin; Ferrando-May, Elisa; Merhof, Dorit

    2011-01-01

    The efficient repair of cellular DNA is essential for the maintenance and inheritance of genomic information. In order to cope with the high frequency of spontaneous and induced DNA damage, a multitude of repair mechanisms have evolved. These are enabled by a wide range of protein factors specifically recognizing different types of lesions and finally restoring the normal DNA sequence. This work focuses on the repair factor XPC (xeroderma pigmentosum complementation group C), which identifies bulky DNA lesions and initiates their removal via the nucleotide excision repair pathway. The binding of XPC to damaged DNA can be visualized in living cells by following the accumulation of a fluorescent XPC fusion at lesions induced by laser microirradiation in a fluorescence microscope. In this work, an automated image processing pipeline is presented which allows the accumulation reaction to be identified and quantified without any user interaction. The image processing pipeline comprises a preprocessing stage where the ima...

  10. Image Processing in Intelligent Medical Robotic Systems

    Directory of Open Access Journals (Sweden)

    Shashev Dmitriy

    2016-01-01

    The paper deals with the use of high-performance computing systems with a parallel-operation architecture in intelligent medical systems such as medical robotic systems. A medical robotic system based on a computer vision system is an automatic control system with strict requirements for reliability, accuracy and speed of performance. The paper shows the basic block diagram of an automatic control system based on a computer vision system. The author considers the possibility of using a reconfigurable computing environment in such systems; the design principles of reconfigurable computing environments allow the reliability, accuracy and performance of the whole system to be improved considerably. The article contains a brief overview and the theory behind the research, and demonstrates the use of reconfigurable computing environments for image preprocessing, namely morphological image processing operations. It presents results of a successful simulation of the reconfigurable computing environment and the implementation of the morphological image processing operations on a test image in MATLAB Simulink.

  11. Deep learning for automated skeletal bone age assessment in X-ray images.

    Science.gov (United States)

    Spampinato, C; Palazzo, S; Giordano, D; Aldinucci, M; Leonardi, R

    2017-02-01

    Skeletal bone age assessment is a common clinical practice to investigate endocrinological, genetic and growth disorders in children. It is generally performed by radiological examination of the left hand using either the Greulich and Pyle (G&P) method or the Tanner-Whitehouse (TW) method. However, both clinical procedures show several limitations, from the examination effort required of radiologists to (most importantly) significant intra- and inter-operator variability. To address these problems, several automated approaches (especially relying on the TW method) have been proposed; nevertheless, none of them has been proven able to generalize to different races, age ranges and genders. In this paper, we propose and test several deep learning approaches to assess skeletal bone age automatically; the results showed an average discrepancy between manual and automatic evaluation of about 0.8 years, which is state-of-the-art performance. Furthermore, this is the first automated skeletal bone age assessment work tested on a public dataset and for all age ranges, races and genders, for which the source code is available, thus representing an exhaustive baseline for future research in the field. Besides the specific application scenario, this paper aims at providing answers to more general questions about deep learning on medical images: from the comparison between deep-learned features and manually-crafted ones, to the usage of deep-learning methods trained on general imagery for medical problems, to how to train a CNN with few images.
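
    A minimal sketch of the general idea, assuming a regression formulation in PyTorch (the authors' actual architectures and training pipeline are not reproduced): a small CNN maps a hand radiograph to a single continuous age estimate.

    # Hypothetical CNN regression sketch for bone age; placeholder data only.
    import torch
    import torch.nn as nn

    class BoneAgeRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 1)            # predicted age in years

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    # One hypothetical training step on a batch of radiographs with known ages.
    model = BoneAgeRegressor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    images = torch.randn(8, 1, 256, 256)            # placeholder batch
    ages = torch.rand(8, 1) * 18.0
    loss = nn.functional.l1_loss(model(images), ages)
    loss.backward()
    optimizer.step()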

  12. Guidelines for Managing the Implementation of Automated Medical Systems

    OpenAIRE

    Brown, Bob; Harbort, Bob; Kaplan, Bonnie; Maxwell, Joseph

    1981-01-01

    The nontechnical aspects of medical computing systems are as important as the technical. We offer some reasons why. We discuss the social context of systems, the human engineering necessary for medical systems to work, and the specifics of the man machine interface. We offer some suggestions for planning for the change which comes with any new method. Finally, we take a step that has often been neglected in such “how to do it” approaches and describe a theoretical framework for understanding ...

  13. Automated analysis of image mammogram for breast cancer diagnosis

    Science.gov (United States)

    Nurhasanah, Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases inside the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to characterise the density of normal and abnormal tissue, and applies a segmentation technique based on the K-means clustering algorithm to abnormal images to determine organ boundaries and calculate the area of the segmented regions. The results show that the fractal method based on 2D Fourier analysis can be used to distinguish between normal and abnormal breasts, and that the segmentation technique with the K-means clustering algorithm is able to generate the boundaries of normal and abnormal tissue, so that the area of the abnormal tissue can be determined.
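
    A hedged sketch of the K-means step described above: pixel intensities are clustered and the brightest cluster is taken as candidate abnormal tissue, whose area is then reported. The cluster count, cluster choice and pixel area are illustrative assumptions.

    # Intensity-based K-means segmentation with area measurement (illustrative).
    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_region_area(image, n_clusters=3, pixel_area_mm2=1.0):
        """Segment a grayscale image by intensity clustering; return a mask and its area."""
        X = image.reshape(-1, 1).astype(float)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
        brightest = np.argmax(km.cluster_centers_.ravel())
        mask = (km.labels_ == brightest).reshape(image.shape)
        return mask, mask.sum() * pixel_area_mm2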

  14. Automated segmentation of three-dimensional MR brain images

    Science.gov (United States)

    Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee

    2006-03-01

    Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images, which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, other organs, etc.), and spinal cord restoration. In pre-processing, we perform adaptive thresholding which takes into account the variable intensities of MR brain images corresponding to various image acquisition conditions. In the segmentation process, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, the final 3D brain regions are obtained by applying an OR operation to the segmentation results of the three planes. Finally, we reconstruct the spinal cord truncated during the previous processes. Experiments are performed with fifteen 8-bit gray-scale 3D MR brain image sets. The experimental results show that the proposed algorithm is fast and provides robust and satisfactory results.
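
    A simplified sketch of the plane-wise strategy described above, assuming a global threshold in place of the adaptive one: each 2D slice along the three orthogonal planes is cleaned with morphological operations and the three results are combined with an OR operation.

    # Rough, illustrative plane-wise morphological cleaning of a thresholded MR volume.
    import numpy as np
    from scipy import ndimage

    def rough_brain_mask(volume, threshold, iterations=2):
        binary = volume > threshold                  # a global threshold stands in for adaptive thresholding
        masks = []
        for axis in range(3):                        # sagittal, coronal, axial planes
            m = np.zeros_like(binary)
            for i in range(binary.shape[axis]):
                sl = np.take(binary, i, axis=axis)
                sl = ndimage.binary_opening(sl, iterations=iterations)
                sl = ndimage.binary_fill_holes(sl)
                idx = [slice(None)] * 3
                idx[axis] = i
                m[tuple(idx)] = sl
            masks.append(m)
        return masks[0] | masks[1] | masks[2]        # OR of the three plane-wise results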

  15. Medical image segmentation using improved FCM

    Institute of Scientific and Technical Information of China (English)

    ZHANG XiaoFeng; ZHANG CaiMing; TANG WenJing; WEI ZhenWen

    2012-01-01

    Image segmentation is one of the most important problems in medical image processing, and the existence of the partial volume effect (PVE) and other phenomena makes the problem much more complex. Fuzzy C-means (FCM), an effective tool for dealing with PVE, nevertheless faces great challenges in efficiency. To address this, this paper proposes an improved FCM algorithm based on the histogram of the given image, denoted HisFCM, which is divided into two phases. The first phase retrieves several intervals on which to compute cluster centroids, and the second performs image segmentation based on the improved FCM algorithm. Compared with FCM and other improved algorithms, HisFCM is of much higher efficiency with satisfying results. Experiments on medical images show that HisFCM can achieve good segmentation results in less than 0.1 second and can satisfy the real-time requirements of medical image processing.
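
    For reference, a compact sketch of the standard fuzzy C-means iteration on raw intensities (the baseline that HisFCM accelerates); the histogram-based interval selection itself is not reproduced here.

    # Standard fuzzy C-means on a 1-D array of intensities (illustrative baseline).
    import numpy as np

    def fcm_intensities(x, n_clusters=3, m=2.0, n_iter=50, seed=0):
        """x: 1-D array of intensities. Returns cluster centroids and fuzzy memberships."""
        rng = np.random.default_rng(seed)
        u = rng.random((len(x), n_clusters))
        u /= u.sum(axis=1, keepdims=True)            # random fuzzy memberships
        for _ in range(n_iter):
            um = u ** m
            centroids = (um.T @ x) / um.sum(axis=0)
            d = np.abs(x[:, None] - centroids[None, :]) + 1e-12
            u = 1.0 / (d ** (2.0 / (m - 1.0)))       # membership ~ inverse distance
            u /= u.sum(axis=1, keepdims=True)
        return centroids, u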

  16. An automated deformable image registration evaluation of confidence tool

    Science.gov (United States)

    Kirby, Neil; Chen, Josephine; Kim, Hojin; Morin, Olivier; Nie, Ke; Pouliot, Jean

    2016-04-01

    Deformable image registration (DIR) is a powerful tool for radiation oncology, but it can produce errors. Beyond this, DIR accuracy is not a fixed quantity and varies on a case-by-case basis. The purpose of this study is to explore the possibility of an automated program to create a patient- and voxel-specific evaluation of DIR accuracy. AUTODIRECT is a software tool that was developed to perform this evaluation for the application of a clinical DIR algorithm to a set of patient images. In brief, AUTODIRECT uses algorithms to generate deformations and applies them to these images (along with processing) to generate sets of test images, with known deformations that are similar to the actual ones and with realistic noise properties. The clinical DIR algorithm is applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student’s t distribution. In this study, four commercially available DIR algorithms were used to deform a dose distribution associated with a virtual pelvic phantom image set, and AUTODIRECT was used to generate dose uncertainty estimates for each deformation. The virtual phantom image set has a known ground-truth deformation, so the true dose-warping errors of the DIR algorithms were also known. AUTODIRECT predicted error patterns that closely matched the actual error spatial distribution. On average AUTODIRECT overestimated the magnitude of the dose errors, but tuning the AUTODIRECT algorithms should improve agreement. This proof-of-principle test demonstrates the potential for the AUTODIRECT algorithm as an empirical method to predict DIR errors.

  17. [Promoting "well-treatment" in medical imaging].

    Science.gov (United States)

    Renouf, Nicole; Llop, Marc

    2012-12-01

    A project to promote "well-treatment" has been initiated in the medical imaging department of a Parisian hospital. With the aim of promoting the well-being of the patient and developing shared values of empathy and respect, the members of this medico-technical team have undertaken to build a culture of "well-treatment" which respects the patient's dignity and rights.

  18. APES Beamforming Applied to Medical Ultrasound Imaging

    DEFF Research Database (Denmark)

    Blomberg, Ann E. A.; Holfort, Iben Kraglund; Austeng, Andreas

    2009-01-01

    Recently, adaptive beamformers have been introduced to medical ultrasound imaging. The primary focus has been on the minimum variance (MV) (or Capon) beamformer. This work investigates an alternative but closely related beamformer, the Amplitude and Phase Estimation (APES) beamformer. APES offers...

  19. Gestalt descriptions embodiments and medical image interpretation

    DEFF Research Database (Denmark)

    Friis, Jan Kyrre Berg Olsen

    2016-01-01

    In this paper I will argue that medical specialists interpret and diagnose through technological mediations like X-ray and fMRI images, and by actualizing embodied skills tacitly they are determining the identity of objects in the perceptual field. The initial phase of human interpretation of vis...

  20. Lesion Contrast Enhancement in Medical Ultrasound Imaging

    DEFF Research Database (Denmark)

    Stetson, Paul F.; Sommer, F.G.; Macovski, A.

    1997-01-01

    Methods for improving the contrast-to-noise ratio (CNR) of low-contrast lesions in medical ultrasound imaging are described. Differences in the frequency spectra and amplitude distributions of the lesion and its surroundings can be used to increase the CNR of the lesion relative to the background...
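
    One common definition of the contrast-to-noise ratio referred to above can be computed from pixel samples inside a lesion mask and a background mask, as in the short sketch below (other definitions exist).

    # One common CNR definition: |mean difference| divided by background noise.
    import numpy as np

    def lesion_cnr(image, lesion_mask, background_mask):
        lesion = image[lesion_mask].astype(float)
        bg = image[background_mask].astype(float)
        return np.abs(lesion.mean() - bg.mean()) / bg.std()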

  1. Medical image registration using sparse coding of image patches.

    Science.gov (United States)

    Afzali, Maryam; Ghaffari, Aboozar; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid

    2016-06-01

    Image registration is a basic task in medical image processing applications like group analysis and atlas construction. Similarity measure is a critical ingredient of image registration. Intensity distortion of medical images is not considered in most previous similarity measures. Therefore, in the presence of bias field distortions, they do not generate an acceptable registration. In this paper, we propose a sparse based similarity measure for mono-modal images that considers non-stationary intensity and spatially-varying distortions. The main idea behind this measure is that the aligned image is constructed by an analysis dictionary trained using the image patches. For this purpose, we use "Analysis K-SVD" to train the dictionary and find the sparse coefficients. We utilize image patches to construct the analysis dictionary and then we employ the proposed sparse similarity measure to find a non-rigid transformation using free form deformation (FFD). Experimental results show that the proposed approach is able to robustly register 2D and 3D images in both simulated and real cases. The proposed method outperforms other state-of-the-art similarity measures and decreases the transformation error compared to the previous methods. Even in the presence of bias field distortion, the proposed method aligns images without any preprocessing.

  2. Model observers in medical imaging research.

    Science.gov (United States)

    He, Xin; Park, Subok

    2013-10-04

    Model observers play an important role in the optimization and assessment of imaging devices. In this review paper, we first discuss the basic concepts of model observers, which include the mathematical foundations and psychophysical considerations in designing both optimal observers for optimizing imaging systems and anthropomorphic observers for modeling human observers. Second, we survey a few state-of-the-art computational techniques for estimating model observers and the principles of implementing these techniques. Finally, we review a few applications of model observers in medical imaging research.

  3. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Abstract Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated into a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often presented as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy of extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links

  4. An investigation of image compression on NIIRS rating degradation through automated image analysis

    Science.gov (United States)

    Chen, Hua-Mei; Blasch, Erik; Pham, Khanh; Wang, Zhonghai; Chen, Genshe

    2016-05-01

    The National Imagery Interpretability Rating Scale (NIIRS) is a subjective quantification of static image quality widely adopted by the Geographic Information System (GIS) community. Efforts have been made to relate NIIRS image quality to sensor parameters using the general image quality equations (GIQE), which make it possible to automatically predict the NIIRS rating of an image through automated image analysis. In this paper, we present an automated procedure to extract a line edge profile, based on which the NIIRS rating of a given image can be estimated through the GIQEs if the ground sampling distance (GSD) is known. The steps involved include straight edge detection, edge stripe determination, and edge intensity determination, among others. Next, we show how to employ the GIQEs to estimate NIIRS degradation without knowing the ground-truth GSD and investigate the effects of image compression on the degradation of an image's NIIRS rating. Specifically, we consider the JPEG and JPEG2000 image compression standards. The extensive experimental results demonstrate the effect of image compression on the ground sampling distance and relative edge response, which are the major factors affecting the NIIRS rating.
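
    A hedged sketch of one ingredient of this workflow, estimating the relative edge response (RER) from an averaged edge profile sampled at one-pixel steps across a straight edge; the sub-pixel fitting and the remaining GIQE terms are omitted.

    # Illustrative RER estimate from a 1-D edge spread function (ESF) profile.
    import numpy as np

    def relative_edge_response(edge_profile):
        """edge_profile: 1-D averaged edge profile sampled at one-pixel steps."""
        esf = (edge_profile - edge_profile.min()) / (edge_profile.max() - edge_profile.min())
        x = np.arange(len(esf)) - np.argmax(np.gradient(esf))   # centre on the steepest point
        # RER is the ESF difference half a pixel either side of the edge location.
        return np.interp(0.5, x, esf) - np.interp(-0.5, x, esf)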

  5. Scale-Specific Multifractal Medical Image Analysis

    Directory of Open Access Journals (Sweden)

    Boris Braverman

    2013-01-01

    irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including the estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value.
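
    A minimal sketch of box counting on a binary image that keeps the dimension scale-dependent, reporting local slopes of log N(s) versus log(1/s) per scale pair rather than a single fitted number; the integral-image speedup described in the paper is not included.

    # Scale-dependent box-counting sketch; assumes a non-empty binary image.
    import numpy as np

    def box_counts(binary, sizes=(2, 4, 8, 16, 32, 64)):
        counts = []
        for s in sizes:
            h = (binary.shape[0] // s) * s
            w = (binary.shape[1] // s) * s
            blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
        return np.array(sizes), np.array(counts)

    def scale_dependent_dimension(binary):
        sizes, counts = box_counts(binary)
        log_inv_s, log_n = np.log(1.0 / sizes), np.log(counts)
        # local slope between successive scales ~ box-counting dimension at that scale
        return np.diff(log_n) / np.diff(log_inv_s), sizes[:-1]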

  6. Automated in situ brain imaging for mapping the Drosophila connectome.

    Science.gov (United States)

    Lin, Chi-Wen; Lin, Hsuan-Wen; Chiu, Mei-Tzu; Shih, Yung-Hsin; Wang, Ting-Yuan; Chang, Hsiu-Ming; Chiang, Ann-Shyn

    2015-01-01

    Mapping the connectome, a wiring diagram of the entire brain, requires large-scale imaging of numerous single neurons with diverse morphology. It is a formidable challenge to reassemble these neurons into a virtual brain and correlate their structural networks with neuronal activities, which are measured in different experiments to analyze the informational flow in the brain. Here, we report an in situ brain imaging technique called Fly Head Array Slice Tomography (FHAST), which permits the reconstruction of structural and functional data to generate an integrative connectome in Drosophila. Using FHAST, the head capsules of an array of flies can be opened with a single vibratome sectioning to expose the brains, replacing the painstaking and inconsistent brain dissection process. FHAST can reveal in situ brain neuroanatomy with minimal distortion to neuronal morphology and maintain intact neuronal connections to peripheral sensory organs. Most importantly, it enables the automated 3D imaging of 100 intact fly brains in each experiment. The established head model with in situ brain neuroanatomy allows functional data to be accurately registered and associated with 3D images of single neurons. These integrative data can then be shared, searched, visualized, and analyzed for understanding how brain-wide activities in different neurons within the same circuit function together to control complex behaviors.

  7. Automated pollen identification using microscopic imaging and texture analysis.

    Science.gov (United States)

    Marcos, J Víctor; Nava, Rodrigo; Cristóbal, Gabriel; Redondo, Rafael; Escalante-Ramírez, Boris; Bueno, Gloria; Déniz, Óscar; González-Porto, Amelia; Pardo, Cristina; Chung, François; Rodríguez, Tomás

    2015-01-01

    Pollen identification is required in different scenarios such as prevention of allergic reactions, climate analysis or apiculture. However, it is a time-consuming task since experts are required to recognize each pollen grain through the microscope. In this study, we performed an exhaustive assessment of the utility of texture analysis for the automated characterisation of pollen samples. A database composed of 1800 brightfield microscopy images of pollen grains from 15 different taxa was used for this purpose. A pattern recognition-based methodology was adopted to perform pollen classification. Four different methods were evaluated for texture feature extraction from the pollen image: Haralick's gray-level co-occurrence matrices (GLCM), log-Gabor filters (LGF), local binary patterns (LBP) and discrete Tchebichef moments (DTM). Fisher's discriminant analysis and k-nearest neighbour were subsequently applied to perform dimensionality reduction and multivariate classification, respectively. Our results reveal that LGF and DTM, which are based on the spectral properties of the image, outperformed GLCM and LBP in the proposed classification problem. Furthermore, we found that the combination of all the texture features resulted in the highest performance, yielding an accuracy of 95%. Therefore, thorough texture characterisation could be considered in further implementations of automatic pollen recognition systems based on image processing techniques.
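
    Two of the texture descriptors compared above (GLCM and LBP) can be computed with scikit-image as in the hedged sketch below; the log-Gabor and Tchebichef-moment features and the classification stage are not shown.

    # GLCM (Haralick-style) and LBP texture features for one grayscale pollen image.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
    # (graycomatrix/graycoprops are spelled greycomatrix/greycoprops in older scikit-image)

    def texture_features(gray_u8):
        """gray_u8: 2-D uint8 image of a single pollen grain."""
        glcm = graycomatrix(gray_u8, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        haralick = [graycoprops(glcm, p).ravel()
                    for p in ('contrast', 'homogeneity', 'energy', 'correlation')]
        lbp = local_binary_pattern(gray_u8, P=8, R=1, method='uniform')
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.concatenate([np.concatenate(haralick), lbp_hist])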

  8. Estimating fractal dimension of medical images

    Science.gov (United States)

    Penn, Alan I.; Loew, Murray H.

    1996-04-01

    Box counting (BC) is widely used to estimate the fractal dimension (fd) of medical images on the basis of a finite set of pixel data. The fd is then used as a feature to discriminate between healthy and unhealthy conditions. We show that BC is ineffective when used on small data sets and give examples of published studies in which researchers have obtained contradictory and flawed results by using BC to estimate the fd of data-limited medical images. We present a new method for estimating fd of data-limited medical images. In the new method, fractal interpolation functions (FIFs) are used to generate self-affine models of the underlying image; each model, upon discretization, approximates the original data points. The fd of each FIF is analytically evaluated. The mean of the fds of the FIFs is the estimate of the fd of the original data. The standard deviation of the fds of the FIFs is a confidence measure of the estimate. The goodness-of-fit of the discretized models to the original data is a measure of self-affinity of the original data. In a test case, the new method generated a stable estimate of fd of a rib edge in a standard chest x-ray; box counting failed to generate a meaningful estimate of the same image.

  9. Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)

    Science.gov (United States)

    Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram

    2014-03-01

    Recent studies have demonstrated that applying Automated Breast Ultrasound in addition to mammography in women with dense breasts can lead to the additional detection of small, early-stage breast cancers which are occult in the corresponding mammograms. In this paper, we propose a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from Automated Breast Ultrasound Systems. The nipple location is a valuable landmark for reporting the position of possible abnormalities in a breast or for guiding image registration. To detect the nipple location, all images were normalized. Subsequently, features were extracted in a multi-scale approach and classification experiments were performed using a gentle boost classifier to identify the nipple location. The method was applied to a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-Systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (anterior-posterior) views and in 79% of the other views.

  10. Neural networks: Application to medical imaging

    Science.gov (United States)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  11. Medical Image Protection using steganography by crypto-image as cover Image

    Directory of Open Access Journals (Sweden)

    Vinay Pandey

    2012-09-01

    This paper presents an approach to securing the transmission of medical images; the presented algorithms are applied to images. This work presents a new method that combines image cryptography, data hiding and steganography for denoised and secure image transmission. In this method, we encrypt the original image with a two-share encryption algorithm, then embed patient information in the encrypted image using a lossless data-embedding (data hiding) technique. For additional security, we then apply steganography, using the encrypted version of another medical image as the cover image and the embedded image as the secret image, protected with a private key. At the receiver side, when the message arrives, we apply the inverse methods in reverse order to recover the original image and the patient information; to remove noise, we extract the image before decrypting the message. We have applied our method to medical images and present the results.

  12. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    Science.gov (United States)

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.

  13. Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System

    Science.gov (United States)

    2015-03-01

    Master's thesis by Kyle P. Werner, 2Lt, USAF (AFIT-ENG-MS-15-M-048): Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System. Approved for public release; distribution unlimited.

  14. Automated image analysis for space debris identification and astrometric measurements

    Science.gov (United States)

    Piattoni, Jacopo; Ceruti, Alessandro; Piergentili, Fabrizio

    2014-10-01

    Space debris is a challenging problem for human activity in space. Observation campaigns are conducted around the globe to detect and track uncontrolled space objects. One of the main problems in optical observation is obtaining useful information about the debris dynamical state from the collected images. For orbit determination, the most relevant information embedded in an optical observation is the precise angular position, which can be evaluated by astrometry procedures that compare the stars inside the image with star catalogs. This is typically a time-consuming process if done by a human operator, which makes the task impractical when dealing with large amounts of data, on the order of thousands of images per night, generated by routinely conducted observations. This paper investigates an automated procedure that is capable of recognizing the debris track inside a picture, calculating the celestial coordinates of the image's center, and using this information to compute the debris angular position in the sky. This procedure has been implemented in a software code that does not require human interaction and works without any supplemental information besides the image itself, detecting space objects and solving for their angular position without a priori information. The algorithm for object detection was developed within the research team. For the star field computation, the astrometry.net software, released under the GPL v2 license, was used. The complete procedure was validated by extensive testing, using the images obtained in the observation campaign performed in a joint project between the Italian Space Agency (ASI) and the University of Bologna at the Broglio Space Center, Kenya.

  15. Towards automated assistance for operating home medical devices.

    Science.gov (United States)

    Gao, Zan; Detyniecki, Marcin; Chen, Ming-Yu; Wu, Wen; Hauptmann, Alexander G; Wactlar, Howard D

    2010-01-01

    To detect errors when subjects operate a home medical device, we observe them with multiple cameras. We then perform action recognition with a robust approach that explicitly encodes motion information: the algorithm detects interest points and encodes not only their local appearance but also explicitly models local motion. Our goal is to recognize individual human actions in the operation of a home medical device to see if the patient has correctly performed the required actions in the prescribed sequence. Using a specific infusion pump as a test case, requiring 22 operation steps from 6 action classes, our best classifier selects high-likelihood action estimates from the 4 available cameras to obtain an average class recognition rate of 69%.

  16. Massive Medical Images Retrieval System Based on Hadoop

    Directory of Open Access Journals (Sweden)

    Qing-An YAO

    2014-02-01

    In order to improve the efficiency of massive medical image retrieval and to address the shortcomings of single-node medical image retrieval systems, a massive medical image retrieval system based on Hadoop is put forward. First, the Brushlet transform and the local binary pattern algorithm are introduced to extract features from the example medical image, and the image feature library is stored in HDFS. Map tasks then match the example image features against the features in the feature library, while the Reduce task receives the calculation results of each Map task and ranks the results according to similarity. Finally, the optimal retrieval results are found according to the ranking. The experimental results show that, compared with other medical image retrieval systems, the Hadoop-based medical image retrieval system can reduce the time of image storage and retrieval and improve retrieval speed.
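
    A toy, single-machine sketch of the Map/Reduce flow described above: a map step computes a similarity between the query features and each stored feature vector, and a reduce step gathers and ranks the results. The similarity measure and data layout are illustrative assumptions; the actual system runs on Hadoop with features stored in HDFS.

    # Toy map/reduce-style ranking over (image_id, feature_vector) pairs.
    import numpy as np

    def map_similarity(query_vec, item):
        image_id, feature_vec = item
        f, q = np.asarray(feature_vec, float), np.asarray(query_vec, float)
        sim = float(f @ q / (np.linalg.norm(f) * np.linalg.norm(q) + 1e-12))
        return image_id, sim                          # cosine similarity as an example

    def reduce_rank(mapped, top_k=10):
        return sorted(mapped, key=lambda kv: kv[1], reverse=True)[:top_k]

    # feature_library: hypothetical iterable of (image_id, feature_vector) pairs
    # results = reduce_rank(map_similarity(query, item) for item in feature_library)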

  17. MO-FG-303-04: A Smartphone Application for Automated Mechanical Quality Assurance of Medical Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H [Interdisciplinary Program in Radiation applied Life Science, College of Medicine, Seoul National University, Seoul (Korea, Republic of); Lee, H; Choi, K [Program in Biomedical Radiation Sciences, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Seoul (Korea, Republic of); Ye, S [Interdisciplinary Program in Radiation applied Life Science, College of Medicine, Seoul National University, Seoul (Korea, Republic of); Program in Biomedical Radiation Sciences, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Seoul (Korea, Republic of); Department of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of)

    2015-06-15

    Purpose: The mechanical quality assurance (QA) of medical accelerators consists of a time-consuming series of procedures. Since most of the procedures are done manually (e.g., checking the gantry rotation angle with the naked eye using a level attached to the gantry), the process has a high potential for human error. To remove the possibility of human error and reduce the procedure duration, we developed a smartphone application for automated mechanical QA. Methods: The automated process was prepared by attaching a smartphone to the gantry facing upward. For the assessment of gantry and collimator angle indications, the motion sensors (gyroscope, accelerometer, and magnetic field sensor) embedded in the smartphone were used. For the assessment of the jaw position indicator, cross-hair centering, and the optical distance indicator (ODI), an optical image processing module using pictures taken by the high-resolution camera embedded in the smartphone was implemented. The application was developed with the Android software development kit (SDK) and the OpenCV library. Results: The system accuracies in terms of angle detection error and length detection error were < 0.1° and < 1 mm, respectively. The mean absolute errors for the gantry and collimator rotation angles were 0.03° and 0.041°, respectively. The mean absolute error for the measured light field size was 0.067 cm. Conclusion: The automated system we developed can be used for the mechanical QA of medical accelerators with proven accuracy. For more convenient use of this application, a wireless communication module is under development. This system has strong potential for the automation of other QA procedures such as light/radiation field coincidence and couch translation/rotation.

  18. Automated microaneurysm detection algorithms applied to diabetic retinopathy retinal images

    Directory of Open Access Journals (Sweden)

    Akara Sopharak

    2013-07-01

    Diabetic retinopathy is the commonest cause of blindness in working-age people. It is characterised and graded by the development of retinal microaneurysms, haemorrhages and exudates. The damage caused by diabetic retinopathy can be prevented if it is treated in its early stages. Therefore, automated early detection can limit the severity of the disease, improve the follow-up management of diabetic patients and assist ophthalmologists in investigating and treating the disease more efficiently. This review focuses on microaneurysm detection as the earliest clinically localised characteristic of diabetic retinopathy, a frequently observed complication in both Type 1 and Type 2 diabetes. Algorithms used for microaneurysm detection from retinal images are reviewed. A number of features used to extract microaneurysms are summarised. Furthermore, a comparative analysis of reported methods used to automatically detect microaneurysms is presented and discussed. The performance of the methods and their complexity are also discussed.

  19. Image auto-zoom technology for AFM automation

    Institute of Scientific and Technical Information of China (English)

    LIU Wen-liang; QIAN Jian-qiang; LI Yuan

    2009-01-01

    For atomic force microscope (AFM) automation, we automatically extract the most valuable sub-region of a given AFM image for subsequent scanning at higher resolution. Two objective functions are derived from an analysis of how the information content of a sub-region can be evaluated, and the corresponding algorithm principles, one based on the standard deviation and one on Discrete Cosine Transform (DCT) compression, are determined mathematically. Algorithm implementations are analyzed, and two sub-region selection patterns, fixed-grid mode and sub-region walk mode, are compared. To speed up the DCT-compression algorithm, which is too slow for practical application, a new algorithm is proposed based on an analysis of the block structure of the DCT computation; it runs hundreds of times faster than the original. Implementation results show that this technique can be applied to automatic AFM operation. Finally, the difference between the two objective functions is discussed with detailed computations.
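
    An illustrative sketch of the two objective functions mentioned above, scored on a fixed grid of sub-regions: one uses the standard deviation, the other the energy of the non-DC DCT coefficients as a compressibility proxy. The accelerated block-DCT algorithm itself is not reproduced.

    # Score each sub-region of a fixed grid by standard deviation and by non-DC DCT energy.
    import numpy as np
    from scipy.fftpack import dct

    def subregion_scores(image, grid=4):
        h, w = image.shape
        bh, bw = h // grid, w // grid
        scores = []
        for i in range(grid):
            for j in range(grid):
                sub = image[i*bh:(i+1)*bh, j*bw:(j+1)*bw].astype(float)
                coeffs = dct(dct(sub, axis=0, norm='ortho'), axis=1, norm='ortho')
                hf_energy = np.sum(coeffs ** 2) - coeffs[0, 0] ** 2   # drop the DC term
                scores.append(((i, j), sub.std(), hf_energy))
        return scores   # pick the sub-region maximising either criterion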

  20. Automated patient and medication payment method for clinical trials

    Directory of Open Access Journals (Sweden)

    Yawn BP

    2013-01-01

    Barbara P Yawn,1 Suzanne Madison,1 Susan Bertram,1 Wilson D Pace,2 Anne Fuhlbrigge,3 Elliot Israel,3 Dawn Littlefield,1 Margary Kurland,1 Michael E Wechsler4; 1Olmsted Medical Center, Department of Research, Rochester, MN; 2UCDHSC, Department of Family Medicine, University of Colorado Health Science Centre, Aurora, CO; 3Brigham and Women's Hospital, Pulmonary and Critical Care Division, Boston, MA; 4National Jewish Medical Center, Division of Pulmonology, Denver, CO, USA. Background: Published reports and studies related to patient compensation for clinical trials focus primarily on the ethical issues related to appropriate amounts to reimburse for patients' time and risk burden. Little has been published regarding the method of payment for patient participation. As clinical trials move into widely dispersed community practices and more complex designs, the method of payment also becomes more complex. Here we review the decision process and payment method selected for a primary care-based randomized clinical trial of asthma management in Black Americans. Methods: The method selected is a credit card system designed specifically for clinical trials that allows both fixed and variable real-time payments. We operationalized the study design by providing each patient with two cards, one for reimbursement for study visits and one for payment of medication costs directly to the pharmacies. Results: Of the 1015 patients enrolled, only two refused use of the ClinCard, requesting cash payments for visits, and only rarely did a weekend or fill-in pharmacist refuse to use the card system for payment directly to the pharmacy. Overall, the system has been well accepted by patients and local study teams. The ClinCard administrative system facilitates the fiscal accounting and medication adherence record-keeping by the central teams. Monthly fees are modest, and all 12 study institutional review boards approved use of the system without concern for patient

  1. Automating quality assurance of digital linear accelerators using a radioluminescent phosphor coated phantom and optical imaging

    Science.gov (United States)

    Jenkins, Cesare H.; Naczynski, Dominik J.; Yu, Shu-Jung S.; Yang, Yong; Xing, Lei

    2016-09-01

    Performing mechanical and geometric quality assurance (QA) tests for medical linear accelerators (LINAC) is a predominantly manual process that consumes significant time and resources. In order to alleviate this burden this study proposes a novel strategy to automate the process of performing these tests. The autonomous QA system consists of three parts: (1) a customized phantom coated with radioluminescent material; (2) an optical imaging system capable of visualizing the incidence of the radiation beam, light field or lasers on the phantom; and (3) software to process the captured signals. The radioluminescent phantom, which enables visualization of the radiation beam on the same surface as the light field and lasers, is placed on the couch and imaged while a predefined treatment plan is delivered from the LINAC. The captured images are then processed to self-calibrate the system and perform measurements for evaluating light field/radiation coincidence, jaw position indicators, cross-hair centering, treatment couch position indicators and localizing laser alignment. System accuracy is probed by intentionally introducing errors and by comparing with current clinical methods. The accuracy of self-calibration is evaluated by examining measurement repeatability under fixed and variable phantom setups. The integrated system was able to automatically collect, analyze and report the results for the mechanical alignment tests specified by TG-142. The average difference between introduced and measured errors was 0.13 mm. The system was shown to be consistent with current techniques. Measurement variability increased slightly from 0.1 mm to 0.2 mm when the phantom setup was varied, but no significant difference in the mean measurement value was detected. Total measurement time was less than 10 minutes for all tests as a result of automation. The system’s unique features of a phosphor-coated phantom and fully automated, operator independent self-calibration offer the

  2. Automated segmentation and geometrical modeling of the tricuspid aortic valve in 3D echocardiographic images.

    Science.gov (United States)

    Pouch, Alison M; Wang, Hongzhi; Takabe, Manabu; Jackson, Benjamin M; Sehgal, Chandra M; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2013-01-01

    The aortic valve has been described with variable anatomical definitions, and the consistency of 2D manual measurement of valve dimensions in medical image data has been questionable. Given the importance of image-based morphological assessment in the diagnosis and surgical treatment of aortic valve disease, there is considerable need to develop a standardized framework for 3D valve segmentation and shape representation. Towards this goal, this work integrates template-based medial modeling and multi-atlas label fusion techniques to automatically delineate and quantitatively describe aortic leaflet geometry in 3D echocardiographic (3DE) images, a challenging task that has been explored only to a limited extent. The method makes use of expert knowledge of aortic leaflet image appearance, generates segmentations with consistent topology, and establishes a shape-based coordinate system on the aortic leaflets that enables standardized automated measurements. In this study, the algorithm is evaluated on 11 3DE images of normal human aortic leaflets acquired at mid systole. The clinical relevance of the method is its ability to capture leaflet geometry in 3DE image data with minimal user interaction while producing consistent measurements of 3D aortic leaflet geometry.

  3. Automated processing of webcam images for phenological classification.

    Science.gov (United States)

    Bothmann, Ludwig; Menzel, Annette; Menze, Bjoern H; Schunk, Christian; Kauermann, Göran

    2017-01-01

    Along with global climate change, there is increasing interest in its effect on phenological patterns such as the start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images per day of the same natural scene, showing for example trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert, and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfying results and allows dates of phenological change points to be determined, it is associated with a considerable amount of manual work and is therefore constrained to a limited number of webcams. In particular, this prevents applying the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. In order to be able to scale up the analysis to several hundreds or thousands of webcams, we propose and evaluate two automated alternatives for the definition of regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels' time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert, with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet, yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big data applications by analyzing 13988 webcams from the AMOS database. All developed methods are implemented in the statistical software
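
    A hedged sketch of the semi-supervised variant described above: pixels whose greenness time series correlates strongly with a hand-picked prototype pixel are kept as the region of interest. The correlation threshold is an illustrative assumption.

    # Select ROI pixels by correlation of their greenness time series with a prototype pixel.
    import numpy as np

    def select_roi_pixels(greenness, prototype_rc, threshold=0.8):
        """greenness: array of shape (T, H, W) with per-day percentage greenness."""
        T, H, W = greenness.shape
        proto = greenness[:, prototype_rc[0], prototype_rc[1]]
        series = greenness.reshape(T, -1)
        series_c = series - series.mean(axis=0)
        proto_c = proto - proto.mean()
        corr = (series_c * proto_c[:, None]).sum(axis=0) / (
            np.linalg.norm(series_c, axis=0) * np.linalg.norm(proto_c) + 1e-12)
        return corr.reshape(H, W) >= threshold      # boolean ROI mask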

  4. Automated determination of spinal centerline in CT and MR images

    Science.gov (United States)

    Štern, Darko; Vrtovec, Tomaž; Pernuš, Franjo; Likar, Boštjan

    2009-02-01

    The spinal curvature is one of the most important parameters for the evaluation of spinal deformities. The spinal centerline, represented by the curve that passes through the centers of the vertebral bodies in three-dimensions (3D), allows valid quantitative measurements of the spinal curvature at any location along the spine. We propose a novel automated method for the determination of the spinal centerline in 3D spine images. Our method exploits the anatomical property that the vertebral body walls are cylindrically-shaped and therefore the lines normal to the edges of the vertebral body walls most often intersect in the middle of the vertebral bodies, i.e. at the location of spinal centerline. These points of intersection are first obtained by a novel algorithm that performs a selective search in the directions normal to the edges of the structures and then connected with a parametric curve that represents the spinal centerline in 3D. As the method is based on anatomical properties of the 3D spine anatomy, it is modality-independent, i.e. applicable to images obtained by computed tomography (CT) and magnetic resonance (MR). The proposed method was evaluated on six CT and four MR images (T1- and T2-weighted) of normal spines and on one scoliotic CT spine image. The qualitative and quantitative results for the normal spines show that the spinal centerline can be successfully determined in both CT and MR spine images, while the results for the scoliotic spine indicate that the method may also be used to evaluate pathological curvatures.

  5. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation.

    Science.gov (United States)

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A; Yuan, Jie; Wang, Xueding; Carson, Paul L

    2016-02-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.
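
    The overlap-based agreement reported above can be computed as in the short sketch below, here using the Dice coefficient; the paper's overlap ratio may be defined slightly differently.

    # Dice overlap between two binary segmentation masks.
    import numpy as np

    def dice_overlap(mask_a, mask_b):
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum() + 1e-12)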

  6. HEP technologies to address medical imaging challenges

    CERN Document Server

    CERN. Geneva

    2016-01-01

    Developments in detector technologies aimed at solving challenges in present and future CERN experiments, particularly at the LHC, have triggered exceptional advances in the performance of medical imaging devices, allowing for a spectacular progress in in-vivo molecular imaging procedures, which are opening the way for tailored therapies of major diseases. This talk will briefly review the recent history of this prime example of technology transfer from HEP experiments to society, will describe the technical challenges being addressed by some ongoing projects, and will present a few new ideas for further developments and their foreseeable impact.

  7. Instrumentation of the ESRF medical imaging facility

    CERN Document Server

    Elleaume, H; Berkvens, P; Berruyer, G; Brochard, T; Dabin, Y; Domínguez, M C; Draperi, A; Fiedler, S; Goujon, G; Le Duc, G; Mattenet, M; Nemoz, C; Pérez, M; Renier, M; Schulze, C; Spanne, P; Suortti, P; Thomlinson, W; Estève, F; Bertrand, B; Le Bas, J F

    1999-01-01

    At the European Synchrotron Radiation Facility (ESRF) a beamport has been instrumented for medical research programs. Two facilities have been constructed for alternative operation. The first one is devoted to medical imaging and is focused on intravenous coronary angiography and computed tomography (CT). The second facility is dedicated to pre-clinical microbeam radiotherapy (MRT). This paper describes the instrumentation for the imaging facility. Two monochromators have been designed, both are based on bent silicon crystals in the Laue geometry. A versatile scanning device has been built for pre-alignment and scanning of the patient through the X-ray beam in radiography or CT modes. An intrinsic germanium detector is used together with large dynamic range electronics (16 bits) to acquire the data. The beamline is now at the end of its commissioning phase; intravenous coronary angiography is intended to start in 1999 with patients and the CT pre-clinical program is underway on small animals. The first in viv...

  8. Survey on Digital Watermarking on Medical Images

    Directory of Open Access Journals (Sweden)

    Kavitha K J

    2013-12-01

    Full Text Available The rapid growth in information and communication technologies has advanced medical data management systems immensely. In this regard, many different techniques and advanced equipment, such as Magnetic Resonance Imaging (MRI) scanners, Computed Tomography (CT) scanners, Positron Emission Tomography (PET), mammography, ultrasound and radiography, are used. Nowadays various diseases are on the rise for which a single diagnostic examination is insufficient; therefore, to reach a correct diagnosis, data need to be exchanged over the Internet, and the main problem while exchanging data over the Internet is maintaining their authenticity, integrity and confidentiality. Therefore, a system is needed for effective storage, transmission, controlled manipulation and access of medical data that preserves its authenticity, integrity and confidentiality. In this article, we discuss various watermarking techniques used for effective storage, transmission, controlled manipulation and access of medical data while preserving its authenticity, integrity and confidentiality.

  9. Machine learning for medical images analysis.

    Science.gov (United States)

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning to the analysis of medical images. Specifically: (i) we show how a special type of learning model can be thought of as an automatically optimized, hierarchically structured, rule-based algorithm, and (ii) we discuss how the issue of collecting large labelled datasets applies to conventional algorithms as well as to machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods.

  10. CERN crystals used in medical imaging

    CERN Multimedia

    Maximilien Brice

    2004-01-01

    This crystal is a type of material known as a scintillator. When a high energy charged particle or photon passes through a scintillator it glows. These materials are widely used in particle physics for particle detection, but their uses are being realized in further fields, such as Positron Emission Tomography (PET), an area of medical imaging that monitors the regions of energy use in the body.

  11. An Improved Medical Image Fusion Algorithm for Anatomical and Functional Medical Images

    Institute of Scientific and Technical Information of China (English)

    CHEN Mei-ling; TAO Ling; QIAN Zhi-yu

    2009-01-01

    In recent years, many medical image fusion methods have been developed to derive useful information from multimodality medical image data, but there has been no appropriate fusion algorithm for anatomical and functional medical images. In this paper, the traditional wavelet fusion method is improved and a new fusion algorithm for anatomical and functional medical images is proposed, in which high-frequency and low-frequency coefficients are treated separately. When choosing the high-frequency coefficients, the global gradient of each sub-image is calculated to realize adaptive fusion, so that the fused image preserves the functional information; the low-frequency coefficients are chosen based on an analysis of the neighborhood region energy, so that the fused image preserves the anatomical image's edge and texture features. Experimental results and quality evaluation parameters show that the improved fusion algorithm can enhance edge and texture features and retain functional and anatomical information effectively.
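
    The fusion rules described above can be sketched for a single decomposition level as follows. This is a simplified illustration written with PyWavelets, not the authors' exact weighting scheme; anat and func are assumed to be co-registered anatomical and functional images as 2D float arrays of equal shape.

      import numpy as np
      import pywt
      from scipy.ndimage import uniform_filter

      def fuse(anat, func, wavelet="db2"):
          cA1, (cH1, cV1, cD1) = pywt.dwt2(anat, wavelet)
          cA2, (cH2, cV2, cD2) = pywt.dwt2(func, wavelet)

          # Low-frequency: keep the coefficient whose local neighborhood energy is larger,
          # aiming to preserve anatomical edge and texture structure.
          e1 = uniform_filter(cA1 ** 2, size=3)
          e2 = uniform_filter(cA2 ** 2, size=3)
          cA = np.where(e1 >= e2, cA1, cA2)

          # High-frequency: weight each sub-band by its global gradient magnitude so that
          # the more "active" detail content dominates adaptively.
          def fuse_detail(d1, d2):
              g1 = np.mean(np.abs(np.gradient(d1)))
              g2 = np.mean(np.abs(np.gradient(d2)))
              w1 = g1 / (g1 + g2 + 1e-12)
              return w1 * d1 + (1.0 - w1) * d2

          details = tuple(fuse_detail(a, b) for a, b in zip((cH1, cV1, cD1), (cH2, cV2, cD2)))
          return pywt.idwt2((cA, details), wavelet)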

  12. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment

    Directory of Open Access Journals (Sweden)

    Meng Kuan eLin

    2013-07-01

    Full Text Available Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and digital imaging processing service, called M-DIP. The objectives of the system are to (1) automate the direct data tiling, conversion and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display high-level images in three dimensions in real-world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic middle level realizing user interpretation for direct querying and communication. This imaging software is able to display biological imaging data at multiple zoom levels and to increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a virtualization tool in the neuroinformatics field to speed up interpretation services.

  13. A multiple-drawer medication layout problem in automated dispensing cabinets.

    Science.gov (United States)

    Pazour, Jennifer A; Meller, Russell D

    2012-12-01

    In this paper we investigate the problem of locating medications in automated dispensing cabinets (ADCs) to minimize human selection errors. We formulate the multiple-drawer medication layout problem and show that the problem can be formulated as a quadratic assignment problem. As a way to evaluate various medication layouts, we develop a similarity rating for medication pairs. To solve industry-sized problem instances, we develop a heuristic approach. We use hospital ADC transaction data to conduct a computational experiment to test the performance of our developed heuristics, to demonstrate how our approach can aid in ADC design trade-offs, and to illustrate the potential improvements that can be made when applying an analytical process to the multiple-drawer medication layout problem. Finally, we present conclusions and future research directions.
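
    To make the formulation concrete, the sketch below applies a generic pairwise-swap improvement heuristic to a quadratic-assignment objective built from a medication-pair similarity matrix and a location proximity matrix. Both matrices and the heuristic itself are illustrative assumptions, not the authors' heuristic or data; the idea is that placing highly similar medications in nearby pockets raises the objective, so the search drives it down.

      import numpy as np

      def layout_cost(perm, similarity, proximity):
          loc = np.asarray(perm)                       # loc[i] = location assigned to medication i
          return np.sum(similarity * proximity[np.ix_(loc, loc)])

      def swap_heuristic(similarity, proximity, n_iter=10000, seed=0):
          rng = np.random.default_rng(seed)
          n = similarity.shape[0]
          perm = rng.permutation(n)
          best = layout_cost(perm, similarity, proximity)
          for _ in range(n_iter):
              i, j = rng.choice(n, size=2, replace=False)
              perm[i], perm[j] = perm[j], perm[i]      # try swapping two medications
              cost = layout_cost(perm, similarity, proximity)
              if cost < best:
                  best = cost                          # keep the improving swap
              else:
                  perm[i], perm[j] = perm[j], perm[i]  # undo the swap
          return perm, best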

  14. Automated indexing of Laue images from polycrystalline materials

    Energy Technology Data Exchange (ETDEWEB)

    Chung, J.S.; Ice, G.E. [Oak Ridge National Lab., TN (United States). Metals and Ceramics Div.

    1998-12-31

    Third generation hard x-ray synchrotron sources and new x-ray optics have revolutionized x-ray microbeams. Now intense sub-micron x-ray beams are routinely available for x-ray diffraction measurement. An important application of sub-micron x-ray beams is analyzing polycrystalline material by measuring the diffraction of individual grains. For these measurements, conventional analysis methods will not work. The most suitable method for microdiffraction on polycrystalline samples is taking broad-bandpass or white-beam Laue images. With this method, the crystal orientation and non-isostatic strain can be measured rapidly without rotation of sample or detector. The essential step is indexing the reflections from more than one grain. An algorithm has recently been developed to index broad bandpass Laue images from multi-grain samples. For a single grain, a unique set of indices is found by comparing measured angles between Laue reflections and angles between possible indices derived from the x-ray energy bandpass and the scattering angle 2 theta. This method has been extended to multigrain diffraction by successively indexing points not recognized in preceding indexing iterations. This automated indexing method can be used in a wide range of applications.

  15. Fuzzy Wavenet (FWN) classifier for medical images

    Directory of Open Access Journals (Sweden)

    Entather Mahos

    2005-01-01

    Full Text Available The combination of wavelet theory and neural networks has led to the development of wavelet networks. Wavelet networks are feed-forward neural networks that use wavelets as the activation function, and they have been used in classification and identification problems with some success. In this work we propose a fuzzy wavenet network (FWN), which learns by the common back-propagation algorithm, to classify medical images. First, the library of medical images is analyzed. Second, two experimental rule tables provide an excellent opportunity to test the ability of the fuzzy wavenet network, given the high level of information variability often experienced with this type of image. The wavelet transformation is more accurate for small-dimensional problems, but image processing is a large-dimensional problem, so a neural network is used. Results are presented for the application of the three-layer fuzzy wavenet to a vision system. They demonstrate a considerable improvement in performance using the proposed two-table rule for fuzzy and deterministic dilation and translation in the wavelet transformation.

  16. Automated Detection of Sepsis Using Electronic Medical Record Data: A Systematic Review.

    Science.gov (United States)

    Despins, Laurel A

    2016-09-13

    Severe sepsis and septic shock are global issues with high mortality rates. Early recognition and intervention are essential to optimize patient outcomes. Automated detection using electronic medical record (EMR) data can assist this process. This review describes automated sepsis detection using EMR data. PubMed was searched for publications between January 1, 2005 and January 31, 2015. Thirteen studies met the study criteria: they described an automated detection approach with the potential to detect sepsis or sepsis-related deterioration in real or near-real time; focused on emergency department and hospitalized neonatal, pediatric, or adult patients; and provided performance measures or results indicating the impact of automated sepsis detection. Detection algorithms incorporated systemic inflammatory response and organ dysfunction criteria. Systems in nine studies generated study or care team alerts. Care team alerts did not consistently lead to earlier interventions, and earlier interventions did not consistently translate into improved patient outcomes. Performance measures were inconsistent. Automated sepsis detection is potentially a means to enable early sepsis-related therapy, but current performance variability highlights the need for further research.
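
    Many of the reviewed detectors build on systemic inflammatory response syndrome (SIRS) criteria. The sketch below shows a minimal SIRS-style screen over EMR vital signs; the field names are hypothetical, and the real systems described in the review are far richer, adding organ dysfunction criteria, trend analysis and alert suppression.

      from dataclasses import dataclass

      @dataclass
      class Vitals:
          temp_c: float
          heart_rate: float
          resp_rate: float
          paco2_mmhg: float
          wbc_k_per_ul: float      # white cell count, x1000 cells per microliter
          bands_pct: float

      def sirs_criteria_met(v: Vitals) -> int:
          criteria = [
              v.temp_c > 38.0 or v.temp_c < 36.0,
              v.heart_rate > 90,
              v.resp_rate > 20 or v.paco2_mmhg < 32,
              v.wbc_k_per_ul > 12.0 or v.wbc_k_per_ul < 4.0 or v.bands_pct > 10,
          ]
          return sum(criteria)

      def sirs_alert(v: Vitals) -> bool:
          return sirs_criteria_met(v) >= 2             # two or more criteria trigger an alert

      print(sirs_alert(Vitals(38.6, 112, 24, 40, 13.5, 4)))   # True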

  17. Multiphase Systems for Medical Image Region Classification

    Science.gov (United States)

    Garamendi, J. F.; Malpica, N.; Schiavi, E.

    2009-05-01

    Variational methods for region classification have shown very promising results in medical image analysis. The Chan-Vese model is one of the most popular methods, but its numerical resolution is slow and it has serious drawbacks for most multiphase applications. In this work, we extend the link, established by Chambolle, between the two-class (binary) Chan-Vese model and the Rudin-Osher-Fatemi (ROF) model to a multiphase, four-class minimal partition problem. We solve the ROF image restoration model and then threshold the restored image by means of a genetic algorithm. This strategy allows for a more efficient algorithm, because only one well-posed elliptic problem is solved instead of the coupled parabolic equations arising in the original multiphase Chan-Vese model.
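
    The two-stage strategy (ROF restoration followed by thresholding into four classes) can be sketched as below, with multi-Otsu thresholding standing in for the genetic algorithm used by the authors:

      import numpy as np
      from skimage import data, restoration, filters

      image = data.camera() / 255.0                                     # stand-in for a medical image
      smoothed = restoration.denoise_tv_chambolle(image, weight=0.1)    # ROF / total-variation restoration
      thresholds = filters.threshold_multiotsu(smoothed, classes=4)     # three thresholds for four classes
      labels = np.digitize(smoothed, bins=thresholds)                   # labels in {0, 1, 2, 3}
      print(thresholds, np.unique(labels))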

  18. Survey: interpolation methods in medical image processing.

    Science.gov (United States)

    Lehmann, T M; Gönner, C; Spitzer, K

    1999-11-01

    Image interpolation techniques often are required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and processing such as compression or resampling. Since the ideal interpolation function spatially is unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sinc; 2) nearest neighbor; 3) linear; 4) quadratic; 5) cubic B-spline; 6) cubic; 7) Lagrange; and 8) Gaussian interpolation and approximation techniques with kernel sizes from 1 x 1 up to 8 x 8. The comparison is done by: 1) spatial and Fourier analyses; 2) computational complexity as well as runtime evaluations; and 3) qualitative and quantitative interpolation error determinations for particular interpolation tasks which were taken from common situations in medical image processing. For local and Fourier analyses, a standardized notation is introduced and fundamental properties of interpolators are derived. Successful methods should be direct current (DC)-constant and interpolators rather than DC-inconstant or approximators. Each method's parameters are tuned with respect to those properties. This results in three novel kernels, which are introduced in this paper and proven to be within the best choices for medical image interpolation: the 6 x 6 Blackman-Harris windowed sinc interpolator, and the C2-continuous cubic kernels with N = 6 and N = 8 supporting points. For quantitative error evaluations, a set of 50 direct digital X rays was used. They have been selected arbitrarily from clinical routine. In general, large kernel sizes were found to be superior to small interpolation masks. Except for truncated sinc interpolators, all kernels with N = 6 or larger sizes perform significantly better than N = 2 or N = 3 point methods. The cubic 6 x 6 interpolator with continuous second derivatives, as defined in (24), can be recommended for most common interpolation tasks. It appears to be the fastest
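
    A small round-trip experiment in the spirit of the survey can be set up with scipy, where spline orders 0, 1, 3 and 5 stand in for nearest-neighbor, linear and B-spline kernels (the windowed-sinc and Lagrange kernels studied in the paper are not available in scipy):

      import numpy as np
      from scipy import ndimage
      from skimage import data

      image = data.camera().astype(float)
      small = ndimage.zoom(image, 0.5, order=3)          # shrink, then re-enlarge with each kernel

      for order, name in [(0, "nearest"), (1, "linear"), (3, "cubic B-spline"), (5, "quintic B-spline")]:
          restored = ndimage.zoom(small, 2.0, order=order)
          h = min(restored.shape[0], image.shape[0])     # guard against off-by-one sizes
          w = min(restored.shape[1], image.shape[1])
          rmse = np.sqrt(np.mean((restored[:h, :w] - image[:h, :w]) ** 2))
          print(f"{name:18s} RMSE = {rmse:.2f}")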

  19. Medical image information system 2001. Development of the medical image information system to risk management- Medical exposure management

    Energy Technology Data Exchange (ETDEWEB)

    Kuranishi, Makoto; Kumagai, Michitomo; Shintani, Mitsuo [Toyama Medical and Pharmaceutical Univ. (Japan). Hospital

    2000-12-01

    This paper discusses methods and systems for making optimal use of DICOM Supplements 10 and 17 in national health and medical care. Supplements 10 and 17 of the DICOM (digital imaging and communications in medicine) standard, which is being developed as an international standard to ensure compatibility within medical image information systems, are important for cooperation between the HIS (hospital information system)/RIS (radiation information system) and the modalities (imaging instruments). Supplement 10 concerns sending patient and order information from the HIS/RIS to the modality, and Supplement 17 concerns sending modality performed procedure step (MPPS) information back to the HIS/RIS. The latter defines how to document patients' exposure, an aspect that has not been widely recognized in Japan. Thus the medical image information system can be useful for risk management of medical exposure in the future. (K.H.)

  20. Lossless wavelet compression on medical image

    Science.gov (United States)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    An increasing amount of medical imagery is created directly in digital form. Systems such as Picture Archiving and Communication Systems (PACS), as well as telemedicine networks, require the storage and transmission of this huge amount of medical image data. Efficient compression of these data is crucial. Several lossless and lossy techniques for the compression of the data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation in the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods for both lossy (irreversible) and lossless (reversible) image compression have been proposed in the literature. Recent advances in lossy compression techniques include methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order of 50:1, or even more), they do not allow reconstructing exactly the original version of the input data. Lossless compression techniques permit the perfect reconstruction of the original image, but the achievable compression ratios are only of the order of 2:1, up to 4:1. In our paper, we use a lifting scheme to generate truly lossless non-linear integer-to-integer wavelet transforms. At the same time, we exploit a coding algorithm producing an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream and still reconstruct the image. Therefore, a compression scheme generating an embedded code can
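
    The key property of lifting-based integer-to-integer transforms, exact reconstruction, can be demonstrated with the reversible Haar (S-) transform; the transform and embedded coder used in the paper are more elaborate than this sketch.

      import numpy as np

      def s_transform_forward(x):
          """x: 1D integer array of even length -> (approximation s, detail d), both integer."""
          a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
          d = a - b
          s = b + (d >> 1)              # floor((a + b) / 2) computed without leaving the integers
          return s, d

      def s_transform_inverse(s, d):
          b = s - (d >> 1)
          a = d + b
          out = np.empty(2 * len(s), dtype=np.int64)
          out[0::2], out[1::2] = a, b
          return out

      x = np.array([5, 3, 10, 200, 7, 7, 0, 255])
      s, d = s_transform_forward(x)
      assert np.array_equal(s_transform_inverse(s, d), x)   # perfect (lossless) reconstruction
      print(s, d)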

  1. Automated processing of webcam images for phenological classification

    Science.gov (United States)

    Bothmann, Ludwig; Menzel, Annette; Menze, Bjoern H.; Schunk, Christian; Kauermann, Göran

    2017-01-01

    Along with global climate change, there is increasing interest in its effect on phenological patterns such as the start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images per day of the same natural scene showing, for example, trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert, and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfactory results and allows dates of phenological change points to be determined, it is associated with a considerable amount of manual work and is therefore constrained to a limited number of webcams. In particular, this prevents applying the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. In order to be able to scale up the analysis to several hundreds or thousands of webcams, we propose and evaluate two automated alternatives for the definition of regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels' time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert, with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet, yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big data applications by analyzing 13988 webcams from the AMOS database. All developed methods are implemented in the statistical software
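
    The semi-supervised variant can be sketched as follows: compute each pixel's percentage-greenness time series and keep the pixels whose series correlates strongly with a few expert-chosen prototype pixels. The array layout and the 0.8 correlation threshold are illustrative assumptions.

      import numpy as np

      def percentage_greenness(frames):
          """frames: (time, height, width, 3) stack of daily RGB images."""
          frames = frames.astype(float)
          return frames[..., 1] / (frames.sum(axis=-1) + 1e-9)      # G / (R + G + B)

      def select_roi(frames, prototype_pixels, corr_threshold=0.8):
          green = percentage_greenness(frames)                      # (time, H, W)
          t, h, w = green.shape
          series = green.reshape(t, -1)                             # one column per pixel
          proto = np.mean([green[:, r, c] for r, c in prototype_pixels], axis=0)
          proto = (proto - proto.mean()) / (proto.std() + 1e-9)
          z = (series - series.mean(axis=0)) / (series.std(axis=0) + 1e-9)
          corr = (z * proto[:, None]).mean(axis=0)                  # Pearson correlation per pixel
          return (corr >= corr_threshold).reshape(h, w)             # boolean region-of-interest mask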

  2. [Tattoos and medical imaging: issues and myths].

    Science.gov (United States)

    Kluger, Nicolas

    2014-05-01

    Tattooing is characterized by the introduction of exogenous pigments into the dermis to obtain a permanent design. Whether it is a traditional tattoo applied on the skin or a cosmetic one (permanent make-up), its prevalence has boomed over the past 20 years. The increased prevalence of tattooed patients, along with medical progress in therapeutics and diagnostic means, has led to the discovery of "new" complications and unexpected issues. The medical imaging world has also been affected by the tattoo craze. It has been approximately 20 years since the first issues related to tattooing and permanent make-up arose. However, cautions and questions, as well as anecdotal severe case reports, have sometimes led to an over-exaggerated response by some physicians, such as the systematic avoidance of MRI for tattooed individuals. This review is intended to summarize the risks, but also the "myths", associated with tattoos in the daily practice of the radiologist for MRI, CT, mammography, PET and ultrasound imaging.

  3. Automated Nanofiber Diameter Measurement in SEM Images Using a Robust Image Analysis Method

    Directory of Open Access Journals (Sweden)

    Ertan Öznergiz

    2014-01-01

    Full Text Available Due to their high surface area, porosity, and rigidity, applications of nanofibers and nanosurfaces have expanded in recent years. Nanofibers and nanosurfaces are typically produced by the electrospinning method. In the production process, determination of the average fiber diameter is crucial for quality assessment. The average fiber diameter is determined by manually measuring the diameters of randomly selected fibers on scanning electron microscopy (SEM) images. However, as the number of images increases, manual fiber diameter determination becomes a tedious and time-consuming task, as well as being sensitive to human error. Therefore, an automated fiber diameter measurement system is desired. In the literature, this task is achieved using image analysis algorithms. Typically, these methods first isolate each fiber in the image and measure the diameter of each isolated fiber. Fiber isolation is an error-prone process. In this study, automated calculation of nanofiber diameter is achieved without fiber isolation, using image processing and analysis algorithms. The performance of the proposed method was tested on real data. The effectiveness of the proposed method is shown by comparing automatically and manually measured nanofiber diameter values.
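
    One common way to measure fiber diameters without isolating individual fibers is to combine a Euclidean distance transform with a skeleton of the binarized fiber network; the sketch below illustrates that general idea and is not necessarily the algorithm proposed in the paper.

      import numpy as np
      from scipy.ndimage import distance_transform_edt
      from skimage.filters import threshold_otsu
      from skimage.morphology import skeletonize

      def fiber_diameters(gray_image, pixel_size_nm=1.0):
          mask = gray_image > threshold_otsu(gray_image)   # fibers assumed brighter than background
          dist = distance_transform_edt(mask)              # distance to the nearest background pixel
          skeleton = skeletonize(mask)
          diameters_px = 2.0 * dist[skeleton]              # local width read along the fiber centerlines
          return diameters_px * pixel_size_nm

      # Usage on a hypothetical SEM frame `sem` (2D grayscale array):
      # print(fiber_diameters(sem, pixel_size_nm=4.9).mean())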

  4. Adaptive textural segmentation of medical images

    Science.gov (United States)

    Kuklinski, Walter S.; Frost, Gordon S.; MacLaughlin, Thomas

    1992-06-01

    A number of important problems in medical imaging can be described as segmentation problems. Previous fractal-based image segmentation algorithms have used either the local fractal dimension alone or the local fractal dimension and the corresponding image intensity as features for subsequent pattern recognition algorithms. An image segmentation algorithm that utilized the local fractal dimension, image intensity, and the correlation coefficient of the local fractal dimension regression analysis computation, to produce a three-dimensional feature space that was partitioned to identify specific pixels of dental radiographs as being either bone, teeth, or a boundary between bone and teeth, has also been reported. In this work we formulate the segmentation process as a configurational optimization problem and discuss the application of simulated annealing optimization methods to the solution of this specific optimization problem. The configurational optimization method allows both the degree of correspondence between a candidate segment and an assumed textural model and morphological information about the candidate segment to be used in the segmentation process. Applying this configurational optimization technique with a fractal textural model, however, requires estimating the fractal dimension of an irregularly shaped candidate segment. The potential utility of a discrete Gerchberg-Papoulis bandlimited extrapolation algorithm for estimating the fractal dimension of an irregularly shaped candidate segment is also discussed.
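
    The local fractal dimension feature can be made concrete with a standard box-counting estimate on a binary mask; the regression-based estimator for irregularly shaped candidate segments discussed above is more involved than this sketch.

      import numpy as np

      def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
          mask = np.asarray(mask, dtype=bool)
          counts = []
          for s in box_sizes:
              h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
              blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(max(blocks.any(axis=(1, 3)).sum(), 1))   # boxes containing the object
          # Slope of log(count) against log(1 / box size) estimates the dimension.
          slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
          return slope

      # Sanity check: a filled square should give a dimension close to 2.
      square = np.zeros((256, 256), dtype=bool)
      square[32:224, 32:224] = True
      print(round(box_counting_dimension(square), 2))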

  5. Vision 20/20: perspectives on automated image segmentation for radiotherapy.

    Science.gov (United States)

    Sharp, Gregory; Fritscher, Karl D; Pekar, Vladimir; Peroni, Marta; Shusharina, Nadya; Veeraraghavan, Harini; Yang, Jinzhong

    2014-05-01

    Due to rapid advances in radiation therapy (RT), especially image guidance and treatment adaptation, a fast and accurate segmentation of medical images is a very important part of the treatment. Manual delineation of target volumes and organs at risk is still the standard routine for most clinics, even though it is time consuming and prone to intra- and interobserver variations. Automated segmentation methods seek to reduce delineation workload and unify the organ boundary definition. In this paper, the authors review the current autosegmentation methods particularly relevant for applications in RT. The authors outline the methods' strengths and limitations and propose strategies that could lead to wider acceptance of autosegmentation in routine clinical practice. The authors conclude that currently, autosegmentation technology in RT planning is an efficient tool for the clinicians to provide them with a good starting point for review and adjustment. Modern hardware platforms including GPUs allow most of the autosegmentation tasks to be done in a range of a few minutes. In the nearest future, improvements in CT-based autosegmentation tools will be achieved through standardization of imaging and contouring protocols. In the longer term, the authors expect a wider use of multimodality approaches and better understanding of correlation of imaging with biology and pathology.

  6. Vision 20/20: Perspectives on automated image segmentation for radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, Gregory, E-mail: gcsharp@partners.org; Fritscher, Karl D.; Shusharina, Nadya [Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Pekar, Vladimir [Philips Healthcare, Markham, Ontario 6LC 2S3 (Canada); Peroni, Marta [Center for Proton Therapy, Paul Scherrer Institut, 5232 Villigen-PSI (Switzerland); Veeraraghavan, Harini [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States); Yang, Jinzhong [Department of Radiation Physics, MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2014-05-15

    Due to rapid advances in radiation therapy (RT), especially image guidance and treatment adaptation, a fast and accurate segmentation of medical images is a very important part of the treatment. Manual delineation of target volumes and organs at risk is still the standard routine for most clinics, even though it is time consuming and prone to intra- and interobserver variations. Automated segmentation methods seek to reduce delineation workload and unify the organ boundary definition. In this paper, the authors review the current autosegmentation methods particularly relevant for applications in RT. The authors outline the methods’ strengths and limitations and propose strategies that could lead to wider acceptance of autosegmentation in routine clinical practice. The authors conclude that currently, autosegmentation technology in RT planning is an efficient tool for the clinicians to provide them with a good starting point for review and adjustment. Modern hardware platforms including GPUs allow most of the autosegmentation tasks to be done in a range of a few minutes. In the nearest future, improvements in CT-based autosegmentation tools will be achieved through standardization of imaging and contouring protocols. In the longer term, the authors expect a wider use of multimodality approaches and better understanding of correlation of imaging with biology and pathology.

  7. Automated detection of a prostate Ni-Ti stent in electronic portal images

    DEFF Research Database (Denmark)

    Carl, Jesper; Nielsen, Henning; Nielsen, Jane

    2006-01-01

    of a thermo-expandable Ni-Ti stent. The current study proposes a new detection algorithm for automated detection of the Ni-Ti stent in electronic portal images. The algorithm is based on the Ni-Ti stent having a cylindrical shape with a fixed diameter, which was used as the basis for an automated detection...

  8. Automated Detection of Firearms and Knives in a CCTV Image.

    Science.gov (United States)

    Grega, Michał; Matiolański, Andrzej; Guzik, Piotr; Leszczuk, Mikołaj

    2016-01-01

    Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  9. Automated Detection of Firearms and Knives in a CCTV Image

    Directory of Open Access Journals (Sweden)

    Michał Grega

    2016-01-01

    Full Text Available Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  10. A comparative study on medical image segmentation methods

    Directory of Open Access Journals (Sweden)

    Praylin Selva Blessy SELVARAJ ASSLEY

    2014-03-01

    Full Text Available Image segmentation plays an important role in medical imaging and has been a relevant research area in computer vision and image analysis. Many segmentation algorithms have been proposed for medical images. This paper reviews segmentation methods for medical images. In this survey, segmentation methods are divided into five categories: region based, boundary based, model based, hybrid based and atlas based. The five categories, with their principal ideas, advantages and disadvantages in segmenting different medical images, are discussed.

  11. Curve Matching with Applications in Medical Imaging

    DEFF Research Database (Denmark)

    Bauer, Martin; Bruveris, Martins; Harms, Philipp;

    2015-01-01

    In recent years, Riemannian shape analysis of curves and surfaces has found several applications in medical image analysis. In this paper we present a numerical discretization of second order Sobolev metrics on the space of regular curves in Euclidean space. This class of metrics has several... desirable mathematical properties. We propose numerical solutions for the initial and boundary value problems of finding geodesics. These two methods are combined in a Riemannian gradient-based optimization scheme to compute the Karcher mean. We apply this to a study of the shape variation in HeLa cell nuclei...

  12. Osiris: a medical image-manipulation system.

    Science.gov (United States)

    Ligier, Y; Ratib, O; Logean, M; Girard, C

    1994-01-01

    We designed a general-purpose computer program, Osiris, for the display, manipulation, and analysis of digital medical images. The program offers an intuitive, window-based interface with direct access to generic tools. Characterized by user-friendliness, portability, and extensibility, Osiris is compatible with both Unix-based and Macintosh-based platforms. It is readily modified and can be used to develop new tools. It is able to monitor the entries made during a work session and thus provide data on its use. Osiris and its source code are being distributed, free of charge, to universities and research groups around the world.

  13. Medical imaging projects meet at CERN

    CERN Multimedia

    CERN Bulletin

    2013-01-01

    ENTERVISION, the Research Training Network in 3D Digital Imaging for Cancer Radiation Therapy, successfully passed its mid-term review held at CERN on 11 January. This multidisciplinary project aims at qualifying experts in medical imaging techniques for improved hadron therapy.   ENTERVISION provides training in physics, medicine, electronics, informatics, radiobiology and engineering, as well as a wide range of soft skills, to 16 researchers of different backgrounds and nationalities. The network is funded by the European Commission within the Marie Curie Initial Training Network, and relies on the EU-funded research project ENVISION to provide a training platform for the Marie Curie researchers. The two projects hold their annual meetings jointly, allowing the young researchers to meet senior scientists and to have a full picture of the latest developments in the field beyond their individual research project. ENVISION and ENTERVISION are both co-ordinated by CERN, and the Laboratory hosts t...

  14. MMSPix - A multimedia service (MMS) medical images weblog.

    Science.gov (United States)

    Fontelo, Paul; Liu, Fang; Muin, Michael; Ducut, Erick; Ackerman, Michael; Paalan-Vasquez, Franciene

    2007-01-01

    Smartphones with cameras have added a new dimension to augmenting medical image collections for education and teleconsultation. It allows healthcare personnel to instantly capture and send images through the multimedia messaging service (MMS) protocol. We developed a searchable archive, a mobile images Weblog of camera phone images for medical education. Registered users can view and comment on uploaded images. The archive is compartmentalized to allow sharing images with all viewers and by clinical specialty groups.

  15. Application of automated image analysis to coal petrography

    Science.gov (United States)

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

    The coal petrologist seeks to determine the petrographic characteristics of organic and inorganic coal constituents and their lateral and vertical variations within a single coal bed or different coal beds of a particular coal field. Definitive descriptions of coal characteristics and coal facies provide the basis for interpretation of depositional environments, diagenetic changes, and burial history and determination of the degree of coalification or metamorphism. Numerous coal core or columnar samples must be studied in detail in order to adequately describe and define coal microlithotypes, lithotypes, and lithologic facies and their variations. The large amount of petrographic information required can be obtained rapidly and quantitatively by use of an automated image-analysis system (AIAS). An AIAS can be used to generate quantitative megascopic and microscopic modal analyses for the lithologic units of an entire columnar section of a coal bed. In our scheme for megascopic analysis, distinctive bands 2 mm or more thick are first demarcated by visual inspection. These bands consist of either nearly pure microlithotypes or lithotypes such as vitrite/vitrain or fusite/fusain, or assemblages of microlithotypes. Megascopic analysis with the aid of the AIAS is next performed to determine volume percentages of vitrite, inertite, minerals, and microlithotype mixtures in bands 0.5 to 2 mm thick. The microlithotype mixtures are analyzed microscopically by use of the AIAS to determine their modal composition in terms of maceral and optically observable mineral components. Megascopic and microscopic data are combined to describe the coal unit quantitatively in terms of (V) for vitrite, (E) for liptite, (I) for inertite or fusite, (M) for mineral components other than iron sulfide, (S) for iron sulfide, and (VEIM) for the composition of the mixed phases (Xi) i = 1,2, etc. in terms of the maceral groups vitrinite V, exinite E, inertinite I, and optically observable mineral

  16. Medical Image Retrieval Based on Multi-Layer Resampling Template

    Institute of Scientific and Technical Information of China (English)

    WANG Xin-rui; YANG Yun-feng

    2014-01-01

    Medical images are being used more and more widely in clinical diagnosis and treatment. How to manage a large number of images in an image management system, and how to assist doctors in analysis and diagnosis, are very important issues. This paper studies medical image retrieval based on a multi-layer resampling template under the framework of wavelet decomposition; the retrieval method consists of two stages, coarse and fine retrieval. The coarse retrieval stage is a medical image retrieval process based on image contour features. The fine retrieval stage is a medical image retrieval process based on the multi-layer resampling template: a multi-layer sampling operator is employed to extract resampling images at each layer, and these resampling images are then retrieved step by step to complete the coarse-to-fine retrieval process.

  17. Automated image analysis of atomic force microscopy images of rotavirus particles

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, S. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Allison, D.P. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Biochemistry, Cellular, and Molecular Biology, University of Tennessee, Knoxville, TN 37996 (United States); Molecular Imaging Inc. Tempe, AZ, 85282 (United States); Qi, H. [Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Morrell-Falvey, J.L. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Kallewaard, N.L. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Crowe, J.E. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Doktycz, M.J. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]. E-mail: doktyczmj@ornl.gov

    2006-06-15

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM.
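
    A generic version of such a routine, thresholding an AFM height image, labeling connected particles and collecting per-particle dimensions, might look as follows; the Otsu threshold and the reported quantities are illustrative choices, not the authors' exact pipeline.

      import numpy as np
      from skimage.filters import threshold_otsu
      from skimage.measure import label, regionprops

      def particle_dimensions(height_image, pixel_size_nm=1.0):
          mask = height_image > threshold_otsu(height_image)
          labeled = label(mask)                              # connected-component labeling
          dims = []
          for region in regionprops(labeled):
              rows, cols = region.coords[:, 0], region.coords[:, 1]
              dims.append({
                  "area_nm2": region.area * pixel_size_nm ** 2,
                  "equivalent_diameter_nm": 2.0 * np.sqrt(region.area / np.pi) * pixel_size_nm,
                  "max_height": float(height_image[rows, cols].max()),
              })
          return dims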

  18. Investigation of Bias in Continuous Medical Image Label Fusion.

    Science.gov (United States)

    Xing, Fangxu; Prince, Jerry L; Landman, Bennett A

    2016-01-01

    Image labeling is essential for analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms, both of which suffer from errors. The Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm for both discrete-valued and continuous-valued labels has been proposed to find the consensus fusion while simultaneously estimating rater performance. In this paper, we first show that the previously reported continuous STAPLE in which bias and variance are used to represent rater performance yields a maximum likelihood solution in which bias is indeterminate. We then analyze the major cause of the deficiency and evaluate two classes of auxiliary bias estimation processes, one that estimates the bias as part of the algorithm initialization and the other that uses a maximum a posteriori criterion with a priori probabilities on the rater bias. We compare the efficacy of six methods, three variants from each class, in simulations and through empirical human rater experiments. We comment on their properties, identify deficient methods, and propose effective methods as solution.

  19. Automated Imaging System for Pigmented Skin Lesion Diagnosis

    Directory of Open Access Journals (Sweden)

    Mariam Ahmed Sheha

    2016-10-01

    Full Text Available Study of the risk factors for pigmented skin lesions shows that the possible appearance of malignant melanoma turns the anomalous occurrence of these lesions into a worrying sign. The difficulty of differentiation between malignant melanoma and melanocytic nevi is the error-prone problem that usually faces physicians in diagnosis. To work through the hard task of pigmented skin lesion diagnosis, different clinical diagnosis algorithms have been proposed, such as pattern analysis, the ABCD rule of dermoscopy, the Menzies method, and the 7-point checklist. Computerized implementation of these algorithms improves the diagnosis of melanoma compared with simple naked-eye examination by a physician. Toward the serious goal of early melanoma detection, aiming to reduce the melanoma mortality rate, several computerized studies and procedures have been proposed. In this research, different approaches with a large number of features are discussed to point out the best approach or methodology that could be followed to accurately diagnose pigmented skin lesions. This paper proposes an automated system for the diagnosis of melanoma that provides a quantitative and objective evaluation of the skin lesion, as opposed to visual assessment, which is subjective in nature. Two different data sets were utilized to reduce the effect of the qualitative interpretation problem on accurate diagnosis: a set of clinical images acquired with a standard camera, and a set acquired with a special dermoscopic camera and hence named dermoscopic images. The system's contribution lies in new, complete and different approaches presented for the purpose of pigmented skin lesion diagnosis. These approaches result from using a large, conclusive set of features fed to different classifiers. The three main types of features extracted from the region of interest are geometric, chromatic, and texture features. Three statistical methods were proposed to select the most significant features that will cause a valuable effect in

  20. [Consistent presentation of medical images based on CPI integration profile].

    Science.gov (United States)

    Jiang, Tao; An, Ji-ye; Chen, Zhong-yong; Lu, Xu-dong; Duan, Hui-long

    2007-11-01

    Because of different display parameters and other factors, digital medical images present different display states in different departments of a hospital. Based on the CPI (Consistent Presentation of Images) integration profile of IHE, this paper implements the consistent presentation of medical images, which helps doctors carry out team-based medical care.

  1. Automated Identification of Rivers and Shorelines in Aerial Imagery Using Image Texture

    Science.gov (United States)

    2011-01-01

    defining the criteria for segmenting the image. For these cases certain automated, unsupervised (or minimally supervised) image classification ... high resolution bank geometry. Much of the globe is covered by various sorts of multi- or hyperspectral imagery and numerous techniques have been ... Keywords: banks, image analysis, edge finding, photography, satellite, texture, entropy.

  2. Rough sets and near sets in medical imaging: a review.

    Science.gov (United States)

    Hassanien, Aboul Ella; Abraham, Ajith; Peters, James F; Schaefer, Gerald; Henry, Christopher

    2009-11-01

    This paper presents a review of the current literature on rough-set- and near-set-based approaches to solving various problems in medical imaging such as medical image segmentation, object extraction, and image classification. Rough set frameworks hybridized with other computational intelligence technologies that include neural networks, particle swarm optimization, support vector machines, and fuzzy sets are also presented. In addition, a brief introduction to near sets and near images with an application to MRI images is given. Near sets offer a generalization of traditional rough set theory and a promising approach to solving the medical image correspondence problem as well as an approach to classifying perceptual objects by means of features in solving medical imaging problems. Other generalizations of rough sets such as neighborhood systems, shadowed sets, and tolerance spaces are also briefly considered in solving a variety of medical imaging problems. Challenges to be addressed and future directions of research are identified and an extensive bibliography is also included.

  3. Guideline report. Medical ultrasound imaging: progress and opportunities.

    Science.gov (United States)

    Burns, M

    1989-01-01

    Utilization of medical ultrasound has expanded rapidly during the past several years. In 1988, sales of ultrasound equipment will approach $600 million, which is higher than any other individual imaging modality, including the most capital intensive, such as magnetic resonance imaging (MRI), computed tomography (CT), and cath lab angiography. This growth would have been difficult to predict previously, since ultrasound appeared to be a relatively mature imaging modality not too long ago. There are several reasons for this growth. Technological developments have been quite rapid; ultrasound has become easier to use, image quality has improved dramatically, and diagnostic accuracy has been enhanced. There has been a proliferation of new equipment at all ends of the price spectrum, allowing the user a wide choice in instrument performance, multi-function capabilities, and automated features to increase patient throughput. The DRG environment and the prospect for more pre-admission tests have also been a stimulus. Hospital buying activity has expanded, and many more ultrasound exams are now being conducted on an outpatient basis. Sales to freestanding imaging centers and individual physicians have similarly increased. The hospital user is willing to pay a large premium for advanced technical performance and is prepared to retire or replace older technology in less than three years. This replacement cycle is much shorter than the four to five year period which existed prior to 1985. By comparison, some of the more traditional imaging areas, such as radiology, have replacement rates of eight to ten years. The reason for early replacement is obvious. Ultrasound exams in hospitals generate revenues at a rate that justifies the purchase of the most advanced equipment. It also improves the referral rate and positions the hospital as a high quality provider. Even with low utilization rates, an ultrasound instrument can normally pay for itself in less than one year of regular

  4. A New Medical Image Enhancement Based on Human Visual Characteristics

    Institute of Scientific and Technical Information of China (English)

    DONG Ai-bin; HE Jun

    2013-01-01

    Studies of image enhancement show that the quality of an image relies heavily on the human visual system. In this paper, we apply this fact to design a new image enhancement method for medical images that improves the detail regions. First, the region of interest (ROI) for the eye is segmented; then Unsharp Masking (USM) is used to enhance the detail regions. Experiments show that the proposed method can effectively improve the accuracy of medical image enhancement and has a significant effect.
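
    A minimal unsharp-masking (USM) step of the kind referred to above subtracts a blurred copy of the image to isolate detail and adds a scaled version of that detail back; the ROI segmentation stage described in the paper is not reproduced here.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def unsharp_mask(image, sigma=2.0, amount=1.0):
          """Enhance detail regions of an 8-bit grayscale image."""
          image = image.astype(float)
          blurred = gaussian_filter(image, sigma=sigma)
          detail = image - blurred                 # high-frequency detail layer
          return np.clip(image + amount * detail, 0, 255)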

  5. An online interactive simulation system for medical imaging education.

    Science.gov (United States)

    Dikshit, Aditya; Wu, Dawei; Wu, Chunyan; Zhao, Weizhao

    2005-09-01

    This report presents a recently developed web-based medical imaging simulation system for teaching students or other trainees who plan to work in the medical imaging field. The increased importance of computer and information technology widely applied to different imaging techniques in clinics and medical research necessitates a comprehensive medical imaging education program. A complete tutorial of simulations introducing popular imaging modalities, such as X-ray, MRI, CT, ultrasound and PET, forms an essential component of such an education. Internet technologies provide a vehicle to carry medical imaging education online. There exist a number of internet-based medical imaging hyper-books or online documentations. However, there are few providing interactive computational simulations. We focus on delivering knowledge of the physical principles and engineering implementation of medical imaging techniques through an interactive website environment. The online medical imaging simulation system presented in this report outlines basic principles underlying different imaging techniques and image processing algorithms and offers trainees an interactive virtual laboratory. For education purposes, this system aims to provide general understanding of each imaging modality with comprehensive explanations, ample illustrations and copious references as its thrust, rather than complex physics or detailed math. This report specifically describes the development of the tutorial for commonly used medical imaging modalities. An internet-accessible interface is used to simulate various imaging algorithms with user-adjustable parameters. The tutorial is under the MATLAB Web Server environment. Macromedia Director MX is used to develop interactive animations integrating theory with graphic-oriented simulations. HTML and JavaScript are used to enable a user to explore these modules online in a web browser. Numerous multiple choice questions, links and references for advanced study are

  6. A semi-automated image analysis procedure for in situ plankton imaging systems.

    Science.gov (United States)

    Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M

    2015-01-01

    Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory-controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory-controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large amount of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton, and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired from laboratory-controlled conditions or clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was then passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the non-target objects that could not be removed
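
    The classification stage can be sketched with HOG descriptors computed on fixed-size object crops and fed to a linear support vector machine; the MSER/adaptive-threshold segmentation and the two-level hierarchy are omitted, and crops/labels are hypothetical inputs (lists of 2D grayscale arrays and class names).

      import numpy as np
      from skimage.feature import hog
      from skimage.transform import resize
      from sklearn.model_selection import train_test_split
      from sklearn.svm import LinearSVC

      def hog_features(crops, size=(64, 64)):
          feats = [hog(resize(c, size), orientations=9,
                       pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for c in crops]
          return np.array(feats)

      def train_classifier(crops, labels):
          X = hog_features(crops)
          X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
          clf = LinearSVC(C=1.0).fit(X_tr, y_tr)
          print("held-out accuracy:", clf.score(X_te, y_te))
          return clf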

  7. Medical Image Analysis by Cognitive Information Systems - a Review.

    Science.gov (United States)

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

    This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems, showing semantic processes as they are applied to different types of medical images. Cognitive information systems are defined on the basis of methods for the semantic analysis and interpretation of information - here, medical images - applied to the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis is proposed to analyze the meaning of data; meaning is included in information, for example in medical images. Medical image analysis is presented and discussed as it is applied to various types of medical images showing selected human organs with different pathologies. These images were analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis are also defined for decision-support tasks. This process is very important, for example, in diagnostic and therapy processes and in the selection of semantic aspects/features from analyzed data sets. These features allow a new way of analysis to be created.

  8. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    Science.gov (United States)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on a clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
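    Of the twelve algorithms evaluated, the Ridler method (iterative intermeans, also known as isodata) is simple enough to state in a few lines. The following is a generic textbook implementation in Python/NumPy under the usual formulation, not the authors' code, and the toy intensity data are fabricated for illustration.

```python
import numpy as np

def ridler_threshold(values, eps=1e-3, max_iter=100):
    """Iterative intermeans (Ridler & Calvard / isodata) threshold: start at
    the global mean, then repeatedly move the threshold to the midpoint of
    the foreground and background class means until it stabilizes."""
    t = float(values.mean())
    for _ in range(max_iter):
        fg, bg = values[values > t], values[values <= t]
        if fg.size == 0 or bg.size == 0:
            break
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new
    return t

# Toy uptake distribution: cold background plus a hot sphere
vals = np.concatenate([np.random.normal(1.0, 0.2, 5000),
                       np.random.normal(6.0, 1.0, 500)])
print("estimated threshold:", round(float(ridler_threshold(vals)), 2))
```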

  9. Automating and estimating glomerular filtration rate for dosing medications and staging chronic kidney disease

    Directory of Open Access Journals (Sweden)

    Trinkley KE

    2014-05-01

    Full Text Available Katy E Trinkley,1 S Michelle Nikels,2 Robert L Page II,1 Melanie S Joy1 (1Skaggs School of Pharmacy and Pharmaceutical Sciences, 2School of Medicine, University of Colorado, Aurora, CO, USA). Objective: The purpose of this paper is to serve as a review for primary care providers on the bedside methods for estimating glomerular filtration rate (GFR) for dosing and chronic kidney disease (CKD) staging, and to discuss how automated health information technologies (HIT) can enhance clinical documentation of staging and reduce medication errors in patients with CKD. Methods: A nonsystematic search of PubMed (through March 2013) was conducted to determine the optimal approach to estimate GFR for dosing and CKD staging and to identify examples of how automated HITs can improve health outcomes in patients with CKD. Papers known to the authors were included, as were scientific statements. Articles were chosen based on the judgment of the authors. Results: Drug-dosing decisions should be based on the method used in the published studies and package labeling that have been determined to be safe, which is most often the Cockcroft–Gault formula unadjusted for body weight. Although Modification of Diet in Renal Disease is more commonly used in practice for staging, the CKD–Epidemiology Collaboration (CKD–EPI) equation is the most accurate formula for estimating CKD staging, especially at higher GFR values. Automated HITs offer a solution to the complexity of determining which equation to use for a given clinical scenario. HITs can educate providers on which formula to use and how to apply the formula in a given clinical situation, ultimately improving appropriate medication and medical management in CKD patients. Conclusion: Appropriate estimation of GFR is key to optimal health outcomes. HITs assist clinicians in both choosing the most appropriate GFR estimation formula and in applying the results of the GFR estimation in practice. Key limitations of the
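    For readers who want the two bedside formulas discussed above in executable form, a minimal sketch follows, using the Cockcroft-Gault formula (unadjusted for body surface area) and the 2009 CKD-EPI creatinine equation with its commonly published constants. It is an illustration, not clinical software and not the HIT logic described in the paper.

```python
def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female):
    """Creatinine clearance (mL/min) by the Cockcroft-Gault formula."""
    crcl = ((140 - age_years) * weight_kg) / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def ckd_epi_2009(age_years, scr_mg_dl, female, black=False):
    """Estimated GFR (mL/min/1.73 m^2), 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: 70-year-old, 60 kg woman with serum creatinine 1.2 mg/dL
print(round(cockcroft_gault(70, 60, 1.2, female=True), 1))  # ~41 mL/min
print(round(ckd_epi_2009(70, 1.2, female=True), 1))         # ~46 mL/min/1.73 m^2
```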

  10. The Mutual Beneficial Effect between Medical Imaging and Nanomedicine

    Directory of Open Access Journals (Sweden)

    Huiting Qiao

    2013-01-01

    Full Text Available Reports on medical imaging and nanomedicine are becoming increasingly prevalent. Many nanoparticles entering the body act as contrast agents or probes in medical imaging, and these are part of nanomedicine. The scope of application and the quality of imaging have been improved by nanotechnology. On one hand, nanomedicines advance the sensitivity and specificity of molecular imaging. On the other hand, the biodistribution of nanomedicines can also be studied in vivo by medical imaging, which is necessary in toxicological research. The toxicity of nanomedicine is a concern that may slow down the application of nanomedicine. The quantitative description of the kinetic process is significant. Based on metabolic studies with radioactive tracers, a scheme for pharmacokinetic research of nanomedicine is proposed. In this review, we discuss the potential advantage of medical imaging in the toxicology of nanomedicine, as well as the advancement of medical imaging prompted by nanomedicine.

  11. SU-C-304-04: A Compact Modular Computational Platform for Automated On-Board Imager Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    Dolly, S [Washington University School of Medicine, Saint Louis, MO (United States); University of Missouri, Columbia, MO (United States); Cai, B; Chen, H; Anastasio, M; Sun, B; Yaddanapudi, S; Noel, C; Goddu, S; Mutic, S; Li, H [Washington University School of Medicine, Saint Louis, MO (United States); Tan, J [UTSouthwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    quality assurance tests, such as 2D/3D image quality, making completely automated QA possible. Research Funding from Varian Medical Systems Inc. . Dr. Sasa Mutic receives compensation for providing patient safety training services from Varian Medical Systems, the sponsor of this study.

  12. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  13. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  14. Organizing and accessing methods for massive medical microscopic image data

    Science.gov (United States)

    Deng, Yan; Tang, Lixin

    2007-12-01

    The development of electronic medical archives requires mosaicking medical microscopic images into a single whole image, and the stitching result is usually a massive file that is hard to store or access. This paper proposes a file format named Medical TIFF to organize the massive microscopic image data. Medical TIFF organizes the image data in tiles, appends a thumbnail of the result image at the end of the file, and offers a way to add medical information to the image file. The paper then designs a three-layer system to access the file: the Physical Layer gathers the Medical TIFF components dispersed over the file and organizes them hierarchically, the Logical Layer uses a two-dimensional dynamic array to deal with the tiles, and the Application Layer provides the interfaces for applications developed on the basis of the system.
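    The Logical Layer's two-dimensional dynamic array of tiles can be pictured as a lazy index from (row, column) tile coordinates to tile pixel data. The Python sketch below is a hypothetical illustration of that idea only; the class, its fields and the fake tile loader are invented for exposition and are not the paper's Medical TIFF specification.

```python
import numpy as np

class TileGrid:
    """Hypothetical logical layer: lazily maps (row, col) tile coordinates
    to tile pixel data, so a huge mosaic never has to be loaded at once."""

    def __init__(self, tile_shape, grid_shape, loader):
        self.tile_shape = tile_shape   # (height, width) of one tile
        self.grid_shape = grid_shape   # (rows, cols) of tiles
        self.loader = loader           # callable(row, col) -> ndarray
        self._cache = {}               # simple in-memory tile cache

    def tile(self, row, col):
        if (row, col) not in self._cache:
            self._cache[(row, col)] = self.loader(row, col)
        return self._cache[(row, col)]

    def region(self, y0, x0, y1, x1):
        """Assemble an arbitrary pixel region from the tiles that cover it."""
        th, tw = self.tile_shape
        out = np.zeros((y1 - y0, x1 - x0), dtype=np.uint8)
        for r in range(y0 // th, (y1 - 1) // th + 1):
            for c in range(x0 // tw, (x1 - 1) // tw + 1):
                t = self.tile(r, c)
                ty, tx = r * th, c * tw
                ys, xs = max(y0, ty), max(x0, tx)
                ye, xe = min(y1, ty + th), min(x1, tx + tw)
                out[ys - y0:ye - y0, xs - x0:xe - x0] = \
                    t[ys - ty:ye - ty, xs - tx:xe - tx]
        return out

# Usage with a fake loader that synthesizes 256x256 tiles on demand
grid = TileGrid((256, 256), (40, 60),
                lambda r, c: np.full((256, 256), (r + c) % 255, np.uint8))
print(grid.region(300, 500, 800, 1200).shape)  # (500, 700)
```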

  15. AMIsurvey, chimenea and other tools: Automated imaging for transient surveys with existing radio-observatories

    CERN Document Server

    Staley, Tim D

    2015-01-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, making use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. These packages...

  16. Automated interpretation of PET/CT images in patients with lung cancer

    DEFF Research Database (Denmark)

    Gutte, Henrik; Jakobsson, David; Olofsson, Fredrik

    2007-01-01

    PURPOSE: To develop a completely automated method based on image processing techniques and artificial neural networks for the interpretation of combined [(18)F]fluorodeoxyglucose (FDG) positron emission tomography (PET) and computed tomography (CT) images for the diagnosis and staging of lung...... for localization of lesions in the PET images in the feature extraction process. Eight features from each examination were used as inputs to artificial neural networks trained to classify the images. Thereafter, the performance of the network was evaluated in the test set. RESULTS: The performance of the automated...... method measured as the area under the receiver operating characteristic curve, was 0.97 in the test group, with an accuracy of 92%. The sensitivity was 86% at a specificity of 100%. CONCLUSIONS: A completely automated method using artificial neural networks can be used to detect lung cancer...

  17. Extending and applying active appearance models for automated, high precision segmentation in different image modalities

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Fisker, Rune; Ersbøll, Bjarne Kjær

    2001-01-01

    , an initialization scheme is designed thus making the usage of AAMs fully automated. Using these extensions it is demonstrated that AAMs can segment bone structures in radiographs, pork chops in perspective images and the left ventricle in cardiovascular magnetic resonance images in a robust, fast and accurate...

  18. Knowledge Acquisition, Validation, and Maintenance in a Planning System for Automated Image Processing

    Science.gov (United States)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.

  19. Medical image compression with embedded-wavelet transform

    Science.gov (United States)

    Cheng, Po-Yuen; Lin, Freddie S.; Jannson, Tomasz

    1997-10-01

    The need for effective medical image compression and transmission techniques continues to grow because of the huge volume of radiological images captured each year. The limited bandwidth and efficiency of current networking systems cannot meet this need. In response, Physical Optics Corporation devised an efficient medical image management system to significantly reduce the storage space and transmission bandwidth required for digitized medical images. The major functions of this system are: (1) compressing medical imagery, using a visual-lossless coder, to reduce the storage space required; (2) transmitting image data progressively, to use the transmission bandwidth efficiently; and (3) indexing medical imagery according to image characteristics, to enable automatic content-based retrieval. A novel scalable wavelet-based image coder was developed to implement the system. In addition to its high compression, this approach is scalable in both image size and quality. The system provides dramatic solutions to many medical image handling problems. One application is the efficient storage and fast transmission of medical images over picture archiving and communication systems. In addition to reducing costs, the potential impact on improving the quality and responsiveness of health care delivery in the US is significant.
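    The essence of a scalable wavelet coder can be approximated in a few lines with PyWavelets: decompose the image, keep only the largest-magnitude coefficients (a crude stand-in for the embedded coding actually used), and reconstruct. This is a minimal sketch for intuition, assuming pywt and NumPy; it is not the coder developed by Physical Optics Corporation, and the 5% retention rate is arbitrary.

```python
import numpy as np
import pywt

def wavelet_compress(image, wavelet="bior4.4", level=3, keep=0.05):
    """Keep only the largest `keep` fraction of wavelet coefficients and
    reconstruct; a crude stand-in for an embedded wavelet coder."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # Zero all but the largest-magnitude coefficients
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr_sparse = np.where(np.abs(arr) >= thresh, arr, 0.0)
    coeffs_sparse = pywt.array_to_coeffs(arr_sparse, slices,
                                         output_format="wavedec2")
    return pywt.waverec2(coeffs_sparse, wavelet)

image = np.random.rand(256, 256)            # stand-in for a radiograph
recon = wavelet_compress(image)[:256, :256]
rmse = np.sqrt(np.mean((recon - image) ** 2))
print("kept 5% of coefficients, RMSE:", round(float(rmse), 4))
```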

  20. Automated recognition of cell phenotypes in histology images based on membrane- and nuclei-targeting biomarkers

    Directory of Open Access Journals (Sweden)

    Tözeren Aydın

    2007-09-01

    Full Text Available Abstract. Background: Three-dimensional in vitro cultures of cancer cells are used to predict the effects of prospective anti-cancer drugs in vivo. In this study, we present an automated image analysis protocol for detailed morphological protein marker profiling of tumoroid cross section images. Methods: Histologic cross sections of breast tumoroids developed in co-culture suspensions of breast cancer cell lines, stained for E-cadherin and progesterone receptor, were digitized and pixels in these images were classified into five categories using k-means clustering. Automated segmentation was used to identify image regions composed of cells expressing a given biomarker. Synthesized images were created to check the accuracy of the image processing system. Results: Accuracy of automated segmentation was over 95% in identifying regions of interest in synthesized images. Image analysis of adjacent histology slides stained, respectively, for Ecad and PR accurately predicted regions of different cell phenotypes. Image analysis of tumoroid cross sections from different tumoroids obtained under the same co-culture conditions indicated the variation of cellular composition from one tumoroid to another. Variations in the compositions of cross sections obtained from the same tumoroid were established by parallel analysis of Ecad- and PR-stained cross section images. Conclusion: The proposed image analysis methods offer standardized high-throughput profiling of the molecular anatomy of tumoroids based on both membrane and nuclei markers, which is suitable for rapid large-scale investigations of anti-cancer compounds for drug development.
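    The pixel-classification step (k-means clustering of stained cross-section pixels into five categories) corresponds to routine clustering of per-pixel colour vectors. Below is a generic scikit-learn sketch on a synthetic RGB image; it is not the authors' pipeline, and only the number of clusters is taken from the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_pixels(rgb_image, n_classes=5, seed=0):
    """Cluster per-pixel RGB vectors into n_classes categories and return
    a label image with the same height/width as the input."""
    h, w, _ = rgb_image.shape
    pixels = rgb_image.reshape(-1, 3).astype(np.float64)
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=seed).fit_predict(pixels)
    return labels.reshape(h, w)

# Synthetic stand-in for a digitized tumoroid cross section
img = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
label_map = classify_pixels(img)
print("pixels per class:", np.bincount(label_map.ravel(), minlength=5))
```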

  1. An Improved FCM Medical Image Segmentation Algorithm Based on MMTD

    Directory of Open Access Journals (Sweden)

    Ningning Zhou

    2014-01-01

    Full Text Available Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation, but FCM is highly vulnerable to noise because it does not consider spatial information during segmentation. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes some spatial features into account, is proposed in this paper. The experimental results show that the proposed algorithm is more robust to noise than the standard FCM, with more certainty and less fuzziness. This should lead to practical and effective applications in medical image segmentation.
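    As a point of reference for the baseline this paper improves on, a plain fuzzy c-means loop (no spatial information and no MMTD term) can be written compactly in NumPy. The sketch below is that standard baseline, not the proposed algorithm; the cluster count, fuzzifier and toy intensity data are illustrative.

```python
import numpy as np

def fcm_1d(values, n_clusters=3, m=2.0, n_iter=100, eps=1e-5, seed=0):
    """Standard fuzzy c-means on scalar intensities (the plain FCM baseline,
    with no spatial or MMTD term). Returns cluster centers and a membership
    matrix of shape (n_clusters, n_samples)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, values.size))
    u /= u.sum(axis=0, keepdims=True)
    centers = np.zeros(n_clusters)
    for _ in range(n_iter):
        um = u ** m
        centers = um @ values / um.sum(axis=1)
        dist = np.abs(values[None, :] - centers[:, None]) + 1e-12
        new_u = dist ** (-2.0 / (m - 1.0))       # u_ik ∝ d_ik^(-2/(m-1))
        new_u /= new_u.sum(axis=0, keepdims=True)
        if np.max(np.abs(new_u - u)) < eps:
            u = new_u
            break
        u = new_u
    return centers, u

# Toy "image": three intensity populations with additive noise
pixels = np.concatenate([np.random.normal(mu, 8, 2000) for mu in (40.0, 120.0, 200.0)])
centers, memberships = fcm_1d(pixels)
print("cluster centers:", np.round(np.sort(centers), 1))
```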

  2. Implementation of Novel Medical Image Compression Using Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    Mohammad Al-Rababah

    2016-05-01

    Full Text Available Medical image processing is one of the most important areas of research for medical applications involving digitized medical information. Medical images have large sizes, and since the advent of digital medical information an important challenge has been handling the transmission and storage requirements of such huge data, including medical images. Compression is considered one of the necessary techniques to address this problem, and a large amount of medical imagery must be compressed using lossless compression. This paper proposes a new medical image compression algorithm based on the CDF 9/7 lifting wavelet transform combined with the SPIHT coding algorithm; the algorithm applies the lifting scheme to exploit the benefits of the wavelet transform. To evaluate the proposed algorithm, its results were compared with other compression algorithms such as the JPEG codec. Experimental results prove that the proposed algorithm is superior to the other algorithms in both lossy and lossless compression for all medical images tested. The Wavelet-SPIHT algorithm provides very high PSNR values for MRI images.

  3. SOFTWARE FOR REGIONS OF INTEREST RETRIEVAL ON MEDICAL 3D IMAGES

    Directory of Open Access Journals (Sweden)

    G. G. Stromov

    2014-01-01

    Full Text Available Background. The implementation of software for retrieving regions of interest (ROIs) in 3D medical images is described in this article. It has been tested against a large volume of model MRIs. Material and methods. We tested the software against normal and pathological (severe multiple sclerosis) model MRIs from the BrainWeb resource. The technology stack is based on open-source cross-platform solutions. We implemented the storage system on Maria DB (an open-source fork of MySQL with P/SQL extensions). Python 2.7 scripting was used for automation of extract-transform-load operations. The computational core is written in Java 7 with the Spring framework 3. MongoDB was used as a cache in the cluster of workstations. Maven 3 was chosen as the dependency manager and build system, and the project is hosted at Github. Results. As testing on SSMU's LAN showed, the software that has been developed quite efficiently retrieves ROIs that match the morphological substratum on pathological MRIs. Conclusion. Automating the diagnostic process with medical imaging makes it possible to reduce the subjective component in decision making and to increase the availability of high-tech medicine. The software shown in this article is a complete solution for ROI retrieval and segmentation on model medical images in fully automated mode. We would like to thank Robert Vincent for his great help with the use of the BrainWeb resource.

  4. Improved Strategies for Parallel Medical Image Processing Applications

    Institute of Scientific and Technical Information of China (English)

    WANG Kun; WANG Xiao-ying; LI San-li; CHEN Ying

    2008-01-01

    In order to meet the demands of highly efficient and real-time computer-assisted diagnosis as well as screening in the medical area, improving the efficiency of parallel medical image processing is of great importance. This article proposes improved strategies for parallel medical image processing applications, which are categorized into two classes. For each class an individual strategy is devised, including a theoretical algorithm for minimizing the execution time. Experiments using mammograms not only confirm the validity of the theoretical analysis, with reasonable agreement between the theoretical and measured values, but also show that adopting the improved strategies greatly improves the efficiency of parallel medical image processing.

  5. Automated Micro-Object Detection for Mobile Diagnostics Using Lens-Free Imaging Technology

    Directory of Open Access Journals (Sweden)

    Mohendra Roy

    2016-05-01

    Full Text Available Lens-free imaging technology has been extensively used recently for microparticle and biological cell analysis because of its high throughput, low cost, and simple and compact arrangement. However, this technology still lacks a dedicated and automated detection system. In this paper, we describe a custom-developed automated micro-object detection method for a lens-free imaging system. In our previous work (Roy et al.), we developed a lens-free imaging system using low-cost components. This system was used to generate and capture the diffraction patterns of micro-objects, and a global threshold was used to locate the diffraction patterns. In this work we used the same setup to develop an improved automated detection and analysis algorithm based on adaptive thresholding and clustering of signals. For this purpose, images from the lens-free system were used to understand the features and characteristics of the diffraction patterns of several types of samples. On the basis of this information, we custom-developed an automated algorithm for the lens-free imaging system. Next, all the lens-free images were processed using this custom-developed automated algorithm. The performance of this approach was evaluated by comparing the counting results with standard optical microscope results. We evaluated the counting results for polystyrene microbeads, red blood cells, and HepG2, HeLa, and MCF7 cells. The comparison shows good agreement between the systems, with a correlation coefficient of 0.91 and a linearity slope of 0.877. We also evaluated the automated size profiles of the microparticle samples. This Wi-Fi-enabled lens-free imaging system, along with the dedicated software, possesses great potential for telemedicine applications in resource-limited settings.

  6. Glasses-free 3D viewing systems for medical imaging

    Science.gov (United States)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with a field of view of 7 cm for each eye and a focal length of 25 cm, and show images produced with the system. We also describe the multi-autostereoscopic system, presenting how it can generate 3D medical imaging from the viewpoints of an MRI or CT image, and show results for a 3D angioresonance image.

  7. Data Hiding Scheme on Medical Image using Graph Coloring

    Science.gov (United States)

    Astuti, Widi; Adiwijaya; Novia Wisety, Untari

    2015-06-01

    The utilization of digital medical images is now widespread [4]. Medical images need protection since they may pass through insecure networks. Several watermarking techniques have been developed so that the originality of digital medical images can be guaranteed. In watermarking, the medical image becomes a protected object. Nevertheless, the medical image can also be a medium for hiding secret data such as the patient's medical record. Data hiding is done by inserting data into the image - usually called steganography in images. Because changes to a medical image can influence the diagnosis, steganography is applied only to the non-interest region. Vector Quantization (VQ) is a prominent and frequently used lossy data compression technique. Generally, VQ-based steganography schemes are still limited in terms of the amount of data that can be inserted. This research aims to build a steganography scheme based on Vector Quantization and graph coloring. The test results show that the scheme can embed 28,768 bytes of data, equal to 10,077 characters, for an image area of 3696 pixels.

  8. Medical image of the week: focal myopericarditis

    Directory of Open Access Journals (Sweden)

    Meenakshisundaram C

    2015-07-01

    Full Text Available No abstract available. Article truncated at 150 words. A 44-year-old man with no significant past medical history was admitted with a history of two episodes of substernal chest pain unrelated to exertion, which had resolved spontaneously. Admission vital signs were within normal limits and physical examination was unremarkable. Basic lab tests were normal and urine toxicology was negative. Electrocardiogram was unremarkable with no ST/T changes. Troponin I was elevated at 4.19, which trended up to 6.57. An urgent cardiac angiogram was done which revealed normal, patent coronaries. His transthoracic echocardiogram was also reported to be normal. He continued to have intermittent episodes of chest pain that were partially relieved by morphine. Erythrocyte sedimentation rate and C-reactive protein were elevated. Workup for autoimmune diseases and vasculitis and a myocarditis panel were unremarkable. Later, magnetic resonance imaging (MRI) with gadolinium-enhanced contrast (Figure 1) was obtained, which showed abnormal epicardial/subepicardial myocardial enhancement within the inferolateral wall and cardiac apex consistent with focal ...

  9. Medical image of the week: pneumomediastinum

    Directory of Open Access Journals (Sweden)

    Franco R Jr

    2014-01-01

    Full Text Available No abstract available. Article truncated at 150 words. A 65-year-old man presented with a mild increase in shortness of breath. He had a past medical history of diabetes mellitus, hypertension, and severe malnutrition with percutaneous endoscopic gastrostomy (PEG) placement after a colectomy and end ileostomy for sigmoid volvulus. CXR (Figure 1) suggested a pneumomediastinum, with subsequent chest CT (Figure 2) confirming a moderate-sized pneumomediastinum. He had a chronic cough from chronic obstructive pulmonary disease (COPD) as well as aspiration, and chest CT also demonstrated emphysema with small blebs. He denied any significant chest pain. He was followed conservatively with imaging and discharged in stable condition. Pneumomediastinum can be caused by trauma or esophageal rupture after vomiting (Boerhaave’s syndrome) and can be a spontaneous event if no obvious precipitating cause is identified (1). Valsalva maneuvers, such as cough, sneeze, vomiting and childbirth, can all cause pneumomediastinum. Risk factors include asthma, COPD, interstitial lung disease and inhalational recreational drug use. …

  10. Medical image of the week: purpura fulminans

    Directory of Open Access Journals (Sweden)

    Power EP

    2016-12-01

    Full Text Available No abstract available. Article truncated at 150 words. A 54-year-old man with coronary artery disease, fibromyalgia and chronic sacral ulcers was brought to the emergency department due to acute changes in mentation and hypotension. He suffered a cardiac arrest shortly after arrival to the emergency department during emergent airway management. After successful resuscitation, he was admitted to the medical intensive care unit and treated for septic shock with fluid resuscitation, vasopressors and broad spectrum antibiotics. Laboratory results were significant for disseminated intravascular coagulopathy (DIC): thrombocytopenia, decreased fibrinogen, and elevated PT, PTT and D-dimer levels. Profound metabolic acidosis and lactate elevation were also seen. Blood cultures later revealed multi-drug resistant E. coli bacteremia. Images of the lower extremities (Figure 1) were obtained at initial assessment and are consistent with purpura fulminans. He did not survive the hospital stay. Purpura fulminans, also referred to as skin mottling, is an evolving skin condition which is characterized by an acutely worsening reticular …

  11. Synthetic Aperture Imaging in Medical Ultrasound

    DEFF Research Database (Denmark)

    Nikolov, Svetoslav; Gammelmark, Kim; Pedersen, Morten

    2004-01-01

    with high precision, and the imaging is easily extended to real-time 3D scanning. This paper presents the work done at the Center for Fast Ultrasound Imaging in the area of SA imaging. Three areas that benefit from SA imaging are described. Firstly a preliminary in-vivo evaluation comparing conventional B......Synthetic Aperture (SA) ultrasound imaging is a relatively new and unexploited imaging technique. The images are perfectly focused both in transmit and receive, and have a better resolution and higher dynamic range than conventional ultrasound images. The blood flow can be estimated from SA images...

  12. Supervised variational model with statistical inference and its application in medical image segmentation.

    Science.gov (United States)

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  13. Current trends in medical image registration and fusion

    Directory of Open Access Journals (Sweden)

    Fatma El-Zahraa Ahmed El-Gamal

    2016-03-01

    Full Text Available Recently, medical image registration and fusion processes have come to be considered valuable assistants for medical experts. The role of these processes arises from their ability to help experts in diagnosis, in following up the evolution of diseases, and in deciding on the necessary therapies for the patient's condition. Therefore, the aim of this paper is to focus on medical image registration as well as medical image fusion. In addition, the paper presents a description of the common diagnostic images along with the main characteristics of each of them. The paper also illustrates the most well-known toolkits that have been developed to support work with the registration and fusion processes. Finally, the paper presents the current challenges associated with medical image registration and fusion by illustrating the recent diseases/disorders that have been addressed through such analysis.

  14. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features...... of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches....

  15. A semi-automated image analysis procedure for in situ plankton imaging systems.

    Directory of Open Access Journals (Sweden)

    Hongsheng Bi

    Full Text Available Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory-controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory-controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired under laboratory-controlled conditions or in clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the non-target objects that

  16. Medical image processing on the GPU - past, present and future.

    Science.gov (United States)

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges.

  17. Multiview locally linear embedding for effective medical image retrieval.

    Directory of Open Access Journals (Sweden)

    Hualei Shen

    Full Text Available Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histograms play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian eigenmaps (LE) can be employed to reduce the "curse of dimensionality". Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods.
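    The conventional alternative that MLLE is compared against - concatenating the per-image feature vectors into one long vector and applying a single dimension-reduction step - is easy to reproduce with scikit-learn. The sketch below shows that feature-concatenation + LLE baseline on synthetic feature blocks with made-up dimensionalities; it is not an implementation of the proposed MLLE.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
n_images = 300

# Stand-in per-image features with distinct physical meanings (dims invented)
sift_bow = rng.random((n_images, 128))       # e.g. SIFT bag-of-words histogram
lbp_hist = rng.random((n_images, 59))        # e.g. uniform LBP histogram
intensity_hist = rng.random((n_images, 32))  # e.g. grey-level histogram

# Baseline: concatenate everything into one long vector, then reduce
X = np.hstack([sift_bow, lbp_hist, intensity_hist])
embedding = LocallyLinearEmbedding(n_neighbors=12, n_components=10).fit_transform(X)
print("embedded shape:", embedding.shape)    # (300, 10)
```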

  18. Study on the Medical Image Distributed Dynamic Processing Method

    Institute of Scientific and Technical Information of China (English)

    张全海; 施鹏飞

    2003-01-01

    To meet the challenge of implementing rapidly advanced, time-consuming medical image processing algorithms, it is necessary to develop a medical image processing technology that can process a 2D or 3D medical image dynamically on the web. In earlier systems, only static image processing could be provided because of the limitations of web technology. The development of Java and CORBA (common object request broker architecture) overcomes the shortcomings of static web applications and makes dynamic processing of medical images on the web available. To develop an open solution for distributed computing, we integrate Java and the web with CORBA and present a web-based dynamic medical image processing method, which adopts Java as the language for programming the application and web components and utilizes the CORBA architecture to cope with the heterogeneous nature of a complex distributed system. The method also provides a platform-independent, transparent processing architecture to implement advanced image routines and enables users to access large datasets and resources according to the requirements of medical applications. The experiment in this paper shows that the dynamic medical image processing method implemented on the web using Java and CORBA is feasible.

  19. A fully automated method for quantifying and localizing white matter hyperintensities on MR images.

    Science.gov (United States)

    Wu, Minjie; Rosano, Caterina; Butters, Meryl; Whyte, Ellen; Nable, Megan; Crooks, Ryan; Meltzer, Carolyn C; Reynolds, Charles F; Aizenstein, Howard J

    2006-12-01

    White matter hyperintensities (WMH), commonly found on T2-weighted FLAIR brain MR images in the elderly, are associated with a number of neuropsychiatric disorders, including vascular dementia, Alzheimer's disease, and late-life depression. Previous MRI studies of WMHs have primarily relied on subjective and global (i.e., full-brain) ratings of WMH grade. In the current study we implement and validate an automated method for quantifying and localizing WMHs. We adapt a fuzzy-connected algorithm to automate the segmentation of WMHs and use a demons-based image registration to automate the anatomic localization of the WMHs using the Johns Hopkins University White Matter Atlas. The method is validated using brain MR images acquired from eleven elderly subjects with late-onset late-life depression (LLD) and eight elderly controls. This dataset was chosen because LLD subjects are known to have significant WMH burden. The volumes of WMH identified by our automated method are compared with the accepted gold standard (manual ratings), and a significant correlation between the automated method and the manual ratings is found. Consistent with a previous study of WMHs in late-life depression [Progress in Neuro-Psychopharmacology and Biological Psychiatry, 27(3), 539-544], we found there was a significantly greater WMH burden in the LLD subjects versus the controls for both the manual and the automated method. The effect size was greater for the automated method, suggesting that it is a more specific measure. Additionally, we describe the anatomic localization of the WMHs in LLD subjects as well as in the control subjects, and detect the regions of interest (ROIs) specific to the WMH burden of LLD patients. Given the emergence of large neuroimage databases, techniques such as that described here will allow for a better understanding of the relationship between WMHs and neuropsychiatric disorders.

  20. A New Method of CT Medical Images Contrast Enhancement

    Institute of Scientific and Technical Information of China (English)

    SUN Feng-rong; LIU Wei; WANG Chang-yu; MEI Liang-mo

    2004-01-01

    A new method of contrast enhancement using the multiscale edge representation of images is proposed in this paper and applied to CT medical image processing. Compared to the traditional windowing technique, our method is adaptive and better meets the demands of radiology clinics. The clinical experimental results show the practicality and potential applied value of our method in the field of CT medical image contrast enhancement.

  1. Improving Automated Annotation of Benthic Survey Images Using Wide-band Fluorescence

    Science.gov (United States)

    Beijbom, Oscar; Treibitz, Tali; Kline, David I.; Eyal, Gal; Khen, Adi; Neal, Benjamin; Loya, Yossi; Mitchell, B. Greg; Kriegman, David

    2016-03-01

    Large-scale imaging techniques are used increasingly for ecological surveys. However, manual analysis can be prohibitively expensive, creating a bottleneck between collected images and desired data-products. This bottleneck is particularly severe for benthic surveys, where millions of images are obtained each year. Recent automated annotation methods may provide a solution, but reflectance images do not always contain sufficient information for adequate classification accuracy. In this work, the FluorIS, a low-cost modified consumer camera, was used to capture wide-band wide-field-of-view fluorescence images during a field deployment in Eilat, Israel. The fluorescence images were registered with standard reflectance images, and an automated annotation method based on convolutional neural networks was developed. Our results demonstrate a 22% reduction of classification error-rate when using both image types compared to only using reflectance images. The improvements were particularly large for the coral reef genera Platygyra, Acropora and Millepora, where classification recall improved by 38%, 33%, and 41%, respectively. We conclude that convolutional neural networks can be used to combine reflectance and fluorescence imagery in order to significantly improve automated annotation accuracy and reduce the manual annotation bottleneck.

  2. A method for fast automated microscope image stitching.

    Science.gov (United States)

    Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong

    2013-05-01

    Image stitching is an important technology to produce a panorama or larger image by combining several images with overlapped areas. In much biomedical research, image stitching is highly desirable to acquire a panoramic image which represents large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal light microscope image stitching algorithm based on feature extraction. First, an algorithm of scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched in a short time and with higher repeatability. Then, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast for extracting more features. Thirdly, the rough overlapping zones of the preprocessed images were calculated by phase correlation, and the improved SURF was used to extract the image features in the rough overlapping areas. Fourthly, the features were matched by a matching algorithm and the transformation parameters were estimated, then the images were blended seamlessly. Finally, this procedure was applied to stitch normal light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images and our method is able to stitch microscope images automatically with high precision and high speed. Also, the method proposed in this paper is applicable to registration and stitching of common images as well as stitching of microscope images in the field of virtual microscopy for the purpose of observing, exchanging, saving, and establishing a database of microscope images.
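    The overall flow described above (preprocess, estimate the rough overlap by phase correlation, extract local features, estimate the transformation, blend) can be sketched with stock OpenCV calls. Because SURF ships only with opencv-contrib, the sketch substitutes ORB features, so it illustrates the pipeline rather than reproducing the authors' improved SURF; all parameter values are placeholders.

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    """Pairwise stitching sketch: rough offset by phase correlation, local
    features (ORB, standing in for the paper's improved SURF), RANSAC
    homography, then warping img_b onto img_a with a naive blend."""
    a = cv2.equalizeHist(img_a)                 # contrast enhancement step
    b = cv2.equalizeHist(img_b)

    # Rough translation between the two fields of view
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(a), np.float32(b))

    orb = cv2.ORB_create(2000)
    kpa, da = orb.detectAndCompute(a, None)
    kpb, db = orb.detectAndCompute(b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    matches = sorted(matches, key=lambda m: m.distance)[:200]

    src = np.float32([kpb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kpa[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        raise RuntimeError("not enough reliable matches")

    h, w = img_a.shape
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))   # oversized canvas
    canvas[:h, :w] = np.maximum(canvas[:h, :w], img_a)   # naive blend
    return canvas, (dx, dy)

if __name__ == "__main__":
    # Synthetic specimen: random bright blobs on a dark background
    rng = np.random.default_rng(1)
    scene = np.zeros((600, 900), np.uint8)
    for _ in range(300):
        x, y = int(rng.integers(0, 900)), int(rng.integers(0, 600))
        cv2.circle(scene, (x, y), int(rng.integers(2, 6)), int(rng.integers(80, 255)), -1)
    left, right = scene[:, :500].copy(), scene[:, 350:850].copy()  # ~150 px overlap
    mosaic, rough = stitch_pair(left, right)
    print("rough phase-correlation shift:", rough, "mosaic shape:", mosaic.shape)
```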

  3. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    Science.gov (United States)

    Karagiannis, Georgios; Antón Castro, Francesc; Mioc, Darka

    2016-06-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features detected are invariant to image rotations, translations, scaling and also to changes in illumination, brightness and 3-dimensional viewpoint. Afterwards, each feature of the reference image is matched with one in the sensed image if, and only if, the distance between them multiplied by a threshold is shorter than the distances between the point and all the other points in the sensed image. Then, the matched features are used to compute the parameters of the homography that transforms the coordinate system of the sensed image to the coordinate system of the reference image. The Delaunay triangulations of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches.
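    A compact rendering of this pipeline with stock OpenCV and SciPy follows: SIFT keypoints, ratio-test matching, RANSAC homography, and Delaunay triangulation of the matched point sets. It is a hedged sketch of the described steps, not the authors' Matlab implementation, and the triangulation-consistency check is deliberately simplified to a simplex-count comparison.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def sift_match_and_triangulate(ref_gray, sen_gray, ratio=0.75):
    """SIFT keypoints in both images, ratio-test matching, homography from
    sensed to reference coordinates, and Delaunay triangulation of each
    matched point set (consistency reduced to a simplex-count check)."""
    sift = cv2.SIFT_create()
    kr, dr = sift.detectAndCompute(ref_gray, None)
    ks, ds = sift.detectAndCompute(sen_gray, None)

    # Accept a match only if it is clearly better than the runner-up
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(dr, ds, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 4:
        raise RuntimeError("not enough reliable matches")

    ref_pts = np.float32([kr[m.queryIdx].pt for m in good])
    sen_pts = np.float32([ks[m.trainIdx].pt for m in good])
    H, inliers = cv2.findHomography(sen_pts, ref_pts, cv2.RANSAC, 3.0)

    # Triangulate both matched point sets and compare their structure
    tri_ref, tri_sen = Delaunay(ref_pts), Delaunay(sen_pts)
    consistent = len(tri_ref.simplices) == len(tri_sen.simplices)
    return H, len(good), consistent
```

    Calling the function on a reference/sensed image pair returns the estimated homography, the number of accepted matches, and the coarse consistency flag.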

  4. Microscopic images dataset for automation of RBCs counting.

    Science.gov (United States)

    Abbas, Sherif

    2015-12-01

    A method for Red Blood Corpuscle (RBC) counting has been developed using light microscopic images of RBCs and a Matlab algorithm. The dataset consists of Red Blood Corpuscle (RBC) images and their segmented counterparts. A detailed description using a flow chart is given in order to show how to produce the RBC mask. The RBC mask was used to count the number of RBCs in the blood smear image.

  5. Automated quadrilateral mesh generation for digital image structures

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    With the development of advanced imaging technology, digital images are widely used. This paper proposes an automatic quadrilateral mesh generation algorithm for multi-colour imaged structures. It takes an original arbitrary digital image as an input for automatic quadrilateral mesh generation; this includes removing the noise, extracting and smoothing the boundary geometries between different colours, and automatic all-quad mesh generation with the above boundaries as constraints. An application example is...

  6. Microscopic images dataset for automation of RBCs counting

    Directory of Open Access Journals (Sweden)

    Sherif Abbas

    2015-12-01

    Full Text Available A method for Red Blood Corpuscle (RBC) counting has been developed using light microscopic images of RBCs and a Matlab algorithm. The dataset consists of Red Blood Corpuscle (RBC) images and their segmented counterparts. A detailed description using a flow chart is given in order to show how to produce the RBC mask. The RBC mask was used to count the number of RBCs in the blood smear image.

  7. Automated Video Analysis of Non-verbal Communication in a Medical Setting.

    Science.gov (United States)

    Hart, Yuval; Czerniak, Efrat; Karnieli-Miller, Orit; Mayo, Avraham E; Ziv, Amitai; Biegon, Anat; Citron, Atay; Alon, Uri

    2016-01-01

    Non-verbal communication plays a significant role in establishing good rapport between physicians and patients and may influence aspects of patient health outcomes. It is therefore important to analyze non-verbal communication in medical settings. Current approaches to measuring non-verbal interactions in medicine employ coding by human raters. Such tools are labor intensive and hence limit the scale of possible studies. Here, we present an automated video analysis tool for non-verbal interactions in a medical setting. We test the tool using videos of subjects who interact with an actor portraying a doctor. The actor interviews the subjects following one of two scripted scenarios: in one scenario the actor showed minimal engagement with the subject; the second scenario included active listening by the doctor and attentiveness to the subject. We analyze the cross correlation in total kinetic energy of the two people in the dyad, and also characterize the frequency spectrum of their motion. We find large differences in interpersonal motion synchrony and entrainment between the two performance scenarios. The active listening scenario shows more synchrony and more symmetric followership than the other scenario. Moreover, the active listening scenario shows more of the high-frequency motion termed jitter, which has recently been suggested to be a marker of followership. The present approach may be useful for analyzing physician-patient interactions in terms of synchrony and dominance in a range of medical settings.

  8. Nonrigid Medical Image Registration Based on Mesh Deformation Constraints

    Directory of Open Access Journals (Sweden)

    XiangBo Lin

    2013-01-01

    Full Text Available Regularizing the deformation field is an important aspect of nonrigid medical image registration. By covering the template image with a triangular mesh, this paper proposes a new regularization constraint in terms of connections between mesh vertices. The connection relationship is preserved by the spring analogy method. The method is evaluated by registering cerebral magnetic resonance imaging (MRI) data obtained from different individuals. Experimental results show that the proposed method has good deformation ability and topology-preserving ability, providing a new approach to nonrigid medical image registration.

  9. An automated detection for axonal boutons in vivo two-photon imaging of mouse

    Science.gov (United States)

    Li, Weifu; Zhang, Dandan; Xie, Qiwei; Chen, Xi; Han, Hua

    2017-02-01

    Activity-dependent changes in the synaptic connections of the brain are tightly related to learning and memory. Previous studies have shown that essentially all new synaptic contacts were made by adding new partners to existing synaptic elements. To further explore synaptic dynamics in specific pathways, concurrent imaging of pre- and postsynaptic structures in identified connections is required. Consequently, considerable attention has been paid to the automated detection of axonal boutons. Unlike most previous methods, which were proposed for in vitro data, this paper considers the more practical case of in vivo neuron images, which can provide real-time information and direct observation of the dynamics of a disease process in the mouse. We present an automated approach for detecting axonal boutons that starts by deconvolving the original images, then thresholds the enhanced images, and retains the regions fulfilling a series of criteria. Experimental results on in vivo two-photon imaging of mice demonstrate the effectiveness of our proposed method.
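    The three-stage recipe (deconvolve, threshold, keep regions that satisfy criteria) can be sketched with scikit-image. The point-spread function, iteration count, and the size/shape criteria below are illustrative placeholders, not the values used in the paper.

```python
import numpy as np
from skimage import filters, measure, restoration

def detect_boutons(frame, min_area=4, max_area=80, max_ecc=0.9):
    """Sketch of the described pipeline: deconvolve the two-photon frame,
    threshold the enhanced image, and keep connected regions that satisfy
    simple size/shape criteria (all parameter values are placeholders)."""
    # Small Gaussian point-spread function as a stand-in for the measured PSF
    x = np.arange(-3, 4)
    g = np.exp(-(x ** 2) / 2.0)
    psf = np.outer(g, g)
    psf /= psf.sum()

    deconv = restoration.richardson_lucy(frame, psf, 10)   # 10 RL iterations
    mask = deconv > filters.threshold_otsu(deconv)          # global threshold
    labels = measure.label(mask)

    return [r for r in measure.regionprops(labels)
            if min_area <= r.area <= max_area and r.eccentricity <= max_ecc]

# Toy frame: a few bright puncta on a noisy background
rng = np.random.default_rng(0)
frame = rng.normal(0.1, 0.02, (128, 128))
for y, x in rng.integers(10, 118, (15, 2)):
    frame[y - 1:y + 2, x - 1:x + 2] += 0.8
print("candidate boutons:", len(detect_boutons(np.clip(frame, 0, 1))))
```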

  10. A review of automated image understanding within 3D baggage computed tomography security screening.

    Science.gov (United States)

    Mouton, Andre; Breckon, Toby P

    2015-01-01

    Baggage inspection is the principal safeguard against the transportation of prohibited and potentially dangerous materials at airport security checkpoints. Although traditionally performed by 2D X-ray based scanning, increasingly stringent security regulations have led to a growing demand for more advanced imaging technologies. The role of X-ray Computed Tomography is thus rapidly expanding beyond the traditional materials-based detection of explosives. The development of computer vision and image processing techniques for the automated understanding of 3D baggage-CT imagery is however, complicated by poor image resolutions, image clutter and high levels of noise and artefacts. We discuss the recent and most pertinent advancements and identify topics for future research within the challenging domain of automated image understanding for baggage security screening CT.

  11. A survey of GPU-based medical image computing techniques.

    Science.gov (United States)

    Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming; Wang, Defeng

    2012-09-01

    Medical imaging currently plays a crucial role throughout clinical applications, from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets that must be processed in practical clinical applications. With the rapidly enhancing performance of graphics processors, improved programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical image applications. The major purpose of this survey is to provide a comprehensive reference source for starters or researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely, segmentation, registration and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine.

  12. Mesh Processing in Medical-Image Analysis-a Tutorial

    DEFF Research Database (Denmark)

    Levine, Joshua A.; Paulsen, Rasmus Reinhold; Zhang, Yongjie

    2012-01-01

    Medical-image analysis requires an understanding of sophisticated scanning modalities, constructing geometric models, building meshes to represent domains, and downstream biological applications. These four steps form an image-to-mesh pipeline. For research in this field to progress, the imaging...

  13. Automated quantification of budding Saccharomyces cerevisiae using a novel image cytometry method.

    Science.gov (United States)

    Laverty, Daniel J; Kury, Alexandria L; Kuksin, Dmitry; Pirani, Alnoor; Flanagan, Kevin; Chan, Leo Li-Ying

    2013-06-01

    The measurements of concentration, viability, and budding percentages of Saccharomyces cerevisiae are performed on a routine basis in the brewing and biofuel industries. Generation of these parameters is of great importance in a manufacturing setting, where they can aid in the estimation of product quality, quantity, and fermentation time of the manufacturing process. Specifically, budding percentages can be used to estimate the reproduction rate of yeast populations, which directly correlates with metabolism of polysaccharides and bioethanol production, and can be monitored to maximize production of bioethanol during fermentation. The traditional method involves manual counting using a hemacytometer, but this is time-consuming and prone to human error. In this study, we developed a novel automated method for the quantification of yeast budding percentages using Cellometer image cytometry. The automated method utilizes a dual-fluorescent nucleic acid dye to specifically stain live cells for imaging analysis of unique morphological characteristics of budding yeast. In addition, cell cycle analysis is performed as an alternative method for budding analysis. We were able to show comparable yeast budding percentages between manual and automated counting, as well as cell cycle analysis. The automated image cytometry method is used to analyze and characterize corn mash samples directly from fermenters during standard fermentation. Since concentration, viability, and budding percentages can be obtained simultaneously, the automated method can be integrated into the fermentation quality assurance protocol, which may improve the quality and efficiency of beer and bioethanol production processes.

  14. A feasibility assessment of automated FISH image and signal analysis to assist cervical cancer detection

    Science.gov (United States)

    Wang, Xingwei; Li, Yuhua; Liu, Hong; Li, Shibo; Zhang, Roy R.; Zheng, Bin

    2012-02-01

    Fluorescence in situ hybridization (FISH) technology provides a promising molecular imaging tool for detecting cervical cancer. Since manual FISH analysis is difficult, time-consuming, and inconsistent, automated FISH image scanning systems have been developed. Due to the limited focal depth of the scanned microscopic images, a FISH-probed specimen needs to be scanned in multiple layers, which generates huge volumes of image data. To improve the diagnostic efficiency of automated FISH image analysis, we developed a computer-aided detection (CAD) scheme. In this experiment, four Pap-smear specimen slides were scanned by a dual-detector fluorescence image scanning system that acquired two spectrum images simultaneously, representing images of interphase cells and of the FISH-probed chromosome X. During image scanning, once a cell signal was detected, the system captured nine image slices by automatically adjusting the optical focus. Based on a sharpness index and maximum intensity measurement, cells and FISH signals distributed in 3-D space were projected into a 2-D confocal image. The CAD scheme was applied to each confocal image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm and to detect FISH-probed signals using a top-hat transform. The ratio of abnormal cells was calculated to detect positive cases. In the four scanned specimen slides, CAD generated 1676 confocal images that depicted analyzable cells. FISH-probed signals were independently detected by our CAD algorithm and by an observer. The Kappa coefficients for agreement between CAD and the observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots. The study demonstrated the feasibility of applying automated FISH image and signal analysis to assist cytogeneticists in detecting cervical cancers.
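
    The top-hat step used for FISH signal detection can be sketched as follows with scikit-image; the spot radius and the Otsu threshold on the top-hat residual are illustrative assumptions, not the adaptive settings of the CAD scheme itself.

    from skimage import filters, measure, morphology

    def detect_fish_spots(confocal_image, spot_radius=3):
        """Detect bright FISH-probed spots in a 2-D confocal projection."""
        # White top-hat: the original minus its morphological opening, which
        # suppresses the slowly varying cell background so that small bright
        # spots stand out.
        tophat = morphology.white_tophat(confocal_image, morphology.disk(spot_radius))

        # Threshold the residual and label connected components as candidate spots.
        spots = measure.label(tophat > filters.threshold_otsu(tophat))
        return [region.centroid for region in measure.regionprops(spots)]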

  15. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    Science.gov (United States)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2010-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are directly related to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may threaten the endothelial cell density to such an extent that the optical properties of the cornea, and thus clear eyesight, are threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis of a very large range of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images, normalizing lights and contrasts. The digitally enhanced images of the corneal endothelium were Fourier transformed, using the fast Fourier transform (FFT), and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image were used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Results in the form of estimated cell density of the corneal endothelium were obtained, using the fully automated analysis software on 292 images captured by CSM. The cell density obtained by the
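
    A minimal sketch of the Fourier-based density estimate follows: the regular endothelial mosaic produces a ring in the magnitude spectrum, and the ring radius gives the dominant spatial frequency of the cell packing. The radial-averaging approach, the square-image assumption and the simple frequency-squared density approximation are illustrative, not the diffraction-theory calculation used in the study.

    import numpy as np

    def endothelial_cell_density(image, pixel_size_mm):
        """Estimate corneal endothelial cell density (cells/mm^2) from a square CSM image."""
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))

        # Radially average the magnitude spectrum around the zero-frequency centre.
        cy, cx = np.array(spectrum.shape) // 2
        y, x = np.indices(spectrum.shape)
        r = np.hypot(y - cy, x - cx).astype(int)
        radial = np.bincount(r.ravel(), weights=spectrum.ravel()) / np.bincount(r.ravel())

        # The first strong peak away from DC marks the cell-spacing frequency.
        peak_radius = np.argmax(radial[2:cy]) + 2        # skip the DC neighbourhood
        cycles_per_mm = peak_radius / (spectrum.shape[0] * pixel_size_mm)
        return cycles_per_mm ** 2                        # approximate cells per mm^2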

  16. Advanced automated gain adjustments for in-vivo ultrasound imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo

    2015-01-01

    Automatic gain adjustments are necessary on state-of-the-art ultrasound scanners to obtain optimal scan quality, while reducing unnecessary user interactions with the scanner. However, when large anechoic regions exist in the scan plane, the sudden and drastic variation of attenuations...... in the scanned media complicates the gain compensation. This paper presents an advanced and automated gain adjustment method that precisely compensates for the gains on scans and dynamically adapts to the drastic attenuation variations between different media. The proposed algorithm makes use of several

  17. Real Time Medical Image Consultation System Through Internet

    Directory of Open Access Journals (Sweden)

    D. Durga Prasad

    2010-01-01

    Full Text Available Teleconsultation among doctors using a telemedicine system typically involves dealing with and sharing medical images of the patients. This paper describes a software tool written in Java which enables the participating doctors to view medical images such as blood slides, X-Ray, USG, ECG etc. online and even allows them to mark and/or zoom specific areas. It is a multi-party secure image communication system tool that can be used by doctors and medical consultants over the Internet.

  18. A New Approach To Embed Medical Information Into Medical Images

    Directory of Open Access Journals (Sweden)

    Esra Ayça Güzeldereli

    2013-08-01

    Full Text Available In recent years, in light of developments in the field of computing, there has been an increasing demand for data processing in the health sector. Many different methods are being used to link personal information or diagnoses with the patient. These methods can differ from each other according to the imaging techniques used. In this thesis, such data hiding/embedding techniques are preferred in order to provide privacy for patients. It is also useful to combine them with compression techniques, compressing the data so as to preserve the originality of the image, which would otherwise be degraded by the large amount of personal information saved in memory.

  19. Novel automated motion compensation technique for producing cumulative maximum intensity subharmonic images.

    Science.gov (United States)

    Dave, Jaydev K; Forsberg, Flemming

    2009-09-01

    The aim of this study was to develop a novel automated motion compensation algorithm for producing cumulative maximum intensity (CMI) images from subharmonic imaging (SHI) of breast lesions. SHI is a nonlinear contrast-specific ultrasound imaging technique in which pulses are received at half the frequency of the transmitted pulses. A Logiq 9 scanner (GE Healthcare, Milwaukee, WI, USA) was modified to operate in grayscale SHI mode (transmitting/receiving at 4.4/2.2 MHz) and used to scan 14 women with 16 breast lesions. Manual CMI images were reconstructed by temporal maximum-intensity projection of pixels traced from the first frame to the last. In the new automated technique, the user selects a kernel in the first frame and the algorithm then uses the sum of absolute difference (SAD) technique to identify motion-induced displacements in the remaining frames. A reliability parameter was used to estimate the accuracy of the motion tracking based on the ratio of the minimum SAD to the average SAD. Two thresholds (the mean and 85% of the mean reliability parameter) were used to eliminate images plagued by excessive motion and/or noise. The automated algorithm was compared with the manual technique for computational time, correction of motion artifacts, removal of noisy frames and quality of the final image. The automated algorithm compensated for motion artifacts and noisy frames. The computational time was 2 min compared with 60-90 minutes for the manual method. The quality of the motion-compensated CMI-SHI images generated by the automated technique was comparable to the manual method and provided a snapshot of the microvasculature showing interconnections between vessels, which was less evident in the original data. In conclusion, an automated algorithm for producing CMI-SHI images has been developed. It eliminates the need for manual processing and yields reproducible images, thereby increasing the throughput and efficiency of reconstructing CMI-SHI images. The
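
    The sum-of-absolute-differences (SAD) tracking at the heart of the algorithm can be sketched as below; the search window size, the kernel selection and the reliability ratio are illustrative assumptions rather than the exact implementation evaluated in the study.

    import numpy as np

    def track_kernel_sad(frames, kernel_box, search=8):
        """Track a user-selected kernel across frames by sum of absolute differences.

        `frames` is a sequence of 2-D arrays, `kernel_box` is (row, col, height,
        width) in the first frame, and `search` is the half-width of the search
        window. Returns, per subsequent frame, the displacement of the best match
        and a reliability ratio (minimum SAD / average SAD); lower values indicate
        more trustworthy tracking.
        """
        r0, c0, h, w = kernel_box
        kernel = frames[0][r0:r0 + h, c0:c0 + w].astype(np.float64)
        results = []
        for frame in frames[1:]:
            sads = np.full((2 * search + 1, 2 * search + 1), np.inf)
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    r, c = r0 + dr, c0 + dc
                    if 0 <= r and 0 <= c and r + h <= frame.shape[0] and c + w <= frame.shape[1]:
                        patch = frame[r:r + h, c:c + w].astype(np.float64)
                        sads[dr + search, dc + search] = np.abs(patch - kernel).sum()
            best = np.unravel_index(np.argmin(sads), sads.shape)
            reliability = sads[best] / np.mean(sads[np.isfinite(sads)])
            results.append(((best[0] - search, best[1] - search), reliability))
        return results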

  20. Automated detection of cardiac phase from intracoronary ultrasound image sequences.

    Science.gov (United States)

    Sun, Zheng; Dong, Yi; Li, Mengchan

    2015-01-01

    Intracoronary ultrasound (ICUS) is a widely used interventional imaging modality in the clinical diagnosis and treatment of cardiac vessel diseases. Due to cyclic cardiac motion and pulsatile blood flow within the lumen, coronary arterial dimensions change and there is relative motion between the imaging catheter and the lumen during continuous pullback of the catheter. This motion subsequently causes cyclic changes in the image intensity of the acquired image sequence. Information on cardiac phases is thus implicit in a non-gated ICUS image sequence. A 1-D phase signal reflecting cardiac cycles was extracted according to cyclical changes in local gray levels in the ICUS images. The local extrema of the signal were then detected to retrieve cardiac phases and to retrospectively gate the image sequence. Results on clinically acquired in vivo image data showed that an average inter-frame dissimilarity of less than 0.1 was achievable with our technique. In terms of computational efficiency and complexity, the proposed method was shown to be competitive with current methods; the average frame processing time was lower than 30 ms. We effectively reduced the effect of image noise, irrelevant textures, and non-vessel regions on the phase signal detection by discarding signal components caused by non-cardiac factors.
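
    A simple sketch of extracting a 1-D phase signal and retrospectively gating the sequence is shown below; reducing each frame to a mean gray level, the smoothing window and the assumed maximum heart rate used for the minimum peak spacing are illustrative choices, not the exact signal construction of the paper.

    import numpy as np
    from scipy.ndimage import uniform_filter1d
    from scipy.signal import find_peaks

    def retrospective_gate(frames, frame_rate, roi=None, smooth_frames=5):
        """Extract a 1-D cardiac-phase signal from a non-gated ICUS pullback.

        Each frame is reduced to the mean gray level of a region of interest; the
        resulting signal is smoothed and its local maxima are taken as one frame
        per cardiac cycle. The ROI choice and the minimum peak distance (assuming
        a heart rate below ~150 bpm) are illustrative assumptions.
        """
        signal = np.array([
            frame[roi].mean() if roi is not None else frame.mean() for frame in frames
        ])
        signal = uniform_filter1d(signal, smooth_frames)   # suppress frame-to-frame noise

        min_distance = int(frame_rate * 60.0 / 150.0)      # at least ~0.4 s between beats
        peaks, _ = find_peaks(signal, distance=max(min_distance, 1))
        return signal, peaks                               # peak indices = gated frames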

  1. [A medical image semantic modeling based on hierarchical Bayesian networks].

    Science.gov (United States)

    Lin, Chunyi; Ma, Lihong; Yin, Junxun; Chen, Jianyu

    2009-04-01

    A semantic modeling approach for medical image semantic retrieval based on hierarchical Bayesian networks was proposed, tailored to the characteristics of medical images. It used Gaussian mixture models (GMMs) to map low-level image features into object semantics with probabilities, then captured high-level semantics by fusing these object semantics with a Bayesian network, thereby building a multi-layer medical image semantic model aimed at enabling automatic image annotation and semantic retrieval using various keywords at different semantic levels. To assess the validity of this method, we built a multi-level semantic model from a small set of astrocytoma MRI (magnetic resonance imaging) samples in order to extract the semantics of astrocytoma malignancy grade. Experimental results show that this is a superior approach.

  2. Infrared thermal imaging for automated detection of diabetic foot complications

    NARCIS (Netherlands)

    Netten, van Jaap J.; Baal, van Jeff G.; Liu, Chanjuan; Heijden, van der Ferdi; Bus, Sicco A.

    2013-01-01

    Background: Although thermal imaging can be a valuable technology in the prevention and management of diabetic foot disease, it is not yet widely used in clinical practice. Technological advancement in infrared imaging increases its application range. The aim was to explore the first steps in the ap

  3. An Automated Method for Semantic Classification of Regions in Coastal Images

    NARCIS (Netherlands)

    Hoonhout, B.M.; Radermacher, M.; Baart, F.; Van der Maaten, L.J.P.

    2015-01-01

    Large, long-term coastal imagery datasets are nowadays a low-cost source of information for various coastal research disciplines. However, the applicability of many existing algorithms for coastal image analysis is limited for these large datasets due to a lack of automation and robustness. Therefor

  4. Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images

    NARCIS (Netherlands)

    Lee, K.; Buitendijk, G.H.; Bogunovic, H.; Springelkamp, H.; Hofman, A.; Wahle, A.; Sonka, M.; Vingerling, J.R.; Klaver, C.C.W.; Abramoff, M.D.

    2016-01-01

    PURPOSE: To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. METHODS: Six hundred ninety macular SD-OCT image volumes (6.0 x 6.0 x 2.3 mm3) we

  5. Automated Selection of Uniform Regions for CT Image Quality Detection

    CERN Document Server

    Naeemi, Maitham D; Roychodhury, Sohini

    2016-01-01

    CT images are widely used in pathology detection and follow-up treatment procedures. Accurate identification of pathological features requires diagnostic quality CT images with minimal noise and artifact variation. In this work, a novel Fourier-transform based metric for image quality (IQ) estimation is presented that correlates to additive CT image noise. In the proposed method, two windowed CT image subset regions are analyzed together to identify the extent of variation in the corresponding Fourier-domain spectrum. The two square windows are chosen such that their center pixels coincide and one window is a subset of the other. The Fourier-domain spectral difference between these two sub-sampled windows is then used to isolate spatial regions-of-interest (ROI) with low signal variation (ROI-LV) and high signal variation (ROI-HV), respectively. Finally, the spatial variance ($var$), standard deviation ($std$), coefficient of variance ($cov$) and the fraction of abdominal ROI pixels in ROI-LV ($\

  6. Backpropagation Neural Network Implementation for Medical Image Compression

    Directory of Open Access Journals (Sweden)

    Kamil Dimililer

    2013-01-01

    Full Text Available Medical images require compression, before transmission or storage, due to constrained bandwidth and storage capacity. An ideal image compression system must yield high-quality compressed image with high compression ratio. In this paper, Haar wavelet transform and discrete cosine transform are considered and a neural network is trained to relate the X-ray image contents to their ideal compression method and their optimum compression ratio.

  7. A New Approach To Embed Medical Information Into Medical Images

    OpenAIRE

    Güzeldereli, Esra Ayça; Doğan, Ferdi; Çetin, Özdemir

    2013-01-01

    In recent years, in light of developments in the field of computing, there has been an increasing demand for data processing in the health sector. Many different methods are being used to link personal information or diagnoses with the patient. These methods can differ from each other according to the imaging techniques used. In this thesis, such data hiding/embedding techniques are preferred in order to provide privacy for patients. It is also useful to use compression techniq...

  8. Medical Image Compression using Wavelet Decomposition for Prediction Method

    CERN Document Server

    Ramesh, S M

    2010-01-01

    This paper offers a simple, lossless compression method for medical images. The method is based on wavelet decomposition of the medical images followed by correlation analysis of the coefficients. The correlation analyses form the basis of a prediction equation for each sub-band. Predictor variable selection is performed through a coefficient graphic method to avoid the multicollinearity problem and to achieve high prediction accuracy and compression rate. The method is applied to MRI and CT images. Results show that the proposed approach gives a high compression rate for MRI and CT images compared with state-of-the-art methods.

  9. Efficient parallel Levenberg-Marquardt model fitting towards real-time automated parametric imaging microscopy.

    Science.gov (United States)

    Zhu, Xiang; Zhang, Dianwen

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on graphics processing unit for high performance scalable parallel model fitting processing. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for the applications in superresolution localization microscopy and fluorescence lifetime imaging microscopy.

  10. Fully Automated Prostate Magnetic Resonance Imaging and Transrectal Ultrasound Fusion via a Probabilistic Registration Metric

    OpenAIRE

    Sparks, Rachel; Bloch, B. Nicolas; Feleppa, Ernest; Barratt, Dean; Madabhushi, Anant

    2013-01-01

    In this work, we present a novel, automated, registration method to fuse magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) images of the prostate. Our methodology consists of: (1) delineating the prostate on MRI, (2) building a probabilistic model of prostate location on TRUS, and (3) aligning the MRI prostate segmentation to the TRUS probabilistic model. TRUS-guided needle biopsy is the current gold standard for prostate cancer (CaP) diagnosis. Up to 40% of CaP lesions appea...

  11. Tele-medical imaging conference system based on the Web.

    Science.gov (United States)

    Choi, Heung-Kook; Park, Se-Myung; Kang, Jae-Hyo; Kim, Sang-Kyoon; Choi, Hang-Mook

    2002-06-01

    In this paper, a medical imaging conference system is presented, which is carried out in the Web environment using the distributed object technique, CORBA. Independent of platforms and different developing languages, the CORBA-based medical imaging conference system is very powerful for system development, extension and maintenance. With this Web client/server, one could easily execute a medical imaging conference using Applets on the Web. The Java language, which is object-oriented and independent of platforms, has the advantage of free usage wherever the Web browser is. By using the proposed system, we envisage being able to open a tele-conference using medical images, e.g. CT, MRI, X-ray etc., easily and effectively among remote hospitals.

  12. Lossy Compression Color Medical Image Using CDF Wavelet Lifting Scheme

    Directory of Open Access Journals (Sweden)

    M. beladghem

    2013-09-01

    Full Text Available As the coming era is that of digitized medical information, an important challenge is the storage and transmission of enormous amounts of data, including color medical images. Compression is one of the indispensable techniques for solving this problem. In this work, we propose an algorithm for color medical image compression based on the biorthogonal wavelet transform CDF 9/7 coupled with the SPIHT coding algorithm, to which we applied the lifting structure to overcome the drawbacks of the wavelet transform. To evaluate the compression achieved by our algorithm, we compared the results with those obtained using wavelet-based filter banks. Experimental results show that the proposed algorithm is superior to traditional methods in both lossy and lossless compression for all tested color images. Our algorithm provides very good PSNR and MSSIM values for color medical images.

  13. Four challenges in medical image analysis from an industrial perspective.

    Science.gov (United States)

    Weese, Jürgen; Lorenz, Cristian

    2016-10-01

    Today's medical imaging systems produce a huge amount of images containing a wealth of information. However, the information is hidden in the data and image analysis algorithms are needed to extract it, to make it readily available for medical decisions and to enable an efficient workflow. Advances in medical image analysis over the past 20 years mean there are now many algorithms and ideas available that make it possible to address medical image analysis tasks in commercial solutions with sufficient performance in terms of accuracy, reliability and speed. At the same time new challenges have arisen. Firstly, there is a need for more generic image analysis technologies that can be efficiently adapted for a specific clinical task. Secondly, efficient approaches for ground truth generation are needed to match the increasing demands regarding validation and machine learning. Thirdly, algorithms for analyzing heterogeneous image data are needed. Finally, anatomical and organ models play a crucial role in many applications, and algorithms to construct patient-specific models from medical images with a minimum of user interaction are needed. These challenges are complementary to the ongoing need for more accurate, more reliable and faster algorithms, and dedicated algorithmic solutions for specific applications.

  14. Automated interpretation of optic nerve images: a data mining framework for glaucoma diagnostic support.

    Science.gov (United States)

    Abidi, Syed S R; Artes, Paul H; Yun, Sanjan; Yu, Jin

    2007-01-01

    Confocal Scanning Laser Tomography (CSLT) techniques capture high-quality images of the optic disc (the retinal region where the optic nerve exits the eye) that are used in the diagnosis and monitoring of glaucoma. We present a hybrid framework, combining image processing and data mining methods, to support the interpretation of CSLT optic nerve images. Our framework features (a) Zernike moment methods to derive shape information from optic disc images; (b) classification of optic disc images, based on shape information, to distinguish between healthy and glaucomatous optic discs. We apply Multi Layer Perceptrons, Support Vector Machines and Bayesian Networks for feature sub-set selection and image classification; and (c) clustering of optic disc images, based on shape information, using Self-Organizing Maps to visualize sub-types of glaucomatous optic disc damage. Our framework offers an automated and objective analysis of optic nerve images that can potentially support both diagnosis and monitoring of glaucoma.

  15. Automated registration of multispectral MR vessel wall images of the carotid artery

    Energy Technology Data Exchange (ETDEWEB)

    Klooster, R. van 't; Staring, M.; Reiber, J. H. C.; Lelieveldt, B. P. F.; Geest, R. J. van der, E-mail: rvdgeest@lumc.nl [Department of Radiology, Division of Image Processing, Leiden University Medical Center, 2300 RC Leiden (Netherlands); Klein, S. [Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 GE (Netherlands); Kwee, R. M.; Kooi, M. E. [Department of Radiology, Cardiovascular Research Institute Maastricht, Maastricht University Medical Center, Maastricht 6202 AZ (Netherlands)

    2013-12-15

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an accurate automated 3D registration method, and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying through-plane and in-plane translations to the images. To find the optimal automatic image registration method, different masks, choice of the fixed image, different types of the mutual information image similarity metric, and transformation models, including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and
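
    The kind of mutual-information-based alignment evaluated here can be sketched generically with SimpleITK as below; this is not the registration framework or parameter set used in the study, and a rigid transform is used purely for illustration (the paper also evaluates 3D deformable models). Masks and parameter values would need tuning for real vessel-wall data.

    import SimpleITK as sitk

    def register_sequence_to_fixed(fixed, moving):
        """Rigidly align one carotid vessel-wall sequence to a chosen fixed sequence."""
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
        reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                     minStep=1e-4,
                                                     numberOfIterations=200)
        reg.SetOptimizerScalesFromPhysicalShift()
        reg.SetInterpolator(sitk.sitkLinear)
        # Initialise with a geometry-centred rigid (Euler) transform.
        reg.SetInitialTransform(sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY))

        transform = reg.Execute(sitk.Cast(fixed, sitk.sitkFloat32),
                                sitk.Cast(moving, sitk.sitkFloat32))
        # Resample the moving sequence onto the fixed image grid.
        return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)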

  16. Accuracy Validation for Medical Image Registration Algorithms: a Review

    Institute of Scientific and Technical Information of China (English)

    Zhe Liu; Xiang Deng; Guang-zhi Wang

    2012-01-01

    Accuracy validation is essential to the clinical application of medical image registration techniques. Registration validation remains a challenging problem in practice, mainly due to the lack of a 'ground truth'. In this paper, an overview of current validation methods for medical image registration is presented, with a detailed discussion of their benefits and drawbacks. Special focus is on non-rigid registration validation. Promising solutions are also discussed.

  17. An introduction to medical imaging with coherent terahertz frequency radiation.

    Science.gov (United States)

    Fitzgerald, A J; Berry, E; Zinovev, N N; Walker, G C; Smith, M A; Chamberlain, J M

    2002-04-07

    Methods have recently been developed that make use of electromagnetic radiation at terahertz (THz) frequencies, the region of the spectrum between millimetre wavelengths and the infrared, for imaging purposes. Radiation at these wavelengths is non-ionizing and subject to far less Rayleigh scatter than visible or infrared wavelengths, making it suitable for medical applications. This paper introduces THz pulsed imaging and discusses its potential for in vivo medical applications in comparison with existing modalities.

  18. Optimal Embedding for Shape Indexing in Medical Image Databases

    OpenAIRE

    Qian, Xiaoning; Tagare, Hemant D.; Fulbright, Robert K.; Long, Rodney; Antani, Sameer

    2010-01-01

    This paper addresses the problem of indexing shapes in medical image databases. Shapes of organs are often indicative of disease, making shape similarity queries important in medical image databases. Mathematically, shapes with landmarks belong to shape spaces which are curved manifolds with a well defined metric. The challenge in shape indexing is to index data in such curved spaces. One natural indexing scheme is to use metric trees, but metric trees are prone to inefficiency. This paper pr...

  19. Automated wavelet denoising of photoacoustic signals for burn-depth image reconstruction

    Science.gov (United States)

    Holan, Scott H.; Viator, John A.

    2007-02-01

    Photoacoustic image reconstruction involves dozens or perhaps hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a sample with laser light are used to produce an image of the acoustic source. Each of these point measurements must undergo some signal processing, such as denoising and system deconvolution. In order to efficiently process the numerous signals acquired for photoacoustic imaging, we have developed an automated wavelet algorithm for processing signals generated in a burn injury phantom. We used the discrete wavelet transform to denoise photoacoustic signals generated in an optically turbid phantom containing whole blood. The denoising used universal, level-independent thresholding, as developed by Donoho and Johnstone. The entire signal processing technique was automated so that no user intervention was needed to reconstruct the images. The signals were backprojected using the automated wavelet processing software, and reconstruction using the denoised signals improved image quality by 21%, measured with a relative 2-norm difference scheme.
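
    The universal-threshold wavelet denoising step can be sketched with PyWavelets as follows; the wavelet family, decomposition level and soft thresholding mode are illustrative assumptions in the spirit of the Donoho-Johnstone recipe, not the exact settings used for the burn phantom signals.

    import numpy as np
    import pywt

    def denoise_photoacoustic(signal, wavelet="db4", level=5):
        """Denoise a 1-D photoacoustic point measurement with a universal threshold."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)

        # Robust noise estimate from the finest detail coefficients (MAD / 0.6745).
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745

        # Universal threshold sigma * sqrt(2 ln N), applied softly to all detail levels.
        threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
        denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                                  for c in coeffs[1:]]
        return pywt.waverec(denoised, wavelet)[:len(signal)]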

  20. An automated image analysis system to measure and count organisms in laboratory microcosms.

    Directory of Open Access Journals (Sweden)

    François Mallard

    Full Text Available 1. Because of recent technological improvements in the performance of computers and digital cameras, the potential of imaging for contributing to the study of communities, populations or individuals in laboratory microcosms has risen enormously. However, its use has been limited by difficulties in the automation of image analysis. 2. We present an accurate and flexible method of image analysis for detecting, counting and measuring moving particles on a fixed but heterogeneous substrate. This method has been specifically designed to follow individuals, or entire populations, in experimental laboratory microcosms, and it can be used in other applications. 3. The method consists in comparing multiple pictures of the same experimental microcosm in order to generate an image of the fixed background. This background is then used to extract, measure and count the moving organisms, leaving out the fixed background and the motionless or dead individuals. 4. We provide different examples (springtails, ants, nematodes, daphnia) to show that this non-intrusive method is efficient at detecting organisms under a wide variety of conditions, even on faintly contrasted and heterogeneous substrates. 5. The repeatability and reliability of this method have been assessed using experimental populations of the Collembola Folsomia candida. 6. We present an ImageJ plugin to automate the analysis of digital pictures of laboratory microcosms. The plugin automates the successive steps of the analysis and recursively analyses multiple sets of images, rapidly producing measurements from a large number of replicated microcosms.
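
    The core idea, estimating the fixed background from several pictures of the same microcosm and counting whatever differs from it, can be sketched as follows; the median background model, the difference threshold (assuming images scaled to [0, 1]) and the minimum region size are illustrative assumptions and not the ImageJ plugin's implementation.

    import numpy as np
    from skimage import measure

    def count_moving_organisms(frames, diff_threshold=0.05, min_area=10):
        """Count moving organisms on a fixed but heterogeneous background.

        `frames` is a sequence of 2-D grayscale images of the same microcosm.
        Returns the estimated background and a per-frame organism count.
        """
        stack = np.stack([f.astype(np.float64) for f in frames])
        background = np.median(stack, axis=0)          # keeps only motionless content

        counts = []
        for frame in stack:
            # Pixels that deviate from the background are treated as moving organisms.
            foreground = np.abs(frame - background) > diff_threshold
            labels = measure.label(foreground)
            regions = [r for r in measure.regionprops(labels) if r.area >= min_area]
            counts.append(len(regions))
        return background, counts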

  1. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan

    2016-01-01

    This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  2. A performance analysis system for MEMS using automated imaging methods

    Energy Technology Data Exchange (ETDEWEB)

    LaVigne, G.F.; Miller, S.L.

    1998-08-01

    The ability to make in-situ performance measurements of MEMS operating at high speeds has been demonstrated using a new image analysis system. Significant improvements in performance and reliability have directly resulted from the use of this system.

  3. Computing support for advanced medical data analysis and imaging

    CERN Document Server

    Wiślicki, W; Białas, P; Czerwiński, E; Kapłon, Ł; Kochanowski, A; Korcyl, G; Kowal, J; Kowalski, P; Kozik, T; Krzemień, W; Molenda, M; Moskal, P; Niedźwiecki, S; Pałka, M; Pawlik, M; Raczyński, L; Rudy, Z; Salabura, P; Sharma, N G; Silarski, M; Słomski, A; Smyrski, J; Strzelecki, A; Wieczorek, A; Zieliński, M; Zoń, N

    2014-01-01

    We discuss computing issues for data analysis and image reconstruction of PET-TOF medical scanner or other medical scanning devices producing large volumes of data. Service architecture based on the grid and cloud concepts for distributed processing is proposed and critically discussed.

  4. VirtualShave: automated hair removal from digital dermatoscopic images.

    Science.gov (United States)

    Fiorese, M; Peserico, E; Silletti, A

    2011-01-01

    VirtualShave is a novel tool to remove hair from digital dermatoscopic images. First, individual hairs are identified using a top-hat filter followed by morphological postprocessing. Then, they are replaced through PDE-based inpainting with an estimate of the underlying occluded skin. VirtualShave's performance is comparable to that of a human operator removing hair manually, and the resulting images are almost indistinguishable from those of hair-free skin.
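
    A rough sketch of the two-stage idea is given below using OpenCV; a morphological black-hat filter stands in for the hair-highlighting top-hat step, OpenCV's Telea inpainting stands in for the PDE-based inpainting, and the kernel size and threshold are illustrative assumptions (an 8-bit BGR input is assumed).

    import cv2
    import numpy as np

    def virtual_shave(bgr_image, kernel_size=17, hair_threshold=10):
        """Remove dark hairs from a dermatoscopic image and inpaint the gaps."""
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))

        # Black-hat (closing minus original) responds strongly to thin dark hairs.
        blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
        _, hair_mask = cv2.threshold(blackhat, hair_threshold, 255, cv2.THRESH_BINARY)

        # Replace masked pixels with an estimate of the underlying occluded skin.
        return cv2.inpaint(bgr_image, hair_mask, 3, cv2.INPAINT_TELEA)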

  5. Ontology modularization to improve semantic medical image annotation.

    Science.gov (United States)

    Wennerberg, Pinar; Schulz, Klaus; Buitelaar, Paul

    2011-02-01

    Searching for medical images and patient reports is a significant challenge in a clinical setting. The contents of such documents are often not described in sufficient detail, making it difficult to utilize the inherent wealth of information contained within them. Semantic image annotation addresses this problem by describing the contents of images and reports using medical ontologies. Medical images and patient reports are then linked to each other through common annotations. Subsequently, search algorithms can more effectively find related sets of documents on the basis of these semantic descriptions. A prerequisite to realizing such a semantic search engine is that the data contained within should have been previously annotated with concepts from medical ontologies. One major challenge in this regard is the size and complexity of medical ontologies as annotation sources. Manual annotation is particularly time-consuming and labor-intensive in a clinical environment. In this article we propose an approach to reducing the size of clinical ontologies for more efficient manual image and text annotation. More precisely, our goal is to identify smaller fragments of a large anatomy ontology that are relevant for annotating medical images from patients suffering from lymphoma. Our work is in the area of ontology modularization, which is a recent and active field of research. We describe our approach, methods and data set in detail and we discuss our results.

  6. A survey of medical image registration - under review.

    Science.gov (United States)

    Viergever, Max A; Maintz, J B Antoine; Klein, Stefan; Murphy, Keelin; Staring, Marius; Pluim, Josien P W

    2016-10-01

    A retrospective view on the past two decades of the field of medical image registration is presented, guided by the article "A survey of medical image registration" (Maintz and Viergever, 1998). It shows that the classification of the field introduced in that article is still usable, although some modifications to do justice to advances in the field would be due. The main changes over the last twenty years are the shift from extrinsic to intrinsic registration, the primacy of intensity-based registration, the breakthrough of nonlinear registration, the progress of inter-subject registration, and the availability of generic image registration software packages. Two problems that were called urgent already 20 years ago, are even more urgent nowadays: Validation of registration methods, and translation of results of image registration research to clinical practice. It may be concluded that the field of medical image registration has evolved, but still is in need of further development in various aspects.

  7. 3D thermal medical image visualization tool: Integration between MRI and thermographic images.

    Science.gov (United States)

    Abreu de Souza, Mauren; Chagas Paz, André Augusto; Sanches, Ionildo Jóse; Nohama, Percy; Gamba, Humberto Remigio

    2014-01-01

    Three-dimensional medical image reconstruction using different image modalities requires registration techniques that are, in general, based on the stacking of 2D MRI/CT image slices. The integration of two different imaging modalities, anatomical (MRI/CT) and physiological (infrared) information, to generate a 3D thermal model is a new methodology still under development. This paper presents a 3D THERMO interface that provides flexibility for 3D visualization: it incorporates the DICOM parameters; different color scale palettes in the final 3D model; 3D visualization at different planes of section; and a filtering option that provides better image visualization. In summary, 3D thermographic medical image visualization provides a realistic and precise medical tool. The merging of two different imaging modalities allows better quality and higher fidelity, especially for medical applications in which temperature changes are clinically significant.

  8. Multispectral Image Road Extraction Based Upon Automated Map Conflation

    Science.gov (United States)

    Chen, Bin

    Road network extraction from remotely sensed imagery enables many important and diverse applications such as vehicle tracking, drone navigation, and intelligent transportation studies. There are, however, a number of challenges to road detection from an image. Road pavement material, width, direction, and topology vary across a scene. Complete or partial occlusions caused by nearby buildings, trees, and the shadows cast by them, make maintaining road connectivity difficult. The problems posed by occlusions are exacerbated with the increasing use of oblique imagery from aerial and satellite platforms. Further, common objects such as rooftops and parking lots are made of materials similar or identical to road pavements. This problem of common materials is a classic case of a single land cover material existing for different land use scenarios. This work addresses these problems in road extraction from geo-referenced imagery by leveraging the OpenStreetMap digital road map to guide image-based road extraction. The crowd-sourced cartography has the advantages of worldwide coverage that is constantly updated. The derived road vectors follow only roads and so can serve to guide image-based road extraction with minimal confusion from occlusions and changes in road material. On the other hand, the vector road map has no information on road widths and misalignments between the vector map and the geo-referenced image are small but nonsystematic. Properly correcting misalignment between two geospatial datasets, also known as map conflation, is an essential step. A generic framework requiring minimal human intervention is described for multispectral image road extraction and automatic road map conflation. The approach relies on the road feature generation of a binary mask and a corresponding curvilinear image. A method for generating the binary road mask from the image by applying a spectral measure is presented. The spectral measure, called anisotropy-tunable distance (ATD

  9. A similarity-based data warehousing environment for medical images.

    Science.gov (United States)

    Teixeira, Jefferson William; Annibal, Luana Peixoto; Felipe, Joaquim Cezar; Ciferri, Ricardo Rodrigues; Ciferri, Cristina Dutra de Aguiar

    2015-11-01

    A core issue of the decision-making process in the medical field is to support the execution of analytical (OLAP) similarity queries over images in data warehousing environments. In this paper, we focus on this issue. We propose imageDWE, a non-conventional data warehousing environment that enables the storage of intrinsic features taken from medical images in a data warehouse and supports OLAP similarity queries over them. To comply with this goal, we introduce the concept of perceptual layer, which is an abstraction used to represent an image dataset according to a given feature descriptor in order to enable similarity search. Based on this concept, we propose the imageDW, an extended data warehouse with dimension tables specifically designed to support one or more perceptual layers. We also detail how to build an imageDW and how to load image data into it. Furthermore, we show how to process OLAP similarity queries composed of a conventional predicate and a similarity search predicate that encompasses the specification of one or more perceptual layers. Moreover, we introduce an index technique to improve the OLAP query processing over images. We carried out performance tests over a data warehouse environment that consolidated medical images from exams of several modalities. The results demonstrated the feasibility and efficiency of our proposed imageDWE to manage images and to process OLAP similarity queries. The results also demonstrated that the use of the proposed index technique guaranteed a great improvement in query processing.

  10. Automated Contour Detection for Intravascular Ultrasound Image Sequences Based on Fast Active Contour Algorithm

    Institute of Scientific and Technical Information of China (English)

    DONG Hai-yan; WANG Hui-nan

    2006-01-01

    Intravascular ultrasound can provide high-resolution, real-time cross-sectional images of the lumen, plaque and tissue. Traditionally, the luminal border and the medial-adventitial border are traced manually. This process is extremely time-consuming and subject to large inter-observer variability. In this paper, a new automated contour detection method is introduced based on a fast active contour model. Experimental results showed that lumen and vessel area measurements after automated detection were in good agreement with manual tracings, with high correlation coefficients (0.94 and 0.95, respectively) and small systematic differences (-0.32 and 0.56, respectively), so the method can serve as a reliable and accurate diagnostic tool.
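
    A minimal active-contour sketch using scikit-image is shown below; it initialises a circle around an assumed catheter position, it is not the fast active contour variant of the paper, and the smoothing and elasticity weights are chosen purely for illustration.

    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    def detect_lumen_contour(ivus_frame, center, radius, n_points=120):
        """Fit a closed contour to the lumen border in one IVUS frame.

        `center` is the (row, col) of the catheter and `radius` the radius of the
        initial circle, both assumed to be supplied by the caller.
        """
        theta = np.linspace(0.0, 2.0 * np.pi, n_points)
        init = np.column_stack([center[0] + radius * np.sin(theta),
                                center[1] + radius * np.cos(theta)])  # (row, col) circle

        # Smooth speckle before letting the snake settle on the lumen border.
        smoothed = gaussian(ivus_frame, sigma=2, preserve_range=True)
        return active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)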

  11. Crowdsourcing scoring of immunohistochemistry images: Evaluating Performance of the Crowd and an Automated Computational Method

    Science.gov (United States)

    Irshad, Humayun; Oh, Eun-Yeong; Schmolze, Daniel; Quintana, Liza M.; Collins, Laura; Tamimi, Rulla M.; Beck, Andrew H.

    2017-01-01

    The assessment of protein expression in immunohistochemistry (IHC) images provides important diagnostic, prognostic and predictive information for guiding cancer diagnosis and therapy. Manual scoring of IHC images represents a logistical challenge, as the process is labor-intensive and time-consuming. Since the last decade, computational methods have been developed to enable the application of quantitative methods for the analysis and interpretation of protein expression in IHC images. These methods have not yet replaced manual scoring for the assessment of IHC in the majority of diagnostic laboratories and in many large-scale research studies. An alternative approach is crowdsourcing the quantification of IHC images to an undefined crowd. The aim of this study is to quantify IHC images for labeling of ER status with two different crowdsourcing approaches, image-labeling and nuclei-labeling, and compare their performance with automated methods. Crowdsourcing-derived scores obtained greater concordance with the pathologist interpretations for both image-labeling and nuclei-labeling tasks (83% and 87%), as compared to the pathologist concordance achieved by the automated method (81%) on 5,338 TMA images from 1,853 breast cancer patients. This analysis shows that crowdsourcing the scoring of protein expression in IHC images is a promising new approach for large-scale cancer molecular pathology studies. PMID:28230179

  12. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Ricardo Andres Pizarro

    2016-12-01

    Full Text Available High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time-consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm to the quality assessment of structural brain images, using global and region-of-interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
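
    A minimal sketch of the supervised SVM step is given below with scikit-learn; the feature matrix is assumed to hold the global and ROI quality measures computed elsewhere, and the RBF kernel, feature scaling and cross-validation setup are illustrative choices rather than the study's configuration.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def train_quality_classifier(features, labels):
        """Train an SVM to predict pass/fail quality labels for 3D-MRI volumes.

        `features` is an (n_volumes, n_features) array of image quality measures;
        `labels` holds the investigator-determined quality ratings.
        """
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        # 5-fold cross-validated accuracy gives a rough estimate of generalisation.
        accuracy = cross_val_score(model, features, labels, cv=5).mean()
        model.fit(features, labels)
        return model, accuracy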

  13. Automated Classification of Glaucoma Images by Wavelet Energy Features

    Directory of Open Access Journals (Sweden)

    N.Annu

    2013-04-01

    Full Text Available Glaucoma is the second leading cause of blindness worldwide. As glaucoma progresses, more optic nerve tissue is lost and the optic cup grows, which leads to vision loss. This paper describes a system that could be used by non-experts to filter out cases of patients not affected by the disease. This work proposes glaucomatous image classification using texture features within images and efficient glaucoma classification based on a Probabilistic Neural Network (PNN). The energy distribution over wavelet sub-bands is used to compute these texture features. Wavelet features were obtained from the Daubechies (db3), symlets (sym3), and biorthogonal (bio3.3, bio3.5, and bio3.7) wavelet filters. The technique extracts energy signatures obtained using the 2-D discrete wavelet transform, and the energy obtained from the detail coefficients can be used to distinguish between normal and glaucomatous images. We observed an accuracy of around 95%, which demonstrates the effectiveness of these methods.
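
    The wavelet-energy feature extraction can be sketched with PyWavelets as follows (using PyWavelets' names for the filters listed above); a single decomposition level and mean-energy normalisation are illustrative assumptions, and the resulting feature vector would then feed a classifier such as the probabilistic neural network mentioned above.

    import numpy as np
    import pywt

    def wavelet_energy_features(fundus_gray, wavelets=("db3", "sym3", "bior3.3",
                                                       "bior3.5", "bior3.7")):
        """Compute energy signatures from 2-D DWT sub-bands of a fundus image."""
        features = []
        for name in wavelets:
            # Single-level 2-D DWT: approximation plus three detail sub-bands.
            _, (cH, cV, cD) = pywt.dwt2(fundus_gray.astype(np.float64), name)
            for band in (cH, cV, cD):
                features.append(np.sum(band ** 2) / band.size)   # mean energy per band
        return np.array(features)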

  14. System and method for automated object detection in an image

    Energy Technology Data Exchange (ETDEWEB)

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated between in response to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.

  15. Automated Structure Detection in HRTEM Images: An Example with Graphene

    DEFF Research Database (Denmark)

    Kling, Jens; Vestergaard, Jacob Schack; Dahl, Anders Bjorholm

    of time making it difficult to resolve dynamic processes or unstable structures. Tools that assist to get the maximum of information out of recorded images are therefore greatly appreciated. In order to get the most accurate results out of the structure detection, we have optimized the imaging conditions...... used for the FEI Titan ETEM with a monochromator and an objective-lens Cs-corrector. To reduce the knock-on damage of the carbon atoms in the graphene structure, the microscope was operated at 80kV. As this strongly increases the influence of the chromatic aberration of the lenses, the energy spread...

  16. Automatic medical X-ray image classification using annotation.

    Science.gov (United States)

    Zare, Mohammad Reza; Mueen, Ahmed; Seng, Woo Chaw

    2014-02-01

    The demand for automatic classification of medical X-ray images is rising faster than ever. In this paper, an approach is presented to attain a high accuracy rate for those classes of medical databases with a high degree of intraclass variability and interclass similarity. The classification framework was constructed via annotation using the following three techniques: annotation by binary classification, annotation by probabilistic latent semantic analysis, and annotation using top similar images. Next, the final annotation was constructed by applying ranking similarity on the annotated keywords produced by each technique. The final annotation keywords were then divided into three levels according to the body region, the specific bone structure in the body region, and the imaging direction. Different weights were given to each level of the keywords; these were then used to calculate the weightage for each category of medical images based on their ground-truth annotation. The weightage computed from the generated annotation of a query image was compared with the weightage of each category of medical images, and the query image was then assigned to the category with the weightage closest to that of the query image. The average accuracy rate reported is 87.5%.

  17. In-vivo synthetic aperture flow imaging in medical ultrasound

    DEFF Research Database (Denmark)

    Nikolov, Svetoslav; Jensen, Jørgen Arendt

    2003-01-01

    A new method for acquiring flow images using synthetic aperture techniques in medical ultrasound is presented. The new approach makes it possible to have a continuous acquisition of flow data throughout the whole image simultaneously, and this can significantly improve blood velocity estimation...

  18. The Application of Partial Differential Equations in Medical Image Processing

    Directory of Open Access Journals (Sweden)

    Mohammad Madadpour Inallou

    2013-10-01

    Full Text Available Mathematical models are the foundation of biomedical computing. Medical imaging, where partial differential equations (PDEs) play an increasingly important role, is concerned with acquiring images of the body for research, diagnosis and treatment. Biomedical image processing has undergone a revolution in the past decade. Image processing has become an important component of contemporary science and technology and is an interdisciplinary research field attracting expertise from applied mathematics, biology, computer science, engineering, statistics, microscopy, radiologic sciences, physics, medicine and other fields. Medical imaging equipment is taking on an increasingly critical role in healthcare as the industry strives to lower patient costs and achieve earlier disease prediction using noninvasive means. Medical imaging modalities fall into two categories: conventional (X-ray and ultrasound) and computed (CT, MRI, fMRI, SPECT, PET, etc.). This paper is organized as follows: the first section describes several kinds of image processing; the second section covers techniques and requirements; and the following sections address analysis, smoothing, segmentation, de-noising and registration of medical images within a PDE framework.
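
    As a concrete example of the kind of PDE used for smoothing and de-noising in medical image processing, the following sketch implements classic Perona-Malik anisotropic diffusion with an explicit finite-difference scheme; it is illustrative only (periodic boundaries via np.roll, placeholder parameters) and is not drawn from the paper itself.

    import numpy as np

    def perona_malik(image, n_iter=20, kappa=30.0, dt=0.15):
        """Edge-preserving smoothing via Perona-Malik anisotropic diffusion."""
        u = image.astype(np.float64).copy()
        for _ in range(n_iter):
            # Finite-difference gradients towards the four nearest neighbours.
            north = np.roll(u, -1, axis=0) - u
            south = np.roll(u, 1, axis=0) - u
            east = np.roll(u, -1, axis=1) - u
            west = np.roll(u, 1, axis=1) - u

            # Conductance c(|grad u|) = exp(-(|grad u| / kappa)^2) slows diffusion at edges.
            u += dt * (np.exp(-(north / kappa) ** 2) * north
                       + np.exp(-(south / kappa) ** 2) * south
                       + np.exp(-(east / kappa) ** 2) * east
                       + np.exp(-(west / kappa) ** 2) * west)
        return u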

  19. Multi-scale visual words for hierarchical medical image categorisation

    Science.gov (United States)

    Markonis, Dimitrios; Seco de Herrera, Alba G.; Eggel, Ivan; Müller, Henning

    2012-02-01

    The volume of biomedical literature published regularly has increased strongly in recent years, and keeping up to date even in narrow domains is difficult. Images convey essential information in these articles and can help in browsing large volumes of articles more quickly in connection with keyword search. Content-based image retrieval supports the retrieval of visual content. To facilitate the retrieval of visual information, image categorisation can be an important first step. To represent scientific articles visually, medical images need to be separated from general images such as flowcharts or graphs to facilitate browsing, as graphs contain little information. Medical modality classification is a second step to focus the search. The techniques described in this article first classify images into broad categories. In a second step the images are further classified into the exact medical modalities. The system combines the Scale-Invariant Feature Transform (SIFT) and density-based clustering (DENCLUE). Visual words are first created globally to differentiate broad categories, and then within each category a new visual vocabulary is created for modality classification. The results show the difficulty of differentiating between some modalities by visual means alone. On the other hand, the improvement in accuracy of the two-step approach shows the usefulness of the method. The system is currently being integrated into the Goldminer image search engine of the ARRS (American Roentgen Ray Society) as a web service, allowing image search to be concentrated automatically onto clinically relevant images.

  20. An Imaging System for Automated Characteristic Length Measurement of Debrisat Fragments

    Science.gov (United States)

    Moraguez, Mathew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Sorge, Marlon; Cowardin, Heather; Opiela, John; Krisko, Paula H.

    2015-01-01

    The debris fragments generated by DebriSat's hypervelocity impact test are currently being processed and characterized through an effort of NASA and USAF. The debris characteristics will be used to update satellite breakup models. In particular, the physical dimensions of the debris fragments must be measured to provide characteristic lengths for use in these models. Calipers and commercial 3D scanners were considered as measurement options, but an automated imaging system was ultimately developed to measure debris fragments. By automating the entire process, the measurement results are made repeatable and the human factor associated with calipers and 3D scanning is eliminated. Unlike using calipers to measure, the imaging system obtains non-contact measurements to avoid damaging delicate fragments. Furthermore, this fully automated measurement system minimizes fragment handling, which reduces the potential for fragment damage during the characterization process. In addition, the imaging system reduces the time required to determine the characteristic length of the debris fragment. In this way, the imaging system can measure the tens of thousands of DebriSat fragments at a rate of about six minutes per fragment, compared to hours per fragment in NASA's current 3D scanning measurement approach. The imaging system utilizes a space carving algorithm to generate a 3D point cloud of the article being measured and a custom developed algorithm then extracts the characteristic length from the point cloud. This paper describes the measurement process, results, challenges, and future work of the imaging system used for automated characteristic length measurement of DebriSat fragments.

  1. Computer-assisted tree taxonomy by automated image recognition

    NARCIS (Netherlands)

    Pauwels, E.J.; Zeeuw, P.M.de; Ranguelova, E.B.

    2009-01-01

    We present an algorithm that performs image-based queries within the domain of tree taxonomy. As such, it serves as an example relevant to many other potential applications within the field of biodiversity and photo-identification. Unsupervised matching results are produced through a chain of comput

  2. Automated identification of retained surgical items in radiological images

    Science.gov (United States)

    Agam, Gady; Gan, Lin; Moric, Mario; Gluncic, Vicko

    2015-03-01

    Retained surgical items (RSIs) in patients are a major operating room (OR) patient safety concern. An RSI is any surgical tool, sponge, needle or other item inadvertently left in a patient's body during the course of surgery. If left undetected, RSIs may lead to serious negative health consequences such as sepsis, internal bleeding, and even death. To help physicians efficiently and effectively detect RSIs, we are developing computer-aided detection (CADe) software for X-ray (XR) image analysis, utilizing large amounts of currently available image data to produce a clinically effective RSI detection system. Physician analysis of XRs for the purpose of RSI detection is a relatively lengthy process that may take up to 45 minutes to complete. It is also error prone due to the relatively low acuity of the human eye for RSIs in XR images. The system we are developing is based on computer vision and machine learning algorithms. We address the problem of low incidence by proposing synthesis algorithms. The CADe software we are developing may be integrated into a picture archiving and communication system (PACS), be implemented as a stand-alone software application, or be integrated into portable XR machine software through application programming interfaces. Preliminary experimental results on actual XR images demonstrate the effectiveness of the proposed approach.

  3. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    Science.gov (United States)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

    The results of a master's thesis project on computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of associated magnetic field lines. The project addresses pattern recognition problems in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parameter space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information in the identification process. Results from both synthesized and solar images will be presented.

  4. AUTOMATED VIDEO IMAGE MORPHOMETRY OF THE CORNEAL ENDOTHELIUM

    NARCIS (Netherlands)

    SIERTSEMA, JV; LANDESZ, M; VANDENBROM, H; VANRIJ, G

    1993-01-01

    The central corneal endothelium of 13 eyes in 13 subjects was visualized with a non-contact specular microscope. This report describes the computer-assisted morphometric analysis of enhanced digitized images, using a direct input by means of a frame grabber. The output consisted of mean cell area, c

  5. Principal Components Analysis In Medical Imaging

    Science.gov (United States)

    Weaver, J. B.; Huddleston, A. L.

    1986-06-01

    Principal components analysis, PCA, is basically a data reduction technique. PCA has been used in several problems in diagnostic radiology: processing radioisotope brain scans (Ref. 1), automatic alignment of radionuclide images (Ref. 2), processing MRI images (Refs. 3, 4), analyzing first-pass cardiac studies (Ref. 5), correcting for attenuation in bone mineral measurements (Ref. 6), and dual-energy x-ray imaging (Refs. 6, 7). This paper will progress as follows: a brief introduction to the mathematics of PCA will be followed by two brief examples of how PCA has been used in the literature. Finally, my own experience with PCA in dual-energy x-ray imaging will be given.
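
    A small sketch of the data-reduction idea behind PCA as applied to a stack of co-registered images (for instance a dual-energy pair); scikit-learn is assumed, and the correlated input pair is synthetic, not data from the paper.

        # PCA over co-registered image channels: each pixel is a sample, each image a feature.
        import numpy as np
        from sklearn.decomposition import PCA

        def image_pca(image_stack, n_components=2):
            """image_stack: (n_images, H, W). Returns component images and explained variance."""
            n, h, w = image_stack.shape
            X = image_stack.reshape(n, -1).T              # (H*W, n_images)
            pca = PCA(n_components=n_components)
            scores = pca.fit_transform(X)                 # principal-component values per pixel
            comps = scores.T.reshape(n_components, h, w)  # back to image form
            return comps, pca.explained_variance_ratio_

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            low = rng.random((128, 128))
            high = 0.8 * low + 0.2 * rng.random((128, 128))   # correlated "dual-energy" pair
            comps, ratio = image_pca(np.stack([low, high]))
            print("explained variance ratios:", np.round(ratio, 3))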

  6. Medical Imaging of Mummies and Bog Bodies

    DEFF Research Database (Denmark)

    Lynnerup, Niels

    2010-01-01

    focused on the development and application of non-destructive methods for examining mummies, especially radiography and CT scanning with advanced 3D visualizations. Indeed, the development of commercially available CT scanners in the 1970s meant that for the first time the 3D internal structure of mummies...... severely degraded, bone is quite readily visualized, but accurate imaging of preserved soft tissues, and pathological lesions therein, may require considerable post-image capture processing of CT data....

  7. Model Observers in Medical Imaging Research

    OpenAIRE

    He, Xin; Park, Subok

    2013-01-01

    Model observers play an important role in the optimization and assessment of imaging devices. In this review paper, we first discuss the basic concepts of model observers, which include the mathematical foundations and psychophysical considerations in designing both optimal observers for optimizing imaging systems and anthropomorphic observers for modeling human observers. Second, we survey a few state-of-the-art computational techniques for estimating model observers and the principles of im...

  8. Automated marker tracking using noisy X-ray images degraded by the treatment beam

    Energy Technology Data Exchange (ETDEWEB)

    Wisotzky, E. [Fraunhofer Institute for Production Systems and Design Technology (IPK), Berlin (Germany); German Cancer Research Center (DKFZ), Heidelberg (Germany); Fast, M.F.; Nill, S. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; Oelfke, U. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; German Cancer Research Center (DKFZ), Heidelberg (Germany)

    2015-09-01

    This study demonstrates the feasibility of automated marker tracking for the real-time detection of intrafractional target motion using noisy kilovoltage (kV) X-ray images degraded by the megavoltage (MV) treatment beam. The authors previously introduced the in-line imaging geometry, in which the flat-panel detector (FPD) is mounted directly underneath the treatment head of the linear accelerator. They found that the 121 kVp image quality was severely compromised by the 6 MV beam passing through the FPD at the same time. Specific MV-induced artefacts present a considerable challenge for automated marker detection algorithms. For this study, the authors developed a new imaging geometry by re-positioning the FPD and the X-ray tube. This improved the contrast-to-noise ratio between 40% and 72% at the 1.2 mAs/image exposure setting. The increase in image quality clearly facilitates the quick and stable detection of motion with the aid of a template matching algorithm. The setup was tested with an anthropomorphic lung phantom (including an artificial lung tumour). In the tumour one or three Calypso® beacons were embedded to achieve better contrast during MV radiation. For a single beacon, image acquisition and automated marker detection typically took around 76±6 ms. The success rate was found to be highly dependent on imaging dose and gantry angle. To eliminate possible false detections, the authors implemented a training phase prior to treatment beam irradiation and also introduced speed limits for motion between subsequent images.
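
    A minimal template-matching sketch of the kind of marker detection described, using OpenCV's normalised cross-correlation. The synthetic frame, the score threshold and the centre calculation are illustrative assumptions, not the authors' algorithm or parameters.

        # Locate a fiducial marker in a kV frame by normalised cross-correlation.
        import cv2
        import numpy as np

        def find_marker(frame, template, min_score=0.6):
            """Return (x, y) of the best match centre, or None if correlation is too low."""
            result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(result)
            if max_val < min_score:
                return None                       # likely obscured by MV-induced noise
            x, y = max_loc
            th, tw = template.shape[:2]
            return x + tw // 2, y + th // 2       # marker centre in frame coordinates

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            frame = (rng.random((256, 256)) * 60).astype(np.uint8)   # noisy synthetic kV image
            frame[100:110, 150:160] = 255                            # bright "beacon"
            template = frame[98:112, 148:162].copy()
            print("detected centre:", find_marker(frame, template))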

  9. Lossless compression of medical images using Hilbert scan

    Science.gov (United States)

    Sun, Ziguang; Li, Chungui; Liu, Hao; Zhang, Zengfang

    2007-12-01

    The effectiveness of the Hilbert scan in lossless medical image compression is discussed. In our method, after coding of intensities, the pixels in a medical image are decorrelated with differential pulse code modulation (DPCM); the error image is then rearranged using a Hilbert scan, and finally we apply five coding schemes: Huffman coding, RLE, LZW coding, arithmetic coding, and RLE followed by Huffman coding. The experiments show that the combination of DPCM followed by a Hilbert scan and arithmetic coding gives the best compression result, and also indicate that the Hilbert scan can enhance pixel locality and increase the compression ratio effectively.
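
    A compact sketch of the DPCM-plus-Hilbert-scan reordering step the abstract describes; zlib compression stands in for the arithmetic coder, the image is synthetic, and the Hilbert mapping is the standard iterative construction rather than anything taken from the paper.

        # DPCM residuals reordered along a Hilbert curve, then entropy-coded (zlib as a stand-in).
        import zlib
        import numpy as np

        def hilbert_xy(n, d):
            """Map distance d along a Hilbert curve to (x, y) on an n x n grid (n a power of 2)."""
            x = y = 0
            s, t = 1, d
            while s < n:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x, y = x + s * rx, y + s * ry
                t //= 4
                s *= 2
            return x, y

        def hilbert_dpcm_compress(img):
            """img: (n, n) uint8 with n a power of 2. Returns a compressed byte string."""
            n = img.shape[0]
            order = [hilbert_xy(n, d) for d in range(n * n)]
            scan = np.array([int(img[y, x]) for x, y in order])
            residual = np.diff(scan, prepend=scan[0]) % 256        # DPCM prediction error
            return zlib.compress(residual.astype(np.uint8).tobytes(), 9)

        if __name__ == "__main__":
            yy, xx = np.mgrid[0:256, 0:256]
            img = ((np.sin(xx / 20.0) + np.cos(yy / 25.0)) * 60 + 128).astype(np.uint8)
            raw = zlib.compress(img.tobytes(), 9)
            print(len(raw), "bytes raw-zlib vs", len(hilbert_dpcm_compress(img)), "bytes DPCM+Hilbert")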

  10. Automated Detection of Contaminated Radar Image Pixels in Mountain Areas

    Institute of Scientific and Technical Information of China (English)

    LIU Liping; Qin XU; Pengfei ZHANG; Shun LIU

    2008-01-01

    In mountain areas, radar observations are often contaminated (1) by echoes from high-speed moving vehicles and (2) by point-wise ground clutter under either normal propagation (NP) or anomalous propagation (AP) conditions. Level II data are collected from the KMTX (Salt Lake City, Utah) radar to analyze these two types of contamination in the mountain area around the Great Salt Lake. Human experts provide the "ground truth" for possible contamination of either type on each individual pixel. Common features are then extracted for contaminated pixels of each type. For example, pixels contaminated by echoes from high-speed moving vehicles are characterized by large radial velocity and spectrum width. Echoes from a moving train tend to have larger velocity and reflectivity but smaller spectrum width than those from moving vehicles on highways. These contaminated pixels are only seen in areas of large terrain gradient (in the radial direction along the radar beam). The same is true for the second type of contamination, point-wise ground clutter. Six quality control (QC) parameters are selected to quantify the extracted features. Histograms are computed for each QC parameter and grouped for contaminated pixels of each type and also for non-contaminated pixels. Based on the computed histograms, a fuzzy logic algorithm is developed for automated detection of contaminated pixels. The algorithm is tested with KMTX radar data under different (clear and rainy) weather conditions.
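
    A toy sketch of the fuzzy-logic combination step, assuming trapezoidal membership functions over three of the quality-control parameters. The breakpoints, weights and decision threshold are invented for illustration and are not the values derived from the KMTX histograms.

        # Fuzzy-logic flag for a contaminated radar pixel from a few QC parameters.
        import numpy as np

        def trapezoid(x, a, b, c, d):
            """Membership rising a->b, flat b->c, falling c->d."""
            return float(np.clip(min((x - a) / (b - a + 1e-9), 1.0,
                                     (d - x) / (d - c + 1e-9)), 0.0, 1.0))

        def contamination_score(radial_velocity, spectrum_width, terrain_gradient):
            """Aggregate membership values; weights and breakpoints are illustrative."""
            memberships = {
                "fast_mover":    trapezoid(abs(radial_velocity), 15, 25, 200, 201),  # m/s
                "wide_spectrum": trapezoid(spectrum_width, 4, 8, 50, 51),            # m/s
                "steep_terrain": trapezoid(terrain_gradient, 0.05, 0.15, 10, 11),    # radial slope
            }
            weights = {"fast_mover": 0.4, "wide_spectrum": 0.3, "steep_terrain": 0.3}
            return sum(weights[k] * m for k, m in memberships.items())

        if __name__ == "__main__":
            score = contamination_score(radial_velocity=28.0, spectrum_width=9.0,
                                        terrain_gradient=0.3)
            print("contamination score:", round(score, 2),
                  "-> flag" if score > 0.5 else "-> keep")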

  11. Diagonal queue medical image steganography with Rabin cryptosystem.

    Science.gov (United States)

    Jain, Mamta; Lenka, Saroj Kumar

    2016-03-01

    The main purpose of this work is to provide a novel and efficient steganography method for biomedical images, so that the precious and confidential sensitive data of the patient are protected, and highly reliable algorithms keep the precious brain information secure from intruders both in storage and in transmission. This paper describes a methodology for hiding medical records such as HIV reports, female-fetus reports, and patient identity information inside the patient's brain-disease medical image files (e.g., scan or MRI images) using a diagonal-queue least-significant-bit substitution. The queue data structure plays a dynamic role in resource sharing between multiple communicating parties when secret medical data are transferred asynchronously (i.e., not necessarily received at the same rate they were sent). The Rabin cryptosystem is used to encrypt the secret medical data, since it is computationally secure against a chosen-plaintext attack and relies on the difficulty of integer factoring. The output of the cryptosystem is organized into blocks and equally distributed sub-blocks. In the steganography process, the brain-disease cover images are organized into blocks of diagonal queues, and the secret cipher blocks and sub-blocks are assigned dynamically to selected diagonal queues for embedding. The receiver obtains four plaintext candidates for each ciphertext, so only an authorized receiver can identify the correct medical data. Performance was analysed using MSE, PSNR, and maximum embedding capacity, as well as histogram analysis between the various brain-disease stego and cover images.

  12. Research on medical image encryption in telemedicine systems.

    Science.gov (United States)

    Dai, Yin; Wang, Huanzhen; Zhou, Zixia; Jin, Ziyi

    2016-04-29

    Recently, advances in computers and high-speed communication tools have led to enhancements in remote medical consultation research. Laws in some localities require hospitals to encrypt patient information (including images of the patient) before transferring the data over a network. Therefore, developing suitable encryption algorithms is quite important for modern medicine. This paper demonstrates a digital image encryption algorithm based on chaotic mapping, which exploits the aperiodicity and non-convergence of a chaotic sequence to scramble the image and average pixel values. The chaotic sequence is then used to encrypt the image, thereby improving data security. With this method, the security of data and images can be improved.
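
    A minimal illustration of chaotic-sequence image encryption: a logistic-map keystream XORed with the pixel bytes. The map, its parameters and the key value are assumptions for demonstration, not the paper's exact scheme.

        # Chaos-based image encryption: logistic-map keystream XORed with pixel values.
        import numpy as np

        def logistic_keystream(n, x0=0.3141592653, r=3.9999):
            """Generate n pseudo-random bytes from the logistic map x <- r*x*(1-x)."""
            x = x0
            out = np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = r * x * (1.0 - x)
                out[i] = int(x * 256) % 256
            return out

        def encrypt(image, x0=0.3141592653):
            """XOR encryption; running it again with the same key decrypts."""
            flat = image.reshape(-1)
            return (flat ^ logistic_keystream(flat.size, x0)).reshape(image.shape)

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
            cipher = encrypt(img)
            assert np.array_equal(encrypt(cipher), img)       # XOR is its own inverse
            print("plain/cipher correlation:",
                  round(float(np.corrcoef(img.ravel(), cipher.ravel())[0, 1]), 3))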

  13. 3D/2D Registration of medical images

    OpenAIRE

    Tomaževič, D.

    2008-01-01

    The topic of this doctoral dissertation is registration of 3D medical images to corresponding projective 2D images, referred to as 3D/2D registration. There are numerous possible applications of 3D/2D registration in image-aided diagnosis and treatment. In most of the applications, 3D/2D registration provides the location and orientation of the structures in a preoperative 3D CT or MR image with respect to intraoperative 2D X-ray images. The proposed doctoral dissertation tries to find origin...

  14. Automated segmentation of regions of interest in whole slide skin histopathological images.

    Science.gov (United States)

    Xu, Hongming; Lu, Cheng; Mandal, Mrinal

    2015-01-01

    In the diagnosis of skin melanoma by analyzing histopathological images, the epidermis and epidermis-dermis junctional areas are regions of interest as they provide the most important histologic diagnosis features. This paper presents an automated technique for segmenting epidermis and dermis regions from whole slide skin histopathological images. The proposed technique first performs epidermis segmentation using a thresholding and thickness measurement based method. The dermis area is then segmented based on a predefined depth of segmentation from the epidermis outer boundary. Experimental results on 66 different skin images show that the proposed technique can robustly segment regions of interest as desired.

  15. Advanced techniques in medical image segmentation of the liver

    OpenAIRE

    López Mir, Fernando

    2016-01-01

    [EN] Image segmentation is, along with multimodal and monomodal registration, the operation with the greatest applicability in medical image processing. There are many operations and filters, as much as applications and cases, where the segmentation of an organic tissue is the first step. The case of liver segmentation in radiological images is, after the brain, that on which the highest number of scientific publications can be found. This is due, on the one hand, to the need to continue inno...

  16. Active index for content-based medical image retrieval.

    Science.gov (United States)

    Chang, S K

    1996-01-01

    This paper introduces the active index for content-based medical image retrieval. The dynamic nature of the active index is its most important characteristic. With an active index, we can effectively and efficiently handle smart images that respond to accessing, probing and other actions. The main applications of the active index are to prefetch image and multimedia data, and to facilitate similarity retrieval. The experimental active index system is described.

  17. Processing of hyperspectral medical images applications in dermatology using Matlab

    CERN Document Server

    Koprowski, Robert

    2017-01-01

    This book presents new methods of analyzing and processing hyperspectral medical images, which can be used in diagnostics, for example for dermatological images. The algorithms proposed are fully automatic and the results obtained are fully reproducible. Their operation was tested on a set of several thousands of hyperspectral images and they were implemented in Matlab. The presented source code can be used without licensing restrictions. This is a valuable resource for computer scientists, bioengineers, doctoral students, and dermatologists interested in contemporary analysis methods.

  18. Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer

    Science.gov (United States)

    Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2016-04-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.

  19. Medical image segmentation based on SLIC superpixels model

    Science.gov (United States)

    Chen, Xiang-ting; Zhang, Fan; Zhang, Ruo-ya

    2017-01-01

    Medical imaging is widely used in clinical practice and is an important basis for medical experts to diagnose disease. However, medical images are affected by many unstable factors: the imaging mechanisms are complex, target displacement introduces reconstruction defects, and the partial volume effect and equipment wear lead to errors, all of which greatly increase the complexity of subsequent image processing. A segmentation algorithm based on SLIC (Simple Linear Iterative Clustering) superpixels is used in the preprocessing stage to suppress the influence of reconstruction defects and noise by exploiting feature similarity. At the same time, the good clustering behaviour greatly reduces the complexity of the algorithm, providing an effective basis for rapid expert diagnosis.
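
    A short sketch of SLIC superpixel preprocessing using scikit-image (>= 0.19 for the channel_axis argument) on a synthetic noisy image; the n_segments and compactness settings are illustrative, and the within-superpixel averaging is one simple way to use the clusters, not the paper's full pipeline.

        # SLIC superpixels as a preprocessing step: replace each superpixel by its mean intensity.
        import numpy as np
        from skimage.segmentation import slic

        def superpixel_smooth(image, n_segments=400, compactness=0.1):
            """Cluster pixels into superpixels and average intensities within each one."""
            labels = slic(image, n_segments=n_segments, compactness=compactness,
                          channel_axis=None, start_label=1)
            smoothed = np.zeros_like(image)
            for lab in np.unique(labels):
                mask = labels == lab
                smoothed[mask] = image[mask].mean()
            return labels, smoothed

        if __name__ == "__main__":
            rng = np.random.default_rng(4)
            img = np.zeros((128, 128))
            img[32:96, 32:96] = 0.8                        # a bright "lesion"
            img += rng.normal(scale=0.15, size=img.shape)  # acquisition noise
            labels, smoothed = superpixel_smooth(img)
            print("superpixels:", labels.max(), " background std before/after:",
                  round(float(img[:32].std()), 3), round(float(smoothed[:32].std()), 3))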

  20. Medical Images Watermarking Algorithm Based on Improved DCT

    Directory of Open Access Journals (Sweden)

    Yv-fan SHANG

    2013-12-01

    Full Text Available Targeting the persistent security problems of digital information management systems in modern medicine, this paper presents a robust watermarking algorithm for medical images based on the Arnold transformation and the DCT. The algorithm first uses scrambling to encrypt the watermark information and then combines it with the visual feature vector of the image to generate a binary logic sequence through a hash function. The sequence is taken as a key and stored with a third party to establish ownership of the original image. Because the watermark extraction requires no manual selection of a region of interest, imposes no capacity constraint, and does not involve the original medical image, it solves the security and speed problems of watermark embedding and extraction. The simulation results also show that the algorithm is simple to operate and excellent in robustness and invisibility; in a word, it is more practical than other algorithms.
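
    A rough sketch, under stated assumptions, of the two ingredients the abstract names: an Arnold cat-map scramble of a square binary watermark and a perceptual-feature bit string taken from the signs of low-frequency block-DCT coefficients, XOR-combined into the key sequence handed to a third party. The block size, iteration count, feature choice and hash substitute are all illustrative, not the paper's parameters.

        # Arnold scrambling of a watermark plus a DCT-sign feature vector, XOR-combined into a key.
        import numpy as np
        from scipy.fft import dctn

        def arnold_scramble(mat, iterations=5):
            """Arnold cat map on a square array: (x, y) -> (x + y, x + 2y) mod N."""
            n = mat.shape[0]
            out = mat.copy()
            for _ in range(iterations):
                nxt = np.empty_like(out)
                for x in range(n):
                    for y in range(n):
                        nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
                out = nxt
            return out

        def dct_sign_features(image, n_bits=64, block=8):
            """Signs of one low-frequency DCT coefficient from the first n_bits blocks."""
            h, w = image.shape
            bits = []
            for by in range(0, h - block + 1, block):
                for bx in range(0, w - block + 1, block):
                    coeffs = dctn(image[by:by + block, bx:bx + block].astype(float), norm="ortho")
                    bits.append(1 if coeffs[0, 1] >= 0 else 0)
                    if len(bits) == n_bits:
                        return np.array(bits, dtype=np.uint8)
            return np.array(bits, dtype=np.uint8)

        if __name__ == "__main__":
            rng = np.random.default_rng(5)
            watermark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)     # 64-bit binary logo
            cover = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)   # stand-in medical image
            key = arnold_scramble(watermark).ravel() ^ dct_sign_features(cover)
            print("key sequence (stored with a third party):", "".join(map(str, key)))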

  1. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions

    Directory of Open Access Journals (Sweden)

    Alfonso Baldi

    2010-03-01

    Full Text Available Dermoscopy (dermatoscopy, epiluminescence microscopy) is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs), allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis). This diagnostic tool permits the recognition of morphologic structures not visible by the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning-curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval systems (CBIR).

  2. Automated Detection and Removal of Cloud Shadows on HICO Images

    Science.gov (United States)

    2011-01-01

  3. Medical image of the week: polysomnogram artifact

    Directory of Open Access Journals (Sweden)

    Bartell J

    2015-02-01

    Full Text Available A 54 year-old man with a past medical history of attention deficit hyperactivity disorder (ADHD), low back pain, and paroxysmal supraventricular tachycardia presented to the sleep laboratory for evaluation of sleep disordered breathing. Pertinent medications include fluoxetine, Ambien, and clonazepam. His Epworth sleepiness score was 18. He had a total sleep time of 12 min. On the night of his sleep study, the patient was restless and repeatedly changed positions in bed. Figures 1 and 2 show the artifact, determined to be lead displacement of O1M2 after the patient shifted in bed, inadvertently removing one of his scalp electrodes. The sine waves are 60 Hz in frequency. Once the problem was identified, the lead was quickly replaced to its proper position.

  4. Nonlocal Means-Based Denoising for Medical Images

    Directory of Open Access Journals (Sweden)

    Ke Lu

    2012-01-01

    Full Text Available Medical images often consist of low-contrast objects corrupted by random noise arising in the image acquisition process. Thus, image denoising is one of the fundamental tasks required by medical image analysis. The nonlocal means (NL-means) method provides a powerful framework for denoising. In this work, we investigate an adaptive denoising scheme based on the patch NL-means algorithm for medical image denoising. In contrast with the traditional NL-means algorithm, the proposed adaptive NL-means denoising scheme has three unique features. First, we use a restricted local neighbourhood in which the true intensity for each noisy pixel is estimated from a set of selected neighbouring pixels. Second, the weights are calculated from the similarity between the patch to be denoised and the candidate patches. Finally, we apply a steering kernel to preserve the details of the images. The proposed method has been compared with similar state-of-the-art methods on synthetic and real clinical medical images, showing improved performance in all cases analyzed.
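
    A brief sketch of patch-based non-local means denoising using scikit-image's implementation on a noisy synthetic image; the parameter values are illustrative and this is the standard NL-means rather than the adaptive variant described above.

        # Non-local means denoising of a low-contrast, noisy image (scikit-image implementation).
        import numpy as np
        from skimage.restoration import denoise_nl_means, estimate_sigma

        def nlm_denoise(noisy):
            sigma = float(np.mean(estimate_sigma(noisy)))       # rough noise estimate
            return denoise_nl_means(noisy, h=1.15 * sigma, sigma=sigma,
                                    patch_size=5, patch_distance=6, fast_mode=True)

        if __name__ == "__main__":
            rng = np.random.default_rng(6)
            clean = np.zeros((128, 128))
            clean[40:90, 40:90] = 0.4                           # low-contrast object
            noisy = clean + rng.normal(scale=0.1, size=clean.shape)
            denoised = nlm_denoise(noisy)
            print("RMSE noisy:", round(float(np.sqrt(np.mean((noisy - clean) ** 2))), 4),
                  " RMSE denoised:", round(float(np.sqrt(np.mean((denoised - clean) ** 2))), 4))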

  5. Remote Minimally Invasive Surgery – Haptic Feedback and Selective Automation in Medical Robotics

    Directory of Open Access Journals (Sweden)

    Christoph Staub

    2011-01-01

    Full Text Available The automation of recurrent tasks and force feedback are complex problems in medical robotics. We present a novel approach that extends human-machine skill transfer by a scaffolding framework. It assumes a consolidated working environment for both the trainee and the trainer. The trainer provides hints and cues in a basic structure which is already understood by the learner. In this work, the scaffolding is constituted by abstract patterns, which facilitate the structuring and segmentation of information during "Learning by Demonstration" (LbD). With this concept, the concrete example of knot-tying for suturing is exemplified and evaluated. During the evaluation, most problems and failures arose from intrinsic imprecisions of the medical robot system; these inaccuracies were then reduced by visual guidance of the surgical instruments. While the benefits of force feedback in telesurgery have already been demonstrated, and measured forces are also used during task learning, the transmission of signals between the operator console and the robot system over long-distance or cross-network remote connections is still a challenge due to time delay. Especially during incision of tissue with a scalpel, delayed force feedback yields an unpredictable force perception at the operator side and can harm the tissue the robot is interacting with. We propose an XFEM-based incision force prediction algorithm that simulates the incision contact forces in real time and compensates for the delayed force sensor readings. A realistic 4-arm system for minimally invasive robotic heart surgery is used as a platform for the research.

  6. Optical medical imaging: from glass to man

    Science.gov (United States)

    Bradley, Mark

    2016-11-01

    A formidable challenge in modern respiratory healthcare is the accurate and timely diagnosis of lung infection and inflammation. The EPSRC Interdisciplinary Research Collaboration (IRC) `Proteus' seeks to address this challenge by developing an optical fibre based healthcare technology platform that combines physiological sensing with multiplexed optical molecular imaging. This technology will enable in situ measurements deep in the human lung allowing the assessment of tissue function and characterization of the unique signatures of pulmonary disease and is illustrated here with our in-man application of Optical Imaging SmartProbes and our first device Versicolour.

  7. Quantification of Structure from Medical Images

    DEFF Research Database (Denmark)

    Qazi, Arish Asif

    , segmented from MR images of the knee. The cartilage tissue is considered to be a key determinant in the onset of Osteoarthritis (OA), a degenerative joint disease, with no known cure. The primary obstacle has been the dependence on radiography as the ‘gold standard’ for detecting the manifestation...... based on diffusion tensor imaging, a technique widely used for analysis of the white matter of the central nervous system in the living human brain. An inherent drawback of the traditional diffusion tensor model is its limited ability to provide detailed information about multi-directional fiber...

  8. Registering multiple medical images using the shared chain mutual information

    Institute of Scientific and Technical Information of China (English)

    Jing Jin; Qiang Wang; Yi Shen

    2007-01-01

    A new approach to the simultaneous registration of multiple medical images is proposed using shared chain mutual information (SCMI) as the matching measure. The presented method applies SCMI to measure the shared information between the multiple images. Registration is achieved by adjusting the relative position of the floating image until the SCMI between all the images is maximized. Using this measure, we registered three and four simulated magnetic resonance imaging (MRI) images using downhill simplex optimization to search for the optimal transformation parameters. The accuracy and validity of the proposed method for multiple-image registration are verified by comparing the results with those of two-image registration. Furthermore, the performance of the proposed method is validated by registering a real ultrasonic image sequence.

  9. SVM for density estimation and application to medical image segmentation

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhao; ZHANG Su; ZHANG Chen-xi; CHEN Ya-zhu

    2006-01-01

    A method of medical image segmentation based on support vector machine (SVM) density estimation is presented. We use this estimator to construct a prior model of the image intensity and curvature profile of the structure from training images. When segmenting a novel image similar to the training images, the narrow level set technique is used, and the higher-dimensional surface evolution metric is defined by the prior model instead of by an energy minimization function. This method offers several advantages. First, SVM density estimation is consistent and its solution is sparse. Second, compared with traditional level set methods, this method incorporates shape information on the object to be segmented into the segmentation process. Segmentation results are demonstrated on synthetic images, MR images and ultrasonic images.

  10. Oncological image analysis: medical and molecular image analysis

    Science.gov (United States)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  11. Extended Field Laser Confocal Microscopy (EFLCM: Combining automated Gigapixel image capture with in silico virtual microscopy

    Directory of Open Access Journals (Sweden)

    Strandh Christer

    2008-07-01

    Full Text Available Background: Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods: Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum-resolution confocal panorama images of several gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results: We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion: The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  12. A semi-automated single day image differencing technique to identify animals in aerial imagery.

    Directory of Open Access Journals (Sweden)

    Pat Terletzky

    Full Text Available Our research presents a proof-of-concept that explores a new and innovative method to identify large animals in aerial imagery with single day image differencing. We acquired two aerial images of eight fenced pastures and conducted a principal component analysis of each image. We then subtracted the first principal component of the two pasture images followed by heuristic thresholding to generate polygons. The number of polygons represented the number of potential cattle (Bos taurus) and horses (Equus caballus) in the pasture. The process was considered semi-automated because we were not able to automate the identification of spatial or spectral thresholding values. Imagery was acquired concurrently with ground counts of animal numbers. Across the eight pastures, 82% of the animals were correctly identified, mean percent commission was 53%, and mean percent omission was 18%. The high commission error was due to small mis-alignments generated from image-to-image registration, misidentified shadows, and grouping behavior of animals. The high probability of correctly identifying animals suggests short time interval image differencing could provide a new technique to enumerate wild ungulates occupying grassland ecosystems, especially in isolated or difficult to access areas. To our knowledge, this was the first attempt to use standard change detection techniques to identify and enumerate large ungulates.

  13. A semi-automated single day image differencing technique to identify animals in aerial imagery.

    Science.gov (United States)

    Terletzky, Pat; Ramsey, Robert Douglas

    2014-01-01

    Our research presents a proof-of-concept that explores a new and innovative method to identify large animals in aerial imagery with single day image differencing. We acquired two aerial images of eight fenced pastures and conducted a principal component analysis of each image. We then subtracted the first principal component of the two pasture images followed by heuristic thresholding to generate polygons. The number of polygons represented the number of potential cattle (Bos taurus) and horses (Equus caballus) in the pasture. The process was considered semi-automated because we were not able to automate the identification of spatial or spectral thresholding values. Imagery was acquired concurrently with ground counts of animal numbers. Across the eight pastures, 82% of the animals were correctly identified, mean percent commission was 53%, and mean percent omission was 18%. The high commission error was due to small mis-alignments generated from image-to-image registration, misidentified shadows, and grouping behavior of animals. The high probability of correctly identifying animals suggests short time interval image differencing could provide a new technique to enumerate wild ungulates occupying grassland ecosystems, especially in isolated or difficult to access areas. To our knowledge, this was the first attempt to use standard change detection techniques to identify and enumerate large ungulates.
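
    A condensed sketch of the pipeline described above, assuming two co-registered multiband images supplied as NumPy arrays: first principal component of each acquisition, sign alignment, difference, threshold, and connected-component counting. The percentile threshold and minimum blob size stand in for the authors' heuristic thresholding and are illustrative only.

        # Single-day image differencing: PC1 of each acquisition, difference, threshold, count blobs.
        import numpy as np
        from scipy import ndimage
        from sklearn.decomposition import PCA

        def first_component(image):
            """image: (H, W, bands) -> first principal component as an (H, W) array."""
            h, w, b = image.shape
            pc1 = PCA(n_components=1).fit_transform(image.reshape(-1, b))
            return pc1.reshape(h, w)

        def count_animals(image_t0, image_t1, percentile=99.9, min_pixels=4):
            p0, p1 = first_component(image_t0), first_component(image_t1)
            if np.sum(p0 * p1) < 0:                 # resolve PCA's sign ambiguity between dates
                p1 = -p1
            diff = np.abs(p1 - p0)
            mask = diff > np.percentile(diff, percentile)   # stand-in for heuristic threshold
            labels, n = ndimage.label(mask)
            sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
            return int(np.sum(sizes >= min_pixels))         # polygons large enough to be animals

        if __name__ == "__main__":
            rng = np.random.default_rng(7)
            base = rng.normal(size=(200, 200))              # shared pasture background
            t0 = np.stack([base + rng.normal(scale=0.1, size=base.shape) for _ in range(4)], axis=-1)
            t1 = t0 + rng.normal(scale=0.05, size=t0.shape)
            t1[50:54, 80:84, :] += 3.0                      # an "animal" that moved in
            print("animals detected:", count_animals(t0, t1))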

  14. Automatic medical image annotation and keyword-based image retrieval using relevance feedback.

    Science.gov (United States)

    Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal

    2012-08-01

    This paper presents novel multiple-keyword annotation for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center-symmetric local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses the confidence score that is assigned to each annotated keyword by combining the probabilities of the random forests with a predefined body-relation graph. To overcome the limitations of keyword-based image retrieval, we combine our image retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.
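
    A small sketch of the general approach (texture features plus a random forest whose class probabilities act as keyword confidence scores), using plain local binary patterns from scikit-image in place of the paper's wavelet-based centre-symmetric LBP; the two synthetic "modalities" are invented for the demonstration.

        # Keyword annotation as texture classification: LBP histograms + random forest.
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.ensemble import RandomForestClassifier

        P, R = 8, 1                                      # LBP neighbourhood

        def lbp_histogram(image):
            """Uniform-LBP histogram as a fixed-length texture descriptor."""
            codes = local_binary_pattern(image, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
            return hist

        def make_image(kind, rng):
            """Two synthetic 'modalities': smooth blobs (kind 0) vs. high-frequency speckle."""
            base = rng.random((64, 64))
            if kind == 0:
                kernel = np.ones((7, 7)) / 49.0
                base = np.real(np.fft.ifft2(np.fft.fft2(base) * np.fft.fft2(kernel, base.shape)))
            return base

        if __name__ == "__main__":
            rng = np.random.default_rng(8)
            labels = rng.integers(0, 2, size=200)
            X = np.array([lbp_histogram(make_image(k, rng)) for k in labels])
            clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], labels[:150])
            # Class probabilities act like the confidence scores attached to each keyword.
            print("held-out accuracy:", clf.score(X[150:], labels[150:]))
            print("confidence for first test image:", clf.predict_proba(X[150:151])[0])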

  15. Automated analysis of craniofacial morphology using magnetic resonance images.

    Directory of Open Access Journals (Sweden)

    M Mallar Chakravarty

    Full Text Available Quantitative analysis of craniofacial morphology is of interest to scholars working in a wide variety of disciplines, such as anthropology, developmental biology, and medicine. T1-weighted (anatomical) magnetic resonance images (MRI) provide excellent contrast between soft tissues. Given its three-dimensional nature, MRI represents an ideal imaging modality for the analysis of craniofacial structure in living individuals. Here we describe how T1-weighted MR images, acquired to examine brain anatomy, can also be used to analyze facial features. Using a sample of typically developing adolescents from the Saguenay Youth Study (N = 597; 292 male, 305 female, ages: 12 to 18 years), we quantified inter-individual variations in craniofacial structure in two ways. First, we adapted existing nonlinear registration-based morphological techniques to generate iteratively a group-wise population average of craniofacial features. The nonlinear transformations were used to map the craniofacial structure of each individual to the population average. Using voxel-wise measures of expansion and contraction, we then examined the effects of sex and age on inter-individual variations in facial features. Second, we employed a landmark-based approach to quantify variations in face surfaces. This approach involves: (a) placing 56 landmarks (forehead, nose, lips, jaw-line, cheekbones, and eyes) on a surface representation of the MRI-based group average; (b) warping the landmarks to the individual faces using the inverse nonlinear transformation estimated for each person; and (c) using a principal components analysis (PCA) of the warped landmarks to identify facial features (i.e., clusters of landmarks that vary in our sample in a correlated fashion). As with the voxel-wise analysis of the deformation fields, we examined the effects of sex and age on the PCA-derived spatial relationships between facial features. Both methods demonstrated significant sexual dimorphism in

  16. Image cytometer method for automated assessment of human spermatozoa concentration

    DEFF Research Database (Denmark)

    Egeberg, D L; Kjaerulff, S; Hansen, C

    2013-01-01

    to investigator bias. Here we show that image cytometry can be used to accurately measure the sperm concentration of human semen samples with great ease and reproducibility. The impact of several factors (pipetting, mixing, round cell content, sperm concentration), which can influence the read-out as well......In the basic clinical work-up of infertile couples, a semen analysis is mandatory and the sperm concentration is one of the most essential variables to be determined. Sperm concentration is usually assessed by manual counting using a haemocytometer and is hence labour intensive and may be subjected...... and easy measurement of human sperm concentration....

  17. Automated Hierarchical Time Gain Compensation for In Vivo Ultrasound Imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo

    2015-01-01

    Time gain compensation (TGC) is essential to ensure the optimal image quality of the clinical ultrasound scans. When large fluid collections are present within the scan plane, the attenuation distribution is changed drastically and TGC compensation becomes challenging. This paper presents...... tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC were visualized side by side and evaluated by two radiologists...

  18. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, M.; Rosenvinge, F. S.; Spillum, E.

    2015-01-01

    Background: Antibiotics of the beta-lactam group are able to alter the shape of the bacterial cell wall, e.g. filamentation or a spheroplast formation. Early determination of antimicrobial susceptibility may be complicated by filamentation of bacteria as this can be falsely interpreted as growth...... displaying different resistant profiles and differences in filamentation kinetics were used to study a novel image analysis algorithm to quantify length of bacteria and bacterial filamentation. A total of 12 beta-lactam antibiotics or beta-lactam-beta-lactamase inhibitor combinations were analyzed...

  19. Automated Image-Based Procedures for Adaptive Radiotherapy

    DEFF Research Database (Denmark)

    Bjerre, Troels

    -tissue complication probability (NTCP), margins used to account for interfraction and intrafraction anatomical changes and motion need to be reduced. This can only be achieved through proper treatment plan adaptations and intrafraction motion management. This thesis describes methods in support of image...... to encourage bone rigidity and local tissue volume change only in the gross tumour volume and the lungs. This is highly relevant in adaptive radiotherapy when modelling significant tumour volume changes. - It is described how cone beam CT reconstruction can be modelled as a deformation of a planning CT scan...

  20. Automated Hierarchical Time Gain Compensation for In Vivo Ultrasound Imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo;

    2015-01-01

    in terms of image quality. Wilcoxon signed-rank test was used to evaluate whether radiologists preferred the processed sequences or the unprocessed data. The results indicate that the average visual analogue scale (VAS) is positive ( p-value: 2.34 × 10−13) and estimated to be 1.01 (95% CI: 0.85; 1...... tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC were visualized side by side and evaluated by two radiologists...

  1. MEDICAL IMAGE SEGMENTATION FOR ANATOMICAL KNOWLEDGE EXTRACTION

    Directory of Open Access Journals (Sweden)

    Ms Maya Eapen

    2014-01-01

    Full Text Available Computed tomography angiography (CTA) images of the abdomen, followed by precise segmentation and subsequent computation of shape-based features of the liver, play an important role in hepatic surgery, in patient/donor diagnosis during liver transplantation, and at various treatment stages. Nevertheless, issues such as intensity similarity and the partial volume effect (PVE) between neighboring organs make the task of liver segmentation critical. Accurate segmentation of the liver helps surgeons classify patients based on their liver anatomy, which in turn helps them in the treatment decision phase. In this study, we propose an effective Advanced Region Growing (ARG) algorithm for segmentation of the liver from CTA images. The performance of the proposed technique was tested with several CTA images acquired from a wide range of patients. The proposed ARG algorithm identifies the liver regions on the images based on statistical features (intensity distribution and orientation value). The proposed technique addresses the aforementioned issues and has been evaluated both quantitatively and qualitatively. For quantitative analysis, the proposed method was compared with manual segmentation (the gold standard) and with standard region growing.
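
    A bare-bones illustration of intensity-based seeded region growing, the classical baseline the advanced method is compared against; the seed, tolerance and synthetic image are placeholders, and none of the statistical-feature logic of the ARG algorithm is reproduced here.

        # Classical seeded region growing on intensity: grow while |pixel - region mean| < tol.
        from collections import deque
        import numpy as np

        def region_grow(image, seed, tol=10.0):
            """image: 2D array; seed: (row, col). Returns a boolean mask of the grown region."""
            mask = np.zeros(image.shape, dtype=bool)
            mask[seed] = True
            total, count = float(image[seed]), 1
            queue = deque([seed])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] and not mask[rr, cc]:
                        if abs(float(image[rr, cc]) - total / count) < tol:
                            mask[rr, cc] = True
                            total += float(image[rr, cc])
                            count += 1
                            queue.append((rr, cc))
            return mask

        if __name__ == "__main__":
            rng = np.random.default_rng(9)
            img = rng.normal(40, 3, size=(128, 128))                 # background "tissue"
            img[30:90, 30:90] = rng.normal(90, 3, size=(60, 60))     # brighter "liver" region
            liver = region_grow(img, seed=(60, 60), tol=15.0)
            print("grown region size:", int(liver.sum()), "of", 60 * 60, "true pixels")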

  2. Automated detection of diabetic retinopathy in retinal images

    Directory of Open Access Journals (Sweden)

    Carmen Valverde

    2016-01-01

    Full Text Available Diabetic retinopathy (DR) is a disease with an increasing prevalence and the main cause of blindness among the working-age population. The risk of severe vision loss can be significantly reduced by timely diagnosis and treatment. Systematic screening for DR has been identified as a cost-effective way to save health service resources. Automatic retinal image analysis is emerging as an important screening tool for early DR detection, which can reduce the workload associated with manual grading as well as save diagnosis costs and time. Many research efforts in recent years have been devoted to developing automatic tools to help in the detection and evaluation of DR lesions. However, there is a large variability in the databases and evaluation criteria used in the literature, which hampers a direct comparison of the different studies. This work is aimed at summarizing the results of the available algorithms for the detection and classification of DR pathology. A detailed literature search was conducted using PubMed. Relevant studies from the last 10 years were selected, scrutinized and included in the review. Furthermore, we give an overview of the available commercial software for automatic retinal image analysis.

  3. Watermarking techniques used in medical images: a survey.

    Science.gov (United States)

    Mousavi, Seyed Mojtaba; Naghsh, Alireza; Abu-Bakar, S A R

    2014-12-01

    The ever-growing numbers of medical digital images and the need to share them among specialists and hospitals for better and more accurate diagnosis require that patients' privacy be protected. As a result of this, there is a need for medical image watermarking (MIW). However, MIW needs to be performed with special care for two reasons. Firstly, the watermarking procedure cannot compromise the quality of the image. Secondly, confidential patient information embedded within the image should be flawlessly retrievable without risk of error after image decompressing. Despite extensive research undertaken in this area, there is still no method available to fulfill all the requirements of MIW. This paper aims to provide a useful survey on watermarking and offer a clear perspective for interested researchers by analyzing the strengths and weaknesses of different existing methods.

  4. Automated classification of female facial beauty by image analysis and supervised learning

    Science.gov (United States)

    Gunes, Hatice; Piccardi, Massimo; Jan, Tony

    2004-01-01

    The fact that perception of facial beauty may be a universal concept has long been debated amongst psychologists and anthropologists. In this paper, we performed experiments to evaluate the extent of beauty universality by asking a number of diverse human referees to grade a same collection of female facial images. Results obtained show that the different individuals gave similar votes, thus well supporting the concept of beauty universality. We then trained an automated classifier using the human votes as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved proves that this classifier can be used as a general, automated tool for objective classification of female facial beauty. Potential applications exist in the entertainment industry and plastic surgery.

  5. A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image

    Directory of Open Access Journals (Sweden)

    Phlypo Ronald

    2010-01-01

    Full Text Available We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.

  6. Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.

    Science.gov (United States)

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but has so far received little study. We studied the extraction of prostatic lumina and the automated recognition of calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average run time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi.
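
    A compact sketch of a PCA-SVM classifier of the kind the abstract evaluates, built with scikit-learn on synthetic feature vectors; the actual feature extraction (local entropy, Otsu-based lumen masks, calculus texture) is not reproduced, and the reported accuracy figures are unrelated to this toy example.

        # PCA for dimensionality reduction feeding an SVM classifier, with basic test metrics.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score, recall_score

        if __name__ == "__main__":
            rng = np.random.default_rng(10)
            # Synthetic texture-feature vectors: class 1 ("calculus") shifted in a few dimensions.
            X = rng.normal(size=(400, 60))
            y = rng.integers(0, 2, size=400)
            X[y == 1, :5] += 1.5

            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
            model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
            model.fit(X_tr, y_tr)
            pred = model.predict(X_te)
            print("test accuracy :", round(accuracy_score(y_te, pred), 3))
            print("sensitivity   :", round(recall_score(y_te, pred, pos_label=1), 3))
            print("specificity   :", round(recall_score(y_te, pred, pos_label=0), 3))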

  7. Medical Image Dynamic Collaborative Processing on the Distributed Environment

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A new trend in the development of medical image processing systems is to enhance the sharing of medical resources and the collaborative work of medical specialists. This paper presents an architecture for dynamic collaborative processing of medical images in a distributed environment, combining Java, CORBA (Common Object Request Broker Architecture) and a multi-agent system (MAS) collaborative mechanism. The architecture allows medical specialists or applications to share records and communicate with each other on the web, overcoming the shortcomings of the traditional approach based on the Common Gateway Interface (CGI) and client/server architecture, and it can support collaboration among remote heterogeneous systems. The new approach improves the collaborative processing of medical data and applications and enhances interoperation among heterogeneous systems. Research on the system will help collaboration and cooperation among medical application systems distributed on the web, thus supplying high-quality medical services such as diagnosis and therapy to practicing specialists regardless of their actual geographic location.

  8. A New Method for Medical Image Clustering Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Akbar Shahrzad Khashandarag

    2013-01-01

    Full Text Available Segmentation is applied to medical images when the brightness of the images becomes weak, making it difficult to distinguish tissue borders. Exact segmentation of medical images is therefore an essential step in recognizing and treating an illness, and the purpose of clustering in medical images is the recognition of damaged areas in tissues. Different clustering techniques have been introduced in fields such as engineering, medicine and data mining; however, there is no standard clustering technique that yields ideal results for all imaging applications. In this paper, a new method combining a genetic algorithm and the k-means algorithm is presented for clustering medical images. In this combined technique, a variable string length genetic algorithm (VGA) is used to determine the optimal cluster centers. The proposed algorithm has been compared with the k-means clustering algorithm. The advantage of the proposed method is its accuracy in selecting the optimal cluster centers compared with the above-mentioned technique.

  9. Resource estimation in high performance medical image computing.

    Science.gov (United States)

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.

  10. Medical image of the week: sleep bruxism

    Directory of Open Access Journals (Sweden)

    Bartell J

    2015-03-01

    Full Text Available No abstract available. Article truncated at 150 words. A 42 year-old man with a past medical history of insomnia, post-traumatic stress disorder, depression and both migraine and tension headaches was referred for an overnight sleep study. He had presented to the sleep clinic with symptoms of obstructive sleep apnea. Medications included sumatriptan, amitriptyline, sertraline, and trazodone. His sleep study showed: sleep efficiency of 58.2%, apnea-hypopnea index of 33 events per hour, and arousal index of 14.5/hr. Periodic limb movement index was 29.2/hr. The time spent in the sleep stages included N1 (3.6%), N2 (72.5%), N3 (12.9%), and REM (10.9%). Figure 1 is representative of the several brief waveforms seen on his EEG and chin EMG. Sleep bruxism (SB) is a type of sleep-related movement disorder that is characterized by involuntary masticatory muscle contraction resulting in grinding and clenching of the teeth and typically associated with arousals from sleep (1,2). The American Academy of Sleep Medicine (AASM) criteria for ...

  11. Medical image of the week: tracheal perforation

    Directory of Open Access Journals (Sweden)

    Parsa N

    2014-12-01

    Full Text Available A 45 year old Caucasian man with a history of HIV/AIDS was admitted for septic shock secondary to right lower lobe community acquired pneumonia. The patient’s respiratory status continued to decline, requiring emergency intubation in a non-ICU setting. Four laryngoscope intubation attempts were made, including an inadvertent esophageal intubation. Subsequent CT imaging revealed a tracheal defect (Figure 1, red arrow) with communication to the mediastinum and air around the trachea consistent with pneumomediastinum (Figure 2, orange arrow, and Figure 3, yellow arrow). Pneumopericardium (Figure 4, blue arrow) was also evident post-intubation. The patient’s hemodynamic status remained stable. Two days following intubation, chest imaging revealed resolution of the pneumomediastinum and pneumopericardium, and the patient continued to do well without hemodynamic compromise or subcutaneous emphysema. Post-intubation tracheal perforation is a rare complication of traumatic intubation and may be managed with surgical intervention or conservative treatment (1).

  12. Congenital heart defects and medical imaging.

    Science.gov (United States)

    Gehin, Connie; Ragsdale, Lisa

    2013-01-01

    Radiologic technologists perform imaging studies that are useful in the diagnosis of congenital heart defects in infants and adults. These studies also help to monitor congenital heart defect repairs in adults. This article describes the development and functional anatomy of the heart, along with the epidemiology and anatomy of congenital heart defects. It also discusses the increasing population of adults who have congenital heart defects and the most effective modalities for diagnosing, evaluating, and monitoring congenital heart defects.

  13. Automated grading of renal cell carcinoma using whole slide imaging

    Directory of Open Access Journals (Sweden)

    Fang-Cheng Yeh

    2014-01-01

    Full Text Available Introduction: Recent technology developments have demonstrated the benefit of using whole slide imaging (WSI) in computer-aided diagnosis. In this paper, we explore the feasibility of using automatic WSI analysis to assist grading of clear cell renal cell carcinoma (RCC), which is a manual task traditionally performed by pathologists. Materials and Methods: Automatic WSI analysis was applied to 39 hematoxylin and eosin-stained digitized slides of clear cell RCC with varying grades. Kernel regression was used to estimate the spatial distribution of nuclear size across the entire slides. The analysis results were correlated with Fuhrman nuclear grades determined by pathologists. Results: The spatial distribution of nuclear size provided a panoramic view of the tissue sections. The distribution images facilitated locating regions of interest, such as high-grade regions and areas with necrosis. The statistical analysis showed that the maximum nuclear size was significantly different (P < 0.001) between low-grade (Grades I and II) and high-grade tumors (Grades III and IV). The receiver operating characteristics analysis showed that the maximum nuclear size distinguished high-grade and low-grade tumors with a false positive rate of 0.2 and a true positive rate of 1.0. The area under the curve is 0.97. Conclusion: The automatic WSI analysis allows pathologists to see the spatial distribution of nuclear size inside the tumors. The maximum nuclear size can also be used to differentiate low-grade and high-grade clear cell RCC with good sensitivity and specificity. These data suggest that automatic WSI analysis may facilitate pathologic grading of renal tumors and reduce variability encountered with manual grading.

  14. Medical image of the week: Boerhaave syndrome

    Directory of Open Access Journals (Sweden)

    Parsa N

    2016-06-01

    Full Text Available No abstract available. Article truncated at 150 words. A 41-year-old woman with a history of gastroesophageal reflux disease (GERD), asthma and iron deficiency anemia presented with complaints of right sided chest pain, nausea and emesis for several days prior to hospital presentation. She had also been experiencing progressive dysphagia to solids for a month preceding admission. CT chest imaging revealed mega-esophagus (Figure 1A) with rupture into the right lung parenchyma and resultant abscess formation (Figure 1B and 1C). A subsequent echocardiogram also confirmed mitral valve endocarditis. An image-guided chest tube was placed in the abscess for drainage. Endoscopy was attempted but visualization was difficult due to the presence of retained food. Given her low albumin and poor nutritional state, a jejunostomy tube was placed. Follow up CT imaging with contrast through a nasogastric tube confirmed extravasation of esophageal contrast into the right lung parenchyma (Figure 1D). Blood and sputum cultures grew Candida glabrata. She was initially started on ...

  15. Efficient Parallel Levenberg-Marquardt Model Fitting towards Real-Time Automated Parametric Imaging Microscopy

    OpenAIRE

    Xiang Zhu; Dianwen Zhang

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on a graphics processing unit for high-performance, scalable parallel model fitting. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for applications in super-resolution localization microscopy and fluorescence lifetim...
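
    GPU-LMFit itself is not shown here; as a minimal CPU-side sketch of the kind of per-pixel Levenberg-Marquardt fit it accelerates, the fragment below fits a mono-exponential decay to each pixel of a toy image stack using SciPy's 'lm' solver (the model and parameters are illustrative).

```python
# A minimal CPU-side sketch (not GPU-LMFit itself) of per-pixel
# Levenberg-Marquardt model fitting, here a mono-exponential decay
# A*exp(-t/tau) + c such as arises in fluorescence lifetime imaging.
import numpy as np
from scipy.optimize import least_squares

def decay(params, t):
    amp, tau, offset = params
    return amp * np.exp(-t / tau) + offset

def fit_pixel(t, y, p0=(1.0, 1.0, 0.0)):
    # method='lm' selects the Levenberg-Marquardt algorithm.
    res = least_squares(lambda p: decay(p, t) - y, p0, method="lm")
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 64)
    true = (2.0, 2.5, 0.1)
    # A toy 4x4 "image": one noisy decay curve per pixel.
    stack = decay(true, t) + 0.02 * rng.standard_normal((4, 4, t.size))
    taus = np.array([[fit_pixel(t, stack[i, j])[1]
                      for j in range(4)] for i in range(4)])
    print("estimated lifetimes:\n", taus.round(2))
```

    Because each pixel's fit is independent, the workload parallelizes naturally, which is what motivates the GPU implementation described in the abstract.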

  16. Automated static image analysis as a novel tool in describing the physical properties of dietary fiber

    OpenAIRE

    Kurek,Marcin Andrzej; Piwińska, Monika; Wyrwisz, Jarosław; Wierzbicka, Agnieszka

    2015-01-01

    The growing interest in the use of dietary fiber in food has created the need for precise tools to describe its physical properties. This research examined two dietary fibers, from oats and beets respectively, in a range of particle sizes. The application of automated static image analysis for describing the hydration properties and particle size distribution of dietary fiber was analyzed. Conventional tests for water holding capacity (WHC) were conducted. The particles were...

  17. Automated Formosat Image Processing System for Rapid Response to International Disasters

    Science.gov (United States)

    Cheng, M. C.; Chou, S. C.; Chen, Y. C.; Chen, B.; Liu, C.; Yu, S. J.

    2016-06-01

    FORMOSAT-2, Taiwan's first remote sensing satellite, was successfully launched in May 2004 into a Sun-synchronous orbit at an altitude of 891 kilometers. With its daily revisit feature, the 2-m panchromatic and 8-m multi-spectral resolution images captured have been used for research and operations in various societal benefit areas. This paper details the orchestration of tasks conducted by different institutions in Taiwan in response to international disasters. The institutes involved include the national space agency, the National Space Organization (NSPO), the Center for Satellite Remote Sensing Research of National Central University, the GIS Center of Feng-Chia University, and the National Center for High-performance Computing. Since each institution has its own mandate, the coordinated tasks ranged from receiving emergency observation requests, scheduling and tasking of satellite operation, and downlink to ground stations, through image processing including data injection and ortho-rectification, to delivery of image products. With the lessons learned from working with international partners, the FORMOSAT Image Processing System has been extensively automated and streamlined with the goal of shortening the time between request and delivery. The integrated team has developed an Application Interface to its system platform that provides functions for searching the archive catalogue, requesting data services, mission planning, inquiring about service status, and image download. This automated system enables timely image acquisition and substantially increases the value of the data products. An example outcome of these efforts, the recent response supporting Sentinel Asia during the Nepal earthquake, is demonstrated herein.

  18. OpenComet: An automated tool for comet assay image analysis

    Directory of Open Access Journals (Sweden)

    Benjamin M. Gyori

    2014-01-01

    Full Text Available Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.

  19. OpenComet: an automated tool for comet assay image analysis.

    Science.gov (United States)

    Gyori, Benjamin M; Venkatachalam, Gireedhar; Thiagarajan, P S; Hsu, David; Clement, Marie-Veronique

    2014-01-01

    Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.
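
    OpenComet's actual comet-finding and head-segmentation algorithms are not reproduced here; the fragment below is only a rough sketch of intensity-profile analysis, splitting a synthetic comet into head and tail where the column-wise intensity profile drops below a fraction of its peak (the threshold and data are illustrative).

```python
# A rough sketch (not OpenComet's implementation) of intensity-profile
# analysis for a single comet: sum the comet image column-wise and place
# the head/tail boundary where the profile falls below a fraction of its peak.
import numpy as np

def head_tail_split(comet, frac=0.5):
    """Return the column index separating the head (bright) from the tail."""
    profile = comet.sum(axis=0).astype(float)       # column-wise intensity
    peak = int(np.argmax(profile))                  # head is the brightest band
    below = np.where(profile[peak:] < frac * profile[peak])[0]
    return peak + int(below[0]) if below.size else comet.shape[1]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    cols = np.arange(100)
    # Synthetic comet: bright Gaussian head near column 20 plus a decaying tail.
    profile = 100 * np.exp(-((cols - 20) ** 2) / 50) + 30 * np.exp(-cols / 40)
    comet = np.tile(profile, (40, 1)) + rng.normal(0, 1, (40, 100))
    print("head/tail boundary at column:", head_tail_split(comet))
```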

  20. Use of an Automated Image Processing Program to Quantify Recombinant Adenovirus Particles

    Science.gov (United States)

    Obenauer-Kutner, Linda J.; Halperin, Rebecca; Ihnat, Peter M.; Tully, Christopher P.; Bordens, Ronald W.; Grace, Michael J.

    2005-02-01

    Electron microscopy has a pivotal role as an analytical tool in pharmaceutical research. However, digital image data have proven to be too large for efficient quantitative analysis. We describe here the development and application of an automated image processing (AIP) program that rapidly quantifies shape measurements of recombinant adenovirus (rAd) obtained from digitized field emission scanning electron microscope (FESEM) images. The program was written using the macro-recording features within Image-Pro® Plus software. The macro program, which is linked to a Microsoft Excel spreadsheet, consists of a series of subroutines designed to automatically measure rAd vector objects from the FESEM images. The application and utility of this macro program has enabled us to rapidly and efficiently analyze very large data sets of rAd samples while minimizing operator time.
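
    The authors' macro runs inside Image-Pro Plus and is not reproduced here; as a loose Python analogue of automated particle shape measurement, the sketch below thresholds a synthetic micrograph, labels connected particles and tabulates basic shape descriptors with scikit-image (all names and values are illustrative).

```python
# A rough Python analogue (not the authors' Image-Pro Plus macro) of automated
# shape measurement for particles in a micrograph: threshold the image,
# label connected particles and tabulate simple shape descriptors.
import numpy as np
from skimage import draw, filters, measure

def measure_particles(image):
    mask = image > filters.threshold_otsu(image)
    labels = measure.label(mask)
    rows = []
    for r in measure.regionprops(labels):
        rows.append({"area": r.area,
                     "perimeter": r.perimeter,
                     "eccentricity": r.eccentricity})
    return rows

if __name__ == "__main__":
    # Synthetic "micrograph" with two disc-shaped particles.
    img = np.zeros((200, 200))
    for center, radius in [((60, 60), 20), ((140, 130), 30)]:
        rr, cc = draw.disk(center, radius)
        img[rr, cc] = 1.0
    for row in measure_particles(img):
        print(row)
```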

  1. Automated pathologies detection in retina digital images based on complex continuous wavelet transform phase angles.

    Science.gov (United States)

    Lahmiri, Salim; Gargour, Christian S; Gabrea, Marcel

    2014-10-01

    An automated diagnosis system that uses the complex continuous wavelet transform (CWT) to process retina digital images and support vector machines (SVMs) for classification purposes is presented. In particular, each retina image is transformed into two one-dimensional signals by concatenating image rows and columns separately. The mathematical norm of the phase angles found in each one-dimensional signal at each level of CWT decomposition is used to characterise the texture of normal images against abnormal images affected by exudates, drusen and microaneurysms. The leave-one-out cross-validation method was adopted to conduct experiments, and the results from the SVM show that the proposed approach gives better results than those obtained by other methods in terms of correct classification rate, sensitivity and specificity.
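
    The exact wavelet, scales and SVM settings are not given in the abstract; the sketch below, using PyWavelets and scikit-learn, only illustrates the general pipeline of flattening an image into a one-dimensional signal, taking a complex CWT, using the norm of the phase angles at each scale as features, and training an SVM (all parameters are illustrative).

```python
# A simplified sketch of the described pipeline (not the authors' exact code):
# flatten an image row-wise into a 1-D signal, take a complex continuous
# wavelet transform, use the norm of the phase angles at each scale as a
# feature vector, and feed the features to an SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def phase_features(image, scales=np.arange(1, 9), wavelet="cmor1.5-1.0"):
    signal = image.reshape(-1).astype(float)          # concatenate rows
    coeffs, _ = pywt.cwt(signal, scales, wavelet)     # complex coefficients
    phases = np.angle(coeffs)                         # phase angles per scale
    return np.linalg.norm(phases, axis=1)             # one norm per scale

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Toy data: "normal" smooth images vs "abnormal" images with bright spots.
    X, y = [], []
    for label in (0, 1):
        for _ in range(10):
            img = rng.normal(0.5, 0.05, (32, 32))
            if label:
                img[rng.integers(0, 32, 20), rng.integers(0, 32, 20)] += 0.8
            X.append(phase_features(img))
            y.append(label)
    idx = rng.permutation(len(X))
    X, y = np.array(X)[idx], np.array(y)[idx]
    clf = SVC(kernel="rbf").fit(X[:16], y[:16])
    print("held-out predictions:", clf.predict(X[16:]), "truth:", y[16:])
```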

  2. Creating New Medical Ontologies for Image Annotation A Case Study

    CERN Document Server

    Stanescu, Liana; Brezovan, Marius; Mihai, Cristian Gabriel

    2012-01-01

    Creating New Medical Ontologies for Image Annotation focuses on the problem of automatic annotation of medical images, which the authors solve in an original manner. All the steps of this process are described in detail with algorithms, experiments and results, and the original algorithms proposed by the authors are compared with other similar, efficient algorithms. In addition, the authors treat the problem of creating ontologies automatically, starting from the Medical Subject Headings (MeSH). They present several efficient and relevant annotation models, as well as the basics of the annotation model used by the proposed system, Cross Media Relevance Models. Based on a text query, the system retrieves the images that contain objects described by the keywords.

  3. Flexible medical image management using service-oriented architecture.

    Science.gov (United States)

    Shaham, Oded; Melament, Alex; Barak-Corren, Yuval; Kostirev, Igor; Shmueli, Noam; Peres, Yardena

    2012-01-01

    Management of medical images increasingly involves the need for integration with a variety of information systems. To address this need, we developed Content Management Offering (CMO), a platform for medical image management supporting interoperability through compliance with standards. CMO is based on the principles of service-oriented architecture, implemented with emphasis on three areas: clarity of business process definition, consolidation of service configuration management, and system scalability. Owing to the flexibility of this platform, a small team is able to accommodate requirements of customers varying in scale and in business needs. We describe two deployments of CMO, highlighting the platform's value to customers. CMO represents a flexible approach to medical image management, which can be applied to a variety of information technology challenges in healthcare and life sciences organizations.

  4. Comment on "Perspectives of medical X-ray imaging"

    CERN Document Server

    Taibi, A; Tuffanelli, A; Gambaccini, M

    2002-01-01

    In the paper 'Perspectives of medical X-ray imaging' (Nucl. Instr. and Meth. A 466 (2001) 99) the authors infer, from simple approximations, that the use of a HOPG monochromator has no advantage in mammography compared to existing systems. We show that in order to compare the imaging properties of different X-ray sources it is necessary to evaluate the spectra after attenuation by the tissue to be imaged. Indeed, quasi-monochromatic X-ray sources have the potential to enhance image contrast and to reduce patient dose.

  5. Method for Surface Scanning in Medical Imaging and Related Apparatus

    DEFF Research Database (Denmark)

    2015-01-01

    A method and apparatus for surface scanning in medical imaging is provided. The surface scanning apparatus comprises an image source, a first optical fiber bundle comprising first optical fibers having proximal ends and distal ends, and a first optical coupler for coupling an image from the image source into the proximal ends of the first optical fibers, wherein the first optical coupler comprises a plurality of lens elements including a first lens element and a second lens element, each of the plurality of lens elements comprising a primary surface facing a distal end of the first optical coupler, and a secondary surface facing a proximal end of the first optical coupler.

  6. Plane Wave Medical Ultrasound Imaging Using Adaptive Beamforming

    DEFF Research Database (Denmark)

    Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

    2008-01-01

    In this paper, the adaptive minimum variance (MV) beamformer is applied to medical ultrasound imaging. The significant resolution and contrast gain provided by the MV beamformer introduces the possibility of plane wave (PW) ultrasound imaging. Data are obtained using Field II and a 7 MHz, 128-element, linear array transducer with lambda/2-spacing. MV is compared to the conventional delay-and-sum (DS) beamformer with Boxcar and Hanning weights. Furthermore, the PW images are compared to a conventional ultrasound image obtained from a linear scan sequence...
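
    As a point of reference for the conventional baseline mentioned above, the fragment below is a toy delay-and-sum beamformer for plane-wave data (not the MV beamformer, and not Field II); the geometry, sampling rate and data are illustrative.

```python
# A toy sketch of the conventional delay-and-sum (DS) baseline mentioned in
# the abstract (not the MV beamformer, and not Field II): for each image
# point, delay the channel signals by their round-trip time and sum them.
import numpy as np

def delay_and_sum(rf, element_x, point, c=1540.0, fs=40e6, weights=None):
    """rf: (n_elements, n_samples) plane-wave receive data.
    element_x: lateral element positions [m]; point: (x, z) image point [m]."""
    x, z = point
    n_elem, n_samp = rf.shape
    weights = np.ones(n_elem) if weights is None else weights   # Boxcar by default
    # Plane-wave transmit: transmit delay = z/c; receive delay = distance/c.
    rx = np.sqrt((element_x - x) ** 2 + z ** 2) / c
    delays = np.clip(((z / c + rx) * fs).astype(int), 0, n_samp - 1)
    return np.sum(weights * rf[np.arange(n_elem), delays])

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    element_x = (np.arange(128) - 63.5) * 0.1e-3       # 128 elements, 0.1 mm pitch
    rf = rng.normal(0, 0.01, (128, 4000))              # placeholder channel data
    print("beamformed sample:", delay_and_sum(rf, element_x, (0.0, 0.02)))
```

    The MV beamformer replaces the fixed Boxcar or Hanning apodization weights with data-dependent weights computed from the spatial covariance of the channel data, which is the source of the resolution and contrast gain noted in the abstract.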

  7. Medical Image distribution and visualization in a hospital using CORBA.

    Science.gov (United States)

    Moreno, Ramon Alfredo; do Santos, Marcelo; Bertozzo, Nivaldo; de Sa Rebelo, Marina; Furuie, Sergio S; Gutierrez, Marco A

    2008-01-01

    This work presents the solution adopted by the Heart Institute (InCor) of Sao Paulo for medical image distribution and visualization over the hospital's intranet as part of its PACS. A CORBA-based image server was developed to distribute DICOM images across the hospital together with the image reports. The adopted solution decouples the server implementation from the client, which allows the same solution to be reused at different implementation sites. Currently, the PACS system is being used in two different hospitals, each with three environments: development, prototype and production.

  8. RootGraph: a graphic optimization tool for automated image analysis of plant roots.

    Science.gov (United States)

    Cai, Jinhai; Zeng, Zhanghui; Connor, Jason N; Huang, Chun Yuan; Melino, Vanessa; Kumar, Pankaj; Miklavcic, Stanley J

    2015-11-01

    This paper outlines a numerical scheme for accurate, detailed, and high-throughput image analysis of plant roots. In contrast to existing root image analysis tools that focus on root system-average traits, a novel, fully automated and robust approach for the detailed characterization of root traits, based on a graph optimization process is presented. The scheme, firstly, distinguishes primary roots from lateral roots and, secondly, quantifies a broad spectrum of root traits for each identified primary and lateral root. Thirdly, it associates lateral roots and their properties with the specific primary root from which the laterals emerge. The performance of this approach was evaluated through comparisons with other automated and semi-automated software solutions as well as against results based on manual measurements. The comparisons and subsequent application of the algorithm to an array of experimental data demonstrate that this method outperforms existing methods in terms of accuracy, robustness, and the ability to process root images under high-throughput conditions.

  9. Automated measurement of parameters related to the deformities of lower limbs based on x-rays images.

    Science.gov (United States)

    Wojciechowski, Wadim; Molka, Adrian; Tabor, Zbisław

    2016-03-01

    Measurement of the deformation of the lower limbs in current standard full-limb X-ray images presents significant challenges to radiologists and orthopedists. The precision of these measurements is degraded by inexact positioning of the leg during image acquisition, by the difficulty of selecting reliable anatomical landmarks in projective X-ray images, and by the inevitable errors of manual measurement. The influence of the random errors arising from the last two factors can be reduced if an automated measurement method is used instead of a manual one. In this paper, a framework for the automated measurement of various metric and angular quantities used in the description of lower extremity deformation in full-limb frontal X-ray images is described. The results of automated measurements are compared with manual measurements. These results demonstrate that an automated method can be a valuable alternative to manual measurement.

  10. Automated 3D-Objectdocumentation on the Base of an Image Set

    Directory of Open Access Journals (Sweden)

    Sebastian Vetter

    2011-12-01

    Full Text Available Digital stereo-photogrammetry allows automatic evaluation of the spatial dimensions and surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarity of stereoscopic image pairs, correlation techniques provide measurements of corresponding image points with subpixel precision. With the help of an automated point search algorithm, identical points across the image set are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as for defining the relation between neighbouring stereo models. Using suitable filter strategies, incorrect points are removed and the relative orientation of each stereo model can be determined automatically. With the help of 3D reference points or distances on the object, or a defined camera base distance, the stereo model is oriented absolutely. An adapted expansion and matching algorithm makes it possible to scan the object surface automatically; the result is a three-dimensional point cloud whose resolution depends on image quality. By integrating the iterative closest point (ICP) algorithm, these partial point clouds are fitted into a total point cloud, so that 3D reference points are not strictly necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. Texturing can be performed automatically using the images that were used for scanning the object surface; it is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full-frame sensor, high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation via the resolution of the images. The
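
    The photogrammetric pipeline itself is not reproduced here; the fragment below is a minimal sketch of the point-to-point ICP step mentioned in the abstract, aligning two synthetic point clouds with nearest-neighbour matching and an SVD-based rigid transform (SciPy is assumed; all data are synthetic).

```python
# A minimal point-to-point ICP sketch (not the authors' implementation):
# iteratively match nearest neighbours between two point clouds and solve
# for the rigid transform with an SVD-based least-squares step.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=20):
    tree = cKDTree(dst)
    current = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)          # nearest-neighbour correspondences
        R, t = best_rigid_transform(current, dst[idx])
        current = current @ R.T + t
    return current

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    cloud = rng.uniform(0, 1, (200, 3))
    angle = np.deg2rad(10)
    Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
    moved = cloud @ Rz.T + np.array([0.05, -0.02, 0.03])
    aligned = icp(moved, cloud)
    print("mean residual after ICP:", np.linalg.norm(aligned - cloud, axis=1).mean())
```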

  11. An adaptive nonlocal means scheme for medical image denoising

    Science.gov (United States)

    Thaipanich, Tanaphol; Kuo, C.-C. Jay

    2010-03-01

    Medical images often consist of low-contrast objects corrupted by random noise arising in the image acquisition process. Thus, image denoising is one of the fundamental tasks required by medical imaging analysis. In this work, we investigate an adaptive denoising scheme based on the nonlocal (NL)-means algorithm for medical imaging applications. In contrast with the traditional NL-means algorithm, the proposed adaptive NL-means (ANL-means) denoising scheme has three unique features. First, it employs the singular value decomposition (SVD) method and the K-means clustering (K-means) technique for robust classification of blocks in noisy images. Second, the local window is adaptively adjusted to match the local property of a block. Finally, a rotated block matching algorithm is adopted for better similarity matching. Experimental results for both additive white Gaussian noise (AWGN) and Rician noise are given to demonstrate the superior performance of the proposed ANL denoising technique over various image denoising benchmarks in terms of both PSNR and perceptual quality.
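
    For comparison with the adaptive scheme described above, the fragment below is a brief sketch of baseline, non-adaptive NL-means denoising using scikit-image; the patch sizes and filtering strength are illustrative, not the paper's settings.

```python
# A brief sketch of baseline (non-adaptive) NL-means denoising with
# scikit-image, for comparison with the adaptive scheme described above.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(6)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 0.6                      # a low-contrast square "object"
noisy = clean + rng.normal(0, 0.08, clean.shape)

sigma = estimate_sigma(noisy)                  # noise level estimate
denoised = denoise_nl_means(noisy, patch_size=5, patch_distance=6,
                            h=0.8 * sigma, sigma=sigma, fast_mode=True)
print("residual noise std before/after:",
      round(float(np.std(noisy - clean)), 3),
      round(float(np.std(denoised - clean)), 3))
```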

  12. MIRMAID: A Content Management System for Medical Image Analysis Research.

    Science.gov (United States)

    Korfiatis, Panagiotis D; Kline, Timothy L; Blezek, Daniel J; Langer, Steve G; Ryan, William J; Erickson, Bradley J

    2015-01-01

    Today, a typical clinical study can involve thousands of participants, with imaging data acquired over several time points across multiple institutions. The additional associated information (metadata) accompanying these data can cause data management to be a study-hindering bottleneck. Consistent data management is crucial for large-scale modern clinical imaging research studies. If the study is to be used for regulatory submissions, such systems must be able to meet regulatory compliance requirements for systems that manage clinical image trials, including protecting patient privacy. Our aim was to develop a system to address these needs by leveraging the capabilities of an open-source content management system (CMS) that has a highly configurable workflow; has a single interface that can store, manage, and retrieve imaging-based studies; and can handle the requirement for data auditing and project management. We developed a Web-accessible CMS for medical images called Medical Imaging Research Management and Associated Information Database (MIRMAID). From its inception, MIRMAID was developed to be highly flexible and to meet the needs of diverse studies. It fulfills the need for a complete system for medical imaging research management.

  13. Automated construction of arterial and venous trees in retinal images.

    Science.gov (United States)

    Hu, Qiao; Abràmoff, Michael D; Garvin, Mona K

    2015-10-01

    While many approaches exist to segment retinal vessels in fundus photographs, only a limited number focus on the construction and disambiguation of arterial and venous trees. Previous approaches are local and/or greedy in nature, making them susceptible to errors or limiting their applicability to large vessels. We propose a more global framework to generate arteriovenous trees in retinal images, given a vessel segmentation. In particular, our approach consists of three stages. The first stage is to generate an overconnected vessel network, named the vessel potential connectivity map (VPCM), consisting of vessel segments and the potential connectivity between them. The second stage is to disambiguate the VPCM into multiple anatomical trees, using a graph-based metaheuristic algorithm. The third stage is to classify these trees into arterial or venous (A/V) trees. We evaluated our approach with a ground truth built based on a public database, showing a pixel-wise classification accuracy of 88.15% using a manual vessel segmentation as input, and 86.11% using an automatic vessel segmentation as input.

  14. Scanner-based image quality measurement system for automated analysis of EP output

    Science.gov (United States)

    Kipman, Yair; Mehta, Prashant; Johnson, Kate

    2003-12-01

    Inspection of electrophotographic print cartridge quality and compatibility requires analysis of hundreds of pages on a wide population of printers and copiers. Although print quality inspection is often achieved through the use of anchor prints and densitometry, more comprehensive analysis and quantitative data is desired for performance tracking, benchmarking and failure mode analysis. Image quality measurement systems range in price and performance, image capture paths and levels of automation. In order to address the requirements of a specific application, careful consideration was made to print volume, budgetary limits, and the scope of the desired image quality measurements. A flatbed scanner-based image quality measurement system was selected to support high throughput, maximal automation, and sufficient flexibility for both measurement methods and image sampling rates. Using an automatic document feeder (ADF) for sample management, a half ream of prints can be measured automatically without operator intervention. The system includes optical character recognition (OCR) for automatic determination of target type for measurement suite selection. This capability also enables measurement of mixed stacks of targets since each sample is identified prior to measurement. In addition, OCR is used to read toner ID, machine ID, print count, and other pertinent information regarding the printing conditions and environment. This data is saved to a data file along with the measurement results for complete test documentation. Measurement methods were developed to replace current methods of visual inspection and densitometry. The features that were being analyzed visually could be addressed via standard measurement algorithms. Measurement of density proved to be less simple since the scanner is not a densitometer and anything short of an excellent estimation would be meaningless. In order to address the measurement of density, a transfer curve was built to translate the

  15. Establishing advanced practice for medical imaging in New Zealand

    Energy Technology Data Exchange (ETDEWEB)

    Yielder, Jill, E-mail: j.yielder@auckland.ac.nz [University of Auckland, Auckland (New Zealand); Young, Adrienne; Park, Shelley; Coleman, Karen [University of Otago, Wellington (New Zealand); University of Auckland, Auckland (New Zealand)

    2014-02-15

    Introduction: This article presents the outcome and recommendations following the second stage of a role development project conducted on behalf of the New Zealand Institute of Medical Radiation Technology (NZIMRT). The study sought to support the development of profiles and criteria that may be used to formulate Advanced Scopes of Practice for the profession. It commenced in 2011, following on from initial research that occurred between 2005 and 2008 investigating role development and a possible career structure for medical radiation technologists (MRTs) in New Zealand (NZ). Methods: The study sought to support the development of profiles and criteria that could be used to develop Advanced Scopes of Practice for the profession through inviting 12 specialist medical imaging groups in NZ to participate in a survey. Results: Findings showed strong agreement on potential profiles and on generic criteria within them; however, there was less agreement on specific skills criteria within specialist areas. Conclusions: The authors recommend that one Advanced Scope of Practice be developed for Medical Imaging, with the establishment of generic and specialist criteria. Systems for approval of the overall criteria package for any individual Advanced Practitioner (AP) profile, audit and continuing professional development requirements need to be established by the Medical Radiation Technologists Board (MRTB) to meet the local needs of clinical departments. It is further recommended that the NZIMRT and MRTB promote and support the need for an AP pathway for medical imaging in NZ.

  16. Secure public cloud platform for medical images sharing.

    Science.gov (United States)

    Pan, Wei; Coatrieux, Gouenou; Bouslimi, Dalel; Prigent, Nicolas

    2015-01-01

    Cloud computing promises medical imaging services offering large storage and computing capabilities for limited costs. In this data outsourcing framework, one of the greatest issues to deal with is data security. To do so, we propose to secure a public cloud platform devoted to medical image sharing by defining and deploying a security policy so as to control various security mechanisms. This policy stands on a risk assessment we conducted so as to identify security objectives with a special interest for digital content protection. These objectives are addressed by means of different security mechanisms like access and usage control policy, partial-encryption and watermarking.

  17. Technical challenges for the construction of a medical image database

    Science.gov (United States)

    Ring, Francis J.; Ammer, Kurt; Wiecek, Boguslaw; Plassmann, Peter; Jones, Carl D.; Jung, Anna; Murawski, Piotr

    2005-10-01

    Infrared thermal imaging was first made available to medicine in the early 1960s. Despite a large number of research publications on the clinical application of the technique, the images have been largely qualitative. This is in part due to the imaging technology itself, and to the problem of data exchange between different medical users with different hardware. An Anglo-Polish collaborative study was set up in 2001 to identify and resolve the sources of error and problems in medical thermal imaging. Standardisation of patient preparation, imaging hardware, image capture and analysis has been studied and developed by the group. A network of specialist centres in Europe is planned, working to establish the first digital reference atlas of quantifiable images of the normal healthy human body. Further processing techniques can then be used to classify abnormalities found in disease states. The follow-up of drug treatment has been successfully monitored in clinical trials with quantitative thermal imaging. The collection of normal reference images is in progress. This paper specifies the areas found to be sources of unwanted variation, and the protocols to overcome them.

  18. Software Agent with Reinforcement Learning Approach for Medical Image Segmentation

    Institute of Scientific and Technical Information of China (English)

    Mahsa Chitsaz; Chaw Seng Woo

    2011-01-01

    Many image segmentation solutions are problem-specific. Medical images have very similar grey levels and texture among the objects of interest. Therefore, medical image segmentation still requires improvement, although research has been conducted over the last few decades. We design a self-learning framework to extract several objects of interest simultaneously from Computed Tomography (CT) images. Our segmentation method has a learning phase that is based on a reinforcement learning (RL) system. Each RL agent works on a particular sub-image of an input image to find a suitable value for each object in it. The RL system is defined by states, actions and rewards. We defined a set of actions for each state in the sub-image, and a reward function computes the reward for each action of the RL agent. Finally, the valuable information obtained from discovering all states of the objects of interest is stored in a Q-matrix, and the final result can be applied to the segmentation of similar images. The experimental results for cranial CT images demonstrated segmentation accuracy above 95%.
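
    The paper's specific state, action and reward definitions are not reproduced here; the fragment below is only a generic tabular Q-learning sketch in which an agent adjusts an intensity threshold for a sub-image and is rewarded by the overlap of the resulting mask with a reference mask (all quantities are illustrative).

```python
# A generic tabular Q-learning sketch (the paper's exact state/action/reward
# design is not reproduced): an agent nudges an intensity threshold for a
# sub-image and is rewarded by the overlap of the resulting mask with a
# reference mask. All quantities here are illustrative.
import numpy as np

rng = np.random.default_rng(7)
sub_image = rng.normal(0.3, 0.05, (32, 32))
sub_image[8:24, 8:24] += 0.4                       # object of interest
reference = sub_image > 0.5                        # stand-in for ground truth

thresholds = np.linspace(0.2, 0.8, 13)             # states = threshold levels
actions = (-1, 0, 1)                               # lower / keep / raise threshold
Q = np.zeros((len(thresholds), len(actions)))
alpha, gamma, epsilon = 0.3, 0.9, 0.2

def reward(state):
    mask = sub_image > thresholds[state]
    inter = np.logical_and(mask, reference).sum()
    union = np.logical_or(mask, reference).sum()
    return inter / union if union else 0.0         # Jaccard overlap

state = 0
for _ in range(2000):
    a = rng.integers(3) if rng.random() < epsilon else int(np.argmax(Q[state]))
    nxt = int(np.clip(state + actions[a], 0, len(thresholds) - 1))
    r = reward(nxt)
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt

best_state = int(np.argmax(Q.max(axis=1)))
print("learned threshold:", round(float(thresholds[best_state]), 2))
```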

  19. Medical Image Classification Using Genetic Optimized Elman Network

    Directory of Open Access Journals (Sweden)

    T. Baranidharan

    2012-01-01

    Full Text Available Problem statement: Advancements in the internet and digital imaging have resulted in huge databases of images. Most current web search engines retrieve images using only metadata, which generates many irrelevant results. A Content-Based Image Retrieval (CBIR) system applies computer vision techniques to the image retrieval problem, that is, to searching for and retrieving the right digital image in a huge database using a query image. CBIR finds extensive application in the field of medicine, as it helps medical professionals in diagnosis and treatment planning. Approach: Various methods have been proposed for CBIR using low-level image features such as histogram, color, texture and shape. Similarly, classification algorithms such as the Naive Bayes classifier, Support Vector Machines, decision tree induction algorithms and neural network based classifiers have been studied extensively. In this study it is proposed to extract global features using the Hilbert Transform (HT), to select features based on the correlation of the extracted vectors with the class label, and to propose an enhanced Elman neural network, the Genetic Algorithm Optimized Elman (GAOE) neural network. Results and Conclusion: The proposed feature extraction method and classification algorithm were tested on a dataset consisting of 180 medical images, where a classification accuracy of 92.22% was obtained.

  20. Directive Antenna for Ultrawideband Medical Imaging Systems

    Directory of Open Access Journals (Sweden)

    Amin M. Abbosh

    2008-01-01

    Full Text Available A compact and directive ultrawideband antenna is presented in this paper. The antenna is in the form of an antipodal tapered slot with resistive layers to improve its directivity and to reduce its backward radiation. The antenna operates over the frequency band from 3.1 GHz to more than 10.6 GHz. It features directive radiation with a peak gain between 4 dBi and 11 dBi across the specified band. The time domain performance of the antenna shows negligible distortion, making it suitable for imaging systems that require a very short pulse for transmission and reception. The effect of the multilayer human body on the performance of the antenna is also studied, using a breast model for this purpose. It is shown that the antenna has a fidelity factor of more than 90% when it operates in free space, whereas the fidelity factor decreases as the signal propagates inside the human body. However, even inside the human body, the fidelity factor is still larger than 70%, revealing the possibility of using the proposed antenna in biomedical imaging systems.

  1. Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness

    Science.gov (United States)

    Singh, Preetpal

    Canada is home to thousands of freshwater lakes and rivers. Apart from being sources of infinite natural beauty, rivers and lakes are an important source of water, food and transportation. The northern regions of Canada experience extreme cold temperatures in the winter, resulting in a freeze-up of regional lakes and rivers. Frozen lakes and rivers offer unique opportunities in terms of wildlife harvesting and winter transportation. Ice roads built on frozen rivers and lakes are vital supply lines for industrial operations in the remote north. Monitoring the ice freeze-up and break-up dates annually can help predict regional climatic changes. Lake ice impacts a variety of physical, ecological and economic processes. The construction and maintenance of a winter road can cost millions of dollars annually, and a good understanding of ice mechanics is required to build an ice road and deem it safe. A crucial factor in calculating the load bearing capacity of ice sheets is the thickness of the ice. Construction costs are mainly attributed to producing and maintaining a specific thickness and density of ice that can support different loads. Climate change is leading to warmer temperatures, causing the ice to thin faster. At a certain point, a winter road may not be thick enough to support travel and transportation. There is considerable interest in monitoring winter road conditions given the high construction and maintenance costs involved. Remote sensing technologies such as Synthetic Aperture Radar have been successfully utilized to study the extent of ice covers and record freeze-up and break-up dates of ice on lakes and rivers across the north. Ice road builders often use ultrasound equipment to measure ice thickness. However, an automated monitoring system based on machine vision and image processing technology that can measure lake ice thickness has not previously been developed. Machine vision and image processing techniques have successfully been used in manufacturing

  2. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    Science.gov (United States)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

    Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib-cage and spine. The problem is addressed in a model based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.

  3. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter

    2006-01-01

    Full Text Available The use of image processing systems is widespread in production processes, and hardware solutions and cameras are available for nearly every application. One important challenge for image processing systems is the development and selection of appropriate algorithms and software solutions in order to realise ambitious quality control for production processes. This article describes the development of innovative software that combines features for automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to execute the classification task according to the combined features. This software is one crucial element in the automation of a manually operated production process.
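
    The article's actual combined feature set is not specified in this abstract; the sketch below only illustrates the general approach of feeding combined surface features to an SVM classifier with scikit-learn (the features and data are invented for the example).

```python
# A small sketch of SVM-based defect classification on combined features
# (the article's actual feature set is not specified here; the features and
# data below are illustrative only).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def surface_features(patch):
    # Combine simple intensity and gradient statistics into one vector.
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.std(),
                     np.abs(gx).mean(), np.abs(gy).mean()])

rng = np.random.default_rng(8)
X, y = [], []
for label in (0, 1):                               # 0 = good surface, 1 = defect
    for _ in range(40):
        patch = rng.normal(0.5, 0.02, (32, 32))
        if label:
            patch[12:20, 12:20] -= 0.3             # synthetic scratch/dent
        X.append(surface_features(patch))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[::2], y[::2])                            # train on every other sample
print("test accuracy:", clf.score(X[1::2], y[1::2]))
```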

  4. Automated Registration of Images from Multiple Bands of Resourcesat-2 Liss-4 camera

    OpenAIRE

    2014-01-01

    Continuous and automated co-registration and geo-tagging of images from the multiple bands of the Liss-4 camera is one of the interesting challenges of Resourcesat-2 data processing. The three arrays of the Liss-4 camera are physically separated in the focal plane in the along-track direction. Thus, the same line on the ground is imaged by the extreme bands with a time interval of as much as 2.1 seconds. During this time, the satellite would have covered a distance of about 14 km on the ground and the e...

  5. Imaging requirements for medical applications of additive manufacturing.

    Science.gov (United States)

    Huotilainen, Eero; Paloheimo, Markku; Salmi, Mika; Paloheimo, Kaija-Stiina; Björkstrand, Roy; Tuomi, Jukka; Markkola, Antti; Mäkitie, Antti

    2014-02-01

    Additive manufacturing (AM), formerly known as rapid prototyping, is steadily shifting its focus from industrial prototyping to medical applications as AM processes, bioadaptive materials, and medical imaging technologies develop, and the benefits of the techniques gain wider knowledge among clinicians. This article gives an overview of the main requirements for medical imaging affected by needs of AM, as well as provides a brief literature review from existing clinical cases concentrating especially on the kind of radiology they required. As an example application, a pair of CT images of the facial skull base was turned into 3D models in order to illustrate the significance of suitable imaging parameters. Additionally, the model was printed into a preoperative medical model with a popular AM device. Successful clinical cases of AM are recognized to rely heavily on efficient collaboration between various disciplines - notably operating surgeons, radiologists, and engineers. The single main requirement separating tangible model creation from traditional imaging objectives such as diagnostics and preoperative planning is the increased need for anatomical accuracy in all three spatial dimensions, but depending on the application, other specific requirements may be present as well. This article essentially intends to narrow the potential communication gap between radiologists and engineers who work with projects involving AM by showcasing the overlap between the two disciplines.

  6. Infrared medical image visualization and anomalies analysis method

    Science.gov (United States)

    Gong, Jing; Chen, Zhong; Fan, Jing; Yan, Liang

    2015-12-01

    Infrared medical examination detects disease by scanning the overall temperature of the human body and identifying temperature anomalies of the corresponding body parts with infrared thermal equipment. In order to obtain the temperature anomalies and diseased parts, an Infrared Medical Image Visualization and Anomalies Analysis Method is proposed in this paper. Firstly, the original data are visualized as a single-channel gray image; secondly, the normalized gray image is turned into a pseudo-color image; thirdly, background segmentation is applied to filter out background noise; fourthly, the anomalous pixels are clustered with a breadth-first search algorithm; lastly, the regions of temperature anomalies or diseased parts are marked. Testing shows that this is an efficient and accurate way to intuitively analyze and diagnose diseased body parts through temperature anomalies.
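
    As a compact illustration of the listed steps (not the authors' code), the sketch below normalizes a synthetic temperature map to a gray image, applies a simple pseudo-color mapping, and clusters above-threshold pixels with a breadth-first search over 4-connected neighbours; the threshold and data are illustrative.

```python
# A compact sketch of the described steps (illustrative, not the authors'
# code): normalize a temperature map to a gray image, apply a pseudo-color
# map, and cluster above-threshold "anomaly" pixels with a breadth-first
# search over 4-connected neighbours.
from collections import deque
import numpy as np

def pseudo_color(temps):
    gray = (temps - temps.min()) / (np.ptp(temps) + 1e-9)   # single-channel gray
    # Simple blue-to-red pseudo-color mapping (illustrative).
    r, g, b = gray, 1.0 - np.abs(gray - 0.5) * 2.0, 1.0 - gray
    return np.stack([r, g, b], axis=-1)

def bfs_clusters(mask):
    """Label 4-connected clusters of True pixels via breadth-first search."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        queue = deque([start])
        labels[start] = current
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels, current

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    temps = 36.5 + rng.normal(0, 0.1, (64, 64))
    temps[10:18, 40:48] += 1.5                     # a simulated hot spot
    rgb = pseudo_color(temps)
    labels, n = bfs_clusters(temps > 37.5)
    print("anomalous regions found:", n, "pseudo-color image shape:", rgb.shape)
```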

  7. Novel medical imaging technologies for disease diagnosis and treatment

    Science.gov (United States)

    Olego, Diego

    2009-03-01

    New clinical approaches for disease diagnosis, treatment and monitoring will rely on the ability to simultaneously obtain anatomical, functional and biological information. Medical imaging technologies, in combination with targeted contrast agents, play a key role in delivering structural and functional information about conditions and pathologies in cardiology, oncology and neurology, among other fields, with ever increasing temporal and spatial resolution. This presentation reviews the clinical motivations and physics challenges in ongoing developments of new medical imaging techniques and the associated contrast agents. Examples to be discussed are: the enrichment of computed tomography with spectral sensitivity for the diagnosis of vulnerable sclerotic plaque; time-of-flight positron emission tomography for improved resolution in the metabolic characterization of pathologies; magnetic particle imaging, a novel imaging modality based on in-vivo measurement of the local concentration of iron oxide nanoparticles, for blood perfusion measurement with better sensitivity, spatial resolution and 3D real-time acquisition; and focused ultrasound for therapy delivery.

  8. Spatial Information Based Medical Image Registration using Mutual Information

    Directory of Open Access Journals (Sweden)

    Benzheng Wei

    2011-06-01

    Full Text Available Image registration is a valuable technique for medical diagnosis and treatment. To address the limitations of registration based on maximization of mutual information alone, a new hybrid method for multimodality medical image registration based on the mutual information of spatial information is proposed. The new measure combines mutual information, spatial information and feature characteristics. Edge points, obtained from a morphological gradient detector, are used as features. Feature characteristics such as location, edge strength and orientation are taken into account to compute a joint probability distribution of corresponding edge points in the two images. Mutual information based on this distribution is minimized to find the best alignment parameters. Finally, the translation parameters are calculated using a modified Particle Swarm Optimization (MPSO) algorithm. The experimental results demonstrate the effectiveness of the proposed registration scheme.
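
    The hybrid spatial/feature measure proposed in the paper is not reproduced here; the fragment below only sketches how plain mutual information between two images can be computed from their joint intensity histogram, which is the quantity the hybrid measure builds on (bin count and data are illustrative).

```python
# A short sketch of how mutual information (MI) between two images can be
# computed from their joint intensity histogram; the hybrid spatial/feature
# measure in the abstract is not reproduced here.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                    # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    fixed = rng.normal(0, 1, (64, 64))
    aligned = 2.0 * fixed + rng.normal(0, 0.1, fixed.shape)   # related "modality"
    shuffled = rng.permutation(aligned.ravel()).reshape(64, 64)
    print("MI aligned   :", round(mutual_information(fixed, aligned), 3))
    print("MI misaligned:", round(mutual_information(fixed, shuffled), 3))
```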

  9. Automated reconstruction of standing posture panoramas from multi-sector long limb x-ray images

    Science.gov (United States)

    Miller, Linzey; Trier, Caroline; Ben-Zikri, Yehuda K.; Linte, Cristian A.

    2016-03-01

    Due to the digital X-ray imaging system's limited field of view, several individual sector images are required to capture the posture of an individual in standing position. These images are then "stitched together" to reconstruct the standing posture. We have created an image processing application that automates the stitching, therefore minimizing user input, optimizing workflow, and reducing human error. The application begins with pre-processing the input images by removing artifacts, filtering out isolated noisy regions, and amplifying a seamless bone edge. The resulting binary images are then registered together using a rigid-body intensity based registration algorithm. The identified registration transformations are then used to map the original sector images into the panorama image. Our method focuses primarily on the use of the anatomical content of the images to generate the panoramas as opposed to using external markers employed to aid with the alignment process. Currently, results show robust edge detection prior to registration and we have tested our approach by comparing the resulting automatically-stitched panoramas to the manually stitched panoramas in terms of registration parameters, target registration error of homologous markers, and the homogeneity of the digitally subtracted automatically- and manually-stitched images using 26 patient datasets.

  10. Medical image of the week: splenic infarction

    Directory of Open Access Journals (Sweden)

    Casey DJ

    2016-08-01

    Full Text Available No abstract available. Article truncated after 150 words. A 52-year-old Hispanic woman with a past medical history significant for Type 1 Diabetes Mellitus, hypertension, and rheumatoid arthritis presented with left upper quadrant pain for one day. Her review of systems was positive for bloating, severe epigastric and left upper quadrant tenderness that radiated to the back and left shoulder, nausea with non-bilious emesis, and diarrhea for one day prior to admission. Physical exam revealed only epigastric and left upper quadrant tenderness to light palpation without rebound or guarding. Computed tomography of the abdomen demonstrated a new acute or subacute splenic infarct with no clear evidence of an embolic source in the abdomen or pelvis (Figure 1). Echocardiogram with bubble study and contrast did not demonstrate a cardiac mass, vegetation, valve or wall motion abnormalities, or evidence of patent foramen ovale. Splenic infarction should be suspected when patients present with sharp, acute left upper quadrant pain ...

  11. Medical image of the week: bronchopleural fistula

    Directory of Open Access Journals (Sweden)

    Desai H

    2016-09-01

    Full Text Available No abstract available. Article truncated at 150 words. A 58-year-old man with a past medical history significant for chronic smoking and seizures was referred to the emergency room after a chest x-ray, done by his primary care physician for evaluation of cough, showed a hydropneumothorax. His symptoms included a dry cough for the past 2 months without fever, chills or other associated symptoms. He had not had any thoracic procedures performed and had no past history of recurrent infections. He was hemodynamically stable. Physical examination was significant only for decreased breath sounds on the right side of the chest. Thoracic CT with contrast was performed, which showed complete collapse of the right lower lobe, near complete collapse of the right middle lobe, as well as an air-fluid level. There was a suspicion of a direct communication between the bronchi and the pleural space at the posterior lateral margin of the collapsed right lower lobe (Figure 1). The presence of a bronchopleural fistula (BPF) was confirmed ...

  12. Medical image of the week: arachnoid cyst

    Directory of Open Access Journals (Sweden)

    Erisman M

    2016-10-01

    Full Text Available No abstract available. Article truncated at 150 words. A 40 year-old woman with adult attention deficit hyperactivity disorder and bipolar I disorder presented with altered mental status. Per her family, she had been non-verbal and sedated for the past three days, with reduced oral intake and confusion. Per her husband, she had episodes of diarrhea and abdominal discomfort. She was on multiple medications including ramelteon 8 mg nightly, atomoxetine 40 mg daily, hydroxyzine 25 mg twice daily, bupropion 75 mg twice daily and risperidone 2 mg daily, with the recent addition of lithium ER 1200 mg daily started one month prior to presentation with unknown adherence. Upon arrival, vital signs were within normal limits. Physical exam revealed an overweight Caucasian woman with a significant coarse tremor visible at rest, restlessness and diaphoresis. Neurological examination was limited by patient hesitancy; however, it did not demonstrate focal deficits except for altered consciousness with a Glasgow Coma Scale of 10. Notable laboratory findings were Na+ 134 mEq/L, K+ 3.2 mEq/L, and ...

  13. Medical image of the week: acute epiglottitis

    Directory of Open Access Journals (Sweden)

    Desai C

    2013-09-01

    Full Text Available No abstract available. Article truncated after 150 words. A 24 year old man without a significant past medical history presented with a 3 day history of sore throat, fever and a less than 24 hour history of pain with breathing and swallowing secretions. Due to stridor, he was intubated in the emergency department with a 6.0 mm endotracheal tube using fiberoptic nasopharyngoscopy, and was successfully extubated five days later. Initially he was treated with broad spectrum antibiotics and methylprednisolone 40 mg intravenously every 12 hours. A CT scan of the neck did not show an epiglottic abscess. Acute epiglottitis in adults appears to have a rising incidence, with an associated mortality of 7%, and is related to Haemophilus influenzae type b as well as other miscellaneous pathogens, mechanical injury or smoke inhalation. Risk factors associated with obstruction are drooling, rapid onset of symptoms, evidence of abscess formation and a history of diabetes mellitus. Epiglottic abscess is an infrequent sequela of acute …

  14. Medical image of the week: panlobular emphysema

    Directory of Open Access Journals (Sweden)

    Mathur A

    2015-08-01

    Full Text Available No abstract available. Article truncated after 150 words. A 60 year old female, non-smoker with a past medical history of chronic rhinosinusitis with nasal polyps presented with an eight year history of productive cough and dyspnea. Previous treatment with inhaled corticosteroids, courses of systemic corticosteroids and antibiotics provided modest improvement in her symptoms. Pulmonary function testing revealed a severe obstructive ventilatory defect without significant bronchodilator response and reduced diffusing capacity (DLCO). Chest x-ray surprisingly revealed lower lobe predominant emphysematous changes (Figure 1). Alpha-1-antitrypsin level was within normal range at 137 mg/dL. Panlobular emphysema represents permanent destruction of the entire acinus distal to the respiratory bronchioles and is more likely to affect the lower lobes compared to centrilobular emphysema (1). Panlobular emphysema is associated with alpha-1-antitrypsin deficiency, intravenous drug abuse specifically with methylphenidate and methadone, Swyer-James syndrome, and obliterative bronchiolitis. Whether this pattern is seen as part of normal senescence in non-smoking individuals remains controversial (2). Panlobular emphysema may ...

  15. Medical image of the week: lung entrapment

    Directory of Open Access Journals (Sweden)

    Natt B

    2016-07-01

    Full Text Available No abstract available. Article truncated at 150 words. A 74-year-old woman with a history of breast cancer 10 years ago, treated with lumpectomy and radiation, presented for evaluation of shortness of breath. She was diagnosed with a left sided pleural effusion which was recurrent, requiring multiple thoracenteses. There was increased pleural fludeoxyglucose (FDG) uptake on PET-CT indicative of recurrent metastatic disease. She underwent medical pleuroscopy, since the pleural effusion analysis did not reveal malignant cells although suspicion was high, and a tunneled pleural catheter was placed as adjuvant chemotherapy was initiated. Figure 1 shows a pleuroscopic view of the collapsed left lung and the effusion in the left hemithorax. Figure 2 shows extensive involvement of the visceral pleura with metastatic disease preventing complete lung inflation. Figure 3 shows persistent pneumothorax ex vacuo despite pleural catheter placement, confirming the diagnosis of entrapment. Incomplete lung inflation can be due to pleural disease, endobronchial lesions or chronic atelectasis. Lung entrapment and trapped lung ...

  16. Medical image of the week: phytobezoar

    Directory of Open Access Journals (Sweden)

    Hansra A

    2016-01-01

    Full Text Available No abstract available. Article truncated after 150 words. A 10-year-old boy with a history of non-verbal autism presented to the hospital with symptoms of chronic malnourishment. He had recently been started on a specific carbohydrate-rich diet, as outlined by a popular mainstream nutrition book, with hopes of improvement in adverse behavior. Prior to the start of this new diet, he consistently demonstrated an increased craving for food and was described as having an insatiable appetite. Though he was relatively non-verbal at baseline, he intermittently voiced his hunger and associated abdominal pain. A supine abdominal radiograph obtained immediately after admission showed moderate gastric distension with a significant stool burden. Follow-up radiographs of the abdomen were obtained after two days of medical attempts to clear out the gastrointestinal system. The supine frontal radiograph at this time showed a massively distended stomach with a mottled appearance and considerable mass effect on the transverse colon (Figure 1). The interpreting pediatric radiologist ...

  17. Sfm_georef: Automating image measurement of ground control points for SfM-based projects

    Science.gov (United States)

    James, Mike R.

    2016-04-01

    Deriving accurate DEM and orthomosaic image products from UAV surveys generally involves the use of multiple ground control points (GCPs). Here, we demonstrate the automated collection of GCP image measurements for SfM-MVS processed projects, using sfm_georef software (James & Robson, 2012; http://www.lancaster.ac.uk/staff/jamesm/software/sfm_georef.htm). Sfm_georef was originally written to provide geo-referencing procedures for SfM-MVS projects. It has now been upgraded with a 3-D patch-based matching routine suitable for automating GCP image measurement in both aerial and ground-based (oblique) projects, with the aim of reducing the time required for accurate geo-referencing. Sfm_georef is compatible with a range of SfM-MVS software and imports the relevant files that describe the image network, including camera models and tie points. 3-D survey measurements of ground control are then provided, either for natural features or artificial targets distributed over the project area. Automated GCP image measurement is manually initiated through identifying a GCP position in an image by mouse click; the GCP is then represented by a square planar patch in 3-D, textured from the image and oriented parallel to the local topographic surface (as defined by the 3-D positions of nearby tie points). Other images are then automatically examined by projecting the patch into the images (to account for differences in viewing geometry) and carrying out a sub-pixel normalised cross-correlation search in the local area. With two or more observations of a GCP, its 3-D co-ordinates are then derived by ray intersection. With the 3-D positions of three or more GCPs identified, an initial geo-referencing transform can be derived to relate the SfM-MVS co-ordinate system to that of the GCPs. Then, if GCPs are symmetric and identical, image texture from one representative GCP can be used to search automatically for all others throughout the image set. Finally, the GCP observations can be
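
    As an illustration of the matching step described above, the sketch below implements a plain normalised cross-correlation search for a GCP template patch in another image using only NumPy. It is not part of sfm_georef; the array sizes and the brute-force search are illustrative assumptions, and the projection of the 3-D patch into each view is omitted.

      # Illustrative sketch (not sfm_georef itself): locate a GCP template patch
      # in a new image via normalised cross-correlation.
      import numpy as np

      def ncc_search(image, template):
          """Return ((row, col), score) of the best normalised cross-correlation match."""
          ih, iw = image.shape
          th, tw = template.shape
          t = template - template.mean()
          t_norm = np.sqrt((t ** 2).sum())
          best_score, best_rc = -2.0, (0, 0)
          for r in range(ih - th + 1):
              for c in range(iw - tw + 1):
                  w = image[r:r + th, c:c + tw]
                  wz = w - w.mean()
                  denom = np.sqrt((wz ** 2).sum()) * t_norm
                  if denom == 0:
                      continue
                  score = (wz * t).sum() / denom
                  if score > best_score:
                      best_score, best_rc = score, (r, c)
          return best_rc, best_score

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          img = rng.random((60, 60))
          tmpl = img[20:31, 35:46].copy()          # pretend this is the GCP patch
          (row, col), score = ncc_search(img, tmpl)
          print(row, col, round(score, 3))         # expect 20 35 1.0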

  18. Automated classification of atherosclerotic plaque from magnetic resonance images using predictive models.

    Science.gov (United States)

    Anderson, Russell W; Stomberg, Christopher; Hahm, Charles W; Mani, Venkatesh; Samber, Daniel D; Itskovich, Vitalii V; Valera-Guallar, Laura; Fallon, John T; Nedanov, Pavel B; Huizenga, Joel; Fayad, Zahi A

    2007-01-01

    The information contained within multicontrast magnetic resonance images (MRI) promises to improve tissue classification accuracy, once appropriately analyzed. Predictive models capture relationships empirically, from known outcomes, thereby combining pattern classification with experience. In this study, we examine the applicability of predictive modeling for atherosclerotic plaque component classification of multicontrast ex vivo MR images using stained, histopathological sections as ground truth. Ten multicontrast images from seven human coronary artery specimens were obtained on a 9.4 T imaging system using multicontrast-weighted fast spin-echo (T1-, proton density-, and T2-weighted) imaging with 39-μm isotropic voxel size. Following initial data transformations, predictive modeling focused on automating the identification of each specimen's plaque, lipid, and media. The outputs of these three models were used to calculate statistics such as total plaque burden and the ratio of hard plaque (fibrous tissue) to lipid. Both logistic regression and an artificial neural network model (Relevant Input Processor Network, RIPNet) were used for predictive modeling. When compared against segmentation resulting from cluster analysis, the RIPNet models performed between 25 and 30% better in absolute terms. This translates to a 50% higher true positive rate over given levels of false positives. This work indicates that it is feasible to build an automated system of plaque detection using MRI and data mining.

  19. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets.

    Science.gov (United States)

    Bhikha, Charita; Andreasen, Arne; Christensen, Erik I; Letts, Robyn F R; Pantanowitz, Adam; Rubin, David M; Thomsen, Jesper S; Zhai, Xiao-Yue

    2015-01-01

    An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron.

  20. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets

    Directory of Open Access Journals (Sweden)

    Charita Bhikha

    2015-01-01

    Full Text Available An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron.

  1. NeuriteTracer: a novel ImageJ plugin for automated quantification of neurite outgrowth.

    Science.gov (United States)

    Pool, Madeline; Thiemann, Joachim; Bar-Or, Amit; Fournier, Alyson E

    2008-02-15

    In vitro assays to measure neuronal growth are a fundamental tool used by many neurobiologists studying neuronal development and regeneration. The quantification of these assays requires accurate measurements of neurite length and neuronal cell numbers in neuronal cultures. Generally, these measurements are obtained through labor-intensive manual or semi-manual tracing of images. To automate these measurements, we have written NeuriteTracer, a neurite tracing plugin for the freely available image-processing program ImageJ. The plugin analyzes fluorescence microscopy images of neurites and nuclei of dissociated cultured neurons. Given user-defined thresholds, the plugin counts neuronal nuclei, and traces and measures neurite length. We find that NeuriteTracer accurately measures neurite outgrowth from cerebellar, DRG and hippocampal neurons. Values obtained by NeuriteTracer correlate strongly with those obtained by semi-manual tracing with NeuronJ and by using a sophisticated analysis package, MetaXpress. We reveal the utility of NeuriteTracer by demonstrating its ability to detect the neurite outgrowth promoting capacity of the rho kinase inhibitor Y-27632. Our plugin is an attractive alternative to existing tracing tools because it is fully automated and ready for use within a freely accessible imaging program.
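
    The sketch below illustrates, in Python with scikit-image, the kind of measurements the plugin automates: counting thresholded nuclei and summing skeleton pixels of the thresholded neurite channel as a length estimate. It is not the ImageJ plugin itself, and the thresholds, minimum object size, and pixel calibration are hypothetical.

      # Illustrative scikit-image sketch of NeuriteTracer-style measurements.
      import numpy as np
      from skimage.measure import label
      from skimage.morphology import skeletonize, remove_small_objects

      def count_nuclei(nuclei_img, threshold, min_size=50):
          mask = remove_small_objects(nuclei_img > threshold, min_size=min_size)
          return int(label(mask).max())                 # number of connected nuclei

      def neurite_length(neurite_img, threshold, um_per_pixel=1.0):
          mask = neurite_img > threshold
          skeleton = skeletonize(mask)                  # 1-pixel-wide traces
          return float(skeleton.sum()) * um_per_pixel   # crude length estimate

      # Usage (with hypothetical arrays `dapi` and `tubulin` loaded elsewhere):
      # n = count_nuclei(dapi, threshold=40)
      # total_um = neurite_length(tubulin, threshold=25, um_per_pixel=0.65)
      # print(total_um / max(n, 1), "um of neurite per neuron")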

  2. Medical image of the week: azygous lobe

    Directory of Open Access Journals (Sweden)

    Bhupinder Natt

    2013-12-01

    Full Text Available No abstract available. Article truncated at 150 words. A 59-year-old man underwent chest radiography for evaluation of fever and cough. Imaging showed an accessory azygos lobe. An azygos lobe is found in 1% of anatomic specimens and forms when the right posterior cardinal vein, one of the precursors of the azygos vein, fails to migrate over the apex of the lung (1). Instead, the vein penetrates the lung, carrying along pleural layers that entrap a portion of the right upper lobe. The vein appears to run within the lung, but is actually surrounded by both parietal and visceral pleura. The azygos fissure therefore consists of four layers of pleura, two parietal layers and two visceral layers, which wrap around the vein giving the appearance of a tadpole. Apart from being an interesting incidental radiological finding, it is of limited clinical importance except that its presence should be recognized during thoracoscopic procedures. This patient was found to have …

  3. Medical Image Digitalization and Archiving Information System in Serbia

    Science.gov (United States)

    Sajfert, Vjekoslav; Milićević, Vladimir; Jevtić, Vesna; Jovanović, Višnja

    2007-04-01

    The paper gives a brief presentation of a digital imaging and archiving system (PACS), with a survey of the main characteristics and development of such systems worldwide, as well as the possibilities and areas for its implementation in our conditions. We propose an approach for the digitalization and archiving of both existing and future medical images, in accordance with our capacity to implement international standards.

  4. Watermarking of ultrasound medical images in teleradiology using compressed watermark.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    The open accessibility of Internet-based medical images in teleradiology exposes them to security threats due to nonsecured communication media. This paper discusses spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as the watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression. The performances of these techniques were compared based on bit reduction and compression ratio. LZW performed best and was used to develop the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with and found to be better than that of other watermarking schemes.
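
    A minimal sketch of the generic recipe described above is given below: hash the ROI, losslessly compress the payload, and overwrite the least significant bits of RONI pixels. It is not the TDARWMI implementation; zlib stands in for LZW and SHA-256 is an assumed hash choice.

      # Minimal sketch of ROI-hash watermarking with LSB embedding in the RONI.
      import hashlib
      import zlib
      import numpy as np

      def embed(image, roi_slice, roni_slice):
          """Return a watermarked copy of `image` and the number of embedded bits."""
          watermarked = image.copy()
          roi_bytes = watermarked[roi_slice].tobytes()
          payload = zlib.compress(roi_bytes + hashlib.sha256(roi_bytes).digest())
          bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
          roni = watermarked[roni_slice].flatten()
          if bits.size > roni.size:
              raise ValueError("RONI too small for the compressed watermark")
          roni[:bits.size] = (roni[:bits.size] & 0xFE) | bits   # overwrite LSBs only
          watermarked[roni_slice] = roni.reshape(watermarked[roni_slice].shape)
          return watermarked, int(bits.size)

      img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
      wm, nbits = embed(img, np.s_[64:96, 64:96], np.s_[200:256, :])
      print(nbits, "bits embedded; max pixel change:",
            int(np.abs(wm.astype(int) - img.astype(int)).max()))   # change is at most 1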

  5. Medical image segmentation using level set and watershed transform

    Science.gov (United States)

    Zhu, Fuping; Tian, Jie

    2003-07-01

    One of the most popular level set algorithms is the so-called fast marching method. In this paper, a medical image segmentation algorithm is proposed based on the combination of the fast marching method and the watershed transformation. First, the original image is smoothed using a nonlinear diffusion filter; then the smoothed image is over-segmented by the watershed algorithm. Last, the image is segmented automatically using the modified fast marching method. Because over-segmentation is introduced, only the arrival time from the seeded point to the boundary of its region needs to be calculated. For the other pixels inside the region of the seeded point, the arrival time is not calculated because of the region's homogeneity, so the algorithm's speed improves greatly. Moreover, the speed function is redefined based on the statistical similarity degree of nearby regions. We also extend the algorithm to 3D and segment medical image series. Experiments show that the algorithm can quickly and accurately obtain segmentation results for medical images.
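
    The sketch below illustrates the region-level idea with scikit-image: over-segment with a watershed and then grow a segmentation from a seed by absorbing watershed regions of similar mean intensity. Gaussian smoothing stands in for the nonlinear diffusion filter and simple region merging stands in for the modified fast marching; both substitutions, and all parameter values, are assumptions.

      # Simplified sketch: watershed over-segmentation plus region-level growth from a seed.
      import numpy as np
      from skimage.filters import gaussian, sobel
      from skimage.segmentation import watershed

      def segment_from_seed(image, seed_rc, markers=200, tol=0.1):
          smooth = gaussian(image, sigma=2)                    # denoise
          labels = watershed(sobel(smooth), markers=markers)   # over-segmentation
          seed_label = labels[seed_rc]
          means = {l: smooth[labels == l].mean() for l in np.unique(labels)}
          keep = {l for l, m in means.items()
                  if abs(m - means[seed_label]) < tol}         # merge similar regions
          return np.isin(labels, list(keep))

      # Usage with a synthetic bright disc on a dark background:
      yy, xx = np.mgrid[:128, :128]
      img = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)
      img += 0.05 * np.random.default_rng(1).standard_normal(img.shape)
      mask = segment_from_seed(img, (64, 64))
      print(mask.sum(), "pixels segmented")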

  6. Digital Topology and Geometry in Medical Imaging: A Survey.

    Science.gov (United States)

    Saha, Punam K; Strand, Robin; Borgefors, Gunilla

    2015-09-01

    Digital topology and geometry refers to the use of topologic and geometric properties and features for images defined in digital grids. Such methods have been widely used in many medical imaging applications, including image segmentation, visualization, manipulation, interpolation, registration, surface-tracking, object representation, correction, quantitative morphometry etc. Digital topology and geometry play important roles in medical imaging research by enriching the scope of target outcomes and by adding strong theoretical foundations with enhanced stability, fidelity, and efficiency. This paper presents a comprehensive yet compact survey on results, principles, and insights of methods related to digital topology and geometry with strong emphasis on understanding their roles in various medical imaging applications. Specifically, this paper reviews methods related to distance analysis and path propagation, connectivity, surface-tracking, image segmentation, boundary and centerline detection, topology preservation and local topological properties, skeletonization, and object representation, correction, and quantitative morphometry. A common thread among the topics reviewed in this paper is that their theory and algorithms use the principle of digital path connectivity, path propagation, and neighborhood analysis.

  7. Adapting smartphones for low-cost optical medical imaging

    Science.gov (United States)

    Pratavieira, Sebastião.; Vollet-Filho, José D.; Carbinatto, Fernanda M.; Blanco, Kate; Inada, Natalia M.; Bagnato, Vanderlei S.; Kurachi, Cristina

    2015-06-01

    Optical images have been used in several medical situations to improve diagnosis of lesions or to monitor treatments. However, most systems employ expensive scientific (CCD or CMOS) cameras and need computers to display and save the images, usually resulting in a high final cost for the system. Additionally, operating this sort of apparatus usually becomes more complex, requiring increasingly specialized technical knowledge from the operator. Currently, the number of people using smartphone-like devices with built-in high-quality cameras is increasing, which might allow using such devices as an efficient, lower-cost, portable imaging system for medical applications. Thus, we aim to develop methods of adapting those devices to optical medical imaging techniques, such as fluorescence. In particular, smartphone covers were adapted to connect a smartphone-like device to widefield fluorescence imaging systems. These systems were used to detect lesions in different tissues, such as cervix and mouth/throat mucosa, and to monitor ALA-induced protoporphyrin-IX formation for photodynamic treatment of cervical intraepithelial neoplasia. This approach may contribute significantly to low-cost, portable, and simple clinical optical image collection.

  8. Medical image of the week: scleroderma

    Directory of Open Access Journals (Sweden)

    Arteaga VA

    2015-04-01

    Full Text Available No abstract available. Article truncated at 150 words. A 56-year-old man presents with cough and dyspnea. Pertinent history is significant for scleroderma. A complete blood count and differential count were unremarkable. A chest radiograph was obtained (Figure 1). Based on the overall imaging and clinical history, the chest x-ray findings are highly suggestive of interstitial lung disease, likely related to scleroderma, and a recommendation for high-resolution chest CT was made. Progressive systemic sclerosis (scleroderma) is an autoimmune connective tissue disease that affects 30-50 year old women more often than men and is characterized by the overproduction of collagen, which can lead to fibrosis of the lungs and skin and may also affect visceral organs (1). In the hands, vasculitis and Raynaud's phenomenon may lead to distal tapering (2). Although acro-osteolysis, or distal tuft resorption, can be seen in a wide variety of disorders, it may be present in up to 80% of patients with scleroderma. High-resolution chest CT is ...

  9. Medical Imaging for Understanding Sleep Regulation

    Science.gov (United States)

    Wong, Kenneth

    2011-10-01

    Sleep is essential for the health of the nervous system. Lack of sleep has a profound negative effect on cognitive ability and task performance. During sustained military operations, soldiers often suffer from decreased quality and quantity of sleep, increasing their susceptibility to neurological problems and limiting their ability to perform the challenging mental tasks that their missions require. In the civilian sector, inadequate sleep and overt sleep pathology are becoming more common, with many detrimental impacts. There is a strong need for new, in vivo studies of human brains during sleep, particularly the initial descent from wakefulness. Our research team is investigating sleep using a combination of magnetic resonance imaging (MRI), positron emission tomography (PET), and electroencephalography (EEG). High resolution MRI combined with PET enables localization of biochemical processes (e.g., metabolism) to anatomical structures. MRI methods can also be used to examine functional connectivity among brain regions. Neural networks are dynamically reordered during different sleep stages, reflecting the disconnect with the waking world and the essential yet unconscious brain activity that occurs during sleep. In collaboration with Linda Larson-Prior, Washington University; Alpay Ozcan, Virginia Tech; Seong Mun, Virginia Tech; and Zang-Hee Cho, Gachon University.

  10. An automated method for comparing motion artifacts in cine four-dimensional computed tomography images.

    Science.gov (United States)

    Cui, Guoqiang; Jew, Brian; Hong, Julian C; Johnston, Eric W; Loo, Billy W; Maxim, Peter G

    2012-11-08

    The aim of this study is to develop an automated method to objectively compare motion artifacts in two four-dimensional computed tomography (4D CT) image sets, and identify the one that would appear to human observers with fewer or smaller artifacts. Our proposed method is based on the difference of the normalized correlation coefficients between edge slices at couch transitions, which we hypothesize may be a suitable metric to identify motion artifacts. We evaluated our method using ten pairs of 4D CT image sets that showed subtle differences in artifacts between images in a pair, which were identifiable by human observers. One set of 4D CT images was sorted using breathing traces in which our clinically implemented 4D CT sorting software miscalculated the respiratory phase, which, as expected, led to artifacts in the images. The other set of images consisted of the same images; however, these were sorted using the same breathing traces but with corrected phases. Next, we calculated the normalized correlation coefficients between edge slices at all couch transitions for all respiratory phases in both image sets to evaluate for motion artifacts. For nine image set pairs, our method identified the 4D CT sets sorted using the breathing traces with the corrected respiratory phase to result in images with fewer or smaller artifacts, whereas for one image pair, no difference was noted. Two observers independently assessed the accuracy of our method. Both observers identified 9 image sets that were sorted using the breathing traces with corrected respiratory phase as having fewer or smaller artifacts. In summary, using ten pairs of 4D CT image sets, we have demonstrated proof of principle that our method is able to replicate the results of two human observers in identifying the image set with fewer or smaller artifacts.
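
    A sketch of the metric is shown below (not the authors' code): the normalised correlation coefficient between the two edge slices at each couch transition, averaged over transitions, with the higher-scoring sorting taken as the one with fewer or smaller artifacts. The input arrays are hypothetical.

      # Sketch of the edge-slice correlation metric for comparing two 4D CT sortings.
      import numpy as np

      def edge_slice_ncc(slice_a, slice_b):
          a = slice_a.astype(float).ravel() - slice_a.mean()
          b = slice_b.astype(float).ravel() - slice_b.mean()
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      def compare_sortings(transitions_1, transitions_2):
          """Each argument is a list of (upper_edge_slice, lower_edge_slice) pairs."""
          ncc1 = np.mean([edge_slice_ncc(a, b) for a, b in transitions_1])
          ncc2 = np.mean([edge_slice_ncc(a, b) for a, b in transitions_2])
          return ("sorting 1" if ncc1 > ncc2 else "sorting 2"), float(ncc1 - ncc2)

      # Toy check: identical edge slices correlate perfectly.
      s = np.random.default_rng(0).random((64, 64))
      print(round(edge_slice_ncc(s, s), 3))   # 1.0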

  11. A Review of Fully Automated Techniques for Brain Tumor Detection From MR Images

    Directory of Open Access Journals (Sweden)

    Anjum Hayat Gondal

    2013-02-01

    Full Text Available Radiologists use medical images to diagnose diseases precisely. However, identification of brain tumors from medical images is still a critical and complicated job for a radiologist. Brain tumor identification from magnetic resonance imaging (MRI) consists of several stages. Segmentation is known to be an essential step in medical image classification and analysis. Performing brain MR image segmentation manually is a difficult task, as there are several challenges associated with it. Radiologists and medical experts spend considerable time manually segmenting brain MR images, and this is a non-repeatable task. In view of this, automatic segmentation of brain MR images is needed to correctly segment White Matter (WM), Gray Matter (GM), and Cerebrospinal Fluid (CSF) tissues of the brain in a shorter span of time. Accurate segmentation is crucial, as otherwise the wrong identification of disease can lead to severe consequences. Taking into account the aforesaid challenges, this research is focused on highlighting the strengths and limitations of the earlier proposed segmentation techniques discussed in the contemporary literature. Besides summarizing the literature, the paper also provides a critical evaluation of the surveyed literature, which reveals new facets of research. However, articulating a new technique is beyond the scope of this paper.

  12. Automated segmentation of cardiac visceral fat in low-dose non-contrast chest CT images

    Science.gov (United States)

    Xie, Yiting; Liang, Mingzhu; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.

    2015-03-01

    Cardiac visceral fat was segmented from low-dose non-contrast chest CT images using a fully automated method. Cardiac visceral fat is defined as the fatty tissues surrounding the heart region, enclosed by the lungs and posterior to the sternum. It is measured by constraining the heart region with an Anatomy Label Map that contains robust segmentations of the lungs and other major organs and estimating the fatty tissue within this region. The algorithm was evaluated on 124 low-dose and 223 standard-dose non-contrast chest CT scans from two public datasets. Based on visual inspection, 343 cases had good cardiac visceral fat segmentation. For quantitative evaluation, manual markings of cardiac visceral fat regions were made in 3 image slices for 45 low-dose scans and the Dice similarity coefficient (DSC) was computed. The automated algorithm achieved an average DSC of 0.93. Cardiac visceral fat volume (CVFV), heart region volume (HRV) and their ratio were computed for each case. The correlation between cardiac visceral fat measurement and coronary artery and aortic calcification was also evaluated. Results indicated the automated algorithm for measuring cardiac visceral fat volume may be an alternative method to the traditional manual assessment of thoracic region fat content in the assessment of cardiovascular disease risk.
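
    For reference, the Dice similarity coefficient used in the evaluation above can be computed from binary masks as sketched below; the example masks are synthetic.

      # Dice similarity coefficient between an automated and a manual binary mask.
      import numpy as np

      def dice(auto_mask, manual_mask):
          auto = auto_mask.astype(bool)
          manual = manual_mask.astype(bool)
          intersection = np.logical_and(auto, manual).sum()
          total = auto.sum() + manual.sum()
          return 2.0 * intersection / total if total else 1.0

      a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
      m = np.zeros((10, 10), bool); m[3:9, 3:9] = True
      print(round(dice(a, m), 2))   # overlap 25, total 72 -> 0.69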

  13. Single-cell bacteria growth monitoring by automated DEP-facilitated image analysis.

    Science.gov (United States)

    Peitz, Ingmar; van Leeuwen, Rien

    2010-11-07

    Growth monitoring is the method of choice in many assays measuring the presence or properties of pathogens, e.g. in diagnostics and food quality. Established methods, relying on culturing large numbers of bacteria, are rather time-consuming, while in healthcare, time is often crucial. Several new approaches have been published, mostly aiming at assaying growth or other properties of a small number of bacteria. However, no method so far readily achieves single-cell resolution with a convenient and easy-to-handle setup that offers the possibility of automation and high throughput. We demonstrate these benefits in this study by employing dielectrophoretic capturing of bacteria in microfluidic electrode structures, optical detection, and automated bacteria identification and counting with image analysis algorithms. For a proof-of-principle experiment we chose an antibiotic susceptibility test with Escherichia coli and polymyxin B. Growth monitoring is demonstrated on single cells and the impact of the antibiotic on the growth rate is shown. The minimum inhibitory concentration, as a standard diagnostic parameter, is derived from a dose-response plot. This report is the basis for further integration of image analysis code into device control. Ultimately, an automated and parallelized setup may be created, using an optical microscanner and many of the electrode structures simultaneously. Sufficient data for a sound statistical evaluation and a confirmation of the initial findings can then be generated in a single experiment.

  14. NOTE: Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction

    Science.gov (United States)

    Holan, Scott H.; Viator, John A.

    2008-06-01

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin, and the detection of single cancer cells in blood samples.
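
    The sketch below shows level-independent universal-threshold wavelet denoising of a 1-D signal with PyWavelets. The paper uses a maximal overlap DWT to handle non-radix-2 lengths; an ordinary decimated DWT, a db4 wavelet, and a MAD-based noise estimate are used here purely as illustrative assumptions.

      # Universal-threshold wavelet denoising sketch (PyWavelets).
      import numpy as np
      import pywt

      def wavelet_denoise(signal, wavelet="db4", level=4):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate (MAD)
          thresh = sigma * np.sqrt(2 * np.log(len(signal)))  # universal threshold
          denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                    for c in coeffs[1:]]
          return pywt.waverec(denoised, wavelet)[: len(signal)]

      # Toy photoacoustic-like signal: a short pulse buried in noise.
      t = np.linspace(0, 1, 1024)
      clean = np.exp(-((t - 0.4) / 0.01) ** 2)
      noisy = clean + 0.3 * np.random.default_rng(2).standard_normal(t.size)
      print(round(np.abs(wavelet_denoise(noisy) - clean).mean(), 3))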

  15. Synthetic Aperture Sequential Beamformation applied to medical imaging

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian; Hansen, Jens Munk; Jensen, Jørgen Arendt

    2012-01-01

    Synthetic Aperture Sequential Beamforming (SASB) is applied to medical ultrasound imaging using a multi element convex array transducer. The main motivation for SASB is to apply synthetic aperture techniques without the need for storing RF-data for a number of elements and hereby devise a system...

  16. Spatio-Temporal Encoding in Medical Ultrasound Imaging

    DEFF Research Database (Denmark)

    Gran, Fredrik

    2005-01-01

    In this dissertation two methods for spatio-temporal encoding in medical ultrasound imaging are investigated. The first technique is based on a frequency division approach. Here, the available spectrum of the transducer is divided into a set of narrow bands. A waveform is designed for each band...

  17. Techniques and software architectures for medical visualisation and image processing

    NARCIS (Netherlands)

    Botha, C.P.

    2005-01-01

    This thesis presents a flexible software platform for medical visualisation and image processing, a technique for the segmentation of the shoulder skeleton from CT data and three techniques that make contributions to the field of direct volume rendering. Our primary goal was to investigate the use

  18. Science means business: medical imaging shows colour of money

    CERN Multimedia

    Macfie, Rebecca

    2007-01-01

    Doctors have used x-ray machines for 100 years, but they remain an imprecise and limited diagnostic tool. However, a team of Canterbury University researchers is aiming to revolutionise medical x-ray technology with high-precision colour imaging. (1.5 pages)

  19. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    Directory of Open Access Journals (Sweden)

    Muhammad Faisal Siddiqui

    Full Text Available A wide interest has been observed in the medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify the human brain magnetic resonance image (MRI) as normal or abnormal, to cater down the human error during identifying the diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. At last, an advanced classification technique based on LS-SVM is applied to brain MR image classification using reduced features. For improving the efficiency, LS-SVM is used with non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applied k-fold stratified cross validation to enhance the generalization of the system. The method was tested by 340 patients' benchmark datasets of T1-weighted and T2-weighted scans. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than the recent well-known methods, and it improves the efficiency by 71%, 3%, and 4% on feature extraction stage, feature reduction stage, and classification stage, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities

  20. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    Science.gov (United States)

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in the medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify the human brain magnetic resonance image (MRI) as normal or abnormal, to cater down the human error during identifying the diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. At last, an advanced classification technique based on LS-SVM is applied to brain MR image classification using reduced features. For improving the efficiency, LS-SVM is used with non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applied k-fold stratified cross validation to enhance the generalization of the system. The method was tested by 340 patients' benchmark datasets of T1-weighted and T2-weighted scans. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than the recent well-known methods, and it improves the efficiency by 71%, 3%, and 4% on feature extraction stage, feature reduction stage, and classification stage, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from the
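
    A hedged sketch of the DWT -> PCA -> RBF-kernel classifier pipeline is given below using PyWavelets and scikit-learn. A standard soft-margin SVC stands in for LS-SVM, and random synthetic "scans" stand in for the benchmark MRI datasets, so the score it prints is meaningless beyond demonstrating the pipeline.

      # DWT feature extraction -> PCA reduction -> RBF-kernel SVM with k-fold CV.
      import numpy as np
      import pywt
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def dwt_features(image, wavelet="haar", level=3):
          approx = pywt.wavedec2(image, wavelet, level=level)[0]
          return approx.ravel()                      # coarse approximation as features

      rng = np.random.default_rng(0)
      scans = rng.random((40, 64, 64))
      scans[20:] += 0.5                              # crude stand-in for "abnormal"
      labels = np.r_[np.zeros(20), np.ones(20)]

      X = np.array([dwt_features(s) for s in scans])
      model = make_pipeline(StandardScaler(), PCA(n_components=10),
                            SVC(kernel="rbf", C=10, gamma="scale"))
      print(cross_val_score(model, X, labels, cv=5).mean())   # k-fold validation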

  1. Communication software for physicians' workstations supporting medical imaging services

    Science.gov (United States)

    Orphanos, George; Kanellopoulos, Dimitris; Koubias, Stavros

    1993-09-01

    This paper describes a software communication architecture for medical imaging services. This work aims to provide the physician with the communication facilities to access and track a patient's record or to retrieve medical images from a remote database. The proposed architecture comprises a communication protocol and an application programming interface (API). The implemented protocol, namely the Telemedicine Network Services (TNS) protocol, has been designed in agreement with Open System Interconnection (OSI) upper-layer protocols that have already been standardized. Based on this concept, an OSI-like interface has been developed, capable of providing application services to the application developer and thus facilitating the writing of medical applications. The TNS protocol has been implemented on top of the TCP/IP communication protocols, by implementing OSI presentation and application services on top of the Transport Service Access Point (TSAP), which is provided by the socket abstraction on top of TCP.

  2. Plane-Wave Imaging Challenge in Medical Ultrasound

    DEFF Research Database (Denmark)

    Liebgott, Herve; Molares, Alfonso Rodriguez; Cervenansky, F.

    2016-01-01

    Plane-wave imaging enables very high frame rates, up to several thousand frames per second. Unfortunately, the lack of transmit focusing leads to reduced image quality, both in terms of resolution and contrast. Recently, numerous beamforming techniques have been proposed to compensate for this effect, but comparing the different methods is difficult due to the lack of appropriate tools. PICMUS, the Plane-Wave Imaging Challenge in Medical Ultrasound, aims to provide these tools. This paper describes the PICMUS challenge, its motivation, implementation, and metrics.

  3. Implementation of Synthetic Aperture Imaging in Medical Ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Kortbek, Jacob; Nikolov, Svetoslav

    2010-01-01

    The main advantage of medical ultrasound imaging is its real-time capability, which makes it possible to visualize dynamic structures in the human body. Real-time synthetic aperture imaging puts very high demands on the hardware, which currently cannot be met. A method for reducing the number of calculations while still retaining the many advantages of SA imaging is described. It consists of a dual-stage beamformer, where the first stage can be a simple fixed-focus analog beamformer and the second an ordinary digital ultrasound beamformer. The performance and constraints of the approach are described.

  4. Ultrasound introscopic image quantitative characteristics for medical diagnosis

    Science.gov (United States)

    Novoselets, Mikhail K.; Sarkisov, Sergey S.; Gridko, Alexander N.; Tcheban, Anatoliy K.

    1993-09-01

    Results on the computer-aided extraction of quantitative characteristics (QC) of ultrasound introscopic images for medical diagnosis are presented. Thyroid gland (TG) images of Chernobyl accident sufferers are considered. It is shown that TG diseases can be associated with certain values of selected QCs of the random echo distribution in the image. The possibility of using these QCs for TG disease recognition based on the calculated values is analyzed. The role of speckle noise elimination in TG diagnosis is also considered.

  5. 3D Medical Image Segmentation Based on Rough Set Theory

    Institute of Scientific and Technical Information of China (English)

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

    This paper presents a method that uses multiple types of expert knowledge together in 3D medical image segmentation based on rough set theory. The focus of this paper is how to approximate a ROI (region of interest) when there are multiple types of expert knowledge. Based on rough set theory, the image can be split into three regions: positive regions, negative regions, and boundary regions. With multiple types of knowledge, we refine the ROI as the intersection of all of the shapes expected from each single type of knowledge. Finally, we show the results of implementing a rough 3D image segmentation and visualization system.

  6. Automated Line Tracking of lambda-DNA for Single-Molecule Imaging

    CERN Document Server

    Guan, Juan; Granick, Steve

    2011-01-01

    We describe a straightforward, automated line tracking method to visualize within optical resolution the contour of linear macromolecules as they rearrange shape as a function of time by Brownian diffusion and under external fields such as electrophoresis. Three sequential stages of analysis underpin this method: first, "feature finding" to discriminate signal from noise; second, "line tracking" to approximate those shapes as lines; third, "temporal consistency check" to discriminate reasonable from unreasonable fitted conformations in the time domain. The automated nature of this data analysis makes it straightforward to accumulate vast quantities of data while excluding the unreliable parts of it. We implement the analysis on fluorescence images of lambda-DNA molecules in agarose gel to demonstrate its capability to produce large datasets for subsequent statistical analysis.

  7. Estimation of urinary stone composition by automated processing of CT images

    CERN Document Server

    Chevreau, Grégoire; Conort, Pierre; Renard-Penna, Raphaëlle; Mallet, Alain; Daudon, Michel; Mozer, Pierre; 10.1007/s00240-009-0195-3

    2009-01-01

    The objective of this article was to develop an automated tool for routine clinical practice to estimate urinary stone composition from CT images based on the density of all constituent voxels. A total of 118 stones for which the composition had been determined by infrared spectroscopy were placed in a helical CT scanner. A standard acquisition and low-dose and high-dose acquisitions were performed. All voxels constituting each stone were automatically selected. A dissimilarity index evaluating variations of density around each voxel was created in order to minimize partial volume effects: stone composition was established on the basis of the voxel density of homogeneous zones. Stone composition was determined in 52% of cases. Sensitivities for each compound were: uric acid, 65%; struvite, 19%; cystine, 78%; carbapatite, 33.5%; calcium oxalate dihydrate, 57%; calcium oxalate monohydrate, 66.5%; brushite, 75%. Low-dose acquisition did not lower the performance (P < 0.05). This entirely automated approach eliminat...

  8. Design strategy and implementation of the medical diagnostic image support system at two large military medical centers

    Science.gov (United States)

    Smith, Donald V.; Smith, Stan M.; Sauls, F.; Cawthon, Michael A.; Telepak, Robert J.

    1992-07-01

    The Medical Diagnostic Imaging Support (MDIS) system contract for federal medical treatment facilities was awarded to Loral/Siemens in the fall of 1991. This contract places "filmless" imaging in a variety of settings, from small clinics to large medical centers. The MDIS system approach is a "turn-key", performance-based specification driven by clinical requirements.

  9. Fully automated segmentation of left ventricle using dual dynamic programming in cardiac cine MR images

    Science.gov (United States)

    Jiang, Luan; Ling, Shan; Li, Qiang

    2016-03-01

    Cardiovascular diseases are becoming a leading cause of death all over the world. The cardiac function could be evaluated by global and regional parameters of left ventricle (LV) of the heart. The purpose of this study is to develop and evaluate a fully automated scheme for segmentation of LV in short axis cardiac cine MR images. Our fully automated method consists of three major steps, i.e., LV localization, LV segmentation at end-diastolic phase, and LV segmentation propagation to the other phases. First, the maximum intensity projection image along the time phases of the midventricular slice, located at the center of the image, was calculated to locate the region of interest of LV. Based on the mean intensity of the roughly segmented blood pool in the midventricular slice at each phase, end-diastolic (ED) and end-systolic (ES) phases were determined. Second, the endocardial and epicardial boundaries of LV of each slice at ED phase were synchronously delineated by use of a dual dynamic programming technique. The external costs of the endocardial and epicardial boundaries were defined with the gradient values obtained from the original and enhanced images, respectively. Finally, with the advantages of the continuity of the boundaries of LV across adjacent phases, we propagated the LV segmentation from the ED phase to the other phases by use of dual dynamic programming technique. The preliminary results on 9 clinical cardiac cine MR cases show that the proposed method can obtain accurate segmentation of LV based on subjective evaluation.
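
    The sketch below illustrates only the first step described above: a maximum intensity projection over the cardiac phases of the mid-ventricular slice, followed by picking the bright region closest to the image centre as the LV region of interest. The percentile threshold and the centre heuristic are assumptions, and the dual dynamic programming stages are not reproduced.

      # LV localization sketch: MIP along the time phases, then nearest bright region.
      import numpy as np
      from skimage.measure import label, regionprops

      def locate_lv(cine_slice):                 # cine_slice: (phases, H, W)
          mip = cine_slice.max(axis=0)           # MIP along the time phases
          mask = mip > np.percentile(mip, 90)    # keep the brightest 10% of pixels
          centre = np.array(mip.shape) / 2.0
          regions = regionprops(label(mask))
          best = min(regions,
                     key=lambda r: np.linalg.norm(np.array(r.centroid) - centre))
          return best.bbox                       # (min_row, min_col, max_row, max_col)

      # Usage (hypothetical cine array of shape (25, 256, 256)):
      # roi_bbox = locate_lv(cine)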

  10. A method for the automated detection of phishing websites through both site characteristics and image analysis

    Science.gov (United States)

    White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.

    2012-06-01

    Phishing website analysis is largely still a time-consuming manual process of discovering potential phishing sites, verifying if suspicious sites truly are malicious spoofs, and if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values, such as the number of images and links. We also capture a screen-shot of the rendered page image, compute a hash of the image, and use the Hamming distance between these image hashes as a form of visual comparison. We provide initial results that demonstrate the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot, and by performing some initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as the resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
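
    The screenshot-comparison idea can be sketched as below: an average hash of each rendered page image and the Hamming distance between hashes as a visual-similarity score. This is not the authors' implementation; Pillow is assumed for image I/O, and the file names and distance threshold are hypothetical.

      # Average-hash comparison of rendered page screenshots.
      import numpy as np
      from PIL import Image

      def average_hash(path, size=8):
          img = Image.open(path).convert("L").resize((size, size))
          pixels = np.asarray(img, dtype=float)
          return (pixels > pixels.mean()).ravel()        # 64-bit boolean hash

      def hamming(hash_a, hash_b):
          return int(np.count_nonzero(hash_a != hash_b))

      # d = hamming(average_hash("suspect_page.png"), average_hash("known_phish.png"))
      # print("likely visual clone" if d <= 5 else "visually different")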

  11. Managing complex processing of medical image sequences by program supervision techniques

    Science.gov (United States)

    Crubezy, Monica; Aubry, Florent; Moisan, Sabine; Chameroy, Virginie; Thonnat, Monique; Di Paola, Robert

    1997-05-01

    Our objective is to offer clinicians wider access to evolving medical image processing (MIP) techniques, which are crucial to improve the assessment and quantification of physiological processes but difficult to handle for non-specialists in MIP. Based on artificial intelligence techniques, our approach consists of the development of a knowledge-based program supervision system automating the management of MIP libraries. It comprises a library of programs, a knowledge base capturing the expertise about programs and data, and a supervision engine. It selects, organizes, and executes the appropriate MIP programs given a goal to achieve and a data set, with dynamic feedback based on the results obtained. It also advises users in the development of new procedures chaining MIP programs. We have experimented with the approach in an application of factor analysis of medical image sequences as a means of predicting the response of osteosarcoma to chemotherapy, with both MRI and NM dynamic image sequences. As a result, our program supervision system frees clinical end-users from performing tasks outside their competence, permitting them to concentrate on clinical issues. Therefore our approach enables better exploitation of the possibilities offered by MIP and higher quality results, both in terms of robustness and reliability.

  12. Open source tools for standardized privacy protection of medical images

    Science.gov (United States)

    Lien, Chung-Yueh; Onken, Michael; Eichelberg, Marco; Kao, Tsair; Hein, Andreas

    2011-03-01

    In addition to the primary care context, medical images are often useful for research projects and community healthcare networks, so-called "secondary use". Patient privacy becomes an issue in such scenarios since the disclosure of personal health information (PHI) has to be prevented in a sharing environment. In general, most PHIs should be completely removed from the images according to the respective privacy regulations, but some basic and alleviated data is usually required for accurate image interpretation. Our objective is to utilize and enhance these specifications in order to provide reliable software implementations for de- and re-identification of medical images suitable for online and offline delivery. DICOM (Digital Imaging and Communications in Medicine) images are de-identified by replacing PHI-specific information with values still being reasonable for imaging diagnosis and patient indexing. In this paper, this approach is evaluated based on a prototype implementation built on top of the open source framework DCMTK (DICOM Toolkit) utilizing standardized de- and re-identification mechanisms. A set of tools has been developed for DICOM de-identification that meets privacy requirements of an offline and online sharing environment and fully relies on standard-based methods.
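
    The sketch below illustrates the same de-identification idea with pydicom rather than the DCMTK-based tools the paper describes: PHI elements are replaced with values that remain reasonable for interpretation and indexing, and private tags are removed. The tag list and replacement values are assumptions, not the standardized profile.

      # Illustrative pydicom de-identification sketch (not the paper's DCMTK tools).
      import pydicom

      PHI_REPLACEMENTS = {
          "PatientName": "ANON^PATIENT",
          "PatientID": "ID0000",           # still usable for patient indexing
          "PatientBirthDate": "19000101",  # keeps a parseable, plausible value
          "InstitutionName": "REMOVED",
      }

      def deidentify(in_path, out_path):
          ds = pydicom.dcmread(in_path)
          for keyword, value in PHI_REPLACEMENTS.items():
              if keyword in ds:
                  setattr(ds, keyword, value)
          ds.remove_private_tags()         # drop vendor-specific private elements
          ds.save_as(out_path)

      # deidentify("study/slice001.dcm", "shared/slice001_deid.dcm")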

  13. Multimodal Medical Image Fusion by Adaptive Manifold Filter

    Directory of Open Access Journals (Sweden)

    Peng Geng

    2015-01-01

    Full Text Available Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Modified local contrast information is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is introduced to filter the source images, providing the low-frequency part of the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values obtained by the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and the edge-based similarity measure values obtained by the presented method are on average 13%, 33%, and 14% higher than those of the three methods for the six pairs of source images.
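
    A hedged sketch of the fusion rule is given below: for each pixel, keep the value from whichever source image has the larger local-contrast measure. A Gaussian filter stands in for the adaptive manifold filter and a local gradient-energy ratio stands in for the modified local contrast; both substitutions are assumptions.

      # Pixel-wise fusion by maximum local contrast (simplified stand-in measures).
      import numpy as np
      from scipy.ndimage import gaussian_filter, uniform_filter

      def local_contrast(image, sigma=2, win=5):
          base = gaussian_filter(image, sigma)              # low-frequency part
          gy, gx = np.gradient(image - base)                # high-frequency detail
          energy = uniform_filter(gx ** 2 + gy ** 2, win)   # local detail energy
          return energy / (np.abs(base) + 1e-6)             # contrast-like ratio

      def fuse(img_a, img_b):
          pick_a = local_contrast(img_a) >= local_contrast(img_b)
          return np.where(pick_a, img_a, img_b)

      # fused = fuse(mri_slice, ct_slice)   # hypothetical co-registered inputs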

  14. A review of medical image watermarking requirements for teleradiology.

    Science.gov (United States)

    Nyeem, Hussain; Boles, Wageeh; Boyd, Colin

    2013-04-01

    Teleradiology allows medical images to be transmitted over electronic networks for clinical interpretation and for improved healthcare access, delivery, and standards. Although such remote transmission of the images is raising various new and complex legal and ethical issues, including image retention and fraud, privacy, malpractice liability, etc., considerations of the security measures used in teleradiology remain unchanged. Addressing this problem naturally warrants investigations on the security measures for their relative functional limitations and for the scope of considering them further. In this paper, starting with various security and privacy standards, the security requirements of medical images as well as expected threats in teleradiology are reviewed. This will make it possible to determine the limitations of the conventional measures used against the expected threats. Furthermore, we thoroughly study the utilization of digital watermarking for teleradiology. Following the key attributes and roles of various watermarking parameters, justification for watermarking over conventional security measures is made in terms of their various objectives, properties, and requirements. We also outline the main objectives of medical image watermarking for teleradiology and provide recommendations on suitable watermarking techniques and their characterization. Finally, concluding remarks and directions for future research are presented.

  15. Gadgetron: an open source framework for medical image reconstruction.

    Science.gov (United States)

    Hansen, Michael Schacht; Sørensen, Thomas Sangild

    2013-06-01

    This work presents a new open source framework for medical image reconstruction called the "Gadgetron." The framework implements a flexible system for creating streaming data processing pipelines where data pass through a series of modules or "Gadgets" from raw data to reconstructed images. The data processing pipeline is configured dynamically at run-time based on an extensible markup language configuration description. The framework promotes reuse and sharing of reconstruction modules and new Gadgets can be added to the Gadgetron framework through a plugin-like architecture without recompiling the basic framework infrastructure. Gadgets are typically implemented in C/C++, but the framework includes wrapper Gadgets that allow the user to implement new modules in the Python scripting language for rapid prototyping. In addition to the streaming framework infrastructure, the Gadgetron comes with a set of dedicated toolboxes in shared libraries for medical image reconstruction. This includes generic toolboxes for data-parallel (e.g., GPU-based) execution of compute-intensive components. The basic framework architecture is independent of medical imaging modality, but this article focuses on its application to Cartesian and non-Cartesian parallel magnetic resonance imaging.

  16. An eSnake model for medical image segmentation

    Institute of Scientific and Technical Information of China (English)

    LÜ Hongyu; YUAN Kehong; BAO Shanglian; ZU Donglin; DUAN Chaijie

    2005-01-01

    A novel scheme of external force for detecting object boundaries in medical images based on Snakes (active contours) is introduced in this paper. In our new method, an electrostatic field on a template plane above the original image plane is designed to form the map of the external force. Compared with the Gradient Vector Flow (GVF) method, our approach has a clear physical meaning. It has a stronger ability to conform to boundary concavities, is simple to implement, and is reliable for shape segmentation. Additionally, our method has a larger capture range for the external force and is useful for medical image preprocessing in various applications. Finally, by adding a balloon force to the electrostatic field model, our Snake is able to represent long tube-like shapes or shapes with significant protrusions or bifurcations, and it can prevent the Snake from leaking through large gaps in image edges by using a two-stage segmentation technique introduced in this paper. Tests of our models show that our methods are robust and precise in medical image segmentation.

  17. Detailed interrogation of trypanosome cell biology via differential organelle staining and automated image analysis

    Directory of Open Access Journals (Sweden)

    Wheeler Richard J

    2012-01-01

    Full Text Available Abstract Background Many trypanosomatid protozoa are important human or animal pathogens. The well defined morphology and precisely choreographed division of trypanosomatid cells makes morphological analysis a powerful tool for analyzing the effect of mutations, chemical insults and changes between lifecycle stages. High-throughput image analysis of micrographs has the potential to accelerate collection of quantitative morphological data. Trypanosomatid cells have two large DNA-containing organelles, the kinetoplast (mitochondrial DNA) and nucleus, which provide useful markers for morphometric analysis; however they need to be accurately identified and often lie in close proximity. This presents a technical challenge. Accurate identification and quantitation of the DNA content of these organelles is a central requirement of any automated analysis method. Results We have developed a technique based on double staining of the DNA with a minor groove binding stain (4′,6-diamidino-2-phenylindole, DAPI) and a base pair intercalating stain (propidium iodide, PI, or SYBR green), followed by color deconvolution. This allows the identification of kinetoplast and nuclear DNA in the micrograph based on whether the organelle has DNA with a more A-T or G-C rich composition. Following unambiguous identification of the kinetoplasts and nuclei the resulting images are amenable to quantitative automated analysis of kinetoplast and nucleus number and DNA content. On this foundation we have developed a demonstrative analysis tool capable of measuring kinetoplast and nucleus DNA content, size and position and cell body shape, length and width automatically. Conclusions Our approach to DNA staining and automated quantitative analysis of trypanosomatid morphology accelerated analysis of trypanosomatid protozoa. We have validated this approach using Leishmania mexicana, Crithidia fasciculata and wild-type and mutant Trypanosoma brucei. Automated analysis of T. brucei

  18. Defining the medical imaging requirements for a rural health center

    CERN Document Server

    2017-01-01

    This book establishes the criteria for the type of medical imaging services that should be made available to rural health centers, providing professional rural hospital managers with information that makes their work more effective and efficient. It also offers valuable insights into government, non-governmental and religious organizations involved in the planning, establishment and operation of medical facilities in rural areas. Rural health centers are established to prevent patients from being forced to travel to distant urban medical facilities. To manage patients properly, rural health centers should be part of regional and more complete systems of medical health care installations in the country on the basis of a referral and counter-referral program, and thus, they should have the infrastructure needed to transport patients to urban hospitals when they need more complex health care. The coordination of all the activities is only possible if rural health centers are led by strong and dedicated managers....

  19. Study on scalable coding algorithm for medical image.

    Science.gov (United States)

    Hongxin, Chen; Zhengguang, Liu; Hongwei, Zhang

    2005-01-01

    According to the characteristics of medical images and the wavelet transform, a scalable coding algorithm is presented that can be used for image transmission over a network. The wavelet transform makes up for the weaknesses of the DCT transform and is similar to the human visual system. The second-generation wavelet transform, the lifting scheme, can be computed in integer form: it is divided into several steps, each of which is realized as a calculation from integer to integer. The lifting scheme simplifies the computing process and increases transform precision. According to the properties of the wavelet sub-bands, wavelet coefficients are organized in order of their importance, so the code stream is formed progressively and is scalable in resolution. Experimental results show that the algorithm can be used effectively for medical image compression and is suitable for remote browsing.
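
    As an illustration of the integer-to-integer lifting idea, the sketch below implements one level of the LeGall 5/3 lifting steps (a filter commonly used for lossless wavelet coding) with periodic boundary handling; it is not the algorithm of the cited paper, and the boundary treatment and test signal are assumptions.

```python
# Minimal sketch of one level of an integer-to-integer lifting wavelet
# transform (LeGall 5/3), illustrating how the predict and update steps
# stay entirely in integer arithmetic and remain perfectly invertible.
import numpy as np


def lifting_53_forward(x):
    """One 1-D decomposition level; x must have even length."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd sample minus rounded average of neighbouring evens
    d = odd - ((even + np.roll(even, -1)) >> 1)
    # Update step: approximation = even sample plus rounded average of neighbouring details
    a = even + ((np.roll(d, 1) + d + 2) >> 2)
    return a, d


def lifting_53_inverse(a, d):
    even = a - ((np.roll(d, 1) + d + 2) >> 2)
    odd = d + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x


if __name__ == "__main__":
    signal = np.array([10, 12, 15, 20, 18, 16, 14, 13])
    a, d = lifting_53_forward(signal)
    assert np.array_equal(lifting_53_inverse(a, d), signal)  # perfect reconstruction
    print(a, d)
```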

  20. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For display, manipulation and analysis, biomedical image data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used because of its good tradeoff between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution and formulate in detail six methods that differ in their sharpness control parameter. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with recommendations for 3D medical image interpolation under different conditions.
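
    The sharpness control parameter mentioned above is the free parameter a of Keys' cubic convolution kernel. The sketch below shows the kernel and a simple 1-D interpolator along the slice direction; it is an illustrative implementation under the common convention a = -0.5, not the authors' exact six formulations.

```python
# Sketch of the parametric cubic convolution kernel (Keys' kernel), where the
# free parameter `a` controls sharpness; a = -0.5 is the common default.
import numpy as np


def cubic_kernel(s, a=-0.5):
    s = np.abs(s)
    out = np.zeros_like(s, dtype=float)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    out[m1] = (a + 2) * s[m1] ** 3 - (a + 3) * s[m1] ** 2 + 1
    out[m2] = a * s[m2] ** 3 - 5 * a * s[m2] ** 2 + 8 * a * s[m2] - 4 * a
    return out


def interpolate_1d(samples, t, a=-0.5):
    """Estimate the value at fractional position t from uniformly spaced samples."""
    i = int(np.floor(t))
    acc = 0.0
    for k in range(i - 1, i + 3):                 # four nearest samples
        kk = min(max(k, 0), len(samples) - 1)     # clamp indices at the borders
        acc += samples[kk] * cubic_kernel(np.array([t - k]), a)[0]
    return acc


if __name__ == "__main__":
    slices = np.array([0.0, 1.0, 4.0, 9.0, 16.0])     # e.g. intensities along z
    print(interpolate_1d(slices, 1.5))                # value between slices 1 and 2
    print(interpolate_1d(slices, 1.5, a=-1.0))        # sharper kernel
```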

  1. A Routing Mechanism for Cloud Outsourcing of Medical Imaging Repositories.

    Science.gov (United States)

    Godinho, Tiago Marques; Viana-Ferreira, Carlos; Bastião Silva, Luís A; Costa, Carlos

    2016-01-01

    Web-based technologies have been increasingly used in picture archive and communication systems (PACS), in services related to storage, distribution, and visualization of medical images. Nowadays, many healthcare institutions are outsourcing their repositories to the cloud. However, managing communications between multiple geo-distributed locations is still challenging due to the complexity of dealing with huge volumes of data and bandwidth requirements. Moreover, standard methodologies still do not take full advantage of outsourced archives, namely because their integration with other in-house solutions is troublesome. In order to improve the performance of distributed medical imaging networks, a smart routing mechanism was developed. This includes an innovative cache system based on splitting and dynamic management of digital imaging and communications in medicine objects. The proposed solution was successfully deployed in a regional PACS archive. The results obtained proved that it is better than conventional approaches, as it reduces remote access latency and also the required cache storage space.

  2. Natural user interfaces in medical image analysis: cognitive analysis of brain and carotid artery images

    CERN Document Server

    Ogiela, Marek R

    2014-01-01

    This unique text/reference highlights a selection of practical applications of advanced image analysis methods for medical images. The book covers the complete methodology for processing, analysing and interpreting diagnostic results of sample CT images. The text also presents significant problems related to new approaches and paradigms in image understanding and semantic image analysis. To further engage the reader, example source code is provided for the implemented algorithms in the described solutions. Features: describes the most important methods and algorithms used for image analysis; e

  3. Benchmarking, Research, Development, and Support for ORNL Automated Image and Signature Retrieval (AIR/ASR) Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, K.W.

    2004-06-01

    This report describes the results of a Cooperative Research and Development Agreement (CRADA) with Applied Materials, Inc. (AMAT) of Santa Clara, California. This project encompassed the continued development and integration of the ORNL Automated Image Retrieval (AIR) technology, and an extension of the technology denoted Automated Signature Retrieval (ASR), and other related technologies with the Defect Source Identification (DSI) software system that was under development by AMAT at the time this work was performed. In the semiconductor manufacturing environment, defect imagery is used to diagnose problems in the manufacturing line, train yield management engineers, and examine historical data for trends. Image management in semiconductor data systems is a growing cause of concern in the industry as fabricators are now collecting up to 20,000 images each week. In response to this concern, researchers at the Oak Ridge National Laboratory (ORNL) developed a semiconductor-specific content-based image retrieval method and system, also known as AIR. The system uses an image-based query-by-example method to locate and retrieve similar imagery from a database of digital imagery using visual image characteristics. The query method is based on a unique architecture that takes advantage of the statistical, morphological, and structural characteristics of image data, generated by inspection equipment in industrial applications. The system improves the manufacturing process by allowing rapid access to historical records of similar events so that errant process equipment can be isolated and corrective actions can be quickly taken to improve yield. The combined ORNL and AMAT technology is referred to hereafter as DSI-AIR and DSI-ASR.

  4. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Ani eEloyan

    2012-08-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions, CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.

  5. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging.

    Science.gov (United States)

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D; Joel, Suresh; Pekar, James J; Mostofsky, Stewart H; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.
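
    As a rough illustration of the prediction step described in both records above, the sketch below trains a random forest on flattened connectivity-like features and reports specificity and sensitivity on a held-out split. The data are synthetic stand-ins, and the feature dimensions and forest size are assumptions; this is not the authors' ADHD-200 pipeline.

```python
# Illustrative sketch: a random forest classifying subjects from flattened
# connectivity features, evaluated with specificity and sensitivity.
# All data here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_subjects, n_edges = 200, 300                 # e.g. upper-triangle correlations
X = rng.normal(size=(n_subjects, n_edges))     # synthetic rs-fc features
y = rng.integers(0, 2, size=n_subjects)        # 0 = control, 1 = ADHD (synthetic)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
specificity = tn / (tn + fp)
sensitivity = tp / (tp + fn)
print(f"specificity={specificity:.2f} sensitivity={sensitivity:.2f}")
```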

  6. An active learning approach to the physics of medical imaging

    DEFF Research Database (Denmark)

    Wilhjelm, Jens E.; Pihl, Michael Johannes; Lonsdale, Markus Nowak;

    2008-01-01

    This paper describes an experimentally oriented medical imaging course where the students record, process and analyse 3D data of an unknown piece of formalin-fixed porcine tissue hidden in agar in order to estimate the tissue types present in a selected 2D slice. The recorded planar X-ray, CT, MRI, ultrasound and SPECT images show the tissue in very different ways. The students can only estimate the tissue type by studying the physical principles of the imaging modalities. The true answer is later revealed by anatomical photographs obtained from physical slicing. The paper describes the phantoms and methods used in the course. Sample images recorded with the different imaging modalities are provided. Challenges faced by the students are outlined. Results of the course show a high increase in competencies as judged from graded reports, low course drop-out rate, high pass-rate at the exam, high student...

  7. An algorithm for automated ROI definition in water or epoxy-filled NEMA NU-2 image quality phantoms.

    Science.gov (United States)

    Pierce Ii, Larry A; Byrd, Darrin W; Elston, Brian F; Karp, Joel S; Sunderland, John J; Kinahan, Paul E

    2016-01-08

    Drawing regions of interest (ROIs) in positron emission tomography/computed tomography (PET/CT) scans of the National Electrical Manufacturers Association (NEMA) NU-2 Image Quality (IQ) phantom is a time-consuming process that allows for interuser variability in the measurements. In order to reduce operator effort and allow batch processing of IQ phantom images, we propose a fast, robust, automated algorithm for performing IQ phantom sphere localization and analysis. The algorithm is easily altered to accommodate different configurations of the IQ phantom. The proposed algorithm uses information from both the PET and CT image volumes in order to overcome the challenges of detecting the smallest spheres in the PET volume. This algorithm has been released as an open-source plug-in to the Osirix medical image viewing software package. We test the algorithm under various noise conditions, positions within the scanner, air bubbles in the phantom spheres, and scanner misalignment conditions. The proposed algorithm shows run-times between 3 and 4 min and has proven to be robust under all tested conditions, with expected sphere localization deviations of less than 0.2 mm and variations of PET ROI mean and maximum values on the order of 0.5% and 2%, respectively, over multiple PET acquisitions. We conclude that the proposed algorithm is stable when challenged with a variety of physical and imaging anomalies, and that the algorithm can be a valuable tool for those who use the NEMA NU-2 IQ phantom for PET/CT scanner acceptance testing and QA/QC.

  8. Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.

    Science.gov (United States)

    Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B

    2015-09-01

    Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image in different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap significantly increased from 0.72±0.30 and 0.87±0.11 for both reference methods to with normalization. The second experiment was aimed at segmentation of the clavicles. The reference methods had an average Jaccard index of 0.57±0.26 and 0.53±0.26; with normalization this significantly increased to . The third experiment was detection of tuberculosis related abnormalities in the lung fields. The average area under the Receiver Operating Curve increased significantly from 0.72±0.14 and 0.79±0.06 using the reference methods to with normalization. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
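
    A minimal sketch of the band-energy idea follows: decompose an image into frequency bands, scale each band's energy to a reference value, and re-sum. The band definition (a Gaussian residual pyramid), the global rather than localized energy scaling, and the reference values are simplifying assumptions; the published method scales localized energies and applies the procedure iteratively within the lung fields.

```python
# Minimal sketch of band-energy normalization: split an image into bands,
# rescale each band's energy to a reference, and recombine.
import numpy as np
from scipy.ndimage import gaussian_filter


def band_decompose(img, sigmas=(1, 2, 4, 8)):
    bands, residual = [], img.astype(float)
    for s in sigmas:
        smooth = gaussian_filter(residual, s)
        bands.append(residual - smooth)    # band-pass component
        residual = smooth                  # pass the low-pass residual on
    bands.append(residual)                 # final low-pass band
    return bands


def normalize_bands(img, reference_energies):
    bands = band_decompose(img)
    out = np.zeros_like(img, dtype=float)
    for band, ref in zip(bands, reference_energies):
        energy = band.std() + 1e-8
        out += band * (ref / energy)       # scale band energy to the reference
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cxr = rng.normal(size=(256, 256))      # synthetic stand-in for a radiograph
    refs = [1.0, 0.8, 0.6, 0.4, 2.0]       # one reference per band (made up)
    print(normalize_bands(cxr, refs).shape)
```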

  9. A hybrid method based on fuzzy clustering and local region-based level set for segmentation of inhomogeneous medical images.

    Science.gov (United States)

    Rastgarpour, Maryam; Shanbehzadeh, Jamshid; Soltanian-Zadeh, Hamid

    2014-08-01

    Medical images are more affected by intensity inhomogeneity than by noise and outliers. This has a great impact on the efficiency of region-based image segmentation methods, because they rely on the homogeneity of intensities in the regions of interest. Meanwhile, initialization and the configuration of controlling parameters affect the performance of level set segmentation. To address these problems, this paper proposes a new hybrid method that integrates a local region-based level set method with a variation of fuzzy clustering. Specifically, it takes a coarse-to-fine information fusion approach that seamlessly fuses local spatial information and gray-level information with the information of the local region-based level set method. Also, the controlling parameters of the level set are computed directly from the fuzzy clustering result. This approach has valuable benefits such as automation, no need for prior knowledge about the region of interest (ROI), robustness to intensity inhomogeneity, automatic adjustment of controlling parameters, insensitivity to initialization, and satisfactory accuracy. The contribution of this paper is thus to provide these advantages together, which has not previously been done for inhomogeneous medical images. The proposed method was tested on several medical images from different modalities for performance evaluation. Experimental results confirm its effectiveness in segmenting medical images in comparison with similar methods.
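
    The clustering stage can be sketched in a few lines of plain NumPy, as below. Only the fuzzy c-means step is shown; the coupling to the local region-based level set and the automatic transfer of its controlling parameters are not reproduced, and the cluster count, the fuzzifier m, and the synthetic intensities are assumptions.

```python
# Sketch of the fuzzy c-means (FCM) stage only, on 1-D intensity values.
import numpy as np


def fuzzy_cmeans(values, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Cluster 1-D intensity values; returns (centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(values), n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # fuzzy memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ values) / um.sum(axis=0)  # weighted cluster centers
        dist = np.abs(values[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))           # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # synthetic image intensities drawn from three tissue classes
    pixels = np.concatenate([rng.normal(mu, 5, 500) for mu in (40, 100, 170)])
    centers, u = fuzzy_cmeans(pixels)
    print(np.sort(centers))                         # approx. the three class means
```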

  10. Automated choroid segmentation based on gradual intensity distance in HD-OCT images.

    Science.gov (United States)

    Chen, Qiang; Fan, Wen; Niu, Sijie; Shi, Jiajia; Shen, Honglie; Yuan, Songtao

    2015-04-06

    The choroid is an important structure of the eye and plays a vital role in the pathology of retinal diseases. This paper presents an automated choroid segmentation method for high-definition optical coherence tomography (HD-OCT) images, including Bruch's membrane (BM) segmentation and choroidal-scleral interface (CSI) segmentation. An improved retinal nerve fiber layer (RNFL) complex removal algorithm is presented to segment the BM by considering the structural characteristics of the retinal layers. By analyzing the characteristics of CSI boundaries, we present a novel algorithm to generate a gradual intensity distance image. An improved 2-D graph search method with curve smoothness constraints is then used to obtain the CSI segmentation. Experimental results with 212 HD-OCT images from 110 eyes in 66 patients demonstrate that the proposed method can achieve high segmentation accuracy. The mean choroid thickness difference and overlap ratio between our proposed method and outlines drawn by experts were 6.72 µm and 85.04%, respectively.

  11. Technique for Automated Recognition of Sunspots on Full-Disk Solar Images

    Directory of Open Access Journals (Sweden)

    Zharkov S

    2005-01-01

    A new robust technique is presented for automated identification of sunspots on full-disk white-light (WL) solar images obtained from the SOHO/MDI instrument and on Ca II K1 line images from the Meudon Observatory. Edge-detection methods are applied to find sunspot candidates, followed by local thresholding using statistical properties of the region around the sunspots. Possible initial oversegmentation of the images is remedied with a median filter. The features are smoothed using morphological closing operations and filled by applying a watershed, followed by a dilation operator, to define regions of interest containing sunspots. A number of physical and geometrical parameters of the detected sunspot features are extracted and stored in a relational database, along with umbra-penumbra information in the form of pixel run-length data within a bounding rectangle. The detection results show very good agreement with the manual synoptic maps and a very high correlation with those produced manually by the NOAA Observatory, USA.
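
    The detection chain (edge detection, thresholding, morphological cleanup, region labelling and feature extraction) can be approximated with scikit-image as in the sketch below. Thresholds, structuring-element sizes and the synthetic test image are illustrative assumptions, and solar-specific steps such as limb handling and the watershed filling are omitted.

```python
# Rough sketch of an edge-based dark-feature detector: edge detection,
# thresholding, morphological closing, hole filling, and region labelling.
import numpy as np
from skimage import filters, morphology, measure


def detect_dark_features(image, edge_thresh=0.05, min_area=10):
    edges = filters.sobel(image)                              # candidate boundaries
    candidates = edges > edge_thresh
    candidates = morphology.binary_closing(candidates, morphology.disk(2))
    filled = morphology.remove_small_holes(candidates, area_threshold=1024)
    labels = measure.label(filled)
    regions = [r for r in measure.regionprops(labels, intensity_image=image)
               if r.area >= min_area]
    # keep geometric/photometric parameters per detected feature
    return [(r.centroid, r.area, r.mean_intensity) for r in regions]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    disk_img = rng.normal(1.0, 0.01, (256, 256))              # bright "quiet" disk
    disk_img[100:120, 100:120] = 0.4                          # a synthetic dark spot
    print(detect_dark_features(disk_img))
```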

  12. Automated system for acquisition and image processing for the control and monitoring boned nopal

    Science.gov (United States)

    Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.

    2013-11-01

    This paper describes the design and fabrication of a system for image acquisition and processing to control the removal of thorns from the nopal vegetable (Opuntia ficus indica) in an automated machine that uses pulses from an Nd:YAG laser. The areolas, the areas where thorns grow on the bark of the nopal, are located by applying segmentation algorithms to the images obtained by a CCD. Once the position of the areolas is known, their coordinates are sent to a motor system that controls the laser to interact with all areolas and remove the thorns from the nopal. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware implements tasks for acquisition, preprocessing, segmentation, recognition and interpretation of the areolas. The system succeeds in identifying the areolas and generating a table of their coordinates, which is sent to the galvo motor system that controls the laser for thorn removal.

  13. Comprehensive computerized medical imaging: interim hypothetical economic evaluation

    Science.gov (United States)

    Warburton, Rebecca N.; Fisher, Paul D.; Nosil, Josip

    1990-08-01

    The 422-bed Victoria General Hospital (VGH) and Siemens Electric Limited have since 1983 been piloting the implementation of comprehensive computerized medical imaging, including digital acquisition of diagnostic images, in British Columbia. Although full PACS is not yet in place at VGH, experience to date has been used to project annual cost figures (including capital replacement) for a fully computerized department. The resulting economic evaluation has been labelled hypothetical to emphasize that some key cost components were estimated rather than observed; this paper presents updated cost figures based on recent revisions to the proposed departmental equipment configuration, which raised the cost of conventional imaging equipment by 0.3 million and lowered the cost of computerized imaging equipment by 0.8 million. Compared with conventional diagnostic imaging, computerized imaging appears to raise overall annual costs at VGH by nearly 0.7 million, or 11.6%; this is more favourable than the previous results, which indicated extra annual costs of 1 million (16.9%). Sensitivity analysis still indicates that all reasonable changes in the underlying assumptions result in higher costs for computerized imaging than for conventional imaging. Computerized imaging offers lower radiation exposure to patients, shorter waiting times, and other potential advantages, but as yet the price of obtaining these benefits remains substantial.

  14. Predicting Semantic Descriptions from Medical Images with Convolutional Neural Networks.

    Science.gov (United States)

    Schlegl, Thomas; Waldstein, Sebastian M; Vogl, Wolf-Dieter; Schmidt-Erfurth, Ursula; Langs, Georg

    2015-01-01

    Learning representative computational models from medical imaging data requires large training data sets. Often, voxel-level annotation is infeasible for sufficient amounts of data. An alternative to manual annotation is to use the enormous amount of knowledge encoded in imaging data and the corresponding reports generated during clinical routine. Weakly supervised learning approaches can link volume-level labels to image content but suffer from the typical label distributions in medical imaging data, where only a small portion consists of clinically relevant abnormal structures. In this paper we propose to use a semantic representation of clinical reports as a learning target that is predicted from imaging data by a convolutional neural network. We demonstrate how we can learn accurate voxel-level classifiers based on weak volume-level semantic descriptions on a set of 157 optical coherence tomography (OCT) volumes. We specifically show how semantic information increases classification accuracy for intraretinal cystoid fluid (IRC), subretinal fluid (SRF) and normal retinal tissue, and how the learning algorithm links semantic concepts to image content and geometry.

  15. Automated characterization of blood vessels as arteries and veins in retinal images.

    Science.gov (United States)

    Mirsharif, Qazaleh; Tajeripour, Farshad; Pourreza, Hamidreza

    2013-01-01

    In recent years researchers have found that alterations in the arterial or venular tree of the retinal vasculature are associated with several public health problems such as diabetic retinopathy, which is also the leading cause of blindness in the world. A prerequisite for automated assessment of subtle changes in arteries and veins is to accurately separate those vessels from each other. This is a difficult task due to the high similarity between arteries and veins, in addition to variations of color and non-uniform illumination within and between retinal images. In this paper a novel structural and automated method is presented for artery/vein classification of blood vessels in retinal images. The proposed method consists of three main steps. In the first step, several image enhancement techniques are employed to improve the images. Then a specific feature extraction process is applied to separate major arteries from veins. Indeed, vessels are divided into smaller segments, and feature extraction and vessel classification are applied to each small vessel segment instead of each vessel point. Finally, a post-processing step is added to improve the results obtained from the previous step using structural characteristics of the retinal vascular network. In this last stage, vessel features at intersection and bifurcation points are processed for detection of arterial and venular sub-trees. Ultimately, vessel labels are revised by propagating the dominant label through each identified connected tree of arteries or veins. Evaluation of the proposed approach against two different datasets of retinal images, including the DRIVE database, demonstrates the good performance and robustness of the method. The proposed method may be used for determination of the arteriolar-to-venular diameter ratio in retinal images. It also potentially allows for further investigation of the labels of thinner arteries and veins, which might be found by tracing them back to the major vessels.

  16. Difference Tracker: ImageJ plugins for fully automated analysis of multiple axonal transport parameters.

    Science.gov (United States)

    Andrews, Simon; Gilley, Jonathan; Coleman, Michael P

    2010-11-30

    Studies of axonal transport are critical, not only to understand its normal regulation, but also to determine the roles of transport impairment in disease. Exciting new resources have recently become available allowing live imaging of axonal transport in physiologically relevant settings, such as mammalian nerves. Thus the effects of disease, ageing and therapies can now be assessed directly in nervous system tissue. However, these imaging studies present new challenges. Manual or semi-automated analysis of the range of transport parameters required for a suitably complete evaluation is very time-consuming and can be subjective due to the complexity of the particle movements in axons in ex vivo explants or in vivo. We have developed Difference Tracker, a program combining two new plugins for the ImageJ image-analysis freeware, to provide fast, fully automated and objective analysis of a number of relevant measures of trafficking of fluorescently labeled particles so that axonal transport in different situations can be easily compared. We confirm that Difference Tracker can accurately track moving particles in highly simplified, artificial simulations. It can also identify and track multiple motile fluorescently labeled mitochondria simultaneously in time-lapse image stacks from live imaging of tibial nerve axons, reporting values for a number of parameters that are comparable to those obtained through manual analysis of the same axons. Difference Tracker therefore represents a useful free resource for the comparative analysis of axonal transport under different conditions, and could potentially be used and developed further in many other studies requiring quantification of particle movements.

  17. The use of the Kalman filter in the automated segmentation of EIT lung images.

    Science.gov (United States)

    Zifan, A; Liatsis, P; Chapman, B E

    2013-06-01

    In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs obtained with electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time images of impedance inside a body with low spatial but high temporal resolution. Recovering the impedance itself constitutes a nonlinear, ill-posed inverse problem; therefore, the problem is usually linearized, which produces impedance-change images rather than static impedance images. Such images are highly blurry and fuzzy along object boundaries. We provide a mathematical reasoning for the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. We then use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Second, we augment the Kalman filter with an adaptive foreground detection system that provides the boundary contours for the Kalman filter to track the conductivity changes as the lungs deform during a respiratory cycle. The proposed method has been validated using performance statistics such as misclassified area and false positive rate, and compared with previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging.
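
    The tracking component can be illustrated with a generic linear Kalman filter under a constant-velocity model, as in the sketch below; the global lung-shape prior and the adaptive foreground detector of the full pipeline are omitted, and the noise covariances and the breathing-like test signal are assumptions.

```python
# Generic linear Kalman filter (constant-velocity model) tracking a single
# boundary coordinate across frames.
import numpy as np


def kalman_track(measurements, q=1e-3, r=0.5):
    F = np.array([[1.0, 1.0], [0.0, 1.0]])     # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                 # we only measure position
    Q = q * np.eye(2)                          # process noise covariance
    R = np.array([[r]])                        # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])   # initial state
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates


if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 60)
    true_pos = 30 + 5 * np.sin(t)                       # breathing-like motion
    noisy = true_pos + np.random.default_rng(0).normal(0, 1.0, t.size)
    print(np.round(kalman_track(noisy)[:5], 2))
```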

  18. Automated generation of curved planar reformations from MR images of the spine

    Energy Technology Data Exchange (ETDEWEB)

    Vrtovec, Tomaz [Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, SI-1000 Ljubljana (Slovenia); Ourselin, Sebastien [CSIRO ICT Centre, Autonomous Systems Laboratory, BioMedIA Lab, Locked Bag 17, North Ryde, NSW 2113 (Australia); Gomes, Lavier [Department of Radiology, Westmead Hospital, University of Sydney, Hawkesbury Road, Westmead NSW 2145 (Australia); Likar, Bostjan [Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, SI-1000 Ljubljana (Slovenia); Pernus, Franjo [Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2007-05-21

    A novel method for automated curved planar reformation (CPR) of magnetic resonance (MR) images of the spine is presented. The CPR images, generated by a transformation from image-based to spine-based coordinate system, follow the structural shape of the spine and allow the whole course of the curved anatomy to be viewed in individual cross-sections. The three-dimensional (3D) spine curve and the axial vertebral rotation, which determine the transformation, are described by polynomial functions. The 3D spine curve passes through the centres of vertebral bodies, while the axial vertebral rotation determines the rotation of vertebrae around the axis of the spinal column. The optimal polynomial parameters are obtained by a robust refinement of the initial estimates of the centres of vertebral bodies and axial vertebral rotation. The optimization framework is based on the automatic image analysis of MR spine images that exploits some basic anatomical properties of the spine. The method was evaluated on 21 MR images from 12 patients and the results provided a good description of spine anatomy, with mean errors of 2.5 mm and 1.7° for the position of the 3D spine curve and axial rotation of vertebrae, respectively. The generated CPR images are independent of the position of the patient in the scanner while comprising both anatomical and geometrical properties of the spine.

  19. Automated generation of curved planar reformations from MR images of the spine

    Science.gov (United States)

    Vrtovec, Tomaz; Ourselin, Sébastien; Gomes, Lavier; Likar, Boštjan; Pernuš, Franjo

    2007-05-01

    A novel method for automated curved planar reformation (CPR) of magnetic resonance (MR) images of the spine is presented. The CPR images, generated by a transformation from image-based to spine-based coordinate system, follow the structural shape of the spine and allow the whole course of the curved anatomy to be viewed in individual cross-sections. The three-dimensional (3D) spine curve and the axial vertebral rotation, which determine the transformation, are described by polynomial functions. The 3D spine curve passes through the centres of vertebral bodies, while the axial vertebral rotation determines the rotation of vertebrae around the axis of the spinal column. The optimal polynomial parameters are obtained by a robust refinement of the initial estimates of the centres of vertebral bodies and axial vertebral rotation. The optimization framework is based on the automatic image analysis of MR spine images that exploits some basic anatomical properties of the spine. The method was evaluated on 21 MR images from 12 patients and the results provided a good description of spine anatomy, with mean errors of 2.5 mm and 1.7° for the position of the 3D spine curve and axial rotation of vertebrae, respectively. The generated CPR images are independent of the position of the patient in the scanner while comprising both anatomical and geometrical properties of the spine.
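
    The curve-modelling step common to both records above can be sketched as fitting low-order polynomials x(z) and y(z) through noisy vertebral-body centre estimates and resampling them densely. The polynomial degree, the synthetic centres, and the omission of the rotation model and the robust refinement are assumptions of this illustration.

```python
# Sketch: fit polynomial functions x(z), y(z) through noisy vertebral-body
# centre estimates to obtain a smooth 3-D spine curve for reformation.
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(0, 400, 17)                       # slice positions of 17 vertebrae (mm)
x_true = 0.0005 * (z - 200) ** 2                  # a gentle synthetic curve
y_true = 20 * np.sin(z / 150.0)
x_meas = x_true + rng.normal(0, 1.5, z.size)      # noisy centre estimates
y_meas = y_true + rng.normal(0, 1.5, z.size)

# plain least-squares polynomial fits (degree chosen for illustration)
px = np.polynomial.Polynomial.fit(z, x_meas, deg=4)
py = np.polynomial.Polynomial.fit(z, y_meas, deg=4)

z_dense = np.linspace(0, 400, 200)                # resample the curve densely
spine_curve = np.column_stack([px(z_dense), py(z_dense), z_dense])
print(spine_curve.shape)                          # (200, 3) points along the spine
```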

  20. Fully automated quantitative analysis of breast cancer risk in DCE-MR images

    Science.gov (United States)

    Jiang, Luan; Hu, Xiaoxin; Gu, Yajia; Li, Qiang

    2015-03-01

    The amount of fibroglandular tissue (FGT) and the background parenchymal enhancement (BPE) in dynamic contrast-enhanced magnetic resonance (DCE-MR) images are two important indices for breast cancer risk assessment in clinical practice. The purpose of this study is to develop and evaluate a fully automated scheme for quantitative analysis of FGT and BPE in DCE-MR images. Our fully automated method consists of three steps, i.e., segmentation of the whole breast, the fibroglandular tissues, and the enhanced fibroglandular tissues. Based on the volume of interest extracted automatically, a dynamic programming method was applied in each 2-D slice of a 3-D MR scan to delineate the chest wall and breast skin line for segmenting the whole breast. This step took advantage of the continuity of the chest wall and breast skin line across adjacent slices. We then used a fuzzy c-means clustering method with automatic selection of the cluster number for segmenting the fibroglandular tissues within the segmented whole-breast area. Finally, a statistical method was used to set a threshold based on the estimated noise level for segmenting the enhanced fibroglandular tissues in the subtraction images of pre- and post-contrast MR scans. Based on the segmented whole breast, fibroglandular tissues, and enhanced fibroglandular tissues, FGT and BPE were automatically computed. Preliminary results of technical evaluation and clinical validation showed that our fully automated scheme could obtain good segmentation of the whole breast, fibroglandular tissues, and enhanced fibroglandular tissues to achieve accurate assessment of FGT and BPE for quantitative analysis of breast cancer risk.

  1. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    Science.gov (United States)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

    Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is to reduce the large number of false positives. Many false positives originate from acoustic shadowing caused by ribs. Therefore, determining the location of the chest wall in ABUS is necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that adds a region cost term computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images where our previously developed method fails. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

  2. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    Science.gov (United States)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS and then combined using IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The number of pixels comprising each CCM region was compared between the two map images, and the percent agreement between each pair of regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +9.997. In some cases, the IBIS-calculated map code differed from the DMA codes; analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
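
    The comparison itself reduces to counting pixels per CCM class in the two label maps and computing percent agreement between the counts, as in the sketch below; the class codes and the synthetic maps are stand-ins for the digitized DMA and IBIS products.

```python
# Sketch of the per-class comparison: count pixels per CCM class in two label
# maps (manual vs. automated) and report percent agreement between the counts.
import numpy as np

rng = np.random.default_rng(0)
manual = rng.integers(1, 5, size=(200, 200))            # synthetic CCM class codes 1..4
automated = manual.copy()
flip = rng.random(manual.shape) < 0.1                   # disagree on ~10% of pixels
automated[flip] = rng.integers(1, 5, size=flip.sum())

for code in np.unique(manual):
    n_manual = np.count_nonzero(manual == code)
    n_auto = np.count_nonzero(automated == code)
    agreement = 100.0 * min(n_manual, n_auto) / max(n_manual, n_auto)
    print(f"class {code}: manual={n_manual} automated={n_auto} "
          f"agreement={agreement:.1f}%")
```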

  3. Automated segmentation of oral mucosa from wide-field OCT images (Conference Presentation)

    Science.gov (United States)

    Goldan, Ryan N.; Lee, Anthony M. D.; Cahill, Lucas; Liu, Kelly; MacAulay, Calum; Poh, Catherine F.; Lane, Pierre

    2016-03-01

    Optical Coherence Tomography (OCT) can discriminate morphological tissue features important for oral cancer detection such as the presence or absence of basement membrane and epithelial thickness. We previously reported an OCT system employing a rotary-pullback catheter capable of in vivo, rapid, wide-field (up to 90 x 2.5mm2) imaging in the oral cavity. Due to the size and complexity of these OCT data sets, rapid automated image processing software that immediately displays important tissue features is required to facilitate prompt bed-side clinical decisions. We present an automated segmentation algorithm capable of detecting the epithelial surface and basement membrane in 3D OCT images of the oral cavity. The algorithm was trained using volumetric OCT data acquired in vivo from a variety of tissue types and histology-confirmed pathologies spanning normal through cancer (8 sites, 21 patients). The algorithm was validated using a second dataset of similar size and tissue diversity. We demonstrate application of the algorithm to an entire OCT volume to map epithelial thickness, and detection of the basement membrane, over the tissue surface. These maps may be clinically useful for delineating pre-surgical tumor margins, or for biopsy site guidance.

  4. Automated detection of regions of interest for tissue microarray experiments: an image texture analysis

    Directory of Open Access Journals (Sweden)

    Tözeren Aydin

    2007-03-01

    Background: Recent research with tissue microarrays has led to rapid progress toward quantifying the expression of large sets of biomarkers in normal and diseased tissue. However, standard procedures for sampling tissue for molecular profiling have not yet been established. Methods: This study presents a high-throughput analysis of texture heterogeneity on breast tissue images for the purpose of identifying regions of interest in the tissue for molecular profiling via tissue microarray technology. Image texture of breast histology slides was described in terms of three parameters: the percentage of area occupied in an image block by chromatin (B), the percentage occupied by stroma-like regions (P), and a statistical heterogeneity index H commonly used in image analysis. Texture parameters were defined and computed for each of the thousands of image blocks in our dataset using both gray scale and color segmentation. The image blocks were then classified into three categories using the texture feature parameters in a novel statistical learning algorithm. These categories are as follows: image blocks specific to normal breast tissue, blocks specific to cancerous tissue, and image blocks that are non-specific to normal and disease states. Results: Gray scale and color segmentation techniques led to identification of the same regions in histology slides as cancer-specific. Moreover, the image blocks identified as cancer-specific belonged to those cell-crowded regions in whole-section image slides that were marked by two pathologists as regions of interest for further histological studies. Conclusion: These results indicate the high efficiency of our automated method for identifying pathologic regions of interest on histology slides. Automation of critical region identification will help minimize the inter-rater variability among different raters (pathologists) as hundreds of tumors that are used to develop an array have typically been evaluated

  5. Medical Imaging Field of Magnetic Resonance Imaging: Identification of Specialties within the Field

    Science.gov (United States)

    Grey, Michael L.

    2009-01-01

    This study was conducted to determine if specialty areas are emerging in the magnetic resonance imaging (MRI) profession due to advancements made in the medical sciences, imaging technology, and clinical applications used in MRI that would require new developments in education/training programs and national registry examinations. In this…

  6. Implementation of a pharmacy automation system (robotics) to ensure medication safety at Norwalk hospital.

    Science.gov (United States)

    Bepko, Robert J; Moore, John R; Coleman, John R

    2009-01-01

    This article reports an intervention to improve the quality and safety of hospital patient care by introducing the use of pharmacy robotics into the medication distribution process. Medication safety is vitally important. The integration of pharmacy robotics with computerized practitioner order entry and bedside medication bar coding produces a significant reduction in medication errors. The creation of a safe medication process, from initial ordering to bedside administration, provides enormous benefits to patients, to health care providers, and to the organization as well.

  7. Medical Image Processing for Fully Integrated Subject Specific Whole Brain Mesh Generation

    Directory of Open Access Journals (Sweden)

    Chih-Yang Hsu

    2015-05-01

    Currently, anatomically consistent segmentation of vascular trees acquired with magnetic resonance imaging requires the use of multiple image processing steps, which, in turn, depend on manual intervention. In effect, segmentation of vascular trees from medical images is time consuming and error prone due to the tortuous geometry and weak signal in small blood vessels. To overcome these errors and accelerate the image processing time, we introduce an automatic image processing pipeline for constructing subject-specific computational meshes of the entire cerebral vasculature, including segmentation of ancillary structures: the grey and white matter, cerebrospinal fluid space, skull, and scalp. To demonstrate the validity of the new pipeline, we segmented the entire intracranial compartment, with special attention to the angioarchitecture, from magnetic resonance images acquired for two healthy volunteers. The raw images were processed through our pipeline for automatic segmentation and mesh generation. Due to the partial volume effect and finite resolution, the computational meshes intersect with each other at their respective interfaces. To eliminate anatomically inconsistent overlap, we utilized morphological operations to separate the structures with physiologically sound gap spaces. The resulting meshes exhibit anatomically correct spatial extent and relative positions without intersections. For validation, we computed critical biometrics of the angioarchitecture, the cortical surfaces, the ventricular system, and the cerebrospinal fluid (CSF) spaces and compared them against literature values. Volumes and surface areas of the computational mesh were found to be in physiological ranges. In conclusion, we present an automatic image processing pipeline to automate the segmentation of the main intracranial compartments, including subject-specific vascular trees. These computational meshes can be used in 3D immersive visualization for diagnosis, surgery planning with haptics

  8. Hybrid Segmentation of Vessels and Automated Flow Measures in In-Vivo Ultrasound Imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Martins, Bo; Hansen, Kristoffer Lindskov

    2016-01-01

    Vector Flow Imaging (VFI) has received increasing attention in the scientific field of ultrasound, as it enables angle-independent visualization of blood flow. VFI can be used in volume flow estimation, but a vessel segmentation is needed to make it fully automatic. A novel vessel segmentation procedure is crucial for wall-to-wall visualization, automation of adjustments, and quantification of flow in state-of-the-art ultrasound scanners. We propose and discuss a method for accurate vessel segmentation that fuses VFI data and B-mode for robustly detecting and delineating vessels. The proposed...

  9. Automated Image Segmentation And Characterization Technique For Effective Isolation And Representation Of Human Face

    Directory of Open Access Journals (Sweden)

    Rajesh Reddy N

    2014-01-01

    In areas such as defense and forensics, it is necessary to identify the faces of criminals from an already available database. An automated face recognition system involves face isolation, feature extraction and classification techniques. A key challenge in face recognition systems is isolating the face effectively, as it may be affected by illumination, posture and variation in skin color. Hence it is necessary to develop an effective algorithm that isolates the face from the image. In this paper, an advanced face isolation technique and a feature extraction technique are proposed.

  10. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.

    1995-01-01

    species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume, and the frequency of dividing cells in a cell population. These parameters were used to compare the physiological states of liquid-suspended and surface-growing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found...

  11. A Volume Rendering Algorithm for Sequential 2D Medical Images

    Institute of Scientific and Technical Information of China (English)

    吕忆松; 陈亚珠

    2002-01-01

    Volume rendering of 3D data sets composed of sequential 2D medical images has become an important branch of image processing and computer graphics. To help physicians fully understand deep-seated human organs and foci (e.g., a tumour) as 3D structures, in this paper we present a modified volume rendering algorithm to render volumetric data. Using this method, projection images of structures of interest from different viewing directions can be obtained satisfactorily. By rotating the light source and the observer eyepoint, the method avoids rotating the whole volumetric data set in main memory and thus reduces computational complexity and rendering time. Experiments on CT images suggest that the proposed method is useful and efficient for rendering 3D data sets.

  12. Medical image segmentation based on cellular neural network

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The application of cellular neural network (CNN) has made great progress in image processing. When the selected objects extraction (SOE) CNN is applied to gray scale images, its effects depend on the choice of initial points. In this paper, we take medical images as an example to analyze this limitation. Then an improved algorithm is proposed in which we can segment any gray level objects regardless of the limitation stated above. We also use the gradient information and contour detection CNN to determine the contour and ensure the veracity of segmentation effectively. Finally, we apply the improved algorithm to tumor segmentation of the human brain MR image. The experimental results show that the algorithm is practical and effective.

  13. Semi-automated discrimination of retinal pigmented epithelial cells in two-photon fluorescence images of mouse retinas

    Science.gov (United States)

    Alexander, Nathan S.; Palczewska, Grazyna; Palczewski, Krzysztof

    2015-01-01

    Automated image segmentation is a critical step toward achieving a quantitative evaluation of disease states with imaging techniques. Two-photon fluorescence microscopy (TPM) has been employed to visualize the retinal pigmented epithelium (RPE) and provide images indicating the health of the retina. However, segmentation of RPE cells within TPM images is difficult due to small differences in fluorescence intensity between cell borders and cell bodies. Here we present a semi-automated method for segmenting RPE cells that relies upon multiple weak features that differentiate cell borders from the remaining image. These features were scored by a search optimization procedure that built up the cell border in segments around a nucleus of interest. With six images used as a test, our method correctly identified cell borders for 69% of nuclei on average. Performance was strongly dependent upon increasing retinosome content in the RPE. TPM image analysis has the potential of providing improved early quantitative assessments of diseases affecting the RPE. PMID:26309765

  14. Automating the Analysis of Spatial Grids: A Practical Guide to Data Mining Geospatial Images for Human & Environmental Applications

    CERN Document Server

    Lakshmanan, Valliappa

    2012-01-01

    The ability to create automated algorithms to process gridded spatial data is increasingly important as remotely sensed datasets increase in volume and frequency. Whether in business, social science, ecology, meteorology or urban planning, the ability to create automated applications to analyze and detect patterns in geospatial data is increasingly important. This book provides students with a foundation in topics of digital image processing and data mining as applied to geospatial datasets. The aim is for readers to be able to devise and implement automated techniques to extract information from spatial grids such as radar, satellite or high-resolution survey imagery.

  15. AI (artificial intelligence) in histopathology--from image analysis to automated diagnosis.

    Directory of Open Access Journals (Sweden)

    Aleksandar Bogovac

    2010-02-01

    The technological progress in the digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of a microscopic image as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits attachment of several (if not all) fields of view and their contemporary visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. To introduce digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangements and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides exceeds that of viewing original glass slides by far. The reason lies in a slower and more difficult sampling procedure, which is the selection of information-containing fields of view. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in the following steps: 1. The individual image quality has to be measured and corrected, if necessary. 2. A diagnostic algorithm has to be applied. An algorithm has to be developed that includes both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example in virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves, without any labeling. 5. The pathologist's duty will not be replaced by such a system; on the contrary, the pathologist will manage and supervise the system, i.e., work at a "higher level". Virtual slides are already in use for teaching and

  16. AI (artificial intelligence) in histopathology--from image analysis to automated diagnosis.

    Science.gov (United States)

    Kayser, Klaus; Görtler, Jürgen; Bogovac, Milica; Bogovac, Aleksandar; Goldmann, Torsten; Vollmer, Ekkehard; Kayser, Gian

    2009-01-01

    The technological progress in the digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of a microscopic image as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits attachment of several (if not all) fields of view and their contemporary visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. To introduce digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangements and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides exceeds that of viewing original glass slides by far. The reason lies in a slower and more difficult sampling procedure, which is the selection of information-containing fields of view. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in the following steps: 1. The individual image quality has to be measured and corrected, if necessary. 2. A diagnostic algorithm has to be applied. An algorithm has to be developed that includes both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example in virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves, without any labeling. 5. The pathologist's duty will not be replaced by such a system; on the contrary, the pathologist will manage and supervise the system, i.e., work at a "higher level". Virtual slides are already in use for teaching and continuous

  17. Application of a medical image processing system in liver transplantation

    Institute of Scientific and Technical Information of China (English)

    Chi-Hua Fang; Xiao-Feng Li; Zhou Li; Ying-Fang Fan; Chao-Min Lu; Yan-Peng Huang; Feng-Ping Peng

    2010-01-01

    BACKGROUND: At present, imaging is used not only to display anatomical form, but also to create three-dimensional (3D) reconstructions and visual simulations from the original data to guide clinical surgery. This study aimed to assess the use of a medical image-processing system in liver transplantation surgery. METHODS: Abdominal 64-slice spiral CT data were collected from 200 healthy volunteers and 37 liver cancer patients in the hepatic arterial, portal, and hepatic venous phases. A 3D model of the abdominal blood vessels, including the abdominal aortic system, portal vein system, and inferior vena cava system, was reconstructed with an abdominal image-processing system to identify vascular variations. A 3D model of the liver was then reconstructed on the basis of hepatic segmentation, and the liver volume was calculated. The FreeForm modeling system with a PHANTOM force-feedback device was used to simulate a realistic liver transplantation environment, in which the entire liver transplantation procedure was completed. RESULTS: The reconstructed model of the abdominal blood vessels and the liver was clearly three-dimensionally consistent with the anatomy of the liver; variations of the abdominal blood vessels were identified and liver segmentation was performed digitally. Liver transplantation was subsequently simulated in the model, and different operative approaches were selected successfully. CONCLUSION: The digitized medical image-processing system may be valuable for liver transplantation.
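
    The liver-volume step described in METHODS reduces, once a segmentation mask exists, to counting voxels and multiplying by the voxel size. The sketch below, assuming NumPy and a boolean mask with known (z, y, x) spacing, illustrates that step only; the mask itself, the vessel reconstruction, and the FreeForm/PHANTOM simulation are outside its scope, and liver_volume_ml is a hypothetical helper name.

        # Liver volume from a binary segmentation mask and the voxel spacing
        # taken from the CT header; 1 ml = 1000 mm^3.
        import numpy as np

        def liver_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
            voxel_mm3 = float(np.prod(spacing_mm))     # volume of a single voxel
            return float(mask.sum()) * voxel_mm3 / 1000.0

        # Synthetic example: 64 slices of 512 x 512 pixels, 0.7 mm pixels, 2.5 mm slices.
        mask = np.zeros((64, 512, 512), dtype=bool)
        mask[10:40, 100:300, 150:350] = True           # stand-in for a segmented liver
        print(f"approx. liver volume: {liver_volume_ml(mask, (2.5, 0.7, 0.7)):.0f} ml")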

  18. Information preserved guided scan pixel difference coding for medical images

    CERN Document Server

    Takaya, K; Yuan, L; Takaya, Kunio; Yuan, Li

    2001-01-01

    This paper analyzes the information content of medical images, using 3-D MRI images as an example, in terms of information entropy. The results of the analysis justify the use of Pixel Difference Coding for preserving all information contained in the original pictures, in other words lossless coding. The experimental results also indicate that a compression ratio of CR = 2:1 can be achieved under the lossless constraint. A practical implementation of Pixel Difference Coding that allows interactive retrieval of a local ROI (region of interest), while staying near the lower bound given by the information entropy, is discussed.
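
    The entropy argument can be illustrated directly: the first-order entropy of neighbouring-pixel differences is usually well below that of the raw pixel values, which bounds the achievable lossless bits per pixel and makes a compression ratio around 2:1 plausible. The sketch below uses NumPy on a synthetic smooth image; it is not the authors' coder or their MRI data.

        # Compare first-order entropy of raw pixels vs. horizontal pixel differences.
        import numpy as np

        def entropy_bits(values: np.ndarray) -> float:
            """Shannon entropy in bits per sample from the empirical histogram."""
            _, counts = np.unique(values, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        # Smooth synthetic "slice": low-frequency ramp plus mild noise, 8-bit.
        y, x = np.mgrid[0:256, 0:256]
        img = ((x + y) / 4 + np.random.normal(0, 2, (256, 256))).clip(0, 255).astype(np.uint8)

        diff = np.diff(img.astype(np.int16), axis=1)   # widened to avoid 8-bit wrap-around
        print(f"raw pixels : {entropy_bits(img):.2f} bits/pixel")
        print(f"differences: {entropy_bits(diff):.2f} bits/pixel")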

  19. A New Application of MSPIHT for Medical Imaging

    Directory of Open Access Journals (Sweden)

    Athmane ZITOUNI

    2012-01-01

    Full Text Available In this paper, we propose a new image-compression application for medical imaging based on the principle of the Set Partitioning In Hierarchical Trees (SPIHT) algorithm. Our approach, called modified SPIHT (MSPIHT), distributes entropy differently than SPIHT and also optimizes the coding. This approach can yield a significant improvement in peak signal-to-noise ratio (PSNR) and compression ratio over the SPIHT algorithm, without affecting the computing time. These results are also comparable with those obtained using the SPIHT and Joint Photographic Experts Group 2000 (JPG2) algorithms.
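
    Since the record reports gains in PSNR, a short reminder of how PSNR is conventionally computed for 8-bit images may help when reproducing such comparisons. The sketch below applies the standard definition to a synthetic reconstruction; it is not the MSPIHT or SPIHT coder itself.

        # PSNR in dB between an original 8-bit image and its reconstruction.
        import numpy as np

        def psnr_db(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
            mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        # Example: original image vs. a noisy stand-in for a decoded image.
        orig = (np.random.rand(128, 128) * 255).astype(np.uint8)
        recon = np.clip(orig + np.random.normal(0, 3, orig.shape), 0, 255).astype(np.uint8)
        print(f"PSNR = {psnr_db(orig, recon):.1f} dB")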

  20. Medical image of the week: alpha intrusion into REM sleep

    OpenAIRE

    Shetty S; Le T

    2015-01-01

    A 45-year-old woman with a past medical history of hypertension and chronic headaches was referred to the sleep laboratory because of a high clinical suspicion of sleep apnea, based on a history of snoring, witnessed apneas, and excessive daytime sleepiness. An overnight sleep study was performed. Images during N3 sleep and REM sleep are shown (Figures 1 and 2). Alpha intrusion into delta sleep is seen in patients with fibromyalgia, depression, chronic fatigue syndrome, anxiety disorder, and primary sleep...
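
    Alpha intrusion is usually appreciated visually on the EEG channels, but it can also be quantified. The sketch below, assuming NumPy and SciPy and a synthetic 30-second epoch, estimates alpha-band (8-12 Hz) and delta-band (0.5-4 Hz) power with Welch's method; the sampling rate, the signal, and the idea of reporting an alpha/delta ratio are illustrative assumptions, not a clinical scoring rule.

        # Band power per 30-s epoch via Welch's method, to flag alpha activity
        # appearing on top of slow-wave (delta) sleep.
        import numpy as np
        from scipy.signal import welch

        def band_power(sig: np.ndarray, fs: float, lo: float, hi: float) -> float:
            f, pxx = welch(sig, fs=fs, nperseg=int(fs * 4))
            band = (f >= lo) & (f <= hi)
            return float(pxx[band].sum() * (f[1] - f[0]))

        fs = 200.0                                  # assumed EEG sampling rate (Hz)
        t = np.arange(0, 30, 1 / fs)                # one 30-second epoch
        # Synthetic "N3 with alpha intrusion": delta rhythm plus a weaker alpha rhythm.
        eeg = (60 * np.sin(2 * np.pi * 1.5 * t)
               + 15 * np.sin(2 * np.pi * 10 * t)
               + np.random.normal(0, 5, t.size))

        ratio = band_power(eeg, fs, 8, 12) / band_power(eeg, fs, 0.5, 4)
        print(f"alpha/delta power ratio: {ratio:.2f}")  # an elevated ratio suggests intrusion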