WorldWideScience

Sample records for automated image analysis

  1. Automated ship image acquisition

    Science.gov (United States)

    Hammond, T. R.

    2008-04-01

    The experimental Automated Ship Image Acquisition System (ASIA) collects high-resolution ship photographs at a shore-based laboratory, with minimal human intervention. The system uses Automatic Identification System (AIS) data to direct a high-resolution SLR digital camera to ship targets and to identify the ships in the resulting photographs. The photo database is then searchable using the rich data fields from AIS, which include the name, type, call sign and various vessel identification numbers. The high-resolution images from ASIA are intended to provide information that can corroborate AIS reports (e.g., extract identification from the name on the hull) or provide information that has been omitted from the AIS reports (e.g., missing or incorrect hull dimensions, cargo, etc.). Once assembled into a searchable image database, the images can be used for a wide variety of marine safety and security applications. This paper documents the author's experience with the practicality of composing photographs based on AIS reports alone, describing a number of ways in which this can go wrong, from errors in the AIS reports to fixed and mobile obstructions and multiple ships in the shot. The frequency with which various errors occurred in automatically composed photographs collected in Halifax harbour in wintertime was determined by manual examination of the images. Of the images examined, 45% were considered of sufficient quality to read identification markings, numbers and text off the entire ship. One of the main technical challenges for ASIA lies in automatically differentiating good and bad photographs, so that few bad ones are shown to human users. Initial attempts at automatic photo rating showed 75% agreement with manual assessments.

  2. Automating Shallow Seismic Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Steeples, Don W.

    2004-12-09

    This seven-year, shallow-seismic reflection research project had the aim of improving geophysical imaging of possible contaminant flow paths. Thousands of chemically contaminated sites exist in the United States, including at least 3,700 at Department of Energy (DOE) facilities. Imaging technologies such as shallow seismic reflection (SSR) and ground-penetrating radar (GPR) sometimes are capable of identifying geologic conditions that might indicate preferential contaminant-flow paths. Historically, SSR has been used very little at depths shallower than 30 m, and even more rarely at depths of 10 m or less. Conversely, GPR is rarely useful at depths greater than 10 m, especially in areas where clay or other electrically conductive materials are present near the surface. Efforts to image the cone of depression around a pumping well using seismic methods were only partially successful (for complete references of all research results, see the full Final Technical Report, DOE/ER/14826-F), but peripheral results included development of SSR methods for depths shallower than one meter, a depth range that had not been achieved before. Imaging at such shallow depths, however, requires geophone intervals of the order of 10 cm or less, which makes such surveys very expensive in terms of human time and effort. We also showed that SSR and GPR could be used in a complementary fashion to image the same volume of earth at very shallow depths. The primary research focus of the second three-year period of funding was to develop and demonstrate an automated method of conducting two-dimensional (2D) shallow-seismic surveys with the goal of saving time, effort, and money. Tests involving the second generation of the hydraulic geophone-planting device dubbed the "Autojuggie" showed that large numbers of geophones can be placed quickly and automatically and can acquire high-quality data, although not under rough topographic conditions. In some easy

  3. Automated medical image segmentation techniques

    Directory of Open Access Journals (Sweden)

    Sharma Neeraj

    2010-01-01

    Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed tomography (CT) and magnetic resonance (MR) imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, specifically discussed in the context of CT and MR images. The motive is to discuss the problems encountered in segmentation of CT and MR images, and the relative merits and limitations of methods currently available for segmentation of medical images.

  4. AUTOMATION OF IMAGE DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Preuss Ryszard

    2014-12-01

    This article discusses the current capabilities for automated processing of image data, using Agisoft PhotoScan software as an example. At present, image data obtained by various registration systems (metric and non-metric cameras placed on airplanes, satellites, or, more often, on UAVs) are used to create photogrammetric products. Multiple registrations of an object or land area (in which large groups of photos are captured) are usually performed in order to eliminate obscured areas and to raise the final accuracy of the photogrammetric product. As a result, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, provide image georeference in an external reference frame. When non-metric images are used, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used to generate dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All of the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software, which divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential execution of the processing steps at predetermined control parameters. The paper presents practical results of fully automatic generation of orthomosaics, both for images obtained by a metric Vexcel camera and for a block of images acquired by a non-metric UAV system.

  5. Autoradiography and automated image analysis

    International Nuclear Information System (INIS)

    Vardy, P.H.; Willard, A.G.

    1982-01-01

    Limitations of automated image analysis and solutions to problems encountered are discussed. With transmitted light, unstained plastic sections with planar profiles should be used. Stains potentiate the signal, so that the television camera registers grains as falsely larger areas of low light intensity. Unfocussed grains in paraffin sections will not be seen by image analysers, due to changes in darkness and size. With incident illumination, the use of crossed polars, oil objectives and an oil-filled light trap continuous with the base of the slide will reduce glare. However, this procedure so enormously attenuates the light reflected by silver grains that detection may be impossible. Autoradiographs should then be photographed and the negative images of silver grains on film analysed automatically using transmitted light.

  6. Automated image enhancement using power law transformations

    Indian Academy of Sciences (India)

    We propose a scheme for automating power law transformations which are used for image enhancement. The scheme we propose does not require the user to choose the exponent in the power law transformation. This method works well for images having poor contrast, especially to those images in which the peaks ...
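
The power-law (gamma) transform behind this scheme maps normalized intensities r to s = r^γ, with γ < 1 brightening dark, low-contrast images. The abstract does not state the paper's exponent-selection rule, so the sketch below uses one common heuristic as an illustrative assumption: choose γ so the mean intensity maps to mid-grey. The function name and heuristic are not the authors' method.

```python
import numpy as np

def power_law_transform(image, gamma=None):
    """Apply s = r**gamma to an 8-bit image normalized to [0, 1].

    If gamma is None, pick it so the mean intensity maps to 0.5
    (one simple heuristic; illustrative only).
    """
    r = image.astype(float) / 255.0
    if gamma is None:
        mean = r.mean()
        # Solve mean**gamma = 0.5  ->  gamma = log(0.5) / log(mean)
        gamma = np.log(0.5) / np.log(max(mean, 1e-6))
    s = np.clip(r, 0.0, 1.0) ** gamma
    return (s * 255).astype(np.uint8)

dark = np.full((4, 4), 40, dtype=np.uint8)  # low-contrast dark image
enhanced = power_law_transform(dark)        # mean is pushed toward mid-grey
```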

  8. Automated imaging system for single molecules

    Science.gov (United States)

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.

  9. ARTIP: Automated Radio Telescope Image Processing Pipeline

    Science.gov (United States)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e., a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging, to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard Python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.

  10. Semi-automated Image Processing for Preclinical Bioluminescent Imaging.

    Science.gov (United States)

    Slavine, Nikolai V; McColl, Roderick W

    Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. In order to optimize this procedure, a semi-automated image processing approach within a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium, to determine a first-order approximation for the photon fluence; we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
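
The paper's final step is a novel iterative deconvolution whose details are not given here; a minimal 1-D Richardson-Lucy deconvolution, a standard iterative method used below purely as a stand-in, illustrates how a blurred source estimate is sharpened:

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=50):
    """Multiplicative Richardson-Lucy update: refine the estimate so that
    psf (*) estimate matches the observed blurred signal."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flip = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Blur a point source, then recover a sharper estimate of it.
true_signal = np.zeros(21)
true_signal[10] = 1.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(true_signal, psf, mode="same")
restored = richardson_lucy_1d(observed, psf)
```

After a few dozen iterations the restored peak is markedly sharper than the observed one, which is the behavior the reconstruction step relies on.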

  11. Automated Segmentation of Cardiac Magnetic Resonance Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Nilsson, Jens Chr.; Grønning, Bjørn A.

    2001-01-01

    Magnetic resonance imaging (MRI) has been shown to be an accurate and precise technique to assess cardiac volumes and function in a non-invasive manner and is generally considered to be the current gold standard for cardiac imaging [1]. Measurement of ventricular volumes, muscle mass and function is based on determination of the left-ventricular endocardial and epicardial borders. Since manual border detection is laborious, automated segmentation is highly desirable as a fast, objective and reproducible alternative. Automated segmentation will thus enhance comparability between and within cardiac studies and increase accuracy by allowing acquisition of thinner MRI slices. This abstract demonstrates that statistical models of shape and appearance, namely the deformable models known as Active Appearance Models, can successfully segment cardiac MRIs.

  12. Automated image segmentation using information theory

    International Nuclear Information System (INIS)

    Hibbard, L.S.

    2001-01-01

    Our development of automated contouring of CT images for RT planning is based on maximum a posteriori (MAP) analyses of region textures, edges, and prior shapes, and assumes stationary Gaussian distributions for voxel textures and contour shapes. Since models may not accurately represent image data, it would be advantageous to compute inferences without relying on models. The relative entropy (RE) from information theory can generate inferences based solely on the similarity of probability distributions. The entropy of a distribution of a random variable X is defined as -Σ_x p(x) log2 p(x), over all the values x which X may assume. The RE (Kullback-Leibler divergence) of two distributions p(X), q(X) over X is Σ_x p(x) log2 [p(x)/q(x)]. The RE is a kind of 'distance' between p and q, equaling zero when p = q and increasing as p and q become more different. Minimum-error MAP and likelihood-ratio decision rules have RE equivalents: minimum-error decisions obtain with functions of the differences between REs of compared distributions. One applied result is that the contour ideally separating two regions is the one that maximizes the relative entropy of the two regions' intensities. A program was developed that automatically contours the outlines of patients in stereotactic headframes, a situation that most often requires manual drawing. The relative entropy of intensities inside the contour (patient) versus outside (background) was maximized by conjugate gradient descent over the space of parameters of a deformable contour, yielding the computed segmentation of a patient from headframe backgrounds. This program is particularly useful for preparing images for multimodal image fusion. Relative entropy and allied measures of distribution similarity provide automated contouring criteria that do not depend on statistical models of image data. This approach should have wide utility in medical image segmentation applications. Copyright (2001) Australasian College of Physical Scientists and
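
The RE defined above is computed directly from two intensity histograms; the segmentation search then adjusts contour parameters to maximize it. A minimal sketch (the histogram values are made up for illustration):

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p||q) = sum_x p(x) log2 [p(x)/q(x)], in bits."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log2((p + eps) / (q + eps))))

# RE behaves like a 'distance': zero for identical distributions,
# growing as the two histograms diverge.
inside = np.array([0.1, 0.2, 0.7])   # intensity histogram inside a contour
outside = np.array([0.7, 0.2, 0.1])  # intensity histogram of the background
d_same = relative_entropy(inside, inside)   # ~0
d_diff = relative_entropy(inside, outside)  # large: good separation
```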

  13. Automated landmark-guided deformable image registration

    International Nuclear Information System (INIS)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small-volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the subsequent Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra-fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and on data from six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity-corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.
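
The Demons force underlying the registration described above has a compact closed form; the sketch below is one iteration of the classic Thirion update only, not the authors' full LDIR algorithm (which adds landmark control points and GPU parallelism on top of it):

```python
import numpy as np

def demons_step(fixed, moving):
    """One Thirion Demons update: displacement
    u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2),
    driving the moving image toward the fixed image."""
    f = fixed.astype(float)
    m = moving.astype(float)
    gy, gx = np.gradient(f)
    diff = m - f
    denom = gx**2 + gy**2 + diff**2
    denom = np.where(denom == 0.0, 1.0, denom)  # avoid division by zero
    return diff * gx / denom, diff * gy / denom

# Identical images produce zero displacement everywhere.
img = np.arange(16, dtype=float).reshape(4, 4)
ux, uy = demons_step(img, img)
```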

  15. Automated Quality Assurance Applied to Mammographic Imaging

    Directory of Open Access Journals (Sweden)

    Anne Davis

    2002-07-01

    Quality control in mammography is based upon subjective interpretation of the image quality of a test phantom. In order to suppress subjectivity due to the human observer, automated computer analysis of the Leeds TOR(MAM) test phantom is investigated. Texture analysis via grey-level co-occurrence matrices is used to detect structures in the test object. Scoring of the substructures in the phantom is based on grey-level differences between regions and information from grey-level co-occurrence matrices. The results from scoring groups of particles within the phantom are presented.
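
Grey-level co-occurrence matrices, the texture tool named above, tabulate how often pairs of grey levels occur at a fixed pixel offset; features such as contrast are then read off the normalized table. A minimal sketch for a horizontal offset (the phantom-specific scoring rules are not reproduced):

```python
import numpy as np

def glcm_horizontal(image, levels):
    """Normalized grey-level co-occurrence matrix for offset (dx, dy) = (1, 0)."""
    img = np.asarray(image)
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
    m = np.zeros((levels, levels), dtype=float)
    np.add.at(m, (left, right), 1.0)  # count each horizontal grey-level pair
    return m / m.sum()

def contrast(p):
    """GLCM contrast feature: sum_ij p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

flat = np.zeros((4, 4), dtype=int)            # uniform region: no texture
checker = np.indices((4, 4)).sum(axis=0) % 2  # checkerboard: strong texture
```

A uniform patch yields zero contrast, while the checkerboard, whose horizontal neighbours always differ, maximizes it.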

  16. Automated Aesthetic Analysis of Photographic Images.

    Science.gov (United States)

    Aydın, Tunç Ozan; Smolic, Aljoscha; Gross, Markus

    2015-01-01

    We present a perceptually calibrated system for automatic aesthetic evaluation of photographic images. Our work builds upon the concepts of no-reference image quality assessment, with the main difference being our focus on rating image aesthetic attributes rather than detecting image distortions. In contrast to recent attempts on highly subjective aesthetic judgment problems, such as binary aesthetic classification and the prediction of an image's overall aesthetics rating, our method aims at providing a reliable objective basis of comparison between aesthetic properties of different photographs. To that end our system computes perceptually calibrated ratings for a set of fundamental and meaningful aesthetic attributes that together form an "aesthetic signature" of an image. We show that aesthetic signatures can be used to improve upon the current state of the art in automatic aesthetic judgment, and also to enable interesting new photo editing applications such as automated aesthetic analysis, HDR tone mapping evaluation, and aesthetic feedback during multi-scale contrast manipulation.

  17. Automated Image Analysis Corrosion Working Group Update: February 1, 2018

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-01

    These are slides for the automated image analysis corrosion working group update. The overall goals were: automate the detection and quantification of features in images (faster, more accurate), how to do this (obtain data, analyze data), focus on Laser Scanning Confocal Microscope (LCM) data (laser intensity, laser height/depth, optical RGB, optical plus laser RGB).

  18. Automated mapping of the intertidal beach from video images

    NARCIS (Netherlands)

    Uunk, L.; Uunk, L.; Wijnberg, Kathelijne Mariken; Morelissen, R.; Morelissen, R.

    2010-01-01

    This paper presents a fully automated procedure to derive the intertidal beach bathymetry on a daily basis from video images of low-sloping beaches that are characterised by the intermittent emergence of intertidal bars. Bathymetry data are obtained by automated and repeated mapping of shorelines

  19. Automated image analysis of the pathological lung in CT

    NARCIS (Netherlands)

    Sluimer, Ingrid Christine

    2005-01-01

    The general objective of the thesis is automation of the analysis of the pathological lung from CT images. Specifically, we aim for automated detection and classification of abnormalities in the lung parenchyma. We first provide a review of computer analysis techniques applied to CT of the

  20. Automated feature extraction and classification from image sources

    Science.gov (United States)

    ,

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  1. Automated Detection of Optic Disc in Fundus Images

    Science.gov (United States)

    Burman, R.; Almazroa, A.; Raahemifar, K.; Lakshminarayanan, V.

    Optic disc (OD) localization is an important preprocessing step in automated detection of glaucoma from fundus images. An Interval Type-II fuzzy entropy based thresholding scheme, along with Differential Evolution (DE), is applied to determine the location of the OD in right- or left-eye retinal fundus images. The algorithm, when applied to 460 fundus images from the MESSIDOR dataset, shows a success rate of 99.07% for 217 normal images and 95.47% for 243 pathological images. The mean computational time is 1.709 s for normal images and 1.753 s for pathological images. These results are important for automated detection of glaucoma and for telemedicine purposes.
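
The interval type-II fuzzy entropy criterion is too involved for a short sketch, but its classical ancestor conveys the idea: Kapur's maximum-entropy thresholding picks the grey level that maximizes the summed entropies of the foreground and background histograms. The code below is that type-1 stand-in, not the paper's method, and uses exhaustive search rather than Differential Evolution:

```python
import numpy as np

def kapur_threshold(image, bins=256):
    """Exhaustive maximum-entropy (Kapur) threshold search over a histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue  # one class empty: threshold not usable
        a, b = p[:t] / p0, p[t:] / p1
        h = -(a[a > 0] * np.log(a[a > 0])).sum() \
            - (b[b > 0] * np.log(b[b > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Synthetic bimodal image: grey levels 40-60 and 190-210.
img = np.concatenate([np.repeat(np.arange(40, 61), 5),
                      np.repeat(np.arange(190, 211), 5)])
t = kapur_threshold(img)  # lands between the two modes
```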

  2. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation.

    Directory of Open Access Journals (Sweden)

    Oscar Beijbom

    Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi- and fully-automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys.
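
Cover estimation from point annotations, the quantity both the experts and the automated annotators produce above, reduces to the fraction of annotated points assigned to each category. A toy sketch (the labels are made up; category names are illustrative):

```python
from collections import Counter

def cover_estimates(point_labels):
    """Fraction of annotated points per benthic category (percent cover / 100)."""
    counts = Counter(point_labels)
    n = len(point_labels)
    return {category: count / n for category, count in counts.items()}

# Eight made-up point labels for one survey image.
labels = ["coral", "turf", "coral", "cca", "turf", "turf", "sand", "coral"]
est = cover_estimates(labels)  # e.g. est["coral"] is 3/8
```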

  3. Automated diabetic retinopathy imaging in Indian eyes: A pilot study

    Directory of Open Access Journals (Sweden)

    Rupak Roy

    2014-01-01

    Aim: To evaluate the efficacy of an automated retinal image grading system in diabetic retinopathy (DR) screening. Materials and Methods: Color fundus images of patients in a DR screening project were analyzed for the purpose of the study. For each eye, two sets of images were acquired, one centered on the optic disc and the other centered on the macula. All images were processed by automated DR screening software (Retmarker). The results were compared to ophthalmologist grading of the same set of photographs. Results: 5780 images of 1445 patients were analyzed. Patients were screened into two categories: DR or no DR. Image quality was high, medium and low in 71 (4.91%), 1117 (77.30%) and 257 (17.78%) patients respectively. Specificity and sensitivity for detecting DR in the high, medium and low groups were (0.59, 0.91), (0.11, 0.95) and (0.93, 0.14), respectively. Conclusion: The automated retinal image screening system for DR had a high sensitivity in high- and medium-quality images. Automated DR grading software holds promise for future screening programs.
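
The paired figures above are (specificity, sensitivity) per image-quality group, computed from the standard confusion-matrix definitions. A sketch with hypothetical counts, chosen only so the output matches the high-quality group's 0.59/0.91 pair and not taken from the paper:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts for illustration only.
sens, spec = sensitivity_specificity(tp=91, fn=9, tn=59, fp=41)
```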

  4. Automated identification of animal species in camera trap images

    NARCIS (Netherlands)

    Yu, X.; Wang, J.; Kays, R.; Jansen, P.A.; Wang, T.; Huang, T.

    2013-01-01

    Image sensors are increasingly being used in biodiversity monitoring, with each study generating many thousands or millions of pictures. Efficiently identifying the species captured by each image is a critical challenge for the advancement of this field. Here, we present an automated species

  5. Automated Registration Of Images From Multiple Sensors

    Science.gov (United States)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.; Pang, Shirley S. N.

    1994-01-01

    Images of terrain scanned in common by multiple Earth-orbiting remote sensors registered automatically with each other and, where possible, on geographic coordinate grid. Simulated image of terrain viewed by sensor computed from ancillary data, viewing geometry, and mathematical model of physics of imaging. In proposed registration algorithm, simulated and actual sensor images matched by area-correlation technique.
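A sketch of the area-correlation technique named above: slide the (simulated) template over the sensor image and keep the offset with the highest zero-mean normalized cross-correlation. This is a generic illustration, not the flight software:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(template, search):
    """Slide the template over the search image; return (row, col) of max NCC."""
    th, tw = template.shape
    best, best_score = (0, 0), -2.0
    for y in range(search.shape[0] - th + 1):
        for x in range(search.shape[1] - tw + 1):
            score = ncc(template, search[y:y + th, x:x + tw])
            if score > best_score:
                best, best_score = (y, x), score
    return best, best_score

# Cut a patch out of a synthetic 'sensor image' and relocate it.
search = np.random.default_rng(0).normal(size=(10, 10))
template = search[3:6, 4:7].copy()
offset, score = best_match(template, search)  # recovers the cut location
```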

  6. Automation of Cassini Support Imaging Uplink Command Development

    Science.gov (United States)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  7. Cardiac imaging: working towards fully-automated machine analysis & interpretation.

    Science.gov (United States)

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-03-01

    Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.

  8. Automated image enhancement using power law transformations

    Indian Academy of Sciences (India)

    Automatically enhancing contrast of an image has been a challenging task since the digital image can represent variety of scene types. Trifonov et al (2001) performed automatic contrast enhancement by automatically determining the measure of central tendency of the brightness histogram of an image and shifting and ...

  9. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance. PMID:25838818
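
A fuzzy membership degree, the quantity this method attaches to each emotional category, maps a feature value to a grade in [0, 1] rather than a hard label. A minimal triangular membership function; the shape, parameter values, and feature/class names below are generic illustrations, not the paper's learned memberships:

```python
def triangular_membership(x, a, b, c):
    """Membership rising linearly from a to the peak at b, then falling
    to c; zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# E.g. how strongly a hypothetical "warmth" feature of 0.6 belongs to a
# "cheerful" class peaking at 0.7.
degree = triangular_membership(0.6, a=0.3, b=0.7, c=1.0)
```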

  10. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.
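The fuzzy membership idea — a degree in [0, 1] describing how strongly an image evokes an emotion — can be illustrated with triangular membership functions. The emotion axis, category names, and breakpoints below are hypothetical stand-ins, not the paper's actual fuzzy sets:

```python
def triangular_membership(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

# Hypothetical emotion axis in [0, 1]: 0 = "calm", 1 = "exciting".
emotions = {
    "calm":     (0.0, 0.0, 0.5),
    "neutral":  (0.2, 0.5, 0.8),
    "exciting": (0.5, 1.0, 1.0),
}

score = 0.5  # e.g. a classifier output for one scene image
degrees = {name: triangular_membership(score, *tri) for name, tri in emotions.items()}
```

A mid-axis score belongs fully to the "neutral" set and not at all to the shoulder sets, capturing the graded (rather than hard) labelling that fuzzy annotation aims for.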

  11. Automated Acquisition and Analysis of Digital Radiographic Images

    International Nuclear Information System (INIS)

    Poland, R.

    1999-01-01

Engineers at the Savannah River Technology Center have designed, built, and installed a fully automated small field-of-view, lens-coupled, digital radiography imaging system. The system is installed in one of the Savannah River Site's production facilities to be used for the evaluation of production components. Custom software routines developed for the system automatically acquire, enhance, and diagnostically evaluate critical geometric features of various components that have been captured radiographically. Resolution of the digital radiograms and accuracy of the acquired measurements approaches 0.001 inches. To date, there has been zero deviation in measurement repeatability. The automated image acquisition methodology will be discussed, unique enhancement algorithms will be explained, and the automated routines for measuring the critical component features will be presented. An additional feature discussed is the independent nature of the modular software components, which allows images to be automatically acquired, processed, and evaluated by the computer in the background, while the operator reviews other images on the monitor. System components were also key in gaining the required image resolution. System factors such as scintillator selection, x-ray source energy, optical components and layout, as well as geometric unsharpness issues are considered in the paper. Finally, the paper examines the numerous quality improvement factors and cost saving advantages that will be realized at the Savannah River Site due to the implementation of the Automated Pinch Weld Analysis System (APWAS).

  12. Automated ion imaging with the NanoSIMS ion microprobe

    Science.gov (United States)

    Gröner, E.; Hoppe, P.

    2006-07-01

    Automated ion imaging systems developed for Cameca IMS3f and IMS6f ion microprobes are very useful for the analysis of large numbers of presolar dust grains, in particular with respect to the identification of rare types of presolar grains. The application of these systems is restricted to the study of micrometer-sized grains, thereby by-passing the major fraction of presolar grains which are sub-micrometer in size. The new generation Cameca NanoSIMS 50 ion microprobe combines high spatial resolution, high sensitivity, and simultaneous detection of up to 6 isotopes which makes the NanoSIMS an unprecedented tool for the analysis of presolar materials. Here, we report on the development of a fully automated ion imaging system for the NanoSIMS at MPI for Chemistry in order to extend its analytical capabilities further. The ion imaging consists of five steps: (i) Removal of surface contamination on the area of interest. (ii) Secondary ion image acquisition of up to 5 isotopes in multi-detection. (iii) Automated particle recognition in a pre-selected image. (iv) Automated measurement of all recognised particles with appropriate raster sizes and measurement times. (v) Stage movement to new area and repetition of steps (ii)-(iv).
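Step (iii) above, automated particle recognition in a pre-selected image, can be sketched as connected-component labelling of an intensity threshold mask. This is a generic stand-in for the NanoSIMS system's recognition step; the threshold and function names are illustrative:

```python
import numpy as np
from collections import deque

def find_particles(img, threshold):
    """Label 4-connected regions above `threshold`; return (centroid, size) per particle."""
    mask = img > threshold
    labels = np.zeros(img.shape, dtype=int)
    particles = []
    next_label = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                      # already assigned to a particle
        next_label += 1
        labels[start] = next_label
        queue, pixels = deque([start]), []
        while queue:                      # breadth-first flood fill
            y, x = queue.popleft()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
        ys, xs = zip(*pixels)
        particles.append(((sum(ys) / len(ys), sum(xs) / len(xs)), len(pixels)))
    return particles

img = np.zeros((8, 8))
img[1:3, 1:3] = 10   # one 4-pixel particle
img[5, 6] = 10       # one single-pixel particle
found = find_particles(img, 5)
```

The centroids returned by such a step are what drive step (iv): each recognised particle is revisited with an appropriate raster size for measurement.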

  13. Automated ion imaging with the NanoSIMS ion microprobe

    International Nuclear Information System (INIS)

    Groener, E.; Hoppe, P.

    2006-01-01

    Automated ion imaging systems developed for Cameca IMS3f and IMS6f ion microprobes are very useful for the analysis of large numbers of presolar dust grains, in particular with respect to the identification of rare types of presolar grains. The application of these systems is restricted to the study of micrometer-sized grains, thereby by-passing the major fraction of presolar grains which are sub-micrometer in size. The new generation Cameca NanoSIMS 50 ion microprobe combines high spatial resolution, high sensitivity, and simultaneous detection of up to 6 isotopes which makes the NanoSIMS an unprecedented tool for the analysis of presolar materials. Here, we report on the development of a fully automated ion imaging system for the NanoSIMS at MPI for Chemistry in order to extend its analytical capabilities further. The ion imaging consists of five steps: (i) Removal of surface contamination on the area of interest. (ii) Secondary ion image acquisition of up to 5 isotopes in multi-detection. (iii) Automated particle recognition in a pre-selected image. (iv) Automated measurement of all recognised particles with appropriate raster sizes and measurement times. (v) Stage movement to new area and repetition of steps (ii)-(iv)

  14. Automated image registration for FDOPA PET studies

    International Nuclear Information System (INIS)

Kang-Ping Lin; Sung-Cheng Huang; Dan-Chu Yu; Melega, W.; Barrio, J.R.; Phelps, M.E.

    1996-01-01

In this study, various image registration methods are investigated for their suitability for registration of L-6-[18F]-fluoro-DOPA (FDOPA) PET images. Five different optimization criteria including sum of absolute difference (SAD), mean square difference (MSD), cross-correlation coefficient (CC), standard deviation of pixel ratio (SDPR), and stochastic sign change (SSC) were implemented and Powell's algorithm was used to optimize the criteria. The optimization criteria were calculated either unidirectionally (i.e. only evaluating the criteria for comparing the resliced image 1 with the original image 2) or bidirectionally (i.e. averaging the criteria for comparing the resliced image 1 with the original image 2 and those for the resliced image 2 with the original image 1). Monkey FDOPA images taken at various known orientations were used to evaluate the accuracy of different methods. A set of human FDOPA dynamic images was used to investigate the ability of the methods for correcting subject movement. It was found that a large improvement in performance resulted when bidirectional rather than unidirectional criteria were used. Overall, the SAD, MSD and SDPR methods were found to be comparable in performance and were suitable for registering FDOPA images. The MSD method gave more adequate results for frame-to-frame image registration for correcting subject movement during a dynamic FDOPA study. The utility of the registration method is further demonstrated by registering FDOPA images in monkeys before and after amphetamine injection to reveal more clearly the changes in spatial distribution of FDOPA due to the drug intervention. (author)
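Three of the criteria above (SAD, MSD, CC) and the bidirectional averaging can be sketched as follows. This is a minimal illustration only — the study optimized these criteria over reslicing parameters with Powell's algorithm, whereas `reslice` here is a placeholder for that resampling:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences: 0 for identical images."""
    return np.abs(a - b).sum()

def msd(a, b):
    """Mean square difference: 0 for identical images."""
    return ((a - b) ** 2).mean()

def cc(a, b):
    """Cross-correlation coefficient: 1 for linearly related images."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def bidirectional(criterion, img1, img2, reslice):
    """Average the criterion over both directions, which the study found
    to clearly improve registration performance over the one-way version."""
    return 0.5 * (criterion(reslice(img1), img2) + criterion(reslice(img2), img1))

identity = lambda x: x                 # stand-in reslice (no transformation)
a = np.arange(9.0).reshape(3, 3)
b = a + 1.0                            # a uniform +1 intensity shift
score = bidirectional(msd, a, b, identity)  # 1.0 for the uniform shift
```

Note that CC is invariant to the uniform shift (it stays 1.0) while SAD and MSD are not — one reason different criteria can rank candidate alignments differently.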

  15. FULLY AUTOMATED IMAGE ORIENTATION IN THE ABSENCE OF TARGETS

    Directory of Open Access Journals (Sweden)

    C. Stamatopoulos

    2012-07-01

Automated close-range photogrammetric network orientation has traditionally been associated with the use of coded targets in the object space to allow for an initial relative orientation (RO) and subsequent spatial resection of the images. Over the past decade, automated orientation via feature-based matching (FBM) techniques has attracted renewed research attention in both the photogrammetry and computer vision (CV) communities. This is largely due to advances made towards the goal of automated relative orientation of multi-image networks covering untargeted (markerless) objects. There are now a number of CV-based algorithms, with accompanying open-source software, that can achieve multi-image orientation within narrow-baseline networks. From a photogrammetric standpoint, the results are typically disappointing as the metric integrity of the resulting models is generally poor, or even unknown, while the number of outliers within the image matching and triangulation is large, and generally too large to allow relative orientation (RO) via the commonly used coplanarity equations. On the other hand, there are few examples within the photogrammetric research field of automated markerless camera calibration to metric tolerances, and these too are restricted to narrow-baseline, low-convergence imaging geometry. The objective addressed in this paper is markerless automatic multi-image orientation, maintaining metric integrity, within networks that incorporate wide-baseline imagery. By wide-baseline we imply convergent multi-image configurations with convergence angles of up to around 90°. An associated aim is provision of a fast, fully automated process, which can be performed without user intervention. For this purpose, various algorithms require optimisation to allow parallel processing utilising multiple PC cores and graphics processing units (GPUs).

  16. Automated designation of tie-points for image-to-image coregistration.

    Science.gov (United States)

    R.E. Kennedy; W.B. Cohen

    2003-01-01

Image-to-image registration requires identification of common points in both images (image tie-points: ITPs). Here we describe software implementing an automated, area-based technique for identifying ITPs. The ITP software was designed to follow two strategies: (1) capitalize on human knowledge and pattern recognition strengths, and (2) favour robustness in many...
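Area-based tie-point matching of the kind described is commonly implemented with normalized cross-correlation: slide a template patch from one image over the other and keep the offset with the highest correlation. A brute-force sketch (not the authors' software):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally-sized patches (range [-1, 1])."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def match_tie_point(template, image):
    """Exhaustively slide `template` over `image`; return (top-left offset, best NCC)."""
    th, tw = template.shape
    best, best_pos = -2.0, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            score = ncc(template, image[y:y + th, x:x + tw])
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

rng = np.random.default_rng(0)
image = rng.random((10, 10))
template = image[3:6, 4:7].copy()   # a patch cut from the image itself
pos, score = match_tie_point(template, image)
```

Recovering the patch's own location with a near-perfect score is the degenerate sanity check; in practice the template comes from the second image and the peak NCC offset becomes one ITP candidate.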

  17. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Fiehn, Anne-Marie Kanstrup; Kristensson, Martin; Engel, Ulla

    2016-01-01

    PURPOSE: The aim of this study was to develop an automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...

  18. An automated vessel segmentation of retinal images using multiscale vesselness

    International Nuclear Information System (INIS)

    Ben Abdallah, M.; Malek, J.; Tourki, R.; Krissian, K.

    2011-01-01

The ocular fundus image can provide information on pathological changes caused by local ocular diseases and early signs of certain systemic diseases, such as diabetes and hypertension. Automated analysis and interpretation of fundus images has become a necessary and important diagnostic procedure in ophthalmology. The extraction of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. In this paper, we introduce an implementation of anisotropic diffusion that reduces noise while better preserving small structures, such as vessels, in 2D images. A vessel detection filter, based on a multi-scale vesselness function, is then applied to enhance vascular structures.
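A minimal sketch of edge-preserving anisotropic diffusion in the Perona-Malik style — not the paper's implementation; periodic borders are used here for brevity, and `kappa`/`step` are illustrative:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.2, step=0.2):
    """Diffuse strongly in flat regions and weakly across strong gradients,
    so noise is smoothed while vessel-like edges are preserved."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences to the four neighbours (periodic borders via roll).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Perona-Malik conductance: near 1 for small gradients, near 0 for large.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((16, 16))
clean[:, 8:] = 1.0                                   # a sharp vertical edge
noisy = clean + 0.05 * rng.standard_normal((16, 16))
smoothed = perona_malik(noisy)
```

On this synthetic image the noise variance in the flat halves drops while the unit-height edge survives almost untouched, since the conductance at the edge, exp(-(1/0.2)²), is effectively zero.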

  19. Automated hybridization/imaging device for fluorescent multiplex DNA sequencing

    Science.gov (United States)

    Weiss, Robert B.; Kimball, Alvin W.; Gesteland, Raymond F.; Ferguson, F. Mark; Dunn, Diane M.; Di Sera, Leonard J.; Cherry, Joshua L.

    1995-01-01

A method is disclosed for automated multiplex sequencing of DNA with an integrated automated imaging hybridization chamber system. This system comprises a hybridization chamber device for mounting a membrane containing size-fractionated multiplex sequencing reaction products, apparatus for fluid delivery to the chamber device, imaging apparatus for light delivery to the membrane and image recording of fluorescence emanating from the membrane while in the chamber device, and programmable controller apparatus for controlling operation of the system. The multiplex reaction products are hybridized with a probe, then an enzyme (such as alkaline phosphatase) is bound to a binding moiety on the probe, and a fluorogenic substrate (such as a benzothiazole derivative) is introduced into the chamber device by the fluid delivery apparatus. The enzyme converts the fluorogenic substrate into a fluorescent product which, when illuminated in the chamber device with a beam of light from the imaging apparatus, excites fluorescence of the fluorescent product to produce a pattern of hybridization. The pattern of hybridization is imaged by a CCD camera component of the imaging apparatus to obtain a series of digital signals. These signals are converted by the controller apparatus into a string of nucleotides corresponding to the nucleotide sequence by an automated sequence reader. The method and apparatus are also applicable to other membrane-based applications such as colony and plaque hybridization and Southern, Northern, and Western blots.

  20. An Automated, Image Processing System for Concrete Evaluation

    International Nuclear Information System (INIS)

    Baumgart, C.W.; Cave, S.P.; Linder, K.E.

    1998-01-01

Allied Signal Federal Manufacturing & Technologies (FM&T) was asked to perform a proof-of-concept study for the Missouri Highway and Transportation Department (MHTD), Research Division, in June 1997. The goal of this proof-of-concept study was to ascertain if automated scanning and imaging techniques might be applied effectively to the problem of concrete evaluation. In the current evaluation process, a concrete sample core is manually scanned under a microscope. Voids (or air spaces) within the concrete are then detected visually by a human operator by incrementing the sample under the cross-hairs of a microscope and by counting the number of 'pixels' which fall within a void. Automation of the scanning and image analysis processes is desired to improve the speed of the scanning process, to improve evaluation consistency, and to reduce operator fatigue. An initial, proof-of-concept image analysis approach was successfully developed and demonstrated using acquired black and white imagery of concrete samples. In this paper, the automated scanning and image capture system currently under development will be described and the image processing approach developed for the proof-of-concept study will be demonstrated. A development update and plans for future enhancements are also presented.

  1. Automated vasculature extraction from placenta images

    Science.gov (United States)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.

  2. Automated Pointing of Cardiac Imaging Catheters.

    Science.gov (United States)

    Loschak, Paul M; Brattain, Laura J; Howe, Robert D

    2013-12-31

    Intracardiac echocardiography (ICE) catheters enable high-quality ultrasound imaging within the heart, but their use in guiding procedures is limited due to the difficulty of manually pointing them at structures of interest. This paper presents the design and testing of a catheter steering model for robotic control of commercial ICE catheters. The four actuated degrees of freedom (4-DOF) are two catheter handle knobs to produce bi-directional bending in combination with rotation and translation of the handle. An extra degree of freedom in the system allows the imaging plane (dependent on orientation) to be directed at an object of interest. A closed form solution for forward and inverse kinematics enables control of the catheter tip position and the imaging plane orientation. The proposed algorithms were validated with a robotic test bed using electromagnetic sensor tracking of the catheter tip. The ability to automatically acquire imaging targets in the heart may improve the efficiency and effectiveness of intracardiac catheter interventions by allowing visualization of soft tissue structures that are not visible using standard fluoroscopic guidance. Although the system has been developed and tested for manipulating ICE catheters, the methods described here are applicable to any long thin tendon-driven tool (with single or bi-directional bending) requiring accurate tip position and orientation control.

  3. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    Science.gov (United States)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration; multi-temporal imagery; mathematical morphology; robust feature matching.
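A scale-dependent morphological profile — the residual of the image against grey-scale openings at increasing structuring-element sizes — can be sketched as below. This uses flat square elements only and is a simplification: the paper's profile also varies orientation, and the function names are mine:

```python
import numpy as np

def local_filter(img, size, op):
    """Apply `op` (np.min for erosion, np.max for dilation) over a size x size
    window at every pixel, with edge-replicated borders."""
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = op(p[y:y + size, x:x + size])
    return out

def grey_open(img, size):
    """Grey-scale opening: erosion (min) followed by dilation (max)."""
    return local_filter(local_filter(img, size, np.min), size, np.max)

def morphological_profile(img, sizes=(3, 5, 7)):
    """Opening residuals at increasing scales: a bright feature vanishes at the
    scale that first exceeds its extent, giving each pixel a scale signature."""
    img = img.astype(float)
    return np.stack([img - grey_open(img, s) for s in sizes])

img = np.zeros((9, 9))
img[4, 4] = 1.0                       # a one-pixel bright landmark
profile = morphological_profile(img)
```

The single-pixel feature is removed already by the smallest opening, so its full intensity appears in the first residual plane — exactly the kind of per-pixel signature that can be compared with a dissimilarity metric to pick landmark chips.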

  4. Automated delineation of stroke lesions using brain CT images

    Directory of Open Access Journals (Sweden)

    Céline R. Gillebert

    2014-01-01

Computed tomographic (CT) images are widely used for the identification of abnormal brain tissue following infarct and hemorrhage in stroke. Manual lesion delineation is currently the standard approach, but is both time-consuming and operator-dependent. To address these issues, we present a method that can automatically delineate infarct and hemorrhage in stroke CT images. The key elements of this method are the accurate normalization of CT images from stroke patients into template space and the subsequent voxelwise comparison with a group of control CT images for defining areas with hypo- or hyper-intense signals. Our validation, using simulated and actual lesions, shows that our approach is effective in reconstructing lesions resulting from both infarct and hemorrhage and yields lesion maps spatially consistent with those produced manually by expert operators. A limitation is that, relative to manual delineation, there is reduced sensitivity of the automated method in regions close to the ventricles and the brain contours. However, the automated method presents a number of benefits in terms of offering significant time savings and the elimination of the inter-operator differences inherent to manual tracing approaches. These factors are relevant for the creation of large-scale lesion databases for neuropsychological research. The automated delineation of stroke lesions from CT scans may also enable longitudinal studies to quantify changes in damaged tissue in an objective and reproducible manner.
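The voxelwise comparison against a group of control images can be sketched as a z-score test per voxel, assuming all images have already been normalized into a common template space. The threshold and synthetic data below are illustrative, not the paper's actual statistics:

```python
import numpy as np

def delineate(patient, controls, z_thresh=3.0):
    """Flag voxels where the patient deviates from the control distribution:
    hypo-intense (infarct-like) and hyper-intense (hemorrhage-like) masks."""
    mu = controls.mean(axis=0)
    sd = controls.std(axis=0, ddof=1) + 1e-6   # guard against zero variance
    z = (patient - mu) / sd
    return z < -z_thresh, z > z_thresh

rng = np.random.default_rng(1)
controls = rng.normal(100.0, 1.0, size=(20, 8, 8))  # synthetic normalized control scans
patient = controls.mean(axis=0).copy()
patient[1, 1] = 80.0    # simulated hypo-intense (infarct-like) voxel
patient[6, 6] = 120.0   # simulated hyper-intense (hemorrhage-like) voxel
hypo, hyper = delineate(patient, controls)
```

Because every other voxel of this synthetic patient equals the control mean exactly, only the two seeded abnormalities are flagged — the binary maps that, in the real method, become the lesion delineation.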

  5. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information that is being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6 month period, daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  6. Tools for automating the imaging of zebrafish larvae.

    Science.gov (United States)

    Pulak, Rock

    2016-03-01

The VAST BioImager system is a set of tools developed for zebrafish researchers who require the collection of images from a large number of 2-7 dpf zebrafish larvae. The VAST BioImager automates larval handling, positioning and orientation tasks. Color images at about 10 μm resolution are collected from the on-board camera of the system. If images of greater resolution and detail are required, this system is mounted on an upright microscope, such as a confocal or fluorescence microscope, to utilize their capabilities. The system loads a larva, positions it in view of the camera, determines orientation using pattern recognition analysis, and then more precisely positions it to a user-defined orientation for optimal imaging of any desired tissue or organ system. Multiple images of the same larva can be collected. The specific part of each larva and the desired orientation and position are identified by the researcher, and an experiment defining the settings and a series of steps can be saved and repeated for imaging of subsequent larvae. The system captures images, then ejects and loads another larva from either a bulk reservoir, a well of a 96 well plate using the LP Sampler, or individually targeted larvae from a Petri dish or other container using the VAST Pipettor. Alternative manual protocols for handling larvae for image collection are tedious and time consuming. The VAST BioImager automates these steps to allow for greater throughput of assays and screens requiring high-content image collection of zebrafish larvae such as might be used in drug discovery and toxicology studies. Copyright © 2015 The Author. Published by Elsevier Inc. All rights reserved.

  7. An automated and simple method for brain MR image extraction

    Directory of Open Access Journals (Sweden)

    Zhu Zixin

    2011-09-01

Background: The extraction of brain tissue from magnetic resonance head images is an important image processing step for the analysis of neuroimage data. The authors have developed an automated and simple brain extraction method using an improved geometric active contour model. Methods: The method uses an improved geometric active contour model which can not only solve the boundary leakage problem but is also less sensitive to intensity inhomogeneity. The method defines the initial function as a binary level set function to improve computational efficiency. The method is applied to both our data and Internet brain MR data provided by the Internet Brain Segmentation Repository. Results: The results obtained from our method are compared with manual segmentation results using multiple indices. In addition, the method is compared to two popular methods, Brain Extraction Tool and Model-based Level Set. Conclusions: The proposed method can provide automated and accurate brain extraction results with high efficiency.

  8. Image mosaicing for automated pipe scanning

    International Nuclear Information System (INIS)

    Summan, Rahul; Dobie, Gordon; Guarato, Francesco; MacLeod, Charles; Marshall, Stephen; Pierce, Gareth; Forrester, Cailean; Bolton, Gary

    2015-01-01

Remote visual inspection (RVI) is critical for the inspection of the interior condition of pipelines particularly in the nuclear and oil and gas industries. Conventional RVI equipment produces a video which is analysed online by a trained inspector employing expert knowledge. Due to the potentially disorientating nature of the footage, this is a time intensive and difficult activity. In this paper a new probe for such visual inspections is presented. The device employs a catadioptric lens coupled with feature based structure from motion to create a 3D model of the interior surface of a pipeline. Reliance upon the availability of image features is mitigated through orientation and distance estimates from an inertial measurement unit and encoder respectively. Such a model affords a global view of the data thus permitting a greater appreciation of the nature and extent of defects. Furthermore, the technique estimates the 3D position and orientation of the probe thus providing information to direct remedial action. Results are presented for both synthetic and real pipe sections. The former enables the accuracy of the generated model to be assessed while the latter demonstrates the efficacy of the technique in practice.

  9. Image mosaicing for automated pipe scanning

    Energy Technology Data Exchange (ETDEWEB)

Summan, Rahul, E-mail: rahul.summan@strath.ac.uk; Dobie, Gordon; Guarato, Francesco; MacLeod, Charles; Marshall, Stephen; Pierce, Gareth [Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, G1 1XW (United Kingdom); Forrester, Cailean [Inspectahire Instrument Company Ltd, Units 10-12 Whitemyres Business Centre, Whitemyres Avenue, Aberdeen, AB16 6HQ (United Kingdom); Bolton, Gary [National Nuclear Laboratory, Chadwick House, Warrington Road, Birchwood Park, Warrington, WA3 6AE (United Kingdom)

    2015-03-31

Remote visual inspection (RVI) is critical for the inspection of the interior condition of pipelines particularly in the nuclear and oil and gas industries. Conventional RVI equipment produces a video which is analysed online by a trained inspector employing expert knowledge. Due to the potentially disorientating nature of the footage, this is a time intensive and difficult activity. In this paper a new probe for such visual inspections is presented. The device employs a catadioptric lens coupled with feature based structure from motion to create a 3D model of the interior surface of a pipeline. Reliance upon the availability of image features is mitigated through orientation and distance estimates from an inertial measurement unit and encoder respectively. Such a model affords a global view of the data thus permitting a greater appreciation of the nature and extent of defects. Furthermore, the technique estimates the 3D position and orientation of the probe thus providing information to direct remedial action. Results are presented for both synthetic and real pipe sections. The former enables the accuracy of the generated model to be assessed while the latter demonstrates the efficacy of the technique in practice.

  10. Automated imaging dark adaptometer for investigating hereditary retinal degenerations

    Science.gov (United States)

    Azevedo, Dario F. G.; Cideciyan, Artur V.; Regunath, Gopalakrishnan; Jacobson, Samuel G.

    1995-05-01

    We designed and built an automated imaging dark adaptometer (AIDA) to increase accuracy, reliability, versatility and speed of dark adaptation testing in patients with hereditary retinal degenerations. AIDA increases test accuracy by imaging the ocular fundus for precise positioning of bleaching and stimulus lights. It improves test reliability by permitting continuous monitoring of patient fixation. Software control of stimulus presentation provides broad testing versatility without sacrificing speed. AIDA promises to facilitate the measurement of dark adaptation in studies of the pathophysiology of retinal degenerations and in future treatment trials of these diseases.

  11. An automated system for whole microscopic image acquisition and analysis.

    Science.gov (United States)

    Bueno, Gloria; Déniz, Oscar; Fernández-Carrobles, María Del Milagro; Vállez, Noelia; Salido, Jesús

    2014-09-01

    The field of anatomic pathology has experienced major changes over the last decade. Virtual microscopy (VM) systems have allowed experts in pathology and other biomedical areas to work in a safer and more collaborative way. VMs are automated systems capable of digitizing microscopic samples that were traditionally examined one by one. The possibility of having digital copies reduces the risk of damaging original samples and also makes it easier to distribute copies among other pathologists. This article describes the development of an automated high-resolution whole slide imaging (WSI) system tailored to the needs and problems encountered in digital imaging for pathology, from hardware control to the full digitization of samples. The system was built with a digital monochrome camera in addition to the default color camera, together with LED (RGB) transmitted illumination; monochrome cameras are the preferred acquisition method for fluorescence microscopy. The system correctly digitizes and assembles large, high-resolution microscope images for both brightfield and fluorescence. The quality of the digital images was quantified using three metrics based on sharpness, contrast and focus, and the system was tested on 150 tissue samples of brain autopsies, prostate biopsies and lung cytologies, at five magnifications: 2.5×, 10×, 20×, 40×, and 63×. The article focuses on the hardware set-up and the acquisition software, although results of the image processing techniques included in the software and applied to the different tissue samples are also presented. © 2014 Wiley Periodicals, Inc.
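
The sharpness and contrast metrics mentioned above are not specified in the abstract; a common pair of image-quality measures that could serve this role is variance of the Laplacian (focus/sharpness) and RMS contrast. The sketch below, using only NumPy, illustrates that kind of metric and is not the authors' actual implementation:

```python
import numpy as np

def laplacian_variance(img):
    """Focus/sharpness metric: variance of a 3x3 Laplacian response (higher = sharper)."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = img.shape
    resp = np.zeros((h - 2, w - 2))
    for dy in range(3):          # manual 3x3 convolution, valid region only
        for dx in range(3):
            resp += k[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return float(resp.var())

def rms_contrast(img):
    """Contrast metric: standard deviation of pixel intensities."""
    return float(img.std())

# A sharp checkerboard should score higher on both metrics than a flat field.
sharp = np.indices((32, 32)).sum(axis=0) % 2 * 255.0
flat = np.full((32, 32), 128.0)
```

In practice such per-tile scores are used to reject out-of-focus fields before stitching the whole-slide image.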

  12. Usefulness of automated biopsy guns in image-guided biopsy

    International Nuclear Information System (INIS)

    Lee, Jung Hyung; Rhee, Chang Soo; Lee, Sung Moon; Kim, Hong; Woo, Sung Ku; Suh, Soo Jhi

    1994-01-01

    To evaluate the usefulness of automated biopsy guns in image-guided biopsy of the lung, liver, pancreas and other organs, 160 biopsies of variable anatomic sites were performed with automated biopsy devices: 95 under ultrasonographic (US) guidance and 65 under computed tomographic (CT) guidance. We retrospectively analyzed histologic results and complications. Specimens were adequate for histopathologic diagnosis in 143 of the 160 patients (89.4%): diagnostic tissue was obtained in 130 (81.3%) and suggestive tissue in 13 (8.1%), while non-diagnostic tissue was obtained in 14 (8.7%) and inadequate tissue in only 3 (1.9%). There was no statistically significant difference between US-guided and CT-guided percutaneous biopsy, and no significant complication occurred. Mild complications were seen in only 5 patients: 2 cases of hematuria and 2 of hematochezia after transrectal prostatic biopsy, and 1 minimal pneumothorax after CT-guided percutaneous lung biopsy; all resolved spontaneously. Image-guided biopsy using the automated biopsy gun was a simple, safe and accurate method of obtaining adequate specimens for histopathologic diagnosis.

  13. Automated curved planar reformation of 3D spine images

    International Nuclear Information System (INIS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient diagnostic detail, because they cannot follow curved anatomical structures (e.g. arteries, the colon, the spine); not all of the important details can be shown simultaneously in any single planar cross-section. To overcome this problem, reformatted images must be created in the coordinate system of the inspected structure, an operation usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, based on a transformation from the standard image-based coordinate system to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined by the curve that represents the vertebral column and by the rotation of the vertebrae around that curve, both of which are described by polynomial models; the optimal polynomial parameters are obtained in an image-analysis-based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. It performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR reduces structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space; moreover, they may prove useful for segmentation and other image analysis tasks.

  14. AUTOMATED DATA ANALYSIS FOR CONSECUTIVE IMAGES FROM DROPLET COMBUSTION EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Christopher Lee Dembia

    2012-09-01

    A simple automated image analysis algorithm has been developed that processes consecutive frames from high-speed, high-resolution digital imaging of burning fuel droplets. The droplets burn under conditions that promote spherical symmetry. The algorithm detects the edge of the droplet's boundary using a grayscale intensity threshold and fits either a circle or an ellipse to that boundary. The results are compared to manual measurements of droplet diameters made with commercial software. Results show that it is possible to automate data analysis for consecutive droplet burning images even in the presence of a significant amount of noise from soot formation. An adaptive grayscale intensity threshold provides the ability to extract droplet diameters over the wide range of noise encountered. In instances where soot blocks portions of the droplet, the algorithm still provides accurate measurements if a circle fit is used instead of an ellipse fit, as an ellipse can be too accommodating to the disturbance.
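
The core of such a pipeline, thresholding the backlit droplet image and reducing the segmented region to a circle-equivalent diameter, can be sketched in a few lines. This is a minimal illustration, not the authors' algorithm; the fixed threshold value and the area-based circle fit are assumptions:

```python
import numpy as np

def droplet_diameter(img, threshold):
    """Segment the (dark) droplet by grayscale threshold and fit a circle.

    Returns the circle-equivalent diameter computed from the segmented area,
    which stays robust when soot obscures parts of the boundary.
    """
    mask = img < threshold              # droplet darker than the backlight
    area = mask.sum()                   # pixel area of the droplet
    return 2.0 * np.sqrt(area / np.pi)  # diameter of the equal-area circle

# Synthetic frame: bright background, dark disc of radius 20 pixels.
yy, xx = np.mgrid[0:128, 0:128]
img = np.where((yy - 64) ** 2 + (xx - 64) ** 2 <= 20 ** 2, 10.0, 200.0)
d = droplet_diameter(img, threshold=100.0)   # expect roughly 40 pixels
```

An adaptive version would recompute `threshold` per frame (e.g. from the intensity histogram) to cope with varying soot levels.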

  15. Automated Processing of Zebrafish Imaging Data: A Survey

    Science.gov (United States)

    Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-01-01

    Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

  16. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, M.; Rosenvinge, F. S.; Spillum, E.

    2015-01-01

    Background: Antibiotics of the beta-lactam group are able to alter the shape of the bacterial cell wall, e.g. filamentation or spheroplast formation. Early determination of antimicrobial susceptibility may be complicated by filamentation of bacteria, as this can be falsely interpreted as growth in systems relying on colorimetry or turbidometry (such as Vitek-2, Phoenix, MicroScan WalkAway). The objective was to examine an automated image analysis algorithm for quantification of filamentous bacteria using the 3D digital microscopy imaging system, oCelloScope. Results: Three E. coli strains displaying different resistant profiles and differences in filamentation kinetics were used to study a novel image analysis algorithm to quantify length of bacteria and bacterial filamentation. A total of 12 beta-lactam antibiotics or beta-lactam-beta-lactamase inhibitor combinations were analyzed.

  17. Automated rice leaf disease detection using color image analysis

    Science.gov (United States)

    Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.

    2011-06-01

    In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
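
The histogram-intersection step used to compare a test leaf against a healthy reference can be illustrated as follows. This is a generic sketch (the bin count and single-channel input are assumptions, and the paper's threshold-based K-means stage is omitted):

```python
import numpy as np

def histogram_intersection(a, b, bins=32):
    """Similarity of two images' intensity distributions, in [0, 1].

    1.0 means identical histograms; low values flag outlier regions
    that may correspond to diseased leaf area.
    """
    ha, _ = np.histogram(a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()                      # normalize to probabilities
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())  # overlap of the two histograms

healthy = np.full((16, 16), 120.0)          # uniform "green" channel
diseased = healthy.copy()
diseased[:4, :4] = 30.0                     # dark lesion = outlier region
```

In a full system the same comparison would run per color channel, and pixels falling in low-overlap bins would be passed to the clustering stage.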

  18. Crowdsourcing and Automated Retinal Image Analysis for Diabetic Retinopathy.

    Science.gov (United States)

    Mudie, Lucy I; Wang, Xueyang; Friedman, David S; Brady, Christopher J

    2017-09-23

    As the number of people with diabetic retinopathy (DR) in the USA is expected to increase threefold by 2050, the need to reduce health care costs associated with screening for this treatable disease is ever present. Crowdsourcing and automated retinal image analysis (ARIA) are two areas where new technology has been applied to reduce costs in screening for DR. This paper reviews the current literature surrounding these new technologies. Crowdsourcing has high sensitivity for normal vs abnormal images; however, when multiple categories for severity of DR are added, specificity is reduced. ARIAs have higher sensitivity and specificity, and some commercial ARIA programs are already in use. Deep learning enhanced ARIAs appear to offer even more improvement in ARIA grading accuracy. The utilization of crowdsourcing and ARIAs may be a key to reducing the time and cost burden of processing images from DR screening.

  19. Automated tracking of the vascular tree on DSA images

    International Nuclear Information System (INIS)

    Alperin, N.; Hoffmann, K.R.; Doi, K.

    1990-01-01

    Determination of the vascular tree structure is important for reconstruction of the three-dimensional vascular tree from biplane images, for assessment of the significance of a lesion, and for planning treatment of arteriovenous malformations. To automate these analyses, the authors are developing a method to determine the vascular tree structure from digital subtraction angiography (DSA) images. They previously described a vessel tracking method based on the double-square-box technique. To improve tracking accuracy, they have developed and integrated with the previous method a connectivity test and a guided-sector-search technique. The connectivity test, based on region-growing techniques, eliminates tracking across non-vessel regions. The guided-sector-search method incorporates information from a larger area of the image to guide the search for the next tracking point.
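
A connectivity test of this kind is essentially a region-growing reachability check on the vessel mask. A minimal 4-connected breadth-first version, not the authors' exact double-square-box integration, might look like:

```python
from collections import deque

import numpy as np

def connected(mask, start, goal):
    """Region-growing connectivity test: can `goal` be reached from `start`
    through 4-connected True pixels of the vessel mask?"""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    queue = deque([start])
    seen[start] = True
    while queue:
        y, x = queue.popleft()
        if (y, x) == goal:
            return True
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                seen[ny, nx] = True
                queue.append((ny, nx))
    return False

# Two parallel "vessels" separated by background: tracking must not jump across.
mask = np.zeros((5, 9), dtype=bool)
mask[1, :] = True   # vessel A
mask[3, :] = True   # vessel B, not connected to A
```

Rejecting candidate tracking points that fail this test prevents the tracker from hopping between adjacent but distinct vessels.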

  20. Automated Identification of Fiducial Points on 3D Torso Images

    Directory of Open Access Journals (Sweden)

    Manas M. Kawale

    2013-01-01

    Breast reconstruction is an important part of the breast cancer treatment process for many women. Recently, 2D and 3D images have been used by plastic surgeons for evaluating surgical outcomes. Distances between different fiducial points are frequently used as quantitative measures for characterizing breast morphology. Fiducial points can be marked directly on subjects for direct anthropometry, or can be marked manually on images. This paper introduces novel algorithms to automate the identification of fiducial points in 3D images. Automating the process will make measurements of breast morphology more reliable, reducing inter- and intra-observer bias. Algorithms to identify three fiducial points, the nipples, sternal notch, and umbilicus, are described; they are formulated using a combination of surface curvature and 2D color information. Comparison of the 3D coordinates of automatically detected fiducial points with those identified manually, together with geodesic distances between the fiducial points, is used to validate algorithm performance. The algorithms reliably identified the location of all three fiducial points. We dedicate this article to our late colleague and friend, Dr. Elisabeth K. Beahm. Elisabeth was both a talented plastic surgeon and physician-scientist; we deeply miss her insight and her fellowship.

  1. Granulometric profiling of aeolian dust deposits by automated image analysis

    Science.gov (United States)

    Varga, György; Újvári, Gábor; Kovács, János; Jakab, Gergely; Kiss, Klaudia; Szalai, Zoltán

    2016-04-01

    Determination of granulometric parameters is of growing interest in the Earth sciences. Particle size data of sedimentary deposits provide insights into the physicochemical environment of transport, accumulation and post-depositional alterations of sedimentary particles, and are important proxies applied in paleoclimatic reconstructions. This is especially true for aeolian dust deposits, whose fairly narrow grain size range is a consequence of the extremely selective nature of wind sediment transport. Therefore, various aspects of aeolian sedimentation (wind strength, distance to source(s), possible secondary source regions and modes of sedimentation and transport) can be reconstructed only from precise grain size data. As terrestrial wind-blown deposits are among the most important archives of past environmental changes, proper explanation of the proxy data is a mandatory issue. Automated imaging provides a unique technique to gather direct information on the granulometric characteristics of sedimentary particles. Granulometric data obtained from automatic image analysis with a Malvern Morphologi G3-ID is a rarely applied new technique for particle size and shape analyses in sedimentary geology. Size and shape data of several hundred thousand (or even million) individual particles from 15 loess and paleosoil samples were automatically recorded in this study from the captured high-resolution images. Several size parameters (e.g. circle-equivalent diameter, major axis, length, width, area) and shape parameters (e.g. elongation, circularity, convexity) were calculated by the instrument software. At the same time, the mean light intensity after transmission through each particle is automatically collected by the system as a proxy for the optical properties of the material; intensity values depend on the chemical composition and/or thickness of the particles. The results of the automated imaging were compared to particle size data determined by three different laser diffraction instruments.
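
Shape descriptors like those listed above can be computed from basic particle measurements. The definitions below are common conventions, not necessarily the exact formulas used by the Morphologi software, so treat them as illustrative:

```python
import numpy as np

def ce_diameter(area):
    """Circle-equivalent diameter: diameter of a circle with the same area."""
    return 2.0 * np.sqrt(area / np.pi)

def circularity(area, perimeter):
    """4*pi*A / P**2: 1.0 for a perfect circle, lower for irregular grains."""
    return 4.0 * np.pi * area / perimeter ** 2

def elongation(width, length):
    """1 - width/length: 0 for equant particles, approaching 1 for needles."""
    return 1.0 - width / length

# A circular silt grain of radius 10 um:
a = np.pi * 10.0 ** 2        # area
p = 2.0 * np.pi * 10.0       # perimeter
```

Applied per particle over the full image set, these descriptors yield the granulometric profiles the study compares against laser diffraction.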

  2. Automated and unsupervised detection of malarial parasites in microscopic images

    Directory of Open Access Journals (Sweden)

    Purwar Yashasvi

    2011-12-01

    Background: Malaria is a serious infectious disease. According to the World Health Organization, it is responsible for nearly one million deaths each year. There are various techniques to diagnose malaria, of which manual microscopy is considered to be the gold standard. However, due to the number of steps required in manual assessment, this diagnostic method is time consuming (leading to late diagnosis) and prone to human error (leading to erroneous diagnosis), even in experienced hands. The focus of this study is to develop a robust, unsupervised and sensitive malaria screening technique with low material cost, one that has an advantage over other techniques in that it minimizes human reliance and is, therefore, more consistent in applying diagnostic criteria. Method: A method based on digital image processing of Giemsa-stained thin smear images is developed to facilitate the diagnostic process. The diagnosis procedure is divided into two parts: enumeration and identification. The image-based method presented here is designed to automate the process of enumeration and identification, its main advantage being the ability to carry out the diagnosis in an unsupervised manner while retaining high sensitivity, thus reducing cases of false negatives. Results: The image-based method was tested on more than 500 images from two independent laboratories. The aim is to distinguish between positive and negative cases of malaria using thin smear blood slide images. Due to the unsupervised nature of the method it requires minimal human intervention, thus speeding up the whole process of diagnosis. Overall sensitivity to capture cases of malaria is 100%, and specificity ranges from 50-88% for all species of malaria parasites. Conclusion: An image-based screening method will speed up the whole process of diagnosis and is advantageous over laboratory procedures that are prone to errors and where pathological expertise is minimal.
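
The sensitivity and specificity figures quoted above reduce to simple counts over ground-truth and predicted slide labels. A generic sketch (the slide labels below are invented for illustration):

```python
def sensitivity_specificity(truth, predicted):
    """truth/predicted: equal-length sequences of 0/1 (1 = malaria positive)."""
    tp = sum(t and p for t, p in zip(truth, predicted))          # true positives
    tn = sum(not t and not p for t, p in zip(truth, predicted))  # true negatives
    fn = sum(t and not p for t, p in zip(truth, predicted))      # missed cases
    fp = sum(not t and p for t, p in zip(truth, predicted))      # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# 10 slides: all 4 true positives caught (sensitivity 1.0), but 2 of 6
# negatives flagged falsely (specificity ~0.67), mirroring the trade-off
# between perfect sensitivity and reduced specificity reported above.
truth     = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
sens, spec = sensitivity_specificity(truth, predicted)
```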

  3. Automated extraction of chemical structure information from digital raster images

    Science.gov (United States)

    Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro

    2009-01-01

    Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy of extracted molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links to scientific research articles.

  5. Automated exploration of the radio plasma imager data

    Science.gov (United States)

    Galkin, Ivan; Reinisch, Bodo; Grinstein, Georges; Khmyrov, Grigori; Kozlov, Alexander; Huang, Xueqin; Fung, Shing

    2004-12-01

    As research instruments with large information capacities become a reality, automated systems for intelligent data analysis become a necessity. Scientific archives containing huge volumes of data preclude manual manipulation or intervention and require automated exploration and mining that can at least preclassify information in categories. The large data set from the radio plasma imager (RPI) instrument on board the IMAGE satellite shows a critical need for such exploration in order to identify and archive features of interest in the volumes of visual information. In this research we have developed such a preclassifier through a model of preattentive vision capable of detecting and extracting traces of echoes from the RPI plasmagrams. The overall design of our model complies with Marr's paradigm of vision, where elements of increasing perceptual strength are built bottom up under the Gestalt constraints of good continuation and smoothness. The specifics of the RPI data, however, demanded extension of this paradigm to achieve greater robustness for signature analysis. Our preattentive model now employs a feedback neural network that refines alignment of the oriented edge elements (edgels) detected in the plasmagram image by subjecting them to collective global-scale optimization. The level of interaction between the oriented edgels is determined by their distance and mutual orientation in accordance with the Yen and Finkel model of the striate cortex that encompasses findings in psychophysical studies of human vision. The developed models have been implemented in an operational system "CORPRAL" (Cognitive Online RPI Plasmagram Ranking Algorithm) that currently scans daily submissions of the RPI plasmagrams for the presence of echo traces. Qualifying plasmagrams are tagged in the mission database, making them available for a variety of queries. We discuss CORPRAL performance and its impact on scientific analysis of RPI data.

  6. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of 3D model reconstruction was carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed, and the overall results of the analysis are summarized for the prototype imaging platform.
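
Silhouette-based reconstruction computes the visual hull: a voxel is kept only if it projects inside the object's silhouette in every view. The toy version below uses orthographic projections along the coordinate axes to keep the sketch short; a real rig like the one described would use calibrated perspective camera poses:

```python
import numpy as np

def visual_hull(silhouettes):
    """Carve a voxel grid using orthographic silhouettes.

    `silhouettes` maps a viewing-axis index (0, 1, 2) to a 2D boolean mask;
    a voxel survives only if its projection lies inside every silhouette.
    """
    n = next(iter(silhouettes.values())).shape[0]
    vox = np.ones((n, n, n), dtype=bool)
    for axis, sil in silhouettes.items():
        # Broadcast the 2D silhouette along its viewing axis and intersect.
        vox &= np.expand_dims(sil, axis=axis)
    return vox

# A cube seen from two axes carves a square prism; the third view
# carves the prism down to the cube itself.
n = 8
square = np.zeros((n, n), dtype=bool)
square[2:6, 2:6] = True                       # 4x4 silhouette
hull = visual_hull({0: square, 1: square, 2: square})
```

More views (as in the study's analysis of image count) tighten the hull around concavity-free objects; concave regions can never be recovered from silhouettes alone.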

  7. Automated regional behavioral analysis for human brain images.

    Science.gov (United States)

    Lancaster, Jack L; Laird, Angela R; Eickhoff, Simon B; Martinez, Michael J; Fox, P Mickle; Fox, Peter T

    2012-01-01

    Behavioral categories of functional imaging experiments along with standardized brain coordinates of associated activations were used to develop a method to automate regional behavioral analysis of human brain images. Behavioral and coordinate data were taken from the BrainMap database (http://www.brainmap.org/), which documents over 20 years of published functional brain imaging studies. A brain region of interest (ROI) for behavioral analysis can be defined in functional images, anatomical images or brain atlases, if images are spatially normalized to MNI or Talairach standards. Results of behavioral analysis are presented for each of BrainMap's 51 behavioral sub-domains spanning five behavioral domains (Action, Cognition, Emotion, Interoception, and Perception). For each behavioral sub-domain the fraction of coordinates falling within the ROI was computed and compared with the fraction expected if coordinates for the behavior were not clustered, i.e., uniformly distributed. When the difference between these fractions is large behavioral association is indicated. A z-score ≥ 3.0 was used to designate statistically significant behavioral association. The left-right symmetry of ~100K activation foci was evaluated by hemisphere, lobe, and by behavioral sub-domain. Results highlighted the classic left-side dominance for language while asymmetry for most sub-domains (~75%) was not statistically significant. Use scenarios were presented for anatomical ROIs from the Harvard-Oxford cortical (HOC) brain atlas, functional ROIs from statistical parametric maps in a TMS-PET study, a task-based fMRI study, and ROIs from the ten "major representative" functional networks in a previously published resting state fMRI study. Statistically significant behavioral findings for these use scenarios were consistent with published behaviors for associated anatomical and functional regions.
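
The significance test described above, comparing the fraction of activation foci inside an ROI with the fraction expected under a uniform distribution, is a binomial z-score. A sketch under that assumption (the counts are invented for illustration):

```python
import math

def behavior_z_score(n_in_roi, n_total, roi_fraction):
    """z-score of observed ROI hits versus a uniform spatial distribution.

    Under the null hypothesis, each of the n_total foci falls in the ROI
    with probability roi_fraction (the ROI's share of the reference volume),
    so the count is binomial; z >= 3.0 flags significant association.
    """
    expected = n_total * roi_fraction
    sd = math.sqrt(n_total * roi_fraction * (1.0 - roi_fraction))
    return (n_in_roi - expected) / sd

# 60 of 400 language-task foci inside an ROI occupying 5% of the volume:
z = behavior_z_score(60, 400, 0.05)   # well above the 3.0 threshold
```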

  8. Automated Recognition of 3D Features in GPIR Images

    Science.gov (United States)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a
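
The object-linking step, chaining 2D detections in successive slices that fall within a threshold radius of a feature in the adjacent slice, can be sketched as follows. This is a simple greedy version, not the authors' full data-fusion pipeline:

```python
import math

def link_slices(slices, radius):
    """Link 2D feature detections into 3D objects across consecutive slices.

    `slices` is a list (one entry per depth slice) of (x, y) feature centres;
    a detection within `radius` of the latest point of an object in the
    previous slice continues that object, otherwise a new object starts.
    """
    objects = []                      # each object: list of (slice_idx, x, y)
    active = []                       # objects that grew in the current pass
    for z, feats in enumerate(slices):
        nxt = []
        for x, y in feats:
            for obj in active:
                zz, ox, oy = obj[-1]
                if zz == z - 1 and math.hypot(x - ox, y - oy) <= radius:
                    obj.append((z, x, y))
                    nxt.append(obj)
                    break
            else:                     # no match: start a new object
                obj = [(z, x, y)]
                objects.append(obj)
                nxt.append(obj)
        active = nxt
    return objects

# A straight "pipe" drifting one pixel per slice, plus an isolated blip.
slices = [[(10, 10)], [(11, 10)], [(12, 11), (40, 40)], [(13, 11)]]
objs = link_slices(slices, radius=2.0)
```

Objects spanning many slices would then be passed to classification, while single-slice detections can be discarded as noise.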

  9. Automated region definition for cardiac nitrogen-13-ammonia PET imaging.

    Science.gov (United States)

    Muzik, O; Beanlands, R; Wolfe, E; Hutchins, G D; Schwaiger, M

    1993-02-01

    In combination with PET, the tracer 13N-ammonia can be employed for the noninvasive quantification of myocardial perfusion at rest and after pharmacological stress. The purpose of this study was to develop an analysis method for the quantification of regional myocardial blood flow in the clinical setting. The algorithm includes correction for patient motion, an automated definition of multiple regions and display of absolute flows in polar map format. The effects of partial volume and blood to tissue cross-contamination were accounted for by optimizing the radial position of regions to meet fundamental assumptions of the kinetic model. In order to correct for motion artifacts, the myocardial displacement was manually determined based on edge-enhanced images. The obtained results exhibit the capability of the presented algorithm to noninvasively assess regional myocardial perfusion in the clinical environment.

  10. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation.

    Science.gov (United States)

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A; Yuan, Jie; Wang, Xueding; Carson, Paul L

    2016-02-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but also performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density-based prognosis of breast cancer. Copyright © 2015 Elsevier B.V. All rights reserved.
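The record above reports tissue-volume agreement as an "overlap ratio". As an illustrative sketch (the record does not say whether that ratio is Jaccard- or Dice-style; Jaccard is assumed here), comparing an automated mask against a manual one might look like:

```python
import numpy as np

def overlap_ratio(a, b):
    """Jaccard overlap of two binary segmentation masks."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(a, b).sum() / union

# toy 1-D "volumes": automated vs. manual labels for one tissue class
auto_mask   = np.array([0, 1, 1, 1, 0, 0, 1, 0])
manual_mask = np.array([0, 1, 1, 0, 0, 0, 1, 1])
ratio = overlap_ratio(auto_mask, manual_mask)  # 3 shared / 5 in union
```

The same function applies unchanged to 3D volumes, since the logical operations broadcast over any array shape.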

  11. Automated Image Analysis of Offshore Infrastructure Marine Biofouling

    Directory of Open Access Journals (Sweden)

    Kate Gormley

    2018-01-01

    In the UK, some of the oldest oil and gas installations have been in the water for over 40 years and have considerable colonisation by marine organisms, which may lead to both industry challenges and/or potential biodiversity benefits (e.g., artificial reefs). The project objective was to test the use of automated image analysis software (CoralNet) on images of marine biofouling from offshore platforms on the UK continental shelf, with the aims of (i) training the software to identify the main marine biofouling organisms on UK platforms; (ii) testing the software performance on 3 platforms under 3 different analysis criteria (methods A–C); (iii) calculating the percentage cover of marine biofouling organisms; and (iv) providing recommendations to industry. Following software training with 857 images, and testing of three platforms, results showed that the diversity of the three platforms ranged from low (in the central North Sea) to moderate (in the northern North Sea). The two central North Sea platforms were dominated by the plumose anemone Metridium dianthus; the northern North Sea platform showed less obvious species domination. Three different analysis criteria were created, in which the method of point selection, the number of points assessed and the confidence-level threshold (CT) varied: (method A) random selection of 20 points with a CT of 80%; (method B) stratified-random selection of 50 points with a CT of 90%; and (method C) a grid approach of 100 points with a CT of 90%. Performed across the three platforms, the results showed no significant differences across the majority of species and comparison pairs. No significant difference (across all species) was noted between confirmed annotation methods (A, B and C). The software was considered to perform well for the classification of the main fouling species in the North Sea. Overall, the study showed that the use of automated image analysis software may enable a more efficient and consistent
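The percentage-cover calculation described above — classify a set of sampled points and tabulate the labels that clear a confidence threshold (CT) — can be sketched roughly as follows. The species labels are illustrative, and simply discarding low-confidence points is an assumption for the sketch (in CoralNet such points would instead go to manual confirmation):

```python
from collections import Counter

def percent_cover(annotations, ct):
    """Percentage cover per taxon from point annotations.

    annotations: list of (label, confidence) pairs, one per sampled point
    ct: confidence threshold in [0, 1]; lower-confidence points are dropped
    Returns (dict of label -> percent cover, number of points retained).
    """
    kept = [label for label, conf in annotations if conf >= ct]
    counts = Counter(kept)
    n = len(kept)
    cover = {label: 100.0 * c / n for label, c in counts.items()} if n else {}
    return cover, n

# hypothetical machine annotations for five sampled points on one image
points = [("Metridium", 0.95), ("Metridium", 0.92), ("bare", 0.85),
          ("Metridium", 0.70), ("tubeworm", 0.91)]
cover, n_kept = percent_cover(points, ct=0.90)
```

With a CT of 90%, two of the five points fall below threshold, and cover is computed over the remaining three.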

  12. Automated image analysis of microstructure changes in metal alloys

    Science.gov (United States)

    Hoque, Mohammed E.; Ford, Ralph M.; Roth, John T.

    2005-02-01

    The ability to identify and quantify changes in the microstructure of metal alloys is valuable in metal cutting and shaping applications. For example, certain metals, after being cryogenically and electrically treated, have shown large increases in their tool life when used in manufacturing cutting and shaping processes. However, the mechanisms of microstructure changes in alloys under various treatments, which cause them to behave differently, are not yet fully understood. The changes are currently evaluated in a semi-quantitative manner by visual inspection of images of the microstructure. This research applies pattern recognition technology to quantitatively measure the changes in microstructure and to validate the initial assertion of increased tool life under certain treatments. Heterogeneous images of aluminum and tungsten carbide of various categories were analyzed using a process including background correction, adaptive thresholding, edge detection and other algorithms for automated analysis of microstructures. The algorithms are robust across a variety of operating conditions. This research not only facilitates better understanding of the effects of electric and cryogenic treatment of these materials, but also their impact on tooling and metal-cutting processes.
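A minimal, generic version of the adaptive-thresholding step mentioned above (a local-mean threshold; the authors' exact algorithm is not given in the record) can be written with plain NumPy:

```python
import numpy as np

def adaptive_threshold(img, block=1, offset=0.0):
    """Binarise an image with a local-mean (adaptive) threshold.

    A pixel is foreground when it exceeds the mean of its
    (2*block+1) x (2*block+1) neighbourhood by more than `offset`,
    which copes with the uneven illumination typical of micrographs.
    """
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, block, mode="edge")
    k = 2 * block + 1
    acc = np.zeros_like(img)
    for dy in range(k):          # sum of shifted copies = box filter
        for dx in range(k):
            acc += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    local_mean = acc / (k * k)
    return img > local_mean + offset

# bright inclusion on a dark, uniform matrix
micrograph = np.zeros((5, 5))
micrograph[2, 2] = 10.0
mask = adaptive_threshold(micrograph, block=1, offset=0.5)
```

Only the bright inclusion survives the threshold; its neighbours sit below their raised local means.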

  13. FPGA Accelerator for Wavelet-Based Automated Global Image Registration

    Directory of Open Access Journals (Sweden)

    Baofeng Li

    2009-01-01

    Wavelet-based automated global image registration (WAGIR) is fundamental to most remote sensing image processing algorithms and extremely computation-intensive. With more and more algorithms migrating from ground computing to onboard computing, an efficient dedicated architecture for WAGIR is desired. In this paper, a BWAGIR architecture is proposed based on a block resampling scheme. BWAGIR achieves significant performance by pipelining computational logic, parallelizing the resampling process and the calculation of the correlation coefficient, and parallelizing memory access. A proof-of-concept implementation with 1 BWAGIR processing unit performs at least 7.4X faster than the CL cluster system with 1 node, and at least 3.4X faster than the MPM massively parallel machine with 1 node. Further speedup can be achieved by parallelizing multiple BWAGIR units. The architecture with 5 units achieves a speedup of about 3X against the CL with 16 nodes and a comparable speed to the MPM with 30 nodes. More importantly, the BWAGIR architecture can be deployed onboard economically.
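The correlation-coefficient search that BWAGIR accelerates in hardware can be illustrated in software. This is a plain exhaustive integer-shift search with a normalised cross-correlation score — a sketch of the innermost computation only, not the paper's block resampling scheme or wavelet pyramid:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation coefficient of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return 0.0
    return (a * b).sum() / denom

def best_shift(reference, moving, max_shift):
    """Exhaustively find the integer (dy, dx) maximising NCC.

    `reference` must exceed `moving` by 2*max_shift in each dimension.
    """
    h, w = moving.shape
    best = (0, 0)
    best_c = -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            patch = reference[max_shift + dy: max_shift + dy + h,
                              max_shift + dx: max_shift + dx + w]
            c = ncc(patch, moving)
            if c > best_c:
                best_c, best = c, (dy, dx)
    return best, best_c

rng = np.random.default_rng(0)
reference = rng.random((9, 9))
moving = reference[3:8, 1:6]          # reference content shifted by (1, -1)
shift, score = best_shift(reference, moving, max_shift=2)
```

The recovered shift is exactly the offset used to cut the moving patch, with a correlation of 1.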

  15. Automated X-ray image analysis for cargo security: Critical review and future promise.

    Science.gov (United States)

    Rogers, Thomas W; Jaccard, Nicolas; Morton, Edward J; Griffin, Lewis D

    2017-01-01

    We review the relatively immature field of automated image analysis for X-ray cargo imagery. There is increasing demand for automated analysis methods that can assist in the inspection and selection of containers, due to the ever-growing volumes of traded cargo and the increasing concerns that customs- and security-related threats are being smuggled across borders by organised crime and terrorist networks. We split the field into the classical pipeline of image preprocessing and image understanding. Preprocessing includes: image manipulation; quality improvement; Threat Image Projection (TIP); and material discrimination and segmentation. Image understanding includes: Automated Threat Detection (ATD); and Automated Contents Verification (ACV). We identify several gaps in the literature that need to be addressed and propose ideas for future research. Where the current literature is sparse we borrow from the single-view, multi-view, and CT X-ray baggage domains, which have some characteristics in common with X-ray cargo.

  16. Automated force volume image processing for biological samples.

    Directory of Open Access Journals (Sweden)

    Pavel Polyakov

    2011-04-01

    Atomic force microscopy (AFM) has now become a powerful technique for investigating, on a molecular level, surface forces, nanomechanical properties of deformable particles, biomolecular interactions, kinetics, and dynamic processes. This paper specifically focuses on the analysis of AFM force curves collected on biological systems, in particular, bacteria. The goal is to provide fully automated tools to achieve theoretical interpretation of force curves on the basis of adequate, available physical models. In this respect, we propose two algorithms, one for the processing of approach force curves and another for the quantitative analysis of retraction force curves. In the former, electrostatic interactions prior to contact between the AFM probe and bacterium are accounted for, and mechanical interactions operating after contact are described in terms of a Hertz-Hooke formalism. Retraction force curves are analyzed on the basis of the Freely Jointed Chain model. For both algorithms, the quantitative reconstruction of force curves is based on the robust detection of critical points (jumps, changes of slope or changes of curvature) which mark the transitions between the various relevant interactions taking place between the AFM tip and the studied sample during approach and retraction. Once the key regions of separation distance and indentation are detected, the physical parameters describing the relevant interactions operating in these regions are extracted making use of a regression procedure for fitting experiments to theory. The flexibility, accuracy and strength of the algorithms are illustrated with the processing of two force-volume images, which collect a large set of approach and retraction curves measured on a single biological surface. For each force-volume image, several maps are generated, representing the spatial distribution of the searched physical parameters as estimated for each pixel of the force-volume image.
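The post-contact mechanical fit mentioned above can be illustrated with the spherical Hertz model alone; the paper's full Hertz-Hooke treatment with electrostatics is not reproduced here, and the tip radius and noise-free synthetic data are assumptions of the sketch:

```python
import numpy as np

def fit_hertz(indentation, force, tip_radius):
    """Fit the spherical Hertz model F = (4/3) * E_eff * sqrt(R) * d**1.5
    to the post-contact part of an approach curve; returns E_eff (Pa).

    Linear in x = d**1.5, so a least-squares slope through the origin
    suffices: k = sum(x*F) / sum(x*x), and E_eff = 3k / (4*sqrt(R)).
    """
    x = indentation ** 1.5
    k = np.dot(x, force) / np.dot(x, x)
    return 3.0 * k / (4.0 * np.sqrt(tip_radius))

# synthetic post-contact data (SI units): E_eff = 50 kPa, R = 20 nm
E_true, R = 5.0e4, 20e-9
d = np.linspace(0, 50e-9, 25)                    # indentation depth
F = (4.0 / 3.0) * E_true * np.sqrt(R) * d ** 1.5  # noise-free Hertz force
E_fit = fit_hertz(d, F, R)
```

On noise-free data the fitted effective modulus recovers the generating value; on real curves the contact point would first have to be located by the critical-point detection the paper describes.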

  17. Automated Nanofiber Diameter Measurement in SEM Images Using a Robust Image Analysis Method

    Directory of Open Access Journals (Sweden)

    Ertan Öznergiz

    2014-01-01

    Owing to their high surface area, porosity, and rigidity, applications of nanofibers and nanosurfaces have developed in recent years. Nanofibers and nanosurfaces are typically produced by the electrospinning method. In the production process, determination of the average fiber diameter is crucial for quality assessment. The average fiber diameter is determined by manually measuring the diameters of randomly selected fibers on scanning electron microscopy (SEM) images. However, as the number of images increases, manual fiber diameter determination becomes a tedious and time-consuming task, as well as being sensitive to human error. Therefore, an automated fiber diameter measurement system is desired. In the literature, this task is achieved by using image analysis algorithms. Typically, these methods first isolate each fiber in the image and measure the diameter of each isolated fiber. Fiber isolation is an error-prone process. In this study, automated calculation of nanofiber diameter is achieved without fiber isolation using image processing and analysis algorithms. Performance of the proposed method was tested on real data. The effectiveness of the proposed method is shown by comparing automatically and manually measured nanofiber diameter values.
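One simple way to estimate diameter without isolating individual fibers — in the spirit of, but not identical to, the method above, whose exact algorithm the record does not give — is to measure foreground run lengths (chords) along image rows and columns of the binarised SEM frame and take a robust statistic:

```python
import numpy as np

def run_lengths(binary):
    """Lengths of consecutive foreground runs along each row."""
    lengths = []
    for row in binary:
        padded = np.concatenate(([0], row.astype(int), [0]))
        edges = np.flatnonzero(np.diff(padded))
        starts, stops = edges[::2], edges[1::2]
        lengths.extend(stops - starts)
    return np.array(lengths)

def estimate_diameter(binary, pixel_size_nm):
    """Median chord length across fibres, in nm.

    Chords are measured along rows and columns; where a scan line is
    roughly perpendicular to a fibre axis, the run length approximates
    the diameter, and the median suppresses the oblique, overlong chords.
    """
    chords = np.concatenate([run_lengths(binary), run_lengths(binary.T)])
    return float(np.median(chords)) * pixel_size_nm

# synthetic binarised SEM frame: two vertical fibres, 3 px wide, 5 nm/px
frame = np.zeros((10, 10), dtype=int)
frame[:, 1:4] = 1
frame[:, 6:9] = 1
diameter_nm = estimate_diameter(frame, pixel_size_nm=5.0)
```

The row-wise chords (width 3) outnumber the column-wise ones (length 10), so the median lands on the true 15 nm diameter.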

  18. Automated processing of webcam images for phenological classification.

    Directory of Open Access Journals (Sweden)

    Ludwig Bothmann

    Along with global climate change, there is increasing interest in its effect on phenological patterns such as the start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images per day of the same natural motif, showing for example trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert, and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfying results and allows determination of the dates of phenological change points, it is associated with a considerable amount of manual work and is therefore constrained to a limited number of webcams. In particular, this precludes applying the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. In order to be able to scale the analysis up to several hundreds or thousands of webcams, we propose and evaluate two automated alternatives for the definition of regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels' time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert, with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet, yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big data applications by analyzing 13988 webcams from the AMOS database. All developed methods are implemented in the
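The percentage-greenness series and the semi-supervised, correlation-based pixel selection described above can be sketched as follows. The green-chromatic-coordinate formula is the standard one; the correlation cutoff `r_min` and the toy data are assumptions of the sketch, not values from the paper:

```python
import numpy as np

def percent_greenness(rgb_series):
    """Per-pixel green chromatic coordinate G/(R+G+B) for an image
    stack of shape (time, height, width, 3)."""
    rgb = rgb_series.astype(float)
    total = rgb.sum(axis=-1)
    return np.where(total > 0, rgb[..., 1] / np.maximum(total, 1e-9), 0.0)

def select_roi(greenness, prototypes, r_min=0.8):
    """Semi-supervised ROI: keep pixels whose greenness time series
    correlates strongly with the mean series of a few prototype pixels.

    greenness: (time, H, W); prototypes: list of (row, col) tuples.
    Returns a boolean (H, W) mask.
    """
    t, h, w = greenness.shape
    proto = np.mean([greenness[:, r, c] for r, c in prototypes], axis=0)
    flat = greenness.reshape(t, -1)
    fz = flat - flat.mean(axis=0)          # centre every pixel series
    pz = proto - proto.mean()
    denom = np.sqrt((fz ** 2).sum(axis=0) * (pz ** 2).sum())
    corr = np.where(denom > 0, fz.T @ pz / np.maximum(denom, 1e-12), 0.0)
    return (corr >= r_min).reshape(h, w)

# toy example: four pixels over six dates; two track the seasonal signal
season = np.array([0.2, 0.3, 0.5, 0.6, 0.5, 0.3])
g = np.zeros((6, 2, 2))
g[:, 0, 0] = season
g[:, 0, 1] = 0.1 + 0.5 * season      # same dynamics, different scale
g[:, 1, 0] = 0.4                      # static background (e.g. sky)
g[:, 1, 1] = 1.0 - season             # anti-correlated
roi = select_roi(g, prototypes=[(0, 0)], r_min=0.8)
```

Pearson correlation is scale-invariant, so the rescaled vegetation pixel is kept while the static and anti-correlated pixels are excluded.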

  19. Internet of things and automation of imaging: beyond representationalism

    Directory of Open Access Journals (Sweden)

    2016-09-01

    There is no doubt that the production of digital imagery invites a major update of the theoretical apparatus: what up until now was perceived solely or primarily as a stable representation of the world gives way to the image understood in terms of “the continuous actualization of networked data” or the “networked terminal.” In my article I would like to argue that analysis of this new visual environment should not be limited to the procedures of data processing. What also invites serious investigation is the reliance of contemporary media ecology on wireless communication, which according to Adrian Mackenzie functions as “prepositions (‘at,’ ‘in,’ ‘with,’ ‘by,’ ‘between,’ ‘near,’ etc.) in the grammar of contemporary media.” It seems especially important in the case of the imagery accompanying some instances of the internet of things, where a considerable part of networked imagery is produced in a fully automated and machinic way. This crowdsourced air pollution monitoring platform consists of networked sensors transmitting signals and data which are then visualized as graphs and maps through the IoT service provider, Xively.

  20. Automated counting of bacterial colonies by image analysis.

    Science.gov (United States)

    Chiang, Pei-Ju; Tseng, Min-Jen; He, Zong-Sian; Li, Chia-Hsun

    2015-01-01

    Research on microorganisms often involves culturing as a means to determine the survival and proliferation of bacteria. The number of colonies in a culture is counted to calculate the concentration of bacteria in the original broth; however, manual counting can be time-consuming and imprecise. To save time and prevent inconsistencies, this study proposes a fully automated counting system using image processing methods. To accurately estimate the number of viable bacteria in a known volume of suspension, colonies distributed over the whole surface area of a plate, including the central and rim areas of the Petri dish, are taken into account. The performance of the proposed system is compared with verified manual counts, as well as with two freely available counting software programs. Comparisons show that the proposed system is an effective method with excellent accuracy, with a mean absolute percentage error of 3.37%. A user-friendly graphical user interface is also developed and freely available for download, providing researchers in biomedicine with a more convenient instrument for the enumeration of bacterial colonies. Copyright © 2014 Elsevier B.V. All rights reserved.
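The core counting step — labeling connected foreground components after the plate image has been binarised — can be sketched as below. Rim handling, separation of touching colonies and the GUI described in the record are omitted; the `min_size` filter is an assumed stand-in for noise rejection:

```python
import numpy as np
from collections import deque

def count_colonies(binary, min_size=1):
    """Count 8-connected foreground components of at least `min_size`
    pixels in a binarised plate image (BFS flood fill)."""
    binary = np.asarray(binary)
    visited = np.zeros(binary.shape, dtype=bool)
    h, w = binary.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not visited[i, j]:
                size = 0
                queue = deque([(i, j)])
                visited[i, j] = True
                while queue:                      # flood-fill one component
                    y, x = queue.popleft()
                    size += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] and not visited[ny, nx]):
                                visited[ny, nx] = True
                                queue.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count

plate = np.array([[1, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 0, 1, 0],
                  [0, 0, 0, 0, 0],
                  [1, 0, 0, 0, 1]])
n_colonies = count_colonies(plate, min_size=2)
```

With `min_size=2` the two single-pixel specks are rejected and the two genuine blobs are counted.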

  1. Automated Detection of Firearms and Knives in a CCTV Image

    Directory of Open Access Journals (Sweden)

    Michał Grega

    2016-01-01

    Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  3. Automated image analysis of atomic force microscopy images of rotavirus particles

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, S. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Allison, D.P. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Biochemistry, Cellular, and Molecular Biology, University of Tennessee, Knoxville, TN 37996 (United States); Molecular Imaging Inc. Tempe, AZ, 85282 (United States); Qi, H. [Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Morrell-Falvey, J.L. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Kallewaard, N.L. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Crowe, J.E. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Doktycz, M.J. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]. E-mail: doktyczmj@ornl.gov

    2006-06-15

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM.

  4. Automated endoscopic navigation and advisory system from medical image

    Science.gov (United States)

    Kwoh, Chee K.; Khan, Gul N.; Gillies, Duncan F.

    1999-05-01

    , which is developed to obtain the relative depth of the colon surface in the image by assuming a point light source very close to the camera. If we assume the colon has a shape similar to a tube, then a reasonable approximation of the position of the center of the colon (lumen) will be a function of the direction in which the majority of the normal vectors of the shape are pointing. The second layer is the control layer, and at this level a decision model must be built for the endoscope navigation and advisory system. The system that we built uses models of probabilistic networks that create a basic artificial intelligence system for navigation in the colon. We have constructed the probabilistic networks from correlated objective data using the maximum weighted spanning tree algorithm. In the construction of a probabilistic network, it is always assumed that the variables starting from the same parent are conditionally independent. However, this may not hold and will give rise to incorrect inferences. In these cases, we propose the creation of a hidden node to modify the network topology, which in effect models the dependency of correlated variables, to solve the problem. The conditional probability matrices linking the hidden node to its neighbors are determined using a gradient descent method which minimizes the objective cost function. The error gradients can be treated as updating messages and can be propagated in any direction throughout any singly connected network to adjust the network parameters. With the above two-level approach, we have been able to build an automated endoscope navigation and advisory system successfully.
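The maximum weighted spanning tree construction used above to pick the probabilistic-network skeleton can be sketched with Kruskal's algorithm on descending weights. The edge weights here are hypothetical stand-ins for the paper's pairwise correlation measures:

```python
def max_weight_spanning_tree(n_nodes, edges):
    """Kruskal's algorithm for a maximum weighted spanning tree.

    Greedily add the heaviest edge that does not close a cycle,
    using a union-find structure. `edges` is a list of (w, u, v).
    Returns a list of (u, v, w) tree edges.
    """
    parent = list(range(n_nodes))

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two components
            parent[ru] = rv
            tree.append((u, v, w))
        if len(tree) == n_nodes - 1:
            break
    return tree

# hypothetical pairwise-correlation weights between 4 variables
weighted_edges = [(0.9, 0, 1), (0.8, 1, 2), (0.3, 0, 2),
                  (0.7, 2, 3), (0.2, 1, 3)]
tree = max_weight_spanning_tree(4, weighted_edges)
```

The three heaviest cycle-free edges (0.9, 0.8, 0.7) form the tree; the 0.3 edge would close a cycle and is skipped.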

  5. Application of automated image analysis to coal petrography

    Science.gov (United States)

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

    The coal petrologist seeks to determine the petrographic characteristics of organic and inorganic coal constituents and their lateral and vertical variations within a single coal bed or different coal beds of a particular coal field. Definitive descriptions of coal characteristics and coal facies provide the basis for interpretation of depositional environments, diagenetic changes, and burial history and determination of the degree of coalification or metamorphism. Numerous coal core or columnar samples must be studied in detail in order to adequately describe and define coal microlithotypes, lithotypes, and lithologic facies and their variations. The large amount of petrographic information required can be obtained rapidly and quantitatively by use of an automated image-analysis system (AIAS). An AIAS can be used to generate quantitative megascopic and microscopic modal analyses for the lithologic units of an entire columnar section of a coal bed. In our scheme for megascopic analysis, distinctive bands 2 mm or more thick are first demarcated by visual inspection. These bands consist of either nearly pure microlithotypes or lithotypes such as vitrite/vitrain or fusite/fusain, or assemblages of microlithotypes. Megascopic analysis with the aid of the AIAS is next performed to determine volume percentages of vitrite, inertite, minerals, and microlithotype mixtures in bands 0.5 to 2 mm thick. The microlithotype mixtures are analyzed microscopically by use of the AIAS to determine their modal composition in terms of maceral and optically observable mineral components. Megascopic and microscopic data are combined to describe the coal unit quantitatively in terms of (V) for vitrite, (E) for liptite, (I) for inertite or fusite, (M) for mineral components other than iron sulfide, (S) for iron sulfide, and (VEIM) for the composition of the mixed phases (Xi), i = 1, 2, etc., in terms of the maceral groups vitrinite V, exinite E, inertinite I, and optically observable mineral

  6. Microvascular glycocalyx dimension estimated by automated SDF imaging is not related to cardiovascular disease

    NARCIS (Netherlands)

    Amraoui, Fouad; Olde Engberink, Rik H. G.; van Gorp, Jacqueline; Ramdani, Amal; Vogt, Liffert; van den Born, Bert-Jan H.

    2014-01-01

    The endothelial glycocalyx (EG) regulates vascular homeostasis and has anti-atherogenic properties. SDF imaging allows for noninvasive visualization of microvessels and automated estimation of EG dimensions. We aimed to assess whether microcirculatory EG dimension is related to cardiovascular disease. Sublingual EG

  7. Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA

    OpenAIRE

    Staley, Tim D.; Anderson, Gemma E.

    2015-01-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of severa...

  8. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    Science.gov (United States)

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  9. The effect of JPEG compression on automated detection of microaneurysms in retinal images

    Science.gov (United States)

    Cree, M. J.; Jelinek, H. F.

    2008-02-01

    As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts introduced are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images of various qualities of JPEG compression, and the ability to predict the presence of diabetic retinopathy based on the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes, and this may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.
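The ROC evaluation mentioned above reduces to computing the area under the curve from per-image detector scores and disease labels. A rank-based (Mann-Whitney) sketch follows; the scores shown are hypothetical, not data from the study:

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a randomly chosen positive case receives a
    higher detector score than a randomly chosen negative one,
    with ties counted as one half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# hypothetical microaneurysm counts for diseased (1) / healthy (0) eyes
y = np.array([1, 1, 1, 0, 0, 0])
s = np.array([5, 3, 2, 2, 1, 0])
auc = roc_auc(y, s)
```

Running the same computation on scores from images at each JPEG quality level would quantify how compression degrades discrimination.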

  10. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    Science.gov (United States)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator-dependent and time-consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on a clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
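Of the twelve algorithms tested, the clustering-based Ridler method (also known as isodata) is simple enough to sketch: iterate the threshold to the midpoint of the two class means until it stabilises. The two-level voxel data below are synthetic:

```python
import numpy as np

def ridler_threshold(values, tol=1e-6, max_iter=100):
    """Ridler-Calvard (isodata) threshold.

    Starting from the global mean, repeat
        t <- (mean of values <= t + mean of values > t) / 2
    until the change falls below `tol`.
    """
    values = np.asarray(values, dtype=float)
    t = values.mean()
    for _ in range(max_iter):
        lo = values[values <= t]
        hi = values[values > t]
        if len(lo) == 0 or len(hi) == 0:
            break                      # one class empty: stop iterating
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# synthetic two-level "PET" data: background ~1, hot sphere ~10
voxels = np.array([1.0] * 90 + [10.0] * 10)
threshold = ridler_threshold(voxels)
segmented = voxels > threshold
```

Note that, unlike the 42%-of-maximum rule, the threshold is derived from the image histogram itself, with no scanner calibration or prior object size.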

  11. An Automated Self-Learning Quantification System to Identify Visible Areas in Capsule Endoscopy Images.

    Science.gov (United States)

    Hashimoto, Shinichi; Ogihara, Hiroyuki; Suenaga, Masato; Fujita, Yusuke; Terai, Shuji; Hamamoto, Yoshihiko; Sakaida, Isao

    2017-08-01

    Visibility in capsule endoscopic images is presently evaluated through intermittent analysis of frames selected by a physician. It is thus subjective and not quantitative. A method to automatically quantify the visibility on capsule endoscopic images has not been reported. Generally, when designing automated image recognition programs, physicians must provide a training image; this process is called supervised learning. We aimed to develop a novel automated self-learning quantification system to identify visible areas on capsule endoscopic images. The technique was developed using 200 capsule endoscopic images retrospectively selected from each of three patients. The rate of detection of visible areas on capsule endoscopic images between a supervised learning program, using training images labeled by a physician, and our novel automated self-learning program, using unlabeled training images without intervention by a physician, was compared. The rate of detection of visible areas was equivalent for the supervised learning program and for our automatic self-learning program. The visible areas automatically identified by self-learning program correlated to the areas identified by an experienced physician. We developed a novel self-learning automated program to identify visible areas in capsule endoscopic images.

  12. Automated interpretation of PET/CT images in patients with lung cancer

    DEFF Research Database (Denmark)

    Gutte, Henrik; Jakobsson, David; Olofsson, Fredrik

    2007-01-01

    PURPOSE: To develop a completely automated method based on image processing techniques and artificial neural networks for the interpretation of combined [(18)F]fluorodeoxyglucose (FDG) positron emission tomography (PET) and computed tomography (CT) images for the diagnosis and staging of lung cancer. METHODS: A total of 87 patients who underwent PET/CT examinations due to suspected lung cancer comprised the training group. The test group consisted of PET/CT images from 49 patients suspected with lung cancer. The consensus interpretations by two experienced physicians were used as the 'gold standard' image interpretation. The training group was used in the development of the automated method. The image processing techniques included algorithms for segmentation of the lungs based on the CT images and detection of lesions in the PET images. Lung boundaries from the CT images were used...

  13. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images.

    Science.gov (United States)

    Kim, Kwang-Min; Son, Kilho; Palmore, G Tayhas R

    2015-11-23

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation.
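The Laplacian-of-Gaussian filtering step can be illustrated by constructing the blob-sensitive kernel itself; the sigma and radius below are arbitrary illustrative values, not NIA's settings:

```python
import math

def log_kernel(sigma, radius):
    """Discrete Laplacian-of-Gaussian kernel, mean-subtracted so that flat
    image regions give zero response (a common normalisation). Convolving
    an image with this kernel highlights blob-like structures (e.g. somata)
    at the scale set by sigma while suppressing uniform background."""
    size = 2 * radius + 1
    k = [[0.0] * size for _ in range(size)]
    for y in range(-radius, radius + 1):
        for x in range(-radius, radius + 1):
            r2 = x * x + y * y
            g = math.exp(-r2 / (2 * sigma * sigma))
            k[y + radius][x + radius] = (r2 - 2 * sigma * sigma) / sigma ** 4 * g
    mean = sum(map(sum, k)) / (size * size)
    return [[v - mean for v in row] for row in k]

kernel = log_kernel(sigma=1.5, radius=4)
print(len(kernel), len(kernel[0]))  # 9 9
```

The kernel's centre is negative and its surround positive, so its response peaks in magnitude on bright blobs of matching size, which is why it is useful for picking out neuronal structures in low-SNR images before the graphical-model stage.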

  14. Quantization of polyphenolic compounds in histological sections of grape berries by automated color image analysis

    Science.gov (United States)

    Clement, Alain; Vigouroux, Bertnand

    2003-04-01

    We present new results in applied color image analysis that demonstrate the significant influence of soil on the localization and appearance of polyphenols in grapes. These results were obtained with a new unsupervised classification algorithm based on hierarchical analysis of color histograms. The process is automated thanks to a software platform we developed specifically for color image analysis and its applications.

  15. Fluorescence In Situ Hybridization (FISH) Signal Analysis Using Automated Generated Projection Images

    Directory of Open Access Journals (Sweden)

    Xingwei Wang

    2012-01-01

    Full Text Available Fluorescence in situ hybridization (FISH) tests provide promising molecular imaging biomarkers to more accurately and reliably detect and diagnose cancers and genetic disorders. Since current manual FISH signal analysis is inefficient and inconsistent, which limits its clinical utility, developing automated FISH image scanning systems and computer-aided detection (CAD) schemes has been attracting research interest. To acquire high-resolution FISH images in a multi-spectral scanning mode, a huge amount of image data, a stack of multiple three-dimensional (3-D) image slices, is generated from a single specimen. Automated preprocessing of these scanned images to eliminate non-useful and redundant data is important to make automated FISH tests acceptable in clinical applications. In this study, a dual-detector fluorescence image scanning system was applied to scan four specimen slides with FISH-probed chromosome X. A CAD scheme was developed to detect analyzable interphase cells and map the FISH-probed signals recorded in the multiple imaging slices into 2-D projection images. The CAD scheme was then applied to each projection image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm, identify FISH-probed signals using a top-hat transform, and compute the ratios between normal and abnormal cells. To assess CAD performance, the FISH-probed signals were also independently detected visually by an observer. The Kappa coefficients for agreement between CAD and observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots in the four testing samples. The study demonstrated the feasibility of automated FISH signal analysis in which a CAD scheme is applied to automatically generated 2-D projection images.
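Mapping a multi-slice stack into a 2-D projection image can be sketched as a per-pixel maximum across slices (toy arrays below, not the scanner's data format):

```python
import numpy as np

# A stack of 5 focal slices over a 4x4 field of view; each FISH spot is
# in focus (bright) on only one slice.
stack = np.zeros((5, 4, 4))
stack[2, 1, 1] = 200.0   # a spot in focus on slice 2
stack[4, 3, 0] = 150.0   # another spot in focus on slice 4

# Maximum-intensity projection: keep the brightest value at each pixel,
# collapsing the 3-D stack into one analyzable 2-D image.
projection = stack.max(axis=0)
print(projection[1, 1], projection[3, 0])  # 200.0 150.0
```

The spot detection (top-hat transform) then runs once on the projection instead of on every slice, which is the data-reduction payoff described above.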

  16. Automated Segmentation of Kidneys from MR Images in Patients with Autosomal Dominant Polycystic Kidney Disease.

    Science.gov (United States)

    Kim, Youngwoo; Ge, Yinghui; Tao, Cheng; Zhu, Jianbing; Chapman, Arlene B; Torres, Vicente E; Yu, Alan S L; Mrug, Michal; Bennett, William M; Flessner, Michael F; Landsittel, Doug P; Bae, Kyongtae T

    2016-04-07

    Our study developed a fully automated method for segmentation and volumetric measurement of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease, and assessed the performance of the automated method against the reference manual segmentation method. Study patients were selected from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease. At the enrollment of the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease Study in 2000, patients with autosomal dominant polycystic kidney disease were between 15 and 46 years of age with relatively preserved GFRs. Our fully automated segmentation method was based on a spatial prior probability map of the location of kidneys in abdominal magnetic resonance images and regional mapping with total variation regularization and propagated shape constraints, formulated into a level set framework. T2-weighted magnetic resonance image sets of 120 kidneys were selected from 60 patients with autosomal dominant polycystic kidney disease and divided into training and test datasets. The performance of the automated method in reference to the manual method was assessed by means of two metrics: the Dice similarity coefficient and the intraclass correlation coefficient of segmented kidney volume. The training and test sets were swapped for cross-validation and reanalyzed. Successful segmentation of kidneys was achieved with the automated method in all test patients. The segmented kidney volumes ranged from 177.2 to 2634 ml (mean, 885.4±569.7 ml). The mean Dice similarity coefficient ±SD between the automated and manual methods was 0.88±0.08, and the mean correlation coefficient between the two segmentation methods for the segmented volume measurements was 0.97. In conclusion, the fully automated method successfully segmented kidneys from abdominal magnetic resonance images in patients with autosomal dominant polycystic kidney disease with varying kidney volumes, and its performance was in good agreement with the manual reference method.
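The Dice similarity coefficient used for this evaluation has a compact definition, sketched here over toy voxel sets:

```python
def dice(a, b):
    """Dice similarity between two binary masks represented as sets of
    voxel coordinates: 2|A∩B| / (|A| + |B|). 1.0 means perfect overlap."""
    inter = len(a & b)
    return 2.0 * inter / (len(a) + len(b))

# Toy masks: an "automated" and a "manual" segmentation of the same kidney.
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(auto, manual))  # 0.75
```

In the study, one such value per kidney is averaged over the test set to give the reported 0.88±0.08.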

  17. Integrating two spectral imaging systems in an automated mineralogy application

    CSIR Research Space (South Africa)

    Harris, D

    2009-11-01

    Full Text Available A system for the automated analysis and sorting of mineral samples has been developed to assist in the concentration of heavy mineral samples in the diamond exploration process. These samples consist of irregularly shaped mineral grains ranging from...

  18. Automated morphometry of transgenic mouse brains in MR images

    NARCIS (Netherlands)

    Scheenstra, Alize Elske Hiltje

    2011-01-01

    Quantitative and local morphometry of mouse brain MRI is a relatively new field of research, where automated methods can be exploited to rapidly provide accurate and repeatable results. In this thesis we reviewed several existing methods and applications of quantitative morphometry for brain MR images.

  19. Automated recognition of cell phenotypes in histology images based on membrane- and nuclei-targeting biomarkers

    International Nuclear Information System (INIS)

    Karaçalı, Bilge; Vamvakidou, Alexandra P; Tözeren, Aydın

    2007-01-01

    Three-dimensional in vitro cultures of cancer cells are used to predict the effects of prospective anti-cancer drugs in vivo. In this study, we present an automated image analysis protocol for detailed morphological protein marker profiling of tumoroid cross section images. Histologic cross sections of breast tumoroids developed in co-culture suspensions of breast cancer cell lines, stained for E-cadherin (Ecad) and progesterone receptor (PR), were digitized, and pixels in these images were classified into five categories using k-means clustering. Automated segmentation was used to identify image regions composed of cells expressing a given biomarker. Synthesized images were created to check the accuracy of the image processing system. Accuracy of automated segmentation was over 95% in identifying regions of interest in synthesized images. Image analysis of adjacent histology slides stained, respectively, for Ecad and PR accurately predicted regions of different cell phenotypes. Image analysis of tumoroid cross sections from different tumoroids obtained under the same co-culture conditions indicated the variation of cellular composition from one tumoroid to another. Variations in the compositions of cross sections obtained from the same tumoroid were established by parallel analysis of Ecad- and PR-stained cross section images. The proposed image analysis methods offer standardized high-throughput profiling of the molecular anatomy of tumoroids based on both membrane and nuclei markers, suitable for rapid large-scale investigations of anti-cancer compounds for drug development.

  20. How automated image analysis techniques help scientists in species identification and classification?

    Science.gov (United States)

    Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder

    2017-09-04

    Identification of taxa at a specific level is time consuming and reliant upon expert ecologists; hence the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images, and incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in species identification include image processing of specimens, extraction of distinctive features, and classification into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared the different methods in a step-by-step scheme of automated identification and classification systems for species images. The selection of methods is influenced by many variables, such as the level of classification, the number of training data and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques in building such systems for biodiversity studies.

  1. Automated striatal uptake analysis of 18F-FDOPA PET images applied to Parkinson's disease patients

    International Nuclear Information System (INIS)

    Chang Icheng; Lue Kunhan; Hsieh Hungjen; Liu Shuhsin; Kao, Chinhao K.

    2011-01-01

    6-[18F]Fluoro-L-DOPA (FDOPA) is a radiopharmaceutical valuable for assessing presynaptic dopaminergic function when used with positron emission tomography (PET). More specifically, the striatal-to-occipital ratio (SOR) of FDOPA uptake has been extensively used as a quantitative parameter in these PET studies. Our aim was to develop an easy, automated method capable of performing objective analysis of SOR in FDOPA PET images of Parkinson's disease (PD) patients. Brain images from FDOPA PET studies of 21 patients with PD and 6 healthy subjects were included in our automated striatal analyses. Images of each individual were spatially normalized to an FDOPA template. Subsequently, the image slice with the highest level of basal ganglia activity was chosen from the series of normalized images, and its immediately preceding and following slices were selected as well. Finally, the summation of these three images was used to quantify and calculate the SOR values. The results obtained by automated analysis were compared with manual analysis performed by a trained and experienced image processing technologist. The SOR values obtained from the automated analysis showed good agreement and high correlation with manual analysis: the differences in caudate, putamen, and striatum were -0.023, -0.029, and -0.025, and the correlation coefficients were 0.961, 0.957, and 0.972, respectively. We have successfully developed a method for automated striatal uptake analysis of FDOPA PET images. There was no significant difference between the SOR values obtained by this method and by manual analysis, yet it is an unbiased, time-saving and cost-effective program that is easy to implement on a personal computer. (author)
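The slice-selection and ratio steps can be sketched as follows; the array sizes, region coordinates and masks are invented stand-ins for the template-space definitions, and boundary handling at the volume edges is omitted:

```python
import numpy as np

# Toy "spatially normalized" volume: 10 slices of 8x8 pixels.
rng = np.random.default_rng(0)
vols = rng.random((10, 8, 8))
vols[5, 2:4, 2:4] += 5.0                   # hottest basal-ganglia uptake on slice 5

# Step 1: pick the slice with the highest basal-ganglia activity
# (here the 2x2 block at rows/cols 2:4 stands in for the striatal ROI).
bg = vols[:, 2:4, 2:4].mean(axis=(1, 2))   # mean ROI uptake per slice
k = int(bg.argmax())
print(k)  # 5

# Step 2: sum the chosen slice with its immediate neighbours.
summed = vols[k - 1:k + 2].sum(axis=0)

# Step 3: striatal-to-occipital ratio on the summed image
# (rows/cols 6:8 stand in for the occipital reference region).
striatal = summed[2:4, 2:4].mean()
occipital = summed[6:8, 6:8].mean()
sor = striatal / occipital
```

Because every image is first normalized into the same template space, the ROI coordinates can be fixed once, which is what makes the analysis fully automatic.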

  2. A semi-automated image analysis procedure for in situ plankton imaging systems.

    Directory of Open Access Journals (Sweden)

    Hongsheng Bi

    Full Text Available Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory-controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images from laboratory-controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extract them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton, and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike in approaches for images acquired under laboratory-controlled conditions or in clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was then passed to a group-specific classifier to remove most non-target objects. After the objects were classified, an expert or non-expert manually removed the non-target objects that remained.
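The adaptive threshold used for small organisms can be sketched as a comparison of each pixel against its local neighbourhood mean; the window size and offset below are arbitrary choices, not the values used with ZOOVIS:

```python
import numpy as np

def adaptive_threshold(img, block, offset):
    """Binarise an image against a local mean computed over block x block
    neighbourhoods. Unlike a global threshold, this tolerates the uneven,
    nonlinear illumination typical of turbid-water images."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=bool)
    r = block // 2
    for y in range(h):
        for x in range(w):
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = img[y, x] > win.mean() + offset
    return out

# Toy scene: a smooth illumination gradient plus one small bright organism.
img = np.tile(np.arange(16.0), (16, 1))
img[8, 8] += 10.0
mask = adaptive_threshold(img, block=5, offset=3.0)
print(mask.sum())  # 1 — only the bright spot survives the local test
```

A single global threshold on this image would either miss the organism or flood the bright side of the gradient; the local comparison avoids both failure modes.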

  3. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features...

  4. Automated Micro-Object Detection for Mobile Diagnostics Using Lens-Free Imaging Technology

    Directory of Open Access Journals (Sweden)

    Mohendra Roy

    2016-05-01

    Full Text Available Lens-free imaging technology has been extensively used recently for microparticle and biological cell analysis because of its high throughput, low cost, and simple and compact arrangement. However, this technology still lacks a dedicated and automated detection system. In this paper, we describe a custom-developed automated micro-object detection method for a lens-free imaging system. In our previous work (Roy et al.), we developed a lens-free imaging system using low-cost components. This system was used to generate and capture the diffraction patterns of micro-objects, and a global threshold was used to locate the diffraction patterns. In this work we used the same setup to develop an improved automated detection and analysis algorithm based on adaptive thresholding and clustering of signals. For this purpose, images from the lens-free system were used to understand the features and characteristics of the diffraction patterns of several types of samples. On the basis of this information, we custom-developed an automated algorithm for the lens-free imaging system. Next, all the lens-free images were processed using this custom-developed automated algorithm. The performance of this approach was evaluated by comparing the counting results with standard optical microscope results. We evaluated the counting results for polystyrene microbeads, red blood cells, and HepG2, HeLa, and MCF7 cells. The comparison shows good agreement between the systems, with a correlation coefficient of 0.91 and a linearity slope of 0.877. We also evaluated the automated size profiles of the microparticle samples. This Wi-Fi-enabled lens-free imaging system, along with the dedicated software, possesses great potential for telemedicine applications in resource-limited settings.
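Counting the detected diffraction patterns amounts to labelling connected foreground regions; a minimal 4-connected flood-fill sketch (toy mask below, not the paper's clustering algorithm):

```python
def count_objects(mask):
    """Count 4-connected foreground blobs in a binary mask (list of lists),
    e.g. thresholded diffraction-pattern pixels -> number of micro-objects."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                count += 1                      # new blob found
                stack = [(sy, sx)]
                seen[sy][sx] = True
                while stack:                    # flood-fill the whole blob
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

grid = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
print(count_objects(grid))  # 3
```

The per-blob pixel counts from the same traversal are what a size profile (as evaluated in the paper) would be built from.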

  5. Comparison of the automated evaluation of phantom mama in digital and digitalized images

    International Nuclear Information System (INIS)

    Santana, Priscila do Carmo

    2011-01-01

    Mammography is an essential tool for the diagnosis and early detection of breast cancer, provided it is delivered as a very good quality service. The process of evaluating the quality of radiographic images in general, and mammographic images in particular, can be much more accurate, practical and fast with the help of computer analysis tools. This work compares an automated methodology for the evaluation of digital and digitized images of the phantom mama. By applying digital image processing (DIP) techniques it was possible to determine the geometrical and radiometric parameters of the evaluated images. The evaluated parameters include circular details of low contrast, contrast ratio, spatial resolution, tumor masses, optical density and background in phantom mama scanned and digitized images. The results for both types of images were evaluated. Through this comparison it was possible to demonstrate that the automated methodology is a promising alternative for reducing or eliminating subjectivity in both types of images, although the phantom mama presents insufficient parameters for spatial resolution evaluation. (author)

  6. A Semi-automated Approach to Improve the Efficiency of Medical Imaging Segmentation for Haptic Rendering.

    Science.gov (United States)

    Banerjee, Pat; Hu, Mengqi; Kannan, Rahul; Krishnaswamy, Srinivasan

    2017-08-01

    The Sensimmer platform represents our ongoing research on simultaneous haptics and graphics rendering of 3D models. For simulation of medical and surgical procedures using Sensimmer, 3D models must be obtained from medical imaging data, such as magnetic resonance imaging (MRI) or computed tomography (CT). Image segmentation techniques are used to determine the anatomies of interest from the images. 3D models are obtained from segmentation, and triangle reduction is required for graphics and haptics rendering. This paper focuses on creating 3D models by automating the segmentation of CT images based on pixel contrast, to integrate the interface between Sensimmer and medical imaging devices, using a volumetric approach, the Hough transform method, and a manual centering method. Automating the process reduced the segmentation time by 56.35% while maintaining the accuracy of the output within ±2 voxels.

  7. A method for fast automated microscope image stitching.

    Science.gov (United States)

    Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong

    2013-05-01

    Image stitching is an important technology to produce a panorama or larger image by combining several images with overlapped areas. In many biomedical studies, image stitching is highly desirable to acquire a panoramic image which represents large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal-light microscope image stitching algorithm based on feature extraction. First, an algorithm of scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched within a short time and with higher repeatability. Second, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast for extracting more features. Third, the rough overlapping zones of the preprocessed images were calculated by phase correlation, and the improved SURF was used to extract the image features in the rough overlapping areas. Fourth, the features were matched and the transformation parameters were estimated; then the images were blended seamlessly. Finally, this procedure was applied to stitch normal-light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images, and our method is able to stitch microscope images automatically with high precision and high speed. Also, the method proposed in this paper is applicable to registration and stitching of common images as well as stitching microscope images in the field of virtual microscopy for the purposes of observing, exchanging, saving, and establishing a database of microscope images. Copyright © 2013 Elsevier Ltd. All rights reserved.
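The phase-correlation step for estimating the rough overlap can be sketched with FFTs; this recovers an integer translation under a circular-shift assumption, which is a simplification of real partially overlapping photographs:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the shift (dy, dx) such that np.roll(b, (dy, dx), axis=(0, 1))
    realigns b with a, via the normalised cross-power spectrum."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = (int(v) for v in np.unravel_index(corr.argmax(), corr.shape))
    h, w = a.shape
    # peaks past the halfway point correspond to negative shifts
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(1)
a = rng.random((16, 16))
b = np.roll(a, (3, 5), axis=(0, 1))    # b is a circularly shifted copy of a
print(phase_correlation_shift(a, b))   # (-3, -5)
```

Normalising the cross-power spectrum whitens the images, so the correlation peak marks the translation even when the two tiles differ in overall brightness, which is why it works as a cheap first pass before SURF matching in the overlap zone.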

  8. Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA

    Science.gov (United States)

    Staley, T. D.; Anderson, G. E.

    2015-11-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope-agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, of the sort typically acquired for transient surveys or follow-up. The algorithm aims to improve upon standard imaging pipelines by utilizing iterative RMS-estimation and automated source-detection to avoid so called 'Clean-bias', and makes use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. While targeted at automated imaging, the drive-casa interface can also be used to automate interaction with any of the CASA subroutines from a generic Python process. Additionally, these packages may be of wider technical interest beyond radio-astronomy, since they demonstrate use of the Python library pexpect to emulate terminal interaction with an external process. This approach allows for rapid development of a Python interface to any legacy or externally-maintained pipeline which accepts command-line input, without requiring alterations to the original code.
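The iterative RMS-estimation idea, estimating the image noise without letting bright sources inflate it, can be sketched with sigma clipping; this illustrates the concept and is not chimenea's code:

```python
import numpy as np

def clipped_rms(pixels, kappa=3.0, iters=5):
    """Iteratively estimate the noise RMS of an image: repeatedly discard
    pixels further than kappa*rms from the mean, so that bright sources
    do not bias the estimate used to set the cleaning threshold."""
    data = np.asarray(pixels, dtype=float)
    for _ in range(iters):
        mean, rms = data.mean(), data.std()
        keep = np.abs(data - mean) < kappa * rms
        if keep.all():
            break
        data = data[keep]
    return data.std()

rng = np.random.default_rng(2)
sky = rng.normal(0.0, 1.0, 10000)                  # pure noise pixels
field = np.concatenate([sky, np.full(50, 50.0)])   # plus bright sources
print(field.std() > 2.0, 0.8 < clipped_rms(field) < 1.1)  # True True
```

The naive standard deviation is dominated by the sources, while the clipped estimate recovers the underlying unit noise, which is the property an automated imaging loop needs when deciding how deep to clean.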

  9. Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.

    Science.gov (United States)

    Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2016-04-01

    Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle aged adults; where 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.
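Sensitivity and specificity follow directly from confusion-matrix counts; the counts below are invented for illustration and merely chosen to produce percentages similar in magnitude to those reported:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts
    (here, 'inadequate image' is the positive class)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts over a labelled subset of retinal images.
sensitivity, specificity = sens_spec(tp=143, fn=7, tn=592, fp=58)
print(round(sensitivity, 4), round(specificity, 4))  # 0.9533 0.9108
```

High sensitivity matters most here: an inadequate image that slips past the quality gate would contaminate the downstream vessel morphometry, whereas a false rejection only costs one image.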

  10. Automated three-dimensional analysis of particle measurements using an optical profilometer and image analysis software.

    Science.gov (United States)

    Bullman, V

    2003-07-01

    The automated collection of topographic images from an optical profilometer coupled with existing image analysis software offers the unique ability to quantify three-dimensional particle morphology. Optional software available with most optical profilers permits automated collection of adjacent topographic images of particles dispersed onto a suitable substrate. Particles are recognized in the image as a set of continuous pixels with grey-level values above the grey level assigned to the substrate, whereas particle height or thickness is represented in the numerical differences between these grey levels. These images are loaded into remote image analysis software where macros automate image processing, and then distinguish particles for feature analysis, including standard two-dimensional measurements (e.g. projected area, length, width, aspect ratios) and third-dimension measurements (e.g. maximum height, mean height). Feature measurements from each calibrated image are automatically added to cumulative databases and exported to a commercial spreadsheet or statistical program for further data processing and presentation. An example is given that demonstrates the superiority of quantitative three-dimensional measurements by optical profilometry and image analysis in comparison with conventional two-dimensional measurements for the characterization of pharmaceutical powders with plate-like particles.

  11. Microscopic images dataset for automation of RBCs counting

    Directory of Open Access Journals (Sweden)

    Sherif Abbas

    2015-12-01

    Full Text Available A method for Red Blood Corpuscles (RBCs) counting has been developed using light microscopic images of RBCs and a Matlab algorithm. The dataset consists of RBCs images and their corresponding segmented images. A detailed description using a flow chart is given in order to show how to produce the RBCs mask. The RBCs mask was used to count the number of RBCs in the blood smear image.

  12. [Clinical application of automated digital image analysis for morphology review of peripheral blood leukocyte].

    Science.gov (United States)

    Xing, Ying; Yan, Xiaohua; Pu, Chengwei; Shang, Ke; Dong, Ning; Wang, Run; Wang, Jianzhong

    2016-03-01

    To explore the clinical application of automated digital image analysis in leukocyte morphology examination when the review criteria of a hematology analyzer are triggered. The reference range of leukocyte differentiation by automated digital image analysis was established by analyzing 304 healthy blood samples from Peking University First Hospital. Six hundred and ninety-seven blood samples from Peking University First Hospital were randomly collected from November 2013 to April 2014; complete blood counts were performed on a hematology analyzer, and blood smears were made and stained at the same time. Blood smears were examined by an automated digital image analyzer and the results were checked (reclassified) by a staff member with extensive morphology experience. The same smears were examined manually by microscope. The results of manual microscopic differentiation were used as the 'gold standard', and the diagnostic efficiency of automated digital image analysis for abnormal specimens was calculated, including sensitivity, specificity and accuracy. The difference in abnormal leukocytes detected by the two methods was analyzed in 30 samples of hematological and infectious diseases. The specificity of identifying white blood cell abnormalities by automated digital image analysis was more than 90% except for monocytes. Sensitivity for neutrophil toxic abnormalities (including Döhle bodies, toxic granulation and vacuolization) was 100%; sensitivity for blast cells, immature granulocytes and atypical lymphocytes was 91.7%, 60% to 81.5% and 61.5%, respectively. Sensitivity of the leukocyte differential count was 91.8% for neutrophils, 88.5% for lymphocytes, 69.1% for monocytes, 78.9% for eosinophils and 36.3% for basophils. The positive rate of recognizing abnormal cells (blasts, immature granulocytes and atypical lymphocytes) by the manual microscopic method was 46.7%, 53.3% and 10%, respectively; the positive rate of automated digital image analysis was 43.3%, 60% and 10%, respectively. There was no statistically significant difference between the two methods.

  13. Evaluation of an improved technique for automated center lumen line definition in cardiovascular image data

    International Nuclear Information System (INIS)

    Gratama van Andel, Hugo A.F.; Meijering, Erik; Vrooman, Henri A.; Stokking, Rik; Lugt, Aad van der; Monye, Cecile de

    2006-01-01

    The aim of the study was to evaluate a new method for automated definition of a center lumen line in vessels in cardiovascular image data. This method, called VAMPIRE, is based on improved detection of vessel-like structures. A multiobserver evaluation study was conducted involving 40 tracings in clinical CTA data of carotid arteries to compare VAMPIRE with an established technique. This comparison showed that VAMPIRE yields considerably more successful tracings and improved handling of stenosis, calcifications, multiple vessels, and nearby bone structures. We conclude that VAMPIRE is highly suitable for automated definition of center lumen lines in vessels in cardiovascular image data. (orig.)

  14. A feasibility assessment of automated FISH image and signal analysis to assist cervical cancer detection

    Science.gov (United States)

    Wang, Xingwei; Li, Yuhua; Liu, Hong; Li, Shibo; Zhang, Roy R.; Zheng, Bin

    2012-02-01

    Fluorescence in situ hybridization (FISH) technology provides a promising molecular imaging tool for detecting cervical cancer. Since manual FISH analysis is difficult, time-consuming, and inconsistent, automated FISH image scanning systems have been developed. Because of the limited focal depth of the scanned microscopic images, a FISH-probed specimen needs to be scanned in multiple layers, which generates a huge volume of image data. To improve the diagnostic efficiency of automated FISH image analysis, we developed a computer-aided detection (CAD) scheme. In this experiment, four pap-smear specimen slides were scanned by a dual-detector fluorescence image scanning system that acquired two spectrum images simultaneously, representing images of interphase cells and FISH-probed chromosome X. During image scanning, once a cell signal was detected, the system captured nine image slices by automatically adjusting the optical focus. Based on a sharpness index and maximum intensity measurement, cells and FISH signals distributed in 3-D space were projected into a 2-D confocal image. The CAD scheme was applied to each confocal image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm and to detect FISH-probed signals using a top-hat transform. The ratio of abnormal cells was calculated to detect positive cases. In the four scanned specimen slides, CAD generated 1676 confocal images that depicted analyzable cells. FISH-probed signals were independently detected by our CAD algorithm and by an observer. The Kappa coefficients for agreement between CAD and observer ranged from 0.69 to 1.0 in detecting and counting FISH signal spots. The study demonstrated the feasibility of applying automated FISH image and signal analysis to assist cytogeneticists in detecting cervical cancers.
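
    The projection step this record describes, collapsing a stack of focal slices into a single 2-D composite by keeping the brightest value at each pixel, can be sketched in a few lines of numpy. This is an illustrative sketch rather than the authors' code; the function name and toy stack are invented, and the paper additionally weighs slices by a sharpness index, which is omitted here.

```python
import numpy as np

def max_intensity_projection(stack):
    """Collapse a z-stack of shape (n_slices, H, W) into one 2-D image
    by taking the per-pixel maximum across slices."""
    return np.asarray(stack).max(axis=0)

# Toy 3-slice stack: a FISH-like spot is in focus only in slice 1.
stack = np.zeros((3, 5, 5))
stack[0, 2, 2] = 50.0    # out-of-focus blur
stack[1, 2, 2] = 200.0   # in-focus signal
proj = max_intensity_projection(stack)  # the bright spot survives the projection
```
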

  15. Automated quantification of budding Saccharomyces cerevisiae using a novel image cytometry method.

    Science.gov (United States)

    Laverty, Daniel J; Kury, Alexandria L; Kuksin, Dmitry; Pirani, Alnoor; Flanagan, Kevin; Chan, Leo Li-Ying

    2013-06-01

    The measurements of concentration, viability, and budding percentages of Saccharomyces cerevisiae are performed on a routine basis in the brewing and biofuel industries. Generation of these parameters is of great importance in a manufacturing setting, where they aid in estimating the quality, quantity, and fermentation time of the manufacturing process. Specifically, budding percentages can be used to estimate the reproduction rate of yeast populations, which directly correlates with the metabolism of polysaccharides and bioethanol production, and can be monitored to maximize bioethanol production during fermentation. The traditional method involves manual counting using a hemacytometer, but this is time-consuming and prone to human error. In this study, we developed a novel automated method for the quantification of yeast budding percentages using Cellometer image cytometry. The automated method utilizes a dual-fluorescent nucleic acid dye to specifically stain live cells for imaging analysis of the unique morphological characteristics of budding yeast. In addition, cell cycle analysis is performed as an alternative method for budding analysis. We were able to show comparable yeast budding percentages between manual and automated counting, as well as cell cycle analysis. The automated image cytometry method was used to analyze and characterize corn mash samples taken directly from fermenters during standard fermentation. Since concentration, viability, and budding percentages can be obtained simultaneously, the automated method can be integrated into the fermentation quality assurance protocol, which may improve the quality and efficiency of beer and bioethanol production processes.

  16. Automated, non-linear registration between 3-dimensional brain map and medical head image

    International Nuclear Information System (INIS)

    Mizuta, Shinobu; Urayama, Shin-ichi; Zoroofi, R.A.; Uyama, Chikao

    1998-01-01

    In this paper, we propose an automated, non-linear registration method between a 3-dimensional medical head image and a brain map in order to efficiently extract regions of interest. In our method, the input 3-dimensional image is registered to a reference image extracted from a brain map. The problems to be solved are an automated, non-linear image matching procedure, and a cost function that represents the similarity between the two images. Non-linear matching is carried out by dividing the input image into connected partial regions, transforming the partial regions while preserving connectivity among adjacent regions, evaluating the image similarity between the transformed regions of the input image and the corresponding regions of the reference image, and iteratively searching for the optimal transformation of the partial regions. In order to measure the voxelwise similarity of multi-modal images, a cost function based on mutual information is introduced. Experiments using MR images demonstrated the effectiveness of the proposed method. (author)
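
    The mutual-information cost function this record relies on can be estimated directly from a joint intensity histogram. The sketch below is a generic illustration of the idea with equal-width binning, not the authors' implementation; the function name, bin count, and test images are invented.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (in nats) estimated from the joint intensity
    histogram of two equally sized images."""
    joint, _, _ = np.histogram2d(np.ravel(img_a), np.ravel(img_b), bins=bins)
    pab = joint / joint.sum()
    pa = pab.sum(axis=1, keepdims=True)   # marginal of img_a
    pb = pab.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pab > 0                          # avoid log(0) on empty bins
    return float(np.sum(pab[nz] * np.log(pab[nz] / (pa @ pb)[nz])))

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
noise = rng.random((64, 64))  # independent image: little shared information
```

    A registration loop would transform one image and keep the transformation that maximizes this score; an image compared against itself scores far higher than against an unrelated one.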

  17. SU-E-I-94: Automated Image Quality Assessment of Radiographic Systems Using An Anthropomorphic Phantom

    International Nuclear Information System (INIS)

    Wells, J; Wilson, J; Zhang, Y; Samei, E; Ravin, Carl E.

    2014-01-01

    Purpose: In a large, academic medical center, consistent radiographic imaging performance is difficult to routinely monitor and maintain, especially for a fleet consisting of multiple vendors, models, software versions, and numerous imaging protocols. Thus, an automated image quality control methodology has been implemented using routine image quality assessment with a physical, stylized anthropomorphic chest phantom. Methods: The “Duke” Phantom (Digital Phantom 07-646, Supertech, Elkhart, IN) was imaged twice on each of 13 radiographic units from a variety of vendors at 13 primary care clinics. The first acquisition used the clinical PA chest protocol to acquire the post-processed “FOR PRESENTATION” image. The second image was acquired without an antiscatter grid followed by collection of the “FOR PROCESSING” image. Manual CNR measurements were made from the largest and thickest contrast-detail inserts in the lung, heart, and abdominal regions of the phantom in each image. An automated image registration algorithm was used to estimate the CNR of the same insert using similar ROIs. Automated measurements were then compared to the manual measurements. Results: Automatic and manual CNR measurements obtained from “FOR PRESENTATION” images had average percent differences of 0.42% ± 5.18%, -3.44% ± 4.85%, and 1.04% ± 3.15% in the lung, heart, and abdominal regions, respectively; measurements obtained from “FOR PROCESSING” images had average percent differences of -0.63% ± 6.66%, -0.97% ± 3.92%, and -0.53% ± 4.18%, respectively. The maximum absolute difference in CNR was 15.78%, 10.89%, and 8.73% in the respective regions. In addition to CNR assessment of the largest and thickest contrast-detail inserts, the automated method also provided CNR estimates for all 75 contrast-detail inserts in each phantom image. Conclusion: Automated analysis of a radiographic phantom has been shown to be a fast, robust, and objective means for assessing radiographic
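
    The CNR comparison in this record reduces, for a given insert, to a mean-difference-over-noise ratio computed from two pixel regions. The sketch below uses one common CNR definition (ROI-background mean difference divided by background standard deviation); the abstract does not state the exact formula, so treat the definition and the toy pixel values as assumptions.

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: absolute mean difference between an ROI and
    its background, divided by the background standard deviation."""
    roi = np.asarray(roi, float)
    bg = np.asarray(background, float)
    return abs(roi.mean() - bg.mean()) / bg.std()

# Toy pixel samples from a contrast-detail insert and nearby background.
insert_roi = np.array([108.0, 110.0, 112.0, 110.0])  # mean 110
background = np.array([98.0, 100.0, 102.0, 100.0])   # mean 100, std sqrt(2)
value = cnr(insert_roi, background)
```
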

  18. Infrared thermal imaging for automated detection of diabetic foot complications

    NARCIS (Netherlands)

    van Netten, Jaap J.; van Baal, Jeff G.; Liu, Chanjuan; van der Heijden, Ferdi; Bus, Sicco A.

    2013-01-01

    Although thermal imaging can be a valuable technology in the prevention and management of diabetic foot disease, it is not yet widely used in clinical practice. Technological advancement in infrared imaging increases its application range. The aim was to explore the first steps in the applicability

  19. Infrared thermal imaging for automated detection of diabetic foot complications

    NARCIS (Netherlands)

    van Netten, Jaap J.; van Baal, Jeff G.; Liu, C.; van der Heijden, Ferdinand; Bus, Sicco A.

    Background: Although thermal imaging can be a valuable technology in the prevention and management of diabetic foot disease, it is not yet widely used in clinical practice. Technological advancement in infrared imaging increases its application range. The aim was to explore the first steps in the

  20. Automated measurement of pressure injury through image processing.

    Science.gov (United States)

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, manual measurement of pressure injuries is time-consuming, challenging and subject to intra/inter-reader variability, given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from the RGB (i.e. red, green and blue) colour space to the YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using a Support Vector Machine classifier. Third, after segmentation, the reference ruler included in each of the images enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure
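
    The first step of the algorithm, converting RGB to YCbCr so that the chrominance channels are less sensitive to lighting, is a standard colour-space transform. The sketch below uses the common BT.601 full-range coefficients; the record does not state which variant the authors used, so the exact coefficients are an assumption.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image (H, W, 3) with 0-255 values to YCbCr
    using BT.601 full-range coefficients."""
    img = np.asarray(img, float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

# A neutral grey patch maps to Y = 128 with centred chroma (Cb = Cr = 128),
# i.e. lighting changes move Y while leaving the chroma channels stable.
grey = np.full((2, 2, 3), 128.0)
ycc = rgb_to_ycbcr(grey)
```
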

  1. Automated whole animal bio-imaging assay for human cancer dissemination.

    Directory of Open Access Journals (Sweden)

    Veerander P S Ghotra

    Full Text Available A quantitative bio-imaging platform is developed for analysis of human cancer dissemination in a short-term vertebrate xenotransplantation assay. Six days after implantation of cancer cells in zebrafish embryos, automated imaging in 96 well plates coupled to image analysis algorithms quantifies spreading throughout the host. Findings in this model correlate with behavior in long-term rodent xenograft models for panels of poorly- versus highly malignant cell lines derived from breast, colorectal, and prostate cancer. In addition, cancer cells with scattered mesenchymal characteristics show higher dissemination capacity than cell types with epithelial appearance. Moreover, RNA interference establishes the metastasis-suppressor role for E-cadherin in this model. This automated quantitative whole animal bio-imaging assay can serve as a first-line in vivo screening step in the anti-cancer drug target discovery pipeline.

  2. Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images

    NARCIS (Netherlands)

    Lee, K.; Buitendijk, G.H.; Bogunovic, H.; Springelkamp, H.; Hofman, A.; Wahle, A.; Sonka, M.; Vingerling, J.R.; Klaver, C.C.W.; Abramoff, M.D.

    2016-01-01

    PURPOSE: To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. METHODS: Six hundred ninety macular SD-OCT image volumes (6.0 x 6.0 x 2.3 mm3)

  3. An Automated Method for Semantic Classification of Regions in Coastal Images

    NARCIS (Netherlands)

    Hoonhout, B.M.; Radermacher, M.; Baart, F.; Van der Maaten, L.J.P.

    2015-01-01

    Large, long-term coastal imagery datasets are nowadays a low-cost source of information for various coastal research disciplines. However, the applicability of many existing algorithms for coastal image analysis is limited for these large datasets due to a lack of automation and robustness.

  4. Automated and unbiased image analyses as tools in phenotypic classification of small-spored Alternaria species

    DEFF Research Database (Denmark)

    Andersen, Birgitte; Hansen, Michael Edberg; Smedsgaard, Jørn

    2005-01-01

    often has been broadly applied to various morphologically and chemically distinct groups of isolates from different hosts. The purpose of this study was to develop and evaluate automated and unbiased image analysis systems that will analyze different phenotypic characters and facilitate testing...

  5. Automated image mosaics by non-automated light microscopes: the MicroMos software tool.

    Science.gov (United States)

    Piccinini, F; Bevilacqua, A; Lucarelli, E

    2013-12-01

    Light widefield microscopes and digital imaging are the basis for most of the analyses performed in every biological laboratory. In particular, the microscope's user is typically interested in acquiring highly detailed images for analysing the observed cells and tissues, while also covering a wide area to obtain reliable statistics. Due to the finite size of the camera's field of view, the microscopist has to choose between a higher magnification factor and a larger observed area. To overcome this trade-off, mosaicing techniques have been developed over the past decades to increase the camera's field of view by stitching together multiple images. Nevertheless, these approaches typically work in batch mode and rely on motorized microscopes, or alternatively are conceived only to provide visually pleasing mosaics not suitable for quantitative analyses. This work presents a tool for building mosaics of images acquired with non-automated light microscopes. The proposed method is based on visual information only, and the mosaics are built by incrementally stitching pairs of images, making the approach available also for online applications. Seams in the stitching regions as well as tonal inhomogeneities are corrected by compensating for the vignetting effect. In the experiments performed, we tested different registration approaches, confirming that the translation model is not always the best, despite the fact that the motion of the microscope's sample holder is apparently translational and is typically considered as such. The method's implementation is freely distributed as an open source tool called MicroMos. Its usability makes building mosaics of microscope images at subpixel accuracy easier. Furthermore, optional parameters for building mosaics according to different strategies make MicroMos an easy and reliable tool for comparing different registration approaches, warping models and tonal corrections.
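
    For the translation model discussed in the experiments, the pairwise shift between two overlapping tiles can be estimated with FFT phase correlation. The sketch below is a generic minimal version of such a registration step, not MicroMos code; it assumes integer shifts and periodic boundaries, whereas a real stitcher would refine to subpixel accuracy.

```python
import numpy as np

def estimate_translation(img_a, img_b):
    """Estimate the integer (dy, dx) shift such that
    np.roll(img_b, (dy, dx), axis=(0, 1)) best aligns with img_a,
    using FFT phase correlation."""
    cross = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    cross /= np.abs(cross) + 1e-12        # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                        # map wrap-around peaks to
        dy -= h                            # negative shifts
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(1)
tile_a = rng.random((32, 32))
tile_b = np.roll(tile_a, (-4, 3), axis=(0, 1))  # tile_a == roll(tile_b, (4, -3))
```
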

  6. Automated registration of multispectral MR vessel wall images of the carotid artery

    Energy Technology Data Exchange (ETDEWEB)

    Klooster, R. van 't; Staring, M.; Reiber, J. H. C.; Lelieveldt, B. P. F.; Geest, R. J. van der, E-mail: rvdgeest@lumc.nl [Department of Radiology, Division of Image Processing, Leiden University Medical Center, 2300 RC Leiden (Netherlands); Klein, S. [Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 GE (Netherlands); Kwee, R. M.; Kooi, M. E. [Department of Radiology, Cardiovascular Research Institute Maastricht, Maastricht University Medical Center, Maastricht 6202 AZ (Netherlands)

    2013-12-15

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an accurate automated 3D registration method, and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying through-plane and in-plane translations to the images. To find the optimal automatic image registration method, different masks, choices of the fixed image, different types of the mutual information image similarity metric, and transformation models, including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and

  7. An image-processing program for automated counting

    Science.gov (United States)

    Cunningham, D.J.; Anderson, W.H.; Anthony, R.M.

    1996-01-01

    An image-processing program developed by the National Institutes of Health, IMAGE, was modified in a cooperative project between remote sensing specialists at the Ohio State University Center for Mapping and scientists at the Alaska Science Center to facilitate estimating the numbers of black brant (Branta bernicla nigricans) in flocks at Izembek National Wildlife Refuge. The modified program, DUCK HUNT, runs on Apple computers. The modifications provide users with a pull-down menu that optimizes image quality; identifies objects of interest (e.g., brant) by spectral, morphometric, and spatial parameters defined interactively by users; counts and labels objects of interest; and produces summary tables. Images from digitized photography, videography, and high-resolution digital photography have been used with this program to count various species of waterfowl.
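
    The core counting operation, labeling and counting connected groups of foreground pixels once objects of interest have been segmented, can be sketched with a simple flood fill. This is an illustrative stand-in for DUCK HUNT's counting stage; the function name and toy mask are invented, and a real system would additionally filter regions by the spectral and morphometric parameters the record mentions.

```python
import numpy as np
from collections import deque

def count_objects(mask):
    """Count 4-connected foreground regions in a boolean mask via flood fill."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1                      # new object found
                queue = deque([(i, j)])
                seen[i, j] = True
                while queue:                    # absorb the whole region
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
    return count

# Toy segmentation mask with three separate "birds".
flock = np.zeros((8, 8), dtype=bool)
flock[1, 1] = True          # single pixel
flock[3:5, 3:5] = True      # 2x2 blob
flock[6, 0:3] = True        # horizontal run
```
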

  8. An automated image analysis system to measure and count organisms in laboratory microcosms.

    Directory of Open Access Journals (Sweden)

    François Mallard

    Full Text Available 1. Because of recent technological improvements in the performance of computers and digital cameras, the potential use of imaging for studying communities, populations or individuals in laboratory microcosms has risen enormously. However, its use remains limited by difficulties in automating image analysis. 2. We present an accurate and flexible image analysis method for detecting, counting and measuring moving particles on a fixed but heterogeneous substrate. This method has been specifically designed to follow individuals, or entire populations, in experimental laboratory microcosms, and it can be used in other applications. 3. The method consists of comparing multiple pictures of the same experimental microcosm in order to generate an image of the fixed background. This background is then used to extract, measure and count the moving organisms, leaving out the fixed background and the motionless or dead individuals. 4. We provide different examples (springtails, ants, nematodes, daphnia) to show that this non-intrusive method is efficient at detecting organisms under a wide variety of conditions, even on faintly contrasted and heterogeneous substrates. 5. The repeatability and reliability of this method have been assessed using experimental populations of the collembolan Folsomia candida. 6. We present an ImageJ plugin to automate the analysis of digital pictures of laboratory microcosms. The plugin automates the successive steps of the analysis and recursively analyses multiple sets of images, rapidly producing measurements from a large number of replicated microcosms.
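
    Step 3 of the method, building a fixed-background image from multiple pictures of the same microcosm and then extracting the moving organisms, can be approximated with a per-pixel median. This is a minimal sketch under assumed details (median as the background estimator, a fixed grey-level threshold); the actual plugin's estimator is not specified in the record.

```python
import numpy as np

def estimate_background(frames):
    """Per-pixel median across frames: an organism occupies any given pixel
    in only a few frames, so the median recovers the fixed substrate."""
    return np.median(np.asarray(frames, float), axis=0)

def moving_mask(frame, background, thresh=20.0):
    """Pixels that differ from the estimated background by more than thresh."""
    return np.abs(np.asarray(frame, float) - background) > thresh

# Five frames of a uniform substrate (grey level 100) with one bright
# "organism" (grey level 200) at a different position in each frame.
frames = np.full((5, 6, 6), 100.0)
for t in range(5):
    frames[t, t, t] = 200.0
bg = estimate_background(frames)       # organism averaged away by the median
mask0 = moving_mask(frames[0], bg)     # only the frame-0 organism remains
```
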

  9. Automated Analysis of Microscopic Images of Isolated Pancreatic Islets

    Czech Academy of Sciences Publication Activity Database

    Habart, D.; Švihlík, J.; Schier, Jan; Cahová, M.; Girman, P.; Zacharovová, K.; Berková, Z.; Kříž, J.; Fabryová, E.; Kosinová, L.; Papáčková, Z.; Kybic, J.; Saudek, F.

    2016-01-01

    Roč. 25, č. 12 (2016), s. 2145-2156 ISSN 0963-6897 Grant - others:GA ČR(CZ) GA14-10440S Institutional support: RVO:67985556 Keywords : enumeration of islets * image processing * image segmentation * islet transplantation * machine-learning * quality control Subject RIV: IN - Informatics, Computer Science Impact factor: 3.006, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/schier-0465945.pdf

  10. The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images

    International Nuclear Information System (INIS)

    Shahidi, Shoaleh; Bahrampour, Ehsan; Soltanimehr, Elham; Zamani, Ali; Oshagh, Morteza; Moattari, Marzieh; Mehdizadeh, Alireza

    2014-01-01

    Two-dimensional projection radiographs have traditionally been considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration, and to evaluate its accuracy. The software was designed using the MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel-similarity-based methods for image registration. A total of 8 CBCT images were selected as reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as test images for evaluating the method. Three experts located 14 landmarks in all 28 CBCT images twice, in two examination sessions set 6 weeks apart. The distances between the coordinates of each landmark on each image obtained by the manual and automated detection methods were calculated and reported as mean errors. The combined intraclass correlation coefficient was 0.89 for intraobserver reliability and 0.87 for interobserver reliability (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (the gold standard method). The accuracy of our approach for automated localization of craniofacial landmarks, based on combining feature-based and voxel-similarity-based methods for image registration, was acceptable. Nevertheless, we recommend repetition of this study using other techniques, such as intensity-based methods

  11. Normalized gradient fields cross-correlation for automated detection of prostate in magnetic resonance images

    Science.gov (United States)

    Fotin, Sergei V.; Yin, Yin; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter L.

    2012-02-01

    Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in the objective evaluation of multiparametric MR imagery, provides a prostate contour for MR-ultrasound (or CT) image fusion for computer-assisted image-guided biopsy or therapy planning, may facilitate reporting, and enables direct prostate volume calculation. Among the challenges in automated analysis of MR images of the prostate are the variations of overall image intensities across scanners, the presence of a nonuniform multiplicative bias field within scans, and differences in acquisition setup. Furthermore, images acquired with an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail. The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, that was split in two halves for development and testing. In addition, a second dataset of 29 MR exams from the Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 ± 0.33 mm and 3.10 ± 0.43 mm for the first and second test datasets, respectively. Moreover, the algorithm placed the centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for fully automated prostate segmentation.
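
    The normalized gradient fields similarity underlying this detector replaces raw intensities with local gradient directions, which is what makes it insensitive to intensity variations and bias fields. Below is a minimal sketch of the measure following the usual NGF formulation (edge parameter eps, squared inner product); the exact parameter choices and the cross-correlation machinery of the paper are not reproduced.

```python
import numpy as np

def normalized_gradient_field(img, eps=1e-3):
    """Per-pixel gradient direction: scaling each gradient to (near) unit
    length discards intensity magnitude, so the measure depends on edge
    orientation rather than brightness."""
    gy, gx = np.gradient(np.asarray(img, float))
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
    return gx / mag, gy / mag

def ngf_score(img_a, img_b):
    """Mean squared inner product of two normalized gradient fields;
    higher values indicate better structural alignment."""
    ax, ay = normalized_gradient_field(img_a)
    bx, by = normalized_gradient_field(img_b)
    return float(np.mean((ax * bx + ay * by) ** 2))

rng = np.random.default_rng(2)
template = rng.random((48, 48))
brighter = 2.0 * template + 50.0        # same structure, different intensities
shifted = np.roll(template, 9, axis=1)  # misaligned copy
```

    Rescaling the intensities barely changes the score, while a spatial misalignment lowers it, which is exactly the behaviour a detector matched against a template needs.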

  12. Automated tissue segmentation of MR brain images in the presence of white matter lesions.

    Science.gov (United States)

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Lladó, Xavier

    2017-01-01

    Over the last few years, increasing interest in brain tissue volume measurements in clinical settings has led to the development of a wide number of automated tissue segmentation methods. However, white matter (WM) lesions are known to reduce the performance of these methods, which then require manual annotation and refilling of the lesions before segmentation, a tedious and time-consuming task. Here, we propose a new, fully automated T1-w/FLAIR tissue segmentation approach designed to deal with images in the presence of WM lesions. This approach integrates a robust partial volume tissue segmentation with WM outlier rejection and filling, combining intensity with probabilistic and morphological prior maps. We evaluate the performance of this method on the MRBrainS13 tissue segmentation challenge database, which contains images with vascular WM lesions, and also on a set of multiple sclerosis (MS) patient images. On both databases, we compare the performance of our method with other state-of-the-art techniques. On the MRBrainS13 data, the presented approach was, at the time of submission, the best ranked unsupervised intensity model method of the challenge (7th position overall) and clearly outperformed other unsupervised pipelines such as FAST and SPM12. On MS data, the differences in tissue segmentation between images segmented with our method and the same images in which manual expert annotations were used to refill lesions on T1-w images before segmentation were lower than or similar to those of the best state-of-the-art pipeline incorporating automated lesion segmentation and filling. Our results show that the proposed pipeline achieves very competitive results on both vascular and MS lesions. A public version of this approach is available to download for the neuro-imaging community. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Fully automated registration of vibrational microspectroscopic images in histologically stained tissue sections.

    Science.gov (United States)

    Yang, Chen; Niedieker, Daniel; Grosserüschkamp, Frederik; Horn, Melanie; Tannapfel, Andrea; Kallenbach-Thieltges, Angela; Gerwert, Klaus; Mosig, Axel

    2015-11-25

    In recent years, hyperspectral microscopy techniques such as infrared or Raman microscopy have been applied successfully for diagnostic purposes. In many of the corresponding studies, it is common practice to measure one and the same sample under different types of microscopes. Any joint analysis of the two image modalities requires overlaying the images, so that identical positions in the sample are located at the same coordinate in both images. This step, commonly referred to as image registration, has typically been performed manually in the absence of established automated computational registration tools. We propose a registration algorithm that addresses this problem, and demonstrate the robustness of our approach in different constellations of microscopes. First, we deal with subregion registration of Fourier transform infrared (FTIR) microscopic images in whole-slide histopathological staining images. Second, we register FTIR-imaged cores of tissue microarrays in their histopathologically stained counterparts, and finally we perform registration of coherent anti-Stokes Raman spectroscopic (CARS) images within histopathological staining images. Our validation involves a large variety of samples obtained from colon, bladder, and lung tissue on three different types of microscopes, and demonstrates that our proposed method works fully automatically and is highly robust across different constellations of microscopes and diverse types of tissue samples.

  14. An Automated Platform for High-Resolution Tissue Imaging Using Nanospray Desorption Electrospray Ionization Mass Spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Lanekoff, Ingela T.; Heath, Brandi S.; Liyu, Andrey V.; Thomas, Mathew; Carson, James P.; Laskin, Julia

    2012-10-02

    An automated platform has been developed for the acquisition and visualization of mass spectrometry imaging (MSI) data using nanospray desorption electrospray ionization (nano-DESI). The new system enables robust operation of the nano-DESI imaging source over many hours. This is achieved by controlling the distance between the sample and the probe, by mounting the sample holder onto an automated XYZ stage and defining the tilt of the sample plane. This approach is useful for imaging relatively flat samples such as thin tissue sections. Custom software called MSI QuickView was developed for visualization of the large data sets generated in imaging experiments. MSI QuickView enables fast visualization of the imaging data during data acquisition and detailed processing after the entire image is acquired. The performance of the system is demonstrated by imaging rat brain tissue sections. High-resolution mass analysis combined with MS/MS experiments enabled identification of lipids and metabolites in the tissue section. In addition, the high dynamic range and sensitivity of the technique allowed us to generate ion images of low-abundance isobaric lipids. A high-spatial-resolution image acquired over a small region of the tissue section revealed the spatial distribution of an abundant brain metabolite, creatine, in the white and gray matter, consistent with literature data obtained using magnetic resonance spectroscopy.

  15. Automated analysis of image mammogram for breast cancer diagnosis

    Science.gov (United States)

    Nurhasanah, Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases inside the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to determine the density of normal and abnormal tissue, and applies a segmentation technique based on the K-Means clustering algorithm to abnormal images to determine organ boundaries and calculate the area of the segmented regions. The results show that the fractal method based on 2D Fourier analysis can be used to distinguish between normal and abnormal breasts, and that segmentation with the K-Means clustering algorithm is able to generate the boundaries of normal and abnormal tissue, so the area of the abnormal tissue can be determined.
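    The K-Means segmentation step can be sketched on pixel intensities alone. The synthetic image, two-cluster setup, and deterministic initialisation below are illustrative assumptions, not the study's data or exact procedure.

```python
import numpy as np

# Synthetic "mammogram": dark background with one brighter square region.
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.03, (32, 32))             # dark "normal" tissue
img[8:16, 8:16] = rng.normal(0.8, 0.03, (8, 8))   # brighter "abnormal" patch

def kmeans_1d(values, k=2, iters=25):
    """Lloyd's k-means on scalar intensities, deterministically initialised
    at the intensity extremes (so no cluster starts empty)."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() for j in range(k)])
    return labels, centers

labels, centers = kmeans_1d(img.ravel())
mask = (labels == np.argmax(centers)).reshape(img.shape)  # brightest cluster
print(int(mask.sum()), np.round(np.sort(centers), 2))     # region area (px)
```

    The mask boundary is the segmented contour, and its pixel count (times the pixel area) is the kind of region-area measurement the abstract describes.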

  16. Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing.

    Science.gov (United States)

    Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang

    2018-02-15

    Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economical and enhanced automated optical guidance system, based on optimization research of a light-emitting diode (LED) light target and five automated image-processing bore-path deviation algorithms. The LED light target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, direction location algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, automate the image processing for computing and judging deflection. After multiple indoor experiments, the guidance system was applied in a hot-water pipeline installation project, with accuracy controlled within 2 mm over a 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system.
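    One plausible core step of such deflection detection is locating the bright LED target by intensity centroid and measuring its offset from the optical axis; the frame, blob, and centre convention below are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

# Synthetic camera frame with a single bright LED blob.
frame = np.zeros((120, 160))
frame[70:74, 95:99] = 1.0                # 4x4 "LED" spot

# Centroid of all pixels above an intensity threshold.
ys, xs = np.nonzero(frame > 0.5)
cy, cx = ys.mean(), xs.mean()

# Deviation of the blob from the frame centre, in pixels; a per-setup
# mm-per-pixel calibration factor would convert this to a bore-path offset.
dy = cy - (frame.shape[0] - 1) / 2
dx = cx - (frame.shape[1] - 1) / 2
print(dx, dy)
```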

  17. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Ricardo Andres Pizarro

    2016-12-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.

  18. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm.

    Science.gov (United States)

    Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S

    2016-01-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI yielding irreproducible results, from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed, by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
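    The supervised workflow described above can be sketched with a tiny linear SVM trained by sub-gradient descent on the hinge loss. The two-dimensional synthetic "quality features" and all hyperparameters are assumptions for illustration; the study's in-house features and SVM configuration are not reproduced here.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Minimal linear SVM: sub-gradient descent on the regularised hinge
    loss.  Labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:        # margin violated: push boundary
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # only shrink w (regularisation)
                w -= lr * lam * w
    return w, b

# Synthetic two-class "image quality features" (e.g. noise level, sharpness).
rng = np.random.default_rng(0)
good = rng.normal([2.0, 2.0], 0.3, (40, 2))    # pass-quality volumes
bad = rng.normal([-2.0, -2.0], 0.3, (40, 2))   # artifact-laden volumes
X = np.vstack([good, bad])
y = np.array([1] * 40 + [-1] * 40)

w, b = train_linear_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()
print(acc)
```

    In practice a kernel SVM and a held-out test split (as in the study) would replace this separable toy setup.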

  19. Automation of chromosomes analysis. Automatic system for image processing

    International Nuclear Information System (INIS)

    Le Go, R.; Cosnac, B. de; Spiwack, A.

    1975-01-01

    The A.S.T.I. is an automatic system for the fast conversational processing of all kinds of images (cells, chromosomes) converted to a numerical data set (120000 points, 16 grey levels stored in a MOS memory) through a fast D.O. analyzer. The system automatically performs the isolation of any individual image, whose area and weighted area are computed. These results are directly displayed on the command panel and can be transferred to a mini-computer for further computations. A bright spot allows parts of an image to be picked out and the results to be displayed. This study is particularly directed towards automatic karyotyping. [fr]

  20. Automated Structure Detection in HRTEM Images: An Example with Graphene

    DEFF Research Database (Denmark)

    Kling, Jens; Vestergaard, Jacob Schack; Dahl, Anders Bjorholm

    Graphene, as the forefather of 2D-materials, attracts much attention due to its extraordinary properties like transparency, flexibility and outstanding high conductivity, together with a thickness of only one atom. The properties seem to be dependent on the atomic structure of graphene...... of time making it difficult to resolve dynamic processes or unstable structures. Tools that assist to get the maximum of information out of recorded images are therefore greatly appreciated. In order to get the most accurate results out of the structure detection, we have optimized the imaging conditions...

  1. System and method for automated object detection in an image

    Energy Technology Data Exchange (ETDEWEB)

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated between in response to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.
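    As a toy stand-in for the co-occurrence computation named above, the sketch below estimates the joint probability of quantised gradient orientations for neighbouring pixel pairs; the pair offset, bin count, and magnitude threshold are assumptions, and this is not the patented model's kernel.

```python
import numpy as np

def edge_cooccurrence(img, offset=(0, 1), nbins=4, thresh=0.1):
    """Joint probability of quantised gradient orientations for pixel
    pairs separated by `offset`, restricted to strong-gradient pixels."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
    bins = np.minimum((ang / np.pi * nbins).astype(int), nbins - 1)
    H, W = img.shape
    dy, dx = offset
    a, b = bins[:H - dy, :W - dx], bins[dy:, dx:]    # paired pixels
    strong = (mag[:H - dy, :W - dx] > thresh) & (mag[dy:, dx:] > thresh)
    P = np.zeros((nbins, nbins))
    np.add.at(P, (a[strong], b[strong]), 1)          # accumulate pair counts
    return P / max(P.sum(), 1)

img = np.zeros((16, 16))
img[:, 8:] = 1.0                 # a vertical step edge
P = edge_cooccurrence(img)
print(P[0, 0])                   # all co-occurring edges share orientation 0
```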

  2. Automated interpretation of PET/CT images in patients with lung cancer

    DEFF Research Database (Denmark)

    Gutte, Henrik; Jakobsson, David; Olofsson, Fredrik

    2007-01-01

    PURPOSE: To develop a completely automated method based on image processing techniques and artificial neural networks for the interpretation of combined [(18)F]fluorodeoxyglucose (FDG) positron emission tomography (PET) and computed tomography (CT) images for the diagnosis and staging of lung cancer. METHODS: A total of 87 patients who underwent PET/CT examinations due to suspected lung cancer comprised the training group. The test group consisted of PET/CT images from 49 patients suspected with lung cancer. The consensus interpretations by two experienced physicians were used as the 'gold...... method measured as the area under the receiver operating characteristic curve, was 0.97 in the test group, with an accuracy of 92%. The sensitivity was 86% at a specificity of 100%. CONCLUSIONS: A completely automated method using artificial neural networks can be used to detect lung cancer......

  3. Automated determination of size and morphology information from soot transmission electron microscope (TEM)-generated images

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Cheng; Chan, Qing N., E-mail: qing.chan@unsw.edu.au; Zhang, Renlin; Kook, Sanghoon; Hawkes, Evatt R.; Yeoh, Guan H. [UNSW, School of Mechanical and Manufacturing Engineering (Australia); Medwell, Paul R. [The University of Adelaide, Centre for Energy Technology (Australia)

    2016-05-15

    The thermophoretic sampling of particulates from hot media, coupled with transmission electron microscope (TEM) imaging, is a combined approach that is widely used to derive morphological information. The identification and the measurement of the particulates, however, can be complex when the TEM images are of low contrast, noisy, and have non-uniform background signal level. The image processing method can also be challenging and time consuming, when the samples collected have large variability in shape and size, or have some degree of overlapping. In this work, a three-stage image processing sequence is presented to facilitate time-efficient automated identification and measurement of particulates from the TEM grids. The proposed processing sequence is first applied to soot samples that were thermophoretically sampled from a laminar non-premixed ethylene-air flame. The parameter values that are required to be set to facilitate the automated process are identified, and sensitivity of the results to these parameters is assessed. The same analysis process is also applied to soot samples that were acquired from an externally irradiated laminar non-premixed ethylene-air flame, which have different geometrical characteristics, to assess the morphological dependence of the proposed image processing sequence. Using the optimized parameter values, statistical assessments of the automated results reveal that the largest discrepancies that are associated with the estimated values of primary particle diameter, fractal dimension, and prefactor values of the aggregates for the tested cases, are approximately 3, 1, and 10 %, respectively, when compared with the manual measurements.
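    The measurement stage of such a pipeline can be sketched as connected-component labelling of an already-binarised frame, reporting per-particle areas. The tiny grid and the 4-connectivity choice below are illustrative assumptions, not the paper's three-stage sequence.

```python
from collections import deque

# Toy binarised TEM frame: 1 = particle pixel.
grid = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 1, 0, 0, 0],
]

def label_areas(grid):
    """Label 4-connected components with BFS and return their pixel areas."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] and not seen[sy][sx]:
                seen[sy][sx] = True
                q, area = deque([(sy, sx)]), 0
                while q:
                    y, x = q.popleft()
                    area += 1
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                areas.append(area)
    return sorted(areas)

print(label_areas(grid))    # per-particle areas, smallest first
```

    From each area, an equivalent primary-particle diameter follows as sqrt(4*area/pi) times the pixel size; aggregate statistics such as fractal dimension need the labelled aggregates as input.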

  4. Automated 3-D Detection of Dendritic Spines from In Vivo Two-Photon Image Stacks.

    Science.gov (United States)

    Singh, P K; Hernandez-Herrera, P; Labate, D; Papadakis, M

    2017-10-01

    Despite the significant advances in the development of automated image analysis algorithms for the detection and extraction of neuronal structures, current software tools still have numerous limitations when it comes to the detection and analysis of dendritic spines. The problem is especially challenging in in vivo imaging, where the difficulty of extracting morphometric properties of spines is compounded by the lower image resolution and contrast levels native to two-photon laser microscopy. To address this challenge, we introduce a new computational framework for the automated detection and quantitative analysis of dendritic spines in in vivo multi-photon imaging. This framework includes: (i) a novel preprocessing algorithm that enhances spines so that they are included in the binarized volume produced during the segmentation of foreground from background; (ii) the mathematical foundation of this algorithm; and (iii) an algorithm for detecting spine locations in reference to the centerline trace and separating them from the branches to which they are attached. This framework enables the computation of a wide range of geometric features, such as spine length, spatial distribution and spine volume, in a high-throughput fashion. We illustrate our approach for the automated extraction of dendritic spine features in time-series multi-photon images of layer 5 cortical excitatory neurons from the mouse visual cortex.

  5. Automated determination of size and morphology information from soot transmission electron microscope (TEM)-generated images

    International Nuclear Information System (INIS)

    Wang, Cheng; Chan, Qing N.; Zhang, Renlin; Kook, Sanghoon; Hawkes, Evatt R.; Yeoh, Guan H.; Medwell, Paul R.

    2016-01-01

    The thermophoretic sampling of particulates from hot media, coupled with transmission electron microscope (TEM) imaging, is a combined approach that is widely used to derive morphological information. The identification and the measurement of the particulates, however, can be complex when the TEM images are of low contrast, noisy, and have non-uniform background signal level. The image processing method can also be challenging and time consuming, when the samples collected have large variability in shape and size, or have some degree of overlapping. In this work, a three-stage image processing sequence is presented to facilitate time-efficient automated identification and measurement of particulates from the TEM grids. The proposed processing sequence is first applied to soot samples that were thermophoretically sampled from a laminar non-premixed ethylene-air flame. The parameter values that are required to be set to facilitate the automated process are identified, and sensitivity of the results to these parameters is assessed. The same analysis process is also applied to soot samples that were acquired from an externally irradiated laminar non-premixed ethylene-air flame, which have different geometrical characteristics, to assess the morphological dependence of the proposed image processing sequence. Using the optimized parameter values, statistical assessments of the automated results reveal that the largest discrepancies that are associated with the estimated values of primary particle diameter, fractal dimension, and prefactor values of the aggregates for the tested cases, are approximately 3, 1, and 10 %, respectively, when compared with the manual measurements.

  6. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    Science.gov (United States)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

    The results of a master's thesis project on computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of associated magnetic field lines. The project addresses a pattern recognition problem in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parametric space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity as possible candidates for digital loop detection schemes. We have developed synthesized images of the coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information in the identification process. Results from both synthesized and solar images will be presented.
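    Technique (2), the Hough transform, can be sketched for straight lines — each edge point votes for every (theta, rho) line through it, and peaks in the accumulator are detected lines. Curved loops would need a richer parameterisation; the point set and bin counts here are illustrative.

```python
import numpy as np

def hough_lines(points, shape, n_theta=18):
    """Accumulate votes in (theta, rho) space for (y, x) edge points, using
    the normal form rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*shape)))         # max possible |rho|
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    for y, x in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_theta), rhos + diag] += 1  # one vote per theta bin
    return acc, thetas, diag

pts = [(5, x) for x in range(20)]                 # points on the line y = 5
acc, thetas, diag = hough_lines(pts, (32, 32))
t, r = np.unravel_index(np.argmax(acc), acc.shape)
print(round(np.degrees(thetas[t])), r - diag)     # strongest (theta, rho)
```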

  7. Automated identification of retained surgical items in radiological images

    Science.gov (United States)

    Agam, Gady; Gan, Lin; Moric, Mario; Gluncic, Vicko

    2015-03-01

    Retained surgical items (RSIs) in patients are a major operating room (OR) patient safety concern. An RSI is any surgical tool, sponge, needle or other item inadvertently left in a patient's body during the course of surgery. If left undetected, RSIs may lead to serious negative health consequences such as sepsis, internal bleeding, and even death. To help physicians efficiently and effectively detect RSIs, we are developing computer-aided detection (CADe) software for X-ray (XR) image analysis, utilizing large amounts of currently available image data to produce a clinically effective RSI detection system. Physician analysis of XRs for the purpose of RSI detection is a relatively lengthy process that may take up to 45 minutes to complete. It is also error prone, due to the relatively low acuity of the human eye for RSIs in XR images. The system we are developing is based on computer vision and machine learning algorithms. We address the problem of low incidence by proposing synthesis algorithms. The CADe software we are developing may be integrated into a picture archiving and communication system (PACS), be implemented as a stand-alone software application, or be integrated into portable XR machine software through application programming interfaces. Preliminary experimental results on actual XR images demonstrate the effectiveness of the proposed approach.

  8. Transportation informatics : advanced image processing techniques automated pavement distress evaluation.

    Science.gov (United States)

    2010-01-01

    The current project, funded by MIOH-UTC for the period 1/1/2009- 4/30/2010, is concerned : with the development of the framework for a transportation facility inspection system using : advanced image processing techniques. The focus of this study is ...

  9. Automation of the method gamma of comparison dosimetry images

    International Nuclear Information System (INIS)

    Moreno Reyes, J. C.; Macias Jaen, J.; Arrans Lara, R.

    2013-01-01

    The objective of this work was the development of the JJGAMMA analysis application, software that performs this comparison task systematically, minimizing specialist intervention and therefore the variability due to the observer. Both benefits allow the comparison of images to be done in practice with the required frequency and objectivity. (Author)
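    The gamma method referenced in the title combines a dose-difference criterion with a distance-to-agreement (DTA) criterion: a point passes if some reference point lies within the combined ellipsoid, i.e. min sqrt((dr/DTA)^2 + (dD/(dd*Dmax))^2) <= 1. The brute-force sketch below uses standard 3 mm / 3% criteria on synthetic dose grids; it illustrates the gamma formalism, not the JJGAMMA implementation.

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing=1.0, dta=3.0, dd=0.03):
    """Global 2-D gamma analysis, brute force over all reference points.
    `spacing` is the pixel pitch in mm, `dta` the DTA criterion in mm, and
    `dd` the dose criterion as a fraction of the reference maximum."""
    ys, xs = np.mgrid[:ref.shape[0], :ref.shape[1]]
    ry, rx = ys.ravel() * spacing, xs.ravel() * spacing
    rdose = ref.ravel()
    scale = dd * ref.max()                        # global dose criterion
    passed = 0
    for (y, x), d in np.ndenumerate(evl):
        dist2 = ((ry - y * spacing) ** 2 + (rx - x * spacing) ** 2) / dta ** 2
        dose2 = ((rdose - d) / scale) ** 2
        if np.min(dist2 + dose2) <= 1.0:          # gamma <= 1 at this point
            passed += 1
    return passed / evl.size

# Synthetic Gaussian "dose" and a 1 mm shifted copy (well inside 3 mm DTA).
ref = np.fromfunction(
    lambda y, x: np.exp(-((x - 8.0) ** 2 + (y - 8.0) ** 2) / 20.0), (17, 17))
shifted = np.roll(ref, 1, axis=1)
print(gamma_pass_rate(ref, shifted))
```

    Production tools interpolate the reference grid and restrict the search radius instead of brute-forcing every point, but the pass/fail criterion is the same.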

  10. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01


  11. Unsupervised fully automated inline analysis of global left ventricular function in CINE MR imaging.

    Science.gov (United States)

    Theisen, Daniel; Sandner, Torleif A; Bauner, Kerstin; Hayes, Carmel; Rist, Carsten; Reiser, Maximilian F; Wintersperger, Bernd J

    2009-08-01

    To implement and evaluate the accuracy of unsupervised, fully automated inline analysis of global ventricular function and myocardial mass (MM), and to compare automated with manual segmentation in patients with cardiac disorders. In 50 patients, cine imaging of the left ventricle was performed with an accelerated retrogated steady-state free precession sequence (GRAPPA; R = 2) on a 1.5 Tesla whole-body scanner (MAGNETOM Avanto, Siemens Healthcare, Germany). A spatial resolution of 1.4 x 1.9 mm was achieved with a slice thickness of 8 mm and a temporal resolution of 42 milliseconds. Ventricular coverage was based on 9 to 12 short-axis slices extending from the annulus of the mitral valve to the apex, with 2 mm gaps. Fully automated segmentation and contouring were performed instantaneously after image acquisition. In addition to automated processing, cine data sets were also manually segmented using semi-automated postprocessing software. Results of both methods were compared with regard to end-diastolic volume (EDV), end-systolic volume (ESV), ejection fraction (EF), and MM. A subgroup analysis was performed in patients with normal (> or =55%) and reduced EF (<55%) based on the results of the manual analysis. Thirty-two percent of patients had a reduced left ventricular EF of <55%. Volumetric results of the automated inline analysis for EDV (r = 0.96), ESV (r = 0.95), EF (r = 0.89), and MM (r = 0.96) showed high correlation with the results of manual segmentation (all P < 0.001). Head-to-head comparison did not show significant differences between automated and manual evaluation for EDV (153.6 +/- 52.7 mL vs. 149.1 +/- 48.3 mL; P = 0.05), ESV (61.6 +/- 31.0 mL vs. 64.1 +/- 31.7 mL; P = 0.08), and EF (58.0 +/- 11.6% vs. 58.6 +/- 11.6%; P = 0.5). However, differences were significant for MM (150.0 +/- 61.3 g vs. 142.4 +/- 59.0 g; P < 0.01). The standard error was 15.6 (EDV), 9.7 (ESV), 5.0 (EF), and 17.1 (mass). The mean time for manual analysis was 15 minutes
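    Once each short-axis slice is contoured, the global quantities above follow from summed slice areas (a modified Simpson's rule) and EF = (EDV - ESV)/EDV. The slice areas in this sketch are made-up numbers; only the slice geometry (8 mm thickness, 2 mm gap) is taken from the abstract.

```python
SLICE_THICKNESS_MM = 8.0
GAP_MM = 2.0

def ventricular_volume_ml(areas_mm2):
    """Disc summation: each contoured slice contributes
    area x (thickness + gap); mm^3 are converted to mL."""
    return sum(a * (SLICE_THICKNESS_MM + GAP_MM) for a in areas_mm2) / 1000.0

# Illustrative per-slice cavity areas (mm^2), base to apex.
ed_areas = [900, 1400, 1700, 1800, 1750, 1600, 1300, 1000, 600, 300]
es_areas = [400, 650, 800, 850, 820, 700, 550, 400, 250, 120]

edv = ventricular_volume_ml(ed_areas)     # end-diastolic volume, mL
esv = ventricular_volume_ml(es_areas)     # end-systolic volume, mL
ef = 100.0 * (edv - esv) / edv            # ejection fraction, %
print(edv, esv, round(ef, 1))
```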

  12. Extending and applying active appearance models for automated, high precision segmentation in different image modalities

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Fisker, Rune; Ersbøll, Bjarne Kjær

    2001-01-01

    ...... object class description, which can be employed to rapidly search images for new object instances. The proposed extensions concern enhanced shape representation, handling of homogeneous and heterogeneous textures, refinement optimization using Simulated Annealing and robust statistics. Finally, an initialization scheme is designed, thus making the usage of AAMs fully automated. Using these extensions it is demonstrated that AAMs can segment bone structures in radiographs, pork chops in perspective images and the left ventricle in cardiovascular magnetic resonance images in a robust, fast and accurate......

  13. Automated detection of new impact sites on Martian surface from HiRISE images

    Science.gov (United States)

    Xin, Xin; Di, Kaichang; Wang, Yexin; Wan, Wenhui; Yue, Zongyu

    2017-10-01

    In this study, an automated method for detecting new impact sites on Mars from single images is presented. It first extracts dark areas in the full high-resolution image, then detects new impact craters within the dark areas using a cascade classifier that combines local binary pattern features and Haar-like features, trained by an AdaBoost machine learning algorithm. Experimental results using 100 HiRISE images show that the overall detection rate of the proposed method is 84.5%, with a true positive rate of 86.9%. The detection rate and true positive rate in the flat regions are 93.0% and 91.5%, respectively.
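    The local binary pattern feature named above can be sketched in a few lines: each of the eight neighbours of a pixel contributes one bit according to whether it is at least as bright as the centre. The 3x3 patch is synthetic and the clockwise bit ordering is one common convention, not necessarily the study's.

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour LBP code for the centre pixel of a 3x3 patch: each
    neighbour >= centre sets one bit, clockwise from the top-left."""
    c = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (y, x) in enumerate(order):
        if patch[y, x] >= c:
            code |= 1 << bit
    return code

patch = np.array([[5, 9, 1],
                  [3, 6, 7],
                  [8, 2, 4]])
print(lbp_code(patch))
```

    A histogram of these codes over a detection window is the texture descriptor a cascade classifier would consume alongside Haar-like features.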

  14. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions

    Directory of Open Access Journals (Sweden)

    Alfonso Baldi

    2010-03-01

    Dermoscopy (dermatoscopy, epiluminescence microscopy) is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs), allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis). This diagnostic tool permits the recognition of morphologic structures not visible to the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval (CBIR) systems.

  15. Automated segmentation of pigmented skin lesions in multispectral imaging

    International Nuclear Information System (INIS)

    Carrara, Mauro; Tomatis, Stefano; Bono, Aldo; Bartoli, Cesare; Moglia, Daniele; Lualdi, Manuela; Colombo, Ambrogio; Santinami, Mario; Marchesini, Renato

    2005-01-01

    The aim of this study was to develop an algorithm for the automatic segmentation of multispectral images of pigmented skin lesions. The study involved 1700 patients with 1856 cutaneous pigmented lesions, which were analysed in vivo by a novel spectrophotometric system, before excision. The system is able to acquire a set of 15 different multispectral images at equally spaced wavelengths between 483 and 951 nm. An original segmentation algorithm was developed and applied to the whole set of lesions and was able to automatically contour them all. The obtained lesion boundaries were shown to two expert clinicians, who, independently, rejected 54 of them. The 97.1% contour accuracy indicates that the developed algorithm could be a helpful and effective instrument for the automatic segmentation of skin pigmented lesions. (note)

  16. Scoring of radiation-induced micronuclei in cytokinesis-blocked human lymphocytes by automated image analysis

    International Nuclear Information System (INIS)

    Verhaegen, F.; Seuntjens, J.; Thierens, H.

    1994-01-01

    The micronucleus assay in human lymphocytes is, at present, frequently used to assess chromosomal damage caused by ionizing radiation or mutagens. Manual scoring of micronuclei (MN) by trained personnel is very time-consuming, tiring work, and the results depend on subjective interpretation of scoring criteria. More objective scoring can be accomplished only if the test can be automated. Furthermore, an automated system allows scoring of large numbers of cells, thereby increasing the statistical significance of the results. This is of special importance for screening programs for low doses of chromosome-damaging agents. In this paper, the first results of our effort to automate the micronucleus assay with an image-analysis system are presented. The method we used is described in detail, and the results are compared to those of other groups. Our system is able to detect 88% of the binucleated lymphocytes on the slides. The procedure consists of a fully automated localization of binucleated cells and counting of the MN within these cells, followed by a simple and fast manual operation in which the false positives are removed. Preliminary measurements for blood samples irradiated with a dose of 1 Gy X-rays indicate that the automated system can find 89% ± 12% of the micronuclei within the binucleated cells compared to a manual screening. 18 refs., 8 figs., 1 tab

  17. AUTOMATED INSPECTION OF POWER LINE CORRIDORS TO MEASURE VEGETATION UNDERCUT USING UAV-BASED IMAGES

    Directory of Open Access Journals (Sweden)

    M. Maurer

    2017-08-01

    Power line corridor inspection is a time-consuming task that is performed mostly manually. As the development of UAVs has made huge progress in recent years, and photogrammetric computer vision systems have become well established, it is time to further automate inspection tasks. In this paper we present an automated processing pipeline to inspect vegetation undercuts of power line corridors. For this, the area of inspection is reconstructed, geo-referenced, semantically segmented, and inter-class distance measurements are calculated. The presented pipeline performs an automated selection of the proper 3D reconstruction method for wiry objects (power lines) on the one hand, and solid objects (the surroundings) on the other. The automated selection is realized by performing pixel-wise semantic segmentation of the input images using a Fully Convolutional Neural Network. Due to the geo-referenced semantic 3D reconstructions, a documentation of areas where maintenance work has to be performed is inherently included in the distance measurements and can be extracted easily. We evaluate the influence of the semantic segmentation on the 3D reconstruction and show that the automated semantic separation of the 3D reconstruction routine into wiry and dense objects improves the quality of the vegetation undercut inspection. We show the generalization of the semantic segmentation to datasets acquired using different acquisition routines and in different seasons.

  18. Automation of disbond detection in aircraft fuselage through thermal image processing

    Science.gov (United States)

    Prabhu, D. R.; Winfree, W. P.

    1992-01-01

    A procedure for interpreting thermal images obtained during the nondestructive evaluation of aircraft bonded joints is presented. The procedure operates on time-derivative thermal images and produces a disbond image with disbonds highlighted. The size of the 'black clusters' in the output disbond image is a quantitative measure of disbond size. The procedure is illustrated using simulation data as well as data obtained through experimental testing of fabricated samples and aircraft panels. Good results are obtained and, except in pathological cases, 'false calls' in the cases studied appeared only as noise in the output disbond image, which was easily filtered out. The thermal detection technique, coupled with an automated image interpretation capability, will be a very fast and effective method for inspecting bonded joints in an aircraft structure.

  19. Automated gas bubble imaging at sea floor - a new method of in situ gas flux quantification

    Science.gov (United States)

    Thomanek, K.; Zielinski, O.; Sahling, H.; Bohrmann, G.

    2010-06-01

    Photo-optical systems are common in marine sciences and have been extensively used in coastal and deep-sea research. However, due to technical limitations in the past, photo images had to be processed manually or semi-automatically. Recent advances in technology have rapidly improved image recording, storage and processing capabilities, which are used in a new concept of automated in situ gas quantification by photo-optical detection. The design of an in situ high-speed image acquisition and automated data processing system is reported ("Bubblemeter"). New strategies have been followed with regard to back-light illumination, bubble extraction, automated image processing and data management. This paper presents the design of the novel method, its validation procedures and calibration experiments. The system will be positioned and recovered from the sea floor using a remotely operated vehicle (ROV). It is able to measure bubble flux rates up to 10 L/min with a maximum error of 33% under worst-case conditions. The Bubblemeter has been successfully deployed at a water depth of 1023 m at the Makran accretionary prism offshore Pakistan during a research expedition with R/V Meteor in November 2007.

  20. An image processing framework for automated analysis of swimming behavior in tadpoles with vestibular alterations

    Science.gov (United States)

    Zarei, Kasra; Fritzsch, Bernd; Buchholz, James H. J.

    2017-03-01

    Microgravity, as experienced during prolonged space flight, presents a problem for space exploration. Animal models, specifically tadpoles with altered connections of the vestibular ear, allow the examination of the effects of microgravity and can be quantitatively monitored through tadpole swimming behavior. We describe an image analysis framework for performing automated quantification of tadpole swimming behavior. Speckle-reducing anisotropic diffusion is used to smooth tadpole image signals by diffusing noise while retaining edges. A narrow-band level set approach is used for sharp tracking of the tadpole body. Using the level set method for interface tracking provides the inherent advantages of a level-set-based image segmentation algorithm (active contouring). Active contour segmentation is followed by two-dimensional skeletonization, which allows the automated quantification of tadpole deflection angles and, subsequently, tadpole escape (or C-start) response times. The image analysis methodology was evaluated by comparing the automated quantifications of deflection angles to manual assessments (obtained using a standard grading scheme), and produced a high correlation (r2 = 0.99), indicating high reliability and accuracy of the proposed method. The methods presented form an important element of objective quantification of the escape response of the tadpole vestibular system to mechanical and biochemical manipulations, and can ultimately contribute to a better understanding of the effects of altered gravity perception on humans.

  1. Automated identification of Monogeneans using digital image processing and K-nearest neighbour approaches.

    Science.gov (United States)

    Yousef Kalafi, Elham; Tan, Wooi Boon; Town, Christopher; Dhillon, Sarinder Kaur

    2016-12-22

    Monogeneans are flatworms (Platyhelminthes) that are primarily found on the gills and skin of fishes. Monogenean parasites have attachment appendages at their haptoral regions that help them move about the body surface and feed on skin and gill debris. Haptoral attachment organs consist of sclerotized hard parts such as hooks, anchors and marginal hooks. Monogenean species are differentiated based on the morphological characters of their haptoral bars, anchors, marginal hooks and reproductive parts (male and female copulatory organs), as well as soft anatomical parts. The complex structure of these diagnostic organs, and their overlap in microscopic digital images, are impediments to developing a fully automated identification system for monogeneans (LNCS 7666:256-263, 2012), (ISDA; 457-462, 2011), (J Zoolog Syst Evol Res 52(2):95-99, 2013). In this study, images of hard parts of the haptoral organs, such as bars and anchors, were used to develop a fully automated technique for monogenean species identification by implementing image processing techniques and machine learning methods. Images of four monogenean species, namely Sinodiplectanotrema malayanus, Trianchoratus pahangensis, Metahaliotrema mizellei and Metahaliotrema sp. (undescribed), were used to develop the automated identification technique. K-nearest neighbour (KNN) classification was applied to identify the monogenean specimens based on the extracted features. Half of the dataset was used for training and the other half for testing in the system evaluation. Our approach demonstrated an overall classification accuracy of 90%. Leave-one-out (LOO) cross-validation was used to validate the system, yielding an accuracy of 91.25%. The methods presented in this study facilitate fast and accurate fully automated classification of monogeneans at the species level. In future studies more classes will be included in the model, the time to capture the monogenean images will be reduced and improvements in
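
    As a hedged illustration of the classification scheme described (KNN with a 50/50 train/test split plus leave-one-out validation), the following sketch uses synthetic feature vectors in place of the real haptoral shape features:

```python
# Sketch of KNN species classification with a 50/50 split and LOO validation.
# The four synthetic clusters stand in for four species' shape features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split, LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# 4 "species" x 20 specimens, 10 shape features each (well-separated clusters)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 10)) for c in range(4)])
y = np.repeat(np.arange(4), 20)

# 50% training / 50% testing, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          stratify=y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
test_acc = knn.score(X_te, y_te)

# Leave-one-out cross-validation over the whole dataset
loo_acc = cross_val_score(KNeighborsClassifier(n_neighbors=3),
                          X, y, cv=LeaveOneOut()).mean()
print(test_acc, loo_acc)
```

    The real pipeline would precede this with segmentation and shape-feature extraction from the anchor and bar images.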

  2. Automated and effective content-based image retrieval for digital mammography.

    Science.gov (United States)

    Singh, Vibhav Prakash; Srivastava, Subodh; Srivastava, Rajeev

    2018-01-01

    Nowadays, a huge number of mammograms is generated in hospitals for the diagnosis of breast cancer. Content-based image retrieval (CBIR) can contribute to more reliable diagnosis by classifying the query mammogram and retrieving similar mammograms already annotated with diagnostic descriptions and treatment results. Since labels, artifacts, and pectoral muscles present in mammograms can bias the retrieval procedures, automated detection and exclusion of these image noise patterns and/or non-breast regions is an essential pre-processing step. In this study, an efficient and automated CBIR system for mammograms was developed and tested. First, pre-processing steps including automatic label and artifact suppression, automatic pectoral muscle removal, and image enhancement using the adaptive median filter were applied. Next, the pre-processed images were segmented using a co-occurrence-thresholds-based seeded region growing algorithm. Furthermore, a set of image features, including shape, histogram-based statistical, Gabor, wavelet, and Gray Level Co-occurrence Matrix (GLCM) features, was computed from the segmented region. In order to select the optimal features, a minimum redundancy maximum relevance (mRMR) feature selection method was then applied. Finally, similar images were retrieved using the Euclidean distance similarity measure. Comparative experiments conducted on the benchmark Mammographic Image Analysis Society (MIAS) database confirmed the effectiveness of the proposed work, with an average precision of 72% and 61.30% for the normal and abnormal classes of mammograms, respectively.
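
    The final retrieval step lends itself to a short sketch: after feature selection, database mammograms are ranked by Euclidean distance to the query's feature vector. The feature vectors below are synthetic placeholders, not real mRMR-selected features:

```python
# Sketch: rank database images by Euclidean distance to a query feature vector.
import numpy as np

def retrieve(query_feats, db_feats, k=5):
    """Return indices of the k nearest database images to the query."""
    d = np.linalg.norm(db_feats - query_feats, axis=1)  # Euclidean distances
    return np.argsort(d)[:k]

rng = np.random.default_rng(1)
db = rng.random((100, 16))              # 100 images, 16 selected features
query = db[7] + 0.01 * rng.random(16)   # a query nearly identical to image 7
print(retrieve(query, db, k=3))
```

    In the described system the retrieved neighbours carry diagnostic annotations, which is what makes the ranking clinically useful.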

  3. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    International Nuclear Information System (INIS)

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel-beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum-resolution confocal panorama images of several Gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  4. Automated detection of diabetic retinopathy in retinal images

    Directory of Open Access Journals (Sweden)

    Carmen Valverde

    2016-01-01

    Full Text Available Diabetic retinopathy (DR) is a disease with an increasing prevalence and the main cause of blindness among the working-age population. The risk of severe vision loss can be significantly reduced by timely diagnosis and treatment. Systematic screening for DR has been identified as a cost-effective way to save health service resources. Automatic retinal image analysis is emerging as an important screening tool for early DR detection, which can reduce the workload associated with manual grading as well as save diagnosis costs and time. Many research efforts in recent years have been devoted to developing automatic tools to help in the detection and evaluation of DR lesions. However, there is a large variability in the databases and evaluation criteria used in the literature, which hampers a direct comparison of the different studies. This work is aimed at summarizing the results of the available algorithms for the detection and classification of DR pathology. A detailed literature search was conducted using PubMed. Selected relevant studies from the last 10 years were scrutinized and included in the review. Furthermore, we give an overview of the available commercial software for automatic retinal image analysis.

  5. Automated extraction of metastatic liver cancer regions from abdominal contrast CT images

    International Nuclear Information System (INIS)

    Yamakawa, Junki; Matsubara, Hiroaki; Kimura, Shouta; Hasegawa, Junichi; Shinozaki, Kenji; Nawano, Shigeru

    2010-01-01

    In this paper, automated extraction of metastatic liver cancer regions from abdominal contrast X-ray CT images is investigated. Because cases of metastatic liver cancer have increased, even in Japan, due to the recent Europeanization and/or Americanization of Japanese eating habits, the development of a system for their computer-aided diagnosis is strongly expected. Our automated extraction procedure consists of the following four steps: liver region extraction, density transformation for enhancement of cancer regions, segmentation for obtaining candidate cancer regions, and reduction of false positives by shape features. Parameter values used in each step of the procedure are decided based on density and shape features of typical metastatic liver cancers. In experiments using 20 practical cases of metastatic liver tumors, it is shown that 56% of true cancers can be detected successfully from CT images by the proposed procedure. (author)

  6. A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image

    Directory of Open Access Journals (Sweden)

    Phlypo Ronald

    2010-01-01

    Full Text Available We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.

  7. Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.

    Science.gov (United States)

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently still little studied. We studied the extraction of prostatic lumina and automated recognition of calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of the prostatic calculus. The SVM classifier showed an average time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi.
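
    A minimal sketch of the PCA-SVM idea, assuming synthetic texture-feature vectors rather than the study's histology-derived features: the features are projected onto a few principal components, then an SVM separates the two classes.

```python
# Sketch: PCA for dimensionality reduction followed by an SVM classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Two synthetic "texture feature" classes: no calculus (0) vs calculus (1)
X = np.vstack([rng.normal(0.0, 1.0, (60, 24)),
               rng.normal(1.5, 1.0, (60, 24))])
y = np.repeat([0, 1], 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(PCA(n_components=5), SVC(kernel="rbf")).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(acc)
```

    The pipeline keeps PCA fitted only on the training split, which avoids leaking test statistics into the projection.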

  8. Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    Science.gov (United States)

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently still little studied. We studied the extraction of prostatic lumina and automated recognition of calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of the prostatic calculus. The SVM classifier showed an average time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi. PMID:21461364

  9. Automated Image-Based Procedures for Adaptive Radiotherapy

    DEFF Research Database (Denmark)

    Bjerre, Troels

    Fractionated radiotherapy for cancer treatment is a field of constant innovation. Developments in dose delivery techniques have made it possible to precisely direct ionizing radiation at complicated targets. In order to further increase tumour control probability (TCP) and decrease normal...... to encourage bone rigidity and local tissue volume change only in the gross tumour volume and the lungs. This is highly relevant in adaptive radiotherapy when modelling significant tumour volume changes. - It is described how cone beam CT reconstruction can be modelled as a deformation of a planning CT scan...... be employed for contour propagation in adaptive radiotherapy. - MRI-radiotherapy devices have the potential to offer near real-time intrafraction imaging without any additional ionising radiation. It is detailed how the use of multiple, orthogonal slices can form the basis for reliable 3D soft tissue tracking....

  10. The impact of air pollution on the level of micronuclei measured by automated image analysis

    Czech Academy of Sciences Publication Activity Database

    Rössnerová, Andrea; Špátová, Milada; Rossner, P.; Solanský, I.; Šrám, Radim

    2009-01-01

    Roč. 669, 1-2 (2009), s. 42-47 ISSN 0027-5107 R&D Projects: GA AV ČR 1QS500390506; GA MŠk 2B06088; GA MŠk 2B08005 Institutional research plan: CEZ:AV0Z50390512 Keywords : micronuclei * binucleated cells * automated image analysis Subject RIV: DN - Health Impact of the Environment Quality Impact factor: 3.556, year: 2009

  11. OpenComet: An automated tool for comet assay image analysis

    Directory of Open Access Journals (Sweden)

    Benjamin M. Gyori

    2014-01-01

    Full Text Available Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.
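
    The head-segmentation idea, locating the comet head from the image intensity profile, can be sketched as follows. The synthetic comet image and the 50% threshold are illustrative assumptions, not OpenComet's actual parameters:

```python
# Sketch: find the comet head from the column-wise intensity profile.
# The head is taken as the profile peak; the head ends where intensity
# first drops below a fraction of the peak value.
import numpy as np

def head_extent(img, frac=0.5):
    """Return (peak_col, head_end_col) from the column intensity profile."""
    profile = img.sum(axis=0)
    peak = int(np.argmax(profile))
    below = np.nonzero(profile[peak:] < frac * profile[peak])[0]
    end = peak + int(below[0]) if below.size else img.shape[1]
    return peak, end

# Synthetic comet: a bright Gaussian head at column 40 with a decaying tail
x = np.arange(200)
profile_true = np.exp(-((x - 40) ** 2) / 50) + 0.3 * np.exp(-(x - 40).clip(0) / 60)
img = np.tile(profile_true, (50, 1))
peak, end = head_extent(img)
print(peak, end)
```

    OpenComet's real procedure additionally finds comet candidates by geometric shape attributes before analysing each candidate's profile.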

  12. Development of a methodology for automated assessment of the quality of digitized images in mammography

    International Nuclear Information System (INIS)

    Santana, Priscila do Carmo

    2010-01-01

    The process of evaluating the quality of radiographic images in general, and mammography in particular, can be much more accurate, practical and fast with the help of computer analysis tools. The purpose of this study is to develop a computational methodology to automate the process of assessing the quality of mammography images through digital image processing (DIP) techniques, using an existing image processing environment (ImageJ). With the application of DIP techniques it was possible to extract geometric and radiometric characteristics of the evaluated images. The evaluated parameters include spatial resolution, high-contrast detail, low-contrast threshold, linear detail of low contrast, tumor masses, contrast ratio and background optical density. The results obtained by this method were compared with the results of the visual evaluations performed by the Health Surveillance of Minas Gerais. Through this comparison it was possible to demonstrate that the automated methodology is a promising alternative for the reduction or elimination of the subjectivity present in the visual assessment methodology currently in use. (author)

  13. Automated seed detection and three-dimensional reconstruction. I. Seed localization from fluoroscopic images or radiographs

    International Nuclear Information System (INIS)

    Tubic, Dragan; Zaccarin, Andre; Pouliot, Jean; Beaulieu, Luc

    2001-01-01

    An automated procedure for the detection of the position and orientation of radioactive seeds on fluoroscopic images or scanned radiographs is presented. The extracted positions of seed centers and the orientations are used for three-dimensional reconstruction of permanent prostate implants. The extraction procedure requires several steps: correction of image intensifier distortions, normalization, background removal, automatic threshold selection, thresholding, and finally, moment analysis and classification of the connected components. The algorithm was tested on 75 fluoroscopic images. The results show that, on average, 92% of the seeds are detected automatically. The orientation is found with an error smaller than 5 deg. for 75% of the seeds. The orientation of overlapping seeds (10%) should be considered an estimate at best. The image processing procedure can also be used for seed or catheter detection in CT images, with minor modifications.
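
    The thresholding, connected-component and moment-analysis steps above can be sketched as follows, with scipy.ndimage standing in for the authors' implementation and a synthetic binary seed mask in place of a real fluoroscopic image:

```python
# Sketch: label connected components in a thresholded image and estimate each
# component's orientation from second-order central image moments.
import numpy as np
from scipy import ndimage

def seed_orientations(binary):
    """Return (n_seeds, [orientation angle in degrees per seed])."""
    labels, n = ndimage.label(binary)
    angles = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        xs = xs - xs.mean()
        ys = ys - ys.mean()
        mu20, mu02, mu11 = (xs**2).mean(), (ys**2).mean(), (xs * ys).mean()
        # principal-axis angle from the second-order central moments
        angles.append(np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02)))
    return n, angles

img = np.zeros((60, 60), bool)
img[10:13, 5:25] = True     # horizontal seed  -> ~0 degrees
img[30:50, 40:43] = True    # vertical seed    -> ~90 degrees
n, angles = seed_orientations(img)
print(n, angles)
```

    The real pipeline would first apply distortion correction, background removal and automatic threshold selection before this labelling step.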

  14. Automated Region of Interest Retrieval of Metallographic Images for Quality Classification in Industry

    Directory of Open Access Journals (Sweden)

    Petr Kotas

    2012-01-01

    Full Text Available The aim of this research is the development and testing of new methods to classify the quality of metallographic samples of steels with high added value (for example, grade X70 according to API). In this paper, we address the development of methods to classify the quality of slab sample images, with the main emphasis on the quality of the image center, called the segregation area. For this reason, we introduce an alternative method for automated retrieval of the region of interest. In the first step, the metallographic image is segmented using both a spectral method and thresholding. Then, the extracted macrostructure of the metallographic image is automatically analyzed by statistical methods. Finally, the automatically extracted regions of interest are compared with the results of human experts. Practical experience with retrieval of non-homogeneous, noisy digital images in an industrial environment is discussed as well.

  15. Development of Raman microspectroscopy for automated detection and imaging of basal cell carcinoma

    Science.gov (United States)

    Larraona-Puy, Marta; Ghita, Adrian; Zoladek, Alina; Perkins, William; Varma, Sandeep; Leach, Iain H.; Koloydenko, Alexey A.; Williams, Hywel; Notingher, Ioan

    2009-09-01

    We investigate the potential of Raman microspectroscopy (RMS) for automated evaluation of excised skin tissue during Mohs micrographic surgery (MMS). The main aim is to develop an automated method for imaging and diagnosis of basal cell carcinoma (BCC) regions. Selected Raman bands responsible for the largest spectral differences between BCC and normal skin regions and linear discriminant analysis (LDA) are used to build a multivariate supervised classification model. The model is based on 329 Raman spectra measured on skin tissue obtained from 20 patients. BCC is discriminated from healthy tissue with 90+/-9% sensitivity and 85+/-9% specificity in a 70% to 30% split cross-validation algorithm. This multivariate model is then applied on tissue sections from new patients to image tumor regions. The RMS images show excellent correlation with the gold standard of histopathology sections, BCC being detected in all positive sections. We demonstrate the potential of RMS as an automated objective method for tumor evaluation during MMS. The replacement of current histopathology during MMS by a "generalization" of the proposed technique may improve the feasibility and efficacy of MMS, leading to a wider use according to clinical need.
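
    A hedged sketch of the LDA classification step on band intensities: the spectra below are synthetic, and the six "selected bands" are an assumption for illustration, not the bands chosen in the study.

```python
# Sketch: linear discriminant analysis on intensities at a few selected
# Raman bands, separating tumour (BCC) from normal tissue.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_bands = 6                                   # assumed number of selected bands
normal = rng.normal(1.0, 0.2, (80, n_bands))  # synthetic normal-tissue spectra
bcc = rng.normal(1.3, 0.2, (80, n_bands))     # synthetic BCC spectra (shifted)
X = np.vstack([normal, bcc])
y = np.repeat([0, 1], 80)

# 70% / 30% split, mirroring the paper's cross-validation scheme
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
acc = lda.score(X_te, y_te)
print(acc)
```

    Per-pixel application of such a model over a raster scan is what turns the classifier into a tumour image.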

  16. Automated 3D-Objectdocumentation on the Base of an Image Set

    Directory of Open Access Journals (Sweden)

    Sebastian Vetter

    2011-12-01

    Full Text Available Digital stereo-photogrammetry offers users automatic evaluation of the spatial dimensions and surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. With the help of an automated point-search algorithm, identical points across the image set are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as defining the relation of neighbouring stereo models. By using proper filter strategies, incorrect points are removed and the relative orientation of the stereo model can be computed automatically. With the help of 3D reference points or distances on the object, or a defined camera-base distance, the stereo model is oriented absolutely. An adapted expansion and matching algorithm makes it possible to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on image quality. With the integration of the iterative closest point algorithm (ICP), these partial point clouds are fitted into a total point cloud. In this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. Texturing can be performed automatically using the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full-frame sensor, high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images.
The
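
    The ICP fitting of partial point clouds can be sketched in two dimensions: correspondences come from nearest neighbours, and the rigid transform from an SVD (Kabsch) fit. The data and iteration count are illustrative only:

```python
# Sketch of the iterative closest point (ICP) idea: repeatedly match each
# point to its nearest neighbour in the reference cloud and solve for the
# best rigid transform via SVD (Kabsch algorithm).
import numpy as np

def icp_step(src, ref):
    """One ICP iteration: align src toward ref with a rigid fit."""
    d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2)
    matched = ref[d.argmin(axis=1)]              # nearest-neighbour matches
    mu_s, mu_m = src.mean(0), matched.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
    R = (U @ Vt).T                               # optimal rotation
    if np.linalg.det(R) < 0:                     # guard against reflection
        Vt[-1] *= -1
        R = (U @ Vt).T
    return (src - mu_s) @ R.T + mu_m

def mean_nn_dist(a, b):
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2).min(axis=1).mean()

rng = np.random.default_rng(0)
ref = rng.random((60, 2))
t = np.deg2rad(3.0)
R0 = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
c = ref.mean(0)
src = (ref - c) @ R0.T + c + 0.02                # slightly rotated + shifted copy
for _ in range(15):
    src = icp_step(src, ref)
print(mean_nn_dist(src, ref))
```

    Real photogrammetric clouds are 3-D and only partially overlapping, so a production ICP needs outlier rejection for the non-overlapping regions.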

  17. Automated construction of arterial and venous trees in retinal images.

    Science.gov (United States)

    Hu, Qiao; Abràmoff, Michael D; Garvin, Mona K

    2015-10-01

    While many approaches exist to segment retinal vessels in fundus photographs, only a limited number focus on the construction and disambiguation of arterial and venous trees. Previous approaches are local and/or greedy in nature, making them susceptible to errors or limiting their applicability to large vessels. We propose a more global framework to generate arteriovenous trees in retinal images, given a vessel segmentation. In particular, our approach consists of three stages. The first stage is to generate an overconnected vessel network, named the vessel potential connectivity map (VPCM), consisting of vessel segments and the potential connectivity between them. The second stage is to disambiguate the VPCM into multiple anatomical trees, using a graph-based metaheuristic algorithm. The third stage is to classify these trees into arterial or venous (A/V) trees. We evaluated our approach with a ground truth built based on a public database, showing a pixel-wise classification accuracy of 88.15% using a manual vessel segmentation as input, and 86.11% using an automatic vessel segmentation as input.

  18. Automated color classification of urine dipstick image in urine examination

    Science.gov (United States)

    Rahmat, R. F.; Royananda; Muchtar, M. A.; Taqiuddin, R.; Adnan, S.; Anugrahwaty, R.; Budiarto, R.

    2018-03-01

    Urine examination using a urine dipstick has long been used to determine a person's health status. The economy and convenience of the urine dipstick are among the reasons it is still used to check people's health status. In practice, urine dipsticks are generally read manually, by comparing them visually with the reference colors. This results in perceptual differences in the color reading of the examination results. In this research, the authors used a scanner to obtain the urine dipstick color image. The use of a scanner can be one of the solutions for reading the result of a urine dipstick because the light produced is consistent. A method is required to overcome the problems of matching the urine dipstick colors with the test reference colors, which has been conducted manually. The method proposed by the authors uses Euclidean distance and Otsu thresholding, along with RGB color feature extraction, to match the colors on the urine dipstick with the standard reference colors of urine examination. The results show that the proposed approach was able to classify the colors on a urine dipstick with an accuracy of 95.45%. The accuracy of color classification on the urine dipstick against the standard reference colors is influenced by the scanner resolution: the higher the scanner resolution, the higher the accuracy.
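
    The color-matching step reduces to a nearest-reference search under Euclidean distance. The reference chart below is an invented placeholder, not a real urinalysis chart:

```python
# Sketch: classify a dipstick pad's mean RGB value by Euclidean distance
# to a set of reference colors.
import numpy as np

REFERENCE = {                      # hypothetical reference chart (RGB)
    "negative": (250, 240, 150),
    "trace":    (230, 210, 120),
    "positive": (180, 120, 160),
}

def classify_pad(rgb):
    """Return the reference label nearest (Euclidean) to the measured RGB."""
    rgb = np.asarray(rgb, float)
    return min(REFERENCE,
               key=lambda k: np.linalg.norm(rgb - np.array(REFERENCE[k], float)))

print(classify_pad((182, 118, 158)))   # close to the "positive" reference
```

    In the described system, Otsu thresholding would first isolate each pad region before its mean RGB value is computed.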

  19. Automated construction of arterial and venous trees in retinal images

    Science.gov (United States)

    Hu, Qiao; Abràmoff, Michael D.; Garvin, Mona K.

    2015-01-01

    Abstract. While many approaches exist to segment retinal vessels in fundus photographs, only a limited number focus on the construction and disambiguation of arterial and venous trees. Previous approaches are local and/or greedy in nature, making them susceptible to errors or limiting their applicability to large vessels. We propose a more global framework to generate arteriovenous trees in retinal images, given a vessel segmentation. In particular, our approach consists of three stages. The first stage is to generate an overconnected vessel network, named the vessel potential connectivity map (VPCM), consisting of vessel segments and the potential connectivity between them. The second stage is to disambiguate the VPCM into multiple anatomical trees, using a graph-based metaheuristic algorithm. The third stage is to classify these trees into arterial or venous (A/V) trees. We evaluated our approach with a ground truth built based on a public database, showing a pixel-wise classification accuracy of 88.15% using a manual vessel segmentation as input, and 86.11% using an automatic vessel segmentation as input. PMID:26636114

  20. Automated vehicle counting using image processing and machine learning

    Science.gov (United States)

    Meany, Sean; Eskew, Edward; Martinez-Castro, Rosana; Jang, Shinae

    2017-04-01

    Vehicle counting is used by governments to improve roadways and the flow of traffic, and by private businesses for purposes such as determining the value of locating a new store in an area. A vehicle count can be performed manually or automatically. Manual counting requires an individual to be on-site and tally the traffic electronically or by hand; however, this can lead to miscounts due to factors such as human error. A common form of automatic counting involves pneumatic tubes, but pneumatic tubes disrupt traffic during installation and removal, and can be damaged by passing vehicles. Vehicle counting can also be performed using a camera at the count site recording video of the traffic, with counting performed manually post-recording or using automatic algorithms. This paper presents a low-cost procedure to perform automatic vehicle counting using remote video cameras with an automatic counting algorithm. The procedure utilizes a Raspberry Pi micro-computer to detect when a car is in a lane and generate an accurate count of vehicle movements. The method uses background subtraction to process the images and a machine learning algorithm to provide the count. This method avoids the fatigue issues encountered in manual video counting and prevents the disruption of roadways that occurs when installing pneumatic tubes.
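
    A minimal sketch of the background-subtraction counting idea; the paper's machine-learning step is replaced here by a simple occupancy threshold, and the frames are synthetic 1-D lane profiles rather than camera images:

```python
# Sketch: subtract a background frame from each video frame, threshold the
# difference, and count a vehicle on each empty -> occupied transition.
import numpy as np

def count_vehicles(frames, background, thresh=30, min_pixels=5):
    count, occupied = 0, False
    for frame in frames:
        fg = np.abs(frame.astype(int) - background.astype(int)) > thresh
        now = fg.sum() >= min_pixels          # lane currently occupied?
        if now and not occupied:              # rising edge = new vehicle
            count += 1
        occupied = now
    return count

bg = np.full(20, 100, np.uint8)
empty = bg.copy()
car = bg.copy()
car[5:15] = 200                               # a bright vehicle in the lane
frames = [empty, car, car, empty, empty, car, empty]
print(count_vehicles(frames, bg))
```

    A static background frame drifts with lighting over time, which is one reason the paper pairs the subtraction with a learned classifier.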

  1. Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness

    Science.gov (United States)

    Singh, Preetpal

    Canada is home to thousands of freshwater lakes and rivers. Apart from being sources of infinite natural beauty, rivers and lakes are an important source of water, food and transportation. The northern hemisphere of Canada experiences extreme cold temperatures in the winter, resulting in a freeze-up of regional lakes and rivers. Frozen lakes and rivers offer unique opportunities in terms of wildlife harvesting and winter transportation. Ice roads built on frozen rivers and lakes are vital supply lines for industrial operations in the remote north. Monitoring the ice freeze-up and break-up dates annually can help predict regional climatic changes. Lake ice impacts a variety of physical, ecological and economic processes. The construction and maintenance of a winter road can cost millions of dollars annually. A good understanding of ice mechanics is required to build and deem an ice road safe. A crucial factor in calculating the load-bearing capacity of ice sheets is the thickness of the ice. Construction costs are mainly attributed to producing and maintaining a specific thickness and density of ice that can support different loads. Climate change is leading to warmer temperatures, causing the ice to thin faster. At a certain point, a winter road may not be thick enough to support travel and transportation. There is considerable interest in monitoring winter road conditions given the high construction and maintenance costs involved. Remote sensing technologies such as Synthetic Aperture Radar have been successfully utilized to study the extent of ice covers and record freeze-up and break-up dates of ice on lakes and rivers across the north. Ice road builders often use ultrasound equipment to measure ice thickness. However, an automated monitoring system based on machine vision and image processing technology that can measure ice thickness on lakes has not yet been developed.
Machine vision and image processing techniques have successfully been used in manufacturing

  2. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter

    2006-01-01

Full Text Available In production processes the use of image processing systems is widespread. Hardware solutions and cameras are available for nearly every application. One important challenge for image processing systems is the development and selection of appropriate algorithms and software solutions in order to realise ambitious quality control for production processes. This article characterises the development of innovative software that combines features for automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to execute the classification task according to the combined features. This software is one crucial element for the automation of a manually operated production process.
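The classification step described above — concatenating several feature groups into one combined vector and assigning each surface sample a defect class — can be sketched as follows. This is an illustrative stand-in, not the article's software: a nearest-centroid classifier replaces the SVM, and the class names are hypothetical.

```python
import numpy as np

def combine_features(*feature_groups):
    """Concatenate per-sample feature groups (e.g. texture and geometry
    descriptors) into one combined feature vector."""
    return np.concatenate([np.asarray(g, dtype=float).ravel() for g in feature_groups])

class NearestCentroidClassifier:
    """Stand-in for the article's SVM: assigns each sample the defect
    class of the nearest class centroid in combined-feature space."""

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # Distance from every sample to every class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]
```

In practice the SVM would replace `NearestCentroidClassifier` with a margin-maximizing decision boundary; the feature-combination step is the same either way.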

  3. Automated recognition and characterization of solar active regions based on the SOHO/MDI images

    Science.gov (United States)

    Pap, J. M.; Turmon, M.; Mukhtar, S.; Bogart, R.; Ulrich, R.; Froehlich, C.; Wehrli, C.

    1997-01-01

    The first results of a new method to identify and characterize the various surface structures on the sun, which may contribute to the changes in solar total and spectral irradiance, are shown. The full disk magnetograms (1024 x 1024 pixels) of the Michelson Doppler Imager (MDI) experiment onboard SOHO are analyzed. Use of a Bayesian inference scheme allows objective, uniform, automated processing of a long sequence of images. The main goal is to identify the solar magnetic features causing irradiance changes. The results presented are based on a pilot time interval of August 1996.
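A per-pixel Bayesian labeling of the kind described can be sketched as below. The class models (Gaussian likelihoods with assumed priors, means and standard deviations per surface-structure class) are purely illustrative; the paper's actual inference over MDI magnetograms is richer than this single-feature posterior.

```python
import numpy as np

def bayes_label(pixels, priors, means, stds):
    """Assign each pixel the class with the highest posterior under Gaussian
    class likelihoods: argmax_k  log P(k) + log N(x; mu_k, sigma_k).
    Class models are illustrative, not the paper's."""
    x = np.asarray(pixels, dtype=float)[..., None]          # shape (..., 1)
    priors = np.asarray(priors, dtype=float)
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    log_post = np.log(priors) - np.log(stds) - 0.5 * ((x - means) / stds) ** 2
    return np.argmax(log_post, axis=-1)                     # winning class index
```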

  4. An Automated Reference Frame Selection (ARFS) Algorithm for Cone Imaging with Adaptive Optics Scanning Light Ophthalmoscopy.

    Science.gov (United States)

    Salmon, Alexander E; Cooper, Robert F; Langlo, Christopher S; Baghaie, Ahmadreza; Dubra, Alfredo; Carroll, Joseph

    2017-04-01

    To develop an automated reference frame selection (ARFS) algorithm to replace the subjective approach of manually selecting reference frames for processing adaptive optics scanning light ophthalmoscope (AOSLO) videos of cone photoreceptors. Relative distortion was measured within individual frames before conducting image-based motion tracking and sorting of frames into distinct spatial clusters. AOSLO images from nine healthy subjects were processed using ARFS and human-derived reference frames, then aligned to undistorted AO-flood images by nonlinear registration and the registration transformations were compared. The frequency at which humans selected reference frames that were rejected by ARFS was calculated in 35 datasets from healthy subjects, and subjects with achromatopsia, albinism, or retinitis pigmentosa. The level of distortion in this set of human-derived reference frames was assessed. The average transformation vector magnitude required for registration of AOSLO images to AO-flood images was significantly reduced from 3.33 ± 1.61 pixels when using manual reference frame selection to 2.75 ± 1.60 pixels (mean ± SD) when using ARFS ( P = 0.0016). Between 5.16% and 39.22% of human-derived frames were rejected by ARFS. Only 2.71% to 7.73% of human-derived frames were ranked in the top 5% of least distorted frames. ARFS outperforms expert observers in selecting minimally distorted reference frames in AOSLO image sequences. The low success rate in human frame choice illustrates the difficulty in subjectively assessing image distortion. Manual reference frame selection represented a significant barrier to a fully automated image-processing pipeline (including montaging, cone identification, and metric extraction). The approach presented here will aid in the clinical translation of AOSLO imaging.
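The core idea — ranking frames of an AOSLO video by distortion and choosing the least-distorted one as reference — can be sketched with a toy distortion proxy: the RMS difference of each frame from the pixelwise median frame. This is not the published ARFS algorithm (which measures intra-frame distortion and clusters frames spatially); it only illustrates automated reference selection.

```python
import numpy as np

def select_reference_frame(frames):
    """Return (index_of_least_distorted_frame, per-frame scores), where the
    score is the RMS difference from the pixelwise median of the sequence,
    used here as a crude distortion proxy."""
    stack = np.asarray(frames, dtype=float)          # (n_frames, h, w)
    median = np.median(stack, axis=0)                # robust reference estimate
    rms = np.sqrt(((stack - median) ** 2).mean(axis=(1, 2)))
    return int(np.argmin(rms)), rms
```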

  5. Quantification of Pulmonary Fibrosis in a Bleomycin Mouse Model Using Automated Histological Image Analysis.

    Directory of Open Access Journals (Sweden)

    Jean-Claude Gilhodes

Full Text Available Current literature on pulmonary fibrosis induced in animal models highlights the need for an accurate, reliable and reproducible histological quantitative analysis. One of the major limits of histological scoring is that it is observer-dependent and consequently subject to variability, which may preclude comparative studies between different laboratories. To achieve a reliable and observer-independent quantification of lung fibrosis we developed an automated software histological image analysis performed on digital images of entire lung sections. This automated analysis was compared to standard evaluation methods with regard to its validation as an end-point measure of fibrosis. Lung fibrosis was induced in mice by intratracheal administration of bleomycin (BLM) at 0.25, 0.5, 0.75 and 1 mg/kg. A detailed characterization of BLM-induced fibrosis was performed 14 days after BLM administration using lung function testing, micro-computed tomography and Ashcroft scoring analysis. Quantification of fibrosis by automated analysis was assessed based on pulmonary tissue density measured from thousands of micro-tiles processed from digital images of entire lung sections. Prior to analysis, large bronchi and vessels were manually excluded from the original images. Measurement of fibrosis has been expressed by two indexes: the mean pulmonary tissue density and the high pulmonary tissue density frequency. We showed that tissue density indexes gave access to a very accurate and reliable quantification of morphological changes induced by BLM even for the lowest concentration used (0.25 mg/kg). A reconstructed 2D image of the entire lung section at high resolution (3.6 μm/pixel) has been generated from tissue density values, allowing the visualization of their distribution throughout fibrotic and non-fibrotic regions. A significant correlation (p<0.0001) was found between automated analysis and the above standard evaluation methods. This correlation supports the validity of the automated analysis as an end-point measure of lung fibrosis in mice, which will be very valuable for future preclinical drug explorations.
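The two indexes — mean pulmonary tissue density and high-density frequency — can be sketched from micro-tiles as follows. The tile size and the tissue/high-density thresholds are assumptions for illustration, not the study's values.

```python
import numpy as np

def tissue_density_indexes(image, tile=32, tissue_thresh=0.5, high_thresh=0.8):
    """Split a grayscale lung section into non-overlapping micro-tiles,
    compute the tissue fraction of each tile, and return the two indexes:
    mean pulmonary tissue density and the frequency of high-density tiles."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    densities = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patch = img[r:r + tile, c:c + tile]
            densities.append(float((patch > tissue_thresh).mean()))
    densities = np.array(densities)
    return densities.mean(), (densities > high_thresh).mean()
```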

  6. Automated low-contrast pattern recognition algorithm for magnetic resonance image quality assessment.

    Science.gov (United States)

    Ehman, Morgan O; Bao, Zhonghao; Stiving, Scott O; Kasam, Mallik; Lanners, Dianna; Peterson, Teresa; Jonsgaard, Renee; Carter, Rickey; McGee, Kiaran P

    2017-08-01

    Low contrast (LC) detectability is a common test criterion for diagnostic radiologic quality control (QC) programs. Automation of this test is desirable in order to reduce human variability and to speed up analysis. However, automation is challenging due to the complexity of the human visual perception system and the ability to create algorithms that mimic this response. This paper describes the development and testing of an automated LC detection algorithm for use in the analysis of magnetic resonance (MR) images of the American College of Radiology (ACR) QC phantom. The detection algorithm includes fuzzy logic decision processes and various edge detection methods to quantify LC detectability. Algorithm performance was first evaluated using a single LC phantom MR image with the addition of incremental zero mean Gaussian noise resulting in a total of 200 images. A c-statistic was calculated to determine the role of CNR to indicate when the algorithm would detect ten spokes. To evaluate inter-rater agreement between experienced observers and the algorithm, a blinded observer study was performed on 196 LC phantom images acquired from nine clinical MR scanners. The nine scanners included two MR manufacturers and two field strengths (1.5 T, 3.0 T). Inter-rater and algorithm-rater agreement was quantified using Krippendorff's alpha. For the Gaussian noise added data, CNR ranged from 0.519 to 11.7 with CNR being considered an excellent discriminator of algorithm performance (c-statistic = 0.9777). Reviewer scoring of the clinical phantom data resulted in an inter-rater agreement of 0.673 with the agreement between observers and algorithm equal to 0.652, both of which indicate significant agreement. This study demonstrates that the detection of LC test patterns for MR imaging QC programs can be successfully developed and that their response can model the human visual detection system of expert MR QC readers. © 2017 American Association of Physicists in Medicine.
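CNR, the discriminator evaluated above, is commonly computed for phantom QC as the difference of ROI means over the background noise. The paper does not give its exact formula, so this is the conventional definition as a sketch:

```python
import numpy as np

def contrast_to_noise_ratio(signal_roi, background_roi):
    """Conventional phantom CNR: |mean(signal) - mean(background)| divided by
    the sample standard deviation of the background ROI."""
    s = np.asarray(signal_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return abs(s.mean() - b.mean()) / b.std(ddof=1)
```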

  7. Quantification of Pulmonary Fibrosis in a Bleomycin Mouse Model Using Automated Histological Image Analysis.

    Science.gov (United States)

    Gilhodes, Jean-Claude; Julé, Yvon; Kreuz, Sebastian; Stierstorfer, Birgit; Stiller, Detlef; Wollin, Lutz

    2017-01-01

Current literature on pulmonary fibrosis induced in animal models highlights the need for an accurate, reliable and reproducible histological quantitative analysis. One of the major limits of histological scoring is that it is observer-dependent and consequently subject to variability, which may preclude comparative studies between different laboratories. To achieve a reliable and observer-independent quantification of lung fibrosis we developed an automated software histological image analysis performed on digital images of entire lung sections. This automated analysis was compared to standard evaluation methods with regard to its validation as an end-point measure of fibrosis. Lung fibrosis was induced in mice by intratracheal administration of bleomycin (BLM) at 0.25, 0.5, 0.75 and 1 mg/kg. A detailed characterization of BLM-induced fibrosis was performed 14 days after BLM administration using lung function testing, micro-computed tomography and Ashcroft scoring analysis. Quantification of fibrosis by automated analysis was assessed based on pulmonary tissue density measured from thousands of micro-tiles processed from digital images of entire lung sections. Prior to analysis, large bronchi and vessels were manually excluded from the original images. Measurement of fibrosis has been expressed by two indexes: the mean pulmonary tissue density and the high pulmonary tissue density frequency. We showed that tissue density indexes gave access to a very accurate and reliable quantification of morphological changes induced by BLM even for the lowest concentration used (0.25 mg/kg). A reconstructed 2D image of the entire lung section at high resolution (3.6 μm/pixel) has been generated from tissue density values, allowing the visualization of their distribution throughout fibrotic and non-fibrotic regions. A significant correlation (p<0.0001) was found between automated analysis and the above standard evaluation methods, supporting the validity of the automated analysis as an end-point measure of lung fibrosis in mice, which will be very valuable for future preclinical drug explorations.

  8. An automated algorithm for photoreceptors counting in adaptive optics retinal images

    Science.gov (United States)

    Liu, Xu; Zhang, Yudong; Yun, Dai

    2012-10-01

Eyes are important human organs that detect light and form spatial and color vision. Knowing the exact number of cones in a retinal image is of great importance in helping us understand the mechanism of eye function and the pathology of some eye diseases. In order to analyze data in real time and process large-scale data, an automated algorithm is designed to label cone photoreceptors in adaptive optics (AO) retinal images. Images acquired by the flood-illuminated AO system were used to test the efficiency of this algorithm. We labeled these images both automatically and manually, and compared the results of the two methods. A 94.1% to 96.5% agreement rate between the two methods was achieved in this experiment, which demonstrates the reliability and efficiency of the algorithm.
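An agreement rate between automatic and manual cone labels can be computed by one-to-one matching of labeled coordinates within a pixel tolerance. The greedy matcher and the tolerance radius below are assumptions for illustration; the paper does not specify its matching rule.

```python
import numpy as np

def label_agreement(auto_pts, manual_pts, radius=2.0):
    """Fraction of manually labeled cones that have an automatic label within
    `radius` pixels, using greedy one-to-one matching (each automatic label
    is consumed by at most one manual label)."""
    remaining = [np.asarray(p, dtype=float) for p in auto_pts]
    matched = 0
    for m in manual_pts:
        m = np.asarray(m, dtype=float)
        if not remaining:
            break
        dists = [np.linalg.norm(a - m) for a in remaining]
        i = int(np.argmin(dists))
        if dists[i] <= radius:
            matched += 1
            remaining.pop(i)          # one-to-one: consume the matched label
    return matched / len(manual_pts)
```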

  9. SAND: an automated VLBI imaging and analysing pipeline - I. Stripping component trajectories

    Science.gov (United States)

    Zhang, M.; Collioud, A.; Charlot, P.

    2018-02-01

    We present our implementation of an automated very long baseline interferometry (VLBI) data-reduction pipeline that is dedicated to interferometric data imaging and analysis. The pipeline can handle massive VLBI data efficiently, which makes it an appropriate tool to investigate multi-epoch multiband VLBI data. Compared to traditional manual data reduction, our pipeline provides more objective results as less human interference is involved. The source extraction is carried out in the image plane, while deconvolution and model fitting are performed in both the image plane and the uv plane for parallel comparison. The output from the pipeline includes catalogues of CLEANed images and reconstructed models, polarization maps, proper motion estimates, core light curves and multiband spectra. We have developed a regression STRIP algorithm to automatically detect linear or non-linear patterns in the jet component trajectories. This algorithm offers an objective method to match jet components at different epochs and to determine their proper motions.
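Fitting a jet component's positions across epochs to a linear trajectory — the core of a proper-motion estimate — can be sketched with a least-squares fit. The residual-based linearity test below is a hypothetical stand-in for the regression STRIP criteria, which the abstract does not detail.

```python
import numpy as np

def fit_trajectory(epochs, positions, linear_tol=0.1):
    """Least-squares linear fit of component position versus epoch.
    Returns (proper_motion, intercept, is_linear); is_linear holds when the
    RMS residual is below `linear_tol` (an assumed threshold)."""
    t = np.asarray(epochs, dtype=float)
    x = np.asarray(positions, dtype=float)
    slope, intercept = np.polyfit(t, x, 1)           # slope = proper motion
    residuals = np.polyval([slope, intercept], t) - x
    rms = np.sqrt((residuals ** 2).mean())
    return slope, intercept, bool(rms < linear_tol)
```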

  10. Semi-automated camera trap image processing for the detection of ungulate fence crossing events.

    Science.gov (United States)

    Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija

    2017-09-27

Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring the input of substantial time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence that a particular sequence of images contained a fence crossing event. This resulted in a reduction of 54.8% in the number of images that required further human operator characterization while retaining 72.6% of the known fence crossing events. This program can provide researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
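The background-subtraction step can be sketched as follows: flag a frame as a candidate event when enough pixels deviate from a background model. The thresholds are assumptions, and the program's histogram rules and multi-frame sequence logic are not reproduced here.

```python
import numpy as np

def flag_crossing_candidate(frame, background, diff_thresh=0.2, area_frac=0.01):
    """Background subtraction for one camera-trap frame: returns
    (is_candidate, changed_fraction), where a frame is a candidate event if
    the fraction of pixels differing from the background model by more than
    `diff_thresh` exceeds `area_frac`. Thresholds are illustrative."""
    diff = np.abs(np.asarray(frame, dtype=float) - np.asarray(background, dtype=float))
    changed_fraction = float((diff > diff_thresh).mean())
    return changed_fraction > area_frac, changed_fraction
```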

  11. High-resolution imaging optomechatronics for precise liquid crystal display module bonding automated optical inspection

    Science.gov (United States)

    Ni, Guangming; Liu, Lin; Zhang, Jing; Liu, Juanxiu; Liu, Yong

    2018-01-01

With the development of the liquid crystal display (LCD) module industry, LCD modules are becoming larger and more precise, which places harsh imaging requirements on automated optical inspection (AOI). Here, we report a high-resolution, clearly focused imaging optomechatronic system for precise LCD module bonding AOI. It achieves high-resolution imaging for LCD module bonding inspection using a line scan camera (LSC) triggered by a linear optical encoder, and self-adaptive focusing over the whole large imaging region using the LSC and a laser displacement sensor, which reduces the requirements on machining, assembly, and motion control of AOI devices. Results show that this system can directly achieve clearly focused imaging for AOI inspection of large LCD module bonding with 0.8 μm image resolution, a 2.65-mm scan imaging width, and no theoretical limit on imaging width. All of these are significant for AOI inspection in the LCD module industry and other fields that require imaging large regions with high resolution.

  12. Automated data processing architecture for the Gemini Planet Imager Exoplanet Survey

    Science.gov (United States)

    Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Maire, Jérôme; Marchis, Franck; Graham, James R.; Macintosh, Bruce; Ammons, S. Mark; Bailey, Vanessa P.; Barman, Travis S.; Bruzzone, Sebastian; Bulger, Joanna; Cotten, Tara; Doyon, René; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Goodsell, Stephen; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn M.; Larkin, James E.; Marley, Mark S.; Metchev, Stanimir; Nielsen, Eric L.; Oppenheimer, Rebecca; Palmer, David W.; Patience, Jennifer; Poyneer, Lisa A.; Pueyo, Laurent; Rajan, Abhijith; Rantakyrö, Fredrik T.; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane J.

    2018-01-01

    The Gemini Planet Imager Exoplanet Survey (GPIES) is a multiyear direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the Data Cruncher, combines multiple data reduction pipelines (DRPs) together to process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our DRPs. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.

  13. An Automated MR Image Segmentation System Using Multi-layer Perceptron Neural Network

    Directory of Open Access Journals (Sweden)

    Amiri S

    2013-12-01

Full Text Available Background: Brain tissue segmentation for delineation of 3D anatomical structures from magnetic resonance (MR) images can be used for neuro-degenerative disorders, characterizing morphological differences between subjects based on volumetric analysis of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF), but only if the obtained segmentation results are correct. Due to image artifacts such as noise, low contrast and intensity non-uniformity, there are some classification errors in the results of image segmentation. Objective: An automated algorithm based on multi-layer perceptron neural networks (MLPNN) is presented for segmenting MR images. The system is to identify the two tissues WM and GM in human brain 2D structural MR images. A given 2D image is processed to enhance image intensity and to remove extra cerebral tissue. Thereafter, each pixel of the image under study is represented using 13 features (8 statistical and 5 non-statistical features) and is classified using a MLPNN into one of three classes: WM, GM, or unknown. Results: The developed MR image segmentation algorithm was evaluated using 20 real images. Training using only one image, the system showed robust performance when tested using the remaining 19 images. The average Jaccard similarity index and Dice similarity metric were estimated to be 75.7% and 86.0% for GM, and 67.8% and 80.7% for WM, respectively. Conclusion: The obtained performances are encouraging and show that the presented method may assist with segmentation of 2D MR images, especially where categorizing WM and GM is of interest.

  14. Automated Adaptive Brightness in Wireless Capsule Endoscopy Using Image Segmentation and Sigmoid Function.

    Science.gov (United States)

    Shrestha, Ravi; Mohammed, Shahed K; Hasan, Md Mehedi; Zhang, Xuechao; Wahid, Khan A

    2016-08-01

Wireless capsule endoscopy (WCE) plays an important role in the diagnosis of gastrointestinal (GI) diseases by capturing images of the human small intestine. Accurate diagnosis of endoscopic images depends heavily on the quality of captured images. Along with image resolution and frame rate, brightness of the image is an important parameter that influences image quality, which motivates the design of an efficient illumination system. Such design involves the choice and placement of a proper light source and its ability to illuminate the GI surface with proper brightness. Light emitting diodes (LEDs) are normally used as sources, where modulated pulses are used to control LED brightness. In practice, instances of under- and over-illumination are very common in WCE, where the former produces dark images and the latter produces bright images with high power consumption. In this paper, we propose a low-power and efficient illumination system that is based on an automated brightness algorithm. The scheme is adaptive in nature, i.e., the brightness level is controlled automatically in real time while the images are being captured. The captured images are segmented into four equal regions and the brightness level of each region is calculated. Then an adaptive sigmoid function is used to find the optimized brightness level and accordingly a new value of the duty cycle of the modulated pulse is generated to capture future images. The algorithm is fully implemented in a capsule prototype and tested with endoscopic images. Commercial capsules like Pillcam and Mirocam were also used in the experiment. The results show that the proposed algorithm works well in controlling the brightness level according to the environmental condition, and as a result, good quality images are captured with an average of 40% brightness level, which saves power consumption of the capsule.
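The adaptive step — mapping the measured brightness of the four image regions through a sigmoid to the next LED duty cycle — can be sketched as below. The gain, target and clamp values are illustrative constants, not the paper's; the published algorithm's exact sigmoid parameterization is not reproduced.

```python
import math

def next_duty_cycle(region_brightness, target=0.5, gain=8.0, d_min=0.05, d_max=0.95):
    """Map the mean brightness of the four segmented regions through a sigmoid
    to the LED duty cycle for the next frame: dark images push the duty cycle
    up, bright images push it down."""
    mean_b = sum(region_brightness) / len(region_brightness)
    error = target - mean_b                        # positive when image too dark
    duty = 1.0 / (1.0 + math.exp(-gain * error))   # sigmoid squashes to (0, 1)
    return min(d_max, max(d_min, duty))            # clamp to hardware limits
```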

  15. An Automated MR Image Segmentation System Using Multi-layer Perceptron Neural Network.

    Science.gov (United States)

    Amiri, S; Movahedi, M M; Kazemi, K; Parsaei, H

    2013-12-01

Brain tissue segmentation for delineation of 3D anatomical structures from magnetic resonance (MR) images can be used for neuro-degenerative disorders, characterizing morphological differences between subjects based on volumetric analysis of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF), but only if the obtained segmentation results are correct. Due to image artifacts such as noise, low contrast and intensity non-uniformity, there are some classification errors in the results of image segmentation. An automated algorithm based on multi-layer perceptron neural networks (MLPNN) is presented for segmenting MR images. The system is to identify the two tissues WM and GM in human brain 2D structural MR images. A given 2D image is processed to enhance image intensity and to remove extra cerebral tissue. Thereafter, each pixel of the image under study is represented using 13 features (8 statistical and 5 non-statistical features) and is classified using a MLPNN into one of three classes: WM, GM, or unknown. The developed MR image segmentation algorithm was evaluated using 20 real images. Training using only one image, the system showed robust performance when tested using the remaining 19 images. The average Jaccard similarity index and Dice similarity metric were estimated to be 75.7% and 86.0% for GM, and 67.8% and 80.7% for WM, respectively. The obtained performances are encouraging and show that the presented method may assist with segmentation of 2D MR images, especially where categorizing WM and GM is of interest.
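The two reported scores are standard overlap measures between a binary segmentation and its reference mask:

```python
import numpy as np

def jaccard_and_dice(seg, ref):
    """Jaccard similarity index |A∩B| / |A∪B| and Dice similarity metric
    2|A∩B| / (|A| + |B|) between two binary masks, as used to score the
    GM and WM segmentations."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    jaccard = inter / union
    dice = 2 * inter / (seg.sum() + ref.sum())
    return jaccard, dice
```

Note the two are monotonically related (Dice = 2J / (1 + J)), which is why the GM/WM rankings agree under both metrics.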

  16. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Directory of Open Access Journals (Sweden)

    Vincenzo Della Mea

Full Text Available The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While Whole Slide Image analysis is recognized among the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin is complemented by example macros in the ImageJ scripting language to demonstrate its use in concrete situations.
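The underlying idea — decomposing a gigapixel slide into tiles that single-field algorithms can handle — can be sketched as a tile iterator. SlideJ itself is an ImageJ plugin (Java); this is a language-agnostic sketch of the tiling pattern only.

```python
import numpy as np

def iter_tiles(slide, tile=512, overlap=0):
    """Yield (row, col, tile_array) over a large image so that a single-field
    algorithm can be applied tile by tile instead of loading the whole slide
    into one analysis pass. Edge tiles may be smaller than `tile`."""
    img = np.asarray(slide)
    h, w = img.shape[:2]
    step = tile - overlap
    for r in range(0, h, step):
        for c in range(0, w, step):
            yield r, c, img[r:r + tile, c:c + tile]   # numpy clips at the edges
```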

  17. Microscope image based fully automated stomata detection and pore measurement method for grapevines

    Directory of Open Access Journals (Sweden)

    Hiranya Jayakody

    2017-11-01

Full Text Available Background: Stomatal behavior in grapevines has been identified as a good indicator of the water stress level and overall health of the plant. Microscope images are often used to analyze stomatal behavior in plants. However, most of the current approaches involve manual measurement of stomatal features. The main aim of this research is to develop a fully automated stomata detection and pore measurement method for grapevines, taking microscope images as the input. The proposed approach, which employs machine learning and image processing techniques, can outperform available manual and semi-automatic methods used to identify and estimate stomatal morphological features. Results: First, a cascade object detection learning algorithm is developed to correctly identify multiple stomata in a large microscopic image. Once the regions of interest which contain stomata are identified and extracted, a combination of image processing techniques is applied to estimate the pore dimensions of the stomata. The stomata detection approach was compared with an existing fully automated template matching technique and a semi-automatic maximum stable extremal regions approach, with the proposed method clearly surpassing the performance of the existing techniques with a precision of 91.68% and an F1-score of 0.85. Next, the morphological features of the detected stomata were measured. Contrary to existing approaches, the proposed image segmentation and skeletonization method allows us to estimate the pore dimensions even in cases where the stomatal pore boundary is only partially visible in the microscope image. A test conducted using 1267 images of stomata showed that the segmentation and skeletonization approach was able to correctly identify the stoma opening 86.27% of the time. Further comparisons made with manually traced stoma openings indicated that the proposed method is able to estimate stomata morphological features with accuracies of 89.03% for area, 94.06% for major axis length, and 93.31% for minor axis length.

  18. Microscope image based fully automated stomata detection and pore measurement method for grapevines.

    Science.gov (United States)

    Jayakody, Hiranya; Liu, Scarlett; Whitty, Mark; Petrie, Paul

    2017-01-01

Stomatal behavior in grapevines has been identified as a good indicator of the water stress level and overall health of the plant. Microscope images are often used to analyze stomatal behavior in plants. However, most of the current approaches involve manual measurement of stomatal features. The main aim of this research is to develop a fully automated stomata detection and pore measurement method for grapevines, taking microscope images as the input. The proposed approach, which employs machine learning and image processing techniques, can outperform available manual and semi-automatic methods used to identify and estimate stomatal morphological features. First, a cascade object detection learning algorithm is developed to correctly identify multiple stomata in a large microscopic image. Once the regions of interest which contain stomata are identified and extracted, a combination of image processing techniques is applied to estimate the pore dimensions of the stomata. The stomata detection approach was compared with an existing fully automated template matching technique and a semi-automatic maximum stable extremal regions approach, with the proposed method clearly surpassing the performance of the existing techniques with a precision of 91.68% and an F1-score of 0.85. Next, the morphological features of the detected stomata were measured. Contrary to existing approaches, the proposed image segmentation and skeletonization method allows us to estimate the pore dimensions even in cases where the stomatal pore boundary is only partially visible in the microscope image. A test conducted using 1267 images of stomata showed that the segmentation and skeletonization approach was able to correctly identify the stoma opening 86.27% of the time. Further comparisons made with manually traced stoma openings indicated that the proposed method is able to estimate stomata morphological features with accuracies of 89.03% for area, 94.06% for major axis length, and 93.31% for minor axis length.
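Pore area and the major/minor axis lengths reported above can be estimated from a binary pore mask via second central moments (the equivalent-ellipse construction, as in standard region-property analysis). This is a generic sketch, not the paper's segmentation-and-skeletonization pipeline.

```python
import numpy as np

def pore_measurements(mask):
    """Morphological features of a binary pore mask: pixel area plus the major
    and minor axis lengths of the equivalent ellipse, derived from the second
    central moments (covariance) of the pore pixel coordinates."""
    ys, xs = np.nonzero(np.asarray(mask))
    area = int(xs.size)
    coords = np.stack([xs, ys], axis=1).astype(float)
    cov = np.cov(coords, rowvar=False)                # 2x2 coordinate covariance
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]    # descending eigenvalues
    major, minor = 4.0 * np.sqrt(evals)               # ellipse axes from variances
    return area, major, minor
```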

  19. An Automated Tracking Approach for Extraction of Retinal Vasculature in Fundus Images

    Directory of Open Access Journals (Sweden)

    Alireza Osareh

    2010-01-01

Full Text Available Purpose: To present a novel automated method for tracking and detection of retinal blood vessels in fundus images. Methods: For every pixel in retinal images, a feature vector was computed utilizing multiscale analysis based on Gabor filters. To classify the pixels based on their extracted features as vascular or non-vascular, various classifiers including Quadratic Gaussian (QG), K-Nearest Neighbors (KNN), and Neural Networks (NN) were investigated. The accuracy of classifiers was evaluated using Receiver Operating Characteristic (ROC) curve analysis in addition to sensitivity and specificity measurements. We opted for an NN model due to its superior performance in classification of retinal pixels as vascular and non-vascular. Results: The proposed method achieved an overall accuracy of 96.9%, sensitivity of 96.8%, and specificity of 97.3% for identification of retinal blood vessels using a dataset of 40 images. The area under the ROC curve reached a value of 0.967. Conclusion: Automated tracking and identification of retinal blood vessels based on Gabor filters and neural network classifiers seems highly successful. Through a comprehensive optimization process of operational parameters, our proposed scheme does not require any user intervention and has consistent performance for both normal and abnormal images.
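The per-pixel feature computation rests on Gabor filters: a Gaussian window multiplied by an oriented cosine carrier. Below is a minimal sketch of one kernel and a single-pixel response; the kernel parameters are assumptions, and the paper's full multiscale bank and NN classifier are not reproduced.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0):
    """Real part of a Gabor kernel: a Gaussian window times a cosine carrier
    oriented at angle `theta` — the building block of the per-pixel
    multiscale feature vector."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotate into filter frame
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    return gauss * np.cos(2.0 * np.pi * xr / wavelength)

def gabor_response(patch, kernel):
    """Feature value for the pixel at the patch centre: correlation of its
    local neighbourhood with the kernel (one entry of the feature vector)."""
    return float((np.asarray(patch, dtype=float) * kernel).sum())
```

A full implementation would convolve the image with kernels at several orientations and scales and stack the responses into the per-pixel feature vector.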

  20. Fully automated segmentation of left ventricle using dual dynamic programming in cardiac cine MR images

    Science.gov (United States)

    Jiang, Luan; Ling, Shan; Li, Qiang

    2016-03-01

    Cardiovascular diseases are becoming a leading cause of death all over the world. The cardiac function could be evaluated by global and regional parameters of left ventricle (LV) of the heart. The purpose of this study is to develop and evaluate a fully automated scheme for segmentation of LV in short axis cardiac cine MR images. Our fully automated method consists of three major steps, i.e., LV localization, LV segmentation at end-diastolic phase, and LV segmentation propagation to the other phases. First, the maximum intensity projection image along the time phases of the midventricular slice, located at the center of the image, was calculated to locate the region of interest of LV. Based on the mean intensity of the roughly segmented blood pool in the midventricular slice at each phase, end-diastolic (ED) and end-systolic (ES) phases were determined. Second, the endocardial and epicardial boundaries of LV of each slice at ED phase were synchronously delineated by use of a dual dynamic programming technique. The external costs of the endocardial and epicardial boundaries were defined with the gradient values obtained from the original and enhanced images, respectively. Finally, with the advantages of the continuity of the boundaries of LV across adjacent phases, we propagated the LV segmentation from the ED phase to the other phases by use of dual dynamic programming technique. The preliminary results on 9 clinical cardiac cine MR cases show that the proposed method can obtain accurate segmentation of LV based on subjective evaluation.
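Dynamic programming boundary delineation finds, column by column, the minimum-cost path through an external-cost image under a smoothness constraint. The sketch below handles a single boundary; the paper's dual formulation couples two such boundaries (endocardial and epicardial) and is not reproduced here.

```python
import numpy as np

def min_cost_boundary(cost, max_step=1):
    """Return one row index per column tracing the minimum-cost path through
    `cost`, allowing the row to change by at most `max_step` between adjacent
    columns (the smoothness constraint)."""
    cost = np.asarray(cost, dtype=float)
    h, w = cost.shape
    acc = cost.copy()                       # accumulated cost
    back = np.zeros((h, w), dtype=int)      # backpointers to previous column
    for c in range(1, w):
        for r in range(h):
            lo, hi = max(0, r - max_step), min(h, r + max_step + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    path = [int(np.argmin(acc[:, -1]))]     # best endpoint in the last column
    for c in range(w - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]
```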

  1. A method for the automated detection phishing websites through both site characteristics and image analysis

    Science.gov (United States)

    White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.

    2012-06-01

    Phishing website analysis is still largely a time-consuming manual process of discovering potential phishing sites, verifying whether suspicious sites truly are malicious spoofs and, if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values such as the number of images and links. We also capture a screen-shot of the rendered page image, compute a hash of the image and use the Hamming distance between these image hashes as a form of visual comparison. We present initial results demonstrating the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot and by performing initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as the resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
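    The screen-shot comparison can be approximated with a simple average hash. This is a generic perceptual-hash sketch, not the authors' exact hashing scheme; the gradient images below stand in for rendered page screen-shots:

```python
import numpy as np

def ahash(image, hash_size=8):
    """Average hash: downscale by block-averaging, then threshold at the mean."""
    h, w = image.shape
    bh, bw = h // hash_size, w // hash_size
    small = image[:bh * hash_size, :bw * hash_size] \
        .reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming(a, b):
    """Number of differing hash bits."""
    return int(np.count_nonzero(a != b))

# A horizontal-gradient "page", an identical copy, and a transposed (different) one.
page = np.tile(np.linspace(0, 1, 64), (64, 1))
print(hamming(ahash(page), ahash(page.copy())))  # 0
print(hamming(ahash(page), ahash(page.T)))       # 32
```

    Visually identical pages hash to distance 0, while structurally different layouts land far apart, which is the property the Hamming-distance comparison exploits.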

  2. AUTOMATED DETECTION OF OIL DEPOTS FROM HIGH RESOLUTION IMAGES: A NEW PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    A. O. Ok

    2015-03-01

    This paper presents an original approach to identify oil depots from single high-resolution aerial/satellite images in an automated manner. The new approach exploits the symmetric nature of circular oil depots and computes radial symmetry in a unique way. An automated thresholding method to focus on circular regions and a new measure to verify circles are proposed. Experiments are performed on six GeoEye-1 test images. In addition, we perform tests on 16 Google Earth images of an industrial test site acquired as a time series (between the years 1995 and 2012). The results reveal that our approach is capable of detecting circular objects in very different and difficult images. We computed an overall performance of 95.8% for the GeoEye-1 dataset. The time series investigation reveals that our approach is robust enough to locate oil depots in industrial environments under varying illumination and environmental conditions. The overall performance is computed as 89.4% for the Google Earth dataset, a result that confirms the advantage of our approach over a state-of-the-art method.
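    The paper's automated thresholding is its own contribution; as a generic stand-in, Otsu's between-class-variance criterion shows the flavour of histogram-based automatic threshold selection (the bimodal data below are synthetic):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)            # class-0 probability
    mu = np.cumsum(p * centers)     # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[int(np.argmax(sigma_b))]

rng = np.random.default_rng(2)
dark = rng.normal(0.2, 0.03, 5000)    # e.g. background intensities
bright = rng.normal(0.8, 0.03, 5000)  # e.g. candidate-region intensities
t = otsu_threshold(np.concatenate([dark, bright]))
print(round(t, 2))  # lands in the gap between the two modes
```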

  3. Fully automated muscle quality assessment by Gabor filtering of second harmonic generation images

    Science.gov (United States)

    Paesen, Rik; Smolders, Sophie; Vega, José Manolo de Hoyos; Eijnde, Bert O.; Hansen, Dominique; Ameloot, Marcel

    2016-02-01

    Although structural changes on the sarcomere level of skeletal muscle are known to occur due to various pathologies, rigorous studies of the reduced sarcomere quality remain scarce. This can possibly be explained by the lack of an objective tool for analyzing and comparing sarcomere images across biological conditions. Recent developments in second harmonic generation (SHG) microscopy and increasing insight into the interpretation of sarcomere SHG intensity profiles have made SHG microscopy a valuable tool to study microstructural properties of sarcomeres. Typically, sarcomere integrity is analyzed by fitting a set of manually selected, one-dimensional SHG intensity profiles with a supramolecular SHG model. To circumvent this tedious manual selection step, we developed a fully automated image analysis procedure to map the sarcomere disorder for the entire image at once. The algorithm relies on a single-frequency wavelet-based Gabor approach and includes a newly developed normalization procedure allowing for unambiguous data interpretation. The method was validated by showing the correlation between the sarcomere disorder, quantified by the M-band size obtained from manually selected profiles, and the normalized Gabor value ranging from 0 to 1 for decreasing disorder. Finally, to elucidate the applicability of our newly developed protocol, Gabor analysis was used to study the effect of experimental autoimmune encephalomyelitis on the sarcomere regularity. We believe that the technique developed in this work holds great promise for high-throughput, unbiased, and automated image analysis to study sarcomere integrity by SHG microscopy.

  4. A Container Horizontal Positioning Method with Image Sensors for Cranes in Automated Container Terminals

    Directory of Open Access Journals (Sweden)

    FU Yonghua

    2014-03-01

    Automation is a trend at large container terminals nowadays, and container positioning techniques are a key factor in the automation process. Vision-based positioning techniques are inexpensive and rather accurate in nature, although their performance under insufficient illumination remains in question. This paper proposes a vision-based procedure with image sensors to determine the position of a container in the horizontal plane. The points found by the edge-detection operator are clustered, and only the peak points in the parameter space of the Hough transformation are selected, so that the effect of noise is greatly reduced. The effectiveness of the procedure is verified in experiments, in which its efficiency is also investigated.
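    Keeping only the peak of the Hough accumulator, as the abstract describes, can be sketched for straight lines. This is a simplified stand-in for the paper's procedure; grid resolutions are arbitrary choices:

```python
import numpy as np

def hough_peak(points, n_theta=180, n_rho=200, diag=100.0):
    """Vote edge points into (theta, rho) space and keep only the peak cell."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_edges = np.linspace(-diag, diag, n_rho + 1)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.clip(np.searchsorted(rho_edges, rho) - 1, 0, n_rho - 1)
        acc[np.arange(n_theta), idx] += 1
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[ti], (rho_edges[ri] + rho_edges[ri + 1]) / 2.0

# Edge points on the vertical line x = 20, plus two noise points.
pts = [(20, y) for y in range(0, 50, 5)] + [(3, 7), (41, 2)]
theta, rho = hough_peak(pts)
print(theta, rho)  # theta ~ 0 (vertical line), rho ~ 20
```

    Because only the accumulator peak is kept, the isolated noise votes never influence the recovered line, which is the robustness property the paper relies on.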

  5. Automated identification of copepods using digital image processing and artificial neural network.

    Science.gov (United States)

    Leow, Lee Kien; Chew, Li-Lee; Chong, Ving Ching; Dhillon, Sarinder Kaur

    2015-01-01

    Copepods are planktonic organisms that play a major role in the marine food chain. Studying the community structure and abundance of copepods in relation to the environment is essential to evaluate their contribution to mangrove trophodynamics and coastal fisheries. The routine identification of copepods can be very technical, requiring taxonomic expertise, experience and much effort, and can be very time-consuming. Hence, there is an urgent need to introduce novel methods and approaches to automate the identification and classification of copepod specimens. This study aims to apply digital image processing and machine learning methods to build an automated identification and classification technique. We developed an automated technique to extract morphological features of copepod specimens from captured images using digital image processing techniques. An Artificial Neural Network (ANN) was used to classify the copepod specimens from the species Acartia spinicauda, Bestiolina similis, Oithona aruensis, Oithona dissimilis, Oithona simplex, Parvocalanus crassirostris, Tortanus barbatus and Tortanus forcipatus based on the extracted features. 60% of the dataset was used for training a two-layer feed-forward network and the remaining 40% was used as the testing dataset for system evaluation. Our approach demonstrated an overall classification accuracy of 93.13% (100% for A. spinicauda, B. similis and O. aruensis, 95% for T. barbatus, 90% for O. dissimilis and P. crassirostris, 85% for O. simplex and T. forcipatus). The methods presented in this study enable fast classification of copepods to the species level. Future studies should include more classes in the model, improve feature selection, and reduce the time needed to capture copepod images.

  6. ARAM: an automated image analysis software to determine rosetting parameters and parasitaemia in Plasmodium samples.

    Science.gov (United States)

    Kudella, Patrick Wolfgang; Moll, Kirsten; Wahlgren, Mats; Wixforth, Achim; Westerhausen, Christoph

    2016-04-18

    Rosetting is associated with severe malaria and is a primary cause of death in Plasmodium falciparum infections. Detailed understanding of this adhesive phenomenon may enable the development of new therapies interfering with rosette formation. For this, it is crucial to determine parameters such as rosetting and parasitaemia of laboratory strains or patient isolates, a bottleneck in malaria research due to the time-consuming and error-prone manual analysis of specimens. Here, the automated, free, stand-alone analysis software automated rosetting analyzer for micrographs (ARAM), which determines rosetting rate, rosette size distribution and parasitaemia with a convenient graphical user interface, is presented. Automated rosetting analyzer for micrographs is an executable with two operation modes for automated identification of objects on images. The default mode detects red blood cells and fluorescently labelled parasitized red blood cells by combining an intensity-gradient with a threshold filter. The second mode determines object location and size distribution from a single contrast method. The obtained results are compared with standardized manual analysis. Automated rosetting analyzer for micrographs calculates statistical confidence probabilities for rosetting rate and parasitaemia. Automated rosetting analyzer for micrographs analyses 25 cell objects per second, reliably delivering results identical to those of manual analysis. For the first time, rosette size distribution is determined in a precise and quantitative manner by employing ARAM in combination with established inhibition tests. Additionally, ARAM measures the essential observables parasitaemia, rosetting rate and size, as well as the location of all detected objects, and provides confidence intervals for the determined observables. No other existing software solution offers this range of function. The second, non-malaria-specific, analysis mode of ARAM offers the functionality to detect arbitrary objects.
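    The detection step (threshold filtering, then identification of individual objects) can be illustrated with connected-component labelling. The synthetic blobs below stand in for red blood cells; this is a generic sketch, not ARAM's algorithm:

```python
import numpy as np
from scipy import ndimage

def detect_objects(image, threshold):
    """Label connected above-threshold regions; return count, sizes, centroids."""
    mask = image > threshold
    labels, n = ndimage.label(mask)
    idx = list(range(1, n + 1))
    sizes = ndimage.sum(mask, labels, index=idx)            # pixels per object
    centroids = ndimage.center_of_mass(mask, labels, index=idx)
    return n, sizes, centroids

img = np.zeros((40, 40))
img[5:10, 5:10] = 1.0     # object 1: 25 px
img[20:26, 8:14] = 1.0    # object 2: 36 px
img[30:34, 30:35] = 1.0   # object 3: 20 px
n, sizes, cents = detect_objects(img, 0.5)
print(n)                                # 3
print([int(s) for s in sorted(sizes)])  # [20, 25, 36]
```

    Per-object sizes and locations of this kind are the raw material for rosetting-rate and size-distribution statistics.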

  7. Benchmarking, Research, Development, and Support for ORNL Automated Image and Signature Retrieval (AIR/ASR) Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, K.W.

    2004-06-01

    This report describes the results of a Cooperative Research and Development Agreement (CRADA) with Applied Materials, Inc. (AMAT) of Santa Clara, California. This project encompassed the continued development and integration of the ORNL Automated Image Retrieval (AIR) technology, and an extension of the technology denoted Automated Signature Retrieval (ASR), and other related technologies with the Defect Source Identification (DSI) software system that was under development by AMAT at the time this work was performed. In the semiconductor manufacturing environment, defect imagery is used to diagnose problems in the manufacturing line, train yield management engineers, and examine historical data for trends. Image management in semiconductor data systems is a growing cause of concern in the industry as fabricators are now collecting up to 20,000 images each week. In response to this concern, researchers at the Oak Ridge National Laboratory (ORNL) developed a semiconductor-specific content-based image retrieval method and system, also known as AIR. The system uses an image-based query-by-example method to locate and retrieve similar imagery from a database of digital imagery using visual image characteristics. The query method is based on a unique architecture that takes advantage of the statistical, morphological, and structural characteristics of image data, generated by inspection equipment in industrial applications. The system improves the manufacturing process by allowing rapid access to historical records of similar events so that errant process equipment can be isolated and corrective actions can be quickly taken to improve yield. The combined ORNL and AMAT technology is referred to hereafter as DSI-AIR and DSI-ASR.
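    The query-by-example core of such a retrieval system reduces to a nearest-neighbour ranking over per-image feature vectors. The 16-D descriptors below are random stand-ins for the statistical, morphological and structural features the report describes:

```python
import numpy as np

def retrieve(query_vec, db_vecs, k=3):
    """Rank database images by Euclidean distance to the query's features."""
    d = np.linalg.norm(db_vecs - query_vec, axis=1)
    order = np.argsort(d)[:k]
    return order, d[order]

rng = np.random.default_rng(3)
db = rng.random((100, 16))               # feature vectors for 100 stored images
query = db[42] + 0.01 * rng.random(16)   # near-duplicate of image 42
order, dists = retrieve(query, db)
print(order[0])  # 42
```

    Returning the closest historical matches for a new defect image is what lets engineers quickly find previously diagnosed events.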

  8. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Ani eEloyan

    2012-08-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions, CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.
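    Unlike a plain SVD, a CUR decomposition selects actual columns of the data matrix; a common heuristic scores columns by leverage computed from the truncated SVD. The sketch below is a generic version of that column-selection step, not the manuscript's exact algorithm:

```python
import numpy as np

def cur_columns(X, rank, c):
    """Pick c columns by leverage scores from the rank-k right singular vectors."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    leverage = (Vt[:rank] ** 2).sum(axis=0) / rank
    return np.argsort(leverage)[::-1][:c]

rng = np.random.default_rng(4)
# 3 informative columns, 7 near-zero noise columns.
informative = rng.normal(0, 1.0, (50, 3))
noise = rng.normal(0, 0.01, (50, 7))
X = np.hstack([informative, noise])
cols = cur_columns(X, rank=3, c=3)
print(sorted(cols.tolist()))  # [0, 1, 2]
```

    Because the selected columns are real features rather than abstract linear combinations, they remain interpretable, which is why a CUR component dominated by head-motion directions could be recognized as such.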

  9. SIFT optimization and automation for matching images from multiple temporal sources

    Science.gov (United States)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    Scale Invariant Feature Transformation (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features as these are more robust and not prone to scene changes over time, which constitutes a first approach to the automation of processes using mapping applications such as geometric correction, creation of orthophotos and 3D models generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored in different images and parameter values, finding optimization values which are corroborated using different validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.

  10. Technique for Automated Recognition of Sunspots on Full-Disk Solar Images

    Directory of Open Access Journals (Sweden)

    Zharkov S

    2005-01-01

    A new robust technique is presented for automated identification of sunspots on full-disk white-light (WL) solar images obtained from the SOHO/MDI instrument and on Ca II K1 line images from the Meudon Observatory. Edge-detection methods are applied to find sunspot candidates, followed by local thresholding using statistical properties of the region around the sunspots. Possible initial oversegmentation of the images is remedied with a median filter. The features are smoothed using morphological closing operations and filled by applying a watershed transform, followed by a dilation operator, to define regions of interest containing sunspots. A number of physical and geometrical parameters of the detected sunspot features are extracted and stored in a relational database, along with umbra-penumbra information in the form of pixel run-length data within a bounding rectangle. The detection results reveal very good agreement with the manual synoptic maps and a very high correlation with those produced manually by the NOAA Observatory, USA.

  11. Automated detection of acute haemorrhagic stroke in non-contrasted CT images

    International Nuclear Information System (INIS)

    Meetz, K.; Buelow, T.

    2007-01-01

    An efficient treatment of stroke patients implies a profound differential diagnosis that includes the detection of acute haematoma. The proposed approach provides an automated detection of acute haematoma, assisting the non-stroke expert in interpreting non-contrasted CT images. It consists of two steps: First, haematoma candidates are detected applying multilevel region growing approach based on a typical grey value characteristic. Second, true haematomas are differentiated from partial volume artefacts, relying on spatial features derived from distance-based histograms. This approach achieves a specificity of 77% and a sensitivity of 89.7% in detecting acute haematoma in non-contrasted CT images when applied to a set of 25 non-contrasted CT images. (orig.)

  12. Automated otolith image classification with multiple views: an evaluation on Sciaenidae.

    Science.gov (United States)

    Wong, J Y; Chu, C; Chong, V C; Dhillon, S K; Loh, K H

    2016-08-01

    Combined multiple 2D views (proximal, anterior and ventral aspects) of the sagittal otolith are proposed here as a method to capture shape information for fish classification. Classification performance of a single view compared with combined 2D views shows improved classification accuracy for the latter, for nine species of Sciaenidae. The effects of shape description methods (shape indices, Procrustes analysis and elliptical Fourier analysis) on classification performance were evaluated. Procrustes analysis and elliptical Fourier analysis perform better than shape indices when a single view is considered, but all perform equally well with combined views. A generic content-based image retrieval (CBIR) system that ranks dissimilarity (Procrustes distance) of otolith images was built to search query images without the need for detailed information on side (left or right), aspect (proximal or distal) and direction (positive or negative) of the otolith. Methods for the development of this automated classification system are discussed. © 2016 The Fisheries Society of the British Isles.
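    The Procrustes distance the CBIR system ranks by removes translation, scale and rotation before comparing landmark configurations, and SciPy provides it directly. A sketch with random stand-in outlines (not real otolith data):

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(5)
outline = rng.random((30, 2))   # landmark points along an "otolith" outline

# The same shape, rotated, scaled and translated.
angle = 0.7
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
same = 2.5 * outline @ R.T + np.array([4.0, -1.0])

other = rng.random((30, 2))     # an unrelated outline

_, _, d_same = procrustes(outline, same)
_, _, d_other = procrustes(outline, other)
print(d_same < 1e-6 < d_other)  # True
```

    Invariance to similarity transforms is what lets the system ignore side, aspect and orientation metadata when ranking query images.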

  13. Results of Automated Retinal Image Analysis for Detection of Diabetic Retinopathy from the Nakuru Study, Kenya

    DEFF Research Database (Denmark)

    Juul Bøgelund Hansen, Morten; Abramoff, M. D.; Folk, J. C.

    2015-01-01

    Objective Digital retinal imaging is an established method of screening for diabetic retinopathy (DR). It has been established that currently about 1% of the world's blind or visually impaired is due to DR. However, the increasing prevalence of diabetes mellitus and DR is creating an increased...... gave an AUC of 0.878 (95% CI 0.850-0.905). It showed a negative predictive value of 98%. The IDP missed no vision threatening retinopathy in any patients and none of the false negative cases met criteria for treatment. Conclusions In this epidemiological sample, the IDP's grading was comparable...... workload on those with expertise in grading retinal images. Safe and reliable automated analysis of retinal images may support screening services worldwide. This study aimed to compare the Iowa Detection Program (IDP) ability to detect diabetic eye diseases (DED) to human grading carried out at Moorfields...

  14. Automated system for acquisition and image processing for the control and monitoring boned nopal

    Science.gov (United States)

    Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.

    2013-11-01

    This paper describes the design and fabrication of an acquisition and image processing system to control the removal of thorns from the nopal vegetable (Opuntia ficus indica) in an automated machine that uses pulses from an Nd:YAG laser. The areolas, the areas where thorns grow on the bark of the nopal, are located by applying segmentation algorithms to the images obtained by a CCD. Once the positions of the areolas are known, their coordinates are sent to a motor system that steers the laser to interact with all areolas and remove the thorns from the nopal. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware implements tasks for acquisition, preprocessing, segmentation, recognition and interpretation of the areolas. The system succeeds in identifying the areolas and generating a table of their coordinates, which is sent to the galvo motor system that controls the laser for thorn removal.

  15. High-Throughput Light Sheet Microscopy for the Automated Live Imaging of Larval Zebrafish

    Science.gov (United States)

    Baker, Ryan; Logan, Savannah; Dudley, Christopher; Parthasarathy, Raghuveer

    The zebrafish is a model organism with a variety of useful properties; it is small and optically transparent, it reproduces quickly, it is a vertebrate, and there are a large variety of transgenic animals available. Because of these properties, the zebrafish is well suited to study using a variety of optical technologies including light sheet fluorescence microscopy (LSFM), which provides high-resolution three-dimensional imaging over large fields of view. Research progress, however, is often not limited by optical techniques but instead by the number of samples one can examine over the course of an experiment, which in the case of light sheet imaging has so far been severely limited. Here we present an integrated fluidic circuit and microscope which provides rapid, automated imaging of zebrafish using several imaging modes, including LSFM, Hyperspectral Imaging, and Differential Interference Contrast Microscopy. Using this system, we show that we can increase our imaging throughput by a factor of 10 compared to previous techniques. We also show preliminary results visualizing zebrafish immune response, which is sensitive to gut microbiota composition, and which shows a strong variability between individuals that highlights the utility of high throughput imaging. National Science Foundation, Award No. DBI-1427957.

  16. The use of the Kalman filter in the automated segmentation of EIT lung images.

    Science.gov (United States)

    Zifan, A; Liatsis, P; Chapman, B E

    2013-06-01

    In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time, low spatial but high temporal resolution images of impedance inside a body. Recovering impedance itself constitutes a nonlinear ill-posed inverse problem, therefore the problem is usually linearized, which produces impedance-change images rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide a mathematical reasoning behind the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. Next, we use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Next, we proceed with augmenting the Kalman filter by incorporating an adaptive foreground detection system to provide the boundary contours for the Kalman filter to carry out the tracking of the conductivity changes as the lungs undergo deformation in a respiratory cycle. The proposed method has been validated using performance statistics such as misclassified area and false positive rate, and compared to previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging.
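    The Kalman filter at the heart of the pipeline is easy to show in scalar form. Here a slowly drifting boundary coordinate is tracked through noisy measurements; the random-walk dynamics and the noise levels are toy assumptions, not the authors' lung-contour model:

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=0.25):
    """Scalar Kalman filter: random-walk state, measurement variance r."""
    x, p = measurements[0], 1.0    # initial state estimate and its variance
    track = [x]
    for z in measurements[1:]:
        p = p + q                  # predict: state variance grows by process noise q
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the innovation
        p = (1.0 - k) * p
        track.append(x)
    return np.array(track)

rng = np.random.default_rng(6)
true_pos = 50.0 + np.cumsum(0.02 * rng.standard_normal(200))  # drifting lung edge
noisy = true_pos + 0.5 * rng.standard_normal(200)             # noisy contour samples
est = kalman_track(noisy)
# The filtered track is closer to the truth than the raw measurements.
print(np.abs(est - true_pos)[-50:].mean() < np.abs(noisy - true_pos)[-50:].mean())  # True
```

    This smoothing-under-motion behaviour is precisely why a Kalman filter suits fuzzy, rapidly updating EIT boundary data.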

  17. Autoscope: automated otoscopy image analysis to diagnose ear pathology and use of clinically motivated eardrum features

    Science.gov (United States)

    Senaras, Caglar; Moberly, Aaron C.; Teknos, Theodoros; Essig, Garth; Elmaraghy, Charles; Taj-Schaal, Nazhat; Yu, Lianbo; Gurcan, Metin

    2017-03-01

    In this study, we propose an automated otoscopy image analysis system called Autoscope. To the best of our knowledge, Autoscope is the first system designed to detect a wide range of eardrum abnormalities by using high-resolution otoscope images and to report the condition of the eardrum as "normal" or "abnormal." In order to achieve this goal, first, we developed a preprocessing step to reduce camera-specific problems, detect the region of interest in the image, and prepare the image for further analysis. Subsequently, we designed a new set of clinically motivated eardrum features (CMEF). Furthermore, we evaluated the potential of the visual MPEG-7 descriptors for the task of tympanic membrane image classification. Then, we fused the information extracted from the CMEF with state-of-the-art computer vision features (CVF), which included the MPEG-7 descriptors and two additional features, using a state-of-the-art classifier. In our experiments, 247 tympanic membrane images with 14 different types of abnormality were used, and Autoscope was able to classify the given tympanic membrane images as normal or abnormal with 84.6% accuracy.

  18. Automated image analysis system for homogeneity evaluation of nuclear fuel plates

    International Nuclear Information System (INIS)

    Hassan, A.H.H.

    2005-01-01

    The main aim of this work is to design an automated image analysis system developed for inspection of fuel plates manufactured for the operation of ETRR-2 of Egypt. The proposed system aims to evaluate the homogeneity of the core of the fuel plate and to detect white spots outside the fuel core. A vision system has been introduced to capture images of the plates to be characterized, and software has been developed to analyze the captured images based on the gray level co-occurrence matrix (GLCM). The images are digitized using a digital camera. It is common practice to adopt a preprocessing step for the images with the special purpose of reducing or eliminating noise. Two preprocessing steps are carried out: application of a median-type low-pass filter and contrast improvement by extending the image's histogram. The analysis of texture features of the co-occurrence matrix (COM) is a good tool to investigate the identification of fuel plate images based on different structures of the COM, considering neighbouring distance and direction.
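    A GLCM for one neighbouring distance and direction, plus a contrast feature derived from it, can be computed directly. The toy textures below are illustrative; the report's homogeneity criteria are its own:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Normalized grey-level co-occurrence matrix for offset (dx, dy)."""
    h, w = image.shape
    a = image[:h - dy, :w - dx].ravel()   # reference pixels
    b = image[dy:, dx:].ravel()           # neighbouring pixels
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)
    return m / m.sum()

def contrast(m):
    """GLCM contrast: sum of (i - j)^2 weighted by co-occurrence probability."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

uniform = np.zeros((8, 8), dtype=int)   # perfectly homogeneous region
stripes = np.tile([0, 3], (8, 4))       # strongly inhomogeneous texture
print(contrast(glcm(uniform)))  # 0.0
print(contrast(glcm(stripes)))  # 9.0
```

    A homogeneous fuel-core region yields near-zero contrast, while texture defects push the mass of the GLCM away from its diagonal.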

  19. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle.

    Science.gov (United States)

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-10

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy.

  1. Automated arteriole and venule classification using deep learning for retinal images from the UK Biobank cohort.

    Science.gov (United States)

    Welikala, R A; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2017-11-01

    The morphometric characteristics of the retinal vasculature are associated with future risk of many systemic and vascular diseases. However, analysis of data from large population based studies is needed to help resolve uncertainties in some of these associations. This requires automated systems that extract quantitative measures of vessel morphology from large numbers of retinal images. Associations between retinal vessel morphology and disease precursors/outcomes may be similar or opposing for arterioles and venules. Therefore, the accurate detection of the vessel type is an important element in such automated systems. This paper presents a deep learning approach for the automatic classification of arterioles and venules across the entire retinal image, including vessels located at the optic disc. It comprises a convolutional neural network whose architecture contains six learned layers: three convolutional and three fully-connected. Complex patterns are automatically learnt from the data, which avoids the use of hand-crafted features. The method is developed and evaluated using 835,914 centreline pixels derived from 100 retinal images selected from the 135,867 retinal images obtained at the UK Biobank (a large population-based cohort study of middle aged and older adults) baseline examination. This is a challenging dataset with respect to image quality, and hence arteriole/venule classification is required to be highly robust. The method achieves a significant increase in accuracy of 8.1% when compared to the baseline method, resulting in an arteriole/venule classification accuracy of 86.97% (per pixel basis) over the entire retinal image. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Automated detection and tracking of many cells by using 4D live-cell imaging data.

    Science.gov (United States)

    Tokunaga, Terumasa; Hirose, Osamu; Kawaguchi, Shotaro; Toyoshima, Yu; Teramoto, Takayuki; Ikebata, Hisaki; Kuge, Sayuri; Ishihara, Takeshi; Iino, Yuichi; Yoshida, Ryo

    2014-06-15

    Automated fluorescence microscopes produce massive numbers of images of cells, often in four dimensions of space and time. This study addresses two tasks in time-lapse imaging analysis, the detection and tracking of many imaged cells, and is especially intended for 4D live-cell imaging of neuronal nuclei of Caenorhabditis elegans. The cells of interest appear as slightly deformed ellipsoids. They are densely distributed and move rapidly in a series of 3D images, so existing tracking methods often fail: more than one tracker follows the same target, or a tracker jumps from one target to another during rapid movements. The present method begins by performing kernel density estimation to convert each 3D image into a smooth, continuous function. The cell bodies in the image are assumed to lie in the regions near the multiple local maxima of the density function. The tasks of detecting and tracking the cells are then addressed with two hill-climbing algorithms. The positions of the trackers are initialized by applying the cell-detection method to the image in the first frame. The tracking method keeps attracting the trackers to the nearby local maxima in each subsequent image. To prevent multiple trackers from following the same cell, a Markov random field (MRF) is used to model the spatial and temporal covariation of the cells, and the trackers are updated to jointly maximize the image forces and the MRF-induced constraints. The tracking procedure is demonstrated with dynamic 3D images that each contain >100 neurons of C. elegans. Availability: http://daweb.ism.ac.jp/yoshidalab/crest/ismb2014. Supplementary data are available at http://daweb.ism.ac.jp/yoshidalab/crest/ismb2014. © The Author 2014. Published by Oxford University Press.
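The two core ideas in this abstract, smoothing discrete samples into a density function and hill-climbing each tracker to the nearest mode, can be sketched in one dimension. This is an illustrative toy (hypothetical sample positions, not the published 3D implementation):

```python
import math

# Gaussian kernel density estimate over sample positions: the smooth,
# continuous function whose local maxima mark candidate cell bodies.
def kde(points, x, bw=1.0):
    return sum(math.exp(-((x - p) / bw) ** 2 / 2) for p in points)

# A tracker climbs the density by numerical gradient ascent until it
# settles at the nearest local maximum.
def hill_climb(points, x0, step=0.01, iters=2000, bw=1.0):
    x = x0
    for _ in range(iters):
        grad = (kde(points, x + 1e-4, bw) - kde(points, x - 1e-4, bw)) / 2e-4
        x += step * grad
    return x

cells = [2.0, 2.1, 1.9, 8.0, 8.2]   # samples around two "nuclei"
print(hill_climb(cells, 3.0))        # a tracker initialised near 3 converges toward the mode near 2
```

The MRF constraint in the paper adds a penalty that keeps neighbouring trackers from collapsing onto the same mode; that term is omitted here for brevity.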

  3. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    Science.gov (United States)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined using IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The number of pixels comprising each CCM region was compared between the two map images, and the percent agreement between each pair of regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.9997. In some cases, the IBIS-calculated map code differed from the DMA codes; analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
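The two comparison statistics used in this record, per-region percent agreement between pixel counts and Pearson's correlation across regions, are easy to reproduce. The region counts below are hypothetical, not the study's data:

```python
import math

def percent_agreement(a, b):
    """Agreement between two pixel counts for the same region, as a percentage."""
    return 100.0 * min(a, b) / max(a, b)

def pearson(xs, ys):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

manual = [1200, 450, 3300, 800]   # pixels per CCM region, manual map (hypothetical)
auto = [1100, 500, 3200, 780]     # same regions, automated map (hypothetical)
print([round(percent_agreement(m, a), 1) for m, a in zip(manual, auto)])
print(round(pearson(manual, auto), 3))
```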

  4. Automated measurement of CT noise in patient images with a novel structure coherence feature

    International Nuclear Information System (INIS)

    Chun, Minsoo; Kim, Jong Hyo; Choi, Young Hun

    2015-01-01

    While the assessment of CT noise constitutes an important task for the optimization of scan protocols in clinical routine, the majority of noise measurements in practice still rely on manual operation, limiting their efficiency and reliability. This study presents an algorithm for the automated measurement of CT noise in patient images using a novel structure coherence feature. The proposed algorithm consists of a four-step procedure: selection of subcutaneous fat tissue, calculation of the structure coherence feature, determination of homogeneous ROIs, and estimation of the average noise level. In an evaluation with 94 CT scans (16,517 images) of pediatric and adult patients, with two radiologists participating, ROIs were placed on a homogeneous fat region with 99.46% accuracy, and the agreement of the automated noise measurements with the radiologists' reference measurements (PCC = 0.86) was substantially higher than the within- and between-rater agreement of the manual measurements (PCC_within = 0.75, PCC_between = 0.70). In addition, the absolute noise level measurements closely matched the theoretical noise levels generated by a reduced-dose simulation technique. The proposed algorithm has the potential to be used for examining the appropriateness of radiation dose and the image quality of CT protocols in research as well as clinical routine.
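The final two steps of the pipeline, picking homogeneous ROIs and estimating the noise level, can be caricatured as "report the standard deviation of the flattest region". This is a much-simplified sketch with hypothetical HU values, not the published structure-coherence algorithm:

```python
import statistics

def estimate_noise(rois):
    """Return the SD of the most homogeneous ROI as the noise estimate.

    Each ROI is a flat list of CT numbers (HU); a region crossed by anatomy
    has inflated SD and is implicitly rejected by taking the minimum.
    """
    return min(statistics.pstdev(r) for r in rois)

rois = [
    [40, 42, 41, 43, 40, 41],    # homogeneous subcutaneous fat region
    [40, 80, 20, 90, 10, 70],    # region crossed by structure
]
print(round(estimate_noise(rois), 2))
```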

  5. Automating quality assurance of digital linear accelerators using a radioluminescent phosphor coated phantom and optical imaging.

    Science.gov (United States)

    Jenkins, Cesare H; Naczynski, Dominik J; Yu, Shu-Jung S; Yang, Yong; Xing, Lei

    2016-09-07

    Performing mechanical and geometric quality assurance (QA) tests for medical linear accelerators (LINAC) is a predominantly manual process that consumes significant time and resources. In order to alleviate this burden, this study proposes a novel strategy to automate the process of performing these tests. The autonomous QA system consists of three parts: (1) a customized phantom coated with radioluminescent material; (2) an optical imaging system capable of visualizing the incidence of the radiation beam, light field or lasers on the phantom; and (3) software to process the captured signals. The radioluminescent phantom, which enables visualization of the radiation beam on the same surface as the light field and lasers, is placed on the couch and imaged while a predefined treatment plan is delivered from the LINAC. The captured images are then processed to self-calibrate the system and perform measurements for evaluating light field/radiation coincidence, jaw position indicators, cross-hair centering, treatment couch position indicators and localizing laser alignment. System accuracy is probed by intentionally introducing errors and by comparing with current clinical methods. The accuracy of self-calibration is evaluated by examining measurement repeatability under fixed and variable phantom setups. The integrated system was able to automatically collect, analyze and report the results for the mechanical alignment tests specified by TG-142. The average difference between introduced and measured errors was 0.13 mm. The system was shown to be consistent with current techniques. Measurement variability increased slightly from 0.1 mm to 0.2 mm when the phantom setup was varied, but no significant difference in the mean measurement value was detected. Total measurement time was less than 10 minutes for all tests as a result of automation. The system's unique features of a phosphor-coated phantom and fully automated, operator independent self-calibration offer the

  6. Automated structural imaging analysis detects premanifest Huntington's disease neurodegeneration within 1 year.

    Science.gov (United States)

    Majid, D S Adnan; Stoffers, Diederick; Sheldon, Sarah; Hamza, Samar; Thompson, Wesley K; Goldstein, Jody; Corey-Bloom, Jody; Aron, Adam R

    2011-07-01

    Intense efforts are underway to evaluate neuroimaging measures as biomarkers for neurodegeneration in premanifest Huntington's disease (preHD). We used a completely automated longitudinal analysis method to compare structural scans in preHD individuals and controls. Using a 1-year longitudinal design, we analyzed T1-weighted structural scans in 35 preHD individuals and 22 age-matched controls. We used the SIENA (Structural Image Evaluation, using Normalization, of Atrophy) software tool to yield overall percentage brain volume change (PBVC) and voxel-level changes in atrophy. We calculated sample sizes for a hypothetical disease-modifying (neuroprotection) study. We found significantly greater yearly atrophy in preHD individuals versus controls (mean PBVC controls, -0.149%; preHD, -0.388%; P = .031, Cohen's d = .617). For a preHD subgroup closest to disease onset, yearly atrophy was more than 3 times that of controls (mean PBVC close-to-onset preHD, -0.510%; P = .019, Cohen's d = .920). This atrophy was evident at the voxel level in periventricular regions, consistent with well-established preHD basal ganglia atrophy. We estimated that a neuroprotection study using SIENA would only need 74 close-to-onset individuals in each arm (treatment vs placebo) to detect a 50% slowing in yearly atrophy with 80% power. Automated whole-brain analysis of structural MRI can reliably detect preHD disease progression in 1 year. These results were attained with a readily available imaging analysis tool, SIENA, which is observer independent, automated, and robust with respect to image quality, slice thickness, and different pulse sequences. This MRI biomarker approach could be used to evaluate neuroprotection in preHD. Copyright © 2011 Movement Disorder Society.
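Sample-size estimates like the "74 individuals per arm" figure typically come from the standard two-arm normal-approximation formula, n = 2((z_{1-α/2} + z_{1-β})σ/δ)². A generic sketch; the effect size and SD below are hypothetical placeholders, not the study's values:

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Subjects per arm to detect a mean difference delta with SD sigma,
    two-sided significance alpha and the given power (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# e.g., to detect a 0.18% PBVC difference assuming SD 0.55% (hypothetical):
print(n_per_arm(delta=0.18, sigma=0.55))
```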

  7. Fully automated quantitative analysis of breast cancer risk in DCE-MR images

    Science.gov (United States)

    Jiang, Luan; Hu, Xiaoxin; Gu, Yajia; Li, Qiang

    2015-03-01

    Amount of fibroglandular tissue (FGT) and background parenchymal enhancement (BPE) in dynamic contrast enhanced magnetic resonance (DCE-MR) images are two important indices for breast cancer risk assessment in the clinical practice. The purpose of this study is to develop and evaluate a fully automated scheme for quantitative analysis of FGT and BPE in DCE-MR images. Our fully automated method consists of three steps, i.e., segmentation of whole breast, fibroglandular tissues, and enhanced fibroglandular tissues. Based on the volume of interest extracted automatically, dynamic programming method was applied in each 2-D slice of a 3-D MR scan to delineate the chest wall and breast skin line for segmenting the whole breast. This step took advantages of the continuity of chest wall and breast skin line across adjacent slices. We then further used fuzzy c-means clustering method with automatic selection of cluster number for segmenting the fibroglandular tissues within the segmented whole breast area. Finally, a statistical method was used to set a threshold based on the estimated noise level for segmenting the enhanced fibroglandular tissues in the subtraction images of pre- and post-contrast MR scans. Based on the segmented whole breast, fibroglandular tissues, and enhanced fibroglandular tissues, FGT and BPE were automatically computed. Preliminary results of technical evaluation and clinical validation showed that our fully automated scheme could obtain good segmentation of the whole breast, fibroglandular tissues, and enhanced fibroglandular tissues to achieve accurate assessment of FGT and BPE for quantitative analysis of breast cancer risk.
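Once the three segmentations described above exist, the final quantification step reduces to voxel-count ratios. A minimal sketch with hypothetical voxel counts (the ratio definitions are the standard ones; the numbers are invented):

```python
def fgt_bpe(breast_voxels, fgt_voxels, enhanced_voxels):
    """FGT% = fibroglandular fraction of the breast;
    BPE% = enhanced fraction of the fibroglandular tissue."""
    fgt_pct = 100.0 * fgt_voxels / breast_voxels
    bpe_pct = 100.0 * enhanced_voxels / fgt_voxels
    return fgt_pct, bpe_pct

fgt, bpe = fgt_bpe(breast_voxels=500_000, fgt_voxels=120_000, enhanced_voxels=30_000)
print(fgt, bpe)  # 24.0 25.0
```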

  8. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.

    1995-01-01

    species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume......, and the frequency of dividing cells in a cell population. These parameters were used to compare physiological states of liquid-suspended and surface-growing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found...

  9. Automated registration of freehand B-mode ultrasound and magnetic resonance imaging of the carotid arteries based on geometric features

    DEFF Research Database (Denmark)

    Carvalho, Diego D. B.; Arias Lorza, Andres Mauricio; Niessen, Wiro J.

    2017-01-01

    An automated method for registering B-mode ultrasound (US) and magnetic resonance imaging (MRI) of the carotid arteries is proposed. The registration uses geometric features, namely, lumen centerlines and lumen segmentations, which are extracted fully automatically from the images after manual an...

  10. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    Science.gov (United States)

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference
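The core idea behind gradient-threshold segmentation, compute a gradient-magnitude image and flag pixels above a threshold as foreground, can be shown on a toy grid. This is not the published EGT algorithm; the finite-difference gradient and the percentile cutoff below are simplifying assumptions:

```python
def gradient_magnitude(img):
    """Forward-difference gradient magnitude of a 2-D list-of-lists image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def segment(img, pct=75):
    """Flag as foreground the pixels whose gradient exceeds a percentile threshold."""
    grads = sorted(g for row in gradient_magnitude(img) for g in row)
    thr = grads[int(len(grads) * pct / 100)]
    return [[1 if g > thr else 0 for g in row] for row in gradient_magnitude(img)]

img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
mask = segment(img)
```

EGT's contribution is precisely that the threshold is derived empirically from the gradient histogram rather than from a fixed percentile as here.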

  11. CEST ANALYSIS: AUTOMATED CHANGE DETECTION FROM VERY-HIGH-RESOLUTION REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    M. Ehlers

    2012-08-01

    A fast detection, visualization and assessment of change in areas of crisis or catastrophe are important requirements for the coordination and planning of help. Through the availability of new satellite and/or airborne sensors with very high spatial resolution (e.g., WorldView, GeoEye), new remote sensing data are available for a better detection, delineation and visualization of change. For automated change detection, a large number of algorithms have been proposed and developed. From previous studies, however, it is evident that to date no single algorithm has the potential to be a reliable change detector for all possible scenarios. This paper introduces the Combined Edge Segment Texture (CEST) analysis, a decision-tree based cooperative suite of algorithms for automated change detection that is especially designed for the new generation of satellites with very high spatial resolution. The method incorporates frequency-based filtering, texture analysis, and image segmentation techniques. For the frequency analysis, different band-pass filters can be applied to identify the relevant frequency information for change detection. After transforming the multitemporal images via a fast Fourier transform (FFT) and applying the most suitable band-pass filter, different methods are available to extract changed structures: differencing and correlation in the frequency domain, and correlation and edge detection in the spatial domain. Best results are obtained using edge extraction. For the texture analysis, different Haralick parameters can be calculated (e.g., energy, correlation, contrast, inverse distance moment), with 'energy' so far providing the most accurate results. These algorithms are combined with a prior segmentation of the image data as well as with morphological operations for a final binary change result. A rule-based combination (CEST) of the change algorithms is applied to calculate the probability of change for a particular location.

  12. AI (artificial intelligence) in histopathology--from image analysis to automated diagnosis.

    Directory of Open Access Journals (Sweden)

    Aleksandar Bogovac

    2010-02-01

    The technological progress in the digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of a microscopic image as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits the attachment of several (if not all) fields of view and their simultaneous visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. Introducing digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangements and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides far exceeds that of viewing original glass slides. The reason lies in a slower and more difficult sampling procedure, i.e., the selection of fields of view containing information. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in the following steps: 1. The individual image quality has to be measured, and corrected if necessary. 2. A diagnostic algorithm has to be applied. An algorithm has been developed that includes both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example from virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves, without any labeling. 5. The pathologist's duty will not be replaced by such a system; on the contrary, he or she will manage and supervise the system, i.e., just work at a "higher level".
Virtual slides are already in use for teaching and

  13. Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking

    Science.gov (United States)

    Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward

    2011-01-01

    Aim: To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools, and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection and Measurements (NCRP) using MATLAB®. Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB® code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate the associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to use the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB® code to interface with PCXMC and replicate the ImPACT dose calculation allowed rapid evaluation of multiple medical imaging exams. The user enters the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for the record, and the organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for the evaluation of NASA astronaut medical imaging radiation procedures provided a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures.
This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk
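The organ-dose-to-effective-dose roll-up such a pipeline performs is a tissue-weighted sum. A minimal sketch; the weights below are a truncated, hypothetical subset for illustration, not the full ICRP tissue-weighting table:

```python
def effective_dose(organ_doses_mSv, weights):
    """Tissue-weighted sum of organ equivalent doses (partial weight set)."""
    return sum(organ_doses_mSv[o] * w for o, w in weights.items() if o in organ_doses_mSv)

weights = {"lung": 0.12, "stomach": 0.12, "liver": 0.04}   # hypothetical subset
doses = {"lung": 2.0, "stomach": 1.5, "liver": 1.0}        # mSv, hypothetical exam
print(round(effective_dose(doses, weights), 3))
```

A lifetime-risk estimate would then multiply this dose by an age- and sex-specific risk coefficient, as recommended in the NCRP guidance the record cites.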

  14. Semi-automated International Cartilage Repair Society scoring of equine articular cartilage lesions in optical coherence tomography images.

    Science.gov (United States)

    Te Moller, N C R; Pitkänen, M; Sarin, J K; Väänänen, S; Liukkonen, J; Afara, I O; Puhakka, P H; Brommer, H; Niemelä, T; Tulamo, R-M; Argüelles Capilla, D; Töyräs, J

    2017-07-01

    Background: Arthroscopic optical coherence tomography (OCT) is a promising tool for the detailed evaluation of articular cartilage injuries. However, OCT-based articular cartilage scoring still relies on the operator's visual estimation. Objectives: To test the hypothesis that semi-automated International Cartilage Repair Society (ICRS) scoring of chondral lesions seen in OCT images could enhance the intra- and interobserver agreement of scoring and its accuracy. Study design: Validation study using equine cadaver tissue. Methods: Osteochondral samples (n = 99) were prepared from 18 equine metacarpophalangeal joints and imaged using OCT. Custom-made software was developed for semi-automated ICRS scoring of cartilage lesions in OCT images. Scoring was performed visually and semi-automatically by five observers, and levels of inter- and intraobserver agreement were calculated. Subsequently, OCT-based scores were compared with ICRS scores based on light microscopy images of histological sections from matching locations (n = 82). Results: When semi-automated scoring of the OCT images was performed by multiple observers, mean levels of intraobserver and interobserver agreement were higher than those achieved with visual OCT scoring (83% vs. 77% and 74% vs. 33%, respectively). Histology-based scores from matching regions of interest agreed better with visual OCT-based scoring than with semi-automated OCT scoring; however, the accuracy of the software was improved by optimising the threshold combinations used to determine the ICRS score. Main limitations: Images were obtained from cadavers. Conclusions: Semi-automated scoring software improved the reproducibility of ICRS scoring of chondral lesions in OCT images and made scoring less observer-dependent. The image analysis and segmentation techniques adopted in this study warrant further optimisation to achieve better accuracy with semi-automated ICRS scoring. In addition, studies on in vivo applications are required. © 2016 EVJ Ltd.

  15. Automated segmentation and geometrical modeling of the tricuspid aortic valve in 3D echocardiographic images.

    Science.gov (United States)

    Pouch, Alison M; Wang, Hongzhi; Takabe, Manabu; Jackson, Benjamin M; Sehgal, Chandra M; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2013-01-01

    The aortic valve has been described with variable anatomical definitions, and the consistency of 2D manual measurement of valve dimensions in medical image data has been questionable. Given the importance of image-based morphological assessment in the diagnosis and surgical treatment of aortic valve disease, there is considerable need to develop a standardized framework for 3D valve segmentation and shape representation. Towards this goal, this work integrates template-based medial modeling and multi-atlas label fusion techniques to automatically delineate and quantitatively describe aortic leaflet geometry in 3D echocardiographic (3DE) images, a challenging task that has been explored only to a limited extent. The method makes use of expert knowledge of aortic leaflet image appearance, generates segmentations with consistent topology, and establishes a shape-based coordinate system on the aortic leaflets that enables standardized automated measurements. In this study, the algorithm is evaluated on 11 3DE images of normal human aortic leaflets acquired at mid systole. The clinical relevance of the method is its ability to capture leaflet geometry in 3DE image data with minimal user interaction while producing consistent measurements of 3D aortic leaflet geometry.

  16. Automated Classification of Lung Cancer Types from Cytological Images Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Teramoto, Atsushi; Tsukamoto, Tetsuya; Kiriyama, Yuka; Fujita, Hiroshi

    2017-01-01

    Lung cancer is a leading cause of death worldwide. Currently, in differential diagnosis of lung cancer, accurate classification of cancer types (adenocarcinoma, squamous cell carcinoma, and small cell carcinoma) is required. However, improving the accuracy and stability of diagnosis is challenging. In this study, we developed an automated classification scheme for lung cancers presented in microscopic images using a deep convolutional neural network (DCNN), which is a major deep learning technique. The DCNN used for classification consists of three convolutional layers, three pooling layers, and two fully connected layers. In evaluation experiments conducted, the DCNN was trained using our original database with a graphics processing unit. Microscopic images were first cropped and resampled to obtain images with resolution of 256 × 256 pixels and, to prevent overfitting, collected images were augmented via rotation, flipping, and filtering. The probabilities of three types of cancers were estimated using the developed scheme and its classification accuracy was evaluated using threefold cross validation. In the results obtained, approximately 71% of the images were classified correctly, which is on par with the accuracy of cytotechnologists and pathologists. Thus, the developed scheme is useful for classification of lung cancers from microscopic images.
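The augmentation step described above (rotation and flipping) can be sketched on a toy 2-D "image" stored as nested lists; real pipelines operate on pixel arrays, and the filtering augmentation mentioned in the abstract is omitted here:

```python
def rot90(img):
    """Rotate a 2-D list-of-lists image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def flip_h(img):
    """Mirror the image horizontally."""
    return [row[::-1] for row in img]

img = [[1, 2],
       [3, 4]]
# One original plus three augmented variants:
augmented = [img, rot90(img), rot90(rot90(img)), flip_h(img)]
print(augmented[1])  # [[3, 1], [4, 2]]
```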

  17. PyDBS: an automated image processing workflow for deep brain stimulation surgery.

    Science.gov (United States)

    D'Albis, Tiziano; Haegelen, Claire; Essert, Caroline; Fernández-Vidal, Sara; Lalys, Florent; Jannin, Pierre

    2015-02-01

    Deep brain stimulation (DBS) is a surgical procedure for treating motor-related neurological disorders. DBS clinical efficacy hinges on precise surgical planning and accurate electrode placement, which in turn call upon several image processing and visualization tasks, such as image registration, image segmentation, image fusion, and 3D visualization. These tasks are often performed by a heterogeneous set of software tools, which adopt differing formats and geometrical conventions and require patient-specific parameterization or interactive tuning. To overcome these issues, we introduce in this article PyDBS, a fully integrated and automated image processing workflow for DBS surgery. PyDBS consists of three image processing pipelines and three visualization modules assisting clinicians through the entire DBS surgical workflow, from the preoperative planning of electrode trajectories to the postoperative assessment of electrode placement. The system's robustness, speed, and accuracy were assessed by means of a retrospective validation, based on 92 clinical cases. The complete PyDBS workflow achieved satisfactory results in 92 % of tested cases, with a median processing time of 28 min per patient. The results obtained are compatible with the adoption of PyDBS in clinical practice.

  18. A New Method for Automated Identification and Morphometry of Myelinated Fibers Through Light Microscopy Image Analysis.

    Science.gov (United States)

    Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar

    2016-02-01

    Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implant, transplant, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time-consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and the vestibulocochlear nerve of adult guinea pigs were used herein. The proposed pipeline for fiber segmentation is based on the techniques of competitive clustering and concavity analysis. The proposed segmentation was evaluated by comparing the automatic segmentation with manual segmentation. To further evaluate the proposed method using morphometric features extracted from the segmented images, the distributions of these features were tested for statistically significant differences. The method achieved a high overall sensitivity and very low false-positive rates per image. We detected no statistically significant difference between the distributions of the features extracted from the manual and pipeline segmentations. The method presented good overall performance, showing widespread potential in experimental and clinical settings, allowing large-scale image analysis and thus leading to more reliable results.
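The two detection metrics quoted above are standard and easy to state precisely; the counts below are hypothetical, not the study's results:

```python
def sensitivity(tp, fn):
    """Fraction of true fibers that were detected."""
    return tp / (tp + fn)

def fp_per_image(fp, n_images):
    """Average number of false detections per image."""
    return fp / n_images

print(sensitivity(tp=475, fn=25))   # 0.95
print(fp_per_image(fp=12, n_images=60))  # 0.2
```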

  19. Automating the Analysis of Spatial Grids A Practical Guide to Data Mining Geospatial Images for Human & Environmental Applications

    CERN Document Server

    Lakshmanan, Valliappa

    2012-01-01

    The ability to create automated algorithms to process gridded spatial data is increasingly important as remotely sensed datasets increase in volume and frequency. Whether in business, social science, ecology, meteorology or urban planning, the ability to create automated applications to analyze and detect patterns in geospatial data is increasingly important. This book provides students with a foundation in topics of digital image processing and data mining as applied to geospatial datasets. The aim is for readers to be able to devise and implement automated techniques to extract information from spatial grids such as radar, satellite or high-resolution survey imagery.

  20. Quantitative Assessment of Mouse Mammary Gland Morphology Using Automated Digital Image Processing and TEB Detection.

    Science.gov (United States)

    Blacher, Silvia; Gérard, Céline; Gallez, Anne; Foidart, Jean-Michel; Noël, Agnès; Péqueux, Christel

    2016-04-01

    The assessment of rodent mammary gland morphology is largely used to study the molecular mechanisms driving breast development and to analyze the impact of various endocrine disruptors with putative pathological implications. In this work, we propose a methodology relying on fully automated digital image analysis, including image processing and quantification of the whole ductal tree as well as the terminal end buds. It allows both growth parameters and fine morphological glandular structures to be measured accurately and objectively. Mammary gland elongation was characterized by 2 parameters: the length and the epithelial area of the ductal tree. Ductal tree fine structures were characterized by: 1) branch end-point density, 2) branching density, and 3) branch length distribution. The proposed methodology was compared with quantification methods classically used in the literature. This procedure can be transposed to several software packages and thus largely used by scientists studying rodent mammary gland morphology.
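On a skeletonized ductal tree, end-points and branch-points reduce to node degrees: degree-1 nodes are branch end-points, degree-3-or-more nodes are branching points. A sketch on a hypothetical adjacency-list graph (not the published pipeline, which works on pixel skeletons):

```python
def tree_metrics(adj):
    """Count branch end-points (degree 1) and branching points (degree >= 3)
    in a skeleton graph given as node -> list of neighbours."""
    degrees = {n: len(nb) for n, nb in adj.items()}
    end_points = sum(1 for d in degrees.values() if d == 1)
    branch_points = sum(1 for d in degrees.values() if d >= 3)
    return end_points, branch_points

# A tiny tree: node 1 is a branching point with three terminal ducts.
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
print(tree_metrics(adj))  # (3, 1)
```

Dividing these counts by the total skeleton length would give the end-point and branching densities the abstract names.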

  1. The Automation and Exoplanet Orbital Characterization from the Gemini Planet Imager Exoplanet Survey

    Science.gov (United States)

    Jinfei Wang, Jason; Graham, James; Perrin, Marshall; Pueyo, Laurent; Savransky, Dmitry; Kalas, Paul; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Ruffio, Jean-Baptiste; Sivaramakrishnan, Anand; Gemini Planet Imager Exoplanet Survey Collaboration

    2018-01-01

    The Gemini Planet Imager (GPI) Exoplanet Survey (GPIES) is a multi-year 600-star survey to discover and characterize young Jovian exoplanets and their planet forming environments. For large surveys like GPIES, it is critical to have a uniform dataset processed with the latest techniques and calibrations. I will describe the GPI Data Cruncher, an automated data processing framework that is able to generate fully reduced data minutes after the data are taken and can also reprocess the entire campaign in a single day on a supercomputer. The Data Cruncher integrates into a larger automated data processing infrastructure which syncs, logs, and displays the data. I will discuss the benefits of the GPIES data infrastructure, including optimizing observing strategies, finding planets, characterizing instrument performance, and constraining giant planet occurrence. I will also discuss my work in characterizing the exoplanets we have imaged in GPIES through monitoring their orbits. Using advanced data processing algorithms and GPI's precise astrometric calibration, I will show that GPI can achieve one milliarcsecond astrometry on the extensively-studied planet Beta Pic b. With GPI, we can confidently rule out a possible transit of Beta Pic b, but have precise timings on a Hill sphere transit, and I will discuss efforts to search for transiting circumplanetary material this year. I will also discuss the orbital monitoring of other exoplanets as part of GPIES.

  2. Analysis of irradiated U-7wt%Mo dispersion fuel microstructures using automated image processing

    Energy Technology Data Exchange (ETDEWEB)

    Collette, R. [Colorado School of Mines, Nuclear Science and Engineering Program, 1500 Illinois St, Golden, CO 80401 (United States); King, J., E-mail: kingjc@mines.edu [Colorado School of Mines, Nuclear Science and Engineering Program, 1500 Illinois St, Golden, CO 80401 (United States); Buesch, C. [Oregon State University, 1500 SW Jefferson St., Corvallis, OR 97331 (United States); Keiser, D.D.; Williams, W.; Miller, B.D.; Schulthess, J. [Nuclear Fuels and Materials Division, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-6188 (United States)

    2016-07-15

    The High Performance Research Reactor Fuel Development (HPPRFD) program is responsible for developing low enriched uranium (LEU) fuel substitutes for high performance reactors fueled with highly enriched uranium (HEU) that have not yet been converted to LEU. The uranium-molybdenum (U-Mo) fuel system was selected for this effort. In this study, fission gas pore segmentation was performed on U-7wt%Mo dispersion fuel samples at three separate fission densities using an automated image processing interface developed in MATLAB. Pore size distributions were attained that showed both expected and unexpected fission gas behavior. In general, it proved challenging to identify any dominant trends when comparing fission bubble data across samples from different fuel plates due to varying compositions and fabrication techniques. The results exhibited fair agreement with the fission density vs. porosity correlation developed by the Russian reactor conversion program. - Highlights: • Automated image processing is used to extract fission gas bubble data from irradiated U−Mo fuel samples. • Verification and validation tests are performed to ensure the algorithm's accuracy. • Fission bubble parameters are predictably difficult to compare across samples of varying compositions. • The 2-D results suggest the need for more homogenized fuel sampling in future studies. • The results also demonstrate the value of 3-D reconstruction techniques.
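    The porosity figure correlated against fission density reduces to an area fraction of segmented pore pixels. A hedged sketch, assuming a simple global threshold and a toy grayscale array (the actual MATLAB interface is more sophisticated):

```python
# Pore segmentation by global thresholding of a grayscale micrograph,
# reporting porosity as the pore-area fraction. Pores are assumed darker
# than the surrounding fuel matrix.

def segment_pores(image, threshold):
    """image: 2-D list of gray values; returns (binary mask, porosity)."""
    mask = [[1 if px < threshold else 0 for px in row] for row in image]
    pore_pixels = sum(map(sum, mask))
    total_pixels = len(image) * len(image[0])
    return mask, pore_pixels / total_pixels

# Invented toy micrograph: four dark "pore" pixels out of sixteen.
micrograph = [[200, 200,  40, 200],
              [200,  35,  30, 200],
              [200, 200, 200, 200],
              [ 50, 200, 200, 200]]
mask, porosity = segment_pores(micrograph, threshold=100)
```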

  3. Global optimal hybrid geometric active contour for automated lung segmentation on CT images.

    Science.gov (United States)

    Zhang, Weihang; Wang, Xue; Zhang, Pengbo; Chen, Junfeng

    2017-12-01

    Lung segmentation on thoracic CT images plays an important role in early detection, diagnosis and 3D visualization of lung cancer. The segmentation accuracy, stability, and efficiency of serial CT scans have a significant impact on the performance of computer-aided detection. This paper proposes a global optimal hybrid geometric active contour model for automated lung segmentation on CT images. Firstly, the combination of global region and edge information leads to high segmentation accuracy in lung regions with weak boundaries or narrow bands. Secondly, due to the global optimality of energy functional, the proposed model is robust to the initial position of level set function and requires fewer iterations. Thus, the stability and efficiency of lung segmentation on serial CT slices can be greatly improved by taking advantage of the information between adjacent slices. In addition, to achieve the whole process of automated segmentation for lung cancer, two assistant algorithms based on prior shape and anatomical knowledge are proposed. The algorithms not only automatically separate the left and right lungs, but also include juxta-pleural tumors into the segmentation result. The proposed method was quantitatively validated on subjects from the publicly available LIDC-IDRI and our own data sets. Exhaustive experimental results demonstrate the superiority and competency of our method, especially compared with the typical edge-based geometric active contour model. Copyright © 2017 Elsevier Ltd. All rights reserved.
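    As a rough illustration of the global-region term in such models (not the paper's hybrid functional), a two-phase segmentation driven only by global intensity means converges by iterating the threshold to the midpoint of the foreground and background means:

```python
# Ridler-Calvard-style iteration: the two-phase piecewise-constant region
# energy is minimized when the threshold sits midway between the mean
# intensities of the two phases. This ignores the edge term and level-set
# machinery of the full model.

def two_phase_threshold(values, t0=None, tol=1e-6):
    t = t0 if t0 is not None else sum(values) / len(values)
    while True:
        lo = [v for v in values if v < t]
        hi = [v for v in values if v >= t]
        if not lo or not hi:
            return t
        t_new = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Invented intensities: dark lung parenchyma vs. bright surrounding tissue.
pixels = [10, 12, 11, 9, 200, 210, 190, 205]
t = two_phase_threshold(pixels)
```

    The paper's model adds edge information and level-set evolution on top of this global-region intuition, which is what handles weak boundaries and narrow bands.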

  4. Automated image classification applied to reconstituted human corneal epithelium for the early detection of toxic damage

    Science.gov (United States)

    Crosta, Giovanni Franco; Urani, Chiara; De Servi, Barbara; Meloni, Marisa

    2010-02-01

    For a long time, acute eye irritation has been assessed by means of the Draize rabbit test, the limitations of which are well known. Alternative tests based on in vitro models have been proposed. This work focuses on the "reconstituted human corneal epithelium" (R-HCE), which resembles the corneal epithelium of the human eye in thickness, morphology and marker expression. Testing a substance on R-HCE involves a variety of methods. Here, quantitative morphological analysis is applied to optical microscope images of R-HCE cross sections resulting from exposure to benzalkonium chloride (BAK). The short-term objectives, and the first results, are the analysis and classification of these images. Automated analysis relies on feature extraction by the spectrum-enhancement algorithm, which is made sensitive to anisotropic morphology, and on classification based on principal components analysis. The winning strategy has been the separate analysis of the apical and basal layers, which carry morphological information of different types. R-HCE specimens have been ranked by gross damage. The onset of early damage has been detected, and an R-HCE specimen exposed to a low BAK dose has been singled out from the negative and positive controls. These results provide a proof of principle for the automated classification of the specimens of interest on a purely morphological basis by means of the spectrum-enhancement algorithm.

  5. Automated static image analysis as a novel tool in describing the physical properties of dietary fiber

    Directory of Open Access Journals (Sweden)

    Marcin Andrzej KUREK

    2015-01-01

    Full Text Available Abstract The growing interest in the use of dietary fiber in food has created the need for precise tools to describe its physical properties. This research examined two dietary fibers, from oats and beets respectively, in variable particle sizes. The application of automated static image analysis for describing the hydration properties and particle size distribution of dietary fiber was analyzed. Conventional tests for water holding capacity (WHC) were conducted. The particles were measured at two points: dry and after water soaking. The highest water holding capacity (7.00 g water/g solid) was achieved by the smaller sized oat fiber. For beet fiber, by contrast, water holding capacity was higher (4.20 g water/g solid) in the larger particle size. There was evidence that, for a given fiber source, water absorption increases as particle size decreases. Very strong correlations were found between particle shape parameters, such as fiber length, straightness and width, and the conventionally measured hydration properties. The regression analysis provided the opportunity to estimate whether the automated static image analysis method could be an efficient tool for describing the hydration properties of dietary fiber. The application of the method was validated using a mathematical model, which was verified against conventional WHC measurement results.
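    The regression step can be illustrated with ordinary least squares; the particle widths and WHC values below are invented for the example, not taken from the study:

```python
# Closed-form ordinary least squares relating a particle-shape parameter
# (hypothetical mean width, um) to measured water holding capacity.

def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

widths = [50, 100, 150, 200]   # hypothetical particle widths (um)
whc = [7.0, 6.1, 5.2, 4.3]     # hypothetical WHC (g water/g solid)
slope, intercept = ols(widths, whc)
predicted = [intercept + slope * w for w in widths]
```

    The fitted slope is negative, matching the observed trend of water absorption increasing as particle size decreases.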

  6. Application of Automated Image-guided Patch Clamp for the Study of Neurons in Brain Slices.

    Science.gov (United States)

    Wu, Qiuyu; Chubykin, Alexander A

    2017-07-31

    Whole-cell patch clamp is the gold-standard method to measure the electrical properties of single cells. However, the in vitro patch clamp remains a challenging and low-throughput technique due to its complexity and high reliance on user operation and control. This manuscript demonstrates an image-guided automatic patch clamp system for in vitro whole-cell patch clamp experiments in acute brain slices. Our system implements a computer vision-based algorithm to detect fluorescently labeled cells and to target them for fully automatic patching using a micromanipulator and internal pipette pressure control. The entire process is highly automated, with minimal requirements for human intervention. Real-time experimental information, including electrical resistance and internal pipette pressure, is documented electronically for future analysis and for optimization to different cell types. Although our system is described in the context of acute brain slice recordings, it can also be applied to the automated image-guided patch clamp of dissociated neurons, organotypic slice cultures, and other non-neuronal cell types.

  7. An Automated Algorithm for Identifying and Tracking Transverse Waves in Solar Images

    Science.gov (United States)

    Weberg, Micah J.; Morton, Richard J.; McLaughlin, James A.

    2018-01-01

    Recent instrumentation has demonstrated that the solar atmosphere supports omnipresent transverse waves, which could play a key role in energizing the solar corona. Large-scale studies are required in order to build up an understanding of the general properties of these transverse waves. To help facilitate this, we present an automated algorithm for identifying and tracking features in solar images and extracting the wave properties of any observed transverse oscillations. We test and calibrate our algorithm using a set of synthetic data, which includes noise and rotational effects. The results indicate an accuracy of 1%–2% for displacement amplitudes and 4%–10% for wave periods and velocity amplitudes. We also apply the algorithm to data from the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory and find good agreement with previous studies. Of note, we find that 35%–41% of the observed plumes exhibit multiple wave signatures, which indicates either the superposition of waves or multiple independent wave packets observed at different times within a single structure. The automated methods described in this paper represent a significant improvement on the speed and quality of direct measurements of transverse waves within the solar atmosphere. This algorithm unlocks a wide range of statistical studies that were previously impractical.
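    One property the algorithm extracts, the wave period, can be estimated from a tracked transverse-displacement time series; a simplified zero-crossing sketch (the actual algorithm measures displacement amplitudes and periods far more robustly):

```python
import math

def estimate_period(displacement, dt):
    """Average spacing between successive upward zero crossings, in seconds."""
    crossings = [i for i in range(1, len(displacement))
                 if displacement[i - 1] < 0 <= displacement[i]]
    if len(crossings) < 2:
        return None
    gaps = [b - a for a, b in zip(crossings, crossings[1:])]
    return dt * sum(gaps) / len(gaps)

# Synthetic noiseless displacement series: 12 s period sampled every 0.5 s.
dt, period = 0.5, 12.0
series = [math.sin(2 * math.pi * i * dt / period) for i in range(100)]
```

    On noisy observations, one would fit a sinusoid rather than count crossings, but the recovered quantity is the same.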

  8. Automated analysis of heterogeneous carbon nanostructures by high-resolution electron microscopy and on-line image processing

    Energy Technology Data Exchange (ETDEWEB)

    Toth, P., E-mail: toth.pal@uni-miskolc.hu [Department of Chemical Engineering, University of Utah, 50 S. Central Campus Drive, Salt Lake City, UT 84112-9203 (United States); Farrer, J.K. [Department of Physics and Astronomy, Brigham Young University, N283 ESC, Provo, UT 84602 (United States); Palotas, A.B. [Department of Combustion Technology and Thermal Energy, University of Miskolc, H3515, Miskolc-Egyetemvaros (Hungary); Lighty, J.S.; Eddings, E.G. [Department of Chemical Engineering, University of Utah, 50 S. Central Campus Drive, Salt Lake City, UT 84112-9203 (United States)

    2013-06-15

    High-resolution electron microscopy is an efficient tool for characterizing heterogeneous nanostructures; however, currently the analysis is a laborious and time-consuming manual process. In order to be able to accurately and robustly quantify heterostructures, one must obtain a statistically high number of micrographs showing images of the appropriate sub-structures. The second step of analysis is usually the application of digital image processing techniques in order to extract meaningful structural descriptors from the acquired images. In this paper it will be shown that by applying on-line image processing and basic machine vision algorithms, it is possible to fully automate the image acquisition step; therefore, the number of acquired images in a given time can be increased drastically without the need for additional human labor. The proposed automation technique works by computing fields of structural descriptors in situ and thus outputs sets of the desired structural descriptors in real-time. The merits of the method are demonstrated by using combustion-generated black carbon samples. - Highlights: ► The HRTEM analysis of heterogeneous nanostructures is a tedious manual process. ► Automatic HRTEM image acquisition and analysis can improve data quantity and quality. ► We propose a method based on on-line image analysis for the automation of HRTEM image acquisition. ► The proposed method is demonstrated using HRTEM images of soot particles.

  9. Results of Automated Retinal Image Analysis for Detection of Diabetic Retinopathy from the Nakuru Study, Kenya.

    Science.gov (United States)

    Hansen, Morten B; Abràmoff, Michael D; Folk, James C; Mathenge, Wanjiku; Bastawrous, Andrew; Peto, Tunde

    2015-01-01

    Digital retinal imaging is an established method of screening for diabetic retinopathy (DR). It has been established that about 1% of the world's blindness and visual impairment is currently due to DR. However, the increasing prevalence of diabetes mellitus and DR is creating an increased workload for those with expertise in grading retinal images. Safe and reliable automated analysis of retinal images may support screening services worldwide. This study aimed to compare the ability of the Iowa Detection Program (IDP) to detect diabetic eye diseases (DED) with human grading carried out at Moorfields Reading Centre on the population of the Nakuru Study from Kenya. Retinal images were taken from participants of the Nakuru Eye Disease Study in Kenya in 2007/08 (n = 4,381 participants [NW6 Topcon Digital Retinal Camera]). First, human grading was performed for the presence or absence of DR, and for those with DR this was subdivided into referable or non-referable DR. The automated IDP software was deployed to identify those with DR and to categorize the severity of DR. The primary outcomes were sensitivity, specificity, and positive and negative predictive values of the IDP versus the human grader as the reference standard. Altogether 3,460 participants were included. 113 had DED, giving a prevalence of 3.3% (95% CI, 2.7-3.9%). The sensitivity of the IDP to detect DED relative to human grading was 91.0% (95% CI, 88.0-93.4%). The IDP's detection of DED gave an AUC of 0.878 (95% CI, 0.850-0.905), with a negative predictive value of 98%. The IDP missed no vision-threatening retinopathy in any patient, and none of the false negative cases met criteria for treatment. In this epidemiological sample, the IDP's grading was comparable to that of the human graders. It therefore might be feasible to consider its inclusion into usual epidemiological grading.
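    The reported sensitivity, specificity, and predictive values all follow from a 2×2 confusion matrix; a minimal sketch with illustrative counts (not the study's data):

```python
# Standard screening statistics from true/false positive and negative counts.

def screening_metrics(tp, fp, fn, tn):
    return {"sensitivity": tp / (tp + fn),   # detected fraction of disease
            "specificity": tn / (tn + fp),   # cleared fraction of healthy
            "ppv": tp / (tp + fp),           # positive predictive value
            "npv": tn / (tn + fn)}           # negative predictive value

# Invented counts for illustration only.
m = screening_metrics(tp=90, fp=40, fn=10, tn=360)
```

    Note how a low disease prevalence drives the negative predictive value high even when specificity is modest, which is why NPV is emphasized for screening.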

  10. Automated image quality evaluation of T2-weighted liver MRI utilizing deep learning architecture.

    Science.gov (United States)

    Esses, Steven J; Lu, Xiaoguang; Zhao, Tiejun; Shanbhogue, Krishna; Dane, Bari; Bruno, Mary; Chandarana, Hersh

    2018-03-01

    To develop and test a deep learning approach named Convolutional Neural Network (CNN) for automated screening of T2-weighted (T2WI) liver acquisitions for nondiagnostic images, and to compare this automated approach to evaluation by two radiologists. We evaluated 522 liver magnetic resonance imaging (MRI) exams performed at 1.5T and 3T at our institution between November 2014 and May 2016 for CNN training and validation. The CNN consisted of an input layer, convolutional layer, fully connected layer, and output layer. 351 T2WI were anonymized for training. Each case was annotated with a label of being diagnostic or nondiagnostic for detecting lesions and assessing liver morphology. Another independently collected 171 cases were sequestered for a blind test. These 171 T2WI were assessed independently by two radiologists and annotated as being diagnostic or nondiagnostic. The same 171 T2WI were presented to the CNN algorithm, and the image quality (IQ) output of the algorithm was compared to that of the two radiologists. There was concordance in IQ label between Reader 1 and the CNN in 79% of cases and between Reader 2 and the CNN in 73%. The sensitivity and specificity of the CNN algorithm in identifying nondiagnostic IQ were 67% and 81% with respect to Reader 1, and 47% and 80% with respect to Reader 2. The negative predictive value of the algorithm for identifying nondiagnostic IQ was 94% and 86% (relative to Readers 1 and 2). We demonstrate a CNN algorithm that yields a high negative predictive value when screening for nondiagnostic T2WI of the liver. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:723-728. © 2017 International Society for Magnetic Resonance in Medicine.

  11. Automated discrimination of lower and higher grade gliomas based on histopathological image analysis

    Directory of Open Access Journals (Sweden)

    Hojjat Seyed Mousavi

    2015-01-01

    Full Text Available Introduction: Histopathological images have rich structural information, are multi-channel in nature and contain meaningful pathological information at various scales. Sophisticated image analysis tools that can automatically extract discriminative information from histopathology image slides for diagnosis remain an area of significant research activity. In this work, we focus on automated brain cancer grading, specifically glioma grading. Grading of a glioma is a highly important problem in pathology and is largely done manually by medical experts based on an examination of pathology slides (images). To complement the efforts of clinicians engaged in brain cancer diagnosis, we develop novel image processing algorithms and systems to automatically grade glioma tumors into two categories: low-grade glioma (LGG) and high-grade glioma (HGG), which represents a more advanced stage of the disease. Results: We propose novel image processing algorithms based on spatial domain analysis for glioma tumor grading that will complement the clinical interpretation of the tissue. The image processing techniques are developed in close collaboration with medical experts to mimic the visual cues that a clinician looks for in judging the grade of the disease. Specifically, two algorithmic techniques are developed: (1) a cell segmentation and cell-count profile creation for identification of pseudopalisading necrosis, and (2) a customized operation of spatial and morphological filters to accurately identify microvascular proliferation (MVP). In both techniques, a hierarchical decision is made via a decision tree mechanism. If either pseudopalisading necrosis or MVP is found present in any part of the histopathology slide, the whole slide is identified as HGG, which is consistent with World Health Organization guidelines. Experimental results on the Cancer Genome Atlas database are presented in the form of: (1) successful detection rates of pseudopalisading necrosis

  12. Long-term live cell imaging and automated 4D analysis of drosophila neuroblast lineages.

    Directory of Open Access Journals (Sweden)

    Catarina C F Homem

    Full Text Available The developing Drosophila brain is a well-studied model system for neurogenesis and stem cell biology. In the Drosophila central brain, around 200 neural stem cells called neuroblasts undergo repeated rounds of asymmetric cell division. These divisions typically generate a larger self-renewing neuroblast and a smaller ganglion mother cell that undergoes one terminal division to create two differentiating neurons. Although single mitotic divisions of neuroblasts can easily be imaged in real time, the lack of long term imaging procedures has limited the use of neuroblast live imaging for lineage analysis. Here we describe a method that allows live imaging of cultured Drosophila neuroblasts over multiple cell cycles for up to 24 hours. We describe a 4D image analysis protocol that can be used to extract cell cycle times and growth rates from the resulting movies in an automated manner. We use it to perform lineage analysis in type II neuroblasts where clonal analysis has indicated the presence of a transit-amplifying population that potentiates the number of neurons. Indeed, our experiments verify type II lineages and provide quantitative parameters for all cell types in those lineages. As defects in type II neuroblast lineages can result in brain tumor formation, our lineage analysis method will allow more detailed and quantitative analysis of tumorigenesis and asymmetric cell division in the Drosophila brain.

  13. Automated detection of synapses in serial section transmission electron microscopy image stacks.

    Directory of Open Access Journals (Sweden)

    Anna Kreshuk

    Full Text Available We describe a method for fully automated detection of chemical synapses in serial electron microscopy images with highly anisotropic axial and lateral resolution, such as images taken on transmission electron microscopes. Our pipeline starts from classification of the pixels based on 3D pixel features, which is followed by segmentation with an Ising model MRF and another classification step based on object-level features. Classifiers are learned on sparse user labels; a fully annotated data subvolume is not required for training. The algorithm was validated on a set of 238 synapses in 20 serial 7197×7351 pixel images (4.5×4.5×45 nm resolution) of mouse visual cortex, manually labeled by three independent human annotators and additionally re-verified by an expert neuroscientist. The error rate of the algorithm (12% false negative, 7% false positive detections) is better than state-of-the-art, even though, unlike the state-of-the-art method, our algorithm does not require a prior segmentation of the image volume into cells. The software is based on the ilastik learning and segmentation toolkit and the vigra image processing library and is freely available on our website, along with the test data and gold standard annotations (http://www.ilastik.org/synapse-detection/sstem).

  14. Automated segmentation of geographic atrophy in fundus autofluorescence images using supervised pixel classification.

    Science.gov (United States)

    Hu, Zhihong; Medioni, Gerard G; Hernandez, Matthias; Sadda, Srinivas R

    2015-01-01

    Geographic atrophy (GA) is a manifestation of the advanced or late stage of age-related macular degeneration (AMD). AMD is the leading cause of blindness in people over the age of 65 in the western world. The purpose of this study is to develop a fully automated supervised pixel classification approach for segmenting GA, including uni- and multifocal patches in fundus autofluorescene (FAF) images. The image features include region-wise intensity measures, gray-level co-occurrence matrix measures, and Gaussian filter banks. A k-nearest-neighbor pixel classifier is applied to obtain a GA probability map, representing the likelihood that the image pixel belongs to GA. Sixteen randomly chosen FAF images were obtained from 16 subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by a certified image reading center grader. Eight-fold cross-validation is applied to evaluate the algorithm performance. The mean overlap ratio (OR), area correlation (Pearson's [Formula: see text]), accuracy (ACC), true positive rate (TPR), specificity (SPC), positive predictive value (PPV), and false discovery rate (FDR) between the algorithm- and manually defined GA regions are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively.
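    The core of the supervised pixel classification is a k-nearest-neighbor vote over feature vectors; a toy sketch with two-dimensional stand-in features (the study's actual features are region-wise intensities, GLCM measures, and Gaussian filter-bank responses):

```python
# k-NN probability map value for one pixel: the fraction of its k nearest
# labeled training pixels (in feature space) that carry the GA label.

def knn_probability(train, labels, query, k=3):
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(train[i], query)))
    nearest = order[:k]
    return sum(labels[i] for i in nearest) / k

# Invented feature vectors: low values ~ background, high values ~ GA.
train = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.15),
         (0.80, 0.90), (0.90, 0.80), (0.85, 0.85)]
labels = [0, 0, 0, 1, 1, 1]
p_ga = knn_probability(train, labels, query=(0.82, 0.88), k=3)
```

    Applying this to every pixel yields the GA probability map described above, which is then thresholded into algorithm-defined GA regions.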

  15. Automated diagnosis of diabetic retinopathy and glaucoma using fundus and OCT images

    Directory of Open Access Journals (Sweden)

    Pachiyappan Arulmozhivarman

    2012-06-01

    Full Text Available Abstract We describe a system for the automated diagnosis of diabetic retinopathy and glaucoma using fundus and optical coherence tomography (OCT) images. Automatic screening will help doctors to identify the condition of the patient quickly and more accurately. The macular abnormalities caused by diabetic retinopathy can be detected by applying morphological operations, filters and thresholds to the fundus images of the patient. Early detection of glaucoma is done by estimating the retinal nerve fiber layer (RNFL) thickness from the OCT images of the patient. The RNFL thickness estimation involves the use of an active-contour-based deformable snake algorithm for segmentation of the anterior and posterior boundaries of the retinal nerve fiber layer. The algorithm was tested on a set of 89 fundus images, of which 85 were found to have at least mild retinopathy, and OCT images of 31 patients, of which 13 were found to be glaucomatous. The accuracy for optic disk detection is found to be 97.75%. The proposed system therefore is accurate, reliable and robust and can be realized.

  16. An automated retinal imaging method for the early diagnosis of diabetic retinopathy.

    Science.gov (United States)

    Franklin, S Wilfred; Rajan, S Edward

    2013-01-01

    Diabetic retinopathy is a microvascular complication of long-term diabetes and is the major cause of eyesight loss due to changes in the blood vessels of the retina. Major vision loss due to diabetic retinopathy is highly preventable with regular screening and timely intervention at the earlier stages. Retinal blood vessel segmentation methods help to identify the successive stages of such sight-threatening diseases as diabetes. To develop and test a novel retinal imaging method which segments the blood vessels automatically from retinal images, helping ophthalmologists in the diagnosis and follow-up of diabetic retinopathy. This method segments each image pixel as vessel or nonvessel, which, in turn, is used for automatic recognition of the vasculature in retinal images. Retinal blood vessels were identified by means of a multilayer perceptron neural network, for which the inputs were derived from Gabor and moment-invariants-based features. The back-propagation algorithm, which provides an efficient technique to change the weights in a feedforward network, is utilized in our method. Quantitative results of sensitivity, specificity and predictive values were obtained, and the measured accuracy of our segmentation algorithm was 95.3%, which is better than that presented by state-of-the-art approaches. The evaluation procedure used and the demonstrated effectiveness of our automated retinal imaging method prove it to be a powerful tool for diagnosing diabetic retinopathy in the earlier stages.
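    The back-propagation weight update used to train such a multilayer perceptron can be shown in its one-neuron special case: a logistic unit fitted by stochastic gradient descent on a toy one-dimensional feature (invented data, not the paper's Gabor/moment-invariant features):

```python
import math

def train_neuron(xs, ys, lr=0.5, epochs=2000):
    """Single logistic unit; cross-entropy gradient steps on (w, b)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid activation
            # gradient of cross-entropy loss w.r.t. w and b is (p - y) * input
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]   # toy "vesselness" feature per pixel
ys = [0, 0, 0, 1, 1, 1]               # 1 = vessel pixel, 0 = nonvessel
w, b = train_neuron(xs, ys)
classify = lambda x: 1 if w * x + b > 0 else 0
```

    A full MLP applies the same (p − y)-driven update layer by layer via the chain rule; the one-unit case shows the mechanism without the bookkeeping.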

  17. Integrating image processing and classification technology into automated polarizing film defect inspection

    Science.gov (United States)

    Kuo, Chung-Feng Jeffrey; Lai, Chun-Yu; Kao, Chih-Hsiang; Chiu, Chin-Hsun

    2018-05-01

    In order to improve the current manual inspection and classification process for polarizing film on production lines, this study proposes a high precision automated inspection and classification system for polarizing film, which is used for recognition and classification of four common defects: dent, foreign material, bright spot, and scratch. First, the median filter is used to remove the impulse noise in the defect image of polarizing film. The random noise in the background is smoothed by the improved anisotropic diffusion, while the edge detail of the defect region is sharpened. Next, the defect image is transformed by Fourier transform to the frequency domain, combined with a Butterworth high pass filter to sharpen the edge detail of the defect region, and brought back by inverse Fourier transform to the spatial domain to complete the image enhancement process. For image segmentation, the edge of the defect region is found by Canny edge detector, and then the complete defect region is obtained by two-stage morphology processing. For defect classification, the feature values, including maximum gray level, eccentricity, the contrast, and homogeneity of gray level co-occurrence matrix (GLCM) extracted from the images, are used as the input of the radial basis function neural network (RBFNN) and back-propagation neural network (BPNN) classifier, 96 defect images are then used as training samples, and 84 defect images are used as testing samples to validate the classification effect. The result shows that the classification accuracy by using RBFNN is 98.9%. Thus, our proposed system can be used by manufacturing companies for a higher yield rate and lower cost. The processing time of one single image is 2.57 seconds, thus meeting the practical application requirement of an industrial production line.
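    Two of the GLCM texture features fed to the classifiers, contrast and homogeneity, can be computed from normalized co-occurrence counts; a sketch for a horizontal offset of one pixel on invented toy patches:

```python
# GLCM for pixel pairs (x, x+1) in each row, normalized to probabilities,
# followed by the standard contrast and homogeneity statistics.

def glcm(image, levels):
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in counts]

def contrast(p):
    # weights co-occurrences by squared gray-level difference
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

def homogeneity(p):
    # rewards co-occurrences of similar gray levels
    return sum(p[i][j] / (1 + abs(i - j))
               for i in range(len(p)) for j in range(len(p)))

flat = [[1, 1, 1], [1, 1, 1]]       # uniform patch: zero contrast
stripes = [[0, 3, 0], [3, 0, 3]]    # alternating patch: high contrast
p_flat, p_stripes = glcm(flat, 4), glcm(stripes, 4)
```

    A scratch-like defect produces stripe-like statistics (high contrast, low homogeneity), which is what lets these features separate defect classes.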

  18. Automated Outcome Classification of Computed Tomography Imaging Reports for Pediatric Traumatic Brain Injury.

    Science.gov (United States)

    Yadav, Kabir; Sarioglu, Efsun; Choi, Hyeong Ah; Cartwright, Walter B; Hinds, Pamela S; Chamberlain, James M

    2016-02-01

    The authors have previously demonstrated highly reliable automated classification of free-text computed tomography (CT) imaging reports using a hybrid system that pairs linguistic (natural language processing) and statistical (machine learning) techniques. Although previously used to identify the outcome of orbital fracture in unprocessed radiology reports from a clinical data repository, this performance has not been replicated for more complex outcomes. To validate the automated outcome classification performance of a hybrid natural language processing (NLP) and machine learning system for brain CT imaging reports. The hypothesis was that the system's performance characteristics would extend to identifying pediatric traumatic brain injury (TBI). This was a secondary analysis of a subset of 2,121 CT reports from the Pediatric Emergency Care Applied Research Network (PECARN) TBI study. For that project, radiologists dictated CT reports as free text, which were then deidentified and scanned as PDF documents. Trained data abstractors manually coded each report for TBI outcome. Text was extracted from the PDF files using optical character recognition. The data set was randomly split evenly into training and testing sets. Training patient reports were used as input to the Medical Language Extraction and Encoding (MedLEE) NLP tool to create structured output containing standardized medical terms and modifiers for negation, certainty, and temporal status. A random subset stratified by site was analyzed using descriptive quantitative content analysis to confirm identification of TBI findings based on the National Institute of Neurological Disorders and Stroke (NINDS) Common Data Elements project. Findings were coded for presence or absence, weighted by frequency of mentions, and past/future/indication modifiers were filtered. After combining with the manual reference standard, a decision tree classifier was created using the data mining tools WEKA 3.7.5 and Salford Predictive Miner 7

  19. Automated measurements of metabolic tumor volume and metabolic parameters in lung PET/CT imaging

    Science.gov (United States)

    Orologas, F.; Saitis, P.; Kallergi, M.

    2017-11-01

    Patients with lung tumors or inflammatory lung disease could greatly benefit in terms of treatment and follow-up by PET/CT quantitative imaging, namely measurements of metabolic tumor volume (MTV), standardized uptake values (SUVs) and total lesion glycolysis (TLG). The purpose of this study was the development of an unsupervised or partially supervised algorithm using standard image processing tools for measuring MTV, SUV, and TLG from lung PET/CT scans. Automated metabolic lesion volume and metabolic parameter measurements were achieved through a 5 step algorithm: (i) The segmentation of the lung areas on the CT slices, (ii) the registration of the CT segmented lung regions on the PET images to define the anatomical boundaries of the lungs on the functional data, (iii) the segmentation of the regions of interest (ROIs) on the PET images based on adaptive thresholding and clinical criteria, (iv) the estimation of the number of pixels and pixel intensities in the PET slices of the segmented ROIs, (v) the estimation of MTV, SUVs, and TLG from the previous step and DICOM header data. Whole body PET/CT scans of patients with sarcoidosis were used for training and testing the algorithm. Lung area segmentation on the CT slices was better achieved with semi-supervised techniques that reduced false positive detections significantly. Lung segmentation results agreed with the lung volumes published in the literature while the agreement between experts and algorithm in the segmentation of the lesions was around 88%. Segmentation results depended on the image resolution selected for processing. The clinical parameters, SUV (either mean or max or peak) and TLG estimated by the segmented ROIs and DICOM header data provided a way to correlate imaging data to clinical and demographic data. In conclusion, automated MTV, SUV, and TLG measurements offer powerful analysis tools in PET/CT imaging of the lungs. Custom-made algorithms are often a better approach than the manufacturer
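Steps (iv) and (v) above, turning segmented-ROI voxel data plus DICOM header data into MTV, SUV, and TLG, reduce to simple arithmetic. A minimal sketch with invented numbers (in practice the dose, weight, and voxel geometry come from the DICOM header):

```python
# Voxel activity concentrations (Bq/mL) inside a segmented ROI: toy values.
activities_bq_ml = [5200.0, 6100.0, 4800.0, 7000.0]
voxel_volume_ml = 4.0 * 4.0 * 4.0 / 1000.0   # 4 mm isotropic voxels, in mL
injected_dose_bq = 185e6                      # decay-corrected injected dose
weight_g = 70000.0                            # patient weight in grams

# SUV normalizes activity by injected dose per unit body weight.
suv = [a / (injected_dose_bq / weight_g) for a in activities_bq_ml]
suv_mean = sum(suv) / len(suv)
suv_max = max(suv)

mtv_ml = len(activities_bq_ml) * voxel_volume_ml  # metabolic tumor volume
tlg = suv_mean * mtv_ml                           # total lesion glycolysis
```

The quality of all three numbers hinges on the ROI segmentation feeding step (iv), which is why the adaptive-thresholding step dominates the algorithm design.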

  20. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Chen, Ken Chung [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403 (China); Shen, Steve G. F.; Yan, Jin [Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People' s Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Lee, Philip K. M.; Chow, Ben [Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077 (China); Liu, Nancy X. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050 (China); Xia, James J. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People' s Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, 136701 (Korea, Republic of)

    2014-04-15

Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT
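The label-propagation idea can be caricatured in a few lines: express each target patch in terms of atlas patches and propagate the atlas labels with the same weights. The sketch below substitutes inverse-distance weighting over the k nearest patches for the paper's true sparse representation, and omits the convex MAP segmentation entirely; patches and labels are toy values.

```python
atlas_patches = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (0.9, 1.0, 1.0)]
atlas_labels = [0, 1, 1]  # 0 = soft tissue, 1 = bone (toy labels)

def propagate_label(target, patches=atlas_patches, labels=atlas_labels, k=2):
    # Squared distance from the target patch to every atlas patch.
    dists = sorted(
        (sum((t - a) ** 2 for t, a in zip(target, p)), i)
        for i, p in enumerate(patches)
    )[:k]  # keep only the k nearest: a crude stand-in for a sparse support
    weights = [(1.0 / (1e-6 + d), i) for d, i in dists]
    total = sum(w for w, _ in weights)
    score = sum(w * labels[i] for w, i in weights) / total
    return 1 if score >= 0.5 else 0
```

A target patch resembling the bone atlas patches inherits the bone label, and one resembling the soft-tissue patch inherits the soft-tissue label; in the paper this soft labeling builds the patient-specific atlas rather than the final segmentation.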

  1. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    International Nuclear Information System (INIS)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-01-01

Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  2. Automated Analysis of 123I-beta-CIT SPECT Images with Statistical Probabilistic Anatomical Mapping

    International Nuclear Information System (INIS)

    Eo, Jae Seon; Lee, Hoyoung; Lee, Jae Sung; Kim, Yu Kyung; Jeon, Bumseok; Lee, Dong Soo

    2014-01-01

Population-based statistical probabilistic anatomical maps have been used to generate probabilistic volumes of interest for analyzing perfusion and metabolic brain imaging. We investigated the feasibility of automated analysis for dopamine transporter images using this technique and evaluated striatal binding potentials in Parkinson's disease and Wilson's disease. We analyzed 2β-carbomethoxy-3β-(4-123I-iodophenyl)tropane (123I-beta-CIT) SPECT images acquired from 26 people with Parkinson's disease (M:F=11:15, mean age=49±12 years), 9 people with Wilson's disease (M:F=6:3, mean age=26±11 years) and 17 normal controls (M:F=5:12, mean age=39±16 years). A SPECT template was created using striatal statistical probabilistic map images. All images were spatially normalized onto the template, and probability-weighted regional counts in striatal structures were estimated. The binding potential was calculated using the ratio of specific and nonspecific binding activities at equilibrium. Voxel-based comparisons between groups were also performed using statistical parametric mapping. Qualitative assessment showed that spatial normalizations of the SPECT images were successful for all images. The striatal binding potentials of participants with Parkinson's disease and Wilson's disease were significantly lower than those of normal controls. Statistical parametric mapping analysis found statistically significant differences only in striatal regions in both disease groups compared to controls. We successfully evaluated the regional 123I-beta-CIT distribution using the SPECT template and probabilistic map data automatically. This procedure allows an objective and quantitative comparison of the binding potential, which in this case showed a significantly decreased binding potential in the striata of patients with Parkinson's disease or Wilson's disease
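The binding potential used here is the ratio of specific to nonspecific binding at equilibrium; given probability-weighted striatal counts and counts from a nonspecific reference region, it is one line of arithmetic. The numbers below are illustrative, not study data.

```python
def binding_potential(striatal, reference):
    """(specific binding) / (nonspecific binding) at equilibrium."""
    return (striatal - reference) / reference

control = binding_potential(8.4, 1.2)   # toy normal-control counts
patient = binding_potential(3.0, 1.2)   # toy Parkinson's-group counts
```

The group comparison in the study amounts to showing patient < control across the striatal probabilistic volumes of interest.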

  3. Development of an automated imaging pipeline for the analysis of the zebrafish larval kidney.

    Directory of Open Access Journals (Sweden)

    Jens H Westhoff

Full Text Available The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems.

  4. Ranking quantitative resistance to Septoria tritici blotch in elite wheat cultivars using automated image analysis.

    Science.gov (United States)

    Karisto, Petteri; Hund, Andreas; Yu, Kang; Anderegg, Jonas; Walter, Achim; Mascher, Fabio; McDonald, Bruce A; Mikaberidze, Alexey

    2017-12-06

Quantitative resistance is likely to be more durable than major gene resistance for controlling Septoria tritici blotch (STB) on wheat. Earlier studies hypothesized that resistance affecting the degree of host damage, as measured by the percentage of leaf area covered by STB lesions, is distinct from resistance that affects pathogen reproduction, as measured by the density of pycnidia produced within lesions. We tested this hypothesis using a collection of 335 elite European winter wheat cultivars that was naturally infected by a diverse population of Zymoseptoria tritici in a replicated field experiment. We used automated image analysis (AIA) of 21420 scanned wheat leaves to obtain quantitative measures of conditional STB intensity that were precise, objective, and reproducible. These measures allowed us to explicitly separate resistance affecting host damage from resistance affecting pathogen reproduction, enabling us to confirm that these resistance traits are largely independent. The cultivar rankings based on host damage were different from the rankings based on pathogen reproduction, indicating that the two forms of resistance should be considered separately in breeding programs aiming to increase STB resistance. We hypothesize that these different forms of resistance are under separate genetic control, enabling them to be recombined to form new cultivars that are highly resistant to STB. We found a significant correlation between rankings based on automated image analysis and rankings based on traditional visual scoring, suggesting that image analysis can complement conventional measurements of STB resistance, based largely on host damage, while enabling a much more precise measure of pathogen reproduction. We showed that measures of pathogen reproduction early in the growing season were the best predictors of host damage late in the growing season, illustrating the importance of breeding for resistance that reduces pathogen reproduction in order to minimize
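The two conditional traits distinguished in this abstract can be sketched from a per-leaf mask summary: host damage as the percentage of leaf area covered by lesions, and pathogen reproduction as pycnidia density within lesions. The pixel counts and pixel size below are invented; the actual AIA pipeline is not reproduced.

```python
def stb_traits(leaf_px, lesion_px, n_pycnidia, px_area_mm2):
    plac = 100.0 * lesion_px / leaf_px                # % leaf area covered (host damage)
    density = n_pycnidia / (lesion_px * px_area_mm2)  # pycnidia per mm^2 of lesion
    return plac, density

plac, density = stb_traits(leaf_px=20000, lesion_px=5000,
                           n_pycnidia=400, px_area_mm2=0.001)
```

Because the density is conditioned on lesion area, two leaves with equal damage can rank very differently on reproduction, which is why the two cultivar rankings can diverge.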

  5. Computerized detection of breast cancer on automated breast ultrasound imaging of women with dense breasts

    Energy Technology Data Exchange (ETDEWEB)

    Drukker, Karen, E-mail: kdrukker@uchicago.edu; Sennett, Charlene A.; Giger, Maryellen L. [Department of Radiology, MC2026, The University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2014-01-15

Purpose: Develop a computer-aided detection method and investigate its feasibility for detection of breast cancer in automated 3D ultrasound images of women with dense breasts. Methods: The HIPAA compliant study involved a dataset of volumetric ultrasound image data, “views,” acquired with an automated U-Systems Somo•V® ABUS system for 185 asymptomatic women with dense breasts (BI-RADS Composition/Density 3 or 4). For each patient, three whole-breast views (3D image volumes) per breast were acquired. A total of 52 patients had breast cancer (61 cancers), diagnosed through any follow-up at most 365 days after the original screening mammogram. Thirty-one of these patients (32 cancers) had a screening mammogram with a clinically assigned BI-RADS Assessment Category 1 or 2, i.e., were mammographically negative. All software used for analysis was developed in-house and involved 3 steps: (1) detection of initial tumor candidates, (2) characterization of candidates, and (3) elimination of false-positive candidates. Performance was assessed by calculating the cancer detection sensitivity as a function of the number of “marks” (detections) per view. Results: At a single mark per view, i.e., six marks per patient, the median detection sensitivity by cancer was 50.0% (16/32) ± 6% for patients with a screening mammogram-assigned BI-RADS category 1 or 2—similar to radiologists’ performance sensitivity (49.9%) for this dataset from a prior reader study—and 45.9% (28/61) ± 4% for all patients. Conclusions: Promising detection sensitivity was obtained for the computer on a 3D ultrasound dataset of women with dense breasts at a rate of false-positive detections that may be acceptable for clinical implementation.
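The operating point reported above ("sensitivity at one mark per view") can be sketched as follows: keep the k highest-scoring detections in each view and count a cancer as found if any kept mark hits it. Scores and cancer IDs below are toy values, not study data.

```python
def sensitivity_at_k(views, n_cancers, k):
    """views: per-view lists of (score, cancer_id or None) detections."""
    detected = set()
    for marks in views:
        top = sorted(marks, key=lambda m: -m[0])[:k]  # k highest-scoring marks
        detected |= {cid for _, cid in top if cid is not None}
    return len(detected) / n_cancers

views = [[(0.9, "c1"), (0.5, None)],   # view 1: the true hit outscores a false positive
         [(0.8, None), (0.7, "c2")]]   # view 2: a false positive outscores the hit
```

Sweeping k traces out the sensitivity-versus-marks-per-view curve used to assess the method.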

  6. AUTOMATED DETECTION OF GALAXY-SCALE GRAVITATIONAL LENSES IN HIGH-RESOLUTION IMAGING DATA

    International Nuclear Information System (INIS)

    Marshall, Philip J.; Bradac, Marusa; Hogg, David W.; Moustakas, Leonidas A.; Fassnacht, Christopher D.; Schrabback, Tim; Blandford, Roger D.

    2009-01-01

We expect direct lens modeling to be the key to successful and meaningful automated strong galaxy-scale gravitational lens detection. We have implemented a lens-modeling 'robot' that treats every bright red galaxy (BRG) in a large imaging survey as a potential gravitational lens system. Having optimized a simple model for 'typical' galaxy-scale gravitational lenses, we generate four assessments of model quality that are then used in an automated classification. The robot infers from these four data the lens classification parameter H that a human would have assigned; the inference is performed using a probability distribution generated from a human-classified training set of candidates, including realistic simulated lenses and known false positives drawn from the Hubble Space Telescope (HST) Extended Groth Strip (EGS) survey. We compute the expected purity, completeness, and rejection rate, and find that these statistics can be optimized for a particular application by changing the prior probability distribution for H; this is equivalent to defining the robot's 'character'. Adopting a realistic prior based on expectations for the abundance of lenses, we find that a lens sample may be generated that is ∼100% pure, but only ∼20% complete. This shortfall is due primarily to the oversimplicity of the model of both the lens light and mass. With a more optimistic robot, ∼90% completeness can be achieved while rejecting ∼90% of the candidate objects. The remaining candidates must be classified by human inspectors. Displaying the images used and produced by the robot on a custom 'one-click' web interface, we are able to inspect and classify lens candidates at a rate of a few seconds per system, suggesting that a future 1000 deg² imaging survey containing 10⁷ BRGs, and some 10⁴ lenses, could be successfully, and reproducibly, searched in a modest amount of time. We have verified our projected survey statistics, albeit at low significance, using the HST EGS data
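The purity/completeness trade-off controlled by the robot's prior can be illustrated with a simple score threshold standing in for the inferred classification parameter H: a stricter threshold (a more pessimistic robot) buys purity at the cost of completeness. Candidate scores below are toy values.

```python
def purity_completeness(candidates, threshold):
    """candidates: (score, is_lens) pairs; returns (purity, completeness)."""
    accepted = [c for c in candidates if c[0] >= threshold]
    n_lenses = sum(1 for _, is_lens in candidates if is_lens)
    tp = sum(1 for _, is_lens in accepted if is_lens)
    purity = tp / len(accepted) if accepted else 1.0
    completeness = tp / n_lenses
    return purity, completeness

candidates = [(0.9, True), (0.8, True), (0.7, False), (0.6, True), (0.2, False)]
```

Raising the threshold from 0.5 to 0.85 in this toy set moves the sample from 75% pure and 100% complete to 100% pure but only one lens in three recovered, the same qualitative behavior the paper reports for its pure-but-incomplete setting.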

  7. Automated localization and segmentation techniques for B-mode ultrasound images: A review.

    Science.gov (United States)

    Meiburger, Kristen M; Acharya, U Rajendra; Molinari, Filippo

    2018-01-01

B-mode ultrasound imaging is used extensively in medicine. Hence, there is a need to have efficient segmentation tools to aid in computer-aided diagnosis, image-guided interventions, and therapy. This paper presents a comprehensive review on automated localization and segmentation techniques for B-mode ultrasound images. The paper first describes the general characteristics of B-mode ultrasound images. Then insight on the localization and segmentation of tissues is provided, both in the case in which the organ/tissue localization provides the final segmentation and in the case in which a two-step segmentation process is needed, due to the desired boundaries being too fine to locate from within the entire ultrasound frame. Subsequently, examples of some main techniques found in literature are shown, including but not limited to shape priors, superpixel and classification, local pixel statistics, active contours, edge-tracking, dynamic programming, and data mining. Ten selected applications (abdomen/kidney, breast, cardiology, thyroid, liver, vascular, musculoskeletal, obstetrics, gynecology, prostate) are then investigated in depth, and the performances of a few specific applications are compared. In conclusion, future perspectives for B-mode based segmentation, such as the integration of RF information, the employment of higher frequency probes when possible, the focus on completely automatic algorithms, and the increase in available data are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Vision 20/20: Perspectives on automated image segmentation for radiotherapy

    Science.gov (United States)

    Sharp, Gregory; Fritscher, Karl D.; Pekar, Vladimir; Peroni, Marta; Shusharina, Nadya; Veeraraghavan, Harini; Yang, Jinzhong

    2014-01-01

    Due to rapid advances in radiation therapy (RT), especially image guidance and treatment adaptation, a fast and accurate segmentation of medical images is a very important part of the treatment. Manual delineation of target volumes and organs at risk is still the standard routine for most clinics, even though it is time consuming and prone to intra- and interobserver variations. Automated segmentation methods seek to reduce delineation workload and unify the organ boundary definition. In this paper, the authors review the current autosegmentation methods particularly relevant for applications in RT. The authors outline the methods’ strengths and limitations and propose strategies that could lead to wider acceptance of autosegmentation in routine clinical practice. The authors conclude that currently, autosegmentation technology in RT planning is an efficient tool for the clinicians to provide them with a good starting point for review and adjustment. Modern hardware platforms including GPUs allow most of the autosegmentation tasks to be done in a range of a few minutes. In the nearest future, improvements in CT-based autosegmentation tools will be achieved through standardization of imaging and contouring protocols. In the longer term, the authors expect a wider use of multimodality approaches and better understanding of correlation of imaging with biology and pathology. PMID:24784366

  9. Automated epidermis segmentation in histopathological images of human skin stained with hematoxylin and eosin

    Science.gov (United States)

    Kłeczek, Paweł; Dyduch, Grzegorz; Jaworek-Korjakowska, Joanna; Tadeusiewicz, Ryszard

    2017-03-01

    Background: Epidermis area is an important observation area for the diagnosis of inflammatory skin diseases and skin cancers. Therefore, in order to develop a computer-aided diagnosis system, segmentation of the epidermis area is usually an essential, initial step. This study presents an automated and robust method for epidermis segmentation in whole slide histopathological images of human skin, stained with hematoxylin and eosin. Methods: The proposed method performs epidermis segmentation based on the information about shape and distribution of transparent regions in a slide image and information about distribution and concentration of hematoxylin and eosin stains. It utilizes domain-specific knowledge of morphometric and biochemical properties of skin tissue elements to segment the relevant histopathological structures in human skin. Results: Experimental results on 88 skin histopathological images from three different sources show that the proposed method segments the epidermis with a mean sensitivity of 87 %, a mean specificity of 95% and a mean precision of 57%. It is robust to inter- and intra-image variations in both staining and illumination, and makes no assumptions about the type of skin disorder. The proposed method provides a superior performance compared to the existing techniques.
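The three metrics reported above follow directly from per-pixel confusion counts. The toy numbers below are chosen only to reproduce the headline figures and to show how a 57% precision can coexist with a 95% specificity when the epidermis occupies a small fraction of the slide:

```python
def seg_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # fraction of true epidermis pixels found
    specificity = tn / (tn + fp)   # fraction of background pixels kept out
    precision = tp / (tp + fp)     # fraction of predicted epidermis that is real
    return sensitivity, specificity, precision

# Toy counts: epidermis is a small minority class, so even a 5% background
# error rate (fp) is large relative to the true-positive count.
sens, spec, prec = seg_metrics(tp=87, fp=66, tn=1254, fn=13)
```

This class imbalance is why segmentation papers on thin structures report precision alongside specificity rather than relying on specificity alone.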

  10. Vision 20/20: Perspectives on automated image segmentation for radiotherapy

    International Nuclear Information System (INIS)

    Sharp, Gregory; Fritscher, Karl D.; Shusharina, Nadya; Pekar, Vladimir; Peroni, Marta; Veeraraghavan, Harini; Yang, Jinzhong

    2014-01-01

    Due to rapid advances in radiation therapy (RT), especially image guidance and treatment adaptation, a fast and accurate segmentation of medical images is a very important part of the treatment. Manual delineation of target volumes and organs at risk is still the standard routine for most clinics, even though it is time consuming and prone to intra- and interobserver variations. Automated segmentation methods seek to reduce delineation workload and unify the organ boundary definition. In this paper, the authors review the current autosegmentation methods particularly relevant for applications in RT. The authors outline the methods’ strengths and limitations and propose strategies that could lead to wider acceptance of autosegmentation in routine clinical practice. The authors conclude that currently, autosegmentation technology in RT planning is an efficient tool for the clinicians to provide them with a good starting point for review and adjustment. Modern hardware platforms including GPUs allow most of the autosegmentation tasks to be done in a range of a few minutes. In the nearest future, improvements in CT-based autosegmentation tools will be achieved through standardization of imaging and contouring protocols. In the longer term, the authors expect a wider use of multimodality approaches and better understanding of correlation of imaging with biology and pathology

  11. Automated Segmentation of High-Resolution Photospheric Images of Active Regions

    Science.gov (United States)

    Yang, Meng; Tian, Yu; Rao, Changhui

    2018-02-01

    Due to the development of ground-based, large-aperture solar telescopes with adaptive optics (AO) resulting in increasing resolving ability, more accurate sunspot identifications and characterizations are required. In this article, we have developed a set of automated segmentation methods for high-resolution solar photospheric images. Firstly, a local-intensity-clustering level-set method is applied to roughly separate solar granulation and sunspots. Then reinitialization-free level-set evolution is adopted to adjust the boundaries of the photospheric patch; an adaptive intensity threshold is used to discriminate between umbra and penumbra; light bridges are selected according to their regional properties from candidates produced by morphological operations. The proposed method is applied to the solar high-resolution TiO 705.7-nm images taken by the 151-element AO system and Ground-Layer Adaptive Optics prototype system at the 1-m New Vacuum Solar Telescope of the Yunnan Observatory. Experimental results show that the method achieves satisfactory robustness and efficiency with low computational cost on high-resolution images. The method could also be applied to full-disk images, and the calculated sunspot areas correlate well with the data given by the National Oceanic and Atmospheric Administration (NOAA).
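The umbra/penumbra step described above rests on an adaptive intensity threshold. A minimal sketch, with thresholds expressed as invented fractions of the quiet-sun (granulation) mean rather than the paper's actual criterion, and with the level-set stages omitted:

```python
def classify_pixels(pixels, quiet_sun_mean, umbra_frac=0.5, penumbra_frac=0.85):
    """Label each normalized intensity as umbra, penumbra, or granulation."""
    labels = []
    for v in pixels:
        if v < umbra_frac * quiet_sun_mean:
            labels.append("umbra")
        elif v < penumbra_frac * quiet_sun_mean:
            labels.append("penumbra")
        else:
            labels.append("granulation")
    return labels

labels = classify_pixels([0.30, 0.70, 0.95], quiet_sun_mean=1.0)
```

Making the thresholds adaptive (derived from each image's own statistics) is what keeps the classification stable across changing seeing and exposure conditions.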

  12. Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i

    Science.gov (United States)

    Patrick, Matthew R.; Swanson, Don; Orr, Tim R.

    2016-01-01

    Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously-acquired thermal images of the lava lake and measures lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements from the images are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey’s Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
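The core of the routine, locating the lake margin in each thermal image and converting pixel row to elevation via periodic rangefinder calibration, can be sketched as below. The single-column edge detector and all numbers are simplifications; the actual routine combines three segmentation approaches (edge detection, short-term change, and temperature thresholding).

```python
def lake_margin_row(column):
    """Row index of the largest vertical temperature jump in one image column."""
    jumps = [abs(column[i + 1] - column[i]) for i in range(len(column) - 1)]
    return jumps.index(max(jumps)) + 1  # first row on the hot (lake) side

def row_to_elevation(row, calib_row, calib_elev_m, m_per_px):
    """Convert an image row to lake elevation using a rangefinder-calibrated row."""
    return calib_elev_m - (row - calib_row) * m_per_px

# Toy brightness temperatures down one column: cool crater wall, then hot lake.
column = [20.0, 22.0, 21.0, 300.0, 310.0, 305.0]
```

The periodic rangefinder calibration is what turns the relative pixel measurement into an absolute, hazard-relevant elevation.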

  13. Automated spine and vertebrae detection in CT images using object-based image analysis.

    Science.gov (United States)

    Schwier, M; Chitiboi, T; Hülnhagen, T; Hahn, H K

    2013-09-01

    Although computer assistance has become common in medical practice, some of the most challenging tasks that remain unsolved are in the area of automatic detection and recognition. The human visual perception is in general far superior to computer vision algorithms. Object-based image analysis is a relatively new approach that aims to lift image analysis from a pixel-based processing to a semantic region-based processing of images. It allows effective integration of reasoning processes and contextual concepts into the recognition method. In this paper, we present an approach that applies object-based image analysis to the task of detecting the spine in computed tomography images. A spine detection would be of great benefit in several contexts, from the automatic labeling of vertebrae to the assessment of spinal pathologies. We show with our approach how region-based features, contextual information and domain knowledge, especially concerning the typical shape and structure of the spine and its components, can be used effectively in the analysis process. The results of our approach are promising with a detection rate for vertebral bodies of 96% and a precision of 99%. We also gain a good two-dimensional segmentation of the spine along the more central slices and a coarse three-dimensional segmentation. Copyright © 2013 John Wiley & Sons, Ltd.

  14. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    International Nuclear Information System (INIS)

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Chen, Ken-Chung; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate 3D models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first-layer of random forest classifier that can select discriminative features for segmentation. Based on the first-layer of trained classifier, the probability maps are updated, which will be employed to further train the next layer of random forest classifier. By iteratively training the subsequent random forest classifier using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors’ method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method
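The Dice ratio used to validate these segmentations is twice the overlap of the two masks divided by the sum of their sizes. A toy computation on voxel-coordinate sets (the coordinates are invented, not study data):

```python
def dice(a, b):
    """Dice ratio between two voxel label sets: 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

auto   = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)}  # toy automated mask
manual = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 1, 1)}  # toy manual ground truth
```

A Dice ratio of 1.0 means perfect overlap; the paper's 0.94 (mandible) and 0.91 (maxilla) are on this same 0-to-1 scale.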

  15. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 (United States); Chen, Ken-Chung; Tang, Zhen [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Xia, James J., E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery, Shanghai Jiao Tong University School of Medicine, Shanghai Ninth People’s Hospital, Shanghai 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 and Department of Brain and Cognitive Engineering, Korea University, Seoul 02841 (Korea, Republic of)

    2016-01-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating 3D models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment CBCT images. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both the mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide important prior guidance for CBCT segmentation. The authors then extract both appearance features from the CBCTs and context features from the initial probability maps to train the first layer of the random forest classifier, which can select discriminative features for segmentation. Based on the first layer of the trained classifier, the probability maps are updated and then employed to train the next layer of the random forest classifier. By iteratively training subsequent random forest classifiers using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of the mandible and maxilla by the authors' method were 0.94 and 0.91, respectively, which are significantly better than those of the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method
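The majority-voting prior in the first step of this pipeline can be sketched as follows. This is a minimal illustration, not the authors' implementation: the label values (0 = background, 1 = mandible, 2 = maxilla) and the flattened-image representation are assumptions for the example.

```python
def prior_probability_maps(aligned_segmentations, num_labels=3):
    """For each voxel, the prior probability of label k is the fraction
    of aligned expert segmentations that assigned label k to that voxel."""
    n = len(aligned_segmentations)
    size = len(aligned_segmentations[0])
    maps = [[0.0] * size for _ in range(num_labels)]
    for seg in aligned_segmentations:
        for voxel, label in enumerate(seg):
            maps[label][voxel] += 1.0 / n
    return maps

# Three experts labelling a 4-voxel image (flattened for simplicity):
experts = [
    [0, 1, 1, 2],
    [0, 1, 2, 2],
    [0, 0, 1, 2],
]
maps = prior_probability_maps(experts)
print(maps[1])  # probability of "mandible" at each voxel
```

Each probability map then serves as a per-voxel context feature alongside the appearance features extracted from the CBCT itself.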

  16. Small sample sorting of primary adherent cells by automated micropallet imaging and release.

    Science.gov (United States)

    Shah, Pavak K; Herrera-Loeza, Silvia Gabriela; Sims, Christopher E; Yeh, Jen Jen; Allbritton, Nancy L

    2014-07-01

    Primary patient samples are the gold standard for molecular investigations of tumor biology, yet they are difficult to acquire, heterogeneous in nature, and variable in size. Patient-derived xenografts (PDXs), comprising primary tumor tissue cultured in host organisms such as nude mice, permit the propagation of human tumor samples in an in vivo environment and closely mimic the phenotype and gene expression profile of the primary tumor. Although PDX models reduce the cost and complexity of acquiring sample tissue and permit repeated sampling of the primary tumor, these samples are typically contaminated by immune, blood, and vascular tissues from the host organism while also being limited in size. For very small tissue samples (on the order of 10³ cells), purification by fluorescence-activated cell sorting (FACS) is not feasible, while magnetic-activated cell sorting (MACS) of small samples results in very low purity, low yield, and poor viability. We developed a platform for imaging cytometry integrated with micropallet array technology to perform automated cell sorting on very small samples obtained from PDX models of pancreatic and colorectal cancer, using antibody staining of EpCAM (CD326) as the selection criterion. These data demonstrate the ability to automate and efficiently separate samples with very low numbers of cells. © 2014 International Society for Advancement of Cytometry.

  17. MRF-ANN: a machine learning approach for automated ER scoring of breast cancer immunohistochemical images.

    Science.gov (United States)

    Mungle, T; Tewary, S; Das, D K; Arun, I; Basak, B; Agarwal, S; Ahmed, R; Chatterjee, S; Chakraborty, C

    2017-08-01

    Molecular pathology, especially immunohistochemistry, plays an important role in evaluating hormone receptor status along with the diagnosis of breast cancer. Time consumption and inter-/intraobserver variability are major hindrances in evaluating the receptor score. In view of this, the paper proposes an automated Allred scoring methodology for the estrogen receptor (ER). White balancing is used to normalize the colour image, taking into consideration colour variation during staining in different labs. A Markov random field model with expectation-maximization optimization is employed to segment the ER cells. The proposed segmentation methodology is found to have an F-measure of 0.95. An artificial neural network is subsequently used to obtain an intensity-based score for ER cells from pixel colour intensity features. Simultaneously, the proportion score - the percentage of ER-positive cells - is computed via cell counting. The final ER score is computed by adding the intensity and proportion scores - the standard Allred scoring system followed by pathologists. The classification accuracy of the cell classifier in terms of F-measure is 0.9626. The problem of subjective interobserver variability is addressed by quantifying the ER scores from two expert pathologists and the proposed methodology. The intraclass correlation achieved is greater than 0.90. The study has the potential advantage of assisting pathologists in decision making over the manual procedure and could evolve into part of an automated decision support system together with other receptor scoring/analysis procedures. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
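The final combination step described above (proportion score plus intensity score) can be sketched as follows. The proportion bins are the standard Allred cut-offs; the function names are illustrative, and the upstream segmentation and intensity scoring from the paper are not reproduced here.

```python
def allred_proportion_score(pct_positive):
    """Standard Allred proportion bins (score 0-5) from the percentage
    of ER-positive cells."""
    if pct_positive == 0:
        return 0
    if pct_positive < 1:
        return 1
    if pct_positive <= 10:
        return 2
    if pct_positive <= 33:
        return 3
    if pct_positive <= 66:
        return 4
    return 5

def allred_score(pct_positive, intensity_score):
    """Final Allred score = proportion score (0-5) + intensity score (0-3)."""
    return allred_proportion_score(pct_positive) + intensity_score

print(allred_score(45.0, 2))  # 4 + 2 = 6
```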

  18. Sci-Thur AM: YIS – 08: Automated Imaging Quality Assurance for Image-Guided Small Animal Irradiators

    International Nuclear Information System (INIS)

    Johnstone, Chris; Bazalova-Carter, Magdalena

    2016-01-01

    Purpose: To develop quality assurance (QA) standards and tolerance levels for image quality of small animal irradiators. Methods: Fully automated in-house QA software for image analysis of a commercial microCT phantom was created. Quantitative analyses of CT linearity, signal-to-noise ratio (SNR), uniformity and noise, geometric accuracy, modulation transfer function (MTF), and CT number evaluation were performed. Phantom microCT scans from seven institutions acquired with varying parameters (kVp, mA, time, voxel size, and frame rate) and five irradiator units (Xstrahl SARRP, PXI X-RAD 225Cx, PXI X-RAD SmART, GE eXplore CT/RT 140, and GE eXplore CT 120) were analyzed. Multi-institutional data sets were compared using our in-house software to establish pass/fail criteria for each QA test. Results: CT linearity (R2>0.996) was excellent at all but Institution 2. Acceptable SNR (>35) and noise levels (<55 HU) were obtained at four of the seven institutions; the failing scans were acquired with less than 120 mAs. Acceptable MTF (>1.5 lp/mm at MTF=0.2) was obtained at all but Institution 6, owing to its largest scan voxel size (0.35 mm). The geometric accuracy test passed (<1.5%) at five of the seven institutions. Conclusion: Our QA software can be used to rapidly perform quantitative imaging QA for small animal irradiators, accumulate results over time, and display possible changes in imaging functionality relative to original performance and/or the recommended tolerance levels. This tool will aid researchers in maintaining high image quality, enabling precise conformal dose delivery to small animals.
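The pass/fail tolerances quoted in the abstract can be collected into a simple report function. This is a minimal sketch with the abstract's thresholds, not the in-house QA software itself; the function and parameter names are illustrative.

```python
def qa_report(r_squared, snr, noise_hu, mtf_lpmm, geom_err_pct):
    """Apply the pass/fail tolerances from the abstract to one scan's
    measured image-quality metrics; True means the test passed."""
    return {
        "ct_linearity": r_squared > 0.996,    # R^2 of CT number linearity
        "snr":          snr > 35,             # signal-to-noise ratio
        "noise":        noise_hu < 55,        # noise in Hounsfield units
        "mtf":          mtf_lpmm > 1.5,       # lp/mm at MTF = 0.2
        "geometric":    geom_err_pct < 1.5,   # geometric accuracy error (%)
    }

report = qa_report(r_squared=0.998, snr=40, noise_hu=50,
                   mtf_lpmm=1.8, geom_err_pct=1.0)
print(all(report.values()))  # True
```

Accumulating such reports over time is what allows drifts from baseline performance to be flagged.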

  19. Automated Image Analysis of Lung Branching Morphogenesis from Microscopic Images of Fetal Rat Explants

    Directory of Open Access Journals (Sweden)

    Pedro L. Rodrigues

    2014-01-01

    Full Text Available Background. The regulating mechanisms of branching morphogenesis in fetal rat lung explants have been an essential tool for molecular research. This work presents a new methodology to accurately quantify the epithelium, outer contour, and peripheral airway buds of lung explants during cellular development from microscopic images. Methods. The outer contour was defined using an adaptive and multiscale threshold algorithm whose level was automatically calculated based on an entropy maximization criterion. The inner lung epithelium was defined by a clustering procedure that groups small image regions according to the minimum description length principle and local statistical properties. Finally, the number of peripheral buds was counted as the branched ends of a skeletonized image of the inner lung epithelium. Results. The time for lung branching morphometric analysis was reduced by 98% in contrast to the manual method. The best results were obtained in the first two days of cellular development, with smaller standard deviations. No significant differences were found between the automatic and manual results on any culture day. Conclusions. The proposed method introduces a series of advantages related to its intuitive use and accuracy, making the technique suitable for images with different lighting characteristics and allowing a reliable comparison between different researchers.
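An entropy-maximization threshold of the kind used for the outer contour can be sketched with Kapur's criterion on a grey-level histogram. This is an assumption for illustration: the paper's adaptive multiscale scheme is more elaborate than this single-level version.

```python
import math

def max_entropy_threshold(hist):
    """Kapur-style threshold: choose t maximizing the sum of the entropies
    of the background (bins < t) and foreground (bins >= t) distributions."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(p)):
        w0 = sum(p[:t])
        w1 = 1.0 - w0
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# Bimodal toy histogram: a dark mode (bins 0-2) and a bright mode (bins 6-8).
hist = [10, 30, 10, 1, 1, 1, 10, 30, 10]
print(max_entropy_threshold(hist))
```

On this toy histogram the selected threshold falls in the low-count valley between the two modes, which is the behaviour the criterion is chosen for.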

  20. Automated Segmentation of in Vivo and Ex Vivo Mouse Brain Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Alize E.H. Scheenstra

    2009-01-01

    Full Text Available Segmentation of magnetic resonance imaging (MRI data is required for many applications, such as the comparison of different structures or time points, and for annotation purposes. Currently, the gold standard for automated image segmentation is nonlinear atlas-based segmentation. However, these methods are either not sufficient or highly time consuming for mouse brains, owing to the low signal to noise ratio and low contrast between structures compared with other applications. We present a novel generic approach to reduce processing time for segmentation of various structures of mouse brains, in vivo and ex vivo. The segmentation consists of a rough affine registration to a template followed by a clustering approach to refine the rough segmentation near the edges. Compared with manual segmentations, the presented segmentation method has an average kappa index of 0.7 for 7 of 12 structures in in vivo MRI and 11 of 12 structures in ex vivo MRI. Furthermore, we found that these results were equal to the performance of a nonlinear segmentation method, but with the advantage of being 8 times faster. The presented automatic segmentation method is quick and intuitive and can be used for image registration, volume quantification of structures, and annotation.
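The kappa overlap index used for validation above can be sketched for binary masks; for two segmentations it reduces to 2|A∩B|/(|A|+|B|), the same form as the Dice similarity coefficient. The flattened-mask representation is an assumption for the example.

```python
def kappa_index(mask_a, mask_b):
    """Overlap index 2|A∩B| / (|A| + |B|) for two binary masks
    (1 = structure, 0 = background)."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / size if size else 1.0

a = [1, 1, 1, 0, 0]  # automatic segmentation (flattened)
b = [0, 1, 1, 1, 0]  # manual segmentation (flattened)
print(round(kappa_index(a, b), 3))  # 0.667
```

A value of 0.7, as reported above, thus means the two masks share 70% of their combined volume in this normalized sense.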

  1. Automated detection of midsagittal plane in MR images of the head

    Science.gov (United States)

    Wang, Deming; Chalk, Jonathan B.; Doddrell, David M.; Semple, James

    2001-07-01

    A fully automated and robust method is presented for dividing 3D MR images of the human brain into two hemispheres. The method is developed specifically to deal with pathologically affected brains, or brains in which the longitudinal fissure (LF) is significantly widened due to ageing or atrophy associated with neurodegenerative processes. To provide a definitive estimate of the midsagittal plane, the method combines longitudinal fissure lines detected in both axial and coronal slices of T1-weighted MR images and then fits these lines to a 3D plane. The method was applied to 36 brain MR image data sets (15 of them arising from subjects with probable Alzheimer's disease), all exhibiting some degree of fissure widening and/or significant asymmetry due to pathology. Visual inspection of the results revealed that the separation was highly accurate and satisfactory. In a few cases (5 in total), there were minor degrees of asymmetry in the posterior fossa structures despite successful splitting of the cerebral cortex.
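The final fitting step, combining fissure lines from multiple slices into one 3D plane, can be sketched as a least-squares plane fit: the plane normal is the singular vector of the centred point cloud with the smallest singular value. This is a generic sketch of that step, not the authors' exact estimator.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns (centroid, unit normal).
    The normal is the right singular vector with the smallest singular value."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

# Points sampled along fissure lines lying in the plane x = 1
# (a perfectly sagittal toy example):
pts = [(1, 0, 0), (1, 1, 0), (1, 0, 1), (1, 2, 3)]
centroid, normal = fit_plane(pts)
print(np.round(np.abs(normal), 3))  # normal ≈ (1, 0, 0)
```

With noisy fissure points from many slices the same SVD fit returns the best-fitting plane in the least-squares sense.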

  2. Automated Diagnosis of Glaucoma Using Empirical Wavelet Transform and Correntropy Features Extracted From Fundus Images.

    Science.gov (United States)

    Maheshwari, Shishir; Pachori, Ram Bilas; Acharya, U Rajendra

    2017-05-01

    Glaucoma is an ocular disorder caused by increased fluid pressure in the optic nerve. It damages the optic nerve and subsequently causes loss of vision. The available scanning methods are Heidelberg retinal tomography, scanning laser polarimetry, and optical coherence tomography. These methods are expensive and require experienced clinicians to use them. So, there is a need to diagnose glaucoma accurately at low cost. Hence, in this paper, we present a new methodology for the automated diagnosis of glaucoma using digital fundus images based on the empirical wavelet transform (EWT). The EWT is used to decompose the image, and correntropy features are obtained from the decomposed EWT components. These extracted features are ranked using a t-value-based feature selection algorithm. Then, these features are used for the classification of normal and glaucoma images using a least-squares support vector machine (LS-SVM) classifier. The LS-SVM is employed for classification with radial basis function, Morlet wavelet, and Mexican-hat wavelet kernels. The classification accuracy of the proposed method is 98.33% and 96.67% using threefold and tenfold cross-validation, respectively.
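The correntropy feature used above can be sketched in its basic sample form: the mean of a Gaussian kernel applied to the element-wise differences of two signals. This is the textbook definition, assumed here for illustration; the paper computes it over EWT components of the fundus image.

```python
import math

def correntropy(x, y, sigma=1.0):
    """Sample correntropy with a Gaussian kernel:
    V(x, y) = mean_i exp(-(x_i - y_i)^2 / (2 * sigma^2))."""
    n = len(x)
    return sum(math.exp(-((a - b) ** 2) / (2 * sigma ** 2))
               for a, b in zip(x, y)) / n

print(correntropy([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0 for identical signals
```

Because the Gaussian kernel saturates for large differences, correntropy captures higher-order similarity while staying robust to outliers, which is why it is attractive as a texture feature.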

  3. Automated analysis of retinal imaging using machine learning techniques for computer vision.

    Science.gov (United States)

    De Fauw, Jeffrey; Keane, Pearse; Tomasev, Nenad; Visentin, Daniel; van den Driessche, George; Johnson, Mike; Hughes, Cian O; Chu, Carlton; Ledsam, Joseph; Back, Trevor; Peto, Tunde; Rees, Geraint; Montgomery, Hugh; Raine, Rosalind; Ronneberger, Olaf; Cornebise, Julien

    2016-01-01

    There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies, including neovascular ("wet") age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the 'back' of the eye) and optical coherence tomography (OCT, a modality that uses light waves in a similar way to how ultrasound uses sound waves). Changes in population demographics and expectations, and the changing pattern of chronic diseases, create a rising demand for such imaging. Meanwhile, interrogation of such images is time consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to the automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT, and provide novel quantitative measures for specific disease features and for monitoring therapeutic success.

  4. A methodology for automated CPA extraction using liver biopsy image analysis and machine learning techniques.

    Science.gov (United States)

    Tsipouras, Markos G; Giannakeas, Nikolaos; Tzallas, Alexandros T; Tsianou, Zoe E; Manousou, Pinelopi; Hall, Andrew; Tsoulos, Ioannis; Tsianos, Epameinondas

    2017-03-01

    Collagen proportional area (CPA) extraction in liver biopsy images provides the degree of fibrosis expansion in liver tissue, which is the most characteristic histological alteration in hepatitis C virus (HCV) infection. Assessment of the fibrotic tissue is currently based on semiquantitative staging scores such as Ishak and Metavir. Since its introduction as a fibrotic tissue assessment technique, CPA calculation based on image analysis has proven to be more accurate than semiquantitative scores. However, CPA has yet to reach everyday clinical practice, since the lack of standardized and robust methods for computerized image analysis for CPA assessment has proven to be a major limitation. The current work introduces a three-stage fully automated methodology for CPA extraction based on machine learning techniques. Specifically, clustering algorithms have been employed for background-tissue separation, as well as for fibrosis detection in liver tissue regions, in the first and the third stage of the methodology, respectively. Due to the existence of several types of tissue regions in the image (such as blood clots, muscle tissue, structural collagen, etc.), classification algorithms have been employed to identify liver tissue regions and exclude all other non-liver tissue regions from CPA computation. For the evaluation of the methodology, 79 liver biopsy images have been employed, obtaining a 1.31% mean absolute CPA error, with a 0.923 concordance correlation coefficient. The proposed methodology is designed to (i) avoid manual threshold-based and region selection processes, widely used in similar approaches presented in the literature, and (ii) minimize CPA calculation time. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
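Once the clustering and classification stages have produced a fibrosis mask and a liver-tissue mask, the CPA itself is a simple ratio. A minimal sketch, with flattened binary masks assumed for the example:

```python
def collagen_proportional_area(fibrosis_mask, tissue_mask):
    """CPA (%) = fibrotic pixels / liver-tissue pixels * 100, counting
    fibrotic pixels only where they fall inside the liver-tissue mask."""
    tissue = sum(tissue_mask)
    fibrotic = sum(1 for f, t in zip(fibrosis_mask, tissue_mask) if f and t)
    return 100.0 * fibrotic / tissue if tissue else 0.0

tissue   = [1, 1, 1, 1, 0, 0]  # 4 liver-tissue pixels
fibrosis = [1, 0, 1, 0, 1, 0]  # one fibrotic pixel outside tissue is ignored
print(collagen_proportional_area(fibrosis, tissue))  # 50.0
```

Excluding non-liver regions from the denominator is exactly why the classification stage matters: blood clots or muscle tissue counted as tissue would bias the CPA downward.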

  5. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Loh, K.B.; Ramli, N.; Tan, L.K.; Roziah, M. [University of Malaya, Department of Biomedical Imaging, University Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Kuala Lumpur (Malaysia); Rahmat, K. [University of Malaya, Department of Biomedical Imaging, University Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Kuala Lumpur (Malaysia); University Malaya, Biomedical Imaging Department, Kuala Lumpur (Malaysia); Ariffin, H. [University of Malaya, Department of Paediatrics, Faculty of Medicine, Kuala Lumpur (Malaysia)

    2012-07-15

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using automated ROIs with an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old) using 1.5-T MRI. An automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly in the first 12 months. The second phase lasted from 12 to 24 months, during which the rate of FA and MD changes was reduced. After 24 months, the FA and MD values plateaued. DTI is a superior technique to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. (orig.)

  6. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline

    International Nuclear Information System (INIS)

    Loh, K.B.; Ramli, N.; Tan, L.K.; Roziah, M.; Rahmat, K.; Ariffin, H.

    2012-01-01

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using automated ROIs with an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old) using 1.5-T MRI. An automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly in the first 12 months. The second phase lasted from 12 to 24 months, during which the rate of FA and MD changes was reduced. After 24 months, the FA and MD values plateaued. DTI is a superior technique to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. (orig.)

  7. Hyper-Cam automated calibration method for continuous hyperspectral imaging measurements

    Science.gov (United States)

    Gagnon, Jean-Philippe; Habte, Zewdu; George, Jacks; Farley, Vincent; Tremblay, Pierre; Chamberland, Martin; Romano, Joao; Rosario, Dalton

    2010-04-01

    The midwave and longwave infrared regions of the electromagnetic spectrum contain rich information which can be captured by hyperspectral sensors, thus enabling enhanced detection of targets of interest. A continuous hyperspectral imaging measurement capability operated 24/7 over varying seasons and weather conditions permits the evaluation of hyperspectral imaging for detection of different types of targets in real-world environments. Such a measurement site was built at Picatinny Arsenal under the Spectral and Polarimetric Imagery Collection Experiment (SPICE), where two Hyper-Cam hyperspectral imagers are installed at the Precision Armament Laboratory (PAL) and have operated autonomously since fall of 2009. The Hyper-Cams are currently collecting a complete hyperspectral database that contains MWIR and LWIR hyperspectral measurements of several targets under day, night, sunny, cloudy, foggy, rainy and snowy conditions. The Telops Hyper-Cam sensor is an imaging spectrometer that enables spatial and spectral analysis using a single sensor. It is based on Fourier-transform technology, yielding high spectral resolution and enabling high-accuracy radiometric calibration. It provides datacubes of up to 320x256 pixels at spectral resolutions of up to 0.25 cm-1. The MWIR version covers the 3 to 5 μm spectral range and the LWIR version covers the 8 to 12 μm spectral range. This paper describes the automated operation of the two Hyper-Cam sensors being used in the SPICE data collection. The Reveal Automation Control Software (RACS), developed collaboratively by Telops, ARDEC, and ARL, enables flexible operating parameters and autonomous calibration. Under the RACS software, the Hyper-Cam sensors can autonomously calibrate themselves using their internal blackbody targets, with calibration events initiated at user-defined time intervals and by internal beamsplitter temperature monitoring. The RACS software is the first software developed for
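A calibration against internal blackbody targets of the kind mentioned above is often reduced to a per-pixel two-point (gain/offset) model. The sketch below shows that generic scheme under the assumption of a linear detector response; the Hyper-Cam's actual calibration chain is more involved and the function names are illustrative.

```python
def two_point_calibration(counts_cold, counts_hot, radiance_cold, radiance_hot):
    """Derive gain and offset from two blackbody views so that
    radiance = gain * counts + offset for this pixel/band."""
    gain = (radiance_hot - radiance_cold) / (counts_hot - counts_cold)
    offset = radiance_cold - gain * counts_cold
    return gain, offset

# Cold blackbody reads 1000 counts at radiance 10; hot reads 3000 at 50.
gain, offset = two_point_calibration(1000.0, 3000.0, 10.0, 50.0)
print(round(gain * 2000.0 + offset, 6))  # 30.0
```

Scene counts between the two blackbody levels then map linearly onto calibrated radiance, which is why periodic blackbody views are scheduled automatically.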

  8. Automated melanoma detection with a novel multispectral imaging system: results of a prospective study

    International Nuclear Information System (INIS)

    Tomatis, Stefano; Carrara, Mauro; Bono, Aldo; Bartoli, Cesare; Lualdi, Manuela; Tragni, Gabrina; Colombo, Ambrogio; Marchesini, Renato

    2005-01-01

    The aim of this research was to evaluate the performance of a new spectroscopic system in the diagnosis of melanoma. This study involves a consecutive series of 1278 patients with 1391 cutaneous pigmented lesions, including 184 melanomas. In an attempt to approach the 'real world' lesion population, a further set of 1022 non-excised, clinically reassuring lesions was also considered for analysis. Each lesion was imaged in vivo by a multispectral imaging system. The system operates at wavelengths between 483 and 950 nm, acquiring 15 images at equally spaced wavelength intervals. From the images, different lesion descriptors were extracted related to the colour distribution and morphology of the lesions. Data reduction techniques were applied before setting up a neural network classifier designed to perform automated diagnosis. The data set was randomly divided into three sets: train (696 lesions, including 90 melanomas) and verify (348 lesions, including 53 melanomas) for the instruction of a proper neural network, and an independent test set (347 lesions, including 41 melanomas). The neural network was able to discriminate between melanoma and non-melanoma lesions with a sensitivity of 80.4% and a specificity of 75.6% in the data set of 1391 histologized cases. No major variations were found in classification scores when the train, verify and test subsets were separately evaluated. Following receiver operating characteristic (ROC) analysis, the resulting area under the curve was 0.85. No significant differences were found among the areas under the train, verify and test set curves, supporting the network's ability to generalize to new cases. In addition, specificity and area under the ROC curve increased up to 90% and 0.90, respectively, when the additional set of 1022 lesions without histology was added to the test set. Our data show that the performance of an automated system is greatly population dependent, suggesting caution in the comparison with results reported in the
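The sensitivity and specificity figures reported above are computed from the classifier's confusion counts. A minimal sketch, with 1 = melanoma and 0 = non-melanoma as an assumed label encoding:

```python
def sensitivity_specificity(labels, predictions):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 1)
    fn = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 0)
    tn = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 0)
    fp = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

labels      = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predictions = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(labels, predictions)
print(sens, spec)  # 0.75 and ≈0.833
```

Adding many clinically reassuring (mostly negative) lesions to the test set can raise specificity without changing the classifier, which is exactly the population-dependence effect the abstract cautions about.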

  9. A Novel Automated High-Content Analysis Workflow Capturing Cell Population Dynamics from Induced Pluripotent Stem Cell Live Imaging Data.

    Science.gov (United States)

    Kerz, Maximilian; Folarin, Amos; Meleckyte, Ruta; Watt, Fiona M; Dobson, Richard J; Danovi, Davide

    2016-10-01

    Most image analysis pipelines rely on multiple channels per image with subcellular reference points for cell segmentation. Single-channel phase-contrast images are often problematic, especially for cells with unfavorable morphology, such as induced pluripotent stem cells (iPSCs). Live imaging poses a further challenge, because of the introduction of the dimension of time. Evaluations cannot be easily integrated with other biological data sets including analysis of endpoint images. Here, we present a workflow that incorporates a novel CellProfiler-based image analysis pipeline enabling segmentation of single-channel images with a robust R-based software solution to reduce the dimension of time to a single data point. These two packages combined allow robust segmentation of iPSCs solely on phase-contrast single-channel images and enable live imaging data to be easily integrated to endpoint data sets while retaining the dynamics of cellular responses. The described workflow facilitates characterization of the response of live-imaged iPSCs to external stimuli and definition of cell line-specific, phenotypic signatures. We present an efficient tool set for automated high-content analysis suitable for cells with challenging morphology. This approach has potentially widespread applications for human pluripotent stem cells and other cell types. © 2016 Society for Laboratory Automation and Screening.

  10. Development of Automated Image Analysis Tools for Verification of Radiotherapy Field Accuracy with AN Electronic Portal Imaging Device.

    Science.gov (United States)

    Dong, Lei

    1995-01-01

    The successful management of cancer with radiation relies on the accurate deposition of a prescribed dose to a prescribed anatomical volume within the patient. Treatment set-up errors are inevitable because the alignment of field-shaping devices with the patient must be repeated daily, up to eighty times during the course of a fractionated radiotherapy treatment. With the invention of electronic portal imaging devices (EPIDs), patients' portal images can be visualized daily in real time after only a small fraction of the radiation dose has been delivered to each treatment field. However, the accuracy of human visual evaluation of low-contrast portal images has been found to be inadequate. The goal of this research is to develop automated image analysis tools to detect both treatment field shape errors and patient anatomy placement errors with an EPID. A moments method has been developed to align treatment field images to compensate for lack of repositioning precision of the image detector. A figure of merit has also been established to verify the shape and rotation of the treatment fields. Following proper alignment of treatment field boundaries, a cross-correlation method has been developed to detect shifts of the patient's anatomy relative to the treatment field boundary. Phantom studies showed that the moments method aligned the radiation fields to within 0.5 mm of translation and 0.5° of rotation, and that the cross-correlation method aligned anatomical structures inside the radiation field to within 1 mm of translation and 1° of rotation. A new procedure of generating and using digitally reconstructed radiographs (DRRs) at megavoltage energies as reference images was also investigated. The procedure allowed a direct comparison between a designed treatment portal and the actual patient setup positions detected by an EPID. Phantom studies confirmed the feasibility of the methodology. Both the moments method and the cross-correlation technique were
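A moments method for field alignment rests on first- and second-order image moments: the first-order moments give the field centroid (translation) and the second-order central moments give a principal-axis angle (rotation). The sketch below shows that standard computation; it is a generic illustration, not the thesis implementation.

```python
import math

def centroid_and_orientation(image):
    """Centroid from first-order moments and principal-axis angle from
    second-order central moments of a 2D intensity image."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    cx, cy = m10 / m00, m01 / m00
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            mu20 += (x - cx) ** 2 * v
            mu02 += (y - cy) ** 2 * v
            mu11 += (x - cx) * (y - cy) * v
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return (cx, cy), angle

# A horizontal bar: centroid at its middle, principal-axis angle 0.
bar = [[0, 0, 0, 0],
       [1, 1, 1, 1],
       [0, 0, 0, 0]]
(cx, cy), angle = centroid_and_orientation(bar)
print(cx, cy, angle)  # 1.5 1.0 0.0
```

Matching centroids and principal-axis angles between two field images yields the translation and rotation needed to bring them into register.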

  11. Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks.

    Science.gov (United States)

    Yu, Lequan; Chen, Hao; Dou, Qi; Qin, Jing; Heng, Pheng-Ann

    2017-04-01

    Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the images. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply residual learning to cope with the degradation and overfitting problems when a network goes deeper. This technique ensures that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features based on segmented results instead of whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on the ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. Experimental results demonstrate the significant performance gains of the proposed framework, ranking first in classification (among 25 teams) and second in segmentation (among 28 teams)
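The core idea of residual learning is the skip connection y = x + F(x): the identity path lets very deep stacks default to the identity mapping, which counters the degradation problem. The toy block below illustrates only that algebraic structure; the actual FCRN uses convolutions, batch normalization and ReLU stacks, and the names here are illustrative.

```python
import numpy as np

def residual_block(x, weight):
    """y = x + F(x), with F(x) a one-layer ReLU transform standing in for
    the convolutional residual branch."""
    fx = np.maximum(0.0, x @ weight)  # toy residual branch F(x)
    return x + fx

x = np.array([[1.0, -2.0]])
w = np.zeros((2, 2))  # with zero weights, F(x) = 0 and the block is the identity
print(residual_block(x, w))  # [[ 1. -2.]]
```

Because a block with a zero residual branch passes its input through unchanged, stacking more blocks can never make the representable function worse, which is the intuition behind training networks of 50+ layers.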

  12. Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images.

    Science.gov (United States)

    Vermeer, K A; van der Schoot, J; Lemij, H G; de Boer, J F

    2011-06-01

    Current OCT devices provide three-dimensional (3D) in-vivo images of the human retina. The resulting very large data sets are difficult to manually assess. Automated segmentation is required to automatically process the data and produce images that are clinically useful and easy to interpret. In this paper, we present a method to segment the retinal layers in these images. Instead of using complex heuristics to define each layer, simple features are defined and machine learning classifiers are trained based on manually labeled examples. When applied to new data, these classifiers produce labels for every pixel. After regularization of the 3D labeled volume to produce a surface, this results in consistent, three-dimensionally segmented layers that match known retinal morphology. Six labels were defined, corresponding to the following layers: Vitreous, retinal nerve fiber layer (RNFL), ganglion cell layer & inner plexiform layer, inner nuclear layer & outer plexiform layer, photoreceptors & retinal pigment epithelium and choroid. For both normal and glaucomatous eyes that were imaged with a Spectralis (Heidelberg Engineering) OCT system, the five resulting interfaces were compared between automatic and manual segmentation. RMS errors for the top and bottom of the retina were between 4 and 6 μm, while the errors for intra-retinal interfaces were between 6 and 15 μm. The resulting total retinal thickness maps corresponded with known retinal morphology. RNFL thickness maps were compared to GDx (Carl Zeiss Meditec) thickness maps. Both maps were mostly consistent but local defects were better visualized in OCT-derived thickness maps.

  13. Identification and red blood cell automated counting from blood smear images using computer-aided system.

    Science.gov (United States)

    Acharya, Vasundhara; Kumar, Preetham

    2018-03-01

    Red blood cell count plays a vital role in identifying the overall health of the patient. Hospitals use the hemocytometer to count the blood cells. The conventional method of placing the smear under a microscope and counting the cells manually leads to erroneous results and puts medical laboratory technicians under stress. A computer-aided system will help to attain precise results in less time. This research work proposes an image-processing technique for counting the number of red blood cells. It aims to examine and process the blood smear image in order to support the counting of red blood cells and identify the number of normal and abnormal cells in the image automatically. The K-medoids algorithm, which is robust to external noise, is used to extract the WBCs from the image. Granulometric analysis is used to separate the red blood cells from the white blood cells. The red blood cells obtained are counted using the labeling algorithm and the circular Hough transform. The radius range for the circle-drawing algorithm is estimated by computing the distance of the pixels from the boundary, which automates the entire algorithm. A comparison is done between the counts obtained using the labeling algorithm and the circular Hough transform. Results of the work showed that the circular Hough transform was more accurate in counting the red blood cells than the labeling algorithm, as it was successful in identifying even the overlapping cells. The work also intends to compare the results of the cell count done using the proposed methodology and the manual approach. The work is designed to address all the drawbacks of the previous research work. The research work can be extended to extract various texture and shape features of the abnormal cells identified, so that diseases like anemia of inflammation and chronic disease can be detected at the earliest.
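The circular Hough transform at the heart of the counting step can be sketched compactly. The following is a simplified, fixed-radius voting scheme on idealized floating-point edge samples, not the paper's implementation; the radius-range estimation and the labeling comparison are omitted, and all coordinates below are invented for illustration.

```python
import math

def circle_edges(cy, cx, r, step=5):
    """Idealized (floating-point) edge samples of one circular cell outline."""
    return [(cy + r * math.sin(math.radians(t)),
             cx + r * math.cos(math.radians(t)))
            for t in range(0, 360, step)]

def hough_circles(edge_points, radius, shape, min_votes, step=5):
    """Each edge point votes for every candidate centre `radius` away;
    accumulator peaks above `min_votes` are reported as cell centres."""
    h, w = shape
    acc = [[0] * w for _ in range(h)]
    for y, x in edge_points:
        for t in range(0, 360, step):
            cy = int(round(y - radius * math.sin(math.radians(t))))
            cx = int(round(x - radius * math.cos(math.radians(t))))
            if 0 <= cy < h and 0 <= cx < w:
                acc[cy][cx] += 1
    return [(y, x) for y in range(h) for x in range(w) if acc[y][x] >= min_votes]

# Two synthetic "cells" of radius 10 in a 60x60 frame.
edges = circle_edges(20, 20, 10) + circle_edges(40, 45, 10)
centres = hough_circles(edges, radius=10, shape=(60, 60), min_votes=60)
```

Because each edge point votes along a full circle of candidate centres, two cells that overlap still produce two separate accumulator peaks, which is why the transform handles overlapping cells better than connected-component labeling.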

  14. Automated local bright feature image analysis of nuclear protein distribution identifies changes in tissue phenotype

    Energy Technology Data Exchange (ETDEWEB)

    Knowles, David; Sudar, Damir; Bator, Carol; Bissell, Mina

    2006-02-01

    The organization of nuclear proteins is linked to cell and tissue phenotypes. When cells arrest proliferation, undergo apoptosis, or differentiate, the distribution of nuclear proteins changes. Conversely, forced alteration of the distribution of nuclear proteins modifies cell phenotype. Immunostaining and fluorescence microscopy have been critical for such findings. However, there is an increasing need for quantitative analysis of nuclear protein distribution to decipher epigenetic relationships between nuclear structure and cell phenotype, and to unravel the mechanisms linking nuclear structure and function. We have developed imaging methods to quantify the distribution of fluorescently-stained nuclear protein NuMA in different mammary phenotypes obtained using three-dimensional cell culture. Automated image segmentation of DAPI-stained nuclei was generated to isolate thousands of nuclei from three-dimensional confocal images. Prominent features of fluorescently-stained NuMA were detected using a novel local bright feature analysis technique, and their normalized spatial density calculated as a function of the distance from the nuclear perimeter to its center. The results revealed marked changes in the distribution of the density of NuMA bright features as non-neoplastic cells underwent phenotypically normal acinar morphogenesis. In contrast, we did not detect any reorganization of NuMA during the formation of tumor nodules by malignant cells. Importantly, the analysis also discriminated proliferating non-neoplastic cells from proliferating malignant cells, suggesting that these imaging methods are capable of identifying alterations linked not only to the proliferation status but also to the malignant character of cells. We believe that this quantitative analysis will have additional applications for classifying normal and pathological tissues.

  15. Primary histologic diagnosis using automated whole slide imaging: a validation study

    Directory of Open Access Journals (Sweden)

    Jukic Drazen M

    2006-04-01

    Full Text Available Abstract Background Only prototypes 5 years ago, high-speed, automated whole slide imaging (WSI) systems (also called digital slide systems, virtual microscopes or wide field imagers) are becoming increasingly capable and robust. Modern devices can capture a slide in 5 minutes at spatial sampling periods of less than 0.5 micron/pixel. The capacity to rapidly digitize large numbers of slides should eventually have a profound, positive impact on pathology. It is important, however, that pathologists validate these systems during development, not only to identify their limitations but to guide their evolution. Methods Three pathologists fully signed out 25 cases representing 31 parts. The laboratory information system was used to simulate real-world sign-out conditions including entering a full diagnostic field and comment (when appropriate) and ordering special stains and recuts. For each case, discrepancies between diagnoses were documented by committee and a "consensus" report was formed and then compared with the microscope-based, sign-out report from the clinical archive. Results In 17 of 25 cases there were no discrepancies between the individual study pathologist reports. In 8 of the remaining cases, there were 12 discrepancies, including 3 in which image quality could be at least partially implicated. When the WSI consensus diagnoses were compared with the original sign-out diagnoses, no significant discrepancies were found. Full text of the pathologist reports, the WSI consensus diagnoses, and the original sign-out diagnoses are available as an attachment to this publication. Conclusion The results indicated that the image information contained in current whole slide images is sufficient for pathologists to make reliable diagnostic decisions and compose complex diagnostic reports. This is a very positive result; however, this does not mean that WSI is as good as a microscope. 
Virtually every slide had focal areas in which image quality (focus

  16. Automated vessel shadow segmentation of fovea-centered spectral-domain images from multiple OCT devices

    Science.gov (United States)

    Wu, Jing; Gerendas, Bianca S.; Waldstein, Sebastian M.; Simader, Christian; Schmidt-Erfurth, Ursula

    2014-03-01

    Spectral-domain Optical Coherence Tomography (SD-OCT) is a non-invasive modality for acquiring high resolution, three-dimensional (3D) cross sectional volumetric images of the retina and the subretinal layers. SD-OCT also allows the detailed imaging of retinal pathology, aiding clinicians in the diagnosis of sight degrading diseases such as age-related macular degeneration (AMD) and glaucoma. Disease diagnosis, assessment, and treatment requires a patient to undergo multiple OCT scans, possibly using different scanning devices, to accurately and precisely gauge disease activity, progression and treatment success. However, the use of OCT imaging devices from different vendors, combined with patient movement may result in poor scan spatial correlation, potentially leading to incorrect patient diagnosis or treatment analysis. Image registration can be used to precisely compare disease states by registering differing 3D scans to one another. In order to align 3D scans from different time-points and vendors using registration, landmarks are required, the most obvious being the retinal vasculature. Presented here is a fully automated cross-vendor method to acquire retina vessel locations for OCT registration from fovea centred 3D SD-OCT scans based on vessel shadows. Noise filtered OCT scans are flattened based on vendor retinal layer segmentation, to extract the retinal pigment epithelium (RPE) layer of the retina. Voxel based layer profile analysis and k-means clustering is used to extract candidate vessel shadow regions from the RPE layer. In conjunction, the extracted RPE layers are combined to generate a projection image featuring all candidate vessel shadows. Image processing methods for vessel segmentation of the OCT constructed projection image are then applied to optimize the accuracy of OCT vessel shadow segmentation through the removal of false positive shadow regions such as those caused by exudates and cysts. 
Validation of segmented vessel shadows uses
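The k-means clustering used above to pull candidate vessel shadows out of the RPE layer can be illustrated with a minimal 1-D Lloyd's algorithm; the intensity profile and the two-cluster choice below are assumptions for the sketch, not values from the paper.

```python
def kmeans_1d(values, k=2, iters=20):
    """Plain Lloyd's algorithm on scalar intensities."""
    lo, hi = min(values), max(values)
    # Spread the initial centroids across the observed intensity range.
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Recompute each centroid as the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Synthetic intensity profile along the RPE layer: vessel shadows are dark dips.
profile = [200, 198, 202, 60, 55, 58, 199, 201, 62, 57, 203, 200]
centroids, clusters = kmeans_1d(profile, k=2)
shadow_centroid = min(centroids)   # the darker cluster marks shadow candidates
```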

  17. Automated analysis of images acquired with electronic portal imaging device during delivery of quality assurance plans for inversely optimized arc therapy

    DEFF Research Database (Denmark)

    Fredh, Anna; Korreman, Stine; Rosenschöld, Per Munck af

    2010-01-01

    This work presents an automated method for comprehensively analyzing EPID images acquired for quality assurance of RapidArc treatment delivery. In-house-developed software has been used for the analysis, and long-term results from measurements on three linacs are presented....

  18. Automated Processing of Imaging Data through Multi-tiered Classification of Biological Structures Illustrated Using Caenorhabditis elegans.

    Directory of Open Access Journals (Sweden)

    Mei Zhan

    2015-04-01

    Full Text Available Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. 
Using these examples as a

  19. Image Features Based on Characteristic Curves and Local Binary Patterns for Automated HER2 Scoring

    Directory of Open Access Journals (Sweden)

    Ramakrishnan Mukundan

    2018-02-01

    Full Text Available This paper presents novel feature descriptors and classification algorithms for the automated scoring of HER2 in Whole Slide Images (WSI) of breast cancer histology slides. Since a large amount of processing is involved in analyzing WSI images, the primary design goal has been to keep the computational complexity to the minimum possible level and to use simple, yet robust feature descriptors that can provide accurate classification of the slides. We propose two types of feature descriptors that encode important information about staining patterns and the percentage of staining present in ImmunoHistoChemistry (IHC)-stained slides. The first descriptor is called a characteristic curve, which is a smooth non-increasing curve that represents the variation of percentage of staining with saturation levels. The second new descriptor introduced in this paper is a local binary pattern (LBP) feature curve, which is also a non-increasing smooth curve that represents the local texture of the staining patterns. Both descriptors show excellent interclass variance and intraclass correlation and are suitable for the design of automatic HER2 classification algorithms. This paper gives the detailed theoretical aspects of the feature descriptors and also provides experimental results and a comparative analysis.
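A characteristic curve as described above, the fraction of stained pixels at or above each saturation level, can be computed directly; it is non-increasing by construction, since the condition tightens as the level rises. The saturation values and levels below are hypothetical, not data from the paper.

```python
def characteristic_curve(saturations, levels):
    """Fraction of stained pixels whose saturation is at least each level."""
    n = len(saturations)
    return [sum(1 for s in saturations if s >= level) / n for level in levels]

# Hypothetical per-pixel saturation values (0..1) from an IHC-stained region.
sat = [0.1, 0.2, 0.35, 0.5, 0.55, 0.7, 0.8, 0.9]
levels = [0.0, 0.25, 0.5, 0.75]
curve = characteristic_curve(sat, levels)
```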

  20. Quantitative measurements of human sperm nuclei using automated microscopy and image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wyrobek, A.J.; Firpo, M. (Lawrence Livermore National Lab., CA (United States)); Sudar, D. (Univ. of California, San Francisco (United States))

    1993-01-01

    A package of computer codes, called Morphometry Automation Program (MAP), was developed to (a) detect human sperm smeared onto glass slides, (b) measure more than 30 aspects of the size, shape, texture, and staining of their nuclei, and (c) retain operator evaluation of the process. MAP performs the locating and measurement functions automatically, without operator assistance. In addition to standard measurements, MAP utilizes axial projections of nuclear area and stain intensity to detect asymmetries. MAP also stores for each cell the gray-scale images for later display and evaluation, and it retains coordinates for optional relocation and inspection under the microscope. MAP operates on the Quantitative Image Processing System (QUIPS) at LLNL. MAP has potential applications in the evaluation of infertility and in reproductive toxicology, such as (a) classifying sperm into clinical shape categories for assessing fertility status, (b) identifying subtle effects of host factors (diet, stress, etc.), (c) assessing the risk of potential spermatogenic toxicants (tobacco, drugs, etc.), and (d) investigating associations with abnormal pregnancy outcomes (time to pregnancy, early fetal loss, etc.).

  1. Automated imaging of cellular spheroids with selective plane illumination microscopy on a chip (Conference Presentation)

    Science.gov (United States)

    Paiè, Petra; Bassi, Andrea; Bragheri, Francesca; Osellame, Roberto

    2017-02-01

    Selective plane illumination microscopy (SPIM) is an optical sectioning technique that allows imaging of biological samples at high spatio-temporal resolution. Standard SPIM devices require dedicated set-ups, complex sample preparation and accurate system alignment, thus limiting the automation of the technique, its accessibility and throughput. We present a millimeter-scaled optofluidic device that incorporates selective plane illumination and fully automatic sample delivery and scanning. To this end an integrated cylindrical lens and a three-dimensional fluidic network were fabricated by femtosecond laser micromachining into a single glass chip. This device can upgrade any standard fluorescence microscope to a SPIM system. We used SPIM on a CHIP to automatically scan biological samples under a conventional microscope, without the need of any motorized stage: tissue spheroids expressing fluorescent proteins were flowed in the microchannel at constant speed and their sections were acquired while passing through the light sheet. We demonstrate high-throughput imaging of the entire sample volume (with a rate of 30 samples/min), segmentation and quantification in thick (100-300 μm diameter) cellular spheroids. This optofluidic device gives access to SPIM analyses to non-expert end-users, opening the way to automatic and fast screening of a high number of samples at subcellular resolution.

  2. Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis

    Science.gov (United States)

    Chung, Howard; Cobzas, Dana; Birdsell, Laura; Lieffers, Jessica; Baracos, Vickie

    2009-02-01

    The ability to compute body composition in cancer patients lends itself to determining the specific clinical outcomes associated with fat and lean tissue stores. For example, a wasting syndrome of advanced disease associates with shortened survival. Moreover, certain tissue compartments represent sites for drug distribution and are likely determinants of chemotherapy efficacy and toxicity. CT images are abundant, but these cannot be fully exploited unless there exist practical and fast approaches for tissue quantification. Here we propose a fully automated method for segmenting muscle, visceral and subcutaneous adipose tissues, taking the approach of shape modeling for the analysis of skeletal muscle. Muscle shape is represented using PCA-encoded Free Form Deformations with respect to a mean shape. The shape model is learned from manually segmented images and used in conjunction with a tissue appearance prior. VAT and SAT are segmented based on the final deformed muscle shape. In comparing the automatic and manual methods, coefficients of variation (COV) (1-2%) were similar to or smaller than inter- and intra-observer COVs reported for manual segmentation.
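The PCA encoding of shape variation can be sketched with NumPy. The toy "shapes" below stand in for flattened Free Form Deformation control points and are invented for illustration; the paper's actual model is learned from manually segmented CT images.

```python
import numpy as np

# Toy "shapes": each row is a flattened set of control-point coordinates.
shapes = np.array([
    [0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0],
    [0.1, 0.0, 1.1, 0.0, 1.1, 1.0, 0.1, 1.0],
    [0.0, 0.1, 1.0, 0.1, 1.0, 1.1, 0.0, 1.1],
    [0.1, 0.1, 1.1, 0.1, 1.1, 1.1, 0.1, 1.1],
], dtype=float)

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA via SVD: rows of Vt are the principal modes of shape variation.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
modes = Vt[:2]                       # keep the two dominant modes

# Encode each shape as coefficients on the modes, then reconstruct.
coeffs = centered @ modes.T
reconstructed = mean_shape + coeffs @ modes
err = np.abs(reconstructed - shapes).max()
```

Because the toy data vary along only two directions, two modes reconstruct the shapes essentially exactly; on real segmentations, the number of retained modes trades compactness against reconstruction error.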

  3. Large-scale automated identification of mouse brain cells in confocal light sheet microscopy images.

    Science.gov (United States)

    Frasconi, Paolo; Silvestri, Ludovico; Soda, Paolo; Cortini, Roberto; Pavone, Francesco S; Iannello, Giulio

    2014-09-01

    Recently, confocal light sheet microscopy has enabled high-throughput acquisition of whole mouse brain 3D images at the micron scale resolution. This poses the unprecedented challenge of creating accurate digital maps of the whole set of cells in a brain. We introduce a fast and scalable algorithm for fully automated cell identification. We obtained the whole digital map of Purkinje cells in mouse cerebellum consisting of a set of 3D cell center coordinates. The method is accurate and we estimated an F1 measure of 0.96 using 56 representative volumes, totaling 1.09 GVoxel and containing 4138 manually annotated soma centers. Source code and its documentation are available at http://bcfind.dinfo.unifi.it/. The whole pipeline of methods is implemented in Python and makes use of Pylearn2 and modified parts of Scikit-learn. Brain images are available on request. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
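The reported F1 measure for soma-centre detection combines precision and recall under a tolerance-based matching of predicted to annotated centres. A minimal sketch with hypothetical 3-D centres and a greedy matcher (the authors' exact matching rule is not specified here):

```python
import math

def f1_score(pred, truth, tol=2.0):
    """Greedy one-to-one matching of predicted to true centres within tol."""
    unmatched = list(truth)
    tp = 0
    for p in pred:
        for t in unmatched:
            if math.dist(p, t) <= tol:   # Euclidean distance, Python >= 3.8
                unmatched.remove(t)
                tp += 1
                break
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0

# Hypothetical 3-D soma centres: one false positive, one missed cell.
truth = [(10, 10, 10), (30, 30, 30), (50, 50, 50), (70, 70, 70)]
pred = [(10, 11, 10), (30, 30, 29), (90, 90, 90)]
f1 = f1_score(pred, truth)   # precision 2/3, recall 1/2 -> F1 = 4/7
```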

  4. AUTOMATED DATA PRODUCTION FOR A NOVEL AIRBORNE MULTIANGLE SPECTROPOLARIMETRIC IMAGER (AIRMSPI)

    Directory of Open Access Journals (Sweden)

    V. M. Jovanovic

    2012-07-01

    Full Text Available A novel polarimetric imaging technique making use of rapid retardance modulation has been developed by JPL as a part of NASA's Instrument Incubator Program. It has been built into the Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) under NASA's Airborne Instrument Technology Transition Program, and is aimed primarily at remote sensing of the amounts and microphysical properties of aerosols and clouds. AirMSPI includes an 8-band (355, 380, 445, 470, 555, 660, 865, 935 nm) pushbroom camera that measures polarization in a subset of the bands (470, 660, and 865 nm). The camera is mounted on a gimbal and acquires imagery in a configurable set of along-track viewing angles ranging between +67° and –67° relative to nadir. As a result, near simultaneous multi-angle, multi-spectral, and polarimetric measurements of the targeted areas at a spatial resolution ranging from 7 m to 20 m (depending on the viewing angle) can be derived. An automated data production system is being built to support high data acquisition rate in concert with co-registration and orthorectified mapping requirements. To date, a number of successful engineering checkout flights were conducted in October 2010, August-September 2011, and January 2012. Data products resulting from these flights will be presented.

  5. Automated Waterline Detection in the Wadden Sea Using High-Resolution TerraSAR-X Images

    Directory of Open Access Journals (Sweden)

    Stefan Wiehle

    2015-01-01

    Full Text Available We present an algorithm for automatic detection of the land-water-line from TerraSAR-X images acquired over the Wadden Sea. In this coastal region of the southeastern North Sea, a strip of up to 20 km of seabed falls dry during low tide, revealing mudflats and tidal creeks. The tidal currents transport sediments and can change the coastal shape with erosion rates of several meters per month. This rate can be strongly increased by storm surges which also cause flooding of usually dry areas. Due to the high number of ships traveling through the Wadden Sea to the largest ports of Germany, frequent monitoring of the bathymetry is also an important task for maritime security. For such an extended area and the required short intervals of a few months, only remote sensing methods can perform this task efficiently. Automating the waterline detection in weather-independent radar images provides a fast and reliable way to spot changes in the coastal topography. The presented algorithm first performs smoothing, brightness thresholding, and edge detection. In the second step, edge drawing and flood filling are iteratively performed to determine optimal thresholds for the edge drawing. In the last step, small misdetections are removed.
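The flood-filling step can be illustrated with a standard breadth-first fill on a small thresholded grid; the grid values, the 4-connectivity choice, and the waterline definition below are assumptions for the sketch, not the paper's algorithm.

```python
from collections import deque

def flood_fill(grid, seed):
    """BFS flood fill over 4-connected cells sharing the seed's value."""
    h, w = len(grid), len(grid[0])
    target = grid[seed[0]][seed[1]]
    seen = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen \
                    and grid[ny][nx] == target:
                seen.add((ny, nx))
                queue.append((ny, nx))
    return seen

# 0 = water (dark in SAR), 1 = land (bright) after brightness thresholding.
grid = [
    [0, 0, 0, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 1, 1, 1, 1],
    [0, 0, 1, 1, 1],
]
water = flood_fill(grid, (0, 0))
# Waterline: water cells with at least one 4-connected land neighbour.
waterline = {(y, x) for (y, x) in water
             for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
             if 0 <= ny < len(grid) and 0 <= nx < len(grid[0])
             and grid[ny][nx] == 1}
```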

  6. Automated analysis for early signs of cerebral infarctions on brain X-ray CT images

    International Nuclear Information System (INIS)

    Oshima, Kazuki; Hara, Takeshi; Zhou, X.; Muramatsu, Chisako; Fujita, Hiroshi; Sakashita, Keiji

    2010-01-01

    t-PA (tissue plasminogen activator) thrombolysis is an effective clinical treatment for acute cerebral infarction because it breaks down blood clots. However, its use carries a risk of hemorrhage. The treatment guideline therefore requires ruling out cerebral hemorrhage and widespread early CT signs (ECS) on CT images. In this study, we analyzed the CT values of normal brains and of ECS by comparing patient brain CT scans with a statistical normal model. Our method constructed a normal brain model from 60 normal brain X-ray CT images. We calculated Z-scores based on the statistical model for 16 cases of cerebral infarction with ECS, 3 cases of cerebral infarction without ECS, and 25 normal brains. Statistical analysis showed a significant difference between the control and abnormal groups. This result implies that an automated detection scheme for ECS based on Z-scores could be applied in brain computer-aided diagnosis (CAD). (author)
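The Z-score comparison against a statistical normal model reduces to z = (x − μ) / σ per pixel, where μ and σ are estimated across the normal cases. A minimal sketch on invented CT values (the −2.5 threshold is illustrative, not from the study):

```python
import statistics

def z_score_map(patient, normals):
    """Per-pixel Z-score of a patient scan against a set of normal scans."""
    zmap = []
    for i, value in enumerate(patient):
        ref = [scan[i] for scan in normals]          # same pixel across normals
        mu = statistics.mean(ref)
        sigma = statistics.stdev(ref)                # sample standard deviation
        zmap.append((value - mu) / sigma)
    return zmap

# Hypothetical CT values (HU) at four pixel positions, from 5 normal scans.
normals = [
    [32, 34, 33, 31],
    [33, 33, 34, 32],
    [31, 35, 33, 33],
    [32, 34, 32, 31],
    [33, 34, 33, 33],
]
patient = [32, 34, 26, 32]   # third pixel abnormally dark: candidate ECS
zmap = z_score_map(patient, normals)
suspicious = [i for i, z in enumerate(zmap) if z < -2.5]
```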

  7. Automated processing of thermal infrared images of Osservatorio Vesuviano permanent surveillance network by using Matlab code

    Science.gov (United States)

    Sansivero, Fabio; Vilardo, Giuseppe; Caputo, Teresa

    2017-04-01

    The permanent thermal infrared surveillance network of Osservatorio Vesuviano (INGV) is composed of 6 stations which acquire IR frames of fumarole fields in the Campi Flegrei caldera and inside the Vesuvius crater (Italy). The IR frames are uploaded to a dedicated server in the Surveillance Center of Osservatorio Vesuviano in order to process the infrared data and to extract all the information they contain. In a first phase, the infrared data are processed by an automated system (A.S.I.R.A. Acq, Automated System of IR Analysis and Acquisition) developed in the Matlab environment with a user-friendly graphical user interface (GUI). ASIRA daily generates time-series of residual values of the maximum temperatures observed in the IR scenes after the removal of seasonal effects. These time-series are displayed in the Surveillance Room of Osservatorio Vesuviano and provide information about the evolution of the shallow temperature field of the observed areas. In particular, the features of ASIRA Acq include: a) efficient quality selection of IR scenes, b) co-registration of IR images with respect to a reference frame, c) seasonal correction using a background-removal methodology, d) filing of the IR matrices and of the processed data in shared archives accessible for interrogation. The daily archived records can also be processed by ASIRA Plot (Matlab code with a GUI) to visualize IR data time-series and to help in evaluating input parameters for further data processing and analysis. Additional processing features are accomplished in a second phase by ASIRA Tools, Matlab code with a GUI developed to extract further information from the dataset in an automated way. 
The main functions of ASIRA Tools are: a) the analysis of temperature variations of each pixel of the IR frame in a given time interval, b) the removal of seasonal effects from temperature of every pixel in the IR frames by using an analytic approach (removal of sinusoidal long term seasonal component by using a

  8. Automated tissue classification of intracardiac optical coherence tomography images (Conference Presentation)

    Science.gov (United States)

    Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.

    2016-03-01

    Remodeling of the myocardium is associated with increased risk of arrhythmia and heart failure. Our objective is to automatically identify regions of fibrotic myocardium, dense collagen, and adipose tissue, which can serve as a way to guide radiofrequency ablation therapy or endomyocardial biopsies. Using computer vision and machine learning, we present an automated algorithm to classify tissue compositions from cardiac optical coherence tomography (OCT) images. Three dimensional OCT volumes were obtained from 15 human hearts ex vivo within 48 hours of donor death (source, NDRI). We first segmented B-scans using a graph searching method. We estimated the boundary of each region by minimizing a cost function, which consisted of intensity, gradient, and contour smoothness. Then, features, including texture analysis, optical properties, and statistics of high moments, were extracted. We used a statistical model, relevance vector machine, and trained this model with abovementioned features to classify tissue compositions. To validate our method, we applied our algorithm to 77 volumes. The datasets for validation were manually segmented and classified by two investigators who were blind to our algorithm results and identified the tissues based on trichrome histology and pathology. The difference between automated and manual segmentation was 51.78 +/- 50.96 μm. Experiments showed that the attenuation coefficients of dense collagen were significantly different from other tissue types (P < 0.05, ANOVA). Importantly, myocardial fibrosis tissues were different from normal myocardium in entropy and kurtosis. The tissue types were classified with an accuracy of 84%. The results show good agreements with histology.

  9. Automated assessment of diabetic retinopathy severity using content-based image retrieval in multimodal fundus photographs.

    Science.gov (United States)

    Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Bekri, Lynda; Daccache, Wissam; Roux, Christian; Cochener, Béatrice

    2011-10-21

    Recent studies on diabetic retinopathy (DR) screening in fundus photographs suggest that disagreements between algorithms and clinicians are now comparable to disagreements among clinicians. The purpose of this study is to (1) determine whether this observation also holds for automated DR severity assessment algorithms, and (2) show the interest of such algorithms in clinical practice. A dataset of 85 consecutive DR examinations (168 eyes, 1176 multimodal eye fundus photographs) was collected at Brest University Hospital (Brest, France). Two clinicians with different experience levels determined DR severity in each eye, according to the International Clinical Diabetic Retinopathy Disease Severity (ICDRS) scale. Based on Cohen's kappa (κ) measurements, the performance of clinicians at assessing DR severity was compared to the performance of state-of-the-art content-based image retrieval (CBIR) algorithms from our group. At assessing DR severity in each patient, intraobserver agreement was κ = 0.769 for the most experienced clinician. Interobserver agreement between clinicians was κ = 0.526. Interobserver agreement between the most experienced clinicians and the most advanced algorithm was κ = 0.592. Besides, the most advanced algorithm was often able to predict agreements and disagreements between clinicians. Automated DR severity assessment algorithms, trained to imitate experienced clinicians, can be used to predict when young clinicians would agree or disagree with their more experienced fellow members. Such algorithms may thus be used in clinical practice to help validate or invalidate their diagnoses. CBIR algorithms, in particular, may also be used for pooling diagnostic knowledge among peers, with applications in training and coordination of clinicians' prescriptions.
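Cohen's kappa, the agreement measure used throughout this study, corrects observed agreement for the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). A minimal sketch with hypothetical severity grades (not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: inter-rater agreement corrected for chance."""
    n = len(rater_a)
    # Observed agreement: fraction of items the raters grade identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: dot product of the two raters' label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical DR severity grades (0-4) for 10 eyes from two clinicians.
grades_a = [0, 0, 1, 2, 2, 3, 0, 1, 4, 2]
grades_b = [0, 0, 1, 2, 3, 3, 0, 2, 4, 2]
kappa = cohens_kappa(grades_a, grades_b)
```

Here observed agreement is 0.8 and chance agreement 0.23, giving κ ≈ 0.74, in the same range as the intraobserver agreement reported above.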

  10. Automated Recognition of Vegetation and Water Bodies on the Territory of Megacities in Satellite Images of Visible and IR Bands

    Science.gov (United States)

    Mozgovoy, Dmitry K.; Hnatushenko, Volodymyr V.; Vasyliev, Volodymyr V.

    2018-04-01

    Vegetation and water bodies are fundamental elements of urban ecosystems, and their mapping is critical for urban and landscape planning and management. A methodology is proposed for the automated recognition of vegetation and water bodies on the territory of megacities in satellite images of sub-meter spatial resolution in the visible and IR bands. By processing multispectral images from the SuperView-1A satellite, vector layers of recognized vegetation and water objects were obtained. Analysis of the image processing results showed a sufficiently high accuracy of the delineation of the boundaries of recognized objects and good separation of the classes. The developed methodology provides a significant increase in the efficiency and reliability of updating maps of large cities while reducing financial costs. Due to its high degree of automation, the proposed methodology can be implemented as a geo-information web service functioning in the interests of a wide range of public services and commercial institutions.

  11. Acquiring and preprocessing leaf images for automated plant identification: understanding the tradeoff between effort and information gain

    Directory of Open Access Journals (Sweden)

    Michael Rzanny

    2017-11-01

    Background Automated species identification is a long-term research subject. Contrary to flowers and fruits, leaves are available throughout most of the year. Offering margin and texture to characterize a species, they are the most studied organ for automated identification. Substantially matured machine learning techniques generate the need for more training data (aka leaf images). Researchers as well as enthusiasts miss guidance on how to acquire suitable training images in an efficient way. Methods In this paper, we systematically study nine image types and three preprocessing strategies. Image types vary in terms of in-situ image recording conditions: perspective, illumination, and background, while the preprocessing strategies compare non-preprocessed, cropped, and segmented images to each other. Per image type-preprocessing combination, we also quantify the manual effort required for their implementation. We extract image features using a convolutional neural network, classify species using the resulting feature vectors and discuss classification accuracy in relation to the required effort per combination. Results The most effective, non-destructive way to record herbaceous leaves is to take an image of the leaf’s top side. We yield the highest classification accuracy using destructive back light images, i.e., holding the plucked leaf against the sky for image acquisition. Cropping the image to the leaf’s boundary substantially improves accuracy, while precise segmentation yields similar accuracy at a substantially higher effort. The permanent use or disuse of a flash light has negligible effects. Imaging the typically stronger textured backside of a leaf does not result in higher accuracy, but notably increases the acquisition cost. Conclusions In conclusion, the way in which leaf images are acquired and preprocessed does have a substantial effect on the accuracy of the classifier trained on them. For the first time, this

  12. Acquiring and preprocessing leaf images for automated plant identification: understanding the tradeoff between effort and information gain.

    Science.gov (United States)

    Rzanny, Michael; Seeland, Marco; Wäldchen, Jana; Mäder, Patrick

    2017-01-01

    Automated species identification is a long term research subject. Contrary to flowers and fruits, leaves are available throughout most of the year. Offering margin and texture to characterize a species, they are the most studied organ for automated identification. Substantially matured machine learning techniques generate the need for more training data (aka leaf images). Researchers as well as enthusiasts miss guidance on how to acquire suitable training images in an efficient way. In this paper, we systematically study nine image types and three preprocessing strategies. Image types vary in terms of in-situ image recording conditions: perspective, illumination, and background, while the preprocessing strategies compare non-preprocessed, cropped, and segmented images to each other. Per image type-preprocessing combination, we also quantify the manual effort required for their implementation. We extract image features using a convolutional neural network, classify species using the resulting feature vectors and discuss classification accuracy in relation to the required effort per combination. The most effective, non-destructive way to record herbaceous leaves is to take an image of the leaf's top side. We yield the highest classification accuracy using destructive back light images, i.e., holding the plucked leaf against the sky for image acquisition. Cropping the image to the leaf's boundary substantially improves accuracy, while precise segmentation yields similar accuracy at a substantially higher effort. The permanent use or disuse of a flash light has negligible effects. Imaging the typically stronger textured backside of a leaf does not result in higher accuracy, but notably increases the acquisition cost. In conclusion, the way in which leaf images are acquired and preprocessed does have a substantial effect on the accuracy of the classifier trained on them. For the first time, this study provides a systematic guideline allowing researchers to spend

  13. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability of advanced scanning and 3-D imaging technologies in ophthalmology practice in resource-rich regions, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research work demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences could result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This research work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
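
    Once cup and disc regions are segmented, CAR follows directly from the two areas, and CDR can be approximated from equivalent circular diameters. A sketch using segmented pixel counts (the equivalent-diameter step assumes near-circular regions; the paper works from the actual segmented boundaries):

```python
import math

def cup_disc_ratios(cup_area_px, disc_area_px):
    """CAR = cup area / disc area; CDR approximated via the
    equivalent circular diameter d = 2*sqrt(A/pi) of each region,
    which reduces to sqrt(CAR)."""
    car = cup_area_px / disc_area_px
    cdr = math.sqrt(car)  # the 2/sqrt(pi) factors cancel in the ratio
    return cdr, car

cdr, car = cup_disc_ratios(400, 1600)  # cdr = 0.5, car = 0.25
```

Clinically, larger ratios suggest greater cupping; a CDR above roughly 0.6 is often flagged for follow-up, though cut-offs vary by population.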

  14. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning.

    Science.gov (United States)

    Wang, Xinggang; Yang, Wei; Weinreb, Jeffrey; Han, Juan; Li, Qiubai; Kong, Xiangchuang; Yan, Yongluan; Ke, Zan; Luo, Bo; Liu, Tao; Wang, Liang

    2017-11-13

    Prostate cancer (PCa) is a major cause of death, documented since ancient times in Egyptian Ptolemaic mummy imaging. PCa detection is critical to personalized medicine, and its appearance varies considerably on MRI scans. 172 patients with 2,602 morphologic images (axial 2D T2-weighted imaging) of the prostate were obtained. A deep learning approach with a deep convolutional neural network (DCNN) and a non-deep learning approach with SIFT image features and bag-of-words (BoW), a representative method for image recognition and analysis, were used to distinguish pathologically confirmed PCa patients from patients with prostate benign conditions (BCs) such as prostatitis or prostate benign hyperplasia (BPH). In fully automated detection of PCa patients, deep learning had a statistically higher area under the receiver operating characteristic curve (AUC) than non-deep learning (P = 0.0007); the AUC was 0.70 (95% CI 0.63-0.77) for the non-deep learning method. Our results suggest that deep learning with DCNN is superior to non-deep learning with the SIFT image feature and BoW model for fully automated differentiation of PCa patients from prostate BCs patients. Our deep learning method is extensible to imaging modalities such as MR imaging, CT and PET of other organs.

  15. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    Science.gov (United States)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation is of work being conducted by the Applied Sciences Program Office at NASA Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data were developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open-source process code on a local prototype platform, and then transitioning this code, with its associated environment requirements, into an analogous but memory- and processor-enhanced cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar type data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions, and then
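
    The index products named above are simple normalized band ratios, computed per pixel. A minimal sketch (the reflectance values in the example are illustrative):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def ndmi(nir, swir):
    """Normalized Difference Moisture Index for one pixel."""
    return (nir - swir) / (nir + swir)

# Reflectances for a vegetated pixel: high NIR, low red -> high NDVI
print(round(ndvi(0.45, 0.08), 3))  # ≈ 0.698
```

In a bulk pipeline these would be vectorized over whole rasters (e.g. NumPy arrays) rather than applied pixel by pixel.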

  16. Application of an Automated Discharge Imaging System and LSPIV during Typhoon Events in Taiwan

    Directory of Open Access Journals (Sweden)

    Wei-Che Huang

    2018-03-01

    An automated discharge imaging system (ADIS), which is a non-intrusive and safe approach, was developed for measuring river flows during flash flood events. ADIS consists of dual cameras to capture complete surface images in the near and far fields. Surface velocities are accurately measured using the Large Scale Particle Image Velocimetry (LSPIV) technique. The stream discharges are then obtained from the depth-averaged velocity (based upon an empirical velocity-index relationship) and cross-section area. The ADIS was deployed at the Yu-Feng gauging station in the Shimen Reservoir upper catchment, northern Taiwan. For a rigorous validation, surface velocity measurements were conducted using ADIS/LSPIV and other instruments. In terms of the averaged surface velocity, all of the measured results were in good agreement, with small differences, i.e., 0.004 to 0.39 m/s and 0.023 to 0.345 m/s when compared to those from an acoustic Doppler current profiler (ADCP) and surface velocity radar (SVR), respectively. The ADIS/LSPIV was further applied to measure surface velocities and discharges during typhoon events (i.e., Chan-Hom, Soudelor, Goni, and Dujuan) in 2015. The measured water level and surface velocity both showed rapid increases due to flash floods. The estimated discharges from ADIS/LSPIV and ADCP were compared, presenting good consistency with correlation coefficient R = 0.996 and normalized root mean square error NRMSE = 7.96%. The results of sensitivity analysis indicate that the tilt (τ) and roll (θ) of the camera are the most sensitive parameters affecting the surface velocity measured using ADIS/LSPIV. Overall, the ADIS based upon the LSPIV technique effectively measures surface velocities for reliable estimations of river discharges during typhoon events.
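
    The discharge computation described above is a one-line relationship, and NRMSE is the comparison metric. A sketch, where the 0.85 velocity index is a commonly used default rather than the paper's site-calibrated value:

```python
def discharge_m3s(surface_velocity_ms, cross_section_m2, k=0.85):
    """Q = k * Vs * A: k is the empirical velocity index converting
    the LSPIV surface velocity to a depth-averaged value."""
    return k * surface_velocity_ms * cross_section_m2

def nrmse(observed, estimated):
    """Root mean square error normalized by the observed range."""
    n = len(observed)
    rmse = (sum((o - e) ** 2 for o, e in zip(observed, estimated)) / n) ** 0.5
    return rmse / (max(observed) - min(observed))

q = discharge_m3s(2.0, 15.0)  # 2 m/s surface velocity, 15 m^2 section
```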

  17. Automated Diabetic Retinopathy Image Assessment Software: Diagnostic Accuracy and Cost-Effectiveness Compared with Human Graders.

    Science.gov (United States)

    Tufail, Adnan; Rudisill, Caroline; Egan, Catherine; Kapetanakis, Venediktos V; Salas-Vega, Sebastian; Owen, Christopher G; Lee, Aaron; Louw, Vern; Anderson, John; Liew, Gerald; Bolter, Louis; Srinivas, Sowmya; Nittala, Muneeswar; Sadda, SriniVas; Taylor, Paul; Rudnicka, Alicja R

    2017-03-01

    With the increasing prevalence of diabetes, annual screening for diabetic retinopathy (DR) by expert human grading of retinal images is challenging. Automated DR image assessment systems (ARIAS) may provide clinically effective and cost-effective detection of retinopathy. We aimed to determine whether ARIAS can be safely introduced into DR screening pathways to replace human graders. Observational measurement comparison study of human graders following a national screening program for DR versus ARIAS. Retinal images from 20,258 consecutive patients attending routine annual diabetic eye screening between June 1, 2012, and November 4, 2013. Retinal images were manually graded following a standard national protocol for DR screening and were processed by 3 ARIAS: iGradingM, Retmarker, and EyeArt. Discrepancies between manual grades and ARIAS results were sent to a reading center for arbitration. Screening performance (sensitivity, false-positive rate) and diagnostic accuracy (95% confidence intervals of screening-performance measures) were determined. Economic analysis estimated the cost per appropriate screening outcome. Sensitivity point estimates (95% confidence intervals) of the ARIAS were as follows: EyeArt 94.7% (94.2%-95.2%) for any retinopathy, 93.8% (92.9%-94.6%) for referable retinopathy (human graded as either ungradable, maculopathy, preproliferative, or proliferative), 99.6% (97.0%-99.9%) for proliferative retinopathy; Retmarker 73.0% (72.0 %-74.0%) for any retinopathy, 85.0% (83.6%-86.2%) for referable retinopathy, 97.9% (94.9%-99.1%) for proliferative retinopathy. iGradingM classified all images as either having disease or being ungradable. EyeArt and Retmarker saved costs compared with manual grading both as a replacement for initial human grading and as a filter prior to primary human grading, although the latter approach was less cost-effective. Retmarker and EyeArt systems achieved acceptable sensitivity for referable retinopathy when compared

  18. Precision automation of cell type classification and sub-cellular fluorescence quantification from laser scanning confocal images

    Directory of Open Access Journals (Sweden)

    Hardy Craig Hall

    2016-02-01

    While novel whole-plant phenotyping technologies have been successfully implemented into functional genomics and breeding programs, the potential of automated phenotyping with cellular resolution is largely unexploited. Laser scanning confocal microscopy has the potential to close this gap by providing spatially highly resolved images containing anatomic as well as chemical information on a subcellular basis. However, in the absence of automated methods, the assessment of the spatial patterns and abundance of fluorescent markers with subcellular resolution is still largely qualitative and time-consuming. Recent advances in image acquisition and analysis, coupled with improvements in microprocessor performance, have brought such automated methods within reach, so that information from thousands of cells per image for hundreds of images may be derived in an experimentally convenient time-frame. Here, we present a MATLAB-based analytical pipeline to (1) segment radial plant organs into individual cells, (2) classify cells into cell type categories based upon random forest classification, (3) divide each cell into sub-regions, and (4) quantify fluorescence intensity to a subcellular degree of precision for a separate fluorescence channel. In this research advance, we demonstrate the precision of this analytical process for the relatively complex tissues of Arabidopsis hypocotyls at various stages of development. High speed and robustness make our approach suitable for phenotyping of large collections of stem-like material and other tissue types.
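
    The quantification step at the end of the pipeline maps a labeled cell mask onto a separate fluorescence channel. A toy sketch using nested lists in place of image arrays (a real pipeline, like the MATLAB one described, would use matrix operations):

```python
def mean_intensity_per_cell(label_img, intensity_img):
    """Mean fluorescence per labeled cell; label 0 is background.
    label_img and intensity_img are same-shaped 2-D grids."""
    sums, counts = {}, {}
    for lrow, irow in zip(label_img, intensity_img):
        for lab, val in zip(lrow, irow):
            if lab == 0:
                continue  # skip background pixels
            sums[lab] = sums.get(lab, 0.0) + val
            counts[lab] = counts.get(lab, 0) + 1
    return {lab: sums[lab] / counts[lab] for lab in sums}

labels = [[1, 1, 0],
          [1, 2, 2],
          [0, 2, 2]]
intensity = [[10, 20, 0],
             [30, 5, 7],
             [0, 6, 2]]
print(mean_intensity_per_cell(labels, intensity))  # {1: 20.0, 2: 5.0}
```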

  19. Automated gas bubble imaging at sea floor – a new method of in situ gas flux quantification

    Directory of Open Access Journals (Sweden)

    G. Bohrmann

    2010-06-01

    Photo-optical systems are common in marine sciences and have been extensively used in coastal and deep-sea research. However, due to technical limitations, photo images in the past had to be processed manually or semi-automatically. Recent advances in technology have rapidly improved image recording, storage and processing capabilities, which are used in a new concept of automated in situ gas quantification by photo-optical detection. The design for an in situ high-speed image acquisition and automated data processing system (the "Bubblemeter") is reported. New strategies have been followed with regards to back-light illumination, bubble extraction, automated image processing and data management. This paper presents the design of the novel method, its validation procedures and calibration experiments. The system will be positioned and recovered from the sea floor using a remotely operated vehicle (ROV). It is able to measure bubble flux rates up to 10 L/min with a maximum error of 33% for worst case conditions. The Bubblemeter has been successfully deployed at a water depth of 1023 m at the Makran accretionary prism offshore Pakistan during a research expedition with R/V Meteor in November 2007.
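
    Converting detected bubbles to a flux amounts to summing per-bubble volumes over the observation window. A sketch under a spherical-bubble assumption (the actual Bubblemeter calibration accounts for more, e.g. bubble shape and optics):

```python
import math

def bubble_flux_l_per_min(radii_mm, duration_s):
    """Gas flux from per-bubble radii detected during an imaging
    window: V = 4/3 * pi * r^3 per bubble, converted to L/min."""
    volume_mm3 = sum(4.0 / 3.0 * math.pi * r ** 3 for r in radii_mm)
    litres = volume_mm3 / 1e6          # 1 L = 1e6 mm^3
    return litres * 60.0 / duration_s  # scale window to one minute
```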

  20. An Automated Segmentation of R2* Iron-Overloaded Liver Images Using a Fuzzy C-Mean Clustering Scheme.

    Science.gov (United States)

    Saiviroonporn, Pairash; Korpraphong, Pornpim; Viprakasit, Vip; Krittayaphong, Rungroj

    2018-02-13

    The objectives of this study were to develop and test an automated segmentation of R2* iron-overloaded liver images using fuzzy c-mean (FCM) clustering and to evaluate the observer variations. Liver R2* images and liver iron concentration (LIC) maps of 660 thalassemia examinations were randomly separated into training (70%) and testing (30%) cohorts for development and evaluation purposes, respectively. Two-dimensional FCM using R2* images and the LIC map was implemented to segment vessels from the parenchyma. Two automated FCM variables were investigated, using new echo time and membership threshold selection criteria based on the FCM centroid distance and LIC levels, respectively. The new method was developed on the training cohort, compared with manual segmentation for accuracy and with a previous semiautomated method, and a semiautomated scheme was suggested to improve unsuccessful results. The automated variables found from the training cohort were assessed for their effectiveness in the testing cohort, both quantitatively and qualitatively (the latter by 2 abdominal radiologists using a grading method, with evaluations of observer variations). A segmentation error of less than 30% was considered to be a successful result in both cohorts, whereas, in the testing cohort, a good grade obtained from satisfactory automated results was considered a success. The centroid distance method has a segmentation accuracy comparable with that of the previous best semiautomated method. About 94% and 90% of the examinations in the training and testing cohorts were automatically segmented out successfully, respectively. The failed examinations were successfully segmented out with thresholding adjustment (3% and 8%) or by using alternative results from the previous 1-dimensional FCM method (3% and 2%) in the training and testing cohorts, respectively. There were no failed segmentation examinations in either cohort. The intraobserver and interobserver variabilities were
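
    Fuzzy c-means alternates two updates: soft memberships from distances to centroids, then centroids from membership-weighted means. A one-dimensional sketch as a stand-in for the paper's 2-D (R2* plus LIC) variant (initialization and data are illustrative):

```python
def fcm_1d(data, c=2, m=2.0, iters=50):
    """1-D fuzzy c-means. Membership: u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1));
    centroid: v_k = sum_i u_ik^m x_i / sum_i u_ik^m."""
    lo, hi = min(data), max(data)
    # deterministic init: centroids evenly spaced over the data range
    centroids = [lo + (hi - lo) * k / (c - 1) for k in range(c)]
    u = []
    for _ in range(iters):
        u = []
        for x in data:
            dists = [max(abs(x - v), 1e-12) for v in centroids]
            u.append([1.0 / sum((di / dj) ** (2.0 / (m - 1.0)) for dj in dists)
                      for di in dists])
        centroids = [
            sum(u[i][k] ** m * data[i] for i in range(len(data))) /
            sum(u[i][k] ** m for i in range(len(data)))
            for k in range(c)
        ]
    return centroids, u

cents, memberships = fcm_1d([0.9, 1.0, 1.1, 9.8, 10.0, 10.2])
```

For vessel-versus-parenchyma segmentation, each pixel would then be assigned by thresholding its membership, which is where the paper's membership-threshold selection criterion comes in.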

  1. Calibration of a semi-automated segmenting method for quantification of adipose tissue compartments from magnetic resonance images of mice.

    Science.gov (United States)

    Garteiser, Philippe; Doblas, Sabrina; Towner, Rheal A; Griffin, Timothy M

    2013-11-01

    To use an automated water-suppressed magnetic resonance imaging (MRI) method to objectively assess adipose tissue (AT) volumes in whole body and specific regional body components (subcutaneous, thoracic and peritoneal) of obese and lean mice. Water-suppressed MR images were obtained on a 7T, horizontal-bore MRI system in whole bodies (excluding head) of 26 week old male C57BL6J mice fed a control (10% kcal fat) or high-fat diet (60% kcal fat) for 20 weeks. Manual (outlined regions) versus automated (Gaussian fitting applied to threshold-weighted images) segmentation procedures were compared for whole body AT and regional AT volumes (i.e., subcutaneous, thoracic, and peritoneal). The AT automated segmentation method was compared to dual-energy X-ray (DXA) analysis. The average AT volumes for whole body and individual compartments correlated well between the manual outlining and the automated methods (R2>0.77, p<0.05). Subcutaneous, peritoneal, and total body AT volumes were increased 2-3 fold and thoracic AT volume increased more than 5-fold in diet-induced obese mice versus controls (p<0.05). MRI and DXA-based method comparisons were highly correlative (R2=0.94, p<0.0001). Automated AT segmentation of water-suppressed MRI data using a global Gaussian filtering algorithm resulted in a fairly accurate assessment of total and regional AT volumes in a pre-clinical mouse model of obesity. © 2013 Elsevier Inc. All rights reserved.
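
    The automated route above thresholds water-suppressed intensities (bright voxels = fat) and converts the count to a volume. A simplified stand-in: the paper fits a Gaussian to the intensity distribution, whereas here the threshold is just mu + k*sigma of a background sample:

```python
import statistics

def gaussian_threshold(background_voxels, k=2.0):
    """Threshold at mu + k*sigma of a background (non-fat) sample."""
    mu = statistics.fmean(background_voxels)
    sigma = statistics.pstdev(background_voxels)
    return mu + k * sigma

def fat_volume_mm3(image, threshold, voxel_volume_mm3):
    """Volume of voxels whose intensity exceeds the threshold."""
    n = sum(1 for row in image for v in row if v > threshold)
    return n * voxel_volume_mm3

t = gaussian_threshold([10, 12, 11, 9, 8, 10])     # ≈ 12.58
vol = fat_volume_mm3([[5, 50], [60, 11]], t, 2.0)  # 2 voxels -> 4.0 mm^3
```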

  2. Evaluation of an automated deformable image matching method for quantifying lung motion in respiration-correlated CT images

    International Nuclear Information System (INIS)

    Pevsner, A.; Davis, B.; Joshi, S.; Hertanto, A.; Mechalakos, J.; Yorke, E.; Rosenzweig, K.; Nehmeh, S.; Erdi, Y.E.; Humm, J.L.; Larson, S.; Ling, C.C.; Mageras, G.S.

    2006-01-01

    We have evaluated an automated registration procedure for predicting tumor and lung deformation based on CT images of the thorax obtained at different respiration phases. The method uses a viscous fluid model of tissue deformation to map voxels from one CT dataset to another. To validate the deformable matching algorithm we used a respiration-correlated CT protocol to acquire images at different phases of the respiratory cycle for six patients with non-small cell lung carcinoma. The position and shape of the deformable gross tumor volumes (GTV) at the end-inhale (EI) phase predicted by the algorithm were compared to those drawn by four observers. To minimize interobserver differences, all observers used the contours drawn by a single observer at end-exhale (EE) phase as a guideline to outline GTV contours at EI. The differences between model-predicted and observer-drawn GTV surfaces at EI, as well as differences between structures delineated by observers at EI (interobserver variations) were evaluated using a contour comparison algorithm written for this purpose, which determined the distance between the two surfaces along different directions. The mean and 90% confidence interval for model-predicted versus observer-drawn GTV surface differences over all patients and all directions were 2.6 and 5.1 mm, respectively, whereas the mean and 90% confidence interval for interobserver differences were 2.1 and 3.7 mm. We have also evaluated the algorithm's ability to predict normal tissue deformations by examining the three-dimensional (3-D) vector displacement of 41 landmarks placed by each observer at bronchial and vascular branch points in the lung between the EE and EI image sets (mean and 90% confidence interval displacements of 11.7 and 25.1 mm, respectively). The mean and 90% confidence interval discrepancy between model-predicted and observer-determined landmark displacements over all patients were 2.9 and 7.3 mm, whereas interobserver discrepancies were 2.8 and 6

  3. Cost effective raspberry pi-based radio frequency identification tagging of mice suitable for automated in vivo imaging.

    Science.gov (United States)

    Bolaños, Federico; LeDue, Jeff M; Murphy, Timothy H

    2017-01-30

    Automation of animal experimentation improves consistency and reduces the potential for error, while decreasing animal stress and increasing well-being. Radio frequency identification (RFID) tagging can identify individual mice in group housing environments, enabling animal-specific tracking of physiological parameters. We describe a simple protocol to RFID-tag and detect mice. RFID tags were injected sub-cutaneously after brief isoflurane anesthesia and do not require surgical steps such as suturing or incisions. We employ glass-encapsulated 125 kHz tags that can be read within 30.2 ± 2.4 mm of the antenna. A Raspberry Pi single-board computer and tag reader enable automated logging, and cross-platform support is possible through Python. We provide sample software written in Python to provide a flexible and cost-effective system for logging the weights of multiple mice in relation to pre-defined targets. The sample software can serve as the basis of any behavioral or physiological task where users will need to identify and track specific animals. Recently, we have applied this system of tagging to automated mouse brain imaging within home-cages. We provide a cost-effective solution employing open source software to facilitate adoption in applications such as automated imaging or tracking individual animal weights during tasks where food or water restriction is employed as motivation for a specific behavior. Copyright © 2016 Elsevier B.V. All rights reserved.
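
    The paper's sample Python software logs per-animal weights against pre-defined targets. A hypothetical sketch of that bookkeeping (the tag IDs, target weights, and CSV format here are invented for illustration, and the serial tag-reader interface is omitted):

```python
import csv
import io

# Hypothetical tag IDs mapped to target weights in grams
TARGETS = {"900_0001": 22.5, "900_0002": 24.0}

def log_weight(writer, tag_id, grams, targets=TARGETS):
    """Record one reading and flag mice below their target weight
    (e.g., during water-restriction protocols)."""
    below = grams < targets.get(tag_id, float("inf"))
    writer.writerow([tag_id, f"{grams:.1f}", "below" if below else "ok"])
    return below

buf = io.StringIO()          # stands in for an on-disk log file
w = csv.writer(buf)
log_weight(w, "900_0001", 21.9)  # returns True: below the 22.5 g target
```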

  4. Comparison of Manual Mapping and Automated Object-Based Image Analysis of Non-Submerged Aquatic Vegetation from Very-High-Resolution UAS Images

    Directory of Open Access Journals (Sweden)

    Eva Husson

    2016-09-01

    Aquatic vegetation has important ecological and regulatory functions and should be monitored in order to detect ecosystem changes. Field data collection is often costly and time-consuming; remote sensing with unmanned aircraft systems (UASs) provides aerial images with sub-decimetre resolution and offers a potential data source for vegetation mapping. In a manual mapping approach, UAS true-colour images with 5-cm-resolution pixels allowed for the identification of non-submerged aquatic vegetation at the species level. However, manual mapping is labour-intensive, and while automated classification methods are available, they have rarely been evaluated for aquatic vegetation, particularly at the scale of individual vegetation stands. We evaluated classification accuracy and time-efficiency for mapping non-submerged aquatic vegetation at three levels of detail at five test sites (100 m × 100 m) differing in vegetation complexity. We used object-based image analysis and tested two classification methods (threshold classification and Random Forest) using eCognition®. The automated classification results were compared to results from manual mapping. Using threshold classification, overall accuracy at the five test sites ranged from 93% to 99% for the water-versus-vegetation level and from 62% to 90% for the growth-form level. Using Random Forest classification, overall accuracy ranged from 56% to 94% for the growth-form level and from 52% to 75% for the dominant-taxon level. Overall classification accuracy decreased with increasing vegetation complexity. In test sites with more complex vegetation, automated classification was more time-efficient than manual mapping. This study demonstrated that automated classification of non-submerged aquatic vegetation from true-colour UAS images was feasible, indicating good potential for operative mapping of aquatic vegetation. When choosing the preferred mapping method (manual versus automated), the desired level of

  5. Automated MALDI Matrix Coating System for Multiple Tissue Samples for Imaging Mass Spectrometry

    Science.gov (United States)

    Mounfield, William P.; Garrett, Timothy J.

    2012-03-01

    Uniform matrix deposition on tissue samples for matrix-assisted laser desorption/ionization (MALDI) is key for reproducible analyte ion signals. Current methods often result in nonhomogenous matrix deposition, and take time and effort to produce acceptable ion signals. Here we describe a fully-automated method for matrix deposition using an enclosed spray chamber and spray nozzle for matrix solution delivery. A commercial air-atomizing spray nozzle was modified and combined with solenoid controlled valves and a Programmable Logic Controller (PLC) to control and deliver the matrix solution. A spray chamber was employed to contain the nozzle, sample, and atomized matrix solution stream, and to prevent any interference from outside conditions as well as allow complete control of the sample environment. A gravity cup was filled with MALDI matrix solutions, including DHB in chloroform/methanol (50:50) at concentrations up to 60 mg/mL. Various samples (including rat brain tissue sections) were prepared using two deposition methods (spray chamber, inkjet). A linear ion trap equipped with an intermediate-pressure MALDI source was used for analyses. Optical microscopic examination showed a uniform coating of matrix crystals across the sample. Overall, the mass spectral images gathered from tissues coated using the spray chamber system were of better quality and more reproducible than from tissue specimens prepared by the inkjet deposition method.

  6. Automated detection of breast cancer in resected specimens with fluorescence lifetime imaging

    Science.gov (United States)

    Phipps, Jennifer E.; Gorpas, Dimitris; Unger, Jakob; Darrow, Morgan; Bold, Richard J.; Marcu, Laura

    2018-01-01

    Re-excision rates for breast cancer lumpectomy procedures are currently nearly 25% due to surgeons relying on inaccurate or incomplete methods of evaluating specimen margins. The objective of this study was to determine if cancer could be automatically detected in breast specimens from mastectomy and lumpectomy procedures by a classification algorithm that incorporated parameters derived from fluorescence lifetime imaging (FLIm). This study generated a database of co-registered histologic sections and FLIm data from breast cancer specimens (N = 20) and a support vector machine (SVM) classification algorithm able to automatically detect cancerous, fibrous, and adipose breast tissue. Classification accuracies were greater than 97% for automated detection of cancerous, fibrous, and adipose tissue from breast cancer specimens. The classification worked equally well for specimens scanned by hand or with a mechanical stage, demonstrating that the system could be used during surgery or on excised specimens. The ability of this technique to simply discriminate between cancerous and normal breast tissue, in particular to distinguish fibrous breast tissue from tumor, which is notoriously challenging for optical techniques, leads to the conclusion that FLIm has great potential to assess breast cancer margins. Identification of positive margins before waiting for complete histologic analysis could significantly reduce breast cancer re-excision rates.
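
    The abstract names an SVM over FLIm-derived parameters but gives no implementation details. A minimal linear SVM trained by sub-gradient descent on the regularized hinge loss sketches the idea; the two "lifetime" features and their values are hypothetical:

```python
def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM via sub-gradient descent on the hinge loss.
    X: list of feature vectors; y: labels in {-1, +1}."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # inside the margin: hinge term is active
                w = [wj - lr * (lam * wj - yi * xj)
                     for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # only the regularizer contributes
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical FLIm features, e.g. mean lifetime (ns) in two channels
X = [[2.0, 2.2], [3.1, 2.0], [2.4, 3.0],   # labeled cancerous (+1)
     [0.2, 0.1], [1.0, 0.3], [0.1, 1.1]]   # labeled fibrous/adipose (-1)
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

A real three-class setup (cancerous/fibrous/adipose) would combine several such binary classifiers, e.g. one-vs-rest.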

  7. UAS imaging for automated crop lodging detection: a case study over an experimental maize field

    Science.gov (United States)

    Chu, Tianxing; Starek, Michael J.; Brewer, Michael J.; Masiane, Tiisetso; Murray, Seth C.

    2017-05-01

    Lodging has been recognized as one of the major destructive factors for crop quality and yield, particularly in corn. A variety of contributing causes, e.g. disease and/or pest, weather conditions, excessive nitrogen, and high plant density, may lead to lodging before harvesting season. Traditional lodging detection strategies mainly rely on ground data collection, which is insufficient in efficiency and accuracy. To address this problem, this research focuses on the use of unmanned aircraft systems (UAS) for automated detection of crop lodging. The study was conducted over an experimental corn field at the Texas A&M AgriLife Research and Extension Center at Corpus Christi, Texas, during the growing season of 2016. Nadir-view images of the corn field were taken by small UAS platforms equipped with consumer-grade RGB and NIR cameras on a weekly basis, enabling a timely observation of the plant growth. 3D structural information of the plants was reconstructed using structure-from-motion photogrammetry. The structural information was then applied to calculate crop height, and rates of growth. A lodging index for detecting corn lodging was proposed afterwards. Ground truth data of lodging was collected on a per row basis and used for fair assessment and tuning of the detection algorithm. Results show the UAS-measured height correlates well with the ground-measured height. More importantly, the lodging index can effectively reflect severity of corn lodging and yield after harvesting.

  8. Automated recognition of the pericardium contour on processed CT images using genetic algorithms.

    Science.gov (United States)

    Rodrigues, É O; Rodrigues, L O; Oliveira, L S N; Conci, A; Liatsis, P

    2017-08-01

    This work proposes the use of Genetic Algorithms (GA) in tracing and recognizing the pericardium contour of the human heart using Computed Tomography (CT) images. We assume that each slice of the pericardium can be modelled by an ellipse, the parameters of which need to be optimally determined. An optimal ellipse would be one that closely follows the pericardium contour and, consequently, appropriately separates the epicardial and mediastinal fats of the human heart. Tracing and automatically identifying the pericardium contour aids in medical diagnosis. Usually, this process is done manually or not done at all due to the effort required. Moreover, detecting the pericardium may improve previously proposed automated methodologies that separate the two types of fat associated with the human heart. Quantification of these fats provides important health risk marker information, as they are associated with the development of certain cardiovascular pathologies. Finally, we conclude that GA offers satisfactory solutions in a feasible amount of processing time. Copyright © 2017 Elsevier Ltd. All rights reserved.
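The idea of evolving ellipse parameters against a contour can be sketched with a toy GA. Everything below (population size, truncation selection, mean crossover, Gaussian mutation, the synthetic contour) is an assumed minimal setup, not the paper's actual operators:

```python
import math
import random

def ellipse_residual(params, pts):
    """Sum over points of |((x-cx)/a)^2 + ((y-cy)/b)^2 - 1|."""
    cx, cy, a, b = params
    return sum(abs(((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 - 1)
               for x, y in pts)

def fit_ellipse_ga(pts, generations=60, pop_size=30, seed=1):
    """Toy GA: truncation selection, mean crossover, Gaussian mutation."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(-1, 1), rng.uniform(-1, 1),
                rng.uniform(0.5, 3), rng.uniform(0.5, 3)]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: ellipse_residual(p, pts))
        elite = pop[:pop_size // 3]               # keep the fittest third
        children = []
        while len(elite) + len(children) < pop_size:
            mom, dad = rng.sample(elite, 2)
            child = [(m + d) / 2 for m, d in zip(mom, dad)]   # crossover
            child[rng.randrange(4)] += rng.gauss(0, 0.1)      # mutation
            child[2] = max(child[2], 0.1)         # keep axes positive
            child[3] = max(child[3], 0.1)
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: ellipse_residual(p, pts))

# Synthetic "pericardium" contour: ellipse centred at (0.2, -0.1), a=2, b=1
pts = [(0.2 + 2 * math.cos(k * math.pi / 10),
        -0.1 + math.sin(k * math.pi / 10)) for k in range(20)]
best = fit_ellipse_ga(pts)
print([round(v, 2) for v in best])
```

Because the elite survives each generation, the best residual is non-increasing, so even this crude GA steadily improves its ellipse fit to the contour.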

  9. Experimental saltwater intrusion in coastal aquifers using automated image analysis: Applications to homogeneous aquifers

    Science.gov (United States)

    Robinson, G.; Ahmed, Ashraf A.; Hamill, G. A.

    2016-07-01

    This paper presents the applications of a novel methodology to quantify saltwater intrusion parameters in laboratory-scale experiments. The methodology uses an automated image analysis procedure, minimising manual inputs and the subsequent systematic errors that can be introduced. This allowed the quantification of the width of the mixing zone which is difficult to measure in experimental methods that are based on visual observations. Glass beads of different grain sizes were tested for both steady-state and transient conditions. The transient results showed good correlation between experimental and numerical intrusion rates. The experimental intrusion rates revealed that the saltwater wedge reached a steady state condition sooner while receding than advancing. The hydrodynamics of the experimental mixing zone exhibited similar traits; a greater increase in the width of the mixing zone was observed in the receding saltwater wedge, which indicates faster fluid velocities and higher dispersion. The angle of intrusion analysis revealed the formation of a volume of diluted saltwater at the toe position when the saltwater wedge is prompted to recede. In addition, results of different physical repeats of the experiment produced an average coefficient of variation less than 0.18 of the measured toe length and width of the mixing zone.
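The repeatability claim above is stated as a coefficient of variation; for reference, that statistic is just the sample standard deviation over the mean, e.g. for hypothetical repeat measurements of the toe length:

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean (dimensionless)."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical toe-length measurements (cm) from repeated physical runs;
# not the paper's data, just an illustration of the statistic.
toe_lengths = [10.2, 10.9, 9.8, 10.5]
print(round(coefficient_of_variation(toe_lengths), 3))
```

A CV below 0.18, as reported for the experiments, means the spread across physical repeats stays under 18% of the mean value.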

  10. A multiresolution approach to automated classification of protein subcellular location images

    Directory of Open Access Journals (Sweden)

    Srinivasa Gowri

    2007-06-01

    Background: Fluorescence microscopy is widely used to determine the subcellular location of proteins. Efforts to determine location on a proteome-wide basis create a need for automated methods to analyze the resulting images. Over the past ten years, the feasibility of using machine learning methods to recognize all major subcellular location patterns has been convincingly demonstrated, using diverse feature sets and classifiers. On a well-studied data set of 2D HeLa single-cell images, the best performance to date, 91.5%, was obtained by including a set of multiresolution features. This demonstrates the value of multiresolution approaches to this important problem. Results: We report here a novel approach for the classification of subcellular location patterns by classifying in multiresolution subspaces. Our system is able to work with any feature set and any classifier. It consists of multiresolution (MR) decomposition, followed by feature computation and classification in each MR subspace, yielding local decisions that are then combined into a global decision. With 26 texture features alone and a neural network classifier, we obtained an increase in accuracy on the 2D HeLa data set to 95.3%. Conclusion: We demonstrate that the space-frequency localized information in the multiresolution subspaces adds significantly to the discriminative power of the system. Moreover, we show that a vastly reduced set of features is sufficient, consisting of our novel modified Haralick texture features. Our proposed system is general, allowing for any combinations of sets of features and any combination of classifiers.
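The record leans on Haralick texture features, which are statistics of a gray-level co-occurrence matrix (GLCM). The sketch below computes a horizontal-offset GLCM and two classic Haralick statistics (contrast, energy) in pure Python; note the paper uses *modified* Haralick features, whose modification is not given here:

```python
def glcm(image, dx, dy, levels):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy)."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    total = sum(map(sum, m))
    return [[v / total for v in row] for row in m]

def haralick_contrast(p):
    """Sum of p[i][j] * (i - j)^2: penalizes co-occurring unequal levels."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

def haralick_energy(p):
    """Sum of squared probabilities: high for uniform textures."""
    return sum(v * v for row in p for v in row)

# Toy 4-level image (4x4 pixels)
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = glcm(img, 1, 0, 4)
print(round(haralick_contrast(p), 3), round(haralick_energy(p), 3))
```

In the paper's scheme, such features would be computed separately in each multiresolution subspace, with the per-subspace decisions fused into a global label.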

  11. Automated analysis of retinal images for detection of referable diabetic retinopathy.

    Science.gov (United States)

    Abràmoff, Michael D; Folk, James C; Han, Dennis P; Walker, Jonathan D; Williams, David F; Russell, Stephen R; Massin, Pascale; Cochener, Beatrice; Gain, Philippe; Tang, Li; Lamard, Mathieu; Moga, Daniela C; Quellec, Gwénolé; Niemeijer, Meindert

    2013-03-01

    The diagnostic accuracy of computer detection programs has been reported to be comparable to that of specialists and expert readers, but no computer detection programs have been validated in an independent cohort using an internationally recognized diabetic retinopathy (DR) standard. To determine the sensitivity and specificity of the Iowa Detection Program (IDP) to detect referable diabetic retinopathy (RDR). In primary care DR clinics in France, from January 1, 2005, through December 31, 2010, patients were photographed consecutively, and retinal color images were graded for retinopathy severity according to the International Clinical Diabetic Retinopathy scale and macular edema by 3 masked independent retinal specialists and regraded with adjudication until consensus. The IDP analyzed the same images at a predetermined and fixed set point. We defined RDR as more than mild nonproliferative retinopathy and/or macular edema. A total of 874 people with diabetes at risk for DR. Sensitivity and specificity of the IDP to detect RDR, area under the receiver operating characteristic curve, sensitivity and specificity of the retinal specialists' readings, and mean interobserver difference (κ). The RDR prevalence was 21.7% (95% CI, 19.0%-24.5%). The IDP sensitivity was 96.8% (95% CI, 94.4%-99.3%) and specificity was 59.4% (95% CI, 55.7%-63.0%), corresponding to 6 of 874 false-negative results (none met treatment criteria). The area under the receiver operating characteristic curve was 0.937 (95% CI, 0.916-0.959). Before adjudication and consensus, the sensitivity/specificity of the retinal specialists were 0.80/0.98, 0.71/1.00, and 0.91/0.95, and the mean intergrader κ was 0.822. The IDP has high sensitivity and specificity to detect RDR. Computer analysis of retinal photographs for DR and automated detection of RDR can be implemented safely into the DR screening pipeline, potentially improving access to screening and health care productivity and reducing visual loss
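The headline numbers above are binomial proportions with confidence intervals. The sketch below recomputes sensitivity/specificity with normal-approximation 95% CIs from a confusion matrix whose counts are reconstructed from the abstract (874 patients, 21.7% prevalence, 6 false negatives); the exact counts are therefore approximate:

```python
import math

def sens_spec_with_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity/specificity with normal-approximation 95% CIs."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)
    return {"sensitivity": prop_ci(tp, tp + fn),
            "specificity": prop_ci(tn, tn + fp)}

# Counts reconstructed (approximately) from the abstract:
# ~190 RDR positives with 6 false negatives, ~684 negatives.
res = sens_spec_with_ci(tp=184, fn=6, tn=406, fp=278)
print({k: tuple(round(x, 3) for x in v) for k, v in res.items()})
```

With these reconstructed counts the intervals land close to the reported ones (sensitivity roughly 94.4%-99.3%, specificity roughly 55.7%-63.0%), which is a useful sanity check on the abstract's figures.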

  12. Automated Synthesis of 18F-Fluoropropoxytryptophan for Amino Acid Transporter System Imaging

    Directory of Open Access Journals (Sweden)

    I-Hong Shih

    2014-01-01

    Full Text Available Objective. This study was to develop a cGMP grade of [18F]fluoropropoxytryptophan (18F-FTP to assess tryptophan transporters using an automated synthesizer. Methods. Tosylpropoxytryptophan (Ts-TP was reacted with K18F/kryptofix complex. After column purification, solvent evaporation, and hydrolysis, the identity and purity of the product were validated by radio-TLC (1M-ammonium acetate : methanol = 4 : 1 and HPLC (C-18 column, methanol : water = 7 : 3 analyses. In vitro cellular uptake of 18F-FTP and 18F-FDG was performed in human prostate cancer cells. PET imaging studies were performed with 18F-FTP and 18F-FDG in prostate and small cell lung tumor-bearing mice (3.7 MBq/mouse, iv. Results. Radio-TLC and HPLC analyses of 18F-FTP showed that the Rf and Rt values were 0.9 and 9 min, respectively. Radiochemical purity was >99%. The radiochemical yield was 37.7% (EOS 90 min, decay corrected. Cellular uptake of 18F-FTP and 18F-FDG showed enhanced uptake as a function of incubation time. PET imaging studies showed that 18F-FTP had less tumor uptake than 18F-FDG in prostate cancer model. However, 18F-FTP had more uptake than 18F-FDG in small cell lung cancer model. Conclusion. 18F-FTP could be synthesized with high radiochemical yield. Assessment of upregulated transporters activity by 18F-FTP may provide potential applications in differential diagnosis and prediction of early treatment response.
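"Decay corrected" yield means the end-of-synthesis activity is back-corrected to the start of synthesis using the fluorine-18 half-life (109.77 min). The activities below are hypothetical; only the half-life and the 90 min synthesis time come from known physics and the abstract:

```python
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18

def decay_correct(activity_eos, elapsed_min, half_life=F18_HALF_LIFE_MIN):
    """Back-correct an end-of-synthesis activity to the start time."""
    return activity_eos * math.exp(math.log(2) * elapsed_min / half_life)

def decay_corrected_yield(start_activity, eos_activity, synthesis_min):
    """Fraction of starting activity recovered, corrected for decay."""
    return decay_correct(eos_activity, synthesis_min) / start_activity

# Hypothetical activities (GBq) for a 90 min synthesis, as in the abstract
print(round(decay_corrected_yield(10.0, 2.14, 90.0), 3))
```

With these made-up activities the corrected yield comes out near the reported 37.7%, illustrating why a 90 min synthesis of a ~110 min half-life isotope loses a large fraction of raw activity to decay alone.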

  13. A simple rapid process for semi-automated brain extraction from magnetic resonance images of the whole mouse head.

    Science.gov (United States)

    Delora, Adam; Gonzales, Aaron; Medina, Christopher S; Mitchell, Adam; Mohed, Abdul Faheem; Jacobs, Russell E; Bearer, Elaine L

    2016-01-15

    Magnetic resonance imaging (MRI) is a well-developed technique in neuroscience. Limitations in applying MRI to rodent models of neuropsychiatric disorders include the large number of animals required to achieve statistical significance, and the paucity of automation tools for the critical early step in processing, brain extraction, which prepares brain images for alignment and voxel-wise statistics. This novel timesaving automation of template-based brain extraction ("skull-stripping") is capable of quickly and reliably extracting the brain from large numbers of whole head images in a single step. The method is simple to install and requires minimal user interaction. This method is equally applicable to different types of MR images. Results were evaluated with Dice and Jaccard similarity indices and compared in 3D surface projections with other stripping approaches. Statistical comparisons demonstrate that individual variation of brain volumes is preserved. A downloadable software package not otherwise available for extraction of brains from whole head images is included here. This software tool increases speed, can be used with an atlas or a template from within the dataset, and produces masks that need little further refinement. Our new automation can be applied to any MR dataset, since the starting point is a template mask generated specifically for that dataset. The method reliably and rapidly extracts brain images from whole head images, rendering them usable for subsequent analytical processing. This software tool will accelerate the exploitation of mouse models for the investigation of human brain disorders by MRI. Copyright © 2015 Elsevier B.V. All rights reserved.
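The Dice and Jaccard indices used for evaluation compare the overlap of two binary masks. A minimal definition on voxel-index sets (toy 2-D coordinates here, standing in for 3-D brain masks):

```python
def dice(mask_a, mask_b):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for binary masks as sets."""
    inter = len(mask_a & mask_b)
    return 2 * inter / (len(mask_a) + len(mask_b))

def jaccard(mask_a, mask_b):
    """Jaccard similarity: |A∩B| / |A∪B|."""
    inter = len(mask_a & mask_b)
    return inter / len(mask_a | mask_b)

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}      # toy automated brain mask
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}    # toy manual reference mask
print(dice(auto, manual), jaccard(auto, manual))  # -> 0.75 0.6
```

Both indices reach 1.0 only for identical masks; Dice weights the intersection more heavily, which is why it is usually the larger of the two.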

  14. Quantification of Eosinophilic Granule Protein Deposition in Biopsies of Inflammatory Skin Diseases by Automated Image Analysis of Highly Sensitive Immunostaining

    Directory of Open Access Journals (Sweden)

    Peter Kiehl

    1999-01-01

    Eosinophilic granulocytes are major effector cells in inflammation. Extracellular deposition of toxic eosinophilic granule proteins (EGPs), but not the presence of intact eosinophils, is crucial for their functional effect in situ. As even recent morphometric approaches to quantify the involvement of eosinophils in inflammation have been based only on cell counting, we developed a new method for the cell-independent quantification of EGPs by image analysis of immunostaining. Highly sensitive, automated immunohistochemistry was done on paraffin sections of inflammatory skin diseases with 4 different primary antibodies against EGPs. Image analysis of immunostaining was performed by colour translation, linear combination and automated thresholding. Using strictly standardized protocols, the assay was proven to be specific and accurate concerning segmentation in 8916 fields of 520 sections, well reproducible in repeated measurements and reliable over 16 weeks observation time. The method may be valuable for the cell-independent segmentation of immunostaining in other applications as well.
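The pipeline ends in "automated thresholding"; the abstract does not say which algorithm, but Otsu's method is the textbook choice for separating stained from unstained pixels, and is easy to sketch on a 1-D pixel list:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's automated threshold: maximize between-class variance.
    Assumed here as an illustration; the paper's exact method is unstated."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]                 # class 0: intensities <= t
        if w0 == 0:
            continue
        w1 = total - w0               # class 1: intensities > t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy data: background around 20, stained deposits around 200
pixels = [18, 20, 22, 19, 21, 198, 200, 202, 199, 201]
t = otsu_threshold(pixels)
print(t, sum(p > t for p in pixels))  # threshold, count of stained pixels
```

In the paper's setting the thresholded channel would be a linear combination of colour-translated channels, isolating the EGP stain before segmentation.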

  15. Automated quality assessment of structural magnetic resonance images in children: Comparison with visual inspection and surface-based reconstruction.

    Science.gov (United States)

    White, Tonya; Jansen, Philip R; Muetzel, Ryan L; Sudre, Gustavo; El Marroun, Hanan; Tiemeier, Henning; Qiu, Anqi; Shaw, Philip; Michael, Andrew M; Verhulst, Frank C

    2018-03-01

    Motion-related artifacts are one of the major challenges associated with pediatric neuroimaging. Recent studies have shown a relationship between visual quality ratings of T1 images and cortical reconstruction measures. Automated algorithms offer more precision in quantifying movement-related artifacts compared to visual inspection. Thus, the goal of this study was to test three different automated quality assessment algorithms for structural MRI scans. The three algorithms included a Fourier-, an integral-, and a gradient-based approach, which were run on raw T1-weighted imaging data collected from four different scanners. The four cohorts included a total of 6,662 MRI scans from two waves of the Generation R Study, the NIH NHGRI Study, and the GUSTO Study. Using receiver operating characteristics with visually inspected quality ratings of the T1 images, the area under the curve (AUC) for the gradient algorithm, which performed better than either the integral or Fourier approaches, was 0.95, 0.88, and 0.82 for the Generation R, NHGRI, and GUSTO studies, respectively. For scans of poor initial quality, repeating the scan often resulted in a better quality second image. Finally, we found that even minor differences in automated quality measurements were associated with FreeSurfer derived measures of cortical thickness and surface area, even in scans that were rated as good quality. Our findings suggest that the inclusion of automated quality assessment measures can augment visual inspection and may find use as a covariate in analyses or to identify thresholds to exclude poor quality data. © 2017 Wiley Periodicals, Inc.
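The winning metric was gradient-based. The exact formulation is not given in the abstract, but the core idea, that motion blur suppresses local intensity gradients, can be shown with a mean-absolute-gradient score (an assumed, simplified stand-in):

```python
def gradient_sharpness(image):
    """Mean absolute intensity gradient over horizontal and vertical
    neighbours; motion-blurred scans score lower. A simplified stand-in
    for the paper's gradient-based QC metric."""
    rows, cols = len(image), len(image[0])
    total, count = 0.0, 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                total += abs(image[r][c + 1] - image[r][c]); count += 1
            if r + 1 < rows:
                total += abs(image[r + 1][c] - image[r][c]); count += 1
    return total / count

sharp = [[0, 100, 0], [100, 0, 100], [0, 100, 0]]     # crisp toy slice
blurred = [[50, 50, 50], [50, 50, 50], [50, 50, 50]]  # fully smeared slice
print(gradient_sharpness(sharp) > gradient_sharpness(blurred))  # -> True
```

Thresholding such a score against visual ratings is what produces the ROC curves and AUCs reported above.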

  16. Automated image alignment and segmentation to follow progression of geographic atrophy in age-related macular degeneration.

    Science.gov (United States)

    Ramsey, David J; Sunness, Janet S; Malviya, Poorva; Applegate, Carol; Hager, Gregory D; Handa, James T

    2014-07-01

    To develop a computer-based image segmentation method for standardizing the quantification of geographic atrophy (GA). The authors present an automated image segmentation method based on the fuzzy c-means clustering algorithm for the detection of GA lesions. The method is evaluated by comparing computerized segmentation against outlines of GA drawn by an expert grader for a longitudinal series of fundus autofluorescence images with paired 30° color fundus photographs for 10 patients. The automated segmentation method showed excellent agreement with an expert grader for fundus autofluorescence images, achieving a performance level of 94 ± 5% sensitivity and 98 ± 2% specificity on a per-pixel basis for the detection of GA area, but performed less well on color fundus photographs with a sensitivity of 47 ± 26% and specificity of 98 ± 2%. The segmentation algorithm identified 75 ± 16% of the GA border correctly in fundus autofluorescence images compared with just 42 ± 25% for color fundus photographs. The results of this study demonstrate a promising computerized segmentation method that may enhance the reproducibility of GA measurement and provide an objective strategy to assist an expert in the grading of images.
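The segmentation is built on fuzzy c-means clustering, which assigns each pixel a graded membership to every cluster rather than a hard label. A minimal 1-D version with a deterministic two-cluster initialization (the intensity values are toy numbers, not fundus data):

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means; returns centers and memberships u[i][k].
    Deterministic init at the data extremes (assumes c == 2)."""
    centers = [min(data), max(data)]
    u = []
    for _ in range(iters):
        u = []
        for x in data:
            row = []
            for ck in centers:
                d = abs(x - ck) or 1e-12          # guard zero distance
                row.append(1.0 / sum((d / (abs(x - cj) or 1e-12)) ** (2 / (m - 1))
                                     for cj in centers))
            u.append(row)
        # update centers as membership-weighted means
        centers = [sum(u[i][k] ** m * data[i] for i in range(len(data))) /
                   sum(u[i][k] ** m for i in range(len(data)))
                   for k in range(c)]
    return centers, u

# Toy autofluorescence intensities: dark atrophic lesion vs. bright background
data = [5, 8, 6, 7, 120, 118, 122, 119]
centers, u = fuzzy_c_means(data)
print(sorted(round(ck, 1) for ck in centers))
```

In the paper's 2-D setting, thresholding the lesion-cluster membership map yields the GA segmentation that was compared against expert outlines.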

  17. Automated quantification and sizing of unbranched filamentous cyanobacteria by model-based object-oriented image analysis.

    Science.gov (United States)

    Zeder, Michael; Van den Wyngaert, Silke; Köster, Oliver; Felder, Kathrin M; Pernthaler, Jakob

    2010-03-01

    Quantification and sizing of filamentous cyanobacteria in environmental samples or cultures are time-consuming and are often performed by using manual or semiautomated microscopic analysis. Automation of conventional image analysis is difficult because filaments may exhibit great variations in length and patchy autofluorescence. Moreover, individual filaments frequently cross each other in microscopic preparations, as deduced by modeling. This paper describes a novel approach based on object-oriented image analysis to simultaneously determine (i) filament number, (ii) individual filament lengths, and (iii) the cumulative filament length of unbranched cyanobacterial morphotypes in fluorescent microscope images in a fully automated high-throughput manner. Special emphasis was placed on correct detection of overlapping objects by image analysis and on appropriate coverage of filament length distribution by using large composite images. The method was validated with a data set for Planktothrix rubescens from field samples and was compared with manual filament tracing, the line intercept method, and the Utermöhl counting approach. The computer program described allows batch processing of large images from any appropriate source and annotation of detected filaments. It requires no user interaction, is available free, and thus might be a useful tool for basic research and drinking water quality control.
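The three outputs above (filament count, individual lengths, cumulative length) can be approximated on a binary mask by connected-component labelling, here via flood fill with 8-connectivity; this is a crude pixel-count proxy, not the paper's model-based object detection, and it does not resolve crossing filaments:

```python
def filament_stats(mask):
    """Count connected objects in a binary mask and report per-object
    pixel counts via flood fill (8-connectivity)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    lengths = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                stack, size = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols and
                                    mask[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                lengths.append(size)
    return len(lengths), lengths

# Toy binary mask: two short filaments and one isolated pixel
mask = [[1, 1, 1, 0, 0],
        [0, 0, 0, 0, 1],
        [0, 0, 0, 1, 0],
        [1, 0, 0, 1, 0]]
print(filament_stats(mask))  # -> (3, [3, 3, 1])
```

The cumulative length is then just `sum(lengths)`; handling overlapping filaments correctly is exactly the harder problem the paper's object-oriented approach addresses.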

  18. Development and application of an automated analysis method for individual cerebral perfusion single photon emission tomography images

    CERN Document Server

    Cluckie, A J

    2001-01-01

    Neurological images may be analysed by performing voxel by voxel comparisons with a group of control subject images. An automated, 3D, voxel-based method has been developed for the analysis of individual single photon emission tomography (SPET) scans. Clusters of voxels are identified that represent regions of abnormal radiopharmaceutical uptake. Morphological operators are applied to reduce noise in the clusters, then quantitative estimates of the size and degree of the radiopharmaceutical uptake abnormalities are derived. Statistical inference has been performed using a Monte Carlo method that has not previously been applied to SPET scans, or for the analysis of individual images. This has been validated for group comparisons of SPET scans and for the analysis of an individual image using comparison with a group. Accurate statistical inference was obtained independent of experimental factors such as degrees of freedom, image smoothing and voxel significance level threshold. The analysis method has been eval...
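The Monte Carlo inference compares one scan against a control group. As a hedged illustration of the general idea (not the thesis's actual voxel-cluster statistic), a permutation test on a single regional uptake value against synthetic control values looks like this:

```python
import random

def permutation_pvalue(patient, controls, n_perm=2000, seed=0):
    """Monte Carlo permutation test: is the patient's regional uptake
    abnormally low relative to the control group? Relabels the pooled
    values; a small tolerance guards float summation-order effects."""
    rng = random.Random(seed)
    observed = sum(controls) / len(controls) - patient
    pooled = controls + [patient]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = sum(pooled[:-1]) / len(controls) - pooled[-1]
        if stat >= observed - 1e-9:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Synthetic normalized uptake values for 8 controls and one patient region
controls = [1.02, 0.98, 1.05, 0.97, 1.01, 0.99, 1.03, 1.00]
print(round(permutation_pvalue(0.80, controls), 3))
```

With only nine exchangeable values the smallest attainable p-value is about 1/9, which mirrors the thesis's point that accurate inference for an individual-versus-group comparison needs care with the reference distribution.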

  19. Fully automated segmentation of whole breast using dynamic programming in dynamic contrast enhanced MR images.

    Science.gov (United States)

    Jiang, Luan; Hu, Xiaoxin; Xiao, Qin; Gu, Yajia; Li, Qiang

    2017-06-01

    Amount of fibroglandular tissue (FGT) and level of background parenchymal enhancement (BPE) in breast dynamic contrast enhanced magnetic resonance images (DCE-MRI) are suggested as strong indices for assessing breast cancer risk. Whole breast segmentation is the first important task for quantitative analysis of FGT and BPE in three-dimensional (3-D) DCE-MRI. The purpose of this study is to develop and evaluate a fully automated technique for accurate segmentation of the whole breast in 3-D fat-suppressed DCE-MRI. The whole breast segmentation consisted of two major steps, i.e., the delineation of chest wall line and breast skin line. First, a sectional dynamic programming method was employed to trace the upper and/or lower boundaries of the chest wall by use of the positive and/or negative gradient within a band along the chest wall in each 2-D slice. Second, another dynamic programming was applied to delineate the skin-air boundary slice-by-slice based on the saturated gradient of the enhanced image obtained with the prior statistical distribution of gray levels of the breast skin line. Starting from the central slice, these two steps employed a Gaussian function to limit the search range of boundaries in adjacent slices based on the continuity of chest wall line and breast skin line. Finally, local breast skin line detection was applied around armpit to complete the whole breast segmentation. The method was validated with a representative dataset of 100 3-D breast DCE-MRI scans through objective quantification and subjective evaluation. The MR scans in the dataset were acquired with four MR scanners in five spatial resolutions. The cases were assessed with four breast density ratings by radiologists based on Breast Imaging Reporting and Data System (BI-RADS) of American College of Radiology. Our segmentation algorithm achieved a Dice volume overlap measure of 95.8 ± 1.2% and a volume difference measure of 8.4 ± 2.4% between the automatically and manually segmented whole breasts.
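Both boundary-delineation steps rest on dynamic programming over a gradient-derived cost image. A minimal seam-style trace, finding the minimum-cost top-to-bottom path that moves at most one column per row, captures the mechanism (toy cost values, not the paper's gradient bands):

```python
def trace_boundary(cost):
    """Dynamic programming: minimum-cost top-to-bottom path, moving at
    most one column left/right per row (a seam-style boundary trace)."""
    rows, cols = len(cost), len(cost[0])
    dp = [row[:] for row in cost]
    back = [[0] * cols for _ in range(rows)]
    for r in range(1, rows):
        for c in range(cols):
            choices = [(dp[r - 1][pc], pc)
                       for pc in (c - 1, c, c + 1) if 0 <= pc < cols]
            best, back[r][c] = min(choices)
            dp[r][c] += best
    c = min(range(cols), key=lambda j: dp[rows - 1][j])
    path = [c]
    for r in range(rows - 1, 0, -1):
        c = back[r][c]
        path.append(c)
    return path[::-1]

# Toy gradient-cost image: low cost (dark) along the true boundary
cost = [[9, 1, 9, 9],
        [9, 9, 1, 9],
        [9, 9, 1, 9],
        [9, 1, 9, 9]]
print(trace_boundary(cost))  # -> [1, 2, 2, 1]
```

The paper runs such traces per 2-D slice, with a Gaussian prior narrowing the search band around the boundary found in the neighbouring slice.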

  20. A study of whether automated Diabetic Retinopathy Image Assessment could replace manual grading steps in the English National Screening Programme.

    Science.gov (United States)

    Kapetanakis, Venediktos V; Rudnicka, Alicja R; Liew, Gerald; Owen, Christopher G; Lee, Aaron; Louw, Vern; Bolter, Louis; Anderson, John; Egan, Catherine; Salas-Vega, Sebastian; Rudisill, Caroline; Taylor, Paul; Tufail, Adnan

    2015-09-01

    Diabetic retinopathy screening in England involves labour intensive manual grading of digital retinal images. We present the plan for an observational retrospective study of whether automated systems could replace one or more steps of human grading. Patients aged 12 or older who attended the Diabetes Eye Screening programme, Homerton University Hospital (London) between 1 June 2012 and 4 November 2013 had macular and disc-centred retinal images taken. All screening episodes were manually graded and will additionally be graded by three automated systems. Each system will process all screening episodes, and screening performance (sensitivity, false positive rate, likelihood ratios) and diagnostic accuracy (95% confidence intervals of screening performance measures) will be quantified. A sub-set of gradings will be validated by an approved Reading Centre. Additional analyses will explore the effect of altering thresholds for disease detection within each automated system on screening performance. 2,782/20,258 diabetes patients were referred to ophthalmologists for further examination. Prevalence of maculopathy (M1), pre-proliferative retinopathy (R2), and proliferative retinopathy (R3) were 7.9%, 3.1% and 1.2%, respectively; 4749 (23%) patients were diagnosed with background retinopathy (R1); 1.5% were considered ungradable by human graders. Retinopathy prevalence was similar to other English diabetic screening programmes, so findings should be generalizable. The study population size will allow the detection of differences in screening performance between the human and automated grading systems as small as 2%. The project will compare performance and economic costs of manual versus automated systems. © The Author(s) 2015.
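The screening-performance measures named above (sensitivity, false positive rate, likelihood ratios) all follow from a confusion matrix; the counts below are hypothetical, chosen only to show the arithmetic:

```python
def screening_performance(tp, fp, fn, tn):
    """Sensitivity, false positive rate, and likelihood ratios, the
    measures used to compare graders with automated systems."""
    sens = tp / (tp + fn)
    fpr = fp / (fp + tn)
    spec = 1 - fpr
    return {"sensitivity": sens,
            "false_positive_rate": fpr,
            "LR+": sens / fpr,           # positive likelihood ratio
            "LR-": (1 - sens) / spec}    # negative likelihood ratio

perf = screening_performance(tp=92, fp=150, fn=8, tn=750)  # hypothetical counts
print({k: round(v, 3) for k, v in perf.items()})
```

A 2% difference in any of these proportions, the study's detection target, corresponds to shifting only a handful of cases between cells of the confusion matrix at this sample size, which is why the large cohort matters.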

  1. CT angiography for planning transcatheter aortic valve replacement using automated tube voltage selection: Image quality and radiation exposure

    Energy Technology Data Exchange (ETDEWEB)

    Mangold, Stefanie [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Department of Diagnostic and Interventional Radiology, Eberhard-Karls University Tuebingen, Tuebingen (Germany); De Cecco, Carlo N. [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Schoepf, U. Joseph, E-mail: schoepf@musc.edu [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Division of Cardiology, Department of Medicine, Medical University of South Carolina, Charleston, SC (United States); Kuhlman, Taylor S.; Varga-Szemes, Akos [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Caruso, Damiano [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Department of Radiological Sciences, Oncology and Pathology, University of Rome “Sapienza”, Rome (Italy); Duguay, Taylor M. [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Tesche, Christian [Department of Cardiology, Heart Centre Munich-Bogenhausen, Munich (Germany); Vogl, Thomas J. [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt (Germany); Nikolaou, Konstantin [Department of Diagnostic and Interventional Radiology, Eberhard-Karls University Tuebingen, Tuebingen (Germany); and others

    2017-01-15

    Highlights: • TAVR-planning CT was performed with automated tube voltage selection. • Automated tube voltage selection enables individual tube voltage adaptation. • Image quality was diagnostic while radiation exposure was significantly decreased. - Abstract: Purpose: To assess image quality and accuracy of CT angiography (CTA) for transcatheter aortic valve replacement (TAVR) planning performed with 3rd generation dual-source CT (DSCT). Material and methods: We evaluated 125 patients who underwent TAVR-planning CTA on 3rd generation DSCT. A two-part protocol was performed including retrospectively ECG-gated coronary CTA (CCTA) and prospectively ECG-triggered aortoiliac CTA using 60 mL of contrast medium. Automated tube voltage selection and advanced iterative reconstruction were applied. Effective dose (ED), signal-to-noise (SNR) and contrast-to-noise ratios (CNR) were calculated. Five-point scales were used for subjective image quality analysis. In patients who underwent TAVR, sizing parameters were obtained. Results: Image quality was rated good to excellent in 97.6% of CCTA and 100% of aortoiliac CTAs. CTA studies at >100 kV showed decreased objective image quality compared to 70–100 kV (SNR, all p ≤ 0.0459; CNR, all p ≤ 0.0462). Mean ED increased continuously from 70 to >100 kV (CCTA: 4.5 ± 1.7 mSv–13.6 ± 2.9 mSv, all p ≤ 0.0233; aortoiliac CTA: 2.4 ± 0.9 mSv–6.8 ± 2.7 mSv, all p ≤ 0.0414). In 39 patients TAVR was performed and annulus diameter was within the recommended range in all patients. No severe cardiac or vascular complications were noted. Conclusion: 3rd generation DSCT provides diagnostic image quality in TAVR-planning CTA and facilitates reliable assessment of TAVR device and delivery option while reducing radiation dose.
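The objective image-quality comparison above rests on SNR and CNR computed from region-of-interest statistics. The definitions are standard; the HU values below are hypothetical illustrations, not the study's measurements:

```python
def snr(mean_signal, sd_noise):
    """Signal-to-noise ratio: mean ROI attenuation / noise (SD in a
    uniform reference ROI)."""
    return mean_signal / sd_noise

def cnr(mean_signal, mean_background, sd_noise):
    """Contrast-to-noise ratio between vessel and background ROIs."""
    return (mean_signal - mean_background) / sd_noise

# Hypothetical ROI statistics (HU) from an aortic CTA
print(round(snr(450, 25), 1), round(cnr(450, 60, 25), 1))
```

Lower tube voltages boost iodine attenuation (the numerator) faster than they raise noise, which is how the 70-100 kV scans achieved higher SNR and CNR at lower dose.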

  2. Using dual-energy x-ray imaging to enhance automated lung tumor tracking during real-time adaptive radiotherapy

    International Nuclear Information System (INIS)

    Menten, Martin J.; Fast, Martin F.; Nill, Simeon; Oelfke, Uwe

    2015-01-01

    Purpose: Real-time, markerless localization of lung tumors with kV imaging is often inhibited by ribs obscuring the tumor and poor soft-tissue contrast. This study investigates the use of dual-energy imaging, which can generate radiographs with reduced bone visibility, to enhance automated lung tumor tracking for real-time adaptive radiotherapy. Methods: kV images of an anthropomorphic breathing chest phantom were experimentally acquired and radiographs of actual lung cancer patients were Monte-Carlo-simulated at three imaging settings: low-energy (70 kVp, 1.5 mAs), high-energy (140 kVp, 2.5 mAs, 1 mm additional tin filtration), and clinical (120 kVp, 0.25 mAs). Regular dual-energy images were calculated by weighted logarithmic subtraction of high- and low-energy images and filter-free dual-energy images were generated from clinical and low-energy radiographs. The weighting factor to calculate the dual-energy images was determined by means of a novel objective score. The usefulness of dual-energy imaging for real-time tracking with an automated template matching algorithm was investigated. Results: Regular dual-energy imaging was able to increase tracking accuracy in left–right images of the anthropomorphic phantom as well as in 7 out of 24 investigated patient cases. Tracking accuracy remained comparable in three cases and decreased in five cases. Filter-free dual-energy imaging was only able to increase accuracy in 2 out of 24 cases. In four cases no change in accuracy was observed and tracking accuracy worsened in nine cases. In 9 out of 24 cases, it was not possible to define a tracking template due to poor soft-tissue contrast regardless of input images. The mean localization errors using clinical, regular dual-energy, and filter-free dual-energy radiographs were 3.85, 3.32, and 5.24 mm, respectively. Tracking success was dependent on tumor position, tumor size, imaging beam angle, and patient size. Conclusions: This study has highlighted the influence of

  3. Using dual-energy x-ray imaging to enhance automated lung tumor tracking during real-time adaptive radiotherapy.

    Science.gov (United States)

    Menten, Martin J; Fast, Martin F; Nill, Simeon; Oelfke, Uwe

    2015-12-01

    Real-time, markerless localization of lung tumors with kV imaging is often inhibited by ribs obscuring the tumor and poor soft-tissue contrast. This study investigates the use of dual-energy imaging, which can generate radiographs with reduced bone visibility, to enhance automated lung tumor tracking for real-time adaptive radiotherapy. kV images of an anthropomorphic breathing chest phantom were experimentally acquired and radiographs of actual lung cancer patients were Monte-Carlo-simulated at three imaging settings: low-energy (70 kVp, 1.5 mAs), high-energy (140 kVp, 2.5 mAs, 1 mm additional tin filtration), and clinical (120 kVp, 0.25 mAs). Regular dual-energy images were calculated by weighted logarithmic subtraction of high- and low-energy images and filter-free dual-energy images were generated from clinical and low-energy radiographs. The weighting factor to calculate the dual-energy images was determined by means of a novel objective score. The usefulness of dual-energy imaging for real-time tracking with an automated template matching algorithm was investigated. Regular dual-energy imaging was able to increase tracking accuracy in left-right images of the anthropomorphic phantom as well as in 7 out of 24 investigated patient cases. Tracking accuracy remained comparable in three cases and decreased in five cases. Filter-free dual-energy imaging was only able to increase accuracy in 2 out of 24 cases. In four cases no change in accuracy was observed and tracking accuracy worsened in nine cases. In 9 out of 24 cases, it was not possible to define a tracking template due to poor soft-tissue contrast regardless of input images. The mean localization errors using clinical, regular dual-energy, and filter-free dual-energy radiographs were 3.85, 3.32, and 5.24 mm, respectively. Tracking success was dependent on tumor position, tumor size, imaging beam angle, and patient size. This study has highlighted the influence of patient anatomy on the success rate of real

  4. Using dual-energy x-ray imaging to enhance automated lung tumor tracking during real-time adaptive radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Menten, Martin J., E-mail: martin.menten@icr.ac.uk; Fast, Martin F.; Nill, Simeon; Oelfke, Uwe, E-mail: uwe.oelfke@icr.ac.uk [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom)

    2015-12-15

    Purpose: Real-time, markerless localization of lung tumors with kV imaging is often inhibited by ribs obscuring the tumor and poor soft-tissue contrast. This study investigates the use of dual-energy imaging, which can generate radiographs with reduced bone visibility, to enhance automated lung tumor tracking for real-time adaptive radiotherapy. Methods: kV images of an anthropomorphic breathing chest phantom were experimentally acquired and radiographs of actual lung cancer patients were Monte-Carlo-simulated at three imaging settings: low-energy (70 kVp, 1.5 mAs), high-energy (140 kVp, 2.5 mAs, 1 mm additional tin filtration), and clinical (120 kVp, 0.25 mAs). Regular dual-energy images were calculated by weighted logarithmic subtraction of high- and low-energy images and filter-free dual-energy images were generated from clinical and low-energy radiographs. The weighting factor to calculate the dual-energy images was determined by means of a novel objective score. The usefulness of dual-energy imaging for real-time tracking with an automated template matching algorithm was investigated. Results: Regular dual-energy imaging was able to increase tracking accuracy in left–right images of the anthropomorphic phantom as well as in 7 out of 24 investigated patient cases. Tracking accuracy remained comparable in three cases and decreased in five cases. Filter-free dual-energy imaging was only able to increase accuracy in 2 out of 24 cases. In four cases no change in accuracy was observed and tracking accuracy worsened in nine cases. In 9 out of 24 cases, it was not possible to define a tracking template due to poor soft-tissue contrast regardless of input images. The mean localization errors using clinical, regular dual-energy, and filter-free dual-energy radiographs were 3.85, 3.32, and 5.24 mm, respectively. Tracking success was dependent on tumor position, tumor size, imaging beam angle, and patient size. Conclusions: This study has highlighted the influence of
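The three records above describe the same weighted logarithmic subtraction for bone suppression. A minimal pixel-wise sketch (toy 2x2 detector-count values, hypothetical weighting factor w; the papers choose w via their objective score):

```python
import math

def dual_energy_subtract(high, low, w):
    """Weighted logarithmic subtraction, pixel-wise:
    DE = ln(high) - w * ln(low). For a suitable w, the bone signal
    common to both energies cancels, leaving soft tissue."""
    return [[math.log(h) - w * math.log(l)
             for h, l in zip(hrow, lrow)]
            for hrow, lrow in zip(high, low)]

# Toy 2x2 high- and low-energy radiographs (detector counts, hypothetical)
high = [[1000.0, 400.0], [800.0, 300.0]]
low = [[900.0, 250.0], [700.0, 180.0]]
de = dual_energy_subtract(high, low, w=0.5)
print([[round(v, 2) for v in row] for row in de])
```

The "filter-free" variant substitutes the clinical (120 kVp, no tin filter) image for the high-energy one; the smaller spectral separation is one plausible reason it performed worse in the reported results.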

  5. CUDA-based acceleration and BPN-assisted automation of bilateral filtering for brain MR image restoration.

    Science.gov (United States)

    Chang, Herng-Hua; Chang, Yu-Ning

    2017-04-01

    Bilateral filters have been substantially exploited in numerous magnetic resonance (MR) image restoration applications for decades. Due to the lack of a theoretical basis for setting the filter parameters, empirical manipulation with fixed values and noise variance-related adjustments has generally been employed. The outcome of these strategies is usually sensitive to variation in the brain structures, and not all three parameter values are optimal. This article investigates the optimal setting of the bilateral filter, from which an accelerated and automated restoration framework is developed. To reduce the computational burden of the bilateral filter, parallel computing with the graphics processing unit (GPU) architecture is first introduced. The NVIDIA Tesla K40c GPU with the compute unified device architecture (CUDA) functionality is specifically utilized to emphasize thread usage and memory resources. To correlate the filter parameters with image characteristics for automation, optimal image texture features are then acquired based on the sequential forward floating selection (SFFS) scheme. The selected features are introduced into the back propagation network (BPN) model for filter parameter estimation. Finally, the k-fold cross validation method is adopted to evaluate the accuracy of the proposed filter parameter prediction framework. A wide variety of T1-weighted brain MR images with various scenarios of noise levels and anatomic structures were utilized to train and validate this new parameter decision system with CUDA-based bilateral filtering. For a common brain MR image volume of 256 × 256 × 256 pixels, the speed-up gain reached 284. Six optimal texture features were acquired and associated with the BPN to establish a "high accuracy" parameter prediction system, which achieved a mean absolute percentage error (MAPE) of 5.6%. Automatic restoration results on 2460 brain MR images received an average
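For reference, a brute-force bilateral filter in NumPy shows what the CUDA kernel parallelizes. This is an illustrative sketch, not the authors' GPU implementation; the three parameters (window radius, spatial sigma, range sigma) stand in for the filter parameters the BPN is trained to predict.

```python
import numpy as np

def bilateral_filter(img, radius, sigma_s, sigma_r):
    """Brute-force bilateral filter (reference implementation).

    Each output pixel is a weighted mean of its window, where the weight
    combines a spatial Gaussian (sigma_s) and a range Gaussian on the
    intensity difference to the center pixel (sigma_r).
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_s**2))
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            rangew = np.exp(-(window - img[i, j])**2 / (2.0 * sigma_r**2))
            weights = spatial * rangew
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out

# Noisy step edge: the filter flattens noise but preserves the edge,
# which is why it suits MR restoration.
rng = np.random.default_rng(0)
step = np.hstack([np.zeros((16, 8)), np.ones((16, 8))])
noisy = step + 0.05 * rng.standard_normal(step.shape)
smoothed = bilateral_filter(noisy, radius=2, sigma_s=2.0, sigma_r=0.2)
```

The O(pixels × window) inner loop is what the GPU version distributes across threads.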

  6. An image analysis pipeline for automated classification of imaging light conditions and for quantification of wheat canopy cover time series in field phenotyping.

    Science.gov (United States)

    Yu, Kang; Kirchgessner, Norbert; Grieder, Christoph; Walter, Achim; Hund, Andreas

    2017-01-01

    Robust segmentation of canopy cover (CC) from large amounts of images taken under different illumination/light conditions in the field is essential for high throughput field phenotyping (HTFP). We attempted to address this challenge by evaluating different vegetation indices and segmentation methods for analyzing images taken at varying illuminations throughout the early growth phase of wheat in the field. 40,000 images taken on 350 wheat genotypes in two consecutive years were assessed for this purpose. We proposed an image analysis pipeline that allowed for image segmentation using automated thresholding and machine-learning-based classification methods and for global quality control of the resulting CC time series. This pipeline enabled accurate classification of imaging light conditions into two illumination scenarios, i.e. high light-contrast (HLC) and low light-contrast (LLC), in a series of continuously collected images by employing a support vector machine (SVM) model. Accordingly, the scenario-specific pixel-based classification models employing decision tree and SVM algorithms were able to outperform the automated thresholding methods and improved segmentation accuracy compared to general models that did not discriminate illumination differences. The three-band vegetation difference index (NDI3) was enhanced for segmentation by incorporating the HSV-V and the CIE Lab-a color components, i.e. the product images NDI3*V and NDI3*a. Field illumination scenarios can be successfully identified by the proposed image analysis pipeline, and the illumination-specific image segmentation can improve the quantification of CC development. The integrated image analysis pipeline proposed in this study provides great potential for automatically delivering robust data in HTFP.
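The index-then-threshold pattern underlying the pipeline can be sketched as follows. The excess-green index and the small Otsu routine below are generic stand-ins, not the paper's NDI3*V/NDI3*a product images or its trained SVM and decision-tree models.

```python
import numpy as np

def excess_green_index(rgb):
    """A simple vegetation index (excess green). The paper's NDI3 and its
    NDI3*V / NDI3*a product images follow the same pattern of combining
    color channels into a scalar map before thresholding."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - r - b

def otsu_threshold(values, bins=64):
    """Automated thresholding: pick the cut that maximizes between-class
    variance of the index histogram."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = hist[:k].sum(), hist[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:k] * centers[:k]).sum() / w0
        m1 = (hist[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# Toy image: alternating rows of green "canopy" and brown "soil" pixels.
canopy = np.array([0.2, 0.6, 0.2])   # greenish
soil = np.array([0.4, 0.3, 0.2])     # brownish
img = np.where(np.arange(100)[:, None, None] % 2 == 0, canopy, soil)
idx = excess_green_index(img)
mask = idx > otsu_threshold(idx.ravel())
```

The pipeline's contribution is choosing the index and classifier per illumination scenario rather than applying one global threshold.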

  7. Automated Segmentation of Light-Sheet Fluorescent Imaging to Characterize Experimental Doxorubicin-Induced Cardiac Injury and Repair.

    Science.gov (United States)

    Packard, René R Sevag; Baek, Kyung In; Beebe, Tyler; Jen, Nelson; Ding, Yichen; Shi, Feng; Fei, Peng; Kang, Bong Jin; Chen, Po-Heng; Gau, Jonathan; Chen, Michael; Tang, Jonathan Y; Shih, Yu-Huan; Ding, Yonghe; Li, Debiao; Xu, Xiaolei; Hsiai, Tzung K

    2017-08-17

    This study sought to develop an automated segmentation approach based on histogram analysis of raw axial images acquired by light-sheet fluorescent imaging (LSFI) to establish rapid reconstruction of the 3-D zebrafish cardiac architecture in response to doxorubicin-induced injury and repair. Input images underwent a 4-step automated image segmentation process consisting of stationary noise removal, histogram equalization, adaptive thresholding, and image fusion followed by 3-D reconstruction. We applied this method to 3-month old zebrafish injected intraperitoneally with doxorubicin followed by LSFI at 3, 30, and 60 days post-injection. We observed an initial decrease in myocardial and endocardial cavity volumes at day 3, followed by ventricular remodeling at day 30, and recovery at day 60 (P < 0.05, n = 7-19). Doxorubicin-injected fish developed ventricular diastolic dysfunction and worsening global cardiac function evidenced by elevated E/A ratios and myocardial performance indexes quantified by pulsed-wave Doppler ultrasound at day 30, followed by normalization at day 60 (P < 0.05, n = 9-20). Treatment with the γ-secretase inhibitor, DAPT, to inhibit cleavage and release of Notch Intracellular Domain (NICD) blocked cardiac architectural regeneration and restoration of ventricular function at day 60 (P < 0.05, n = 6-14). Our approach provides a high-throughput model with translational implications for drug discovery and genetic modifiers of chemotherapy-induced cardiomyopathy.
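Histogram equalization, the second of the four segmentation steps above, can be sketched as a remapping of intensities through the normalized cumulative histogram; the toy low-contrast image is an illustrative assumption.

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Histogram equalization: map each intensity through the normalized
    CDF of the image histogram so the output spans the full dynamic range.
    One step of the 4-step pipeline (noise removal, equalization,
    adaptive thresholding, fusion) described above."""
    quantized = np.clip((img * (levels - 1)).astype(int), 0, levels - 1)
    hist = np.bincount(quantized.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    return cdf[quantized]

# A low-contrast image occupying only [0.4, 0.6] spreads out toward [0, 1].
rng = np.random.default_rng(1)
low_contrast = 0.4 + 0.2 * rng.random((32, 32))
eq = equalize_histogram(low_contrast)
```

Stretching the contrast this way makes the subsequent adaptive thresholding of myocardial voxels far less sensitive to the raw LSFI intensity range.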

  8. Application of Reflectance Transformation Imaging Technique to Improve Automated Edge Detection in a Fossilized Oyster Reef

    Science.gov (United States)

    Djuricic, Ana; Puttonen, Eetu; Harzhauser, Mathias; Dorninger, Peter; Székely, Balázs; Mandic, Oleg; Nothegger, Clemens; Molnár, Gábor; Pfeifer, Norbert

    2016-04-01

    The world's largest fossilized oyster reef is located in Stetten, Lower Austria; it was excavated during field campaigns of the Natural History Museum Vienna between 2005 and 2008. It is studied in paleontology to learn about climate change from past events. To support this study, a laser scanning and photogrammetric campaign was organized in 2014 for 3D documentation of the large and complex site. The 3D point clouds and high-resolution images from this field campaign are visualized by photogrammetric methods in the form of digital surface models (DSM, 1 mm resolution) and orthophotos (0.5 mm resolution) to aid paleontological interpretation of the data. Due to the size of the reef, automated analysis techniques are needed to interpret all digital data obtained from the field. One of the key components in successful automation is the detection of oyster shell edges. We have tested Reflectance Transformation Imaging (RTI) to visualize the reef data sets for end-users through a cultural heritage viewing interface (RTIViewer). The implementation includes a Lambert shading method to visualize DSMs derived from terrestrial laser scanning using the scientific software OPALS. In contrast to shaded RTI, no hardware system of LED lights, or a body to rotate the light source around the object, is needed. The gray value of a given shaded pixel is related to the angle between the light source and the surface normal at that position: brighter values correspond to slope surfaces facing the light source, and increasing the zenith angle casts internal shading across the reef surface. In total, the oyster reef surface comprises 81 DSMs of 3 m x 2 m each. Their surface was illuminated by moving the virtual sun every 30 degrees (12 azimuth angles from 20-350) and every 20 degrees (4 zenith angles from 20-80). This technique provides paleontologists with an interactive approach to virtually inspect the oyster reef and to interpret the shell surface by changing the light source direction.
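The Lambert shading described above, where a pixel's gray value follows the angle between the virtual light direction and the surface normal, can be sketched as follows. The axis conventions and the toy ramp DSM are illustrative assumptions, not the OPALS implementation.

```python
import numpy as np

def lambert_shade(dsm, azimuth_deg, zenith_deg, cellsize=1.0):
    """Lambertian hillshade of a DSM: each pixel is the cosine of the angle
    between the light direction and the local surface normal, so slopes
    facing the virtual sun appear brighter."""
    az = np.radians(azimuth_deg)
    zen = np.radians(zenith_deg)
    # Unit vector pointing toward the virtual sun (z up).
    light = np.array([np.sin(az) * np.sin(zen),
                      np.cos(az) * np.sin(zen),
                      np.cos(zen)])
    dz_dy, dz_dx = np.gradient(dsm, cellsize)
    # Surface normals from the height gradient.
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(dsm)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return np.clip(normals @ light, 0.0, 1.0)

# A ramp rising with the row index faces the -y direction, so it is fully
# lit from azimuth 180 degrees and grazed from azimuth 0 degrees.
yy = np.arange(16, dtype=float)[:, None] * np.ones((1, 16))
shade_a = lambert_shade(yy, azimuth_deg=0, zenith_deg=45)
shade_b = lambert_shade(yy, azimuth_deg=180, zenith_deg=45)
```

Sweeping `azimuth_deg` and `zenith_deg` over a grid, as the 12 × 4 virtual-sun positions above do, yields the image stack an RTI viewer interpolates.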

  9. Screening of subfertile men for testicular carcinoma in situ by an automated image analysis-based cytological test of the ejaculate

    DEFF Research Database (Denmark)

    Almstrup, K; Lippert, Marianne; Mogensen, Hanne O

    2011-01-01

    and detected in ejaculates with specific CIS markers. We have built a high throughput framework involving automated immunocytochemical staining, scanning microscopy and in silico image analysis allowing automated detection and grading of CIS-like stained objects in semen samples. In this study, 1175 ejaculates...... a slightly lower sensitivity (0.51), possibly because of obstruction. We conclude that this novel non-invasive test combining automated immunocytochemistry and advanced image analysis allows identification of TC at the CIS stage with a high specificity, but a negative test does not completely exclude CIS...

  10. Automated brain tumor segmentation in magnetic resonance imaging based on sliding-window technique and symmetry analysis.

    Science.gov (United States)

    Lian, Yanyun; Song, Zhijian

    2014-01-01

    Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning, and monitoring of therapy. However, manual tumor segmentation, commonly used in the clinic, is time-consuming and challenging, and none of the existing automated methods is sufficiently robust, reliable, and efficient for clinical application. An accurate and automated tumor segmentation method has been developed that provides reproducible and objective results close to manual segmentation. Exploiting the symmetry of the human brain, we employed a sliding-window technique and the correlation coefficient to locate the tumor position. First, the image to be segmented was normalized, rotated, denoised, and bisected. Subsequently, vertical and horizontal sliding windows were applied in turn: two windows in the left and right halves of the brain image moved simultaneously, pixel by pixel, while the correlation coefficient between them was calculated. The pair of windows with the minimal correlation coefficient was thereby obtained; the window with the higher average gray value marks the tumor location, and its brightest pixel serves as the tumor locating point. Finally, the segmentation threshold was determined from the average gray value of the pixels in a square centered at the locating point with a side length of 10 pixels, and threshold segmentation and morphological operations were used to obtain the final tumor region. The method was evaluated on 3D FSPGR brain MR images of 10 patients. As a result, the average ratio of correct location was 93.4% for 575 slices containing tumor, the average Dice similarity coefficient was 0.77 per scan, and the average time spent on one scan was 40 seconds. A fully automated, simple and efficient segmentation method for brain tumor is proposed and promising for future clinical use. The correlation coefficient is a new and effective feature for tumor location.
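The symmetry-based localization step can be sketched as a brute-force search for the left/right window pair with minimal correlation. The toy image, window size, and the omission of the normalization, rotation, and denoising steps are simplifications of the method described above.

```python
import numpy as np

def min_symmetry_correlation(img, win=8):
    """Slide a window over the left half and its mirrored counterpart on
    the right half, returning the window offset with the lowest
    correlation coefficient. An asymmetric lesion breaks left/right
    symmetry and drives the correlation down at its location."""
    h, w = img.shape
    mid = w // 2
    left, right = img[:, :mid], img[:, mid:][:, ::-1]
    best_pos, best_corr = None, np.inf
    for i in range(h - win + 1):
        for j in range(mid - win + 1):
            a = left[i:i + win, j:j + win].ravel()
            b = right[i:i + win, j:j + win].ravel()
            c = np.corrcoef(a, b)[0, 1]
            if c < best_corr:
                best_corr, best_pos = c, (i, j)
    return best_pos, best_corr

# Symmetric background plus a bright blob only in the left half.
rng = np.random.default_rng(2)
half = rng.random((32, 16))
img = np.hstack([half, half[:, ::-1]])    # perfectly symmetric
img[10:14, 4:8] += 2.0                    # "tumor" breaks symmetry
pos, corr = min_symmetry_correlation(img)
```

In the full method, the brighter of the two minimal-correlation windows then marks the tumor side, and its brightest pixel seeds the thresholding step.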

  11. Three-Dimensional Reconstruction of the Bony Nasolacrimal Canal by Automated Segmentation of Computed Tomography Images.

    Directory of Open Access Journals (Sweden)

    Lucia Jañez-Garcia

    Full Text Available To apply a fully automated method to quantify the 3D structure of the bony nasolacrimal canal (NLC) from CT scans whereby the size and main morphometric characteristics of the canal can be determined. Cross-sectional study. 36 eyes of 18 healthy individuals. Using software designed to detect the boundaries of the NLC on CT images, 36 NLC reconstructions were prepared. These reconstructions were then used to calculate NLC volume. The NLC axis in each case was determined according to a polygonal model and to 2nd, 3rd and 4th degree polynomials. From these models, NLC sectional areas and length were determined. For each variable, descriptive statistics and normality tests (Kolmogorov-Smirnov and Shapiro-Wilk) were established. Time for segmentation, NLC volume, axis, sectional areas and length. Mean processing time was around 30 seconds for segmenting each canal. All the variables generated were normally distributed. Measurements obtained using the four models (polygonal, 2nd, 3rd and 4th degree polynomial), respectively, were: mean canal length 14.74, 14.3, 14.80, and 15.03 mm; mean sectional area 15.15, 11.77, 11.43, and 11.56 mm2; minimum sectional area 8.69, 7.62, 7.40, and 7.19 mm2; and mean depth of minimum sectional area (craniocaudal) 7.85, 7.71, 8.19, and 8.08 mm. The method proposed automatically reconstructs the NLC on CT scans. Using these reconstructions, morphometric measurements can be calculated from NLC axis estimates based on polygonal and 2nd, 3rd and 4th degree polynomial models.

  12. Automated Thermal Image Processing for Detection and Classification of Birds and Bats - FY2012 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Duberstein, Corey A.; Matzner, Shari; Cullinan, Valerie I.; Virden, Daniel J.; Myers, Joshua R.; Maxwell, Adam R.

    2012-09-01

    Surveying wildlife at risk from offshore wind energy development is difficult and expensive. Infrared video can be used to record birds and bats that pass through the camera view, but it is also time consuming and expensive to review video and determine what was recorded. We proposed to develop algorithms and software to identify and differentiate thermally detected targets of interest, allowing automated processing of thermal image data to enumerate birds, bats, and insects. During FY2012 we developed computer code within MATLAB to identify objects recorded in video and extract attribute information that describes them. We tested the efficiency of track identification using observer-based counts of tracks within segments of sample video. We examined object attributes, modeled the effects of random variability on attributes, and produced data smoothing techniques to limit random variation within attribute data. We also began drafting and testing methodology to identify objects recorded on video. In addition, we recorded approximately 10 hours of infrared video of various marine birds, passerine birds, and bats near the Pacific Northwest National Laboratory (PNNL) Marine Sciences Laboratory (MSL) at Sequim, Washington. A total of 6 hours of bird video was captured overlooking Sequim Bay over a series of weeks. An additional 2 hours of video of birds was captured during two weeks overlooking Dungeness Bay within the Strait of Juan de Fuca. Bats and passerine birds (swallows) were also recorded at dusk on the MSL campus during nine evenings. An observer noted the identity of objects viewed through the camera concurrently with recording. These video files will provide the information necessary to produce and test software developed during FY2013. The annotation will also form the basis for creation of a method to reliably identify recorded objects.

  13. Automated integer programming based separation of arteries and veins from thoracic CT images.

    Science.gov (United States)

    Payer, Christian; Pienn, Michael; Bálint, Zoltán; Shekhovtsov, Alexander; Talakic, Emina; Nagy, Eszter; Olschewski, Andrea; Olschewski, Horst; Urschler, Martin

    2016-12-01

    Automated computer-aided analysis of lung vessels has been shown to yield promising results for non-invasive diagnosis of lung diseases. To detect vascular changes which affect pulmonary arteries and veins differently, both compartments need to be identified. We present a novel, fully automatic method that separates arteries and veins in thoracic computed tomography images by combining local as well as global properties of pulmonary vessels. We split the problem into two parts: the extraction of multiple distinct vessel subtrees, and their subsequent labeling into arteries and veins. Subtree extraction is performed with an integer program (IP), based on local vessel geometry. As naively solving this IP is time-consuming, we show how to drastically reduce computational effort by reformulating it as a Markov Random Field. Afterwards, each subtree is labeled as either arterial or venous by a second IP, using two anatomical properties of pulmonary vessels: the uniform distribution of arteries and veins, and the parallel configuration and close proximity of arteries and bronchi. We evaluate algorithm performance by comparing the results with 25 voxel-based manual reference segmentations. On this dataset, we show good performance of the subtree extraction, consisting of very few non-vascular structures (median value: 0.9%) and merged subtrees (median value: 0.6%). The resulting separation of arteries and veins achieves a median voxel-based overlap of 96.3% with the manual reference segmentations, outperforming a state-of-the-art interactive method. In conclusion, our novel approach could become an integral part of computer-aided pulmonary diagnosis, where artery/vein separation is important. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Vertebral Body Compression Fractures and Bone Density: Automated Detection and Classification on CT Images.

    Science.gov (United States)

    Burns, Joseph E; Yao, Jianhua; Summers, Ronald M

    2017-09-01

    Purpose To create and validate a computer system with which to detect, localize, and classify compression fractures and measure bone density of thoracic and lumbar vertebral bodies on computed tomographic (CT) images. Materials and Methods Institutional review board approval was obtained, and informed consent was waived in this HIPAA-compliant retrospective study. A CT study set of 150 patients (mean age, 73 years; age range, 55-96 years; 92 women, 58 men) with (n = 75) and without (n = 75) compression fractures was assembled. All case patients were age and sex matched with control subjects. A total of 210 thoracic and lumbar vertebrae showed compression fractures and were electronically marked and classified by a radiologist. Prototype fully automated spinal segmentation and fracture detection software was then used to analyze the study set. System performance was evaluated with free-response receiver operating characteristic analysis. Results Sensitivity for detection or localization of compression fractures was 95.7% (201 of 210; 95% confidence interval [CI]: 87.0%, 98.9%), with a false-positive rate of 0.29 per patient. Additionally, sensitivity was 98.7% and specificity was 77.3% at case-based receiver operating characteristic curve analysis. Accuracy for classification by Genant type (anterior, middle, or posterior height loss) was 0.95 (107 of 113; 95% CI: 0.89, 0.98), with weighted κ of 0.90 (95% CI: 0.81, 0.99). Accuracy for categorization by Genant height loss grade was 0.68 (77 of 113; 95% CI: 0.59, 0.76), with a weighted κ of 0.59 (95% CI: 0.47, 0.71). The average bone attenuation for T12-L4 vertebrae was 146 HU ± 29 (standard deviation) in case patients and 173 HU ± 42 in control patients; this difference was statistically significant. Conclusion The system can detect and classify compression fractures with high sensitivity and with a low false-positive rate, as well as calculate vertebral bone density, on CT images. © RSNA, 2017 Online supplemental material is available for this article.

  15. Automation of a high-speed imaging setup for differential viscosity measurements

    Science.gov (United States)

    Hurth, C.; Duane, B.; Whitfield, D.; Smith, S.; Nordquist, A.; Zenhausern, F.

    2013-12-01

    We present the automation of a setup previously used to assess the viscosity of pleural effusion samples and discriminate between transudates and exudates, an important first step in clinical diagnostics. The presented automation includes the design, testing, and characterization of a vacuum-actuated loading station that handles the 2 mm glass spheres used as sensors, as well as the engineering of an electronic printed circuit board (PCB) incorporating a microcontroller and its synchronization with a commercial high-speed camera operating at 10 000 fps. The present work therefore focuses on the instrumentation-related automation efforts, as the general method and clinical application have been reported earlier [Hurth et al., J. Appl. Phys. 110, 034701 (2011)]. In addition, we validate the performance of the automated setup with the calibration for viscosity measurements using water/glycerol standard solutions and the determination of the viscosity of an "unknown" solution of hydroxyethyl cellulose.
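The excerpt does not spell out how the tracked sphere's motion maps to viscosity. As one plausible sketch, Stokes' law relates a sphere's terminal settling velocity to the fluid's dynamic viscosity; this assumes creeping flow, and the analysis in the cited paper may differ.

```python
def stokes_viscosity(radius, rho_sphere, rho_fluid, velocity, g=9.81):
    """Dynamic viscosity (Pa*s) from Stokes' law for a sphere falling at
    terminal velocity v: eta = 2 r^2 (rho_s - rho_f) g / (9 v).
    Valid only at low Reynolds number (creeping flow)."""
    return 2.0 * radius**2 * (rho_sphere - rho_fluid) * g / (9.0 * velocity)

# Round trip: a 2 mm glass sphere (r = 1 mm) in a glycerol-like fluid.
# Material values are illustrative assumptions, not from the paper.
eta_true = 1.0                      # Pa*s
r, rho_s, rho_f = 1e-3, 2500.0, 1200.0
v_terminal = 2.0 * r**2 * (rho_s - rho_f) * 9.81 / (9.0 * eta_true)
eta = stokes_viscosity(r, rho_s, rho_f, v_terminal)
```

The high-speed camera's role is to measure `velocity` precisely enough that differential comparisons between samples are meaningful.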

  16. Automated jitter correction for IR image processing to assess the quality of W7-X high heat flux components

    International Nuclear Information System (INIS)

    Greuner, H; De Marne, P; Herrmann, A; Boeswirth, B; Schindler, T; Smirnow, M

    2009-01-01

    An automated IR image processing method was developed to evaluate the surface temperature distribution of cyclically loaded high heat flux (HHF) plasma facing components. IPP Garching will perform the HHF testing of a high percentage of the series production of the WENDELSTEIN 7-X (W7-X) divertor targets to minimize the number of undiscovered uncertainties in the finally installed components. The HHF tests will be performed as quality assurance (QA) complementary to the non-destructive examination (NDE) methods used during the manufacturing. The IR analysis of an HHF-loaded component detects growing debonding of the plasma facing material, made of carbon fibre composite (CFC), after a few thermal cycles. In the case of the prototype testing, the IR data was processed manually. However, a QA method requires a reliable, reproducible and efficient automated procedure. Using the example of the HHF testing of W7-X pre-series target elements, the paper describes the developed automated IR image processing method. The algorithm is based on an iterative two-step correlation analysis with an individually defined reference pattern for the determination of the jitter.

  17. Optimized and Automated Radiosynthesis of [18F]DHMT for Translational Imaging of Reactive Oxygen Species with Positron Emission Tomography

    Directory of Open Access Journals (Sweden)

    Wenjie Zhang

    2016-12-01

    Full Text Available Reactive oxygen species (ROS) play important roles in cell signaling and homeostasis. However, an abnormally high level of ROS is toxic, and is implicated in a number of diseases. Positron emission tomography (PET) imaging of ROS can assist in the detection of these diseases. For the purpose of clinical translation of [18F]6-(4-((1-(2-fluoroethyl)-1H-1,2,3-triazol-4-yl)methoxy)phenyl)-5-methyl-5,6-dihydrophenanthridine-3,8-diamine ([18F]DHMT), a promising ROS PET radiotracer, we first manually optimized the large-scale radiosynthesis conditions and then implemented them in an automated synthesis module. Our manual synthesis procedure afforded [18F]DHMT in 120 min with overall radiochemical yield (RCY) of 31.6% ± 9.3% (n = 2, decay-uncorrected) and specific activity of 426 ± 272 GBq/µmol (n = 2). Fully automated radiosynthesis of [18F]DHMT was achieved within 77 min with overall isolated RCY of 6.9% ± 2.8% (n = 7, decay-uncorrected) and specific activity of 155 ± 153 GBq/µmol (n = 7) at the end of synthesis. This study is the first demonstration of producing 2-[18F]fluoroethyl azide by an automated module, which can be used for a variety of PET tracers through click chemistry. It is also the first time that [18F]DHMT was successfully tested for PET imaging in a healthy beagle dog.

  18. Simplified automated image analysis for detection and phenotyping of Mycobacterium tuberculosis on porous supports by monitoring growing microcolonies.

    Directory of Open Access Journals (Sweden)

    Alice L den Hertog

    Full Text Available BACKGROUND: Even with the advent of nucleic acid (NA) amplification technologies, the culture of mycobacteria for diagnostic and other applications remains of critical importance. Notably, microscopic observed drug susceptibility testing (MODS), as opposed to traditional culture on solid media or automated liquid culture, has shown potential to both speed up and increase the provision of mycobacterial culture in high burden settings. METHODS: Here we explore the growth of Mycobacterium tuberculosis microcolonies, imaged by automated digital microscopy, cultured on a porous aluminium oxide (PAO) support. Repeated imaging during colony growth greatly simplifies "computer vision", and presumptive identification of microcolonies was achieved here using existing publicly available algorithms. Our system thus allows the growth of individual microcolonies to be monitored and, critically, the media to be changed during the growth phase without disrupting the microcolonies. Transfer of identified microcolonies onto selective media allowed us, within 1-2 bacterial generations, to rapidly detect the drug susceptibility of individual microcolonies, eliminating the need for time-consuming subculturing or the inoculation of multiple parallel cultures. SIGNIFICANCE: Monitoring the phenotype of individual microcolonies as they grow has immense potential for research, screening, and ultimately M. tuberculosis diagnostic applications. The method described is particularly appealing with respect to speed and automation.

  19. Automated Grading of Age-Related Macular Degeneration From Color Fundus Images Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Burlina, Philippe M; Joshi, Neil; Pekala, Michael; Pacheco, Katia D; Freund, David E; Bressler, Neil M

    2017-11-01

    Age-related macular degeneration (AMD) affects millions of people throughout the world. The intermediate stage may go undetected, as it typically is asymptomatic. However, the preferred practice patterns for AMD recommend identifying individuals with this stage of the disease to educate how to monitor for the early detection of the choroidal neovascular stage before substantial vision loss has occurred and to consider dietary supplements that might reduce the risk of the disease progressing from the intermediate to the advanced stage. Identification, though, can be time-intensive and requires expertly trained individuals. To develop methods for automatically detecting AMD from fundus images using a novel application of deep learning methods to the automated assessment of these images and to leverage artificial intelligence advances. Deep convolutional neural networks that are explicitly trained for performing automated AMD grading were compared with an alternate deep learning method that used transfer learning and universal features and with a trained clinical grader. Age-related macular degeneration automated detection was applied to a 2-class classification problem in which the task was to distinguish the disease-free/early stages from the referable intermediate/advanced stages. Using several experiments that entailed different data partitioning, the performance of the machine algorithms and human graders in evaluating over 130 000 images that were deidentified with respect to age, sex, and race/ethnicity from 4613 patients against a gold standard included in the National Institutes of Health Age-related Eye Disease Study data set was evaluated. Accuracy, receiver operating characteristics and area under the curve, and kappa score. The deep convolutional neural network method yielded accuracy (SD) that ranged between 88.4% (0.5%) and 91.6% (0.1%), the area under the receiver operating characteristic curve was between 0.94 and 0.96, and kappa coefficient (SD

  20. A new automated assessment method for contrast–detail images by applying support vector machine and its robustness to nonlinear image processing

    International Nuclear Information System (INIS)

    Takei, Takaaki; Ikeda, Mitsuru; Imai, Kumiharu; Yamauchi-Kawaura, Chiyo; Kato, Katsuhiko; Isoda, Haruo

    2013-01-01

    The automated contrast–detail (C–D) analysis methods developed so far cannot be expected to work well on images processed with nonlinear methods, such as noise reduction methods. Therefore, we have devised a new automated C–D analysis method by applying a support vector machine (SVM) and tested its robustness to nonlinear image processing. We acquired the CDRAD (a commercially available C–D test object) images at a tube voltage of 120 kV and a milliampere-second product (mAs) of 0.5–5.0. A partial differential equation based diffusion technique was used as the noise reduction method. Three radiologists and three university students participated in the observer performance study. The training data for our SVM method were the classification data scored by one radiologist for the CDRAD images acquired at 1.6 and 3.2 mAs and their noise-reduced images. We also compared the performance of our SVM method with the CDRAD Analyser algorithm. The mean C–D diagrams (that is, a plot of the mean of the smallest visible hole diameter vs. hole depth) obtained from our SVM method agreed well with the ones averaged across the six human observers for both original and noise-reduced CDRAD images, whereas the mean C–D diagrams from the CDRAD Analyser algorithm disagreed with the ones from the human observers for both original and noise-reduced CDRAD images. In conclusion, our proposed SVM method for C–D analysis will work well for images processed with the nonlinear noise reduction method as well as for the original radiographic images.

  1. Automated image analysis to quantify the subnuclear organization of transcriptional coregulatory protein complexes in living cell populations

    Science.gov (United States)

    Voss, Ty C.; Demarco, Ignacio A.; Booker, Cynthia F.; Day, Richard N.

    2004-06-01

    Regulated gene transcription is dependent on the steady-state concentration of DNA-binding and coregulatory proteins assembled in distinct regions of the cell nucleus. For example, several different transcriptional coactivator proteins, such as the Glucocorticoid Receptor Interacting Protein (GRIP), localize to distinct spherical intranuclear bodies that vary from approximately 0.2-1 micron in diameter. We are using multi-spectral wide-field microscopy of cells expressing coregulatory proteins labeled with the fluorescent proteins (FP) to study the mechanisms that control the assembly and distribution of these structures in living cells. However, variability between cells in the population makes an unbiased and consistent approach to this image analysis absolutely critical. To address this challenge, we developed a protocol for rigorous quantification of subnuclear organization in cell populations. Cells transiently co-expressing a green FP (GFP)-GRIP and the monomeric red FP (mRFP) are selected for imaging based only on the signal in the red channel, eliminating bias due to knowledge of coregulator organization. The impartially selected images of the GFP-coregulatory protein are then analyzed using an automated algorithm to objectively identify and measure the intranuclear bodies. By integrating all these features, this combination of unbiased image acquisition and automated analysis facilitates the precise and consistent measurement of thousands of protein bodies from hundreds of individual living cells that represent the population.
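The abstract leaves the identification algorithm unnamed; a common way to objectively identify and measure bright intranuclear bodies is thresholding followed by connected-component labeling, sketched here on an illustrative toy nucleus image (not the authors' exact algorithm).

```python
import numpy as np
from collections import deque

def label_bodies(mask):
    """Connected-component labeling (4-connectivity) of a thresholded
    fluorescence image: each discrete bright body receives its own label,
    so bodies can be counted and measured without manual selection."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        current += 1
        queue = deque([(y, x)])
        labels[y, x] = current
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

# Two bright "bodies" on a dark background.
img = np.zeros((20, 20))
img[2:5, 2:5] = 1.0      # body 1: 9 pixels
img[10:16, 10:14] = 1.0  # body 2: 24 pixels
labels, n = label_bodies(img > 0.5)
sizes = np.bincount(labels.ravel())[1:]   # pixel count per body
```

Running such a routine over every unbiasedly selected nucleus is what scales the measurement to thousands of bodies across hundreds of cells.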

  2. Automated multiscale morphometry of muscle disease from second harmonic generation microscopy using tensor-based image processing.

    Science.gov (United States)

    Garbe, Christoph S; Buttgereit, Andreas; Schürmann, Sebastian; Friedrich, Oliver

    2012-01-01

    Practically all chronic diseases are characterized by tissue remodeling that alters organ and cellular function through changes to normal organ architecture. Some morphometric alterations become irreversible and account for disease progression even at the cellular level. Early diagnostics to categorize tissue alterations, as well as monitoring progression or remission of disturbed cytoarchitecture upon treatment in the same individual, are a newly emerging field. They strongly challenge spatial resolution and require advanced imaging techniques and strategies for detecting morphological changes. We use a combined second harmonic generation (SHG) microscopy and automated image processing approach to quantify morphology in an animal model of inherited Duchenne muscular dystrophy (mdx mouse) with age. Multiphoton XYZ image stacks from tissue slices reveal vast morphological deviation in muscles from old mdx mice at different scales of cytoskeleton architecture: cell calibers are irregular, myofibrils within cells are twisted, and sarcomere lattice disruptions (detected as "verniers") are larger in number compared to samples from healthy mice. In young mdx mice, such alterations are only minor. The boundary-tensor approach, adapted and optimized for SHG data, is a suitable approach to allow quick quantitative morphometry in whole tissue slices. The overall detection performance of the automated algorithm compares very well with manual "by eye" detection, the latter being time-consuming and prone to subjective errors. Our algorithm outperforms manual detection in speed with similar reliability. This approach will be an important prerequisite for the implementation of clinical image databases to diagnose and monitor specific morphological alterations in chronic (muscle) diseases. © 2011 IEEE

  3. Automated microscopic characterization of metallic ores with image analysis: a key to improve ore processing. I: test of the methodology

    International Nuclear Information System (INIS)

    Berrezueta, E.; Castroviejo, R.

    2007-01-01

    Ore microscopy has traditionally been an important support to control ore processing, but the volume of present-day processes is beyond the reach of human operators. Automation is therefore compulsory, but its development through digital image analysis, DIA, is limited by various problems, such as the similarity in reflectance values of some important ores, their anisotropism, and the performance of instruments and methods. The results presented show that automated identification and quantification by DIA are possible through multiband (RGB) determinations with a research-grade 3CCD video camera on a reflected-light microscope. These results were obtained by systematic measurement of selected ores accounting for most of the industrial applications. Polarized light is avoided, so the effects of anisotropism can be neglected. Quality control at various stages and statistical analysis are important, as is the application of complementary criteria (e.g. metallogenetic). The sequential methodology is described and shown through practical examples. (Author)
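
    The multiband identification step amounts to comparing each pixel's RGB reflectance against calibrated reference values for the candidate minerals. A minimal nearest-centroid sketch in NumPy; the reference values below are invented stand-ins, not the study's calibrated reflectances:

```python
import numpy as np

# Hypothetical mean RGB reflectance values (0-255) for three ores,
# standing in for the calibrated multiband references in the study.
centroids = {
    "pyrite":       np.array([190, 170,  90], float),
    "chalcopyrite": np.array([200, 180, 110], float),
    "galena":       np.array([170, 170, 170], float),
}
ref = np.stack(list(centroids.values()))           # shape (3, 3)

def classify_pixels(rgb):
    """Assign each pixel to the nearest reference centroid (Euclidean)."""
    flat = rgb.reshape(-1, 3).astype(float)
    d = np.linalg.norm(flat[:, None, :] - ref[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(rgb.shape[:2])

# Tiny 1x3 "image": one pixel near each reference value.
img = np.array([[[188, 172, 92], [201, 178, 108], [168, 172, 171]]],
               dtype=np.uint8)
labels = classify_pixels(img)
```

    A production system would add per-session calibration against reflectance standards, since raw camera RGB drifts with illumination.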

  4. An automated four-point scale scoring of segmental wall motion in echocardiography using quantified parametric images

    International Nuclear Information System (INIS)

    Kachenoura, N; Delouche, A; Ruiz Dominguez, C; Frouin, F; Diebold, B; Nardi, O

    2010-01-01

    The aim of this paper is to develop an automated method which operates on echocardiographic dynamic loops for classifying the left ventricular regional wall motion (RWM) on a four-point scale. A non-selected group of 37 patients (2- and 4-chamber views) was studied. Each view was segmented according to the standardized segmentation using three manually positioned anatomical landmarks (the apex and the angles of the mitral annulus). The segmented data were analyzed by two independent experienced echocardiographers, and the consensual RWM scores were used as a reference for comparisons. A fast and automatic parametric imaging method was used to compute and display as static color-coded parametric images both temporal and motion information contained in left ventricular dynamic echocardiograms. The amplitude and time parametric images were provided to a cardiologist for visual analysis of RWM and used for RWM quantification. A cross-validation method was applied to the segmental quantitative indices for classifying RWM on a four-point scale. A total of 518 segments were analyzed. Comparison between visual interpretation of parametric images and the reference reading resulted in an absolute agreement (Aa) of 66%, a relative agreement (Ra) of 96%, and a kappa (κ) coefficient of 0.61. Comparison of the automated RWM scoring against the same reference provided Aa = 64%, Ra = 96% and κ = 0.64 on the validation subset. Finally, linear regression analysis between the global quantitative index and global reference scores as well as ejection fraction resulted in correlations of 0.85 and 0.79. A new automated four-point scale scoring of RWM was developed and tested in a non-selected database. Its comparison against a consensual visual reading of dynamic echocardiograms showed its ability to classify RWM abnormalities.
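
    The agreement figures reported here (absolute agreement and Cohen's κ) can be reproduced for any pair of score vectors. A small self-contained sketch; the ten example scores are invented, and the unweighted κ shown is the simplest variant (a weighted κ is often preferred for ordinal scales):

```python
import numpy as np

def agreement_stats(scores_a, scores_b, n_classes=4):
    """Absolute agreement and Cohen's kappa for two raters scoring
    segments on a four-point scale coded 0..3."""
    cm = np.zeros((n_classes, n_classes))
    for i, j in zip(scores_a, scores_b):
        cm[i, j] += 1                            # rater confusion matrix
    n = cm.sum()
    p_o = np.trace(cm) / n                       # observed agreement
    p_e = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, kappa

# Example: automated scores vs. consensus reading for 10 segments.
auto      = [0, 0, 1, 2, 3, 0, 1, 1, 2, 0]
consensus = [0, 0, 1, 2, 3, 0, 1, 2, 2, 0]
p_o, kappa = agreement_stats(auto, consensus)
```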

  5. Automated image-based colon cleansing for laxative-free CT colonography computer-aided polyp detection

    International Nuclear Information System (INIS)

    Linguraru, Marius George; Panjwani, Neil; Fletcher, Joel G.; Summer, Ronald M.

    2011-01-01

    Purpose: To evaluate the performance of a computer-aided detection (CAD) system for detecting colonic polyps at noncathartic computed tomography colonography (CTC) in conjunction with an automated image-based colon cleansing algorithm. Methods: An automated colon cleansing algorithm was designed to detect and subtract tagged-stool, accounting for heterogeneity and poor tagging, to be used in conjunction with a colon CAD system. The method is locally adaptive and combines intensity, shape, and texture analysis with probabilistic optimization. CTC data from cathartic-free bowel preparation were acquired for testing and training the parameters. Patients underwent various colonic preparations with barium or Gastroview in divided doses over 48 h before scanning. No laxatives were administered and no dietary modifications were required. Cases were selected from a polyp-enriched cohort and included scans in which at least 90% of the solid stool was visually estimated to be tagged and each colonic segment was distended in either the prone or supine view. The CAD system was run comparatively with and without the stool subtraction algorithm. Results: The dataset comprised 38 CTC scans from prone and/or supine scans of 19 patients containing 44 polyps larger than 10 mm (22 unique polyps, if matched between prone and supine scans). The results are robust on fine details around folds, thin-stool linings on the colonic wall, near polyps and in large fluid/stool pools. The sensitivity of the CAD system is 70.5% per polyp at a rate of 5.75 false positives/scan without using the stool subtraction module. This detection improved significantly (p = 0.009) after automated colon cleansing on cathartic-free data to 86.4% true positive rate at 5.75 false positives/scan. Conclusions: An automated image-based colon cleansing algorithm designed to overcome the challenges of the noncathartic colon significantly improves the sensitivity of colon CAD by approximately 15%.

  6. Semi-automated image analysis for the assessment of megafaunal densities at the Arctic deep-sea observatory HAUSGARTEN.

    Science.gov (United States)

    Schoening, Timm; Bergmann, Melanie; Ontrup, Jörg; Taylor, James; Dannheim, Jennifer; Gutt, Julian; Purser, Autun; Nattkemper, Tim W

    2012-01-01

    Megafauna play an important role in benthic ecosystem function and are sensitive indicators of environmental change. Non-invasive monitoring of benthic communities can be accomplished by seafloor imaging. However, manual quantification of megafauna in images is labor-intensive and therefore, this organism size class is often neglected in ecosystem studies. Automated image analysis has been proposed as a possible approach to such analysis, but the heterogeneity of megafaunal communities poses a non-trivial challenge for such automated techniques. Here, the potential of a generalized object detection architecture, referred to as iSIS (intelligent Screening of underwater Image Sequences), for the quantification of a heterogenous group of megafauna taxa is investigated. The iSIS system is tuned for a particular image sequence (i.e. a transect) using a small subset of the images, in which megafauna taxa positions were previously marked by an expert. To investigate the potential of iSIS and compare its results with those obtained from human experts, a group of eight different taxa from one camera transect of seafloor images taken at the Arctic deep-sea observatory HAUSGARTEN is used. The results show that inter- and intra-observer agreements of human experts exhibit considerable variation between the species, with a similar degree of variation apparent in the automatically derived results obtained by iSIS. Whilst some taxa (e. g. Bathycrinus stalks, Kolga hyalina, small white sea anemone) were well detected by iSIS (i. e. overall Sensitivity: 87%, overall Positive Predictive Value: 67%), some taxa such as the small sea cucumber Elpidia heckeri remain challenging, for both human observers and iSIS.

  7. Semi-automated image analysis for the assessment of megafaunal densities at the Arctic deep-sea observatory HAUSGARTEN.

    Directory of Open Access Journals (Sweden)

    Timm Schoening

    Full Text Available Megafauna play an important role in benthic ecosystem function and are sensitive indicators of environmental change. Non-invasive monitoring of benthic communities can be accomplished by seafloor imaging. However, manual quantification of megafauna in images is labor-intensive and therefore, this organism size class is often neglected in ecosystem studies. Automated image analysis has been proposed as a possible approach to such analysis, but the heterogeneity of megafaunal communities poses a non-trivial challenge for such automated techniques. Here, the potential of a generalized object detection architecture, referred to as iSIS (intelligent Screening of underwater Image Sequences), for the quantification of a heterogenous group of megafauna taxa is investigated. The iSIS system is tuned for a particular image sequence (i.e. a transect) using a small subset of the images, in which megafauna taxa positions were previously marked by an expert. To investigate the potential of iSIS and compare its results with those obtained from human experts, a group of eight different taxa from one camera transect of seafloor images taken at the Arctic deep-sea observatory HAUSGARTEN is used. The results show that inter- and intra-observer agreements of human experts exhibit considerable variation between the species, with a similar degree of variation apparent in the automatically derived results obtained by iSIS. Whilst some taxa (e. g. Bathycrinus stalks, Kolga hyalina, small white sea anemone) were well detected by iSIS (i. e. overall Sensitivity: 87%, overall Positive Predictive Value: 67%), some taxa such as the small sea cucumber Elpidia heckeri remain challenging, for both human observers and iSIS.

  8. A quality assurance framework for the fully automated and objective evaluation of image quality in cone-beam computed tomography.

    Science.gov (United States)

    Steiding, Christian; Kolditz, Daniel; Kalender, Willi A

    2014-03-01

    Thousands of cone-beam computed tomography (CBCT) scanners for vascular, maxillofacial, neurological, and body imaging are in clinical use today, but there is no consensus on uniform acceptance and constancy testing for image quality (IQ) and dose yet. The authors developed a quality assurance (QA) framework for fully automated and time-efficient performance evaluation of these systems. In addition, the dependence of objective Fourier-based IQ metrics on direction and position in 3D volumes was investigated for CBCT. The authors designed a dedicated QA phantom 10 cm in length consisting of five compartments, each with a diameter of 10 cm, and an optional extension ring 16 cm in diameter. A homogeneous section of water-equivalent material allows measuring CT value accuracy, image noise and uniformity, and multidimensional global and local noise power spectra (NPS). For the quantitative determination of 3D high-contrast spatial resolution, the modulation transfer function (MTF) of centrally and peripherally positioned aluminum spheres was computed from edge profiles. Additional in-plane and axial resolution patterns were used to assess resolution qualitatively. The characterization of low-contrast detectability as well as CT value linearity and artifact behavior was tested by utilizing sections with soft-tissue-equivalent and metallic inserts. For an automated QA procedure, a phantom detection algorithm was implemented. All tests used in the dedicated QA program were initially verified in simulation studies and experimentally confirmed on a clinical dental CBCT system. The automated IQ evaluation of volume data sets of the dental CBCT system was achieved with the proposed phantom requiring only one scan for the determination of all desired parameters. Typically, less than 5 min were needed for phantom set-up, scanning, and data analysis. Quantitative evaluation of system performance over time by comparison to previous examinations was also verified. The maximum
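
    The MTF-from-edge-profile computation described above follows a standard recipe: differentiate the edge spread function (ESF) to obtain the line spread function (LSF), then take the normalised Fourier magnitude. A sketch on a synthetic edge blurred by a Gaussian system response (the blur sigma is an assumed value, not a measured one):

```python
import numpy as np
from math import erf

# Edge spread function: an ideal step edge blurred by a Gaussian PSF
# (sigma in pixels is an illustrative assumption).
x = np.arange(256)
sigma = 2.0
esf = np.array([0.5 * (1 + erf((xi - 128) / (sigma * np.sqrt(2))))
                for xi in x])

# Line spread function = derivative of the ESF.
lsf = np.gradient(esf)

# MTF = magnitude of the Fourier transform of the LSF, normalised at DC.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(len(lsf))   # spatial frequency in cycles/pixel
```

    With a real sphere or edge phantom, the ESF is first resampled onto a fine grid from many oversampled edge crossings before differentiation, which suppresses noise amplification in the derivative step.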

  9. A quality assurance framework for the fully automated and objective evaluation of image quality in cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Steiding, Christian; Kolditz, Daniel; Kalender, Willi A., E-mail: willi.kalender@imp.uni-erlangen.de [Institute of Medical Physics, University of Erlangen-Nürnberg, Henkestraße 91, 91052 Erlangen, Germany and CT Imaging GmbH, 91052 Erlangen (Germany)

    2014-03-15

    Purpose: Thousands of cone-beam computed tomography (CBCT) scanners for vascular, maxillofacial, neurological, and body imaging are in clinical use today, but there is no consensus on uniform acceptance and constancy testing for image quality (IQ) and dose yet. The authors developed a quality assurance (QA) framework for fully automated and time-efficient performance evaluation of these systems. In addition, the dependence of objective Fourier-based IQ metrics on direction and position in 3D volumes was investigated for CBCT. Methods: The authors designed a dedicated QA phantom 10 cm in length consisting of five compartments, each with a diameter of 10 cm, and an optional extension ring 16 cm in diameter. A homogeneous section of water-equivalent material allows measuring CT value accuracy, image noise and uniformity, and multidimensional global and local noise power spectra (NPS). For the quantitative determination of 3D high-contrast spatial resolution, the modulation transfer function (MTF) of centrally and peripherally positioned aluminum spheres was computed from edge profiles. Additional in-plane and axial resolution patterns were used to assess resolution qualitatively. The characterization of low-contrast detectability as well as CT value linearity and artifact behavior was tested by utilizing sections with soft-tissue-equivalent and metallic inserts. For an automated QA procedure, a phantom detection algorithm was implemented. All tests used in the dedicated QA program were initially verified in simulation studies and experimentally confirmed on a clinical dental CBCT system. Results: The automated IQ evaluation of volume data sets of the dental CBCT system was achieved with the proposed phantom requiring only one scan for the determination of all desired parameters. Typically, less than 5 min were needed for phantom set-up, scanning, and data analysis. 
Quantitative evaluation of system performance over time by comparison to previous examinations was also

  10. Toward the virtual cell: Automated approaches to building models of subcellular organization “learned” from microscopy images

    Science.gov (United States)

    Buck, Taráz E.; Li, Jieyue; Rohde, Gustavo K.; Murphy, Robert F.

    2012-01-01

    We review state-of-the-art computational methods for constructing, from image data, generative statistical models of cellular and nuclear shapes and the arrangement of subcellular structures and proteins within them. These automated approaches allow consistent analysis of images of cells for the purposes of learning the range of possible phenotypes, discriminating between them, and informing further investigation. Such models can also provide realistic geometry and initial protein locations to simulations in order to better understand cellular and subcellular processes. To determine the structures of cellular components and how proteins and other molecules are distributed among them, the generative modeling approach described here can be coupled with high throughput imaging technology to infer and represent subcellular organization from data with few a priori assumptions. We also discuss potential improvements to these methods and future directions for research. PMID:22777818

  11. Automated segmentation and isolation of touching cell nuclei in cytopathology smear images of pleural effusion using distance transform watershed method

    Science.gov (United States)

    Win, Khin Yadanar; Choomchuay, Somsak; Hamamoto, Kazuhiko

    2017-06-01

    The automated segmentation of cell nuclei is an essential stage in the quantitative image analysis of cell nuclei extracted from smear cytology images of pleural fluid. Cell nuclei can indicate cancer, as their characteristics are associated with cell proliferation and malignancy in terms of size, shape, and stained color. Nevertheless, automatic nuclei segmentation has remained challenging due to the artifacts caused by slide preparation and to nucleus heterogeneity, such as poor contrast, inconsistent stained color, cell variation, and overlapping cells. In this paper, we propose a watershed-based method capable of segmenting the nuclei of a variety of cells from cytology pleural fluid smear images. First, the original image is preprocessed by converting it to grayscale and enhancing contrast with histogram equalization. Next, the cell nuclei are segmented into a binary image using Otsu thresholding. Undesirable artifacts are eliminated using morphological operations. Finally, the distance-transform-based watershed method is applied to isolate the touching and overlapping cell nuclei. The proposed method is tested with 25 Papanicolaou (Pap)-stained pleural fluid images. The accuracy of our proposed method is 92%. The method is relatively simple, and the results are very promising.
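
    The distance-transform watershed step can be sketched with SciPy alone. The synthetic mask below (two overlapping disks standing in for touching nuclei) and the marker-detection parameters are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic binary mask: two overlapping disks, i.e. "touching nuclei".
yy, xx = np.mgrid[0:80, 0:120]
binary = (((yy - 40) ** 2 + (xx - 40) ** 2) <= 400) | \
         (((yy - 40) ** 2 + (xx - 75) ** 2) <= 400)

# Plain connected-component labelling sees only one blob.
_, n_touching = ndi.label(binary)

# Distance-transform peaks provide one seed marker per nucleus.
dist = ndi.distance_transform_edt(binary)
peaks = (dist == ndi.maximum_filter(dist, size=25)) & (dist > 5)
markers, _ = ndi.label(peaks)
markers[~binary] = -1                      # background seed

# Watershed on the inverted distance map splits the touching objects.
cost = (255 * (1 - dist / dist.max())).astype(np.uint8)
labels = ndi.watershed_ift(cost, markers.astype(np.int16))
labels[labels == -1] = 0                   # drop the background label
n_split = int(labels.max())
```

    scikit-image's `segmentation.watershed` with `feature.peak_local_max` is the more common implementation of the same idea; the SciPy-only version is shown here to stay self-contained.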

  12. Automated grading of left ventricular segmental wall motion by an artificial neural network using color kinesis images

    Directory of Open Access Journals (Sweden)

    L.O. Murta Jr.

    2006-01-01

    Full Text Available The present study describes an auxiliary tool in the diagnosis of left ventricular (LV) segmental wall motion (WM) abnormalities based on color-coded echocardiographic WM images. An artificial neural network (ANN) was developed and validated for grading LV segmental WM using data from color kinesis (CK) images, a technique developed to display the timing and magnitude of global and regional WM in real time. We evaluated 21 normal subjects and 20 patients with LVWM abnormalities revealed by two-dimensional echocardiography. CK images were obtained in two sets of viewing planes. A method was developed to analyze CK images, providing quantitation of fractional area change in each of the 16 LV segments. Two experienced observers analyzed LVWM from two-dimensional images and scored them as: (1) normal, (2) mild hypokinesia, (3) moderate hypokinesia, (4) severe hypokinesia, (5) akinesia, and (6) dyskinesia. Based on expert analysis of 10 normal subjects and 10 patients, we trained a multilayer perceptron ANN using a back-propagation algorithm to provide automated grading of LVWM, and this ANN was then tested in the remaining subjects. Excellent concordance between expert and ANN analysis was shown by ROC curve analysis, with a measured area under the curve of 0.975. An excellent correlation was also obtained for the global LV segmental WM index by expert and ANN analysis (R² = 0.99). In conclusion, the ANN showed high accuracy for automated semi-quantitative grading of WM based on CK images. This technique can be an important aid, improving diagnostic accuracy and reducing inter-observer variability in scoring segmental LVWM.

  13. Automated segmentation of geographic atrophy of the retinal epithelium via random forests in AREDS color fundus images.

    Science.gov (United States)

    Feeny, Albert K; Tadarati, Mongkol; Freund, David E; Bressler, Neil M; Burlina, Philippe

    2015-10-01

    Age-related macular degeneration (AMD), left untreated, is the leading cause of vision loss in people older than 55. Severe central vision loss occurs in the advanced stage of the disease, characterized either by the ingrowth of choroidal neovascularization (CNV), termed the "wet" form, or by geographic atrophy (GA) of the retinal pigment epithelium (RPE) involving the center of the macula, termed the "dry" form. Tracking the change in GA area over time is important since it allows for the characterization of the effectiveness of GA treatments. Tracking GA evolution can be achieved by physicians performing manual delineation of GA area on retinal fundus images. However, manual GA delineation is time-consuming and subject to inter- and intra-observer variability. We have developed a fully automated GA segmentation algorithm for color fundus images that uses a supervised machine learning approach employing a random forest classifier. This algorithm is developed and tested using a dataset of images from the NIH-sponsored Age-Related Eye Disease Study (AREDS). GA segmentation output was compared against a manual delineation by a retina specialist. Using 143 color fundus images from 55 different patient eyes, our algorithm achieved a PPV of 0.82±0.19 and an NPV of 0.95±0.07. This is the first study, to our knowledge, applying machine learning methods to GA segmentation on color fundus images and using AREDS imagery for testing. These preliminary results show promising evidence that machine learning methods may have utility in automated characterization of GA from color fundus images. Copyright © 2015 Elsevier Ltd. All rights reserved.
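
    A pixel-wise random-forest classifier evaluated with the reported metrics (PPV, NPV) can be sketched with scikit-learn. The three-channel synthetic features below are stand-ins for the colour/texture descriptors actually extracted from fundus images:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic per-pixel features: 3 channel intensities per pixel,
# with atrophic pixels (label 1) modelled as brighter on average.
n = 2000
y = rng.integers(0, 2, n)
X = rng.normal(loc=y[:, None] * 1.5, scale=1.0, size=(n, 3))

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[:1500], y[:1500])
pred = clf.predict(X[1500:])
truth = y[1500:]

# Confusion counts and the predictive values reported in the study.
tp = np.sum((pred == 1) & (truth == 1))
fp = np.sum((pred == 1) & (truth == 0))
tn = np.sum((pred == 0) & (truth == 0))
fn = np.sum((pred == 0) & (truth == 1))
ppv = tp / (tp + fp)   # positive predictive value
npv = tn / (tn + fn)   # negative predictive value
```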

  14. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    Science.gov (United States)

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
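
    The orchestration pattern described (a Python driver that logs every step and runs the compiled modules in parallel) can be sketched with the standard library alone. The three stage functions are trivial stand-ins for the FARSIGHT module wrappers, not the real API:

```python
import logging
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("pipeline")

# Hypothetical stand-ins for the C++ module wrappers the real
# driver script would call (mosaicking, segmentation, features).
def preprocess(name):  return f"{name}:pre"
def segment(data):     return f"{data}:seg"
def extract(data):     return len(data)          # fake feature count

def process_image(name):
    """Run the full per-image pipeline, logging each step."""
    log.info("start %s", name)
    result = extract(segment(preprocess(name)))
    log.info("done %s", name)
    return name, result

images = [f"channel_{i}.tif" for i in range(5)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(process_image, images))
```

    For 250 GB datasets the real script dispatches to server processes rather than threads, but the structure (mapping a logged per-image function over a worker pool) is the same.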

  15. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python

    Directory of Open Access Journals (Sweden)

    Nicolas eRey-Villamizar

    2014-04-01

    Full Text Available In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels of 6,000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analytics for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1 TB of RAM, running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between compute and storage servers, logs all processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third-party libraries.

  16. CometQ: An automated tool for the detection and quantification of DNA damage using comet assay image analysis.

    Science.gov (United States)

    Ganapathy, Sreelatha; Muraleedharan, Aparna; Sathidevi, Puthumangalathu Savithri; Chand, Parkash; Rajkumar, Ravi Philip

    2016-09-01

    DNA damage analysis plays an important role in determining the approaches for treatment and prevention of various diseases like cancer, schizophrenia and other heritable diseases. Comet assay is a sensitive and versatile method for DNA damage analysis. The main objective of this work is to implement a fully automated tool for the detection and quantification of DNA damage by analysing comet assay images. The comet assay image analysis consists of four stages: (1) classifier, (2) comet segmentation, (3) comet partitioning and (4) comet quantification. Main features of the proposed software are the design and development of four comet segmentation methods, and the automatic routing of the input comet assay image to the most suitable one among these methods depending on the type of the image (silver stained or fluorescent stained) as well as the level of DNA damage (heavily damaged or lightly/moderately damaged). A classifier stage, based on a support vector machine (SVM), is designed and implemented at the front end to categorise the input image into one of the above four groups to ensure proper routing. Comet segmentation is followed by comet partitioning, which is implemented using a novel technique coined modified fuzzy clustering. Comet parameters are calculated in the comet quantification stage and are saved in an Excel file. Our dataset consists of 600 silver stained images obtained from 40 schizophrenia patients with different levels of severity, admitted to a tertiary hospital in South India, and 56 fluorescent stained images obtained from different internet sources. The performance of "CometQ", the proposed standalone application for automated analysis of comet assay images, is evaluated by a clinical expert and is also compared with that of a recent related software package, OpenComet. CometQ gave 90.26% positive predictive value (PPV) and 93.34% sensitivity, which are much higher than those of OpenComet, especially in the case of silver stained images. The

  17. Colorimetric focus-forming assay with automated focus counting by image analysis for quantification of infectious hepatitis C virions.

    Directory of Open Access Journals (Sweden)

    Wonseok Kang

    Full Text Available Hepatitis C virus (HCV infection is the leading cause of liver transplantation in Western countries. Studies of HCV infection using cell culture-produced HCV (HCVcc in vitro systems require quantification of infectious HCV virions, which has conventionally been performed by immunofluorescence-based focus-forming assay with manual foci counting; however, this is a laborious and time-consuming procedure with potentially biased results. In the present study, we established and optimized a method for convenient and objective quantification of HCV virions by colorimetric focus-forming assay with automated focus counting by image analysis. In testing different enzymes and chromogenic substrates, we obtained superior foci development using alkaline phosphatase-conjugated secondary antibody with BCIP/NBT chromogenic substrate. We additionally found that type I collagen coating minimized cell detachment during vigorous washing of the assay plate. After the colorimetric focus-forming assay, the foci number was determined using an ELISpot reader and image analysis software. The foci number and the calculated viral titer determined by this method strongly correlated with those determined by immunofluorescence-based focus-forming assay and manual foci counting. These results indicate that colorimetric focus-forming assay with automated focus counting by image analysis is applicable as a more-efficient and objective method for quantification of infectious HCV virions.
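
    Automated focus counting of the kind performed by the ELISpot reader reduces to thresholding plus connected-component analysis with a minimum-size filter. A SciPy sketch on a synthetic well image; the intensity threshold and size cutoff are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic well image: dark BCIP/NBT-stained foci on a light background.
img = np.full((60, 60), 200, np.uint8)
img[10:16, 10:16] = 40      # focus 1
img[30:38, 40:48] = 50      # focus 2
img[50:52, 5:7] = 45        # debris: below the minimum-size cutoff

# Threshold: developed foci are darker than the background.
binary = img < 120

# Connected-component labelling, then discard specks smaller than
# an assumed minimum focus area of 10 pixels.
labels, n = ndi.label(binary)
sizes = ndi.sum(binary, labels, index=np.arange(1, n + 1))
n_foci = int(np.sum(sizes >= 10))
```

    In practice the threshold would be derived per plate (e.g. by Otsu's method) rather than fixed, since staining intensity varies between assays.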

  18. Automated identification of abnormal metaphase chromosome cells for the detection of chronic myeloid leukemia using microscopic images

    Science.gov (United States)

    Wang, Xingwei; Zheng, Bin; Li, Shibo; Mulvihill, John J.; Chen, Xiaodong; Liu, Hong

    2010-07-01

    Karyotyping is an important process to classify chromosomes into standard classes, and the results are routinely used by clinicians to diagnose cancers and genetic diseases. However, visual karyotyping using microscopic images is time-consuming and tedious, which reduces diagnostic efficiency and accuracy. Although many efforts have been made to develop computerized schemes for automated karyotyping, no scheme can yet operate without substantial human intervention. Instead of developing a method to classify all chromosome classes, we developed an automatic scheme to detect abnormal metaphase cells by identifying a specific class of chromosomes (class 22) and prescreening for suspicious chronic myeloid leukemia (CML). The scheme includes three steps: (1) iteratively segment randomly distributed individual chromosomes, (2) process segmented chromosomes and compute image features to identify the candidates, and (3) apply an adaptive matching template to identify chromosomes of class 22. An image data set of 451 metaphase cells extracted from bone marrow specimens of 30 positive and 30 negative cases for CML was selected to test the scheme's performance. The overall case-based classification accuracy is 93.3% (100% sensitivity and 86.7% specificity). The results demonstrate the feasibility of applying an automated scheme to detect or prescreen suspicious cancer cases.

  19. Different approaches to synovial membrane volume determination by magnetic resonance imaging: manual versus automated segmentation

    DEFF Research Database (Denmark)

    Østergaard, Mikkel

    1997-01-01

    methodology for volume determinations (maximal error 6.3%). Preceded by the determination of reproducibility and the optimal threshold at the available MR unit, automated 'threshold' segmentation appears to be acceptable when changes rather than absolute values of synovial membrane volumes are most important......, osteoarthritis (OA) 16] and 17 RA wrists were examined. At enhancement thresholds between 30 and 60%, the automated volumes (Syn(x%)) were highly significantly correlated to manual volumes (SynMan) (knees: rho = 0.78-0.91, P 6)). The absolute...

  20. Study of geologic-structural situation around Semipalatinsk test site test-holes using space images automated decoding method

    International Nuclear Information System (INIS)

    Gorbunova, Eh.M.; Ivanchenko, G.N.

    2004-01-01

    The performance of underground nuclear explosions (UNE) leads to irreversible changes in the geological environment around the boreholes. Under natural conditions, inhomogeneous changes in the state of the rock massif were detected, depending on the characteristics of the underground nuclear explosion, the anisotropy of the medium, and the presence of faulting. Application of automated selection and statistical analysis of unstretched lineaments in high-resolution space images using the special software package LESSA allows specifying the geologic-structural features of the Semipalatinsk Test Site (STS), ranking the selected fracture zones, and outlining and analyzing surface deformations of the post-explosion zones. (author)

  1. Application of automated image analysis reduces the workload of manual screening of sentinel lymph node biopsies in breast cancer

    DEFF Research Database (Denmark)

    Holten-Rossing, Henrik; Talman, Maj-Lis Møller; Jylling, Anne Marie Bak

    2017-01-01

    AIMS: Breast cancer is one of the most common cancer diseases in women, with >1.67 million cases being diagnosed worldwide each year. In breast cancer, the sentinel lymph node (SLN) pinpoints the first lymph node(s) into which the tumour spreads, and it is usually located in the ipsilateral axill...... tool for selecting those slides that a pathologist does not need to see. The implementation of automated digital image analysis of SLNBs in breast cancer would decrease the workload in this context for examining pathologists by almost 60%....

  2. A new automated method for analysis of gated-SPECT images based on a three-dimensional heart shaped model

    DEFF Research Database (Denmark)

    Lomsky, Milan; Richter, Jens; Johansson, Lena

    2005-01-01

    A new automated method for quantification of left ventricular function from gated-single photon emission computed tomography (SPECT) images has been developed. The method for quantification of cardiac function (CAFU) is based on a heart shaped model and the active shape algorithm. The model....... In the patient group the EDV calculated using QGS and CAFU showed good agreement for large hearts and higher CAFU values compared with QGS for the smaller hearts. In the larger hearts, ESV was much larger for QGS than for CAFU both in the phantom and patient studies. In the smallest hearts there was good...

  3. Intra-patient semi-automated segmentation of the cervix-uterus in CT-images for adaptive radiotherapy of cervical cancer

    NARCIS (Netherlands)

    L. Bondar (Luiza); M.S. Hoogeman (Mischa); W. Schillemans; B.J.M. Heijmen (Ben)

    2013-01-01

    textabstractFor online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and

  4. Immunohistochemical Ki-67/KL1 double stains increase accuracy of Ki-67 indices in breast cancer and simplify automated image analysis

    DEFF Research Database (Denmark)

    Nielsen, Patricia S; Bentzer, Nina K; Jensen, Vibeke

    2014-01-01

    observers and automated image analysis. RESULTS: Indices were predominantly higher for single stains than double stains (P≤0.002), yet the difference between observers was statistically significant (Pmanual and automated indices ranged from 0...... by digital image analysis. This study aims to detect the difference in accuracy and precision between manual indices of single and double stains, to develop an automated quantification of double stains, and to explore the relation between automated indices and tumor characteristics when quantified...... in different regions: hot spots, global tumor areas, and invasive fronts. MATERIALS AND METHODS: Paraffin-embedded, formalin-fixed tissue from 100 consecutive patients with invasive breast cancer was immunohistochemically stained for Ki-67 and Ki-67/KL1. Ki-67 was manually scored in different regions by 2...

  5. SU-E-T-497: Semi-Automated in Vivo Radiochromic Film Dosimetry Using a Novel Image Processing Algorithm

    International Nuclear Information System (INIS)

    Reyhan, M; Yue, N

    2014-01-01

    Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5 x 1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was determined by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2-886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (5.5 cGy, -6.1 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical differences between the two techniques, p=0.98. Linear regression with a forced zero intercept demonstrated that Automatic=0.997*Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand-drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in MATLAB on an Intel Core 2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help minimize

  6. SU-E-T-497: Semi-Automated in Vivo Radiochromic Film Dosimetry Using a Novel Image Processing Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Reyhan, M; Yue, N [Rutgers University, New Brunswick, NJ (United States)

    2014-06-01

    Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5 x 1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was determined by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2-886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (5.5 cGy, -6.1 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical differences between the two techniques, p=0.98. Linear regression with a forced zero intercept demonstrated that Automatic=0.997*Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand-drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in MATLAB on an Intel Core 2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help
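
    The Bland-Altman statistics quoted above (bias and 95% limits of the automatic-minus-manual differences) can be reproduced in a few lines. The dose values below are made up for illustration; only the formula matches the analysis described.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement: bias (mean difference) and 95% limits
    of agreement (bias ± 1.96 * sample SD of the differences)."""
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Toy doses (cGy): automatic vs. manual ROI readings (illustrative values only).
auto = [10.2, 55.1, 120.4, 300.9, 885.8]
manual = [10.5, 54.8, 120.9, 301.2, 886.6]
bias, lower, upper = bland_altman(auto, manual)
```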

  7. Automated Leaf Tracking using Multi-view Image Sequences of Maize Plants for Leaf-growth Monitoring

    Science.gov (United States)

    Das Choudhury, S.; Awada, T.; Samal, A.; Stoerger, V.; Bashyam, S.

    2017-12-01

    Extraction of phenotypes with botanical importance by analyzing plant image sequences has the desirable advantages of non-destructive temporal phenotypic measurement of a large number of plants with little or no manual intervention in a relatively short period of time. The health of a plant is best interpreted by the emergence timing and temporal growth of individual leaves. For automated leaf growth monitoring, it is essential to track each leaf throughout the life cycle of the plant. Plants are constantly changing organisms with increasing architectural complexity due to variations in self-occlusion and phyllotaxy, i.e., the arrangement of leaves around the stem. Leaf cross-overs make it difficult to track each leaf accurately using a single-view image sequence. Thus, we introduce a novel automated leaf tracking algorithm using a graph-theoretic approach to multi-view image sequence analysis, based on the determination of leaf-tips and leaf-junctions in 3D space. The leaf tracking algorithm rests on two observations: in a maize plant the leaves emerge bottom-up, and the direction of leaf emergence strictly alternates. The algorithm involves labeling the individual parts of a plant, i.e., leaves and stem, following a graphical representation of the plant skeleton, i.e., the one-pixel-wide connected line obtained from the binary image. The length of a leaf is measured by the number of pixels in its skeleton. To evaluate the performance of the algorithm, a benchmark dataset is indispensable. Thus, we publicly release the University of Nebraska-Lincoln Component Plant Phenotyping dataset-2 (UNL-CPPD-2), consisting of images of 20 maize plants captured by the visible-light camera of the Lemnatec Scanalyzer 3D high-throughput plant phenotyping facility once daily for 60 days from 10 different views. The dataset is intended to facilitate the development and evaluation of leaf tracking algorithms and their uniform comparison.
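
    The leaf-tips and leaf-junctions that anchor the tracking graph can be found from the one-pixel-wide skeleton by counting each skeleton pixel's neighbours. The sketch below is an illustrative assumption, not the paper's published code, and uses 4-connectivity for simplicity where a production skeleton analysis would typically use 8-connectivity.

```python
def classify_skeleton_points(skeleton):
    """Classify pixels of a 1-pixel-wide skeleton (0/1 grid) into leaf tips
    (exactly one neighbour) and junctions (three or more neighbours).
    4-connectivity is used here to keep the toy example unambiguous."""
    rows, cols = len(skeleton), len(skeleton[0])
    tips, junctions = [], []
    for r in range(rows):
        for c in range(cols):
            if not skeleton[r][c]:
                continue
            n = sum(
                skeleton[r + dr][c + dc]
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            if n == 1:
                tips.append((r, c))
            elif n >= 3:
                junctions.append((r, c))
    return tips, junctions

# A stem (column 2) with one side leaf branching off at row 2.
skel = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
tips, junctions = classify_skeleton_points(skel)
```

The tips and junctions become graph nodes; matching them across views and days is where the tracking logic proper begins.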

  8. Automated wholeslide analysis of multiplex-brightfield IHC images for cancer cells and carcinoma-associated fibroblasts

    Science.gov (United States)

    Lorsakul, Auranuch; Andersson, Emilia; Vega Harring, Suzana; Sade, Hadassah; Grimm, Oliver; Bredno, Joerg

    2017-03-01

    Multiplex-brightfield immunohistochemistry (IHC) staining and quantitative measurement of multiple biomarkers can support therapeutic targeting of carcinoma-associated fibroblasts (CAF). This paper presents an automated digital pathology solution to simultaneously analyze multiple biomarker expressions within a single tissue section stained with an IHC duplex assay. Our method was verified against ground truth provided by expert pathologists. In the first stage, the automated method quantified epithelial-carcinoma cells expressing cytokeratin (CK) using robust nucleus detection and supervised cell-by-cell classification algorithms with a combination of nucleus and contextual features. Using fibroblast activation protein (FAP) as a biomarker for CAFs, the algorithm was trained, based on ground truth obtained from pathologists, to automatically identify tumor-associated stroma using a supervised-generation rule. The algorithm reported distance to nearest neighbor in the populations of tumor cells and activated-stromal fibroblasts as a whole-slide measure of spatial relationships. A total of 45 slides from six indications (breast, pancreatic, colorectal, lung, ovarian, and head-and-neck cancers) were included for training and verification. CK-positive cells detected by the algorithm were verified by a pathologist with good agreement (R2=0.98) to the ground-truth count. For the area occupied by FAP-positive cells, the inter-observer agreement between two sets of ground-truth measurements was R2=0.93, whereas the algorithm reproduced the pathologists' areas with R2=0.96. The proposed methodology enables automated image analysis to measure spatial relationships of cells stained in an IHC-multiplex assay. Our proof-of-concept results show an automated algorithm can be trained to reproduce the expert assessment and provide quantitative readouts that potentially support a cutoff determination in hypothesis testing related to CAF-targeting-therapy decisions.

  9. Automated daily breath hold stability measurements by real-time imaging in radiotherapy of breast cancer

    NARCIS (Netherlands)

    De Boer, Hans C J; Van Den Bongard, Desirée J G; van Asselen, B

    2016-01-01

    Background and purpose Breath hold is increasingly used for cardiac sparing in left-sided breast cancer irradiation. We have developed a fast automated method to verify breath hold stability in each treatment fraction. Material and methods We evaluated 504 patients treated with breath hold. Moderate

  10. Preliminary Full-Scale Tests of the Center for Automated Processing of Hardwoods' Auto-Image

    Science.gov (United States)

    Philip A. Araman; Janice K. Wiedenbeck

    1995-01-01

    Automated lumber grading and yield optimization using computer controlled saws will be plausible for hardwoods if and when lumber scanning systems can reliably identify all defects by type. Existing computer programs could then be used to grade the lumber, identify the best cut-up solution, and control the sawing machines. The potential value of a scanning grading...

  11. An Automated Approach to Extracting River Bank Locations from Aerial Imagery Using Image Texture

    Science.gov (United States)

    2015-11-04

    consuming and labour intensive, and the quality is dependent on the individual doing the task. This paper describes a quick and fully automated method for...generally considered to be supervised classification techniques in that they require the active input of a trained analyst to define the characteristics of

  12. Crowdsourcing image annotation for nucleus detection and segmentation in computational pathology: evaluating experts, automated methods, and the crowd.

    Science.gov (United States)

    Irshad, H; Montaser-Kouhsari, L; Waltz, G; Bucur, O; Nowak, J A; Dong, F; Knoblauch, N W; Beck, A H

    2015-01-01

    The development of tools in computational pathology to assist physicians and biomedical scientists in the diagnosis of disease requires access to high-quality annotated images for algorithm learning and evaluation. Generating high-quality expert-derived annotations is time-consuming and expensive. We explore the use of crowdsourcing for rapidly obtaining annotations for two core tasks in computational pathology: nucleus detection and nucleus segmentation. We designed and implemented crowdsourcing experiments using the CrowdFlower platform, which provides access to a large set of labor channel partners that accesses and manages millions of contributors worldwide. We obtained annotations from four types of annotators and compared concordance across these groups. We obtained: crowdsourced annotations for nucleus detection and segmentation on a total of 810 images; annotations using automated methods on 810 images; annotations from research fellows for detection and segmentation on 477 and 455 images, respectively; and expert pathologist-derived annotations for detection and segmentation on 80 and 63 images, respectively. For the crowdsourced annotations, we evaluated performance across a range of contributor skill levels (1, 2, or 3). The crowdsourced annotations (4,860 images in total) were completed in only a fraction of the time and cost required for obtaining annotations using traditional methods. For the nucleus detection task, the research fellow-derived annotations showed the strongest concordance with the expert pathologist-derived annotations (F-M = 93.68%), followed by the crowdsourced contributor levels 1, 2, and 3 and the automated method, which showed relatively similar performance (F-M = 87.84%, 88.49%, 87.26%, and 86.99%, respectively). For the nucleus segmentation task, the crowdsourced contributor level 3-derived annotations, research fellow-derived annotations, and automated method showed the strongest concordance with the expert pathologist
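
    The F-measure used above to score concordance between annotation sources is the harmonic mean of precision and recall over matched detections. A minimal sketch with made-up counts (not the study's data):

```python
def f_measure(tp, fp, fn):
    """Concordance between two sets of nucleus annotations, summarized as the
    F-measure: harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn))."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 90 nuclei matched to the reference, 5 extra, 8 missed.
score = f_measure(90, 5, 8)
```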

  13. Automated computer quantification of breast cancer in small-animal models using PET-guided MR image co-segmentation.

    Science.gov (United States)

    Bagci, Ulas; Kramer-Marek, Gabriela; Mollura, Daniel J

    2013-07-05

    -Affibody-PET 2 days after the scheduled structural imaging (MRI and CT). After CT and MR images were co-registered with the corresponding PET images, all images were quantitatively analyzed by the proposed segmentation technique. Automatically determined anatomical tumor volumes were compared to radiologist-derived reference truths. Observer agreements were presented through Bland-Altman and linear regression analyses. Segmentation evaluations were conducted using true-positive (TP) and false-positive (FP) volume fractions of delineated tissue samples, in compliance with state-of-the-art evaluation techniques for image segmentation. Moreover, the PET images, obtained using different radiotracers, were examined and compared using the complex wavelet-based structural similarity index (CWSSI). PET/MR dual-modality imaging using the 18F-Z HER2-Affibody imaging agent provided diagnostic image quality in all mice, with excellent tumor delineations by the proposed method. The 18F-FDG radiotracer did not show accurate identification of the tumor regions. The structural similarity index (CWSSI) between PET images using the 18F-FDG and 18F-Z HER2-Affibody agents was found to be 0.7838. MR showed higher diagnostic image quality than CT because of its better soft-tissue contrast. Significant correlations regarding the anatomical tumor volumes were obtained between PET-guided MRI co-segmentation and the reference truth (R2=0.92, p......); ...process well in the anatomical image domain for extracting accurate tumor volume information. Furthermore, the use of the 18F-FDG radiotracer was not as successful as the 18F-Z HER2-Affibody in guiding the delineation process, owing to false-positive uptake regions in the neighborhood of tumor regions; hence, the accuracy of the fully automated segmentation method changed dramatically. Last, we qualitatively showed that MRI yields superior identification of tumor boundaries when compared to conventional

  14. AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images

    Science.gov (United States)

    Price Tack, Jennifer L.; West, Brian S.; McGowan, Conor P.; Ditchkoff, Stephen S.; Reeves, Stanley J.; Keever, Allison; Grand, James B.

    2017-01-01

    Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower in development, despite their potential to drastically reduce personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images contained within the subset of images (i.e. photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results with manual-only review with white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied depending on a threshold value that affects program sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time, but even at the highest threshold value, the program reduced the images requiring manual review by ~40% and correctly identified >90% of deer, raccoon, and wild pig images. Estimates of white-tailed deer were similar between AnimalFinder and the manual-only method (~1–2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase efficiency of camera trapping surveys.
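
    AnimalFinder's MATLAB implementation compares each photo against the other images from the same survey and site; one common way to sketch that idea is a per-pixel median background model with a change-fraction threshold. The function name and default values below are illustrative assumptions, not AnimalFinder's actual parameters, but the `deviation`/`min_fraction` pair mirrors the sensitivity trade-off its threshold value controls.

```python
import numpy as np

def flag_animal_frames(frames, deviation=30, min_fraction=0.01):
    """Flag time-lapse frames that deviate from a per-pixel median background.

    frames: sequence of equal-sized grayscale images from one site/survey.
    A frame is flagged when more than `min_fraction` of its pixels differ from
    the median background by more than `deviation` grey levels.
    """
    stack = np.asarray(frames, dtype=float)
    background = np.median(stack, axis=0)       # animals are transient, so the
    changed = np.abs(stack - background) > deviation  # median approximates empty scene
    return changed.mean(axis=(1, 2)) > min_fraction

# Five empty frames plus one with a bright 3x3 "animal" patch.
frames = np.full((5, 10, 10), 100.0)
frames[2, 4:7, 4:7] = 200.0
flags = flag_animal_frames(frames)
```

Raising `min_fraction` reduces false positives (less manual review) at the cost of missing small or distant animals, the same trade-off reported for the program's threshold.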

  15. Automated segmentation of thyroid gland on CT images with multi-atlas label fusion and random classification forest

    Science.gov (United States)

    Liu, Jiamin; Chang, Kevin; Kim, Lauren; Turkbey, Evrim; Lu, Le; Yao, Jianhua; Summers, Ronald

    2015-03-01

    The thyroid gland plays an important role in clinical practice, especially for radiation therapy treatment planning. For patients with head and neck cancer, radiation therapy requires a precise delineation of the thyroid gland to be spared on the pre-treatment planning CT images to avoid thyroid dysfunction. In the current clinical workflow, the thyroid gland is normally manually delineated by radiologists or radiation oncologists, which is time consuming and error prone. Therefore, a system for automated segmentation of the thyroid is desirable. However, automated segmentation of the thyroid is challenging because the thyroid is inhomogeneous and surrounded by structures that have similar intensities. In this work, the thyroid gland segmentation is initially estimated by multi-atlas label fusion algorithm. The segmentation is refined by supervised statistical learning based voxel labeling with a random forest algorithm. Multiatlas label fusion (MALF) transfers expert-labeled thyroids from atlases to a target image using deformable registration. Errors produced by label transfer are reduced by label fusion that combines the results produced by all atlases into a consensus solution. Then, random forest (RF) employs an ensemble of decision trees that are trained on labeled thyroids to recognize features. The trained forest classifier is then applied to the thyroid estimated from the MALF by voxel scanning to assign the class-conditional probability. Voxels from the expert-labeled thyroids in CT volumes are treated as positive classes; background non-thyroid voxels as negatives. We applied this automated thyroid segmentation system to CT scans of 20 patients. The results showed that the MALF achieved an overall 0.75 Dice Similarity Coefficient (DSC) and the RF classification further improved the DSC to 0.81.
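
    Majority voting is the simplest consensus rule for multi-atlas label fusion, and the Dice Similarity Coefficient is the agreement measure reported above. A sketch under the assumption of a plain unweighted vote (the paper's MALF may weight atlases differently):

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse binary segmentations propagated from several registered atlases
    by per-voxel majority vote (ties resolved toward background)."""
    stack = np.asarray(label_maps)
    return (2 * stack.sum(axis=0) > stack.shape[0]).astype(np.uint8)

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Three toy "atlas" segmentations of the same 3x3 region.
atlases = [
    [[1, 1, 0], [0, 1, 0], [0, 0, 0]],
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    [[1, 1, 0], [0, 0, 0], [0, 0, 1]],
]
fused = majority_vote_fusion(atlases)
```

In the paper's pipeline the fused mask is then refined voxel-by-voxel by the trained random forest classifier.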

  16. Automated cortical bone segmentation for multirow-detector CT imaging with validation and application to human studies.

    Science.gov (United States)

    Li, Cheng; Jin, Dakai; Chen, Cheng; Letuchy, Elena M; Janz, Kathleen F; Burns, Trudy L; Torner, James C; Levy, Steven M; Saha, Punam K

    2015-08-01

    Cortical bone supports and protects human skeletal functions and plays an important role in determining bone strength and fracture risk. Cortical bone segmentation at a peripheral site using multirow-detector CT (MD-CT) imaging is useful for in vivo assessment of bone strength and fracture risk. Major challenges for the task emerge from limited spatial resolution, low signal-to-noise ratio, presence of cortical pores, and structural complexity over the transition between trabecular and cortical bones. An automated algorithm for cortical bone segmentation at the distal tibia from in vivo MD-CT imaging is presented and its performance and application are examined. The algorithm is completed in two major steps: (1) bone filling, alignment, and region-of-interest computation and (2) segmentation of cortical bone. After the first step, the following sequence of tasks is performed to accomplish cortical bone segmentation: (1) detection of marrow space and possible pores, (2) computation of cortical bone thickness, detection of recession points, and confirmation and filling of true pores, and (3) detection of the endosteal boundary and delineation of cortical bone. Effective generalizations of several digital topologic and geometric techniques are introduced and a fully automated algorithm is presented for cortical bone segmentation. An accuracy of 95.1% in terms of volume of agreement with manual outlining of cortical bone was observed in human MD-CT scans, while an accuracy of 88.5% was achieved when compared with manual outlining on post-registered high-resolution micro-CT imaging. An intraclass correlation coefficient of 0.98 was obtained in cadaveric repeat scans. A pilot study was conducted to describe gender differences in cortical bone properties. This study involved 51 female and 46 male participants (age: 19-20 yr) from the Iowa Bone Development Study.
Results from this pilot study suggest that, on average after adjustment for height and weight differences, males have

  17. A simple viability analysis for unicellular cyanobacteria using a new autofluorescence assay, automated microscopy, and ImageJ

    Directory of Open Access Journals (Sweden)

    Schulze Katja

    2011-11-01

    Background: Currently established methods to identify viable and non-viable cells of cyanobacteria are either time-consuming (e.g., plating) or preparation-intensive (e.g., fluorescent staining). In this paper we present a new and fast viability assay for unicellular cyanobacteria, which uses red chlorophyll fluorescence and an unspecific green autofluorescence for the differentiation of viable and non-viable cells without the need for sample preparation. Results: The viability assay for unicellular cyanobacteria using red and green autofluorescence was established and validated for the model organism Synechocystis sp. PCC 6803. Both autofluorescence signals could be observed simultaneously, allowing a direct classification of viable and non-viable cells. The results were confirmed by plating/colony counts, absorption spectra and chlorophyll measurements. The use of an automated fluorescence microscope and a novel ImageJ-based image analysis plugin allows a semi-automated analysis. Conclusions: The new method simplifies the process of viability analysis and allows a quick and accurate analysis. Furthermore, the results indicate that a combination of the new assay with absorption spectra or chlorophyll concentration measurements allows the estimation of the vitality of cells.
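
    The red/green classification rule can be sketched as a simple intensity-ratio test per segmented cell. The threshold and function name below are illustrative assumptions, not the published calibration for Synechocystis sp. PCC 6803.

```python
def classify_viability(cells, ratio_threshold=1.0):
    """Classify cells as viable when red chlorophyll fluorescence dominates
    the unspecific green autofluorescence.

    cells: iterable of (mean_red_intensity, mean_green_intensity) per cell.
    """
    return [
        "viable" if red > ratio_threshold * green else "non-viable"
        for red, green in cells
    ]

# Three segmented cells with (red, green) mean intensities (toy values).
labels = classify_viability([(850, 120), (90, 400), (500, 480)])
```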

  18. SU-C-207B-04: Automated Segmentation of Pectoral Muscle in MR Images of Dense Breasts

    Energy Technology Data Exchange (ETDEWEB)

    Verburg, E; Waard, SN de; Veldhuis, WB; Gils, CH van; Gilhuijs, KGA [University Medical Center Utrecht, Utrecht (Netherlands)

    2016-06-15

    Purpose: To develop and evaluate a fully automated method for segmentation of the pectoral muscle boundary in Magnetic Resonance Imaging (MRI) of dense breasts. Methods: Segmentation of the pectoral muscle is an important part of automatic breast image analysis methods. Current methods for segmenting the pectoral muscle in breast MRI have difficulties delineating the muscle border correctly in breasts with a large proportion of fibroglandular tissue (i.e., dense breasts). Hence, an automated method based on dynamic programming was developed, incorporating heuristics aimed at shape, location and gradient features.To assess the method, the pectoral muscle was segmented in 91 randomly selected participants (mean age 56.6 years, range 49.5–75.2 years) from a large MRI screening trial in women with dense breasts (ACR BI-RADS category 4). Each MR dataset consisted of 178 or 179 T1-weighted images with voxel size 0.64 × 0.64 × 1.00 mm3. All images (n=16,287) were reviewed and scored by a radiologist. In contrast to volume overlap coefficients, such as DICE, the radiologist detected deviations in the segmented muscle border and determined whether the result would impact the ability to accurately determine the volume of fibroglandular tissue and detection of breast lesions. Results: According to the radiologist’s scores, 95.5% of the slices did not mask breast tissue in such way that it could affect detection of breast lesions or volume measurements. In 13.1% of the slices a deviation in the segmented muscle border was present which would not impact breast lesion detection. In 70 datasets (78%) at least 95% of the slices were segmented in such a way it would not affect detection of breast lesions, and in 60 (66%) datasets this was 100%. Conclusion: Dynamic programming with dedicated heuristics shows promising potential to segment the pectoral muscle in women with dense breasts.
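
    Dynamic programming delineation of a boundary such as the pectoral muscle border amounts to finding a minimum-cost connected path through a cost image (low cost where the voxels look border-like). Below is a generic seam-style sketch without the paper's shape, location and gradient heuristics; the cost matrix is a toy.

```python
import numpy as np

def trace_boundary(cost):
    """Minimum-cost connected path across the columns of a cost image,
    moving at most one row per column step (classic dynamic programming).
    Returns the chosen row index for each column."""
    cost = np.asarray(cost, dtype=float)
    rows, cols = cost.shape
    acc = cost.copy()
    # Forward pass: accumulate the cheapest way to reach each pixel.
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(r - 1, 0), min(r + 2, rows)
            acc[r, c] += acc[lo:hi, c - 1].min()
    # Backtrack from the cheapest endpoint in the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 2, -1, -1):
        r = path[-1]
        lo, hi = max(r - 1, 0), min(r + 2, rows)
        path.append(lo + int(np.argmin(acc[lo:hi, c])))
    path.reverse()
    return path

# Toy cost image: the cheap (value 1) pixels mark the true boundary.
cost = [
    [9, 9, 9, 9],
    [1, 9, 9, 9],
    [9, 1, 1, 9],
    [9, 9, 9, 1],
]
path = trace_boundary(cost)
```

The paper's heuristics would enter through the cost definition (gradient strength, expected shape and location), not through the path search itself.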

  19. Comparison of the automated evaluation of phantom mama in digital and digitalized images; Comparacao da avaliacao automatizada do phantom mama em imagens digitais e digitalizadas

    Energy Technology Data Exchange (ETDEWEB)

    Santana, Priscila do Carmo, E-mail: pcs@cdtn.b [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear. Programa de Pos-Graduacao em Ciencias e Tecnicas Nucleares; Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Fac. de Medicina. Dept. de Propedeutica Complementar; Gomes, Danielle Soares; Oliveira, Marcio Alves; Nogueira, Maria do Socorro, E-mail: mnogue@cdtn.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2011-07-01

    Mammography is an essential tool for the diagnosis and early detection of breast cancer, provided it is delivered as a very good quality service. The process of evaluating the quality of radiographic images in general, and of mammography in particular, can be much more accurate, practical and fast with the help of computer analysis tools. This work compares an automated methodology for the evaluation of digital and digitized images of the phantom mama. By applying digital image processing (DIP) techniques, it was possible to determine the geometrical and radiometric parameters of the evaluated images. The evaluated parameters include circular details of low contrast, contrast ratio, spatial resolution, tumor masses, optical density and background in scanned and digitized images of the Phantom Mama. The results for both types of images were compared. Through this comparison it was possible to demonstrate that this automated methodology is a promising alternative for the reduction or elimination of subjectivity in both types of images, although the Phantom Mama presents insufficient parameters for spatial resolution evaluation. (author)

  20. Contrast-enhanced magnetic resonance angiography in carotid artery disease: does automated image registration improve image quality?

    International Nuclear Information System (INIS)

    Menke, Jan; Larsen, Joerg

    2009-01-01

    Contrast-enhanced magnetic resonance angiography (MRA) is a noninvasive imaging alternative to digital subtraction angiography (DSA) for patients with carotid artery disease. In DSA, image quality can be improved by shifting the mask image if the patient has moved during angiography. This study investigated whether such image registration may also help to improve the image quality of carotid MRA. Data from 370 carotid MRA examinations of patients likely to have carotid artery disease were prospectively collected. The standard nonregistered MRAs were compared to automatically linear, affine and warp registered MRA by using three image quality parameters: the vessel detection probability (VDP) in maximum intensity projection (MIP) images, contrast-to-noise ratio (CNR) in MIP images, and contrast-to-noise ratio in three-dimensional image volumes. A body shift of less than 1 mm occurred in 96.2% of cases. Analysis of variance revealed no significant influence of image registration and body shift on image quality (p > 0.05). In conclusion, standard contrast-enhanced carotid MRA usually requires no image registration to improve image quality and is generally robust against any naturally occurring body shift. (orig.)

  1. NeuroSeg: automated cell detection and segmentation for in vivo two-photon Ca2+ imaging data.

    Science.gov (United States)

    Guan, Jiangheng; Li, Jingcheng; Liang, Shanshan; Li, Ruijie; Li, Xingyi; Shi, Xiaozhe; Huang, Ciyu; Zhang, Jianxiong; Pan, Junxia; Jia, Hongbo; Zhang, Le; Chen, Xiaowei; Liao, Xiang

    2018-01-01

    Two-photon Ca2+ imaging has become a popular approach for monitoring neuronal population activity with cellular or subcellular resolution in vivo. This approach allows for the recording of hundreds to thousands of neurons per animal and thus leads to a large amount of data to be processed. In particular, manually drawing regions of interest is the most time-consuming aspect of data analysis. However, the development of automated image analysis pipelines, which will be essential for dealing with the likely future deluge of imaging data, remains a major challenge. To address this issue, we developed NeuroSeg, an open-source MATLAB program that can facilitate the accurate and efficient segmentation of neurons in two-photon Ca2+ imaging data. We proposed an approach using a generalized Laplacian of Gaussian filter to detect cells and weighting-based segmentation to separate individual cells from the background. We tested this approach on an in vivo two-photon Ca2+ imaging dataset obtained from mouse cortical neurons with fields of view of different sizes. We show that this approach exhibits superior performance for cell detection and segmentation compared with existing published tools. In addition, we integrated the previously reported activity-based segmentation into our approach and found that this combined method was even more promising. The NeuroSeg software, including source code and graphical user interface, is freely available and will be a useful tool for in vivo brain activity mapping.
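    The Laplacian-of-Gaussian idea behind the cell-detection step can be illustrated in one dimension: convolving a signal with a LoG kernel produces a strong negative response at the centre of a bright blob whose size matches the kernel's sigma. This standalone sketch is not NeuroSeg code (NeuroSeg is a MATLAB program); it only demonstrates the filter's behaviour on an invented signal.

```python
import math

def log_kernel(sigma, radius):
    """1D Laplacian-of-Gaussian kernel: second derivative of a Gaussian."""
    return [((x * x) / sigma**4 - 1.0 / sigma**2)
            * math.exp(-(x * x) / (2.0 * sigma**2))
            for x in range(-radius, radius + 1)]

def log_response(signal, sigma, radius):
    """Valid-mode convolution; bright blobs give strong negative minima."""
    k = log_kernel(sigma, radius)
    return {i: sum(k[j + radius] * signal[i + j]
                   for j in range(-radius, radius + 1))
            for i in range(radius, len(signal) - radius)}

# A Gaussian-shaped "cell" centred at index 20 of a synthetic 1D trace
signal = [math.exp(-((i - 20) ** 2) / (2.0 * 2.0 ** 2)) for i in range(41)]
resp = log_response(signal, sigma=2.0, radius=6)
center = min(resp, key=resp.get)   # most negative response marks the blob
```

In a 2D image the same kernel is applied in both directions and local minima of the response become cell-centre candidates.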

  2. A Marker-Based Approach for the Automated Selection of a Single Segmentation from a Hierarchical Set of Image Segmentations

    Science.gov (United States)

    Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.

    2012-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces a hierarchical set of image segmentations at its output. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracies are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.

  3. Automated choroid segmentation of three-dimensional SD-OCT images by incorporating EDI-OCT images.

    Science.gov (United States)

    Chen, Qiang; Niu, Sijie; Fang, Wangyi; Shuai, Yuanlu; Fan, Wen; Yuan, Songtao; Liu, Qinghuai

    2018-05-01

    Choroidal volume is more closely related to eye disease than choroidal thickness, because volume reflects the state of the choroid more comprehensively. The purpose of this work is to automatically segment the choroid in three-dimensional (3D) spectral domain optical coherence tomography (SD-OCT) images. We present a novel choroid segmentation strategy for SD-OCT images that incorporates enhanced depth imaging OCT (EDI-OCT) images. The lower boundary of the choroid, namely the choroid-sclera junction (CSJ), is almost invisible in SD-OCT images, while visible in EDI-OCT images. During SD-OCT imaging, EDI-OCT images can be generated for the same eye. Thus, we present an EDI-OCT-driven choroid segmentation method for SD-OCT images, where the choroid segmentation results of the EDI-OCT images are used to estimate the average choroidal thickness and to improve the construction of the CSJ feature space of the SD-OCT images. We also present a registration method between EDI-OCT and SD-OCT images based on retinal thickness and Bruch's membrane (BM) position. The CSJ surface is obtained with a 3D graph search in the CSJ feature space. Experimental results with 768 images (6 cubes, 128 B-scan images per cube) from 2 healthy persons, 2 age-related macular degeneration (AMD) and 2 diabetic retinopathy (DR) patients, and 210 B-scan images from another 8 healthy persons and 21 patients demonstrate that our method can achieve high segmentation accuracy. The mean choroid volume difference and overlap ratio for the 6 cubes between our proposed method and outlines drawn by experts were -1.96µm3 and 88.56%, respectively. Our method is effective for 3D choroid segmentation of SD-OCT images because its segmentation accuracy and stability are comparable to manual segmentation. Copyright © 2017. Published by Elsevier B.V.

  4. Different approaches to synovial membrane volume determination by magnetic resonance imaging: manual versus automated segmentation

    DEFF Research Database (Denmark)

    Østergaard, Mikkel

    1997-01-01

    Automated fast (5-20 min) synovial membrane volume determination by MRI, based on pre-set post-gadolinium-DTPA enhancement thresholds, was evaluated as a substitute for a time-consuming (45-120 min), previously validated, manual segmentation method. Twenty-nine knees [rheumatoid arthritis (RA) 13...... values of the automated estimates were extremely dependent on the threshold chosen. At the optimal threshold of 45%, the median numerical difference from SynMan was 7 ml (17%) in knees and 2 ml (25%) in wrists. At this threshold, the difference was not related to diagnosis, clinical inflammation...... or synovial membrane volume, e.g. no systematic errors were found. The inter-MRI variation, evaluated in three knees and three wrists, was higher than by manual segmentation, particularly due to sensitivity to malalignment artefacts. Examination of test objects proved the high accuracy of the general...

  5. Automated classification of inflammation in colon histological sections based on digital microscopy and advanced image analysis.

    Science.gov (United States)

    Ficsor, Levente; Varga, Viktor Sebestyén; Tagscherer, Attila; Tulassay, Zsolt; Molnar, Bela

    2008-03-01

    Automated and quantitative histological analysis can improve diagnostic efficacy in colon sections. Our objective was to develop a parameter set for automated classification of aspecific colitis, ulcerative colitis, and Crohn's disease using digital slides, tissue cytometric parameters, and virtual microscopy. Routinely processed hematoxylin-and-eosin-stained histological sections from specimens that showed normal mucosa (24 cases), aspecific colitis (11 cases), ulcerative colitis (25 cases), and Crohn's disease (9 cases) diagnosed by conventional optical microscopy were scanned and digitized in high resolution (0.24 µm/pixel). Thirty-eight cytometric parameters based on morphometry were determined on cells, glands, and superficial epithelium. Fourteen tissue cytometric parameters based on ratios of tissue compartments were counted as well. Leave-one-out discriminant analysis was used for classification of the sample groups. Cellular morphometric features showed no significant differences in these benign colon alterations. However, gland-related morphological differences (Gland Shape) between normal mucosa, ulcerative colitis, and aspecific colitis were found, and tissue cytometric parameters showed significant differences as well; the most discriminative parameters were the ratio of cell numbers in glands and in the whole slide, and the biopsy/gland surface ratio. These differences resulted in 88% overall accuracy in the classification. Crohn's disease could be discriminated in only 56% of cases. Automated virtual microscopy can be used to classify colon mucosa as normal, ulcerative colitis, and aspecific colitis with reasonable accuracy. Further development of dedicated parameters is necessary to identify Crohn's disease on digital slides. Copyright 2008 International Society for Analytical Cytology.
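    The leave-one-out scheme used for classification can be sketched generically: each sample is held out in turn, a classifier is fitted on the remaining samples, and the held-out sample is scored. The nearest-centroid rule below is a simple stand-in for the study's discriminant analysis, and the two-feature values are invented for illustration.

```python
def nearest_centroid_loo(samples):
    """samples: list of (feature_vector, label) pairs. Returns the
    leave-one-out accuracy of a nearest-centroid classifier (a simple
    stand-in for discriminant analysis)."""
    correct = 0
    for i, (x, label) in enumerate(samples):
        train = samples[:i] + samples[i + 1:]
        # per-class mean vectors computed on the training fold only
        groups = {}
        for fv, lab in train:
            groups.setdefault(lab, []).append(fv)
        centroids = {lab: [sum(col) / len(col) for col in zip(*fvs)]
                     for lab, fvs in groups.items()}
        pred = min(centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(x, centroids[lab])))
        if pred == label:
            correct += 1
    return correct / len(samples)

# Two well-separated synthetic clusters of "gland shape" features
data = [([0.10, 0.20], "normal"), ([0.20, 0.10], "normal"),
        ([0.15, 0.15], "normal"), ([0.90, 0.80], "colitis"),
        ([0.80, 0.90], "colitis"), ([0.85, 0.85], "colitis")]
acc = nearest_centroid_loo(data)
```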

  6. A robust computational solution for automated quantification of a specific binding ratio based on [123I]FP-CIT SPECT images

    International Nuclear Information System (INIS)

    Oliveira, F. P. M.; Tavares, J. M. R. S.; Borges, Faria D.; Campos, Costa D.

    2014-01-01

    The purpose of the current paper is to present a computational solution to accurately quantify the specific to non-specific uptake ratio in [123I]FP-CIT single photon emission computed tomography (SPECT) images and simultaneously measure the spatial dimensions of the basal ganglia, also known as basal nuclei. A statistical analysis based on a reference dataset selected by the user is also automatically performed. The quantification of the specific to non-specific uptake ratio here is based on regions of interest defined after the registration of the image under study with a template image. The computational solution was tested on a dataset of 38 [123I]FP-CIT SPECT images: 28 images were from patients with Parkinson's disease and the remainder from normal subjects, and the results of the automated quantification were compared to the ones obtained by three well-known semi-automated quantification methods. The results revealed a high correlation coefficient between the developed automated method and the three semi-automated methods used for comparison (r ≥ 0.975). The solution also showed good robustness against different positions of the patient, as an almost perfect agreement between the specific to non-specific uptake ratios was found (ICC = 1.000). The mean processing time was around 6 seconds per study using a common notebook PC. The solution developed can be useful for clinicians to evaluate [123I]FP-CIT SPECT images due to its accuracy, robustness and speed. Also, the comparison between case studies and the follow-up of patients can be done more accurately and proficiently, since the intra- and inter-observer variability of the semi-automated calculation does not exist in automated solutions. The dimensions of the basal ganglia and their automatic comparison with the values of the population selected as reference are also important for professionals in this area.
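    The specific to non-specific uptake ratio for [123I]FP-CIT studies is conventionally computed as (mean striatal counts minus mean background counts) divided by mean background counts. A minimal sketch of that arithmetic follows; the ROI voxel values are invented and this is not the authors' pipeline.

```python
def specific_binding_ratio(striatal_counts, background_counts):
    """SBR = (mean specific uptake - mean non-specific uptake)
             / mean non-specific uptake,
    computed from lists of ROI voxel counts."""
    mean_striatal = sum(striatal_counts) / len(striatal_counts)
    mean_background = sum(background_counts) / len(background_counts)
    return (mean_striatal - mean_background) / mean_background

# Illustrative voxel counts for a striatal ROI and a reference (occipital) ROI
sbr = specific_binding_ratio([120, 130, 110], [40, 35, 45])
```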

  7. The image quality and lesion characterization of breast using automated whole-breast ultrasound: A comparison with handheld ultrasound

    Energy Technology Data Exchange (ETDEWEB)

    An, Yeong Yi [Department of Radiology, St. Vincent' s Hospital, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Kim, Sung Hun, E-mail: rad-ksh@catholic.ac.kr [Department of Radiology, Seoul St. Mary' s Hospital, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Kang, Bong Joo [Department of Radiology, Seoul St. Mary' s Hospital, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of)

    2015-07-15

    Highlights: • The image quality of AWUS was comparable to that of HHUS for lesion characterization. • In only 0.5%, the poor quality of AWUS images inhibited precise interpretations. • HHUS was superior to AWUS in the analysis of peripherally located, irregular, non-circumscribed, or BI-RADS category 4 or 5 lesions. - Abstract: Objective: To prospectively evaluate the image quality of automated whole-breast ultrasonography (AWUS) in the characterization of breast lesions compared with handheld breast ultrasonography (HHUS). Materials and methods: This prospective study included a total of 411 lesions in 209 women. All patients underwent both HHUS and AWUS prior to biopsy. An evaluation of identical image pairs of the 411 lesions obtained from both modalities was performed, and the image quality of AWUS was compared with that of HHUS as a reference standard. The overall image quality was evaluated for lesion coverage, lesion conspicuity, and artifact effect using a graded score. Additionally, the factors that correlated with differences in image quality between the two modalities were analyzed. Results: In 97.1%, the image quality of AWUS was identical or superior to that of HHUS, whereas AWUS was inferior in 2.9%. In only 0.5%, the poor quality of AWUS images, caused by incomplete lesion coverage and shadowing due to a contact artifact, inhibited precise interpretations. The two main causes of degraded AWUS image quality were blurring of the margin (83.3%) and acoustic shadowing by Cooper's ligament or improper compression pressure of the transducer (66.7%). Among various factors, peripheral location from the nipple (p = 0.01), lesion size (p = 0.02), shape descriptor (p = 0.02), and final American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) category (p = 0.001) were correlated with differences in image quality between AWUS and HHUS. Conclusion: Although the image quality of AWUS was comparable to that of HHUS for

  8. Development and application of an automated analysis method for individual cerebral perfusion single photon emission tomography images

    International Nuclear Information System (INIS)

    Cluckie, Alice Jane

    2001-01-01

    Neurological images may be analysed by performing voxel-by-voxel comparisons with a group of control subject images. An automated, 3D, voxel-based method has been developed for the analysis of individual single photon emission tomography (SPET) scans. Clusters of voxels are identified that represent regions of abnormal radiopharmaceutical uptake. Morphological operators are applied to reduce noise in the clusters, then quantitative estimates of the size and degree of the radiopharmaceutical uptake abnormalities are derived. Statistical inference has been performed using a Monte Carlo method that has not previously been applied to SPET scans, or for the analysis of individual images. This has been validated for group comparisons of SPET scans and for the analysis of an individual image using comparison with a group. Accurate statistical inference was obtained independent of experimental factors such as degrees of freedom, image smoothing and voxel significance level threshold. The analysis method has been evaluated for application to cerebral perfusion SPET imaging in ischaemic stroke. It has been shown that useful quantitative estimates, high sensitivity and high specificity may be obtained. Sensitivity and the accuracy of signal quantification were found to be dependent on the operator-defined analysis parameters. Recommendations for the values of these parameters have been made. The analysis method developed has been compared with an established method and shown to result in higher specificity for the data and analysis parameter sets tested. In addition, application to a group of ischaemic stroke patient SPET scans has demonstrated its clinical utility. The influence of imaging conditions has been assessed using phantom data acquired with different gamma camera SPET acquisition parameters. A lower limit of five million counts and standardisation of all acquisition parameters have been recommended for the analysis of individual SPET scans. (author)

  9. Automated classification of brain tumor type in whole-slide digital pathology images using local representative tiles.

    Science.gov (United States)

    Barker, Jocelyn; Hoogi, Assaf; Depeursinge, Adrien; Rubin, Daniel L

    2016-05-01

    Computerized analysis of digital pathology images offers the potential of improving clinical care (e.g. automated diagnosis) and catalyzing research (e.g. discovering disease subtypes). There are two key challenges thwarting computerized analysis of digital pathology images: first, whole slide pathology images are massive, making computerized analysis inefficient, and second, diverse tissue regions in whole slide images that are not directly relevant to the disease may mislead computerized diagnosis algorithms. We propose a method to overcome both of these challenges that utilizes a coarse-to-fine analysis of the localized characteristics in pathology images. An initial surveying stage analyzes the diversity of coarse regions in the whole slide image. This includes extraction of spatially localized features of shape, color and texture from tiled regions covering the slide. Dimensionality reduction of the features assesses the image diversity in the tiled regions and clustering creates representative groups. A second stage provides a detailed analysis of a single representative tile from each group. An Elastic Net classifier produces a diagnostic decision value for each representative tile. A weighted voting scheme aggregates the decision values from these tiles to obtain a diagnosis at the whole slide level. We evaluated our method by automatically classifying 302 brain cancer cases into two possible diagnoses (glioblastoma multiforme (N = 182) versus lower grade glioma (N = 120)) with an accuracy of 93.1% (p < 0.001). We also evaluated our method in the dataset provided for the 2014 MICCAI Pathology Classification Challenge, in which our method, trained and tested using 5-fold cross validation, produced a classification accuracy of 100% (p < 0.001). Our method showed high stability and robustness to parameter variation, with accuracy varying between 95.5% and 100% when evaluated for a wide range of parameters. Our approach may be useful to automatically
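    The final step of the tile-based pipeline, aggregating per-tile decision values into a slide-level diagnosis by weighted voting, can be sketched as follows. Weighting each representative tile's decision value by the size of the cluster it represents is one plausible choice; the exact weighting used in the paper is not reproduced here, and the scores below are invented.

```python
def slide_diagnosis(tile_scores, cluster_sizes):
    """tile_scores: classifier decision values (positive -> GBM,
    negative -> lower-grade glioma) for one representative tile per
    cluster. cluster_sizes: tiles per cluster, used as vote weights."""
    total = sum(weight * score
                for score, weight in zip(tile_scores, cluster_sizes))
    return "GBM" if total > 0 else "lower-grade glioma"

# Three clusters: two small clusters lean negative, one large cluster
# leans strongly positive, so the weighted vote favours GBM
label = slide_diagnosis([-0.4, -0.2, 0.9], [10, 15, 60])
```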

  10. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    International Nuclear Information System (INIS)

    Lee, M; Woo, B; Kim, J; Jamshidi, N; Kuo, M

    2015-01-01

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow-cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate reproducibility. Results: Inter-observer correlations and coefficients of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively; with the grow-cut method they ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficients of variation for especially important features, previously reported as predictive of patient survival, were: 3.4% with the deformable model and 7.4% with the grow-cut method for the proportion of contrast-enhanced tumor region; 5.5% with the deformable model and 25.7% with the grow-cut method for the proportion of necrosis; and 2.1% with the deformable model and 4.4% with the grow-cut method for edge sharpness of the tumor on CE-T1WI. Conclusion: Comparison of the two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric brain MRI.
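    The reproducibility metric used above, the coefficient of variation, is simply the standard deviation expressed as a percentage of the mean. A minimal sketch follows; the population standard deviation is assumed and the paired measurements are invented.

```python
import math

def coefficient_of_variation(values):
    """COV (%) = population standard deviation / mean * 100."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(variance) / mean * 100.0

# Two observers' measurements of the same feature, e.g. tumor area in mm^2:
# close agreement between observers yields a low COV
cov = coefficient_of_variation([100.0, 104.0])
```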

  11. Automated image analysis of cyclin D1 protein expression in invasive lobular breast carcinoma provides independent prognostic information.

    Science.gov (United States)

    Tobin, Nicholas P; Lundgren, Katja L; Conway, Catherine; Anagnostaki, Lola; Costello, Sean; Landberg, Göran

    2012-11-01

    The emergence of automated image analysis algorithms has aided the enumeration, quantification, and immunohistochemical analyses of tumor cells in both whole section and tissue microarray samples. To date, the focus of such algorithms in the breast cancer setting has been on traditional markers in the common invasive ductal carcinoma subtype. Here, we aimed to optimize and validate an automated analysis of the cell cycle regulator cyclin D1 in a large collection of invasive lobular carcinoma and relate its expression to clinicopathologic data. The image analysis algorithm was trained to optimally match manual scoring of cyclin D1 protein expression in a subset of invasive lobular carcinoma tissue microarray cores. The algorithm was capable of distinguishing cyclin D1-positive cells and illustrated high correlation with traditional manual scoring (κ=0.63). It was then applied to our entire cohort of 483 patients, with subsequent statistical comparisons to clinical data. We found no correlation between cyclin D1 expression and tumor size, grade, and lymph node status. However, overexpression of the protein was associated with reduced recurrence-free survival (P=.029), as was positive nodal status (Pinvasive lobular carcinoma. Finally, high cyclin D1 expression was associated with increased hazard ratio in multivariate analysis (hazard ratio, 1.75; 95% confidence interval, 1.05-2.89). In conclusion, we describe an image analysis algorithm capable of reliably analyzing cyclin D1 staining in invasive lobular carcinoma and have linked overexpression of the protein to increased recurrence risk. Our findings support the use of cyclin D1 as a clinically informative biomarker for invasive lobular breast cancer. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. AROSICS: An Automated and Robust Open-Source Image Co-Registration Software for Multi-Sensor Satellite Data

    Directory of Open Access Journals (Sweden)

    Daniel Scheffler

    2017-07-01

    Full Text Available Geospatial co-registration is a mandatory prerequisite when dealing with remote sensing data. Inter- or intra-sensoral misregistration will negatively affect any subsequent image analysis, specifically when processing multi-sensoral or multi-temporal data. In recent decades, many algorithms have been developed to enable manual, semi- or fully automatic displacement correction. Especially in the context of big data processing and the development of automated processing chains that aim to be applicable to different remote sensing systems, there is a strong need for efficient, accurate and generally usable co-registration. Here, we present AROSICS (Automated and Robust Open-Source Image Co-Registration Software), a Python-based open-source software package including an easy-to-use user interface for automatic detection and correction of sub-pixel misalignments between various remote sensing datasets. It is independent of spatial or spectral characteristics and robust against high degrees of cloud coverage and spectral and temporal land cover dynamics. The co-registration is based on phase correlation for sub-pixel shift estimation in the frequency domain, utilizing the Fourier shift theorem in a moving-window manner. A dense grid of spatial shift vectors can be created and automatically filtered by combining various validation and quality estimation metrics. Additionally, the software supports the masking of, e.g., clouds and cloud shadows to exclude such areas from spatial shift detection. The software has been tested on more than 9000 satellite images acquired by different sensors. The results are evaluated exemplarily for two inter-sensoral and two intra-sensoral use cases and show registration results in the sub-pixel range, with root mean square error fits around 0.3 pixels and better.
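    The shift-estimation principle behind AROSICS, phase correlation via the Fourier shift theorem, can be demonstrated with a tiny one-dimensional example: the normalized cross-power spectrum of a signal and its shifted copy inverse-transforms to a peak at the shift. The naive O(N^2) DFT below is for illustration only; AROSICS itself works on 2D image windows with FFTs.

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform, sufficient for a small demo."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def phase_correlation_shift(ref, moved):
    """Peak of the inverse-transformed normalized cross-power spectrum
    gives the cyclic shift of `moved` relative to `ref`."""
    fr, fm = dft(ref), dft(moved)
    cross = [r.conjugate() * m for r, m in zip(fr, fm)]
    norm = [c / (abs(c) + 1e-12) for c in cross]   # keep phase only
    corr = dft(norm, inverse=True)
    return max(range(len(corr)), key=lambda i: corr[i].real)

ref = [0.0] * 16
ref[2:5] = [1.0, 2.0, 1.0]          # a small "feature" in the reference
moved = ref[-3:] + ref[:-3]         # cyclic shift right by 3 samples
shift = phase_correlation_shift(ref, moved)
```

Normalizing away the magnitudes is what makes the estimator robust to brightness differences between sensors; only the phase difference, which encodes the displacement, survives.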

  13. Fully Automated On-Chip Imaging Flow Cytometry System with Disposable Contamination-Free Plastic Re-Cultivation Chip

    Directory of Open Access Journals (Sweden)

    Tomoyuki Kaneko

    2011-06-01

    Full Text Available We have developed a novel imaging cytometry system using a poly(methyl methacrylate) (PMMA) based microfluidic chip. The system was contamination-free, because sample suspensions contacted only a disposable PMMA chip and no other component of the system. The transparency and low fluorescence of PMMA were suitable for microscopic imaging of cells flowing through microchannels on the chip. Sample particles flowing through microchannels on the chip were discriminated by an image-recognition unit with a high-speed camera in real time at a rate of 200 events/s; e.g., microparticles 2.5 μm and 3.0 μm in diameter were differentiated with an error rate of less than 2%. Desired cells were separated automatically from other cells by electrophoretic or dielectrophoretic force one by one, with a separation efficiency of 90%. Cells in suspension with fluorescent dye were separated using the same kind of microfluidic chip. A sample of 5 μL with 1 × 10^6 particles/mL was processed within 40 min. Separated cells could be cultured on the microfluidic chip without contamination. The whole sample-handling operation was automated using a 3D micropipetting system. These results show that the novel imaging flow cytometry system is practically applicable for biological research and clinical diagnostics.

  14. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Performance of automated methods to isolate brain from nonbrain tissues in magnetic resonance (MR) structural images may be influenced by MR signal inhomogeneities, type of MR image set, regional anatomy, and age and diagnosis of subjects studied. The present study compared the performance of four...... Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed...

  15. Quantifying Porosity through Automated Image Collection and Batch Image Processing: Case Study of Three Carbonates and an Aragonite Cemented Sandstone

    Directory of Open Access Journals (Sweden)

    Jim Buckman

    2017-08-01

    Full Text Available Modern scanning electron microscopes often include software that allows for the possibility of obtaining large format high-resolution image montages over areas of several square centimeters. Such montages are typically automatically acquired and stitched, comprising many thousand individual tiled images. Images, collected over a regular grid pattern, are a rich source of information on factors such as variability in porosity and distribution of mineral phases, but can be hard to visually interpret. Additional quantitative data can be accessed through the application of image analysis. We use backscattered electron (BSE images, collected from polished thin sections of two limestone samples from the Cretaceous of Brazil, a Carboniferous limestone from Scotland, and a carbonate cemented sandstone from Northern Ireland, with up to 25,000 tiles per image, collecting numerical quantitative data on the distribution of porosity. Images were automatically collected using the FEI software Maps, batch processed by image analysis (through ImageJ, with results plotted on 2D contour plots with MATLAB. These plots numerically and visually clearly express the collected porosity data in an easily accessible form, and have application for the display of other data such as pore size, shape, grain size/shape, orientation and mineral distribution, as well as being of relevance to sandstone, mudrock and other porous media.
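    The per-tile porosity measurement described above can be illustrated in a few lines: threshold the backscattered-electron grayscale values (pore space images dark in BSE) and report the dark-pixel fraction. The threshold value and the tiny synthetic "tile" below are invented for illustration; the study performed this step through ImageJ batch processing.

```python
def porosity_fraction(image, threshold):
    """Fraction of pixels darker than `threshold` in a 2D grayscale tile.
    In BSE images of polished sections, pore space appears dark."""
    total = sum(len(row) for row in image)
    dark = sum(1 for row in image for px in row if px < threshold)
    return dark / total

# 4x4 synthetic tile: 4 dark (pore) pixels out of 16 -> porosity 0.25
tile = [[200, 200,  30, 200],
        [200,  25, 200, 200],
        [200, 200, 200,  10],
        [ 40, 200, 200, 200]]
phi = porosity_fraction(tile, threshold=50)
```

Running this over every tile of a stitched montage yields the per-position porosity values that the authors contour-plotted in MATLAB.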

  16. Automated Image Analysis of HER2 Fluorescence In Situ Hybridization to Refine Definitions of Genetic Heterogeneity in Breast Cancer Tissue

    Directory of Open Access Journals (Sweden)

    Gedmante Radziuviene

    2017-01-01

    Full Text Available Human epidermal growth factor receptor 2 (HER2) gene-targeted therapy for breast cancer relies primarily on HER2 overexpression established by immunohistochemistry (IHC), with borderline cases being further tested for amplification by fluorescence in situ hybridization (FISH). Manual interpretation of HER2 FISH is based on a limited number of cells and rather complex definitions of equivocal, polysomic, and genetically heterogeneous (GH) cases. Image analysis (IA) can extract high-capacity data and potentially improve HER2 testing in borderline cases. We investigated statistically derived indicators of HER2 heterogeneity in HER2 FISH data obtained by automated IA of 50 IHC borderline (2+) cases of invasive ductal breast carcinoma. Overall, IA significantly underestimated the conventional HER2 and CEP17 counts and the HER2/CEP17 ratio; however, it collected more amplified cells, in some cases below the lower limit of the GH definition by the manual procedure. Indicators for amplification, polysomy, and bimodality were extracted by factor analysis and allowed clustering of the tumors into amplified, nonamplified, and equivocal/polysomy categories. The bimodality indicator provided independent cell diversity characteristics for all clusters. Tumors classified as bimodal only partially coincided with the conventional GH heterogeneity category. We conclude that an automated high-capacity nonselective tumor cell assay can generate evidence-based HER2 intratumor heterogeneity indicators to refine GH definitions.

  17. Automated synthesis with HPLC purification of 18F-FMISO as specific molecular imaging probe of tumor hypoxia

    International Nuclear Information System (INIS)

    Wang Mingwei; Zhang Yingjian; Zhang Yongping

    2012-01-01

    An improved automated synthesis of 1-H-1-(3-[18F]fluoro-2-hydroxypropyl)-2-nitro-imidazole (18F-FMISO), a specific molecular imaging probe of tumor hypoxia, was developed in this study using an upgraded Explora GN module integrated with an Explora LC for HPLC purification. The radiochemical synthesis of 18F-FMISO started from the precursor 1-(2'-nitro-1'-imidazolyl)-2-O-tetrahydropyranyl-3-O-tosyl-propanediol (NITTP) and included nucleophilic [18F] radiofluorination at 120℃ for 5 min and hydrolysis at 130℃ for 8 min. The automated synthesis of 18F-FMISO, which is fast, reliable and supports multiple runs, could be completed with a total synthesis time of less than 65 min and a radiochemical yield of 25%∼35% (without decay correction). The quality control of 18F-FMISO met the radiopharmaceutical requirements, in particular a radiochemical purity greater than 99% and high chemical purity and specific activity owing to HPLC purification. (authors)

  18. Comparison of known food weights with image-based portion-size automated estimation and adolescents' self-reported portion size.

    Science.gov (United States)

    Lee, Christina D; Chae, Junghoon; Schap, TusaRebecca E; Kerr, Deborah A; Delp, Edward J; Ebert, David S; Boushey, Carol J

    2012-03-01

    Diet is a critical element of diabetes self-management. An emerging area of research is the use of images for dietary records using mobile telephones with embedded cameras. These tools are being designed to reduce user burden and to improve the accuracy of portion-size estimation through automation. The objectives of this study were (1) to assess the error of automatically determined portion weights compared to known portion weights of foods and (2) to compare the error between automated and human estimates. Adolescents (n = 15) captured images of their eating occasions over a 24 h period. All foods and beverages served were weighed. Adolescents self-reported portion sizes for one meal. Image analysis was used to estimate portion weights. Data analysis compared known weights, automated weights, and self-reported portions. For the 19 foods, the mean ratio of automated weight estimate to known weight ranged from 0.89 to 4.61, and 9 foods were within 0.80 to 1.20. The largest error was for lettuce and the most accurate was strawberry jam. The children were fairly accurate with portion estimates for two foods (sausage links, toast) using one type of estimation aid and two foods (sausage links, scrambled eggs) using another aid. The automated method was fairly accurate for two foods (sausage links, jam); however, the 95% confidence intervals for the automated estimates were consistently narrower than those of the human estimates. The ability of humans to estimate portion sizes of foods remains a problem and a perceived burden. Errors in automated portion-size estimation can be systematically addressed while minimizing the burden on people. Future applications that take over the burden of these processes may translate to better diabetes self-management. © 2012 Diabetes Technology Society.
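
    The study's accuracy metric, the ratio of automated to known weight with an acceptance band of 0.80 to 1.20, can be sketched as:

    ```python
    # Sketch: per-food automated/known weight ratios.
    # Foods whose ratio falls within 0.80-1.20 are flagged as acceptable,
    # mirroring the band used in the study.

    def weight_ratios(known, automated):
        """Return {food: (ratio, within_band)} for each weighed food."""
        report = {}
        for food, true_weight in known.items():
            ratio = automated[food] / true_weight
            report[food] = (round(ratio, 2), 0.80 <= ratio <= 1.20)
        return report
    ```

    The example weights below are illustrative, not the study's data; they reproduce the kind of spread reported (jam near 1.0, lettuce far above it).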

  19. Automated Tracking of Root for Confocal Time-lapse Imaging of Cellular Processes

    OpenAIRE

    Doumane, Mehdi; Lionnet, Claire; Bayle, Vincent; Jaillais, Yvon; Caillaud, Marie-Cécile

    2017-01-01

    Here we describe a protocol that enables automated time-lapse imaging of growing root tips for several hours. Plant roots expressing fluorescent proteins or stained with dyes are imaged while they grow, using automatic movement of the microscope stage that compensates for root growth and allows a given region of the root to be followed over time. The protocol makes possible the image acquisition of multiple growing root tips, therefore increasing the number of recorded mitotic event...

  20. Fully automated registration of vibrational microspectroscopic images in histologically stained tissue sections

    OpenAIRE

    Yang, Chen; Niedieker, Daniel; Großerüschkamp, Frederik; Horn, Melanie; Tannapfel, Andrea; Kallenbach-Thieltges, Angela; Gerwert, Klaus; Mosig, Axel

    2015-01-01

    Background In recent years, hyperspectral microscopy techniques such as infrared or Raman microscopy have been applied successfully for diagnostic purposes. In many of the corresponding studies, it is common practice to measure one and the same sample under different types of microscopes. Any joint analysis of the two image modalities requires overlaying the images, so that identical positions in the sample are located at the same coordinate in both images. This step, commonly referred to as ...

  1. Automated knot detection with visual post-processing of Douglas-fir veneer images

    Science.gov (United States)

    C.L. Todoroki; Eini C. Lowell; Dennis Dykstra

    2010-01-01

    Knots on digital images of 51 full veneer sheets, obtained from nine peeler blocks crosscut from two 35-foot (10.7 m) long logs and one 18-foot (5.5 m) log from a single Douglas-fir tree, were detected using a two-phase algorithm. The algorithm was developed using one image, the Development Sheet, refined on five other images, the Training Sheets, and then applied to...

  2. Automated Image Sampling and Classification Can Be Used to Explore Perceived Naturalness of Urban Spaces.

    Directory of Open Access Journals (Sweden)

    Roger Hyam

    Full Text Available The psychological restorative effects of exposure to nature are well established and extend to the mere viewing of images of nature. A previous study has shown that the Perceived Naturalness (PN) of images correlates with their restorative value. This study tests whether it is possible to detect the degree of PN of images using an image classifier. It takes images that have been scored by humans for PN (including a subset that have been assessed for restorative value) and passes them through the Google Vision API image classification service. The resulting labels are assigned to broad semantic classes to create a Calculated Semantic Naturalness (CSN) metric for each image. It was found that CSN correlates with PN. CSN was then calculated for a geospatial sampling of Google Street View images across the city of Edinburgh. CSN was found to correlate with PN in this sample also, indicating the technique may be useful in large-scale studies. Because CSN correlates with PN, which correlates with restorativeness, it is suggested that CSN or a similar measure may be useful in automatically detecting restorative images and locations. In an exploratory aside, CSN was not found to correlate with an indicator of socioeconomic deprivation.
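
    A minimal sketch of the CSN idea, assuming a hypothetical label-to-naturalness mapping (the paper's actual semantic classes and weights are not reproduced here):

    ```python
    # Hypothetical mapping from classifier labels to naturalness weights;
    # the study's real semantic classes and weighting are assumptions here.
    NATURALNESS = {"tree": 1.0, "vegetation": 1.0, "sky": 0.5,
                   "building": 0.0, "road": 0.0, "vehicle": 0.0}

    def csn(labels):
        """Calculated Semantic Naturalness: mean naturalness weight of the
        recognized labels returned for one image; None if nothing matched."""
        scores = [NATURALNESS[label] for label in labels if label in NATURALNESS]
        return sum(scores) / len(scores) if scores else None
    ```

    In the study, a score like this is then correlated with human PN ratings across a sample of images.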

  3. Scaling up Ecological Measurements of Coral Reefs Using Semi-Automated Field Image Collection and Analysis

    Directory of Open Access Journals (Sweden)

    Manuel González-Rivero

    2016-01-01

    Full Text Available Ecological measurements in marine settings are often constrained in space and time, with spatial heterogeneity obscuring broader generalisations. While advances in remote sensing, integrative modelling and meta-analysis enable generalisations from field observations, there is an underlying need for high-resolution, standardised and geo-referenced field data. Here, we evaluate a new approach aimed at optimising data collection and analysis to assess broad-scale patterns of coral reef community composition using automatically annotated underwater imagery, captured along 2 km transects. We validate this approach by investigating its ability to detect spatial (e.g., across regions) and temporal (e.g., over years) change, and by comparing automated annotation errors to those of multiple human annotators. Our results indicate that change of coral reef benthos can be captured at high resolution both spatially and temporally, with an average error below 5% among key benthic groups. Cover estimation errors using automated annotation varied between 2% and 12%, slightly larger than human errors (which varied between 1% and 7%), but small enough to detect significant changes among dominant groups. Overall, this approach allows rapid collection of in-situ observations at larger spatial scales (km) than previously possible, and provides a pathway to link, calibrate, and validate broader analyses across even larger spatial scales (10–10,000 km²).

  4. Automated conversion of Docker images to CVMFS for LIGO and the Open Science Grid

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    In this lightning talk, I will discuss the development of a webhook-based tool for automatically converting Docker images from DockerHub and private registries to CVMFS filesystems. The tool is highly reliant on previous work by the Open Science Grid for scripted nightly conversion of images from DockerHub.

  5. An algorithm for automated detection, localization and measurement of local calcium signals from camera-based imaging.

    Science.gov (United States)

    Ellefsen, Kyle L; Settle, Brett; Parker, Ian; Smith, Ian F

    2014-09-01

    Local Ca²⁺ transients such as puffs and sparks form the building blocks of cellular Ca²⁺ signaling in numerous cell types. They have traditionally been studied by linescan confocal microscopy, but advances in TIRF microscopy together with improved electron-multiplying CCD (EMCCD) cameras now enable rapid (>500 frames s⁻¹) imaging of subcellular Ca²⁺ signals with high spatial resolution in two dimensions. This approach yields vastly more information (ca. 1 Gb min⁻¹) than linescan imaging, rendering visual identification and analysis of the imaged local events both laborious and subject to user bias. Here we describe a routine to rapidly automate the identification and analysis of local Ca²⁺ events. It features an intuitive graphical user interface and runs under Matlab and the open-source Python software. The underlying algorithm features spatial and temporal noise filtering to reliably detect even small events in the presence of noisy and fluctuating baselines; localizes sites of Ca²⁺ release with sub-pixel resolution; facilitates user review and editing of data; and outputs time sequences of fluorescence ratio signals for identified event sites along with Excel-compatible tables listing the amplitudes and kinetics of events. Copyright © 2014 Elsevier Ltd. All rights reserved.
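
    The core detection step, noise-filtered thresholding of a fluorescence trace, can be sketched for a single pixel. This is a simplified stand-in for the paper's spatiotemporal filtering; the robust (MAD-based) noise estimate is an assumption for illustration:

    ```python
    import numpy as np

    def detect_events(trace, k=4.0, win=5):
        """Flag frames where the smoothed fluorescence signal exceeds
        baseline + k * sigma. Temporal smoothing is a simple moving average;
        sigma comes from the median absolute deviation (robust to events)."""
        kernel = np.ones(win) / win
        smooth = np.convolve(trace, kernel, mode="same")
        baseline = np.median(smooth)
        sigma = np.median(np.abs(smooth - baseline)) * 1.4826  # MAD -> std
        return np.flatnonzero(smooth > baseline + k * sigma)
    ```

    The published routine extends this idea to two spatial dimensions plus time and then localizes each detected event with sub-pixel precision.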

  6. Automated identification of brain tumours from single MR images based on segmentation with refined patient-specific priors

    Directory of Open Access Journals (Sweden)

    Ana Sanjuán

    2013-12-01

    Full Text Available Brain tumours can have different shapes or locations, making their identification very challenging. In functional MRI, it is not unusual that patients have only one anatomical image, due to time and financial constraints. Here, we provide a modified automatic lesion identification (ALI) procedure which enables brain tumour identification from single MR images. Our method rests on (A) a modified segmentation-normalisation procedure with an explicit extra prior for the tumour and (B) an outlier detection procedure for abnormal voxel (i.e., tumour) classification. To minimise tissue misclassification, the segmentation-normalisation procedure requires prior information on the tumour location and extent. We therefore propose that ALI is run iteratively, so that the output of Step B is used as a patient-specific prior in Step A. We tested this procedure on real T1-weighted images from 18 patients, and the results were validated against two independent observers' manual tracings. The automated procedure identified the tumours successfully, with excellent agreement with the manual segmentation (area under the ROC curve = 0.97 ± 0.03). The proposed procedure increases the flexibility and robustness of the ALI tool and will be particularly useful for lesion-behaviour mapping studies, or when lesion identification and/or spatial normalisation are problematic.

  7. Mammographic Breast Density Assessment Using Automated Volumetric Software and Breast Imaging Reporting and Data System (BIRADS) Categorization by Expert Radiologists.

    Science.gov (United States)

    Damases, Christine N; Brennan, Patrick C; Mello-Thoms, Claudia; McEntee, Mark F

    2016-01-01

    To investigate agreement on mammographic breast density (MD) assessment between automated volumetric software and Breast Imaging Reporting and Data System (BIRADS) categorization by expert radiologists. Forty cases of left craniocaudal and mediolateral oblique mammograms from 20 women were used. All images had their volumetric density classified using the Volpara density grade (VDG) and average volumetric breast density percentage. The same images were then classified into BIRADS categories (I-IV) by 20 American Board of Radiology examiners. The results demonstrated a moderate agreement (κ = 0.537; 95% CI = 0.234-0.699) between VDG classification and radiologists' BIRADS density assessment. Interreader agreement using BIRADS was also moderate (κ = 0.565; 95% CI = 0.519-0.610), ranging from 0.328 to 0.669. Radiologists' average BIRADS was lower than the average VDG score by 0.33, with a mean of 2.13, whereas the mean VDG was 2.48 (U = -3.742; P < .001). VDG showed a very strong positive correlation with BIRADS (ρ = 0.91; P < .001), as did average volumetric breast density percentage (ρ = 0.94; P < .001); interreader variations still exist within BIRADS. Because of the increasing importance of MD measurement in the clinical management of patients, widely accepted, reproducible, and accurate measures of MD are required. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
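
    The κ values reported above are Cohen's kappa, which can be computed from a rater-agreement matrix. The sketch below is the standard unweighted form; the example numbers are illustrative, not the study's data:

    ```python
    # Unweighted Cohen's kappa from a square agreement matrix in which
    # matrix[i][j] counts cases rated category i by one rater and j by the other.

    def cohens_kappa(matrix):
        n = sum(sum(row) for row in matrix)
        # Observed agreement: proportion of cases on the diagonal.
        po = sum(matrix[i][i] for i in range(len(matrix))) / n
        # Chance agreement: product of each rater's marginal proportions.
        pe = sum(
            (sum(matrix[i]) / n) * (sum(row[i] for row in matrix) / n)
            for i in range(len(matrix))
        )
        return (po - pe) / (1 - pe)
    ```

    With two categories and counts [[20, 5], [10, 15]], observed agreement is 0.7, chance agreement 0.5, giving κ = 0.4 (moderate, by the usual Landis-Koch labels).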

  8. Automated detection and analysis of fluorescent in situ hybridization spots depicted in digital microscopic images of Pap-smear specimens

    Science.gov (United States)

    Wang, Xingwei; Zheng, Bin; Li, Shibo; Zhang, Roy; Mulvihill, John J.; Chen, Wei R.; Liu, Hong

    2009-03-01

    Fluorescence in situ hybridization (FISH) technology has been widely recognized as a promising molecular and biomedical optical imaging tool to screen and diagnose cervical cancer. However, manual FISH analysis is time-consuming and may introduce large inter-reader variability. In this study, a computerized scheme is developed and tested. It automatically detects and analyzes FISH spots depicted on microscopic fluorescence images. The scheme includes two stages: (1) a feature-based classification rule to detect useful interphase cells, and (2) a knowledge-based expert classifier to identify splitting FISH spots and improve the accuracy of counting independent FISH spots. The scheme then classifies detected analyzable cells as normal or abnormal. In this study, 150 FISH images were acquired from Pap-smear specimens and examined by both an experienced cytogeneticist and the scheme. The results showed that (1) the agreement between the cytogeneticist and the scheme was 96.9% in classifying between analyzable and unanalyzable cells (Kappa=0.917), and (2) agreements in detecting normal and abnormal cells based on FISH spots were 90.5% and 95.8% with Kappa=0.867. This study demonstrated the feasibility of automated FISH analysis, which may potentially improve detection efficiency and produce more accurate and consistent results than manual FISH analysis.
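
    The baseline spot-counting step of such a scheme can be sketched with connected-component labeling; the published classifier additionally applies expert rules for splitting spots, which are omitted here:

    ```python
    import numpy as np
    from scipy import ndimage

    def count_spots(channel, threshold):
        """Count connected bright regions in one FISH colour channel.
        This is only the baseline counting step; knowledge-based rules for
        split spots would refine the count before cell classification."""
        mask = channel > threshold
        _, n_spots = ndimage.label(mask)
        return n_spots
    ```

    A cell would then be classified (e.g., as normal or abnormal) from the spot counts in each probe channel.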

  9. Automated MicroSPECT/MicroCT Image Analysis of the Mouse Thyroid Gland.

    Science.gov (United States)

    Cheng, Peng; Hollingsworth, Brynn; Scarberry, Daniel; Shen, Daniel H; Powell, Kimerly; Smart, Sean C; Beech, John; Sheng, Xiaochao; Kirschner, Lawrence S; Menq, Chia-Hsiang; Jhiang, Sissy M

    2017-11-01

    The ability of thyroid follicular cells to take up iodine enables the use of radioactive iodine (RAI) for imaging and targeted killing of RAI-avid thyroid cancer following thyroidectomy. To facilitate identifying novel strategies to improve 131I therapeutic efficacy for patients with RAI-refractory disease, it is desirable to optimize image acquisition and analysis for preclinical mouse models of thyroid cancer. A customized mouse cradle was designed and used for microSPECT/CT image acquisition at 1 hour (t1) and 24 hours (t24) post injection of 123I, which mainly reflect RAI influx/efflux equilibrium and RAI retention in the thyroid, respectively. FVB/N mice with normal thyroid glands and TgBRAFV600E mice with thyroid tumors were imaged. In-house CTViewer software was developed to streamline image analysis with new capabilities, along with display of 3D voxel-based 123I gamma photon intensity in MATLAB. The customized mouse cradle facilitates consistent tissue configuration among image acquisitions, such that rigid body registration can be applied to align serial images of the same mouse via the in-house CTViewer software. CTViewer is designed specifically to streamline SPECT/CT image analysis, with functions tailored to quantify thyroid radioiodine uptake. Automatic segmentation of thyroid volumes of interest (VOI) from adjacent salivary glands in t1 images is enabled by superimposing the thyroid VOI from the t24 image onto the corresponding aligned t1 image. The extent of heterogeneity in 123I accumulation within thyroid VOIs can be visualized by 3D display of voxel-based 123I gamma photon intensity. MicroSPECT/CT image acquisition and analysis of thyroidal RAI uptake are greatly improved by the cradle and the CTViewer software, respectively. Furthermore, the approach of superimposing thyroid VOIs from t24 images to select thyroid VOIs on corresponding aligned t1 images can be applied to studies in which the target tissue has differential radiotracer retention.

  10. Automated boundary segmentation and wound analysis for longitudinal corneal OCT images

    Science.gov (United States)

    Wang, Fei; Shi, Fei; Zhu, Weifang; Pan, Lingjiao; Chen, Haoyu; Huang, Haifan; Zheng, Kangkeng; Chen, Xinjian

    2017-03-01

    Optical coherence tomography (OCT) has been widely applied in the examination and diagnosis of corneal diseases, but the information that can be obtained directly from OCT images by manual inspection is limited. We propose an automatic processing method to assist ophthalmologists in locating the boundaries in corneal OCT images and analyzing the recovery of corneal wounds after treatment from longitudinal OCT images. It includes the following steps: preprocessing, epithelium and endothelium boundary segmentation and correction, wound detection, corneal boundary fitting, and wound analysis. The method was tested on a data set of longitudinal corneal OCT images from 20 subjects. Each subject has five images acquired after corneal operation over a period of time. The segmentation and classification accuracy of the proposed algorithm is high, and it can be used for analyzing wound recovery after corneal surgery.
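
    Layer-boundary segmentation in OCT is commonly posed as a minimum-cost path across image columns. The sketch below shows one such dynamic-programming formulation under that assumption; it is illustrative, not necessarily the authors' exact algorithm:

    ```python
    import numpy as np

    def min_cost_boundary(cost):
        """Trace the cheapest row per column through a per-pixel cost image
        (e.g., negative vertical gradient), allowing the boundary to move at
        most one row between neighbouring columns."""
        rows, cols = cost.shape
        acc = cost.astype(float).copy()
        # Forward pass: accumulate the cheapest reachable cost per pixel.
        for c in range(1, cols):
            for r in range(rows):
                lo, hi = max(r - 1, 0), min(r + 2, rows)
                acc[r, c] += acc[lo:hi, c - 1].min()
        # Backtrack from the cheapest row in the last column.
        path = [int(np.argmin(acc[:, -1]))]
        for c in range(cols - 2, -1, -1):
            r = path[-1]
            lo, hi = max(r - 1, 0), min(r + 2, rows)
            path.append(lo + int(np.argmin(acc[lo:hi, c])))
        return path[::-1]
    ```

    In practice the epithelium and endothelium would each get their own cost image, and the traced paths would then be corrected and fitted as the abstract describes.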

  11. Chemical machine vision: automated extraction of chemical metadata from raster images.

    Science.gov (United States)

    Gkoutos, Georgios V; Rzepa, Henry; Clark, Richard M; Adjei, Osei; Johal, Harpal

    2003-01-01

    We present a novel application of machine vision methods for the identification of chemical composition diagrams from two-dimensional digital raster images. The method is based on the use of Gabor wavelets and an energy function to derive feature vectors from digital images. These are used for training and classification purposes using a Kohonen network for classification with the Euclidean distance norm. We compare this method with previous approaches to transforming such images to a molecular connection table, which are designed to achieve complete atom connection table fidelity but at the expense of requiring human interaction. The present texture-based approach is complementary in attempting to recognize higher order features such as the presence of a chemical representation in the original raster image. This information can be used for providing chemical metadata descriptors of the original image as part of a robot-based Internet resource discovery tool.

  12. Application of Deep Learning in Automated Analysis of Molecular Images in Cancer: A Survey

    Science.gov (United States)

    Xue, Yong; Chen, Shihui; Liu, Yong

    2017-01-01

    Molecular imaging enables the visualization and quantitative analysis of the alterations of biological procedures at molecular and/or cellular level, which is of great significance for early detection of cancer. In recent years, deep learning has been widely used in medical imaging analysis, as it overcomes the limitations of visual assessment and traditional machine learning techniques by extracting hierarchical features with powerful representation capability. Research on cancer molecular images using deep learning techniques is also increasing dynamically. Hence, in this paper, we review the applications of deep learning in molecular imaging in terms of tumor lesion segmentation, tumor classification, and survival prediction. We also outline some future directions in which researchers may develop more powerful deep learning models for better performance in the applications in cancer molecular imaging. PMID:29114182

  13. Application of Deep Learning in Automated Analysis of Molecular Images in Cancer: A Survey

    Directory of Open Access Journals (Sweden)

    Yong Xue

    2017-01-01

    Full Text Available Molecular imaging enables the visualization and quantitative analysis of the alterations of biological procedures at molecular and/or cellular level, which is of great significance for early detection of cancer. In recent years, deep learning has been widely used in medical imaging analysis, as it overcomes the limitations of visual assessment and traditional machine learning techniques by extracting hierarchical features with powerful representation capability. Research on cancer molecular images using deep learning techniques is also increasing dynamically. Hence, in this paper, we review the applications of deep learning in molecular imaging in terms of tumor lesion segmentation, tumor classification, and survival prediction. We also outline some future directions in which researchers may develop more powerful deep learning models for better performance in the applications in cancer molecular imaging.

  14. 3-D Digital Imaging of Breast Calcifications: Improvements in Image Quality, and Development of Automated Reconstruction Methods

    National Research Council Canada - National Science Library

    Maidment, Andrew

    2000-01-01

    In our work to date, we have generated a manually segmented and paired dataset of 110 patient images, which we have used as a "gold standard" in the evaluation of computer algorithms for identifying...

  15. Automated mosaicing of feature-poor optical coherence tomography volumes with an integrated white light imaging system.

    Science.gov (United States)

    Lurie, Kristen L; Angst, Roland; Ellerbee, Audrey K

    2014-07-01

    We demonstrate the first automated, volumetric mosaicing algorithm for optical coherence tomography (OCT) that both accommodates 6-degree-of-freedom rigid transformations and implements a bundle adjustment step amenable to generating large fields of view with endoscopic and freehand imaging systems. Our mosaicing algorithm exploits the known, rigid connection between a combined white light and OCT imaging system to reduce the computational complexity of traditional volumetric mosaicing pipelines. Specifically, the search for 3-D point correspondences is replaced by two 2-D processing steps: we first coregister a pair of white light images in 2-D and then generate a surface map based on the volumetric OCT data, which is used to convert 2-D image homographies into 3-D volumetric transformations. A significant benefit of our dual-modality approach is its tolerance for feature-poor datasets such as bladder tissue; in contrast, approaches that mosaic feature-rich volumes with significant variations in the local intensity gradient (e.g., retinal data containing prolific vasculature) are not suitable for such feature-poor datasets. We demonstrate the performance of our algorithm using ex vivo bladder tissue and a custom tissue-mimicking phantom. The algorithm shows excellent performance over the range of volume-to-volume transformations expected during endoscopic examination, and achieves accuracy comparable to an open-source gold-standard algorithm (N-SIFT) with run times several orders of magnitude shorter. We anticipate the proposed algorithm can benefit bladder surveillance and surgical planning. Furthermore, its generality gives it broad applicability and potential to extend the use of OCT to clinical applications relevant to large organs typically imaged with freehand, forward-viewing endoscopes.

  16. Automated analysis of retinal imaging using machine learning techniques for computer vision [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Jeffrey De Fauw

    2016-07-01

    Full Text Available There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies, including neovascular ("wet") age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the 'back' of the eye) and Optical Coherence Tomography (OCT), a modality that uses light waves in a similar way to how ultrasound uses sound waves. Changes in population demographics and expectations and the changing pattern of chronic diseases create a rising demand for such imaging. Meanwhile, interrogation of such images is time consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to the automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, Google DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring therapeutic success.

  17. Automated analysis of retinal imaging using machine learning techniques for computer vision [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Jeffrey De Fauw

    2017-06-01

    Full Text Available There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies, including neovascular ("wet") age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the 'back' of the eye) and Optical Coherence Tomography (OCT), a modality that uses light waves in a similar way to how ultrasound uses sound waves. Changes in population demographics and expectations and the changing pattern of chronic diseases create a rising demand for such imaging. Meanwhile, interrogation of such images is time consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to the automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring therapeutic success.

  18. Surveillance of Women with the BRCA1 or BRCA2 Mutation by Using Biannual Automated Breast US, MR Imaging, and Mammography

    NARCIS (Netherlands)

    Zelst, J.C.M. van; Mus, R.D.M.; Woldringh, G.H.; Rutten, M.; Bult, P.; Vreemann, S.; Jong, M de; Karssemeijer, N.; Hoogerbrugge, N.; Mann, R.M.

    2017-01-01

    Purpose To evaluate a multimodal surveillance regimen including yearly full-field digital (FFD) mammography, dynamic contrast agent-enhanced (DCE) magnetic resonance (MR) imaging, and biannual automated breast (AB) ultrasonography (US) in women with BRCA1 and BRCA2 mutations. Materials and Methods

  19. Feature tracking for automated volume of interest stabilization on 4D-OCT images

    Science.gov (United States)

    Laves, Max-Heinrich; Schoob, Andreas; Kahrs, Lüder A.; Pfeiffer, Tom; Huber, Robert; Ortmaier, Tobias

    2017-03-01

    A common representation of volumetric medical image data is the triplanar view (TV), in which the surgeon manually selects slices showing the anatomical structure of interest. In addition to common medical imaging such as MRI or computed tomography, recent advances in the field of optical coherence tomography (OCT) have enabled live processing and volumetric rendering of four-dimensional images of the human body. Because the region of interest undergoes motion, it is challenging for the surgeon to keep track of an object by continuously adjusting the TV to the desired slices. To select these slices in subsequent frames automatically, it is necessary to track movements of the volume of interest (VOI). This has not yet been addressed for 4D-OCT images. Therefore, this paper evaluates motion tracking by applying state-of-the-art tracking schemes to maximum intensity projections (MIP) of 4D-OCT images. The estimated VOI location is used to conveniently show corresponding slices and to improve the MIPs by calculating thin-slab MIPs. Tracking performance is evaluated on an in-vivo sequence of human skin, captured at 26 volumes per second. Among the investigated tracking schemes, our recently presented tracking scheme for soft tissue motion provides the highest accuracy, with an error of under 2.2 voxels for the first 80 volumes. Object tracking on 4D-OCT images enables its use for sub-epithelial tracking of microvessels for image guidance.
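
    Tracking here operates on maximum intensity projections. An MIP is simply an axis-wise maximum over the volume, and restricting the projected slab around a tracked depth yields the thin-slab MIP mentioned above; the sketch below assumes a NumPy array volume:

    ```python
    import numpy as np

    def thin_slab_mip(volume, axis=0, center=None, half_width=None):
        """Maximum intensity projection along one axis. If a slab center and
        half-width are given, only depths within the slab are projected,
        suppressing structures outside the tracked VOI."""
        if center is not None and half_width is not None:
            sl = [slice(None)] * volume.ndim
            lo = max(center - half_width, 0)
            sl[axis] = slice(lo, center + half_width + 1)
            volume = volume[tuple(sl)]
        return volume.max(axis=axis)
    ```

    Feeding the tracker 2-D MIPs instead of raw volumes is what makes standard 2-D tracking schemes applicable to 4D-OCT data.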

  20. Automated image analysis for quantification of reactive oxygen species in plant leaves.

    Science.gov (United States)

    Sekulska-Nalewajko, Joanna; Gocławski, Jarosław; Chojak-Koźniewska, Joanna; Kuźniak, Elżbieta

    2016-10-15

    The paper presents an image processing method for the quantitative assessment of ROS accumulation areas in leaves stained with DAB or NBT for H₂O₂ and O₂⁻ detection, respectively. Three types of images, determined by the combination of staining method and background color, are considered. The method is based on the principle of supervised machine learning, with manually labeled image patterns used for training. The method's algorithm is implemented as a JavaScript macro in the public-domain Fiji (ImageJ) environment. It selects the stained regions of ROS-mediated histochemical reactions, subsequently fractionated according to weak, medium and intense staining intensity and thus ROS accumulation, and it also evaluates the total leaf blade area. The precision of ROS accumulation area detection is validated by the Dice Similarity Coefficient against the manually labeled patterns. The proposed framework reduces computational complexity, requires less image-processing expertise than competing methods once prepared, and represents a routine quantitative imaging assay for general histochemical image classification. Copyright © 2016 Elsevier Inc. All rights reserved.
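
    The validation metric named above, the Dice Similarity Coefficient, and the stained-area fraction it validates can be sketched as follows (binary NumPy masks assumed):

    ```python
    import numpy as np

    def dice(mask_a, mask_b):
        """Dice Similarity Coefficient between two binary masks:
        2|A∩B| / (|A| + |B|), used to compare detected ROS areas with
        manually labeled patterns."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        inter = np.logical_and(a, b).sum()
        total = a.sum() + b.sum()
        return 2.0 * inter / total if total else 1.0

    def stained_fraction(stain_mask, leaf_mask):
        """Fraction of the leaf blade area covered by stained (ROS-positive)
        pixels; leaf_mask restricts the denominator to the blade."""
        return stain_mask.astype(bool)[leaf_mask].sum() / leaf_mask.sum()
    ```

    Fractionating the stain mask into weak, medium and intense classes, as the paper does, would simply repeat the fraction computation per intensity band.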