WorldWideScience

Sample records for automated image analysis

  1. Automated medical image segmentation techniques

    OpenAIRE

    Sharma Neeraj; Aggarwal Lalit

    2010-01-01

    Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed tomography (CT) and magnetic resonance (MR) imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, specifically discussed in the context of CT and MR images. The aim is to discuss the problems encountered in segmentation of CT and MR images, and the relative merits ...

  2. Automated Orientation of Aerial Images

    DEFF Research Database (Denmark)

    Høhle, Joachim

    2002-01-01

    Methods for automated orientation of aerial images are presented. They are based on the use of templates, which are derived from existing databases, and area-based matching. The characteristics of available database information and the accuracy requirements for map compilation and orthoimage production are discussed using the example of Denmark. Details on the developed methods for interior and exterior orientation are described. Practical examples, such as the measurement of réseau images, the updating of topographic databases and the renewal of orthoimages, are used to prove the feasibility of the developed...

  3. An automated imaging system for radiation biodosimetry.

    Science.gov (United States)

    Garty, Guy; Bigelow, Alan W; Repin, Mikhail; Turner, Helen C; Bian, Dakai; Balajee, Adayabalam S; Lyulko, Oleksandra V; Taveras, Maria; Yao, Y Lawrence; Brenner, David J

    2015-07-01

    We describe here an automated imaging system developed at the Center for High Throughput Minimally Invasive Radiation Biodosimetry. The imaging system is built around a fast, sensitive sCMOS camera and a rapidly switchable LED light source. It features complete automation of all steps of the imaging process and contains built-in feedback loops to ensure proper operation. The imaging system is intended as a back end to the RABiT, a robotic platform for radiation biodosimetry, and automates image acquisition and analysis for four biodosimetry assays for which we have developed automated protocols: the Cytokinesis-Blocked Micronucleus assay, the γ-H2AX assay, the Dicentric assay (using PNA or FISH probes) and the RABiT-BAND assay. PMID:25939519

  4. Automated image enhancement using power law transformations

    Indian Academy of Sciences (India)

    S P Vimal; P K Thiruvikraman

    2012-12-01

    We propose a scheme for automating the power law transformations used for image enhancement. The scheme does not require the user to choose the exponent in the power law transformation. The method works well for images with poor contrast, especially for those in which the histogram peaks corresponding to the background and the foreground are not widely separated.
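
    A power law (gamma) transformation maps each normalized intensity r to s = r**gamma. The paper's histogram-based exponent selection is not reproduced in the record, so the sketch below uses a simpler, hypothetical heuristic: choose gamma so that the image's mean intensity maps to mid-gray.

```python
import numpy as np

def auto_gamma(image):
    # Hypothetical heuristic (not the paper's scheme): pick the exponent
    # so that the mean normalized intensity maps to 0.5 under s = r**gamma.
    img = image.astype(float) / 255.0
    mean = float(np.clip(img.mean(), 1e-3, 1 - 1e-3))  # avoid log(0)
    gamma = np.log(0.5) / np.log(mean)
    return (255.0 * img ** gamma).astype(np.uint8), gamma
```

    For a dark image (mean below 0.5) this yields gamma < 1, which brightens and stretches the low end of the histogram; bright images get gamma > 1.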

  5. Automated image analysis techniques for cardiovascular magnetic resonance imaging

    NARCIS (Netherlands)

    Geest, Robertus Jacobus van der

    2011-01-01

    The introductory chapter provides an overview of various aspects related to quantitative analysis of cardiovascular MR (CMR) imaging studies. Subsequently, the thesis describes several automated methods for quantitative assessment of left ventricular function from CMR imaging studies. Several novel

  6. Automated Segmentation of Cardiac Magnetic Resonance Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Nilsson, Jens Chr.; Grønning, Bjørn A.

    2001-01-01

    Magnetic resonance imaging (MRI) has been shown to be an accurate and precise technique to assess cardiac volumes and function in a non-invasive manner and is generally considered the current gold standard for cardiac imaging [1]. Measurement of ventricular volumes, muscle mass and function is based on determination of the left-ventricular endocardial and epicardial borders. Since manual border detection is laborious, automated segmentation is highly desirable as a fast, objective and reproducible alternative. Automated segmentation will thus enhance comparability between and within cardiac studies and increase accuracy by allowing acquisition of thinner MRI slices. This abstract demonstrates that statistical models of shape and appearance, namely the deformable Active Appearance Models, can successfully segment cardiac MRIs.

  7. Automated spectral imaging for clinical diagnostics

    Science.gov (United States)

    Breneman, John; Heffelfinger, David M.; Pettipiece, Ken; Tsai, Chris; Eden, Peter; Greene, Richard A.; Sorensen, Karen J.; Stubblebine, Will; Witney, Frank

    1998-04-01

    Bio-Rad Laboratories supplies imaging equipment for many applications in the life sciences. As part of our effort to offer more flexibility to the investigator, we are developing a microscope-based imaging spectrometer for the automated detection and analysis of either conventionally or fluorescently labeled samples. Immediate applications will include the use of fluorescence in situ hybridization (FISH) technology. The field of cytogenetics has benefited greatly from the increased sensitivity of FISH, which simplifies the analysis of complex chromosomal rearrangements. FISH-based identification lends itself to automation more readily than G-banding, the current cytogenetics industry standard, although the two methods are complementary. Several technologies have been demonstrated successfully for analyzing the signals from labeled samples, including filter exchanging and interferometry. The detection system lends itself to other fluorescence applications, including the display of labeled tissue sections, DNA chips, capillary electrophoresis or any other system using color as an event marker. Enhanced display of conventionally stained specimens will also be possible.

  8. Automated landmark-guided deformable image registration

    Science.gov (United States)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small-volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the subsequent Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultrafast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and on data from six head-and-neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity-corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.

  9. Automated Quality Assurance Applied to Mammographic Imaging

    Directory of Open Access Journals (Sweden)

    Anne Davis

    2002-07-01

    Quality control in mammography is based upon subjective interpretation of the image quality of a test phantom. In order to suppress subjectivity due to the human observer, automated computer analysis of the Leeds TOR(MAM) test phantom is investigated. Texture analysis via grey-level co-occurrence matrices is used to detect structures in the test object. Scoring of the substructures in the phantom is based on grey-level differences between regions and information from grey-level co-occurrence matrices. Results from scoring groups of particles within the phantom are presented.
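
    Texture analysis via grey-level co-occurrence matrices (GLCM) can be sketched as follows; this is a minimal illustration of the primitive, not the TOR(MAM) scoring itself. A GLCM counts how often grey level i occurs next to grey level j at a fixed pixel offset, and texture statistics such as contrast are derived from the normalized counts.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    # Grey-level co-occurrence matrix for one offset (dx, dy):
    # m[i, j] counts pixel pairs with values (i, j) at that offset.
    m = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

def contrast(m):
    # Haralick contrast: (i - j)^2 weighted by the normalized counts.
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * p).sum())
```

    A flat region yields zero contrast, while a fine checkerboard texture yields a high contrast value, which is the kind of difference a phantom-scoring scheme can exploit.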

  10. Automated vertebra identification in CT images

    Science.gov (United States)

    Ehm, Matthias; Klinder, Tobias; Kneser, Reinhard; Lorenz, Cristian

    2009-02-01

    In this paper, we describe and compare methods for automatically identifying individual vertebrae in arbitrary CT images. The identification is an essential precondition for a subsequent model-based segmentation, which is used in a wide field of orthopedic, neurological, and oncological applications, e.g., spinal biopsies or the insertion of pedicle screws. Since adjacent vertebrae show similar characteristics, automated labeling of the spine column is a very challenging task, especially if no surrounding reference structures can be taken into account. Furthermore, vertebra identification is complicated by the fact that many images are restricted to a very limited field of view and may contain only a few vertebrae. We propose and evaluate two methods for automatically labeling the spine column by evaluating similarities between given models and vertebral objects. In one method, object boundary information is taken into account by applying a Generalized Hough Transform (GHT) for each vertebral object. In the other method, appearance models containing mean gray value information are registered to each vertebral object using cross correlation and local correlation as similarity measures for the optimization function. The GHT is advantageous in terms of computational performance but falls short in identification rate. A correct labeling of the vertebral column was achieved on 93% of the test set, consisting of 63 disparate input images, using rigid image registration with local correlation as the similarity measure.

  11. Computerized Station For Semi-Automated Testing Image Intensifier Tubes

    OpenAIRE

    Chrzanowski Krzysztof

    2015-01-01

    Testing of image intensifier tubes is still done using mostly manual methods, due to a series of both technical and legal problems with test automation. Computerized stations for semi-automated testing of IITs are considered a novelty and are under continuous improvement. This paper presents a novel test station that enables semi-automated measurement of image intensifier tubes. Wide test capabilities and advanced design solutions raise the developed test station significantly above the curre...

  12. Automated object detection for astronomical images

    Science.gov (United States)

    Orellana, Sonny; Zhao, Lei; Boussalis, Helen; Liu, Charles; Rad, Khosrow; Dong, Jane

    2005-10-01

    Sponsored by the National Aeronautics and Space Administration (NASA), the Synergetic Education and Research in Enabling NASA-centered Academic Development of Engineers and Space Scientists (SERENADES) Laboratory was established at California State University, Los Angeles (CSULA). An important ongoing research activity in this lab is to develop easy-to-use image analysis software with the capability of automated object detection to facilitate astronomical research. This paper presents a fast object detection algorithm based on the characteristics of astronomical images. The algorithm consists of three steps. First, the foreground and background are separated using a histogram-based approach. Second, connectivity analysis is conducted to extract individual objects. The final step is post-processing, which refines the detection results. To improve detection accuracy when some objects are blocked by clouds, a top-hat transform is employed to split the sky into cloudy and non-cloudy regions. A multi-level thresholding algorithm is developed to select the optimal threshold for different regions. Experimental results show that the proposed approach can successfully detect objects blocked by clouds.
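
    The first step, histogram-based separation of foreground and background, can be sketched with Otsu's method. The paper does not name its exact thresholding criterion, so treat Otsu as an illustrative assumption: pick the threshold that maximizes the between-class variance of the grey-level histogram.

```python
import numpy as np

def otsu_threshold(img):
    # Histogram-based foreground/background separation: choose the
    # threshold t maximizing between-class variance w0*w1*(m0 - m1)^2.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

    The multi-level variant used for cloudy vs. non-cloudy regions applies the same idea with several thresholds per region.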

  13. An automated digital imaging system for environmental monitoring applications

    Science.gov (United States)

    Bogle, Rian; Velasco, Miguel; Vogel, John

    2013-01-01

    Recent improvements in the affordability and availability of high-resolution digital cameras, data loggers, embedded computers, and radio/cellular modems have advanced the development of sophisticated automated systems for remote imaging. Researchers have successfully placed and operated automated digital cameras in remote locations and in extremes of temperature and humidity, ranging from the islands of the South Pacific to the Mojave Desert and the Grand Canyon. With the integration of environmental sensors, these automated systems are able to respond to local conditions and modify their imaging regimes as needed. In this report we describe in detail the design of one type of automated imaging system developed by our group. It is easily replicated, low-cost, highly robust, and is a stand-alone automated camera designed to be placed in remote locations, without wireless connectivity.

  14. Image analysis and platform development for automated phenotyping in cytomics

    NARCIS (Netherlands)

    Yan, Kuan

    2013-01-01

    This thesis is dedicated to the empirical study of image analysis in HT/HC screening studies. Often an HT/HC screen produces data in amounts too extensive to be analyzed manually. Thus, an automated image analysis solution is a prerequisite for an objective understanding of the raw image data. Compared to general a

  15. Computerized Station For Semi-Automated Testing Image Intensifier Tubes

    Directory of Open Access Journals (Sweden)

    Chrzanowski Krzysztof

    2015-09-01

    Testing of image intensifier tubes is still done using mostly manual methods, due to a series of both technical and legal problems with test automation. Computerized stations for semi-automated testing of IITs are considered a novelty and are under continuous improvement. This paper presents a novel test station that enables semi-automated measurement of image intensifier tubes. Wide test capabilities and advanced design solutions raise the developed test station significantly above the current level of night vision metrology.

  16. Image segmentation for automated dental identification

    Science.gov (United States)

    Haj Said, Eyad; Nassar, Diaa Eldin M.; Ammar, Hany H.

    2006-02-01

    Dental features are among the few biometric identifiers that qualify for postmortem identification; therefore, the creation of an Automated Dental Identification System (ADIS), with goals and objectives similar to those of the Automated Fingerprint Identification System (AFIS), has received increased attention. As a part of ADIS, teeth segmentation from dental radiograph films is an essential step in the identification process. In this paper, we introduce a fully automated approach for teeth segmentation with the goal of extracting at least one tooth from the dental radiograph film. We evaluate our approach on theoretical and empirical bases, and we compare its performance with that of other approaches introduced in the literature. The results show that our approach exhibits the lowest failure rate and the highest optimality among all fully automated approaches introduced in the literature.

  17. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation.

    Science.gov (United States)

    Beijbom, Oscar; Edmunds, Peter J; Roelfsema, Chris; Smith, Jennifer; Kline, David I; Neal, Benjamin P; Dunlap, Matthew J; Moriarty, Vincent; Fan, Tung-Yung; Tan, Chih-Jui; Chan, Stephen; Treibitz, Tali; Gamst, Anthony; Mitchell, B Greg; Kriegman, David

    2015-01-01

    Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi- and fully-automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys. PMID:26154157

  18. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation.

    Directory of Open Access Journals (Sweden)

    Oscar Beijbom

    Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi- and fully-automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys.

  19. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation

    OpenAIRE

    Oscar Beijbom; Peter J. Edmunds; Chris Roelfsema; Jennifer Smith; David I. Kline; Benjamin P. Neal; Matthew J. Dunlap; Vincent Moriarty; Tung-Yung Fan; Chih-Jui Tan; Stephen Chan; Tali Treibitz; Anthony Gamst; B. Greg Mitchell; David Kriegman

    2015-01-01

    Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey-images capture...

  20. Automated identification of animal species in camera trap images

    NARCIS (Netherlands)

    Yu, X.; Wang, J.; Kays, R.; Jansen, P.A.; Wang, T.; Huang, T.

    2013-01-01

    Image sensors are increasingly being used in biodiversity monitoring, with each study generating many thousands or millions of pictures. Efficiently identifying the species captured by each image is a critical challenge for the advancement of this field. Here, we present an automated species identif

  1. Automated diabetic retinopathy imaging in Indian eyes: A pilot study

    Directory of Open Access Journals (Sweden)

    Rupak Roy

    2014-01-01

    Aim: To evaluate the efficacy of an automated retinal image grading system in diabetic retinopathy (DR) screening. Materials and Methods: Color fundus images of patients in a DR screening project were analyzed for the purpose of the study. For each eye, two sets of images were acquired, one centered on the disc and the other centered on the macula. All images were processed by automated DR screening software (Retmarker). The results were compared to ophthalmologist grading of the same set of photographs. Results: 5780 images of 1445 patients were analyzed. Patients were screened into two categories: DR or no DR. Image quality was high, medium and low in 71 (4.91%), 1117 (77.30%) and 257 (17.78%) patients, respectively. Specificity and sensitivity for detecting DR in the high-, medium- and low-quality groups were (0.59, 0.91), (0.11, 0.95) and (0.93, 0.14), respectively. Conclusion: The automated retinal image screening system for DR had a high sensitivity in high- and medium-quality images. Automated DR grading software holds promise for future screening programs.
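
    The (specificity, sensitivity) pairs reported per quality group come directly from the screening confusion matrix. A minimal helper makes the definitions explicit (function and argument names are ours, not the paper's):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity: fraction of eyes with DR that the system flags.
    # Specificity: fraction of eyes without DR that the system passes.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```

    For a screening program, high sensitivity matters most: a missed case of DR is costlier than a false referral, which is why the low-quality group's sensitivity of 0.14 is disqualifying despite its specificity of 0.93.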

  2. Automation of Cassini Support Imaging Uplink Command Development

    Science.gov (United States)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  3. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  4. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the AdaBoost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.
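
    The fuzzy-set primitive underlying a "membership degree" can be illustrated with a triangular membership function. The paper's actual membership construction is learned via AdaBoost and a BP network; this sketch only shows the basic idea of a graded, rather than binary, degree of belonging.

```python
def triangular_membership(x, a, b, c):
    # Degree in [0, 1] to which x belongs to a fuzzy set that peaks
    # at b and has support (a, c); 0 outside, 1 exactly at the peak.
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

    An image's score for an emotional category can then be a partial degree (say 0.5) rather than a hard yes/no label, which is what lets the annotation model ambiguity.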

  5. Automated image registration for FDOPA PET studies

    Science.gov (United States)

    Lin, Kang-Ping; Huang, Sung-Cheng; Yu, Dan-Chu; Melega, William; Barrio, Jorge R.; Phelps, Michael E.

    1996-12-01

    In this study, various image registration methods are investigated for their suitability for registration of L-6-[18F]-fluoro-DOPA (FDOPA) PET images. Five different optimization criteria including sum of absolute difference (SAD), mean square difference (MSD), cross-correlation coefficient (CC), standard deviation of pixel ratio (SDPR), and stochastic sign change (SSC) were implemented and Powell's algorithm was used to optimize the criteria. The optimization criteria were calculated either unidirectionally (i.e. only evaluating the criteria for comparing the resliced image 1 with the original image 2) or bidirectionally (i.e. averaging the criteria for comparing the resliced image 1 with the original image 2 and those for comparing the resliced image 2 with the original image 1). Monkey FDOPA images taken at various known orientations were used to evaluate the accuracy of different methods. A set of human FDOPA dynamic images was used to investigate the ability of the methods for correcting subject movement. It was found that a large improvement in performance resulted when bidirectional rather than unidirectional criteria were used. Overall, the SAD, MSD and SDPR methods were found to be comparable in performance and were suitable for registering FDOPA images. The MSD method gave more adequate results for frame-to-frame image registration for correcting subject movement during a dynamic FDOPA study. The utility of the registration method is further demonstrated by registering FDOPA images in monkeys before and after amphetamine injection to reveal more clearly the changes in spatial distribution of FDOPA due to the drug intervention.
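
    Three of the five optimization criteria have direct closed forms; a unidirectional numpy sketch follows (SDPR and SSC omitted, and per-pixel averaging used for SAD rather than a raw sum, an implementation choice of ours):

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences, averaged per pixel.
    return float(np.abs(a - b).mean())

def msd(a, b):
    # Mean square difference.
    return float(((a - b) ** 2).mean())

def cc(a, b):
    # Cross-correlation coefficient of the two images.
    a0, b0 = a - a.mean(), b - b.mean()
    return float((a0 * b0).sum() / np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum()))
```

    The bidirectional variant the study favors simply averages criterion(resliced image 1, image 2) with criterion(resliced image 2, image 1), which symmetrizes the cost that Powell's algorithm minimizes.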

  6. Automated Localization of Optic Disc in Retinal Images

    Directory of Open Access Journals (Sweden)

    Deepali A. Godse

    2013-03-01

    An efficient detection of the optic disc (OD) in colour retinal images is a significant task in an automated retinal image analysis system. Most of the algorithms developed for OD detection are applicable mainly to normal and healthy retinal images. It is a challenging task to detect the OD in all types of retinal images, that is, in normal, healthy images as well as in abnormal images affected by disease. This paper presents an automated system to locate the OD and its centre in all types of retinal images. The ensemble of steps based on different criteria produces more accurate results. The proposed algorithm gives excellent results and avoids false OD detection. The technique was developed and tested on standard databases available to researchers on the internet, Diaretdb0 (130 images), Diaretdb1 (89 images), Drive (40 images) and a local database (194 images). The local database images were collected from ophthalmic clinics. The system is able to locate the OD and its centre in 98.45% of all tested cases. The results achieved by different algorithms can be compared when the algorithms are applied to the same standard databases. This comparison is also discussed in this paper and shows that the proposed algorithm is more efficient.

  7. Automated image capture and defects detection by cavity inspection camera

    International Nuclear Information System (INIS)

    Defects such as pits and scars enhance the local electric/magnetic field and cause field emission and quenching in superconducting cavities. An inspection camera is used to find these defects, but the current system, operated by a human, is prone to file-naming mistakes and requires long acquisition times. This study aims to solve these problems by automating the cavity drive and the defect inspection. We used RS232C serial communication to drive the motor and camera for the automation of the inspection camera, and defect inspection software based on reference images of defects together with pattern-matching software built with the OpenCV library. The automation cut the acquisition time from 8 hours to 2 hours; the defect inspection software, however, is still under development. Its main remaining problem is the complexity of the image background. (author)
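
    Pattern matching against reference defect images can be sketched in plain numpy as normalized cross-correlation of a template slid over the image. This is a slow illustrative sketch of the idea; OpenCV's cv2.matchTemplate (e.g. with the TM_CCOEFF_NORMED mode) computes the same kind of score far faster.

```python
import numpy as np

def match_template(img, tmpl):
    # Score every valid offset with the normalized cross-correlation
    # coefficient; return the best-matching (row, col) and its score.
    th, tw = tmpl.shape
    t = tmpl.astype(float) - tmpl.mean()
    best, pos = -2.0, (0, 0)
    for y in range(img.shape[0] - th + 1):
        for x in range(img.shape[1] - tw + 1):
            w = img[y:y + th, x:x + tw].astype(float)
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            score = float((w * t).sum() / denom) if denom > 0 else 0.0
            if score > best:
                best, pos = score, (y, x)
    return pos, best
```

    A score near 1.0 at some offset indicates a region that matches a reference defect up to brightness and contrast, which is what makes the method usable against the varying cavity-surface background.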

  8. Automated morphometry of transgenic mouse brains in MR images

    NARCIS (Netherlands)

    Scheenstra, Alize Elske Hiltje

    2011-01-01

    Quantitative and local morphometry of mouse brain MRI is a relatively new field of research, where automated methods can be exploited to rapidly provide accurate and repeatable results. In this thesis we reviewed several existing methods and applications of quantitative morphometry to brain MR image

  9. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Fiehn, Anne-Marie Kanstrup; Kristensson, Martin; Engel, Ulla;

    2016-01-01

    PURPOSE: The aim of this study was to develop an automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...

  10. Automated radiopharmaceutical production systems for positron imaging

    International Nuclear Information System (INIS)

    This study provides information that will lead towards the widespread availability of systems for routine production of positron emitting isotopes and radiopharmaceuticals in a medical setting. The first part describes the collection, evaluation, and preparation in convenient form of the pertinent physical, engineering, and chemical data related to reaction yields and isotope production. The emphasis is on the production of the four short-lived isotopes C-11, N-13, O-15 and F-18. The second part is an assessment of radiation sources including cyclotrons, linear accelerators, and other more exotic devices. Various aspects of instrumentation including ease of installation, cost, and shielding are included. The third part of the study reviews the preparation of precursors and radiopharmaceuticals by automated chemical systems. 182 refs., 3 figs., 15 tabs

  11. An automated vessel segmentation of retinal images using multiscale vesselness

    International Nuclear Information System (INIS)

    The ocular fundus image can provide information on pathological changes caused by local ocular diseases and early signs of certain systemic diseases, such as diabetes and hypertension. Automated analysis and interpretation of fundus images has become a necessary and important diagnostic procedure in ophthalmology. The extraction of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. In this paper, we introduce an implementation of the anisotropic diffusion which allows reducing the noise and better preserving small structures like vessels in 2D images. A vessel detection filter, based on a multi-scale vesselness function, is then applied to enhance vascular structures.
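
    The anisotropic diffusion mentioned above is, in its classic Perona-Malik form, an iteration that smooths where gradients are small and leaves strong edges (such as vessel boundaries) intact. A minimal sketch, with periodic boundary handling via np.roll chosen for brevity (the paper's exact variant is not specified in the record):

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=20.0, lam=0.2):
    # Perona-Malik anisotropic diffusion: each pixel moves toward its
    # neighbours, scaled by the edge-stopping function
    # g(d) = exp(-(d/kappa)^2), so large jumps (edges) diffuse little.
    u = img.astype(float)
    for _ in range(n_iter):
        dn = np.roll(u, -1, 0) - u   # differences to the four neighbours
        ds = np.roll(u, 1, 0) - u    # (np.roll wraps: periodic boundaries)
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u = u + lam * sum(d * np.exp(-(d / kappa) ** 2)
                          for d in (dn, ds, de, dw))
    return u
```

    With lam <= 0.25 the scheme is stable, and because each pairwise flux is antisymmetric the total intensity is conserved; only its spatial distribution is smoothed.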

  12. Automated image-based tracking and its application in ecology.

    Science.gov (United States)

    Dell, Anthony I; Bender, John A; Branson, Kristin; Couzin, Iain D; de Polavieja, Gonzalo G; Noldus, Lucas P J J; Pérez-Escudero, Alfonso; Perona, Pietro; Straw, Andrew D; Wikelski, Martin; Brose, Ulrich

    2014-07-01

    The behavior of individuals determines the strength and outcome of ecological interactions, which drive population, community, and ecosystem organization. Bio-logging, such as telemetry and animal-borne imaging, provides essential individual viewpoints, tracks, and life histories, but requires capture of individuals and is often impractical to scale. Recent developments in automated image-based tracking offer opportunities to remotely quantify and understand individual behavior at scales and resolutions not previously possible, providing an essential supplement to other tracking methodologies in ecology. Automated image-based tracking should continue to advance the field of ecology by enabling better understanding of the linkages between individual and higher-level ecological processes, via high-throughput quantitative analysis of complex ecological patterns and processes across scales, including analysis of environmental drivers.

  13. Automated vasculature extraction from placenta images

    Science.gov (United States)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.

  14. Automated Pointing of Cardiac Imaging Catheters.

    Science.gov (United States)

    Loschak, Paul M; Brattain, Laura J; Howe, Robert D

    2013-12-31

    Intracardiac echocardiography (ICE) catheters enable high-quality ultrasound imaging within the heart, but their use in guiding procedures is limited due to the difficulty of manually pointing them at structures of interest. This paper presents the design and testing of a catheter steering model for robotic control of commercial ICE catheters. The four actuated degrees of freedom (4-DOF) are two catheter handle knobs to produce bi-directional bending in combination with rotation and translation of the handle. An extra degree of freedom in the system allows the imaging plane (dependent on orientation) to be directed at an object of interest. A closed form solution for forward and inverse kinematics enables control of the catheter tip position and the imaging plane orientation. The proposed algorithms were validated with a robotic test bed using electromagnetic sensor tracking of the catheter tip. The ability to automatically acquire imaging targets in the heart may improve the efficiency and effectiveness of intracardiac catheter interventions by allowing visualization of soft tissue structures that are not visible using standard fluoroscopic guidance. Although the system has been developed and tested for manipulating ICE catheters, the methods described here are applicable to any long thin tendon-driven tool (with single or bi-directional bending) requiring accurate tip position and orientation control.

  15. SAND: Automated VLBI imaging and analyzing pipeline

    Science.gov (United States)

    Zhang, Ming

    2016-05-01

    The Search And Non-Destroy (SAND) is a VLBI data reduction pipeline composed of a set of Python programs based on the AIPS interface provided by ObitTalk. It is designed for the massive data reduction of multi-epoch VLBI monitoring research. It can automatically investigate calibrated visibility data, search all the radio emissions above a given noise floor and do the model fitting either on the CLEANed image or directly on the uv data. It then digests the model-fitting results, intelligently identifies the multi-epoch jet component correspondence, and recognizes the linear or non-linear proper motion patterns. The outputs include a CLEANed image catalogue with polarization maps, an animation cube, proper motion fits, and core light curves. For uncalibrated data, a user can easily add inline modules to do the calibration and self-calibration in a batch for a specific array.
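
The linear proper-motion fitting that SAND performs on multi-epoch component positions reduces to an ordinary least-squares line fit; a minimal illustration with a hypothetical helper, not SAND's actual code:

```python
def fit_proper_motion(epochs, positions):
    """Least-squares linear fit x(t) = x0 + mu * t of a jet component's
    position against epoch; mu is the apparent proper motion."""
    n = len(epochs)
    t_mean = sum(epochs) / n
    x_mean = sum(positions) / n
    num = sum((t - t_mean) * (x - x_mean) for t, x in zip(epochs, positions))
    den = sum((t - t_mean) ** 2 for t in epochs)
    mu = num / den          # slope: position change per unit time
    x0 = x_mean - mu * t_mean  # intercept: extrapolated position at t = 0
    return x0, mu
```

A non-linear pattern would then show up as systematic residuals around this straight-line model.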

  16. Automated delineation of stroke lesions using brain CT images

    Directory of Open Access Journals (Sweden)

    Céline R. Gillebert

    2014-01-01

    Computed tomographic (CT) images are widely used for the identification of abnormal brain tissue following infarct and hemorrhage in stroke. Manual lesion delineation is currently the standard approach, but is both time-consuming and operator-dependent. To address these issues, we present a method that can automatically delineate infarct and hemorrhage in stroke CT images. The key elements of this method are the accurate normalization of CT images from stroke patients into template space and the subsequent voxelwise comparison with a group of control CT images for defining areas with hypo- or hyper-intense signals. Our validation, using simulated and actual lesions, shows that our approach is effective in reconstructing lesions resulting from both infarct and hemorrhage and yields lesion maps spatially consistent with those produced manually by expert operators. A limitation is that, relative to manual delineation, there is reduced sensitivity of the automated method in regions close to the ventricles and the brain contours. However, the automated method presents a number of benefits in terms of offering significant time savings and the elimination of the inter-operator differences inherent to manual tracing approaches. These factors are relevant for the creation of large-scale lesion databases for neuropsychological research. The automated delineation of stroke lesions from CT scans may also enable longitudinal studies to quantify changes in damaged tissue in an objective and reproducible manner.
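
The voxelwise comparison against a group of control images can be illustrated as a simple z-score map with a threshold; this is a simplified sketch, and the paper's actual statistical model may differ:

```python
import numpy as np

def lesion_zmap(patient, controls, z_thresh=3.0):
    """Voxelwise comparison of a spatially normalized patient volume
    against a stack of control volumes (controls[i] is one subject).
    Returns the z-map and a binary mask of hypo-/hyper-intense voxels."""
    mu = controls.mean(axis=0)
    sd = controls.std(axis=0, ddof=1) + 1e-6  # guard against divide-by-zero
    z = (patient - mu) / sd
    return z, np.abs(z) > z_thresh
```

Hypo-intense (infarct-like) voxels appear as strongly negative z, hyper-intense (hemorrhage-like) voxels as strongly positive z.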

  17. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour-intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect: the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information that is being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  18. Automated fetal spine detection in ultrasound images

    Science.gov (United States)

    Tolay, Paresh; Vajinepalli, Pallavi; Bhattacharya, Puranjoy; Firtion, Celine; Sisodia, Rajendra Singh

    2009-02-01

    In this paper, a novel method is proposed for the automatic detection of the fetal spine and its orientation in ultrasound images. This problem presents a variety of challenges, including robustness to speckle noise, variations in the visible shape of the spine due to the orientation of the ultrasound probe with respect to the fetus, and the lack of a proper edge enclosing the entire spine on account of its composition out of distinct vertebrae. The proposed method improves robustness and accuracy by making use of two independent techniques to estimate the spine, and then detects the exact location using a cross-correlation approach. Experimental results show that the proposed method is promising for fetal spine detection.

  19. Automated techniques for quality assurance of radiological image modalities

    Science.gov (United States)

    Goodenough, David J.; Atkins, Frank B.; Dyer, Stephen M.

    1991-05-01

    This paper will attempt to identify many of the important issues for quality assurance (QA) of radiological modalities. QA can, of course, span many aspects of the diagnostic decision-making process, ranging from physical image performance levels through to the radiologist's diagnostic decision. We will use as a model for automated approaches a program we have developed to work with computed tomography (CT) images. In an attempt to unburden the user, and in an effort to facilitate the performance of QA, we have been studying automated approaches. The ultimate utility of the system is its ability to render, in a safe and efficacious manner, decisions that are accurate, sensitive, specific and possible within the economic constraints of modern health care delivery.

  20. Automated Structure Detection in HRTEM Images: An Example with Graphene

    DEFF Research Database (Denmark)

    Kling, Jens; Vestergaard, Jacob Schack; Dahl, Anders Bjorholm;

    analysis. Single-layer graphene with its regular honeycomb lattice is a perfect model structure to apply automated structure detection. By utilizing Fourier analysis the initial perfect hexagonal structure can easily be recognized. The recorded hexagonal tessellation reflects the unperturbed structure...... challenging to interpret. In order to increase the signal-to-noise ratio of the images two routes can be pursued: 1) the exposure time can be increased; or 2) acquiring series of images and summarize them after alignment. Both methods have the disadvantage of summing images acquired over a certain period...... in the image. The centers of the C-hexagons are displayed as nodes. To segment the image into “pure” and “impure” regions, like areas with residual amorphous contamination or defects e.g. holes, a sliding window approach is used. The magnitude of the Fourier transformation within a window is compared...

  1. AUTOMATED IMAGE MATCHING WITH CODED POINTS IN STEREOVISION MEASUREMENT

    Institute of Scientific and Technical Information of China (English)

    Dong Mingli; Zhou Xiaogang; Zhu Lianqing; Lü Naiguang; Sun Yunan

    2005-01-01

    A coding-based method to solve image matching problems in stereovision measurement is presented. The solution is to append an identity ID to each retro-reflective point, so that it can be identified efficiently under complicated circumstances; the code has the characteristics of rotation, zooming, and deformation independence. Its design architecture and implementation process are described in detail, based on the theory of stereovision measurement. Experiments show the method is effective in reducing data processing time and improving the accuracy of image matching and the automation of the measuring system.

  2. An automated system for whole microscopic image acquisition and analysis.

    Science.gov (United States)

    Bueno, Gloria; Déniz, Oscar; Fernández-Carrobles, María Del Milagro; Vállez, Noelia; Salido, Jesús

    2014-09-01

    The field of anatomic pathology has experienced major changes over the last decade. Virtual microscopy (VM) systems have allowed experts in pathology and other biomedical areas to work in a safer and more collaborative way. VMs are automated systems capable of digitizing microscopic samples that were traditionally examined one by one. The possibility of having digital copies reduces the risk of damaging original samples, and also makes it easier to distribute copies among other pathologists. This article describes the development of an automated high-resolution whole slide imaging (WSI) system tailored to the needs and problems encountered in digital imaging for pathology, from hardware control to the full digitization of samples. The system has been built with an additional monochromatic digital camera alongside the default color camera and LED transmitted illumination (RGB). Monochrome cameras are the preferred method of acquisition for fluorescence microscopy. The system is able to digitize correctly and form large high-resolution microscope images for both brightfield and fluorescence. The quality of the digital images has been quantified using three metrics based on sharpness, contrast and focus. It has been validated on 150 tissue samples of brain autopsies, prostate biopsies and lung cytologies, at five magnifications: 2.5×, 10×, 20×, 40×, and 63×. The article is focused on the hardware set-up and the acquisition software, although results of the implemented image processing techniques included in the software and applied to the different tissue samples are also presented.
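
One common way to score the sharpness/focus of a digitized field is the variance of the Laplacian; this is an illustrative choice, since the three metrics actually used by the system are not specified here:

```python
import numpy as np

def laplacian_focus(img):
    """Variance-of-Laplacian focus score: in-focus fields have strong
    second derivatives, blurred fields do not (wrap-around borders)."""
    u = img.astype(float)
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    return lap.var()
```

During acquisition, such a score can drive autofocus by picking the z-position that maximizes it.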

  3. Usefulness of automated biopsy guns in image-guided biopsy

    International Nuclear Information System (INIS)

    To evaluate the usefulness of automated biopsy guns in image-guided biopsy of the lung, liver, pancreas and other organs, 160 biopsies of variable anatomic sites were performed using automated biopsy devices: under ultrasonographic (US) guidance in 95 and computed tomographic (CT) guidance in 65. We retrospectively analyzed histologic results and complications. Specimens were adequate for histopathologic diagnosis in 143 of the 160 patients (89.4%): diagnostic tissue was obtained in 130 (81.3%), suggestive tissue in 13 (8.1%), and non-diagnostic tissue in 14 (8.7%). Inadequate tissue was obtained in only 3 (1.9%). There was no statistically significant difference between US-guided and CT-guided percutaneous biopsy, and no significant complications occurred. We observed mild complications in only 5 patients: 2 hematuria and 2 hematochezia in transrectal prostatic biopsy, and 1 minimal pneumothorax in CT-guided percutaneous lung biopsy. All of them resolved spontaneously. Image-guided biopsy using the automated biopsy gun was a simple, safe and accurate method of obtaining adequate specimens for histopathologic diagnosis

  4. Automated retinal image analysis for diabetic retinopathy in telemedicine.

    Science.gov (United States)

    Sim, Dawn A; Keane, Pearse A; Tufail, Adnan; Egan, Catherine A; Aiello, Lloyd Paul; Silva, Paolo S

    2015-03-01

    There will be an estimated 552 million persons with diabetes globally by the year 2030. Over half of these individuals will develop diabetic retinopathy, representing a nearly insurmountable burden for providing diabetes eye care. Telemedicine programmes have the capability to distribute quality eye care to virtually any location and address the lack of access to ophthalmic services. In most programmes, there is currently a heavy reliance on specially trained retinal image graders, a resource in short supply worldwide. These factors necessitate an image grading automation process to increase the speed of retinal image evaluation while maintaining accuracy and cost effectiveness. Several automatic retinal image analysis systems designed for use in telemedicine have recently become commercially available. Such systems have the potential to substantially improve the manner by which diabetes eye care is delivered by providing automated real-time evaluation to expedite diagnosis and referral if required. Furthermore, integration with electronic medical records may allow a more accurate prognostication for individual patients and may provide predictive modelling of medical risk factors based on broad population data. PMID:25697773

  5. Automated 3D renal segmentation based on image partitioning

    Science.gov (United States)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still demands considerable user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients, true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of Hounsfield unit distribution in the scan, and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
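
The volume-based similarity measures mentioned (Dice and Jaccard coefficients) are straightforward to compute from a pair of binary masks:

```python
import numpy as np

def overlap_scores(seg, gold):
    """Dice and Jaccard coefficients between a segmentation mask and a
    gold-standard mask (both binary arrays of the same shape)."""
    seg, gold = seg.astype(bool), gold.astype(bool)
    inter = np.logical_and(seg, gold).sum()
    dice = 2.0 * inter / (seg.sum() + gold.sum())     # 2|A∩B| / (|A|+|B|)
    jaccard = inter / np.logical_or(seg, gold).sum()  # |A∩B| / |A∪B|
    return dice, jaccard
```

The two are monotonically related (J = D / (2 - D)), which is why papers often report only one of them.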

  6. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, Marlene; Rosenvinge, Flemming Schønning; Spillum, Erik;

    2015-01-01

    Background Antibiotics of the β-lactam group are able to alter the shape of the bacterial cell wall, e.g. causing filamentation or spheroplast formation. Early determination of antimicrobial susceptibility may be complicated by filamentation of bacteria as this can be falsely interpreted as growth...... in systems relying on colorimetry or turbidometry (such as Vitek-2, Phoenix, MicroScan WalkAway). The objective was to examine an automated image analysis algorithm for quantification of filamentous bacteria using the 3D digital microscopy imaging system, oCelloScope. Results Three E. coli strains displaying...... different resistance profiles and differences in filamentation kinetics were used to study a novel image analysis algorithm to quantify length of bacteria and bacterial filamentation. A total of 12 β-lactam antibiotics or β-lactam–β-lactamase inhibitor combinations were analyzed for their ability to induce...

  7. Automated localization of vertebra landmarks in MRI images

    Science.gov (United States)

    Pai, Akshay; Narasimhamurthy, Anand; Rao, V. S. Veeravasarapu; Vaidya, Vivek

    2011-03-01

    The identification of key landmark points in an MR spine image is an important step for tasks such as vertebra counting. In this paper, we propose a template matching based approach for automatic detection of two key landmark points, namely the second cervical vertebra (C2) and the sacrum, from sagittal MR images. The approach comprises an approximate localization of the vertebral column followed by matching with appropriate templates in order to detect/localize the landmarks. A straightforward extension of the work described here is an automated classification of spine section(s). It also serves as a useful building block for further automatic processing, such as extraction of regions of interest for subsequent image processing, and also in aiding the counting of vertebrae.
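
Template matching of the kind used to localize C2 and the sacrum is typically implemented as zero-mean normalized cross-correlation; a brute-force sketch (not the authors' implementation):

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return the (row, col) of the
    best zero-mean normalized cross-correlation score, plus the score."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    rows, cols = image.shape
    for r in range(rows - th + 1):
        for c in range(cols - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = tnorm * np.sqrt((wz * wz).sum())
            if denom == 0:
                continue  # constant window: correlation undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

In practice the double loop is replaced by an FFT-based correlation (e.g. `skimage.feature.match_template`), but the score being maximized is the same.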

  8. Automated blood vessel extraction using local features on retinal images

    Science.gov (United States)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relations between neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are not robust to image rotation, so the method was improved by also computing HLAC features on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features from 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and the four output values of the first ANN, a Gabor filter, a double-ring filter and a black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC features output high (white) values in blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. In our study, the AUC of ANN2 was 0.960. The result can be used for the quantitative analysis of the blood vessels.
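
The AUC evaluation used here can be computed directly from per-pixel scores and ground-truth labels via the Mann-Whitney U statistic, without constructing the ROC curve explicitly; a minimal sketch:

```python
def auc_score(scores, labels):
    """Area under the ROC curve as the probability that a randomly chosen
    vessel pixel (label 1) scores higher than a randomly chosen background
    pixel (label 0); ties count one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(n_pos * n_neg) form is fine for illustration; production code sorts once and uses ranks.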

  9. An automated 3D reconstruction method of UAV images

    Science.gov (United States)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.

  10. Automated fine structure image analysis method for discrimination of diabetic retinopathy stage using conjunctival microvasculature images

    Science.gov (United States)

    Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz

    2016-01-01

    The conjunctiva is a densely vascularized mucous membrane covering the sclera of the eye, with the unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least squares regression and Fisher linear discriminant analysis. The method discriminated between conjunctival images of non-diabetic subjects and diabetic subjects at different stages of DR. The automated method's discrimination rates were higher than those determined by human observers. The method allowed sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692
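
Fisher linear discriminant analysis, as used above, projects feature vectors onto the single direction that best separates two groups; a two-class sketch (illustrative, not the authors' code):

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Fisher discriminant direction for two groups of feature vectors
    (rows are samples): w ∝ Sw^-1 (m1 - m0), maximizing between-class
    relative to within-class scatter. Returned as a unit vector."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)
```

Classification then reduces to thresholding the scalar projection `x @ w`, e.g. at the midpoint of the projected group means.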

  11. Granulometric profiling of aeolian dust deposits by automated image analysis

    Science.gov (United States)

    Varga, György; Újvári, Gábor; Kovács, János; Jakab, Gergely; Kiss, Klaudia; Szalai, Zoltán

    2016-04-01

    Determination of granulometric parameters is of growing interest in the Earth sciences. Particle size data of sedimentary deposits provide insights into the physicochemical environment of transport, accumulation and post-depositional alterations of sedimentary particles, and are important proxies applied in paleoclimatic reconstructions. This is especially true for aeolian dust deposits with a fairly narrow grain size range as a consequence of the extremely selective nature of wind sediment transport. Therefore, various aspects of aeolian sedimentation (wind strength, distance to source(s), possible secondary source regions and modes of sedimentation and transport) can be reconstructed only from precise grain size data. As terrestrial wind-blown deposits are among the most important archives of past environmental changes, proper explanation of the proxy data is a mandatory issue. Automated imaging provides a unique technique to gather direct information on granulometric characteristics of sedimentary particles. Granulometric data obtained from automatic image analysis with the Malvern Morphologi G3-ID is a rarely applied new technique for particle size and shape analyses in sedimentary geology. Size and shape data of several hundred thousand (or even million) individual particles were automatically recorded in this study from 15 loess and paleosoil samples from the captured high-resolution images. Several size (e.g. circle-equivalent diameter, major axis, length, width, area) and shape parameters (e.g. elongation, circularity, convexity) were calculated by the instrument software. At the same time, the mean light intensity after transmission through each particle is automatically collected by the system as a proxy of the optical properties of the material. Intensity values are dependent on the chemical composition and/or thickness of the particles. The results of the automated imaging were compared to particle size data determined by three different laser diffraction instruments
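
Some of the size and shape parameters listed (circle-equivalent diameter, circularity, elongation) follow directly from basic particle measurements. The definitions below use common image-analysis conventions and may differ in detail from the instrument software's formulas:

```python
import math

def shape_parameters(area, perimeter, major_axis, minor_axis):
    """Common particle descriptors from basic measurements (consistent
    units assumed, e.g. area in um^2, lengths in um)."""
    ced = 2.0 * math.sqrt(area / math.pi)                 # circle-equivalent diameter
    circularity = 4.0 * math.pi * area / perimeter ** 2   # 1.0 for a perfect circle
    elongation = 1.0 - minor_axis / major_axis            # 0.0 for a perfect circle
    return ced, circularity, elongation
```

For a unit circle (area pi, perimeter 2*pi, both axes 2) this yields a diameter of 2, circularity 1 and elongation 0, which is a quick sanity check on the conventions.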

  12. Automated angiogenesis quantification through advanced image processing techniques.

    Science.gov (United States)

    Doukas, Charlampos N; Maglogiannis, Ilias; Chatziioannou, Aristotle; Papapetropoulos, Andreas

    2006-01-01

    Angiogenesis, the formation of blood vessels in tumors, is an interactive process between tumor, endothelial and stromal cells that creates a network for the oxygen and nutrient supply necessary for tumor growth. Accordingly, angiogenic activity is considered a suitable marker for detecting both tumor growth and inhibition. The angiogenic potential is usually estimated by counting the number of blood vessels in particular sections. One of the most popular assay tissues for studying the angiogenesis phenomenon is the developing chick embryo and its chorioallantoic membrane (CAM), which is a highly vascular structure lining the inner surface of the egg shell. The aim of this study was to develop and validate an automated image analysis method that would give an unbiased quantification of micro-vessel density and growth in angiogenic CAM images. The presented method has been validated by comparing automated results to manual counts over a series of digital chick embryo photos. The results indicate the high accuracy of the tool, which has thus been extensively used for tumor growth detection at different stages of embryonic development. PMID:17946107

  13. Automated Image Processing for the Analysis of DNA Repair Dynamics

    CERN Document Server

    Riess, Thorsten; Tomas, Martin; Ferrando-May, Elisa; Merhof, Dorit

    2011-01-01

    The efficient repair of cellular DNA is essential for the maintenance and inheritance of genomic information. In order to cope with the high frequency of spontaneous and induced DNA damage, a multitude of repair mechanisms have evolved. These are enabled by a wide range of protein factors specifically recognizing different types of lesions and finally restoring the normal DNA sequence. This work focuses on the repair factor XPC (xeroderma pigmentosum complementation group C), which identifies bulky DNA lesions and initiates their removal via the nucleotide excision repair pathway. The binding of XPC to damaged DNA can be visualized in living cells by following the accumulation of a fluorescent XPC fusion at lesions induced by laser microirradiation in a fluorescence microscope. In this work, an automated image processing pipeline is presented which allows the accumulation reaction to be identified and quantified without any user interaction. The image processing pipeline comprises a preprocessing stage where the ima...

  14. Automated segmentation of three-dimensional MR brain images

    Science.gov (United States)

    Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee

    2006-03-01

    Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, other organs, etc.), and spinal cord restoration. In pre-processing, we perform adaptive thresholding which takes into account the variable intensities of MR brain images corresponding to various image acquisition conditions. In the segmentation process, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, final 3D brain regions are obtained by applying an OR operation to the segmentation results of the three planes. Finally, we reconstruct the spinal cord truncated during the previous processes. Experiments are performed with fifteen 3D MR brain image sets with 8-bit gray-scale. Experimental results show that the proposed algorithm is fast, and provides robust and satisfactory results.

  15. An automated deformable image registration evaluation of confidence tool

    Science.gov (United States)

    Kirby, Neil; Chen, Josephine; Kim, Hojin; Morin, Olivier; Nie, Ke; Pouliot, Jean

    2016-04-01

    Deformable image registration (DIR) is a powerful tool for radiation oncology, but it can produce errors. Moreover, DIR accuracy is not a fixed quantity; it varies on a case-by-case basis. The purpose of this study is to explore the possibility of an automated program to create a patient- and voxel-specific evaluation of DIR accuracy. AUTODIRECT is a software tool that was developed to perform this evaluation for the application of a clinical DIR algorithm to a set of patient images. In brief, AUTODIRECT uses algorithms to generate deformations and applies them to these images (along with processing) to generate sets of test images, with known deformations that are similar to the actual ones and with realistic noise properties. The clinical DIR algorithm is applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student’s t distribution. In this study, four commercially available DIR algorithms were used to deform a dose distribution associated with a virtual pelvic phantom image set, and AUTODIRECT was used to generate dose uncertainty estimates for each deformation. The virtual phantom image set has a known ground-truth deformation, so the true dose-warping errors of the DIR algorithms were also known. AUTODIRECT predicted error patterns that closely matched the actual spatial distribution of errors. On average, AUTODIRECT overestimated the magnitude of the dose errors, but tuning the AUTODIRECT algorithms should improve agreement. This proof-of-principle test demonstrates the potential of the AUTODIRECT algorithm as an empirical method for predicting DIR errors.
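A per-voxel uncertainty estimate from a small number of known-deformation tests, based on a Student's t distribution as the abstract describes, might look like the sketch below. The function name, interface, and 95% confidence level are assumptions for illustration, not AUTODIRECT's actual API:

```python
import numpy as np
from scipy import stats

def voxel_uncertainty(test_errors, confidence=0.95):
    """Per-voxel upper confidence bound on registration error from a small
    set of known-deformation tests (here n=4, as in the abstract), using a
    Student's t interval. Interface and confidence level are illustrative."""
    errs = np.asarray(test_errors, dtype=float)   # shape (n_tests, n_voxels)
    n = errs.shape[0]
    mean = errs.mean(axis=0)
    sem = errs.std(axis=0, ddof=1) / np.sqrt(n)   # standard error of the mean
    tcrit = stats.t.ppf(0.5 + confidence / 2, df=n - 1)
    return mean + tcrit * sem

# Four test registrations, two voxels (invented numbers):
errors = np.array([[1.0, 0.2], [1.2, 0.1], [0.8, 0.3], [1.0, 0.2]])
print(voxel_uncertainty(errors).round(2))  # → [1.26 0.33]
```

With only four tests the t critical value (about 3.18 at df = 3) is much larger than the normal 1.96, which is exactly why a t distribution rather than a normal approximation is appropriate here.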

  16. Scanning probe image wizard: A toolbox for automated scanning probe microscopy data analysis

    Science.gov (United States)

    Stirling, Julian; Woolley, Richard A. J.; Moriarty, Philip

    2013-11-01

    We describe SPIW (scanning probe image wizard), a new image processing toolbox for SPM (scanning probe microscope) images. SPIW can be used to automate many aspects of SPM data analysis, even for images with surface contamination and step edges present. Specialised routines are available for images with atomic or molecular resolution to improve image visualisation and generate statistical data on surface structure.

  17. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Full Text Available Abstract Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. However, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be run independently in sequence from a graphical user interface, with readily adjustable algorithm parameters, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy of extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links

  18. Automated in situ brain imaging for mapping the Drosophila connectome.

    Science.gov (United States)

    Lin, Chi-Wen; Lin, Hsuan-Wen; Chiu, Mei-Tzu; Shih, Yung-Hsin; Wang, Ting-Yuan; Chang, Hsiu-Ming; Chiang, Ann-Shyn

    2015-01-01

    Mapping the connectome, a wiring diagram of the entire brain, requires large-scale imaging of numerous single neurons with diverse morphology. It is a formidable challenge to reassemble these neurons into a virtual brain and correlate their structural networks with neuronal activities, which are measured in different experiments to analyze the informational flow in the brain. Here, we report an in situ brain imaging technique called Fly Head Array Slice Tomography (FHAST), which permits the reconstruction of structural and functional data to generate an integrative connectome in Drosophila. Using FHAST, the head capsules of an array of flies can be opened with a single vibratome sectioning to expose the brains, replacing the painstaking and inconsistent brain dissection process. FHAST can reveal in situ brain neuroanatomy with minimal distortion to neuronal morphology and maintain intact neuronal connections to peripheral sensory organs. Most importantly, it enables the automated 3D imaging of 100 intact fly brains in each experiment. The established head model with in situ brain neuroanatomy allows functional data to be accurately registered and associated with 3D images of single neurons. These integrative data can then be shared, searched, visualized, and analyzed for understanding how brain-wide activities in different neurons within the same circuit function together to control complex behaviors.

  19. Automated image analysis for space debris identification and astrometric measurements

    Science.gov (United States)

    Piattoni, Jacopo; Ceruti, Alessandro; Piergentili, Fabrizio

    2014-10-01

    Space debris is a challenging problem for human activity in space. Observation campaigns are conducted around the globe to detect and track uncontrolled space objects. One of the main problems in optical observation is obtaining useful information about the debris dynamical state from the collected images. For orbit determination, the most relevant information embedded in optical observation is the precise angular position, which can be evaluated by astrometry procedures, comparing the stars inside the image with star catalogs. This is typically a time-consuming process if done by a human operator, which makes the task impractical when dealing with large amounts of data, on the order of thousands of images per night, generated by routinely conducted observations. This paper investigates an automated procedure capable of recognizing the debris track inside a picture, calculating the celestial coordinates of the image's center, and using this information to compute the debris angular position in the sky. This procedure has been implemented in a software code that does not require human interaction and works without any supplemental information besides the image itself, detecting space objects and solving for their angular position without a priori information. The algorithm for object detection was developed within the research team. For the star field computation, the software code astrometry.net, released under the GPL v2 license, was used. The complete procedure was validated by extensive testing, using the images obtained in the observation campaign performed in a joint project between the Italian Space Agency (ASI) and the University of Bologna at the Broglio Space Center, Kenya.

  20. Automated Recognition of 3D Features in GPIR Images

    Science.gov (United States)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a
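The object-linking rule described in this abstract, chaining features across successive 2D slices when they fall within a threshold radius, might look like the following sketch. The data layout (per-slice centroid arrays) and the helper name are invented, and the handling of newly appearing features is omitted:

```python
import numpy as np

def link_features(slices, radius=5.0):
    """Link 2D feature detections across consecutive slices: each chain is
    extended to the nearest feature in the next slice within `radius` pixels,
    mirroring the object-linking step described above (illustrative sketch).
    `slices` is a list of (N_i, 2) arrays of feature centroids per slice."""
    chains = [[(0, i)] for i in range(len(slices[0]))]
    for z in range(1, len(slices)):
        prev_pts = slices[z - 1]
        for chain in chains:
            pz, pi = chain[-1]
            if pz != z - 1:
                continue  # chain already terminated in an earlier slice
            d = np.linalg.norm(slices[z] - prev_pts[pi], axis=1)
            j = int(np.argmin(d))
            if d[j] <= radius:
                chain.append((z, j))
        # starting new chains for unmatched features is omitted for brevity
    return chains

# A pipe cross-section drifting slowly across three slices:
slices = [np.array([[10.0, 10.0]]), np.array([[12.0, 11.0]]),
          np.array([[13.0, 13.0]])]
print(link_features(slices))  # → [[(0, 0), (1, 0), (2, 0)]]
```

Each chain of linked 2D detections is then a candidate 3D object for the reconstruction stage.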

  1. Automated detection of open magnetic field regions in EUV images

    Science.gov (United States)

    Krista, Larisza Diana; Reinard, Alysha

    2016-05-01

    Open magnetic regions on the Sun are either long-lived (coronal holes) or transient (dimmings) in nature, but both appear as dark regions in EUV images. For this reason their detection can be done in a similar way. As coronal holes are often large and long-lived in comparison to dimmings, their detection is more straightforward. The Coronal Hole Automated Recognition and Monitoring (CHARM) algorithm detects coronal holes using EUV images and a magnetogram. The EUV images are used to identify dark regions, and the magnetogram allows us to determine if the dark region is unipolar – a characteristic of coronal holes. There is no temporal sensitivity in this process, since coronal hole lifetimes span days to months. Dimming regions, however, emerge and disappear within hours. Hence, the time and location of a dimming emergence need to be known to successfully identify them and distinguish them from regular coronal holes. Currently, the Coronal Dimming Tracker (CoDiT) algorithm is semi-automated – it requires the dimming emergence time and location as an input. With those inputs we can identify the dimming and track it through its lifetime. CoDiT has also been developed to allow the tracking of dimmings that split or merge – a typical feature of dimmings. The advantage of these particular algorithms is their ability to adapt to detecting different types of open field regions. For coronal hole detection, each full-disk solar image is processed individually to determine a threshold for the image; hence, we are not limited to a single pre-determined threshold. For dimming regions we also allow individual thresholds for each dimming, as they can differ substantially. This flexibility is necessary for a subjective analysis of the studied regions. These algorithms were developed with the goal of allowing us to better understand the processes that give rise to eruptive and non-eruptive open field regions. We aim to study how these regions evolve over time and what environmental
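The unipolarity test that CHARM applies to candidate dark regions could be implemented along these lines. The polarity-imbalance metric and its 0.6 cutoff are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def is_unipolar(b_los, dark_mask, threshold=0.6):
    """Decide whether a candidate dark EUV region is unipolar from the
    co-aligned line-of-sight magnetogram, as in the coronal-hole test above.
    The imbalance metric and the 0.6 cutoff are illustrative assumptions."""
    field = b_los[dark_mask]
    pos = (field > 0).sum()
    neg = (field < 0).sum()
    if pos + neg == 0:
        return False
    imbalance = abs(pos - neg) / (pos + neg)  # 0 = balanced, 1 = one polarity
    return bool(imbalance >= threshold)

mag = np.array([[5.0, 4.0, -1.0], [6.0, 3.0, 2.0]])   # toy magnetogram (G)
mask = np.ones_like(mag, dtype=bool)                  # candidate dark region
print(is_unipolar(mag, mask))  # → True (5 of 6 pixels positive)
```

A dimming candidate failing this test would be rejected as a mixed-polarity dark feature (for example, a filament channel) rather than an open field region.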

  2. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation.

    Science.gov (United States)

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A; Yuan, Jie; Wang, Xueding; Carson, Paul L

    2016-02-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.
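The overlap ratio used above for the quantitative tissue-volume comparison is commonly computed as intersection over union; the sketch below assumes the metric takes that form:

```python
import numpy as np

def overlap_ratio(auto_mask, manual_mask):
    """Volume overlap between automated and manual segmentations of one
    tissue class (intersection over union; the abstract's 'overlap ratio'
    is assumed to be of this form)."""
    inter = np.logical_and(auto_mask, manual_mask).sum()
    union = np.logical_or(auto_mask, manual_mask).sum()
    return inter / union if union else 1.0

# Two masks of 12 voxels each, overlapping in 8:
a = np.zeros((4, 4), dtype=bool); a[:, :3] = True
m = np.zeros((4, 4), dtype=bool); m[:, 1:] = True
print(round(overlap_ratio(a, m), 3))  # → 0.5
```

Averaging this per-tissue score over the 21 cases yields a summary similarity figure like the 74.54% reported.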

  3. Automated movement correction for dynamic PET/CT images: Evaluation with phantom and patient data

    OpenAIRE

    Ye, H.; Wong, KP; Wardak, M; Dahlbom, M.; Kepe, V; Barrio, JR; Nelson, LD; Small, GW; Huang, SC

    2014-01-01

    Head movement during dynamic brain PET/CT imaging results in mismatch between the CT and dynamic PET images. It can cause artifacts in CT-based attenuation-corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registered the CT image to each dynamic PET frame, then re-reconstructed th...

  4. Automated Image Retrieval of Chest CT Images Based on Local Grey Scale Invariant Features.

    Science.gov (United States)

    Arrais Porto, Marcelo; Cordeiro d'Ornellas, Marcos

    2015-01-01

    Text-based tools are regularly employed to retrieve medical images for reading and interpretation in current Picture Archiving and Communication Systems (PACS), but they have some drawbacks. All-purpose content-based image retrieval (CBIR) systems are limited when dealing with medical images and do not fit well into PACS workflow and clinical practice. This paper presents an automated image retrieval approach for chest CT images based on local grey-scale invariant features from a local database. Performance was measured in terms of precision and recall, average retrieval precision (ARP), and average retrieval rate (ARR). Preliminary results have shown the effectiveness of the proposed approach. The prototype is also a useful tool for radiology research and education, providing valuable information to the medical and broader healthcare community. PMID:26262345
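Precision and recall for a single retrieval query, the basis of the ARP and ARR figures above, can be computed as follows (the helper name and toy IDs are invented):

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query. Average retrieval precision (ARP)
    is then the mean precision over a set of queries (illustrative helper)."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# 4 images retrieved, 3 relevant in the database, 2 of them found:
p, r = precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 4, 9])
print(p, r)  # precision 0.5, recall 2/3
```

Precision penalizes false matches in the result list, while recall penalizes relevant images the system failed to return; a retrieval system is usually tuned on the trade-off between the two.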

  5. Automated Nanofiber Diameter Measurement in SEM Images Using a Robust Image Analysis Method

    Directory of Open Access Journals (Sweden)

    Ertan Öznergiz

    2014-01-01

    Full Text Available Owing to their high surface area, porosity, and rigidity, nanofibers and nanosurfaces have found growing application in recent years. Nanofibers and nanosurfaces are typically produced by the electrospinning method. In the production process, determination of the average fiber diameter is crucial for quality assessment. The average fiber diameter is determined by manually measuring the diameters of randomly selected fibers in scanning electron microscopy (SEM) images. However, as the number of images increases, manual fiber diameter determination becomes a tedious and time-consuming task that is also sensitive to human error. Therefore, an automated fiber diameter measurement system is desired. In the literature, this task is achieved by using image analysis algorithms. Typically, these methods first isolate each fiber in the image and measure the diameter of each isolated fiber. Fiber isolation is an error-prone process. In this study, automated calculation of nanofiber diameter is achieved without fiber isolation, using image processing and analysis algorithms. The performance of the proposed method was tested on real data. Its effectiveness is shown by comparing automatically and manually measured nanofiber diameter values.
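One common way to estimate fiber diameter without isolating individual fibers uses the Euclidean distance transform, whose ridge (centreline) values equal half the local fiber width. This is offered as an illustrative stand-in for the paper's method, not its exact algorithm:

```python
import numpy as np
from scipy import ndimage

def mean_fiber_diameter(binary_img):
    """Estimate mean fiber diameter without isolating individual fibers:
    the distance transform peaks along fiber centrelines at half the local
    width, so twice the mean ridge value approximates the mean diameter.
    An isolation-free approach sketched here as an illustrative stand-in."""
    dist = ndimage.distance_transform_edt(binary_img)
    # Keep centreline pixels: local maxima of the distance map inside fibers.
    ridge = (dist == ndimage.maximum_filter(dist, size=3)) & binary_img
    return 2.0 * dist[ridge].mean()

# Synthetic SEM-like image: one horizontal fiber 5 px wide.
img = np.zeros((20, 20), dtype=bool)
img[8:13, :] = True
print(mean_fiber_diameter(img))  # → 6.0 (discrete EDT measures to the first
                                 #   background pixel: centreline value is 3)
```

The discrete distance transform overestimates by about one pixel, a bias that a real pipeline would calibrate out; the key point is that no per-fiber isolation step is required.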

  6. Automated Detection of Firearms and Knives in a CCTV Image.

    Science.gov (United States)

    Grega, Michał; Matiolański, Andrzej; Guzik, Piotr; Leszczuk, Mikołaj

    2016-01-01

    Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  7. Automated Detection of Firearms and Knives in a CCTV Image

    Directory of Open Access Journals (Sweden)

    Michał Grega

    2016-01-01

    Full Text Available Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  8. Application of automated image analysis to coal petrography

    Science.gov (United States)

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

    The coal petrologist seeks to determine the petrographic characteristics of organic and inorganic coal constituents and their lateral and vertical variations within a single coal bed or different coal beds of a particular coal field. Definitive descriptions of coal characteristics and coal facies provide the basis for interpretation of depositional environments, diagenetic changes, and burial history and determination of the degree of coalification or metamorphism. Numerous coal core or columnar samples must be studied in detail in order to adequately describe and define coal microlithotypes, lithotypes, and lithologic facies and their variations. The large amount of petrographic information required can be obtained rapidly and quantitatively by use of an automated image-analysis system (AIAS). An AIAS can be used to generate quantitative megascopic and microscopic modal analyses for the lithologic units of an entire columnar section of a coal bed. In our scheme for megascopic analysis, distinctive bands 2 mm or more thick are first demarcated by visual inspection. These bands consist of either nearly pure microlithotypes or lithotypes such as vitrite/vitrain or fusite/fusain, or assemblages of microlithotypes. Megascopic analysis with the aid of the AIAS is next performed to determine volume percentages of vitrite, inertite, minerals, and microlithotype mixtures in bands 0.5 to 2 mm thick. The microlithotype mixtures are analyzed microscopically by use of the AIAS to determine their modal composition in terms of maceral and optically observable mineral components. Megascopic and microscopic data are combined to describe the coal unit quantitatively in terms of (V) for vitrite, (E) for liptite, (I) for inertite or fusite, (M) for mineral components other than iron sulfide, (S) for iron sulfide, and (VEIM) for the composition of the mixed phases (Xi), i = 1, 2, etc., in terms of the maceral groups vitrinite V, exinite E, inertinite I, and optically observable mineral

  9. Automated image analysis of atomic force microscopy images of rotavirus particles

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, S. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Allison, D.P. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Biochemistry, Cellular, and Molecular Biology, University of Tennessee, Knoxville, TN 37996 (United States); Molecular Imaging Inc. Tempe, AZ, 85282 (United States); Qi, H. [Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Morrell-Falvey, J.L. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Kallewaard, N.L. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Crowe, J.E. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Doktycz, M.J. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]. E-mail: doktyczmj@ornl.gov

    2006-06-15

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM.
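The kind of automated per-particle measurement described in this abstract can be sketched with connected-component labelling of a thresholded height map. The threshold, names, and toy image below are invented for illustration, not the published routine:

```python
import numpy as np
from scipy import ndimage

def particle_heights(height_map, threshold):
    """Label connected particles in an AFM height image and return each
    particle's maximum height, the kind of per-particle dimensional data
    the automated routine extracts (simplified, illustrative pipeline)."""
    mask = height_map > threshold
    labels, n = ndimage.label(mask)
    return [float(height_map[labels == i].max()) for i in range(1, n + 1)]

# Toy height map (nm) with two particles on a flat background:
img = np.zeros((8, 8))
img[1:3, 1:3] = 4.0   # particle 1
img[5:7, 4:7] = 7.0   # particle 2
print(particle_heights(img, threshold=1.0))  # → [4.0, 7.0]
```

Collecting such per-particle values over a large image set is what enables the statistical analyses of particle dimensions mentioned above.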

  10. Use of automated image registration to generate mean brain SPECT image of Alzheimer's patients

    International Nuclear Information System (INIS)

    The purpose of this study was to compute and compare the group mean HMPAO brain SPECT images of patients with senile dementia of Alzheimer's type (SDAT) and age matched control subjects after transformation of the individual images to a standard size and shape. Ten patients with Alzheimer's disease (age 71.6±5.0 yr) and ten age matched normal subjects (age 71.0±6.1 yr) participated in this study. Tc-99m HMPAO brain SPECT and X-ray CT scans were acquired for each subject. SPECT images were normalized to an average activity of 100 counts/pixel. Individual brain images were transformed to a standard size and shape with the help of Automated Image Registration (AIR). Realigned brain SPECT images of both groups were used to generate mean and standard deviation images by arithmetic operations on voxel based numerical values. Mean images of both groups were compared by applying the unpaired t-test on a voxel by voxel basis to generate three dimensional T-maps. X-ray CT images of individual subjects were evaluated by means of a computer program for brain atrophy. A significant decrease in relative radioisotope (RI) uptake was present in the bilateral superior and inferior parietal lobules (p<0.05), bilateral inferior temporal gyri, and the bilateral superior and middle frontal gyri (p<0.001). The mean brain atrophy indices for patients and normal subjects were 0.853±0.042 and 0.933±0.017 respectively, the difference being statistically significant (p<0.001). The use of a brain image standardization procedure increases the accuracy of voxel based group comparisons. Thus, intersubject averaging enhances the capacity for detection of abnormalities in functional brain images by minimizing the influence of individual variation. (author)
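The voxel-by-voxel unpaired t-test used above to generate three-dimensional T-maps can be sketched as follows. The group sizes of ten match the study, but the data are simulated and the helper name is invented:

```python
import numpy as np
from scipy import stats

def voxelwise_tmap(group_a, group_b):
    """Unpaired t-test at every voxel between two groups of spatially
    normalized images, producing T- and p-maps as described above.
    Arrays are (n_subjects, *volume_shape); names are illustrative."""
    t, p = stats.ttest_ind(group_a, group_b, axis=0)
    return t, p

rng = np.random.default_rng(0)
controls = rng.normal(100, 5, size=(10, 4, 4))   # count-normalized uptake
patients = rng.normal(100, 5, size=(10, 4, 4))
patients[:, 0, 0] -= 20                          # simulated hypoperfusion
t, p = voxelwise_tmap(controls, patients)
print(bool(p[0, 0] < 0.001))  # True: the induced deficit is detected
```

Because every subject's volume has first been warped to a standard size and shape, a given voxel index addresses roughly the same anatomy in all subjects, which is what makes this voxelwise comparison valid.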

  11. Imaging Automation and Volume Tomographic Visualization at Texas Neutron Imaging Facility

    International Nuclear Information System (INIS)

    A thermal neutron imaging facility for real-time neutron radiography and computed tomography has been developed at the University of Texas reactor. The facility produced good-quality radiographs and two-dimensional tomograms. Further developments have been recently accomplished. A computer software has been developed to automate and expedite the data acquisition and reconstruction processes. Volume tomographic visualization using Interactive Data Language (IDL) software has been demonstrated and will be further developed. Volume tomography provides the additional flexibility of producing slices of the object using software and thus avoids redoing the measurements

  12. Imaging automation and volume tomographic visualization at Texas Neutron Imaging Facility

    International Nuclear Information System (INIS)

    A thermal neutron imaging facility for real-time neutron radiography and computed tomography has been developed at the University of Texas reactor. The facility produced a good-quality radiographs and two-dimensional tomograms. Further developments have been recently accomplished. Further developments have been recently accomplished. A computer software has been developed to automate and expedite the data acquisition and reconstruction processes. Volume tomographic visualization using Interactive Data Language (IDL) software has been demonstrated and will be further developed. Volume tomography provides the additional flexibility of producing slices of the object using software and thus avoids redoing the measurements

  13. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    Science.gov (United States)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
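Of the twelve algorithms, the Ridler (isodata) clustering method is easy to sketch: the threshold is iterated to the midpoint of the two class means until it stabilizes. The tolerance and the toy uptake histogram below are illustrative:

```python
import numpy as np

def ridler_threshold(image, tol=0.5):
    """Ridler-Calvard (isodata) clustering threshold, one of the 12
    algorithms evaluated above: move the threshold to the midpoint of the
    foreground and background means and repeat until it converges."""
    t = image.mean()
    while True:
        fg, bg = image[image > t], image[image <= t]
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Two well-separated uptake levels: background ~10, hot sphere ~100.
img = np.concatenate([np.full(900, 10.0), np.full(100, 100.0)])
print(ridler_threshold(img))  # → 55.0, midway between the class means
```

Unlike a fixed 42%-of-maximum cut, the threshold here is derived from each image's own intensity distribution, which is the property the abstract credits for the improved results.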

  14. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    International Nuclear Information System (INIS)

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools. (paper)

  15. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images.

    Science.gov (United States)

    Kim, Kwang-Min; Son, Kilho; Palmore, G Tayhas R

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing a Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA, which is based on vector representation, is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337
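The Laplacian-of-Gaussian stage of such a pipeline can be sketched with SciPy. This is a generic illustration (the graphical-model stages of NIA are omitted, and the function and variable names are invented):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_response(image, sigma=2.0):
    """Negated Laplacian-of-Gaussian response: bright neuronal
    structures on a dark background yield strongly positive values."""
    return -gaussian_laplace(image.astype(float), sigma=sigma)

# Low-SNR test image: Gaussian noise plus one small bright blob
# (a stand-in for a soma)
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (64, 64))
img[30:34, 30:34] += 20.0

resp = log_response(img)
peak = np.unravel_index(np.argmax(resp), resp.shape)  # lands on the blob
```

The band-pass character of the LoG filter is what suppresses both the pixel-level noise and the slowly varying background that defeat simple intensity tracing.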

  16. AMIsurvey, chimenea and other tools: Automated imaging for transient surveys with existing radio-observatories

    CERN Document Server

    Staley, Tim D

    2015-01-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, making use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. These packages...

  17. Quantization of polyphenolic compounds in histological sections of grape berries by automated color image analysis

    Science.gov (United States)

    Clement, Alain; Vigouroux, Bertnand

    2003-04-01

    We present new results in applied color image analysis that demonstrate the significant influence of soil on the localization and appearance of polyphenols in grapes. These results have been obtained with a new unsupervised classification algorithm based on hierarchical analysis of color histograms. The process is automated thanks to a software platform we developed specifically for color image analysis and its applications.
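A minimal sketch of unsupervised colour classification by clustering the occupied bins of a colour histogram, as a stand-in for the hierarchical method described above (function names and the toy image are invented):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def dominant_color_classes(image, n_classes=2):
    """Cluster the colours of an RGB image hierarchically and return a
    per-pixel class map."""
    pixels = image.reshape(-1, 3).astype(float)
    # Work on the unique colours (the non-empty histogram bins) so the
    # linkage matrix stays small.
    colors, inverse = np.unique(pixels, axis=0, return_inverse=True)
    z = linkage(colors, method="ward")
    labels = fcluster(z, t=n_classes, criterion="maxclust")
    return labels[inverse].reshape(image.shape[:2])

# Two flat colour regions: red-ish left half, green-ish right half
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[:, :4] = (200, 30, 30)
img[:, 4:] = (30, 200, 30)
classes = dominant_color_classes(img)
```

Real histological images would need more classes and a perceptually uniform colour space, but the structure of the computation is the same.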

  18. Knowledge Acquisition, Validation, and Maintenance in a Planning System for Automated Image Processing

    Science.gov (United States)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.

  19. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features detec...... of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches....
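The Delaunay-consistency idea can be illustrated with SciPy: matched feature points in two images should produce the same triangulation (under the correspondence), because Delaunay triangulations are invariant under similarity transforms. This is a toy sketch with invented points, not the authors' Matlab implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulation_signature(points):
    """Set of (sorted) vertex-index triples of the Delaunay
    triangulation; matched point sets with equal signatures have
    isomorphic triangulations under the matching."""
    return {tuple(sorted(s)) for s in Delaunay(points).simplices}

# Reference-image feature points (generic positions, no 4 cocircular)
ref = np.array([[0.0, 0.0], [10.0, 1.0], [1.0, 9.0], [11.0, 11.0], [5.0, 4.0]])
# The "sensed" image sees the same features under scale + translation
sensed = ref * 1.3 + np.array([2.0, -1.0])

same = triangulation_signature(ref) == triangulation_signature(sensed)  # True
```

A mismatch in the signatures flags feature correspondences that distort the local geometry, which is how the triangulation can guard the quality of the matching.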

  20. Automated Micro-Object Detection for Mobile Diagnostics Using Lens-Free Imaging Technology

    Directory of Open Access Journals (Sweden)

    Mohendra Roy

    2016-05-01

    Full Text Available Lens-free imaging technology has been extensively used recently for microparticle and biological cell analysis because of its high throughput, low cost, and simple and compact arrangement. However, this technology still lacks a dedicated and automated detection system. In this paper, we describe a custom-developed automated micro-object detection method for a lens-free imaging system. In our previous work (Roy et al.), we developed a lens-free imaging system using low-cost components. This system was used to generate and capture the diffraction patterns of micro-objects and a global threshold was used to locate the diffraction patterns. In this work we used the same setup to develop an improved automated detection and analysis algorithm based on adaptive threshold and clustering of signals. For this purpose, images from the lens-free system were then used to understand the features and characteristics of the diffraction patterns of several types of samples. On the basis of this information, we custom-developed an automated algorithm for the lens-free imaging system. Next, all the lens-free images were processed using this custom-developed automated algorithm. The performance of this approach was evaluated by comparing the counting results with standard optical microscope results. We evaluated the counting results for polystyrene microbeads, red blood cells, and HepG2, HeLa, and MCF7 cells. The comparison shows good agreement between the systems, with a correlation coefficient of 0.91 and linearity slope of 0.877. We also evaluated the automated size profiles of the microparticle samples. This Wi-Fi-enabled lens-free imaging system, along with the dedicated software, possesses great potential for telemedicine applications in resource-limited settings.
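The adaptive-threshold-plus-clustering step can be sketched as follows. The window size, offset, and synthetic "diffraction patterns" are invented, and connected-component labelling stands in for the paper's clustering of signals:

```python
import numpy as np
from scipy.ndimage import label, uniform_filter

def count_patterns(image, window=15, offset=5.0):
    """Adaptive (local-mean) threshold followed by grouping of the
    binary response into connected components."""
    local_mean = uniform_filter(image.astype(float), size=window)
    binary = image > local_mean + offset
    _, n = label(binary)
    return n

# Noisy background with three bright diffraction-pattern stand-ins
rng = np.random.default_rng(1)
img = rng.normal(50.0, 1.0, (100, 100))
for r, c in [(20, 20), (50, 70), (80, 30)]:
    img[r - 2:r + 3, c - 2:c + 3] += 40.0

n = count_patterns(img)   # → 3
```

The local-mean threshold is what lets the detector tolerate the uneven illumination that defeats a single global threshold.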

  1. Automated Micro-Object Detection for Mobile Diagnostics Using Lens-Free Imaging Technology.

    Science.gov (United States)

    Roy, Mohendra; Seo, Dongmin; Oh, Sangwoo; Chae, Yeonghun; Nam, Myung-Hyun; Seo, Sungkyu

    2016-01-01

    Lens-free imaging technology has been extensively used recently for microparticle and biological cell analysis because of its high throughput, low cost, and simple and compact arrangement. However, this technology still lacks a dedicated and automated detection system. In this paper, we describe a custom-developed automated micro-object detection method for a lens-free imaging system. In our previous work (Roy et al.), we developed a lens-free imaging system using low-cost components. This system was used to generate and capture the diffraction patterns of micro-objects and a global threshold was used to locate the diffraction patterns. In this work we used the same setup to develop an improved automated detection and analysis algorithm based on adaptive threshold and clustering of signals. For this purpose images from the lens-free system were then used to understand the features and characteristics of the diffraction patterns of several types of samples. On the basis of this information, we custom-developed an automated algorithm for the lens-free imaging system. Next, all the lens-free images were processed using this custom-developed automated algorithm. The performance of this approach was evaluated by comparing the counting results with standard optical microscope results. We evaluated the counting results for polystyrene microbeads, red blood cells, and HepG2, HeLa, and MCF7 cells. The comparison shows good agreement between the systems, with a correlation coefficient of 0.91 and linearity slope of 0.877. We also evaluated the automated size profiles of the microparticle samples. This Wi-Fi-enabled lens-free imaging system, along with the dedicated software, possesses great potential for telemedicine applications in resource-limited settings. PMID:27164146

  2. A novel automated image analysis method for accurate adipocyte quantification

    OpenAIRE

    Osman, Osman S.; Selway, Joanne L; Kępczyńska, Małgorzata A; Stocker, Claire J.; O’Dowd, Jacqueline F; Cawthorne, Michael A.; Arch, Jonathan RS; Jassim, Sabah; Langlands, Kenneth

    2013-01-01

    Increased adipocyte size and number are associated with many of the adverse effects observed in metabolic disease states. While methods to quantify such changes in the adipocyte are of scientific and clinical interest, manual methods to determine adipocyte size are both laborious and intractable to large scale investigations. Moreover, existing computational methods are not fully automated. We, therefore, developed a novel automatic method to provide accurate measurements of the cross-section...

  3. Automative Multi Classifier Framework for Medical Image Analysis

    Directory of Open Access Journals (Sweden)

    R. Edbert Rajan

    2015-04-01

    Full Text Available Medical image processing is the technique used to create images of the human body for medical purposes. Medical image processing now plays a major role in, and poses challenging problems for, critical stages of clinical work, and several studies have been conducted to enhance its techniques. However, owing to the shortcomings of some advanced technologies, many aspects still need further development. An existing study evaluated the efficacy of medical image analysis using level-set shape features together with fractal texture and intensity features to discriminate posterior fossa (PF) tumor from other tissues in brain images. To advance medical image analysis and disease diagnosis, an automated subjective-optimality model is devised for segmentation of images based on different sets of features selected from an unsupervised learning model of extracted features. After segmentation, the images are classified. The classification is processed by adapting the multiple-classifier framework of the previous work, based on the mutual information coefficient of the features selected for the image segmentation procedures. In this study, to enhance the classification strategy, we implement an enhanced multi-classifier framework for the analysis of medical images and disease diagnosis. The performance parameters used for the analysis of the proposed enhanced multi-classifier framework are multiple-class intensity, image quality, and time consumption.

  4. Improving Automated Annotation of Benthic Survey Images Using Wide-band Fluorescence

    Science.gov (United States)

    Beijbom, Oscar; Treibitz, Tali; Kline, David I.; Eyal, Gal; Khen, Adi; Neal, Benjamin; Loya, Yossi; Mitchell, B. Greg; Kriegman, David

    2016-03-01

    Large-scale imaging techniques are used increasingly for ecological surveys. However, manual analysis can be prohibitively expensive, creating a bottleneck between collected images and desired data-products. This bottleneck is particularly severe for benthic surveys, where millions of images are obtained each year. Recent automated annotation methods may provide a solution, but reflectance images do not always contain sufficient information for adequate classification accuracy. In this work, the FluorIS, a low-cost modified consumer camera, was used to capture wide-band wide-field-of-view fluorescence images during a field deployment in Eilat, Israel. The fluorescence images were registered with standard reflectance images, and an automated annotation method based on convolutional neural networks was developed. Our results demonstrate a 22% reduction of classification error-rate when using both images types compared to only using reflectance images. The improvements were large, in particular, for coral reef genera Platygyra, Acropora and Millepora, where classification recall improved by 38%, 33%, and 41%, respectively. We conclude that convolutional neural networks can be used to combine reflectance and fluorescence imagery in order to significantly improve automated annotation accuracy and reduce the manual annotation bottleneck.

  5. Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.

    Science.gov (United States)

    Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2016-04-01

    Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle aged adults; where 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost. PMID:26894596

  6. Microscopic images dataset for automation of RBCs counting.

    Science.gov (United States)

    Abbas, Sherif

    2015-12-01

    A method for counting red blood corpuscles (RBCs) has been developed using light microscopic images of RBCs and a Matlab algorithm. The dataset consists of RBC images and their corresponding segmented images. A detailed description using a flow chart is given to show how to produce the RBC mask. The RBC mask was used to count the number of RBCs in the blood smear image.
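Given such a segmented mask, the counting step reduces to connected-component labelling. A minimal SciPy sketch (illustrative, not the dataset's Matlab code; the toy mask is invented):

```python
import numpy as np
from scipy.ndimage import label

def count_rbcs(mask):
    """Count connected foreground blobs in a binary RBC mask."""
    _, n = label(mask)
    return n

# Toy mask with three well-separated cells
mask = np.zeros((20, 20), dtype=bool)
mask[2:6, 2:6] = True
mask[10:14, 5:9] = True
mask[4:8, 12:16] = True

n_cells = count_rbcs(mask)   # → 3
```

Touching or overlapping cells would merge into one component; real pipelines usually add a watershed or distance-transform split before counting.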

  7. Automated analysis of protein subcellular location in time series images

    OpenAIRE

    Hu, Yanhua; Osuna-Highley, Elvira; Hua, Juchang; Nowicki, Theodore Scott; Stolz, Robert; McKayle, Camille; Murphy, Robert F.

    2010-01-01

    Motivation: Image analysis, machine learning and statistical modeling have become well established for the automatic recognition and comparison of the subcellular locations of proteins in microscope images. By using a comprehensive set of features describing static images, major subcellular patterns can be distinguished with near perfect accuracy. We now extend this work to time series images, which contain both spatial and temporal information. The goal is to use temporal features to improve...

  8. Automated quadrilateral mesh generation for digital image structures

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    With the development of advanced imaging technology, digital images are widely used. This paper proposes an automatic quadrilateral mesh generation algorithm for multi-colour imaged structures. It takes an original arbitrary digital image as input for automatic quadrilateral mesh generation; this includes removing noise, extracting and smoothing the boundary geometries between different colours, and generating an all-quad mesh with the above boundaries as constraints. An application example is...
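The all-quad step has a simple degenerate case worth sketching: every foreground pixel can be emitted as one quadrilateral element whose corners are shared grid nodes. The real algorithm smooths boundaries first; this fragment (with invented names) only shows the pixel-to-quad bookkeeping:

```python
import numpy as np

def pixels_to_quads(mask):
    """Turn each foreground pixel of a binary image into a quadrilateral
    element on the pixel-corner grid; shared corners become shared nodes."""
    nodes, quads = {}, []

    def node(r, c):
        # Assign each corner coordinate a node id on first use
        if (r, c) not in nodes:
            nodes[(r, c)] = len(nodes)
        return nodes[(r, c)]

    for r, c in zip(*np.nonzero(mask)):
        quads.append((node(r, c), node(r, c + 1),
                      node(r + 1, c + 1), node(r + 1, c)))
    coords = np.array(sorted(nodes, key=nodes.get), dtype=float)
    return coords, np.array(quads)

mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True            # a 2x2-pixel region
coords, quads = pixels_to_quads(mask)   # 4 quads sharing 9 nodes
```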

  9. Evaluation of an improved technique for automated center lumen line definition in cardiovascular image data

    International Nuclear Information System (INIS)

    The aim of the study was to evaluate a new method for automated definition of a center lumen line in vessels in cardiovascular image data. This method, called VAMPIRE, is based on improved detection of vessel-like structures. A multiobserver evaluation study was conducted involving 40 tracings in clinical CTA data of carotid arteries to compare VAMPIRE with an established technique. This comparison showed that VAMPIRE yields considerably more successful tracings and improved handling of stenosis, calcifications, multiple vessels, and nearby bone structures. We conclude that VAMPIRE is highly suitable for automated definition of center lumen lines in vessels in cardiovascular image data. (orig.)

  10. Evaluation of an improved technique for automated center lumen line definition in cardiovascular image data

    Energy Technology Data Exchange (ETDEWEB)

    Gratama van Andel, Hugo A.F. [Erasmus MC-University Medical Center Rotterdam, Department of Medical Informatics, Rotterdam (Netherlands); Erasmus MC-University Medical Center Rotterdam, Department of Radiology, Rotterdam (Netherlands); Academic Medical Centre-University of Amsterdam, Department of Medical Physics, Amsterdam (Netherlands); Meijering, Erik; Vrooman, Henri A.; Stokking, Rik [Erasmus MC-University Medical Center Rotterdam, Department of Medical Informatics, Rotterdam (Netherlands); Erasmus MC-University Medical Center Rotterdam, Department of Radiology, Rotterdam (Netherlands); Lugt, Aad van der; Monye, Cecile de [Erasmus MC-University Medical Center Rotterdam, Department of Radiology, Rotterdam (Netherlands)

    2006-02-01

    The aim of the study was to evaluate a new method for automated definition of a center lumen line in vessels in cardiovascular image data. This method, called VAMPIRE, is based on improved detection of vessel-like structures. A multiobserver evaluation study was conducted involving 40 tracings in clinical CTA data of carotid arteries to compare VAMPIRE with an established technique. This comparison showed that VAMPIRE yields considerably more successful tracings and improved handling of stenosis, calcifications, multiple vessels, and nearby bone structures. We conclude that VAMPIRE is highly suitable for automated definition of center lumen lines in vessels in cardiovascular image data. (orig.)

  11. Improved automated synthesis and preliminary animal PET/CT imaging of 11C-acetate

    International Nuclear Information System (INIS)

    To study a simple and rapid automated synthetic technology of 11C-acetate (11C-AC), automated synthesis of 11C-AC was performed by carboxylation of MeMgBr/tetrahydrofuran (THF) on a polyethylene loop with 11C-CO2, followed by hydrolysis and purification on solid-phase extraction cartridges using a 11C-Choline/Methionine synthesizer made in China. A high and reproducible radiochemical yield of above 40% (decay corrected) was obtained within a total synthesis time of about 8 min from 11C-CO2. The radiochemical purity of 11C-AC was over 95%. The novel, simple and rapid on-column hydrolysis-purification procedure should be adaptable to fully automated synthesis of 11C-AC on several commercial synthesis modules. 11C-AC injection produced by the automated procedure is safe and effective, and can be used for PET imaging of animals and humans. (authors)

  12. A review of automated image understanding within 3D baggage computed tomography security screening.

    Science.gov (United States)

    Mouton, Andre; Breckon, Toby P

    2015-01-01

    Baggage inspection is the principal safeguard against the transportation of prohibited and potentially dangerous materials at airport security checkpoints. Although traditionally performed by 2D X-ray based scanning, increasingly stringent security regulations have led to a growing demand for more advanced imaging technologies. The role of X-ray Computed Tomography is thus rapidly expanding beyond the traditional materials-based detection of explosives. The development of computer vision and image processing techniques for the automated understanding of 3D baggage-CT imagery is, however, complicated by poor image resolution, image clutter and high levels of noise and artefacts. We discuss the recent and most pertinent advancements and identify topics for future research within the challenging domain of automated image understanding for baggage security screening CT.

  13. Automated quantification of budding Saccharomyces cerevisiae using a novel image cytometry method.

    Science.gov (United States)

    Laverty, Daniel J; Kury, Alexandria L; Kuksin, Dmitry; Pirani, Alnoor; Flanagan, Kevin; Chan, Leo Li-Ying

    2013-06-01

    The measurements of concentration, viability, and budding percentages of Saccharomyces cerevisiae are performed on a routine basis in the brewing and biofuel industries. Generation of these parameters is of great importance in a manufacturing setting, where they can aid in the estimation of product quality, quantity, and fermentation time of the manufacturing process. Specifically, budding percentages can be used to estimate the reproduction rate of yeast populations, which directly correlates with metabolism of polysaccharides and bioethanol production, and can be monitored to maximize production of bioethanol during fermentation. The traditional method involves manual counting using a hemacytometer, but this is time-consuming and prone to human error. In this study, we developed a novel automated method for the quantification of yeast budding percentages using Cellometer image cytometry. The automated method utilizes a dual-fluorescent nucleic acid dye to specifically stain live cells for imaging analysis of unique morphological characteristics of budding yeast. In addition, cell cycle analysis is performed as an alternative method for budding analysis. We were able to show comparable yeast budding percentages between manual and automated counting, as well as cell cycle analysis. The automated image cytometry method is used to analyze and characterize corn mash samples directly from fermenters during standard fermentation. Since concentration, viability, and budding percentages can be obtained simultaneously, the automated method can be integrated into the fermentation quality assurance protocol, which may improve the quality and efficiency of beer and bioethanol production processes.

  14. Automated detection of a prostate Ni-Ti stent in electronic portal images

    DEFF Research Database (Denmark)

    Carl, Jesper; Nielsen, Henning; Nielsen, Jane;

    2006-01-01

    of a thermo-expandable Ni-Ti stent. The current study proposes a new detection algorithm for automated detection of the Ni-Ti stent in electronic portal images. The algorithm is based on the Ni-Ti stent having a cylindrical shape with a fixed diameter, which was used as the basis for an automated detection...... algorithm. The automated method uses enhancement of lines combined with a grayscale morphology operation that looks for enhanced pixels separated with a distance similar to the diameter of the stent. The images in this study are all from prostate cancer patients treated with radiotherapy in a previous study....... Images of a stent inserted in a humanoid phantom demonstrated a localization accuracy of 0.4-0.7  mm which equals the pixel size in the image. The automated detection of the stent was compared to manual detection in 71 pairs of orthogonal images taken in nine patients. The algorithm was successful in 67...

  15. A feasibility assessment of automated FISH image and signal analysis to assist cervical cancer detection

    Science.gov (United States)

    Wang, Xingwei; Li, Yuhua; Liu, Hong; Li, Shibo; Zhang, Roy R.; Zheng, Bin

    2012-02-01

    Fluorescence in situ hybridization (FISH) technology provides a promising molecular imaging tool to detect cervical cancer. Since manual FISH analysis is difficult, time-consuming, and inconsistent, automated FISH image scanning systems have been developed. Due to the limited focal depth of the scanned microscopic image, a FISH-probed specimen needs to be scanned in multiple layers, which generates huge image data volumes. To improve the diagnostic efficiency of automated FISH image analysis, we developed a computer-aided detection (CAD) scheme. In this experiment, four pap-smear specimen slides were scanned by a dual-detector fluorescence image scanning system that acquired two spectrum images simultaneously, which represent images of interphase cells and FISH-probed chromosome X. During image scanning, once a cell signal was detected, the system captured nine image slices by automatically adjusting the optical focus. Based on the sharpness index and maximum intensity measurement, cells and FISH signals distributed in 3-D space were projected into a 2-D con-focal image. The CAD scheme was applied to each con-focal image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm and detect FISH-probed signals using a top-hat transform. The ratio of abnormal cells was calculated to detect positive cases. In four scanned specimen slides, CAD generated 1676 con-focal images that depicted analyzable cells. FISH-probed signals were independently detected by our CAD algorithm and an observer. The Kappa coefficients for agreement between the CAD scheme and the observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots. The study demonstrated the feasibility of applying automated FISH image and signal analysis to assist cyto-geneticists in detecting cervical cancers.
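The top-hat stage of such a CAD scheme can be sketched with SciPy: a white top-hat removes the slowly varying background so small bright FISH-like spots survive a simple threshold. Sizes, thresholds, and the synthetic image below are invented:

```python
import numpy as np
from scipy.ndimage import label, white_tophat

def detect_spots(image, size=7, min_height=20.0):
    """White top-hat background removal followed by thresholding and
    connected-component counting of the surviving bright spots."""
    th = white_tophat(image.astype(float), size=size)
    _, n = label(th > min_height)
    return n

# Smooth intensity gradient with two small bright spots on top
rows = np.linspace(0.0, 50.0, 100)
img = np.tile(rows[:, None], (1, 100))
img[30:33, 40:43] += 60.0
img[70:73, 60:63] += 60.0

n_spots = detect_spots(img)   # → 2
```

The top-hat keeps only structures smaller than the structuring element, so the 100-row gradient is flattened while the 3-pixel spots remain.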

  16. Comparison of semi-automated image analysis and manual methods for tissue quantification in pancreatic carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Sims, A.J. [Regional Medical Physics Department, Freeman Hospital, Newcastle upon Tyne (United Kingdom)]. E-mail: a.j.sims@newcastle.ac.uk; Murray, A. [Regional Medical Physics Department, Freeman Hospital, Newcastle upon Tyne (United Kingdom); Bennett, M.K. [Department of Histopathology, Newcastle upon Tyne Hospitals NHS Trust, Newcastle upon Tyne (United Kingdom)

    2002-04-21

    Objective measurements of tissue area during histological examination of carcinoma can yield valuable prognostic information. However, such measurements are not made routinely because the current manual approach is time consuming and subject to large statistical sampling error. In this paper, a semi-automated image analysis method for measuring tissue area in histological samples is applied to the measurement of stromal tissue, cell cytoplasm and lumen in samples of pancreatic carcinoma and compared with the standard manual point counting method. Histological samples from 26 cases of pancreatic carcinoma were stained using the sirius red, light-green method. Images from each sample were captured using two magnifications. Image segmentation based on colour cluster analysis was used to subdivide each image into representative colours which were classified manually into one of three tissue components. Area measurements made using this technique were compared to corresponding manual measurements and used to establish the comparative accuracy of the semi-automated image analysis technique, with a quality assurance study to measure the repeatability of the new technique. For both magnifications and for each tissue component, the quality assurance study showed that the semi-automated image analysis algorithm had better repeatability than its manual equivalent. No significant bias was detected between the measurement techniques for any of the comparisons made using the 26 cases of pancreatic carcinoma. The ratio of manual to semi-automatic repeatability errors varied from 2.0 to 3.6. Point counting would need to be increased to be between 400 and 1400 points to achieve the same repeatability as for the semi-automated technique. The results demonstrate that semi-automated image analysis is suitable for measuring tissue fractions in histological samples prepared with coloured stains and is a practical alternative to manual point counting. (author)
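The two estimators being compared can be sketched directly: pixel counting over a classified colour map versus sampling the same map on a regular point grid. The class map and grid spacing below are synthetic:

```python
import numpy as np

def area_fractions(class_map, n_classes=3):
    """Image-analysis analogue: count every classified pixel."""
    counts = np.bincount(class_map.ravel(), minlength=n_classes)
    return counts / class_map.size

def point_count_fractions(class_map, step=10, n_classes=3):
    """Manual point-counting analogue: sample on a regular grid."""
    sampled = class_map[::step, ::step]
    counts = np.bincount(sampled.ravel(), minlength=n_classes)
    return counts / sampled.size

# Synthetic tissue map: 50% stroma (0), 30% cytoplasm (1), 20% lumen (2)
cm = np.zeros((100, 100), dtype=int)
cm[:, 50:80] = 1
cm[:, 80:] = 2

af = area_fractions(cm)          # exact fractions from all 10,000 pixels
pf = point_count_fractions(cm)   # estimate from only 100 grid points
```

Point counting's sampling error shrinks only as the number of points grows, which is why matching the pixel-based repeatability requires hundreds to over a thousand points.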

  17. Comparison of semi-automated image analysis and manual methods for tissue quantification in pancreatic carcinoma

    International Nuclear Information System (INIS)

    Objective measurements of tissue area during histological examination of carcinoma can yield valuable prognostic information. However, such measurements are not made routinely because the current manual approach is time consuming and subject to large statistical sampling error. In this paper, a semi-automated image analysis method for measuring tissue area in histological samples is applied to the measurement of stromal tissue, cell cytoplasm and lumen in samples of pancreatic carcinoma and compared with the standard manual point counting method. Histological samples from 26 cases of pancreatic carcinoma were stained using the sirius red, light-green method. Images from each sample were captured using two magnifications. Image segmentation based on colour cluster analysis was used to subdivide each image into representative colours which were classified manually into one of three tissue components. Area measurements made using this technique were compared to corresponding manual measurements and used to establish the comparative accuracy of the semi-automated image analysis technique, with a quality assurance study to measure the repeatability of the new technique. For both magnifications and for each tissue component, the quality assurance study showed that the semi-automated image analysis algorithm had better repeatability than its manual equivalent. No significant bias was detected between the measurement techniques for any of the comparisons made using the 26 cases of pancreatic carcinoma. The ratio of manual to semi-automatic repeatability errors varied from 2.0 to 3.6. Point counting would need to be increased to be between 400 and 1400 points to achieve the same repeatability as for the semi-automated technique. The results demonstrate that semi-automated image analysis is suitable for measuring tissue fractions in histological samples prepared with coloured stains and is a practical alternative to manual point counting. (author)

  18. SU-E-I-94: Automated Image Quality Assessment of Radiographic Systems Using An Anthropomorphic Phantom

    International Nuclear Information System (INIS)

    Purpose: In a large, academic medical center, consistent radiographic imaging performance is difficult to routinely monitor and maintain, especially for a fleet consisting of multiple vendors, models, software versions, and numerous imaging protocols. Thus, an automated image quality control methodology has been implemented using routine image quality assessment with a physical, stylized anthropomorphic chest phantom. Methods: The “Duke” Phantom (Digital Phantom 07-646, Supertech, Elkhart, IN) was imaged twice on each of 13 radiographic units from a variety of vendors at 13 primary care clinics. The first acquisition used the clinical PA chest protocol to acquire the post-processed “FOR PRESENTATION” image. The second image was acquired without an antiscatter grid followed by collection of the “FOR PROCESSING” image. Manual CNR measurements were made from the largest and thickest contrast-detail inserts in the lung, heart, and abdominal regions of the phantom in each image. An automated image registration algorithm was used to estimate the CNR of the same insert using similar ROIs. Automated measurements were then compared to the manual measurements. Results: Automatic and manual CNR measurements obtained from “FOR PRESENTATION” images had average percent differences of 0.42%±5.18%, −3.44%±4.85%, and 1.04%±3.15% in the lung, heart, and abdominal regions, respectively; measurements obtained from “FOR PROCESSING” images had average percent differences of -0.63%±6.66%, −0.97%±3.92%, and −0.53%±4.18%, respectively. The maximum absolute difference in CNR was 15.78%, 10.89%, and 8.73% in the respective regions. In addition to CNR assessment of the largest and thickest contrast-detail inserts, the automated method also provided CNR estimates for all 75 contrast-detail inserts in each phantom image. Conclusion: Automated analysis of a radiographic phantom has been shown to be a fast, robust, and objective means for assessing radiographic
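The CNR measurement at the heart of the method is simple to state. A hedged sketch on synthetic data (ROI positions, noise level, and contrast are invented, not the phantom's actual values):

```python
import numpy as np

def cnr(image, roi, background):
    """Contrast-to-noise ratio between an insert ROI and a background
    ROI, each given as a (row_slice, col_slice) pair."""
    sig, bg = image[roi], image[background]
    return abs(sig.mean() - bg.mean()) / bg.std()

# Noisy flat-field image with one contrast-detail insert stand-in
rng = np.random.default_rng(2)
img = rng.normal(100.0, 5.0, (64, 64))
img[10:20, 10:20] += 25.0

value = cnr(img, (slice(10, 20), slice(10, 20)),
            (slice(40, 60), slice(40, 60)))   # ≈ 25 / 5 = 5
```

In the automated scheme the ROI slices would come from the image-registration step rather than being placed manually.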

  19. Infrared thermal imaging for automated detection of diabetic foot complications

    NARCIS (Netherlands)

    Netten, van Jaap J.; Baal, van Jeff G.; Liu, Chanjuan; Heijden, van der Ferdi; Bus, Sicco A.

    2013-01-01

    Background: Although thermal imaging can be a valuable technology in the prevention and management of diabetic foot disease, it is not yet widely used in clinical practice. Technological advancement in infrared imaging increases its application range. The aim was to explore the first steps in the ap

  20. Automated Selection of Uniform Regions for CT Image Quality Detection

    CERN Document Server

    Naeemi, Maitham D; Roychodhury, Sohini

    2016-01-01

CT images are widely used in pathology detection and follow-up treatment procedures. Accurate identification of pathological features requires diagnostic quality CT images with minimal noise and artifact variation. In this work, a novel Fourier-transform based metric for image quality (IQ) estimation is presented that correlates to additive CT image noise. In the proposed method, two windowed CT image subset regions are analyzed together to identify the extent of variation in the corresponding Fourier-domain spectrum. The two square windows are chosen such that their center pixels coincide and one window is a subset of the other. The Fourier-domain spectral difference between these two sub-sampled windows is then used to isolate spatial regions-of-interest (ROI) with low signal variation (ROI-LV) and high signal variation (ROI-HV), respectively. Finally, the spatial variance ($var$), standard deviation ($std$), coefficient of variance ($cov$) and the fraction of abdominal ROI pixels in ROI-LV…
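The nested-window idea can be sketched as below; mean-subtracting each window and zero-padding the smaller one to the larger size are assumptions made here so the two spectra can be compared bin-by-bin, and the window sizes are arbitrary:

```python
import numpy as np

def spectral_difference(image, center, small=8, large=16):
    """Fourier-magnitude difference between two concentric square windows.

    Each window is mean-subtracted, the smaller one is zero-padded to the
    larger size, and the magnitude spectra are compared bin-by-bin.
    """
    r, c = center
    big = image[r - large // 2:r + large // 2, c - large // 2:c + large // 2]
    sml = image[r - small // 2:r + small // 2, c - small // 2:c + small // 2]
    big = big - big.mean()
    sml = np.pad(sml - sml.mean(), (large - small) // 2)
    return float(np.mean(np.abs(np.abs(np.fft.fft2(big)) - np.abs(np.fft.fft2(sml)))))

rng = np.random.default_rng(1)
flat = np.full((64, 64), 50.0)                   # low-variation region
noisy = flat + rng.normal(0.0, 10.0, (64, 64))   # high-variation region

d_flat = spectral_difference(flat, (32, 32))
d_noisy = spectral_difference(noisy, (32, 32))
print(d_flat < d_noisy)  # True: more signal variation -> larger spectral difference
```

Thresholding this difference over all pixel positions would partition the image into the ROI-LV and ROI-HV regions the record describes.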

  1. Automated and unbiased image analyses as tools in phenotypic classification of small-spored Alternaria species

    DEFF Research Database (Denmark)

    Andersen, Birgitte; Hansen, Michael Edberg; Smedsgaard, Jørn

    2005-01-01

    often has been broadly applied to various morphologically and chemically distinct groups of isolates from different hosts. The purpose of this study was to develop and evaluate automated and unbiased image analysis systems that will analyze different phenotypic characters and facilitate testing...

  2. Automated registration of multispectral MR vessel wall images of the carotid artery

    Energy Technology Data Exchange (ETDEWEB)

Klooster, R. van 't; Staring, M.; Reiber, J. H. C.; Lelieveldt, B. P. F.; Geest, R. J. van der, E-mail: rvdgeest@lumc.nl [Department of Radiology, Division of Image Processing, Leiden University Medical Center, 2300 RC Leiden (Netherlands); Klein, S. [Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 GE (Netherlands); Kwee, R. M.; Kooi, M. E. [Department of Radiology, Cardiovascular Research Institute Maastricht, Maastricht University Medical Center, Maastricht 6202 AZ (Netherlands)

    2013-12-15

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an automated accurate 3D registration method, and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying throughplane and inplane translations to the images. To find the optimal automatic image registration method, different masks, choice of the fixed image, different types of the mutual information image similarity metric, and transformation models including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and
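Registration methods of this kind optimise an image similarity metric over transformation parameters. A self-contained sketch of the mutual information metric itself, computed from a joint histogram (the bin count and the toy images are illustrative):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
fixed = rng.normal(size=(64, 64))
shifted = np.roll(fixed, 5, axis=1)   # a misaligned copy of the fixed image

# Alignment maximises the metric: the registered pair shares more information.
print(mutual_information(fixed, fixed) > mutual_information(fixed, shifted))  # True
```

A registration loop would apply candidate transformations to the moving image and keep the one maximising this score; the study's deformable models add many more parameters but optimise the same kind of objective.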

  3. An image-processing program for automated counting

    Science.gov (United States)

    Cunningham, D.J.; Anderson, W.H.; Anthony, R.M.

    1996-01-01

An image-processing program developed by the National Institutes of Health, IMAGE, was modified in a cooperative project between remote sensing specialists at the Ohio State University Center for Mapping and scientists at the Alaska Science Center to facilitate estimating numbers of black brant (Branta bernicla nigricans) in flocks at Izembek National Wildlife Refuge. The modified program, DUCK HUNT, runs on Apple computers. Modifications provide users with a pull-down menu that optimizes image quality; identifies objects of interest (e.g., brant) by spectral, morphometric, and spatial parameters defined interactively by users; counts and labels objects of interest; and produces summary tables. Images from digitized photography, videography, and high-resolution digital photography have been used with this program to count various species of waterfowl.

  4. ASTRiDE: Automated Streak Detection for Astronomical Images

    Science.gov (United States)

    Kim, Dae-Won

    2016-05-01

    ASTRiDE detects streaks in astronomical images using a "border" of each object (i.e. "boundary-tracing" or "contour-tracing") and their morphological parameters. Fast moving objects such as meteors, satellites, near-Earth objects (NEOs), or even cosmic rays can leave streak-like traces in the images; ASTRiDE can detect not only long streaks but also relatively short or curved streaks.

  5. Automated Drusen Segmentation and Quantification in SD-OCT Images

    OpenAIRE

    Chen, Qiang; Leng, Theodore; Zheng, Luoluo; Kutzscher, Lauren; Ma, Jeffrey; de Sisternes, Luis; Rubin, Daniel L.

    2013-01-01

    Spectral domain optical coherence tomography (SD-OCT) is a useful tool for the visualization of drusen, a retinal abnormality seen in patients with age-related macular degeneration (AMD); however, objective assessment of drusen is thwarted by the lack of a method to robustly quantify these lesions on serial OCT images. Here, we describe an automatic drusen segmentation method for SD-OCT retinal images, which leverages a priori knowledge of normal retinal morphology and anatomical features. Th...

  6. An automated image analysis system to measure and count organisms in laboratory microcosms.

    Science.gov (United States)

    Mallard, François; Le Bourlot, Vincent; Tully, Thomas

    2013-01-01

1. Because of recent technological improvements in the performance of computers and digital cameras, the potential of imaging to contribute to the study of communities, populations or individuals in laboratory microcosms has risen enormously. However, its use has been limited by difficulties in automating image analysis. 2. We present an accurate and flexible method of image analysis for detecting, counting and measuring moving particles on a fixed but heterogeneous substrate. This method has been specifically designed to follow individuals, or entire populations, in experimental laboratory microcosms, and it can be used in other applications. 3. The method consists in comparing multiple pictures of the same experimental microcosm in order to generate an image of the fixed background. This background is then used to extract, measure and count the moving organisms, leaving out the fixed background and the motionless or dead individuals. 4. We provide different examples (springtails, ants, nematodes, daphnia) to show that this non-intrusive method is efficient at detecting organisms under a wide variety of conditions, even on faintly contrasted and heterogeneous substrates. 5. The repeatability and reliability of this method have been assessed using experimental populations of the Collembola Folsomia candida. 6. We present an ImageJ plugin to automate the analysis of digital pictures of laboratory microcosms. The plugin automates the successive steps of the analysis and recursively analyses multiple sets of images, rapidly producing measurements from a large number of replicated microcosms. PMID:23734199
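The background-generation step can be sketched with a per-pixel median over frames: anything static (substrate, dead individuals) cancels out, leaving only the movers. The frame sizes, threshold, and blob geometry below are illustrative:

```python
import numpy as np

def moving_objects_mask(frames, threshold=10.0):
    """Detect moving particles by comparing each frame to a static background.

    The background is the per-pixel median over all frames, so anything that
    stays put (substrate, motionless or dead individuals) is excluded.
    """
    frames = np.asarray(frames, dtype=float)
    background = np.median(frames, axis=0)
    return np.abs(frames - background) > threshold

# Three frames of a textured substrate with one "organism" moving left to right.
rng = np.random.default_rng(3)
substrate = rng.uniform(0, 5, size=(32, 32))
frames = []
for col in (5, 15, 25):
    f = substrate.copy()
    f[10:14, col:col + 4] += 50.0   # bright 4x4 moving blob
    frames.append(f)

masks = moving_objects_mask(frames)
print([int(m.sum()) for m in masks])  # one 4x4 blob (16 px) per frame
```

Connected-component labelling of each mask would then yield the per-individual counts and size measurements the plugin reports.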

  7. An automated image analysis system to measure and count organisms in laboratory microcosms.

    Directory of Open Access Journals (Sweden)

    François Mallard

Full Text Available 1. Because of recent technological improvements in the performance of computers and digital cameras, the potential of imaging to contribute to the study of communities, populations or individuals in laboratory microcosms has risen enormously. However, its use has been limited by difficulties in automating image analysis. 2. We present an accurate and flexible method of image analysis for detecting, counting and measuring moving particles on a fixed but heterogeneous substrate. This method has been specifically designed to follow individuals, or entire populations, in experimental laboratory microcosms, and it can be used in other applications. 3. The method consists in comparing multiple pictures of the same experimental microcosm in order to generate an image of the fixed background. This background is then used to extract, measure and count the moving organisms, leaving out the fixed background and the motionless or dead individuals. 4. We provide different examples (springtails, ants, nematodes, daphnia) to show that this non-intrusive method is efficient at detecting organisms under a wide variety of conditions, even on faintly contrasted and heterogeneous substrates. 5. The repeatability and reliability of this method have been assessed using experimental populations of the Collembola Folsomia candida. 6. We present an ImageJ plugin to automate the analysis of digital pictures of laboratory microcosms. The plugin automates the successive steps of the analysis and recursively analyses multiple sets of images, rapidly producing measurements from a large number of replicated microcosms.

  8. The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images

    International Nuclear Information System (INIS)

    Two-dimensional projection radiographs have been traditionally considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration and to evaluate its accuracy. The software was designed using MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel similarity-based methods for image registration. A total of 8 CBCT images were selected as our reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as the test images for evaluating the method. Three experts twice located 14 landmarks in all 28 CBCT images during two examinations set 6 weeks apart. The differences in the distances of coordinates of each landmark on each image between manual and automated detection methods were calculated and reported as mean errors. The combined intraclass correlation coefficient for intraobserver reliability was 0.89 and for interobserver reliability 0.87 (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (gold standard method). The accuracy of our approach for automated localization of craniofacial landmarks, which was based on combining feature-based and voxel similarity-based methods for image registration, was acceptable. Nevertheless we recommend repetition of this study using other techniques, such as intensity-based methods

  9. Automated analysis of image mammogram for breast cancer diagnosis

    Science.gov (United States)

Nurhasanah; Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

Medical imaging helps doctors diagnose and detect diseases inside the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to distinguish normal from abnormal tissue density, and applies a segmentation technique based on the K-Means clustering algorithm to abnormal images to delineate organ boundaries and calculate the area of the segmentation results. The results show that the fractal method based on 2D Fourier analysis can distinguish between normal and abnormal breasts, and that segmentation with the K-Means clustering algorithm is able to generate the boundaries of normal and abnormal tissue, so the area of the abnormal tissue can be determined.
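The clustering step can be sketched with a minimal 1-D K-Means on pixel intensities. The toy "mammogram" and the deterministic initialisation are illustrative, and the paper's Fourier-based fractal density measure is not reproduced here:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Minimal K-Means on pixel intensities, with deterministic initialisation."""
    centers = np.linspace(values.min(), values.max(), k)  # spread over the range
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() for j in range(k)])
    return labels, centers

# Toy image: dark "normal" tissue with a bright "abnormal" square region.
rng = np.random.default_rng(4)
img = rng.normal(40, 3, size=(32, 32))
img[8:16, 8:16] = rng.normal(180, 3, size=(8, 8))

labels, centers = kmeans_1d(img.ravel())
area = int((labels == np.argmax(centers)).sum())
print(area)  # pixels in the segmented bright region: the 8x8 block = 64
```

The boundary of the bright cluster, traced on the label image, plays the role of the organ boundary in the record, and the cluster's pixel count gives the segmented area.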

  10. Automated Contour Detection for Intravascular Ultrasound Image Sequences Based on Fast Active Contour Algorithm

    Institute of Scientific and Technical Information of China (English)

    DONG Hai-yan; WANG Hui-nan

    2006-01-01

Intravascular ultrasound can provide high-resolution real-time cross-sectional images of the lumen, plaque and tissue. Traditionally, the luminal border and medial-adventitial border are traced manually; this process is extremely time-consuming and subject to large interobserver variability. In this paper, a new automated contour detection method is introduced, based on a fast active contour model. Experimental results showed that lumen and vessel area measurements after automated detection were in good agreement with manual tracings, with high correlation coefficients (0.94 and 0.95, respectively) and small systematic differences (-0.32 and 0.56, respectively), so the method can be a reliable and accurate diagnostic tool.

  11. An Automated Platform for High-Resolution Tissue Imaging Using Nanospray Desorption Electrospray Ionization Mass Spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Lanekoff, Ingela T.; Heath, Brandi S.; Liyu, Andrey V.; Thomas, Mathew; Carson, James P.; Laskin, Julia

    2012-10-02

An automated platform has been developed for acquisition and visualization of mass spectrometry imaging (MSI) data using nanospray desorption electrospray ionization (nano-DESI). The new system enables robust operation of the nano-DESI imaging source over many hours. This is achieved by controlling the distance between the sample and the probe by mounting the sample holder onto an automated XYZ stage and defining the tilt of the sample plane. This approach is useful for imaging of relatively flat samples such as thin tissue sections. Custom software called MSI QuickView was developed for visualization of large data sets generated in imaging experiments. MSI QuickView enables fast visualization of the imaging data during data acquisition and detailed processing after the entire image is acquired. The performance of the system is demonstrated by imaging rat brain tissue sections. High resolution mass analysis combined with MS/MS experiments enabled identification of lipids and metabolites in the tissue section. In addition, the high dynamic range and sensitivity of the technique allowed us to generate ion images of low-abundance isobaric lipids. A high-spatial-resolution image acquired over a small region of the tissue section revealed the spatial distribution of an abundant brain metabolite, creatine, in the white and gray matter, consistent with literature data obtained using magnetic resonance spectroscopy.
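Defining the tilt of the sample plane, as described, amounts to fitting a plane through measured surface heights so the stage can hold the probe-to-sample distance constant. A sketch from three points (the helper names and the numbers are hypothetical):

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Plane through three measured surface points, as (unit normal, offset d).

    The plane satisfies normal . x = d; the stage controller can then follow
    it while rastering to keep the probe-to-sample distance constant.
    """
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)
    normal = normal / np.linalg.norm(normal)
    return normal, float(normal @ p1)

def z_on_plane(normal, d, x, y):
    """Height of the tilted sample plane at stage position (x, y)."""
    return (d - normal[0] * x - normal[1] * y) / normal[2]

# Sample tilted by 1 unit of height per 100 units in x (illustrative numbers).
normal, d = plane_from_points((0, 0, 0.0), (100, 0, 1.0), (0, 100, 0.0))
print(round(z_on_plane(normal, d, 50, 50), 3))  # 0.5
```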

  12. Image cytometer method for automated assessment of human spermatozoa concentration

    DEFF Research Database (Denmark)

    Egeberg, D L; Kjaerulff, S; Hansen, C;

    2013-01-01

In the basic clinical work-up of infertile couples, a semen analysis is mandatory and the sperm concentration is one of the most essential variables to be determined. Sperm concentration is usually assessed by manual counting using a haemocytometer and is hence labour intensive and may be subject to investigator bias. Here we show that image cytometry can be used to accurately measure the sperm concentration of human semen samples with great ease and reproducibility. The impact of several factors (pipetting, mixing, round cell content, sperm concentration), which can influence the read-out as well… Moreover, by evaluation of repeated measurements it appeared that image cytometry produced more consistent and accurate measurements than manual counting of human spermatozoa concentration. In conclusion, image cytometry provides an appealing substitute for manual counting by providing reliable, robust…

  13. Automated Classification of Glaucoma Images by Wavelet Energy Features

    Directory of Open Access Journals (Sweden)

    N.Annu

    2013-04-01

Full Text Available Glaucoma is the second leading cause of blindness worldwide. As glaucoma progresses, more optic nerve tissue is lost and the optic cup grows, which leads to vision loss. This paper compiles a system that could be used by non-experts to filter out cases of patients not affected by the disease. This work proposes glaucomatous image classification using texture features within images and efficient glaucoma classification based on a Probabilistic Neural Network (PNN). Energy distribution over wavelet sub-bands is applied to compute these texture features. Wavelet features were obtained from the daubechies (db3), symlets (sym3), and biorthogonal (bio3.3, bio3.5, and bio3.7) wavelet filters. The technique extracts energy signatures obtained using the 2-D discrete wavelet transform, and the energy obtained from the detailed coefficients can be used to distinguish between normal and glaucomatous images. We observed an accuracy of around 95%, which demonstrates the effectiveness of these methods.
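Energy signatures over wavelet sub-bands can be sketched with a single-level Haar transform. The record uses db3/symlet/biorthogonal filters, which need a wavelet library; Haar keeps this sketch self-contained, and the toy images are illustrative:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform -> (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def detail_energy(img):
    """Total energy in the three detail sub-bands, a simple texture signature."""
    _, lh, hl, hh = haar2d(img)
    return float(np.sum(lh**2) + np.sum(hl**2) + np.sum(hh**2))

rng = np.random.default_rng(5)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
textured = smooth + rng.normal(0, 0.2, size=(64, 64))

# Texture raises detail-band energy, which is what the classifier feeds on.
print(detail_energy(textured) > detail_energy(smooth))  # True
```

A classifier such as the record's PNN would take a vector of such per-band energies, computed over several decomposition levels and filters, as its input features.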

  14. Automated detection of meteors in observed image sequence

    Science.gov (United States)

    Šimberová, Stanislava; Suk, Tomáš

    2015-12-01

We propose a new detection technique based on statistical characteristics of the images in a video sequence. These characteristics, displayed over time, make it possible to catch any bright track during the whole sequence. We applied our method to image datacubes created from camera pictures of the night sky. A meteor flying through the Earth's atmosphere leaves a light trail on the sky background lasting a few seconds. We developed a special technique to recognize this event automatically in the complete observed video sequence. For further analysis leading to precise recognition of the object, we suggest applying the Fourier and Hough transformations.
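The idea of tracking a per-frame statistic through time can be sketched as follows; the choice of the frame maximum as the statistic and the median/MAD baseline are assumptions for illustration:

```python
import numpy as np

def bright_track_frames(cube, nsigma=10.0):
    """Flag frames whose peak brightness spikes above the temporal baseline.

    `cube` is a (time, y, x) image stack; a meteor trail shows up as a brief
    jump in a simple per-frame statistic such as the frame maximum. A robust
    median/MAD baseline keeps the outlier frame from inflating the threshold.
    """
    stat = cube.reshape(cube.shape[0], -1).max(axis=1)
    baseline = np.median(stat)
    mad = np.median(np.abs(stat - baseline))
    return np.nonzero(stat > baseline + nsigma * mad)[0]

rng = np.random.default_rng(7)
cube = rng.normal(100, 2, size=(30, 32, 32))
cube[12, 10, 5:25] += 300.0   # short bright trail in frame 12

print(bright_track_frames(cube))  # [12]
```

Once the candidate frame is flagged, the Fourier and Hough transformations the authors suggest would localise and characterise the linear trail within it.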

  15. Automated cell colony counting and analysis using the circular Hough image transform algorithm (CHiTA)

    Energy Technology Data Exchange (ETDEWEB)

    Bewes, J M; Suchowerska, N; McKenzie, D R [School of Physics, University of Sydney, Sydney, NSW (Australia)], E-mail: jbewes@physics.usyd.edu.au

    2008-11-07

    We present an automated cell colony counting method that is flexible, robust and capable of providing more in-depth clonogenic analysis than existing manual and automated approaches. The full form of the Hough transform without approximation has been implemented, for the first time. Improvements in computing speed have facilitated this approach. Colony identification was achieved by pre-processing the raw images of the colonies in situ in the flask, including images of the flask edges, by erosion, dilation and Gaussian smoothing processes. Colony edges were then identified by intensity gradient field discrimination. Our technique eliminates the need for specialized hardware for image capture and enables the use of a standard desktop scanner for distortion-free image acquisition. Additional parameters evaluated included regional colony counts, average colony area, nearest neighbour distances and radial distribution. This spatial and qualitative information extends the utility of the clonogenic assay, allowing analysis of spatially-variant cytotoxic effects. To test the automated system, two flask types and three cell lines with different morphology, cell size and plating density were examined. A novel Monte Carlo method of simulating cell colony images, as well as manual counting, were used to quantify algorithm accuracy. The method was able to identify colonies with unusual morphology, to successfully resolve merged colonies and to correctly count colonies adjacent to flask edges.
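The voting form of the circular Hough transform at a fixed radius can be sketched as below; the synthetic edge image and the angular sampling are illustrative, and a full implementation would also sweep over radii:

```python
import numpy as np

def circular_hough(edges, radius, shape):
    """Accumulate votes for circle centres at a fixed radius.

    Each edge pixel votes for every candidate centre lying `radius` away
    from it; peaks in the accumulator mark circle (colony) centres.
    """
    acc = np.zeros(shape)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic "colony": a ring of edge pixels with radius 10 centred at (20, 25).
edges = np.zeros((48, 48), dtype=bool)
angles = np.linspace(0, 2 * np.pi, 80, endpoint=False)
edges[np.round(20 + 10 * np.sin(angles)).astype(int),
      np.round(25 + 10 * np.cos(angles)).astype(int)] = True

acc = circular_hough(edges, radius=10, shape=edges.shape)
cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
print(cy, cx)  # accumulator peak at (or within 1 px of) the true centre (20, 25)
```

In the colony-counting setting, the edge image would come from the gradient-field discrimination step described in the record, and counting well-separated accumulator peaks gives the colony count, including merged colonies that share edge pixels.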

  16. Automated cell colony counting and analysis using the circular Hough image transform algorithm (CHiTA)

    Science.gov (United States)

    Bewes, J. M.; Suchowerska, N.; McKenzie, D. R.

    2008-11-01

    We present an automated cell colony counting method that is flexible, robust and capable of providing more in-depth clonogenic analysis than existing manual and automated approaches. The full form of the Hough transform without approximation has been implemented, for the first time. Improvements in computing speed have facilitated this approach. Colony identification was achieved by pre-processing the raw images of the colonies in situ in the flask, including images of the flask edges, by erosion, dilation and Gaussian smoothing processes. Colony edges were then identified by intensity gradient field discrimination. Our technique eliminates the need for specialized hardware for image capture and enables the use of a standard desktop scanner for distortion-free image acquisition. Additional parameters evaluated included regional colony counts, average colony area, nearest neighbour distances and radial distribution. This spatial and qualitative information extends the utility of the clonogenic assay, allowing analysis of spatially-variant cytotoxic effects. To test the automated system, two flask types and three cell lines with different morphology, cell size and plating density were examined. A novel Monte Carlo method of simulating cell colony images, as well as manual counting, were used to quantify algorithm accuracy. The method was able to identify colonies with unusual morphology, to successfully resolve merged colonies and to correctly count colonies adjacent to flask edges.

  17. Automation of the method gamma of comparison dosimetry images

    International Nuclear Information System (INIS)

The objective of this work was the development of the JJGAMMA analysis application, software that performs this comparison systematically, minimizing specialist intervention and therefore observer-dependent variability. Both benefits allow image comparison to be done in practice with the required frequency and objectivity. (Author)
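The gamma method being automated here compares two dose distributions point by point. A minimal 1-D sketch under a global 3%/3 mm criterion (the profile, spacing, and tolerances are illustrative, not from the work):

```python
import numpy as np

def gamma_index(ref, evl, dx=1.0, dose_tol=0.03, dist_tol=3.0):
    """1-D global gamma between a reference and an evaluated dose profile.

    For each reference point, gamma is the minimum over evaluated points of
    sqrt((dose diff / dose tol)^2 + (distance / dist tol)^2); gamma <= 1 passes.
    """
    positions = np.arange(len(ref)) * dx
    norm = dose_tol * ref.max()            # global dose normalisation
    g = np.empty(len(ref))
    for i, (p, d) in enumerate(zip(positions, ref)):
        dd = (evl - d) / norm
        dr = (positions - p) / dist_tol
        g[i] = np.sqrt(dd**2 + dr**2).min()
    return g

ref = np.exp(-0.5 * ((np.arange(100) - 50) / 10.0) ** 2)  # Gaussian "profile"
shifted = np.roll(ref, 2)                                 # 2 mm shift, within tolerance

g = gamma_index(ref, shifted)
pass_rate = np.mean(g <= 1.0) * 100
print(pass_rate >= 95.0)  # True: a 2 mm shift passes a 3%/3 mm criterion
```

The 2-D case over full dosimetry images is the same computation with a 2-D distance term, which is exactly the repetitive task worth automating.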

  18. Automated identification of retained surgical items in radiological images

    Science.gov (United States)

    Agam, Gady; Gan, Lin; Moric, Mario; Gluncic, Vicko

    2015-03-01

Retained surgical items (RSIs) in patients are a major operating room (OR) patient safety concern. An RSI is any surgical tool, sponge, needle or other item inadvertently left in a patient's body during the course of surgery. If left undetected, RSIs may lead to serious negative health consequences such as sepsis, internal bleeding, and even death. To help physicians efficiently and effectively detect RSIs, we are developing computer-aided detection (CADe) software for X-ray (XR) image analysis, utilizing large amounts of currently available image data to produce a clinically effective RSI detection system. Physician analysis of XRs for the purpose of RSI detection is a relatively lengthy process that may take up to 45 minutes to complete. It is also error prone due to the relatively low acuity of the human eye for RSIs in XR images. The system we are developing is based on computer vision and machine learning algorithms. We address the problem of low incidence by proposing synthesis algorithms. The CADe software we are developing may be integrated into a picture archiving and communication system (PACS), be implemented as a stand-alone software application, or be integrated into portable XR machine software through application programming interfaces. Preliminary experimental results on actual XR images demonstrate the effectiveness of the proposed approach.
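Addressing low incidence with synthesis, as the record proposes, can be sketched by compositing an item patch into a real image to create positive training samples. Additive compositing is a simplification chosen here; real XR synthesis would model attenuation, and all names and numbers are illustrative:

```python
import numpy as np

def synthesize_positive(xr_image, item_patch, top_left):
    """Create a synthetic positive sample by embedding an RSI patch in an XR."""
    out = xr_image.copy()
    r, c = top_left
    h, w = item_patch.shape
    out[r:r + h, c:c + w] += item_patch   # simple additive compositing
    return out

rng = np.random.default_rng(8)
xr = rng.normal(120, 10, size=(64, 64))           # toy "X-ray" background
needle = np.zeros((3, 12))
needle[1, :] = 80.0                               # thin bright "needle"

pos = synthesize_positive(xr, needle, (30, 20))
print(round(pos[31, 25] - xr[31, 25], 6))  # 80.0 added at the needle row
```

Generating many such positives at varied positions, orientations, and intensities gives a detector enough labelled examples despite the rarity of real RSI cases.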

  19. Computer-assisted tree taxonomy by automated image recognition

    NARCIS (Netherlands)

    Pauwels, E.J.; Zeeuw, P.M.de; Ranguelova, E.B.

    2009-01-01

    We present an algorithm that performs image-based queries within the domain of tree taxonomy. As such, it serves as an example relevant to many other potential applications within the field of biodiversity and photo-identification. Unsupervised matching results are produced through a chain of comput

  20. Automated Hierarchical Time Gain Compensation for In Vivo Ultrasound Imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo;

    2015-01-01

    Time gain compensation (TGC) is essential to ensure the optimal image quality of the clinical ultrasound scans. When large fluid collections are present within the scan plane, the attenuation distribution is changed drastically and TGC compensation becomes challenging. This paper presents...

  1. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    Science.gov (United States)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

The results of a master's thesis project on a study of computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of the associated magnetic field lines. The project addresses pattern recognition problems in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parametric space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of the coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information in the identification process. Results from both synthesized and solar images will be presented.

  2. AUTOMATED VIDEO IMAGE MORPHOMETRY OF THE CORNEAL ENDOTHELIUM

    NARCIS (Netherlands)

    SIERTSEMA, JV; LANDESZ, M; VANDENBROM, H; VANRIJ, G

    1993-01-01

    The central corneal endothelium of 13 eyes in 13 subjects was visualized with a non-contact specular microscope. This report describes the computer-assisted morphometric analysis of enhanced digitized images, using a direct input by means of a frame grabber. The output consisted of mean cell area, c

  3. Automated Detection of Contaminated Radar Image Pixels in Mountain Areas

    Institute of Scientific and Technical Information of China (English)

    LIU Liping; Qin XU; Pengfei ZHANG; Shun LIU

    2008-01-01

In mountain areas, radar observations are often contaminated (1) by echoes from high-speed moving vehicles and (2) by point-wise ground clutter under either normal propagation (NP) or anomalous propagation (AP) conditions. Level II data are collected from the KMTX (Salt Lake City, Utah) radar to analyze these two types of contamination in the mountain area around the Great Salt Lake. Human experts provide the "ground truth" for possible contamination of either type on each individual pixel. Common features are then extracted for contaminated pixels of each type. For example, pixels contaminated by echoes from high-speed moving vehicles are characterized by large radial velocity and spectrum width. Echoes from a moving train tend to have larger velocity and reflectivity but smaller spectrum width than those from moving vehicles on highways. These contaminated pixels are only seen in areas of large terrain gradient (in the radial direction along the radar beam). The same is true for the second type of contamination: point-wise ground clutter. Six quality control (QC) parameters are selected to quantify the extracted features. Histograms are computed for each QC parameter and grouped for contaminated pixels of each type and also for non-contaminated pixels. Based on the computed histograms, a fuzzy logic algorithm is developed for automated detection of contaminated pixels. The algorithm is tested with KMTX radar data under different (clear and rainy) weather conditions.
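The fuzzy-logic aggregation of QC parameters can be sketched with simple ramp memberships. The thresholds below are invented for illustration; a real system would fit them to the histograms of expert-labelled pixels described in the record:

```python
import numpy as np

def ramp(x, lo, hi):
    """Fuzzy membership rising linearly from 0 at `lo` to 1 at `hi`."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def contamination_score(velocity, spectrum_width, terrain_gradient):
    """Average fuzzy memberships of three QC parameters into one score.

    Large radial velocity, large spectrum width, and large radial terrain
    gradient all raise the score (thresholds here are hypothetical).
    """
    m_vel = ramp(abs(velocity), 15.0, 30.0)        # m/s
    m_sw = ramp(spectrum_width, 4.0, 8.0)          # m/s
    m_ter = ramp(terrain_gradient, 0.05, 0.15)     # radial slope
    return (m_vel + m_sw + m_ter) / 3.0

clean = contamination_score(velocity=3.0, spectrum_width=1.0, terrain_gradient=0.01)
vehicle = contamination_score(velocity=28.0, spectrum_width=7.0, terrain_gradient=0.2)
print(clean < 0.1, vehicle > 0.8)  # True True
```

Thresholding the aggregate score then flags a pixel as contaminated; a production system would use one membership function per QC parameter for each contamination type.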

  4. Automated marker tracking using noisy X-ray images degraded by the treatment beam

    Energy Technology Data Exchange (ETDEWEB)

    Wisotzky, E. [Fraunhofer Institute for Production Systems and Design Technology (IPK), Berlin (Germany); German Cancer Research Center (DKFZ), Heidelberg (Germany); Fast, M.F.; Nill, S. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; Oelfke, U. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; German Cancer Research Center (DKFZ), Heidelberg (Germany)

    2015-09-01

This study demonstrates the feasibility of automated marker tracking for the real-time detection of intrafractional target motion using noisy kilovoltage (kV) X-ray images degraded by the megavoltage (MV) treatment beam. The authors previously introduced the in-line imaging geometry, in which the flat-panel detector (FPD) is mounted directly underneath the treatment head of the linear accelerator. They found that the 121 kVp image quality was severely compromised by the 6 MV beam passing through the FPD at the same time. Specific MV-induced artefacts present a considerable challenge for automated marker detection algorithms. For this study, the authors developed a new imaging geometry by re-positioning the FPD and the X-ray tube. This improved the contrast-to-noise ratio between 40% and 72% at the 1.2 mAs/image exposure setting. The increase in image quality clearly facilitates the quick and stable detection of motion with the aid of a template matching algorithm. The setup was tested with an anthropomorphic lung phantom (including an artificial lung tumour). One or three Calypso® beacons were embedded in the tumour to achieve better contrast during MV irradiation. For a single beacon, image acquisition and automated marker detection typically took around 76±6 ms. The success rate was found to be highly dependent on imaging dose and gantry angle. To eliminate possible false detections, the authors implemented a training phase prior to treatment-beam irradiation and also introduced speed limits for motion between subsequent images.
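Template matching of the kind used for the marker detection can be sketched with normalised cross-correlation (NCC). The marker shape, noise level, and brute-force search below are illustrative; production systems use FFT-based correlation for speed:

```python
import numpy as np

def match_template(image, template):
    """Locate a template in an image by normalised cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t**2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w**2).sum()) * tnorm
            if denom > 0:
                score = float((w * t).sum() / denom)
                if score > best:
                    best, best_pos = score, (r, c)
    return best_pos, best

# Noisy toy "kV image" containing a bright marker; the template is the clean marker.
rng = np.random.default_rng(6)
template = np.zeros((5, 5))
template[1:4, 1:4] = 5.0
template[2, 2] = 15.0                     # bright marker core
image = rng.normal(0, 0.5, size=(40, 40))
image[17:22, 23:28] += template           # marker embedded at (17, 23)

pos, score = match_template(image, template)
print(pos)  # best match at the embedded location: (17, 23)
```

NCC's invariance to local brightness and contrast is what makes it tolerant of the MV-induced intensity artefacts described in the record; the training phase and speed limits then reject residual false matches.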

  5. Automated and Accurate Detection of Soma Location and Surface Morphology in Large-Scale 3D Neuron Images

    OpenAIRE

    Cheng Yan; Anan Li; Bin Zhang; Wenxiang Ding; Qingming Luo; Hui Gong

    2013-01-01

    Automated and accurate localization and morphometry of somas in 3D neuron images is essential for quantitative studies of neural networks in the brain. However, previous methods are limited in obtaining the location and surface morphology of somas with variable size and uneven staining in large-scale 3D neuron images. In this work, we proposed a method for automated soma locating in large-scale 3D neuron images that contain relatively sparse soma distributions. This method involves three step...

  6. Automated segmentation of pigmented skin lesions in multispectral imaging

    International Nuclear Information System (INIS)

    The aim of this study was to develop an algorithm for the automatic segmentation of multispectral images of pigmented skin lesions. The study involved 1700 patients with 1856 cutaneous pigmented lesions, which were analysed in vivo by a novel spectrophotometric system, before excision. The system is able to acquire a set of 15 different multispectral images at equally spaced wavelengths between 483 and 951 nm. An original segmentation algorithm was developed and applied to the whole set of lesions and was able to automatically contour them all. The obtained lesion boundaries were shown to two expert clinicians, who, independently, rejected 54 of them. The 97.1% contour accuracy indicates that the developed algorithm could be a helpful and effective instrument for the automatic segmentation of skin pigmented lesions. (note)

  7. Automated interpretation of PET/CT images in patients with lung cancer

    DEFF Research Database (Denmark)

    Gutte, Henrik; Jakobsson, David; Olofsson, Fredrik;

    2007-01-01

    cancer. METHODS: A total of 87 patients who underwent PET/CT examinations due to suspected lung cancer comprised the training group. The test group consisted of PET/CT images from 49 patients suspected with lung cancer. The consensus interpretations by two experienced physicians were used as the 'gold...... for localization of lesions in the PET images in the feature extraction process. Eight features from each examination were used as inputs to artificial neural networks trained to classify the images. Thereafter, the performance of the network was evaluated in the test set. RESULTS: The performance of the automated...

  8. Extending and applying active appearance models for automated, high precision segmentation in different image modalities

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Fisker, Rune; Ersbøll, Bjarne Kjær

    2001-01-01

    , an initialization scheme is designed thus making the usage of AAMs fully automated. Using these extensions it is demonstrated that AAMs can segment bone structures in radiographs, pork chops in perspective images and the left ventricle in cardiovascular magnetic resonance images in a robust, fast and accurate...... object class description, which can be employed to rapidly search images for new object instances. The proposed extensions concern enhanced shape representation, handling of homogeneous and heterogeneous textures, refinement optimization using Simulated Annealing and robust statistics. Finally...

  9. Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer

    Science.gov (United States)

    Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2016-04-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.
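The quantitative comparison above uses an overlap ratio between corresponding tissue volumes. A small sketch of that metric (the paper does not specify the exact definition here, so this assumes the Jaccard form; Dice is the other common convention):

```python
import numpy as np

def overlap_ratio(seg_a, seg_b):
    """Overlap ratio between two binary masks: |A ∩ B| / |A ∪ B| (Jaccard).

    seg_a, seg_b: boolean (or 0/1) arrays of identical shape, e.g. one
    tissue class from the automated and the manual segmentation.
    """
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(a, b).sum() / union
```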

  10. Automation of disbond detection in aircraft fuselage through thermal image processing

    Science.gov (United States)

    Prabhu, D. R.; Winfree, W. P.

    1992-01-01

    A procedure for interpreting thermal images obtained during the nondestructive evaluation of aircraft bonded joints is presented. The procedure operates on time-derivative thermal images and resulted in a disbond image with disbonds highlighted. The size of the 'black clusters' in the output disbond image is a quantitative measure of disbond size. The procedure is illustrated using simulation data as well as data obtained through experimental testing of fabricated samples and aircraft panels. Good results are obtained, and, except in pathological cases, 'false calls' in the cases studied appeared only as noise in the output disbond image which was easily filtered out. The thermal detection technique coupled with an automated image interpretation capability will be a very fast and effective method for inspecting bonded joints in an aircraft structure.
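Since the size of the "black clusters" in the disbond image is the quantitative measure of disbond size, the measurement reduces to connected-component labelling of a binary image. A plain BFS sketch (not the authors' code; 4-connectivity is an assumption):

```python
import numpy as np
from collections import deque

def cluster_sizes(binary_img):
    """Pixel counts of 4-connected foreground clusters, largest first."""
    visited = np.zeros_like(binary_img, dtype=bool)
    h, w = binary_img.shape
    sizes = []
    for i in range(h):
        for j in range(w):
            if binary_img[i, j] and not visited[i, j]:
                size, q = 0, deque([(i, j)])
                visited[i, j] = True
                while q:
                    r, c = q.popleft()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < h and 0 <= nc < w
                                and binary_img[nr, nc] and not visited[nr, nc]):
                            visited[nr, nc] = True
                            q.append((nr, nc))
                sizes.append(size)
    return sorted(sizes, reverse=True)
```

Small clusters could then be filtered out as noise, mirroring the "false calls" filtering described above.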

  11. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    Directory of Open Access Journals (Sweden)

    Strandh Christer

    2008-07-01

    Full Text Available Abstract Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  12. Automated detection of diabetic retinopathy in retinal images

    Directory of Open Access Journals (Sweden)

    Carmen Valverde

    2016-01-01

    Full Text Available Diabetic retinopathy (DR) is a disease with an increasing prevalence and the main cause of blindness among the working-age population. The risk of severe vision loss can be significantly reduced by timely diagnosis and treatment. Systematic screening for DR has been identified as a cost-effective way to save health services resources. Automatic retinal image analysis is emerging as an important screening tool for early DR detection, which can reduce the workload associated with manual grading as well as save diagnosis costs and time. Many research efforts in recent years have been devoted to developing automatic tools to help in the detection and evaluation of DR lesions. However, there is a large variability in the databases and evaluation criteria used in the literature, which hampers a direct comparison of the different studies. This work is aimed at summarizing the results of the available algorithms for the detection and classification of DR pathology. A detailed literature search was conducted using PubMed. Selected relevant studies from the last 10 years were scrutinized and included in the review. Furthermore, we give an overview of the available commercial software for automatic retinal image analysis.

  13. Automated Peripheral Neuropathy Assessment Using Optical Imaging and Foot Anthropometry.

    Science.gov (United States)

    Siddiqui, Hafeez-U R; Spruce, Michelle; Alty, Stephen R; Dudley, Sandra

    2015-08-01

    A large proportion of individuals who live with type-2 diabetes suffer from plantar sensory neuropathy. Regular testing and assessment for the condition is required to avoid ulceration or other damage to patient's feet. Currently accepted practice involves a trained clinician testing a patient's feet manually with a hand-held nylon monofilament probe. The procedure is time consuming, labor intensive, requires special training, is prone to error, and repeatability is difficult. With the vast increase in type-2 diabetes, the number of plantar sensory neuropathy sufferers has already grown to such an extent as to make a traditional manual test problematic. This paper presents the first investigation of a novel approach to automatically identify the pressure points on a given patient's foot for the examination of sensory neuropathy via optical image processing incorporating plantar anthropometry. The method automatically selects suitable test points on the plantar surface that correspond to those repeatedly chosen by a trained podiatrist. The proposed system automatically identifies the specific pressure points at different locations, namely the toe (hallux), metatarsal heads and heel (Calcaneum) areas. The approach is generic and has shown 100% reliability on the available database used. The database consists of Chinese, Asian, African, and Caucasian foot images. PMID:26186748

  14. Automated Image-Based Procedures for Adaptive Radiotherapy

    DEFF Research Database (Denmark)

    Bjerre, Troels

    Fractionated radiotherapy for cancer treatment is a field of constant innovation. Developments in dose delivery techniques have made it possible to precisely direct ionizing radiation at complicated targets. In order to further increase tumour control probability (TCP) and decrease normal...... to encourage bone rigidity and local tissue volume change only in the gross tumour volume and the lungs. This is highly relevant in adaptive radiotherapy when modelling significant tumour volume changes. - It is described how cone beam CT reconstruction can be modelled as a deformation of a planning CT scan...... be employed for contour propagation in adaptive radiotherapy. - MRI-radiotherapy devices have the potential to offer near real-time intrafraction imaging without any additional ionising radiation. It is detailed how the use of multiple, orthogonal slices can form the basis for reliable 3D soft tissue tracking....

  15. Automated grading of renal cell carcinoma using whole slide imaging

    Directory of Open Access Journals (Sweden)

    Fang-Cheng Yeh

    2014-01-01

    Full Text Available Introduction: Recent technology developments have demonstrated the benefit of using whole slide imaging (WSI) in computer-aided diagnosis. In this paper, we explore the feasibility of using automatic WSI analysis to assist grading of clear cell renal cell carcinoma (RCC), which is a manual task traditionally performed by pathologists. Materials and Methods: Automatic WSI analysis was applied to 39 hematoxylin and eosin-stained digitized slides of clear cell RCC with varying grades. Kernel regression was used to estimate the spatial distribution of nuclear size across the entire slides. The analysis results were correlated with Fuhrman nuclear grades determined by pathologists. Results: The spatial distribution of nuclear size provided a panoramic view of the tissue sections. The distribution images facilitated locating regions of interest, such as high-grade regions and areas with necrosis. The statistical analysis showed that the maximum nuclear size was significantly different (P < 0.001) between low-grade (Grades I and II) and high-grade tumors (Grades III and IV). The receiver operating characteristics analysis showed that the maximum nuclear size distinguished high-grade and low-grade tumors with a false positive rate of 0.2 and a true positive rate of 1.0. The area under the curve is 0.97. Conclusion: The automatic WSI analysis allows pathologists to see the spatial distribution of nuclear size inside the tumors. The maximum nuclear size can also be used to differentiate low-grade and high-grade clear cell RCC with good sensitivity and specificity. These data suggest that automatic WSI analysis may facilitate pathologic grading of renal tumors and reduce variability encountered with manual grading.
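The AUC reported above can be computed without explicitly tracing the ROC curve, via the Mann-Whitney rank statistic. A sketch with hypothetical scores (the per-slide maximum nuclear sizes; values here are illustrative, not the paper's data):

```python
import numpy as np

def auc_from_scores(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann–Whitney U statistic.

    scores_pos: discriminating score (e.g. max nuclear size) for
    positives (high-grade); scores_neg: for negatives (low-grade).
    """
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    # Fraction of (pos, neg) pairs ranked correctly; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 1.0 means every high-grade slide scores above every low-grade slide; the paper's 0.97 indicates near-perfect separation by maximum nuclear size.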

  16. Automated semantic indexing of imaging reports to support retrieval of medical images in the multimedia electronic medical record.

    Science.gov (United States)

    Lowe, H J; Antipov, I; Hersh, W; Smith, C A; Mailhot, M

    1999-12-01

    This paper describes preliminary work evaluating automated semantic indexing of radiology imaging reports to represent images stored in the Image Engine multimedia medical record system at the University of Pittsburgh Medical Center. The authors used the SAPHIRE indexing system to automatically identify important biomedical concepts within radiology reports and represent these concepts with terms from the 1998 edition of the U.S. National Library of Medicine's Unified Medical Language System (UMLS) Metathesaurus. This automated UMLS indexing was then compared with manual UMLS indexing of the same reports. Human indexing identified appropriate UMLS Metathesaurus descriptors for 81% of the important biomedical concepts contained in the report set. SAPHIRE automatically identified UMLS Metathesaurus descriptors for 64% of the important biomedical concepts contained in the report set. The overall conclusions of this pilot study were that the UMLS metathesaurus provided adequate coverage of the majority of the important concepts contained within the radiology report test set and that SAPHIRE could automatically identify and translate almost two thirds of these concepts into appropriate UMLS descriptors. Further work is required to improve both the recall and precision of this automated concept extraction process. PMID:10805018

  18. Automated Formosat Image Processing System for Rapid Response to International Disasters

    Science.gov (United States)

    Cheng, M. C.; Chou, S. C.; Chen, Y. C.; Chen, B.; Liu, C.; Yu, S. J.

    2016-06-01

    FORMOSAT-2, Taiwan's first remote sensing satellite, was successfully launched in May of 2004 into a Sun-synchronous orbit at 891 kilometers of altitude. With its daily revisit feature, the 2-m panchromatic and 8-m multi-spectral resolution images captured have been used for research and operations in various societal benefit areas. This paper details the orchestration of various tasks conducted in different institutions in Taiwan in the efforts responding to international disasters. The institutes involved include Taiwan's space agency, the National Space Organization (NSPO), the Center for Satellite Remote Sensing Research of National Central University, the GIS Center of Feng-Chia University, and the National Center for High-performance Computing. Since each institution has its own mandate, the coordinated tasks ranged from receiving emergency observation requests, scheduling and tasking of satellite operation, and downlink to ground stations, through image processing including data injection and ortho-rectification, to delivery of image products. With the lessons learned from working with international partners, the FORMOSAT Image Processing System has been extensively automated and streamlined with the goal of shortening the time between request and delivery. The integrated team has developed an Application Interface to its system platform that provides functions for searching the archive catalogue, requesting data services, mission planning, inquiring about service status, and image download. This automated system enables timely image acquisition and substantially increases the value of the data products. An example outcome of these efforts, the recent response supporting Sentinel Asia during the Nepal earthquake, is demonstrated herein.

  19. OpenComet: An automated tool for comet assay image analysis

    Directory of Open Access Journals (Sweden)

    Benjamin M. Gyori

    2014-01-01

    Full Text Available Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.
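OpenComet segments comet heads through image intensity profile analysis. A much-simplified sketch of that idea (a hedged illustration, not OpenComet's actual algorithm: it sums intensity column-wise, takes the global peak as the head centre, and grows outward while the profile stays above a fraction of the peak):

```python
import numpy as np

def head_extent(comet_img, frac=0.5):
    """Approximate the comet head's column range from the intensity profile.

    comet_img: 2D array of one comet's pixels, tail to the right.
    frac: profile threshold as a fraction of the peak (assumed value).
    Returns an inclusive (left, right) column range.
    """
    profile = comet_img.sum(axis=0).astype(float)
    peak = int(np.argmax(profile))
    thresh = frac * profile[peak]
    left = peak
    while left > 0 and profile[left - 1] >= thresh:
        left -= 1
    right = peak
    while right < len(profile) - 1 and profile[right + 1] >= thresh:
        right += 1
    return left, right
```

Tail intensity beyond `right` relative to the head is what comet metrics such as percent DNA in tail are built on.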

  20. Development of a methodology for automated assessment of the quality of digitized images in mammography

    International Nuclear Information System (INIS)

    The process of evaluating the quality of radiographic images in general, and mammography in particular, can be much more accurate, practical and fast with the help of computer analysis tools. The purpose of this study is to develop a computational methodology to automate the process of assessing the quality of mammography images through techniques of digital image processing (PDI), using an existing image processing environment (ImageJ). With the application of PDI techniques it was possible to extract geometric and radiometric characteristics of the evaluated images. The evaluated parameters include spatial resolution, high-contrast detail, low contrast threshold, linear detail of low contrast, tumor masses, contrast ratio and background optical density. The results obtained by this method were compared with the results presented in the visual evaluations performed by the Health Surveillance of Minas Gerais. Through this comparison it was possible to demonstrate that the automated methodology is a promising alternative for the reduction or elimination of the subjectivity in the visual assessment methodology currently in use. (author)
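One of the radiometric quantities such an automated QA pipeline can extract is a contrast measure between an object and its background. A sketch using the common contrast-to-noise-ratio convention (mean difference over background standard deviation — the definition is assumed here, not taken from the paper):

```python
import numpy as np

def contrast_to_noise_ratio(roi_object, roi_background):
    """CNR between an object ROI and a background ROI of a phantom image."""
    signal = roi_object.mean() - roi_background.mean()
    noise = roi_background.std()  # population std over the background ROI
    return abs(signal) / noise
```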

  1. An Automated System for the Detection of Stratified Squamous Epithelial Cancer Cell Using Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Ram Krishna Kumar

    2013-06-01

    Full Text Available Early detection of cancer is a difficult problem, and if it is not detected in its starting phase the cancer can be fatal. Current medical procedures used to diagnose cancer in body parts are time-consuming and require extensive laboratory work. This work is an endeavour towards the possible recognition of cancer cells in the affected body part. The process consists of taking images of the affected area and digitally processing those images to obtain a morphological pattern that differentiates normal cells from cancer cells. The technique differs from visual inspection and the biopsy process. Image processing enables the visualization of cellular structure with substantial resolution. The aim of the work is to exploit differences in cellular organization between cancerous and normal tissue using image processing techniques, thus allowing for automated, fast and accurate diagnosis.

  2. RootGraph: a graphic optimization tool for automated image analysis of plant roots.

    Science.gov (United States)

    Cai, Jinhai; Zeng, Zhanghui; Connor, Jason N; Huang, Chun Yuan; Melino, Vanessa; Kumar, Pankaj; Miklavcic, Stanley J

    2015-11-01

    This paper outlines a numerical scheme for accurate, detailed, and high-throughput image analysis of plant roots. In contrast to existing root image analysis tools that focus on root system-average traits, a novel, fully automated and robust approach for the detailed characterization of root traits, based on a graph optimization process is presented. The scheme, firstly, distinguishes primary roots from lateral roots and, secondly, quantifies a broad spectrum of root traits for each identified primary and lateral root. Thirdly, it associates lateral roots and their properties with the specific primary root from which the laterals emerge. The performance of this approach was evaluated through comparisons with other automated and semi-automated software solutions as well as against results based on manual measurements. The comparisons and subsequent application of the algorithm to an array of experimental data demonstrate that this method outperforms existing methods in terms of accuracy, robustness, and the ability to process root images under high-throughput conditions.

  3. Development of Raman microspectroscopy for automated detection and imaging of basal cell carcinoma

    Science.gov (United States)

    Larraona-Puy, Marta; Ghita, Adrian; Zoladek, Alina; Perkins, William; Varma, Sandeep; Leach, Iain H.; Koloydenko, Alexey A.; Williams, Hywel; Notingher, Ioan

    2009-09-01

    We investigate the potential of Raman microspectroscopy (RMS) for automated evaluation of excised skin tissue during Mohs micrographic surgery (MMS). The main aim is to develop an automated method for imaging and diagnosis of basal cell carcinoma (BCC) regions. Selected Raman bands responsible for the largest spectral differences between BCC and normal skin regions and linear discriminant analysis (LDA) are used to build a multivariate supervised classification model. The model is based on 329 Raman spectra measured on skin tissue obtained from 20 patients. BCC is discriminated from healthy tissue with 90±9% sensitivity and 85±9% specificity in a 70% to 30% split cross-validation algorithm. This multivariate model is then applied on tissue sections from new patients to image tumor regions. The RMS images show excellent correlation with the gold standard of histopathology sections, BCC being detected in all positive sections. We demonstrate the potential of RMS as an automated objective method for tumor evaluation during MMS. The replacement of current histopathology during MMS by a "generalization" of the proposed technique may improve the feasibility and efficacy of MMS, leading to a wider use according to clinical need.
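The LDA classifier above projects each spectrum's selected band intensities onto a discriminant axis. A two-class Fisher discriminant sketch (a hedged stand-in using synthetic feature vectors, not the paper's 329-spectrum model):

```python
import numpy as np

def fisher_lda(X_a, X_b):
    """Two-class Fisher linear discriminant.

    X_a, X_b: (n_samples, n_features) arrays of selected band intensities
    for the two classes. Returns the projection vector w and a midpoint
    decision threshold.
    """
    mu_a, mu_b = X_a.mean(axis=0), X_b.mean(axis=0)
    # Within-class scatter = sum of per-class scatter matrices.
    Sw = np.cov(X_a, rowvar=False) * (len(X_a) - 1) + \
         np.cov(X_b, rowvar=False) * (len(X_b) - 1)
    Sw += 1e-6 * np.eye(Sw.shape[0])  # regularize for numerical stability
    w = np.linalg.solve(Sw, mu_a - mu_b)
    threshold = 0.5 * (mu_a + mu_b) @ w
    return w, threshold

def classify(x, w, threshold):
    return "BCC" if x @ w > threshold else "normal"
```

In the paper's setting, a spectrum classified pixel-by-pixel over a raster scan yields the tumour image compared against histopathology.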

  4. Automated construction of arterial and venous trees in retinal images.

    Science.gov (United States)

    Hu, Qiao; Abràmoff, Michael D; Garvin, Mona K

    2015-10-01

    While many approaches exist to segment retinal vessels in fundus photographs, only a limited number focus on the construction and disambiguation of arterial and venous trees. Previous approaches are local and/or greedy in nature, making them susceptible to errors or limiting their applicability to large vessels. We propose a more global framework to generate arteriovenous trees in retinal images, given a vessel segmentation. In particular, our approach consists of three stages. The first stage is to generate an overconnected vessel network, named the vessel potential connectivity map (VPCM), consisting of vessel segments and the potential connectivity between them. The second stage is to disambiguate the VPCM into multiple anatomical trees, using a graph-based metaheuristic algorithm. The third stage is to classify these trees into arterial or venous (A/V) trees. We evaluated our approach with a ground truth built based on a public database, showing a pixel-wise classification accuracy of 88.15% using a manual vessel segmentation as input, and 86.11% using an automatic vessel segmentation as input. PMID:26636114

  5. Automated 3D-Objectdocumentation on the Base of an Image Set

    Directory of Open Access Journals (Sweden)

    Sebastian Vetter

    2011-12-01

    Full Text Available Digital stereo-photogrammetry allows users an automatic evaluation of the spatial dimension and the surface texture of objects. The integration of image analysis techniques simplifies the automation of evaluation of large image sets and offers a high accuracy [1]. Due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. With the help of an automated point search algorithm in image sets, identical points are used to associate pairs of images to stereo models and group them. The found identical points in all images are the basis for calculation of the relative orientation of each stereo model as well as for defining the relation of neighboured stereo models. By using proper filter strategies incorrect points are removed and the relative orientation of the stereo model can be computed automatically. With the help of 3D-reference points or distances at the object or a defined distance of the camera basis the stereo model is oriented absolutely. An adapted expansion- and matching algorithm offers the possibility to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on image quality. With the integration of the iterative closest point algorithm (ICP) these partial point clouds are fitted to a total point cloud. In this way, 3D-reference points are not necessary. With the help of the implemented triangulation algorithm a digital surface model (DSM) can be created. The texturing can be made automatically by the usage of the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full frame sensor a high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images.
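ICP alternates between matching each point to its nearest neighbour in the other cloud and solving for the rigid transform that best aligns the matched pairs. The core alignment step is the Kabsch/SVD solution, sketched here under given correspondences (an illustration, not the cited implementation):

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q.

    P, Q: (n, d) arrays of corresponding points. ICP re-runs this after
    each nearest-neighbour re-matching until the clouds converge.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```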

  6. Automated measurement of parameters related to the deformities of lower limbs based on X-ray images.

    Science.gov (United States)

    Wojciechowski, Wadim; Molka, Adrian; Tabor, Zbisław

    2016-03-01

    Measurement of the deformation of the lower limbs in current standard full-limb X-ray images presents significant challenges to radiologists and orthopedists. The precision of these measurements is deteriorated by inexact positioning of the leg during image acquisition, problems with selecting reliable anatomical landmarks in projective X-ray images, and inevitable errors of manual measurements. The influence of the random errors resulting from the last two factors on the precision of the measurement can be reduced if an automated measurement method is used instead of a manual one. In the paper a framework for automated measurement of various metric and angular quantities used in the description of lower extremity deformation in full-limb frontal X-ray images is described. The results of automated measurements are compared with manual measurements. These results demonstrate that an automated method can be a valuable alternative to manual measurements.
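Once landmarks are located, the angular quantities reduce to vector geometry. As a hedged illustration of one such measure (the hip-knee-ankle angle from three joint-centre landmarks; the specific quantities the paper measures are not listed here):

```python
import numpy as np

def hka_angle(hip, knee, ankle):
    """Hip–knee–ankle angle in degrees from three 2D landmark points.

    180° corresponds to a perfectly straight mechanical axis; deviations
    indicate varus/valgus deformity.
    """
    femoral = np.asarray(hip, float) - np.asarray(knee, float)
    tibial = np.asarray(ankle, float) - np.asarray(knee, float)
    cosang = femoral @ tibial / (np.linalg.norm(femoral) * np.linalg.norm(tibial))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

Automating the landmark detection removes the manual-digitization component of the random error discussed above; the angle computation itself is deterministic.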

  7. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    Science.gov (United States)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

    Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib-cage and spine. The problem is addressed in a model based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2 mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.

  8. Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness

    Science.gov (United States)

    Singh, Preetpal

    Canada is home to thousands of freshwater lakes and rivers. Apart from being sources of infinite natural beauty, rivers and lakes are an important source of water, food and transportation. The northern hemisphere of Canada experiences extreme cold temperatures in the winter, resulting in a freeze-up of regional lakes and rivers. Frozen lakes and rivers tend to offer unique opportunities in terms of wildlife harvesting and winter transportation. Ice roads built on frozen rivers and lakes are vital supply lines for industrial operations in the remote north. Monitoring the ice freeze-up and break-up dates annually can help predict regional climatic changes. Lake ice impacts a variety of physical, ecological and economic processes. The construction and maintenance of a winter road can cost millions of dollars annually. A good understanding of ice mechanics is required to build and deem an ice road safe. A crucial factor in calculating the load bearing capacity of ice sheets is the thickness of the ice. Construction costs are mainly attributed to producing and maintaining a specific thickness and density of ice that can support different loads. Climate change is leading to warmer temperatures, causing the ice to thin faster. At a certain point, a winter road may not be thick enough to support travel and transportation. There is considerable interest in monitoring winter road conditions given the high construction and maintenance costs involved. Remote sensing technologies such as Synthetic Aperture Radar have been successfully utilized to study the extent of ice covers and record freeze-up and break-up dates of ice on lakes and rivers across the north. Ice road builders have often used ultrasound equipment to measure ice thickness. However, an automated monitoring system based on machine vision and image processing technology that can measure ice thickness on lakes has not yet been developed.
Machine vision and image processing techniques have successfully been used in manufacturing

  9. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter

    2006-01-01

In production processes the use of image processing systems is widespread, and hardware solutions and cameras are available for nearly every application. One important challenge for image processing systems is the development and selection of appropriate algorithms and software solutions in order to realise ambitious quality control for production processes. This article characterises the development of innovative software that combines features for automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to execute the classification task according to the combined features. This software is one crucial element in the automation of a manually operated production process.
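The classification step described above can be sketched in miniature. The example below trains a linear SVM from scratch (sub-gradient descent on the hinge loss) on synthetic two-feature surface descriptors; the features, values and class labels are invented stand-ins for the article's combined features, and a production system would use a library implementation, possibly with a kernel, as the authors do.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic combined features for surface patches: [mean intensity, edge density]
scratches = rng.normal([0.3, 0.7], 0.05, size=(40, 2))
dents     = rng.normal([0.7, 0.3], 0.05, size=(40, 2))
X = np.vstack([scratches, dents])
y = np.array([-1.0] * 40 + [1.0] * 40)           # -1 = scratch, +1 = dent

# Minimal linear SVM: sub-gradient descent on hinge loss with L2 penalty.
w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
for epoch in range(200):
    for i in range(len(X)):
        margin = y[i] * (X[i] @ w + b)
        if margin < 1:                           # point violates the margin
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:                                    # only shrink (regularize)
            w -= lr * lam * w

predict = lambda p: 1 if p @ w + b > 0 else -1
print(predict(np.array([0.28, 0.72])))   # scratch-like sample -> -1
print(predict(np.array([0.72, 0.28])))   # dent-like sample    -> +1
```

In practice the combined feature vectors would come from the image processing stage, and the two-class toy setup would extend to one-vs-rest classification over all defect classes.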

  10. Automation of Axisymmetric Drop Shape Analysis Using Digital Image Processing

    Science.gov (United States)

    Cheng, Philip Wing Ping

The Axisymmetric Drop Shape Analysis - Profile (ADSA-P) technique, as initiated by Rotenberg, is a user-oriented scheme to determine liquid-fluid interfacial tensions and contact angles from the shape of axisymmetric menisci, i.e., from sessile as well as pendant drops. The ADSA-P program requires as input several coordinate points along the drop profile, the value of the density difference between the bulk phases, and gravity. The solution yields interfacial tension and contact angle. Although the ADSA-P technique was complete in principle, it was of very limited practical use. The major difficulty with the method is the need for very precise coordinate points along the drop profile, which, up to now, could not be obtained readily. In the past, the coordinate points along the drop profile were obtained by manual digitization of photographs or negatives. From manual digitization data, the surface tension values obtained had an average error of +/-5% when compared with literature values. Another problem with the ADSA-P technique was that the computer program failed to converge for the case of very elongated pendant drops. To acquire the drop profile coordinates automatically, a technique which utilizes recent developments in digital image acquisition and analysis was developed. In order to determine the drop profile coordinates as precisely as possible, errors due to optical distortions were eliminated, and determination of drop profile coordinates to pixel and sub-pixel resolution was developed. It was found that high precision could be obtained through the use of sub-pixel resolution and a spline fitting method. The results obtained using the automatic digitization technique in conjunction with ADSA-P not only compared well with the conventional methods, but also outstripped the precision of conventional methods considerably. To solve the convergence problem of very elongated pendant drops, it was found that the reason for the
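The sub-pixel step can be illustrated generically. The sketch below locates an intensity edge on a single scan line by fitting a parabola to the gradient magnitude around its peak; this is a common sub-pixel scheme and an assumption here, not the author's actual ADSA-P implementation (which also uses spline fitting).

```python
import numpy as np

def subpixel_edge(row):
    """Locate an intensity edge in a 1-D scan line to sub-pixel precision
    by fitting a parabola to the gradient magnitude around its peak."""
    g = np.abs(np.gradient(row.astype(float)))
    k = int(np.argmax(g))
    if k == 0 or k == len(g) - 1:
        return float(k)
    # Parabola through (k-1, g[k-1]), (k, g[k]), (k+1, g[k+1]); vertex offset:
    denom = g[k - 1] - 2 * g[k] + g[k + 1]
    offset = 0.0 if denom == 0 else 0.5 * (g[k - 1] - g[k + 1]) / denom
    return k + offset

# Synthetic scan line: a smooth dark-to-bright transition centered at x = 10.3
x = np.arange(20)
row = 1 / (1 + np.exp(-(x - 10.3)))
print(subpixel_edge(row))   # close to 10.3, between pixel positions
```

Repeating this along every scan line that crosses the drop boundary yields the dense, sub-pixel profile coordinates that ADSA-P needs as input.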

  11. Automated reconstruction of standing posture panoramas from multi-sector long limb x-ray images

    Science.gov (United States)

    Miller, Linzey; Trier, Caroline; Ben-Zikri, Yehuda K.; Linte, Cristian A.

    2016-03-01

Due to the digital X-ray imaging system's limited field of view, several individual sector images are required to capture the posture of an individual in standing position. These images are then "stitched together" to reconstruct the standing posture. We have created an image processing application that automates the stitching, thereby minimizing user input, optimizing workflow, and reducing human error. The application begins by pre-processing the input images: removing artifacts, filtering out isolated noisy regions, and amplifying a seamless bone edge. The resulting binary images are then registered together using a rigid-body intensity-based registration algorithm. The identified registration transformations are then used to map the original sector images into the panorama image. Our method relies primarily on the anatomical content of the images to generate the panoramas, as opposed to external markers employed to aid the alignment process. Current results show robust edge detection prior to registration, and we have tested our approach by comparing the resulting automatically stitched panoramas to manually stitched panoramas in terms of registration parameters, target registration error of homologous markers, and the homogeneity of the digitally subtracted automatically and manually stitched images, using 26 patient datasets.
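Rigid-body intensity-based registration is the core of the stitching step. As a simplified, translation-only illustration (not the authors' implementation, which also recovers rotation), phase correlation recovers the offset between two overlapping sectors:

```python
import numpy as np

def phase_correlation(a, b):
    """Recover the integer (row, col) shift of image b relative to a
    via the Fourier shift theorem (translation-only registration)."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(A) * B
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map large positive indices back to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
base = rng.random((64, 64))
shifted = np.roll(base, (5, -3), axis=(0, 1))   # simulate a sector offset
print(phase_correlation(base, shifted))          # recovers (5, -3)
```

In the full pipeline the recovered transformation would be applied to the original (non-binary) sector image when compositing the panorama.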

  12. Sfm_georef: Automating image measurement of ground control points for SfM-based projects

    Science.gov (United States)

    James, Mike R.

    2016-04-01

    Deriving accurate DEM and orthomosaic image products from UAV surveys generally involves the use of multiple ground control points (GCPs). Here, we demonstrate the automated collection of GCP image measurements for SfM-MVS processed projects, using sfm_georef software (James & Robson, 2012; http://www.lancaster.ac.uk/staff/jamesm/software/sfm_georef.htm). Sfm_georef was originally written to provide geo-referencing procedures for SfM-MVS projects. It has now been upgraded with a 3-D patch-based matching routine suitable for automating GCP image measurement in both aerial and ground-based (oblique) projects, with the aim of reducing the time required for accurate geo-referencing. Sfm_georef is compatible with a range of SfM-MVS software and imports the relevant files that describe the image network, including camera models and tie points. 3-D survey measurements of ground control are then provided, either for natural features or artificial targets distributed over the project area. Automated GCP image measurement is manually initiated through identifying a GCP position in an image by mouse click; the GCP is then represented by a square planar patch in 3-D, textured from the image and oriented parallel to the local topographic surface (as defined by the 3-D positions of nearby tie points). Other images are then automatically examined by projecting the patch into the images (to account for differences in viewing geometry) and carrying out a sub-pixel normalised cross-correlation search in the local area. With two or more observations of a GCP, its 3-D co-ordinates are then derived by ray intersection. With the 3-D positions of three or more GCPs identified, an initial geo-referencing transform can be derived to relate the SfM-MVS co-ordinate system to that of the GCPs. Then, if GCPs are symmetric and identical, image texture from one representative GCP can be used to search automatically for all others throughout the image set. Finally, the GCP observations can be
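The patch search described above rests on normalized cross-correlation. A minimal integer-pixel NCC search is sketched below; sfm_georef itself refines matches to sub-pixel precision and reprojects the patch for viewing geometry, which this sketch omits.

```python
import numpy as np

def ncc_search(image, template):
    """Find the (row, col) of the best normalized cross-correlation match
    of `template` inside `image` (integer-pixel version)."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tnorm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

rng = np.random.default_rng(2)
img = rng.random((40, 40))
tmpl = img[12:20, 25:33].copy()     # a "GCP" patch cut from the image
print(ncc_search(img, tmpl))        # peak at (12, 25), score near 1.0
```

With two or more such matches of the same GCP in different images, the 3-D position follows by ray intersection, as the abstract describes.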

  13. ATOM - an OMERO add-on for automated import of image data

    Directory of Open Access Journals (Sweden)

    Lipp Peter

    2011-10-01

Background: Modern microscope platforms are able to generate multiple gigabytes of image data in a single experimental session. In a routine research laboratory workflow, these data are initially stored on the local acquisition computer, from which files need to be transferred to the experimenter's (remote) image repository (e.g., DVDs, portable hard discs or server-based storage) because of limited local data storage. Although manual solutions for this migration exist, such as OMERO - a client-server software for visualising and managing large amounts of image data - this import process may be a time-consuming and tedious task. Findings: We have developed ATOM, a Java-based and thus platform-independent add-on for OMERO enabling automated transfer of image data from a wide variety of acquisition software packages into OMERO. ATOM provides a graphical user interface and allows pre-organisation of experimental data for the transfer. Conclusions: ATOM is a convenient extension of the OMERO software system. An automated interface to OMERO will be a useful tool for scientists working with file formats supported by Bio-Formats, a platform-independent library for reading the most common file formats of microscope images.

  14. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets.

    Science.gov (United States)

    Bhikha, Charita; Andreasen, Arne; Christensen, Erik I; Letts, Robyn F R; Pantanowitz, Adam; Rubin, David M; Thomsen, Jesper S; Zhai, Xiao-Yue

    2015-01-01

    An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron.

  16. Automated quantification technology for cerebrospinal fluid dynamics based on magnetic resonance image analysis

    International Nuclear Information System (INIS)

    Time-spatial labeling inversion pulse (Time-SLIP) technology, which is a non-contrast-enhanced magnetic resonance imaging (MRI) technology for the visualization of blood flow and cerebrospinal fluid (CSF) dynamics, is used for diagnosis of neurological diseases related to CSF including idiopathic normal-pressure hydrocephalus (iNPH), one of the causes of dementia. However, physicians must subjectively evaluate the velocity of CSF dynamics through observation of Time-SLIP images because no quantification technology exists that can express the values numerically. To address this issue, Toshiba, in cooperation with Toshiba Medical Systems Corporation and Toshiba Rinkan Hospital, has developed an automated quantification technology for CSF dynamics utilizing MR image analysis. We have confirmed the effectiveness of this technology through verification tests using a water phantom and quantification experiments using images of healthy volunteers. (author)

  17. Automated classification of optical coherence tomography images of human atrial tissue.

    Science.gov (United States)

    Gan, Yu; Tsay, David; Amir, Syed B; Marboe, Charles C; Hendon, Christine P

    2016-10-01

Tissue composition of the atria plays a critical role in the pathology of cardiovascular disease, tissue remodeling, and arrhythmogenic substrates. Optical coherence tomography (OCT) has the ability to capture the tissue composition information of the human atria. In this study, we developed a region-based automated method to classify tissue compositions of human atrial samples within OCT images. We segmented regional information without prior information about the tissue architecture and subsequently extracted features within each segmented region. A relevance vector machine model was used to perform automated classification. Segmentation of human atrial ex vivo datasets was correlated with trichrome histology, and our classification algorithm had an average accuracy of 80.41% for identifying adipose, myocardium, fibrotic myocardium, and collagen tissue compositions. PMID:26926869

  18. Automated Line Tracking of lambda-DNA for Single-Molecule Imaging

    CERN Document Server

    Guan, Juan; Granick, Steve

    2011-01-01

We describe a straightforward, automated line tracking method to visualize, within optical resolution, the contour of linear macromolecules as they change shape over time through Brownian diffusion and under external fields such as electrophoresis. Three sequential stages of analysis underpin this method: first, "feature finding" to discriminate signal from noise; second, "line tracking" to approximate those shapes as lines; third, a "temporal consistency check" to discriminate reasonable from unreasonable fitted conformations in the time domain. The automated nature of this data analysis makes it straightforward to accumulate vast quantities of data while excluding the unreliable parts. We implement the analysis on fluorescence images of lambda-DNA molecules in agarose gel to demonstrate its capability to produce large datasets for subsequent statistical analysis.

  19. Estimation of urinary stone composition by automated processing of CT images

    CERN Document Server

    Chevreau, Grégoire; Conort, Pierre; Renard-Penna, Raphaëlle; Mallet, Alain; Daudon, Michel; Mozer, Pierre; 10.1007/s00240-009-0195-3

    2009-01-01

The objective of this article was to develop an automated tool for routine clinical practice to estimate urinary stone composition from CT images based on the density of all constituent voxels. A total of 118 stones for which the composition had been determined by infrared spectroscopy were placed in a helical CT scanner. A standard acquisition and low-dose and high-dose acquisitions were performed. All voxels constituting each stone were automatically selected. A dissimilarity index evaluating variations of density around each voxel was created in order to minimize partial volume effects: stone composition was established on the basis of voxel density of homogeneous zones. Stone composition was determined in 52% of cases. Sensitivities for each compound were: uric acid: 65%, struvite: 19%, cystine: 78%, carbapatite: 33.5%, calcium oxalate dihydrate: 57%, calcium oxalate monohydrate: 66.5%, brushite: 75%. Low-dose acquisition did not lower the performance (P < 0.05). This entirely automated approach eliminat...
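The dissimilarity index is described only in outline, but the idea of excluding partial-volume voxels by local density variation can be sketched as follows; the window size and thresholds here are illustrative assumptions, and the example is 2-D for brevity.

```python
import numpy as np

def local_dissimilarity(density, win=3):
    """Standard deviation of densities in a win x win neighbourhood of each
    voxel (2-D sketch). High values flag partial-volume boundary voxels."""
    pad = win // 2
    p = np.pad(density.astype(float), pad, mode="edge")
    out = np.empty_like(density, dtype=float)
    for r in range(density.shape[0]):
        for c in range(density.shape[1]):
            out[r, c] = p[r:r + win, c:c + win].std()
    return out

# Toy "stone" slice: uniform core (1200 HU) against urine (0 HU)
slice_hu = np.zeros((12, 12))
slice_hu[3:9, 3:9] = 1200.0
d = local_dissimilarity(slice_hu)
homogeneous = (d < 50) & (slice_hu > 600)   # interior stone voxels only
print(slice_hu[homogeneous].mean())         # 1200.0: edge voxels excluded
```

Restricting the density statistics to such homogeneous zones is what lets the composition estimate sidestep partial-volume blurring at the stone surface.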

  20. A method for the automated detection of phishing websites through both site characteristics and image analysis

    Science.gov (United States)

    White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.

    2012-06-01

Phishing website analysis is largely still a time-consuming manual process of discovering potential phishing sites, verifying if suspicious sites truly are malicious spoofs and, if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values, such as the number of images and links. We also capture a screen-shot of the rendered page image, compute a hash of the image, and use the Hamming distance between these image hashes as a form of visual comparison. We provide initial results demonstrating the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot, and by performing initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as the resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
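The visual comparison hinges on an image hash and the Hamming distance between hashes. The sketch below uses an average hash as a stand-in (the abstract does not specify the hash function), with synthetic gradient arrays in place of rendered screenshots:

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Perceptual 'average hash' of a grayscale screenshot: downsample to
    hash_size x hash_size by block means, then threshold at the mean."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[:bh * hash_size, :bw * hash_size] \
        .reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))

# Stand-ins for rendered pages: a "page", a lightly altered "spoof" of it,
# and a visually unrelated page.
page = np.tile(np.linspace(0, 1, 256), (256, 1))   # horizontal gradient
spoof = page.copy()
spoof[:32, :32] = 1.0                              # small local change
unrelated = page.T.copy()                          # vertical gradient

print(hamming(average_hash(page), average_hash(spoof)))      # small (1 bit)
print(hamming(average_hash(page), average_hash(unrelated)))  # large (32 bits)
```

A small Hamming distance to a known phishing screenshot flags a likely variant of the same kit; a distance threshold would be tuned on labeled data.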

  1. An Automated Images-to-Graphs Framework for High Resolution Connectomics

    Directory of Open Access Journals (Sweden)

    William R Gray Roncal

    2015-08-01

Reconstructing a map of neuronal connectivity is a critical challenge in contemporary neuroscience. Recent advances in high-throughput serial section electron microscopy (EM) have produced massive 3D image volumes of nanoscale brain tissue for the first time. The resolution of EM allows individual neurons and their synaptic connections to be directly observed. Recovering neuronal networks by manually tracing each neuronal process at this scale is unmanageable, and therefore researchers are developing automated image processing modules. Thus far, state-of-the-art algorithms focus only on the solution to a particular task (e.g., neuron segmentation or synapse identification). In this manuscript we present the first fully automated images-to-graphs pipeline (i.e., a pipeline that begins with an imaged volume of neural tissue and produces a brain graph without any human interaction). To evaluate overall performance and select the best parameters and methods, we also develop a metric to assess the quality of the output graphs. We evaluate a set of algorithms and parameters, searching possible operating points to identify the best available brain graph for our assessment metric. Finally, we deploy a reference end-to-end version of the pipeline on a large, publicly available data set. This provides a baseline result and framework for community analysis and future algorithm development and testing. All code and data derivatives have been made publicly available toward eventually unlocking new biofidelic computational primitives and understanding of neuropathologies.

  2. MAGNETIC RESONANCE IMAGING COMPATIBLE ROBOTIC SYSTEM FOR FULLY AUTOMATED BRACHYTHERAPY SEED PLACEMENT

    Science.gov (United States)

    Muntener, Michael; Patriciu, Alexandru; Petrisor, Doru; Mazilu, Dumitru; Bagga, Herman; Kavoussi, Louis; Cleary, Kevin; Stoianovici, Dan

    2011-01-01

Objectives: To introduce the development of the first magnetic resonance imaging (MRI)-compatible robotic system capable of automated brachytherapy seed placement. Methods: An MRI-compatible robotic system was conceptualized and manufactured. The entire robot was built of nonmagnetic and dielectric materials. The key technology of the system is a unique pneumatic motor that was specifically developed for this application. Various preclinical experiments were performed to test the robot for precision and imager compatibility. Results: The robot was fully operational within all closed-bore MRI scanners. Compatibility tests in scanners of up to 7 Tesla field intensity showed no interference of the robot with the imager. Precision tests in tissue mockups yielded a mean seed placement error of 0.72 ± 0.36 mm. Conclusions: The robotic system is fully MRI compatible. The new technology allows for automated and highly accurate operation within MRI scanners and does not deteriorate the MRI quality. We believe that this robot may become a useful instrument for image-guided prostate interventions. PMID:17169653

  3. AUTOMATED DETECTION OF OIL DEPOTS FROM HIGH RESOLUTION IMAGES: A NEW PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    A. O. Ok

    2015-03-01

This paper presents an original approach to identify oil depots from single high-resolution aerial/satellite images in an automated manner. The new approach considers the symmetric nature of circular oil depots, and it computes the radial symmetry in a unique way. An automated thresholding method to focus on circular regions and a new measure to verify circles are proposed. Experiments are performed on six GeoEye-1 test images. In addition, we perform tests on 16 Google Earth images of an industrial test site acquired in a time-series manner (between the years 1995 and 2012). The results reveal that our approach is capable of detecting circular objects in very different/difficult images. We computed an overall performance of 95.8% for the GeoEye-1 dataset. The time-series investigation reveals that our approach is robust enough to locate oil depots in industrial environments under varying illumination and environmental conditions. The overall performance is computed as 89.4% for the Google Earth dataset, and this result confirms the success of our approach compared to a state-of-the-art approach.

  4. Fully automated segmentation of left ventricle using dual dynamic programming in cardiac cine MR images

    Science.gov (United States)

    Jiang, Luan; Ling, Shan; Li, Qiang

    2016-03-01

Cardiovascular diseases are becoming a leading cause of death all over the world. Cardiac function can be evaluated by global and regional parameters of the left ventricle (LV) of the heart. The purpose of this study is to develop and evaluate a fully automated scheme for segmentation of the LV in short-axis cardiac cine MR images. Our fully automated method consists of three major steps, i.e., LV localization, LV segmentation at the end-diastolic phase, and LV segmentation propagation to the other phases. First, the maximum intensity projection image along the time phases of the midventricular slice, located at the center of the image, was calculated to locate the region of interest of the LV. Based on the mean intensity of the roughly segmented blood pool in the midventricular slice at each phase, end-diastolic (ED) and end-systolic (ES) phases were determined. Second, the endocardial and epicardial boundaries of the LV of each slice at the ED phase were synchronously delineated by use of a dual dynamic programming technique. The external costs of the endocardial and epicardial boundaries were defined with the gradient values obtained from the original and enhanced images, respectively. Finally, taking advantage of the continuity of the boundaries of the LV across adjacent phases, we propagated the LV segmentation from the ED phase to the other phases by use of the dual dynamic programming technique. The preliminary results on 9 clinical cardiac cine MR cases show that the proposed method can obtain accurate segmentation of the LV based on subjective evaluation.
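Dual dynamic programming delineates two coupled boundaries at once; the single-boundary core of such a scheme can be sketched as a minimal-cost path traced column by column. The cost image, smoothness constraint and toy data below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def dp_boundary(cost, max_jump=1):
    """Trace one minimal-cost boundary left-to-right through a cost image,
    letting the row change by at most `max_jump` between columns.
    (The paper couples two such boundaries; this is the single-boundary core.)"""
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = int(np.argmin(acc[lo:hi, c - 1])) + lo
            acc[r, c] = acc[prev, c - 1] + cost[r, c]
            back[r, c] = prev
    # Backtrack from the cheapest endpoint in the last column
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]

# Toy edge image: a bright boundary drifting from row 2 to row 5
img = np.zeros((8, 10))
true_rows = [2, 2, 3, 3, 4, 4, 4, 5, 5, 5]
for c, r in enumerate(true_rows):
    img[r, c] = 1.0
boundary = dp_boundary(-img)    # low cost where the edge is bright
print(boundary)                 # recovers true_rows
```

In the paper's setting the external cost comes from image gradients (original and enhanced images for endo- and epicardium), and the two paths are optimized jointly rather than independently.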

  5. Automated Adaptive Brightness in Wireless Capsule Endoscopy Using Image Segmentation and Sigmoid Function.

    Science.gov (United States)

    Shrestha, Ravi; Mohammed, Shahed K; Hasan, Md Mehedi; Zhang, Xuechao; Wahid, Khan A

    2016-08-01

Wireless capsule endoscopy (WCE) plays an important role in the diagnosis of gastrointestinal (GI) diseases by capturing images of the human small intestine. Accurate diagnosis of endoscopic images depends heavily on the quality of the captured images. Along with frame rate, the brightness of the image is an important parameter that influences image quality, which motivates the design of an efficient illumination system. Such a design involves the choice and placement of a proper light source and its ability to illuminate the GI surface with proper brightness. Light-emitting diodes (LEDs) are normally used as sources, where modulated pulses control the LEDs' brightness. In practice, instances of under- and over-illumination are very common in WCE, where the former produces dark images and the latter produces bright images with high power consumption. In this paper, we propose a low-power and efficient illumination system that is based on an automated brightness algorithm. The scheme is adaptive in nature, i.e., the brightness level is controlled automatically in real time while the images are being captured. The captured images are segmented into four equal regions and the brightness level of each region is calculated. Then an adaptive sigmoid function is used to find the optimized brightness level, and accordingly a new value of the duty cycle of the modulated pulse is generated to capture future images. The algorithm is fully implemented in a capsule prototype and tested with endoscopic images. Commercial capsules such as Pillcam and Mirocam were also used in the experiment. The results show that the proposed algorithm works well in controlling the brightness level according to the environmental condition, and as a result, good-quality images are captured with an average brightness level of 40%, which saves power consumption of the capsule. PMID:27333609
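The adaptive brightness loop can be sketched as follows; the region pooling, sigmoid gain and duty-cycle limits are illustrative assumptions, not the constants from the paper:

```python
import numpy as np

def led_duty_cycle(frame, d_min=0.05, d_max=0.9, target=0.4, gain=10.0):
    """Pick the next LED duty cycle from the current frame's brightness.
    The frame is split into four equal regions; each region's mean
    brightness (0..1) is pooled, and a sigmoid maps the error relative to
    the target level to a duty cycle between d_min and d_max."""
    h, w = frame.shape
    regions = [frame[:h // 2, :w // 2], frame[:h // 2, w // 2:],
               frame[h // 2:, :w // 2], frame[h // 2:, w // 2:]]
    brightness = np.mean([r.mean() for r in regions])
    error = target - brightness                 # > 0 means image too dark
    duty = d_min + (d_max - d_min) / (1 + np.exp(-gain * error))
    return float(duty)

dark   = np.full((64, 64), 0.10)   # under-illuminated frame
good   = np.full((64, 64), 0.40)   # frame at the target level
bright = np.full((64, 64), 0.85)   # over-illuminated frame
print(led_duty_cycle(dark), led_duty_cycle(good), led_duty_cycle(bright))
```

A dark frame drives the duty cycle up and an over-bright frame drives it down, while the sigmoid keeps the command smooth and bounded, which is the behavior the abstract attributes to the capsule firmware.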

  6. Detailed interrogation of trypanosome cell biology via differential organelle staining and automated image analysis

    Directory of Open Access Journals (Sweden)

    Wheeler Richard J

    2012-01-01

Background: Many trypanosomatid protozoa are important human or animal pathogens. The well-defined morphology and precisely choreographed division of trypanosomatid cells makes morphological analysis a powerful tool for analyzing the effect of mutations, chemical insults and changes between lifecycle stages. High-throughput image analysis of micrographs has the potential to accelerate collection of quantitative morphological data. Trypanosomatid cells have two large DNA-containing organelles, the kinetoplast (mitochondrial DNA) and the nucleus, which provide useful markers for morphometric analysis; however, they need to be accurately identified and often lie in close proximity, which presents a technical challenge. Accurate identification and quantitation of the DNA content of these organelles is a central requirement of any automated analysis method. Results: We have developed a technique based on double staining of the DNA with a minor groove-binding stain (4',6-diamidino-2-phenylindole, DAPI) and a base-pair-intercalating stain (propidium iodide, PI, or SYBR Green), followed by color deconvolution. This allows the identification of kinetoplast and nuclear DNA in the micrograph based on whether the organelle has DNA with a more A-T or G-C rich composition. Following unambiguous identification of the kinetoplasts and nuclei, the resulting images are amenable to quantitative automated analysis of kinetoplast and nucleus number and DNA content. On this foundation we have developed a demonstrative analysis tool capable of automatically measuring kinetoplast and nucleus DNA content, size and position, and cell body shape, length and width. Conclusions: Our approach to DNA staining and automated quantitative analysis of trypanosomatid morphology accelerates analysis of trypanosomatid protozoa. We have validated this approach using Leishmania mexicana, Crithidia fasciculata and wild-type and mutant Trypanosoma brucei. Automated analysis of T. brucei

  7. Benchmarking, Research, Development, and Support for ORNL Automated Image and Signature Retrieval (AIR/ASR) Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, K.W.

    2004-06-01

    This report describes the results of a Cooperative Research and Development Agreement (CRADA) with Applied Materials, Inc. (AMAT) of Santa Clara, California. This project encompassed the continued development and integration of the ORNL Automated Image Retrieval (AIR) technology, and an extension of the technology denoted Automated Signature Retrieval (ASR), and other related technologies with the Defect Source Identification (DSI) software system that was under development by AMAT at the time this work was performed. In the semiconductor manufacturing environment, defect imagery is used to diagnose problems in the manufacturing line, train yield management engineers, and examine historical data for trends. Image management in semiconductor data systems is a growing cause of concern in the industry as fabricators are now collecting up to 20,000 images each week. In response to this concern, researchers at the Oak Ridge National Laboratory (ORNL) developed a semiconductor-specific content-based image retrieval method and system, also known as AIR. The system uses an image-based query-by-example method to locate and retrieve similar imagery from a database of digital imagery using visual image characteristics. The query method is based on a unique architecture that takes advantage of the statistical, morphological, and structural characteristics of image data, generated by inspection equipment in industrial applications. The system improves the manufacturing process by allowing rapid access to historical records of similar events so that errant process equipment can be isolated and corrective actions can be quickly taken to improve yield. The combined ORNL and AMAT technology is referred to hereafter as DSI-AIR and DSI-ASR.
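Query-by-example retrieval of the AIR kind can be illustrated with a deliberately crude signature; the intensity histogram feature and chi-square ranking below are generic stand-ins for ORNL's statistical, morphological and structural descriptors:

```python
import numpy as np

def signature(img, bins=16):
    """Grayscale intensity histogram as a crude image signature."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def retrieve(query, database):
    """Rank database images by chi-square distance to the query signature
    (query-by-example: nearest signatures come first)."""
    q = signature(query)
    def chi2(img):
        s = signature(img)
        return 0.5 * np.sum((q - s) ** 2 / (q + s + 1e-12))
    return sorted(range(len(database)), key=lambda i: chi2(database[i]))

rng = np.random.default_rng(4)
# Two synthetic "defect classes": dark patches vs. bright patches
defect_a = np.clip(rng.normal(0.3, 0.05, (32, 32)), 0, 1)
defect_b = np.clip(rng.normal(0.7, 0.05, (32, 32)), 0, 1)
db = [np.clip(rng.normal(m, 0.05, (32, 32)), 0, 1) for m in (0.7, 0.3, 0.7)]
print(retrieve(defect_a, db))   # the dark database image (index 1) ranks first
```

In a fabrication setting the query would be a newly captured defect image, and the top-ranked historical matches would point engineers at the process step that produced similar defects before.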

  8. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Ani eEloyan

    2012-08-01

Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions, CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry and support vector machines, as well as important insights into the value, and potential lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting, and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.

  9. Computer-assisted scheme for automated determination of imaging planes in cervical spinal cord MRI

    Science.gov (United States)

    Tsurumaki, Masaki; Tsai, Du-Yih; Lee, Yongbum; Sekiya, Masaru; Kazama, Kiyoko

    2009-02-01

    This paper presents a computerized scheme to assist MRI operators in accurate and rapid determination of sagittal sections for MRI exam of cervical spinal cord. The algorithm of the proposed scheme consisted of 6 steps: (1) extraction of a cervical vertebra containing spinal cord from an axial localizer image; (2) extraction of spinal cord with sagittal image from the extracted vertebra; (3) selection of a series of coronal localizer images corresponding to various, involved portions of the extracted spinal cord with sagittal image; (4) generation of a composite coronal-plane image from the obtained coronal images; (5) extraction of spinal cord from the obtained composite image; (6) determination of oblique sagittal sections from the detected location and gradient of the extracted spinal cord. Cervical spine images obtained from 25 healthy volunteers were used for the study. A perceptual evaluation was performed by five experienced MRI operators. Good agreement between the automated and manual determinations was achieved. By use of the proposed scheme, average execution time was reduced from 39 seconds/case to 1 second/case. The results demonstrate that the proposed scheme can assist MRI operators in performing cervical spinal cord MRI exam accurately and rapidly.
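Step (6) above, deriving an oblique sagittal section from the location and gradient of the extracted spinal cord, amounts in essence to a line fit on the composite coronal image. The sketch below is a hypothetical illustration of that geometry, not the authors' implementation:

```python
import numpy as np

def oblique_sagittal_angle(xs, ys):
    """Fit a straight line x = a*y + b to spinal-cord centre points detected on a
    composite coronal image (y: cranio-caudal position, x: left-right position)
    and return the section tilt angle in degrees plus the line offset."""
    a, b = np.polyfit(ys, xs, 1)        # least-squares slope and intercept
    return np.degrees(np.arctan(a)), b
```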

  10. Newly found pulmonary pathophysiology from automated breath-hold perfusion-SPECT-CT fusion image

    International Nuclear Information System (INIS)

Pulmonary perfusion single photon emission computed tomography (SPECT)-CT fusion imaging contributes greatly to objective, detailed correlation between lung morphology and perfusion impairment in various lung diseases. However, traditional perfusion SPECT acquired during resting breathing usually shows significant mis-registration on fusion images with conventional CT acquired at the deep-inspiratory phase. Respiratory lung motion causes further adverse effects, such as blurring or smearing of small perfusion defects. To overcome these disadvantages of traditional perfusion SPECT, an innovative deep-inspiratory breath-hold (DIBrH) SPECT scanning method was developed at the Nuclear Medicine Institute of Yamaguchi University Hospital. This review article briefly describes the new findings on pulmonary pathophysiology revealed by detailed lung morphologic-perfusion correlation on automated, reliable DIBrH perfusion SPECT-CT fusion images. (author)

  11. Results of Automated Retinal Image Analysis for Detection of Diabetic Retinopathy from the Nakuru Study, Kenya

    DEFF Research Database (Denmark)

    Juul Bøgelund Hansen, Morten; Abramoff, M. D.; Folk, J. C.;

    2015-01-01

    Objective Digital retinal imaging is an established method of screening for diabetic retinopathy (DR). It has been established that currently about 1% of the world's blind or visually impaired is due to DR. However, the increasing prevalence of diabetes mellitus and DR is creating an increased...... workload on those with expertise in grading retinal images. Safe and reliable automated analysis of retinal images may support screening services worldwide. This study aimed to compare the Iowa Detection Program (IDP) ability to detect diabetic eye diseases (DED) to human grading carried out at Moorfields...... gave an AUC of 0.878 (95% CI 0.850-0.905). It showed a negative predictive value of 98%. The IDP missed no vision threatening retinopathy in any patients and none of the false negative cases met criteria for treatment. Conclusions In this epidemiological sample, the IDP's grading was comparable...

  12. Automated system for acquisition and image processing for the control and monitoring boned nopal

    Science.gov (United States)

    Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.

    2013-11-01

This paper describes the design and fabrication of an image acquisition and processing system that controls the removal of thorns from the nopal vegetable (Opuntia ficus indica) in an automated machine using pulses from an Nd:YAG laser. The areolas, the areas on the bark of the nopal where thorns grow, are located by applying segmentation algorithms to images obtained with a CCD. Once the positions of the areolas are known, their coordinates are sent to a motor system that steers the laser to every areola and removes the thorns. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware performs the acquisition, preprocessing, segmentation, recognition and interpretation of the areolas. The system succeeds in identifying the areolas and generating a table of their coordinates, which is sent to the galvo motor system that steers the laser for thorn removal.

  13. The use of the Kalman filter in the automated segmentation of EIT lung images.

    Science.gov (United States)

    Zifan, A; Liatsis, P; Chapman, B E

    2013-06-01

    In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time, low spatial but high temporal resolution images of impedance inside a body. Recovering impedance itself constitutes a nonlinear ill-posed inverse problem, therefore the problem is usually linearized, which produces impedance-change images, rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide a mathematical reasoning behind the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. Next, we use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Next, we proceed with augmenting the Kalman filter by incorporating an adaptive foreground detection system to provide the boundary contours for the Kalman filter to carry out the tracking of the conductivity changes as the lungs undergo deformation in a respiratory cycle. The proposed method has been validated by using performance statistics such as misclassified area, and false positive rate, and compared to previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging.
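The predict/update cycle that the tracking step builds on can be conveyed by a minimal scalar constant-velocity Kalman filter. The state model and noise settings below are illustrative, not the paper's:

```python
import numpy as np

def kalman_track(zs, q=1e-3, r=0.1):
    """Constant-velocity Kalman filter over a scalar measurement sequence zs
    (e.g. a conductivity value or boundary coordinate over a breathing cycle)."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: value + rate of change
    H = np.array([[1.0, 0.0]])               # we observe the value only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([[zs[0]], [0.0]])           # initial state
    P = np.eye(2)                            # initial state covariance
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - (H @ x)[0, 0]                # innovation
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T / S[0, 0]                # Kalman gain
        x = x + K * y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)
```

The filtered sequence is smoother than the raw measurements while still following slow deformation, which is why the filter suits the blurry, low-SNR boundaries of linearized EIT images.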

  14. The use of the Kalman filter in the automated segmentation of EIT lung images

    International Nuclear Information System (INIS)

    In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time, low spatial but high temporal resolution images of impedance inside a body. Recovering impedance itself constitutes a nonlinear ill-posed inverse problem, therefore the problem is usually linearized, which produces impedance-change images, rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide a mathematical reasoning behind the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. Next, we use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Next, we proceed with augmenting the Kalman filter by incorporating an adaptive foreground detection system to provide the boundary contours for the Kalman filter to carry out the tracking of the conductivity changes as the lungs undergo deformation in a respiratory cycle. The proposed method has been validated by using performance statistics such as misclassified area, and false positive rate, and compared to previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging. (paper)

  15. Advances in hardware, software, and automation for 193nm aerial image measurement systems

    Science.gov (United States)

    Zibold, Axel M.; Schmid, R.; Seyfarth, A.; Waechter, M.; Harnisch, W.; Doornmalen, H. v.

    2005-05-01

    A new, second generation AIMS fab 193 system has been developed which is capable of emulating lithographic imaging of any type of reticles such as binary and phase shift masks (PSM) including resolution enhancement technologies (RET) such as optical proximity correction (OPC) or scatter bars. The system emulates the imaging process by adjustment of the lithography equivalent illumination and imaging conditions of 193nm wafer steppers including circular, annular, dipole and quadrupole type illumination modes. The AIMS fab 193 allows a rapid prediction of wafer printability of critical mask features, including dense patterns and contacts, defects or repairs by acquiring through-focus image stacks by means of a CCD camera followed by quantitative image analysis. Moreover the technology can be readily applied to directly determine the process window of a given mask under stepper imaging conditions. Since data acquisition is performed electronically, AIMS in many applications replaces the need for costly and time consuming wafer prints using a wafer stepper/ scanner followed by CD SEM resist or wafer analysis. The AIMS fab 193 second generation system is designed for 193nm lithography mask printing predictability down to the 65nm node. In addition to hardware improvements a new modular AIMS software is introduced allowing for a fully automated operation mode. Multiple pre-defined points can be visited and through-focus AIMS measurements can be executed automatically in a recipe based mode. To increase the effectiveness of the automated operation mode, the throughput of the system to locate the area of interest, and to acquire the through-focus images is increased by almost a factor of two in comparison with the first generation AIMS systems. In addition a new software plug-in concept is realised for the tools. 
One new feature has been successfully introduced as "Global CD Map", enabling automated investigation of global mask quality based on the local determination of

  16. Microbleed detection using automated segmentation (MIDAS: a new method applicable to standard clinical MR images.

    Directory of Open Access Journals (Sweden)

    Mohamed L Seghier

Full Text Available BACKGROUND: Cerebral microbleeds, visible on gradient-recalled echo (GRE) T2* MRI, have generated increasing interest as an imaging marker of small vessel diseases, with relevance for intracerebral bleeding risk or brain dysfunction. METHODOLOGY/PRINCIPAL FINDINGS: Manual rating methods have limited reliability and are time-consuming. We developed a new method for microbleed detection using automated segmentation (MIDAS) and compared it with a validated visual rating system. In thirty consecutive stroke service patients, standard GRE T2* images were acquired and manually rated for microbleeds by a trained observer. After spatially normalizing each patient's GRE T2* images into a standard stereotaxic space, the automated microbleed detection algorithm (MIDAS) identified cerebral microbleeds by explicitly incorporating an "extra" tissue class for abnormal voxels within a unified segmentation-normalization model. The agreement between manual and automated methods was assessed using the intraclass correlation coefficient (ICC) and Kappa statistic. We found that MIDAS had generally moderate to good agreement with the manual reference method for the presence of lobar microbleeds (Kappa = 0.43, improved to 0.65 after manual exclusion of obvious artefacts). Agreement for the number of microbleeds was very good for lobar regions (ICC = 0.71, improved to ICC = 0.87). MIDAS successfully detected all patients with multiple (≥2) lobar microbleeds. CONCLUSIONS/SIGNIFICANCE: MIDAS can identify microbleeds on standard MR datasets, and with an additional rapid editing step shows good agreement with a validated visual rating system. MIDAS may be useful in screening for multiple lobar microbleeds.
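The Kappa statistic used in the agreement analysis can be computed directly from two raters' labels. This is the standard Cohen's kappa formula, shown as a generic sketch (the ICC would be obtained analogously from a variance decomposition):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                            # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)       # chance agreement
    return (po - pe) / (1.0 - pe)
```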

  17. Automating quality assurance of digital linear accelerators using a radioluminescent phosphor coated phantom and optical imaging

    Science.gov (United States)

    Jenkins, Cesare H.; Naczynski, Dominik J.; Yu, Shu-Jung S.; Yang, Yong; Xing, Lei

    2016-09-01

    Performing mechanical and geometric quality assurance (QA) tests for medical linear accelerators (LINAC) is a predominantly manual process that consumes significant time and resources. In order to alleviate this burden this study proposes a novel strategy to automate the process of performing these tests. The autonomous QA system consists of three parts: (1) a customized phantom coated with radioluminescent material; (2) an optical imaging system capable of visualizing the incidence of the radiation beam, light field or lasers on the phantom; and (3) software to process the captured signals. The radioluminescent phantom, which enables visualization of the radiation beam on the same surface as the light field and lasers, is placed on the couch and imaged while a predefined treatment plan is delivered from the LINAC. The captured images are then processed to self-calibrate the system and perform measurements for evaluating light field/radiation coincidence, jaw position indicators, cross-hair centering, treatment couch position indicators and localizing laser alignment. System accuracy is probed by intentionally introducing errors and by comparing with current clinical methods. The accuracy of self-calibration is evaluated by examining measurement repeatability under fixed and variable phantom setups. The integrated system was able to automatically collect, analyze and report the results for the mechanical alignment tests specified by TG-142. The average difference between introduced and measured errors was 0.13 mm. The system was shown to be consistent with current techniques. Measurement variability increased slightly from 0.1 mm to 0.2 mm when the phantom setup was varied, but no significant difference in the mean measurement value was detected. Total measurement time was less than 10 minutes for all tests as a result of automation. The system’s unique features of a phosphor-coated phantom and fully automated, operator independent self-calibration offer the

  18. Automating quality assurance of digital linear accelerators using a radioluminescent phosphor coated phantom and optical imaging.

    Science.gov (United States)

    Jenkins, Cesare H; Naczynski, Dominik J; Yu, Shu-Jung S; Yang, Yong; Xing, Lei

    2016-09-01

    Performing mechanical and geometric quality assurance (QA) tests for medical linear accelerators (LINAC) is a predominantly manual process that consumes significant time and resources. In order to alleviate this burden this study proposes a novel strategy to automate the process of performing these tests. The autonomous QA system consists of three parts: (1) a customized phantom coated with radioluminescent material; (2) an optical imaging system capable of visualizing the incidence of the radiation beam, light field or lasers on the phantom; and (3) software to process the captured signals. The radioluminescent phantom, which enables visualization of the radiation beam on the same surface as the light field and lasers, is placed on the couch and imaged while a predefined treatment plan is delivered from the LINAC. The captured images are then processed to self-calibrate the system and perform measurements for evaluating light field/radiation coincidence, jaw position indicators, cross-hair centering, treatment couch position indicators and localizing laser alignment. System accuracy is probed by intentionally introducing errors and by comparing with current clinical methods. The accuracy of self-calibration is evaluated by examining measurement repeatability under fixed and variable phantom setups. The integrated system was able to automatically collect, analyze and report the results for the mechanical alignment tests specified by TG-142. The average difference between introduced and measured errors was 0.13 mm. The system was shown to be consistent with current techniques. Measurement variability increased slightly from 0.1 mm to 0.2 mm when the phantom setup was varied, but no significant difference in the mean measurement value was detected. Total measurement time was less than 10 minutes for all tests as a result of automation. The system's unique features of a phosphor-coated phantom and fully automated, operator independent self-calibration offer the

  19. Fully automated quantitative analysis of breast cancer risk in DCE-MR images

    Science.gov (United States)

    Jiang, Luan; Hu, Xiaoxin; Gu, Yajia; Li, Qiang

    2015-03-01

    Amount of fibroglandular tissue (FGT) and background parenchymal enhancement (BPE) in dynamic contrast enhanced magnetic resonance (DCE-MR) images are two important indices for breast cancer risk assessment in the clinical practice. The purpose of this study is to develop and evaluate a fully automated scheme for quantitative analysis of FGT and BPE in DCE-MR images. Our fully automated method consists of three steps, i.e., segmentation of whole breast, fibroglandular tissues, and enhanced fibroglandular tissues. Based on the volume of interest extracted automatically, dynamic programming method was applied in each 2-D slice of a 3-D MR scan to delineate the chest wall and breast skin line for segmenting the whole breast. This step took advantages of the continuity of chest wall and breast skin line across adjacent slices. We then further used fuzzy c-means clustering method with automatic selection of cluster number for segmenting the fibroglandular tissues within the segmented whole breast area. Finally, a statistical method was used to set a threshold based on the estimated noise level for segmenting the enhanced fibroglandular tissues in the subtraction images of pre- and post-contrast MR scans. Based on the segmented whole breast, fibroglandular tissues, and enhanced fibroglandular tissues, FGT and BPE were automatically computed. Preliminary results of technical evaluation and clinical validation showed that our fully automated scheme could obtain good segmentation of the whole breast, fibroglandular tissues, and enhanced fibroglandular tissues to achieve accurate assessment of FGT and BPE for quantitative analysis of breast cancer risk.
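The fibroglandular-tissue step relies on fuzzy c-means clustering. A minimal 1-D version of the algorithm, with a fixed cluster number rather than the automatic selection described above, looks like this:

```python
import numpy as np

def fuzzy_cmeans(x, c, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means on 1-D intensities x: alternate between
    membership-weighted centre updates and membership recomputation."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                               # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m                                  # fuzzified memberships
        centers = (um @ x) / um.sum(axis=1)          # membership-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))                  # inverse-distance memberships
        u /= u.sum(axis=0)
    return centers, u
```

Applied to breast MR voxel intensities, the cluster with the appropriate centre would correspond to fibroglandular tissue; the soft memberships allow partial-volume voxels to be handled gracefully.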

  20. Automated measurement of CT noise in patient images with a novel structure coherence feature

    International Nuclear Information System (INIS)

    While the assessment of CT noise constitutes an important task for the optimization of scan protocols in clinical routine, the majority of noise measurements in practice still rely on manual operation, hence limiting their efficiency and reliability. This study presents an algorithm for the automated measurement of CT noise in patient images with a novel structure coherence feature. The proposed algorithm consists of a four-step procedure including subcutaneous fat tissue selection, the calculation of structure coherence feature, the determination of homogeneous ROIs, and the estimation of the average noise level. In an evaluation with 94 CT scans (16 517 images) of pediatric and adult patients along with the participation of two radiologists, ROIs were placed on a homogeneous fat region at 99.46% accuracy, and the agreement of the automated noise measurements with the radiologists’ reference noise measurements (PCC  =  0.86) was substantially higher than the within and between-rater agreements of noise measurements (PCCwithin  =  0.75, PCCbetween  =  0.70). In addition, the absolute noise level measurements matched closely the theoretical noise levels generated by a reduced-dose simulation technique. Our proposed algorithm has the potential to be used for examining the appropriateness of radiation dose and the image quality of CT protocols for research purposes as well as clinical routine. (paper)
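The combination of a structure-coherence test with ROI-based noise estimation can be sketched as follows. The ROI size, coherence threshold and use of the median are illustrative choices, not the published algorithm's parameters:

```python
import numpy as np

def estimate_noise(img, roi=16, coh_max=0.2):
    """Estimate image noise as the median standard deviation over square ROIs
    whose structure-tensor coherence is low, i.e. ROIs without directed structure."""
    gy, gx = np.gradient(img.astype(float))
    stds = []
    H, W = img.shape
    for i in range(0, H - roi + 1, roi):
        for j in range(0, W - roi + 1, roi):
            px = gx[i:i + roi, j:j + roi]
            py = gy[i:i + roi, j:j + roi]
            jxx, jyy, jxy = (px * px).sum(), (py * py).sum(), (px * py).sum()
            tr = jxx + jyy
            # coherence = (l1 - l2) / (l1 + l2) of the 2x2 structure tensor
            coh = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2) / tr if tr > 0 else 0.0
            if coh < coh_max:
                stds.append(img[i:i + roi, j:j + roi].std())
    return float(np.median(stds)) if stds else float("nan")
```

ROIs containing edges or anatomy produce anisotropic gradients and a coherence near 1, so they are excluded; only flat, noise-dominated regions contribute to the estimate.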

  1. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    Science.gov (United States)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is reducing the large number of false positives, many of which originate from acoustic shadowing caused by ribs. Determining the location of the chest wall in ABUS is therefore necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that incorporates a region cost term computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images on which our previously developed method fails. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

  2. Automated measurement of CT noise in patient images with a novel structure coherence feature

    Science.gov (United States)

    Chun, Minsoo; Choi, Young Hun; Hyo Kim, Jong

    2015-12-01

    While the assessment of CT noise constitutes an important task for the optimization of scan protocols in clinical routine, the majority of noise measurements in practice still rely on manual operation, hence limiting their efficiency and reliability. This study presents an algorithm for the automated measurement of CT noise in patient images with a novel structure coherence feature. The proposed algorithm consists of a four-step procedure including subcutaneous fat tissue selection, the calculation of structure coherence feature, the determination of homogeneous ROIs, and the estimation of the average noise level. In an evaluation with 94 CT scans (16 517 images) of pediatric and adult patients along with the participation of two radiologists, ROIs were placed on a homogeneous fat region at 99.46% accuracy, and the agreement of the automated noise measurements with the radiologists’ reference noise measurements (PCC  =  0.86) was substantially higher than the within and between-rater agreements of noise measurements (PCCwithin  =  0.75, PCCbetween  =  0.70). In addition, the absolute noise level measurements matched closely the theoretical noise levels generated by a reduced-dose simulation technique. Our proposed algorithm has the potential to be used for examining the appropriateness of radiation dose and the image quality of CT protocols for research purposes as well as clinical routine.

  3. Automated segmentation of oral mucosa from wide-field OCT images (Conference Presentation)

    Science.gov (United States)

    Goldan, Ryan N.; Lee, Anthony M. D.; Cahill, Lucas; Liu, Kelly; MacAulay, Calum; Poh, Catherine F.; Lane, Pierre

    2016-03-01

    Optical Coherence Tomography (OCT) can discriminate morphological tissue features important for oral cancer detection such as the presence or absence of basement membrane and epithelial thickness. We previously reported an OCT system employing a rotary-pullback catheter capable of in vivo, rapid, wide-field (up to 90 x 2.5mm2) imaging in the oral cavity. Due to the size and complexity of these OCT data sets, rapid automated image processing software that immediately displays important tissue features is required to facilitate prompt bed-side clinical decisions. We present an automated segmentation algorithm capable of detecting the epithelial surface and basement membrane in 3D OCT images of the oral cavity. The algorithm was trained using volumetric OCT data acquired in vivo from a variety of tissue types and histology-confirmed pathologies spanning normal through cancer (8 sites, 21 patients). The algorithm was validated using a second dataset of similar size and tissue diversity. We demonstrate application of the algorithm to an entire OCT volume to map epithelial thickness, and detection of the basement membrane, over the tissue surface. These maps may be clinically useful for delineating pre-surgical tumor margins, or for biopsy site guidance.
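Once the epithelial surface and basement membrane are located per A-line, mapping epithelial thickness reduces to a per-column search. The threshold-based detection below is a simplified stand-in for the paper's trained algorithm:

```python
import numpy as np

def thickness_map(bscan, surf_thresh, bm_thresh):
    """Per A-line: the first sample above surf_thresh is taken as the epithelial
    surface, and the first sample below bm_thresh after the surface marks the
    basement membrane; thickness is their separation in depth samples
    (NaN where nothing is found)."""
    thick = np.full(bscan.shape[0], np.nan)
    for i, aline in enumerate(bscan):
        above = np.nonzero(aline > surf_thresh)[0]
        if above.size == 0:
            continue                                  # no surface on this A-line
        s = above[0]
        below = np.nonzero(aline[s:] < bm_thresh)[0]
        if below.size:
            thick[i] = below[0]
    return thick
```

Run over every B-scan of a rotary-pullback volume, such per-A-line thickness values assemble into the surface map the abstract describes for margin delineation and biopsy guidance.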

  4. Automated detection of regions of interest for tissue microarray experiments: an image texture analysis

    Directory of Open Access Journals (Sweden)

    Tözeren Aydin

    2007-03-01

Full Text Available Abstract Background Recent research with tissue microarrays has led to rapid progress toward quantifying the expression of large sets of biomarkers in normal and diseased tissue. However, standard procedures for sampling tissue for molecular profiling have not yet been established. Methods This study presents a high-throughput analysis of texture heterogeneity in breast tissue images for the purpose of identifying regions of interest in the tissue for molecular profiling via tissue microarray technology. The image texture of breast histology slides was described in terms of three parameters: the percentage of area occupied in an image block by chromatin (B), the percentage occupied by stroma-like regions (P), and a statistical heterogeneity index H commonly used in image analysis. Texture parameters were defined and computed for each of the thousands of image blocks in our dataset using both gray-scale and color segmentation. The image blocks were then classified into three categories using the texture feature parameters in a novel statistical learning algorithm: image blocks specific to normal breast tissue, blocks specific to cancerous tissue, and image blocks non-specific to the normal and disease states. Results Gray-scale and color segmentation techniques identified the same regions in histology slides as cancer-specific. Moreover, the image blocks identified as cancer-specific belonged to the cell-crowded regions in whole-section image slides that were marked by two pathologists as regions of interest for further histological studies. Conclusion These results indicate the high efficiency of our automated method for identifying pathologic regions of interest on histology slides. Automation of critical region identification will help minimize the inter-rater variability among different raters (pathologists) as hundreds of tumors that are used to develop an array have typically been evaluated
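The three per-block texture parameters can be sketched as follows. The intensity thresholds and the entropy-based form of H are assumptions for illustration, not the study's definitions:

```python
import numpy as np

def block_texture(block, chrom_thresh=80, stroma_thresh=180):
    """B: % of chromatin-like (dark) pixels; P: % of stroma-like (bright) pixels;
    H: Shannon-entropy heterogeneity of the gray-level histogram."""
    block = np.asarray(block, float)
    B = np.mean(block < chrom_thresh) * 100.0
    P = np.mean(block > stroma_thresh) * 100.0
    hist, _ = np.histogram(block, bins=32, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins before the log
    H = float(-(p * np.log2(p)).sum())
    return B, P, H
```

A flat block yields H = 0, while a block mixing many gray levels approaches the maximum log2(bins); the (B, P, H) triple per block then feeds the three-way classifier described above.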

  5. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.;

    1995-01-01

    species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume......, and the frequency of dividing cells in a cell population. These parameters were used to compare physiological states of liquid-suspended and surfacegrowing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found...

  6. Automated Image Segmentation And Characterization Technique For Effective Isolation And Representation Of Human Face

    Directory of Open Access Journals (Sweden)

    Rajesh Reddy N

    2014-01-01

Full Text Available In areas such as defense and forensics, it is necessary to identify the faces of criminals from an existing database. An automated face recognition system involves face isolation, feature extraction and classification. The main challenge in such a system is isolating the face effectively, as it may be affected by illumination, posture and variation in skin color. Hence it is necessary to develop an effective algorithm that isolates the face from the image. In this paper, an advanced face isolation technique and a feature extraction technique are proposed.

  7. Automated aortic calcification detection in low-dose chest CT images

    Science.gov (United States)

    Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose noncontrast, non-ECG gated, chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered as true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume score is respectively 98.46% and 98.28% correlated with the reference mass and volume score.
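A simplified per-slice Agatston-style computation, using the abstract's 160 HU detection threshold and the standard density weights, might look like the sketch below. The real score is computed per connected lesion, which this sketch omits by treating each slice's thresholded region as one lesion:

```python
import numpy as np

def agatston_weight(max_hu):
    """Standard Agatston density weight from a lesion's peak HU."""
    if max_hu >= 400:
        return 4
    if max_hu >= 300:
        return 3
    if max_hu >= 200:
        return 2
    return 1

def agatston_slice(slice_hu, mask, pixel_area_mm2, thresh=160):
    """Simplified per-slice Agatston-style score inside an aorta mask:
    calcified area (mm^2) times the density weight of the brightest voxel."""
    calc = (slice_hu >= thresh) & mask
    if not calc.any():
        return 0.0
    area = calc.sum() * pixel_area_mm2
    return area * agatston_weight(slice_hu[calc].max())
```

Summing the per-slice values over the scan gives the total score; mass and volume scores follow similarly from the mean HU and voxel volume of the calcified region.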

  8. AI (artificial intelligence) in histopathology--from image analysis to automated diagnosis.

    Science.gov (United States)

    Kayser, Klaus; Görtler, Jürgen; Bogovac, Milica; Bogovac, Aleksandar; Goldmann, Torsten; Vollmer, Ekkehard; Kayser, Gian

    2009-01-01

The technological progress in digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of microscopic images as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits attachments of several (if not all) fields of view and their contemporary visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. To introduce digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangement and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides exceeds that of viewing original glass slides by far. The reason lies in a slower and more difficult sampling procedure, which is the selection of information-containing fields of view. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in steps as follows: 1. The individual image quality has to be measured, and corrected, if necessary. 2. A diagnostic algorithm has to be applied. An algorithm has been developed that includes both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example in virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves without any labeling. 5. The pathologists' duty will not be released by such a system; to the contrary, it will manage and supervise the system, i.e., just working at a "higher level". Virtual slides are already in use for teaching and continuous

  9. AI (artificial intelligence) in histopathology--from image analysis to automated diagnosis.

    Directory of Open Access Journals (Sweden)

    Aleksandar Bogovac

    2010-02-01

    Full Text Available The technological progress in digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of a microscopic image as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits attachment of several (if not all) fields of view and their simultaneous visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. To introduce digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangement and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides exceeds that of viewing original glass slides by far. The reason lies in a slower and more difficult sampling procedure, which is the selection of information-containing fields of view. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in the following steps: 1. The individual image quality is measured and, if necessary, corrected. 2. A diagnostic algorithm is applied; an algorithm has been developed that includes both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example in virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves, without any labeling. 5. Such a system will not relieve the pathologist of his duties; on the contrary, he will manage and supervise it, i.e., work at a "higher level". 
Virtual slides are already in use for teaching and

  10. Semi-automated discrimination of retinal pigmented epithelial cells in two-photon fluorescence images of mouse retinas

    Science.gov (United States)

    Alexander, Nathan S.; Palczewska, Grazyna; Palczewski, Krzysztof

    2015-01-01

    Automated image segmentation is a critical step toward achieving a quantitative evaluation of disease states with imaging techniques. Two-photon fluorescence microscopy (TPM) has been employed to visualize the retinal pigmented epithelium (RPE) and provide images indicating the health of the retina. However, segmentation of RPE cells within TPM images is difficult due to small differences in fluorescence intensity between cell borders and cell bodies. Here we present a semi-automated method for segmenting RPE cells that relies upon multiple weak features that differentiate cell borders from the remaining image. These features were scored by a search optimization procedure that built up the cell border in segments around a nucleus of interest. With six images used as a test, our method correctly identified cell borders for 69% of nuclei on average. Performance was strongly dependent upon increasing retinosome content in the RPE. TPM image analysis has the potential of providing improved early quantitative assessments of diseases affecting the RPE. PMID:26309765

  11. Automating the Analysis of Spatial Grids: A Practical Guide to Data Mining Geospatial Images for Human & Environmental Applications

    CERN Document Server

    Lakshmanan, Valliappa

    2012-01-01

    The ability to create automated algorithms to process gridded spatial data is increasingly important as remotely sensed datasets increase in volume and frequency. Whether in business, social science, ecology, meteorology or urban planning, the ability to create automated applications to analyze and detect patterns in geospatial data is increasingly important. This book provides students with a foundation in topics of digital image processing and data mining as applied to geospatial datasets. The aim is for readers to be able to devise and implement automated techniques to extract information from spatial grids such as radar, satellite or high-resolution survey imagery.

  12. Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking

    Science.gov (United States)

    Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward

    2011-01-01

    To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. 
This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk

  13. Evaluation of a content-based retrieval system for blood cell images with automated methods.

    Science.gov (United States)

    Seng, Woo Chaw; Mirisaee, Seyed Hadi

    2011-08-01

    Content-based image retrieval techniques have been extensively studied for the past few years. With the growth of digital medical image databases, the demand for content-based analysis and retrieval tools has been increasing remarkably. Blood cell images are a key diagnostic tool for hematologists. An automated system that can retrieve relevant blood cell images correctly and efficiently would save hematologists effort and time. The purpose of this work is to develop such a content-based image retrieval system. Global color histogram and wavelet-based methods are used in the prototype. The system allows users to search by providing a query image and selecting one of four implemented methods. The obtained results demonstrate that the proposed extended query refinement has the potential to capture a user's high-level query and perception subjectivity by dynamically giving better query combinations. Color-based methods performed better than wavelet-based methods with regard to precision, recall rate and retrieval time. Shape and density of blood cells are suggested as measurements for future improvement. The system developed is useful for undergraduate education. PMID:20703533
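
    The global color histogram step described above can be sketched as follows. The bin count, the L1 distance, and the toy RGB pixel lists are illustrative assumptions, not the parameters or data used in the study.

```python
def color_histogram(pixels, bins=4):
    """Build a normalized per-channel histogram from a list of RGB pixels."""
    hist = [0.0] * (3 * bins)
    for r, g, b in pixels:
        for channel, value in enumerate((r, g, b)):
            hist[channel * bins + min(value * bins // 256, bins - 1)] += 1.0
    n = float(len(pixels))
    return [h / n for h in hist]

def l1_distance(h1, h2):
    """Sum of absolute bin differences between two histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def retrieve(query_pixels, database, top_k=2):
    """Rank database entries (name, pixels) by histogram similarity to the query."""
    qh = color_histogram(query_pixels)
    ranked = sorted(database,
                    key=lambda item: l1_distance(qh, color_histogram(item[1])))
    return [name for name, _ in ranked[:top_k]]
```

    A query for a red cell image would then return the most similarly colored database images first.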

  14. Knee x-ray image analysis method for automated detection of osteoarthritis.

    Science.gov (United States)

    Shamir, Lior; Ling, Shari M; Scott, William W; Bos, Angelo; Orlov, Nikita; Macura, Tomasz J; Eckley, D Mark; Ferrucci, Luigi; Goldberg, Ilya G

    2009-02-01

    We describe a method for automated detection of radiographic osteoarthritis (OA) in knee X-ray images. The detection is based on the Kellgren-Lawrence (KL) classification grades, which correspond to the different stages of OA severity. The classifier was built using manually classified X-rays representing the first four KL grades (normal, doubtful, minimal, and moderate). Image analysis is performed by first identifying a set of image content descriptors and image transforms that are informative for the detection of OA in the X-rays, and assigning weights to these image features using Fisher scores. Then, a simple weighted nearest neighbor rule is used to predict the KL grade to which a given test X-ray sample belongs. The dataset used in the experiment contained 350 X-ray images classified manually by their KL grades. Experimental results show that moderate OA (KL grade 3) and minimal OA (KL grade 2) can be differentiated from normal cases with accuracies of 91.5% and 80.4%, respectively. Doubtful OA (KL grade 1) was detected automatically with a much lower accuracy of 57%. The source code developed and used in this study is available for free download at www.openmicroscopy.org. PMID:19342330
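
    The two-step scheme above, Fisher-score feature weighting followed by a weighted nearest-neighbor rule, can be sketched as follows. The toy one-dimensional features and class layout are invented for illustration and stand in for the image-transform descriptors used in the paper.

```python
def fisher_score(values, labels):
    """Ratio of between-class to within-class variance for one feature."""
    overall = sum(values) / len(values)
    between, within = 0.0, 0.0
    for c in set(labels):
        vc = [v for v, l in zip(values, labels) if l == c]
        mc = sum(vc) / len(vc)
        between += len(vc) * (mc - overall) ** 2
        within += sum((v - mc) ** 2 for v in vc)
    return between / within if within > 0 else 0.0

def weighted_nn(train, labels, weights, sample):
    """Label of the training point closest in Fisher-weighted squared distance."""
    def dist(row):
        return sum(w * (a - b) ** 2 for w, a, b in zip(weights, row, sample))
    best = min(range(len(train)), key=lambda i: dist(train[i]))
    return labels[best]
```

    A feature that separates the classes well (high Fisher score) dominates the distance, so a noisy feature cannot mislead the nearest-neighbor vote.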

  15. Automated segmentation of murine lung tumors in x-ray micro-CT images

    Science.gov (United States)

    Swee, Joshua K. Y.; Sheridan, Clare; de Bruin, Elza; Downward, Julian; Lassailly, Francois; Pizarro, Luis

    2014-03-01

    Recent years have seen micro-CT emerge as a means of providing imaging analysis in pre-clinical studies, with in-vivo micro-CT having been shown to be particularly applicable to the examination of murine lung tumors. Despite this, existing studies have involved substantial human intervention during the image analysis process, with fully-automated aids almost non-existent. We present a new approach to automating the segmentation of murine lung tumors, designed specifically for in-vivo micro-CT-based pre-clinical lung cancer studies, that addresses the specific requirements of such studies as well as the limitations human-centric segmentation approaches experience when applied to such micro-CT data. Our approach consists of three distinct stages, and begins by utilizing edge-enhancing and vessel-enhancing non-linear anisotropic diffusion filters to extract anatomy masks (lung/vessel structure) in a pre-processing stage. Initial candidate detection is then performed through ROI reduction, utilizing the obtained masks and a two-step automated segmentation approach that aims to extract all disconnected objects within the ROI and consists of Otsu thresholding, mathematical morphology and marker-driven watershed. False positive reduction is finally performed on the initial candidates through random-forest-driven classification using the shape, intensity, and spatial features of candidates. We provide validation of our approach using data from an associated lung cancer study, showing favorable results both in terms of detection (sensitivity=86%, specificity=89%) and structural recovery (Dice Similarity=0.88) when compared against manual specialist annotation.
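
    The Otsu thresholding step used in the candidate-detection stage picks the gray level that maximizes the between-class variance of the intensity histogram. A minimal pure-Python sketch for 8-bit intensities (a simplification of the full pipeline, not the authors' implementation):

```python
def otsu_threshold(pixels):
    """Return the 8-bit threshold maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = 0      # background pixel count so far
    sum_b = 0.0  # background intensity sum so far
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                  # background mean
        m_f = (total_sum - sum_b) / w_f    # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

    Pixels above the returned threshold become foreground candidates for the subsequent morphology and watershed steps.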

  16. Automated model-based bias field correction of MR images of the brain.

    Science.gov (United States)

    Van Leemput, K; Maes, F; Vandermeulen, D; Suetens, P

    1999-10-01

    We propose a model-based method for fully automated bias field correction of MR brain images. The MR signal is modeled as a realization of a random process with a parametric probability distribution that is corrupted by a smooth polynomial inhomogeneity or bias field. The method we propose applies an iterative expectation-maximization (EM) strategy that interleaves pixel classification with estimation of class distribution and bias field parameters, improving the likelihood of the model parameters at each iteration. The algorithm, which can handle multichannel data and slice-by-slice constant intensity offsets, is initialized with information from a digital brain atlas about the a priori expected location of tissue classes. This allows full automation of the method without need for user interaction, yielding more objective and reproducible results. We have validated the bias correction algorithm on simulated data and we illustrate its performance on various MR images with important field inhomogeneities. We also relate the proposed algorithm to other bias correction algorithms. PMID:10628948
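
    The interleaved classification/estimation loop can be illustrated with a toy two-class EM on 1-D intensities. The polynomial bias field, multichannel handling, and atlas-based initialization of the full method are omitted here; this sketch only shows the E-step/M-step alternation the abstract describes.

```python
import math

def em_two_class(intensities, iters=25):
    """Toy EM for a two-Gaussian intensity model (no bias field, no atlas)."""
    lo, hi = min(intensities), max(intensities)
    mu = [float(lo), float(hi)]
    sigma = [max((hi - lo) / 4.0, 1e-3)] * 2
    pi = [0.5, 0.5]
    post = []
    for _ in range(iters):
        # E-step: posterior class probabilities for every pixel
        post = []
        for x in intensities:
            p = [pi[k] * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) / sigma[k]
                 for k in (0, 1)]
            s = p[0] + p[1]
            post.append([p[0] / s, p[1] / s])
        # M-step: re-estimate class distribution parameters
        for k in (0, 1):
            nk = sum(w[k] for w in post)
            mu[k] = sum(w[k] * x for w, x in zip(post, intensities)) / nk
            var = sum(w[k] * (x - mu[k]) ** 2 for w, x in zip(post, intensities)) / nk
            sigma[k] = max(math.sqrt(var), 1e-3)
            pi[k] = nk / len(intensities)
    return mu, sigma, post
```

    In the full method an extra step fits the polynomial bias field to the classification residuals inside this same loop, so each quantity improves the estimate of the others.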

  17. Automated static image analysis as a novel tool in describing the physical properties of dietary fiber

    Directory of Open Access Journals (Sweden)

    Marcin Andrzej KUREK

    2015-01-01

    Full Text Available The growing interest in the usage of dietary fiber in food has created the need for precise tools to describe its physical properties. This research examined two dietary fibers, from oats and beets respectively, in variable particle sizes. The application of automated static image analysis for describing the hydration properties and particle size distribution of dietary fiber was analyzed. Conventional tests for water holding capacity (WHC) were conducted. The particles were measured at two points: dry and after water soaking. The highest water holding capacity (7.00 g water/g solid) was achieved by the smaller-sized oat fiber; conversely, for beet fiber the water holding capacity was highest (4.20 g water/g solid) at the larger particle size. There was evidence that, for the same fiber source, water absorption increases with a decrease in particle size. Very strong correlations were drawn between particle shape parameters, such as fiber length, straightness and width, and the hydration properties measured conventionally. The regression analysis provided the opportunity to estimate whether the automated static image analysis method could be an efficient tool for describing the hydration properties of dietary fiber. The application of the method was validated using a mathematical model that was verified against conventional WHC measurement results.
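
    The regression step, relating conventionally measured WHC to image-derived particle-size parameters, can be sketched with ordinary least squares. The particle widths and WHC values below are invented for illustration; they only mimic the reported inverse size-hydration trend.

```python
def fit_line(x, y):
    """Ordinary least squares fit y ~ a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical data: mean particle width (mm) vs WHC (g water/g solid)
widths = [0.2, 0.4, 0.6, 0.8]
whc = [7.0, 6.0, 5.0, 4.0]
slope, intercept = fit_line(widths, whc)  # negative slope: smaller particles hold more water
```

    The fitted line can then be checked against conventional WHC measurements to validate the image-based estimate, as the abstract describes.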

  18. Fully automated image-guided needle insertion: application to small animal biopsies.

    Science.gov (United States)

    Ayadi, A; Bour, G; Aprahamian, M; Bayle, B; Graebling, P; Gangloff, J; Soler, L; Egly, J M; Marescaux, J

    2007-01-01

    The study of biological process evolution in small animals requires time-consuming and expensive analyses of a large population of animals. Serial analysis of the same animal is potentially a great alternative. However, non-invasive procedures must be set up to retrieve valuable tissue samples from precisely defined areas in living animals. Taking advantage of the high resolution of in vivo molecular imaging, we defined a procedure to perform image-guided needle insertion and automated biopsy using a micro CT-scan, a robot and a vision system. Workspace limitations in the scanner require the animal to be removed and laid in front of the robot. A vision system composed of a grid projector and a camera is used to register the designed animal bed with respect to the robot and to automatically calibrate the needle position and orientation. The automated biopsy is then synchronised with respiration and performed with a pneumatic translation device at high velocity, to minimize organ deformation. We have experimentally tested our biopsy system with different needles.

  19. Quantitative Assessment of Mouse Mammary Gland Morphology Using Automated Digital Image Processing and TEB Detection.

    Science.gov (United States)

    Blacher, Silvia; Gérard, Céline; Gallez, Anne; Foidart, Jean-Michel; Noël, Agnès; Péqueux, Christel

    2016-04-01

    The assessment of rodent mammary gland morphology is largely used to study the molecular mechanisms driving breast development and to analyze the impact of various endocrine disruptors with putative pathological implications. In this work, we propose a methodology relying on fully automated digital image analysis, including image processing and quantification of the whole ductal tree as well as of the terminal end buds. It allows both growth parameters and fine morphological glandular structures to be measured accurately and objectively. Mammary gland elongation was characterized by 2 parameters: the length and the epithelial area of the ductal tree. Ductal tree fine structures were characterized by: 1) branch end-point density, 2) branching density, and 3) branch length distribution. The proposed methodology was compared with quantification methods classically used in the literature. This procedure can be implemented in several software packages and thus largely used by scientists studying rodent mammary gland morphology. PMID:26910307
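
    Branch end-points and branching points can be counted on a skeletonized ductal tree by examining the 8-neighborhood of each skeleton pixel: an end-point has exactly one neighbor, a branching point three or more. This is a generic sketch of that counting step (skeletonization is assumed done upstream), not the paper's implementation.

```python
def endpoints_and_branchpoints(skeleton):
    """Count end-points (1 neighbor) and branch points (>=3 neighbors)
    in a skeleton given as an iterable of (row, col) pixel coordinates."""
    pts = set(skeleton)
    ends, branches = 0, 0
    for (r, c) in pts:
        neighbors = sum((r + dr, c + dc) in pts
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
        if neighbors == 1:
            ends += 1
        elif neighbors >= 3:
            branches += 1
    return ends, branches
```

    Dividing these counts by the gland area then gives the end-point and branching densities described above.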

  20. Automated image analysis of the host-pathogen interaction between phagocytes and Aspergillus fumigatus.

    Directory of Open Access Journals (Sweden)

    Franziska Mech

    Full Text Available Aspergillus fumigatus is a ubiquitous airborne fungus and opportunistic human pathogen. In immunocompromised hosts, the fungus can cause life-threatening diseases like invasive pulmonary aspergillosis. Since the incidence of fungal systemic infections has increased drastically over recent years, it is a major goal to investigate the pathobiology of A. fumigatus, and in particular the interactions of A. fumigatus conidia with immune cells. Many of these studies address the activity of immune effector cells, in particular of macrophages, when they are confronted with conidia of A. fumigatus wild-type and mutant strains. Here, we report the development of an automated analysis of confocal laser scanning microscopy images of macrophages coincubated with different A. fumigatus strains. At present, microscopy images are often analysed manually, including cell counting and determination of interrelations between cells, which is very time-consuming and error-prone. Automation of this process overcomes these disadvantages and standardises the analysis, which is a prerequisite for further systems-biological studies including mathematical modeling of the infection process. For this purpose, the cells in our experimental setup were differentially stained and monitored by confocal laser scanning microscopy. To perform the image analysis in an automatic fashion, we developed a ruleset that is generally applicable to phagocytosis assays and, in the present case, was processed by the software Definiens Developer XD. As a result of a complete image analysis we obtained features such as size, shape, number of cells and cell-cell contacts. The analysis reported here reveals that different mutants of A. fumigatus have a major influence on the ability of macrophages to adhere to and to phagocytose the respective conidia. In particular, we observe that the phagocytosis ratio and the aggregation behaviour of pksP mutant compared to wild-type conidia are both significantly

  1. An automated voxelized dosimetry tool for radionuclide therapy based on serial quantitative SPECT/CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, Price A.; Kron, Tomas [Department of Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne 3002 (Australia); Beauregard, Jean-Mathieu [Department of Radiology, Université Laval, Quebec City G1V 0A6 (Canada); Hofman, Michael S.; Hogg, Annette; Hicks, Rodney J. [Department of Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne 3002 (Australia)

    2013-11-15

    Purpose: To create an accurate map of the distribution of radiation dose deposition in healthy and target tissues during radionuclide therapy. Methods: Serial quantitative SPECT/CT images were acquired at 4, 24, and 72 h for 28 ¹⁷⁷Lu-octreotate peptide receptor radionuclide therapy (PRRT) administrations in 17 patients with advanced neuroendocrine tumors. Deformable image registration was combined with an in-house programming algorithm to interpolate pharmacokinetic uptake and clearance at a voxel level. The resultant cumulated activity image series are comprised of values representing the total number of decays within each voxel's volume. For PRRT, cumulated activity was translated to absorbed dose based on Monte Carlo-determined voxel S-values at a combination of long and short ranges. These dosimetric image sets were compared for mean radiation absorbed dose to at-risk organs using a conventional MIRD protocol (OLINDA 1.1). Results: Absorbed dose values to solid organs (liver, kidneys, and spleen) were within 10% using both techniques. Dose estimates to marrow were greater using the voxelized protocol, attributed to the software incorporating crossfire effect from nearby tumor volumes. Conclusions: The technique presented offers an efficient, automated tool for PRRT dosimetry based on serial post-therapy imaging. Following retrospective analysis, this method of high-resolution dosimetry may allow physicians to prescribe activity based on required dose to tumor volume or radiation limits to healthy tissue in individual patients.
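
    The cumulated-activity step, integrating each voxel's time-activity curve over the serial scans, can be sketched as trapezoidal integration between scan times plus a physical-decay tail after the last scan. The tail model and units here are simplifying assumptions, not the authors' exact voxel-level interpolation.

```python
import math

def cumulated_activity(times_h, activities, half_life_h=159.4):
    """Integrate a voxel time-activity curve: trapezoids between scan
    times, plus an exponential physical-decay tail after the last scan.
    159.4 h is the Lu-177 physical half-life; result is activity x hours."""
    area = 0.0
    for i in range(1, len(times_h)):
        area += 0.5 * (activities[i - 1] + activities[i]) * (times_h[i] - times_h[i - 1])
    lam = math.log(2) / half_life_h
    return area + activities[-1] / lam  # analytic integral of the decaying tail
```

    Multiplying each voxel's cumulated activity by the appropriate S-value kernel then yields absorbed dose, as described above.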

  2. Automated diagnosis of diabetic retinopathy and glaucoma using fundus and OCT images

    Directory of Open Access Journals (Sweden)

    Pachiyappan Arulmozhivarman

    2012-06-01

    Full Text Available Abstract We describe a system for the automated diagnosis of diabetic retinopathy and glaucoma using fundus and optical coherence tomography (OCT images. Automatic screening will help the doctors to quickly identify the condition of the patient in a more accurate way. The macular abnormalities caused due to diabetic retinopathy can be detected by applying morphological operations, filters and thresholds on the fundus images of the patient. Early detection of glaucoma is done by estimating the Retinal Nerve Fiber Layer (RNFL thickness from the OCT images of the patient. The RNFL thickness estimation involves the use of active contours based deformable snake algorithm for segmentation of the anterior and posterior boundaries of the retinal nerve fiber layer. The algorithm was tested on a set of 89 fundus images of which 85 were found to have at least mild retinopathy and OCT images of 31 patients out of which 13 were found to be glaucomatous. The accuracy for optical disk detection is found to be 97.75%. The proposed system therefore is accurate, reliable and robust and can be realized.

  3. Long-term live cell imaging and automated 4D analysis of drosophila neuroblast lineages.

    Directory of Open Access Journals (Sweden)

    Catarina C F Homem

    Full Text Available The developing Drosophila brain is a well-studied model system for neurogenesis and stem cell biology. In the Drosophila central brain, around 200 neural stem cells called neuroblasts undergo repeated rounds of asymmetric cell division. These divisions typically generate a larger self-renewing neuroblast and a smaller ganglion mother cell that undergoes one terminal division to create two differentiating neurons. Although single mitotic divisions of neuroblasts can easily be imaged in real time, the lack of long term imaging procedures has limited the use of neuroblast live imaging for lineage analysis. Here we describe a method that allows live imaging of cultured Drosophila neuroblasts over multiple cell cycles for up to 24 hours. We describe a 4D image analysis protocol that can be used to extract cell cycle times and growth rates from the resulting movies in an automated manner. We use it to perform lineage analysis in type II neuroblasts where clonal analysis has indicated the presence of a transit-amplifying population that potentiates the number of neurons. Indeed, our experiments verify type II lineages and provide quantitative parameters for all cell types in those lineages. As defects in type II neuroblast lineages can result in brain tumor formation, our lineage analysis method will allow more detailed and quantitative analysis of tumorigenesis and asymmetric cell division in the Drosophila brain.

  4. RootAnalyzer: A Cross-Section Image Analysis Tool for Automated Characterization of Root Cells and Tissues

    OpenAIRE

    Joshua Chopin; Hamid Laga; Chun Yuan Huang; Sigrid Heuer; Miklavcic, Stanley J.

    2015-01-01

    The morphology of plant root anatomical features is a key factor in effective water and nutrient uptake. Existing techniques for phenotyping root anatomical traits are often based on manual or semi-automatic segmentation and annotation of microscopic images of root cross sections. In this article, we propose a fully automated tool, hereinafter referred to as RootAnalyzer, for efficiently extracting and analyzing anatomical traits from root-cross section images. Using a range of image processi...

  5. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Chen, Ken Chung [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403 (China); Shen, Steve G. F.; Yan, Jin [Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Lee, Philip K. M.; Chow, Ben [Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077 (China); Liu, Nancy X. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050 (China); Xia, James J. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, 136701 (Korea, Republic of)

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  6. Automated detection and labeling of high-density EEG electrodes from structural MR images

    Science.gov (United States)

    Marino, Marco; Liu, Quanying; Brem, Silvia; Wenderoth, Nicole; Mantini, Dante

    2016-10-01

    Objective. Accurate knowledge about the positions of electrodes in electroencephalography (EEG) is very important for precise source localization. Direct detection of electrodes from magnetic resonance (MR) images is particularly interesting, as it is possible to avoid errors of co-registration between electrode and head coordinate systems. In this study, we propose an automated MR-based method for electrode detection and labeling, particularly tailored to high-density montages. Approach. Anatomical MR images were processed to create an electrode-enhanced image in individual space. Image processing included intensity non-uniformity correction, background noise and goggles artifact removal. Next, we defined a search volume around the head where electrode positions were detected. Electrodes were identified as local maxima in the search volume and registered to the Montreal Neurological Institute standard space using an affine transformation. This allowed the matching of the detected points with the specific EEG montage template, as well as their labeling. Matching and labeling were performed by the coherent point drift method. Our method was assessed on 8 MR images collected from subjects wearing a 256-channel EEG net, using the displacement with respect to manually selected electrodes as the performance metric. Main results. The average displacement achieved by our method was significantly lower than that of alternative techniques, such as the photogrammetry technique. For more than 99% of the electrodes, the displacement was lower than 1 cm, which is typically considered an acceptable upper limit for errors in electrode positioning. Our method showed robustness and reliability, even in suboptimal conditions, such as net rotation, imprecisely gathered wires, electrode detachment from the head, and MR image ghosting. Significance. We showed that our method provides objective, repeatable and precise estimates of EEG electrode coordinates. 
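
    The local-maxima detection step can be illustrated in 2-D: a point is kept if it exceeds a threshold and is strictly brighter than all its neighbors. The threshold and the test grid below are invented; the actual method operates on a 3-D search volume around the head.

```python
def local_maxima(image, threshold):
    """Return (row, col) of pixels above `threshold` that exceed
    every one of their 8 neighbors (2-D stand-in for the 3-D case)."""
    rows, cols = len(image), len(image[0])
    peaks = []
    for r in range(rows):
        for c in range(cols):
            v = image[r][c]
            if v <= threshold:
                continue
            neighbors = [image[rr][cc]
                         for rr in range(max(0, r - 1), min(rows, r + 2))
                         for cc in range(max(0, c - 1), min(cols, c + 2))
                         if (rr, cc) != (r, c)]
            if all(v > n for n in neighbors):
                peaks.append((r, c))
    return peaks
```

    The detected peaks would then be registered to standard space and matched to the montage template, as the abstract describes.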
We hope our work

  7. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    International Nuclear Information System (INIS)

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  8. Semi-automated scar detection in delayed enhanced cardiac magnetic resonance images

    Science.gov (United States)

    Morisi, Rita; Donini, Bruno; Lanconelli, Nico; Rosengarden, James; Morgan, John; Harden, Stephen; Curzen, Nick

    2015-06-01

    Late enhancement cardiac magnetic resonance imaging (MRI) can precisely delineate myocardial scars. We present a semi-automated method for detecting scars in cardiac MRI. This method has the potential to improve routine clinical practice, since quantification is not currently offered due to time constraints. A first segmentation step was developed for extracting the target regions for potential scar and determining pre-candidate objects. Pattern recognition methods are then applied to the segmented images in order to detect the position of the myocardial scar. The database of late gadolinium enhancement (LE) cardiac MR images consists of 111 blocks of images acquired from 63 patients at the University Hospital Southampton NHS Foundation Trust (UK). At least one scar was present for each patient, and all the scars were manually annotated by an expert. A group of images (around one third of the entire set) was used for training the system, which was subsequently tested on all the remaining images. Four different classifiers were trained (Support Vector Machine (SVM), k-nearest neighbor (KNN), Bayesian, and feed-forward neural network) and their performance was evaluated using Free-response Receiver Operating Characteristic (FROC) analysis. Feature selection was implemented for analyzing the importance of the various features. The proposed segmentation method allowed the region affected by the scar to be extracted correctly in 96% of the blocks of images. The SVM was shown to be the best classifier for our task, and our system reached an overall sensitivity of 80% with fewer than 7 false positives per patient. The method we present provides an effective tool for detection of scars on cardiac MRI. This may be of value in clinical practice by permitting routine reporting of scar quantification.

  9. Automated Analysis of {sup 123}I-beta-CIT SPECT Images with Statistical Probabilistic Anatomical Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Eo, Jae Seon; Lee, Hoyoung; Lee, Jae Sung; Kim, Yu Kyung; Jeon, Bumseok; Lee, Dong Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2014-03-15

    Population-based statistical probabilistic anatomical maps have been used to generate probabilistic volumes of interest for analyzing perfusion and metabolic brain imaging. We investigated the feasibility of automated analysis of dopamine transporter images using this technique and evaluated striatal binding potentials in Parkinson's disease and Wilson's disease. We analyzed 2β-Carbomethoxy-3β-(4-{sup 123}I-iodophenyl)tropane ({sup 123}I-beta-CIT) SPECT images acquired from 26 people with Parkinson's disease (M:F=11:15, mean age=49±12 years), 9 people with Wilson's disease (M:F=6:3, mean age=26±11 years) and 17 normal controls (M:F=5:12, mean age=39±16 years). A SPECT template was created using striatal statistical probabilistic map images. All images were spatially normalized onto the template, and probability-weighted regional counts in striatal structures were estimated. The binding potential was calculated using the ratio of specific and nonspecific binding activities at equilibrium. Voxel-based comparisons between groups were also performed using statistical parametric mapping. Qualitative assessment showed that spatial normalizations of the SPECT images were successful for all images. The striatal binding potentials of participants with Parkinson's disease and Wilson's disease were significantly lower than those of normal controls. Statistical parametric mapping analysis found statistically significant differences only in striatal regions in both disease groups compared to controls. We successfully evaluated the regional {sup 123}I-beta-CIT distribution using the SPECT template and probabilistic map data automatically. This procedure allows an objective and quantitative comparison of the binding potential, which in this case showed a significantly decreased binding potential in the striata of patients with Parkinson's disease or Wilson's disease.
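The binding potential above, the ratio of specific to nonspecific binding activity at equilibrium, reduces to a one-line computation. The sketch assumes a reference (nonspecific) region count, such as occipital cortex, which the abstract does not name; the function and variable names are illustrative.

```python
def binding_potential(striatal_counts, reference_counts):
    """Specific-to-nonspecific binding ratio at equilibrium:
    BP = (striatal - reference) / reference."""
    return (striatal_counts - reference_counts) / reference_counts

# Illustrative numbers only: 12000 probability-weighted striatal counts
# against 4000 reference counts gives a binding potential of 2.0.
bp = binding_potential(12000.0, 4000.0)
```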

  10. Computerized detection of breast cancer on automated breast ultrasound imaging of women with dense breasts

    Energy Technology Data Exchange (ETDEWEB)

    Drukker, Karen, E-mail: kdrukker@uchicago.edu; Sennett, Charlene A.; Giger, Maryellen L. [Department of Radiology, MC2026, The University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2014-01-15

    Purpose: Develop a computer-aided detection method and investigate its feasibility for detection of breast cancer in automated 3D ultrasound images of women with dense breasts. Methods: The HIPAA compliant study involved a dataset of volumetric ultrasound image data, “views,” acquired with an automated U-Systems Somo•V{sup ®} ABUS system for 185 asymptomatic women with dense breasts (BI-RADS Composition/Density 3 or 4). For each patient, three whole-breast views (3D image volumes) per breast were acquired. A total of 52 patients had breast cancer (61 cancers), diagnosed through any follow-up at most 365 days after the original screening mammogram. Thirty-one of these patients (32 cancers) had a screening mammogram with a clinically assigned BI-RADS Assessment Category 1 or 2, i.e., were mammographically negative. All software used for analysis was developed in-house and involved 3 steps: (1) detection of initial tumor candidates, (2) characterization of candidates, and (3) elimination of false-positive candidates. Performance was assessed by calculating the cancer detection sensitivity as a function of the number of “marks” (detections) per view. Results: At a single mark per view, i.e., six marks per patient, the median detection sensitivity by cancer was 50.0% (16/32) ± 6% for patients with a screening mammogram-assigned BI-RADS category 1 or 2—similar to radiologists’ performance sensitivity (49.9%) for this dataset from a prior reader study—and 45.9% (28/61) ± 4% for all patients. Conclusions: Promising detection sensitivity was obtained for the computer on a 3D ultrasound dataset of women with dense breasts at a rate of false-positive detections that may be acceptable for clinical implementation.
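Sensitivity as a function of marks per view, the performance measure used above, can be sketched as follows. The per-view bookkeeping is a simplification (the study reports sensitivity by cancer and by patient), and all names and scores are illustrative.

```python
def sensitivity_at_k(views, k):
    """views: one candidate list per view, each candidate a (score, is_cancer)
    pair. Keep the top-k scoring marks per view and report the fraction of
    views whose cancer is among the kept marks."""
    hit = 0
    for candidates in views:
        top = sorted(candidates, key=lambda c: c[0], reverse=True)[:k]
        if any(is_cancer for _, is_cancer in top):
            hit += 1
    return hit / len(views)

# Two toy views: in the first, a false positive outscores the cancer,
# so at one mark per view only the second view's cancer is marked.
views = [[(0.9, False), (0.8, True)], [(0.7, True), (0.2, False)]]
```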

  11. A portable fluorescence spectroscopy imaging system for automated root phenotyping in soil cores in the field.

    Science.gov (United States)

    Wasson, Anton; Bischof, Leanne; Zwart, Alec; Watt, Michelle

    2016-02-01

    Root architecture traits are a target for pre-breeders. Incorporation of root architecture traits into new cultivars requires phenotyping. It is attractive to rapidly and directly phenotype root architecture in the field, avoiding laboratory studies that may not translate to the field. A combination of soil coring with a hydraulic push press and manual core-break counting can directly phenotype root architecture traits of depth and distribution in the field through to grain development, but large teams of people are required and labour costs are high with this method. We developed a portable fluorescence imaging system (BlueBox) to automate root counting in soil cores with image analysis software directly in the field. The lighting system was optimized to produce high-contrast images of roots emerging from soil cores. The correlation of the measurements with the root length density of the soil cores exceeded the correlation achieved by human operator measurements (R² = 0.68 versus 0.57, respectively). A BlueBox-equipped team processed 4.3 cores/hour/person, compared with 3.7 cores/hour/person for the manual method. The portable, automated in-field root architecture phenotyping system was 16% more labour efficient, 19% more accurate, and 12% cheaper than manual conventional coring, and presents an opportunity to directly phenotype root architecture in the field as part of pre-breeding programs. The platform has wide possibilities to capture more information about root health and other root traits in the field. PMID:26826219
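The R² reported above is the squared correlation between automated root counts and measured root length density. A minimal stand-alone computation follows; the data in the usage example are illustrative, not the study's.

```python
def r_squared(x, y):
    """Squared Pearson correlation between two equal-length sequences.
    Assumes both sequences have nonzero variance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Perfectly linear toy data: automated counts exactly proportional to density.
r2 = r_squared([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```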

  13. Development of an automated imaging pipeline for the analysis of the zebrafish larval kidney.

    Directory of Open Access Journals (Sweden)

    Jens H Westhoff

    Full Text Available The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line, in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems.

  14. High-throughput automated home-cage mesoscopic functional imaging of mouse cortex.

    Science.gov (United States)

    Murphy, Timothy H; Boyd, Jamie D; Bolaños, Federico; Vanni, Matthieu P; Silasi, Gergely; Haupt, Dirk; LeDue, Jeff M

    2016-01-01

    Mouse head-fixed behaviour coupled with functional imaging has become a powerful technique in rodent systems neuroscience. However, training mice can be time consuming and is potentially stressful for animals. Here we report a fully automated, open source, self-initiated head-fixation system for mesoscopic functional imaging in mice. The system supports five mice at a time and requires minimal investigator intervention. Using genetically encoded calcium indicator transgenic mice, we longitudinally monitor cortical functional connectivity up to 24 h per day in >7,000 self-initiated and unsupervised imaging sessions up to 90 days. The procedure provides robust assessment of functional cortical maps on the basis of both spontaneous activity and brief sensory stimuli such as light flashes. The approach is scalable to a number of remotely controlled cages that can be assessed within the controlled conditions of dedicated animal facilities. We anticipate that home-cage brain imaging will permit flexible and chronic assessment of mesoscale cortical function. PMID:27291514

  15. Vision 20/20: Perspectives on automated image segmentation for radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, Gregory, E-mail: gcsharp@partners.org; Fritscher, Karl D.; Shusharina, Nadya [Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Pekar, Vladimir [Philips Healthcare, Markham, Ontario 6LC 2S3 (Canada); Peroni, Marta [Center for Proton Therapy, Paul Scherrer Institut, 5232 Villigen-PSI (Switzerland); Veeraraghavan, Harini [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States); Yang, Jinzhong [Department of Radiation Physics, MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2014-05-15

    Due to rapid advances in radiation therapy (RT), especially image guidance and treatment adaptation, a fast and accurate segmentation of medical images is a very important part of the treatment. Manual delineation of target volumes and organs at risk is still the standard routine for most clinics, even though it is time consuming and prone to intra- and interobserver variations. Automated segmentation methods seek to reduce delineation workload and unify the organ boundary definition. In this paper, the authors review the current autosegmentation methods particularly relevant for applications in RT. The authors outline the methods’ strengths and limitations and propose strategies that could lead to wider acceptance of autosegmentation in routine clinical practice. The authors conclude that currently, autosegmentation technology in RT planning is an efficient tool for the clinicians to provide them with a good starting point for review and adjustment. Modern hardware platforms including GPUs allow most of the autosegmentation tasks to be done in a range of a few minutes. In the nearest future, improvements in CT-based autosegmentation tools will be achieved through standardization of imaging and contouring protocols. In the longer term, the authors expect a wider use of multimodality approaches and better understanding of correlation of imaging with biology and pathology.

  16. Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i

    Science.gov (United States)

    Patrick, Matthew R.; Swanson, Don; Orr, Tim

    2016-01-01

    Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously-acquired thermal images of the lava lake and measures lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements from the images are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey’s Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
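A heavily simplified sketch of the temperature-thresholding approach above: find the topmost image row containing lake-temperature pixels, then calibrate that row against a laser-rangefinder tie point. The threshold, the linear row-to-elevation model, and all names are illustrative assumptions, not HVO's actual routine.

```python
def lake_top_row(thermal, hot_c=200.0):
    """Topmost row of a thermal image (list of rows of temperatures in deg C)
    containing pixels at or above the lake-temperature threshold."""
    for r, row in enumerate(thermal):
        if any(t >= hot_c for t in row):
            return r
    return None  # no lake visible

def row_to_elevation(row, ref_row, ref_elev_m, m_per_px):
    """Convert a tracked pixel row to elevation via a rangefinder tie point:
    rows above the reference row map to higher elevations."""
    return ref_elev_m + (ref_row - row) * m_per_px

# Toy 3x2 thermal frame: cool crater wall above, hot lake surface below.
thermal = [[20.0, 20.0], [20.0, 250.0], [250.0, 250.0]]
top = lake_top_row(thermal)
elev = row_to_elevation(top, ref_row=3, ref_elev_m=1000.0, m_per_px=0.5)
```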

  17. Automated tissue classification of intracardiac optical coherence tomography images (Conference Presentation)

    Science.gov (United States)

    Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.

    2016-03-01

    Remodeling of the myocardium is associated with increased risk of arrhythmia and heart failure. Our objective is to automatically identify regions of fibrotic myocardium, dense collagen, and adipose tissue, which can serve as a way to guide radiofrequency ablation therapy or endomyocardial biopsies. Using computer vision and machine learning, we present an automated algorithm to classify tissue compositions from cardiac optical coherence tomography (OCT) images. Three dimensional OCT volumes were obtained from 15 human hearts ex vivo within 48 hours of donor death (source, NDRI). We first segmented B-scans using a graph searching method. We estimated the boundary of each region by minimizing a cost function, which consisted of intensity, gradient, and contour smoothness. Then, features, including texture analysis, optical properties, and statistics of high moments, were extracted. We used a statistical model, relevance vector machine, and trained this model with abovementioned features to classify tissue compositions. To validate our method, we applied our algorithm to 77 volumes. The datasets for validation were manually segmented and classified by two investigators who were blind to our algorithm results and identified the tissues based on trichrome histology and pathology. The difference between automated and manual segmentation was 51.78 +/- 50.96 μm. Experiments showed that the attenuation coefficients of dense collagen were significantly different from those of the other tissue types, and that fibrotic and adipose tissues differed from normal myocardium in entropy and kurtosis. The tissue types were classified with an accuracy of 84%. The results show good agreement with histology.
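One of the optical-property features mentioned, the attenuation coefficient, is commonly estimated by a log-linear fit of A-line intensity versus depth under a single-scattering model, I(z) ≈ I0·exp(−2μz). The sketch below assumes that model; it is a generic illustration, not the paper's implementation.

```python
import math

def attenuation_coefficient(depths_mm, intensities):
    """Least-squares slope of ln(I) against depth z; under
    I(z) = I0 * exp(-2*mu*z) the slope equals -2*mu."""
    logs = [math.log(i) for i in intensities]
    n = len(depths_mm)
    mz, ml = sum(depths_mm) / n, sum(logs) / n
    slope = (sum((z - mz) * (l - ml) for z, l in zip(depths_mm, logs))
             / sum((z - mz) ** 2 for z in depths_mm))
    return -slope / 2.0

# Synthetic A-line generated with mu = 1.5 mm^-1; the fit recovers it.
depths = [0.1 * i for i in range(1, 11)]
inten = [math.exp(-2 * 1.5 * z) for z in depths]
mu = attenuation_coefficient(depths, inten)
```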

  18. Automated parameterisation for multi-scale image segmentation on multiple layers

    Science.gov (United States)

    Drăguţ, L.; Csillik, O.; Eisank, C.; Tiede, D.

    2014-01-01

    We introduce a new automated approach to parameterising multi-scale image segmentation of multiple layers, and we implemented it as a generic tool for the eCognition® software. This approach relies on the potential of the local variance (LV) to detect scale transitions in geospatial data. The tool detects the number of layers added to a project and segments them iteratively with a multiresolution segmentation algorithm in a bottom-up approach, where the scale factor in the segmentation, namely, the scale parameter (SP), increases with a constant increment. The average LV value of the objects in all of the layers is computed and serves as a condition for stopping the iterations: when a scale level records an LV value that is equal to or lower than the previous value, the iteration ends, and the objects segmented in the previous level are retained. Three orders of magnitude of SP lags produce a corresponding number of scale levels. Tests on very high resolution imagery provided satisfactory results for generic applicability. The tool has a significant potential for enabling objectivity and automation of GEOBIA analysis. PMID:24748723
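The stopping rule above, increase the scale parameter until mean local variance stops increasing and retain the previous level, can be sketched as a small loop. A local-variance callback stands in for eCognition's segmentation and LV computation; all names and defaults are illustrative.

```python
def select_scale(lv_at_scale, sp_start=10, sp_step=10, sp_max=500):
    """Iterate the scale parameter (SP) upward by a constant increment.
    When a level's mean local variance (LV) is equal to or lower than the
    previous level's, stop and return the previous (retained) SP."""
    prev_sp, prev_lv = None, None
    sp = sp_start
    while sp <= sp_max:
        lv = lv_at_scale(sp)
        if prev_lv is not None and lv <= prev_lv:
            return prev_sp
        prev_sp, prev_lv = sp, lv
        sp += sp_step
    return prev_sp

# Toy LV curve peaking at SP = 30: the loop retains that level.
chosen = select_scale(lambda sp: -(sp - 30) ** 2)
```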

  19. Semi-automated procedures for shoreline extraction using single RADARSAT-1 SAR image

    Science.gov (United States)

    Al Fugura, A.'kif; Billa, Lawal; Pradhan, Biswajeet

    2011-12-01

    Coastline identification is important for surveying and mapping reasons. The coastline serves as the basic point of reference and is used on nautical charts for navigation purposes. Its delineation has become crucial and more important in the wake of the many recent earthquakes and tsunamis resulting in complete change and redrawing of some shorelines. In a tropical country like Malaysia, the presence of cloud cover hinders the application of optical remote sensing data. In this study, a semi-automated technique and procedures are presented for shoreline delineation from a RADARSAT-1 image. A scene of RADARSAT-1 satellite imagery was processed using an enhanced filtering technique to identify and extract the shoreline coast of Kuala Terengganu, Malaysia. RADARSAT imagery has many advantages over optical data because of its ability to penetrate cloud cover and its night sensing capabilities. At first, speckles were removed from the image by using a Lee sigma filter, which was used to reduce random noise and to enhance the image and discriminate the boundary between land and water. The results showed an accurate and improved extraction and delineation of the entire coastline of Kuala Terengganu. The study demonstrated the reliability of the image averaging filter in reducing random noise over the sea surface, especially near the shoreline. It enhanced land-water boundary differentiation, enabling better delineation of the shoreline. Overall, the developed techniques showed the potential of radar imagery for accurate shoreline mapping and will be useful for monitoring shoreline changes during high and low tides as well as shoreline erosion in a tropical country like Malaysia.
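A much-simplified variant of the Lee sigma despeckle idea used above: replace each pixel by the mean of neighbours whose values fall inside a sigma band around it, which smooths a homogeneous sea surface while leaving strong land-water edges largely untouched. The band width, window radius, and names are illustrative, not the study's parameters.

```python
def lee_sigma_filter(img, radius=1, sigma_frac=0.2):
    """Simplified Lee sigma despeckle on a 2D list of non-negative values:
    average only neighbours within +/- 2*sigma_frac of the centre pixel
    (a multiplicative speckle-noise assumption)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            c = img[y][x]
            lo, hi = c * (1 - 2 * sigma_frac), c * (1 + 2 * sigma_frac)
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))
                    if lo <= img[j][i] <= hi]
            out[y][x] = sum(vals) / len(vals) if vals else c
    return out

# A homogeneous patch passes through unchanged.
flat = [[5.0] * 3 for _ in range(3)]
smoothed = lee_sigma_filter(flat)
```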

  20. Automated Detection of Coronal Mass Ejections in STEREO Heliospheric Imager data

    CERN Document Server

    Pant, V; Rodriguez, L; Mierla, M; Banerjee, D; Davies, J A

    2016-01-01

    We have performed, for the first time, the successful automated detection of Coronal Mass Ejections (CMEs) in data from the inner heliospheric imager (HI-1) cameras on the STEREO A spacecraft. Detection of CMEs is done in time-height maps based on the application of the Hough transform, using a modified version of the CACTus software package, conventionally applied to coronagraph data. In this paper we describe the method of detection. We present the result of the application of the technique to a few CMEs that are well detected in the HI-1 imagery, and compare these results with those based on manual cataloging methodologies. We discuss in detail the advantages and disadvantages of this method.

  1. Analysis of irradiated U-7wt%Mo dispersion fuel microstructures using automated image processing

    Science.gov (United States)

    Collette, R.; King, J.; Buesch, C.; Keiser, D. D.; Williams, W.; Miller, B. D.; Schulthess, J.

    2016-07-01

    The High Performance Research Reactor Fuel Development (HPPRFD) program is responsible for developing low enriched uranium (LEU) fuel substitutes for high performance reactors fueled with highly enriched uranium (HEU) that have not yet been converted to LEU. The uranium-molybdenum (U-Mo) fuel system was selected for this effort. In this study, fission gas pore segmentation was performed on U-7wt%Mo dispersion fuel samples at three separate fission densities using an automated image processing interface developed in MATLAB. Pore size distributions were attained that showed both expected and unexpected fission gas behavior. In general, it proved challenging to identify any dominant trends when comparing fission bubble data across samples from different fuel plates due to varying compositions and fabrication techniques. The results exhibited fair agreement with the fission density vs. porosity correlation developed by the Russian reactor conversion program.
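Fission gas pore segmentation of this kind typically ends with connected-component labeling of the binarized pores to obtain a pore size distribution. A minimal stdlib flood-fill sketch on a binary image (pore = True) follows; it is a generic illustration, not the study's MATLAB interface.

```python
def pore_sizes(binary):
    """Label 4-connected True pixels by iterative flood fill and return the
    sorted list of region sizes (in pixels)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack, size = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    size += 1
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return sorted(sizes)

# Toy image with one 3-pixel pore and one isolated pixel.
dist = pore_sizes([[True, True, False],
                   [False, True, False],
                   [False, False, True]])
```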

  2. Automated centreline extraction of neuronal dendrite from optical microscopy image stacks

    Science.gov (United States)

    Xiao, Liang; Zhang, Fanbiao

    2010-11-01

    In this work we present a novel vision-based pipeline for automated skeleton detection and centreline extraction of neuronal dendrites from optical microscopy image stacks. The proposed pipeline is an integrated solution that merges image stack pre-processing, seed point detection, a ridge traversal procedure, minimum spanning tree optimization and tree trimming into a unified framework to deal with this challenging problem. In image stack pre-processing, we first apply a curvelet transform based shrinkage and cycle spinning technique to remove the noise. This is followed by an adaptive threshold method to compute the neuronal object segmentation, and a 3D distance transformation is performed to get the distance map. According to the eigenvalues and eigenvectors of the Hessian matrix, the skeleton seed points are detected. Starting from the seed points, the initial centrelines are obtained using a ridge traversal procedure. After that, we use a minimum spanning tree to organize the geometrical structure of the skeleton points, and then graph trimming post-processing to compute the final centreline. Experimental results on different datasets demonstrate that our approach has high reliability, good robustness and requires less user interaction.
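The minimum-spanning-tree step that organizes the skeleton points can be sketched with Prim's algorithm over Euclidean distances. This is a generic stand-in (the paper does not give its MST implementation), and the O(n²) inner search is fine only for small point sets.

```python
def mst_edges(points):
    """Prim's algorithm over 3D points with squared-Euclidean edge weights;
    returns the tree as (parent_index, child_index) edges, rooted at point 0."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Cheapest edge crossing the cut between tree and non-tree points.
        best = min(((i, j) for i in in_tree
                    for j in range(n) if j not in in_tree),
                   key=lambda e: d2(points[e[0]], points[e[1]]))
        edges.append(best)
        in_tree.add(best[1])
    return edges

# Three collinear skeleton points chain up as 0-1-2.
tree = mst_edges([(0, 0, 0), (1, 0, 0), (5, 0, 0)])
```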

  3. Automated torso organ segmentation from 3D CT images using conditional random field

    Science.gov (United States)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku

    2016-03-01

    This paper presents a segmentation method for torso organs in medical images using a conditional random field (CRF). Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images. However, it is necessary to adjust their empirical parameters to obtain precise organ regions. In this paper, we propose an organ segmentation method using structured output learning, which is based on a probabilistic graphical model. The proposed method utilizes a CRF on three-dimensional grids as the probabilistic graphical model, with binary features that represent the relationship between voxel intensities and organ labels. We also optimize the weight parameters of the CRF using a stochastic gradient descent algorithm and estimate organ labels for a given image by maximum a posteriori (MAP) estimation. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 6.6%. The DICE coefficients of the right lung, left lung, heart, liver, spleen, right kidney, and left kidney were 0.94, 0.92, 0.65, 0.67, 0.36, 0.38, and 0.37, respectively.

  4. Automated torso organ segmentation from 3D CT images using structured perceptron and dual decomposition

    Science.gov (United States)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku

    2015-03-01

    This paper presents a method for torso organ segmentation from abdominal CT images using a structured perceptron and dual decomposition. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images. However, it is necessary to adjust their empirical parameters to obtain precise organ regions. This paper proposes an organ segmentation method using structured output learning. Our method utilizes a graphical model and binary features which represent the relationship between voxel intensities and organ labels. We also optimize the weights of the graphical model by structured perceptron and estimate the best organ label for a given image by dynamic programming and dual decomposition. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 4.4%. The DICE coefficients of the left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.
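The DICE coefficient used to score the organ labels above is 2|A∩B|/(|A|+|B|) over the voxel sets of the estimated and reference regions:

```python
def dice(a, b):
    """DICE similarity of two voxel collections (any hashable voxel ids):
    2 * |A intersect B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

# Two 3-voxel regions sharing 2 voxels score 2*2/6 = 2/3.
score = dice({(0, 0), (0, 1), (1, 1)}, {(0, 1), (1, 1), (2, 2)})
```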

  5. New technologies for automated cell counting based on optical image analysis ;The Cellscreen'.

    Science.gov (United States)

    Brinkmann, Marlies; Lütkemeyer, Dirk; Gudermann, Frank; Lehmann, Jürgen

    2002-01-01

    A prototype of a newly developed apparatus for measuring the cell growth characteristics of suspension cells in microtitre plates over a period of time was examined. Fully automated, non-invasive cell counts in small volume cultivation vessels, e.g. 96 well plates, were performed with the Cellscreen system by Innovatis AG, Germany. The system automatically generates microscopic images of suspension cells which had sedimented on the base of the well plate. The total cell number and cell geometry were analysed without staining or sampling using the Cedex image recognition technology. Thus, time course studies of cell growth with the identical culture became possible. Basic parameters like the measurement range, the minimum number of images required for statistically reliable results, as well as the influence of the measurement itself and the effect of evaporation in 96 well plates on cell proliferation were determined. A comparison with standard methods, including the influence of the culture volume per well (25 μl to 200 μl) on cell growth, was performed. Furthermore, the toxic substances ammonia, lactate and butyrate were used to show that the Cellscreen system is able to detect even the slightest changes in the specific growth rate. PMID:19003093
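The specific growth rate that such time-course cell counts yield follows from two counts under exponential growth, μ = ln(N1/N0)/Δt. A minimal sketch with illustrative values (not measurements from the study):

```python
import math

def specific_growth_rate(n0, n1, dt_h):
    """Specific growth rate mu (per hour) from two cell counts dt_h apart,
    assuming exponential growth: mu = ln(n1/n0) / dt_h."""
    return math.log(n1 / n0) / dt_h

def doubling_time(mu):
    """Population doubling time in the same time units: ln(2) / mu."""
    return math.log(2) / mu

# A culture doubling from 1e5 to 2e5 cells/ml in 24 h.
mu = specific_growth_rate(1.0e5, 2.0e5, 24.0)
```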

  6. Semi-automated porosity identification from thin section images using image analysis and intelligent discriminant classifiers

    Science.gov (United States)

    Ghiasi-Freez, Javad; Soleimanpour, Iman; Kadkhodaie-Ilkhchi, Ali; Ziaii, Mansur; Sedighi, Mahdi; Hatampour, Amir

    2012-08-01

    Identification of different types of porosity within a reservoir rock is a functional parameter for reservoir characterization, since various pore types play different roles in fluid transport and the pore spaces determine the fluid storage capacity of the reservoir. The present paper introduces a model for semi-automatic identification of porosity types within thin section images. To achieve this, a pattern recognition approach is followed. First, six geometrical shape parameters of the sixteen largest pores of each image are extracted using image analysis techniques. The extracted parameters and their corresponding pore types for 294 pores are used to train two intelligent discriminant classifiers, namely linear and quadratic discriminant analysis. The trained classifiers take the geometrical features of the pores to identify the type and percentage of five types of porosity, including interparticle, intraparticle, oomoldic, biomoldic, and vuggy, in each image. The accuracy of the classifiers is determined from two standpoints. First, the predicted and measured percentages of each type of porosity are compared with each other. The results indicate reliable performance in predicting the percentage of each type of porosity. In the second step, the precision of the classifiers in categorizing the pore spaces is analyzed. The classifiers also achieved high scores when used for individual recognition of pore spaces. The proposed methodology is a promising application for petroleum geologists, allowing statistical study of pore types in a rapid and accurate way.
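Discriminant classification on geometric pore features can be illustrated with a nearest-centroid classifier, a deliberately simplified stand-in for the linear/quadratic discriminant analysis used in the paper. The two-feature vectors and class names below are illustrative, not the study's data.

```python
def train_centroids(features, labels):
    """Per-class mean feature vectors (nearest-centroid training)."""
    acc = {}
    for f, lab in zip(features, labels):
        vec, n = acc.get(lab, ([0.0] * len(f), 0))
        acc[lab] = ([a + b for a, b in zip(vec, f)], n + 1)
    return {lab: [v / n for v in vec] for lab, (vec, n) in acc.items()}

def classify(feature, centroids):
    """Assign the class whose centroid is nearest in squared Euclidean
    distance (equivalent to LDA with identity covariance and equal priors)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(feature, centroids[lab])))

# Two toy pore classes separated in a 2D shape-feature space.
cents = train_centroids(
    [[1.0, 1.0], [1.2, 0.9], [5.0, 5.0], [5.1, 4.8]],
    ["interparticle", "interparticle", "vuggy", "vuggy"])
```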

  7. Automated optical image correlation to constrain dynamics of slow-moving landslides

    Science.gov (United States)

    Mackey, B. H.; Roering, J. J.; Lamb, M. P.

    2011-12-01

    Large, slow-moving landslides can dominate sediment flux from mountainous terrain, yet their long-term spatio-temporal behavior at the landscape scale is not well understood. Movement can be inconspicuous, episodic, persist for decades, and is challenging and time consuming to quantify using traditional methods such as stereo photogrammetry or field surveying. In the absence of large datasets documenting the movement of slow-moving landslides, we are challenged to isolate the key variables that control their movement and evolution. This knowledge gap hampers our understanding of landslide processes, landslide hazard, sediment budgets, and landscape evolution. Here we document the movement of numerous slow-moving landslides along the Eel River, northern California. These glacier-like landslides (earthflows) move seasonally (typically 1-2 m/yr), with minimal surface deformation, such that scattered shrubs can grow on the landslide surface for decades. Previous work focused on manually tracking the position of individual features (trees, rocks) on photos and LiDAR-derived digital topography to identify the extent of landslide activity. Here, we employ sub-pixel change detection software (COSI-Corr) to generate automated maps of landslide displacement by correlating successive orthorectified photos. Through creation of a detailed multi-temporal deformation field across the entire landslide surface, COSI-Corr is able to delineate zones of movement, quantify displacement, and identify domains of flow convergence and divergence. The vegetation and fine-scale landslide morphology provide excellent texture for automated comparison between successive orthorectified images, although decorrelation can occur in areas where translation between images is greater than the specified search window, or where intense ground deformation or vegetation change occurs. We automatically detected movement on dozens of active landslides (with landslide extent and displacement confirmed by

  8. AUTOMATED CLASSIFICATION AND SEGREGATION OF BRAIN MRI IMAGES INTO IMAGES CAPTURED WITH RESPECT TO VENTRICULAR REGION AND EYE-BALL REGION

    Directory of Open Access Journals (Sweden)

    C. Arunkumar

    2014-05-01

    Full Text Available Magnetic Resonance Imaging (MRI) images of the brain are used for the detection of various brain diseases, including tumors. In such cases, classification of MRI images captured with respect to the ventricular and eye-ball regions helps in automated location and classification of such diseases. The methods employed in the paper segregate the given MRI images of the brain into images captured with respect to the ventricular region and images captured with respect to the eye-ball region. First, the given MRI image of the brain is segmented using the Particle Swarm Optimization (PSO) algorithm, an optimized algorithm for MRI image segmentation. The algorithm proposed in the paper is then applied to the segmented image; it detects whether the image contains a ventricular region or an eye-ball region and classifies it accordingly.
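The abstract does not detail the PSO segmentation step; a common variant uses PSO to search for a grey-level threshold that maximizes Otsu's between-class variance. A hedged sketch of that idea, with illustrative particle counts and swarm coefficients (not the paper's implementation):

```python
import numpy as np

def between_class_variance(hist, t):
    """Otsu fitness for an integer threshold t on a grey-level histogram."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    levels = np.arange(len(p))
    m0 = (levels[:t] * p[:t]).sum() / w0
    m1 = (levels[t:] * p[t:]).sum() / w1
    return w0 * w1 * (m0 - m1) ** 2

def pso_threshold(image, n_particles=15, iters=40, seed=0):
    """Find a segmentation threshold by particle swarm optimisation."""
    rng = np.random.default_rng(seed)
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    x = rng.uniform(1, 255, n_particles)      # particle positions
    v = rng.uniform(-10, 10, n_particles)     # particle velocities
    pbest = x.copy()
    pfit = np.array([between_class_variance(hist, int(t)) for t in x])
    gbest = pbest[np.argmax(pfit)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # Standard PSO update: inertia + cognitive + social terms.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, 1, 255)
        fit = np.array([between_class_variance(hist, int(t)) for t in x])
        improved = fit > pfit
        pbest[improved], pfit[improved] = x[improved], fit[improved]
        gbest = pbest[np.argmax(pfit)]
    return int(gbest)
```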

  9. Myocardial Perfusion: Near-automated Evaluation from Contrast-enhanced MR Images Obtained at Rest and during Vasodilator Stress

    OpenAIRE

    Tarroni, Giacomo; Corsi, Cristiana; Antkowiak, Patrick F; Veronesi, Federico; Kramer, Christopher M.; Epstein, Frederick H; Walter, James; Lamberti, Claudio; Lang, Roberto M.; Mor-Avi, Victor; Patel, Amit R

    2012-01-01

    This study demonstrated that despite the extreme dynamic nature of contrast-enhanced cardiac MR image sequences and respiratory motion, near-automated frame-by-frame detection of myocardial segments and high-quality quantification of myocardial contrast is feasible both at rest and during vasodilator stress.

  10. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline

    International Nuclear Information System (INIS)

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using an automated ROI with an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old), using 1.5-T MRI. The automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly in the first 12 months. The second phase lasted from 12 to 24 months, during which the rate of FA and MD changes was reduced. After 24 months, the FA and MD values plateaued. DTI is a superior technique to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. (orig.)
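FA and MD are standard scalar maps derived from the eigenvalues of the diffusion tensor; the formulas are fixed even though the paper's pipeline (DTI-TK, AFNI) is not shown here. A minimal per-voxel computation:

```python
import numpy as np

def fa_md(tensor):
    """Fractional anisotropy and mean diffusivity of a 3x3 diffusion tensor.
    FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||; MD = mean eigenvalue."""
    lam = np.linalg.eigvalsh(tensor)          # the three eigenvalues
    md = lam.mean()
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    fa = np.sqrt(1.5) * num / den if den > 0 else 0.0
    return fa, md
```

An isotropic tensor gives FA = 0 (free diffusion, e.g. CSF), while a purely linear "stick" tensor gives FA = 1, the pattern exploited to track white matter maturation.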

  11. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Loh, K.B.; Ramli, N.; Tan, L.K.; Roziah, M. [University of Malaya, Department of Biomedical Imaging, University Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Kuala Lumpur (Malaysia); Rahmat, K. [University of Malaya, Department of Biomedical Imaging, University Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Kuala Lumpur (Malaysia); University Malaya, Biomedical Imaging Department, Kuala Lumpur (Malaysia); Ariffin, H. [University of Malaya, Department of Paediatrics, Faculty of Medicine, Kuala Lumpur (Malaysia)

    2012-07-15

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using an automated ROI with an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old), using 1.5-T MRI. The automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly in the first 12 months. The second phase lasted from 12 to 24 months, during which the rate of FA and MD changes was reduced. After 24 months, the FA and MD values plateaued. DTI is a superior technique to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. (orig.)

  12. Automated melanoma detection with a novel multispectral imaging system: results of a prospective study

    International Nuclear Information System (INIS)

    The aim of this research was to evaluate the performance of a new spectroscopic system in the diagnosis of melanoma. This study involves a consecutive series of 1278 patients with 1391 cutaneous pigmented lesions, including 184 melanomas. In an attempt to approximate the 'real world' lesion population, a further set of 1022 non-excised, clinically reassuring lesions was also considered for analysis. Each lesion was imaged in vivo by a multispectral imaging system. The system operates at wavelengths between 483 and 950 nm, acquiring 15 images at equally spaced wavelength intervals. From the images, different lesion descriptors were extracted, related to the colour distribution and morphology of the lesions. Data reduction techniques were applied before setting up a neural network classifier designed to perform automated diagnosis. The data set was randomly divided into three sets: train (696 lesions, including 90 melanomas) and verify (348 lesions, including 53 melanomas) for instructing the neural network, and an independent test set (347 lesions, including 41 melanomas). The neural network was able to discriminate between melanomas and non-melanoma lesions with a sensitivity of 80.4% and a specificity of 75.6% in the data set of 1391 histologized cases. No major variations were found in classification scores when the train, verify and test subsets were evaluated separately. Following receiver operating characteristic (ROC) analysis, the resulting area under the curve was 0.85. No significant differences were found among the areas under the train, verify and test set curves, supporting the network's ability to generalize to new cases. In addition, specificity and area under the ROC curve increased up to 90% and 0.90, respectively, when the additional set of 1022 lesions without histology was added to the test set. Our data show that the performance of an automated system is greatly population dependent, suggesting caution in the comparison with results reported in the

  13. Hyper-Cam automated calibration method for continuous hyperspectral imaging measurements

    Science.gov (United States)

    Gagnon, Jean-Philippe; Habte, Zewdu; George, Jacks; Farley, Vincent; Tremblay, Pierre; Chamberland, Martin; Romano, Joao; Rosario, Dalton

    2010-04-01

    The midwave and longwave infrared regions of the electromagnetic spectrum contain rich information which can be captured by hyperspectral sensors, thus enabling enhanced detection of targets of interest. A continuous hyperspectral imaging measurement capability, operated 24/7 over varying seasons and weather conditions, permits the evaluation of hyperspectral imaging for detection of different types of targets in real-world environments. Such a measurement site was built at Picatinny Arsenal under the Spectral and Polarimetric Imagery Collection Experiment (SPICE), where two Hyper-Cam hyperspectral imagers are installed at the Precision Armament Laboratory (PAL) and have been operated autonomously since fall 2009. The Hyper-Cams are currently collecting a complete hyperspectral database that contains MWIR and LWIR hyperspectral measurements of several targets under day, night, sunny, cloudy, foggy, rainy and snowy conditions. The Telops Hyper-Cam sensor is an imaging spectrometer that combines spatial and spectral analysis capabilities in a single sensor. It is based on Fourier-transform technology, yielding high spectral resolution and enabling high-accuracy radiometric calibration. It provides datacubes of up to 320x256 pixels at spectral resolutions of up to 0.25 cm-1. The MWIR version covers the 3 to 5 μm spectral range and the LWIR version covers the 8 to 12 μm spectral range. This paper describes the automated operation of the two Hyper-Cam sensors being used in the SPICE data collection. The Reveal Automation Control Software (RACS), developed collaboratively between Telops, ARDEC, and ARL, enables flexible operating parameters and autonomous calibration. Under the RACS software, the Hyper-Cam sensors can autonomously calibrate themselves using their internal blackbody targets, with calibration events initiated at user-defined time intervals and by internal beamsplitter temperature monitoring. The RACS software is the first software developed for
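Internal blackbody calibration of an infrared imager is typically a two-point correction: viewing a cold and a hot blackbody of known radiance yields a per-pixel gain and offset. The sketch below is a generic illustration of that scheme, not the RACS implementation; all names and values are assumptions.

```python
import numpy as np

def two_point_calibration(dn_cold, dn_hot, L_cold, L_hot):
    """Per-pixel gain and offset from two blackbody views.
    dn_* are raw detector counts; L_* are the known blackbody radiances."""
    gain = (L_hot - L_cold) / (dn_hot - dn_cold)
    offset = L_cold - gain * dn_cold
    return gain, offset

def calibrate(dn, gain, offset):
    """Convert raw counts of a scene frame to calibrated radiance."""
    return gain * dn + offset
```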

  14. Rapid and Semi-Automated Extraction of Neuronal Cell Bodies and Nuclei from Electron Microscopy Image Stacks

    Science.gov (United States)

    Holcomb, Paul S.; Morehead, Michael; Doretto, Gianfranco; Chen, Peter; Berg, Stuart; Plaza, Stephen; Spirou, George

    2016-01-01

    Connectomics—the study of how neurons wire together in the brain—is at the forefront of modern neuroscience research. However, many connectomics studies are limited by the time and precision needed to correctly segment large volumes of electron microscopy (EM) image data. We present here a semi-automated segmentation pipeline using freely available software that can significantly decrease segmentation time for extracting both nuclei and cell bodies from EM image volumes. PMID:27259933

  15. Boosting accuracy of automated classification of fluorescence microscope images for location proteomics

    Directory of Open Access Journals (Sweden)

    Huang Kai

    2004-06-01

    accuracy for single 2D images being higher than 90% for the first time. In particular, the classification accuracy for the easily confused endomembrane compartments (endoplasmic reticulum, Golgi, endosomes, lysosomes) was improved by 5–15%. We achieved further improvements when classification was conducted on image sets rather than on individual cell images. Conclusions The availability of accurate, fast, automated classification systems for protein location patterns, in conjunction with high-throughput fluorescence microscope imaging techniques, enables a new subfield of proteomics, location proteomics. The accuracy and sensitivity of this approach represents an important alternative to low-resolution assignments by curation or sequence-based prediction.

  16. Development of Automated Image Analysis Tools for Verification of Radiotherapy Field Accuracy with AN Electronic Portal Imaging Device.

    Science.gov (United States)

    Dong, Lei

    1995-01-01

    The successful management of cancer with radiation relies on the accurate deposition of a prescribed dose to a prescribed anatomical volume within the patient. Treatment set-up errors are inevitable because the alignment of field shaping devices with the patient must be repeated daily up to eighty times during the course of a fractionated radiotherapy treatment. With the invention of electronic portal imaging devices (EPIDs), patient's portal images can be visualized daily in real-time after only a small fraction of the radiation dose has been delivered to each treatment field. However, the accuracy of human visual evaluation of low-contrast portal images has been found to be inadequate. The goal of this research is to develop automated image analysis tools to detect both treatment field shape errors and patient anatomy placement errors with an EPID. A moments method has been developed to align treatment field images to compensate for lack of repositioning precision of the image detector. A figure of merit has also been established to verify the shape and rotation of the treatment fields. Following proper alignment of treatment field boundaries, a cross-correlation method has been developed to detect shifts of the patient's anatomy relative to the treatment field boundary. Phantom studies showed that the moments method aligned the radiation fields to within 0.5 mm of translation and 0.5° of rotation and that the cross-correlation method aligned anatomical structures inside the radiation field to within 1 mm of translation and 1° of rotation. A new procedure of generating and using digitally reconstructed radiographs (DRRs) at megavoltage energies as reference images was also investigated. The procedure allowed a direct comparison between a designed treatment portal and the actual patient setup positions detected by an EPID. Phantom studies confirmed the feasibility of the methodology. Both the moments method and the cross-correlation technique were
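A moments method aligns field images via low-order image moments: the first moments give the centroid (translation) and the second central moments give the principal-axis angle (rotation). A minimal sketch on a binary field mask, illustrative rather than the dissertation's code:

```python
import numpy as np

def field_moments(mask):
    """Centroid (cx, cy) and principal-axis angle (radians) of a binary
    field mask, from first and second central image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()          # first moments: centroid
    mu20 = ((xs - cx) ** 2).mean()         # second central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return (cx, cy), theta
```

Matching the centroids and angles of two field images gives the translation and rotation needed to bring them into register before any anatomy comparison.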

  17. Automated local bright feature image analysis of nuclear proteindistribution identifies changes in tissue phenotype

    Energy Technology Data Exchange (ETDEWEB)

    Knowles, David; Sudar, Damir; Bator, Carol; Bissell, Mina

    2006-02-01

    The organization of nuclear proteins is linked to cell and tissue phenotypes. When cells arrest proliferation, undergo apoptosis, or differentiate, the distribution of nuclear proteins changes. Conversely, forced alteration of the distribution of nuclear proteins modifies cell phenotype. Immunostaining and fluorescence microscopy have been critical for such findings. However, there is an increasing need for quantitative analysis of nuclear protein distribution to decipher epigenetic relationships between nuclear structure and cell phenotype, and to unravel the mechanisms linking nuclear structure and function. We have developed imaging methods to quantify the distribution of the fluorescently-stained nuclear protein NuMA in different mammary phenotypes obtained using three-dimensional cell culture. Automated image segmentation of DAPI-stained nuclei was used to isolate thousands of nuclei from three-dimensional confocal images. Prominent features of fluorescently-stained NuMA were detected using a novel local bright feature analysis technique, and their normalized spatial density calculated as a function of the distance from the nuclear perimeter to its center. The results revealed marked changes in the distribution of the density of NuMA bright features as non-neoplastic cells underwent phenotypically normal acinar morphogenesis. In contrast, we did not detect any reorganization of NuMA during the formation of tumor nodules by malignant cells. Importantly, the analysis also discriminated proliferating non-neoplastic cells from proliferating malignant cells, suggesting that these imaging methods are capable of identifying alterations linked not only to the proliferation status but also to the malignant character of cells. We believe that this quantitative analysis will have additional applications for classifying normal and pathological tissues.

  18. Automated Detection of P. falciparum Using Machine Learning Algorithms with Quantitative Phase Images of Unstained Cells

    Science.gov (United States)

    Park, Han Sang; Rinehart, Matthew T.; Walzer, Katelyn A.; Chi, Jen-Tsan Ashley; Wax, Adam

    2016-01-01

    Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that heavily relies on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, each descriptor does not enable separation of populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy of up to 99.7% in detecting schizont stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0%-66.8% sensitivity, with early trophozoites most often mistaken for late trophozoite or schizont stage, and late trophozoite and schizont stage most often confused for each other. Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection.
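Of the classifiers compared, k-nearest neighbour is the simplest to sketch. The toy example below runs on synthetic two-class descriptor vectors, not the paper's phase-image features; it shows the basic distance-and-vote mechanism only.

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=5):
    """k-nearest-neighbour labels: Euclidean distance, majority vote."""
    # Pairwise distances between every test and training sample.
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]      # indices of the k closest
    votes = train_y[nn]
    return np.array([np.bincount(v).argmax() for v in votes])
```

In practice such a classifier would be evaluated with cross-validation and per-class sensitivity/specificity, as the study does for LDC, LR and NNC.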

  19. Automated Detection of P. falciparum Using Machine Learning Algorithms with Quantitative Phase Images of Unstained Cells.

    Science.gov (United States)

    Park, Han Sang; Rinehart, Matthew T; Walzer, Katelyn A; Chi, Jen-Tsan Ashley; Wax, Adam

    2016-01-01

    Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that heavily relies on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, each descriptor does not enable separation of populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy of up to 99.7% in detecting schizont stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0%-66.8% sensitivity, with early trophozoites most often mistaken for late trophozoite or schizont stage, and late trophozoite and schizont stage most often confused for each other. Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection.

  20. Evaluation of automated image registration algorithm for image-guided radiotherapy (IGRT)

    International Nuclear Information System (INIS)

    The performance of an image registration (IR) software was evaluated for automatically detecting known errors simulated through the movement of ExactCouch using an onboard imager. Twenty-seven set-up errors (11 translations, 10 rotations, 6 combined translation and rotation) were simulated by introducing offsets of up to ±15 mm in the three principal axes and 0° to ±1° in yaw. For every simulated error, orthogonal kV radiographs and cone-beam CT were acquired in half-fan (CBCTHF) and full-fan (CBCTFF) mode. The orthogonal radiographs and CBCTs were automatically co-registered to reference digitally reconstructed radiographs (DRRs) and the planning CT using 2D–2D and 3D–3D matching software based on mutual information transformation. A total of 79 image sets (ten pairs of kV X-rays and 69 sessions of CBCT) were analyzed to determine (a) the reproducibility of the IR outcome and (b) the residual error, defined as the deviation between the known displacement and that detected by the IR software, in translation and rotation. The reproducibility of automatic IR of the planning CT and repeat CBCTs, taken with and without kilovoltage detector and kilovoltage X-ray source arm movement, was excellent, with a mean SD of 0.1 mm in translation and 0.0° in rotation. The average residual errors in translation and rotation were within ±0.5 mm and ±0.2°, ±0.9 mm and ±0.3°, and ±0.4 mm and ±0.2° for set-ups simulated in translation only, rotation only, and both translation and rotation, respectively. The mean (SD) 3D vector was largest when only translational error was simulated and was 1.7 (1.1) mm for the 2D–2D match of reference DRRs with radiographs, and 1.4 (0.6) and 1.3 (0.5) mm for the 3D–3D match of the reference CT with full-fan and half-fan CBCT, respectively. In conclusion, the image-guided radiation therapy (IGRT) system is accurate within 1.8 mm and 0.4° and reproducible under controlled conditions. The inherent error of any IGRT process should be taken into account when setting a clinical IGRT protocol.
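The residual-error and 3D-vector statistics reported above amount to simple arithmetic on known versus detected shifts. A sketch with made-up numbers (the arrays are illustrative, not the study's data):

```python
import numpy as np

def registration_residuals(known, detected):
    """Residual set-up error: per-axis deviation and the mean/SD of the
    3D vector length between known and detected shifts (same units, e.g. mm)."""
    known = np.asarray(known, float)
    detected = np.asarray(detected, float)
    residual = detected - known            # per-axis deviation
    vec3d = np.linalg.norm(residual, axis=1)
    return residual, vec3d.mean(), vec3d.std()
```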

  1. Primary histologic diagnosis using automated whole slide imaging: a validation study

    Directory of Open Access Journals (Sweden)

    Jukic Drazen M

    2006-04-01

    Full Text Available Abstract Background Only prototypes 5 years ago, high-speed, automated whole slide imaging (WSI) systems (also called digital slide systems, virtual microscopes or wide field imagers) are becoming increasingly capable and robust. Modern devices can capture a slide in 5 minutes at spatial sampling periods of less than 0.5 micron/pixel. The capacity to rapidly digitize large numbers of slides should eventually have a profound, positive impact on pathology. It is important, however, that pathologists validate these systems during development, not only to identify their limitations but to guide their evolution. Methods Three pathologists fully signed out 25 cases representing 31 parts. The laboratory information system was used to simulate real-world sign-out conditions, including entering a full diagnostic field and comment (when appropriate) and ordering special stains and recuts. For each case, discrepancies between diagnoses were documented by committee and a "consensus" report was formed and then compared with the microscope-based, sign-out report from the clinical archive. Results In 17 of 25 cases there were no discrepancies between the individual study pathologist reports. In the 8 remaining cases, there were 12 discrepancies, including 3 in which image quality could be at least partially implicated. When the WSI consensus diagnoses were compared with the original sign-out diagnoses, no significant discrepancies were found. Full text of the pathologist reports, the WSI consensus diagnoses, and the original sign-out diagnoses are available as an attachment to this publication. Conclusion The results indicated that the image information contained in current whole slide images is sufficient for pathologists to make reliable diagnostic decisions and compose complex diagnostic reports. This is a very positive result; however, this does not mean that WSI is as good as a microscope.
Virtually every slide had focal areas in which image quality (focus

  2. Automated parasite faecal egg counting using fluorescence labelling, smartphone image capture and computational image analysis.

    Science.gov (United States)

    Slusarewicz, Paul; Pagano, Stefanie; Mills, Christopher; Popa, Gabriel; Chow, K Martin; Mendenhall, Michael; Rodgers, David W; Nielsen, Martin K

    2016-07-01

    Intestinal parasites are a concern in veterinary medicine worldwide and for human health in the developing world. Infections are identified by microscopic visualisation of parasite eggs in faeces, which is time-consuming, requires technical expertise and is impractical for use on-site. For these reasons, recommendations for parasite surveillance are not widely adopted, and parasite control is based on administration of rote prophylactic treatments with anthelmintic drugs. This approach is known to promote anthelmintic resistance, so there is a pronounced need for a convenient egg counting assay to promote good clinical practice. Using a fluorescent chitin-binding protein, we show that this structural carbohydrate is present and accessible in the shells of ova of strongyle, ascarid, trichurid and coccidian parasites. Furthermore, we show that a cellular smartphone can be used as an inexpensive device to image fluorescent eggs and, by harnessing the computational power of the phone, to perform image analysis to count the eggs. Strongyle egg counts generated by the smartphone system had a significant linear correlation with manual McMaster counts (R² = 0.98), but with a significantly lower coefficient of variation (P = 0.0177). Furthermore, the system was capable of differentiating equine strongyle and ascarid eggs similarly to the McMaster method, but with significantly lower coefficients of variation. These results demonstrate the potential of smartphones as relatively sophisticated, inexpensive and portable medical diagnostic devices. PMID:27025771
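The comparison metrics used above, the coefficient of determination of a linear fit against McMaster counts and the coefficient of variation across replicate counts, are standard; a generic sketch on synthetic numbers:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a least-squares line y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return 1.0 - resid.var() / y.var()

def coefficient_of_variation(replicates):
    """CV (%) across replicate egg counts of the same sample."""
    r = np.asarray(replicates, float)
    return 100.0 * r.std(ddof=1) / r.mean()
```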

  3. Automated detection and segmentation of synaptic contacts in nearly isotropic serial electron microscopy images.

    Directory of Open Access Journals (Sweden)

    Anna Kreshuk

    Full Text Available We describe a protocol for fully automated detection and segmentation of asymmetric, presumed excitatory, synapses in serial electron microscopy images of the adult mammalian cerebral cortex, taken with the focused ion beam scanning electron microscope (FIB/SEM). The procedure is based on interactive machine learning and only requires a few labeled synapses for training. The statistical learning is performed on geometrical features of 3D neighborhoods of each voxel and can fully exploit the high z-resolution of the data. On a quantitative validation dataset of 111 synapses in 409 images of 1948×1342 pixels, with manual annotations by three independent experts, the error rate of the algorithm was found to be comparable to that of the experts (0.92 recall at 0.89 precision). Our software offers a convenient interface for labeling the training data and the possibility to visualize and proofread the results in 3D. The source code, the test dataset and the ground truth annotation are freely available on the website http://www.ilastik.org/synapse-detection.

  4. Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis

    Science.gov (United States)

    Chung, Howard; Cobzas, Dana; Birdsell, Laura; Lieffers, Jessica; Baracos, Vickie

    2009-02-01

    The ability to compute body composition in cancer patients lends itself to determining the specific clinical outcomes associated with fat and lean tissue stores. For example, a wasting syndrome of advanced disease associates with shortened survival. Moreover, certain tissue compartments represent sites for drug distribution and are likely determinants of chemotherapy efficacy and toxicity. CT images are abundant, but these cannot be fully exploited unless there exist practical and fast approaches for tissue quantification. Here we propose a fully automated method for segmenting muscle, visceral and subcutaneous adipose tissues, taking the approach of shape modeling for the analysis of skeletal muscle. Muscle shape is represented using PCA-encoded Free Form Deformations with respect to a mean shape. The shape model is learned from manually segmented images and used in conjunction with a tissue appearance prior. VAT and SAT are segmented based on the final deformed muscle shape. In comparing the automatic and manual methods, coefficients of variation (COV) of 1-2% were similar to or smaller than the inter- and intra-observer COVs reported for manual segmentation.
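The muscle-shape prior is a PCA model over deformations. A stripped-down sketch of fitting such a model to flattened landmark coordinates and reconstructing a shape from its mode coefficients; this omits the Free Form Deformation encoding and the appearance prior, and all names are illustrative:

```python
import numpy as np

def fit_shape_model(shapes, n_modes=2):
    """PCA shape model: mean shape plus the top deformation modes.
    `shapes` is (n_samples, n_points * 2) of flattened landmark coords."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return mean, Vt[:n_modes]

def project(shape, mean, modes):
    """Encode a shape as mode coefficients, then reconstruct it."""
    b = modes @ (shape - mean)             # coefficients along each mode
    return mean + modes.T @ b
```

Restricting `b` to a plausible range (e.g. a few standard deviations per mode) is what keeps a model-driven segmentation from producing anatomically implausible muscle outlines.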

  5. Automated image analysis reveals the dynamic 3-dimensional organization of multi-ciliary arrays

    Directory of Open Access Journals (Sweden)

    Domenico F. Galati

    2016-01-01

    Full Text Available Multi-ciliated cells (MCCs) use polarized fields of undulating cilia (the ciliary array) to produce fluid flow that is essential for many biological processes. Cilia are positioned by microtubule scaffolds called basal bodies (BBs) that are arranged within a spatially complex 3-dimensional (3D) geometry. Here, we develop a robust and automated computational image analysis routine to quantify 3D BB organization in the ciliate Tetrahymena thermophila. Using this routine, we generate the first morphologically constrained 3D reconstructions of Tetrahymena cells and elucidate rules that govern the kinetics of MCC organization. We demonstrate the interplay between BB duplication and cell size expansion through the cell cycle. In mutant cells, we identify a potential BB surveillance mechanism that balances large gaps in BB spacing by increasing the frequency of closely spaced BBs in other regions of the cell. Finally, by taking advantage of a mutant predisposed to BB disorganization, we locate the spatial domains that are most prone to disorganization by environmental stimuli. Collectively, our analyses reveal the importance of quantitative image analysis for understanding the principles that guide the 3D organization of MCCs.

  6. Machine Learning Approach to Automated Quality Identification of Human Induced Pluripotent Stem Cell Colony Images.

    Science.gov (United States)

    Joutsijoki, Henry; Haponen, Markus; Rasku, Jyrki; Aalto-Setälä, Katriina; Juhola, Martti

    2016-01-01

    The focus of this research is on automated identification of the quality of human induced pluripotent stem cell (iPSC) colony images. iPS cell technology is a contemporary method by which the patient's cells are reprogrammed back to stem cells and can be differentiated into any desired cell type. iPS cell technology will be used in the future for patient-specific drug screening, disease modeling, and tissue repair, for instance. However, there are technical challenges before iPS cell technology can be used in practice, and one of them is the quality control of growing iPSC colonies, which is currently done manually but is an unfeasible solution in large-scale cultures. The monitoring problem reduces to an image analysis and classification problem. In this paper, we tackle this problem using machine learning methods, such as multiclass Support Vector Machines, and several baseline methods, together with Scale-Invariant Feature Transform (SIFT)-based features. We perform over 80 test arrangements and do a thorough parameter value search. The best accuracy (62.4%) for classification was obtained by using a k-NN classifier, showing improved accuracy compared to earlier studies. PMID:27493680

  8. Automated Waterline Detection in the Wadden Sea Using High-Resolution TerraSAR-X Images

    Directory of Open Access Journals (Sweden)

    Stefan Wiehle

    2015-01-01

    Full Text Available We present an algorithm for automatic detection of the land-water line from TerraSAR-X images acquired over the Wadden Sea. In this coastal region of the southeastern North Sea, a strip of up to 20 km of seabed falls dry during low tide, revealing mudflats and tidal creeks. The tidal currents transport sediments and can change the coastal shape with erosion rates of several meters per month. This rate can be strongly increased by storm surges, which also cause flooding of usually dry areas. Due to the high number of ships traveling through the Wadden Sea to the largest ports of Germany, frequent monitoring of the bathymetry is also an important task for maritime security. For such an extended area and the required short intervals of a few months, only remote sensing methods can perform this task efficiently. Automating the waterline detection in weather-independent radar images provides a fast and reliable way to spot changes in the coastal topography. The presented algorithm first performs smoothing, brightness thresholding, and edge detection. In the second step, edge drawing and flood filling are performed iteratively to determine optimal thresholds for the edge drawing. In the last step, small misdetections are removed.
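
    The first stage of the pipeline (smoothing, brightness thresholding, boundary extraction) can be illustrated with a deliberately tiny pure-Python sketch; the actual algorithm operates on full SAR scenes and adds edge drawing and flood filling, which are omitted here:

```python
def smooth3x3(img):
    """3x3 mean filter with clamped borders (simple speckle smoothing)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

def waterline(img, t):
    """Threshold brightness into a land/water mask, then return the set of
    land pixels that touch a water pixel: a crude detected waterline."""
    h, w = len(img), len(img[0])
    land = [[img[y][x] >= t for x in range(w)] for y in range(h)]
    line = set()
    for y in range(h):
        for x in range(w):
            if land[y][x]:
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not land[ny][nx]:
                        line.add((y, x))
    return line
```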

  9. Comparison of manual direct and automated indirect measurement of hippocampus using magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Giesel, Frederik L. [Department of Radiology, German Cancer Research Center (Germany); MRI Unit, Department of Radiology, Sheffield (United Kingdom)], E-mail: f.giesel@dkfz.de; Thomann, Philipp A. [Section of Geriatric Psychiatry, University of Heidelberg (Germany); Hahn, Horst K. [MeVis, Bremen (Germany); Politi, Maria [Neuroradiology, Homburg/Saar (Germany); Stieltjes, Bram; Weber, Marc-Andre [Department of Radiology, German Cancer Research Center (Germany); Pantel, Johannes [Department of Psychiatry, University of Frankfurt (Germany); Wilkinson, I.D.; Griffiths, Paul D. [MRI Unit, Department of Radiology, Sheffield (United Kingdom); Schroeder, Johannes [Section of Geriatric Psychiatry, University of Heidelberg (Germany); Essig, Marco [Department of Radiology, German Cancer Research Center (Germany)

    2008-05-15

    Purpose: Objective quantification of brain structure can aid diagnosis and therapeutic monitoring in several neuropsychiatric disorders. In this study, we aimed to compare direct and indirect quantification approaches for hippocampal formation changes in patients with mild cognitive impairment and Alzheimer's disease (AD). Methods and materials: Twenty-one healthy volunteers (mean age: 66.2), 21 patients with mild cognitive impairment (mean age: 66.6), and 10 patients with AD (mean age: 65.1) were enrolled. All subjects underwent extensive neuropsychological testing and were imaged at 1.5 T (Vision, Siemens, Germany; T1w coronal, TR = 4 ms, Flip = 13 deg., FOV = 250 mm, Matrix = 256 x 256, 128 contiguous slices, 1.8 mm). Direct measurement of the hippocampal formation was performed on coronal slices using a standardized protocol, while indirect temporal horn volume (THV) was calculated using a watershed-algorithm-based software package (MeVis, Germany). Manual tracing took about 30 min; semi-automated measurement took less than 3 min. Results: Successful direct and indirect quantification was performed in all subjects. A significant volume difference was found between controls and AD patients (p < 0.001) with both the manual and the semi-automated approach. Group analysis showed a slight but not significant decrease of hippocampal volume and increase of temporal horn volume (THV) for subjects with mild cognitive impairment compared to volunteers (p < 0.07). A significant correlation (p < 0.001) between direct and indirect measurements was found. Conclusion: The presented indirect approach for hippocampus volumetry is equivalent to the direct approach and offers the advantages of observer independence and time reduction, and is thus useful for clinical routine.
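
    The equivalence claim rests on correlating the direct and indirect volume measurements. As a reminder of the underlying statistic (not the study's actual code or data), Pearson's r over paired measurements can be computed as:

```python
import math
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between paired measurements, e.g. direct
    hippocampal volumes vs. indirect temporal-horn volumes per subject."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)
```

Note that for atrophy studies the two measures may correlate negatively (smaller hippocampus, larger temporal horn); the sign does not affect significance testing.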

  10. Automated coronary artery calcification detection on low-dose chest CT images

    Science.gov (United States)

    Xie, Yiting; Cham, Matthew D.; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    Coronary artery calcification (CAC) measurement from low-dose CT images can be used to assess the risk of coronary artery disease. A fully automatic algorithm to detect and measure CAC from low-dose, non-contrast, non-ECG-gated chest CT scans is presented. Based on the automatically detected CAC, the Agatston score (AS), mass score, and volume score were computed. These were compared with scores obtained manually from standard-dose ECG-gated scans and low-dose un-gated scans of the same patient. The automatic algorithm segments the heart region based on other pre-segmented organs to provide a coronary region mask. Mitral valve and aortic valve calcification is identified and excluded. All remaining voxels greater than 180 HU within the mask region are considered CAC candidates. The heart segmentation algorithm was evaluated on 400 non-contrast cases with both low-dose and regular-dose CT scans. By visual inspection, 371 (92.8%) of the segmentations were acceptable. The automated CAC detection algorithm was evaluated on 41 low-dose non-contrast CT scans. Manual markings were performed on both low-dose and standard-dose scans for these cases. Using linear regression, the correlation of the automatic AS with the standard-dose manual scores was 0.86; with the low-dose manual scores the correlation was 0.91. Standard risk categories were also computed. The automated method's risk category agreed with the manual markings of gated scans for 24 cases, while 15 cases were one category off. For low-dose scans, the automatic method agreed in 33 cases, while 7 cases were one category off.
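
    The paper does not show how it computes the Agatston score; the conventional definition sums, over each detected lesion, the lesion area weighted by a factor derived from its peak attenuation. A sketch of that standard formula (the paper's 180 HU figure is its candidate-voxel threshold; the weight bins below are the classic Agatston ones, not taken from this paper):

```python
def agatston_weight(max_hu):
    """Standard Agatston density factor from a lesion's peak attenuation."""
    if max_hu >= 400:
        return 4
    if max_hu >= 300:
        return 3
    if max_hu >= 200:
        return 2
    if max_hu >= 130:
        return 1
    return 0

def agatston_score(lesions, pixel_area_mm2):
    """Sum over lesions of (area in mm^2) * density weight.
    `lesions` is a list of (pixel_count, max_hu) pairs, one per
    per-slice lesion."""
    return sum(n * pixel_area_mm2 * agatston_weight(m) for n, m in lesions)
```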

  11. Comparison of Automated Image-Based Grain Sizing to Standard Pebble Count Methods

    Science.gov (United States)

    Strom, K. B.

    2009-12-01

    This study explores the use of an automated, image-based method for characterizing grain-size distributions (GSDs) of exposed, open-framework gravel beds. This was done by comparing the GSDs measured with an image-based method to distributions obtained with two pebble-count methods. Selection of grains for the two pebble-count methods was carried out using a gridded sampling frame and the heel-to-toe Wolman walk method at six field sites. At each site, 500-particle pebble-count samples were collected with each of the two pebble-count methods, and digital images were systematically collected over the same sampling area. For the methods used, the pebble counts collected with the gridded sampling frame were assumed to be the most accurate representations of the true grain-size population, and results from the image-based method were compared to the grid-derived GSDs for accuracy estimates; comparisons between the grid and Wolman walk methods were conducted to give an indication of possible variation between commonly used methods at each particular field site. Comparisons of grain size were made at two spatial scales. At the larger scale, results from the image-based method were integrated over the sampling area required to collect the 500-particle pebble-count samples. At the smaller sampling scale, the image-derived GSDs were compared to those from 100-particle pebble-count samples obtained with the gridded sampling frame. The comparisons show that the image-based method performed reasonably well on five of the six study sites. For those five sites, the image-based method slightly underestimated all grain-size percentiles relative to the pebble counts collected with the gridded sampling frame. The average bias for Ψ5, Ψ50, and Ψ95 between the image and grid count methods at the larger sampling scale was 0.07Ψ, 0.04Ψ, and 0.19Ψ, respectively; at the smaller sampling scale the average bias was 0.004Ψ, 0.03Ψ, and 0.18Ψ, respectively. The average bias between the

  12. An automated classification system for the differentiation of obstructive lung diseases based on the textural analysis of HRCT images

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seong Hoon; Seo, Joon Beom; Kim, Nam Kug; Lee, Young Kyung; Kim, Song Soo; Chae, Eun Jin [University of Ulsan, College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Lee, June Goo [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2007-07-15

    To develop an automated classification system for the differentiation of obstructive lung diseases based on the textural analysis of HRCT images, and to evaluate the accuracy and usefulness of the system. For textural analysis, histogram features, gradient features, run-length encoding, and a co-occurrence matrix were employed. A Bayesian classifier was used for automated classification. The images (image number n = 256) were selected from the HRCT images obtained from 17 healthy subjects (n = 67), 26 patients with bronchiolitis obliterans (n = 70), 28 patients with mild centrilobular emphysema (n = 65), and 21 patients with panlobular emphysema or severe centrilobular emphysema (n = 63). A five-fold cross-validation method was used to assess the performance of the system. Class-specific sensitivities were analyzed and the overall accuracy of the system was assessed with kappa statistics. The sensitivity of the system for each class was as follows: normal lung 84.9%, bronchiolitis obliterans 83.8%, mild centrilobular emphysema 77.0%, and panlobular emphysema or severe centrilobular emphysema 95.8%. The overall performance in differentiating each disease and the normal lung was satisfactory, with a kappa value of 0.779. An automated classification system for the differentiation of obstructive lung diseases based on the textural analysis of HRCT images was developed. The proposed system discriminates well between the various obstructive lung diseases and the normal lung.
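
    Among the texture features listed, the gray-level co-occurrence matrix is the least self-explanatory: it counts how often pairs of gray levels occur at a fixed pixel offset. A minimal sketch for one offset (illustrative only; the study's feature set and quantization are not given):

```python
def cooccurrence(img, levels, dy=0, dx=1):
    """Gray-level co-occurrence matrix for a single pixel offset (dy, dx).
    `img` is a 2D list of integer gray levels in range(levels)."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[img[y][x]][img[ny][nx]] += 1
    return m
```

Texture statistics such as contrast, energy, and homogeneity are then derived from the normalized matrix and fed to the classifier.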

  13. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    Science.gov (United States)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, disaster response requires rapid access to large data volumes, substantial storage space, and high-performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation covers work being conducted by the Applied Sciences Program Office at NASA Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data were developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open-source process code on a local prototype platform, and then transitioning this code, with its associated environment requirements, into an analogous but memory- and processor-enhanced cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions, and then
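
    Of the processing functions named, NDVI is the simplest to state precisely: NDVI = (NIR - Red) / (NIR + Red), computed per pixel. A small sketch (our own toy code, not the project's pipeline):

```python
def ndvi(nir, red, eps=1e-12):
    """Per-pixel Normalized Difference Vegetation Index:
    NDVI = (NIR - Red) / (NIR + Red). `eps` guards against
    division by zero on dark pixels."""
    return [[(n - r) / (n + r + eps) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir, red)]
```

NDMI follows the same normalized-difference pattern with the NIR and shortwave-infrared bands swapped in.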

  14. Automated method for the rapid and precise estimation of adherent cell culture characteristics from phase contrast microscopy images.

    Science.gov (United States)

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-03-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models, and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times. Source code for MATLAB and ImageJ is freely available under a permissive open-source license.
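
    The core idea of local contrast thresholding is that cells in PCM images are textured while background is flat, so a pixel is marked cellular when the intensity range in its local window exceeds a threshold. A deliberately tiny sketch of that idea (our own simplification; the paper's method adds the halo correction this sketch omits):

```python
def local_contrast_mask(img, win=1, t=2.0):
    """Mark pixels whose local intensity range (max - min over a
    (2*win+1)^2 window, clipped at image borders) exceeds t."""
    h, w = len(img), len(img[0])
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - win), min(h, y + win + 1))
                    for xx in range(max(0, x - win), min(w, x + win + 1))]
            out[y][x] = (max(vals) - min(vals)) > t
    return out
```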

  15. Automated gas bubble imaging at sea floor – a new method of in situ gas flux quantification

    Directory of Open Access Journals (Sweden)

    K. Thomanek

    2010-02-01

    Full Text Available Photo-optical systems are common in marine sciences and have been extensively used in coastal and deep-sea research. However, due to technical limitations in the past, photo images had to be processed manually or semi-automatically. Recent advances in technology have rapidly improved image recording, storage, and processing capabilities, which are used in a new concept of automated in situ gas quantification by photo-optical detection. The design for an in situ high-speed image acquisition and automated data processing system ("Bubblemeter") is reported. New strategies have been followed with regard to back-light illumination, bubble extraction, automated image processing, and data management. This paper presents the design of the novel method, its validation procedures, and calibration experiments. The system will be positioned at and recovered from the sea floor using a remotely operated vehicle (ROV). It is able to measure bubble flux rates up to 10 L/min with a maximum error of 33% for worst-case conditions. The Bubblemeter has been successfully deployed at a water depth of 1023 m at the Makran accretionary prism offshore Pakistan during a research expedition with R/V Meteor in November 2007.

  16. Precision automation of cell type classification and sub-cellular fluorescence quantification from laser scanning confocal images

    Directory of Open Access Journals (Sweden)

    Hardy Craig Hall

    2016-02-01

    Full Text Available While novel whole-plant phenotyping technologies have been successfully implemented into functional genomics and breeding programs, the potential of automated phenotyping with cellular resolution is largely unexploited. Laser scanning confocal microscopy has the potential to close this gap by providing spatially highly resolved images containing anatomic as well as chemical information on a subcellular basis. However, in the absence of automated methods, the assessment of the spatial patterns and abundance of fluorescent markers with subcellular resolution is still largely qualitative and time-consuming. Recent advances in image acquisition and analysis, coupled with improvements in microprocessor performance, have brought such automated methods within reach, so that information from thousands of cells per image for hundreds of images may be derived in an experimentally convenient time-frame. Here, we present a MATLAB-based analytical pipeline to (1) segment radial plant organs into individual cells, (2) classify cells into cell type categories based upon random forest classification, (3) divide each cell into sub-regions, and (4) quantify fluorescence intensity to a subcellular degree of precision for a separate fluorescence channel. In this research advance, we demonstrate the precision of this analytical process for the relatively complex tissues of Arabidopsis hypocotyls at various stages of development. High speed and robustness make our approach suitable for phenotyping of large collections of stem-like material and other tissue types.

  17. Experimental saltwater intrusion in coastal aquifers using automated image analysis: Applications to homogeneous aquifers

    Science.gov (United States)

    Robinson, G.; Ahmed, Ashraf A.; Hamill, G. A.

    2016-07-01

    This paper presents the applications of a novel methodology to quantify saltwater intrusion parameters in laboratory-scale experiments. The methodology uses an automated image analysis procedure, minimising manual inputs and the subsequent systematic errors that can be introduced. This allowed the quantification of the width of the mixing zone, which is difficult to measure in experimental methods that are based on visual observations. Glass beads of different grain sizes were tested for both steady-state and transient conditions. The transient results showed good correlation between experimental and numerical intrusion rates. The experimental intrusion rates revealed that the saltwater wedge reached a steady-state condition sooner while receding than while advancing. The hydrodynamics of the experimental mixing zone exhibited similar traits; a greater increase in the width of the mixing zone was observed in the receding saltwater wedge, which indicates faster fluid velocities and higher dispersion. The angle-of-intrusion analysis revealed the formation of a volume of diluted saltwater at the toe position when the saltwater wedge is prompted to recede. In addition, results of different physical repeats of the experiment produced an average coefficient of variation of less than 0.18 for the measured toe length and width of the mixing zone.
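
    The repeatability figure quoted above is a coefficient of variation, i.e. the sample standard deviation divided by the mean of the repeated measurements. For reference (our own helper, not the paper's code):

```python
import statistics

def coefficient_of_variation(samples):
    """CV = sample standard deviation / mean, a dimensionless measure of
    repeatability across physical repeats (e.g. of toe length or
    mixing-zone width)."""
    return statistics.stdev(samples) / statistics.fmean(samples)
```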

  18. Open-Source Automated Parahydrogen Hyperpolarizer for Molecular Imaging Using (13)C Metabolic Contrast Agents.

    Science.gov (United States)

    Coffey, Aaron M; Shchepin, Roman V; Truong, Milton L; Wilkens, Ken; Pham, Wellington; Chekmenev, Eduard Y

    2016-08-16

    An open-source hyperpolarizer producing (13)C hyperpolarized contrast agents using parahydrogen induced polarization (PHIP) for biomedical and other applications is presented. This PHIP hyperpolarizer utilizes an Arduino microcontroller in conjunction with a readily modified graphical user interface written in the open-source Processing software environment to completely control the PHIP hyperpolarization process, including remotely triggering an NMR spectrometer for efficient production of payloads of hyperpolarized contrast agent and in situ quality assurance of the produced hyperpolarization. Key advantages of this hyperpolarizer include: (i) use of open-source software and hardware, seamlessly allowing for replication and further improvement as well as readily customizable integration with other NMR spectrometers or MRI scanners (i.e., this is a multiplatform design), (ii) relatively low cost and robustness, and (iii) in situ detection capability and complete automation. The device performance is demonstrated by production of a dose (∼2-3 mL) of hyperpolarized (13)C-succinate with %P13C ∼ 28% and 30 mM concentration and (13)C-phospholactate at %P13C ∼ 15% and 25 mM concentration in aqueous medium. These contrast agents are used for ultrafast molecular imaging and spectroscopy at 4.7 and 0.0475 T. In particular, the conversion of hyperpolarized (13)C-phospholactate to (13)C-lactate in vivo is used here to demonstrate the feasibility of ultrafast multislice (13)C MRI after tail vein injection of hyperpolarized (13)C-phospholactate in mice. PMID:27478927

  19. Automated MALDI matrix deposition method with inkjet printing for imaging mass spectrometry.

    Science.gov (United States)

    Baluya, Dodge L; Garrett, Timothy J; Yost, Richard A

    2007-09-01

    Careful matrix deposition on tissue samples for matrix-assisted laser desorption/ionization (MALDI) is critical for producing reproducible analyte ion signals. Traditional methods for matrix deposition are often considered an art rather than a science, with significant sample-to-sample variability. Here we report an automated method for matrix deposition employing a desktop inkjet printer. The printer tray, designed to hold CDs and DVDs, was modified to hold microscope slides. Empty ink cartridges were filled with MALDI matrix solutions, including DHB in methanol/water (70:30) at concentrations up to 40 mg/mL. Various samples (including rat brain tissue sections and standards of small drug molecules) were prepared using three deposition methods (electrospray, airbrush, inkjet). A linear ion trap equipped with an intermediate-pressure MALDI source was used for analyses. Optical microscopic examination showed that matrix crystals were formed evenly across the sample. There was minimal background signal after storing the matrix in the cartridges over a 6-month period. Overall, the mass spectral images gathered from inkjet-printed tissue specimens were of better quality and more reproducible than those from specimens prepared by the electrospray and airbrush methods.

  20. Automated MALDI Matrix Coating System for Multiple Tissue Samples for Imaging Mass Spectrometry

    Science.gov (United States)

    Mounfield, William P.; Garrett, Timothy J.

    2012-03-01

    Uniform matrix deposition on tissue samples for matrix-assisted laser desorption/ionization (MALDI) is key for reproducible analyte ion signals. Current methods often result in nonhomogenous matrix deposition, and take time and effort to produce acceptable ion signals. Here we describe a fully-automated method for matrix deposition using an enclosed spray chamber and spray nozzle for matrix solution delivery. A commercial air-atomizing spray nozzle was modified and combined with solenoid controlled valves and a Programmable Logic Controller (PLC) to control and deliver the matrix solution. A spray chamber was employed to contain the nozzle, sample, and atomized matrix solution stream, and to prevent any interference from outside conditions as well as allow complete control of the sample environment. A gravity cup was filled with MALDI matrix solutions, including DHB in chloroform/methanol (50:50) at concentrations up to 60 mg/mL. Various samples (including rat brain tissue sections) were prepared using two deposition methods (spray chamber, inkjet). A linear ion trap equipped with an intermediate-pressure MALDI source was used for analyses. Optical microscopic examination showed a uniform coating of matrix crystals across the sample. Overall, the mass spectral images gathered from tissues coated using the spray chamber system were of better quality and more reproducible than from tissue specimens prepared by the inkjet deposition method.

  1. Computerized method for automated measurement of thickness of cerebral cortex for 3-D MR images

    Science.gov (United States)

    Arimura, Hidetaka; Yoshiura, Takashi; Kumazawa, Seiji; Koga, Hiroshi; Sakai, Shuji; Mihara, Futoshi; Honda, Hiroshi; Ohki, Masafumi; Toyofuku, Fukai; Higashida, Yoshiharu

    2006-03-01

    Alzheimer's disease (AD) is associated with degeneration of the cerebral cortex, which results in focal volume change or thinning of the cerebral cortex in magnetic resonance imaging (MRI). Therefore, measurement of cortical thickness is important for detection of the atrophy related to AD. Our purpose was to develop a computerized method for automated measurement of cortical thickness in three-dimensional (3-D) MRI. The cortical thickness was measured with normal vectors from the white matter surface to the cortical gray matter surface on a voxel-by-voxel basis. First, the head region was segmented by use of an automatic thresholding technique, and then the head region was separated into the cranium region and brain region by means of multiple gray-level thresholding while monitoring the ratio of the first maximum volume to the second one. Next, a fine white matter region was determined with a level set method initialized from the rough white matter region extracted from the brain region. Finally, the cortical thickness was measured by extending normal vectors from the white matter surface to the gray matter surface (brain surface) on a voxel-by-voxel basis. We applied the computerized method to high-resolution 3-D T1-weighted images of the whole brains of 7 clinically diagnosed AD patients and 8 healthy subjects. The average cortical thicknesses in the upper slices for AD patients were thinner than those for non-AD subjects, whereas the average cortical thicknesses in the lower slices for most AD patients were only slightly thinner. Our preliminary results suggest that the MRI-based computerized measurement of gray matter atrophy is promising for detecting AD.
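
    The thickness measurement amounts to marching from a white-matter surface point along the surface normal and accumulating distance while the sampled position remains in gray matter. A schematic sketch of that march (our own simplification; the actual method works on segmented voxel data, and `is_gray`, `step`, and `max_mm` are hypothetical names):

```python
def thickness_along_normal(is_gray, start, normal, step=0.5, max_mm=10.0):
    """March from a white-matter surface point `start` along the unit
    `normal`, accumulating distance while `is_gray(point)` holds.
    Returns the accumulated distance (a crude cortical thickness)."""
    t = 0.0
    while t < max_mm:
        p = tuple(s + (t + step) * n for s, n in zip(start, normal))
        if not is_gray(p):
            break
        t += step
    return t
```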

  2. Automated Synthesis of 18F-Fluoropropoxytryptophan for Amino Acid Transporter System Imaging

    Directory of Open Access Journals (Sweden)

    I-Hong Shih

    2014-01-01

    Full Text Available Objective. This study aimed to develop a cGMP-grade [18F]fluoropropoxytryptophan (18F-FTP) to assess tryptophan transporters using an automated synthesizer. Methods. Tosylpropoxytryptophan (Ts-TP) was reacted with K18F/kryptofix complex. After column purification, solvent evaporation, and hydrolysis, the identity and purity of the product were validated by radio-TLC (1 M ammonium acetate : methanol = 4 : 1) and HPLC (C-18 column, methanol : water = 7 : 3) analyses. In vitro cellular uptake of 18F-FTP and 18F-FDG was performed in human prostate cancer cells. PET imaging studies were performed with 18F-FTP and 18F-FDG in prostate and small cell lung tumor-bearing mice (3.7 MBq/mouse, iv). Results. Radio-TLC and HPLC analyses of 18F-FTP showed that the Rf and Rt values were 0.9 and 9 min, respectively. Radiochemical purity was >99%. The radiochemical yield was 37.7% (EOS 90 min, decay corrected). Cellular uptake of 18F-FTP and 18F-FDG showed enhanced uptake as a function of incubation time. PET imaging studies showed that 18F-FTP had less tumor uptake than 18F-FDG in the prostate cancer model. However, 18F-FTP had more uptake than 18F-FDG in the small cell lung cancer model. Conclusion. 18F-FTP could be synthesized with high radiochemical yield. Assessment of upregulated transporter activity by 18F-FTP may provide potential applications in differential diagnosis and prediction of early treatment response.

  3. Automated collection of imaging and phenotypic data to centralized and distributed data repositories

    Directory of Open Access Journals (Sweden)

    Margaret D King

    2014-06-01

    Full Text Available Accurate data collection at the ground level is vital to the integrity of neuroimaging research. Similarly important is the ability to connect and curate data in order to make it meaningful and sharable with other investigators. Collecting data, especially with several different modalities, can be time consuming and expensive. These issues have driven the development of automated collection of neuroimaging and clinical assessment data within COINS (Collaborative Informatics and Neuroimaging Suite). COINS is an end-to-end data management system. It provides a comprehensive platform for data collection, management, secure storage, and flexible data retrieval (Bockholt et al., 2010; Scott et al., 2011). Self Assessment (SA) is an application embedded in the Assessment Manager tool in COINS. It is an innovative tool that allows participants to fill out assessments via the web-based Participant Portal. It eliminates the need for paper collection and data entry by allowing participants to submit their assessments directly to COINS. After a queue has been created for the participant, they can access the Participant Portal via the internet to fill out their assessments. This allows them the flexibility to participate from home, a library, on site, etc. The collected data are stored in a PostgreSQL database at the Mind Research Network, behind a firewall to protect sensitive data. An added benefit of using COINS is the ability to collect, store, and share imaging data and assessment data with no interaction with outside tools or programs. All study data collected (imaging and assessment) are stored and exported with a participant's unique subject identifier, so there is no need to keep extra spreadsheets or databases to link and keep track of the data. There is a great need for data collection tools that limit human intervention and error. COINS aims to be a leader in database solutions for research studies collecting data from several different modalities.

  4. Automated synthesis of novel cell death imaging tracer 18F-FPDuramycin

    International Nuclear Information System (INIS)

    Background: The noninvasive imaging of cell death plays an important role in the evaluation of degenerative diseases and the monitoring of tumor treatment. Duramycin, a 19-amino-acid peptide produced by Streptoverticillium cinnamoneus, binds specifically to phosphatidylethanolamine (PE), a novel molecular target for cell death. Purpose: The aim was to develop a synthetic method to label duramycin with 18F. The automated synthesis was carried out by a multi-step procedure on the modified PET-MF-2V-IT-I synthesizer. Methods: First, the prosthetic group 4-nitrophenyl 2-[18F]fluoropropionate (18F-NFP) was automatically synthesized by a convenient three-step procedure. Second, 18F-FPDuramycin was synthesized by conjugation of 18F-NFP with duramycin and purified by a solid-phase extraction cartridge. An orthogonal test was performed to confirm the suitable reaction conditions (solvent, base, and temperature). Results: The radiochemical yield of 18F-NFP was (25±5)% (n=10, decay-uncorrected) based on [18F]fluoride in 80 min. 18F-FPDuramycin was obtained with a yield of (70±3)% (n=8, decay-uncorrected) based on 18F-NFP within 20 min. The radiochemical purity of 18F-FPDuramycin was greater than 99% and the specific activity was (23.7±13.7) GBq·μmol-1 (n=10). Conclusion: 18F-FPDuramycin injection is easy to prepare via a 'two-pot reaction' and is a promising radiotracer for clinical and scientific studies with positron emission tomography (PET) imaging. (authors)

  5. Quantification of Eosinophilic Granule Protein Deposition in Biopsies of Inflammatory Skin Diseases by Automated Image Analysis of Highly Sensitive Immunostaining

    Directory of Open Access Journals (Sweden)

    Peter Kiehl

    1999-01-01

    Full Text Available Eosinophilic granulocytes are major effector cells in inflammation. Extracellular deposition of toxic eosinophilic granule proteins (EGPs), but not the presence of intact eosinophils, is crucial for their functional effect in situ. As even recent morphometric approaches to quantifying the involvement of eosinophils in inflammation have been based only on cell counting, we developed a new method for the cell-independent quantification of EGPs by image analysis of immunostaining. Highly sensitive, automated immunohistochemistry was performed on paraffin sections of inflammatory skin diseases with 4 different primary antibodies against EGPs. Image analysis of the immunostaining was performed by colour translation, linear combination, and automated thresholding. Using strictly standardized protocols, the assay proved specific and accurate with respect to segmentation in 8916 fields of 520 sections, well reproducible in repeated measurements, and reliable over a 16-week observation period. The method may be valuable for the cell-independent segmentation of immunostaining in other applications as well.
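The colour translation, linear combination, and automated thresholding steps can be sketched as follows. The channel weights and the use of Otsu's method are illustrative assumptions; the abstract does not specify the exact linear combination or threshold algorithm used in the study.

```python
import numpy as np

def stain_area_fraction(rgb, weights=(0.1, 0.8, 0.1)):
    """Project an RGB immunostain image onto a single stain channel via a
    linear combination, then segment it with an automated (Otsu) threshold
    and report the stained area fraction. Weights are illustrative only."""
    gray = rgb @ np.asarray(weights)          # colour translation + linear combination
    hist, edges = np.histogram(gray, bins=256)
    hist = hist.astype(float)
    # Otsu: pick the threshold maximizing between-class variance.
    w0 = np.cumsum(hist)                      # pixels at or below each bin
    w1 = w0[-1] - w0                          # pixels above each bin
    mu = np.cumsum(hist * edges[:-1])
    mu0 = np.divide(mu, w0, out=np.zeros_like(mu), where=w0 > 0)
    mu1 = np.divide(mu[-1] - mu, w1, out=np.zeros_like(mu), where=w1 > 0)
    between = w0 * w1 * (mu0 - mu1) ** 2
    t = edges[np.argmax(between)]
    mask = gray > t                           # stained (bright) pixels
    return mask.mean()                        # fraction of the section stained
```

Because the measure is an area fraction of stained pixels rather than a cell count, it captures extracellular granule protein deposits that cell counting misses.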

  6. Computer-aided method for automated selection of optimal imaging plane for measurement of total cerebral blood flow by MRI

    Science.gov (United States)

    Teng, Pang-yu; Bagci, Ahmet Murat; Alperin, Noam

    2009-02-01

    A computer-aided method for finding an optimal imaging plane for simultaneous measurement of arterial blood inflow through the 4 vessels leading blood to the brain by phase contrast magnetic resonance imaging is presented, and its performance is compared with manual plane selection by two observers. The method first extracts the skeletons of the 4 vessels, from which centerlines are generated. A global direction of the relatively less curved internal carotid arteries is then calculated to determine the main flow direction. This serves as a reference direction for identifying segments of the vertebral arteries that strongly deviate from the main flow direction. These segments are then used to identify anatomical landmarks for improved consistency of the imaging plane selection. An optimal imaging plane is identified by finding the plane with the smallest error value, defined as the sum of the angles between the plane's normal and the vessel centerline directions at the locations of the intersections. Error values obtained using the automated and the manual methods were compared using 9 magnetic resonance angiography (MRA) data sets. The automated method considerably outperformed manual selection: its mean error value was significantly lower than that of the manual method (0.09+/-0.07 vs. 0.53+/-0.45, respectively; p<.0001, Student's t-test). Reproducibility of repeated measurements was analyzed using Bland and Altman's test; the mean 95% limits of agreement for the automated and manual methods were 0.01~0.02 and 0.43~0.55, respectively.
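The plane-scoring criterion described above can be sketched as follows. The abstract does not name the optimizer, so `best_plane` uses a coarse brute-force search over candidate normals; that search strategy and the normal parameterization are assumptions.

```python
import numpy as np

def plane_error(normal, tangents):
    """Error score of a candidate imaging plane: the sum of the angles
    (radians) between the plane normal and each vessel centerline
    direction at the points where the vessels cross the plane."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    t = np.asarray(tangents, float)
    t = t / np.linalg.norm(t, axis=1, keepdims=True)
    cos = np.abs(t @ n)                      # orientation only, sign-free
    return float(np.sum(np.arccos(np.clip(cos, 0.0, 1.0))))

def best_plane(tangents, steps=90):
    """Illustrative coarse grid search over plane orientations; returns
    the (error, normal) pair with the smallest summed angle."""
    best = None
    for az in np.linspace(0.0, np.pi, steps, endpoint=False):
        for el in np.linspace(0.0, np.pi / 2.0, steps // 2):
            n = np.array([np.cos(el) * np.cos(az),
                          np.cos(el) * np.sin(az),
                          np.sin(el)])
            e = plane_error(n, tangents)
            if best is None or e < best[0]:
                best = (e, n)
    return best
```

An error of zero means the plane is exactly perpendicular to all 4 vessels at their crossing points, which is the condition for accurate through-plane flow measurement.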

  7. Automated image analysis of alveolar expansion patterns in immature newborn rabbits treated with natural or artificial surfactant.

    OpenAIRE

    Halliday, H; Robertson, B.; Nilsson, R.; Rigaut, J. P.; Grossmann, G.

    1987-01-01

    Automated image analysis of histological lung sections was used to compare the efficacy of an artificial surfactant (dipalmitoylphosphatidylcholine + high-density lipoprotein, 10:1) and a natural surfactant (the phospholipid fraction of porcine surfactant, isolated by liquid-gel chromatography) in ventilated immature newborn rabbits delivered after 27 days' gestation. Tidal volumes were significantly improved in each group treated with surfactant when compared with controls, but natural surfac...

  8. Development and application of an automated analysis method for individual cerebral perfusion single photon emission tomography images

    CERN Document Server

    Cluckie, A J

    2001-01-01

    Neurological images may be analysed by performing voxel by voxel comparisons with a group of control subject images. An automated, 3D, voxel-based method has been developed for the analysis of individual single photon emission tomography (SPET) scans. Clusters of voxels are identified that represent regions of abnormal radiopharmaceutical uptake. Morphological operators are applied to reduce noise in the clusters, then quantitative estimates of the size and degree of the radiopharmaceutical uptake abnormalities are derived. Statistical inference has been performed using a Monte Carlo method that has not previously been applied to SPET scans, or for the analysis of individual images. This has been validated for group comparisons of SPET scans and for the analysis of an individual image using comparison with a group. Accurate statistical inference was obtained independent of experimental factors such as degrees of freedom, image smoothing and voxel significance level threshold. The analysis method has been eval...

  9. LOCALIZATION OF PALM DORSAL VEIN PATTERN USING IMAGE PROCESSING FOR AUTOMATED INTRA-VENOUS DRUG NEEDLE INSERTION

    Directory of Open Access Journals (Sweden)

    Mrs. Kavitha. R,

    2011-06-01

    Full Text Available The vein pattern in the palm is a random mesh of interconnected and intertwining blood vessels. This project applies the vein-detection concept to automate the drug delivery process. It deals with extracting palm dorsal vein structures, a key procedure for selecting the optimal drug needle insertion point. Gray-scale images obtained from a low-cost IR webcam are poor in contrast and usually noisy, which makes effective vein segmentation a great challenge. Here a new vein image segmentation method is introduced, based on enhancement techniques that resolve the conflict between poor-contrast vein images and good-quality image segmentation. A Gaussian filter is used to remove the high-frequency noise in the image. The ultimate goal is to identify venous bifurcations and determine the insertion point for the needle between their branches.
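A minimal sketch of the smoothing-then-thresholding idea: veins appear darker than surrounding tissue in IR images, so after Gaussian noise suppression the dark pixels can be separated out. The global mean threshold below is an illustrative stand-in for the paper's enhancement pipeline, not its actual method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def segment_veins(ir_image, sigma=2.0):
    """Suppress high-frequency sensor noise with a Gaussian filter, then
    keep the darker (vein) pixels via a simple global mean threshold.
    Sigma and the threshold rule are illustrative assumptions."""
    smoothed = gaussian_filter(np.asarray(ir_image, float), sigma)
    return smoothed < smoothed.mean()        # True where veins (dark) lie
```

The resulting binary mask could then be skeletonized to locate bifurcation points, from which a needle insertion site between branches is chosen.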

  10. Using dual-energy x-ray imaging to enhance automated lung tumor tracking during real-time adaptive radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Menten, Martin J., E-mail: martin.menten@icr.ac.uk; Fast, Martin F.; Nill, Simeon; Oelfke, Uwe, E-mail: uwe.oelfke@icr.ac.uk [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom)

    2015-12-15

    Purpose: Real-time, markerless localization of lung tumors with kV imaging is often inhibited by ribs obscuring the tumor and poor soft-tissue contrast. This study investigates the use of dual-energy imaging, which can generate radiographs with reduced bone visibility, to enhance automated lung tumor tracking for real-time adaptive radiotherapy. Methods: kV images of an anthropomorphic breathing chest phantom were experimentally acquired and radiographs of actual lung cancer patients were Monte-Carlo-simulated at three imaging settings: low-energy (70 kVp, 1.5 mAs), high-energy (140 kVp, 2.5 mAs, 1 mm additional tin filtration), and clinical (120 kVp, 0.25 mAs). Regular dual-energy images were calculated by weighted logarithmic subtraction of high- and low-energy images and filter-free dual-energy images were generated from clinical and low-energy radiographs. The weighting factor to calculate the dual-energy images was determined by means of a novel objective score. The usefulness of dual-energy imaging for real-time tracking with an automated template matching algorithm was investigated. Results: Regular dual-energy imaging was able to increase tracking accuracy in left–right images of the anthropomorphic phantom as well as in 7 out of 24 investigated patient cases. Tracking accuracy remained comparable in three cases and decreased in five cases. Filter-free dual-energy imaging was only able to increase accuracy in 2 out of 24 cases. In four cases no change in accuracy was observed and tracking accuracy worsened in nine cases. In 9 out of 24 cases, it was not possible to define a tracking template due to poor soft-tissue contrast regardless of input images. The mean localization errors using clinical, regular dual-energy, and filter-free dual-energy radiographs were 3.85, 3.32, and 5.24 mm, respectively. Tracking success was dependent on tumor position, tumor size, imaging beam angle, and patient size. Conclusions: This study has highlighted the influence of
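The weighted logarithmic subtraction can be illustrated with a one-pixel Beer-Lambert model. The attenuation coefficients below are illustrative values, not measurements from the study; the point is that a weight equal to the ratio of bone attenuation at the two energies cancels the rib signal while soft-tissue contrast survives.

```python
import numpy as np

# Illustrative linear attenuation coefficients (cm^-1), not measured values.
MU_SOFT_LOW, MU_SOFT_HIGH = 0.25, 0.18   # soft tissue at low / high energy
MU_BONE_LOW, MU_BONE_HIGH = 0.60, 0.30   # bone at low / high energy

def dual_energy(low, high, w):
    """Weighted logarithmic subtraction of high- and low-energy images."""
    return np.log(high) - w * np.log(low)

def simulate(t_soft, t_bone):
    """Beer-Lambert detector intensities for one pixel (unit incident flux)
    behind t_soft cm of soft tissue and t_bone cm of bone."""
    low = np.exp(-(MU_SOFT_LOW * t_soft + MU_BONE_LOW * t_bone))
    high = np.exp(-(MU_SOFT_HIGH * t_soft + MU_BONE_HIGH * t_bone))
    return low, high

# Choosing w as the ratio of bone attenuation coefficients removes bone:
w = MU_BONE_HIGH / MU_BONE_LOW
rib = dual_energy(*simulate(t_soft=10.0, t_bone=1.5), w)      # pixel behind a rib
no_rib = dual_energy(*simulate(t_soft=10.0, t_bone=0.0), w)   # same soft tissue, no rib
# rib and no_rib are equal: the rib has vanished from the dual-energy image,
# while pixels with different soft-tissue thickness still differ.
```

In practice the study determined the weighting factor with an objective score rather than from tabulated coefficients, since polychromatic spectra and scatter make the ideal weight image-dependent.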

  11. Automated Image Analysis for the Detection of Benthic Crustaceans and Bacterial Mat Coverage Using the VENUS Undersea Cabled Network

    Directory of Open Access Journals (Sweden)

    Jacopo Aguzzi

    2011-11-01

    Full Text Available The development and deployment of sensors for undersea cabled observatories is presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (i.e., the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (i.e., Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. For bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same frequency. The automated image analysis protocols for both study cases were created in MatLab 7.1. Automation for Munida counting combined filtering and background correction (Median and Top-Hat filters) with Euclidean Distances (ED) on the Red-Green-Blue (RGB) channels. The Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classification was carried out with the tools of multivariate morphometric statistics (i.e., Partial Least Squares Discriminant Analysis, PLSDA) on the mean RGB value (RGBv) of each object and Fourier Descriptor (RGBv+FD) matrices, plus SIFT and ED. The SIFT approach returned the best results: higher percentages of images were correctly classified, and fewer misclassification errors (an animal present but not detected) occurred. In contrast, RGBv+FD and ED resulted in a high incidence of records being generated for non-present animals. Bacterial mat coverage was estimated in terms of Percent
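The filtering, background-correction, and RGB Euclidean-distance steps can be sketched roughly as below. The reference colour, filter sizes, and distance threshold are illustrative assumptions, and the SIFT/PLSDA classification stage is omitted entirely.

```python
import numpy as np
from scipy.ndimage import median_filter, white_tophat

def detect_candidates(rgb, animal_rgb, size=15, dist_thresh=60.0):
    """Candidate animal pixels in a seafloor photo: median filtering to
    suppress noise, white top-hat per channel to remove the uneven
    background, then a Euclidean distance test on the RGB channels
    against a reference animal colour (all parameters illustrative)."""
    den = median_filter(np.asarray(rgb, float), size=(3, 3, 1))
    flat = np.stack([white_tophat(den[..., c], size=size)
                     for c in range(3)], axis=-1)
    dist = np.linalg.norm(flat - np.asarray(animal_rgb, float), axis=-1)
    return dist < dist_thresh
```

Connected components of the resulting mask would then be tracked across the hourly frames and passed to the feature-extraction and classification stages described in the abstract.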

  12. A Novel Automated High-Content Analysis Workflow Capturing Cell Population Dynamics from Induced Pluripotent Stem Cell Live Imaging Data

    Science.gov (United States)

    Kerz, Maximilian; Folarin, Amos; Meleckyte, Ruta; Watt, Fiona M.; Dobson, Richard J.; Danovi, Davide

    2016-01-01

    Most image analysis pipelines rely on multiple channels per image with subcellular reference points for cell segmentation. Single-channel phase-contrast images are often problematic, especially for cells with unfavorable morphology, such as induced pluripotent stem cells (iPSCs). Live imaging poses a further challenge, because of the introduction of the dimension of time. Evaluations cannot be easily integrated with other biological data sets including analysis of endpoint images. Here, we present a workflow that incorporates a novel CellProfiler-based image analysis pipeline enabling segmentation of single-channel images with a robust R-based software solution to reduce the dimension of time to a single data point. These two packages combined allow robust segmentation of iPSCs solely on phase-contrast single-channel images and enable live imaging data to be easily integrated to endpoint data sets while retaining the dynamics of cellular responses. The described workflow facilitates characterization of the response of live-imaged iPSCs to external stimuli and definition of cell line–specific, phenotypic signatures. We present an efficient tool set for automated high-content analysis suitable for cells with challenging morphology. This approach has potentially widespread applications for human pluripotent stem cells and other cell types. PMID:27256155

  13. Note: An automated image analysis method for high-throughput classification of surface-bound bacterial cell motions.

    Science.gov (United States)

    Shen, Simon; Syal, Karan; Tao, Nongjian; Wang, Shaopeng

    2015-12-01

    We present a Single-Cell Motion Characterization System (SiCMoCS) to automatically extract bacterial cell morphological features from microscope images and use those features to automatically classify cell motion for rod shaped motile bacterial cells. In some imaging based studies, bacteria cells need to be attached to the surface for time-lapse observation of cellular processes such as cell membrane-protein interactions and membrane elasticity. These studies often generate large volumes of images. Extracting accurate bacterial cell morphology features from these images is critical for quantitative assessment. Using SiCMoCS, we demonstrated simultaneous and automated motion tracking and classification of hundreds of individual cells in an image sequence of several hundred frames. This is a significant improvement from traditional manual and semi-automated approaches to segmenting bacterial cells based on empirical thresholds, and a first attempt to automatically classify bacterial motion types for motile rod shaped bacterial cells, which enables rapid and quantitative analysis of various types of bacterial motion. PMID:26724085

  14. Application of Reflectance Transformation Imaging Technique to Improve Automated Edge Detection in a Fossilized Oyster Reef

    Science.gov (United States)

    Djuricic, Ana; Puttonen, Eetu; Harzhauser, Mathias; Dorninger, Peter; Székely, Balázs; Mandic, Oleg; Nothegger, Clemens; Molnár, Gábor; Pfeifer, Norbert

    2016-04-01

    The world's largest fossilized oyster reef is located in Stetten, Lower Austria, and was excavated during field campaigns of the Natural History Museum Vienna between 2005 and 2008. It is studied in paleontology to learn about past climate change. To support this study, a laser scanning and photogrammetric campaign was organized in 2014 for 3D documentation of the large and complex site. The 3D point clouds and high-resolution images from this field campaign are visualized by photogrammetric methods in the form of digital surface models (DSMs, 1 mm resolution) and orthophotos (0.5 mm resolution) to aid paleontological interpretation of the data. Due to the size of the reef, automated analysis techniques are needed to interpret all digital data obtained in the field. One of the key components in successful automation is the detection of oyster shell edges. We have tested Reflectance Transformation Imaging (RTI) to visualize the reef data sets for end users through a cultural heritage viewing interface (RTIViewer). The implementation includes a Lambert shading method to visualize DSMs derived from terrestrial laser scanning using the scientific software OPALS. In contrast to conventional RTI, no hardware system of LED lights, or rig to rotate the light source around the object, is needed. The gray value for a given shaded pixel is related to the angle between the light source and the surface normal at that position: brighter values correspond to slope surfaces facing the light source, and increasing the zenith angle results in internal shading all over the reef surface. In total, the oyster reef surface comprises 81 DSMs of 3 m x 2 m each. Their surface was illuminated by moving the virtual sun every 30 degrees (12 azimuth angles from 20-350) and every 20 degrees (4 zenith angles from 20-80). This technique provides paleontologists an interactive approach to virtually inspect the oyster reef and to interpret the shell surface by changing the light source direction.
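The virtual-sun shading described above amounts to Lambertian hillshading of the DSM. A minimal sketch follows; the azimuth convention and unit cell size are assumptions, and the exact shading formula used in OPALS may differ.

```python
import numpy as np

def lambert_shade(dsm, azimuth_deg, zenith_deg, cellsize=1.0):
    """Hillshade a digital surface model: each cell's gray value is the
    cosine of the angle between the surface normal and the light
    direction, so slopes facing the virtual sun render brighter."""
    az, ze = np.radians(azimuth_deg), np.radians(zenith_deg)
    light = np.array([np.sin(ze) * np.sin(az),   # assumed azimuth convention
                      np.sin(ze) * np.cos(az),
                      np.cos(ze)])
    dz_dy, dz_dx = np.gradient(np.asarray(dsm, float), cellsize)
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(dsm)])
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    return np.clip(normals @ light, 0.0, 1.0)

# Moving the virtual sun over 12 azimuths x 4 zenith angles, as in the study:
shaded = [lambert_shade(np.zeros((4, 4)), az, ze)
          for az in range(20, 360, 30) for ze in range(20, 81, 20)]
```

Cycling through the 48 light directions in a viewer lets shell edges catch raking light from some directions, which is what makes the edges easier to detect automatically.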

  15. LeafJ: an ImageJ plugin for semi-automated leaf shape measurement.

    Science.gov (United States)

    Maloof, Julin N; Nozue, Kazunari; Mumbach, Maxwell R; Palmer, Christine M

    2013-01-01

    High-throughput phenotyping (phenomics) is a powerful tool for linking genes to their functions (see review and recent examples). Leaves are the primary photosynthetic organ, and their size and shape vary developmentally and environmentally within a plant. For these reasons, studies of leaf morphology require measurement of multiple parameters from numerous leaves, which is best done with semi-automated phenomics tools. Canopy shade is an important environmental cue that affects plant architecture and life history; the suite of responses is collectively called the shade avoidance syndrome (SAS). Among SAS responses, shade-induced leaf petiole elongation and changes in blade area are particularly useful as indices. To date, leaf shape programs (e.g. SHAPE, LAMINA, LeafAnalyzer, LEAFPROCESSOR) can measure leaf outlines and categorize leaf shapes, but cannot output petiole length. The lack of large-scale petiole measurement systems has inhibited phenomics approaches to SAS research. In this paper, we describe a newly developed ImageJ plugin, called LeafJ, which can rapidly measure petiole length and leaf blade parameters of the model plant Arabidopsis thaliana. For the occasional leaf that required manual correction of the petiole/leaf blade boundary we used a touch-screen tablet. Further, leaf cell shape and leaf cell number are important determinants of leaf size. Separate from LeafJ, we also present a protocol for using a touch-screen tablet to measure cell shape, area, and size. Our leaf trait measurement system is not limited to shade-avoidance research and will accelerate leaf phenotyping of many mutants and the screening of plants by leaf phenotype. PMID:23380664

  16. A rapid and automated relocation method of an AFM probe for high-resolution imaging

    Science.gov (United States)

    Zhou, Peilin; Yu, Haibo; Shi, Jialin; Jiao, Niandong; Wang, Zhidong; Wang, Yuechao; Liu, Lianqing

    2016-09-01

    The atomic force microscope (AFM) is one of the most powerful tools for high-resolution imaging and high-precision positioning for nanomanipulation. The selection of the scanning area of the AFM depends on the use of an optical microscope. However, the resolution of an optical microscope is generally no finer than 200 nm owing to the wavelength limitations of visible light. Taking into consideration the two determinants of relocation, relative angular rotation and positional offset between the AFM probe and the nano target, it is extremely challenging to precisely relocate the AFM probe to the initial scan/manipulation area for the same nano target after the AFM probe has been replaced or the sample has been moved. In this paper, we investigate a rapid, automated relocation method for the nano target of an AFM using a coordinate transformation. The relocation process is both simple and rapid; moreover, multiple nano targets can be relocated by identifying only a pair of reference points. The method possesses a centimeter-scale location range and nano-scale precision. Its main advantages are that it overcomes the resolution limitations of optical microscopes and that it is label-free on the target areas, meaning that it does not require special artificial markers on the target sample areas. Relocation experiments using nanospheres, DNA, SWCNTs, and nano patterns amply demonstrate the practicality and efficiency of the proposed method, which provides technical support for mass nanomanipulation and detection based on AFM for multiple nano targets widely distributed over a large area.
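The coordinate-transformation idea, recovering the relative angular rotation and positional offset from a single pair of reference points, can be sketched as a 2D rigid transform. Assuming no scale change between sessions, two reference points fully determine the rotation and translation, so every stored target coordinate can be mapped into the new frame.

```python
import numpy as np

def relocation_transform(ref_old, ref_new):
    """Recover the rotation and offset between two stage coordinate frames
    from one pair of reference points, and return a function mapping stored
    target coordinates into the new frame (rigid transform; no scale change
    between sessions is assumed)."""
    p0, p1 = np.asarray(ref_old, float)
    q0, q1 = np.asarray(ref_new, float)
    a_old = np.arctan2(*(p1 - p0)[::-1])      # direction of the reference pair, old frame
    a_new = np.arctan2(*(q1 - q0)[::-1])      # same pair, new frame
    theta = a_new - a_old                     # relative angular rotation
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = q0 - R @ p0                           # positional offset
    return lambda pts: (np.asarray(pts, float) @ R.T) + t
```

Once the transform is built from the two reference points, any number of previously recorded nano targets can be relocated with a single function call each, which is why only one reference pair is needed for multiple targets.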

  17. Automated Thermal Image Processing for Detection and Classification of Birds and Bats - FY2012 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Duberstein, Corey A.; Matzner, Shari; Cullinan, Valerie I.; Virden, Daniel J.; Myers, Joshua R.; Maxwell, Adam R.

    2012-09-01

    Surveying wildlife at risk from offshore wind energy development is difficult and expensive. Infrared video can be used to record birds and bats that pass through the camera view, but it is also time consuming and expensive to review video and determine what was recorded. We proposed to conduct algorithm and software development to identify and to differentiate thermally detected targets of interest that would allow automated processing of thermal image data to enumerate birds, bats, and insects. During FY2012 we developed computer code within MATLAB to identify objects recorded in video and extract attribute information that describes the objects recorded. We tested the efficiency of track identification using observer-based counts of tracks within segments of sample video. We examined object attributes, modeled the effects of random variability on attributes, and produced data smoothing techniques to limit random variation within attribute data. We also began drafting and testing methodology to identify objects recorded on video. We also recorded approximately 10 hours of infrared video of various marine birds, passerine birds, and bats near the Pacific Northwest National Laboratory (PNNL) Marine Sciences Laboratory (MSL) at Sequim, Washington. A total of 6 hours of bird video was captured overlooking Sequim Bay over a series of weeks. An additional 2 hours of video of birds was also captured during two weeks overlooking Dungeness Bay within the Strait of Juan de Fuca. Bats and passerine birds (swallows) were also recorded at dusk on the MSL campus during nine evenings. An observer noted the identity of objects viewed through the camera concurrently with recording. These video files will provide the information necessary to produce and test software developed during FY2013. The annotation will also form the basis for creation of a method to reliably identify recorded objects.

  18. Automated 3D quantitative assessment and measurement of alpha angles from the femoral head-neck junction using MR imaging

    International Nuclear Information System (INIS)

    To develop an automated approach for 3D quantitative assessment and measurement of alpha angles from the femoral head-neck (FHN) junction using bone models derived from magnetic resonance (MR) images of the hip joint. Bilateral MR images of the hip joints were acquired from 30 male volunteers (healthy active individuals and high-performance athletes, aged 18-49 years) using a water-excited 3D dual echo steady state (DESS) sequence. In a subset of these subjects (18 water-polo players), additional True Fast Imaging with Steady-state Precession (TrueFISP) images were acquired from the right hip joint. For both MR image sets, an active shape model based algorithm was used to generate automated 3D bone reconstructions of the proximal femur. Subsequently, a local coordinate system of the femur was constructed to compute a 2D shape map to project femoral head sphericity for calculation of alpha angles around the FHN junction. To evaluate automated alpha angle measures, manual analyses were performed on anterosuperior and anterior radial MR slices from the FHN junction that were automatically reformatted using the constructed coordinate system. High intra- and inter-rater reliability (intra-class correlation coefficients > 0.95) was found for manual alpha angle measurements from the auto-extracted anterosuperior and anterior radial slices. Strong correlations were observed between manual and automatic measures of alpha angles for anterosuperior (r = 0.84) and anterior (r = 0.92) FHN positions. For matched DESS and TrueFISP images, there were no significant differences between automated alpha angle measures obtained from the upper anterior quadrant of the FHN junction (two-way repeated measures ANOVA, F < 0.01, p = 0.98). Our automatic 3D method analysed MR images of the hip joints to generate alpha angle measures around the FHN junction circumference with very good reliability and reproducibility. This work has the
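A simplified 2D illustration of the alpha-angle idea on one radial slice: walk the bone contour and find where the femoral head departs from sphericity, then report the angle at the head centre between that point and the neck axis. The 5% tolerance and the contour ordering are illustrative assumptions, not the study's criteria.

```python
import numpy as np

def alpha_angle(contour, head_center, head_radius, neck_axis, tol=1.05):
    """Alpha angle on one radial slice: find the first contour point whose
    distance from the fitted head centre exceeds the head radius (loss of
    sphericity) and measure its angle from the femoral neck axis.
    The tolerance factor is an illustrative choice."""
    v = np.asarray(contour, float) - np.asarray(head_center, float)
    r = np.linalg.norm(v, axis=1)
    off = np.nonzero(r > tol * head_radius)[0]
    if off.size == 0:
        return 0.0                      # head is spherical within tolerance
    u = v[off[0]] / r[off[0]]           # direction to first aspherical point
    n = np.asarray(neck_axis, float)
    n = n / np.linalg.norm(n)
    return np.degrees(np.arccos(np.clip(u @ n, -1.0, 1.0)))
```

In the study this computation is driven by the auto-reconstructed 3D bone model and repeated around the full FHN circumference via the 2D shape map, rather than on hand-picked slices.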

  19. Automation of a high-speed imaging setup for differential viscosity measurements

    International Nuclear Information System (INIS)

    We present the automation of a setup previously used to assess the viscosity of pleural effusion samples and discriminate between transudates and exudates, an important first step in clinical diagnostics. The presented automation includes the design, testing, and characterization of a vacuum-actuated loading station that handles the 2 mm glass spheres used as sensors, as well as the engineering of an electronic printed circuit board (PCB) incorporating a microcontroller, and their synchronization with a commercial high-speed camera operating at 10 000 fps. The present work therefore focuses on the instrumentation-related automation efforts, as the general method and clinical application have been reported earlier [Hurth et al., J. Appl. Phys. 110, 034701 (2011)]. In addition, we validate the performance of the automated setup with a calibration for viscosity measurements using water/glycerol standard solutions and the determination of the viscosity of an "unknown" solution of hydroxyethyl cellulose.

  20. SU-C-304-04: A Compact Modular Computational Platform for Automated On-Board Imager Quality Assurance

    International Nuclear Information System (INIS)

    Purpose: Traditionally, the assessment of X-ray tube output and detector positioning accuracy of on-board imagers (OBI) has been performed manually and subjectively with rulers and dosimeters, and typically takes hours to complete. In this study, we have designed a compact modular computational platform to automatically analyze OBI images acquired with in-house designed phantoms as an efficient and robust surrogate. Methods: The platform was developed as an integrated and automated image analysis-based platform using MATLAB for easy modification and maintenance. Given a set of images acquired with the in-house designed phantoms, the X-ray output accuracy was examined via cross-validation of the uniqueness and integration minimization of important image quality assessment metrics, while machine geometric and positioning accuracy were validated by utilizing pattern-recognition based image analysis techniques. Results: The platform input was a set of images of an in-house designed phantom. The total processing time is about 1–2 minutes. Based on the data acquired from three Varian Truebeam machines over the course of 3 months, the designed test validation strategy achieved higher accuracy than traditional methods. The kVp output accuracy can be verified within +/−2 kVp, the exposure accuracy within 2%, and exposure linearity with a coefficient of variation (CV) of 0.1. Sub-millimeter position accuracy was achieved for the lateral and longitudinal positioning tests, while vertical positioning accuracy within +/−2 mm was achieved. Conclusion: This new platform delivers to the radiotherapy field an automated, efficient, and stable image analysis-based procedure, for the first time, acting as a surrogate for traditional tests for LINAC OBI systems. It has great potential to facilitate OBI quality assurance (QA) with the assistance of advanced image processing techniques. In addition, it provides flexible integration of additional tests for expediting other OBI

  1. SU-C-304-04: A Compact Modular Computational Platform for Automated On-Board Imager Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    Dolly, S [Washington University School of Medicine, Saint Louis, MO (United States); University of Missouri, Columbia, MO (United States); Cai, B; Chen, H; Anastasio, M; Sun, B; Yaddanapudi, S; Noel, C; Goddu, S; Mutic, S; Li, H [Washington University School of Medicine, Saint Louis, MO (United States); Tan, J [UTSouthwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: Traditionally, the assessment of X-ray tube output and detector positioning accuracy of on-board imagers (OBI) has been performed manually and subjectively with rulers and dosimeters, and typically takes hours to complete. In this study, we have designed a compact modular computational platform to automatically analyze OBI images acquired with in-house designed phantoms as an efficient and robust surrogate. Methods: The platform was developed as an integrated and automated image analysis-based platform using MATLAB for easy modification and maintenance. Given a set of images acquired with the in-house designed phantoms, the X-ray output accuracy was examined via cross-validation of the uniqueness and integration minimization of important image quality assessment metrics, while machine geometric and positioning accuracy were validated by utilizing pattern-recognition based image analysis techniques. Results: The platform input was a set of images of an in-house designed phantom. The total processing time is about 1–2 minutes. Based on the data acquired from three Varian Truebeam machines over the course of 3 months, the designed test validation strategy achieved higher accuracy than traditional methods. The kVp output accuracy can be verified within +/−2 kVp, the exposure accuracy within 2%, and exposure linearity with a coefficient of variation (CV) of 0.1. Sub-millimeter position accuracy was achieved for the lateral and longitudinal positioning tests, while vertical positioning accuracy within +/−2 mm was achieved. Conclusion: This new platform delivers to the radiotherapy field an automated, efficient, and stable image analysis-based procedure, for the first time, acting as a surrogate for traditional tests for LINAC OBI systems. It has great potential to facilitate OBI quality assurance (QA) with the assistance of advanced image processing techniques. In addition, it provides flexible integration of additional tests for expediting other OBI
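The tolerances quoted in the abstract can be folded into a simple pass/fail check of the kind such a platform would report. The field names, and the 1 mm bound standing in for "sub-millimeter", are illustrative assumptions, not the platform's actual interface.

```python
def check_obi(measured, nominal):
    """Pass/fail OBI QA checks mirroring the tolerances in the abstract:
    kVp within +/-2 kVp, exposure within 2%, lateral/longitudinal
    positioning within 1 mm, vertical positioning within 2 mm.
    'measured' positioning entries are absolute deviations in mm."""
    return {
        'kvp': abs(measured['kvp'] - nominal['kvp']) <= 2.0,
        'exposure': abs(measured['mAs'] - nominal['mAs']) / nominal['mAs'] <= 0.02,
        'lat_long_mm': measured['lat_long_mm'] <= 1.0,
        'vertical_mm': measured['vertical_mm'] <= 2.0,
    }
```

A full implementation would derive `measured` automatically from the phantom images via the pattern-recognition analysis the abstract describes, rather than from manual readings.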

  2. Automated method for the rapid and precise estimation of adherent cell culture characteristics from phase contrast microscopy images.

    Science.gov (United States)

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-03-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. PMID:24037521
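Local contrast thresholding can be sketched with a windowed standard deviation: textured cell regions of a phase contrast image have high local intensity spread, while the flat background does not. The window size and contrast threshold below are illustrative defaults, and the post hoc halo-correction step is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_mask(img, size=31, c=0.02):
    """Classify a pixel as cellular if the intensity spread (standard
    deviation) in its neighbourhood exceeds a fraction of the image's
    dynamic range. Parameters are illustrative, not PHANTAST's defaults."""
    f = np.asarray(img, float)
    mean = uniform_filter(f, size)
    sq_mean = uniform_filter(f * f, size)
    local_std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return local_std > c * f.max()
```

Confluency then falls out directly as `mask.mean()`, the fraction of the field of view covered by cellular objects.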

  3. Automated 3D quantitative assessment and measurement of alpha angles from the femoral head-neck junction using MR imaging

    Science.gov (United States)

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S.; Walker, Duncan; Crozier, Stuart; Engstrom, Craig

    2015-10-01

    To develop an automated approach for 3D quantitative assessment and measurement of alpha angles from the femoral head-neck (FHN) junction using bone models derived from magnetic resonance (MR) images of the hip joint. Bilateral MR images of the hip joints were acquired from 30 male volunteers (healthy active individuals and high-performance athletes, aged 18-49 years) using a water-excited 3D dual echo steady state (DESS) sequence. In a subset of these subjects (18 water-polo players), additional True Fast Imaging with Steady-state Precession (TrueFISP) images were acquired from the right hip joint. For both MR image sets, an active shape model based algorithm was used to generate automated 3D bone reconstructions of the proximal femur. Subsequently, a local coordinate system of the femur was constructed to compute a 2D shape map to project femoral head sphericity for calculation of alpha angles around the FHN junction. To evaluate automated alpha angle measures, manual analyses were performed on anterosuperior and anterior radial MR slices from the FHN junction that were automatically reformatted using the constructed coordinate system. High intra- and inter-rater reliability (intra-class correlation coefficients > 0.95) was found for manual alpha angle measurements from the auto-extracted anterosuperior and anterior radial slices. Strong correlations were observed between manual and automatic measures of alpha angles for anterosuperior (r = 0.84) and anterior (r = 0.92) FHN positions. For matched DESS and TrueFISP images, there were no significant differences between automated alpha angle measures obtained from the upper anterior quadrant of the FHN junction (two-way repeated measures ANOVA). The automated approach provided alpha angle measures around the FHN junction circumference with very good reliability and reproducibility. This work has the potential to improve analyses of cam-type lesions of the FHN junction for large-scale morphometric and clinical MR
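    Once a best-fit circle of the femoral head and a neck axis are available, the alpha angle itself reduces to simple geometry. The sketch below is a simplified 2D illustration (not the authors' 3D shape-map pipeline): it measures the angle at the head centre between the neck axis and the point where the bone contour first departs from the fitted circle.

    ```python
    import numpy as np

    def alpha_angle(head_center, neck_point, departure_point):
        """Angle (degrees) at the fitted head centre between the femoral neck
        axis and the contour point where the bone first leaves the best-fit
        circle of the head (the classic 2D alpha-angle definition)."""
        c = np.asarray(head_center, float)
        u = np.asarray(neck_point, float) - c       # neck axis direction
        v = np.asarray(departure_point, float) - c  # direction to departure point
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

    # Hypothetical example: a departure point 60 degrees off the neck axis.
    print(alpha_angle((0, 0), (10, 0),
                      (np.cos(np.radians(60)), np.sin(np.radians(60)))))  # ~60 degrees
    ```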

  4. Automated image analysis to quantify the subnuclear organization of transcriptional coregulatory protein complexes in living cell populations

    Science.gov (United States)

    Voss, Ty C.; Demarco, Ignacio A.; Booker, Cynthia F.; Day, Richard N.

    2004-06-01

    Regulated gene transcription is dependent on the steady-state concentration of DNA-binding and coregulatory proteins assembled in distinct regions of the cell nucleus. For example, several different transcriptional coactivator proteins, such as the Glucocorticoid Receptor Interacting Protein (GRIP), localize to distinct spherical intranuclear bodies that vary from approximately 0.2-1 micron in diameter. We are using multi-spectral wide-field microscopy of cells expressing coregulatory proteins labeled with the fluorescent proteins (FP) to study the mechanisms that control the assembly and distribution of these structures in living cells. However, variability between cells in the population makes an unbiased and consistent approach to this image analysis absolutely critical. To address this challenge, we developed a protocol for rigorous quantification of subnuclear organization in cell populations. Cells transiently co-expressing a green FP (GFP)-GRIP and the monomeric red FP (mRFP) are selected for imaging based only on the signal in the red channel, eliminating bias due to knowledge of coregulator organization. The impartially selected images of the GFP-coregulatory protein are then analyzed using an automated algorithm to objectively identify and measure the intranuclear bodies. By integrating all these features, this combination of unbiased image acquisition and automated analysis facilitates the precise and consistent measurement of thousands of protein bodies from hundreds of individual living cells that represent the population.
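    The automated identification and measurement of intranuclear bodies can be approximated by thresholding followed by connected-component labeling. The sketch below is illustrative only (the threshold, minimum area, and image are made up, and the published algorithm is more involved); it reports area and mean intensity per detected body.

    ```python
    import numpy as np
    from scipy.ndimage import label

    def measure_bodies(intensity, threshold, min_area=4):
        """Threshold an image and measure each connected bright body.

        Returns a list of (area_in_pixels, mean_intensity) per detected body.
        """
        mask = intensity > threshold
        labels, n = label(mask)
        results = []
        for i in range(1, n + 1):
            region = labels == i
            area = int(region.sum())
            if area >= min_area:
                results.append((area, float(intensity[region].mean())))
        return results

    # Synthetic GFP channel with two bright bodies on a dim background.
    img = np.full((32, 32), 10.0)
    img[5:9, 5:9] = 100.0
    img[20:26, 18:24] = 120.0
    print(measure_bodies(img, threshold=50))  # → [(16, 100.0), (36, 120.0)]
    ```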

  5. An automated four-point scale scoring of segmental wall motion in echocardiography using quantified parametric images

    International Nuclear Information System (INIS)

    The aim of this paper is to develop an automated method which operates on echocardiographic dynamic loops for classifying the left ventricular regional wall motion (RWM) in a four-point scale. A non-selected group of 37 patients (2 and 4 chamber views) was studied. Each view was segmented according to the standardized segmentation using three manually positioned anatomical landmarks (the apex and the angles of the mitral annulus). The segmented data were analyzed by two independent experienced echocardiographists and the consensual RWM scores were used as a reference for comparisons. A fast and automatic parametric imaging method was used to compute and display as static color-coded parametric images both temporal and motion information contained in left ventricular dynamic echocardiograms. The amplitude and time parametric images were provided to a cardiologist for visual analysis of RWM and used for RWM quantification. A cross-validation method was applied to the segmental quantitative indices for classifying RWM in a four-point scale. A total of 518 segments were analyzed. Comparison between visual interpretation of parametric images and the reference reading resulted in an absolute agreement (Aa) of 66%, a relative agreement (Ra) of 96%, and a kappa (κ) coefficient of 0.61. Comparison of the automated RWM scoring against the same reference provided Aa = 64%, Ra = 96% and κ = 0.64 on the validation subset. Finally, linear regression analysis between the global quantitative index and global reference scores as well as ejection fraction resulted in correlations of 0.85 and 0.79, respectively. A new automated four-point scale scoring of RWM was developed and tested in a non-selected database. Its comparison against a consensual visual reading of dynamic echocardiograms showed its ability to classify RWM abnormalities.
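    The kappa coefficient reported above for rater agreement can be computed directly from two score vectors. A minimal sketch (toy scores on a hypothetical 0-3 index scale, not data from the study):

    ```python
    import numpy as np

    def cohens_kappa(scores_a, scores_b, n_classes=4):
        """Chance-corrected agreement between two raters on a categorical scale."""
        conf = np.zeros((n_classes, n_classes))
        for i, j in zip(scores_a, scores_b):
            conf[i, j] += 1                       # build the confusion matrix
        n = conf.sum()
        po = np.trace(conf) / n                   # observed agreement
        pe = (conf.sum(axis=1) @ conf.sum(axis=0)) / n**2  # chance agreement
        return float((po - pe) / (1 - pe))

    # Toy example: two raters scoring 8 segments.
    print(cohens_kappa([0, 0, 1, 2, 3, 1, 2, 0], [0, 0, 1, 2, 3, 2, 2, 0]))
    ```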

  6. A quality assurance framework for the fully automated and objective evaluation of image quality in cone-beam computed tomography

    International Nuclear Information System (INIS)

    Purpose: Thousands of cone-beam computed tomography (CBCT) scanners for vascular, maxillofacial, neurological, and body imaging are in clinical use today, but there is no consensus on uniform acceptance and constancy testing for image quality (IQ) and dose yet. The authors developed a quality assurance (QA) framework for fully automated and time-efficient performance evaluation of these systems. In addition, the dependence of objective Fourier-based IQ metrics on direction and position in 3D volumes was investigated for CBCT. Methods: The authors designed a dedicated QA phantom 10 cm in length consisting of five compartments, each with a diameter of 10 cm, and an optional extension ring 16 cm in diameter. A homogeneous section of water-equivalent material allows measuring CT value accuracy, image noise and uniformity, and multidimensional global and local noise power spectra (NPS). For the quantitative determination of 3D high-contrast spatial resolution, the modulation transfer function (MTF) of centrally and peripherally positioned aluminum spheres was computed from edge profiles. Additional in-plane and axial resolution patterns were used to assess resolution qualitatively. The characterization of low-contrast detectability as well as CT value linearity and artifact behavior was tested by utilizing sections with soft-tissue-equivalent and metallic inserts. For an automated QA procedure, a phantom detection algorithm was implemented. All tests used in the dedicated QA program were initially verified in simulation studies and experimentally confirmed on a clinical dental CBCT system. Results: The automated IQ evaluation of volume data sets of the dental CBCT system was achieved with the proposed phantom requiring only one scan for the determination of all desired parameters. Typically, less than 5 min were needed for phantom set-up, scanning, and data analysis. 
Quantitative evaluation of system performance over time by comparison to previous examinations was also
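    One of the Fourier-based metrics above, the MTF from an edge profile, can be sketched compactly: differentiate the edge spread function (ESF) to obtain the line spread function, Fourier transform it, and normalize at zero frequency. The edge below is synthetic, not a phantom measurement.

    ```python
    import numpy as np

    def mtf_from_edge(esf, pixel_spacing=1.0):
        """MTF estimate from an oversampled edge spread function."""
        lsf = np.gradient(np.asarray(esf, float))   # ESF -> LSF
        mtf = np.abs(np.fft.rfft(lsf))
        freqs = np.fft.rfftfreq(len(lsf), d=pixel_spacing)
        return freqs, mtf / mtf[0]                  # normalize to DC

    # Synthetic blurred edge (smooth sigmoid profile).
    x = np.linspace(-5, 5, 256)
    esf = 0.5 * (1 + np.tanh(x / 0.8))
    freqs, mtf = mtf_from_edge(esf, pixel_spacing=x[1] - x[0])
    print("MTF at zero frequency:", mtf[0])  # 1.0 by construction
    ```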

  7. A quality assurance framework for the fully automated and objective evaluation of image quality in cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Steiding, Christian; Kolditz, Daniel; Kalender, Willi A., E-mail: willi.kalender@imp.uni-erlangen.de [Institute of Medical Physics, University of Erlangen-Nürnberg, Henkestraße 91, 91052 Erlangen, Germany and CT Imaging GmbH, 91052 Erlangen (Germany)

    2014-03-15

    Purpose: Thousands of cone-beam computed tomography (CBCT) scanners for vascular, maxillofacial, neurological, and body imaging are in clinical use today, but there is no consensus on uniform acceptance and constancy testing for image quality (IQ) and dose yet. The authors developed a quality assurance (QA) framework for fully automated and time-efficient performance evaluation of these systems. In addition, the dependence of objective Fourier-based IQ metrics on direction and position in 3D volumes was investigated for CBCT. Methods: The authors designed a dedicated QA phantom 10 cm in length consisting of five compartments, each with a diameter of 10 cm, and an optional extension ring 16 cm in diameter. A homogeneous section of water-equivalent material allows measuring CT value accuracy, image noise and uniformity, and multidimensional global and local noise power spectra (NPS). For the quantitative determination of 3D high-contrast spatial resolution, the modulation transfer function (MTF) of centrally and peripherally positioned aluminum spheres was computed from edge profiles. Additional in-plane and axial resolution patterns were used to assess resolution qualitatively. The characterization of low-contrast detectability as well as CT value linearity and artifact behavior was tested by utilizing sections with soft-tissue-equivalent and metallic inserts. For an automated QA procedure, a phantom detection algorithm was implemented. All tests used in the dedicated QA program were initially verified in simulation studies and experimentally confirmed on a clinical dental CBCT system. Results: The automated IQ evaluation of volume data sets of the dental CBCT system was achieved with the proposed phantom requiring only one scan for the determination of all desired parameters. Typically, less than 5 min were needed for phantom set-up, scanning, and data analysis. 
Quantitative evaluation of system performance over time by comparison to previous examinations was also

  8. Different approaches to synovial membrane volume determination by magnetic resonance imaging: manual versus automated segmentation

    DEFF Research Database (Denmark)

    Østergaard, Mikkel

    1997-01-01

    Automated fast (5-20 min) synovial membrane volume determination by MRI, based on pre-set post-gadolinium-DTPA enhancement thresholds, was evaluated as a substitute for a time-consuming (45-120 min), previously validated, manual segmentation method. Twenty-nine knees [rheumatoid arthritis (RA) 13, osteoarthritis (OA) 16] and 17 RA wrists were examined. At enhancement thresholds between 30 and 60%, the automated volumes (Syn(x%)) were highly significantly correlated to manual volumes (SynMan) (knees: rho = 0.78-0.91, P < 10(-5) to < 10(-9); wrists: rho = 0.87-0.95, P < 10(-4) to < 10(-6)). ... methodology for volume determinations (maximal error 6.3%). Preceded by the determination of reproducibility and the optimal threshold at the available MR unit, automated 'threshold' segmentation appears to be acceptable when changes rather than absolute values of synovial membrane volumes are most important.
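    The enhancement-threshold rule described above amounts to counting voxels whose post-contrast signal rises by more than a set percentage of the pre-contrast signal. A minimal sketch with made-up numbers (the voxel volume and the 45% threshold are illustrative; the latter matches the optimal threshold quoted in the abstract):

    ```python
    import numpy as np

    def enhancement_volume(pre, post, threshold_pct=45.0, voxel_ml=0.01):
        """Synovial-volume estimate by percentage-enhancement thresholding:
        voxels whose post-contrast signal rises more than threshold_pct
        percent over the pre-contrast signal are counted as synovium."""
        pre = np.asarray(pre, float)
        post = np.asarray(post, float)
        enhancement = 100.0 * (post - pre) / np.maximum(pre, 1e-9)
        return float((enhancement > threshold_pct).sum() * voxel_ml)

    # Toy volume: 1000 voxels, 100 of which enhance by 80%.
    pre = np.full(1000, 100.0)
    post = pre.copy()
    post[:100] *= 1.8
    print(enhancement_volume(pre, post))  # about 1.0 ml
    ```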

  9. Different approaches to synovial membrane volume determination by magnetic resonance imaging: manual versus automated segmentation

    DEFF Research Database (Denmark)

    Østergaard, Mikkel

    1997-01-01

    Twenty-nine knees [rheumatoid arthritis (RA) 13, osteoarthritis (OA) 16] and 17 RA wrists were examined. At enhancement thresholds between 30 and 60%, the automated volumes (Syn(x%)) were highly significantly correlated to manual volumes (SynMan) (knees: rho = 0.78-0.91, P < 10(-5) to < 10(-9); wrists: rho = 0.87-0.95, P < 10(-4) to < 10(-6)). The absolute values of the automated estimates were extremely dependent on the threshold chosen. At the optimal threshold of 45%, the median numerical difference from SynMan was 7 ml (17%) in knees and 2 ml (25%) in wrists. At this threshold, the difference was not related to diagnosis, clinical inflammation or ... methodology for volume determinations (maximal error 6.3%). Preceded by the determination of reproducibility and the optimal threshold at the available MR unit, automated 'threshold' segmentation appears to be acceptable when changes rather than absolute values of synovial membrane volumes are most important.

  10. Use of solid film highlighter in automation of D sight image interpretation

    Science.gov (United States)

    Forsyth, David S.; Komorowski, Jerzy P.; Gould, Ronald W.

    1998-03-01

    Many studies have shown inspector variability to be a crucial parameter in nondestructive evaluation (NDE) reliability. Therefore it is desirable to automate the decision making process in NDE as much as possible. The automation of inspection data handling and interpretation will also enable use of data fusion algorithms currently being researched at IAR for increasing inspection reliability by combination of different NDE modes. Enhanced visual inspection techniques have the capability to rapidly inspect lap splice joints using D Sight and other optical methods. IAR's NDI analysis software has been used to perform analysis and feature extraction on D Sight inspections. Different metrics suitable for automated interpretation have been developed and tested on inspections of actual service-retired aircraft specimens using D Sight with solid film highlighter.

  11. Automated grading of left ventricular segmental wall motion by an artificial neural network using color kinesis images

    Directory of Open Access Journals (Sweden)

    L.O. Murta Jr.

    2006-01-01

    The present study describes an auxiliary tool in the diagnosis of left ventricular (LV) segmental wall motion (WM) abnormalities based on color-coded echocardiographic WM images. An artificial neural network (ANN) was developed and validated for grading LV segmental WM using data from color kinesis (CK) images, a technique developed to display the timing and magnitude of global and regional WM in real time. We evaluated 21 normal subjects and 20 patients with LVWM abnormalities revealed by two-dimensional echocardiography. CK images were obtained in two sets of viewing planes. A method was developed to analyze CK images, providing quantitation of fractional area change in each of the 16 LV segments. Two experienced observers analyzed LVWM from two-dimensional images and scored them as: 1) normal, 2) mild hypokinesia, 3) moderate hypokinesia, 4) severe hypokinesia, 5) akinesia, and 6) dyskinesia. Based on expert analysis of 10 normal subjects and 10 patients, we trained a multilayer perceptron ANN using a back-propagation algorithm to provide automated grading of LVWM, and this ANN was then tested in the remaining subjects. Excellent concordance between expert and ANN analysis was shown by ROC curve analysis, with measured area under the curve of 0.975. An excellent correlation was also obtained for global LV segmental WM index by expert and ANN analysis (R² = 0.99). In conclusion, ANN showed high accuracy for automated semi-quantitative grading of WM based on CK images. This technique can be an important aid, improving diagnostic accuracy and reducing inter-observer variability in scoring segmental LVWM.

  12. Automated grading of left ventricular segmental wall motion by an artificial neural network using color kinesis images.

    Science.gov (United States)

    Murta, L O; Ruiz, E E S; Pazin-Filho, A; Schmidt, A; Almeida-Filho, O C; Simões, M V; Marin-Neto, J A; Maciel, B C

    2006-01-01

    The present study describes an auxiliary tool in the diagnosis of left ventricular (LV) segmental wall motion (WM) abnormalities based on color-coded echocardiographic WM images. An artificial neural network (ANN) was developed and validated for grading LV segmental WM using data from color kinesis (CK) images, a technique developed to display the timing and magnitude of global and regional WM in real time. We evaluated 21 normal subjects and 20 patients with LVWM abnormalities revealed by two-dimensional echocardiography. CK images were obtained in two sets of viewing planes. A method was developed to analyze CK images, providing quantitation of fractional area change in each of the 16 LV segments. Two experienced observers analyzed LVWM from two-dimensional images and scored them as: 1) normal, 2) mild hypokinesia, 3) moderate hypokinesia, 4) severe hypokinesia, 5) akinesia, and 6) dyskinesia. Based on expert analysis of 10 normal subjects and 10 patients, we trained a multilayer perceptron ANN using a back-propagation algorithm to provide automated grading of LVWM, and this ANN was then tested in the remaining subjects. Excellent concordance between expert and ANN analysis was shown by ROC curve analysis, with measured area under the curve of 0.975. An excellent correlation was also obtained for global LV segmental WM index by expert and ANN analysis (R2 = 0.99). In conclusion, ANN showed high accuracy for automated semi-quantitative grading of WM based on CK images. This technique can be an important aid, improving diagnostic accuracy and reducing inter-observer variability in scoring segmental LVWM.
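    Training a small multilayer perceptron on per-segment fractional area change (FAC), as described above, can be sketched with scikit-learn. Everything below is synthetic and hypothetical: the FAC ranges per grade, the network size, and the use of a single input feature are illustrative, not the authors' data or architecture.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical training data: FAC as the single input feature;
    # grades 1 (normal) .. 4 (severe hypokinesia) as labels, with
    # well-separated FAC ranges per grade.
    rng = np.random.default_rng(1)
    fac = np.concatenate([rng.uniform(lo, hi, 50) for lo, hi in
                          [(0.45, 0.7), (0.3, 0.45), (0.15, 0.3), (0.0, 0.15)]])
    grades = np.repeat([1, 2, 3, 4], 50)

    # A small MLP trained by back-propagation (scikit-learn's default solver).
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(fac.reshape(-1, 1), grades)
    print("training accuracy:", clf.score(fac.reshape(-1, 1), grades))
    ```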

  13. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    Science.gov (United States)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
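    The hypothesis above, that random boundary errors average out in the mean organ dose, can be illustrated on a toy dose map: shifting a mask by one voxel changes the mean dose only slightly. All values below are synthetic.

    ```python
    import numpy as np

    def organ_dose(dose_map, mask):
        """Mean and peak dose over a segmented organ region."""
        vals = dose_map[mask.astype(bool)]
        return float(vals.mean()), float(vals.max())

    # Toy dose map with an "expert" mask and a slightly shifted "automated" mask.
    dose = np.linspace(0, 10, 100).reshape(10, 10)
    expert = np.zeros((10, 10), bool); expert[4:7, 4:7] = True
    auto = np.zeros((10, 10), bool);   auto[4:7, 5:8] = True   # 1-voxel shift
    m_e, p_e = organ_dose(dose, expert)
    m_a, p_a = organ_dose(dose, auto)
    print(f"mean-dose error from the 1-voxel shift: {100 * abs(m_a - m_e) / m_e:.1f}%")
    ```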

  14. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    Science.gov (United States)

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries. 
PMID:24808857
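    The fan-out of per-tile work across workers with logging, as described above, can be sketched in a few lines. This is a generic stand-in (a thread pool over a dummy task), not the FARSIGHT server script; `process_tile` and its return value are hypothetical placeholders.

    ```python
    import logging
    from concurrent.futures import ThreadPoolExecutor

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
    log = logging.getLogger("pipeline")

    def process_tile(tile_id):
        """Stand-in for one mosaicking/segmentation unit of work."""
        log.info("processing tile %d", tile_id)
        return tile_id * tile_id   # placeholder result

    # Fan the per-tile work out across a pool, as a server-side script
    # might fan out image tiles or fluorescence channels.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(process_tile, range(8)))
    print(results)
    ```

    `pool.map` preserves input order, so downstream steps can rely on tile ordering even though execution is concurrent.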

  15. Automated collection of imaging and phenotypic data to centralized and distributed data repositories.

    Science.gov (United States)

    King, Margaret D; Wood, Dylan; Miller, Brittny; Kelly, Ross; Landis, Drew; Courtney, William; Wang, Runtang; Turner, Jessica A; Calhoun, Vince D

    2014-01-01

    Accurate data collection at the ground level is vital to the integrity of neuroimaging research. Similarly important is the ability to connect and curate data in order to make it meaningful and sharable with other investigators. Collecting data, especially with several different modalities, can be time consuming and expensive. These issues have driven the development of automated collection of neuroimaging and clinical assessment data within COINS (Collaborative Informatics and Neuroimaging Suite). COINS is an end-to-end data management system. It provides a comprehensive platform for data collection, management, secure storage, and flexible data retrieval (Bockholt et al., 2010; Scott et al., 2011). It was initially developed for the investigators at the Mind Research Network (MRN), but is now available to neuroimaging institutions worldwide. Self Assessment (SA) is an application embedded in the Assessment Manager (ASMT) tool in COINS. It is an innovative tool that allows participants to fill out assessments via the web-based Participant Portal. It eliminates the need for paper collection and data entry by allowing participants to submit their assessments directly to COINS. Instruments (surveys) are created through ASMT and include many unique question types and associated SA features that can be implemented to help the flow of assessment administration. SA provides an instrument queuing system with an easy-to-use drag and drop interface for research staff to set up participants' queues. After a queue has been created for the participant, they can access the Participant Portal via the internet to fill out their assessments. This allows them the flexibility to participate from home, a library, on site, etc. The collected data is stored in a PostgreSQL database at MRN. This data is only accessible by users that have explicit permission to access the data through their COINS user accounts and access to MRN network. This allows for high volume data collection and

  16. Automated collection of imaging and phenotypic data to centralized and distributed data repositories.

    Science.gov (United States)

    King, Margaret D; Wood, Dylan; Miller, Brittny; Kelly, Ross; Landis, Drew; Courtney, William; Wang, Runtang; Turner, Jessica A; Calhoun, Vince D

    2014-01-01

    Accurate data collection at the ground level is vital to the integrity of neuroimaging research. Similarly important is the ability to connect and curate data in order to make it meaningful and sharable with other investigators. Collecting data, especially with several different modalities, can be time consuming and expensive. These issues have driven the development of automated collection of neuroimaging and clinical assessment data within COINS (Collaborative Informatics and Neuroimaging Suite). COINS is an end-to-end data management system. It provides a comprehensive platform for data collection, management, secure storage, and flexible data retrieval (Bockholt et al., 2010; Scott et al., 2011). It was initially developed for the investigators at the Mind Research Network (MRN), but is now available to neuroimaging institutions worldwide. Self Assessment (SA) is an application embedded in the Assessment Manager (ASMT) tool in COINS. It is an innovative tool that allows participants to fill out assessments via the web-based Participant Portal. It eliminates the need for paper collection and data entry by allowing participants to submit their assessments directly to COINS. Instruments (surveys) are created through ASMT and include many unique question types and associated SA features that can be implemented to help the flow of assessment administration. SA provides an instrument queuing system with an easy-to-use drag and drop interface for research staff to set up participants' queues. After a queue has been created for the participant, they can access the Participant Portal via the internet to fill out their assessments. This allows them the flexibility to participate from home, a library, on site, etc. The collected data is stored in a PostgreSQL database at MRN. This data is only accessible by users that have explicit permission to access the data through their COINS user accounts and access to MRN network. This allows for high volume data collection and

  17. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P;

    2006-01-01

    Performance of automated methods to isolate brain from nonbrain tissues in magnetic resonance (MR) structural images may be influenced by MR signal inhomogeneities, type of MR image set, regional anatomy, and age and diagnosis of subjects studied. The present study compared the performance of four

  18. Intra-patient semi-automated segmentation of the cervix-uterus in CT-images for adaptive radiotherapy of cervical cancer

    NARCIS (Netherlands)

    L. Bondar (Luiza); M.S. Hoogeman (Mischa); W. Schillemans; B.J.M. Heijmen (Ben)

    2013-01-01

    textabstractFor online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and

  19. Time efficiency and diagnostic accuracy of new automated myocardial perfusion analysis software in 320-row CT cardiac imaging

    International Nuclear Information System (INIS)

    We aimed to evaluate the time efficiency and diagnostic accuracy of automated myocardial computed tomography perfusion (CTP) image analysis software. 320-row CTP was performed in 30 patients, and analyses were conducted independently by three blinded readers using two recent software releases (version 4.6 and novel version 4.71GR001, Toshiba, Tokyo, Japan). Analysis times were compared, and automated epi- and endocardial contour detection was subjectively rated in five categories (excellent, good, fair, poor and very poor). As semi-quantitative perfusion parameters, myocardial attenuation and transmural perfusion ratio (TPR) were calculated for each myocardial segment and agreement was tested by using the intraclass correlation coefficient (ICC). Conventional coronary angiography served as reference standard. The analysis time was significantly reduced with the novel automated software version as compared with the former release (Reader 1: 43:08 ± 11:39 min vs. 09:47 ± 04:51 min, Reader 2: 42:07 ± 06:44 min vs. 09:42 ± 02:50 min and Reader 3: 21:38 ± 3:44 min vs. 07:34 ± 02:12 min; p < 0.001 for all). Epi- and endocardial contour detection for the novel software was rated to be significantly better (p < 0.001) than with the former software. ICCs demonstrated strong agreement (≥ 0.75) for myocardial attenuation in 93% and for TPR in 82%. Diagnostic accuracy for the two software versions was not significantly different (p = 0.169) as compared with conventional coronary angiography. The novel automated CTP analysis software offers enhanced time efficiency with an improvement by a factor of about four, while maintaining diagnostic accuracy.
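    The transmural perfusion ratio (TPR) used above as a semi-quantitative parameter is simply the ratio of mean subendocardial to mean subepicardial attenuation for a segment. A minimal sketch with hypothetical HU samples:

    ```python
    import numpy as np

    def transmural_perfusion_ratio(endo_hu, epi_hu):
        """TPR: mean subendocardial attenuation divided by mean subepicardial
        attenuation for one myocardial segment; values well below 1 suggest
        a subendocardial perfusion deficit."""
        return float(np.mean(endo_hu) / np.mean(epi_hu))

    # Hypothetical attenuation samples (HU) from one segment.
    print(transmural_perfusion_ratio([80, 85, 90], [100, 105, 110]))
    ```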

  20. Automated estimation of progression of interstitial lung disease in CT images.

    NARCIS (Netherlands)

    Arzhaeva, Y.; Prokop, M.; Murphy, K.; Rikxoort, E.M. van; Jong, P.A. de; Gietema, H.A.; Viergever, M.A.; Ginneken, B. van

    2010-01-01

    PURPOSE: A system is presented for automated estimation of progression of interstitial lung disease in serial thoracic CT scans. METHODS: The system compares corresponding 2D axial sections from baseline and follow-up scans and concludes whether this pair of sections represents regression, progression

  1. RootAnalyzer: A Cross-Section Image Analysis Tool for Automated Characterization of Root Cells and Tissues.

    Directory of Open Access Journals (Sweden)

    Joshua Chopin

    The morphology of plant root anatomical features is a key factor in effective water and nutrient uptake. Existing techniques for phenotyping root anatomical traits are often based on manual or semi-automatic segmentation and annotation of microscopic images of root cross sections. In this article, we propose a fully automated tool, hereinafter referred to as RootAnalyzer, for efficiently extracting and analyzing anatomical traits from root-cross section images. Using a range of image processing techniques such as local thresholding and nearest neighbor identification, RootAnalyzer segments the plant root from the image's background, classifies and characterizes the cortex, stele, endodermis and epidermis, and subsequently produces statistics about the morphological properties of the root cells and tissues. We use RootAnalyzer to analyze 15 images of wheat plants and one maize plant image and evaluate its performance against manually-obtained ground truth data. The comparison shows that RootAnalyzer can fully characterize most root tissue regions with over 90% accuracy.

  2. RootAnalyzer: A Cross-Section Image Analysis Tool for Automated Characterization of Root Cells and Tissues.

    Science.gov (United States)

    Chopin, Joshua; Laga, Hamid; Huang, Chun Yuan; Heuer, Sigrid; Miklavcic, Stanley J

    2015-01-01

    The morphology of plant root anatomical features is a key factor in effective water and nutrient uptake. Existing techniques for phenotyping root anatomical traits are often based on manual or semi-automatic segmentation and annotation of microscopic images of root cross sections. In this article, we propose a fully automated tool, hereinafter referred to as RootAnalyzer, for efficiently extracting and analyzing anatomical traits from root cross-section images. Using a range of image processing techniques such as local thresholding and nearest neighbor identification, RootAnalyzer segments the plant root from the image's background, classifies and characterizes the cortex, stele, endodermis and epidermis, and subsequently produces statistics about the morphological properties of the root cells and tissues. We use RootAnalyzer to analyze 15 images of wheat plants and one maize plant image and evaluate its performance against manually-obtained ground truth data. The comparison shows that RootAnalyzer can fully characterize most root tissue regions with over 90% accuracy.
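
Local thresholding of the kind this abstract mentions compares each pixel to a statistic of its own neighbourhood rather than to a single global cutoff, which copes with the uneven illumination common in micrographs. Below is a minimal sketch using an integral image for fast block means; the function name and parameters are illustrative, not RootAnalyzer's actual API.

```python
import numpy as np

def local_threshold(image, block=15, offset=0.0):
    """Segment foreground by comparing each pixel to the mean of its
    local (block x block) neighbourhood, a common local-thresholding scheme."""
    pad = block // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    # an integral image lets us compute any block sum in O(1)
    integral = padded.cumsum(0).cumsum(1)
    integral = np.pad(integral, ((1, 0), (1, 0)))
    h, w = image.shape
    # block sum via four integral-image lookups per output pixel
    s = (integral[block:block + h, block:block + w]
         - integral[:h, block:block + w]
         - integral[block:block + h, :w]
         + integral[:h, :w])
    local_mean = s / (block * block)
    return image > local_mean + offset
```

The `offset` term would be tuned to the stain contrast; a positive offset suppresses faint background texture.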

  3. Automated segmentation of thyroid gland on CT images with multi-atlas label fusion and random classification forest

    Science.gov (United States)

    Liu, Jiamin; Chang, Kevin; Kim, Lauren; Turkbey, Evrim; Lu, Le; Yao, Jianhua; Summers, Ronald

    2015-03-01

    The thyroid gland plays an important role in clinical practice, especially for radiation therapy treatment planning. For patients with head and neck cancer, radiation therapy requires a precise delineation of the thyroid gland to be spared on the pre-treatment planning CT images to avoid thyroid dysfunction. In the current clinical workflow, the thyroid gland is normally delineated manually by radiologists or radiation oncologists, which is time consuming and error prone. Therefore, a system for automated segmentation of the thyroid is desirable. However, automated segmentation of the thyroid is challenging because the thyroid is inhomogeneous and surrounded by structures that have similar intensities. In this work, the thyroid gland segmentation is initially estimated by a multi-atlas label fusion algorithm. The segmentation is then refined by supervised, statistical-learning-based voxel labeling with a random forest algorithm. Multi-atlas label fusion (MALF) transfers expert-labeled thyroids from atlases to a target image using deformable registration. Errors produced by label transfer are reduced by label fusion, which combines the results produced by all atlases into a consensus solution. A random forest (RF) then employs an ensemble of decision trees trained on labeled thyroids to recognize features. The trained forest classifier is applied to the thyroid estimate from the MALF by voxel scanning to assign the class-conditional probability. Voxels from the expert-labeled thyroids in CT volumes are treated as the positive class, and background non-thyroid voxels as negatives. We applied this automated thyroid segmentation system to CT scans of 20 patients. The results showed that MALF achieved an overall Dice Similarity Coefficient (DSC) of 0.75, and the RF classification further improved the DSC to 0.81.
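
The label-fusion step can be illustrated with the simplest fusion rule: a voxel-wise majority vote over the labels propagated from each registered atlas. The paper's fusion and its random-forest refinement are more sophisticated; the names below are illustrative.

```python
import numpy as np

def majority_vote_fusion(warped_labels):
    """Fuse binary label maps propagated from several registered atlases
    into one consensus segmentation by voxel-wise majority vote."""
    stack = np.stack([np.asarray(l, dtype=int) for l in warped_labels])
    votes = stack.sum(axis=0)
    return votes * 2 > len(warped_labels)  # strict majority of atlases
```

In practice, more accurate fusion rules weight each atlas by its local registration quality rather than counting all atlases equally.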

  4. Automated brain tumor segmentation in magnetic resonance imaging based on sliding-window technique and symmetry analysis

    Institute of Scientific and Technical Information of China (English)

    Lian Yanyun; Song Zhijian

    2014-01-01

    Background: Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning, and monitoring of therapy. However, manual tumor segmentation, as commonly used in the clinic, is time-consuming and challenging, and none of the existing automated methods are sufficiently robust, reliable and efficient for clinical application. An accurate and automated method has been developed for brain tumor segmentation that provides reproducible and objective results close to manual segmentation. Methods: Exploiting the symmetry of the human brain, we employed a sliding-window technique and the correlation coefficient to locate the tumor. First, the image to be segmented was normalized, rotated, denoised, and bisected. Subsequently, two windows, one in the left and one in the right half of the brain image, were moved simultaneously, pixel by pixel, first vertically and then horizontally, while the correlation coefficient of the two windows was calculated. The pair of windows with the minimal correlation coefficient was retained; the window with the higher average gray value marks the location of the tumor, and the pixel with the highest gray value within it serves as the tumor's locating point. Finally, the segmentation threshold was set to the average gray value of the pixels in a square of side length 10 pixels centered at the locating point, and threshold segmentation and morphological operations were used to obtain the final tumor region. Results: The method was evaluated on 3D FSPGR brain MR images of 10 patients. The average rate of correct location was 93.4% for 575 slices containing tumor, the average Dice similarity coefficient was 0.77 per scan, and the average time spent on one scan was 40 seconds. Conclusions: A fully automated, simple and efficient segmentation method for brain tumors is proposed and is promising for future clinical use. The correlation coefficient is a new and effective feature for tumor ...
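
The core symmetry idea can be sketched in a few lines: slide a window down one half of a (roughly midline-aligned) slice and correlate it with its mirrored counterpart; a tumor breaks the left-right symmetry and produces a low correlation. This is a simplified stand-in for the paper's two-pass vertical/horizontal search, and the names are illustrative.

```python
import numpy as np

def symmetry_correlation(image, window=8):
    """For each vertical window position, return the Pearson correlation
    between a left-half window and its mirrored right-half counterpart.
    Low values flag rows where the slice is asymmetric (e.g. a tumor)."""
    h, w = image.shape
    mid = w // 2
    left = image[:, :mid]
    right = image[:, mid:][:, ::-1]       # mirror the right half
    scores = []
    for top in range(0, h - window + 1):
        a = left[top:top + window].ravel()
        b = right[top:top + window].ravel()
        scores.append(np.corrcoef(a, b)[0, 1])
    return np.array(scores)
```

The window position minimizing the score is where the paper's method would then search for the brightest pixel to seed thresholding.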

  5. Comparison of the automated evaluation of phantom mama in digital and digitalized images; Comparacao da avaliacao automatizada do phantom mama em imagens digitais e digitalizadas

    Energy Technology Data Exchange (ETDEWEB)

    Santana, Priscila do Carmo, E-mail: pcs@cdtn.b [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear. Programa de Pos-Graduacao em Ciencias e Tecnicas Nucleares; Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Fac. de Medicina. Dept. de Propedeutica Complementar; Gomes, Danielle Soares; Oliveira, Marcio Alves; Nogueira, Maria do Socorro, E-mail: mnogue@cdtn.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2011-07-01

    Mammography is an essential tool for the diagnosis and early detection of breast cancer, provided it is delivered as a high-quality service. The process of evaluating the quality of radiographic images in general, and of mammography in particular, can be much more accurate, practical and fast with the help of computerized analysis tools. This work compares an automated methodology for the evaluation of digital and digitized images of the phantom mama. By applying digital image processing (DIP) techniques it was possible to determine the geometrical and radiometric properties of the evaluated images. The evaluated parameters include circular low-contrast details, contrast ratio, spatial resolution, tumor masses, optical density and background in digital and digitized phantom mama images. The results for both types of images were compared. This comparison demonstrated that the automated methodology is a promising alternative for reducing or eliminating subjectivity in both types of images, although the phantom mama provides insufficient parameters for evaluating spatial resolution. (author)

  6. Automated detection of spinal centrelines, vertebral bodies and intervertebral discs in CT and MR images of lumbar spine

    Science.gov (United States)

    Štern, Darko; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2010-01-01

    We propose a completely automated algorithm for the detection of the spinal centreline and the centres of vertebral bodies and intervertebral discs in images acquired by computed tomography (CT) and magnetic resonance (MR) imaging. The developed methods are based on the analysis of the geometry of spinal structures and the characteristics of CT and MR images and were evaluated on 29 CT and 13 MR images of lumbar spine. The overall mean distances between the obtained and the ground truth spinal centrelines and centres of vertebral bodies and intervertebral discs were 1.8 ± 1.1 mm and 2.8 ± 1.9 mm, respectively, and no considerable differences were detected among the results for CT, T1-weighted MR and T2-weighted MR images. The knowledge of the location of the spinal centreline and the centres of vertebral bodies and intervertebral discs is valuable for the analysis of the spine. The proposed method may therefore be used to initialize the techniques for labelling and segmentation of vertebrae.

  7. Automated detection of retinal cell nuclei in 3D micro-CT images of zebrafish using support vector machine classification

    Science.gov (United States)

    Ding, Yifu; Tavolara, Thomas; Cheng, Keith

    2016-03-01

    Our group is developing a method to examine biological specimens in cellular detail using synchrotron microCT. The method can acquire 3D images of tissue at micrometer-scale resolutions, allowing for individual cell types to be visualized in the context of the entire specimen. For model organism research, this tool will enable the rapid characterization of tissue architecture and cellular morphology from every organ system. This characterization is critical for proposed and ongoing "phenome" projects that aim to phenotype whole-organism mutants and diseased tissues from different organisms including humans. With the envisioned collection of hundreds to thousands of images for a phenome project, it is important to develop quantitative image analysis tools for the automated scoring of organism phenotypes across organ systems. Here we present a first step towards that goal, demonstrating the use of support vector machines (SVM) in detecting retinal cell nuclei in 3D images of wild-type zebrafish. In addition, we apply the SVM classifier on a mutant zebrafish to examine whether SVMs can be used to capture phenotypic differences in these images. The long-term goal of this work is to allow cellular and tissue morphology to be characterized quantitatively for many organ systems, at the level of the whole-organism.

  8. LeafJ: An ImageJ Plugin for Semi-automated Leaf Shape Measurement

    OpenAIRE

    Maloof, Julin N.; Nozue, Kazunari; Mumbach, Maxwell R.; Palmer, Christine M.

    2013-01-01

    High throughput phenotyping (phenomics) is a powerful tool for linking genes to their functions (see review [1] and recent examples [2-4]). Leaves are the primary photosynthetic organ, and their size and shape vary developmentally and environmentally within a plant. For these reasons studies on leaf morphology require measurement of multiple parameters from numerous leaves, which is best done by semi-automated phenomics tools [5,6]. Canopy shade is an important environmental cue that affects plant architecture ...

  9. Automated determination of the centers of vertebral bodies and intervertebral discs in CT and MR lumbar spine images

    Science.gov (United States)

    Štern, Darko; Vrtovec, Tomaž; Pernuš, Franjo; Likar, Boštjan

    2010-03-01

    The knowledge of the location of the centers of vertebral bodies and intervertebral discs is valuable for the analysis of the spine. Existing methods for the detection and segmentation of vertebrae in images acquired by computed tomography (CT) and magnetic resonance (MR) imaging are usually applicable only to a specific image modality and require prior knowledge of the location of vertebrae, usually obtained by manual identification or statistical modeling. We propose a completely automated framework for the detection of the centers of vertebral bodies and intervertebral discs in CT and MR images. The image intensity and gradient magnitude profiles are first extracted in each image along the already obtained spinal centerline and therefore contain a repeating pattern representing the vertebral bodies and intervertebral discs. Based on the period of the repeating pattern and by using a function that approximates the shape of the vertebral body, a model of the vertebral body is generated. The centers of vertebral bodies and intervertebral discs are detected by measuring the similarity between the generated model and the extracted profiles. The method was evaluated on 29 CT and 13 MR images of lumbar spine with varying number of vertebrae. The overall mean distance between the obtained and the ground truth centers was 2.8 ± 1.9 mm, and no considerable differences were detected between the results for CT, T1-weighted MR or T2-weighted MR images, or among different vertebrae. The proposed method may therefore be valuable for initializing the techniques for the detection and segmentation of vertebrae.
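
The period of the repeating vertebra/disc pattern in the extracted 1-D profile can be estimated, for example, from the dominant peak of the profile's autocorrelation. This is a simplified stand-in for the paper's model-matching step; the function name and the `min_lag` parameter are illustrative.

```python
import numpy as np

def repeating_period(profile, min_lag=4):
    """Estimate the period of a repeating 1-D intensity pattern (e.g. the
    vertebral body / disc alternation along the spinal centerline) from
    the highest autocorrelation peak beyond a minimum lag."""
    x = np.asarray(profile, dtype=float)
    x = x - x.mean()
    # one-sided autocorrelation, normalized so ac[0] == 1
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]
    # skip the trivial zero-lag peak and very short lags
    return min_lag + int(np.argmax(ac[min_lag:]))
```

With the period known, a vertebral-body template of the right length can be swept along the profile to localize each center.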

  10. Evaluation of an Automated Information Extraction Tool for Imaging Data Elements to Populate a Breast Cancer Screening Registry.

    Science.gov (United States)

    Lacson, Ronilda; Harris, Kimberly; Brawarsky, Phyllis; Tosteson, Tor D; Onega, Tracy; Tosteson, Anna N A; Kaye, Abby; Gonzalez, Irina; Birdwell, Robyn; Haas, Jennifer S

    2015-10-01

    Breast cancer screening is central to early breast cancer detection. Identifying and monitoring process measures for screening is a focus of the National Cancer Institute's Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) initiative, which requires participating centers to report structured data across the cancer screening continuum. We evaluate the accuracy of automated information extraction of imaging findings from radiology reports, which are available as unstructured text. We present prevalence estimates of imaging findings for breast imaging received by women who obtained care in a primary care network participating in PROSPR (n = 139,953 radiology reports) and compared automatically extracted data elements to a "gold standard" based on manual review for a validation sample of 941 randomly selected radiology reports, including mammograms, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging (MRI). The prevalence of imaging findings varies by data element and modality (e.g., suspicious calcification noted in 2.6% of screening mammograms, 12.1% of diagnostic mammograms, and 9.4% of tomosynthesis exams). In the validation sample, the accuracy of identifying imaging findings, including suspicious calcifications, masses, and architectural distortion (on mammogram and tomosynthesis); masses, cysts, non-mass enhancement, and enhancing foci (on MRI); and masses and cysts (on ultrasound), ranges from 0.8 to 1.0 for recall, precision, and F-measure. Information extraction tools can be used for accurate documentation of imaging findings as structured data elements from text reports for a variety of breast imaging modalities. These data can be used to populate screening registries to help elucidate more effective breast cancer screening processes. PMID:25561069
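
The simplest form of information extraction from free-text reports is pattern matching for finding mentions. The patterns below are hypothetical illustrations; the PROSPR extraction tool's actual rules are far richer (section parsing, negation handling, laterality, and so on).

```python
import re

# Hypothetical finding patterns; a real extraction tool would use many
# more variants and handle negation ("no mass seen").
FINDING_PATTERNS = {
    "suspicious_calcification": re.compile(r"\bsuspicious\s+calcifications?\b", re.I),
    "mass": re.compile(r"\bmass(es)?\b", re.I),
    "architectural_distortion": re.compile(r"\barchitectural\s+distortion\b", re.I),
}

def extract_findings(report_text):
    """Return the set of imaging findings mentioned in a free-text report."""
    return {name for name, pattern in FINDING_PATTERNS.items()
            if pattern.search(report_text)}
```

Note that naive matching like this would also flag negated mentions ("no mass seen"), which is exactly the kind of error the validation against a manually reviewed gold standard is designed to measure.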

  11. Automated kidney morphology measurements from ultrasound images using texture and edge analysis

    Science.gov (United States)

    Ravishankar, Hariharan; Annangi, Pavan; Washburn, Michael; Lanning, Justin

    2016-04-01

    In a typical ultrasound scan, a sonographer measures kidney morphology to assess renal abnormalities. Kidney morphology can also help to discriminate between chronic and acute kidney failure. The caliper placements and volume measurements are often time consuming, and an automated solution would help to improve accuracy, repeatability and throughput. In this work, we developed an automated kidney morphology measurement solution from long-axis ultrasound scans. Automated kidney segmentation is challenging due to wide variability in kidney shape and size, weak contrast at the kidney boundaries, and the presence of strong edges such as the diaphragm and fat layers. To address these challenges and accurately localize and detect kidney regions, we present a two-step algorithm that makes use of edge and texture information in combination with anatomical cues. First, we use an edge analysis technique to localize the kidney region by matching the edge map with predefined templates. To accurately estimate the kidney morphology, we use textural information in a machine learning framework using Haar features and a gradient boosting classifier. We have tested the algorithm on 45 unseen cases, and the performance against ground truth is measured by computing the Dice overlap and the percentage error in the major and minor axes of the kidney. The algorithm shows successful performance on 80% of cases.

  12. Automated systemic-cognitive analysis of images pixels (generalization, abstraction, classification and identification

    Directory of Open Access Journals (Sweden)

    Lutsenko Y. V.

    2015-09-01

    In this article we examine the application of systemic-cognitive analysis and its mathematical model, the system theory of information, together with its software toolkit, the "Eidos" system, for loading images from graphics files, synthesizing generalized images of classes, abstracting them, classifying the generalized images (clusters and constructs), and comparing concrete images with generalized ones (identification). We suggest using information theory to quantify, for every pixel, the amount of information indicating that an image belongs to a certain class. A numerical example is given in which, from a number of specific examples of images belonging to different classes, generalized images of these classes are formed that are independent of their specific implementations, i.e., the "Eidoses" of these images (in Plato's definition) or the prototypes or archetypes of the images (in Jung's definition). The "Eidos" system provides not only the formation of prototype images, which quantitatively reflect the amount of information carried by the elements of specific images about their membership in a particular prototype, but also the comparison of specific images with generalized ones (identification) and the generalization of images with each other (classification).

  13. Contrast-enhanced magnetic resonance angiography in carotid artery disease: does automated image registration improve image quality?

    Energy Technology Data Exchange (ETDEWEB)

    Menke, Jan [University Hospital, Department of Diagnostic Radiology, Goettingen (Germany); Larsen, Joerg [Braunschweig Teaching Hospitals, Institute for Roentgendiagnostics, Braunschweig (Germany)

    2009-05-15

    Contrast-enhanced magnetic resonance angiography (MRA) is a noninvasive imaging alternative to digital subtraction angiography (DSA) for patients with carotid artery disease. In DSA, image quality can be improved by shifting the mask image if the patient has moved during angiography. This study investigated whether such image registration may also help to improve the image quality of carotid MRA. Data from 370 carotid MRA examinations of patients likely to have carotid artery disease were prospectively collected. The standard nonregistered MRAs were compared to automatically linear, affine and warp registered MRA by using three image quality parameters: the vessel detection probability (VDP) in maximum intensity projection (MIP) images, contrast-to-noise ratio (CNR) in MIP images, and contrast-to-noise ratio in three-dimensional image volumes. A body shift of less than 1 mm occurred in 96.2% of cases. Analysis of variance revealed no significant influence of image registration and body shift on image quality (p > 0.05). In conclusion, standard contrast-enhanced carotid MRA usually requires no image registration to improve image quality and is generally robust against any naturally occurring body shift. (orig.)
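
The contrast-to-noise ratio used as a quality parameter here can be computed directly from reader-defined vessel and background regions. One common definition is the difference of the region means divided by the background standard deviation; the paper's exact formula may differ, and the names below are illustrative.

```python
import numpy as np

def contrast_to_noise(image, vessel_mask, background_mask):
    """Contrast-to-noise ratio: difference between the mean vessel and
    mean background signal, divided by the background standard deviation."""
    signal = image[vessel_mask].mean()
    background = image[background_mask].mean()
    return (signal - background) / image[background_mask].std()
```

Registration would only change such a metric if the patient moved enough to blur the vessel mask between acquisitions, which is the study's point: with sub-millimeter body shift, it does not.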

  14. Automated CAD for Nodule Detection for Magnetic Resonance Image Contrast Enhancement

    Directory of Open Access Journals (Sweden)

    K.R.Ananth

    2011-05-01

    Contrast is a measure of the variation in intensity or gray value in a specified region of an image. In all applications involving image acquisition followed by image processing, successful pre-processing is of the essence. Every sensor has its own characteristics, but in general the quality of the acquired MR image is fairly poor: overall grayscale intensity variations, poor contrast and a noisy background are frequently encountered issues. The rational unsharp masking method is introduced here to improve the quality of the MR image. It is demonstrated that the proposed method has much lower noise sensitivity than another polynomial operator, cubic unsharp masking, and than a number of other approaches devised to improve the perceived quality of an image. The algorithm has been tested on various slices of axial, sagittal and coronal sections of MR images. The results confirm the ability of the algorithm to produce better-quality images, helpful for effective diagnosis.
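
Unsharp masking sharpens an image by adding back a high-pass residue: the image minus a blurred copy. The paper's rational operator modulates that correction to limit noise amplification; the sketch below is the plain linear baseline it improves on, with a simple box blur and illustrative parameter names.

```python
import numpy as np

def unsharp_mask(image, amount=1.0, k=3):
    """Classic linear unsharp masking: out = image + amount * (image - blur).
    Uses a (k x k) box blur; edge pixels are handled by replicate padding."""
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    blurred = np.zeros(image.shape, dtype=float)
    # accumulate the k*k shifted copies that make up the box filter
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    return image + amount * (image - blurred)
```

The characteristic over/undershoot this produces at edges is what increases perceived sharpness, and also what amplifies noise in flat regions, motivating the rational weighting.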

  15. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M; Woo, B; Kim, J [Seoul National University, Seoul (Korea, Republic of); Jamshidi, N; Kuo, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficients of variation of imaging features ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, with the deformable model, and from 0.799 to 0.976 and 3.5% to 26.6%, respectively, with the grow cut method. Coefficients of variation for important features previously reported as predictive of patient survival were: 3.4% with the deformable model and 7.4% with the grow cut method for the proportion of contrast-enhanced tumor region; 5.5% with the deformable model and 25.7% with the grow cut method for the proportion of necrosis; and 2.1% with the deformable model and 4.4% with the grow cut method for edge sharpness of tumor on CE-T1WI. Conclusion: Comparison of two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric brain MRI.
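
The coefficient of variation reported for each imaging feature is simply the spread of repeated measurements (here, the two observers' values) relative to their mean, expressed as a percentage:

```python
import numpy as np

def coefficient_of_variation(measurements):
    """COV (%) of repeated measurements of one feature:
    sample standard deviation divided by the mean, times 100."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()
```

A low COV (as with the deformable model here) means the feature value barely changes when a different observer repeats the segmentation, which is what makes it usable in a radiogenomic signature.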

  16. IHC Profiler: An Open Source Plugin for the Quantitative Evaluation and Automated Scoring of Immunohistochemistry Images of Human Tissue Samples

    Science.gov (United States)

    Malhotra, Renu; De, Abhijit

    2014-01-01

    In anatomic pathology, immunohistochemistry (IHC) serves as a diagnostic and prognostic method for identifying disease markers in tissue samples that directly influences classification and grading of the disease, and thereby patient management. To date, however, in most of the world pathological analysis of tissue samples has remained a time-consuming and subjective procedure, in which the intensity of antibody staining is judged manually and the scoring decision is thus directly influenced by visual bias. This instigated us to design a simple automated digital IHC image analysis algorithm for an unbiased, quantitative assessment of antibody staining intensity in tissue sections. As a first step, we adopted the spectral deconvolution method for the DAB/hematoxylin color spectra, using optimized optical density vectors of the color deconvolution plugin for proper separation of the DAB color spectrum. The DAB-stained image is then displayed in a new window, where it undergoes pixel-by-pixel analysis and the full profile is displayed along with its scoring decision. Based on the mathematical formula conceptualized, the algorithm was thoroughly tested by analyzing scores assigned to thousands (n = 1703) of DAB-stained IHC images, including sample images taken from the Human Protein Atlas web resource. The IHC Profiler plugin is compatible with the open-source digital image analysis software ImageJ; it creates a pixel-by-pixel analysis profile of a digital IHC image and assigns a score in a four-tier system. A comparison study between manual pathological analysis and IHC Profiler resulted in a match of 88.6% (P<0.0001, CI = 95%). This new tool for clinical histopathological sample analysis can be adopted globally for scoring most protein targets where the marker protein expression is cytoplasmic and/or nuclear. We foresee that this method will minimize the problem of inter-observer variation across labs and further help in ...
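
The DAB/hematoxylin separation rests on Beer-Lambert optical densities being linear in stain concentration, so each pixel's OD vector can be unmixed by projecting it onto the stain basis. A sketch using the widely quoted Ruifrok-Johnston stain vectors (approximate values; IHC Profiler's optimized vectors differ slightly):

```python
import numpy as np

# Approximate Ruifrok & Johnston optical-density vectors for
# hematoxylin and DAB (as popularized by ImageJ's Colour Deconvolution).
HEMATOXYLIN = np.array([0.650, 0.704, 0.286])
DAB = np.array([0.269, 0.568, 0.778])

def separate_stains(rgb):
    """Unmix an RGB IHC image (values in 0..255) into hematoxylin and
    DAB concentration channels via least-squares in optical-density space."""
    od = -np.log10(np.clip(np.asarray(rgb, dtype=float), 1, 255) / 255.0)
    basis = np.stack([HEMATOXYLIN, DAB])            # 2 x 3 stain matrix
    coeffs, *_ = np.linalg.lstsq(basis.T, od.reshape(-1, 3).T, rcond=None)
    return coeffs.T.reshape(rgb.shape[:-1] + (2,))  # (..., [H, DAB])
```

Once the DAB channel is isolated, per-pixel intensity histograms can be binned into the four-tier scoring scheme the plugin reports.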

  17. Fully Automated On-Chip Imaging Flow Cytometry System with Disposable Contamination-Free Plastic Re-Cultivation Chip

    Directory of Open Access Journals (Sweden)

    Tomoyuki Kaneko

    2011-06-01

    We have developed a novel imaging cytometry system using a poly(methyl methacrylate) (PMMA)-based microfluidic chip. The system was contamination-free, because sample suspensions came into contact only with a disposable PMMA chip and no other component of the system. The transparency and low fluorescence of PMMA were suitable for microscopic imaging of cells flowing through microchannels on the chip. Sample particles flowing through microchannels on the chip were discriminated by an image-recognition unit with a high-speed camera in real time at a rate of 200 events/s; e.g., microparticles 2.5 μm and 3.0 μm in diameter were differentiated with an error rate of less than 2%. Desired cells were separated automatically from other cells by electrophoretic or dielectrophoretic force, one by one, with a separation efficiency of 90%. Cells in suspension with fluorescent dye were separated using the same kind of microfluidic chip. A sample of 5 μL with 1 × 10^6 particles/mL was processed within 40 min. Separated cells could be cultured on the microfluidic chip without contamination. The whole sample-handling operation was automated using a 3D micropipetting system. These results show that the novel imaging flow cytometry system is practically applicable for biological research and clinical diagnostics.

  18. Structure tensor based automated detection of macular edema and central serous retinopathy using optical coherence tomography images.

    Science.gov (United States)

    Hassan, Bilal; Raja, Gulistan; Hassan, Taimur; Usman Akram, M

    2016-04-01

    Macular edema (ME) and central serous retinopathy (CSR) are two macular diseases that affect the central vision of a person if they are left untreated. Optical coherence tomography (OCT) imaging is the latest eye examination technique; it shows a cross-sectional view of the retinal layers and can be used to detect many retinal disorders at an early stage. Many researchers have done clinical studies on ME and CSR and reported significant findings in macular OCT scans. This paper proposes an automated method for the classification of ME and CSR from OCT images using a support vector machine (SVM) classifier. Five distinct features (three based on the thickness profiles of the sub-retinal layers and two based on cyst fluids within the sub-retinal layers) are extracted from 30 labeled images (10 ME, 10 CSR, and 10 healthy), and the SVM is trained on these. We applied our proposed algorithm to 90 time-domain OCT (TD-OCT) images (30 ME, 30 CSR, 30 healthy) of 73 patients. Our algorithm correctly classified 88 out of 90 subjects with accuracy, sensitivity, and specificity of 97.77%, 100%, and 93.33%, respectively. PMID:27140751
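
The reported accuracy, sensitivity and specificity follow directly from the binary confusion matrix. With 60 diseased (ME + CSR) and 30 healthy eyes, counts consistent with the reported figures would be tp = 60, fn = 0, tn = 28, fp = 2; the exact split of the two misclassifications is our assumption, not stated in the abstract.

```python
def classifier_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall on positives) and specificity
    (recall on negatives) from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```

Plugging in the assumed counts reproduces the paper's 97.77% / 100% / 93.33%.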

  19. HEIDI: An Automated Process for the Identification and Extraction of Photometric Light Curves from Astronomical Images

    CERN Document Server

    Todd, M; Tanga, P; Coward, D M; Zadnik, M G

    2014-01-01

    The production of photometric light curves from astronomical images is a very time-consuming task. Larger data sets improve the resolution of the light curve; however, the time requirement scales with data volume. The data analysis is often made more difficult by factors such as a lack of suitable calibration sources and the need to correct for variations in observing conditions from one image to another. Often these variations are unpredictable and corrections are based on experience and intuition. The High Efficiency Image Detection & Identification (HEIDI) pipeline software rapidly processes sets of astronomical images. HEIDI automatically selects multiple sources for calibrating the images using an algorithm that provides a reliable means of correcting for variations between images in a time series. The algorithm takes into account that some sources may intrinsically vary on short time scales and excludes these from being used as calibration sources. HEIDI processes a set of images from an entire night ...

  20. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed [Medical Sciences/University of Tehran, Research Center for Science and Technology in Medicine, Tehran (Iran); Medical Sciences/University of Tehran, Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran (Iran); Bidgoli, Javad H. [Medical Sciences/University of Tehran, Research Center for Science and Technology in Medicine, Tehran (Iran); East Tehran Azad University, Department of Electrical and Computer Engineering, Tehran (Iran); Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine, Geneva (Switzerland)

    2008-10-15

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium.
Two PET/CT studies known to be problematic demonstrated the applicability of the technique.
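
The final step described above, converting CT numbers to linear attenuation coefficients at 511 keV, is typically implemented as a piecewise (bilinear) mapping. A minimal sketch; the breakpoints and attenuation values here are illustrative assumptions, not the paper's actual calibration curve:

```python
import numpy as np

MU_WATER_511 = 0.096  # cm^-1, water at 511 keV (approximate)
MU_BONE_511 = 0.172   # cm^-1, cortical bone at 511 keV (illustrative)
HU_BREAK = 0          # soft-tissue/bone breakpoint in Hounsfield units
HU_BONE = 1000        # reference HU assigned the bone coefficient

def ct_to_mu511(hu):
    """Piecewise-linear CT number (HU) -> linear attenuation coefficient at 511 keV."""
    hu = np.asarray(hu, dtype=float)
    mu = np.where(
        hu <= HU_BREAK,
        # air-to-water segment: scales linearly from 0 at -1000 HU to water at 0 HU
        MU_WATER_511 * (hu + 1000.0) / 1000.0,
        # water-to-bone segment: steeper slope for bone-like voxels
        MU_WATER_511 + (hu - HU_BREAK) * (MU_BONE_511 - MU_WATER_511) / (HU_BONE - HU_BREAK),
    )
    return np.clip(mu, 0.0, None)
```

A segmented contrast correction of the kind described would substitute the CT numbers of contrast-agent voxels before this mapping, so they are not converted on the bone segment.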

  1. Screening of subfertile men for testicular carcinoma in situ by an automated image analysis-based cytological test of the ejaculate

    DEFF Research Database (Denmark)

    Almstrup, K; Lippert, Marianne; Mogensen, Hanne O;

    2011-01-01

    and detected in ejaculates with specific CIS markers. We have built a high throughput framework involving automated immunocytochemical staining, scanning microscopy and in silico image analysis allowing automated detection and grading of CIS-like stained objects in semen samples. In this study, 1175 ejaculates...... from 765 subfertile men were tested using this framework. In 5/765 (0.65%) cases, CIS-like cells were identified in the ejaculate. Three of these had bilateral testicular biopsies performed and CIS was histologically confirmed in two. In total, 63 bilateral testicular biopsies were performed...... a slightly lower sensitivity (0.51), possibly because of obstruction. We conclude that this novel non-invasive test combining automated immunocytochemistry and advanced image analysis allows identification of TC at the CIS stage with a high specificity, but a negative test does not completely exclude CIS...

  2. An automated multi-modal object analysis approach to coronary calcium scoring of adaptive heart isolated MSCT images

    Science.gov (United States)

    Wu, Jing; Ferns, Gordon; Giles, John; Lewis, Emma

    2012-02-01

    Inter- and intra-observer variability is a problem often faced when an expert or observer is tasked with assessing the severity of a disease. This issue is keenly felt in coronary calcium scoring of patients suffering from atherosclerosis, where in clinical practice the observer must identify first the presence and then the location of candidate calcified plaques found within the coronary arteries that may prevent oxygenated blood flow to the heart muscle. This can be challenging for a human observer, as it is difficult to differentiate calcified plaques located in the coronary arteries from those found in surrounding anatomy such as the mitral valve or pericardium. The inclusion of false positives or exclusion of true positives will alter the patient's calcium score incorrectly, leading to the possibility of incorrect treatment prescription. In addition to the benefits to scoring accuracy, fast, low-dose multi-slice CT imaging can acquire the entire heart within a single breath hold, exposing the patient to a lower radiation dose, which is beneficial for a progressive disease such as atherosclerosis where multiple scans may be required. Presented here is a fully automated method for calcium scoring using both the traditional Agatston method and the volume scoring method. Elimination of unwanted regions of the cardiac image slices, such as lungs, ribs, and vertebrae, is carried out using adaptive heart isolation; such regions cannot contain calcified plaques but can be of a similar intensity, and their removal aids detection. Removal of both the ascending and descending aortas, as they contain clinically insignificant plaques, is necessary before the final calcium scores are calculated and examined against ground-truth scores averaged from three expert observers. The results presented here are intended to show the requirement and

  3. A fully automated multi-modal computer aided diagnosis approach to coronary calcium scoring of MSCT images

    Science.gov (United States)

    Wu, Jing; Ferns, Gordon; Giles, John; Lewis, Emma

    2012-03-01

    Inter- and intra-observer variability is a problem often faced when an expert or observer is tasked with assessing the severity of a disease. This issue is keenly felt in coronary calcium scoring of patients suffering from atherosclerosis, where in clinical practice the observer must identify first the presence and then the location of candidate calcified plaques found within the coronary arteries that may prevent oxygenated blood flow to the heart muscle. However, it can be difficult for a human observer to differentiate calcified plaques located in the coronary arteries from those found in surrounding anatomy such as the mitral valve or pericardium. In addition to the benefits to scoring accuracy, fast, low-dose multi-slice CT imaging can acquire the entire heart within a single breath hold, exposing the patient to a lower radiation dose, which is beneficial for a progressive disease such as atherosclerosis where multiple scans may be required. Presented here is a fully automated method for calcium scoring using both the traditional Agatston method and the volume scoring method. Elimination of unwanted regions of the cardiac image slices, such as lungs, ribs, and vertebrae, is carried out using adaptive heart isolation; such regions cannot contain calcified plaques but can be of a similar intensity, and their removal aids detection. Removal of both the ascending and descending aortas, as they contain clinically insignificant plaques, is necessary before the final calcium scores are calculated and examined against ground-truth scores averaged from three expert observers. The results presented here are intended to show the feasibility of, and requirement for, an automated scoring method to reduce the subjectivity and reproducibility error inherent in manual clinical calcium scoring.
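
The Agatston method referred to above scores each calcified lesion by its peak attenuation and area. A minimal sketch of the standard density weighting; the paper's own implementation details (e.g. the per-slice 130 HU detection threshold and lesion extraction) are not reproduced here:

```python
def agatston_weight(peak_hu):
    """Standard Agatston density weight from a lesion's peak attenuation (HU)."""
    if peak_hu < 130:
        return 0  # below the calcification detection threshold
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """Total score over per-slice lesions, each given as (peak_hu, area_mm2)."""
    return sum(agatston_weight(peak_hu) * area for peak_hu, area in lesions)
```

For example, a 10 mm² lesion peaking at 250 HU and a 5 mm² lesion peaking at 450 HU contribute 2x10 + 4x5 = 40.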

  4. A computer-aided automated methodology for the detection and classification of occlusal caries from photographic color images.

    Science.gov (United States)

    Berdouses, Elias D; Koutsouri, Georgia D; Tripoliti, Evanthia E; Matsopoulos, George K; Oulis, Constantine J; Fotiadis, Dimitrios I

    2015-07-01

    The aim of this work is to present a computer-aided automated methodology for the assessment of carious lesions, according to the International Caries Detection and Assessment System (ICDAS II), located on the occlusal surfaces of posterior permanent teeth, from photographic color tooth images. The proposed methodology consists of two stages: (a) the detection of regions of interest and (b) the classification of the detected regions according to ICDAS II. In the first stage, pre-processing, segmentation and post-processing mechanisms were employed. For each pixel of the detected regions, a 15x15 neighborhood was used and a set of intensity-based and texture-based features was extracted. A correlation-based technique was applied to select a subset of 36 features, which were given as input to the classification stage, where five classifiers (J48, Random Tree, Random Forests, Support Vector Machines and Naïve Bayes) were compared, with Random Forests performing best. The methodology was evaluated on a set of 103 digital color images in which 425 regions of interest from occlusal surfaces of extracted permanent teeth were manually segmented and classified, based on visual assessments by two experts. The methodology correctly detected 337 out of 340 regions in the detection stage, with a detection accuracy of 80%. For the classification stage, an overall accuracy of 83% was achieved. The proposed methodology provides an objective and fully automated caries diagnostic system for occlusal carious lesions, with performance similar to or better than that of a trained dentist, taking into consideration the available medical knowledge. PMID:25932969
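
The per-pixel neighbourhood feature extraction in the first stage can be sketched as follows. The specific features shown (mean, standard deviation, extrema, median) are illustrative stand-ins for the paper's 36 selected intensity- and texture-based features:

```python
import numpy as np

def patch_features(img, y, x, r=7):
    """Intensity features over the (2r+1)x(2r+1) = 15x15 neighbourhood centred on (y, x).

    `img` is a 2D array (one colour channel); (y, x) must lie at least r pixels
    from the border. Returns a fixed-length feature vector for a classifier.
    """
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    return np.array([
        patch.mean(),        # local brightness
        patch.std(),         # local contrast (a crude texture measure)
        patch.min(),
        patch.max(),
        np.median(patch),
    ])
```

Vectors like these, one per pixel of a candidate region, would then feed a classifier such as Random Forests after correlation-based feature selection.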

  5. High Res at High Speed: Automated Delivery of High-Resolution Images from Digital Library Collections

    Science.gov (United States)

    Westbrook, R. Niccole; Watkins, Sean

    2012-01-01

    As primary source materials in the library are digitized and made available online, the focus of related library services is shifting to include new and innovative methods of digital delivery via social media, digital storytelling, and community-based and consortial image repositories. Most images on the Web are not of sufficient quality for most…

  6. Creating a virtual slide map from sputum smear images for region-of-interest localisation in automated microscopy.

    Science.gov (United States)

    Patel, Bhavin; Douglas, Tania S

    2012-10-01

    We address the location of regions-of-interest in previously scanned sputum smear slides requiring re-examination in automated microscopy for tuberculosis (TB) detection. We focus on the core component of microscope auto-positioning, which is to find a point of reference, position and orientation, on the slide so that it can be used to automatically bring desired fields to the field-of-view of the microscope. We use virtual slide maps together with geometric hashing to localise a query image, which then acts as the point of reference. The true positive rate achieved by the algorithm was above 88% even for noisy query images captured at slide orientations up to 26°. The image registration error, computed as the average mean square error, was less than 14 pixel² (corresponding to 1.02 μm²). The algorithm is inherently robust to changes in slide orientation and placement and showed high tolerance to illumination changes and robustness to noise.

  7. Automated identification of brain tumours from single MR images based on segmentation with refined patient-specific priors

    Directory of Open Access Journals (Sweden)

    Ana eSanjuán

    2013-12-01

    Brain tumours can have different shapes or locations, making their identification very challenging. In functional MRI, it is not unusual that patients have only one anatomical image, due to time and financial constraints. Here, we provide a modified automatic lesion identification (ALI) procedure which enables brain tumour identification from single MR images. Our method rests on (A) a modified segmentation-normalisation procedure with an explicit extra prior for the tumour and (B) an outlier detection procedure for abnormal voxel (i.e. tumour) classification. To minimise tissue misclassification, the segmentation-normalisation procedure requires prior information on the tumour location and extent. We therefore propose that ALI is run iteratively, so that the output of step B is used as a patient-specific prior in step A. We tested this procedure on real T1-weighted images from 18 patients and validated the results against two independent observers' manual tracings. The automated procedure identified the tumours successfully, with excellent agreement with the manual segmentation (area under the ROC curve = 0.97 ± 0.03). The proposed procedure increases the flexibility and robustness of the ALI tool and will be particularly useful for lesion-behaviour mapping studies, or when lesion identification and/or spatial normalisation are problematic.

  8. Automated segmentation of mammary gland regions in non-contrast torso CT images based on probabilistic atlas

    Science.gov (United States)

    Zhou, X.; Kan, M.; Hara, T.; Fujita, H.; Sugisaki, K.; Yokoyama, R.; Lee, G.; Hoshi, H.

    2007-03-01

    The identification of mammary gland regions is a necessary step in the recognition of the anatomical structures of the human body and can provide useful information for breast tumor diagnosis. This paper proposes a fully automated scheme for segmenting mammary gland regions in non-contrast torso CT images. The scheme calculates, for each voxel, the probability of belonging to the mammary gland or to other regions (for example, the pectoralis major muscles) in CT images and determines the mammary gland regions automatically. The probability is estimated from the locations of the mammary gland and pectoralis major muscles in CT images. These locations (a probabilistic atlas) are derived from pre-segmentation results in a number of different CT scans, and the CT number distribution is approximated using a Gaussian function. We applied this scheme to 66 patient cases (female, age: 40-80) and evaluated the accuracy using the coincidence rate between the segmented result and a gold standard generated manually by a radiologist for each CT case. The mean coincidence rate was 0.82 with a standard deviation of 0.09 over the 66 CT cases.
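
The atlas-plus-Gaussian decision described above amounts to a per-voxel Bayes rule: a spatial prior from the probabilistic atlas multiplied by a Gaussian likelihood of the voxel's CT number. A minimal sketch; the class means, widths and priors below are illustrative, not the paper's fitted values:

```python
import math

def gaussian_like(x, mu, sigma):
    """Gaussian likelihood of CT number x for a tissue class N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify_voxel(ct_value, classes):
    """classes: {label: (atlas_prior, mu, sigma)} for this voxel's location.

    Returns (most probable label, normalised posterior per label).
    """
    scores = {label: prior * gaussian_like(ct_value, mu, sigma)
              for label, (prior, mu, sigma) in classes.items()}
    total = sum(scores.values())
    posterior = {label: s / total for label, s in scores.items()}
    return max(posterior, key=posterior.get), posterior
```

In the full scheme the atlas prior varies per voxel, so the same CT number can be labelled differently in different locations.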

  9. Analyzing the relevance of shape descriptors in automated recognition of facial gestures in 3D images

    Science.gov (United States)

    Rodriguez A., Julian S.; Prieto, Flavio

    2013-03-01

    This paper presents and explains the results of analyzing two shape descriptors (DESIRE and Spherical Spin Image) for facial gesture recognition in 3D images. DESIRE is a descriptor composed of depth images, silhouettes and rays extended from a polygonal mesh, whereas the Spherical Spin Image (SSI), associated with a polygonal mesh point, is a 2D histogram built from neighboring points using position information that captures features of the local shape. The database used contains images of facial expressions, which on average were recognized with 88.16% accuracy using a neural network and 91.11% with a Bayesian classifier in the case of the first descriptor; in contrast, the second descriptor achieved average recognition rates of only 32% and 23.6% with the same classifiers, respectively.

  10. A preliminary study for fully automated quantification of psoriasis severity using image mapping

    Science.gov (United States)

    Mukai, Kazuhiro; Iyatomi, Hitoshi

    2014-03-01

    Psoriasis is a common chronic skin disease that seriously detracts from patients' quality of life. Since there is no known permanent cure, maintaining an appropriate disease condition is necessary, and quantification of severity is therefore important. In clinical practice, the psoriasis area and severity index (PASI) is commonly used for this purpose; however, it is often subjective and laborious. A fully automatic computer-assisted area and severity index (CASI) has been proposed for objective quantification of skin disease. It investigates the size and density of erythema based on digital image analysis, but it does not account for the confounding effects of differing geometrical conditions during clinical follow-up (i.e., variability in direction and distance between camera and patient). In this study, we propose an image alignment method for clinical images and investigate quantification of psoriasis severity under clinical follow-up, combined with the idea of CASI. The proposed method finds geometrically corresponding points on the patient's body (ROI) between images using the Scale Invariant Feature Transform (SIFT) and performs an affine transform to map pixel values from one image to the other. Clinical images from 7 patients with psoriasis lesions on the trunk under clinical follow-up were used. In each series, our algorithm aligns images to the geometry of the first image. The proposed method aligned images appropriately on visual assessment, and we confirmed that psoriasis areas were properly extracted using the CASI approach. Although we cannot compare PASI and CASI directly, due to their different definitions of the ROI, we confirmed a strong correlation between the two scores with our image quantification method.
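
The affine mapping step can be sketched as a least-squares fit of a 2D affine transform to matched keypoint pairs; keypoint detection and matching themselves (e.g. SIFT) are omitted and the matched pairs are simply assumed as input:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of matched points, N >= 3 and not collinear.
    Returns a (2, 3) matrix M = [A | t] with dst ~= src @ A.T + t.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 3): [x, y, 1]
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) solution
    return X.T                                     # (2, 3)

def apply_affine(M, pts):
    """Apply a (2, 3) affine matrix to (N, 2) points."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]
```

With noisy SIFT matches, a robust estimator (e.g. RANSAC around this fit) would normally be used instead of a plain least-squares solution.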

  11. Automated analysis of retinal imaging using machine learning techniques for computer vision [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Jeffrey De Fauw

    2016-07-01

    There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies, including neovascular (“wet”) age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the ‘back’ of the eye) and Optical Coherence Tomography (OCT), a modality that uses light waves in a similar way to how ultrasound uses sound waves. Changes in population demographics and expectations, and the changing pattern of chronic diseases, create a rising demand for such imaging. Meanwhile, interrogation of such images is time-consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to the automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, Google DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT, and provide novel quantitative measures for specific disease features and for monitoring therapeutic success.

  12. An algorithm for automated ROI definition in water or epoxy-filled NEMA NU-2 image quality phantoms.

    Science.gov (United States)

    Pierce II, Larry A; Byrd, Darrin W; Elston, Brian F; Karp, Joel S; Sunderland, John J; Kinahan, Paul E

    2016-01-08

    Drawing regions of interest (ROIs) in positron emission tomography/computed tomography (PET/CT) scans of the National Electrical Manufacturers Association (NEMA) NU-2 Image Quality (IQ) phantom is a time-consuming process that allows for inter-user variability in the measurements. In order to reduce operator effort and allow batch processing of IQ phantom images, we propose a fast, robust, automated algorithm for performing IQ phantom sphere localization and analysis. The algorithm is easily altered to accommodate different configurations of the IQ phantom. The proposed algorithm uses information from both the PET and CT image volumes in order to overcome the challenges of detecting the smallest spheres in the PET volume. This algorithm has been released as an open-source plug-in to the OsiriX medical image viewing software package. We test the algorithm under various noise conditions, positions within the scanner, air bubbles in the phantom spheres, and scanner misalignment conditions. The proposed algorithm shows run-times between 3 and 4 min and has proven to be robust under all tested conditions, with expected sphere localization deviations of less than 0.2 mm and variations of PET ROI mean and maximum values on the order of 0.5% and 2%, respectively, over multiple PET acquisitions. We conclude that the proposed algorithm is stable when challenged with a variety of physical and imaging anomalies, and that the algorithm can be a valuable tool for those who use the NEMA NU-2 IQ phantom for PET/CT scanner acceptance testing and QA/QC.

  13. Automated image segmentation and registration of vessel wall MRI for quantitative assessment of carotid artery vessel wall dimensions and plaque composition

    NARCIS (Netherlands)

    Klooster, Ronald van 't

    2014-01-01

    The main goal of this thesis was to develop methods for automated segmentation, registration and classification of the carotid artery vessel wall and plaque components using multi-sequence MR vessel wall images to assess atherosclerosis. First, a general introduction into atherosclerosis and differe

  14. Detection of DNA Aneuploidy in Exfoliated Airway Epithelia Cells of Sputum Specimens by the Automated Image Cytometry and Its Clinical Value in the Identification of Lung Cancer

    Institute of Scientific and Technical Information of China (English)

    杨健; 周宜开

    2004-01-01

    To evaluate the value of detection of DNA aneuploidy in exfoliated airway epithelial cells of sputum specimens by automated image cytometry for the identification of lung cancer, 100 patients were divided into a patient group (50 patients with lung cancer) and a control group (30 patients with tuberculosis and 20 healthy people). Sputum was obtained for the quantitative analysis of the DNA content of exfoliated airway epithelial cells with automated image cytometry, together with examinations by brush cytology and conventional sputum cytology. Our results showed that DNA aneuploidy (DI>2.5 or 5c) was found in 20 out of 50 sputum samples of lung cancer, 1 out of 30 sputum samples from tuberculosis patients, and none of 20 sputum samples from healthy people. The positive rates of conventional sputum cytology and brush cytology were 16% and 32%, which were lower than that of DNA aneuploidy detection by automated image cytometry (P<0.01, P>0.05). Our study showed that automated image cytometry, which uses DNA aneuploidy as a marker for tumor, can detect malignant cells in sputum samples of lung cancer and is a sensitive and specific method serving as a complement for the diagnosis of lung cancer.

  15. A Review of Fully Automated Techniques for Brain Tumor Detection From MR Images

    Directory of Open Access Journals (Sweden)

    Anjum Hayat Gondal

    2013-02-01

    Radiologists use medical images to diagnose diseases precisely. However, identification of brain tumors from medical images is still a critical and complicated job for a radiologist. Brain tumor identification from magnetic resonance imaging (MRI) consists of several stages. Segmentation is known to be an essential step in medical imaging classification and analysis. Performing brain MR image segmentation manually is a difficult task, as there are several challenges associated with it. Radiologists and medical experts spend considerable time manually segmenting brain MR images, and this is a non-repeatable task. In view of this, automatic segmentation of brain MR images is needed to correctly segment the White Matter (WM), Gray Matter (GM) and Cerebrospinal Fluid (CSF) tissues of the brain in a shorter span of time. Accurate segmentation is crucial, as otherwise wrong identification of disease can lead to severe consequences. Taking into account the aforesaid challenges, this research is focused on highlighting the strengths and limitations of the earlier proposed segmentation techniques discussed in the contemporary literature. Besides summarizing the literature, the paper also provides a critical evaluation of the surveyed literature, which reveals new facets of research. However, articulating a new technique is beyond the scope of this paper.

  16. An automated image-based tool for pupil plane characterization of EUVL tools

    Science.gov (United States)

    Levinson, Zac; Smith, Jack S.; Fenger, Germain; Smith, Bruce W.

    2016-03-01

    Pupil plane characterization will play a critical role in image process optimization for EUV lithography (EUVL), as it has for several lithography generations. In EUVL systems there is additional importance placed on understanding the ways that thermally induced system drift affects pupil variation during operation. In-situ full-pupil characterization is therefore essential for these tools. To this end we have developed Quick Inverse Pupil (QUIP), a software suite for rapid characterization of pupil plane behavior based on images formed by that system. The software consists of three main components: 1) an image viewer, 2) the model builder, and 3) the wavefront analyzer. The image viewer analyzes CD-SEM micrographs or actinic mask micrographs to measure either CDs or through-focus intensity volumes. The software is capable of rotation correction and image registration with subpixel accuracy. The second component pre-builds a model for a particular imaging system to enable rapid pupil characterization. Finally, the third component analyzes the results from the image viewer and uses the optional pre-built model to obtain inverse solutions of pupil plane behavior. Both pupil amplitude and phase variation can be extracted using this software. Inverse solutions are obtained through a model-based algorithm built on top of commercial rigorous full-vector simulation software.

  17. AUTOMATED DETECTION OF HARD EXUDATES IN FUNDUS IMAGES USING IMPROVED OTSU THRESHOLDING AND SVM

    Directory of Open Access Journals (Sweden)

    Weiwei Gao

    2016-02-01

    One common cause of visual impairment among people of working age in industrialized countries is Diabetic Retinopathy (DR). Automatic recognition of hard exudates (EXs), one of the DR lesions, in fundus images can contribute to the diagnosis and screening of DR. The aim of this paper was to automatically detect those lesions in fundus images. First, the green channel of each original fundus image was segmented by improved Otsu thresholding based on minimum inner-cluster variance, and candidate regions of EXs were obtained. Then, we extracted features of the candidate regions and selected the subset which best discriminates EXs from the retinal background by means of logistic regression (LR). The selected features were subsequently used as inputs to an SVM to obtain a final segmentation of EXs in the image. Our database was composed of 120 images with variable color, brightness, and quality; 70 of them were used to train the SVM and the remaining 50 to assess the performance of the method. Using a lesion-based criterion, we achieved a mean sensitivity of 95.05% and a mean positive predictive value of 95.37%. With an image-based criterion, our approach reached a 100% mean sensitivity, 90.9% mean specificity and 96.0% mean accuracy. Furthermore, the average time cost of processing an image is 8.31 seconds. These results suggest that the proposed method could be a diagnostic aid for ophthalmologists in screening for DR.
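
Classical Otsu thresholding, which the segmentation stage builds on, picks the threshold minimising the within-class ("inner-cluster") intensity variance, equivalently maximising the between-class variance. A minimal histogram-based sketch of the classical form; the paper's improved variant is not reproduced here:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the intensity threshold maximising between-class variance."""
    hist, edges = np.histogram(np.asarray(img).ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()          # normalised histogram
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                            # class-0 (background) mass
    w1 = 1.0 - w0                                # class-1 (foreground) mass
    m0 = np.cumsum(p * centers)                  # cumulative first moment
    mT = m0[-1]                                  # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = m0 / w0
        mu1 = (mT - m0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
    k = np.nanargmax(between)                    # ignore empty-class endpoints
    return centers[k]
```

Candidate exudate regions would then be the pixels of the green channel above the selected threshold, passed on to feature extraction and the SVM.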

  18. MO-G-BRE-03: Automated Continuous Monitoring of Patient Setup with Second-Check Independent Image Registration

    International Nuclear Information System (INIS)

    Purpose: To create a non-supervised quality assurance program to monitor image-based patient setup. The system acts as a secondary check by independently computing shifts and rotations, and it interfaces with Varian's database to verify the therapist's work and warn against sub-optimal setups. Methods: Temporary digitally reconstructed radiographs (DRRs) and OBI radiographic image files created by Varian's treatment console during patient setup are intercepted and used as input to an independent registration module, customized for accuracy, that determines the optimal rotations and shifts. To deal with the poor quality of OBI images, histogram equalization of the live images to their DRR counterparts is performed as a pre-processing step. A search for the most sensitive metric was performed by plotting search spaces subject to various translations, and convergence analysis was applied to ensure the optimizer finds the global minima. The final system configuration uses the NCC metric with 150 histogram bins and a one-plus-one optimizer running for 2000 iterations, with customized scales for translations and rotations, in a multi-stage optimization setup that first corrects translations and subsequently rotations. Results: The system was installed clinically to monitor and provide almost real-time feedback on patient positioning. Over a 2-month period, uncorrected pitch values had a mean of 0.016° with a standard deviation of 1.692°, and couch rotations of −0.090° ± 1.547°. The couch shifts were −0.157 ± 0.466 cm vertically, 0.045 ± 0.286 cm laterally and 0.084 ± 0.501 cm longitudinally. Uncorrected pitch angles were the most common source of discrepancies. Large variations in the pitch angles were correlated with patient motion inside the mask. Conclusion: A system for automated quality assurance of the therapist's registration was designed and tested in clinical practice. The approach complements the clinical software's automated registration in
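
The NCC (normalized cross-correlation) metric named in the final configuration measures similarity between the DRR and the live image up to a linear intensity change. A minimal global form; the clinical module presumably evaluates a binned variant inside the optimizer loop:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-shape images.

    Returns a value in [-1, 1]; 1.0 means identical up to gain and offset,
    which is why it tolerates the brightness differences between DRR and OBI images.
    """
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

An optimizer (such as the one-plus-one evolutionary strategy mentioned above) would maximise this value over candidate translations and rotations applied to one of the images.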

  19. An Automated System for Detecting Sigmoids in Solar X-ray Images

    Science.gov (United States)

    LaBonte, B. J.; Rust, D. M.; Bernasconi, P. N.

    2003-05-01

    The probability of a coronal mass ejection (CME) occurring is linked to the appearance of structures, called sigmoids, in satellite X-ray images of the Sun. By examining near-real-time images, we can detect sigmoids visually and estimate the probability of a CME and the probability that it will cause a major geomagnetic storm. We have devised a pattern recognition system to detect sigmoids in Yohkoh SXT and GOES SXI X-ray images automatically. When implemented in a near-real-time environment, this system should allow long-term, 3-7 day forecasts of CMEs and their potential for causing major geomagnetic storms.

  20. Automated Detection of Healthy and Diseased Aortae from Images Obtained by Contrast-Enhanced CT Scan

    Directory of Open Access Journals (Sweden)

    Michael Gayhart

    2013-01-01

    Purpose. We developed the next stage of our computer-assisted diagnosis (CAD) system to aid radiologists in evaluating CT images for aortic disease by removing innocuous images and highlighting signs of aortic disease. Materials and Methods. Segmented data from patients' contrast-enhanced CT scans were analyzed for aortic dissection and penetrating aortic ulcer (PAU). Aortic dissection was detected by checking for an abnormal shape of the aorta using edge-oriented methods. PAU was recognized through abnormally high intensities with interest-point operators. Results. The aortic dissection detection process had a sensitivity of 0.8218 and a specificity of 0.9907. The PAU detection process scored a sensitivity of 0.7587 and a specificity of 0.9700. Conclusion. Both detection processes were successful in removing innocuous images, but additional methods are necessary to improve recognition of images with aortic disease.

  1. Automated identification of diploid reference cells in cervical smears using image analysis.

    NARCIS (Netherlands)

    Laak, J.A.W.M. van der; Siebers, A.G.; Cuijpers, V.M.J.I.; Pahlplatz, M.M.M.; Wilde, P.C.M. de; Hanselaar, A.G.J.M.

    2002-01-01

    BACKGROUND: Acquisition of DNA ploidy histograms by image analysis may yield important information regarding the behavior of premalignant cervical lesions. Accurate selection of nuclei for DNA measurement is an important prerequisite for obtaining reliable data. Traditionally, manual selection of nu

  2. An Automated MR Image Segmentation System Using Multi-layer Perceptron Neural Network

    OpenAIRE

    Amiri, S.; Movahedi, M M; Kazemi, K; Parsaei, H

    2013-01-01

    Background: Brain tissue segmentation for delineation of 3D anatomical structures from magnetic resonance (MR) images can be used for neuro-degenerative disorders, characterizing morphological differences between subjects based on volumetric analysis of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF), but only if the obtained segmentation results are correct. Due to image artifacts such as noise, low contrast and intensity non-uniformity, there are some classification errors...

  3. Automated measurement of epidermal thickness from optical coherence tomography images using line region growing

    Science.gov (United States)

    Delacruz, Jomer; Weissman, Jesse; Gossage, Kirk

    2010-02-01

    Optical Coherence Tomography (OCT) is a non-invasive imaging modality that acquires cross-sectional images of tissue in vivo. It accelerates skin diagnosis by eliminating invasive biopsy and laborious histology. Dermatologists have used it widely to examine the morphology of skin diseases such as psoriasis, dermatitis and basal cell carcinoma. Skin scientists have also used it successfully to examine differences in epidermal thickness and underlying structure with respect to age, body site, ethnicity, gender, and other related factors. Like other in-vivo imaging systems, OCT images suffer from a high degree of speckle and noise, which hinders examination of tissue structures. Most previous work on OCT segmentation of skin was done manually, which compromised the quality of the results by limiting analyses to a few frames per area. In this paper, we discuss a region growing method for automatic identification of the upper and lower boundaries of the epidermis in living human skin tissue. The image analysis method utilizes images obtained from a frequency-domain OCT system, which is high-resolution and high-speed and thus capable of capturing volumetric images of the skin in a short time. The three-dimensional (3D) data provide additional information that is used in the segmentation process to help compensate for the inherent noise in the images. This method not only provides a better estimation of epidermal thickness, but also generates a 3D surface map of the epidermal-dermal junction, from which the underlying topography can be visualized and further quantified.
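
The basic operation underlying region growing, which the line-region-growing method above builds on, is flood-filling from a seed pixel while an intensity criterion holds. A minimal sketch; the seed, tolerance and 4-connectivity here are illustrative choices, not the paper's exact criterion:

```python
from collections import deque

import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed` (row, col), accepting 4-connected pixels
    whose intensity is within `tol` of the seed intensity.

    Returns a boolean mask of the grown region.
    """
    H, W = img.shape
    ref = float(img[seed])
    mask = np.zeros((H, W), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < H and 0 <= nx < W and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

Grown row by row along A-scan lines, with tolerances tuned to the speckle level, regions like this can delineate the upper and lower epidermal boundaries in each OCT frame.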

  4. Statistical colour models: an automated digital image analysis method for quantification of histological biomarkers

    OpenAIRE

    Shu, Jie; Dolman, G. E.; Duan, Jiang; Qiu, Guoping; Ilyas, Mohammad

    2016-01-01

    Background Colour is the most important feature used in quantitative immunohistochemistry (IHC) image analysis; IHC is used to provide information relating to aetiology and to confirm malignancy. Methods Statistical modelling is a technique widely used for colour detection in computer vision. We have developed a statistical model of colour detection applicable to detection of stain colour in digital IHC images. The model was first trained on a large set of colour pixels collected semi-automatically. To ...

  5. Renal Graft Fibrosis and Inflammation Quantification by an Automated Fourier-Transform Infrared Imaging Technique.

    Science.gov (United States)

    Vuiblet, Vincent; Fere, Michael; Gobinet, Cyril; Birembaut, Philippe; Piot, Olivier; Rieu, Philippe

    2016-08-01

    Renal interstitial fibrosis and interstitial active inflammation are the main histologic features of renal allograft biopsy specimens. Fibrosis is currently assessed by semiquantitative subjective analysis, and color image analysis has been developed to improve the reliability and repeatability of this evaluation. However, these techniques fail to distinguish fibrosis from constitutive collagen or active inflammation. We developed an automatic, reproducible Fourier-transform infrared (FTIR) imaging-based technique for simultaneous quantification of fibrosis and inflammation in renal allograft biopsy specimens. We generated and validated a classification model using 49 renal biopsy specimens and subsequently tested the robustness of this classification algorithm on 166 renal grafts. Finally, we explored the clinical relevance of fibrosis quantification using FTIR imaging by comparing results with renal function at 3 months after transplantation (M3) and the variation of renal function between M3 and M12. We showed excellent robustness for fibrosis and inflammation classification, with >90% of renal biopsy specimens adequately classified by FTIR imaging. Finally, fibrosis quantification by FTIR imaging correlated with renal function at M3, and the variation in fibrosis between M3 and M12 correlated well with the variation in renal function over the same period. This study shows that FTIR-based analysis of renal graft biopsy specimens is a reproducible and reliable label-free technique for quantifying fibrosis and active inflammation. This technique seems to be more relevant than digital image analysis and promising for both research studies and routine clinical practice.

  6. Offset-sparsity decomposition for automated enhancement of color microscopic image of stained specimen in histopathology

    Science.gov (United States)

    Kopriva, Ivica; Hadžija, Marijana Popović; Hadžija, Mirko; Aralica, Gorana

    2015-07-01

    We propose an offset-sparsity decomposition method for the enhancement of a color microscopic image of a stained specimen. The method decomposes vectorized spectral images into offset terms and sparse terms. A sparse term represents an enhanced image, and an offset term represents a "shadow." The related optimization problem is solved by computational improvement of the accelerated proximal gradient method used initially to solve the related rank-sparsity decomposition problem. Removal of an image-adapted color offset yields an enhanced image with improved colorimetric differences among the histological structures. This is verified by a no-reference colorfulness measure estimated from 35 specimens of the human liver, 1 specimen of the mouse liver stained with hematoxylin and eosin, 6 specimens of the mouse liver stained with Sudan III, and 3 specimens of the human liver stained with the anti-CD34 monoclonal antibody. The colorimetric difference improves on average by 43.86% with a 99% confidence interval (CI) of [35.35%, 51.62%]. Furthermore, according to the mean opinion score, estimated on the basis of the evaluations of five pathologists, images enhanced by the proposed method exhibit an average quality improvement of 16.60% with a 99% CI of [10.46%, 22.73%].
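The offset idea in this abstract can be caricatured in a few lines: each spectral channel is modelled as a constant offset ("shadow") plus a sparse enhanced term, so removing a per-channel offset stretches the colorimetric differences. Using the per-channel minimum as the offset is an illustrative simplification, not the paper's accelerated proximal gradient optimization.

```python
def remove_offset(channel):
    """Split a single channel into (enhanced term, constant offset).

    The offset is taken as the channel minimum -- a crude stand-in for the
    offset term recovered by the paper's decomposition.
    """
    off = min(v for row in channel for v in row)
    return [[v - off for v in row] for row in channel], off

# Toy 2x2 channel: a large shared offset masks the structural contrast
channel = [[120, 121], [130, 180]]
enhanced, offset = remove_offset(channel)
```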

  7. Automated analysis of phantom images for the evaluation of long-term reproducibility in digital mammography

    Energy Technology Data Exchange (ETDEWEB)

    Gennaro, G [Department of Oncological and Surgical Sciences, University of Padova, via Gattamelata 64, 35128 Padova (Italy); Ferro, F [Department of Oncological and Surgical Sciences, University of Padova, via Gattamelata 64, 35128 Padova (Italy); Contento, G [Cyberqual S.r.l., Gorizia (Italy); Fornasin, F [Cyberqual S.r.l., Gorizia (Italy); Di Maggio, C [Department of Oncological and Surgical Sciences, University of Padova, via Gattamelata 64, 35128 Padova (Italy)

    2007-03-07

    The performance of an automatic software package was evaluated with phantom images acquired by a full-field digital mammography unit. After the validation, the software was used, together with a Leeds TORMAS test object, to model the image acquisition process. Process modelling results were used to evaluate the sensitivity of the method in detecting changes of exposure parameters from routine image quality measurements in digital mammography, which is the ultimate purpose of long-term reproducibility tests. Image quality (IQ) indices measured by the software included the mean pixel value and standard deviation of circular details and surrounding background, contrast-to-noise ratio and relative contrast; detail counts were also collected. The validation procedure demonstrated that the software localizes the phantom details correctly and that the difference between automatic and manual measurements was within a few grey levels. Quantitative analysis showed sufficient sensitivity to relate fluctuations in exposure parameters (kVp or mAs) to variations in image quality indices. In comparison, detail counts were found less sensitive in detecting image quality changes, even when limitations due to observer subjectivity were overcome by automatic analysis. In conclusion, long-term reproducibility tests based on the Leeds TORMAS phantom with quantitative analysis of multiple IQ indices were demonstrated to be effective in predicting causes of deviation from standard operating conditions and can be used to monitor stability in full-field digital mammography.
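One of the image-quality indices named in the abstract, contrast-to-noise ratio, is computed from detail and background pixel statistics. A common formulation (an assumption here, not necessarily this software's exact definition) is sketched below with made-up pixel values.

```python
import statistics

def cnr(detail_pixels, background_pixels):
    """CNR = (mean of detail - mean of background) / stdev of background."""
    return ((statistics.mean(detail_pixels) - statistics.mean(background_pixels))
            / statistics.stdev(background_pixels))

# Illustrative pixel samples from a circular detail and its surround
detail = [110, 112, 111, 113]
background = [100, 102, 98, 100]
value = cnr(detail, background)
```

Tracking this value over repeated phantom exposures is exactly the kind of long-term reproducibility monitoring the study describes.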

  8. A learning-based approach for automated quality assessment of computer-rendered images

    Science.gov (United States)

    Zhang, Xi; Agam, Gady

    2012-01-01

    Computer generated images are common in numerous computer graphics applications such as games, modeling, and simulation. There is normally a tradeoff between the time allocated to the generation of each image frame and the quality of the image, where better quality images require more processing time. Specifically, in the rendering of 3D objects, the surfaces of objects may be manipulated by subdividing them into smaller triangular patches and/or smoothing them so as to produce better-looking renderings. Since unnecessary subdivision results in increased rendering time and unnecessary smoothing results in reduced detail, there is a need to automatically determine the amount of processing necessary for producing good-quality rendered images. In this paper we propose a novel supervised-learning-based methodology for automatically predicting the quality of rendered images of 3D objects. To perform the prediction we train on a data set which is labeled by human observers for quality. We are then able to predict the quality of renderings (not used in the training) with an average prediction error of roughly 20%. The proposed approach is compared to known techniques and is shown to produce better results.

  9. LANDSAT image differencing as an automated land cover change detection technique

    Science.gov (United States)

    Stauffer, M. L.; Mckinney, R. L.

    1978-01-01

    Image differencing was investigated as a technique for use with LANDSAT digital data to delineate areas of land cover change in an urban environment. LANDSAT data collected in April 1973 and April 1975 for Austin, Texas, were geometrically corrected and precisely registered to United States Geological Survey 7.5-minute quadrangle maps. At each pixel location, reflectance values for the corresponding bands were subtracted to produce four difference images. Areas of major reflectance differences were isolated by thresholding each of the difference images, and the resulting images were combined to obtain an image data set of total change. These areas of reflectance differences were found, in general, to correspond to areas of land cover change. Information on areas of land cover change was incorporated into a procedure to mask out all nonchange areas and perform an unsupervised classification only for data in the change areas. This procedure identified three broad categories: (1) areas of high reflectance (construction or extractive), (2) changes in agricultural areas, and (3) areas of confusion between agricultural and other areas.
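The differencing pipeline in this abstract has three mechanical steps: subtract co-registered bands pixel by pixel, threshold each absolute difference, and combine the per-band masks into one change mask. A minimal sketch with toy 2x2 rasters (the band values and threshold are assumptions, not LANDSAT data):

```python
def difference_image(band_t1, band_t2):
    """Per-pixel difference of two co-registered band rasters."""
    return [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(band_t1, band_t2)]

def threshold_mask(diff, t):
    """Flag pixels whose absolute reflectance change exceeds t."""
    return [[abs(v) > t for v in row] for row in diff]

def combine_masks(masks):
    """A pixel belongs to the total-change set if any band flags it."""
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[any(m[i][j] for m in masks) for j in range(cols)]
            for i in range(rows)]

# Two toy bands at the 1973 and 1975 dates
b1_73, b1_75 = [[10, 12], [11, 10]], [[10, 40], [11, 10]]
b2_73, b2_75 = [[20, 20], [21, 50]], [[22, 20], [21, 20]]
masks = [threshold_mask(difference_image(a, b), t=15)
         for a, b in [(b1_73, b1_75), (b2_73, b2_75)]]
change = combine_masks(masks)
```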

  10. Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features

    Science.gov (United States)

    Yu, Kun-Hsing; Zhang, Ce; Berry, Gerald J.; Altman, Russ B.; Ré, Christopher; Rubin, Daniel L.; Snyder, Michael

    2016-01-01

    Lung cancer is the most prevalent cancer worldwide, and histopathological assessment is indispensable for its diagnosis. However, human evaluation of pathology slides cannot accurately predict patients' prognoses. In this study, we obtain 2,186 haematoxylin and eosin stained histopathology whole-slide images of lung adenocarcinoma and squamous cell carcinoma patients from The Cancer Genome Atlas (TCGA), and 294 additional images from Stanford Tissue Microarray (TMA) Database. We extract 9,879 quantitative image features and use regularized machine-learning methods to select the top features and to distinguish shorter-term survivors from longer-term survivors with stage I adenocarcinoma (P<0.003) or squamous cell carcinoma (P=0.023) in the TCGA data set. We validate the survival prediction framework with the TMA cohort (P<0.036 for both tumour types). Our results suggest that automatically derived image features can predict the prognosis of lung cancer patients and thereby contribute to precision oncology. Our methods are extensible to histopathology images of other organs. PMID:27527408

  11. Immunohistochemical Ki-67/KL1 double stains increase accuracy of Ki-67 indices in breast cancer and simplify automated image analysis

    DEFF Research Database (Denmark)

    Nielsen, Patricia S; Bentzer, Nina K; Jensen, Vibeke;

    2014-01-01

    by digital image analysis. This study aims to detect the difference in accuracy and precision between manual indices of single and double stains, to develop an automated quantification of double stains, and to explore the relation between automated indices and tumor characteristics when quantified...... in different regions: hot spots, global tumor areas, and invasive fronts. MATERIALS AND METHODS: Paraffin-embedded, formalin-fixed tissue from 100 consecutive patients with invasive breast cancer was immunohistochemically stained for Ki-67 and Ki-67/KL1. Ki-67 was manually scored in different regions by 2...

  12. Automated Artifact Rejection for Transient Identification in WFC3 IR Image Subtractions

    Science.gov (United States)

    Luther, Kyle; Boone, Kyle; Hayden, Brian; Aldering, Greg Scott; Perlmutter, Saul; Supernova Cosmology Project

    2016-01-01

    We describe a set of features for identifying z>1 supernovae in reference-subtracted HST WFC3 IR images produced by the Supernova Cosmology Project's pipeline for the "See Change" project. These features, in combination with a random forest classifier, yield an effective system for automatically discriminating between supernovae and image artifacts produced by the instrumentation and the image processing procedure. When k-fold cross-validation is performed using a set of 30,000 artifacts and 10,000 synthetic supernovae, the classifier gives an efficiency comparable to that of a human scanner, correctly identifying 97 percent of the synthetic supernovae while rejecting 95 percent of the artifacts. This software will allow for less labor-intensive transient search procedures by automatically rejecting artifacts that would otherwise require human review.
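The two figures of merit quoted in this abstract, the fraction of synthetic supernovae recovered ("efficiency") and the fraction of artifacts rejected, are straightforward to compute from labels and predictions. A sketch with toy data (the classifier itself, a random forest in the paper, is not reproduced here):

```python
def efficiency(y_true, y_pred):
    """Fraction of true transients (label 1) correctly identified."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tp / sum(1 for t in y_true if t == 1)

def rejection(y_true, y_pred):
    """Fraction of artifacts (label 0) correctly rejected."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tn / sum(1 for t in y_true if t == 0)

# Toy labels: four synthetic supernovae, five artifacts
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1]
eff, rej = efficiency(y_true, y_pred), rejection(y_true, y_pred)
```

The paper's reported 97%/95% figures are these same two quantities evaluated under k-fold cross-validation on 10,000 synthetic supernovae and 30,000 artifacts.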

  13. Automated Classification Of Scanning Electron Microscope Particle Images Using Morphological Analysis

    Science.gov (United States)

    Lamarche, B. L.; Lewis, R. R.; Girvin, D. C.; McKinley, J. P.

    2008-12-01

    We are developing a software tool that can automatically classify anthropogenic and natural aerosol particulates using morphological analysis. Our method was developed using SEM (backscattered and secondary electron) images of single particles. Particle silhouettes are detected and converted into polygons using Intel's OpenCV image processing library. Our analysis then proceeds independently for the two kinds of images. Analysis of secondary images concerns itself solely with the silhouette and seeks to quantify its shape and roughness. Traversing the polygon with spline interpolation, we uniformly sample k(s), the signed curvature of the silhouette's path as a function of distance along the perimeter s. k(s) is invariant under rotation and translation. The power spectrum of k(s) qualitatively shows both shape and roughness: more power at low frequencies indicates variation in shape; more power at higher frequencies indicates a rougher silhouette. We present a series of filters (low-, band-, and high-pass) which we convolve with k(s) to yield a set of parameters that characterize the shape and roughness numerically. Analysis of backscatter images focuses on the (visual) texture, which is the result of both composition and geometry. Using the silhouette as a boundary, we compute the variogram, a statistical measure of inter-pixel covariance as a function of distance. Variograms take on characteristic curves, which we fit with a heuristic, asymptotic function that uses a small set of parameters. The combination of silhouette and variogram fit parameters forms the basis of a multidimensional classification space whose dimensionality we may reduce by principal component analysis and whose region boundaries allow us to classify new particles. This analysis is performed without a priori knowledge of other physical, chemical, or climatic properties. The method will be adapted to multi-particulate images.
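The signed-curvature sampling described above has a simple discrete analogue: at each vertex of the silhouette polygon, the signed exterior (turning) angle plays the role of curvature times arc length. A hedged sketch (the paper uses spline interpolation and filter banks, omitted here); a useful sanity check is that the turning angles of a simple counter-clockwise polygon sum to 2π.

```python
import math

def signed_turning_angles(poly):
    """Signed exterior angle at each vertex of a closed polygon (CCW > 0)."""
    n = len(poly)
    angles = []
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = poly[i - 1], poly[i], poly[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)   # heading of incoming edge
        a2 = math.atan2(y2 - y1, x2 - x1)   # heading of outgoing edge
        d = a2 - a1
        # wrap the turn to (-pi, pi]
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        angles.append(d)
    return angles

# Unit square traversed counter-clockwise: four +pi/2 turns
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
total = sum(signed_turning_angles(square))
```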

  14. Automated microaneurysm detection method based on double ring filter in retinal fundus images

    Science.gov (United States)

    Mizutani, Atsushi; Muramatsu, Chisako; Hatanaka, Yuji; Suemori, Shinsuke; Hara, Takeshi; Fujita, Hiroshi

    2009-02-01

    The presence of microaneurysms in the eye is one of the early signs of diabetic retinopathy, which is one of the leading causes of vision loss. We have been investigating a computerized method for the detection of microaneurysms on retinal fundus images, which were obtained from the Retinopathy Online Challenge (ROC) database. The ROC provides 50 training cases, in which "gold standard" locations of microaneurysms are provided, and 50 test cases without the gold standard locations. In this study, the computerized scheme was developed by using the training cases. Although the results for the test cases are also included, this paper mainly discusses the results for the training cases because the "gold standard" for the test cases is not known. After image preprocessing, candidate regions for microaneurysms were detected using a double-ring filter. Any potential false positives located in the regions corresponding to blood vessels were removed by automatic extraction of blood vessels from the images. Twelve image features were determined, and the candidate lesions were classified into microaneurysms or false positives using the rule-based method and an artificial neural network. The true positive fraction of the proposed method was 0.45 at 27 false positives per image. Forty-two percent of microaneurysms in the 50 training cases were considered invisible by the consensus of two co-investigators. When the method was evaluated for visible microaneurysms, the sensitivity for detecting microaneurysms was 65% at 27 false positives per image. Our computerized detection scheme could be improved for helping ophthalmologists in the early diagnosis of diabetic retinopathy.
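The double-ring filter used for candidate detection compares the intensity inside an inner disc with that of a surrounding ring; a small dark blob (a microaneurysm candidate) gives a strongly negative response. The sketch below is a generic illustration of that idea, using Chebyshev-distance rings and toy radii/values rather than the paper's parameters.

```python
def ring_response(img, r, c, r_in, r_out):
    """Mean(inner disc) minus mean(outer ring) around pixel (r, c)."""
    inner, outer = [], []
    for i in range(len(img)):
        for j in range(len(img[0])):
            d = max(abs(i - r), abs(j - c))  # Chebyshev distance (assumption)
            if d <= r_in:
                inner.append(img[i][j])
            elif d <= r_out:
                outer.append(img[i][j])
    return sum(inner) / len(inner) - sum(outer) / len(outer)

# Uniform background with one dark pixel standing in for a candidate lesion
img = [[100] * 5 for _ in range(5)]
img[2][2] = 40
resp = ring_response(img, 2, 2, r_in=0, r_out=2)
```

Thresholding such responses across the image, then pruning candidates on extracted vessels, mirrors the pipeline the abstract outlines.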

  15. Automated Atlas-Based Segmentation of Brain Structures in MR Images: Application to a Population-Based Imaging Study

    NARCIS (Netherlands)

    F. van der Lijn (Fedde)

    2010-01-01

    The final type of segmentation method is atlas-based segmentation (sometimes also called label propagation). In this approach, additional knowledge is introduced through an atlas image, in which an expert has labeled the brain structures of interest. The atlas is first registered to the t

  16. Automated detection of kinks from blood vessels for optic cup segmentation in retinal images

    Science.gov (United States)

    Wong, D. W. K.; Liu, J.; Lim, J. H.; Li, H.; Wong, T. Y.

    2009-02-01

    The accurate localization of the optic cup in retinal images is important to assess the cup to disc ratio (CDR) for glaucoma screening and management. Glaucoma is physiologically assessed by the increased excavation of the optic cup within the optic nerve head, also known as the optic disc. The CDR is thus an important indicator of risk and severity of glaucoma. In this paper, we propose a method of determining the cup boundary in non-stereographic retinal images by the automatic detection of a morphological feature within the optic disc known as kinks. Kinks are defined as the bendings of small vessels as they traverse from the disc to the cup, providing physiological validation for the cup boundary. To detect kinks, localized patches are first generated from a preliminary cup boundary obtained via level set. Features obtained using edge detection and wavelet transform are combined using a statistical rule to identify likely vessel edges. The kinks are then obtained automatically by analyzing the detected vessel edges for angular changes, and these kinks are subsequently used to obtain the cup boundary. A set of retinal images from the Singapore Eye Research Institute was obtained to assess the performance of the method, with each image being clinically graded for the CDR. From experiments, when kinks were used, the error on the CDR was reduced to less than 0.1 CDR units relative to the clinical CDR, which is within the intra-observer variability of 0.2 CDR units.

  17. Enabling automated magnetic resonance imaging-based targeting assessment during dipole field navigation

    Science.gov (United States)

    Latulippe, Maxime; Felfoul, Ouajdi; Dupont, Pierre E.; Martel, Sylvain

    2016-02-01

    The magnetic navigation of drugs in the vascular network promises to increase the efficacy and reduce the secondary toxicity of cancer treatments by targeting tumors directly. Recently, dipole field navigation (DFN) was proposed as the first method achieving both high field and high navigation gradient strengths for whole-body interventions in deep tissues. This is achieved by introducing large ferromagnetic cores around the patient inside a magnetic resonance imaging (MRI) scanner. However, doing so distorts the static field inside the scanner, which prevents imaging during the intervention. This limitation constrains DFN to open-loop navigation, thus exposing the risk of a harmful toxicity in case of a navigation failure. Here, we are interested in periodically assessing drug targeting efficiency using MRI even in the presence of a core. We demonstrate, using a clinical scanner, that it is in fact possible to acquire, in specific regions around a core, images of sufficient quality to perform this task. We show that the core can be moved inside the scanner to a position minimizing the distortion effect in the region of interest for imaging. Moving the core can be done automatically using the gradient coils of the scanner, which then also enables the core to be repositioned to perform navigation to additional targets. The feasibility and potential of the approach are validated in an in vitro experiment demonstrating navigation and assessment at two targets.

  18. DetectTLC: Automated Reaction Mixture Screening Utilizing Quantitative Mass Spectrometry Image Feature

    Science.gov (United States)

    Kaddi, Chanchala D.; Bennett, Rachel V.; Paine, Martin R. L.; Banks, Mitchel D.; Weber, Arthur L.; Fernández, Facundo M.; Wang, May D.

    2016-01-01

    Full characterization of complex reaction mixtures is necessary to understand mechanisms, optimize yields, and elucidate secondary reaction pathways. Molecular-level information for species in such mixtures can be readily obtained by coupling mass spectrometry imaging (MSI) with thin layer chromatography (TLC) separations. User-guided investigation of imaging data for mixture components with known m/z values is generally straightforward; however, spot detection for unknowns is highly tedious, and limits the applicability of MSI in conjunction with TLC. To accelerate imaging data mining, we developed DetectTLC, an approach that automatically identifies m/z values exhibiting TLC spot-like regions in MS molecular images. Furthermore, DetectTLC can also spatially match m/z values for spots acquired during alternating high and low collision-energy scans, pairing product ions with precursors to enhance structural identification. As an example, DetectTLC is applied to the identification and structural confirmation of unknown, yet significant, products of abiotic pyrazinone and aminopyrazine nucleoside analog synthesis. PMID:26508443

  19. An automated cloud detection method based on green channel of total sky visible images

    Directory of Open Access Journals (Sweden)

    J. Yang

    2015-05-01

    Getting an accurate cloud cover state is a challenging task. In the past, traditional two-dimensional red-to-blue band methods have been widely used for cloud detection in total sky images. By analyzing the imaging principle of cameras, the green channel has been selected to replace the 2-D red-to-blue band for total sky cloud detection. The brightness distribution in a total sky image is usually non-uniform, because of forward scattering and Mie scattering of aerosols, which results in increased detection errors in the circumsolar and near-horizon regions. This paper proposes an automatic cloud detection algorithm, "green channel background subtraction adaptive threshold" (GBSAT), which incorporates channel selection, background simulation, computation of solar mask and cloud mask, subtraction, adaptive thresholding, and binarization. Several experimental cases show that the GBSAT algorithm is robust for all types of test total sky images and produces more complete and accurate retrievals than traditional methods.
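Three of the GBSAT stages named in the abstract (channel selection, background subtraction, adaptive thresholding/binarization) can be sketched end to end. The background model and the mean-based threshold rule below are placeholder assumptions, not the paper's simulated background or its exact threshold.

```python
def green_channel(rgb):
    """Keep only the green component of each (R, G, B) pixel."""
    return [[px[1] for px in row] for row in rgb]

def subtract_background(g, bg):
    """Remove a simulated clear-sky background from the green channel."""
    return [[gv - bv for gv, bv in zip(gr, br)] for gr, br in zip(g, bg)]

def adaptive_binarize(img):
    """Binarize at the global mean of the residual image (an assumption)."""
    flat = [v for row in img for v in row]
    t = sum(flat) / len(flat)
    return [[1 if v > t else 0 for v in row] for row in img]

# Toy 2x2 sky patch: bright green pixels stand in for cloud
rgb = [[(10, 200, 10), (10, 40, 10)],
       [(10, 60, 10), (10, 210, 10)]]
bg = [[30, 30], [30, 30]]
cloud_mask = adaptive_binarize(subtract_background(green_channel(rgb), bg))
```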

  20. Automated image segmentation of haematoxylin and eosin stained skeletal muscle cross-sections

    DEFF Research Database (Denmark)

    Liu, F; Mackey, AL; Srikuea, R;

    2013-01-01

    . This procedure is labour-intensive and time-consuming. In this paper, we have developed and validated an automatic image segmentation algorithm that is not only efficient but also accurate. Our proposed automatic segmentation algorithm for haematoxylin and eosin stained skeletal muscle cross-sections consists...

  1. Automated detection of semagram-laden images using adaptive neural networks

    Science.gov (United States)

    Cerkez, Paul S.; Cannady, James D.

    2012-04-01

    Digital steganography is gaining wide acceptance in the world of electronic copyright stamping. Digital media that are easy to steal, such as graphics, photos and audio files, are being tagged with both visible and invisible copyright stamps (known as digital watermarking). However, these same techniques can also be used to hide communications between actors in criminal or covert activities. An inherent difficulty in detecting steganography is overcoming the variety of methods for hiding a message and the multitude of choices of available media. Another problem in steganography defense is the issue of detection speed, since the encoded data are frequently time-sensitive. When a message is visually transmitted in a non-textual format (i.e., in an image) it is referred to as a semagram. Semagrams are relatively easy to create, but very difficult to detect. While steganography can often be identified by detecting digital modifications to an image's structure, an image-based semagram is more difficult because the message is the image itself. The work presented describes the creation of a novel, computer-based application which uses a hybrid hierarchical neural network architecture to detect the likely presence of a semagram message in an image. The prototype system was used to detect semagrams containing Morse code messages. Based on the results of these experiments, our approach provides a significant advance in the detection of complex semagram patterns. Specific results of the experiments and the potential practical applications of the neural network-based technology are discussed. This presentation provides the final results of our research experiments.

  2. Automated diagnosis of interstitial lung diseases and emphysema in MDCT imaging

    Science.gov (United States)

    Fetita, Catalin; Chang Chien, Kuang-Che; Brillet, Pierre-Yves; Prêteux, Françoise

    2007-09-01

    Diffuse lung diseases (DLD) comprise a heterogeneous group of non-neoplastic diseases resulting from damage to the lung parenchyma by varying patterns of inflammation. Characterization and quantification of DLD severity using MDCT, mainly in interstitial lung diseases and emphysema, is an important issue in clinical research for the evaluation of new therapies. This paper develops a 3D automated approach for detection and diagnosis of diffuse lung diseases such as fibrosis/honeycombing, ground glass and emphysema. The proposed methodology combines multi-resolution 3D morphological filtering (exploiting the sup-constrained connection cost operator) and graph-based classification for a full characterization of the parenchymal tissue. The morphological filtering performs a multi-level segmentation of the low- and medium-attenuated lung regions as well as their classification with respect to a granularity criterion (multi-resolution analysis). The original intensity range of the CT data volume is thus reduced in the segmented data to a number of levels equal to the resolution depth used (generally ten levels). The specificity of such morphological filtering is to extract tissue patterns locally contrasting with their neighborhood and of size inferior to the resolution depth, while preserving their original shape. A multi-valued hierarchical graph describing the segmentation result is built up according to the resolution level and the adjacency of the different segmented components. The graph nodes are then enriched with the textural information carried by their associated components. A graph analysis and reorganization based on the node attributes delivers the final classification of the lung parenchyma into normal and ILD/emphysematous regions. It also makes it possible to discriminate between different types, or development stages, of the same class of diseases.

  3. Automated hexahedral mesh generation from biomedical image data: applications in limb prosthetics.

    Science.gov (United States)

    Zachariah, S G; Sanders, J E; Turkiyyah, G M

    1996-06-01

    A general method to generate hexahedral meshes for finite element analysis of residual limbs and similar biomedical geometries is presented. The method utilizes skeleton-based subdivision of cross-sectional domains to produce simple subdomains in which structured meshes are easily generated. Application to a below-knee residual limb and external prosthetic socket is described. The residual limb was modeled as consisting of bones, soft tissue, and skin. The prosthetic socket model comprised a socket wall with an inner liner. The geometries of these structures were defined using axial cross-sectional contour data from X-ray computed tomography, optical scanning, and mechanical surface digitization. A tubular surface representation, using B-splines to define the directrix and generator, is shown to be convenient for definition of the structure geometries. Conversion of cross-sectional data to the compact tubular surface representation is direct, and the analytical representation simplifies geometric querying and numerical optimization within the mesh generation algorithms. The element meshes remain geometrically accurate since boundary nodes are constrained to lie on the tubular surfaces. Several element meshes of increasing mesh density were generated for two residual limbs and prosthetic sockets. Convergence testing demonstrated that approximately 19 elements are required along a circumference of the residual limb surface for a simple linear elastic model. A model with the fibula absent compared with the same geometry with the fibula present showed differences suggesting higher distal stresses in the absence of the fibula. Automated hexahedral mesh generation algorithms for sliced data represent an advancement in prosthetic stress analysis since they allow rapid modeling of any given residual limb and optimization of mesh parameters.

  4. Breast Imaging Reporting and Data System (BI-RADS) breast composition descriptors: Automated measurement development for full field digital mammography

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, E. E.; Sellers, T. A.; Lu, B. [Department of Cancer Epidemiology, Division of Population Sciences, H. Lee Moffitt Cancer Center, Tampa, Florida 33612 (United States); Heine, J. J. [Department of Cancer Imaging and Metabolism, H. Lee Moffitt Cancer Center, Tampa, Florida 33612 (United States)

    2013-11-15

    Purpose: The Breast Imaging Reporting and Data System (BI-RADS) breast composition descriptors are used for standardized mammographic reporting and are assessed visually. This reporting is clinically relevant because breast composition can impact mammographic sensitivity and is a breast cancer risk factor. New techniques are presented and evaluated for generating automated BI-RADS breast composition descriptors using both raw and calibrated full field digital mammography (FFDM) image data. Methods: A matched case-control dataset with FFDM images was used to develop three automated measures for the BI-RADS breast composition descriptors. Histograms of each calibrated mammogram in the percent glandular (pg) representation were processed to create the new BRpg measure. Two previously validated measures of breast density derived from calibrated and raw mammograms were converted to the new BRvc and BRvr measures, respectively. These three measures were compared with the radiologist-reported BI-RADS composition assessments from the patient records. The authors used two optimization strategies with differential evolution to create these measures: method-1 used breast cancer status; and method-2 matched the reported BI-RADS descriptors. Weighted kappa (κ) analysis was used to assess the agreement between the new measures and the reported measures. Each measure's association with breast cancer was evaluated with odds ratios (ORs) adjusted for body mass index, breast area, and menopausal status. ORs were estimated as per unit increase with 95% confidence intervals. Results: The three BI-RADS measures generated by method-1 had κ between 0.25 and 0.34. These measures were significantly associated with breast cancer status in the adjusted models: (a) OR = 1.87 (1.34, 2.59) for BRpg; (b) OR = 1.93 (1.36, 2.74) for BRvc; and (c) OR = 1.37 (1.05, 1.80) for BRvr. The measures generated by method-2 had κ between 0.42 and 0.45. Two of these

  5. Automated classification of histopathology images of prostate cancer using a Bag-of-Words approach

    Science.gov (United States)

    Sanghavi, Foram M.; Agaian, Sos S.

    2016-05-01

    The goals of this paper are (1) to test computer-aided classification of prostate cancer histopathology images based on the Bag-of-Words (BoW) approach, (2) to evaluate the performance of the proposed method in classifying grade 3 and grade 4 images against the results of the approach proposed by Khurd et al. in [9], and (3) to classify the different grades of cancer, namely grade 0, 3, 4, and 5, using the proposed approach. The system performance is assessed using 132 prostate cancer histopathology images of different grades. The performance of SURF features is also analyzed by comparing the results with SIFT features using different cluster sizes. The results show 90.15% accuracy in the detection of prostate cancer images using SURF features with 75 clusters for k-means clustering. The results also show higher sensitivity for SURF-based BoW classification compared to SIFT-based BoW.
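The BoW pipeline described above (local descriptors → k-means visual vocabulary → per-image word histogram → classifier) can be sketched as follows. This is an illustrative sketch, not the authors' code: the helper names are invented, and the descriptor arrays stand in for real SURF/SIFT output (SURF itself is patent-encumbered in many OpenCV builds):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, k, seed=0):
    """Cluster all local descriptors (e.g. SURF/SIFT) from the training
    images into k visual words via k-means."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_desc)

def bow_histogram(vocab, descriptors):
    """Encode one image as a normalized histogram of visual-word counts;
    these histograms are what the downstream classifier sees."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()
```

In the paper's setting, the vocabulary size would be the cluster count under study (e.g. 75), and the histograms would be fed to the grade classifier.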

  6. Automated Alignment and On-Sky Performance of the Gemini Planet Imager Coronagraph

    CERN Document Server

    Savransky, Dmitry; Poyneer, Lisa A; Dunn, Jennifer; Macintosh, Bruce A; Sadakuni, Naru; Dillon, Daren; Goodsell, Stephen J; Hartung, Markus; Hibon, Pascale; Rantakyrö, Fredrik; Cardwell, Andrew; Serio, Andrew

    2014-01-01

    The Gemini Planet Imager (GPI) is a next-generation, facility instrument currently being commissioned at the Gemini South observatory. GPI combines an extreme adaptive optics system and integral field spectrograph (IFS) with an apodized-pupil Lyot coronagraph (APLC) producing an unprecedented capability for directly imaging and spectroscopically characterizing extrasolar planets. GPI's operating goal of $10^{-7}$ contrast requires very precise alignments between the various elements of the coronagraph (two pupil masks and one focal plane mask) and active control of the beam path throughout the instrument. Here, we describe the techniques used to automatically align GPI and maintain the alignment throughout the course of science observations. We discuss the particular challenges of maintaining precision alignments on a Cassegrain mounted instrument and strategies that we have developed that allow GPI to achieve high contrast even in poor seeing conditions.

  7. Digital Rocks Portal: a sustainable platform for imaged dataset sharing, translation and automated analysis

    Science.gov (United States)

    Prodanovic, M.; Esteva, M.; Hanlon, M.; Nanda, G.; Agarwal, P.

    2015-12-01

    Recent advances in imaging have provided a wealth of 3D datasets that reveal pore space microstructure (nm to cm length scale) and allow investigation of nonlinear flow and mechanical phenomena from first principles using numerical approaches. This framework has popularly been called "digital rock physics". Researchers, however, have trouble storing and sharing the datasets, both because of their size and because of the lack of standardized image types and associated metadata for volumetric datasets. This impedes scientific cross-validation of the numerical approaches that characterize large-scale porous media properties, as well as development of the multiscale approaches required for correct upscaling. A single research group typically specializes in one imaging modality and/or related modeling on a single length scale, and the lack of data-sharing infrastructure makes it difficult to integrate different length scales. We developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal that (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of geosciences and engineering researchers not necessarily trained in computer science or data analysis. Once widely accepted, the repository will jumpstart productivity and enable scientific inquiry and engineering decisions founded on a data-driven basis. This is the first repository of its kind. We show initial results on incorporating essential software tools and pipelines that make it easier for researchers to store and reuse data, and for educators to quickly visualize and illustrate concepts to a wide audience. For data sustainability and continuous access, the portal is implemented within the reliable, 24/7 maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. Long-term storage is provided through the University of Texas System Research

  8. OpenComet: An automated tool for comet assay image analysis

    OpenAIRE

    Gyori, Benjamin M.; Gireedhar Venkatachalam; Thiagarajan, P. S.; David Hsu; Marie-Veronique Clement

    2014-01-01

    Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires ...

  9. CT-guided automated detection of lung tumors on PET images

    Science.gov (United States)

    Cui, Yunfeng; Zhao, Binsheng; Akhurst, Timothy J.; Yan, Jiayong; Schwartz, Lawrence H.

    2008-03-01

    The calculation of standardized uptake values (SUVs) in tumors on serial [18F]2-fluoro-2-deoxy-D-glucose (18F-FDG) positron emission tomography (PET) images is often used for the assessment of therapy response. We present a computerized method that automatically detects lung tumors on 18F-FDG PET/computed tomography (CT) images using both anatomic and metabolic information. First, on CT images, relevant organs, including lung, bone, liver and spleen, are automatically identified and segmented based on their locations and intensity distributions. Hot spots (SUV ≥ 1.5) on 18F-FDG PET images are then labeled using connected component analysis. The resultant "hot objects" (geometrically connected hot spots in three dimensions) that fall into, reside at the edges of, or are in the vicinity of the lungs are considered tumor candidates. To determine true lesions, further analyses are conducted, including reduction of tumor candidates by masking out hot objects within CT-determined normal organs, and analysis of candidate tumors' locations, intensity distributions and shapes on both CT and PET. The method was applied to 18F-FDG PET/CT scans from 9 patients, in which 31 target lesions had been identified by a nuclear medicine radiologist during a Phase II lung cancer clinical trial. Out of 31 target lesions, 30 (97%) were detected by the computer method. However, sensitivity and specificity were not estimated because not all lesions had been marked up in the clinical trial. The method effectively excluded hot spots caused by the mediastinum, liver, spleen, skeletal muscle and bone metastases.
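The hot-spot step described above (thresholding at SUV ≥ 1.5 followed by 3D connected-component analysis) can be sketched with `scipy.ndimage`. The `min_voxels` noise filter is an illustrative addition, not a parameter reported in the record:

```python
import numpy as np
from scipy import ndimage

def find_hot_objects(suv, threshold=1.5, min_voxels=2):
    """Label 3D connected components of voxels with SUV >= threshold and
    return the voxel coordinates of each surviving 'hot object'."""
    mask = suv >= threshold
    # default structuring element: face (6-connected) adjacency in 3D
    labels, n = ndimage.label(mask)
    objects = []
    for i in range(1, n + 1):
        voxels = np.argwhere(labels == i)
        if len(voxels) >= min_voxels:  # drop isolated single-voxel noise
            objects.append(voxels)
    return objects
```

In the paper's pipeline, the returned objects would then be filtered against the CT-derived organ masks to remove physiologic uptake.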

  10. Automated iterative neutrosophic lung segmentation for image analysis in thoracic computed tomography

    OpenAIRE

    Guo, Yanhui; Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Kazerooni, Ella A.

    2013-01-01

    Purpose: Lung segmentation is a fundamental step in many image analysis applications for lung diseases and abnormalities in thoracic computed tomography (CT). The authors have previously developed a lung segmentation method based on expectation-maximization (EM) analysis and morphological operations (EMM) for our computer-aided detection (CAD) system for pulmonary embolism (PE) in CT pulmonary angiography (CTPA). However, due to the large variations in pathology that may be present in thoraci...

  11. A method for the automated processing and analysis of images of ULVWF-platelet strings.

    Science.gov (United States)

    Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V

    2013-01-01

    We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed, using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows analysis of the strings to provide data such as the number and length of strings, number of platelets per string and the distance between each platelet to be found. The software reduces analysis time, and more importantly removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.

  12. Automated prediction of tissue outcome after acute ischemic stroke in computed tomography perfusion images

    Science.gov (United States)

    Vos, Pieter C.; Bennink, Edwin; de Jong, Hugo; Velthuis, Birgitta K.; Viergever, Max A.; Dankbaar, Jan Willem

    2015-03-01

    Assessment of the extent of cerebral damage on admission in patients with acute ischemic stroke could play an important role in treatment decision making. Computed tomography perfusion (CTP) imaging can be used to determine the extent of damage. However, clinical application is hindered by differences among vendors and methodologies. As a result, threshold-based methods and visual assessment of CTP images have not yet been shown to be useful in treatment decision making and predicting clinical outcome. Preliminary results in MR studies have shown the benefit of using supervised classifiers for predicting tissue outcome, but this has not been demonstrated for CTP. We present a novel method for the automatic prediction of tissue outcome by combining multi-parametric CTP images into a tissue outcome probability map. A supervised classification scheme was developed to extract absolute and relative perfusion values from processed CTP images, which are summarized by a trained classifier into a likelihood of infarction. Training was performed using follow-up CT scans of 20 acute stroke patients with complete recanalization of the vessel that was occluded on admission. Infarcted regions were annotated by expert neuroradiologists. Multiple classifiers were evaluated in a leave-one-patient-out strategy for their discriminating performance using receiver operating characteristic (ROC) statistics. Results showed that a RandomForest classifier performed optimally with an area under the ROC curve of 0.90 for discriminating infarcted tissue. The obtained results are an improvement over existing thresholding methods and are in line with results found in the literature where MR perfusion was used.
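The leave-one-patient-out evaluation described above can be sketched with scikit-learn. This is a hedged sketch, not the paper's code: the feature extraction from CTP maps is abstracted into a plain feature matrix, and the forest size is an arbitrary choice:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score  # for ROC analysis of the scores

def leave_one_patient_out_scores(features, labels, patient_ids, seed=0):
    """For each patient, train on all other patients and collect the
    predicted infarct probabilities for that patient's voxels/samples."""
    scores = np.zeros(len(labels), dtype=float)
    for pid in np.unique(patient_ids):
        test = patient_ids == pid
        clf = RandomForestClassifier(n_estimators=50, random_state=seed)
        clf.fit(features[~test], labels[~test])
        scores[test] = clf.predict_proba(features[test])[:, 1]
    return scores
```

Pooling the held-out scores and passing them with the labels to `roc_auc_score` gives the kind of per-classifier area-under-ROC figure quoted in the record.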

  13. Semi-automated reconstruction of neural processes from large numbers of fluorescence images.

    Directory of Open Access Journals (Sweden)

    Ju Lu

    Full Text Available We introduce a method for large-scale reconstruction of complex bundles of neural processes from fluorescent image stacks. We imaged yellow fluorescent protein-labeled axons that innervated a whole muscle, as well as dendrites in cerebral cortex, in transgenic mice, at the diffraction limit with a confocal microscope. Each image stack was digitally re-sampled along an orientation such that the majority of axons appeared in cross-section. A region growing algorithm was implemented in the open-source Reconstruct software and applied to the semi-automatic tracing of individual axons in three dimensions. The progression of region growing is constrained by user-specified criteria based on pixel values and object sizes, and the user has full control over the segmentation process. A full montage of reconstructed axons was assembled from the approximately 200 individually reconstructed stacks. Average reconstruction speed is approximately 0.5 mm per hour. We found an error rate in the automatic tracing mode of approximately 1 error per 250 µm of axonal length. We demonstrated the capacity of the program by reconstructing the connectome of motor axons in a small mouse muscle.
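The constrained region growing described above can be sketched in 2D. The paper's Reconstruct implementation operates on re-sampled image stacks; here the user-specified pixel-value and object-size criteria are modeled by the assumed `tol` and `max_size` parameters:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10, max_size=10000):
    """Grow a region from `seed` by breadth-first search, accepting
    4-neighbors whose intensity is within `tol` of the seed value,
    stopping once `max_size` pixels are reached."""
    h, w = img.shape
    seed_val = float(img[seed])
    region = {seed}
    queue = deque([seed])
    while queue and len(region) < max_size:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(float(img[ny, nx]) - seed_val) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region
```

Run per axon cross-section, a segmentation like this yields the 2D profiles that are then linked across slices into 3D traces.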

  14. An Optimized Clustering Approach for Automated Detection of White Matter Lesions in MRI Brain Images

    Directory of Open Access Journals (Sweden)

    M. Anitha

    2012-04-01

    Full Text Available White Matter Lesions (WMLs) are small areas of dead cells found in parts of the brain. In general, it is difficult for medical experts to accurately quantify WMLs due to the decreased contrast between White Matter (WM) and Grey Matter (GM). The aim of this paper is to automatically detect the White Matter Lesions present in the brains of elderly people. The WML detection process includes the following stages: 1. image preprocessing; 2. clustering (Fuzzy c-means clustering (FCM), Geostatistical Possibilistic clustering (GPC) and Geostatistical Fuzzy clustering (GFCM)); and 3. optimization using Particle Swarm Optimization (PSO). The proposed system is tested on a database of 208 MRI images. GFCM yields a high sensitivity of 89%, specificity of 94% and overall accuracy of 93% over FCM and GPC. The clustered brain images are then subjected to Particle Swarm Optimization (PSO). The optimized result obtained from GFCM-PSO provides sensitivity of 90%, specificity of 94% and accuracy of 95%. The detection results reveal that GFCM and GFCM-PSO better localize the large regions of lesions and give a lower false positive rate when compared to GPC and GPC-PSO, which capture the largest loads of WMLs only in the upper ventral horns of the brain.

  15. Automated Counting of Rice Planthoppers in Paddy Fields Based on Image Processing

    Institute of Scientific and Technical Information of China (English)

    YAO Qing; XIAN Ding-xiang; LIU Qing-jie; YANG Bao-jun; DIAO Guang-qiang; TANG Jian

    2014-01-01

    A quantitative survey of rice planthoppers in paddy fields is important to assess the population density and make forecasting decisions. Manual rice planthopper survey methods in paddy fields are time-consuming, fatiguing and tedious. This paper describes a handheld device for easily capturing planthopper images on rice stems and an automatic method for counting rice planthoppers based on image processing. The handheld device consists of a digital camera with WiFi, a smartphone and an extendable pole. The surveyor can use the smartphone to control the camera, which is fixed on the front of the pole, via WiFi, and to photograph planthoppers on rice stems. For the counting of planthoppers on rice stems, we adopt three layers of detection: (a) the first layer is an AdaBoost classifier based on Haar features; (b) the second layer is a support vector machine (SVM) classifier based on histogram of oriented gradient (HOG) features; (c) the third layer is a threshold judgment on three features. We use this method to detect and count whiteback planthoppers (Sogatella furcifera) on rice plant images and achieve an 85.2% detection rate and a 9.6% false detection rate. The method is easy, rapid and accurate for the assessment of the population density of rice planthoppers in paddy fields.
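The three-layer detection scheme above amounts to a short-circuiting cascade: a candidate counts as a planthopper only if every layer accepts it. A generic sketch of that control flow, with the stages as callables standing in for the Haar/AdaBoost, HOG/SVM and feature-threshold layers (the real stages would take image patches, not numbers):

```python
def cascade_detect(candidates, stages):
    """Pass each candidate through the stages in order; keep only the
    candidates accepted by every stage. `all(...)` short-circuits, so a
    rejection at an early, cheap stage skips the later, costlier ones."""
    detections = []
    for cand in candidates:
        if all(stage(cand) for stage in stages):
            detections.append(cand)
    return detections
```

Ordering the stages from cheap to expensive is what makes such cascades fast: most non-planthopper regions are discarded by the first layer.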

  16. Application of automated image analysis to the identification and extraction of recyclable plastic bottles

    Institute of Scientific and Technical Information of China (English)

    Edgar SCAVINO; Dzuraidah Abdul WAHAB; Aini HUSSAIN; Hassan BASRI; Mohd Marzuki MUSTAFA

    2009-01-01

    An experimental machine vision apparatus was used to identify and extract recyclable plastic bottles from a conveyor belt. Color images were taken with a commercially available webcam, and the recognition was performed by our in-house software, based on the shape and dimensions of object images. The software was able to manage multiple bottles in a single image and was additionally extended to cases involving touching bottles. The identification was fulfilled by comparing the set of measured features with an existing database, while integrating various recognition techniques such as minimum distance in the feature space, self-organized maps, and neural networks. The recognition system was tested on a set of 50 different bottles and has so far provided an accuracy of about 97% on bottle identification. The extraction of the bottles was performed by means of a pneumatic arm, which was activated according to the plastic type; polyethylene-terephthalate (PET) bottles were left on the conveyor belt, while non-PET bottles were extracted. The software was designed to provide the best compromise between reliability and speed for real-time applications, in view of the commercialization of the system at existing recycling plants.

  17. Pedestrian detection in thermal images: An automated scale based region extraction with curvelet space validation

    Science.gov (United States)

    Lakshmi, A.; Faheema, A. G. J.; Deodhare, Dipti

    2016-05-01

    Pedestrian detection is a key problem in night vision processing, with dozens of applications that will positively impact the performance of autonomous systems. Despite significant progress, our study shows that the performance of state-of-the-art thermal image pedestrian detectors still has much room for improvement. The purpose of this paper is to overcome the challenges faced by thermal image pedestrian detectors, which employ intensity-based Region Of Interest (ROI) extraction followed by feature-based validation. The most striking disadvantage of the first module, ROI extraction, is the failed detection of cloth-insulated parts. To overcome this setback, this paper employs a region-growing algorithm tuned to the scale of the pedestrian. The statistics subtended by the pedestrian vary drastically with scale, and a deviation-from-normality approach facilitates scale detection. Further, the paper offers an adaptive mathematical threshold to resolve the problem of subtracting the background while still extracting cloth-insulated parts. The inherent false positives of the ROI extraction module are limited by the choice of good features in the pedestrian validation step. One such feature is the curvelet feature, which has been used extensively in optical images but has as yet no reported results in thermal images. It is used here to arrive at a pedestrian detector with a reduced false positive rate. This work is the first attempt to scrutinize the utility of curvelets for characterizing pedestrians in thermal images. An attempt has also been made to improve the speed of curvelet transform computation. The classification task is realized through the well-known methodology of Support Vector Machines (SVMs). The proposed method is substantiated with qualified evaluation methodologies that permit us to carry out probing and informative comparisons across state-of-the-art features, including deep learning methods, with six

  18. A comprehensive and precise quantification of the calanoid copepod Acartia tonsa (Dana) for intensive live feed cultures using an automated ZooImage system

    DEFF Research Database (Denmark)

    Vu, Minh Thi Thuy; Jepsen, Per Meyer; Hansen, Benni Winding

    2014-01-01

    ignored. In this study, we propose a novel method for highly precise classification of development stages and biomass of A. tonsa, in intensive live feed cultures, using an automated ZooImage system, a freeware image analysis. We successfully created a training set of 13 categories, including 7 copepod...... and 6 non-copepod (debris) groups. ZooImage used this training set for automatic discrimination through a random forest algorithm with the general accuracy of 92.8%. The ZooImage showed no significant difference in classifying solitary eggs, or mixed nauplii stages and copepodites compared to personal...... microscope observation. Furthermore, ZooImage was also adapted for automatic estimation of A. tonsa biomass. This is the first study that has successfully applied ZooImage software which enables fast and reliable quantification of the development stages and the biomass of A. tonsa. As a result, relevant...

  19. Reduced radiation dose and improved image quality at cardiovascular CT angiography by automated attenuation-based tube voltage selection: intra-individual comparison

    Energy Technology Data Exchange (ETDEWEB)

    Krazinski, Aleksander W.; Silverman, Justin R. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Meinel, Felix G.; Geyer, Lucas L. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Ludwig-Maximilians-University Hospital, Institute for Clinical Radiology, Munich (Germany); Schoepf, U.J. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Medical University of South Carolina, Division of Cardiology, Department of Medicine, Charleston, SC (United States); Canstein, Christian [Siemens Healthcare, CT Division, Malvern, PA (United States); De Cecco, Carlo N. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome "Sapienza" - Polo Pontino, Department of Radiological Sciences, Oncology and Pathology, Latina (Italy)

    2014-11-15

    To evaluate the effect of automated tube voltage selection on radiation dose and image quality at cardiovascular CT angiography (CTA). We retrospectively analysed paired studies in 72 patients (41 male, 60.5 ± 16.5 years) who had undergone CTA acquisitions of the heart or aorta both before and after the implementation of an automated x-ray tube voltage selection algorithm (ATVS). All other parameters were kept identical between the two acquisitions. Subjective image quality (IQ) was rated, and objective IQ was measured by image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and figure of merit (FOM). Image quality parameters and effective dose were compared between acquisitions. Overall subjective image quality improved, with the percentage of cases scored as adequate or higher increasing from 79 % to 92 % after implementation of ATVS (P = 0.03). SNR (14.1 ± 5.9 vs 15.7 ± 6.1, P = 0.009), CNR (11.6 ± 5.3 vs 13.2 ± 5.6, P = 0.011), and FOM (19.9 ± 23.3 vs 43.8 ± 51.1, P < 0.001) were significantly higher after implementation of ATVS. Mean image noise (24.1 ± 8.4 HU vs 22.7 ± 7.1 HU, P = 0.048) and mean effective dose (10.6 ± 5.9 mSv vs 8.8 ± 5.0 mSv, P = 0.003) were significantly lower after implementation of ATVS. Automated tube voltage selection can optimize cardiovascular CTA acquisition parameters in an operator-independent manner, improving image quality at a reduced dose. (orig.)
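The objective IQ metrics above follow standard CT definitions, assumed here since the record does not spell them out: SNR = mean signal attenuation / noise SD, CNR = (signal − background) / noise SD, and FOM = CNR² / effective dose:

```python
def image_quality_metrics(signal_hu, background_hu, noise_sd_hu, dose_msv):
    """Compute SNR, CNR and figure of merit for one CTA acquisition.

    signal_hu / background_hu: mean attenuation (HU) in the vessel ROI and
    a background ROI; noise_sd_hu: image noise as SD of HU; dose_msv:
    effective dose in mSv.
    """
    snr = signal_hu / noise_sd_hu
    cnr = (signal_hu - background_hu) / noise_sd_hu
    fom = cnr ** 2 / dose_msv
    return snr, cnr, fom
```

Because FOM divides squared contrast performance by dose, it captures exactly the trade-off the study reports: higher CNR at lower dose raises the FOM sharply.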

  20. An Automated System for the Detection and Diagnosis of Kidney Lesions in Children from Scintigraphy Images

    DEFF Research Database (Denmark)

    Landgren, Matilda; Sjöstrand, Karl; Ohlsson, Mattias;

    2011-01-01

    lesions in children and adolescents from 99mTc- DMSA scintigraphy images. We present the chain of analysis and provide a discussion of its performance. On a per-lesion basis, the classification reached an ROC-curve area of 0.96 (sensitivity/specificity e.g. 97%/85%) measured using an independent test...... group consisting of 56 patients with 730 candidate lesions. We conclude that the presented system for diagnostic support has the potential of increasing the quality of care regarding this type of examination....

  1. Automated oral cancer identification using histopathological images: a hybrid feature extraction paradigm.

    Science.gov (United States)

    Krishnan, M Muthu Rama; Venkatraghavan, Vikram; Acharya, U Rajendra; Pal, Mousumi; Paul, Ranjan Rashmi; Min, Lim Choo; Ray, Ajoy Kumar; Chatterjee, Jyotirmoy; Chakraborty, Chandan

    2012-02-01

    Oral cancer (OC) is the sixth most common cancer in the world. In India it is the most common malignant neoplasm. Histopathological images have been widely used in the differential diagnosis of normal, oral precancerous (oral sub-mucous fibrosis (OSF)) and cancer lesions. However, this technique is limited by subjective interpretation and less accurate diagnosis. The objective of this work is to improve classification accuracy based on textural features in the development of computer-assisted screening of OSF. The approach introduced here is to grade the histopathological tissue sections into normal, OSF without dysplasia (OSFWD) and OSF with dysplasia (OSFD), which would help oral onco-pathologists to screen subjects rapidly. The biopsy sections are stained with H&E. The optical density of the pixels in the light microscopic images is recorded and represented as a matrix quantized as integers from 0 to 255 for each fundamental color (red, green, blue), resulting in an M×N×3 matrix of integers. Depending on either the normal or OSF condition, the image has various granular structures, which are self-similar patterns at different scales termed "texture". We have extracted these textural changes using Higher Order Spectra (HOS), Local Binary Pattern (LBP), and Laws Texture Energy (LTE) from the histopathological images (normal, OSFWD and OSFD). These feature vectors were fed to five different classifiers: Decision Tree (DT), Sugeno Fuzzy, Gaussian Mixture Model (GMM), K-Nearest Neighbor (K-NN), and Radial Basis Probabilistic Neural Network (RBPNN), to select the best classifier. Our results show that the combination of texture and HOS features coupled with the Fuzzy classifier resulted in 95.7% accuracy, with sensitivity and specificity of 94.5% and 98.8%, respectively. Finally, we have proposed a novel integrated index called the Oral Malignancy Index (OMI), using the HOS, LBP and LTE features, to diagnose benign or malignant tissues using just one number. We hope that this OMI can

  2. Automated Digital Image Analysis (TrichoScan®) for Human Hair Growth Analysis: Ease versus Errors

    OpenAIRE

    Saraogi, Punit P; Rachita S Dhurat

    2010-01-01

    Background: TrichoScan® is considered to be time-saving, easy to perform and consistent for quantifying hair loss/growth. Conflicting results of our study led us to closely observe the image analysis, and certain repeated errors in the detection of hair were highlighted. Aims: To assess the utility of TrichoScan in quantification of diffuse hair loss in males with androgenetic alopecia (AGA) and females with diffuse telogen hair loss, with regard to total hair density (THD), telogen and vell...

  3. Automated detection system for pulmonary emphysema on 3D chest CT images

    Science.gov (United States)

    Hara, Takeshi; Yamamoto, Akira; Zhou, Xiangrong; Iwano, Shingo; Itoh, Shigeki; Fujita, Hiroshi; Ishigaki, Takeo

    2004-05-01

    Automatic extraction of pulmonary emphysema areas on 3-D chest CT images was performed using an adaptive thresholding technique. We proposed a method to estimate the ratio of the emphysema area to the whole lung volume. We employed 32 cases (15 normal and 17 abnormal) which had already been diagnosed by radiologists prior to the study. The ratio in all the normal cases was less than 0.02, and in abnormal cases it ranged from 0.01 to 0.26. The effectiveness of our approach was confirmed by the results of the present study.
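The emphysema ratio described above is the fraction of lung voxels whose attenuation falls below a threshold. A minimal sketch follows; note the paper derives its threshold adaptively, whereas the fixed −950 HU cutoff used here is only a common illustrative choice:

```python
import numpy as np

def emphysema_ratio(ct_hu, lung_mask, threshold_hu=-950):
    """Fraction of lung voxels with attenuation below threshold_hu.

    ct_hu: CT volume in Hounsfield units; lung_mask: boolean mask of the
    segmented lung; the returned ratio corresponds to the paper's
    emphysema-to-lung-volume measure (with its adaptive threshold).
    """
    lung_voxels = ct_hu[lung_mask]
    if lung_voxels.size == 0:
        return 0.0
    return float(np.count_nonzero(lung_voxels < threshold_hu) / lung_voxels.size)
```

Under such a measure, the reported decision behavior corresponds to ratios below about 0.02 for normal cases and up to 0.26 for abnormal ones.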

  4. Automated Waterline Detection in the Wadden Sea Using High-Resolution TerraSAR-X Images

    OpenAIRE

    Stefan Wiehle; Susanne Lehner

    2015-01-01

    We present an algorithm for automatic detection of the land-water-line from TerraSAR-X images acquired over the Wadden Sea. In this coastal region of the southeastern North Sea, a strip of up to 20 km of seabed falls dry during low tide, revealing mudflats and tidal creeks. The tidal currents transport sediments and can change the coastal shape with erosion rates of several meters per month. This rate can be strongly increased by storm surges which also cause flooding of usually dry areas. Du...

  5. Bilateral image subtraction features for multivariate automated classification of breast cancer risk

    Science.gov (United States)

    Celaya-Padilla, Jose M.; Rodriguez-Rojas, Juan; Galván-Tejada, Jorge I.; Martínez-Torteya, Antonio; Treviño, Victor; Tamez-Peña, José G.

    2014-03-01

    Early tumor detection is key in reducing breast cancer deaths, and screening mammography is the most widely available method for early detection. However, mammogram interpretation depends on the human radiologist, whose radiological skills, experience and workload make radiological interpretation inconsistent. In an attempt to make mammographic interpretation more consistent, computer-aided diagnosis (CADx) systems have been introduced. This paper presents a CADx system aimed at automatically triaging normal mammograms from suspicious mammograms. The CADx system co-registers the left and right breast images, then extracts image features from the co-registered bilateral mammographic sets. Finally, an optimal multivariate logistic model is generated by means of an evolutionary search engine. In this study, 440 subjects from the DDSM public data sets were used: 44 normal mammograms, 201 malignant mass mammograms, and 195 mammograms with malignant calcifications. The results showed a cross-validation accuracy of 0.88 and an area under the receiver operating characteristic curve (AUC) of 0.89 for calcifications vs. normal mammograms. The optimal mass vs. normal mammogram model obtained an accuracy of 0.85 and an AUC of 0.88.

  6. [Value of automated medical indexing of an image database and a digital radiological library].

    Science.gov (United States)

    Duvauferrier, R; Le Beux, P; Pouliquen, B; Seka, L P; Morcet, N; Rolland, Y

    1997-06-01

    We indexed the contents of a radiology server on the web to facilitate access to research documents and to link reference texts to images contained in radiology databases. Indexation also allows case reports to be transformed with no supplementary work into formats compatible with computer-assisted training. Indexation was performed automatically by ADM-Index, the aim being to identify the medical concepts expressed within each medical text. Two types of texts were indexed: medical imaging reference books (Edicerf) and case reports with illustrations and captions (Iconocerf). These documents are now available on a web server with HTML format for Edicerf and on an Oracle database for Iconocerf. When the user consults a chapter of a book or a case report, the indexed terms are displayed in the heading; all reference texts and case reports containing the indexed terms can then be called up instantaneously. The user can express his search in natural language. Indexation follows the same process allowing instantaneous recall of all reference texts and case reports where the same concept appears in the diagnosis or clinical context. By using the context of the case reports as the search index, all case reports involving a common medical concept can be found. The context is interpreted as a question. When the user responds to this question, ADM-Index compares this response with the answer furnished by the reference texts and case reports. Correct or erroneous responses can thus be identified, converting the system into a computer-assisted training tool.

  7. Automated Grading of Gliomas using Deep Learning in Digital Pathology Images: A modular approach with ensemble of convolutional neural networks.

    Science.gov (United States)

    Ertosun, Mehmet Günhan; Rubin, Daniel L

    2015-01-01

    Brain gliomas are the most common primary malignant brain tumors in adults and comprise distinct pathologic subtypes: Lower Grade Glioma (LGG) Grade II, Lower Grade Glioma (LGG) Grade III, and Glioblastoma Multiforme (GBM) Grade IV. Survival and treatment options are highly dependent on the glioma grade. We propose a deep learning-based, modular classification pipeline for automated grading of gliomas using digital pathology images. Whole-tissue digitized images of pathology slides obtained from The Cancer Genome Atlas (TCGA) were used to train our deep learning modules. Our modular pipeline provides diagnostic-quality statistics, such as precision, sensitivity, and specificity, for the individual deep learning modules, and (1) facilitates training given the limited data in this domain, (2) enables exploration of different deep learning structures for each module, (3) leads to less complex modules that are simpler to analyze, and (4) provides flexibility, permitting use of single modules within the framework or use of other modeling or machine learning applications, such as probabilistic graphical models or support vector machines. Our modular approach helps us meet the minimum accuracy levels demanded at different decision points within a multi-class classification scheme. Convolutional Neural Networks were trained for each module and each sub-task, with classification accuracies above 90% on the validation data set, and achieved a classification accuracy of 96% for GBM vs. LGG classification and 71% for further identifying the grade of LGG as Grade II or Grade III on an independent data set of new patients from the multi-institutional repository.
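
    The modular decision flow (GBM vs. LGG first, then Grade II vs. Grade III for LGG cases only) can be sketched with stand-in classifiers. The real modules are CNNs; the callables and feature names below are hypothetical placeholders used only to make the routing logic explicit:

```python
# Module 1 separates GBM from LGG; only LGG cases reach module 2, which
# assigns Grade II vs. Grade III. Each module is any callable classifier.
def grade_glioma(image, gbm_vs_lgg, grade2_vs_grade3):
    if gbm_vs_lgg(image) == "GBM":
        return "Grade IV (GBM)"
    return "Grade II" if grade2_vs_grade3(image) == "II" else "Grade III"

# Dummy stand-ins keyed on made-up image features, for demonstration only.
gbm_module = lambda img: "GBM" if img["cellularity"] > 0.8 else "LGG"
lgg_module = lambda img: "II" if img["mitoses"] < 5 else "III"

print(grade_glioma({"cellularity": 0.9, "mitoses": 2}, gbm_module, lgg_module))  # Grade IV (GBM)
print(grade_glioma({"cellularity": 0.4, "mitoses": 7}, gbm_module, lgg_module))  # Grade III
```

    Because each stage is an independent callable, a module can be retrained or replaced (e.g., by a support vector machine) without touching the rest of the pipeline, which is the flexibility point the abstract makes.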

  8. Automated segmentation of MS lesions in FLAIR, DIR and T2-w MR images via an information theoretic approach

    Science.gov (United States)

    Hill, Jason E.; Matlock, Kevin; Pal, Ranadip; Nutter, Brian; Mitra, Sunanda

    2016-03-01

    Magnetic Resonance Imaging (MRI) is a vital tool in the diagnosis and characterization of multiple sclerosis (MS). MS lesions can be imaged with relatively high contrast using either Fluid Attenuated Inversion Recovery (FLAIR) or Double Inversion Recovery (DIR). Automated segmentation and accurate tracking of MS lesions from MRI remain challenging problems. Here, an information theoretic approach to clustering the voxels in pseudo-colorized multispectral MR data (FLAIR, DIR, T2-weighted) is utilized to automatically segment MS lesions of various sizes and noise levels. Improved Jump Method (IJM) clustering, assisted by edge suppression, is applied to segment white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), and MS lesions, if present, in a subset of slices determined to be the best MS lesion candidates via Otsu's method. From this preliminary clustering, the modal data values for the tissues can be determined. A Euclidean distance is then used to estimate the fuzzy memberships of each brain voxel for all tissue types and their 50/50 partial volumes. From these estimates, binary discrete and fuzzy MS lesion masks are constructed. Validation is provided by three synthetic MS lesion brains (mild, moderate, and severe) with labeled ground truths. The MS lesions of mild, moderate, and severe designations were detected with sensitivities of 83.2%, 88.5%, and 94.5%, and with corresponding Dice similarity coefficients (DSC) of 0.7098, 0.8739, and 0.8266, respectively. The effect of MRI noise is also examined via simulated noise and the application of a bilateral filter in preprocessing.
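
    The fuzzy-membership step can be sketched as inverse-distance weighting toward each tissue's modal value in the (FLAIR, DIR, T2-w) feature space. The normalization below is a generic assumption; the paper's exact weighting may differ:

```python
import numpy as np

# Membership of each voxel in each tissue class is taken as inversely
# proportional to its Euclidean distance from that class's modal value.
def fuzzy_memberships(voxels, modes, eps=1e-9):
    """voxels: (N, 3) multispectral intensities; modes: dict class -> (3,) modal value."""
    classes = list(modes)
    d = np.stack([np.linalg.norm(voxels - modes[c], axis=1) for c in classes], axis=1)
    inv = 1.0 / (d + eps)
    memberships = inv / inv.sum(axis=1, keepdims=True)  # rows sum to 1
    return classes, memberships

# Made-up modal values and voxels for illustration only.
modes = {"WM": np.array([0.7, 0.2, 0.3]),
         "GM": np.array([0.5, 0.4, 0.5]),
         "lesion": np.array([0.9, 0.9, 0.8])}
voxels = np.array([[0.88, 0.85, 0.78],   # close to the lesion mode
                   [0.69, 0.21, 0.31]])  # close to the WM mode
classes, m = fuzzy_memberships(voxels, modes)
print(classes[int(np.argmax(m[0]))])  # lesion
print(classes[int(np.argmax(m[1]))])  # WM
```

    Thresholding the lesion-class membership yields the binary mask, while the memberships themselves form the fuzzy mask described in the abstract.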

  9. Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation

    Directory of Open Access Journals (Sweden)

    Marc Pierrot-Deseilligny

    2011-12-01

    Full Text Available The accurate 3D documentation of architectures and heritage objects is increasingly common and required in different application contexts. The potential of the image-based approach is nowadays well known, but there is a lack of reliable, precise and flexible solutions, possibly open-source, which could be used for metric and accurate documentation or digital conservation and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed in order to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experience, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).

  10. An Automated Approach to Passive Sonar Classification Using Binary Image Features

    Institute of Scientific and Technical Information of China (English)

    Vahid Vahidpour; Amir Rastegarnia; Azam Khalili

    2015-01-01

    This paper proposes a new method for ship recognition and classification using the sound produced and radiated underwater. A three-step procedure is proposed. First, preprocessing operations are applied to reduce noise effects and prepare the signal for feature extraction. Second, a binary image, formed from the frequency spectra of the signal segments, is used to extract effective features. Third, a neural classifier is designed to classify the signals. The proposed method and a fractal-based method were compared and tested on real data. The comparative results indicate better recognition ability and more robust performance for the proposed method than for the fractal-based method. The proposed method could therefore improve the recognition accuracy of underwater acoustic targets.
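
    The binary-image step can be sketched as follows: the spectrum of each signal segment becomes one row of an image, and cells above a noise-adaptive threshold are set to 1. The threshold rule below is a placeholder assumption, not the authors' exact criterion:

```python
import numpy as np

# Build a binary time-frequency image from segmented signals: one spectrum
# per row, with cells above (mean + k*std) of that row's magnitudes set to 1.
def spectrum_binary_image(segments, k=2.0):
    rows = []
    for seg in segments:
        mag = np.abs(np.fft.rfft(seg))
        rows.append(mag > mag.mean() + k * mag.std())
    return np.array(rows, dtype=int)

fs = 1024
t = np.arange(fs) / fs
seg = np.sin(2 * np.pi * 100 * t)            # a 100 Hz tonal component
img = spectrum_binary_image([seg, seg])
print(img.shape)        # (2, 513)
print(img[0].argmax())  # 100  (the tonal line survives thresholding)
```

    Features for the neural classifier (line positions, run lengths, etc.) would then be extracted from this binary image rather than from the raw spectra.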

  11. An effective automated system for grading severity of retinal arteriovenous nicking in colour retinal images.

    Science.gov (United States)

    Roy, Pallab Kanti; Nguyen, Uyen T V; Bhuiyan, Alauddin; Ramamohanarao, Kotagiri

    2014-01-01

    Retinal arteriovenous (AV) nicking is a precursor of hypertension, stroke and other cardiovascular diseases. In this paper, an effective method is proposed for the analysis of retinal venular widths to automatically classify the severity level of AV nicking. We use a combination of intensity and edge information of the vein to compute its widths. The widths at various sections of the vein near the crossover point are then used to train a random forest classifier to classify the severity of AV nicking. We analyzed 47 color retinal images obtained from two population-based studies for quantitative evaluation of the proposed method. We compared the detection accuracy of our method with a recently published four-class AV nicking classification method. Our proposed method shows 64.51% classification accuracy, in contrast to the reported classification accuracy of 49.46% by the state-of-the-art method. PMID:25571443

  12. An automated form of video image analysis applied to classification of movement disorders.

    Science.gov (United States)

    Chang, R; Guan, L; Burne, J A

    Video image analysis is able to provide quantitative data on postural and movement abnormalities and thus has an important application in neurological diagnosis and management. Conventional techniques require patients to be videotaped while wearing markers in a highly structured laboratory environment, which restricts the utility of video in routine clinical practice. We have begun development of intelligent software which aims to provide a more flexible system able to quantify human posture and movement directly from whole-body images, without markers and in an unstructured environment. The steps involved are to extract complete human profiles from video frames, to fit skeletal frameworks to the profiles, and to derive joint angles and swing distances. By this means a given posture is reduced to a set of basic parameters that can provide input to a neural network classifier. To test the system's performance we videotaped patients with dopa-responsive Parkinsonism and age-matched normals during several gait cycles, yielding 61 patient and 49 normal postures. These postures were reduced to their basic parameters and fed to the neural network classifier in various combinations. The optimal parameter sets (consisting of both swing distances and joint angles) yielded successful classification of normals and patients with an accuracy above 90%. This result demonstrates the feasibility of the approach. The technique has the potential to guide clinicians on the relative sensitivity of specific postural/gait features in diagnosis. Future studies will aim to improve the robustness of the system in providing accurate parameter estimates from subjects wearing a range of clothing, and to further improve discrimination by incorporating more stages of the gait cycle into the analysis. PMID:10661762
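
    The reduction of a fitted skeleton to joint-angle parameters can be sketched in 2D; the landmark coordinates and the knee example below are hypothetical, not taken from the authors' software:

```python
import numpy as np

# The angle at a joint vertex is the angle between the two limb-segment
# vectors meeting there, e.g. thigh (knee->hip) and shank (knee->ankle).
def joint_angle(hip, knee, ankle):
    u = np.asarray(hip) - np.asarray(knee)
    v = np.asarray(ankle) - np.asarray(knee)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# A fully extended leg gives 180 degrees; a right-angle flexion gives 90.
print(round(joint_angle((0, 2), (0, 1), (0, 0)), 1))  # 180.0
print(round(joint_angle((0, 2), (0, 1), (1, 1)), 1))  # 90.0
```

    A posture vector of such angles plus swing distances is what would be fed to the neural network classifier.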

  13. Automated identification and location analysis of marked stem cells colonies in optical microscopy images.

    Directory of Open Access Journals (Sweden)

    Vincenzo Paduano

    Full Text Available Embryonic stem cells (ESCs) are characterized by two remarkable peculiarities: the capacity to propagate as undifferentiated cells (self-renewal) and the ability to differentiate into ectoderm, endoderm, and mesoderm derivatives (pluripotency). Although the majority of ESCs divide without losing pluripotency, it has become evident that ESC cultures consist of multiple cell populations, highlighted by the expression of early germ lineage markers during spontaneous differentiation. Hence, the identification and characterization of ESC subpopulations represents an efficient approach to improve our understanding of the correlation between gene expression and cell specification status. To study markers of ESC heterogeneity, we developed an analysis pipeline which can automatically process images of stem cell colonies in optical microscopy. The question we address is whether the marked cells have statistically significant preferred locations. We tested our algorithm on a set of images of stem cell colonies to analyze the expression pattern of the Zscan4 gene, a prime candidate for study because it is specifically expressed in a subpopulation of ESCs. To validate the proposed method we analyzed the behavior of control genes whose patterns have been associated with biological status, such as differentiation (EndoA), pluripotency (Pou5f1), and pluripotency fluctuation (Nanog). We found that Zscan4 is not uniformly expressed inside a stem cell colony: it tends to be expressed towards the center of the colony, and cells expressing Zscan4 cluster with each other. This is of significant importance because it allows us to hypothesize a biological status in which cells expressing Zscan4 are preferentially located toward the interior of colonies, suggesting pluripotent cell status features, while the clustering suggests either a colony paracrine effect or an early phase of cell specification through proliferation.
Also, the

  14. Automation of aggregate characterization using laser profiling and digital image analysis

    Science.gov (United States)

    Kim, Hyoungkwan

    2002-08-01

    Particle morphological properties such as size, shape, angularity, and texture are key properties that are frequently used to characterize aggregates. The characteristics of aggregates are crucial to the strength, durability, and serviceability of the structure in which they are used. Thus, it is important to select aggregates that have the proper characteristics for each specific application; use of improper aggregate can cause rapid deterioration or even failure of the structure. The current standard aggregate test methods are generally labor-intensive, time-consuming, and subject to human error. Moreover, important properties of aggregates may not be captured by the standard methods due to the lack of an objective way of quantifying critical aggregate properties. Increased quality expectations, along with recent advances in information technology, are motivating new developments to provide fast and accurate aggregate characterization. The resulting information can enable real-time quality control of aggregate production, as well as lead to better design and construction methods for portland cement concrete and hot mix asphalt. This dissertation presents a system to measure various morphological characteristics of construction aggregates effectively. Automatic measurement of particle properties is of great interest because it has the potential to solve such problems in manual measurement as subjectivity, labor intensity, and slow speed. The main efforts of this research are placed on three-dimensional (3D) laser profiling, particle segmentation algorithms, particle measurement algorithms, and generalized particle descriptors. First, true 3D data of aggregate particles obtained by laser profiling are transformed into digital images. Second, a segmentation algorithm and a particle measurement algorithm are developed to separate particles and process each particle data individually with the aid of various kinds of digital image

  15. Automation analysis of cardiac wall deformation from tagged magnetic resonance images; Analise automatica de deformacao do miocardio em imagens marcadas por ressonancia magnetica

    Energy Technology Data Exchange (ETDEWEB)

    Piva, R.M.V. [Hospital das Clinicas, Sao Paulo, SP (Brazil). Instituto do Coracao. Div. de Informatica; Kitney, R.I. [Imperial College of Science, Technology and Medicine, London (United Kingdom)

    1998-07-01

    Automated analysis of cardiac wall deformation from tagged magnetic resonance images (MRI) derives, basically, from the automatic detection of MR tags and left ventricle contours. In this work, an approach based on image processing techniques and fuzzy logic was adopted to extract and classify image features as belonging to tags or ventricular borders. The use of fuzzy logic and IF-THEN rules, which involve image features such as the length and curvature of valleys and gradients, allows the estimation of the membership of pixels in the searched classes. The myocardial deformation is estimated in regions bounded by contiguous tag intersections. The proposed method was applied to cine SPAMM (Spatial Modulation of Magnetization) short-axis images of the left ventricle obtained from human volunteers. (author)

  16. Bilateral Image Subtraction and Multivariate Models for the Automated Triaging of Screening Mammograms

    Directory of Open Access Journals (Sweden)

    José Celaya-Padilla

    2015-01-01

    Full Text Available Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. This work presents a computer-aided diagnosis (CADx) method aimed at automatically triaging mammogram sets. The method coregisters the left and right mammograms, extracts image features, and classifies the subjects as at risk of malignant calcifications (CS), at risk of malignant masses (MS), or healthy subjects (HS). In this study, 449 subjects (197 CS, 207 MS, and 45 HS) from a public database were used to train and evaluate the CADx. Percentile-rank (p-rank) and z-normalizations were used. For the p-rank, the CS versus HS model achieved a cross-validation accuracy of 0.797 with an area under the receiver operating characteristic curve (AUC) of 0.882; the MS versus HS model obtained an accuracy of 0.772 and an AUC of 0.842. For the z-normalization, the CS versus HS model achieved an accuracy of 0.825 with an AUC of 0.882, and the MS versus HS model obtained an accuracy of 0.698 and an AUC of 0.807. The proposed method has the potential to rank cases with a high probability of malignant findings, aiding in the prioritization of radiologists' work lists.
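
    The two feature normalizations compared above can be sketched generically; the abstract does not give the authors' implementation details, so the formulas below are the standard textbook versions:

```python
import numpy as np

# Percentile-rank normalization: replace each feature value by its rank
# within the sample, scaled to [0, 1]. Z-normalization: zero mean, unit std.
def p_rank(x):
    ranks = np.argsort(np.argsort(x))   # 0-based rank of each value
    return ranks / (len(x) - 1)         # scaled to [0, 1]

def z_norm(x):
    return (x - x.mean()) / x.std()

x = np.array([2.0, 5.0, 3.0, 9.0])
print(p_rank(x))   # ranks 2 < 3 < 5 < 9 mapped to 0, 1/3, 2/3, 1
z = z_norm(x)      # zero mean, unit standard deviation
```

    The p-rank makes features insensitive to outliers and monotone intensity shifts, which may explain why the two normalizations trade places between the CS and MS models.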

  17. Analytic Validation of the Automated Bone Scan Index as an Imaging Biomarker to Standardize Quantitative Changes in Bone Scans of Patients with Metastatic Prostate Cancer

    Science.gov (United States)

    Anand, Aseem; Morris, Michael J.; Kaboteh, Reza; Båth, Lena; Sadik, May; Gjertsson, Peter; Lomsky, Milan; Edenbrandt, Lars; Minarik, David; Bjartell, Anders

    2016-01-01

    A reproducible and quantitative imaging biomarker is needed to standardize the evaluation of changes in bone scans of prostate cancer patients with skeletal metastasis. We performed a series of analytic validation studies to evaluate the performance of the automated bone scan index (BSI) as an imaging biomarker in patients with metastatic prostate cancer. Methods: Three separate analytic studies were performed to evaluate the accuracy, precision, and reproducibility of the automated BSI. Simulation study: bone scan simulations with predefined tumor burdens were created to assess accuracy and precision. Fifty bone scans were simulated with a tumor burden ranging from low to high disease confluence (0.10–13.0 BSI). A second group of 50 scans was divided into 5 subgroups, each containing 10 simulated bone scans, corresponding to BSI values of 0.5, 1.0, 3.0, 5.0, and 10.0. Repeat bone scan study: to assess the reproducibility in a routine clinical setting, 2 repeat bone scans were obtained from metastatic prostate cancer patients after a single 600-MBq 99mTc-methylene diphosphonate injection. Follow-up bone scan study: 2 follow-up bone scans of metastatic prostate cancer patients were analyzed to determine the interobserver variability between the automated BSIs and the visual interpretations in assessing changes. The automated BSI was generated using the upgraded EXINI boneBSI software (version 2). The results were evaluated using linear regression, Pearson correlation, Cohen κ measurement, coefficient of variation, and SD. Results: Linearity of the automated BSI interpretations in the range of 0.10–13.0 was confirmed, and Pearson correlation was observed at 0.995 (n = 50; 95% confidence interval, 0.99–0.99; P cancer. PMID:26315832

  18. Semi-automated measurements of heart-to-mediastinum ratio on 123I-MIBG myocardial scintigrams by using image fusion method with chest X-ray images

    Science.gov (United States)

    Kawai, Ryosuke; Hara, Takeshi; Katafuchi, Tetsuro; Ishihara, Tadahiko; Zhou, Xiangrong; Muramatsu, Chisako; Abe, Yoshiteru; Fujita, Hiroshi

    2015-03-01

    MIBG (iodine-123-meta-iodobenzylguanidine) is a radioactive medicine used to help diagnose not only myocardial diseases but also Parkinson's disease (PD) and dementia with Lewy bodies (DLB). The difficulty of segmentation around the myocardium often reduces the consistency of measurement results. One of the most common measurement methods is the ratio of the uptake values of the heart to the mediastinum (H/M). This ratio is stable, independent of the operator, when the uptake value in the myocardium region is clearly higher than that in the background; however, it becomes an unreliable index when the myocardium region is unclear because of low uptake values. This study aims to develop a new measurement method using image fusion of three modalities (MIBG scintigrams, 201-Tl scintigrams, and chest radiograms) to increase the reliability of the H/M measurement results. Our automated method consists of the following steps: (1) construct a left ventricular (LV) map from a 201-Tl myocardium image database, (2) determine the heart region in chest radiograms, (3) determine the mediastinum region in chest radiograms, (4) perform image fusion of chest radiograms and MIBG scintigrams, and (5) perform H/M measurements on MIBG scintigrams using the locations of the heart and mediastinum determined on the chest radiograms. We collected 165 cases with 201-Tl scintigrams and chest radiograms to construct the LV map. Another 65 cases with MIBG scintigrams and chest radiograms were collected for the measurements. Four radiological technologists (RTs) manually measured the H/M in the MIBG images. We compared the four RTs' results with our computer outputs using Pearson's correlation, the Bland-Altman method, and the equivalency test method. As a result, the correlations of the H/M between the four RTs and the computer were 0.85 to 0.88. We confirmed systematic errors between the four RTs and the computer, as well as among the four RTs. The variation range of the H
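
    Once the heart and mediastinum regions are located (the step the paper automates via image fusion), the H/M ratio itself reduces to a ratio of mean ROI counts. A minimal sketch with made-up count values:

```python
import numpy as np

# H/M = mean counts in the heart ROI / mean counts in the mediastinum ROI.
def h_to_m(image, heart_mask, mediastinum_mask):
    return image[heart_mask].mean() / image[mediastinum_mask].mean()

img = np.zeros((8, 8))
heart = np.zeros((8, 8), dtype=bool); heart[1:4, 1:4] = True
medi = np.zeros((8, 8), dtype=bool);  medi[5:7, 5:7] = True
img[heart] = 300.0   # uptake counts in the myocardium (made-up values)
img[medi] = 120.0    # background counts in the mediastinum
print(h_to_m(img, heart, medi))  # 2.5
```

    The measurement's operator dependence comes entirely from where the two masks are drawn, which is why transferring the ROI locations from chest radiograms stabilizes the result when myocardial uptake is low.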

  19. Image quality in non-gated versus gated reconstruction of tongue motion using magnetic resonance imaging: a comparison using automated image processing

    International Nuclear Information System (INIS)

    The use of gated or ECG-triggered MR is a well-established technique, and developments in coil technology have enabled this approach to be applied to areas other than the heart. However, the image quality of gated (ECG or cine) versus non-gated or real-time imaging has not been extensively evaluated in the mouth. We evaluate the two image sequences by developing an automatic image processing technique which compares how well the image represents known anatomy. Four subjects practised experimental poly-syllabic sentences prior to MR scanning. Using a 1.5 T MR unit, we acquired comparable gated (using an artificial trigger) and non-gated sagittal images during speech. We then used an image processing algorithm to model the image grey levels along lines that cross the airway. Each line was fitted with an eight-parameter non-linear equation modeling proton densities, edges, and dimensions. Gated and non-gated images show similar spatial resolution, with non-gated images being slightly sharper (10% better resolution, less than 1 pixel). However, the gated sequences generated images of substantially lower inherent noise and substantially better discrimination between air and tissue. Additionally, the gated sequences demonstrate very much greater temporal resolution. Overall, image quality is better with gated imaging techniques, especially given their superior temporal resolution. Gated techniques are limited by the repeatability of the motions involved, and we have shown that speech to a metronome can be sufficiently repeatable to allow high-quality gated MR images. We suggest that gated sequences may be useful for evaluating other types of repetitive movement involving the joints and limb motions. (orig.)

  20. A method to quantify movement activity of groups of animals using automated image analysis

    Science.gov (United States)

    Xu, Jianyu; Yu, Haizhen; Liu, Ying

    2009-07-01

    Most physiological and environmental changes are capable of inducing variations in animal behavior. Behavioral parameters can be measured continuously in situ by a non-invasive, non-contact approach, and have the potential to be used in actual production settings to predict stress conditions. Most vertebrates tend to live in groups, herds, flocks, shoals, bands or packs of conspecific individuals. Under culture conditions, livestock or fish live in groups and interact with each other, so the aggregate behavior of the group should be studied rather than that of individuals. This paper presents a method to calculate the movement speed of a group of animals in an enclosure or a tank, denoted by body length speed, which corresponds to group activity, using computer vision techniques. Frame sequences captured at a specified time interval were subtracted in pairs after image segmentation and identification. By labeling the components caused by object movement in each difference frame, the projected area caused by the movement of every object in the capture interval was calculated; this projected area was divided by the projected area of the object in the later frame to obtain the body length moving distance of each object, and in turn the relative body length speed. The average speed of all objects responds well to the activity of the group. The group activity of a tilapia (Oreochromis niloticus) school exposed to a high (2.65 mg/L) level of unionized ammonia (UIA) was quantified with this method. The high UIA condition elicited a marked increase in school activity in the first hour (P<0.05), exhibiting an avoidance reaction (trying to flee from the high UIA condition), which then decreased gradually.
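
    The body-length-speed computation described above can be sketched with frame differencing on binary segmentation masks. Averaging the entering and leaving areas below is a simplification for the single-object case, not the authors' exact formula:

```python
import numpy as np

# Subtract segmented frames taken at a fixed interval; the changed (moved)
# area relative to the object's area in the later frame is a body-length
# displacement proxy, which divided by the interval gives body lengths/s.
def body_length_speed(mask_prev, mask_curr, interval_s):
    moved = np.logical_xor(mask_prev, mask_curr).sum() / 2.0  # area swept by the object
    object_area = mask_curr.sum()
    return (moved / object_area) / interval_s

prev = np.zeros((10, 10), dtype=bool); prev[2:4, 2:6] = True   # object at t0
curr = np.zeros((10, 10), dtype=bool); curr[2:4, 4:8] = True   # shifted 2 px right
print(body_length_speed(prev, curr, interval_s=1.0))  # 0.5
```

    A 4-pixel-long object displaced by 2 pixels yields 0.5 body lengths per second; averaging this over all labeled objects gives the group activity index.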

  1. Quantification of lung tumor rotation with automated landmark extraction using orthogonal cine MRI images

    International Nuclear Information System (INIS)

    The quantification of tumor motion in sites affected by respiratory motion is of primary importance to improve treatment accuracy. To account for motion, different studies analyzed the translational component only, without focusing on the rotational component, which was quantified in a few studies on the prostate with implanted markers. The aim of our study was to propose a tool able to quantify lung tumor rotation without the use of internal markers, thus providing accurate motion detection close to critical structures such as the heart or liver. Specifically, we propose the use of an automatic feature extraction method in combination with the acquisition of fast orthogonal cine MRI images of nine lung patients. As a preliminary test, we evaluated the performance of the feature extraction method by applying it on regions of interest around (i) the diaphragm and (ii) the tumor and comparing the estimated motion with that obtained by (i) the extraction of the diaphragm profile and (ii) the segmentation of the tumor, respectively. The results confirmed the capability of the proposed method in quantifying tumor motion. Then, a point-based rigid registration was applied to the extracted tumor features between all frames to account for rotation. The median lung rotation values were −0.6 ± 2.3° and −1.5 ± 2.7° in the sagittal and coronal planes respectively, confirming the need to account for tumor rotation along with translation to improve radiotherapy treatment. (paper)

  2. Automated hotspot analysis with aerial image CD metrology for advanced logic devices

    Science.gov (United States)

    Buttgereit, Ute; Trautzsch, Thomas; Kim, Min-ho; Seo, Jung-Uk; Yoon, Young-Keun; Han, Hak-Seung; Chung, Dong Hoon; Jeon, Chan-Uk; Meyers, Gary

    2014-09-01

    Continuously shrinking designs enabled by the further extension of 193 nm technology lead to a much higher probability of hotspots, especially in the manufacturing of advanced logic devices. The CD of these potential hotspots needs to be precisely controlled and measured on the mask. On top of that, feature complexity increases due to the high OPC load in logic mask designs, which is an additional challenge for CD metrology. Therefore the hotspot measurements were performed on WLCD from ZEISS, which reduces complexity by measuring the CD in the aerial image and qualifying the printing-relevant CD. This is especially advantageous for complex 2D feature measurements. Additionally, data preparation for CD measurement becomes more critical due to the larger number of CD measurements and the increasing feature diversity. For data preparation this means identifying the hotspots and marking them automatically with the correct marker required to make each feature-specific CD measurement successful. Currently available methods can address generic patterns but cannot deal with the pattern diversity of the hotspots. This paper explores a method to overcome those limitations and to dramatically improve the time-to-result of the marking process. For the marking process, the Synopsys WLCD Output Module was utilized, which is an interface between the CATS mask data prep software and the WLCD metrology tool. It translates the CATS markings directly into an executable WLCD measurement job including CD analysis. The paper describes the method and flow used for the hotspot measurement and presents the results achieved on hotspot measurements using this method.

  3. Quantification of lung tumor rotation with automated landmark extraction using orthogonal cine MRI images

    Science.gov (United States)

    Paganelli, Chiara; Lee, Danny; Greer, Peter B.; Baroni, Guido; Riboldi, Marco; Keall, Paul

    2015-09-01

    The quantification of tumor motion in sites affected by respiratory motion is of primary importance to improve treatment accuracy. To account for motion, different studies analyzed the translational component only, without focusing on the rotational component, which was quantified in a few studies on the prostate with implanted markers. The aim of our study was to propose a tool able to quantify lung tumor rotation without the use of internal markers, thus providing accurate motion detection close to critical structures such as the heart or liver. Specifically, we propose the use of an automatic feature extraction method in combination with the acquisition of fast orthogonal cine MRI images of nine lung patients. As a preliminary test, we evaluated the performance of the feature extraction method by applying it on regions of interest around (i) the diaphragm and (ii) the tumor and comparing the estimated motion with that obtained by (i) the extraction of the diaphragm profile and (ii) the segmentation of the tumor, respectively. The results confirmed the capability of the proposed method in quantifying tumor motion. Then, a point-based rigid registration was applied to the extracted tumor features between all frames to account for rotation. The median lung rotation values were −0.6 ± 2.3° and −1.5 ± 2.7° in the sagittal and coronal planes respectively, confirming the need to account for tumor rotation along with translation to improve radiotherapy treatment.
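
    The point-based rigid registration step can be illustrated with a standard SVD (Kabsch) solve for the in-plane rotation between matched landmark sets. The abstract does not specify the authors' solver, so this is an assumed, generic implementation:

```python
import numpy as np

# Recover the 2D rotation of a rigid transform taking landmark set p to q:
# center both sets, take the SVD of the cross-covariance, and read the
# angle off the resulting rotation matrix.
def rigid_rotation_deg(p, q):
    """p, q: (N, 2) matched landmarks; returns rotation (degrees) taking p to q."""
    p0 = p - p.mean(axis=0)
    q0 = q - q.mean(axis=0)
    u, _, vt = np.linalg.svd(p0.T @ q0)
    r = (u @ vt).T
    if np.linalg.det(r) < 0:                 # guard against reflections
        u[:, -1] *= -1
        r = (u @ vt).T
    return np.degrees(np.arctan2(r[1, 0], r[0, 0]))

theta = np.radians(5.0)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
p = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0], [3.0, 2.0]])
q = p @ rot.T + np.array([1.0, -0.5])        # rotate by 5 degrees, then translate
print(round(rigid_rotation_deg(p, q), 3))    # 5.0
```

    Applying such a solve per frame to the automatically extracted tumor features yields the rotation time series from which the median sagittal and coronal values above are computed.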

  4. Use of an automated digital images system for detecting plant status changes in response to climate change manipulations

    Science.gov (United States)

    Cesaraccio, Carla; Piga, Alessandra; Ventura, Andrea; Arca, Angelo; Duce, Pierpaolo

    2014-05-01

    The importance of phenological research for understanding the consequences of global environmental change on vegetation is highlighted in the most recent IPCC reports. Collecting time series of phenological events appears to be of crucial importance to better understand how vegetation systems respond to climatic regime fluctuations and, consequently, to develop effective management and adaptation strategies. However, traditional monitoring of phenology is labor intensive and costly, and is affected by a certain degree of subjective inaccuracy. Other methods used to quantify the seasonal patterns of vegetation development are based on satellite remote sensing (land surface phenology), but they operate at coarse spatial and temporal resolution. To overcome the limitations of these methodologies, different approaches for vegetation monitoring based on "near-surface" remote sensing have been proposed in recent studies. In particular, the use of digital cameras has become more common for phenological monitoring. Digital images provide spectral information in the red, green, and blue (RGB) wavelengths. Inflection points in the seasonal variation of the intensity of each color channel can be used to identify phenological events. Canopy green-up phenology can be quantified from greenness indices, and species-specific dates of leaf emergence can be estimated by RGB image analysis. In this research, an Automated Phenological Observation System (APOS), based on digital image sensors, was used to monitor the phenological behavior of shrubland species at a Mediterranean site. The system was developed under the INCREASE (an Integrated Network on Climate Change Research) EU-funded research infrastructure project, which is based upon large-scale field experiments with non-intrusive climatic manipulations. Monitoring of phenological behavior has been conducted continuously since October 2012.
The system was set to acquire one panorama per day at noon which included three experimental plots for
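
The greenness indices mentioned above are typically simple ratios of the RGB channel intensities. As an illustration (not code from the APOS system itself), the widely used green chromatic coordinate (GCC) over a region of interest can be computed as follows:

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """Mean green chromatic coordinate, GCC = G / (R + G + B),
    over an RGB image or region of interest."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b
    total[total == 0] = np.nan          # ignore fully dark pixels
    return float(np.nanmean(g / total))

# toy 2x2 image of pure green pixels -> GCC = 1.0
img = np.zeros((2, 2, 3))
img[..., 1] = 255
print(green_chromatic_coordinate(img))  # 1.0
```

Tracking GCC through the season and locating inflection points in the resulting curve is the standard way such camera time series are turned into phenological dates.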

  5. Automated Methods Of Corrosion Measurements

    DEFF Research Database (Denmark)

    Bech-Nielsen, Gregers; Andersen, Jens Enevold Thaulov; Reeve, John Ch;

    1997-01-01

    The chapter describes the following automated measurements: Corrosion Measurements by Titration, Imaging Corrosion by Scanning Probe Microscopy, Critical Pitting Temperature and Application of the Electrochemical Hydrogen Permeation Cell.

  6. A Novel Morphometry-Based Protocol of Automated Video-Image Analysis for Species Recognition and Activity Rhythms Monitoring in Deep-Sea Fauna

    Directory of Open Access Journals (Sweden)

    Paolo Menesatti

    2009-10-01

    Full Text Available The understanding of ecosystem dynamics in deep-sea areas is to date limited by technical constraints on sampling repetition. We have elaborated a morphometry-based protocol for automated video-image analysis where animal movement tracking (by frame subtraction) is accompanied by species identification from animals' outlines by Fourier Descriptors and Standard K-Nearest Neighbours methods. One week of footage from a permanent video station located at 1,100 m depth in Sagami Bay (Central Japan) was analysed. Out of 150,000 frames (1 per 4 s), a subset of 10,000 was analysed by a trained operator to increase the efficiency of the automated procedure. Error estimation of the automated and trained-operator procedures was computed as a measure of protocol performance. Three displacing species were identified as the most recurrent: Zoarcid fishes (eelpouts), red crabs (Paralomis multispina), and snails (Buccinum soyomaruae). Species identification with KNN thresholding produced better results in automated motion detection. Results were discussed in light of the technological bottleneck that to date constrains the exploration of the deep sea.
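
The outline-based recognition step combines two standard ingredients: Fourier descriptors of a closed contour and a K-nearest-neighbours vote. A minimal sketch of the idea on toy shapes (illustrative, not the authors' implementation):

```python
import numpy as np

def fourier_descriptors(contour, n=8):
    """Translation-, scale- and rotation-invariant Fourier descriptors
    of a closed 2-D outline given as an (N, 2) array of points."""
    z = contour[:, 0] + 1j * contour[:, 1]      # complex contour signature
    mags = np.abs(np.fft.fft(z))[1:n + 1]       # drop DC term -> translation invariance
    return mags / mags[0]                       # normalise -> scale invariance

def knn_classify(query_desc, train_descs, train_labels, k=3):
    """Plain k-nearest-neighbour vote in descriptor space."""
    dists = [np.linalg.norm(query_desc - d) for d in train_descs]
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

def circle(radius, n=64):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.stack([radius * np.cos(t), radius * np.sin(t)], axis=1)

def square(side, n_per_side=16):
    s = np.linspace(0, side, n_per_side, endpoint=False)
    z, f = np.zeros_like(s), np.full_like(s, float(side))
    return np.concatenate([np.stack([s, z], 1), np.stack([f, s], 1),
                           np.stack([side - s, f], 1), np.stack([z, side - s], 1)])

train = [circle(r) for r in (1, 2, 3)] + [square(r) for r in (1, 2, 3)]
descs = [fourier_descriptors(c) for c in train]
labels = ["circle"] * 3 + ["square"] * 3
print(knn_classify(fourier_descriptors(circle(1.7)), descs, labels))  # circle
```

Taking magnitudes of the Fourier coefficients makes the descriptor insensitive to rotation and to the starting point of the contour, which is why outline shape alone suffices for species discrimination.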

  7. Evaluation of software tools for automated identification of neuroanatomical structures in quantitative β-amyloid PET imaging to diagnose Alzheimer's disease

    Energy Technology Data Exchange (ETDEWEB)

    Tuszynski, Tobias; Luthardt, Julia; Butzke, Daniel; Tiepolt, Solveig; Seese, Anita; Barthel, Henryk [Leipzig University Medical Centre, Department of Nuclear Medicine, Leipzig (Germany); Rullmann, Michael; Hesse, Swen; Sabri, Osama [Leipzig University Medical Centre, Department of Nuclear Medicine, Leipzig (Germany); Leipzig University Medical Centre, Integrated Treatment and Research Centre (IFB) Adiposity Diseases, Leipzig (Germany); Gertz, Hermann-Josef [Leipzig University Medical Centre, Department of Psychiatry, Leipzig (Germany); Lobsien, Donald [Leipzig University Medical Centre, Department of Neuroradiology, Leipzig (Germany)

    2016-06-15

    For regional quantification of nuclear brain imaging data, defining volumes of interest (VOIs) by hand is still the gold standard. As this procedure is time-consuming and operator-dependent, a variety of software tools for automated identification of neuroanatomical structures have been developed. As the quality and performance of these tools in analyzing amyloid PET data have so far been poorly investigated, we compared in this project four algorithms for automated VOI definition (HERMES Brass, two PMOD approaches, and FreeSurfer) against the conventional method. We systematically analyzed florbetaben brain PET and MRI data of ten patients with probable Alzheimer's dementia (AD) and ten age-matched healthy controls (HCs) collected in a previous clinical study. VOIs were manually defined on the data as well as through the four automated workflows. Standardized uptake value ratios (SUVRs) with the cerebellar cortex as a reference region were obtained for each VOI. SUVR comparisons between ADs and HCs were carried out using Mann-Whitney-U tests, and effect sizes (Cohen's d) were calculated. SUVRs of automatically generated VOIs were correlated with SUVRs of conventionally derived VOIs (Pearson's tests). The composite neocortex SUVRs obtained by manually defined VOIs were significantly higher for ADs vs. HCs (p=0.010, d=1.53). This was also the case for the four tested automated approaches, which achieved effect sizes of d=1.38 to d=1.62. SUVRs of automatically generated VOIs correlated significantly with those of the hand-drawn VOIs in a number of brain regions, with regional differences in the degree of these correlations. Best overall correlation was observed in the lateral temporal VOI for all tested software tools (r=0.82 to r=0.95, p<0.001). Automated VOI definition by the software tools tested has great potential to replace the current standard procedure of manually defining VOIs in β-amyloid PET data analysis. (orig.)
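
The quantities compared in the study reduce to simple computations once the VOIs exist. A sketch of the SUVR and effect-size calculations, assuming a PET volume with boolean VOI masks (illustrative only, not any of the evaluated tools):

```python
import numpy as np

def suvr(pet_volume, target_mask, ref_mask):
    """Standardized uptake value ratio: mean uptake in a target VOI
    divided by mean uptake in the reference region (cerebellar cortex)."""
    pet = np.asarray(pet_volume, float)
    return pet[target_mask].mean() / pet[ref_mask].mean()

def cohens_d(group_a, group_b):
    """Effect size between two groups, pooled-SD formulation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled
```

With per-subject SUVRs in hand, the group comparison (Mann-Whitney-U) and the Pearson correlations between automated and hand-drawn VOIs follow the same masked-array pattern.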

  8. Optimization of automated radiosynthesis of [{sup 18}F]AV-45: a new PET imaging agent for Alzheimer's disease

    Energy Technology Data Exchange (ETDEWEB)

    Liu Yajing; Zhu Lin [Key Laboratory of Radiopharmaceuticals, Beijing Normal University, Ministry of Education, Beijing, 100875 (China); Department of Radiology, University of Pennsylvania, Philadelphia, PA 19014 (United States); Ploessl, Karl [Department of Radiology, University of Pennsylvania, Philadelphia, PA 19014 (United States); Choi, Seok Rye [Avid Radiopharmaceuticals Inc., Philadelphia, PA 19014 (United States); Qiao Hongwen; Sun Xiaotao; Li Song [Key Laboratory of Radiopharmaceuticals, Beijing Normal University, Ministry of Education, Beijing, 100875 (China); Zha Zhihao [Key Laboratory of Radiopharmaceuticals, Beijing Normal University, Ministry of Education, Beijing, 100875 (China); Department of Radiology, University of Pennsylvania, Philadelphia, PA 19014 (United States); Kung, Hank F., E-mail: kunghf@sunmac.spect.upenn.ed [Key Laboratory of Radiopharmaceuticals, Beijing Normal University, Ministry of Education, Beijing, 100875 (China); Department of Radiology, University of Pennsylvania, Philadelphia, PA 19014 (United States)

    2010-11-15

    Introduction: Accumulation of {beta}-amyloid (A{beta}) aggregates in the brain is linked to the pathogenesis of Alzheimer's disease (AD). Imaging probes targeting these A{beta} aggregates in the brain may provide a useful tool to facilitate the diagnosis of AD. Recently, [{sup 18}F]AV-45 ([{sup 18}F]5) demonstrated high binding to the A{beta} aggregates in AD patients. To improve the availability of this agent for widespread clinical application, a rapid, fully automated, high-yield, cGMP-compliant radiosynthesis was necessary for production of this probe. We report herein optimal [{sup 18}F]fluorination and deprotection conditions and a fully automated radiosynthesis of [{sup 18}F]AV-45 ([{sup 18}F]5) on a radiosynthesis module (BNU F-A2). Methods: The preparation of [{sup 18}F]AV-45 ([{sup 18}F]5) was evaluated under different conditions, specifically by employing different precursors (-OTs and -Br as the leaving group), reagents (K222/K{sub 2}CO{sub 3} vs. tributylammonium bicarbonate) and deprotection in different acids. With optimized conditions from these experiments, the automated synthesis of [{sup 18}F]AV-45 ([{sup 18}F]5) was accomplished by using a computer-programmed, standard operating procedure, and the product was purified on an on-line solid-phase cartridge (Oasis HLB). Results: The optimized reaction conditions were successfully implemented on an automated nucleophilic fluorination module. The radiochemical purity of [{sup 18}F]AV-45 ([{sup 18}F]5) was >95%, and the automated synthesis yield was 33.6{+-}5.2% (not decay-corrected, n=4) and 50.1{+-}7.9% (decay-corrected) in 50 min at a quantity level of 10-100 mCi (370-3700 MBq). Autoradiography studies of [{sup 18}F]AV-45 ([{sup 18}F]5) using postmortem AD brain and Tg mouse brain sections in the presence of different concentrations of 'cold' AV-136 showed a relatively low inhibition of in vitro binding of [{sup 18}F]AV-45 ([{sup 18}F]5) to the A{beta} plaques (IC50=1-4 {mu}M, a concentration several
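
The relationship between the reported non-decay-corrected and decay-corrected yields is governed by the fluorine-18 half-life (about 109.8 min). A sketch of the correction, assuming the full 50-min synthesis time as the elapsed time (the actual elapsed time per batch may differ):

```python
F18_HALF_LIFE_MIN = 109.77   # fluorine-18 half-life in minutes

def decay_corrected_yield(nondecay_yield_pct, elapsed_min,
                          half_life_min=F18_HALF_LIFE_MIN):
    """Correct a radiochemical yield back to the start of synthesis
    by undoing the exponential decay over the elapsed time."""
    return nondecay_yield_pct * 2 ** (elapsed_min / half_life_min)

print(round(decay_corrected_yield(33.6, 50), 1))  # 46.1
```

This gives roughly 46%, in the same range as the reported decay-corrected 50.1%; the residual gap reflects batch-to-batch differences in the exact timing.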

  9. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into the details, I present a real-time, software- and hardware-oriented home automation research project, designed and implemented to automate a house's electricity and to provide a security system that detects the presence of unexpected behavior.

  10. Strong Prognostic Value of Tumor-infiltrating Neutrophils and Lymphocytes Assessed by Automated Digital Image Analysis in Early Stage Cervical Cancer

    DEFF Research Database (Denmark)

    Carus, Andreas; Donskov, Frede; Switten Nielsen, Patricia;

    2014-01-01

    INTRODUCTION Manual observer-assisted stereological (OAS) assessments of tumor-infiltrating neutrophils and lymphocytes are prognostic, accurate, but cumbersome. We assessed the applicability of automated digital image analysis (DIA). METHODS Visiomorph software was used to obtain DIA densities...... to lymphocyte (TA–NL) index accurately predicted the risk of relapse, ranging from 8% to 52% (P = 0.001). CONCLUSIONS DIA is a potential assessment technique. The TA–NL index obtained by DIA is a strong prognostic variable with possible routine clinical application....

  11. Analysis of an automated background correction method for cardiovascular MR phase contrast imaging in children and young adults

    Energy Technology Data Exchange (ETDEWEB)

    Rigsby, Cynthia K.; Hilpipre, Nicholas; Boylan, Emma E.; Popescu, Andrada R.; Deng, Jie [Ann and Robert H. Lurie Children' s Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States); McNeal, Gary R. [Siemens Medical Solutions USA Inc., Customer Solutions Group, Cardiovascular MR R and D, Chicago, IL (United States); Zhang, Gang [Ann and Robert H. Lurie Children' s Hospital of Chicago Research Center, Biostatistics Research Core, Chicago, IL (United States); Choi, Grace [Ann and Robert H. Lurie Children' s Hospital of Chicago, Department of Pediatrics, Chicago, IL (United States); Greiser, Andreas [Siemens AG Healthcare Sector, Erlangen (Germany)

    2014-03-15

    Phase contrast magnetic resonance imaging (MRI) is a powerful tool for evaluating vessel blood flow. Inherent errors in acquisition, such as phase offset, eddy currents and gradient field effects, can cause significant inaccuracies in flow parameters. These errors can be rectified with the use of background correction software. To evaluate the performance of an automated phase contrast MRI background phase correction method in children and young adults undergoing cardiac MR imaging, we conducted a retrospective review of patients undergoing routine clinical cardiac MRI including phase contrast MRI for flow quantification in the aorta (Ao) and main pulmonary artery (MPA). When phase contrast MRI of the right and left pulmonary arteries was also performed, these data were included. We excluded patients with known shunts and metallic implants causing visible MRI artifact and those with more than mild to moderate aortic or pulmonary stenosis. Phase contrast MRI of the Ao, mid MPA, proximal right pulmonary artery (RPA) and left pulmonary artery (LPA) using 2-D gradient echo Fast Low Angle SHot (FLASH) imaging was acquired during normal respiration with retrospective cardiac gating. Standard phase image reconstruction and the automatic spatially dependent background-phase-corrected reconstruction were performed on each phase contrast MRI dataset. Non-background-corrected and background-phase-corrected net flow, forward flow, regurgitant volume, regurgitant fraction, and vessel cardiac output were recorded for each vessel. We compared standard non-background-corrected and background-phase-corrected mean flow values for the Ao and MPA. The ratio of pulmonary to systemic blood flow (Qp:Qs) was calculated for the standard non-background and background-phase-corrected data, and these values were compared to each other and assessed for proximity to 1. In a subset of patients who also underwent phase contrast MRI of the MPA, RPA, and LPA a comparison was made between standard non
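
The Qp:Qs criterion used above reduces to a division of the two net flows, with values near unity expected in the absence of a shunt. A minimal sketch (the 0.2 tolerance is an illustrative threshold, not one taken from the paper):

```python
def qp_qs(mpa_net_flow, ao_net_flow):
    """Pulmonary-to-systemic flow ratio from phase-contrast MRI net
    flows (same units for both, e.g. ml/beat or l/min)."""
    return mpa_net_flow / ao_net_flow

def departs_from_unity(ratio, tolerance=0.2):
    """Flag a Qp:Qs deviating from 1 by more than `tolerance`."""
    return abs(ratio - 1.0) > tolerance
```

Comparing how far the corrected and uncorrected ratios sit from 1 is what makes Qp:Qs a convenient internal consistency check for the background correction.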

  12. Automated detection of hepatotoxic compounds in human hepatocytes using HepaRG cells and image-based analysis of mitochondrial dysfunction with JC-1 dye

    International Nuclear Information System (INIS)

    In this study, our goal was to develop an efficient in situ test adapted to screen hepatotoxicity of various chemicals, a process which remains challenging during the early phase of drug development. The test was based on functional human hepatocytes using the HepaRG cell line, and automation of quantitative fluorescence microscopy coupled with automated imaging analysis. Differentiated HepaRG cells express most of the specific liver functions at levels close to those found in primary human hepatocytes, including detoxifying enzymes and drug transporters. A triparametric analysis was first used to evaluate hepatocyte purity and differentiation status, mainly detoxication capacity of cells before toxicity testing. We demonstrated that culturing HepaRG cells at high density maintained high hepatocyte purity and differentiation level. Moreover, evidence was found that isolating hepatocytes from 2-week-old confluent cultures limited variations associated with an ageing process occurring over time in confluent cells. Then, we designed a toxicity test based on detection of early mitochondrial depolarisation associated with permeability transition (MPT) pore opening, using JC-1 as a metachromatic fluorescent dye. Maximal dye dimerization that would have been strongly hampered by efficient efflux due to the active, multidrug-resistant (MDR) pump was overcome by coupling JC-1 with the MDR inhibitor verapamil. Specificity of this test was demonstrated and its usefulness appeared directly dependent on conditions supporting hepatic cell competence. This new hepatotoxicity test adapted to automated, image-based detection should be useful to evaluate the early MPT event common to cell apoptosis and necrosis and simultaneously to detect involvement of the multidrug resistant pump with target drugs in a human hepatocyte environment. - Highlights: → We define conditions to preserve differentiation of selective pure HepaRG hepatocyte cultures. → In these conditions, CYPs
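
A common way to quantify JC-1 staining in images is the ratio of aggregate (red) to monomer (green) fluorescence per cell, which falls when mitochondria depolarise. A sketch of such a measurement (the ratio metric is an assumption for illustration, not the paper's exact pipeline):

```python
import numpy as np

def jc1_ratio(red_channel, green_channel, cell_mask):
    """Mean aggregate (red) to monomer (green) JC-1 fluorescence ratio
    within a segmented cell; a falling ratio signals mitochondrial
    depolarisation associated with MPT pore opening."""
    red = np.asarray(red_channel, float)[cell_mask].mean()
    green = np.asarray(green_channel, float)[cell_mask].mean()
    return red / green
```

In an automated screen, this per-cell ratio would be computed across a plate and thresholded against vehicle controls, with verapamil co-treatment (as in the study) ensuring the dye is not exported by the MDR pump before it can dimerize.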

  13. Fully automated quantification of regional cerebral blood flow with three-dimensional stereotaxic region of interest template. Validation using magnetic resonance imaging. Technical note

    Energy Technology Data Exchange (ETDEWEB)

    Takeuchi, Ryo; Katayama, Shigenori; Takeda, Naoya; Fujita, Katsuzo [Nishi-Kobe Medical Center (Japan); Yonekura, Yoshiharu [Fukui Medical Univ., Matsuoka (Japan); Konishi, Junji [Kyoto Univ. (Japan). Graduate School of Medicine

    2003-03-01

    The previously reported three-dimensional stereotaxic region of interest (ROI) template (3DSRT-t) for the analysis of anatomically standardized technetium-99m-L,L-ethyl cysteinate dimer ({sup 99m}Tc-ECD) single photon emission computed tomography (SPECT) images was modified for use in a fully automated regional cerebral blood flow (rCBF) quantification software, 3DSRT, incorporating an anatomical standardization engine transplanted from statistical parametric mapping 99 and ROIs for quantification based on 3DSRT-t. Three-dimensional T{sub 2}-weighted magnetic resonance images of 10 patients with localized infarcted areas were compared with the ROI contour of 3DSRT, and the positions of the central sulcus in the primary sensorimotor area were also estimated. All positions of the 20 lesions were in strict accordance with the ROI delineation of 3DSRT. The central sulcus was identified on at least one side of 210 paired ROIs and in the middle of 192 (91.4%) of these 210 paired ROIs among the 273 paired ROIs of the primary sensorimotor area. The central sulcus was recognized in the middle of more than 71.4% of the ROIs in which the central sulcus was identifiable in the respective 28 slices of the primary sensorimotor area. Fully automated accurate ROI delineation on anatomically standardized images is possible with 3DSRT, which enables objective quantification of rCBF and vascular reserve in only a few minutes using {sup 99m}Tc-ECD SPECT images obtained by the resting and vascular reserve (RVR) method. (author)

  14. Simplified Automated Image Analysis for Detection and Phenotyping of Mycobacterium tuberculosis on Porous Supports by Monitoring Growing Microcolonies

    OpenAIRE

    den Hertog, Alice L.; Dennis W Visser; Ingham, Colin J.; Frank H A G Fey; Paul R Klatser; Anthony, Richard M.

    2010-01-01

    BACKGROUND: Even with the advent of nucleic acid (NA) amplification technologies the culture of mycobacteria for diagnostic and other applications remains of critical importance. Notably microscopic observed drug susceptibility testing (MODS), as opposed to traditional culture on solid media or automated liquid culture, has shown potential to both speed up and increase the provision of mycobacterial culture in high burden settings. METHODS: Here we explore the growth of Mycobacterial tubercul...

  15. Espina: A Tool for the Automated Segmentation and Counting of Synapses in Large Stacks of Electron Microscopy Images

    Science.gov (United States)

    Morales, Juan; Alonso-Nanclares, Lidia; Rodríguez, José-Rodrigo; DeFelipe, Javier; Rodríguez, Ángel; Merchán-Pérez, Ángel

    2011-01-01

    The synapses in the cerebral cortex can be classified into two main types, Gray's type I and type II, which correspond to asymmetric (mostly glutamatergic excitatory) and symmetric (inhibitory GABAergic) synapses, respectively. Hence, the quantification and identification of their different types and the proportions in which they are found, is extraordinarily important in terms of brain function. The ideal approach to calculate the number of synapses per unit volume is to analyze 3D samples reconstructed from serial sections. However, obtaining serial sections by transmission electron microscopy is an extremely time consuming and technically demanding task. Using focused ion beam/scanning electron microscope microscopy, we recently showed that virtually all synapses can be accurately identified as asymmetric or symmetric synapses when they are visualized, reconstructed, and quantified from large 3D tissue samples obtained in an automated manner. Nevertheless, the analysis, segmentation, and quantification of synapses is still a labor intensive procedure. Thus, novel solutions are currently necessary to deal with the large volume of data that is being generated by automated 3D electron microscopy. Accordingly, we have developed ESPINA, a software tool that performs the automated segmentation and counting of synapses in a reconstructed 3D volume of the cerebral cortex, and that greatly facilitates and accelerates these processes. PMID:21633491

  16. ESPINA: a tool for the automated segmentation and counting of synapses in large stacks of electron microscopy images

    Directory of Open Access Journals (Sweden)

    Juan eMorales

    2011-03-01

    Full Text Available The synapses in the cerebral cortex can be classified into two main types, Gray’s type I and type II, which correspond to asymmetric (mostly glutamatergic excitatory) and symmetric (inhibitory GABAergic) synapses, respectively. Hence, the quantification and identification of their different types and the proportions in which they are found, is extraordinarily important in terms of brain function. The ideal approach to calculate the number of synapses per unit volume is to analyze three-dimensional samples reconstructed from serial sections. However, obtaining serial sections by transmission electron microscopy is an extremely time consuming and technically demanding task. Using FIB/SEM microscopy, we recently showed that virtually all synapses can be accurately identified as asymmetric or symmetric synapses when they are visualized, reconstructed and quantified from large three-dimensional tissue samples obtained in an automated manner. Nevertheless, the analysis, segmentation and quantification of synapses is still a labor intensive procedure. Thus, novel solutions are currently necessary to deal with the large volume of data that is being generated by automated 3D electron microscopy. Accordingly, we have developed ESPINA, a software tool that performs the automated segmentation and counting of synapses in a reconstructed 3D volume of the cerebral cortex, and that greatly facilitates and accelerates these processes.

  17. Automated vector selection of SIVQ and parallel computing integration MATLAB TM : Innovations supporting large-scale and high-throughput image analysis studies

    Directory of Open Access Journals (Sweden)

    Jerome Cheng

    2011-01-01

    Full Text Available Introduction: Spatially invariant vector quantization (SIVQ) is a texture and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. Methods: An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Results: Use of the above-mentioned automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an
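
The AUC figure of merit used to rank candidate vectors can be computed directly from match scores on the positive and negative ground-truth regions via the Mann-Whitney statistic, without building an explicit ROC curve. A sketch:

```python
import numpy as np

def auc_from_scores(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen positive example scores higher
    than a randomly chosen negative one (ties count half)."""
    pos = np.asarray(pos_scores, float)
    neg = np.asarray(neg_scores, float)
    wins = sum((p > neg).sum() + 0.5 * (p == neg).sum() for p in pos)
    return wins / (len(pos) * len(neg))

# a vector whose scores perfectly separate the regions has AUC = 1.0
print(auc_from_scores([0.9, 0.8, 0.7], [0.4, 0.3]))  # 1.0
```

Ranking every candidate ring vector by this AUC and keeping the maximum is exactly the kind of embarrassingly parallel per-vector work that motivates the high-throughput deployment discussed in the abstract.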

  18. Library Automation

    OpenAIRE

    Dhakne, B. N.; Giri, V. V; Waghmode, S. S.

    2010-01-01

    New technologies provide libraries with several new materials, media, and modes of storing and communicating information. Library automation reduces the drudgery of repeated manual effort in library routines and benefits collection, storage, administration, processing, preservation, communication, etc.

  19. SU-E-J-92: Validating Dose Uncertainty Estimates Produced by AUTODIRECT, An Automated Program to Evaluate Deformable Image Registration Accuracy

    International Nuclear Information System (INIS)

    Purpose: Deformable image registration (DIR) is a powerful tool with the potential to deformably map dose from one computed-tomography (CT) image to another. Errors in the DIR, however, will produce errors in the transferred dose distribution. We have proposed a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), which predicts voxel-specific dose mapping errors on a patient-by-patient basis. This work validates the effectiveness of AUTODIRECT to predict dose mapping errors with virtual and physical phantom datasets. Methods: AUTODIRECT requires 4 inputs: moving and fixed CT images and two noise scans of a water phantom (for noise characterization). Then, AUTODIRECT uses algorithms to generate test deformations and applies them to the moving and fixed images (along with processing) to digitally create sets of test images, with known ground-truth deformations that are similar to the actual one. The clinical DIR algorithm is then applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student’s t distribution. This work compares these uncertainty estimates to the actual errors made by the Velocity Deformable Multi Pass algorithm on 11 virtual and 1 physical phantom datasets. Results: For 11 of the 12 tests, the predicted dose error distributions from AUTODIRECT are well matched to the actual error distributions within 1–6% for 10 virtual phantoms, and 9% for the physical phantom. For one of the cases though, the predictions underestimated the errors in the tail of the distribution. Conclusion: Overall, the AUTODIRECT algorithm performed well on the 12 phantom cases for Velocity and was shown to generate accurate estimates of dose warping uncertainty. AUTODIRECT is able to automatically generate patient-, organ- , and voxel-specific DIR uncertainty estimates. This ability would be useful for patient-specific DIR quality assurance
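
With only a handful of test deformations, a per-voxel confidence bound based on the Student's t distribution takes the following shape (the two-sided 95% critical values are standard; the function itself is an illustrative sketch, not AUTODIRECT's code):

```python
import math
from statistics import stdev

# two-sided 95% Student's t critical values, keyed by degrees of freedom
T95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776}

def dose_error_halfwidth(test_errors):
    """95% confidence half-width of the mean dose-mapping error at one
    voxel, estimated from a small set of synthetic test deformations
    (df = n - 1)."""
    n = len(test_errors)
    return T95[n - 1] * stdev(test_errors) / math.sqrt(n)
```

With the current four test deformations (df = 3), the wide t critical value of 3.182 reflects how little information four samples carry, which is consistent with the tool's conservative per-voxel bounds.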

  20. A semi-automated method for non-invasive internal organ weight estimation by post-mortem magnetic resonance imaging in fetuses, newborns and children

    Energy Technology Data Exchange (ETDEWEB)

    Thayyil, Sudhin [Centre for Cardiovascular MR, UCL Institute of Child Health and Great Ormond Street Hospital for Children, London (United Kingdom)], E-mail: s.thayyil@ich.ucl.ac.uk; Schievano, Silvia [Centre for Cardiovascular MR, UCL Institute of Child Health and Great Ormond Street Hospital for Children, London (United Kingdom); Robertson, Nicola J. [EGA UCL Institute for Women' s Health, University College London (United Kingdom); Jones, Rodney [Centre for Cardiovascular MR, UCL Institute of Child Health and Great Ormond Street Hospital for Children, London (United Kingdom); Chitty, Lyn S. [EGA UCL Institute for Women' s Health, University College London (United Kingdom); Clinical Molecular Genetics Unit, Institute of Child Health, London (United Kingdom); Sebire, Neil J. [Histopathology, Great Ormond Street Hospital NHS Trust, London (United Kingdom); Taylor, Andrew M. [Centre for Cardiovascular MR, UCL Institute of Child Health and Great Ormond Street Hospital for Children, London (United Kingdom)

    2009-11-15

    Magnetic resonance (MR) imaging allows minimally invasive autopsy, especially when consent is declined for traditional autopsy. Estimation of individual visceral organ weights is an important component of traditional autopsy. Objective: To examine whether a semi-automated method can be used for non-invasive internal organ weight measurement using post-mortem MR imaging in fetuses, newborns and children. Methods: Phase 1: In vitro scanning of 36 animal organs (heart, liver, kidneys) was performed to check the accuracy of the volume reconstruction methodology. Real volumes were measured by the water displacement method. Phase 2: Sixty-five whole body post-mortem MR scans were performed in fetuses (n = 30), newborns (n = 5) and children (n = 30) at 1.5 T using a 3D TSE T2-weighted sequence. These data were analysed offline using the image processing software Mimics 11.0. Results: Phase 1: Mean differences (S.D.) between estimated and actual volumes were -0.3 (1.5) ml for kidney, -0.7 (1.3) ml for heart, -1.7 (3.6) ml for liver in animal experiments. Phase 2: In fetuses, newborns and children mean differences between estimated and actual weights (S.D.) were -0.6 (4.9) g for liver, -5.1 (1.2) g for spleen, -0.3 (0.6) g for adrenals, 0.4 (1.6) g for thymus, 0.9 (2.5) g for heart, -0.7 (2.4) g for kidneys and 2.7 (14) g for lungs. Excellent correlation was noted between estimated and actual weights (r{sup 2} = 0.99, p < 0.001). Accuracy was lower when fetuses were less than 20 weeks or less than 300 g. Conclusion: Rapid, accurate and reproducible estimation of solid internal organ weights is feasible using the semi-automated 3D volume reconstruction method.
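
Converting a segmented MR volume into a weight estimate needs only a voxel count, the voxel size, and an assumed tissue density. A sketch (the density values are approximate textbook figures for illustration, not parameters reported in the paper):

```python
import numpy as np

# assumed soft-tissue densities in g/ml -- approximate literature values
TISSUE_DENSITY_G_PER_ML = {"liver": 1.05, "heart": 1.05, "kidney": 1.05}

def segmented_volume_ml(mask, voxel_dims_mm):
    """Organ volume from a binary segmentation mask and voxel size in mm
    (1000 mm^3 = 1 ml)."""
    voxel_ml = voxel_dims_mm[0] * voxel_dims_mm[1] * voxel_dims_mm[2] / 1000.0
    return float(mask.sum()) * voxel_ml

def organ_weight_g(mask, voxel_dims_mm, organ):
    """Weight estimate = segmented volume x assumed tissue density."""
    return segmented_volume_ml(mask, voxel_dims_mm) * TISSUE_DENSITY_G_PER_ML[organ]
```

The water-displacement validation in Phase 1 of the study is effectively a check on the `segmented_volume_ml` step, since the density conversion introduces no additional segmentation error.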

  1. Automated pipeline to analyze non-contact infrared images of the paraventricular nucleus specific leptin receptor knock-out mouse model

    Science.gov (United States)

    Diaz Martinez, Myriam; Ghamari-Langroudi, Masoud; Gifford, Aliya; Cone, Roger; Welch, E. B.

    2015-03-01

    Evidence of leptin resistance is indicated by elevated leptin levels together with other hallmarks of obesity such as a defect in energy homeostasis.1 As obesity is an increasing epidemic in the US, the investigation of mechanisms by which leptin resistance has a pathophysiological impact on energy is an intensive field of research.2 However, the manner in which leptin resistance contributes to the dysregulation of energy, specifically thermoregulation,3 is not known. The aim of this study was to investigate whether the leptin receptor expressed in paraventricular nucleus (PVN) neurons plays a role in thermoregulation at different temperatures. Non-contact infrared (NCIR) thermometry was employed to measure surface body temperature (SBT) of nonanesthetized mice with a specific deletion of the leptin receptor in the PVN after exposure to room (25 °C) and cold (4 °C) temperature. Dorsal side infrared images of wild type (LepRwtwt/sim1-Cre), heterozygous (LepRfloxwt/sim1-Cre) and knock-out (LepRfloxflox/sim1-Cre) mice were collected. Images were input to an automated post-processing pipeline developed in MATLAB to calculate average and maximum SBTs. Linear regression was used to evaluate the relationship between sex, cold exposure and leptin genotype with SBT measurements. Findings indicate that average SBT has a negative relationship to the LepRfloxflox/sim1-Cre genotype, the female sex and cold exposure. However, max SBT is affected by the LepRfloxflox/sim1-Cre genotype and the female sex. In conclusion this data suggests that leptin within the PVN may have a neuroendocrine role in thermoregulation and that NCIR thermometry combined with an automated imaging-processing pipeline is a promising approach to determine SBT in non-anesthetized mice.
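
The two image statistics carried into the regression, average and maximum SBT, reduce to masked reductions over the thermal image; a minimal sketch of that step of the pipeline (illustrative, not the authors' MATLAB code):

```python
import numpy as np

def surface_body_temperature(thermal_image, animal_mask):
    """Average and maximum surface body temperature (SBT) over the
    segmented animal region of a non-contact infrared image."""
    temps = np.asarray(thermal_image, float)[animal_mask]
    return temps.mean(), temps.max()
```

Segmenting the animal from the cooler background first is what makes these reductions meaningful; the regression against sex, genotype and ambient temperature then operates on the resulting per-image pairs.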

  2. Automating measurement of subtle changes in articular cartilage from MRI of the knee by combining 3D image registration and segmentation

    Science.gov (United States)

    Lynch, John A.; Zaim, Souhil; Zhao, Jenny; Peterfy, Charles G.; Genant, Harry K.

    2001-07-01

    In osteoarthritis, articular cartilage loses integrity and becomes thinned. This usually occurs at sites which bear weight during normal use. Measurement of such loss from MRI scans requires precise and reproducible techniques, which can overcome the difficulties of patient repositioning within the scanner. In this study, we combine a previously described technique for segmentation of cartilage from MRI of the knee with a technique for 3D image registration that matches localized regions of interest at followup and baseline. Two patients, who had recently undergone meniscal surgery and developed lesions during the 12-month followup period, were examined. Image registration matched regions of interest (ROI) between baseline and followup, and changes within the cartilage lesions were estimated to be about a 16% reduction in cartilage volume within each ROI. This was more than 5 times the reproducibility of the measurement, but only represented a change of between 1 and 2% in total femoral cartilage volume. Changes in total cartilage volume may therefore be insensitive for quantifying changes in cartilage morphology. The combined use of automated image segmentation with 3D image registration could be a useful tool for the precise and sensitive measurement of localized changes in cartilage from MRI of the knee.
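
The reported lesion change is a simple relative volume difference within each registered ROI; for example, a drop from 1.00 ml to 0.84 ml corresponds to the roughly 16% reduction described above:

```python
def percent_volume_change(baseline_ml, followup_ml):
    """Relative cartilage volume change within a registered ROI,
    negative for loss."""
    return 100.0 * (followup_ml - baseline_ml) / baseline_ml

print(percent_volume_change(1.00, 0.84))  # -16.000000000000004
```

Computing this per-ROI rather than over the whole femoral cartilage is what recovers the 16% local signal that would otherwise be diluted to 1-2% globally.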

  3. Land Cover and Land Use Classification with TWOPAC: towards Automated Processing for Pixel- and Object-Based Image Classification

    Directory of Open Access Journals (Sweden)

    Stefan Dech

    2012-09-01

    We present a novel and innovative automated processing environment for the derivation of land cover (LC) and land use (LU) information. This processing framework, named TWOPAC (TWinned Object and Pixel based Automated classification Chain), enables the standardized, independent, user-friendly and comparable derivation of LC and LU information with minimized manual classification labor. TWOPAC allows classification of multi-spectral and multi-temporal remote sensing imagery from different sensor types, and supports not only pixel-based classification but also classification based on object-based characteristics. Classification uses a decision tree (DT) approach, for which the well-known C5.0 code has been implemented; it builds decision trees based on the concept of information entropy. TWOPAC enables automatic generation of the decision tree classifier from a C5.0-retrieved ASCII file, as well as fully automatic validation of the classification output via sample-based accuracy assessment. Envisaging the automated generation of standardized land cover products, as well as area-wide classification of large amounts of data in preferably a short processing time, standardized interfaces for process control, Web Processing Services (WPS), as introduced by the Open Geospatial Consortium (OGC), are utilized. TWOPAC's functionality to process geospatial raster or vector data via web resources (server, network) enables its use independent of any commercial client or desktop software and allows for large-scale data processing on servers. Furthermore, the components of TWOPAC were built from open source code components and are implemented as a plug-in for the Quantum GIS software for easy handling of the classification process from the user's perspective.
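C5.0 selects splits by information entropy. A minimal, self-contained sketch of that criterion (not the C5.0 code itself), using toy pixel labels:

```python
# Entropy of a label set and the information gain of a candidate split --
# the quantity a DT learner maximizes when choosing a split.
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, split):
    """split: list of sublists partitioning `labels`."""
    n = len(labels)
    return entropy(labels) - sum(len(s) / n * entropy(s) for s in split)

pixels = ["water"] * 4 + ["forest"] * 4       # toy pixel labels
perfect = [["water"] * 4, ["forest"] * 4]     # a perfectly pure split
print(entropy(pixels))                        # → 1.0 (maximal for 2 classes)
print(information_gain(pixels, perfect))      # → 1.0 (all uncertainty removed)
```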

  4. Process automation

    International Nuclear Information System (INIS)

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  5. SU-E-I-89: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Pediatric Anthropomorphic and ACR Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Mahmood, U; Erdi, Y; Wang, W [Memorial Sloan Kettering Cancer Center, NY, NY (United States)]

    2014-06-01

    Purpose: To assess the impact of General Electric's automated tube potential selection algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of a pediatric anthropomorphic phantom (CIRS). The baseline protocol was 120 kVp, 80 mA, 0.7 s rotation time. Image quality was assessed by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. The NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive statistical iterative reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: For the baseline protocol, the CNR decreased from 0.460 ± 0.182 to 0.420 ± 0.057 when kVa was activated. Compared against the baseline protocol, ASiR of 40% yielded a decrease in noise magnitude, as reflected in the increased CNR of 0.620 ± 0.040. The liver dose decreased by 30% with kVa activation. Conclusion: Application of kVa reduces the liver dose by up to 30%; however, image quality for abdominal scans is reduced when the automated tube voltage selection feature is used with the baseline protocol. As demonstrated by the CNR and NPS analysis, the texture and magnitude of the noise in images reconstructed at ASiR 40% matched those of our baseline images. We have demonstrated that a 30% dose reduction is possible when using 40% ASiR with kVa in pediatric patients.
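The CNR metric used above is commonly computed as |mean_ROI − mean_background| / SD_background; the exact procedure is the one defined in the ACR phantom testing document. A sketch of that common form, with synthetic HU samples:

```python
# Contrast-to-noise ratio from two sets of pixel samples (synthetic values;
# real measurements would come from ROIs drawn on the phantom image).
from statistics import mean, stdev

def cnr(roi, background):
    return abs(mean(roi) - mean(background)) / stdev(background)

roi = [105.0, 104.0, 106.0, 105.0]   # HU samples inside the contrast insert
bg  = [100.0, 101.0, 99.0, 100.0]    # HU samples in the uniform background
print(round(cnr(roi, bg), 2))        # → 6.12
```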

  6. GMP-compliant automated synthesis of [{sup 18}F]AV-45 (Florbetapir F 18) for imaging {beta}-amyloid plaques in human brain

    Energy Technology Data Exchange (ETDEWEB)

    Yao, C.-H. [Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital, Taiwan (China); Lin, K.-J. [Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital, Taiwan (China); Department of Medical Imaging and Radiological Sciences, Chang Gung University, 259 Wen-Hua 1st Road, Kweishan, Taoyuan 333, Taiwan (China); Weng, C.-C. [Department of Medical Imaging and Radiological Sciences, Chang Gung University, 259 Wen-Hua 1st Road, Kweishan, Taoyuan 333, Taiwan (China); Hsiao, I.-T. [Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital, Taiwan (China); Department of Medical Imaging and Radiological Sciences, Chang Gung University, 259 Wen-Hua 1st Road, Kweishan, Taoyuan 333, Taiwan (China); Ting, Y.-S. [Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital, Taiwan (China); Yen, T.-C. [Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital, Taiwan (China); Department of Medical Imaging and Radiological Sciences, Chang Gung University, 259 Wen-Hua 1st Road, Kweishan, Taoyuan 333, Taiwan (China); Jan, T.-R. [Department and Graduate Institute of Veterinary Medicine, National Taiwan University, Taipei, Taiwan (China); Skovronsky, Daniel [Avid Radiopharmaceuticals, Inc., Philadelphia, PA 19104 (United States); Kung, M.-P. [Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital, Taiwan (China); Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104 (United States); Wey, S.-P., E-mail: spwey@mail.cgu.edu.t [Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital, Taiwan (China); Department of Medical Imaging and Radiological Sciences, Chang Gung University, 259 Wen-Hua 1st Road, Kweishan, Taoyuan 333, Taiwan (China)

    2010-12-15

    We report herein the Good Manufacturing Practice (GMP)-compliant automated synthesis of {sup 18}F-labeled styrylpyridine, AV-45 (Florbetapir), a novel tracer for positron emission tomography (PET) imaging of {beta}-amyloid (A{beta}) plaques in the brain of Alzheimer's disease patients. [{sup 18}F]AV-45 was prepared in 105 min using a tosylate precursor with Sumitomo modules for radiosynthesis under GMP-compliant conditions. The overall yield was 25.4{+-}7.7% with a final radiochemical purity of 95.3{+-}2.2% (n=19). The specific activity of [{sup 18}F]AV-45 reached as high as 470{+-}135 TBq/mmol (n=19). The present studies show that [{sup 18}F]AV-45 can be manufactured under GMP-compliant conditions and could be widely available for routine clinical use.

  7. A fully automated two-step synthesis of an {sup 18}F-labelled tyrosine kinase inhibitor for EGFR kinase activity imaging in tumors

    Energy Technology Data Exchange (ETDEWEB)

    Kobus, D.; Giesen, Y.; Ullrich, R.; Backes, H. [Max Planck Institute for Neurological Research with Klaus-Joachim-Zuelch Laboratories of the Max Planck Society and the Faculty of Medicine of the University of Cologne, Cologne (Germany); Neumaier, B. [Max Planck Institute for Neurological Research with Klaus-Joachim-Zuelch Laboratories of the Max Planck Society and the Faculty of Medicine of the University of Cologne, Cologne (Germany)], E-mail: bernd.neumaier@nf.mpg.de

    2009-11-15

    Radiolabelled epidermal growth factor receptor (EGFR) tyrosine kinase (TK) inhibitors potentially facilitate the assessment of EGFR overexpression in tumors. Since elaborate multi-step radiosyntheses are required for {sup 18}F-labelling of EGFR-specific anilinoquinazolines, we report on the development of a two-step click labelling approach that was adapted to a fully automated synthesis module. 6-(4-N,N-Dimethylaminocrotonyl)amido-4-(3-chloro-4-fluoro)phenylamino-7-{l_brace}3-[4-(2-[{sup 18}F]fluoroethyl)-2,3,4-triazol-1-yl]propoxy{r_brace}quinazoline ([{sup 18}F]6) was synthesized via Huisgen 1,3-dipolar cycloaddition between 2-[{sup 18}F]fluoroethylazide ([{sup 18}F]4) and the alkyne-modified anilinoquinazoline precursor 5. PET images of PC9 tumor xenografts using the novel biomarker showed promising potential to visualize EGFR overexpression.

  8. Intra-patient semi-automated segmentation of the cervix-uterus in CT-images for adaptive radiotherapy of cervical cancer

    Science.gov (United States)

    Luiza Bondar, M.; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben

    2013-08-01

    For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.
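Accuracy above is reported as a Dice similarity coefficient, DSC = 2|A∩B| / (|A| + |B|), the overlap between automatic and manual segmentations. A minimal sketch over toy voxel sets:

```python
# Dice similarity coefficient between two segmentations, each represented
# as a set of voxel coordinates (toy data, not the study's scans).
def dice(a, b):
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

auto   = {(0, 0), (0, 1), (1, 0), (1, 1)}   # voxels labeled by the algorithm
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}   # voxels in the manual contour
print(dice(auto, manual))  # → 0.75
```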

  9. Robust Automated Image Co-Registration of Optical Multi-Sensor Time Series Data: Database Generation for Multi-Temporal Landslide Detection

    Directory of Open Access Journals (Sweden)

    Robert Behling

    2014-03-01

    Reliable multi-temporal landslide detection over longer periods of time requires multi-sensor time series data characterized by high internal geometric stability, as well as high relative and absolute accuracy. For this purpose, a new methodology for fully automated co-registration has been developed, allowing efficient and robust spatial alignment of standard orthorectified data products originating from a multitude of optical satellite remote sensing datasets of varying spatial resolution. The correlation-based co-registration uses worldwide available, terrain-corrected Landsat Level 1T time series data as the spatial reference, ensuring global applicability. The developed approach has been applied to a multi-sensor time series of 592 remote sensing datasets covering an approximately 12,000 km2 area in Southern Kyrgyzstan (Central Asia) that is strongly affected by landslides. The database contains images acquired during the last 26 years by Landsat (ETM), ASTER, SPOT and RapidEye sensors. Analysis of the spatial shifts obtained from co-registration revealed sensor-specific misalignments ranging between 5 m and more than 400 m. Overall accuracy assessment of these alignments resulted in a high relative image-to-image accuracy of 17 m (RMSE) and a high absolute accuracy of 23 m (RMSE) for the whole co-registered database, making it suitable for multi-temporal landslide detection at a regional scale in Southern Kyrgyzstan.
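Correlation-based co-registration can be illustrated in one dimension: slide one signal over a reference and take the lag with maximum correlation as the estimated mis-registration. The actual method works in 2D on image chips; the signals here are synthetic:

```python
# Estimate the shift between two 1D profiles by maximizing their
# cross-correlation over a small window of candidate lags.
def best_shift(reference, moving, max_lag):
    def corr(k):
        return sum(reference[i] * moving[i + k]
                   for i in range(len(reference))
                   if 0 <= i + k < len(moving))
    return max(range(-max_lag, max_lag + 1), key=corr)

ref = [0, 0, 1, 5, 1, 0, 0, 0]     # reference edge profile
mov = [0, 0, 0, 0, 1, 5, 1, 0]     # same feature, shifted by +2 pixels
print(best_shift(ref, mov, 3))     # → 2
```

A 2D version would do the same over row and column lags (or equivalently via phase correlation in the frequency domain), yielding the per-scene shifts reported above.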

  10. PET imaging of liposomes labeled with an [¹⁸F]-fluorocholesteryl ether probe prepared by automated radiosynthesis.

    Science.gov (United States)

    Jensen, Andreas Tue Ingemann; Binderup, Tina; Andresen, Thomas L; Kjær, Andreas; Rasmussen, Palle H

    2012-12-01

    A novel [¹⁸F]-labeled cholesteryl ether lipid probe was prepared by synthesis of the corresponding mesylate, which was [¹⁸F]-fluorinated by a [¹⁸F]KF, Kryptofix-222, K₂CO₃ procedure. Fluorination was done for 10 minutes at 165°C and proceeded with conversions between 3 and 17%, depending on conditions. Radiolabelling of the probe and subsequent in situ purification on Sep-Paks were done on a custom-built, fully automatic synthesis robot. Long-circulating liposomes were prepared by hydration (magnetic stirring) of a lipid film containing the radiolabeled probe, followed by fully automated extrusion through 100-nm filters. The [¹⁸F]-labeled liposomes were injected into nude, tumor-bearing mice, and positron emission tomography (PET) scans were performed several times over 8 hours to investigate the in vivo biodistribution. Clear tumor accumulation, as well as hepatic and splenic uptake, was observed, corresponding to the expected liposomal pharmacokinetics. The tumor accumulation 8 hours postinjection accounted for 2.25 ± 0.23 (mean ± standard error of the mean) percent of injected dose per gram (%ID/g), and the tumor-to-muscle ratio reached 2.20 ± 0.24 after 8 hours, which is satisfactorily high for visualization of pathological lesions. Moreover, the blood concentration was still at a high level (13.9 ± 1.5 %ID/g) at the end of the 8-hour time frame. The present work demonstrates a methodology for the automated preparation of radiolabeled liposomes, and shows that [¹⁸F]-labeled liposomes could be suitable for visualizing tumors and obtaining short-term pharmacokinetics in vivo. PMID:22803638
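The biodistribution above is reported as percent injected dose per gram (%ID/g). A minimal sketch of that computation with made-up activities (decay correction is assumed to be handled upstream, i.e. all activities referred to injection time):

```python
# %ID/g: activity in a tissue as a fraction of the injected dose,
# normalized by tissue mass. All inputs are synthetic, decay-corrected.
def percent_id_per_gram(tissue_activity_mbq, injected_dose_mbq, tissue_mass_g):
    return 100.0 * tissue_activity_mbq / injected_dose_mbq / tissue_mass_g

tumor  = percent_id_per_gram(0.18, 8.0, 1.0)   # → 2.25 %ID/g
muscle = percent_id_per_gram(0.08, 8.0, 1.0)   # → 1.0  %ID/g
print(round(tumor, 2), round(tumor / muscle, 2))
```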

  11. Automated image analysis with the potential for process quality control applications in stem cell maintenance and differentiation.

    Science.gov (United States)

    Smith, David; Glen, Katie; Thomas, Robert

    2016-01-01

    The translation of laboratory processes into scaled production systems suitable for manufacture is a significant challenge for cell-based therapies; in particular, there is a lack of analytical methods that are informative and efficient for process control. Here the potential of image analysis as one part of the solution to this issue is explored, using pluripotent stem cell colonies as a valuable and challenging exemplar. The Cell-IQ live cell imaging platform was used to build image libraries of morphological culture attributes such as colony "edge," "core periphery" or "core" cells. Conventional biomarkers, such as Oct3/4, Nanog, and Sox-2, were shown to correspond to specific morphologies using immunostaining and flow cytometry techniques. Quantitative in-process monitoring of these morphological attributes using the reference image libraries showed rapid sensitivity to changes induced by different media exchange regimes or the addition of the mesoderm-lineage-inducing cytokine BMP4. The relationship between imaging sample size and precision was defined for each morphological attribute, showing that this sensitivity could be achieved with a relatively small imaging sample. Further, the morphological state of single colonies could be correlated with individual colony outcomes; smaller colonies were identified as optimal for homogeneous early mesoderm differentiation, while larger colonies maintained a morphologically pluripotent core. Finally, we show the potential of the same image libraries to assess cell number in culture with accuracy comparable to sacrificial digestion and counting. The data support a potentially powerful role for quantitative image analysis in the setting of in-process specifications, and also in screening the effects of process actions during development, which is highly complementary to current analysis in optimization and manufacture.

  12. 78 FR 44142 - Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated...

    Science.gov (United States)

    2013-07-23

    ... Automation Program (NCAP) test called the Document Image System (DIS) test. See 77 FR 20835. The DIS test... Automation Program Test of Automated Manifest Capabilities for Ocean and Rail Carriers: 76 FR 42721 (July 19... (SE test). See 76 FR 69755. The SE test established new entry capability to simplify the entry...

  13. Deployment of a Fully-Automated Green Fluorescent Protein Imaging System in a High Arctic Autonomous Greenhouse

    Directory of Open Access Journals (Sweden)

    Alain Berinstain

    2013-03-01

    Higher plants are an integral part of strategies for sustained human presence in space. Space-based greenhouses have the potential to provide closed-loop recycling of oxygen, water and food. Plant monitoring systems with the capacity to remotely observe the condition of crops in real time within these systems would permit operators to take immediate action to ensure optimum system yield and reliability. One such plant health monitoring technique involves the use of reporter genes driving fluorescent proteins as biological sensors of plant stress. In 2006 an initial prototype green fluorescent protein imager system was deployed at the Arthur Clarke Mars Greenhouse, located in the Canadian High Arctic. This prototype demonstrated the advantages of this biosensor technology and underscored the challenges of collecting and managing telemetric data from exigent environments. We present here the design and deployment of a second prototype imaging system deployed within, and connected to the infrastructure of, the Arthur Clarke Mars Greenhouse. This is the first imager to run autonomously for one year in the un-crewed greenhouse, with command and control conducted through the greenhouse satellite control system. Images were saved locally in high resolution and sent telemetrically in low resolution. The imager hardware is described, including the custom-designed LED growth light and fluorescent excitation light boards, filters, data acquisition and control system, and basic sensing and environmental control. Several critical lessons learned relating to the hardware of small plant growth payloads are also elaborated.

  14. Automation of the radiosynthesis and purification procedures for [18F]Fluspidine preparation, a new radiotracer for clinical investigations in PET imaging of σ1 receptors in brain

    International Nuclear Information System (INIS)

    The radiosynthesis of [18F]Fluspidine, a potent σ1 receptor imaging probe for pre-clinical/clinical studies, was implemented on a TRACERlabTM FX F-N synthesizer. [18F]2 was synthesized in 15 min at 85 °C starting from its tosylate precursor. Purification via semi-preparative RP-HPLC was investigated using different columns and eluent compositions and was most successful on a polar RP phase with acetonitrile/water buffered with NH4OAc. After solid phase extraction, [18F]Fluspidine was formulated and produced within 59±4 min with an overall radiochemical yield of 37±8%, a radiochemical purity of 99.3±0.5% and high specific activity (176.6±52.0 GBq/µmol). - Highlights: • [18F]Fluspidine is a promising radiotracer for PET imaging of sigma1 receptors. • A fully automated CGMP-oriented radiosynthesis of [18F]Fluspidine is described. • The purification was investigated using different semi-preparative HPLC systems. • [18F]Fluspidine was produced within 59±4 min with a radiochemical yield of 37±8%

  15. Automated Classification of Disease Patterns from Echo-cardiography Images Based on Shape Features of the Left Ventricle

    International Nuclear Information System (INIS)

    Computer-assisted diagnosis using analysis of medical images is an area of active research in health informatics. This paper proposes a technique for indicating heart diseases by using information related to the shape of the left ventricle (LV). LV boundaries are tracked from echo-cardiography images taken in the LV short-axis view, corresponding to two disease conditions, viz. dilated cardiomyopathy and hypertrophic cardiomyopathy, and are discriminated from the normal condition. The LV shapes are modeled using shape histograms, generated by binning the lengths of normalized radii drawn from the centroid to the periphery into a specific number of bins. A 3-layer neural network activated by a log-sigmoid function is used to classify the shape histograms into one of the three classes. Experiments on a dataset of 240 images show recognition accuracies of the order of 80%.
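The shape descriptor described above can be sketched directly: radii from the centroid to boundary points, normalized by the maximum radius and binned into a fixed-length histogram. The bin count and test contour here are illustrative, not the paper's:

```python
# Shape histogram of normalized centroid-to-boundary radii for a closed
# contour given as a list of (x, y) boundary points.
from math import hypot

def shape_histogram(boundary, n_bins=8):
    cx = sum(x for x, y in boundary) / len(boundary)
    cy = sum(y for x, y in boundary) / len(boundary)
    radii = [hypot(x - cx, y - cy) for x, y in boundary]
    r_max = max(radii)
    hist = [0] * n_bins
    for r in radii:
        hist[min(int(r / r_max * n_bins), n_bins - 1)] += 1
    return hist

# Toy "boundary": corners and edge midpoints of a 2x2 square.
square = [(0, 0), (0, 2), (2, 0), (2, 2), (1, 0), (1, 2), (0, 1), (2, 1)]
print(shape_histogram(square))  # → [0, 0, 0, 0, 0, 4, 0, 4]
```

A histogram like this is scale-invariant (radii are normalized), which is what makes it a usable input vector for the neural network classifier.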

  16. Automated Wetland Delineation from Multi-Frequency and Multi-Polarized SAR Images in High Temporal and Spatial Resolution

    Science.gov (United States)

    Moser, L.; Schmitt, A.; Wendleder, A.

    2016-06-01

    Water scarcity is one of the main challenges posed by the changing climate. Especially in semi-arid regions, where water reservoirs are filled during the very short rainy season but have to store enough water for the extremely long dry season, the intelligent handling of water resources is vital. This study focuses on Lac Bam in Burkina Faso, which is the largest natural lake of the country and of high importance to the local inhabitants for irrigated farming, animal watering, and extraction of water for drinking and sanitation. Given the competition for water resources, an independent, area-wide monitoring system is essential for acceptance by all decision makers. The following contribution introduces a weather- and illumination-independent monitoring system for automated wetland delineation with high temporal (about two weeks) and high spatial (about five meters) sampling. The similarities and differences of the multi-frequency and multi-polarized SAR acquisitions by RADARSAT-2 and TerraSAR-X are studied. The results indicate that even basic approaches, without pre-classification time series analysis or post-classification filtering, are enough to establish a monitoring system of prime importance for a whole region.

  17. Automated gamma knife radiosurgery treatment planning with image registration, data-mining, and Nelder-Mead simplex optimization

    International Nuclear Information System (INIS)

    Gamma knife treatments are usually planned manually, requiring much expertise and time. We describe a new, fully automatic method of treatment planning. The treatment volume to be planned is first compared with a database of past treatments to find volumes closely matching in size and shape. The treatment parameters of the closest matches are used as starting points for the new treatment plan. Further optimization is performed with the Nelder-Mead simplex method: the coordinates and weights of the isocenters are allowed to vary until a maximally conformal plan specific to the new treatment volume is found. The method was tested on a randomly selected set of 10 acoustic neuromas and 10 meningiomas. Typically, matching a new volume took under 30 seconds. The time for simplex optimization, on a 3 GHz Xeon processor, ranged from under a minute for small volumes to substantially longer for large volumes (30 000 cubic mm, >20 isocenters). In 8/10 acoustic neuromas and 8/10 meningiomas, the automatic method found plans with a conformation number equal to or better than that of the manual plan. In 4/10 acoustic neuromas and 5/10 meningiomas, both overtreatment and undertreatment ratios were equal or better in the automated plans. In conclusion, data-mining of past treatments can be used to derive starting parameters for treatment planning, which can then be computer-optimized to give good plans automatically.
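The plans are scored by a conformation number, whose formula the abstract does not spell out. A widely used definition (van't Riet et al.) is CN = (TV∩PIV)² / (TV · PIV), where TV is the target volume and PIV the prescription isodose volume; this sketch assumes that form, with illustrative volumes:

```python
# Conformation number from target volume (tv), prescription isodose
# volume (piv), and their overlap -- all in the same units (synthetic cm^3).
# CN approaches 1 only when the dose both covers the target (low
# undertreatment) and spills little outside it (low overtreatment).
def conformation_number(tv, piv, overlap):
    return overlap ** 2 / (tv * piv)

tv, piv, overlap = 10.0, 12.0, 9.0
print(round(conformation_number(tv, piv, overlap), 3))  # → 0.675
```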

  18. AUTOMATED WETLAND DELINEATION FROM MULTI-FREQUENCY AND MULTI-POLARIZED SAR IMAGES IN HIGH TEMPORAL AND SPATIAL RESOLUTION

    Directory of Open Access Journals (Sweden)

    L. Moser

    2016-06-01

    Water scarcity is one of the main challenges posed by the changing climate. Especially in semi-arid regions, where water reservoirs are filled during the very short rainy season but have to store enough water for the extremely long dry season, the intelligent handling of water resources is vital. This study focuses on Lac Bam in Burkina Faso, which is the largest natural lake of the country and of high importance to the local inhabitants for irrigated farming, animal watering, and extraction of water for drinking and sanitation. Given the competition for water resources, an independent, area-wide monitoring system is essential for acceptance by all decision makers. The following contribution introduces a weather- and illumination-independent monitoring system for automated wetland delineation with high temporal (about two weeks) and high spatial (about five meters) sampling. The similarities and differences of the multi-frequency and multi-polarized SAR acquisitions by RADARSAT-2 and TerraSAR-X are studied. The results indicate that even basic approaches, without pre-classification time series analysis or post-classification filtering, are enough to establish a monitoring system of prime importance for a whole region.

  19. Automated classification and visualization of healthy and pathological dental tissues based on near-infrared hyper-spectral imaging

    Science.gov (United States)

    Usenik, Peter; Bürmen, Miran; Vrtovec, Tomaž; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2011-03-01

    Despite major improvements in dental healthcare and technology, dental caries remains one of the most prevalent chronic diseases of modern society. The initial stages of dental caries are characterized by demineralization of enamel crystals, commonly known as white spots, which are difficult to diagnose. If detected early enough, such demineralization can be arrested and reversed by non-surgical means through well-established dental treatments (fluoride therapy, anti-bacterial therapy, low-intensity laser irradiation). Near-infrared (NIR) hyper-spectral imaging is a promising new technique for early detection of demineralization based on distinct spectral features of healthy and pathological dental tissues. In this study, we apply NIR hyper-spectral imaging to classify and visualize healthy and pathological dental tissues, including enamel, dentin, calculus, dentin caries, enamel caries and demineralized areas. For this purpose, a standardized tooth database was constructed, consisting of 12 extracted human teeth with different degrees of natural dental lesions imaged by the NIR hyper-spectral system, X-ray and a digital color camera. The color and X-ray images of the teeth were presented to a clinical expert for localization and classification of the dental tissues, thereby obtaining the gold standard. Principal component analysis was used for multivariate local modeling of healthy and pathological dental tissues. Finally, the dental tissues were classified by employing multiple discriminant analysis. High agreement was observed between the resulting classification and the gold standard, with the classification sensitivity and specificity exceeding 85% and 97%, respectively. This study demonstrates that NIR hyper-spectral imaging has considerable diagnostic potential for imaging hard dental tissues.
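Principal component analysis underlies the local modeling step above. For just two spectral bands, the first principal direction has a closed form in the covariance terms, tan(2θ) = 2c_xy / (c_xx − c_yy); a small self-contained sketch with synthetic reflectances:

```python
# First principal direction of 2D data from the closed-form angle of the
# 2x2 covariance matrix (synthetic two-band data, not the study's spectra).
from math import atan2, cos, sin

def first_pc_2d(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cxx = sum((x - mx) ** 2 for x in xs) / n
    cyy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    theta = 0.5 * atan2(2 * cxy, cxx - cyy)   # direction of maximum variance
    return cos(theta), sin(theta)

xs = [1.0, 2.0, 3.0, 4.0]        # band-1 reflectances (synthetic)
ys = [1.1, 1.9, 3.2, 3.8]        # band-2 reflectances, nearly collinear
px, py = first_pc_2d(xs, ys)
print(round(px, 2), round(py, 2))  # → 0.73 0.69
```

For full hyper-spectral cubes this generalizes to an eigendecomposition of the band covariance matrix; the 2D case simply makes the geometry visible.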

  20. Automated segmentation of synchrotron radiation micro-computed tomography biomedical images using Graph Cuts and neural networks

    Science.gov (United States)

    Alvarenga de Moura Meneses, Anderson; Giusti, Alessandro; de Almeida, André Pereira; Parreira Nogueira, Liebert; Braz, Delson; Cely Barroso, Regina; deAlmeida, Carlos Eduardo

    2011-12-01

    Synchrotron Radiation (SR) X-ray micro-Computed Tomography (μCT) provides magnified images as a non-invasive and non-destructive technique with high spatial resolution for the qualitative and quantitative analysis of biomedical samples. Research on the application of segmentation algorithms to SR-μCT is an open problem, due to the interesting and well-known characteristics of SR images for visualization, such as the high resolution and the phase contrast effect. In this article, we describe and assess the application of the Energy Minimization via Graph Cuts (EMvGC) algorithm for the segmentation of SR-μCT biomedical images acquired at the Synchrotron Radiation for MEdical Physics (SYRMEP) beam line at the Elettra Laboratory (Trieste, Italy). We also propose a method using EMvGC with Artificial Neural Networks (EMANNs) for correcting misclassifications due to the intensity variation of phase contrast, which is an important and sometimes indispensable effect in certain biomedical applications, although it impairs the segmentation provided by conventional techniques. Results demonstrate considerable success in the segmentation of SR-μCT biomedical images, with an average Dice Similarity Coefficient of 99.88% for bony tissue in Wistar rat rib samples (EMvGC), as well as 98.95% and 98.02% for scans of Rhodnius prolixus insect samples (Chagas disease vector) with EMANNs, relative to manual segmentation. EMvGC and EMANNs cope with the task of performing segmentation in images with intensity variation due to phase contrast effects, outperforming conventional segmentation techniques based on thresholding and linear/nonlinear image filtering, as also discussed in the present article.

  1. Automated segmentation of synchrotron radiation micro-computed tomography biomedical images using Graph Cuts and neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Alvarenga de Moura Meneses, Anderson, E-mail: ameneses@ieee.org [Radiological Sciences Laboratory, Rio de Janeiro State University, Rua Sao Francisco Xavier 524, CEP 20550-900, RJ (Brazil); Giusti, Alessandro [IDSIA (Dalle Molle Institute for Artificial Intelligence), University of Lugano (Switzerland); Pereira de Almeida, Andre; Parreira Nogueira, Liebert; Braz, Delson [Nuclear Engineering Program, Federal University of Rio de Janeiro, RJ (Brazil); Cely Barroso, Regina [Laboratory of Applied Physics on Biomedical Sciences, Physics Department, Rio de Janeiro State University, RJ (Brazil); Almeida, Carlos Eduardo de [Radiological Sciences Laboratory, Rio de Janeiro State University, Rua Sao Francisco Xavier 524, CEP 20550-900, RJ (Brazil)

    2011-12-21

    Synchrotron Radiation (SR) X-ray micro-Computed Tomography (μCT) enables magnified images to be used as a non-invasive and non-destructive technique with high spatial resolution for the qualitative and quantitative analyses of biomedical samples. The application of segmentation algorithms to SR-μCT is an open research problem, due to the interesting and well-known characteristics of SR images for visualization, such as the high resolution and the phase contrast effect. In this article, we describe and assess the application of the Energy Minimization via Graph Cuts (EMvGC) algorithm for the segmentation of SR-μCT biomedical images acquired at the Synchrotron Radiation for MEdical Physics (SYRMEP) beam line of the Elettra Laboratory (Trieste, Italy). We also propose a method combining EMvGC with Artificial Neural Networks (EMANNs) to correct misclassifications caused by the intensity variations of phase contrast, which are important and sometimes indispensable effects in certain biomedical applications, although they impair the segmentation provided by conventional techniques. Results demonstrate considerable success in the segmentation of SR-μCT biomedical images, with an average Dice Similarity Coefficient of 99.88% for bony tissue in Wistar rat rib samples (EMvGC), as well as 98.95% and 98.02% for scans of Rhodnius prolixus insect samples (the vector of Chagas disease) with EMANNs, relative to manual segmentation. EMvGC and EMANNs cope with the task of segmenting images with intensity variation due to phase contrast effects, outperforming conventional segmentation techniques based on thresholding and linear/nonlinear image filtering, as also discussed in the present article.
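
    The Dice Similarity Coefficient used above to score agreement with manual segmentation is straightforward to compute; a minimal NumPy sketch (mask shapes and values are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice Similarity Coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), 1.0 for a perfect match."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(seg, ref).sum()
    total = seg.sum() + ref.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Two overlapping square masks as a toy example
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True   # 36 px
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True   # 36 px, 25 px overlap
```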

  2. Automated detection and quantitative measurement of small rounded opacities in X-ray CT images of pneumoconiosis

    International Nuclear Information System (INIS)

    This paper presents a new method for the quantitative diagnosis of pneumoconiosis using X-ray CT images. The method consists of extraction of the lung regions, detection of small rounded opacities, and measurement of the profusion and size of the opacities. A directional difference operator is proposed for detecting the opacities, which enhances the opacities while suppressing the shadows of blood vessels. Furthermore, we develop a method to measure the profusion and size of the opacities in order to classify pneumoconiosis X-ray CT images. (author)
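
    The paper's exact operator is not reproduced here, but one plausible reading of a directional difference operator with these properties takes, at each pixel, the minimum over several directions of the difference between the centre and its neighbours along that direction: a rounded opacity is brighter than its surroundings in every direction, while a vessel stays bright along its own axis, driving the minimum to zero. A hedged NumPy sketch:

```python
import numpy as np

def directional_difference(img, radius=2):
    """Enhance rounded bright blobs; suppress elongated (vessel-like) ones.

    Response = minimum, over four directions, of the difference between the
    centre pixel and the mean of the two pixels at +/- radius along that
    direction.  A vessel stays bright along its own axis, so that
    direction's difference is ~0 and the minimum suppresses it."""
    out = np.full(img.shape, np.inf)
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        fwd = np.roll(img, (-dy * radius, -dx * radius), axis=(0, 1))
        bwd = np.roll(img, (dy * radius, dx * radius), axis=(0, 1))
        out = np.minimum(out, img - 0.5 * (fwd + bwd))
    return np.maximum(out, 0.0)  # keep only bright-blob responses

# Toy inputs: an isolated bright pixel (opacity) vs. a bright row (vessel)
blob = np.zeros((11, 11)); blob[5, 5] = 1.0
line = np.zeros((11, 11)); line[5, :] = 1.0
```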

  3. Automating the radiographic NDT process

    International Nuclear Information System (INIS)

    Automation, the removal of the human element in inspection, has not been generally applied to film radiographic NDT. The justification for automating is not only productivity but also reliability of results. Film remains in the automated system of the future because of its extremely high image content, approximately 8 × 10⁹ bits per 14 × 17 inch film, the equivalent of 2,200 computer floppy discs. Parts handling systems and robotics, already applied in manufacturing and some NDT modalities, should now be applied to film radiographic NDT systems. Automatic film handling can be achieved with the daylight NDT film handling system. Automatic film processing is becoming the standard in industry and can be coupled to the daylight system. Robots offer the opportunity to fully automate the exposure step. Finally, computer-aided interpretation appears on the horizon. A unit which laser scans a 14 × 17 inch film in 6-8 seconds can digitize film information for further manipulation and possible automatic interrogation (computer-aided interpretation). The system, called FDRS (Film Digital Radiography System), is moving toward 50 micron (approximately 16 lines/mm) resolution, which is believed to meet the majority of image content needs. We expect the automated system to appear first in parts (modules) as certain operations are automated. The future will see it all come together in an automated film radiographic NDT system (author)

  4. Automation of the gamma method for the comparison of dosimetric images; Automatizacion del metodo gamma de comparacion de imagenes dosimetricas

    Energy Technology Data Exchange (ETDEWEB)

    Moreno Reyes, J. C.; Macias Jaen, J.; Arrans Lara, R.

    2013-07-01

    The objective of this work was the development of the JJGAMMA analysis software, which enables this task to be performed systematically, minimizing specialist intervention and therefore the variability due to the observer. Both benefits allow image comparison to be carried out in practice with the required frequency and objectivity. (Author)
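
    The gamma method named in the title combines a dose-difference criterion with a distance-to-agreement criterion (the Low et al. formalism); the JJGAMMA internals are not described in the abstract, but a minimal 1-D global gamma sketch looks like the following (the 3%/3 mm tolerances are conventional defaults, not values from the paper):

```python
import numpy as np

def gamma_index_1d(ref, evl, spacing=1.0, dose_tol=0.03, dist_tol=3.0):
    """1-D global gamma analysis.

    For each reference point, gamma is the minimum over all evaluated
    points of sqrt((dose difference / dose tolerance)^2 +
    (spatial distance / distance-to-agreement)^2); a point passes
    when gamma <= 1."""
    ref = np.asarray(ref, dtype=float)
    evl = np.asarray(evl, dtype=float)
    x = np.arange(ref.size) * spacing        # positions, e.g. in mm
    dd = dose_tol * ref.max()                # absolute (global) dose tolerance
    gammas = np.empty(ref.size)
    for i in range(ref.size):
        dose_term = (evl - ref[i]) / dd
        dist_term = (x - x[i]) / dist_tol
        gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gammas

# Synthetic Gaussian dose profile sampled every 0.5 mm
profile = np.exp(-0.5 * (np.arange(41) - 20.0) ** 2 / 36.0)
```

    Identical distributions give gamma = 0 everywhere, and a uniform 2% dose error passes a 3%/3 mm criterion at every point.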

  5. Automated enumeration and viability measurement of canine stromal vascular fraction cells using fluorescence-based image cytometry method.

    Science.gov (United States)

    Chan, Leo Li-Ying; Cohen, Donald A; Kuksin, Dmitry; Paradis, Benjamin D; Qiu, Jean

    2014-07-01

    In recent years, the lipoaspirate collected from adipose tissue has been seen as a valuable source of adipose-derived mesenchymal stem cells for autologous cellular therapy. For multiple applications, adipose-derived mesenchymal stem cells are isolated from the stromal vascular fraction (SVF) of adipose tissue. Because the fresh SVF typically contains a heterogeneous mixture of cells, determining cell concentration and viability is a crucial step in preparing samples for downstream processing. Due to the large amount of cellular debris contained in the SVF sample, as well as counting irregularities, standard manual counting can lead to inconsistent results. Advancements in imaging and optics technologies have significantly improved image-based cytometric analysis. In this work, we validated the use of fluorescence-based image cytometry for SVF concentration and viability measurement by comparison to standard flow cytometry and a manual hemocytometer. The concentration and viability of freshly collected canine SVF samples were analyzed, and the results correlated highly among all three methods, which validates the image cytometry method for canine SVF analysis, and potentially for SVF from other species. PMID:24740550
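
    The concentration and viability figures in such an assay reduce to simple arithmetic on the two fluorescence-channel counts; a hedged sketch (function name, stain examples, and sample values are illustrative, not taken from the paper):

```python
def viability_stats(total_count, dead_count, volume_ul):
    """Concentration (cells/mL) and viability (%) from image-cytometry
    counts: total cells from a total-population stain (e.g. a nuclear dye)
    and dead cells from a membrane-exclusion stain (e.g. propidium iodide).

    volume_ul: imaged sample volume in microlitres."""
    concentration = total_count / (volume_ul * 1e-3)          # cells per mL
    viability = (total_count - dead_count) / total_count * 100.0
    return concentration, viability
```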

  6. A new automated method for analysis of gated-SPECT images based on a three-dimensional heart shaped model

    DEFF Research Database (Denmark)

    Lomsky, Milan; Richter, Jens; Johansson, Lena;

    2005-01-01

    SIMIND were used to simulate the studies. Finally CAFU was validated on ten rest studies from patients referred for routine stress/rest myocardial perfusion scintigraphy and compared with Cedars-Sinai quantitative gated-SPECT (QGS), a commercially available program for quantification of gated-SPECT images...

  7. Automated tru-cut imaging-guided core needle biopsy of canine orbital neoplasia. A prospective feasibility study

    Science.gov (United States)

    Cirla, A.; Rondena, M.; Bertolini, G.

    2016-01-01

    The purpose of this study was to evaluate the diagnostic value of imaging-guided core needle biopsy for canine orbital mass diagnosis. A second excisional biopsy obtained during surgery or necropsy was used as the reference standard. A prospective feasibility study was conducted in 23 canine orbital masses at a single centre. A complete ophthalmic examination was always followed by orbital ultrasound and computed tomography (CT) examination of the head. All masses were sampled with the patient still on the CT table using ultrasound (US) guided automatic tru-cut device. The most suitable sampling approach to the orbit was chosen each time based on the CT image analysis. One of the following different approaches was used: trans-orbital, trans-conjunctival or trans-masseteric. In all cases, the imaging-guided biopsy provided a sufficient amount of tissue for the histopathological diagnosis, which concurred with the biopsies obtained using the excisional technique. CT examination was essential for morphological diagnosis and provided detailed topographic information that allowed us to choose the safest orbital approach for the biopsy. US guided automatic tru-cut biopsy based on CT images, performed with patient still on the CT table, resulted in a minimally invasive, relatively easy, and accurate diagnostic procedure in dogs with orbital masses. PMID:27540512

  8. Automated tru-cut imaging-guided core needle biopsy of canine orbital neoplasia. A prospective feasibility study

    Directory of Open Access Journals (Sweden)

    A. Cirla

    2016-07-01

    Full Text Available The purpose of this study was to evaluate the diagnostic value of imaging-guided core needle biopsy for canine orbital mass diagnosis. A second excisional biopsy obtained during surgery or necropsy was used as the reference standard. A prospective feasibility study was conducted in 23 canine orbital masses at a single centre. A complete ophthalmic examination was always followed by orbital ultrasound and computed tomography (CT) examination of the head. All masses were sampled with the patient still on the CT table using an ultrasound (US) guided automatic tru-cut device. The most suitable sampling approach to the orbit was chosen each time based on the CT image analysis. One of the following different approaches was used: trans-orbital, trans-conjunctival or trans-masseteric. In all cases, the imaging-guided biopsy provided a sufficient amount of tissue for the histopathological diagnosis, which concurred with the biopsies obtained using the excisional technique. CT examination was essential for morphological diagnosis and provided detailed topographic information that allowed us to choose the safest orbital approach for the biopsy. US guided automatic tru-cut biopsy based on CT images, performed with the patient still on the CT table, resulted in a minimally invasive, relatively easy, and accurate diagnostic procedure in dogs with orbital masses.

  9. Statistical Analysis of Filament Features Based on the Hα Solar Images from 1988 to 2013 by Computer Automated Detection Method

    Science.gov (United States)

    Hao, Q.; Fang, C.; Cao, W.; Chen, P. F.

    2015-12-01

    We improve our filament automated detection method which was proposed in our previous works. It is then applied to process the full disk Hα data mainly obtained by the Big Bear Solar Observatory from 1988 to 2013, spanning nearly three solar cycles. The butterfly diagrams of the filaments, showing the information of the filament area, spine length, tilt angle, and the barb number, are obtained. The variations of these features with the calendar year and the latitude band are analyzed. The drift velocities of the filaments in different latitude bands are calculated and studied. We also investigate the north-south (N-S) asymmetries of the filament numbers in total and in each subclass classified according to the filament area, spine length, and tilt angle. The latitudinal distribution of the filament number is found to be bimodal. About 80% of all the filaments have tilt angles within [0°, 60°]. For the filaments within latitudes lower (higher) than 50°, the northeast (northwest) direction is dominant in the northern hemisphere and the southeast (southwest) direction is dominant in the southern hemisphere. The latitudinal migrations of the filaments experience three stages with declining drift velocities in each of solar cycles 22 and 23, and it seems that the drift velocity is faster in shorter solar cycles. Most filaments in latitudes lower (higher) than 50° migrate toward the equator (polar region). The N-S asymmetry indices indicate that the southern hemisphere is the dominant hemisphere in solar cycle 22 and the northern hemisphere is the dominant one in solar cycle 23.

  10. Methods and measurement variance for field estimations of coral colony planar area using underwater photographs and semi-automated image segmentation.

    Science.gov (United States)

    Neal, Benjamin P; Lin, Tsung-Han; Winter, Rivah N; Treibitz, Tali; Beijbom, Oscar; Kriegman, David; Kline, David I; Greg Mitchell, B

    2015-08-01

    Size and growth rates for individual colonies are some of the most essential descriptive parameters for understanding coral communities, which are currently experiencing worldwide declines in health and extent. Accurately measuring coral colony size and changes over multiple years can reveal demographic, growth, or mortality patterns often not apparent from short-term observations and can expose environmental stress responses that may take years to manifest. Describing community size structure can reveal population dynamics patterns, such as periods of failed recruitment or patterns of colony fission, which have implications for the future sustainability of these ecosystems. However, rapidly and non-invasively measuring coral colony sizes in situ remains a difficult task, as three-dimensional underwater digital reconstruction methods are currently not practical for large numbers of colonies. Two-dimensional (2D) planar area measurements from projection of underwater photographs are a practical size proxy, although this method presents operational difficulties in obtaining well-controlled photographs in the highly rugose environment of the coral reef, and requires extensive time for image processing. Here, we present and test the measurement variance for a method of making rapid planar area estimates of small to medium-sized coral colonies using a lightweight monopod image-framing system and a custom semi-automated image segmentation analysis program. This method demonstrated a coefficient of variation of 2.26% for repeated measurements in realistic ocean conditions, a level of error appropriate for rapid, inexpensive field studies of coral size structure, inferring change in colony size over time, or measuring bleaching or disease extent of large numbers of individual colonies. PMID:26156316
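
    The coefficient of variation quoted above (2.26%) is the sample standard deviation of repeated measurements divided by their mean; a minimal sketch with illustrative values:

```python
import statistics

def coefficient_of_variation(measurements):
    """Coefficient of variation (%) = sample standard deviation / mean * 100,
    a scale-free measure of repeatability for repeated measurements."""
    mean = statistics.mean(measurements)
    return statistics.stdev(measurements) / mean * 100.0

# Repeated planar-area estimates (cm^2) of one colony -- illustrative values
areas = [101.2, 99.5, 100.8, 98.9, 100.1]
cv = coefficient_of_variation(areas)
```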

  11. Automating Finance

    Science.gov (United States)

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools that have taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  12. Automated Image Processing for Spatially Resolved Analysis of Lipid Droplets in Cultured 3T3-L1 Adipocytes

    OpenAIRE

    Sims, James Kenneth; Rohr, Brian; Miller, Eric; Lee, Kyongbum

    2014-01-01

    Cellular hypertrophy of adipose tissue underlies many of the proposed proinflammatory mechanisms for obesity-related diseases. Adipose hypertrophy results from an accumulation of esterified lipids (triglycerides) into membrane-enclosed intracellular lipid droplets (LDs). The coupling between adipocyte metabolism and LD morphology could be exploited to investigate biochemical regulation of lipid pathways by monitoring the dynamics of LDs. This article describes an image processing method to id...

  13. An Attempt to automate the lithological classification of rocks using geological, gamma-spectrometric and satellite image datasets

    International Nuclear Information System (INIS)

    The present study aims essentially at proving that the application of integrated airborne gamma-spectrometric and satellite image data is capable of refining the mapped surface geology, and of identifying anomalous zones of radioelement content that could provide favorable exploration targets for radioactive mineralizations. The application of the appropriate statistical technique to correlate satellite image data with gamma-spectrometric data is of great significance in this respect. Experience shows that Landsat TM data in 7 spectral bands are more successfully used in such studies than MSS. Multivariate statistical analysis techniques are applied to airborne spectrometric and different spectral Landsat TM data. Reduction of the data from n-dimensionality, both qualitatively, as color composite images, and quantitatively, as principal component analysis, is performed using some statistical control parameters. This technique shows distinct efficiency in defining areas where different lithofacies occur. An area located in the north of the Eastern Desert of Egypt, north of Hurgada town, was chosen to test the proposed technique of integrated interpretation of data of different physical nature. The reduced data are represented and interpreted both qualitatively and quantitatively. The advantages and limitations of applying such a technique to the different airborne spectrometric and Landsat TM data are identified. (authors)
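
    Principal component analysis, mentioned above for reducing the n-dimensional band/spectrometric data, can be sketched with a plain SVD; this is a generic illustration, not the authors' processing chain:

```python
import numpy as np

def principal_components(bands):
    """PCA of a multiband image stack of shape (n_bands, h, w): returns
    component images ordered by explained variance, for reducing
    n-dimensional band data to a few informative layers."""
    n, h, w = bands.shape
    x = bands.reshape(n, -1).T.astype(float)   # pixels x bands
    x -= x.mean(axis=0)                        # centre each band
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    pcs = (x @ vt.T).T.reshape(n, h, w)        # component images
    explained = s ** 2 / (s ** 2).sum()        # fraction of variance per PC
    return pcs, explained

# Toy stack of three perfectly correlated "bands"
rng = np.random.default_rng(1)
base = rng.random((6, 6))
stack = np.stack([base, 2.0 * base, -0.5 * base])
pcs, explained = principal_components(stack)
```

    For perfectly correlated bands the first component carries essentially all the variance, which is exactly the redundancy reduction the abstract describes.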

  14. High-resolution Time-lapse Imaging and Automated Analysis of Microtubule Dynamics in Living Human Umbilical Vein Endothelial Cells.

    Science.gov (United States)

    Braun, Alexander; Caesar, Nicole M; Dang, Kyvan; Myers, Kenneth A

    2016-01-01

    The physiological process by which new vasculature forms from existing vasculature requires specific signaling events that trigger morphological changes within individual endothelial cells (ECs). These processes are critical for homeostatic maintenance such as wound healing, and are also crucial in promoting tumor growth and metastasis. EC morphology is defined by the organization of the cytoskeleton, a tightly regulated system of actin and microtubule (MT) dynamics that is known to control EC branching, polarity and directional migration, essential components of angiogenesis. To study MT dynamics, we used high-resolution fluorescence microscopy coupled with computational image analysis of fluorescently-labeled MT plus-ends to investigate MT growth dynamics and the regulation of EC branching morphology and directional migration. Time-lapse imaging of living Human Umbilical Vein Endothelial Cells (HUVECs) was performed following transfection with fluorescently-labeled MT End Binding protein 3 (EB3) and Mitotic Centromere Associated Kinesin (MCAK)-specific cDNA constructs to evaluate effects on MT dynamics. PlusTipTracker software was used to track EB3-labeled MT plus ends in order to measure MT growth speeds and MT growth lifetimes in time-lapse images. This methodology allows for the study of MT dynamics and the identification of how localized regulation of MT dynamics within sub-cellular regions contributes to the angiogenic processes of EC branching and migration. PMID:27584860
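
    The growth speeds and lifetimes that plusTipTracker derives from EB3 plus-end tracks amount to per-track kinematics; a hedged sketch (the frame interval and pixel size are illustrative defaults, not values from the protocol):

```python
import math

def track_stats(track, frame_interval_s=2.0, pixel_size_um=0.065):
    """Growth speed (um/min) and lifetime (s) of one EB3 plus-end track.

    track: list of (x, y) pixel positions, one per consecutive frame."""
    steps = [math.dist(track[i], track[i + 1]) for i in range(len(track) - 1)]
    path_um = sum(steps) * pixel_size_um            # total path length in um
    lifetime_s = (len(track) - 1) * frame_interval_s
    speed_um_per_min = path_um / lifetime_s * 60.0
    return speed_um_per_min, lifetime_s
```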

  15. ALIGNING INFORMATION SECURITY WITH THE IMAGE OF THE ORGANIZATION AND PRIORITIZATION BASED ON FUZZY LOGIC FOR THE INDUSTRIAL AUTOMATION SECTOR

    Directory of Open Access Journals (Sweden)

    Adolfo Alberto Vanti

    2011-12-01

    Full Text Available This paper develops the strategic alignment of organizational behavior through the organization's image, prioritization and information security practices. To this end, information security is studied based on the business requirements of confidentiality, integrity and availability, by applying a tool which integrates the strategic, tactical and operational vision through the following framework: Balanced Scorecard - BSC (strategic) x Control Objectives for Information and Related Technology - COBIT (tactical) x International Organization for Standardization - ISO/International Electrotechnical Commission - IEC 27002 (operational). Another image instrument of the organization is applied in parallel with this analysis to identify and analyze performance involving profiles related to mechanistic, psychic prisons, political systems, instruments of domination, organisms, cybernetics, and flux and transformation (MORGAN, 1996). Finally, a model of strategic prioritization based on compensatory fuzzy logic (ESPIN and VANTI, 2005) is applied. The method was applied to an industrial company located in southern Brazil. The results show two organizational images: "organism" and "flux and transformation". The strategic priorities indicated a significant search for new business services and international markets. Regarding the protection of information, security fell in the gap between "minimum" and "reasonable", and in domain 8 (HR) of standard ISO/IEC 27002, 71% of protection was considered "inappropriate" or "minimal" in the IT governance context.

  16. Sally Ride EarthKAM - Automated Image Geo-Referencing Using Google Earth Web Plug-In

    Science.gov (United States)

    Andres, Paul M.; Lazar, Dennis K.; Thames, Robert Q.

    2013-01-01

    Sally Ride EarthKAM is an educational program funded by NASA that aims to provide the public the ability to picture Earth from the perspective of the International Space Station (ISS). A computer-controlled camera is mounted on the ISS in a nadir-pointing window; however, timing limitations in the system cause inaccurate positional metadata. Manually correcting images within an orbit allows the positional metadata to be improved using mathematical regressions. The manual correction process is time-consuming and thus, unfeasible for a large number of images. The standard Google Earth program allows for the importing of KML (keyhole markup language) files that previously were created. These KML file-based overlays could then be manually manipulated as image overlays, saved, and then uploaded to the project server where they are parsed and the metadata in the database is updated. The new interface eliminates the need to save, download, open, re-save, and upload the KML files. Everything is processed on the Web, and all manipulations go directly into the database. Administrators also have the control to discard any single correction that was made and validate a correction. This program streamlines a process that previously required several critical steps and was probably too complex for the average user to complete successfully. The new process is theoretically simple enough for members of the public to make use of and contribute to the success of the Sally Ride EarthKAM project. Using the Google Earth Web plug-in, EarthKAM images, and associated metadata, this software allows users to interactively manipulate an EarthKAM image overlay, and update and improve the associated metadata. The Web interface uses the Google Earth JavaScript API along with PHP-PostgreSQL to present the user the same interface capabilities without leaving the Web. The simpler graphical user interface will allow the public to participate directly and meaningfully with EarthKAM. The use of

  17. Automated ventricular systems segmentation in brain CT images by combining low-level segmentation and high-level template matching

    Directory of Open Access Journals (Sweden)

    Ward Kevin R

    2009-11-01

    Full Text Available Abstract Background Accurate analysis of CT brain scans is vital for diagnosis and treatment of Traumatic Brain Injuries (TBI). Automatic processing of these CT brain scans could speed up the decision making process, lower the cost of healthcare, and reduce the chance of human error. In this paper, we focus on automatic processing of CT brain images to segment and identify the ventricular systems. The segmentation of ventricles provides quantitative measures on the changes of ventricles in the brain that form vital diagnosis information. Methods First all CT slices are aligned by detecting the ideal midlines in all images. The initial estimation of the ideal midline of the brain is found based on skull symmetry and then the initial estimate is further refined using detected anatomical features. Then a two-step method is used for ventricle segmentation. First a low-level segmentation on each pixel is applied to the CT images. For this step, both Iterated Conditional Mode (ICM) and Maximum A Posteriori Spatial Probability (MASP) are evaluated and compared. The second step applies a template matching algorithm to identify objects in the initial low-level segmentation as ventricles. Experiments for ventricle segmentation are conducted using a relatively large CT dataset containing mild and severe TBI cases. Results Experiments show that the acceptable rate of the ideal midline detection is over 95%. Two measurements are defined to evaluate ventricle recognition results. The first is a sensitivity-like measure and the second is a false-positive-like measure. For the first measurement, the rate is 100%, indicating that all ventricles are identified in all slices. The false-positive-like measurement is 8.59%. We also point out the similarities and differences between the ICM and MASP algorithms through both mathematical relationships and segmentation results on CT images. Conclusion The experiments show the reliability of the proposed algorithms. 
The
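
    The second step of the method above, identifying objects in a low-level segmentation by template matching, is commonly scored with normalized cross-correlation; a generic NumPy sketch, not the authors' implementation:

```python
import numpy as np

def match_template_ncc(img, tmpl):
    """Normalized cross-correlation of `tmpl` at every valid offset in
    `img`; returns the score map and the offset of the best match.
    Scores are in [-1, 1]; 1 means an exact (up to gain/offset) match."""
    ih, iw = img.shape
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    tn = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            win = img[r:r + th, c:c + tw]
            wz = win - win.mean()
            denom = tn * np.sqrt((wz ** 2).sum())
            scores[r, c] = (wz * t).sum() / denom if denom else 0.0
    return scores, np.unravel_index(scores.argmax(), scores.shape)

# Toy scene containing an exact copy of the template at offset (2, 3)
scene = np.zeros((8, 8))
probe = np.array([[0.0, 0, 0], [0, 1, 0], [0, 0, 0]])
scene[2:5, 3:6] = probe
scores, best = match_template_ncc(scene, probe)
```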

  18. An Automated Approach to Agricultural Tile Drain Detection and Extraction Utilizing High Resolution Aerial Imagery and Object-Based Image Analysis

    Science.gov (United States)

    Johansen, Richard A.

    Subsurface drainage from agricultural fields in the Maumee River watershed is suspected to adversely impact water quality and contribute to the formation of harmful algal blooms (HABs) in Lake Erie. In early August of 2014, a HAB developed in the western Lake Erie Basin that resulted in over 400,000 people being unable to drink their tap water due to the presence of a toxin from the bloom. HAB development in Lake Erie is aided by excess nutrients from agricultural fields, which are transported through subsurface tiles and enter the watershed. Compounding the issue, the trend within the Maumee watershed has been to increase the installation of tile drains in both total extent and density. Due to the immense area of drained fields, there is a need to establish an accurate and effective technique to monitor subsurface farmland tile installations and their associated impacts. This thesis aimed at developing an automated method to identify subsurface tile locations from high-resolution aerial imagery by applying an object-based image analysis (OBIA) approach utilizing eCognition. This process was accomplished through a set of algorithms and image filters, which segment and classify image objects by their spectral and geometric characteristics. The algorithms utilized were based on the relative location of image objects and pixels, in order to maximize the robustness and transferability of the final rule-set. These algorithms were coupled with convolution and histogram image filters to generate results for a 10 km2 study area located within Clay Township in Ottawa County, Ohio. The eCognition results were compared to previously collected tile locations from an associated project that applied heads-up digitizing of aerial photography to map field tile. The heads-up digitized locations were used as a baseline for the accuracy assessment. The accuracy assessment generated a range of agreement values from 67.20% - 71.20%, and an average

  19. CARES: Completely Automated Robust Edge Snapper for carotid ultrasound IMT measurement on a multi-institutional database of 300 images: a two stage system combining an intensity-based feature approach with first order absolute moments

    Science.gov (United States)

    Molinari, Filippo; Acharya, Rajendra; Zeng, Guang; Suri, Jasjit S.

    2011-03-01

    The carotid intima-media thickness (IMT) is the most widely used marker for the progression of atherosclerosis and the onset of cardiovascular disease. Computer-aided measurements improve accuracy, but usually require user interaction. In this paper we characterized a new and completely automated technique for carotid segmentation and IMT measurement, based on the merits of two previously developed techniques. We used an integrated approach of intelligent image feature extraction and line fitting for automatically locating the carotid artery in the image frame, followed by wall interface extraction based on a Gaussian edge operator. We call our system CARES. We validated CARES on a multi-institutional database of 300 carotid ultrasound images. IMT measurement bias was 0.032 ± 0.141 mm, better than other automated techniques and comparable to that of user-driven methodologies. Our novel approach processed 96% of the images, leading to a figure of merit of 95.7%. CARES ensures complete automation and high accuracy in IMT measurement; hence it could be a suitable clinical tool for processing large datasets in multicenter studies involving atherosclerosis.

  20. Total Mini-Mental State Examination score and regional cerebral blood flow using Z score imaging and automated ROI analysis software in subjects with memory impairment

    International Nuclear Information System (INIS)

    The Mini-Mental State Examination (MMSE) is considered a useful supplementary method to diagnose dementia and evaluate the severity of cognitive disturbance. However, the region of the cerebrum that correlates with the MMSE score is not clear. Recently, a new method was developed to analyze regional cerebral blood flow (rCBF) using a Z score imaging system (eZIS). This system shows changes of rCBF when compared with a normal database. In addition, a three-dimensional stereotaxic region of interest (ROI) template (3DSRT), fully automated ROI analysis software, was developed. The objective of this study was to investigate the correlation between rCBF changes and total MMSE score using these new methods. The association between total MMSE score and rCBF changes was investigated in 24 patients (mean age ± standard deviation (SD) 71.5±9.2 years; 6 men and 18 women) with memory impairment using eZIS and 3DSRT. Step-wise multiple regression analysis was used for multivariate analysis, with the total MMSE score as the dependent variable and rCBF change in 24 areas as the independent variable. Total MMSE score was significantly correlated only with the reduction of left hippocampal perfusion, but not the right (P<0.01). Total MMSE score is an important indicator of left hippocampal function. (author)
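
    The Z score maps produced by a system such as eZIS compare each voxel of a patient scan against a normal database; a minimal sketch of the voxel-wise statistic (the sign convention is chosen so that positive Z marks reduced perfusion; the synthetic data are illustrative):

```python
import numpy as np

def z_score_map(patient, normal_scans):
    """Voxel-wise Z score of perfusion decrease relative to a normal
    database: Z = (normal mean - patient) / normal SD, so positive
    values mark hypoperfusion."""
    mean = normal_scans.mean(axis=0)
    sd = normal_scans.std(axis=0, ddof=1)
    return (mean - patient) / np.where(sd > 0, sd, np.inf)

# Synthetic normal database and a patient exactly 2 SD below it everywhere
rng = np.random.default_rng(0)
normals = rng.normal(50.0, 5.0, size=(20, 4, 4))
patient = normals.mean(axis=0) - 2.0 * normals.std(axis=0, ddof=1)
z = z_score_map(patient, normals)
```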

  1. Automated detection of pulmonary embolism (PE) in computed tomographic pulmonary angiographic (CTPA) images: multiscale hierachical expectation-maximization segmentation of vessels and PEs

    Science.gov (United States)

    Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Patel, Smita; Cascade, Philip N.; Sahiner, Berkman; Wei, Jun; Ge, Jun; Kazerooni, Ella A.

    2007-03-01

    CT pulmonary angiography (CTPA) has been reported to be an effective means for clinical diagnosis of pulmonary embolism (PE). We are developing a computer-aided detection (CAD) system to assist radiologists in PE detection in CTPA images. 3D multiscale filters, in combination with a newly designed response function derived from the eigenvalues of Hessian matrices, are used to enhance vascular structures, including vessel bifurcations, and suppress non-vessel structures such as the lymphoid tissues surrounding the vessels. A hierarchical EM estimation is then used to segment the vessels by extracting the high-response voxels at each scale. The segmented vessels are pre-screened for suspicious PE areas using a second adaptive multiscale EM estimation. A rule-based false positive (FP) reduction method was designed to identify the true PEs based on the features of PEs and vessels. 43 CTPA scans were used as an independent test set to evaluate the performance of PE detection. Experienced chest radiologists identified the PE locations, which were used as the "gold standard". 435 PEs were identified in the artery branches, of which 172 and 263 were subsegmental and proximal to the subsegmental, respectively. A computer-detected volume was considered a true positive (TP) when it overlapped with 10% or more of the gold-standard PE volume. Our preliminary test results show that, at an average of 33 and 24 FPs/case, the sensitivities of our PE detection method were 81% and 78%, respectively, for proximal PEs, and 79% and 73%, respectively, for subsegmental PEs. The study demonstrates the feasibility of accurate automated PE detection on CTPA images. Further study is underway to improve the sensitivity and reduce the FPs.
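
    The multiscale Hessian-eigenvalue enhancement described above belongs to the family of "vesselness" response functions; the authors' newly designed response function is not reproduced here, but a simplified Frangi-style 2-D sketch conveys the idea (the parameters beta and c and the SciPy-based implementation are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(img, sigmas=(1.0, 2.0, 4.0), beta=0.5, c=0.5):
    """Simplified multiscale Hessian 'vesselness': for a bright ridge the
    Hessian eigenvalues satisfy |l1| << |l2| with l2 < 0; the response is
    maximised over scales."""
    out = np.zeros(img.shape, dtype=float)
    for s in sigmas:
        # gamma-normalised second derivatives at scale s
        dyy = s * s * gaussian_filter(img, s, order=(2, 0))
        dxx = s * s * gaussian_filter(img, s, order=(0, 2))
        dxy = s * s * gaussian_filter(img, s, order=(1, 1))
        # eigenvalues of the symmetric Hessian [[dyy, dxy], [dxy, dxx]]
        m = 0.5 * (dyy + dxx)
        tmp = np.sqrt((0.5 * (dyy - dxx)) ** 2 + dxy ** 2)
        la, lb = m + tmp, m - tmp
        swap = np.abs(la) > np.abs(lb)
        l1 = np.where(swap, lb, la)                    # smaller |eigenvalue|
        l2 = np.where(swap, la, lb)                    # larger  |eigenvalue|
        rb2 = (l1 / np.where(l2 != 0, l2, 1.0)) ** 2   # blob-vs-line ratio
        s2 = l1 ** 2 + l2 ** 2                         # structure strength
        v = np.exp(-rb2 / (2 * beta ** 2)) * (1.0 - np.exp(-s2 / (2 * c ** 2)))
        v[l2 > 0] = 0.0                                # keep bright ridges only
        out = np.maximum(out, v)
    return out

# Toy image: a bright horizontal ridge (vessel-like structure)
ridge = np.zeros((21, 21))
ridge[10, :] = 1.0
vmap = vesselness_2d(ridge)
```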

  2. Automated analysis of digital fundus autofluorescence images of geographic atrophy in advanced age-related macular degeneration using confocal scanning laser ophthalmoscopy (cSLO)

    Directory of Open Access Journals (Sweden)

    Bindewald A

    2005-04-01

    Abstract Background Fundus autofluorescence (AF) imaging using confocal scanning laser ophthalmoscopy (cSLO) provides an accurate delineation of areas of geographic atrophy (GA). Automated computer-assisted methods for detecting and removing interfering vessels are needed to support the GA quantification process in longitudinal studies and in reading centres. Methods A test tool was implemented that uses region-growing techniques to segment GA areas. An algorithm for correcting illumination shadows can be used to process low-quality images. Agreement between observers and between three different methods was evaluated by two independent readers in a pilot study. Agreement and objectivity were assessed using the Bland-Altman approach. Results The new method (C) identifies vascular structures that interfere with the delineation of GA. Results are comparable to those of two commonly used procedures (A, B), with a mean difference between C and A of -0.67 mm2 (95% CI [-0.99, -0.36]), between B and A of -0.81 mm2 (95% CI [-1.08, -0.53]), and between C and B of 0.15 mm2 (95% CI [-0.12, 0.41]). Objectivity of a method is quantified by the mean difference between observers: A 0.30 mm2 (95% CI [0.02, 0.57]), B -0.11 mm2 (95% CI [-0.28, 0.10]), and C 0.12 mm2 (95% CI [0.02, 0.22]). Conclusion The novel procedure is comparable with regard to objectivity and inter-reader agreement to established methods of quantifying GA. It considerably speeds up the lengthy measurement process in AF images with well-defined GA zones.
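
    The agreement figures above are mean paired differences with 95% confidence intervals, in the Bland-Altman spirit. A minimal sketch of that computation (normal approximation for the CI of the mean; the paper's exact statistical procedure is not reproduced here):

    ```python
    import math

    def mean_difference_ci(a, b, z=1.96):
        """Mean paired difference between two measurement methods (e.g. GA
        areas from methods C and A) with an approximate 95% CI of the mean."""
        diffs = [x - y for x, y in zip(a, b)]
        n = len(diffs)
        mean = sum(diffs) / n
        var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
        se = math.sqrt(var / n)                              # standard error of the mean
        return mean, (mean - z * se, mean + z * se)
    ```

    If the CI excludes zero, the two methods disagree systematically, as seen for the B-versus-A comparison above.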

  3. Automation Security

    OpenAIRE

    Mirzoev, Dr. Timur

    2014-01-01

    Web-based Automated Process Control systems are a new type of application that uses the Internet to control industrial processes with access to real-time data. Supervisory control and data acquisition (SCADA) networks contain computers and applications that perform key functions in providing essential services and commodities (e.g., electricity, natural gas, gasoline, water, waste treatment, transportation) to all Americans. As such, they are part of the nation's critical infrastructure...

  4. An automated image analysis framework for segmentation and division plane detection of single live Staphylococcus aureus cells which can operate at millisecond sampling time scales using bespoke Slimfield microscopy

    CERN Document Server

    Wollman, Adam J M; Foster, Simon; Leake, Mark C

    2016-01-01

    Staphylococcus aureus is an important pathogen, giving rise to antimicrobial-resistant strains such as Methicillin-Resistant S. aureus (MRSA). Here we report an image analysis framework for automated detection and image segmentation of cells in S. aureus cell clusters, and explicit identification of their cell division planes. We use a new combination of several existing analytical tools of image analysis to detect cellular and subcellular morphological features relevant to cell division from millisecond-time-scale sampled images of live pathogens at single-molecule detection precision. We demonstrate this approach using the fluorescent reporter GFP fused to the protein EzrA, which localises to a mid-cell plane during division and is involved in regulation of cell size and division. This image analysis framework presents a valuable platform from which to study candidate new antimicrobials which target the cell division machinery, but may also have more general application in detecting morphological...
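
    The framework's segmentation tools are not specified in the abstract. As a minimal stand-in for the "detect cells in a cluster" step, here is plain 4-connected component labelling of a thresholded pixel mask; the real pipeline combines several more sophisticated tools, so this is only an illustration of the basic idea.

    ```python
    from collections import deque

    def label_components(mask):
        """Label 4-connected foreground regions of a boolean pixel mask.
        Returns a label grid and the number of components found."""
        h, w = len(mask), len(mask[0])
        labels = [[0] * w for _ in range(h)]
        current = 0
        for sy in range(h):
            for sx in range(w):
                if mask[sy][sx] and not labels[sy][sx]:
                    current += 1                       # start a new component
                    q = deque([(sy, sx)])
                    labels[sy][sx] = current
                    while q:                           # breadth-first flood fill
                        y, x = q.popleft()
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                                labels[ny][nx] = current
                                q.append((ny, nx))
        return labels, current
    ```

    Each labelled region is then a candidate cell whose shape features (e.g. a division plane) can be measured.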

  5. Automated Tubule Nuclei Quantification and Correlation with Oncotype DX risk categories in ER+ Breast Cancer Whole Slide Images.

    Science.gov (United States)

    Romo-Bucheli, David; Janowczyk, Andrew; Gilmore, Hannah; Romero, Eduardo; Madabhushi, Anant

    2016-01-01

    Early stage estrogen receptor positive (ER+) breast cancer (BCa) treatment is based on the presumed aggressiveness and likelihood of cancer recurrence. Oncotype DX (ODX) and other gene expression tests have allowed for distinguishing the more aggressive ER+ BCa requiring adjuvant chemotherapy from the less aggressive cancers benefiting from hormonal therapy alone. However, these tests are expensive, tissue destructive and require specialized facilities. Interestingly, BCa grade has been shown to be correlated with the ODX risk score. Unfortunately, the Bloom-Richardson (BR) grade determined by pathologists can be variable. A constituent category in BR grading is tubule formation. This study aims to develop a deep learning classifier to automatically identify tubule nuclei from whole slide images (WSI) of ER+ BCa, the hypothesis being that the ratio of tubule nuclei to the overall number of nuclei (a tubule formation indicator, TFI) correlates with the corresponding ODX risk categories. This correlation was assessed in 7513 fields extracted from 174 WSI. The results suggest that low ODX/BR cases have a larger TFI than high ODX/BR cases (p < 0.01). The low ODX/BR cases also presented a larger TFI than that obtained for the rest of the cases (p < 0.05). Finally, the high ODX/BR cases have a significantly smaller TFI than that obtained for the rest of the cases (p < 0.01). PMID:27599752
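
    The TFI is defined directly in the abstract; how the group comparisons were tested is not stated, so the rank-sum statistic below is an assumed (and common) choice for comparing two TFI distributions, shown without the p-value machinery:

    ```python
    def tubule_formation_indicator(n_tubule_nuclei, n_total_nuclei):
        """TFI as described in the abstract: ratio of tubule nuclei to all nuclei."""
        return n_tubule_nuclei / n_total_nuclei

    def mann_whitney_u(group_a, group_b):
        """Mann-Whitney U statistic for comparing two TFI samples.
        Counts pairs where a > b, with ties counted as 0.5.
        (Statistic only; a p-value needs a null distribution on top.)"""
        u = 0.0
        for x in group_a:
            for y in group_b:
                u += 1.0 if x > y else (0.5 if x == y else 0.0)
        return u
    ```

    A large U relative to `len(group_a) * len(group_b) / 2` indicates that the first group (here, low ODX/BR cases) tends to have the larger TFI values.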

  6. Analysis of magnetosome chains in magnetotactic bacteria by magnetic measurements and automated image analysis of electron micrographs.

    Science.gov (United States)

    Katzmann, E; Eibauer, M; Lin, W; Pan, Y; Plitzko, J M; Schüler, D

    2013-12-01

    Magnetotactic bacteria (MTB) align along the Earth's magnetic field by the activity of intracellular magnetosomes, which are membrane-enveloped magnetite or greigite particles that are assembled into well-ordered chains. Formation of magnetosome chains was found to be controlled by a set of specific proteins in Magnetospirillum gryphiswaldense and other MTB. However, the contribution of abiotic factors to magnetosome chain assembly has not been fully explored. Here, we first analyzed the effect of growth conditions on magnetosome chain formation in M. gryphiswaldense by electron microscopy. Whereas higher temperatures (30 to 35°C) and high oxygen concentrations caused increasingly disordered chains and smaller magnetite crystals, growth at 20°C under anoxic conditions resulted in long chains with mature cuboctahedron-shaped crystals. In order to analyze the magnetosome chain in electron microscopy data sets in a more quantitative and unbiased manner, we developed a computerized image analysis algorithm. The collected data comprised the cell dimensions, particle size and number, and the intracellular position and extension of the magnetosome chain. The chain analysis program (CHAP) was used to evaluate the effects of genetic and growth conditions on magnetosome chain formation. This was compared and correlated to data obtained from bulk magnetic measurements of wild-type (WT) and mutant cells displaying different chain configurations. These techniques were used to differentiate mutants with magnetosome chain defects on a bulk scale. PMID:24096429
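
    CHAP itself is not reproduced in the abstract, but the "extension of the magnetosome chain" can be illustrated with hypothetical geometric metrics computed from ordered particle centroids (the metric names and definitions below are assumptions in the spirit of such measurements, not CHAP's actual output):

    ```python
    import math

    def chain_metrics(centroids):
        """End-to-end distance, summed inter-particle path length, and
        straightness (end-to-end / path) of an ordered chain of 2D centroids.
        Straightness is 1.0 for a perfectly linear chain, lower for bent or
        disordered chains."""
        path = sum(math.dist(p, q) for p, q in zip(centroids, centroids[1:]))
        end_to_end = math.dist(centroids[0], centroids[-1])
        straightness = end_to_end / path if path else 1.0
        return end_to_end, path, straightness
    ```

    A straightness near 1 would correspond to the long, well-ordered chains seen at 20°C under anoxic conditions, while disordered chains score lower.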

  7. Fully automated TV-image analysis of the cell-cycle: comparison of the PLM method with determinations of the percentage and the DNA content of labelled cells.

    Science.gov (United States)

    Wachsmuth, E D; van Golitschek, M; Macht, F; Maurer-Schultze, B

    1988-01-01

    A cell-cycle analysis based on a fully automated TV-image scanning system is proposed to replace the laborious PLM method. To compare the efficiency of the two procedures, cell-cycle parameters were assessed in Ehrlich (diploid and hyperdiploid), L-1210, and JB-1 mouse ascites tumours and in rat jejunal crypts. The percentages of labelled mitoses (PLM) were counted visually on Feulgen-stained autoradiographs obtained at various times after a single 3H-thymidine pulse. The fraction of labelled cells (P) and the DNA ratio of labelled and unlabelled cells were measured by TV-image analysis in the same slides and plotted against time. Within practical limits, TV-image analysis using the P curve gives the same results as the PLM method. Using the P curve has the important advantage that its first part, beginning at the time of 3H-thymidine injection and ending at the first maximum, furnishes more information about the cell cycle than the corresponding part of the PLM curve. It can be used to compute tG2M, tS and the ratio of the growth fraction index to the cell-cycle time (IP/tC), whereas the first part of the PLM curve reveals only the length of the S-phase (tS). The IP/tC ratio is a readily accessible measure of growth and increases when the cells divide more frequently. Cell death rates may be neglected since the ratio is determined within less than the duration of one cell cycle. Moreover, the data from the first part of the P curve indicate whether there is a large non-growth fraction. If the non-growth fraction is small, i.e. if IP is approximately 1, the P curve need only be measured until the first maximum is reached, so that fewer samples and animals are required. If the non-growth fraction is large or unknown, the cell-cycle parameters are calculated by reference to the position and size not only of the first minimum and the first maximum, but also of the second minimum of the P curve.
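
    Several of the parameters above are read off the position and height of the first maximum of the sampled P curve. A naive helper for locating that feature (purely illustrative; it assumes dense, smooth sampling and is not the authors' curve-fitting procedure):

    ```python
    def first_maximum(times, values):
        """Return (time, value) of the first local maximum in a sampled curve,
        e.g. the measured fraction of labelled cells P(t). Falls back to the
        last sample if no interior peak is found."""
        for i in range(1, len(values) - 1):
            if values[i] >= values[i - 1] and values[i] > values[i + 1]:
                return times[i], values[i]
        return times[-1], values[-1]
    ```

    With noisy experimental data one would smooth or fit the curve first; this sketch only shows where the "first maximum" referenced in the text comes from.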

  8. SU-E-I-81: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Adult Anthropomorphic and ACR Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Mahmood, U; Erdi, Y; Wang, W [Memorial Sloan Kettering Cancer Center, NY, NY (United States)

    2014-06-01

    Purpose: To assess the impact of General Electric's (GE) automated tube potential selection algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of an adult anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, Auto mA (180 to 380 mA), noise index (NI) = 14, adaptive statistical iterative reconstruction (ASiR) of 20%, 0.8 s rotation time. Image quality was evaluated by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. The NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available ASiR settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: The CNR for the adult male phantom was found to decrease from CNR = 0.912 ± 0.045 for the baseline protocol without kVa to CNR = 0.756 ± 0.049 with kVa activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude, as reflected in the increased CNR = 0.903 ± 0.023. The difference in the central liver dose with and without kVa was found to be 0.07%. Conclusion: Dose reduction was insignificant in the adult phantom. As determined by NPS analysis, ASiR of 40% produced images with noise texture similar to the baseline protocol. However, the CNR at ASiR of 40% with kVa fails to meet the current ACR CNR passing requirement of 1.0.
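
    The NPS construction — Fourier transform of mean-subtracted uniform-region data, ensemble-averaged and normalized — can be sketched in 2D as follows. The abstract uses a 3D FFT of the uniformity section; this per-slice 2D version and its normalization convention are illustrative assumptions.

    ```python
    import numpy as np

    def noise_power_spectrum_2d(rois, pixel_size=1.0):
        """Ensemble-averaged 2D noise power spectrum from uniform-region ROIs.
        Each ROI is mean-subtracted so only the noise contributes; powers are
        averaged over ROIs and scaled by pixel area over ROI size."""
        acc = None
        for roi in rois:
            roi = np.asarray(roi, dtype=float)
            roi = roi - roi.mean()                 # remove the mean (DC) signal
            power = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
            acc = power if acc is None else acc + power
        ny, nx = np.asarray(rois[0]).shape
        return acc / len(rois) * (pixel_size ** 2) / (nx * ny)
    ```

    The peak frequency of this spectrum is what the PFD metric above compares between protocols: iterative reconstruction typically shifts noise power toward lower spatial frequencies, changing the noise texture even at equal noise magnitude.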

  9. Automated tubule nuclei quantification and correlation with oncotype DX risk categories in ER+ breast cancer whole slide images

    Science.gov (United States)

    Romo-Bucheli, David; Janowczyk, Andrew; Romero, Eduardo; Gilmore, Hannah; Madabhushi, Anant

    2016-03-01

    Early stage estrogen receptor positive (ER+) breast cancer (BCa) treatment is based on the presumed aggressiveness and likelihood of cancer recurrence. The primary conundrum in the treatment and management of early stage ER+ BCa is identifying which of these cancers are candidates for adjuvant chemotherapy and which patients will respond to hormonal therapy alone. This decision could spare some patients the inherent toxicity associated with adjuvant chemotherapy. Oncotype DX (ODX) and other gene expression tests have allowed for distinguishing the more aggressive ER+ BCa requiring adjuvant chemotherapy from the less aggressive cancers benefiting from hormonal therapy alone. However, these gene expression tests tend to be expensive, tissue destructive and require physical shipping of tissue blocks for the test to be done. Interestingly, breast cancer grade in these tumors has been shown to be highly correlated with the ODX risk score. Unfortunately, studies have shown that the Bloom-Richardson (BR) grade determined by pathologists can be highly variable. One of the constituent categories in BR grading is the quantification of tubules. The goal of this study was to develop a deep learning neural network classifier to automatically identify tubule nuclei from whole slide images (WSI) of ER+ BCa, the hypothesis being that the ratio of tubule nuclei to the overall number of nuclei would correlate with the corresponding ODX risk categories. The performance of the tubule nuclei deep learning strategy was evaluated on a set of 61 high power fields. Under 5-fold cross-validation, the average precision and recall measures were 0.72 and 0.56, respectively. In addition, the correlation with the ODX risk score was assessed in a set of 7513 high power fields extracted from 174 WSI, each from a different patient (at most 50 high power fields per patient study were used). The ratio between the number of tubule and non-tubule nuclei was computed for each WSI. The results suggest that for BCa...
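
    For reference, the precision and recall figures reported for the classifier are standard detection-count ratios; a minimal sketch (the authors' evaluation code is not reproduced here):

    ```python
    def precision_recall(tp, fp, fn):
        """Precision and recall from detection counts:
        precision = TP / (TP + FP), recall = TP / (TP + FN).
        Under k-fold cross-validation these are computed per fold and averaged."""
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall
    ```

    A precision of 0.72 with recall 0.56, as reported above, means the classifier's positive calls are mostly correct but it misses a substantial fraction of true tubule nuclei.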

  10. Automation in biological crystallization.

    Science.gov (United States)

    Stewart, Patrick Shaw; Mueller-Dieckmann, Jochen

    2014-06-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given.

  11. Automated visual inspection of textile

    DEFF Research Database (Denmark)

    Jensen, Rune Fisker; Carstensen, Jens Michael

    1997-01-01

    A method for automated inspection of two types of textile is presented. The goal of the inspection is to determine defects in the textile. A prototype was constructed to simulate the textile production line. At the prototype, the images of the textile are acquired by a high-speed line scan camera...

  12. Automated Bone Scan Index as a quantitative imaging biomarker in metastatic castration-resistant prostate cancer patients being treated with enzalutamide

    DEFF Research Database (Denmark)

    Anand, Aseem; Morris, Michael J; Larson, Steven M;

    2016-01-01

    BACKGROUND: Having performed analytical validation studies, we are now assessing the clinical utility of the upgraded automated Bone Scan Index (BSI) in metastatic castration-resistant prostate cancer (mCRPC). In the present study, we retrospectively evaluated the discriminatory strength... Prostate-specific antigen (PSA), hemoglobin (HgB), and alkaline phosphatase (ALP) were obtained at baseline. Changes in automated BSI and PSA were obtained from patients who had a bone scan at week 12 of treatment follow-up. Automated BSI was obtained using the analytically validated EXINI Bone(BSI) version 2. Kendall... = 0.017. Treatment follow-up bone scans were available from 62 patients. Both the change in BSI and the percent change in PSA were predictive of OS. However, the combined predictive model of percent PSA change and change in automated BSI (C-index 0.77) was significantly higher than that of percent PSA change...
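
    The C-index quoted above measures how well a risk score orders patients by survival. A minimal sketch of Harrell's concordance index for right-censored data (an illustration of the metric, not the study's statistical software):

    ```python
    def concordance_index(times, events, scores):
        """Harrell's C-index: fraction of usable patient pairs in which the
        higher risk score belongs to the patient with the shorter observed
        survival time. times: observed follow-up times; events: 1 if death
        observed, 0 if censored; scores: predicted risk (higher = worse)."""
        num = den = 0.0
        n = len(times)
        for i in range(n):
            for j in range(n):
                # a pair is usable only if patient i is known to fail first
                if times[i] < times[j] and events[i] == 1:
                    den += 1
                    if scores[i] > scores[j]:
                        num += 1          # concordant pair
                    elif scores[i] == scores[j]:
                        num += 0.5        # tied scores count half
        return num / den if den else float("nan")
    ```

    A C-index of 0.5 is chance-level ordering and 1.0 is perfect ordering, so the combined BSI + PSA model's 0.77 indicates substantially better-than-chance discrimination.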

  13. Automation of data acquisition in electron crystallography.

    Science.gov (United States)

    Cheng, Anchi

    2013-01-01

    General considerations for using automation software for acquiring high-resolution images of 2D crystals under low-dose conditions are presented. Protocol modifications specific to this application in Leginon are provided.

  14. Automated Periodontal Diseases Classification System

    OpenAIRE

    Aliaa A. A. Youssif; Abeer Saad Gawish,; Mohammed Elsaid Moussa

    2012-01-01

    This paper presents an efficient and innovative system for automated classification of periodontal diseases. The strength of our technique lies in the fact that it incorporates knowledge from the patients' clinical data, along with features automatically extracted from the Haematoxylin and Eosin (H&E) stained microscopic images. Our system uses image processing techniques based on color deconvolution, morphological operations, and watershed transforms for epithelium & connective tissue se...

  15. Automated Budget System

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  16. The evaluation of a deformable image registration segmentation technique for semi-automating internal target volume (ITV) production from 4DCT images of lung stereotactic body radiotherapy (SBRT) patients

    International Nuclear Information System (INIS)