WorldWideScience

Sample records for automated content-based image

  1. Evaluation of a content-based retrieval system for blood cell images with automated methods.

    Science.gov (United States)

    Seng, Woo Chaw; Mirisaee, Seyed Hadi

    2011-08-01

Content-based image retrieval techniques have been extensively studied for the past few years. With the growth of digital medical image databases, the demand for content-based analysis and retrieval tools has increased remarkably. Blood cell images are a key diagnostic tool for hematologists. An automated system that can retrieve relevant blood cell images correctly and efficiently would save hematologists' time and effort. The purpose of this work is to develop such a content-based image retrieval system. Global color histogram and wavelet-based methods are used in the prototype. The system allows users to search by providing a query image and selecting one of four implemented methods. The obtained results demonstrate that the proposed extended query refinement has the potential to capture a user's high-level query and perception subjectivity by dynamically giving better query combinations. Color-based methods performed better than wavelet-based methods with regard to precision, recall rate and retrieval time. Shape and density of blood cells are suggested as measurements for future improvement. The system developed is useful for undergraduate education. PMID:20703533
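
The global color histogram retrieval that this abstract describes can be illustrated with a minimal, self-contained sketch. The function names and the 4-bins-per-channel quantization below are illustrative choices, not taken from the paper:

```python
from collections import Counter

def color_histogram(pixels, bins_per_channel=4):
    """Quantize each RGB channel into `bins_per_channel` bins and
    return a normalized global color histogram."""
    step = 256 // bins_per_channel
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = len(pixels)
    return {bin_: n / total for bin_, n in counts.items()}

def histogram_intersection(h1, h2):
    """Histogram-intersection similarity in [0, 1]; 1 means identical."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

def retrieve(query_pixels, database, top_k=3):
    """Rank database images (name -> list of RGB pixels) by similarity
    to the query image, as a query-by-example CBIR system would."""
    q = color_histogram(query_pixels)
    scored = [(histogram_intersection(q, color_histogram(px)), name)
              for name, px in database.items()]
    return [name for _, name in sorted(scored, reverse=True)[:top_k]]
```

A mostly-red query will rank a red image above a blue one; precision and recall, which the paper uses for evaluation, are then computed over such ranked lists.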

  2. Multimedia input in automated image annotation and content-based retrieval

    Science.gov (United States)

    Srihari, Rohini K.

    1995-03-01

    This research explores the interaction of linguistic and photographic information in an integrated text/image database. By utilizing linguistic descriptions of a picture (speech and text input) coordinated with pointing references to the picture, we extract information useful in two aspects: image interpretation and image retrieval. In the image interpretation phase, objects and regions mentioned in the text are identified; the annotated image is stored in a database for future use. We incorporate techniques from our previous research on photo understanding using accompanying text: a system, PICTION, which identifies human faces in a newspaper photograph based on the caption. In the image retrieval phase, images matching natural language queries are presented to a user in a ranked order. This phase combines the output of (1) the image interpretation/annotation phase, (2) statistical text retrieval methods, and (3) image retrieval methods (e.g., color indexing). The system allows both point and click querying on a given image as well as intelligent querying across the entire text/image database.

  3. CONTENT BASED BATIK IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    A. Haris Rangkuti

    2014-01-01

Full Text Available Content Based Batik Image Retrieval (CBBIR) is an area of research that focuses on image processing based on the characteristic motifs of batik. A batik image has a unique motif compared with other images: its uniqueness lies in the characteristics of its texture and shape, which are distinct from those of other images. Study of a batik image must start from a preprocessing stage, in which all color information is removed through grayscale conversion. This is followed by a feature extraction process that captures the characteristic motif of every kind of batik using edge detection. After the characteristic motifs are obtained visually, they are quantified using four texture characteristic functions: mean, energy, entropy and standard deviation. Characteristic functions can be added as needed. The results of these calculations are made more specific using the Daubechies type 2 wavelet transform and invariant moments. The result is an index value for every type of batik. Because the same motif can occur in different sizes, each kind of motif is divided into three sizes: small, medium and large. The performance of batik image similarity using this method is about 90-92%.
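
The four first-order texture measures named in this abstract (mean, energy, entropy, standard deviation) can be computed directly from the grayscale values. This is a generic sketch, not the authors' implementation:

```python
import math

def texture_statistics(gray):
    """Compute mean, energy, entropy and standard deviation from a
    flat list of grayscale values in [0, 255]."""
    n = len(gray)
    mean = sum(gray) / n
    std = math.sqrt(sum((g - mean) ** 2 for g in gray) / n)
    # Probability of each grey level, needed for energy and entropy.
    counts = {}
    for g in gray:
        counts[g] = counts.get(g, 0) + 1
    probs = [c / n for c in counts.values()]
    energy = sum(p * p for p in probs)           # 1.0 for a flat image
    entropy = -sum(p * math.log2(p) for p in probs)  # 0.0 for a flat image
    return {"mean": mean, "energy": energy, "entropy": entropy, "std": std}
```

In a system like the one described, such statistics would be computed per motif region after edge detection, and extended with wavelet and invariant-moment features.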

  4. Metadata for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Adrian Sterca

    2010-12-01

    Full Text Available This paper presents an image retrieval technique that combines content based image retrieval with pre-computed metadata-based image retrieval. The resulting system will have the advantages of both approaches: the speed/efficiency of metadata-based image retrieval and the accuracy/power of content-based image retrieval.

  5. Metadata for Content-Based Image Retrieval

    OpenAIRE

    Adrian Sterca; Daniela Miron

    2010-01-01

    This paper presents an image retrieval technique that combines content based image retrieval with pre-computed metadata-based image retrieval. The resulting system will have the advantages of both approaches: the speed/efficiency of metadata-based image retrieval and the accuracy/power of content-based image retrieval.

  6. SURVEY ON CONTENT BASED IMAGE RETRIEVAL

    OpenAIRE

    S.R.Surya; G. Sasikala

    2011-01-01

The digital image data is rapidly expanding in quantity and heterogeneity. Traditional information retrieval techniques do not meet users' demands, so there is a need to develop an efficient system for content based image retrieval. Content based image retrieval is becoming a source of exact and fast retrieval. In this paper the techniques of content based image retrieval are discussed, analysed and compared. Here, the compared features include color correlogram, texture, shape, edge den...

  7. Content Base Image Retrieval Using Phong Shading

    OpenAIRE

    Uday Pratap Singh; Sanjeev Jain; Gulfishan Firdose Ahmed

    2010-01-01

The digital image data is rapidly expanding in quantity and heterogeneity. Traditional information retrieval techniques do not meet users' demands, so there is a need to develop an efficient system for content based image retrieval. Content based image retrieval means retrieval of images from a database on the basis of visual features of the image, such as color, texture, etc. In our proposed method, features are extracted after applying Phong shading to the input image. Phong shading, flattering ou...

  8. CONTENT BASED IMAGE RETRIEVAL : A REVIEW

    OpenAIRE

    Shereena V.B; Julie M.David

    2014-01-01

    In a content-based image retrieval system (CBIR), the main issue is to extract the image features that effectively represent the image contents in a database. Such an extraction requires a detailed evaluation of retrieval performance of image features. This paper presents a review of fundamental aspects of content based image retrieval including feature extraction of color and texture features. Commonly used color features including color moments, color histogram and color corr...

  9. Content Based Image Indexing and Retrieval

    OpenAIRE

    Bhute, Avinash N; B B Meshram

    2014-01-01

In this paper, we present an efficient content based image retrieval system which employs the color, texture and shape information of images to facilitate the retrieval process. For efficient feature extraction, we extract the color, texture and shape features of images automatically using edge detection, which is widely used in signal processing and image compression. To facilitate speedy retrieval, we implement the antipole-tree algorithm for indexing the images.

  10. A Survey: Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Javeria Ami

    2014-05-01

Full Text Available The field of image processing is addressed significantly by the role of CBIR. Content based image retrieval depends mainly on the particular query, which may be submitted as a sketch, a drawing, or similar visual features. Many algorithms are used to extract features of a similar nature. The process can be optimized by using feedback from the retrieval step. Colour and shape can be analysed from the visual contents of an image. Here, neural network and relevance feedback techniques for image retrieval are discussed.

  11. Content based Image Retrieval from Forensic Image Databases

    OpenAIRE

    Swati A. Gulhane; Dr. Ajay. A. Gurjar

    2015-01-01

Due to the proliferation of video and image data in digital form, Content based Image Retrieval has become a prominent research topic. In forensic sciences, digital data such as criminal images, fingerprints and scene images are widely used. The arrangement of such large image data therefore becomes a major issue, such as how to find an image of interest quickly. There is a great need for developing an efficient technique for finding the images. In order to find an image, im...

  12. Content based Image Retrieval from Forensic Image Databases

    Directory of Open Access Journals (Sweden)

    Swati A. Gulhane

    2015-03-01

Full Text Available Due to the proliferation of video and image data in digital form, Content based Image Retrieval has become a prominent research topic. In forensic sciences, digital data such as criminal images, fingerprints and scene images are widely used. The arrangement of such large image data therefore becomes a major issue, such as how to find an image of interest quickly. There is a great need for developing an efficient technique for finding the images. In order to find an image, the image has to be represented by certain features. Color, texture and shape are three important visual features of an image. Searching for images using color, texture and shape features has attracted much attention. There are many content based image retrieval techniques in the literature. This paper gives an overview of different existing methods used for content based image retrieval and also suggests an efficient image retrieval method for a digital image database of criminal photos, using dynamic dominant color, texture and shape features of an image, which gives an effective retrieval result.

  13. Color Emotions in Large Scale Content Based Image Indexing

    OpenAIRE

    Solli, Martin

    2011-01-01

    Traditional content based image indexing aims at developing algorithms that can analyze and index images based on their visual content. A typical approach is to measure image attributes, like colors or textures, and save the result in image descriptors, which then can be used in recognition and retrieval applications. Two topics within content based image indexing are addressed in this thesis: Emotion based image indexing, and font recognition. The main contribution is the inclusion of high-l...

  14. CONTENT BASED MEDICAL IMAGE RETRIEVAL USING BINARY ASSOCIATION RULES

    OpenAIRE

    Akila; Uma Maheswari

    2013-01-01

In this study, we propose a content-based medical image retrieval framework based on binary association rules to augment the results of medical image diagnosis, for supporting clinical decision making. Specifically, this work is applied to scanned Magnetic Resonance brain Images (MRI), and the proposed Content Based Image Retrieval (CBIR) process aims to enhance the relevancy rate of retrieved images. The pertinent features of a query brain image are extracted by applying third order moment inva...

  15. Building high dimensional imaging database for content based image search

    Science.gov (United States)

    Sun, Qinpei; Sun, Jianyong; Ling, Tonghui; Wang, Mingqing; Yang, Yuanyuan; Zhang, Jianguo

    2016-03-01

In medical imaging informatics, content-based image retrieval (CBIR) techniques are employed to aid radiologists in the retrieval of images with similar image contents. CBIR uses visual contents, normally called image features, to search images from large scale image databases according to users' requests in the form of a query image. However, most current CBIR systems require distance computations over image feature vectors to perform a query, and these computations can be time consuming when the number of image features grows large, which limits the usability of the systems. In this presentation, we propose a novel framework which uses a high dimensional database to index the image features, to improve the accuracy and retrieval speed of CBIR in an integrated RIS/PACS.

  16. Content Based Image Retrieval : Classification Using Neural Networks

    OpenAIRE

    Shereena V.B; Julie M.David

    2014-01-01

    In a content-based image retrieval system (CBIR), the main issue is to extract the image features that effectively represent the image contents in a database. Such an extraction requires a detailed evaluation of retrieval performance of image features. This paper presents a review of fundamental aspects of content based image retrieval including feature extraction of color and texture features. Commonly used color features including color moments, color histogram and color corr...

  17. Content Based Image Retrieval using Color and Texture

    OpenAIRE

    Manimala Singha; K.Hemachandran

    2012-01-01

The increased need of content based image retrieval technique can be found in a number of different domains such as Data Mining, Education, Medical Imaging, Crime Prevention, Weather forecasting, Remote Sensing and Management of Earth Resources. This paper presents the content based image retrieval, using features like texture and color, called WBCHIR (Wavelet Based Color Histogram Image Retrieval). The texture and color features are extracted through wavelet transformation and color histogr...

  18. PERFORMANCE EVALUATION OF CONTENT BASED IMAGE RETRIEVAL FOR MEDICAL IMAGES

    Directory of Open Access Journals (Sweden)

    SASI KUMAR. M

    2013-04-01

Full Text Available Content-based image retrieval (CBIR) technology benefits not only the management of large image collections, but also clinical care, biomedical research, and education. Digital images such as X-Ray, MRI and CT images are used for diagnosing and planning treatment schedules. Thus, visual information management is challenging, as the quantity of data available is huge. Currently, the utilization of available medical databases is limited by image retrieval issues. Retrieval of archived digital medical images is always challenging, and is being researched all the more because images are of great importance in patient diagnosis, therapy, medical reference, and medical training. In this paper, an image matching scheme using the Discrete Sine Transform for relevant feature extraction is presented. The efficiency of different algorithms for classifying the features to retrieve medical images is investigated.

  19. Multi Feature Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Rajshree S. Dubey,

    2010-09-01

Full Text Available There are a number of prevailing methods for image mining. This paper covers the features of four techniques, i.e. color histogram, color moment, texture, and Edge Histogram Descriptor. The nature of an image is basically based on human perception of the image, while machine interpretation of the image is based on the contours and surfaces of the image. The study of image mining is a very challenging task because it involves pattern recognition, which is a very important tool for machine vision systems. A combination of four feature extraction methods is used, namely color histogram, color moment, texture, and Edge Histogram Descriptor, with provision to add new features in future for better retrieval efficiency. In this paper the four techniques are combined: the Euclidean distances are calculated for each feature, then added and averaged. The user interface is provided by Matlab. The image properties analyzed in this work use computer vision and image processing algorithms: for color, the histograms of the images are computed; for texture, co-occurrence matrix based entropy, energy, etc. are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For retrieval of images, the averages of the four techniques are taken and the resulting images are retrieved.
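
The fusion rule this abstract describes, averaging per-feature Euclidean distances, can be sketched generically. The feature names and dictionaries below are illustrative, not taken from the paper:

```python
import math

def euclidean(a, b):
    """Euclidean (L2) distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def combined_distance(query_feats, image_feats):
    """Average the per-feature Euclidean distances. Feature dicts map a
    feature name (e.g. 'histogram', 'moment', 'texture', 'edge') to a
    vector; the names here are placeholders."""
    ds = [euclidean(query_feats[k], image_feats[k]) for k in query_feats]
    return sum(ds) / len(ds)

def rank(query_feats, database):
    """Order database image names by ascending combined distance."""
    return sorted(database,
                  key=lambda name: combined_distance(query_feats, database[name]))
```

Averaging assumes the four distances are on comparable scales; in practice each feature distance is often normalized before fusion.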

  20. Advanced Methods for Localized Content Based Image Retrieval

    OpenAIRE

    Radhey Shyam; Pooja Srivastava

    2012-01-01

Localized Content based image retrieval is an effective technique for image retrieval in large databases. It is the retrieval of images based on visual features such as color, texture and shape. In this paper, our desired content of an image is not holistic, but localized. Specifically, we define Localized Content-Based Image Retrieval, where the user is only interested in a portion of the image, and the rest of the image is irrelevant. Some work has already been done in this direction. We...

  1. Survey paper on Sketch Based and Content Based Image Retrieval

    OpenAIRE

    Gaidhani, Prachi A.; S. B. Bagal

    2015-01-01

This survey paper presents an overview of the development of Sketch Based Image Retrieval (SBIR) and Content based image retrieval (CBIR) in the past few years. There has been enormous growth in the volume of images, as well as far-flung applications in many fields. The main attributes used to represent as well as index images are color, shape, texture, and spatial layout. These features of images are extracted to check similarity among the images. Generation of a special query is the main problem of content based ...

  2. Information Theoretic Similarity Measures for Content Based Image Retrieval.

    Science.gov (United States)

    Zachary, John; Iyengar, S. S.

    2001-01-01

    Content-based image retrieval is based on the idea of extracting visual features from images and using them to index images in a database. Proposes similarity measures and an indexing algorithm based on information theory that permits an image to be represented as a single number. When used in conjunction with vectors, this method displays…

  3. Content Based Image Retrieval by Multi Features using Image Blocks

    Directory of Open Access Journals (Sweden)

    Arpita Mathur

    2013-12-01

Full Text Available Content based image retrieval (CBIR) is an effective method of retrieving images from large image resources. CBIR is a technique in which images are indexed by extracting their low level features, such as color, texture, shape, and spatial location. Effective and efficient feature extraction mechanisms are required to improve existing CBIR performance. This paper presents a novel approach to a CBIR system in which higher retrieval efficiency is achieved by combining the information of the image features color, shape and texture. The color feature is extracted using color histograms for image blocks, the Canny edge detection algorithm is used for the shape feature, and HSB extraction in blocks is used for texture feature extraction. The feature set of the query image is compared with the feature set of each image in the database. The experiments show that the fusion of multiple features gives better retrieval results than the approach used by Rao et al. This paper presents a comparative study of the performance of the two different approaches to a CBIR system in which the image features color, shape and texture are used.

  4. Secure content based image retrieval in medical databases

    OpenAIRE

    Bellafqira, Reda; Coatrieux, Gouenou; Bouslimi, Dalel; Quellec, Gwénolé

    2015-01-01

In this paper, we propose an implementation in the encrypted domain of a content based image retrieval (CBIR) method. It allows a physician to retrieve the most similar images to a query image in an outsourced database while preserving data confidentiality. Image retrieval is based on image signatures we build in the homomorphically encrypted wavelet transform domain. Experimental results show it is possible to achieve retrieval performance as good as if images were processed non-encrypted.

  5. A Survey on Content Based Image Retrieval System Using HADOOP

    OpenAIRE

    Mrs. Urvashi Trivedi*; Mrs. Kishori Shekoker

    2014-01-01

Content-based image retrieval (CBIR), an application of computer vision techniques, addresses the problem of searching for digital images in large databases. This emerging approach includes the Local Binary Pattern (LBP), Local Derivative Pattern (LDP), Local Ternary Pattern (LTP) and Magnitude Pattern. The ability to handle very large amounts of image data is important for image analysis and retrieval applications. The digital explosion of image databases over the internet poses a ch...
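
Of the patterns this abstract lists, the Local Binary Pattern (LBP) is the simplest to sketch: each pixel's 8 neighbours are thresholded against the centre and packed into an 8-bit code. The bit ordering below is one common convention, chosen for illustration:

```python
def lbp_code(patch):
    """LBP code for one 3x3 grayscale patch: threshold the 8 neighbours
    against the centre pixel and pack the bits clockwise from the
    top-left corner into an 8-bit integer."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, v in enumerate(neighbours):
        if v >= c:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """256-bin LBP histogram over all interior pixels of a 2-D grayscale
    image (list of equal-length rows); this histogram is the texture
    descriptor used for indexing and comparison."""
    hist = [0] * 256
    for i in range(1, len(image) - 1):
        for j in range(1, len(image[0]) - 1):
            patch = [row[j - 1:j + 2] for row in image[i - 1:i + 2]]
            hist[lbp_code(patch)] += 1
    return hist
```

On a perfectly flat image every neighbour equals the centre, so every interior pixel gets code 255.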

  6. The Use of QBIC Content-Based Image Retrieval System

    OpenAIRE

    Ching-Yi Wu; Lih-Juan Chan Lin; Yuen-Hsien Tseng

    2004-01-01

The fast increase in digital images has drawn increasing attention to the development of image retrieval technologies. Content-based image retrieval (CBIR) has become an important approach to retrieving image data from a large collection. This article reports our results on the use and user study of a CBIR system. Thirty-eight students majoring in art and design were invited to use IBM's QBIC (Query by Image Content) system through the Internet. Data on their information needs, behavio...

  7. Content-based retrieval based on binary vectors for 2-D medical images

    Institute of Scientific and Technical Information of China (English)

    龚鹏; 邹亚东; 洪海

    2003-01-01

    In medical research and clinical diagnosis, automated or computer-assisted classification and retrieval methods are highly desirable to offset the high cost of manual classification and manipulation by medical experts. To facilitate the decision-making in the health-care and the related areas, in this paper, a two-step content-based medical image retrieval algorithm is proposed. Firstly, in the preprocessing step, the image segmentation is performed to distinguish image objects, and on the basis of the ...

  8. Shape Measures for Content Based Image Retrieval: A Comparison.

    Science.gov (United States)

    Mehtre, Babu M.; And Others

    1997-01-01

    Explores the evaluation of image and multimedia information-retrieval systems, particularly the effectiveness of several shape measures for content-based retrieval of similar images. Shape feature measures, or vectors, are computed automatically and can either be used for retrieval or stored in the database for future queries. (57 references)…

  9. A Relevance Feedback Mechanism for Content-Based Image Retrieval.

    Science.gov (United States)

    Ciocca, G.; Schettini, R.

    1999-01-01

    Describes a relevance-feedback mechanism for content-based image retrieval that evaluates the feature distributions of the images judged relevant by the user and updates both the similarity measure and the query to accurately represent the user's information needs. (Author/LRW)

  10. Content Based Image Retrieval : Classification Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Shereena V.B

    2014-11-01

Full Text Available In a content-based image retrieval system (CBIR), the main issue is to extract the image features that effectively represent the image contents in a database. Such an extraction requires a detailed evaluation of retrieval performance of image features. This paper presents a review of fundamental aspects of content based image retrieval including feature extraction of color and texture features. Commonly used color features including color moments, color histogram and color correlogram and Gabor texture are compared. The paper reviews the increase in efficiency of image retrieval when the color and texture features are combined. The similarity measures based on which matches are made and images are retrieved are also discussed. For effective indexing and fast searching of images based on visual features, neural network based pattern learning can be used to achieve effective classification.
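
Among the color features this review compares, color moments are the most compact: the first three statistical moments (mean, standard deviation, skewness) per RGB channel give a 9-number descriptor. A minimal sketch, not tied to any particular paper's implementation:

```python
import math

def color_moments(pixels):
    """First three color moments per RGB channel -- a 9-number
    descriptor: [mean_R, std_R, skew_R, mean_G, ..., skew_B]."""
    n = len(pixels)
    feats = []
    for ch in range(3):
        vals = [p[ch] for p in pixels]
        mean = sum(vals) / n
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / n)
        # Signed cube root of the third central moment keeps the
        # direction of the skew.
        m3 = sum((v - mean) ** 3 for v in vals) / n
        skew = math.copysign(abs(m3) ** (1 / 3), m3)
        feats.extend([mean, std, skew])
    return feats
```

Because the descriptor is only 9 numbers, moment-based matching is fast, at the cost of discarding the spatial layout that a histogram or correlogram partially preserves.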

  11. Content Based Image Retrieval : Classification Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Shereena V.B

    2014-10-01

Full Text Available In a content-based image retrieval system (CBIR), the main issue is to extract the image features that effectively represent the image contents in a database. Such an extraction requires a detailed evaluation of retrieval performance of image features. This paper presents a review of fundamental aspects of content based image retrieval including feature extraction of color and texture features. Commonly used color features including color moments, color histogram and color correlogram and Gabor texture are compared. The paper reviews the increase in efficiency of image retrieval when the color and texture features are combined. The similarity measures based on which matches are made and images are retrieved are also discussed. For effective indexing and fast searching of images based on visual features, neural network based pattern learning can be used to achieve effective classification.

  12. Dissimilarity measures for content-based image retrieval

    OpenAIRE

    Hu, Rui; Rüger, Stefan; Song, Dawei; Liu, Haiming

    2008-01-01

    Dissimilarity measurement plays a crucial role in content-based image retrieval. In this paper, 16 core dissimilarity measures are introduced and evaluated. We carry out a systematic performance comparison on three image collections, Corel, Getty and Trecvid2003, with 7 different feature spaces. Two search scenarios are considered: single image queries based on the vector space model, and multi-image queries based on k-nearest neighbours search. A number of observations are drawn, which will ...
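The paper above evaluates 16 dissimilarity measures; a few of the standard ones can be written down in a few lines each. This sketch shows representative measures, not the paper's specific set:

```python
import math

def l1(a, b):
    """City-block (Manhattan) distance."""
    return sum(abs(x - y) for x, y in zip(a, b))

def l2(a, b):
    """Euclidean distance."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def chi_square(a, b):
    """Chi-square statistic, a common choice for comparing histograms."""
    return sum((x - y) ** 2 / (x + y) for x, y in zip(a, b) if x + y > 0)

def cosine_dissimilarity(a, b):
    """1 minus the cosine of the angle between the two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)
```

All four vanish for identical vectors; which one ranks retrieval candidates best depends on the feature space, which is exactly the kind of comparison the paper carries out.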

  13. Content-Based Image Retrieval Using Multiple Features

    OpenAIRE

    Zhang, Chi; Huang, Lei

    2014-01-01

Algorithms of Content-Based Image Retrieval (CBIR) have been well developed along with the explosion of information. These algorithms are mainly distinguished by the features used to describe the image content. In this paper, the algorithms that are based on color features and texture features for image retrieval will be presented. A Color Coherence Vector based image retrieval algorithm is also attempted during the implementation process, but the best result is generated from the algorithms tha...

  14. Content Based Image Retrieval using Color and Texture

    Directory of Open Access Journals (Sweden)

    Manimala Singha

    2012-03-01

Full Text Available The increased need of content based image retrieval technique can be found in a number of different domains such as Data Mining, Education, Medical Imaging, Crime Prevention, Weather forecasting, Remote Sensing and Management of Earth Resources. This paper presents the content based image retrieval, using features like texture and color, called WBCHIR (Wavelet Based Color Histogram Image Retrieval). The texture and color features are extracted through wavelet transformation and color histogram and the combination of these features is robust to scaling and translation of objects in an image. The proposed system has demonstrated a promising and faster retrieval method on a WANG image database containing 1000 general-purpose color images. The performance has been evaluated by comparing with the existing systems in the literature.
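
The wavelet transformation mentioned in this abstract can be illustrated with a one-level Haar step, the simplest member of the wavelet family; this is a generic toy sketch, not the WBCHIR implementation:

```python
def haar_step(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    form the approximation band and pairwise half-differences form the
    detail band. Texture energy in CBIR systems is typically read from
    the detail coefficients. Assumes an even-length signal."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail
```

Applying the step to rows and then columns of an image, and recursing on the approximation band, yields the multi-level 2-D decomposition whose sub-band statistics serve as texture features.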

  15. Content-Based Image Retrial Based on Hadoop

    OpenAIRE

    DongSheng Yin; DeBo Liu

    2013-01-01

Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval approach based on the Hadoop distributed framework is proposed. Firstly, a database of image features is built using the Speeded Up Robust Features algorithm and Locality-Sensitive Hashing, and then the search is performed on the Hadoop platform in a specially designed parallel way. Considerable experimental results show that it is abl...

  16. Active index for content-based medical image retrieval.

    Science.gov (United States)

    Chang, S K

    1996-01-01

    This paper introduces the active index for content-based medical image retrieval. The dynamic nature of the active index is its most important characteristic. With an active index, we can effectively and efficiently handle smart images that respond to accessing, probing and other actions. The main applications of the active index are to prefetch image and multimedia data, and to facilitate similarity retrieval. The experimental active index system is described. PMID:8954230

  17. Content Based Retrieval System for Magnetic Resonance Images

    International Nuclear Information System (INIS)

The amount of medical images is continuously increasing as a consequence of the constant growth and development of techniques for digital image acquisition. Manual annotation and description of each image is an impractical, expensive and time consuming approach. Moreover, it is an imprecise and insufficient way of describing all the information stored in medical images. This induces the necessity for developing efficient image storage, annotation and retrieval systems. Content based image retrieval (CBIR) emerges as an efficient approach for digital image retrieval from large databases. It includes two phases. In the first phase, the visual content of the image is analyzed and the feature extraction process is performed. An appropriate descriptor, namely, a feature vector, is then associated with each image. These descriptors are used in the second phase, i.e. the retrieval process. With the aim of improving the efficiency and precision of content based image retrieval systems, feature extraction and automatic image annotation techniques are the subject of continuous research and development. Including classification techniques in the retrieval process enables automatic image annotation in an existing CBIR system, and contributes to more efficient and easier image organization in the system. Applying content based retrieval in the field of magnetic resonance is a big challenge. Magnetic resonance imaging is an image based diagnostic technique which is widely used in medical environments, and accordingly the number of magnetic resonance images is growing enormously. Magnetic resonance images provide plentiful medical information, high resolution and a specific nature. Thus, the capability of CBIR systems for image retrieval from large databases is of great importance for efficient analysis of this kind of images. The aim of this thesis is to propose a content based retrieval system architecture for magnetic resonance images. To provide the system efficiency, feature

  18. Towards Better Retrievals in Content -Based Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Kumar Vaibhava

    2014-04-01

Full Text Available This paper presents a Content-Based Image Retrieval (CBIR) system called DEICBIR-2. The system retrieves images similar to a given query image by searching in the provided image database. Standard MPEG-7 image descriptors are used to find the relevant images which are similar to the given query image. Direct use of the MPEG-7 descriptors for creating the image database and retrieval on the basis of nearest neighbors does not yield accurate retrievals. To further improve the retrieval results, B-splines are used to ensure smooth and continuous edges of the images in the edge-based descriptors. Relevance feedback is also implemented with user intervention. These additional features improve the retrieval performance of DEICBIR-2 significantly. Computational performance on a set of query images is presented, and the performance of the proposed system is much superior to that of DEICBIR [9] on the same database and the same set of query images.

  19. Content-Based Image Retrial Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

    Full Text Available Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval approach based on the Hadoop distributed framework is proposed. First, a database of image features is built using the Speeded Up Robust Features (SURF) algorithm and Locality-Sensitive Hashing; the search is then performed on the Hadoop platform in a specially designed parallel way. Experimental results show that the method is able to retrieve images by content effectively on large-scale clusters and image sets.
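
As an illustrative aside (not code from the paper), the bucketing step of Locality-Sensitive Hashing can be sketched in pure Python with random-hyperplane signatures; the feature dictionary, dimensions, and seed below are all assumptions for the demo:

```python
import random

def lsh_signature(vec, hyperplanes):
    """Hash a feature vector to a bit tuple: one bit per random hyperplane,
    set when the vector lies on the hyperplane's positive side."""
    return tuple(
        1 if sum(v * h for v, h in zip(vec, plane)) >= 0 else 0
        for plane in hyperplanes
    )

def build_index(features, dim, num_bits=8, seed=0):
    """Bucket feature vectors by their LSH signature; similar vectors
    tend to land in the same bucket, identical ones always do."""
    rng = random.Random(seed)
    hyperplanes = [[rng.gauss(0, 1) for _ in range(dim)]
                   for _ in range(num_bits)]
    index = {}
    for image_id, vec in features.items():
        index.setdefault(lsh_signature(vec, hyperplanes), []).append(image_id)
    return index, hyperplanes

# Hypothetical 4-dimensional image features; "a" and "b" are identical,
# so they must share a bucket.
features = {"a": [1.0, 0.9, 0.1, 0.0],
            "b": [1.0, 0.9, 0.1, 0.0],
            "c": [-5.0, -2.0, 3.0, 0.5]}
index, planes = build_index(features, dim=4)
```

In a distributed setting such as the Hadoop pipeline the abstract describes, only the candidates sharing a bucket with the query need an exact comparison, which is what cuts the retrieval time.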

  20. Content-based image retrieval in homomorphic encryption domain.

    Science.gov (United States)

    Bellafqira, Reda; Coatrieux, Gouenou; Bouslimi, Dalel; Quellec, Gwenole

    2015-08-01

    In this paper, we propose a secure implementation of a content-based image retrieval (CBIR) method that makes it possible for diagnosis-aid systems to work in an externalized environment and with outsourced data, as in cloud computing. The method works with homomorphically encrypted images, from which it extracts wavelet-based image features that are next used for subsequent image comparison. By doing so, our system allows a physician to retrieve the most similar images to a query image in an outsourced database while preserving data confidentiality. Our secure CBIR is the first that proposes to work with global image features extracted from encrypted images and does not induce extra communication between the client and the server. Experimental results show it achieves retrieval performance as good as if the images were processed unencrypted. PMID:26736909

  1. Topics in Content Based Image Retrieval : Fonts and Color Emotions

    OpenAIRE

    Solli, Martin

    2009-01-01

    Two novel contributions to Content Based Image Retrieval are presented and discussed. The first is a search engine for font recognition. The intended usage is the search in very large font databases. The input to the search engine is an image of a text line, and the output is the name of the font used when printing the text. After pre-processing and segmentation of the input image, a local approach is used, where features are calculated for individual characters. The method is based on eigeni...

  2. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Full Text Available Consumer behavior has been observed to be largely influenced by image data, given the increasing familiarity of smart phones and the World Wide Web. The traditional technique of browsing through product varieties on the Internet with text keywords has been gradually replaced by easily accessible image data. The importance of image data has shown steady growth in application orientation for the business domain with the advent of different image-capturing devices and social media. The paper describes a methodology of feature extraction by image binarization for enhancing identification and retrieval of information using content-based image recognition. The proposed algorithm was tested on two public datasets, namely the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in total. It has outclassed the state-of-the-art techniques in performance measures and has shown statistical significance.

  3. Content-based image retrieval, bildinhaltsbasiertes Suchen in grossen Bilddatenbanken

    OpenAIRE

    Muller, Henning; Squire, David; Muller, Wolfgang; Pun, Thierry

    1999-01-01

    This article describes a new approach to content-based image retrieval (CBIR), i.e. searching image databases by image content, generally without annotations. In contrast to conventional, mostly vector-based methods, techniques from text or information retrieval (IR) are adapted here to the specific requirements of image retrieval. User experiments demonstrate the performance and flexibility of the approach.

  4. Content-based Image Retrieval by Information Theoretic Measure

    Directory of Open Access Journals (Sweden)

    Madasu Hanmandlu

    2011-09-01

    Full Text Available Content-based image retrieval focuses on intuitive and efficient methods for retrieving images from databases based on the content of the images. A new entropy function that serves as a measure of the information content in an image, termed 'an information theoretic measure', is devised in this paper. Among the various query paradigms, 'query by example' (QBE) is adopted to set a query image for retrieval from a large image database. In this paper, colour and texture features are extracted using the new entropy function, and the dominant colour is considered as a visual feature for a particular set of images. Thus colour and texture features constitute the two-dimensional feature vector for indexing the images. The low dimensionality of the feature vector speeds up the atomic query. Indices in a large database system help retrieve the images relevant to the query image without looking at every image in the database. The entropy values of colour and texture and the dominant colour are considered for measuring the similarity. The utility of the proposed image retrieval system based on the information theoretic measures is demonstrated on a benchmark dataset. Defence Science Journal, 2011, 61(5), pp. 415-430, DOI: http://dx.doi.org/10.14429/dsj.61.1177
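
The paper's entropy function is not reproduced here, but the classical Shannon entropy of a gray-level histogram, the baseline such information-theoretic measures build on, can be sketched in a few lines of pure Python:

```python
from math import log2

def histogram_entropy(pixels, levels=256):
    """Shannon entropy (in bits) of the gray-level histogram of a flat
    pixel sequence: a simple measure of image information content."""
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    n = len(pixels)
    # Sum -p*log2(p) over non-empty histogram bins only.
    return -sum((c / n) * log2(c / n) for c in counts if c)

# A constant image carries no information (entropy 0); an image split
# evenly between two gray levels carries exactly 1 bit per pixel.
```

Used as a feature, a single entropy value per channel (or per texture map) yields the kind of low-dimensional descriptor the abstract credits with fast atomic queries.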

  5. WISE: a content-based Web image search engine

    Science.gov (United States)

    Qiu, Guoping; Palmer, R. D.

    2000-12-01

    This paper describes the development of a prototype of a Web Image Search Engine (WISE), which allows users to search for images on the WWW by image example, in a similar fashion to current search engines that allow users to find related Web pages using text matching on keywords. The system takes an image specified by the user and finds similar images available on the WWW by comparing image contents using low-level image features. The current version of the WISE system consists of a graphical user interface (GUI), an autonomous Web agent, an image comparison program and a query processing program. The user specifies the URL of a target image and the URL of the starting Web page from where the program will 'crawl' the Web, finding images along the way and retrieving those satisfying certain constraints. The program then computes the visual features of the retrieved images and performs content-based comparison with the target image. The results of the comparison are sorted according to a similarity measure and, along with thumbnails and information associated with the images (such as the URLs and image sizes), are written to an HTML page. The resulting page is stored on a Web server and is displayed in the user's Web browser once the search process is complete. A unique feature of the current version of WISE is its image content comparison algorithm. It is based on the comparison of image palettes and is therefore very efficient at retrieving images in one of the two universally accepted image formats on the Web, GIF. In GIF images, the color palette is contained in the header, so it is only necessary to retrieve the header information rather than the whole image, making the comparison very efficient.
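
As an editorial illustration (the paper's exact palette metric is not given here), comparing two GIF-style palettes can be sketched as quantized histograms scored by histogram intersection; the bin count and sample palettes are assumptions:

```python
def palette_histogram(palette, bins=8):
    """Coarse normalized histogram over a GIF-style palette: each
    (r, g, b) entry is quantized into one of bins**3 color cells."""
    step = 256 // bins
    hist = [0] * (bins ** 3)
    for r, g, b in palette:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = len(palette)
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical distributions,
    0.0 for palettes with no colors in common."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

Because the palette lives in the GIF header, an engine like the one described can score a candidate image from the header bytes alone, without downloading the pixel data.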

  6. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  7. Multimedia Content Based Image Retrieval III: Local Tetra Pattern

    Directory of Open Access Journals (Sweden)

    Nagaraja G S

    2014-06-01

    Full Text Available Content-Based Image Retrieval methods face several challenges in the presentation of results and in precision levels across various specific applications. To improve performance and address these problems, a novel algorithm, Local Tetra Pattern (LTrP), is proposed, which is coded in four directions instead of the two directions used in Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Ternary Pattern (LTP). To retrieve the images, the value of each surrounding neighbor pixel is calculated from the gray-level difference, which relates the various multi-sorting algorithms using LBP, LDP, LTP and LTrP for sorting the images. This method mainly uses low-level features such as color, texture and shape layout for image retrieval.
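
For orientation (this is the two-direction baseline the abstract contrasts with, not the LTrP itself), the 8-bit LBP code of one pixel can be computed in pure Python; the image is assumed to be a list of row lists of gray values:

```python
def lbp_code(img, y, x):
    """8-bit Local Binary Pattern at pixel (y, x): each of the eight
    neighbours, scanned clockwise from the top-left, contributes a 1-bit
    when its gray value is >= the centre value."""
    c = img[y][x]
    neighbours = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
                  img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    return sum((1 if n >= c else 0) << i for i, n in enumerate(neighbours))
```

LDP, LTP and LTrP refine this scheme with derivative directions, three-valued thresholds, and four direction codes respectively; the histogram of such codes over the image is what serves as the texture feature.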

  8. Semi-automated query construction for content-based endomicroscopy video retrieval.

    Science.gov (United States)

    Tafreshi, Marzieh Kohandani; Linard, Nicolas; André, Barbara; Ayache, Nicholas; Vercauteren, Tom

    2014-01-01

    Content-based video retrieval has shown promising results to help physicians in their interpretation of medical videos in general and endomicroscopic ones in particular. Defining a relevant query for CBVR can however be a complex and time-consuming task for non-expert and even expert users. Indeed, uncut endomicroscopy videos may very well contain images corresponding to a variety of different tissue types. Using such uncut videos as queries may lead to drastic performance degradations for the system. In this study, we propose a semi-automated methodology that allows the physician to create meaningful and relevant queries in a simple and efficient manner. We believe that this will lead to more reproducible and more consistent results. The validation of our method is divided into two approaches. The first one is an indirect validation based on per video classification results with histopathological ground-truth. The second one is more direct and relies on perceived inter-video visual similarity ground-truth. We demonstrate that our proposed method significantly outperforms the approach with uncut videos and approaches the performance of a tedious manual query construction by an expert. Finally, we show that the similarity perceived between videos by experts is significantly correlated with the inter-video similarity distance computed by our retrieval system. PMID:25333105

  9. Relevance Feedback in Content Based Image Retrieval: A Review

    Directory of Open Access Journals (Sweden)

    Manesh B. Kokare

    2011-01-01

    Full Text Available This paper provides an overview of the technical achievements in the research area of relevance feedback (RF) in content-based image retrieval (CBIR). Relevance feedback is a powerful technique for effectively improving the performance of CBIR systems, and reducing the semantic gap between low-level features and high-level concepts remains an open research area. The paper covers the current state of the art of research on relevance feedback in CBIR; various relevance feedback techniques and open issues are discussed in detail.

  10. The Use of QBIC Content-Based Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Ching-Yi Wu

    2004-03-01

    Full Text Available The fast increase in digital images has drawn increasing attention to the development of image retrieval technologies. Content-based image retrieval (CBIR) has become an important approach to retrieving image data from a large collection. This article reports our results on the use and user study of a CBIR system. Thirty-eight students majoring in art and design were invited to use IBM's QBIC (Query by Image Content) system through the Internet. Data on their information needs, behaviors, and retrieval strategies were collected through in-depth interviews, observation, and a self-described think-aloud process. The important conclusions are: (1) There are four types of information needs for image data: implicit, inspirational, ever-changing, and purposive; the type of need may change during the retrieval process. (2) CBIR is suitable for example-type queries, text retrieval is suitable for scenario-type queries, and image browsing is suitable for symbolic queries. (3) Unlike text retrieval, a detailed description of the query condition may more easily lead to retrieval failure. (4) CBIR is suitable for domain-specific image collections, not for images on the World Wide Web. [Article content in Chinese]

  11. Segmentation and Content-Based Watermarking for Color Image and Image Region Indexing and Retrieval

    OpenAIRE

    Mezaris Vasileios; Strintzis Michael G; Boulgouris Nikolaos V; Kompatsiaris Ioannis; Simitopoulos Dimitrios

    2002-01-01

    In this paper, an entirely novel approach to image indexing is presented using content-based watermarking. The proposed system uses color image segmentation and watermarking in order to facilitate content-based indexing, retrieval and manipulation of digital images and image regions. A novel segmentation algorithm is applied on reduced images and the resulting segmentation mask is embedded in the image using watermarking techniques. In each region of the image, indexing information is additi...

  12. Content-Based Image Retrieval for Semiconductor Process Characterization

    Directory of Open Access Journals (Sweden)

    Kenneth W. Tobin

    2002-07-01

    Full Text Available Image data management in the semiconductor manufacturing environment is becoming more problematic as the size of silicon wafers continues to increase while the dimension of critical features continues to shrink. Fabricators rely on a growing host of image-generating inspection tools to monitor complex device manufacturing processes. These inspection tools include optical and laser scattering microscopy, confocal microscopy, scanning electron microscopy, and atomic force microscopy. The number of images being generated is on the order of 20,000 to 30,000 each week in some fabrication facilities today. Manufacturers currently maintain on the order of 500,000 images in their data management systems for extended periods of time. Gleaning the historical value from these large image repositories for yield improvement is difficult to accomplish using the standard database methods currently associated with these data sets (e.g., performing queries based on time and date, lot numbers, wafer identification numbers, etc.). Researchers at the Oak Ridge National Laboratory have developed and tested a content-based image retrieval technology that is specific to manufacturing environments. In this paper, we describe the feature representation of semiconductor defect images along with methods of indexing and retrieval, and results from initial field-testing in the semiconductor manufacturing environment.

  13. Global Descriptor Attributes Based Content Based Image Retrieval of Query Images

    OpenAIRE

    Jaykrishna Joshi; Dattatray Bade

    2015-01-01

    The need for efficient content-based image retrieval systems has increased hugely. Efficient and effective image retrieval techniques are desired because of the explosive growth of digital images. Content-based image retrieval (CBIR) is a promising approach because of its automatic indexing and retrieval based on semantic features and visual appearance. In this proposed system we investigate a method for describing the contents of images which characterizes images by global des...

  14. Novel Approach to Content Based Image Retrieval Using Evolutionary Computing

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-08-01

    Full Text Available Content-Based Image Retrieval (CBIR) is an active research area in the multimedia domain in this era of information technology. One of the challenges of CBIR is to bridge the gap between low-level features and high-level semantics. In this study we investigate Particle Swarm Optimization (PSO), a stochastic algorithm, and the Genetic Algorithm (GA) for CBIR to overcome this drawback. We propose a new CBIR system based on PSO and GA coupled with a Support Vector Machine (SVM). GA and PSO are both evolutionary algorithms and are used in this study to increase the number of relevant images; SVM is used to perform the final classification. To check the performance of the proposed technique, extensive experiments are performed using the Corel dataset. The proposed technique achieves higher accuracy compared to previously introduced techniques (FEI, FIRM, SIMPLIcity, simple HIST and WH).

  15. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    Science.gov (United States)

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content-Based Image Retrieval (CBIR) application. Typical normal and abnormal retinal images are preprocessed to improve vessel contrast. The blood vessels are segmented using the evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method with the best objective functions. The segmentation results are validated against corresponding ground-truth images using binary similarity measures. Statistical, textural and structural features are obtained from the segmented images of normal and DR-affected retinas and are analyzed. CBIR in medical image retrieval applications is used to assist physicians in clinical decision support and research. A CBIR system is developed using the HSA-based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching is carried out between the features of query and database images using the Euclidean distance measure; similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall, which are found to be 96% and 58%, respectively, for the system using HSA-based Otsu MLT segmentation, and is compared against the conventional Otsu MLT method. This automated CBIR system could be recommended for use in computer-assisted diagnosis for diabetic retinopathy screening. PMID:25996728
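
The matching and evaluation steps described here are generic enough to sketch (the feature vectors, image IDs and relevance sets below are invented for the demo; the paper's features are not reproduced):

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def retrieve(query_vec, db, k=3):
    """Rank database entries (image_id, feature_vector) by Euclidean
    distance to the query feature vector; return the top-k image ids."""
    ranked = sorted(db, key=lambda item: dist(query_vec, item[1]))
    return [image_id for image_id, _ in ranked[:k]]

def precision_recall(retrieved, relevant):
    """Precision = hits / retrieved, recall = hits / relevant."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# Hypothetical 2-D features for three database images.
db = [("img_a", [0.0, 0.0]), ("img_b", [1.0, 0.0]), ("img_c", [5.0, 5.0])]
top = retrieve([0.0, 0.0], db, k=2)
```

Precision and recall computed this way over a labeled query set are exactly the figures (96% and 58%) the abstract reports for its HSA-based variant.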

  16. Content Based Image Retrieval Using Singular Value Decomposition

    Directory of Open Access Journals (Sweden)

    K. Harshini

    2012-10-01

    Full Text Available A computer application that automatically identifies or verifies a person from a digital image or a video frame can do so by comparing selected facial features from the image against a facial database. Content-based image retrieval (CBIR) is a technique for retrieving images on the basis of automatically derived features. This paper focuses on a low-dimensional feature-based indexing technique for achieving efficient and effective retrieval performance. An appearance-based face recognition method using singular value decomposition (SVD) is proposed. It differs from principal component analysis (PCA), which effectively considers only the Euclidean structure of face space and therefore yields poor classification performance under large facial variations such as expression, lighting and occlusion, because the image gray-value matrices on which PCA operates are very sensitive to these variations. We use the fact that every image matrix has a singular value decomposition and can be regarded as a composition of a set of base images generated by the SVD, and we further point out that these base images are sensitive to the composition of the face image. Our experimental results show that SVD provides a better representation and achieves lower error rates in face recognition, but at the cost of a slower performance evaluation. To overcome this, we conducted experiments with a controlling parameter α ranging from 0 to 1, and achieved better results for α = 0.4 than for other values of α. Key words: Singular value decomposition (SVD), Euclidean distance, original gray value matrix (OGVM).
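
To make the "base images generated by SVD" idea concrete, here is a pure-Python power-iteration sketch that recovers the dominant singular triplet of an image matrix (a library SVD would return all triplets; this stand-in is an editorial illustration, not the paper's method):

```python
def rank1_approx(A, iters=100):
    """Dominant singular triplet of matrix A (list of row lists) via
    power iteration on A^T A; returns (sigma, u, v) with A ~ sigma*u*v^T,
    i.e. the strongest 'base image' u*v^T and its weight sigma."""
    rows, cols = len(A), len(A[0])
    v = [1.0] * cols
    for _ in range(iters):
        # Av, then w = A^T (A v), then normalise w to get the next v.
        Av = [sum(A[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        w = [sum(A[i][j] * Av[i] for i in range(rows)) for j in range(cols)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    Av = [sum(A[i][j] * v[j] for j in range(cols)) for i in range(rows)]
    sigma = sum(x * x for x in Av) ** 0.5
    u = [x / sigma for x in Av]
    return sigma, u, v
```

Keeping only the first few such triplets gives the low-dimensional face representation the abstract describes; the remaining triplets mostly encode the expression/lighting variation it warns about.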

  17. Content-based image retrieval with ontological ranking

    Science.gov (United States)

    Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

    2010-02-01

    Images are a much more powerful medium of expression than text; as the adage says, "One picture is worth a thousand words." Compared with text, which consists of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, this less constrained structure presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when only a limited number of learning examples and little background knowledge are given. The advance of internet and web technology in the past decade has changed the way humans gain knowledge. People can exchange knowledge by discussing and contributing information on the web; as a result, web pages have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from this web knowledge base as well. Indeed, it is possible to make computers learn from the internet and provide humans with more meaningful knowledge. In this work, we explore this possibility for image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords, and a semantic ontology constituting humans' general knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts, enabling machines to understand to what extent and in what sense an image is close to the image search query. With the aid of these two tools, the resulting image search system is content-based and, moreover, organized: the returned images are ranked and grouped such that semantically similar images appear together, ranked by their semantic closeness to the input query. The novelty of the system is twofold: first, images are retrieved based not only on text cues but on their actual contents as well; second, the grouping

  18. Dominant color correlogram descriptor for content-based image retrieval

    Science.gov (United States)

    Fierro-Radilla, Atoany; Perez-Daniel, Karina; Nakano-Miyatake, Mariko; Benois, Jenny

    2015-03-01

    Content-based image retrieval (CBIR) has become an interesting and urgent research topic due to the increasing need for indexing and classification of multimedia content in large databases. Low-level visual descriptors, such as color-based, texture-based and shape-based descriptors, have been used for the CBIR task. In this paper we propose a color-based descriptor which describes image contents well, integrating both the global feature provided by the dominant color and the local features provided by the color correlogram. The performance of the proposed descriptor, called the Dominant Color Correlogram Descriptor (DCCD), is evaluated against several MPEG-7 visual descriptors and other color-based descriptors reported in the literature, using two image datasets of different sizes and contents. The performance is assessed using three metrics commonly used in image retrieval: ARP (Average Retrieval Precision), ARR (Average Retrieval Rate) and ANMRR (Average Normalized Modified Retrieval Rank). Precision-recall curves are also provided, showing the better performance of the proposed descriptor compared with other color-based descriptors.

  19. Automatic organ segmentation on torso CT images by using content-based image retrieval

    Science.gov (United States)

    Zhou, Xiangrong; Watanabe, Atsuto; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-02-01

    This paper presents a fast and robust segmentation scheme that automatically identifies and extracts massive-organ regions on torso CT images. In contrast to conventional algorithms that are designed empirically to segment a specific organ using traditional image processing techniques, the proposed scheme uses a fully data-driven approach to achieve a universal solution for segmenting different massive-organ regions on CT images. Our scheme includes three processing steps: machine-learning-based organ localization, content-based image (reference) retrieval, and atlas-based organ segmentation. We applied this scheme to automatic segmentation of the heart, liver, spleen, and left and right kidney regions on non-contrast CT images, which are still difficult tasks for traditional segmentation algorithms. The segmentation results were compared with ground truth manually identified by a medical expert. The Jaccard similarity coefficient between the ground truth and the automated segmentation result centered on 67% for the heart, 81% for the liver, 78% for the spleen, 75% for the left kidney, and 77% for the right kidney. The usefulness of our proposed scheme was confirmed.
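
The evaluation metric used above, the Jaccard similarity coefficient between two binary segmentation masks, is straightforward to compute (the flat 0/1 mask representation is an assumption of this sketch):

```python
def jaccard(mask_a, mask_b):
    """Jaccard similarity coefficient |A ∩ B| / |A ∪ B| between two
    binary segmentation masks given as flat sequences of 0/1 values."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    # Two empty masks are conventionally treated as identical.
    return inter / union if union else 1.0
```

A score of 1.0 means the automated mask matches the expert's ground truth exactly; the 67-81% figures reported above correspond to values of 0.67-0.81.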

  20. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Israël, Menno; Broek, van den Egon L.; Putten, van der Peter; Khan, L.; Petrushin, V.A.

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  1. Automating the construction of scene classifiers for content-based video retrieval

    OpenAIRE

    Israël, Menno; Broek, van den, L.A.M.; Putten, van, B.; Khan, L.; Petrushin, V.A.

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a two stage procedure. First, small image fragments called patches are classified. Second, frequency vectors of these patch classifications are fed into a second classifier for global scene classific...

  2. Global Descriptor Attributes Based Content Based Image Retrieval of Query Images

    Directory of Open Access Journals (Sweden)

    Jaykrishna Joshi

    2015-02-01

    Full Text Available The need for efficient content-based image retrieval systems has increased hugely. Efficient and effective image retrieval techniques are desired because of the explosive growth of digital images. Content-based image retrieval (CBIR) is a promising approach because of its automatic indexing and retrieval based on semantic features and visual appearance. In the proposed system we investigate a method for describing the contents of images that characterizes them by global descriptor attributes, where global features are extracted to make the system more efficient, using color features, namely color expectancy, color variance and skewness, together with the texture feature correlation.
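
The color features named here (expectancy, variance, skewness) are the first three statistical moments of a color channel; a minimal pure-Python sketch, with the flat channel representation assumed for the demo:

```python
def color_moments(channel):
    """First three moments of one colour channel (flat sequence of
    values): mean (expectancy), variance, and skewness, as used in
    moment-based global colour descriptors."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((p - mean) ** 2 for p in channel) / n
    std = var ** 0.5
    # Skewness is the normalised third moment; 0 for a constant channel.
    skew = (sum((p - mean) ** 3 for p in channel) / n) / std ** 3 if std else 0.0
    return mean, var, skew
```

Concatenating these three numbers per channel gives a nine-dimensional global descriptor for an RGB image, which is what keeps such systems fast at query time.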

  3. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    Science.gov (United States)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to the one being diagnosed. Optical colonoscopy is a method of direct observation of the colon and rectum to diagnose bowel diseases, and is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearance of colonic mucosa within UC inflammations. To address this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed method can find similar images in a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist UC diagnosis. Within the proposed method, color histogram features and higher-order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearance of the colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.

  4. Toward content-based image retrieval with deep convolutional neural networks

    Science.gov (United States)

    Sklan, Judah E. S.; Plassard, Andrew J.; Fabbri, Daniel; Landman, Bennett A.

    2015-03-01

    Content-based image retrieval (CBIR) offers the potential to identify similar case histories, understand rare disorders, and, eventually, improve patient care. Recent advances in database capacity, algorithm efficiency, and deep Convolutional Neural Networks (dCNN), a machine learning technique, have enabled great CBIR success for general photographic images. Here, we investigate applying the leading ImageNet CBIR technique to clinically acquired medical images captured by the Vanderbilt Medical Center. Briefly, we (1) constructed a dCNN with four hidden layers, reducing the dimensionality of an input scaled to 128x128 to an output encoded layer of 4x384, (2) trained the network using back-propagation on 1 million random magnetic resonance (MR) and computed tomography (CT) images, (3) labeled an independent set of 2100 images, and (4) evaluated classifiers on the projection of the labeled images into manifold space. Quantitative results were disappointing (averaging a true positive rate of only 20%); however, the data suggest that improvements would be possible with more evenly distributed sampling across labels and potential re-grouping of label structures. This preliminary effort at automated classification of medical images with ImageNet is promising, but shows that more work is needed beyond direct adaptation of existing techniques.

  5. A NEW CONTENT BASED IMAGE RETRIEVAL SYSTEM USING GMM AND RELEVANCE FEEDBACK

    OpenAIRE

    N. Shanmugapriya; Nallusamy, R

    2014-01-01

    Content-Based Image Retrieval (CBIR), also known as Query By Image Content (QBIC), is the application of computer vision techniques to the image retrieval problem, i.e., searching for digital images in large databases. The need for a versatile and general-purpose Content Based Image Retrieval (CBIR) system for a very large image database has attracted the focus of many researchers at information-technology giants and leading academic institutions for development of CBIR te...

  6. Survey on Sparse Coded Features for Content Based Face Image Retrieval

    OpenAIRE

    Johnvictor, D.; Selvavinayagam, G.

    2014-01-01

    Content based image retrieval is a technique which uses the visual contents of an image to search images from large-scale image databases according to users' interests. This paper provides a comprehensive survey of recent technology used in the area of content based face image retrieval. Nowadays, as digital devices and photo-sharing sites gain popularity, large numbers of human face photos are available in databases. Multiple types of facial features are used to represent discriminality on large scale hu...

  7. A GENERIC APPROACH TO CONTENT BASED IMAGE RETRIEVAL USING DCT AND CLASSIFICATION TECHNIQUES

    OpenAIRE

    RAMESH BABU DURAI C; Dr.V.DURAISAMY

    2010-01-01

    With the rapid development of technology, the traditional information retrieval techniques based on keywords are not sufficient; content-based image retrieval (CBIR) has been an active research topic. Content Based Image Retrieval (CBIR) technologies provide a method to find images in large databases by using unique descriptors from a trained image. The ability of the system to classify images based on the training set feature extraction is quite challenging. In this paper we propose to extra...

  8. Text based approaches for content-based image retrieval on large image collections

    OpenAIRE

    Wilkins, Peter,; Ferguson, Paul; Smeaton, Alan F.; Gurrin, Cathal

    2005-01-01

    As the growth of digital image collections continues, so does the need for efficient content-based searching of images capable of providing quality results within a search time that is acceptable to users who have grown used to text search engine performance. Some existing techniques, whilst capable of providing relevant results to a user's query, will not scale up to very large image collections, the order of which will be in the millions. In this paper we propose a technique that uses t...

  9. Object extraction as a basic process for content-based image retrieval (CBIR) system

    Science.gov (United States)

    Jaworska, T.

    2007-12-01

    This article describes the way in which an image is prepared for a content-based image retrieval system. Automated image extraction is crucial, especially if we take into consideration the fact that feature selection is still a task performed by human domain experts and represents a major stumbling block in the process of creating fully autonomous CBIR systems. Our CBIR system is dedicated to supporting estate agents. The database contains images of houses and bungalows. We put all our efforts into extracting elements from an image and finding their characteristic features in an unsupervised way. Hence, the paper presents a segmentation algorithm based on pixel colour in RGB colour space. Next, it presents the method of object extraction applied to obtain separate objects prepared for introduction into the database and further recognition. Moreover, we present a novel method of texture identification based on the wavelet transform. Because the majority of the textures are geometrical (such as bricks and tiles), we have used the Haar wavelet. After a set of low-level features for all objects is computed, these features are stored in the database.
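The Haar-wavelet texture idea mentioned in the abstract can be sketched as follows (an illustrative implementation, not the authors' code): one transform level splits the image into an approximation and three detail sub-bands, and the mean absolute detail coefficients serve as simple texture features.

```python
import numpy as np

def haar_level(a):
    """One level of the 2-D Haar transform: approximation plus
    horizontal, vertical and diagonal detail sub-bands."""
    a = a[:a.shape[0] // 2 * 2, :a.shape[1] // 2 * 2].astype(float)
    # pairwise averages/differences along columns, then rows
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2
    ll = (lo_r[0::2] + lo_r[1::2]) / 2
    lh = (lo_r[0::2] - lo_r[1::2]) / 2
    hl = (hi_r[0::2] + hi_r[1::2]) / 2
    hh = (hi_r[0::2] - hi_r[1::2]) / 2
    return ll, lh, hl, hh

def texture_energy(gray, levels=2):
    """Mean absolute detail energy per sub-band and level."""
    feats, a = [], gray
    for _ in range(levels):
        a, lh, hl, hh = haar_level(a)
        feats += [np.abs(lh).mean(), np.abs(hl).mean(), np.abs(hh).mean()]
    return np.array(feats)
```

A flat region yields zero energy everywhere, while a striped texture lights up exactly one detail band, which is what makes such features usable for texture discrimination.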

  10. Automatic texture segmentation for content-based image retrieval application

    OpenAIRE

    Fauzi, M.F.A.; Lewis, P. H.

    2006-01-01

    In this article, a brief review on texture segmentation is presented, before a novel automatic texture segmentation algorithm is developed. The algorithm is based on a modified discrete wavelet frames and the mean shift algorithm. The proposed technique is tested on a range of textured images including composite texture images, synthetic texture images, real scene images as well as our main source of images, the museum images of various kinds. An extension to the automatic texture segmentatio...

  11. Content-Based Image Retrieval Based on Electromagnetism-Like Mechanism

    OpenAIRE

    Hamid A. Jalab; Nor Aniza Abdullah

    2013-01-01

    Recently, many researchers in the field of automatic content-based image retrieval have devoted a remarkable amount of research to looking for methods to retrieve the images most relevant to a query image. This paper presents a novel algorithm for increasing the precision in content-based image retrieval based on an electromagnetism optimization technique. The electromagnetism optimization is a nature-inspired technique that follows the collective attraction-repulsion mechanism by considering eac...

  12. Content Based Image Retrieval with Mobile Agents and Steganography

    OpenAIRE

    Thampi, Sabu M.; Sekaran, K. Chandra

    2004-01-01

    In this paper we present an image retrieval system based on Gabor texture features, steganography, and mobile agents. By employing the information hiding technique, the image attributes can be hidden in an image without degrading the image quality. Thus the image retrieval process becomes simple. Java-based mobile agents manage the query phase of the system. Based on the simulation results, the proposed system not only shows the efficiency in hiding the attributes but also provides other adv...

  13. Local Content Based Image Authentication for Tamper Localization

    Directory of Open Access Journals (Sweden)

    L. Sumalatha

    2012-09-01

    Full Text Available Digital images make up a large component of multimedia information. Hence image authentication has attained great importance and led to the development of several image authentication algorithms. This paper proposes a block-based watermarking scheme for image authentication based on the edge information extracted from each block. A signature is calculated from each edge block of the image using a simple hash function and inserted in the same block. The proposed local edge based content hash (LECH) scheme extracts the original image without any distortion from the marked image after the hidden data have been extracted. It can also detect and localize tampered areas of the watermarked image. Experimental results demonstrate the validity of the proposed scheme.

  14. Fuzzy Content-Based Retrieval in Image Databases.

    Science.gov (United States)

    Wu, Jian Kang; Narasimhalu, A. Desai

    1998-01-01

    Proposes a fuzzy-image database model and a concept of fuzzy space; describes fuzzy-query processing in fuzzy space and fuzzy indexing on complete fuzzy vectors; and uses an example image database, the computer-aided facial-image inference and retrieval system (CAFIIR), for explanation throughout. (Author/LRW)

  15. Content Based Image Retrieval and Information Theory: A General Approach.

    Science.gov (United States)

    Zachary, John; Iyengar, S. S.; Barhen, Jacob

    2001-01-01

    Proposes an alternative real valued representation of color based on the information theoretic concept of entropy. A theoretical presentation of image entropy is accompanied by a practical description of the merits and limitations of image entropy compared to color histograms. Results suggest that image entropy is a promising approach to image…

  16. Content Based Image Retrieval using Hierarchical and K-Means Clustering Techniques

    OpenAIRE

    V.S.V.S. Murthy; E.Vamsidhar; J.N.V.R SWARUP KUMAR; P.Sankara Rao

    2010-01-01

    In this paper we present an image retrieval system that takes an image as the input query and retrieves images based on image content. Content Based Image Retrieval is an approach for retrieving semantically relevant images from an image database based on automatically derived image features. The unique aspect of the system is the utilization of hierarchical and k-means clustering techniques. The proposed procedure consists of two stages. First, we filter most of the images ...
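The record's two-stage procedure is truncated above, so the following is only a generic, hedged illustration of the clustering idea: k-means can pre-partition feature vectors so that a query is compared against one cluster rather than the whole database.

```python
import numpy as np

def kmeans(feats, k=2, iters=20, seed=0):
    """Plain k-means: cluster image feature vectors so that a query
    need only be compared against images in its nearest cluster."""
    rng = np.random.default_rng(seed)
    feats = np.asarray(feats, float)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        # assign each vector to its closest center, then recompute centers
        labels = np.argmin(
            np.linalg.norm(feats[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return labels, centers
```

At query time one would find the nearest center first and then rank only that cluster's members, trading a small recall loss for a large reduction in distance computations.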

  17. Content-Based Image Retrieval Using a Composite Color-Shape Approach.

    Science.gov (United States)

    Mehtre, Babu M.; Kankanhalli, Mohan S.; Lee, Wing Foon

    1998-01-01

    Proposes a composite feature measure which combines the shape and color features of an image based on a clustering technique. A similarity measure computes the degree of match between a given pair of images; this technique can be used for content-based image retrieval of images using shape and/or color. Tests the technique on two image databases;…

  18. A NEW CONTENT BASED IMAGE RETRIEVAL SYSTEM USING GMM AND RELEVANCE FEEDBACK

    Directory of Open Access Journals (Sweden)

    N. Shanmugapriya

    2014-01-01

    Full Text Available Content-Based Image Retrieval (CBIR), also known as Query By Image Content (QBIC), is the application of computer vision techniques to the image retrieval problem, i.e., searching for digital images in large databases. The need for a versatile and general-purpose Content Based Image Retrieval (CBIR) system for a very large image database has attracted the focus of many researchers at information-technology giants and leading academic institutions for the development of CBIR techniques. Due to the development of network and multimedia technologies, users are not satisfied by traditional information retrieval techniques, so nowadays Content Based Image Retrieval (CBIR) is becoming a source of exact and fast retrieval. Texture and color are important features in Content Based Image Retrieval systems. In the proposed method, images can be retrieved using color-based, texture-based, or combined color- and texture-based features. Algorithms such as the auto color correlogram and correlation are used for extracting color-based features, and Gaussian mixture models for extracting texture-based features. In this study, query point movement is used as a relevance feedback technique for Content Based Image Retrieval systems. Thus the proposed method achieves better performance and accuracy in retrieving images.
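Query point movement is not detailed in the abstract; a common Rocchio-style formulation (an assumption on our part, with hypothetical weights alpha, beta and gamma) moves the query feature vector toward images the user marks relevant and away from those marked irrelevant:

```python
import numpy as np

def move_query(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style query point movement: shift the query feature
    vector toward the centroid of relevant examples and away from
    the centroid of irrelevant ones."""
    q = alpha * np.asarray(query, float)
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(irrelevant):
        q -= gamma * np.mean(irrelevant, axis=0)
    return q
```

The updated vector replaces the original query on the next retrieval round, so each feedback iteration pulls the results toward what the user judged relevant.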

  19. Investigation of crop nitrogen content based on image processing technologies

    Science.gov (United States)

    Zhang, Yane; Li, Minzan; Xu, Zenghui; Zhang, Xijie; Wang, Maohua

    2005-08-01

    A special image sampler was developed to non-destructively take leaf images of cucumber plants in a greenhouse, which were grown under different nutrient conditions in order to induce nitrogen stress in the crop. Then the correlation between the nitrogen content of a cucumber leaf and the image properties of the leaf was analyzed. The sampler is composed of eight lamps, a half-sphere shell, a platform, and a window used for fixing the camera. The lamps were arranged around the platform on which leaves were placed for image-taking. The half-sphere shell was placed over the platform to reflect the light of the lamps. Since the reflected light from the shell was diffuse and symmetrical, the reflection noise of the leaf could be reduced and high-quality images could be obtained. The correlation analysis between leaf images and the nitrogen content of leaves was conducted based on the RGB mode and the HSI mode. In the RGB mode, the G component of the image showed a higher linear correlation with the nitrogen content of the cucumber leaf than the R and B components, while in the HSI mode the hue showed a linear correlation as high as that of the G component. A new index combining the G component of the RGB mode and the hue of the HSI mode was suggested to estimate the nitrogen content of cucumber leaves. The results show the new index is practical.

  20. Content-based Image Retrieval Using Color Histogram

    Institute of Scientific and Technical Information of China (English)

    HUANG Wen-bei; HE Liang; GU Jun-zhong

    2006-01-01

    This paper introduces the principles of using color histograms to match images in CBIR, and a prototype CBIR system is designed with a color matching function. A new method using a 2-dimensional color histogram based on hue and saturation to extract and represent the color information of an image is presented. We also improve the Euclidean-distance algorithm by adding a Center of Color to it. The experiments show that the modifications made to the Euclidean distance significantly improve the quality and efficiency of retrieval.
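A 2-D hue-saturation histogram of the kind described can be sketched as follows (illustrative only; the bin counts are arbitrary and the paper's Center-of-Color refinement is not reproduced):

```python
import colorsys
import numpy as np

def hs_histogram(image, h_bins=8, s_bins=4):
    """2-D hue/saturation histogram of an RGB image (channel values 0..255).
    Each pixel is converted to HSV and binned by hue and saturation."""
    hist = np.zeros((h_bins, s_bins))
    for r, g, b in image.reshape(-1, 3) / 255.0:
        h, s, _ = colorsys.rgb_to_hsv(r, g, b)
        hist[min(int(h * h_bins), h_bins - 1),
             min(int(s * s_bins), s_bins - 1)] += 1
    return hist / hist.sum()
```

Dropping the value channel makes the histogram less sensitive to illumination changes, which is the usual motivation for hue-saturation binning.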

  1. Content-based image retrieval applied to BI-RADS tissue classification in screening mammography

    OpenAIRE

    2011-01-01

    AIM: To present a content-based image retrieval (CBIR) system that supports the classification of breast tissue density and can be used in the processing chain to adapt parameters for lesion segmentation and classification.

  2. Indexing, learning and content-based retrieval for special purpose image databases

    OpenAIRE

    Huiskes, Mark; Pauwels, Eric

    2004-01-01

    This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing of image databases becomes increasingly urgent. We provide an overview of the current state-of-the art by taking a tour along the entire

  3. Performance Evaluation of Content Based Image Retrieval on Feature Optimization and Selection Using Swarm Intelligence

    OpenAIRE

    Kirti Jain; Dr. Sarita Singh Bhadauria

    2016-01-01

    The diversity and applicability of swarm intelligence are increasing every day in the fields of science and engineering. Swarm intelligence provides dynamic feature optimization. We have used swarm intelligence for the process of feature optimization and feature selection for content-based image retrieval. The performance of content-based image retrieval faces problems of precision and recall. The values of precision and recall depend on the retrieval capacity of th...

  4. Content-based retrieval of remote sensed images using a feature-based approach

    Science.gov (United States)

    Vellaikal, Asha; Dao, Son; Kuo, C.-C. Jay

    1995-01-01

    A feature-based representation model for content-based retrieval from a remote sensed image database is described in this work. The representation is formed by clustering spatially local pixels, and the cluster features are used to process several types of queries which are expected to occur frequently in the context of remote sensed image retrieval. Preliminary experimental results show that the feature-based representation provides a very promising tool for content-based access.

  5. Content Based Medical Image Retrieval for Histopathological, CT and MRI Images

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-09-01

    Full Text Available A content-based approach is followed for medical images. The purpose of this study is to assess the stability of these methods for medical image retrieval. The methods used in color-based retrieval for histopathological images are the color co-occurrence matrix (CCM) and a histogram with meta features. For texture-based retrieval, the gray level co-occurrence matrix (GLCM) and local binary patterns (LBP) were used. For shape-based retrieval, Canny edge detection and Otsu's method with a multivariable threshold were used. Texture- and shape-based retrieval were implemented using MRI (magnetic resonance imaging) images. The most remarkable characteristic of the article is its content-based approach for each medical imaging modality. Our efforts were focused on the initial visual search. In our experiment, the histogram with meta features in color-based retrieval for histopathological images shows a precision of 60% and recall of 30%, whereas GLCM in texture-based retrieval for MRI images shows a precision of 70% and recall of 20%. Shape-based retrieval for MRI images shows a precision of 50% and recall of 25%. The retrieval results show that this simple approach is successful.
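The LBP descriptor mentioned above can be illustrated with a minimal NumPy sketch (the basic 8-neighbour, non-uniform variant; not the authors' exact implementation):

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour local binary pattern histogram (256 bins).
    Each interior pixel is encoded by thresholding its 8 neighbours
    against the centre value and packing the results into one byte."""
    g = np.asarray(gray, float)
    c = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes += (neigh >= c).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Because the code depends only on the sign of local intensity differences, the histogram is invariant to monotonic gray-level changes, which is why LBP is popular for texture-based retrieval.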

  6. Content Based Image Retrieval Using Embedded Neural Networks with Bandletized Regions

    OpenAIRE

    Rehan Ashraf; Khalid Bashir; Aun Irtaza; Muhammad Tariq Mahmood

    2015-01-01

    One of the major requirements of content based image retrieval (CBIR) systems is to ensure meaningful image retrieval against query images. The performance of these systems is severely degraded by the inclusion of image content which does not contain the objects of interest in an image during the image representation phase. Segmentation of the images is considered as a solution but there is no technique that can guarantee the object extraction in a robust way. Another limitation of the segmen...

  7. An Efficient and Generalized approach for Content Based Image Retrieval in MatLab.

    OpenAIRE

    Shriram K V; P.L.K Priyadarsini; Subashri V

    2012-01-01

    There is a serious flaw in existing image search engines, since they basically work under the influence of keywords. Retrieving images based on keywords is not only inappropriate, but also time consuming. Content Based Image Retrieval (CBIR) is still a research area, which aims to retrieve images based on the content of the query image. In this paper we have proposed a CBIR-based image retrieval system, which analyses innate properties of an image such as color, texture and the entr...

  8. Content-Based Image Retrieval Using Texture Color Shape and Region

    OpenAIRE

    Syed Hamad Shirazi; Arif Iqbal Umar; Saeeda Naz; Noor ul Amin Khan; Muhammad Imran Razzak; Bandar AlHaqbani

    2016-01-01

    Interest in accurately retrieving required images from databases of digital images is growing day by day. Images are represented by certain features to facilitate accurate retrieval of the required images. These features include texture, color, shape and region. It is a hot research area, and researchers have developed many techniques to use these features for accurate retrieval of required images from databases. In this paper we present a literature survey of Content Based Image Retrie...

  9. Dominant Color and Texture approach for Content based Video Images Retrieval

    Directory of Open Access Journals (Sweden)

    Dr.Ajay A. Gurjar

    2015-03-01

    Full Text Available Fast and specific retrieval of information from large collections of publicly available data, as per the user's requirement, is an intense need of today's world. Various search engines use a textual approach to retrieve video data for the user, which has a very high error rate. Taking the discrepancies present in text-based video retrieval systems into account, users shift to content-based video retrieval systems. Due to the proliferation of video and image data in digital form, content-based video image retrieval has become a prominent research topic. Nowadays people are interested in using digital images, and there is a great need for developing efficient techniques for finding images. In order to find an image, the image has to be represented by certain features; color, texture and shape are three important visual features of an image. Searching for images using color, texture and shape features has attracted much attention, and there are many content-based image retrieval techniques in the literature. This paper gives an overview of different existing methods used for content-based video image retrieval and also suggests an efficient image retrieval technique using dynamic dominant color, texture and shape features of an image, which gives effective retrieval results.

  10. Design Guidelines for a Content-Based Image Retrieval Color-Selection Interface

    NARCIS (Netherlands)

    Broek, van den Egon L.; Kisters, Peter M.F.; Vuurpijl, Louis G.; Eggen, Berry; Veer, van der Gerrit; Willems, Rob

    2004-01-01

    In Content-Based Image Retrieval (CBIR) two query-methods exist: query-by-example and query-by-memory. The user either selects an example image or selects image features retrieved from memory (such as color, texture, spatial attributes, and shape) to define his query. Hitherto, research on CBIR inte

  11. A Content-based search engine on medical images for telemedicine

    OpenAIRE

    Lee, CH; Ng, V; Cheung, DWL

    1997-01-01

    Retrieving images by content and forming visual queries are important functionalities of an image database system. Using textual descriptions to specify queries on image content is another important component of content-based search. The authors describe a medical image database system, MIQS, which supports visual queries such as query by example and query by sketch. In addition, it supports textual queries on spatial relationships between the objects of an image. MIQS is designed as a client-ser...

  12. Content Based Image Retrieval using Color Boosted Salient Points and Shape features of an image.

    Directory of Open Access Journals (Sweden)

    Hiremath P. S

    2008-02-01

    Full Text Available Salient points are locations in an image where there is a significant variation with respect to a chosen image feature. Since the set of salient points in an image captures important local characteristics of that image, they can form the basis of a good image representation for content-based image retrieval (CBIR). Salient features are generally determined from the local differential structure of images. They focus on the shape saliency of the local neighborhood. Most of these detectors are luminance based, which has the disadvantage that the distinctiveness of the local color information is completely ignored in determining salient image features. To fully exploit the possibilities of salient point detection in color images, color distinctiveness should be taken into account in addition to shape distinctiveness. This paper presents a method for salient point determination based on color saliency. The color and texture information around these points of interest serve as the local descriptors of the image. In addition, the shape information is captured in terms of edge images computed using Gradient Vector Flow fields. Invariant moments are then used to record the shape features. The combination of the local color, texture and the global shape features provides a robust feature set for image retrieval. The experimental results demonstrate the efficacy of the method.

  13. A Novel Optimization-Based Approach for Content-Based Image Retrieval

    OpenAIRE

    Manyu Xiao; Jianghu Lu; Gongnan Xie

    2013-01-01

    Content-based image retrieval is nowadays one of the possible and promising solutions to manage image databases effectively. However, with the large number of images, there still exists a great discrepancy between the users’ expectations (accuracy and efficiency) and the real performance in image retrieval. In this work, new optimization strategies are proposed on vocabulary tree building, retrieval, and matching methods. More precisely, a new clustering strategy combining classification and ...

  14. Adapting content-based image retrieval techniques for the semantic annotation of medical images.

    Science.gov (United States)

    Kumar, Ashnil; Dyer, Shane; Kim, Jinman; Li, Changyang; Leong, Philip H W; Fulham, Michael; Feng, Dagan

    2016-04-01

    The automatic annotation of medical images is a prerequisite for building comprehensive semantic archives that can be used to enhance evidence-based diagnosis, physician education, and biomedical research. Annotation also has important applications in the automatic generation of structured radiology reports. Much of the prior research work has focused on annotating images with properties such as the modality of the image, or the biological system or body region being imaged. However, many challenges remain for the annotation of high-level semantic content in medical images (e.g., presence of calcification, vessel obstruction, etc.) due to the difficulty in discovering relationships and associations between low-level image features and high-level semantic concepts. This difficulty is further compounded by the lack of labelled training data. In this paper, we present a method for the automatic semantic annotation of medical images that leverages techniques from content-based image retrieval (CBIR). CBIR is a well-established image search technology that uses quantifiable low-level image features to represent the high-level semantic content depicted in those images. Our method extends CBIR techniques to identify or retrieve a collection of labelled images that have similar low-level features and then uses this collection to determine the best high-level semantic annotations. We demonstrate our annotation method using weighted nearest-neighbour retrieval and multi-class classification to show that our approach is viable regardless of the underlying retrieval strategy. We experimentally compared our method with several well-established baseline techniques (classification and regression) and showed that our method achieved the highest accuracy in the annotation of liver computed tomography (CT) images. PMID:26890880
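A weighted nearest-neighbour annotation step of the kind the authors describe might look like the following generic sketch (the feature vectors and label names here are placeholders, not the paper's data):

```python
import numpy as np
from collections import defaultdict

def annotate(query_feat, features, labels, k=3):
    """Weighted nearest-neighbour annotation: retrieve the k images
    whose low-level features are closest to the query, then let each
    vote for its label with weight inversely proportional to distance."""
    d = np.linalg.norm(np.asarray(features, float) - np.asarray(query_feat, float),
                       axis=1)
    votes = defaultdict(float)
    for i in np.argsort(d)[:k]:
        votes[labels[i]] += 1.0 / (d[i] + 1e-9)
    return max(votes, key=votes.get)
```

The inverse-distance weighting means a single very close neighbour can outvote several distant ones, which matches the intuition that near-duplicate cases are the most trustworthy annotation sources.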

  15. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    Full Text Available With the advent of technology and multimedia information, digital images are increasing very quickly. Various techniques are being developed to retrieve and search the digital information or data contained in images. Traditional text-based image retrieval is not sufficient: it is time consuming, as it requires manual image annotation, and annotations differ from person to person. An alternative is the Content Based Image Retrieval (CBIR) system, which retrieves and searches for images using their contents rather than text, keywords, etc. A lot of exploration has been carried out in the area of Content Based Image Retrieval (CBIR) with various feature extraction techniques. Shape is a significant image feature, as it reflects human perception. Moreover, shape is quite simple for a user to use to define an object in an image, compared to other features such as color, texture, etc. Over and above, no descriptor applied alone will give fruitful results; by combining a descriptor with an improved classifier, one can use the positive features of both. So an attempt will be made to establish an algorithm for accurate shape feature extraction in Content Based Image Retrieval (CBIR). The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm and (c) to compare the proposed algorithm with state-of-the-art techniques.

  16. A Content-Based Parallel Image Retrieval System on Cluster Architectures

    Institute of Scientific and Technical Information of China (English)

    ZHOU Bing; SHEN Jun-yi; PENG Qin-ke

    2004-01-01

    We propose a content-based parallel image retrieval system to achieve high responsiveness. Our system is developed on cluster architectures. It has several retrieval servers to supply the service of content-based image retrieval, and it adopts the Browser/Server (B/S) mode, so users can visit the system through web pages. It uses symmetrical color-spatial features (SCSF) to represent the content of an image. The SCSF is effective and efficient for image matching because it is independent of image distortions such as rotation and flipping, and it increases the matching accuracy. The SCSF is organized by an M-tree, which speeds up the searching procedure. Our experiments show that image matching with SCSF is quick and efficient, and with the support of several retrieval servers, the system can respond to many users at the same time.

  17. Application of image visual characterization and soft feature selection in content-based image retrieval

    Science.gov (United States)

    Jarrah, Kambiz; Kyan, Matthew; Lee, Ivan; Guan, Ling

    2006-01-01

    Fourier descriptors (FFT) and Hu's seven moment invariants (HSMI) are among the most popular shape-based image descriptors and have been used in various applications, such as recognition, indexing, and retrieval. In this work, we propose to use the invariance properties of Hu's seven moment invariants, as shape feature descriptors, for relevance identification in content-based image retrieval (CBIR) systems. The purpose of relevance identification is to find a collection of images that are statistically similar to, or match with, an original query image from within a large visual database. An automatic relevance identification module in the search engine is structured around an unsupervised learning algorithm, the self-organizing tree map (SOTM). In this paper we also propose a new ranking function in the structure of the SOTM that exponentially ranks the retrieved images based on their similarities with respect to the query image. Furthermore, we propose to extend our studies to optimize the contribution of individual feature descriptors for enhancing the retrieval results. The proposed CBIR system is compatible with the different architectures of other CBIR systems in terms of its ability to adapt to different similarity matching algorithms for relevance identification purposes, whilst offering flexibility of choice for alternative optimization and weight estimation techniques. Experimental results demonstrate the satisfactory performance of the proposed CBIR system.

  18. Correspondence Analysis and Hierarchical Indexing For Content-Based Image Retrieval

    OpenAIRE

    Milanese, Ruggero; Squire, David; Pun, Thierry

    1996-01-01

    This paper describes a two-stage statistical approach supporting content-based search in image databases. The first stage performs correspondence analysis, a factor analysis method transforming image attributes into a reduced-size, uncorrelated factor space. The second stage performs ascendant hierarchical classification, an iterative clustering method which constructs a hierarchical index structure for the images of the database. Experimental results supporting the applicability of both tech...

  19. Design of Content-Based Retrieval System in Remote Sensing Image Database

    Institute of Scientific and Technical Information of China (English)

    LI Feng; ZENG Zhiming; HU Yanfeng; FU Kun

    2006-01-01

    To retrieve object regions efficaciously from a massive remote sensing image database, a model for content-based retrieval of remote sensing images is first given according to the characteristics of remote sensing image applications. Then the algorithms adopted by this model for feature extraction, multidimensional indexing, and relevance feedback are analyzed in detail. Finally, future research directions for this model are proposed.

  20. Content-based image retrieval in picture archiving and communications systems

    OpenAIRE

    Qi, Hairong; Snyder, Wesley E.

    1999-01-01

    We propose the concept of content-based image retrieval (CBIR) and demonstrate its potential use in a picture archiving and communication system (PACS). We address the importance of image retrieval in PACS and highlight the drawbacks existing in traditional textual-based retrieval. We use a digital mammogram database as our testing data to illustrate the idea of CBIR, where retrieval is carried out based on object shape, size, and brightness histogram. With a user-supplied query image, the syste...

  1. A Comparative Study of Content Based Image Retrieval Trends and Approaches

    Directory of Open Access Journals (Sweden)

    Satish Tunga

    2015-05-01

    Content Based Image Retrieval (CBIR) is an important step in addressing image storage and management problems. Improvements in imaging technology, along with the growth of the Internet, have produced a huge amount of digital multimedia in recent decades. Various methods, algorithms and systems have been proposed to solve these problems; such studies revealed the indexing and retrieval concepts that have since evolved into Content-Based Image Retrieval. CBIR systems often analyze image content via so-called low-level features for indexing and retrieval, such as color, texture and shape. To achieve significantly higher semantic performance, recent systems seek to combine low-level features with high-level features that carry perceptual information for humans. The purpose of this review is to identify the set of methods that have been used for CBIR, to discuss some of the key contributions of the current decade related to image retrieval, and to outline the main challenges in adapting existing image retrieval techniques to build useful systems that can handle real-world data. To improve the retrieval accuracy of content-based image retrieval systems, the various CBIR approaches must extract accurate, repeatable, quantitative data efficiently. In this paper, various approaches to CBIR and available algorithms are reviewed; comparative results of various techniques are presented, and their advantages, disadvantages and limitations are discussed.
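
    As an illustration of the "low-level features" such surveys refer to, a minimal colour-histogram index can be sketched in a few lines. This is a generic textbook construction, not any specific surveyed system; the bin count and similarity measure are illustrative choices:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantize each RGB channel into `bins` levels and build a joint
    colour histogram, L1-normalised so images of different sizes compare."""
    # image: H x W x 3 array of uint8 values in [0, 255]
    quant = (image.astype(np.int64) * bins) // 256            # per-channel bin index
    codes = quant[..., 0] * bins * bins + quant[..., 1] * bins + quant[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical colour distributions."""
    return np.minimum(h1, h2).sum()

# Toy query: a red-dominated image matches another red one better than a blue one.
rng = np.random.default_rng(0)
red = rng.integers(0, 256, (16, 16, 3))
red[..., 0] = 255
red2 = rng.integers(0, 256, (16, 16, 3))
red2[..., 0] = 250
blue = rng.integers(0, 256, (16, 16, 3))
blue[..., 2] = 255
q = color_histogram(red)
assert histogram_intersection(q, color_histogram(red2)) > \
       histogram_intersection(q, color_histogram(blue))
```

    Ranking a database then amounts to sorting images by intersection score against the query histogram.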

  2. A picture is worth a thousand words : content-based image retrieval techniques

    NARCIS (Netherlands)

    Thomée, Bart

    2010-01-01

    In my dissertation I investigate techniques for improving the state of the art in content-based image retrieval. To place my work into context, I highlight the current trends and challenges in my field by analyzing over 200 recent articles. Next, I propose a novel paradigm called ‘artificial imagina...

  3. Content-Based Medical Image Retrieval Based on Image Feature Projection in Relevance Feedback Level

    Directory of Open Access Journals (Sweden)

    Mohammad Behnam

    2014-04-01

    The purpose of this study is to design a content-based medical image retrieval system and provide a new method to reduce the semantic gap between visual features and semantic concepts. Generally, the performance of retrieval systems based only on visual content decreases because these features often fail to describe the high-level semantic concepts in the user’s mind. In this paper the problem is addressed with a new approach, applied at the relevance feedback level, that projects relevant and irrelevant images into a new space with low dimensionality and less overlap. For this purpose, we first change the feature space using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) techniques, and then classify the feedback images with a Support Vector Machine (SVM) classifier. The proposed framework has been evaluated on a database of 10,000 medical X-ray images from 57 semantic classes. The obtained results show that the proposed approach significantly improves the accuracy of the retrieval system.
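
    The PCA projection step the abstract describes can be sketched via the SVD; the subsequent LDA step and the SVM classification of feedback images are omitted here, so this is only the dimensionality-reduction stage of the pipeline, not the authors' full method:

```python
import numpy as np

def pca_project(X, n_components):
    """Project rows of X (one feature vector per image) onto the top
    principal components, computed via SVD of the centred data."""
    Xc = X - X.mean(axis=0)                     # centre the feature vectors
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # coordinates in the reduced space

# 100 feature vectors of dimension 64 reduced to 8 dimensions.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 64))
Z = pca_project(X, 8)
assert Z.shape == (100, 8)
```

    In a full relevance-feedback loop, the relevant/irrelevant images marked by the user would be projected this way before training the classifier.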

  4. Content-Based Medical Image Retrieval Based on Image Feature Projection in Relevance Feedback Level

    Directory of Open Access Journals (Sweden)

    Mohammad Behnam

    2014-09-01

    The purpose of this study is to design a content-based medical image retrieval system and provide a new method to reduce the semantic gap between visual features and semantic concepts. Generally, the performance of retrieval systems based only on visual content decreases because these features often fail to describe the high-level semantic concepts in the user’s mind. In this paper the problem is addressed with a new approach, applied at the relevance feedback level, that projects relevant and irrelevant images into a new space with low dimensionality and less overlap. For this purpose, we first change the feature space using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) techniques, and then classify the feedback images with a Support Vector Machine (SVM) classifier. The proposed framework has been evaluated on a database of 10,000 medical X-ray images from 57 semantic classes. The obtained results show that the proposed approach significantly improves the accuracy of the retrieval system.

  5. System architecture of a web service for Content-Based Image Retrieval

    OpenAIRE

    Giró Nieto, Xavier; Ventura, Carles; Pont Tuset, Jordi; Cortés Yuste, Silvia; Marqués Acosta, Fernando

    2010-01-01

    This paper presents the system architecture of a Content-Based Image Retrieval system implemented as a web service. The proposed solution is composed of two parts: a client running a graphical user interface for query formulation, and a server where the search engine explores an image repository. The separation of the user interface and the search engine follows a Software as a Service (SaaS) model, a type of cloud computing design where a single core system is online a...

  6. Hierarchical content-based image retrieval by dynamic indexing and guided search

    Science.gov (United States)

    You, Jane; Cheung, King H.; Liu, James; Guo, Linong

    2003-12-01

    This paper presents a new approach to content-based image retrieval by using dynamic indexing and guided search in a hierarchical structure, and by extending data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best match. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.
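
    The wavelet-based feature extraction mentioned above can be illustrated with a single-level 2-D Haar decomposition; the Haar basis is an assumed simplification, since the excerpt does not name the wavelet used:

```python
import numpy as np

def haar_level(img):
    """One level of the 2-D Haar wavelet transform: returns the
    approximation (LL) and the three detail sub-bands (LH, HL, HH)."""
    img = img.astype(float)
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def wavelet_energy_features(img):
    """Mean energy of each sub-band -- a common compact texture descriptor."""
    return np.array([np.mean(b ** 2) for b in haar_level(img)])

# A smooth ramp image has no diagonal detail: the HH energy is zero.
img = np.arange(64, dtype=float).reshape(8, 8)
feats = wavelet_energy_features(img)
assert feats.shape == (4,) and np.isclose(feats[3], 0.0)
```

    Repeating the decomposition on LL gives the multi-level hierarchy such schemes typically index on.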

  7. Content based image retrieval using local binary pattern operator and data mining techniques.

    Science.gov (United States)

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases, using feature vectors extracted from images. These feature vectors globally describe the visual content of an image, e.g., texture, colour, shape, and spatial relations. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then used to build an ultrasound image database, and a database of images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique that is widely used today. PMID:25991105
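
    The LBP operator at the heart of this work has a compact definition. The basic 3x3 variant is sketched below (the original operator, not whichever optimized variant the study ultimately selected):

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Pattern: each interior pixel gets an 8-bit
    code, one bit per neighbour that is >= the centre pixel."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]
    # 8 neighbours, clockwise from top-left
    shifts = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:], g[1:-1, 2:],
              g[2:, 2:], g[2:, 1:-1], g[2:, :-2], g[1:-1, :-2]]
    codes = np.zeros_like(c)
    for bit, n in enumerate(shifts):
        codes |= ((n >= c).astype(int) << bit)
    return codes

def lbp_histogram(gray):
    """256-bin normalised LBP histogram, used as the image feature vector."""
    h = np.bincount(lbp_image(gray).ravel(), minlength=256).astype(float)
    return h / h.sum()

gray = np.random.default_rng(2).integers(0, 256, (32, 32))
h = lbp_histogram(gray)
assert h.shape == (256,) and abs(h.sum() - 1.0) < 1e-9
```

    Comparing two images then reduces to comparing their LBP histograms with any histogram distance.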

  8. Multi technique amalgamation for enhanced information identification with content based image data.

    Science.gov (United States)

    Das, Rik; Thepade, Sudeep; Ghosh, Saurav

    2015-01-01

    Image data has emerged as a rich source of information with the proliferation of image-capturing devices and social media. Diverse applications of images in areas including biomedicine, the military, commerce and education have resulted in huge image repositories. Semantically analogous images can be fruitfully recognized by means of content-based image identification. However, the success of the technique has largely depended on the extraction of robust feature vectors from the image content. The paper introduces three different techniques of content-based feature extraction, based on image binarization, image transforms and morphological operators respectively. The techniques were tested with four public datasets, namely the Wang Dataset, Oliva Torralba (OT Scene) Dataset, Corel Dataset and Caltech Dataset. The multi-technique feature extraction process was further integrated for decision fusion of image identification to boost the recognition rate. Classification with the proposed technique has shown an average increase of 14.5 % in Precision compared to the existing techniques, and retrieval with the introduced technique has shown an average increase of 6.54 % in Precision over state-of-the-art techniques. PMID:26798574

  9. Automated ship image acquisition

    Science.gov (United States)

    Hammond, T. R.

    2008-04-01

    The experimental Automated Ship Image Acquisition System (ASIA) collects high-resolution ship photographs at a shore-based laboratory, with minimal human intervention. The system uses Automatic Identification System (AIS) data to direct a high-resolution SLR digital camera to ship targets and to identify the ships in the resulting photographs. The photo database is then searchable using the rich data fields from AIS, which include the name, type, call sign and various vessel identification numbers. The high-resolution images from ASIA are intended to provide information that can corroborate AIS reports (e.g., extract identification from the name on the hull) or provide information that has been omitted from the AIS reports (e.g., missing or incorrect hull dimensions, cargo, etc.). Once assembled into a searchable image database, the images can be used for a wide variety of marine safety and security applications. This paper documents the author's experience with the practicality of composing photographs based on AIS reports alone, describing a number of ways in which this can go wrong, from errors in the AIS reports to fixed and mobile obstructions and multiple ships in the shot. The frequency with which various errors occurred in automatically composed photographs collected in Halifax harbour in wintertime was determined by manual examination of the images. 45% of the images examined were considered of a quality sufficient to read identification markings, numbers and text off the entire ship. One of the main technical challenges for ASIA lies in automatically differentiating good and bad photographs, so that few bad ones would be shown to human users. Initial attempts at automatic photo rating showed 75% agreement with manual assessments.

  10. Prospective Study for Semantic Inter-Media Fusion in Content-Based Medical Image Retrieval

    OpenAIRE

    Teodorescu, Roxana; Racoceanu, Daniel; Leow, Wee-Kheng; Cretu, Vladimir

    2008-01-01

    One important challenge in modern Content-Based Medical Image Retrieval (CBMIR) approaches is the semantic gap, related to the complexity of medical knowledge. Among the methods able to close this gap in CBMIR, the use of medical thesauri/ontologies has interesting prospects due to the possibility of accessing relevant, continuously updated web services online and extracting structured medical semantic information in real time. The CBMIR approach proposed in this paper uses the ...

  11. Quantifying the margin sharpness of lesions on radiological images for content-based image retrieval

    International Nuclear Information System (INIS)

    ... Equivalence across deformations was assessed using Schuirmann's paired two one-sided tests. Results: In simulated images, the concordance correlation between measured gradient and actual gradient was 0.994. The mean (s.d.) NDCG scores for the retrieval of K images, K = 5, 10, and 15, were 84% (8%), 85% (7%), and 85% (7%) for CT images containing liver lesions, and 82% (7%), 84% (6%), and 85% (4%) for CT images containing lung nodules, respectively. The authors’ proposed method outperformed the two existing margin characterization methods in average NDCG scores over all K, by 1.5% and 3% in datasets containing liver lesions, and 4.5% and 5% in datasets containing lung nodules. Equivalence testing showed that the authors’ feature is more robust across all margin deformations (p < 0.05) than the two existing methods for margin sharpness characterization in both simulated and clinical datasets. Conclusions: The authors have described a new image feature to quantify the margin sharpness of lesions. It has strong correlation with known margin sharpness in simulated images and in clinical CT images containing liver lesions and lung nodules. This image feature has excellent performance for retrieving images with similar margin characteristics, suggesting potential utility, in conjunction with other lesion features, for content-based image retrieval applications.

  12. An Efficient and Generalized approach for Content Based Image Retrieval in MatLab.

    Directory of Open Access Journals (Sweden)

    Shriram K V

    2012-05-01

    There is a serious flaw in existing image search engines: they work essentially on keywords, and retrieving images based on keywords is not only inappropriate but also time consuming. Content Based Image Retrieval (CBIR) is still an active research area, which aims to retrieve images based on the content of the query image. In this paper we propose a CBIR-based image retrieval system that analyses innate properties of an image, such as color, texture and entropy, for efficient and meaningful image retrieval. The initial step is to retrieve images based on the color combination of the query image, followed by texture-based retrieval; finally, the results are filtered based on the entropy of the images. The proposed system retrieves the images from the database that are similar to the query image. Entropy-based image retrieval proved quite useful in filtering out irrelevant images, thereby improving the efficiency of the system.
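
    The entropy-based filtering step can be illustrated with the standard Shannon entropy of the grey-level histogram. This is a plausible reading of the abstract, which does not define its entropy measure:

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (in bits) of the grey-level histogram.
    Low entropy = nearly uniform image; high entropy = rich content."""
    hist = np.bincount(gray.ravel(), minlength=bins).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

flat = np.zeros((16, 16), dtype=np.uint8)                    # one grey level
noisy = np.random.default_rng(3).integers(0, 256, (16, 16))  # many grey levels
assert image_entropy(flat) == 0.0
assert image_entropy(noisy) > image_entropy(flat)
```

    Candidates whose entropy differs greatly from the query's can then be dropped from the colour/texture result set.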

  13. Texture based feature extraction methods for content based medical image retrieval systems.

    Science.gov (United States)

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems for image archiving continues to be an important research topic. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. The presented study examines the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems. The investigated algorithms are based on the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Gabor wavelets, all accepted as spatial methods. For the experiments, a database was built containing hundreds of medical images of the brain, lung, sinus, and bone. The results show that queries based on statistics obtained from the GLCM are satisfactory; however, the Gabor wavelet proved to be the most effective and accurate method. PMID:25227014
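
    A minimal GLCM computation with two Haralick-style statistics (contrast and energy) sketches the kind of features the study compares; the displacement and quantisation choices here are illustrative assumptions, not the study's settings:

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one displacement (dx, dy),
    normalised to a joint probability table."""
    q = (gray.astype(int) * levels) // 256          # quantise to `levels` grey levels
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]       # pixel
    b = q[dy:, dx:]                                 # its displaced neighbour
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)         # count co-occurrences
    return m / m.sum()

def glcm_features(gray):
    """Two classic texture statistics derived from the GLCM."""
    p = glcm(gray)
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()             # local intensity variation
    energy = (p ** 2).sum()                         # uniformity of the texture
    return contrast, energy

# A perfectly flat image has zero contrast and maximal energy.
flat = np.full((16, 16), 100, dtype=np.uint8)
contrast, energy = glcm_features(flat)
assert contrast == 0.0 and abs(energy - 1.0) < 1e-9
```

    Real GLCM descriptors average such statistics over several displacements and directions.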

  14. AN EFFICIENT/ENHANCED CONTENT BASED IMAGE RETRIEVAL FOR A COMPUTATIONAL ENGINE

    Directory of Open Access Journals (Sweden)

    K. V. Shriram

    2014-01-01

    A picture is worth a thousand words, and nowhere more so than in image processing. In recent years, advances in VLSI technology have made powerful processors abundantly available, and with RAM prices falling, databases can store artworks, medical images such as CT scans, satellite images, nature photography, album images, and images of convicts for security purposes, giving rise to massive and diverse image collections. This leads to the problem of retrieving relevant images from such huge, diverse databases. Search engines are expected to deliver accurate results quickly, and an image search engine is no exception: an image search should return the best matching images available in the database. Content Based Image Retrieval (CBIR) has been proposed to equip image search engines for this task. Using only color and texture as parameters for zeroing in on an image may not fetch the best result, and most existing systems use keyword-based search, which can yield inappropriate results. This research addresses these drawbacks of CBIR: a complete analysis of CBIR using a combination of features has been carried out, implemented and tested.

  15. Applying Content-Based Image Retrieval Techniques to Provide New Services for Tourism Industry

    Directory of Open Access Journals (Sweden)

    Zobeir Raisi

    2014-09-01

    The aim of this paper is to use networks and the Internet, applying content-based image retrieval techniques to provide new services for the tourism industry. The scenario is that a tourist who encounters an interesting subject can take an image of it with a handheld device and send it to the server as the query image for CBIR. On the server, images similar to the query are retrieved and the results are returned to the handheld device to be shown in a web browser. The tourist can then access useful information about the subject by clicking on one of the retrieved images. For this purpose, a tourism database was created, and several content-based image retrieval techniques were selected and applied to it. Among these techniques, the ‘Edge Histogram Descriptor (EHD)’ and ‘Color Layout Descriptor (CLD)’ algorithms had better retrieval performance than the others. By combining and modifying these two methods, a new CBIR algorithm is proposed for this application. Simulation results show a high retrieval performance for the proposed algorithm.
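
    A simplified edge-orientation histogram conveys the idea behind the MPEG-7 EHD the authors build on; the real descriptor works on 4x4 sub-images with five edge categories, so this global sketch is an illustrative reduction, not the standard descriptor:

```python
import numpy as np

def edge_histogram(gray, bins=4):
    """Quantise gradient orientations of the stronger edge pixels into
    `bins` orientation bins over [0, pi), L1-normalised."""
    g = gray.astype(float)
    gx = g[:, 1:] - g[:, :-1]                 # horizontal gradient
    gy = g[1:, :] - g[:-1, :]                 # vertical gradient
    gx, gy = gx[:-1, :], gy[:, :-1]           # align shapes
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # orientation, sign-insensitive
    edge = mag > mag.mean()                   # keep the stronger edges
    hist = np.histogram(ang[edge], bins=bins, range=(0, np.pi))[0].astype(float)
    return hist / max(hist.sum(), 1.0)

# Horizontal stripes produce vertical gradients: one dominant orientation bin.
stripes = np.tile(np.repeat([0, 255], 4)[:, None], (4, 32))
h = edge_histogram(stripes)
assert h[2] == h.max() and h[2] > 0.9
```

    Combining such an edge histogram with a coarse colour-layout vector mirrors the EHD + CLD pairing the paper evaluates.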

  16. Evaluation of shape indexing methods for content-based retrieval of x-ray images

    Science.gov (United States)

    Antani, Sameer; Long, L. Rodney; Thoma, George R.; Lee, Dah-Jye

    2003-01-01

    Efficient content-based image retrieval of biomedical images is a challenging problem of growing research interest. Feature representation algorithms used in indexing medical images on the pathology of interest have to address conflicting goals of reducing feature dimensionality while retaining important and often subtle biomedical features. At the Lister Hill National Center for Biomedical Communications, an R&D division of the National Library of Medicine, we are developing a content-based image retrieval system for digitized images of a collection of 17,000 cervical and lumbar x-rays taken as part of the second National Health and Nutrition Examination Survey (NHANES II). Shape is the only feature that effectively describes the various pathologies identified by medical experts as being consistently and reliably found in the image collection. In order to determine if the state of the art in shape representation methods is suitable for this application, we have evaluated representative algorithms selected from the literature. The algorithms were tested on a subset of 250 vertebral shapes. In this paper we present the requirements of an ideal algorithm, define the evaluation criteria, and present the results and our analysis of the evaluation. We observe that while the shape methods perform well on visual inspection of the overall shape boundaries, they fall short of determining similarity between vertebral shapes based on the pathology.

  17. Kernel Density Feature Points Estimator for Content-Based Image Retrieval

    CERN Document Server

    Zuva, Tranos; Ojo, Sunday O; Ngwira, Seleman M

    2012-01-01

    Research is taking place to find effective algorithms for content-based image representation and description. A substantial number of algorithms are available that use visual features (color, shape, texture). The shape feature has attracted so much attention from researchers that many shape representation and description algorithms exist in the literature. These shape representation and description algorithms are usually not application independent or robust, making them undesirable for generic shape description. This paper presents an object shape representation using a Kernel Density Feature Points Estimator (KDFPE). In this method, the density of feature points within defined rings around the centroid of the image is obtained. The KDFPE is then applied to the vector of the image. KDFPE is invariant to translation, scale and rotation. This method of image representation shows an improved retrieval rate when compared to the Density Histogram Feature Points (DHFP) method. Analytic analysis is done to justify our m...

  18. A COMPARATIVE STUDY OF DIMENSION REDUCTION TECHNIQUES FOR CONTENT-BASED IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    G. Sasikala

    2010-08-01

    Efficient and effective image retrieval techniques are desired because of the explosive growth of digital images. Content-based image retrieval is a promising approach because of its automatic indexing and retrieval based on images' semantic features and visual appearance. This paper discusses a method for dimensionality reduction called Maximum Margin Projection (MMP). MMP aims at maximizing the margin between positive and negative samples in each neighborhood, and is designed for discovering the local manifold structure. Therefore, MMP is likely to be well suited to image retrieval systems, where nearest-neighbor search is usually involved. The performance of these approaches is measured by a user evaluation. It is found that the MMP-based technique provides more functionality and capability to support information-seeking behavior, and produces better performance in searching images.

  19. A Novel Optimization-Based Approach for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Manyu Xiao

    2013-01-01

    Content-based image retrieval is nowadays one of the possible and promising solutions for managing image databases effectively. However, with large numbers of images, there still exists a great discrepancy between users’ expectations (accuracy and efficiency) and the real performance of image retrieval. In this work, new optimization strategies are proposed for vocabulary tree building, retrieval, and matching. More precisely, a new clustering strategy combining classification with the conventional K-Means method is first defined. Then a new matching technique is built to eliminate the error caused by large-scale scale-invariant feature transform (SIFT) matching. Additionally, a new unit mechanism is proposed to reduce the cost of indexing time. Finally, numerical results show that excellent performance is obtained in both accuracy and efficiency with the proposed improvements for image retrieval.
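
    The vocabulary-tree construction this abstract optimises rests on recursive K-Means over local descriptors. The plain Lloyd's iteration each tree node applies to split its descriptors into k children can be sketched as follows (the paper's combined classification/K-Means strategy is not reproduced here):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's K-Means: alternate between assigning points to the
    nearest centre and moving each centre to the mean of its points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # init from data points
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return centers, labels

# Two well-separated blobs of SIFT-like descriptors are recovered as 2 clusters.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.1, (50, 16)), rng.normal(5, 0.1, (50, 16))])
centers, labels = kmeans(X, 2)
assert labels[0] != labels[50]
assert len(set(labels[:50].tolist())) == 1 and len(set(labels[50:].tolist())) == 1
```

    Applying this recursively to each cluster's descriptors yields the vocabulary tree; quantising a query descriptor is then a root-to-leaf walk.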

  20. Exploring access to scientific literature using content-based image retrieval

    Science.gov (United States)

    Deserno, Thomas M.; Antani, Sameer; Long, Rodney

    2007-03-01

    The number of articles published in the scientific medical literature is continuously increasing, and Web access to the journals is becoming common. Databases such as SPIE Digital Library, IEEE Xplore, indices such as PubMed, and search engines such as Google provide the user with sophisticated full-text search capabilities. However, information in images and graphs within these articles is entirely disregarded. In this paper, we quantify the potential impact of using content-based image retrieval (CBIR) to access this non-text data. Based on the Journal Citations Report (JCR), the journal Radiology was selected for this study. In 2005, 734 articles were published electronically in this journal. This included 2,587 figures, which yields a rate of 3.52 figures per article. Furthermore, 56.4% of these figures are composed of several individual panels, i.e. the figure combines different images and/or graphs. According to the Image Cross-Language Evaluation Forum (ImageCLEF), the error rate of automatic identification of medical images is about 15%. Therefore, it is expected that, by applying ImageCLEF-like techniques, already 95.5% of articles could be retrieved by means of CBIR. The challenge for CBIR in scientific literature, however, is the use of local texture properties to analyze individual image panels in composite illustrations. Using local features for content-based image representation, 8.81 images per article are available, and the predicted correctness rate may increase to 98.3%. From this study, we conclude that CBIR may have a high impact in medical literature research and suggest that additional research in this area is warranted.

  1. System for accessing a collection of histology images using content-based strategies

    International Nuclear Information System (INIS)

    Histology images are an important resource for research, education and medical practice. The availability of image collections for reference purposes is limited to printed formats such as books and specialized journals. When histology image sets are published in digital formats, they are composed of some tens of images that do not represent the wide diversity of biological structures found in fundamental tissues; a complete histology image collection available to the general public would therefore have a great impact on research and education in areas such as medicine, biology and the natural sciences. This work presents the acquisition process of a histology image collection with 20,000 samples in digital format, from tissue processing to digital image capture. The main purpose of collecting these images is to make them available as reference material to the academic community. In addition, this paper presents the design and architecture of a system to query and explore the image collection, using content-based image retrieval tools and text-based search on the annotations provided by experts. The system also offers novel image visualization methods that allow easy identification of interesting images among hundreds of candidates. The system has been developed using a service-oriented architecture and allows web-based access at http://www.informed.unal.edu.co

  2. Content Based Image Retrieval using Novel Gaussian Fuzzy Feed Forward-Neural Network

    Directory of Open Access Journals (Sweden)

    C. R.B. Durai

    2011-01-01

    Problem statement: With the extensive digitization of images, diagrams and paintings, traditional keyword-based search has been found inefficient for retrieval of the required data. A Content-Based Image Retrieval (CBIR) system responds to image queries as input and relies on image content, using techniques from computer vision and image processing to interpret and understand it, while using techniques from information retrieval and databases to rapidly locate and retrieve images suiting an input query. CBIR finds extensive applications in the field of medicine, as it assists doctors in making better decisions by consulting the CBIR system and gaining confidence. Approach: Various methods have been proposed for CBIR using low-level image features such as histogram, color layout and texture, as well as analysis of the image in the frequency domain. Similarly, various classification algorithms such as the Naïve Bayes classifier, Support Vector Machines, decision tree induction algorithms and neural network based classifiers have been studied extensively. We propose to extract features from an image using the Discrete Cosine Transform, select relevant features using information gain, and classify with a Gaussian Fuzzy Feed Forward Neural Network algorithm. Results and Conclusion: We applied the proposed procedure to 180 brain MRI images, of which 72 were used for testing and the remaining for training. The classification accuracy obtained was 95.83% for a three-class problem. This research focused on a narrow search; further investigation is needed to evaluate larger classes.

  3. SVM Based Navigation Patterns for Efficient Relevance Feedback in Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Vaishali Meshram

    2012-07-01

    Image retrieval based on image content has become a hot topic in the fields of image processing and computer vision, and content-based image retrieval (CBIR) is the basis of image retrieval systems. Unlike traditional database queries, content-based multimedia retrieval queries are imprecise in nature, which makes it difficult for users to express their exact information need in the form of a precise query. To obtain more useful results, relevance feedback techniques were incorporated into CBIR so that more precise results can be obtained by taking the user’s feedback into account. However, existing relevance-feedback-based CBIR methods usually require a number of feedback iterations to produce refined search results, especially on a large-scale image database, which is impractical and inefficient in real applications. Research on ways to extend and improve query methods for image databases is widespread; we have developed the QBIC (Query by Image Content) system to explore content-based retrieval methods. To achieve high efficiency and effectiveness in CBIR, we use two types of methods: SVM (support vector machine) classification and NPRF (navigation-pattern-based relevance feedback). An SVM classifier, serving as a category predictor for query and database images, is exploited first to filter out irrelevant images by their different low-level, concept and keypoint-based features. This reduces the size of the query search in the database, after which the NPRF algorithm and refinement strategies are applied for further extraction. In terms of efficiency, the iterations of feedback are reduced by using the navigation patterns discovered from the user query log. In terms of effectiveness, the proposed search algorithm NPRF Search makes use of the discovered navigation patterns and three kinds of query refinement strategies, Query Point Movement (QPM), Query Reweighting (QR), and Query Expansion (QEX), to converge the...
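
    Of the three refinement strategies named, Query Point Movement has a classic Rocchio-style form, sketched here with conventional default weights rather than the paper's own settings:

```python
import numpy as np

def query_point_movement(query, relevant, irrelevant,
                         alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio-style Query Point Movement (QPM): shift the query vector
    toward the centroid of relevant feedback images and away from the
    centroid of irrelevant ones.  alpha/beta/gamma are conventional
    defaults, assumed for illustration."""
    new_q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        new_q = new_q + beta * np.mean(relevant, axis=0)
    if len(irrelevant):
        new_q = new_q - gamma * np.mean(irrelevant, axis=0)
    return new_q

q = np.zeros(4)
rel = np.array([[1.0, 0, 0, 0], [1, 1, 0, 0]])   # images the user accepted
irr = np.array([[0.0, 0, 1, 1]])                 # images the user rejected
q2 = query_point_movement(q, rel, irr)
assert q2[0] > 0 and q2[2] < 0   # moved toward relevant, away from irrelevant
```

    Each feedback round re-runs the search with the moved query point, so fewer iterations are needed when the movement is well directed.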

  4. A COMPARATIVE ANALYSIS OF RETRIEVAL TECHNIQUES IN CONTENT BASED IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Mohini. P. Sardey

    2015-08-01

    Basic visual features such as color, shape and texture are used in Content-Based Image Retrieval (CBIR) to find images similar to a query image, or a sub-region of it, in an image database. To improve query results, relevance feedback is often used in CBIR to help users express their preferences. In this paper, a new approach to image retrieval is proposed based on features such as the color histogram, eigenvalues and match points. Images from various types of database are first categorized using edge detection techniques. Once an image's category is identified, the search is restricted to the corresponding database and all related images are displayed, which saves retrieval time. Further, to retrieve the precise query image, any of the three techniques is applied, and the techniques are compared with respect to average retrieval time. The eigenvalue technique was found to be the best of the three.

  5. A Survey On: Content Based Image Retrieval Systems Using Clustering Techniques For Large Data sets

    Directory of Open Access Journals (Sweden)

    Monika Jain

    2011-12-01

    Full Text Available Content-based image retrieval (CBIR) is a new but widely adopted method for finding images in vast and unannotated image databases. As networks and multimedia technologies become more popular, users are no longer satisfied with traditional information retrieval techniques, so content-based image retrieval (CBIR) is becoming a source of exact and fast retrieval. In recent years, a variety of techniques have been developed to improve the performance of CBIR. Data clustering is an unsupervised method for extracting hidden patterns from huge data sets. With large data sets there is a possibility of high dimensionality, and achieving both accuracy and efficiency for high-dimensional data sets with an enormous number of samples is challenging. In this paper, clustering techniques are discussed and analysed. We also propose a method, HDK, that uses more than one clustering technique to improve the performance of CBIR. This method makes use of hierarchical and divide-and-conquer K-Means clustering techniques, with equivalency and compatible relation concepts, to improve the performance of K-Means on high-dimensional datasets. It also introduces features like color, texture and shape for an accurate and effective retrieval system.
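The K-Means building block the abstract refers to can be sketched as plain Lloyd's iteration over feature vectors; this is a minimal unoptimized illustration, not the paper's HDK method, and the two-blob data is invented for demonstration:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's K-Means on tuples of floats (illustrative, unoptimized)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign every point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated "feature vector" blobs.
blob_a = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0)]
blob_b = [(10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
centers, clusters = kmeans(blob_a + blob_b, k=2)
```

In a CBIR setting the points would be image feature vectors, and clusters restrict the search to a fraction of the database.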

  6. Content Based Image Retrieval Approach Based on Top-Hat Transform And Modified Local Binary Patterns

    Directory of Open Access Journals (Sweden)

    Mohammad Saberi

    2012-12-01

    Full Text Available In this paper a robust approach is proposed for content-based image retrieval (CBIR) using texture analysis techniques. The proposed approach includes three main steps. In the first, shape detection is performed based on the Top-Hat transform to detect and crop the object part of the image. The second step is a texture feature representation algorithm using color local binary patterns (CLBP) and local variance features. Finally, to retrieve the most closely matching images to the query, the log-likelihood ratio is used. The performance of the proposed approach is evaluated on the Corel and Simplicity image sets and compared with other well-known approaches in terms of precision and recall, which shows the superiority of the proposed approach. Low noise sensitivity, rotation invariance, shift invariance, gray-scale invariance and low computational complexity are among its other advantages.
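The local binary pattern idea underlying CLBP can be sketched on a single gray-scale channel as follows; the neighbour ordering and the toy image are illustrative assumptions, and the paper's CLBP operates on color channels rather than this plain single-channel form:

```python
def lbp_code(img, y, x):
    """8-neighbour local binary pattern code for the pixel at (y, x)."""
    c = img[y][x]
    nbrs = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1], img[y][x + 1],
            img[y + 1][x + 1], img[y + 1][x], img[y + 1][x - 1], img[y][x - 1]]
    # Each neighbour >= centre contributes one bit to an 8-bit code.
    return sum(1 << i for i, v in enumerate(nbrs) if v >= c)

def lbp_histogram(img):
    """Histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

flat = [[5] * 4 for _ in range(4)]  # uniform region: every interior code is 255
```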

  7. Feature Extraction with Ordered Mean Values for Content Based Image Classification

    Directory of Open Access Journals (Sweden)

    Sudeep Thepade

    2014-01-01

    Full Text Available Categorization of images into meaningful classes by efficient extraction of feature vectors from image datasets has long depended on feature selection techniques. Traditionally, feature vector extraction has been carried out using different methods of image binarization with a global, local, or mean threshold. This paper proposes a novel technique for feature extraction based on ordered mean values. The proposed technique was combined with feature extraction using the discrete sine transform (DST) for better classification results using multitechnique fusion. The novel methodology was compared to the traditional feature extraction techniques for content-based image classification. Three benchmark datasets, namely the Wang dataset, the Oliva and Torralba (OT-Scene) dataset, and the Caltech dataset, were used for evaluation. The evaluation clearly revealed the superiority of the proposed fusion technique, with ordered mean values and the discrete sine transform, over the popular single-view feature extraction methodologies for classification.
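The traditional mean-threshold baseline the abstract contrasts against can be sketched in a few lines; this illustrates only the baseline (binarize at the global mean, then summarize each side of the threshold), not the paper's ordered-mean-values technique, and the two-value feature shape is an assumption for illustration:

```python
def mean_threshold_features(img):
    """Two-value feature from global mean-threshold binarization:
    (mean of pixels at/above the threshold, mean of pixels below it)."""
    flat = [p for row in img for p in row]
    t = sum(flat) / len(flat)  # global mean threshold
    hi = [p for p in flat if p >= t]
    lo = [p for p in flat if p < t]
    return (sum(hi) / len(hi) if hi else 0.0,
            sum(lo) / len(lo) if lo else 0.0)
```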

  8. Implementation and evaluation of a medical image management system with content-based retrieval support

    International Nuclear Information System (INIS)

    Objective: the present paper describes the implementation and evaluation of a medical image management system with content-based retrieval support (PACS-CBIR), integrating modules for image acquisition, storage and distribution, text retrieval by keyword, and image retrieval by similarity. Materials and methods: internet-compatible technologies were utilized for the system implementation, with freeware and the C++, PHP and Java languages on a Linux platform. There is a DICOM-compatible image management module and two query modules, one based on text and the other on the similarity of image texture attributes. Results: the results demonstrate appropriate image management and storage; the image retrieval time, always < 15 s, was considered good by users. The evaluation of retrieval by similarity demonstrated that the selected image feature extractor allowed the sorting of images according to anatomical areas. Conclusion: based on these results, one can conclude that the PACS-CBIR implementation is feasible. The system demonstrated DICOM compatibility and can be integrated with the local information system. The similar-image retrieval functionality can be enhanced by the introduction of further descriptors. (author)

  9. Content-Based Image Retrieval System for Optical Fiber Sensor Information Processing

    Directory of Open Access Journals (Sweden)

    Madhusudhan S., Channakeshava K. R., Dr. T. Rangaswamy

    2014-06-01

    Full Text Available Fiber-reinforced polymer (FRP) materials are finding new application areas every day. Monitoring of FRP materials is essential to make structures fail-safe. Researchers have developed many methods for structural health monitoring (SHM) of FRP structures using optical fiber sensors. The interrogation systems used for processing optical fiber sensor information in these SHM methods are very complex and expensive. In this regard, a unique interrogation method is emphasized in this paper. The proposed work involves developing the interrogation system with the aid of content-based image retrieval (CBIR) using MATLAB.

  10. Automating Shallow Seismic Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Steeples, Don W.

    2004-12-09

    This seven-year, shallow-seismic reflection research project had the aim of improving geophysical imaging of possible contaminant flow paths. Thousands of chemically contaminated sites exist in the United States, including at least 3,700 at Department of Energy (DOE) facilities. Imaging technologies such as shallow seismic reflection (SSR) and ground-penetrating radar (GPR) sometimes are capable of identifying geologic conditions that might indicate preferential contaminant-flow paths. Historically, SSR has been used very little at depths shallower than 30 m, and even more rarely at depths of 10 m or less. Conversely, GPR is rarely useful at depths greater than 10 m, especially in areas where clay or other electrically conductive materials are present near the surface. Efforts to image the cone of depression around a pumping well using seismic methods were only partially successful (for complete references of all research results, see the full Final Technical Report, DOE/ER/14826-F), but peripheral results included development of SSR methods for depths shallower than one meter, a depth range that had not been achieved before. Imaging at such shallow depths, however, requires geophone intervals of the order of 10 cm or less, which makes such surveys very expensive in terms of human time and effort. We also showed that SSR and GPR could be used in a complementary fashion to image the same volume of earth at very shallow depths. The primary research focus of the second three-year period of funding was to develop and demonstrate an automated method of conducting two-dimensional (2D) shallow-seismic surveys with the goal of saving time, effort, and money. Tests involving the second generation of the hydraulic geophone-planting device dubbed the "Autojuggie" showed that large numbers of geophones can be placed quickly and automatically and can acquire high-quality data, although not under rough topographic conditions. In some easy

  11. A Comparative Study of Dimension Reduction Techniques for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    G. Sasikala

    2010-09-01

    Full Text Available Efficient and effective image retrieval techniques are desired because of the explosive growth of digital images. Content-based image retrieval is a promising approach because of its automatic indexing and retrieval based on semantic features and visual appearance. This paper discusses a method for dimensionality reduction called Maximum Margin Projection (MMP). MMP aims at maximizing the margin between positive and negative samples in each neighborhood. It is designed for discovering the local manifold structure. Therefore, MMP is likely to be more suitable for image retrieval systems, where nearest-neighbor search is usually involved. The performance of these approaches is measured by a user evaluation. It is found that the MMP-based technique provides more functionality and capability to support the features of information-seeking behavior and produces better performance in searching images.
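The nearest-neighbor search step mentioned above is, at its simplest, a brute-force ranking by distance in the (reduced) feature space; the tiny database below is invented for illustration and the squared Euclidean metric is an assumption:

```python
def nearest_neighbors(query, database, k=3):
    """Brute-force k-nearest-neighbour search over (name, feature-vector) pairs."""
    def sqdist(a, b):
        # Squared Euclidean distance; monotone in the true distance, so
        # ranking is unaffected and the sqrt can be skipped.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return sorted(database, key=lambda item: sqdist(query, item[1]))[:k]

db = [("img_a", (0.0, 0.0)), ("img_b", (1.0, 1.0)), ("img_c", (10.0, 10.0))]
top2 = nearest_neighbors((0.0, 1.0), db, k=2)
```

Dimensionality reduction such as MMP shrinks the vectors this search operates on, which is what makes the lookup tractable at scale.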

  12. An Extended Image Hashing Concept: Content-Based Fingerprinting Using FJLT

    Directory of Open Access Journals (Sweden)

    Xudong Lv

    2009-01-01

    Full Text Available Dimension reduction techniques, such as singular value decomposition (SVD) and nonnegative matrix factorization (NMF), have been successfully applied in image hashing by retaining the essential features of the original image matrix. However, a concern of great importance in image hashing is that no single solution is optimal and robust against all types of attacks. The contribution of this paper is threefold. First, we introduce a recently proposed dimension reduction technique, referred to as the Fast Johnson-Lindenstrauss Transform (FJLT), and propose its use for image hashing. FJLT shares the low-distortion characteristics of a random projection but requires much lower computational complexity. Second, we incorporate the Fourier-Mellin transform into FJLT hashing to improve its performance under rotation attacks. Third, we propose a new concept, the content-based fingerprint, as an extension of image hashing that combines different hashes. Such a combined approach is capable of tackling all types of attacks and thus can yield a better overall performance in multimedia identification. To demonstrate the superior performance of the proposed schemes, receiver operating characteristic analysis over a large image database and a large class of distortions is performed and compared with state-of-the-art image hashing using NMF.
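The random-projection family that FJLT accelerates can be sketched with a plain Gaussian projection followed by sign quantization; this is a simplified stand-in for FJLT (which adds a fast Hadamard step for speed), and the hash length, seed, and sample vector are illustrative assumptions:

```python
import random

def random_projection_hash(features, bits=8, seed=42):
    """Sign-quantized Gaussian random projection: a simplified stand-in
    for the FJLT-based image hashing described in the abstract."""
    rng = random.Random(seed)
    # One random direction per output bit; fixed seed makes the hash
    # reproducible across calls, as a shared projection matrix would be.
    proj = [[rng.gauss(0, 1) for _ in features] for _ in range(bits)]
    return [1 if sum(w * f for w, f in zip(row, features)) >= 0 else 0
            for row in proj]

vec = [0.3, -1.2, 2.5, 0.7]
```

Note that sign quantization makes the hash invariant to positive scaling of the feature vector, one source of robustness to global intensity changes.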

  13. Prospective Study for Semantic Inter-Media Fusion in Content-Based Medical Image Retrieval

    CERN Document Server

    Teodorescu, Roxana; Leow, Wee-Kheng; Cretu, Vladimir

    2008-01-01

    One important challenge in modern Content-Based Medical Image Retrieval (CBMIR) approaches is the semantic gap, related to the complexity of medical knowledge. Among the methods able to close this gap in CBMIR, the use of medical thesauri/ontologies has interesting perspectives, due to the possibility of accessing relevant, regularly updated online web services and extracting real-time, semantically structured medical information. The CBMIR approach proposed in this paper uses the Unified Medical Language System's (UMLS) Metathesaurus to perform a semantic indexing and fusion of medical media. This fusion operates before the query processing (retrieval) and works at a UMLS-compliant conceptual indexing level. Our purpose is to study various techniques related to semantic data alignment, preprocessing, fusion, clustering and retrieval, evaluating the various techniques and highlighting future research directions. The alignment and the preprocessing are based on partial text/image retrieval feedb...

  14. Integration of Color and Local Derivative Pattern Features for Content-Based Image Indexing and Retrieval

    Science.gov (United States)

    Vipparthi, Santosh Kumar; Nagar, Shyam Krishna

    2015-09-01

    This paper presents two new feature descriptors for content-based image retrieval (CBIR) applications. The two proposed descriptors are named color local derivative patterns (CLDP) and inter-color local derivative patterns (ICLDP). To reduce the computational complexity, uniform patterns are applied to both CLDP and ICLDP, generating uniform CLDP (CLDPu2) and uniform ICLDP (ICLDPu2), respectively. The proposed descriptors are able to exploit individual (R, G and B) spectral channel information and correlating pairs (RG, GB, BR, etc.) of spectral channel information. The retrieval performance of the proposed descriptors (CLDP and ICLDP) is tested in two experiments on the Corel-5000 and Corel-10000 benchmark databases. The results show a significant improvement in terms of precision, average retrieval precision (ARP), recall and average retrieval rate (ARR) as compared to local binary patterns (LBP), local derivative patterns (LDP) and other state-of-the-art techniques for image retrieval.
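The "uniform pattern" reduction applied to CLDP and ICLDP uses the standard circular-transition test: a pattern is uniform if its bit string, read circularly, changes value at most twice. A minimal sketch (the 8-bit case; for 8 bits exactly 58 of the 256 codes are uniform, which is why uniform variants shrink the descriptor length):

```python
def is_uniform(code, bits=8):
    """True if the circular bit pattern has at most two 0/1 transitions --
    the standard "uniform pattern" test used to shrink LBP-style descriptors."""
    transitions = 0
    for i in range(bits):
        # Compare bit i with its circular successor.
        if ((code >> i) & 1) != ((code >> ((i + 1) % bits)) & 1):
            transitions += 1
    return transitions <= 2
```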

  15. A web-accessible content-based cervicographic image retrieval system

    Science.gov (United States)

    Xue, Zhiyun; Long, L. Rodney; Antani, Sameer; Jeronimo, Jose; Thoma, George R.

    2008-03-01

    Content-based image retrieval (CBIR) is the process of retrieving images by directly using image visual characteristics. In this paper, we present a prototype system implemented for CBIR for a uterine cervix image (cervigram) database. This cervigram database is a part of data collected in a multi-year longitudinal effort by the National Cancer Institute (NCI), and archived by the National Library of Medicine (NLM), for the study of the origins of, and factors related to, cervical precancer/cancer. Users may access the system with any Web browser. The system is built with a distributed architecture which is modular and expandable; the user interface is decoupled from the core indexing and retrieving algorithms, and uses open communication standards and open source software. The system tries to bridge the gap between a user's semantic understanding and image feature representation, by incorporating the user's knowledge. Given a user-specified query region, the system returns the most similar regions from the database, with respect to attributes of color, texture, and size. Experimental evaluation of the retrieval performance of the system on "groundtruth" test data illustrates its feasibility to serve as a possible research tool to aid the study of the visual characteristics of cervical neoplasia.

  16. Content Based Image Retrieval of Ultrasound Liver Diseases Based on Hybrid Approach

    Directory of Open Access Journals (Sweden)

    R. Suganya

    2012-01-01

    Full Text Available Problem statement: In the past few years, immense improvement has been achieved in the field of Content-Based Image Retrieval (CBIR). Nevertheless, existing systems still fail when applied to medical image databases. Simple feature-extraction algorithms that operate on the entire image to characterize color, texture, or shape cannot be related to the descriptive semantics of medical knowledge that human experts extract from images. Approach: In this study, we present a hybrid approach, a support vector machine combined with relevance feedback, for the retrieval of liver diseases from Ultrasound (US) images. SVM and RF are supervised active learning techniques used to improve the effectiveness of the retrieval system. Three kinds of liver diseases are identified: cyst, alcoholic cirrhosis and carcinoma. The diagnosis scheme includes four steps: image registration, feature extraction, feature selection and image retrieval. First, the ultrasound images are registered in the database based on the modality. Then the features, derived from first-order statistics, the gray-level co-occurrence matrix and fractal geometry, are obtained from the Pathology-Bearing Regions (PBRs) among the normal and abnormal ultrasound images. The Correlation-Based Feature Selection (CFS) algorithm selects the relevant features for the specific diseases and also reduces the dimensionality of the space for classification. Finally, we implement our hybrid approach for retrieval of specific diseases from the database. Results: This hybrid approach takes a query from the user and retrieves both positive and negative samples from the database; feedback from the radiologist in each round helps improve the retrieval of correct images. Conclusion: The hybrid approach (SVM+RF) offers several benefits compared to existing neural-network-based CBIR systems for medicine. Fractal geometry in feature extraction plays crucial role in

  17. Content-Based Image Retrieval by Metric Learning From Radiology Reports: Application to Interstitial Lung Diseases.

    Science.gov (United States)

    Ramos, José; Kockelkorn, Thessa T J P; Ramos, Isabel; Ramos, Rui; Grutters, Jan; Viergever, Max A; van Ginneken, Bram; Campilho, Aurélio

    2016-01-01

    Content-based image retrieval (CBIR) is a search technology that could aid medical diagnosis by retrieving and presenting earlier reported cases that are related to the one being diagnosed. To retrieve relevant cases, CBIR systems depend on supervised learning to map low-level image contents to high-level diagnostic concepts. However, the annotation by medical doctors for training and evaluation purposes is a difficult and time-consuming task, which restricts the supervised learning phase to specific CBIR problems of well-defined clinical applications. This paper proposes a new technique that automatically learns the similarity between the several exams from textual distances extracted from radiology reports, thereby successfully reducing the number of annotations needed. Our method first infers the relation between patients by using information retrieval techniques to determine the textual distances between patient radiology reports. These distances are subsequently used to supervise a metric learning algorithm, that transforms the image space accordingly to textual distances. CBIR systems with different image descriptions and different levels of medical annotations were evaluated, with and without supervision from textual distances, using a database of computer tomography scans of patients with interstitial lung diseases. The proposed method consistently improves CBIR mean average precision, with improvements that can reach 38%, and more marked gains for small annotation sets. Given the overall availability of radiology reports in picture archiving and communication systems, the proposed approach can be broadly applied to CBIR systems in different medical problems, and may facilitate the introduction of CBIR in clinical practice. PMID:25438332
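The textual distances the method derives from radiology reports can be illustrated with the simplest case, a bag-of-words cosine distance between two report texts; the paper uses richer information retrieval techniques, so this is only a minimal sketch, and the sample report strings are invented:

```python
import math
from collections import Counter

def report_distance(report_a, report_b):
    """Bag-of-words cosine distance between two free-text reports:
    0.0 for identical term distributions, 1.0 for disjoint vocabularies."""
    a = Counter(report_a.lower().split())
    b = Counter(report_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return 1.0 - dot / norm if norm else 1.0
```

Distances of this kind can then supervise a metric-learning step on the image features, as the abstract describes.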

  18. Content-based image retrieval in radiology: analysis of variability in human perception of similarity.

    Science.gov (United States)

    Faruque, Jessica; Beaulieu, Christopher F; Rosenberg, Jarrett; Rubin, Daniel L; Yao, Dorcas; Napel, Sandy

    2015-04-01

    We aim to develop a better understanding of perception of similarity in focal computed tomography (CT) liver images to determine the feasibility of techniques for developing reference sets for training and validating content-based image retrieval systems. In an observer study, four radiologists and six nonradiologists assessed overall similarity and similarity in 5 image features in 136 pairs of focal CT liver lesions. We computed intra- and inter-reader agreements in these similarity ratings and viewed the distributions of the ratings. The readers' ratings of overall similarity and similarity in each feature primarily appeared to be bimodally distributed. Median Kappa scores for intra-reader agreement ranged from 0.57 to 0.86 in the five features and from 0.72 to 0.82 for overall similarity. Median Kappa scores for inter-reader agreement ranged from 0.24 to 0.58 in the five features and were 0.39 for overall similarity. There was no significant difference in agreement for radiologists and nonradiologists. Our results show that developing perceptual similarity reference standards is a complex task. Moderate to high inter-reader variability precludes ease of dividing up the workload of rating perceptual similarity among many readers, while low intra-reader variability may make it possible to acquire large volumes of data by asking readers to view image pairs over many sessions. PMID:26158112
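The Kappa scores reported above for intra- and inter-reader agreement are Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch for two raters' categorical labels (the example label lists are invented):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(ratings_a)
    # Observed proportion of matching labels.
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    ca, cb = Counter(ratings_a), Counter(ratings_b)
    p_expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    if p_expected == 1:
        return 1.0
    return (p_observed - p_expected) / (1 - p_expected)
```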

  19. Three-dimensional Content-Based Cardiac Image Retrieval using global and local descriptors.

    Science.gov (United States)

    Bergamasco, Leila C C; Nunes, Fátima L S

    2015-01-01

    The increase in volume of medical images generated and stored has created difficulties in accurate image retrieval. An alternative is to generate three-dimensional (3D) models from such medical images and use them in the search. Some of the main cardiac illnesses, such as Congestive Heart Failure (CHF), have deformation in the heart's shape as one of the main symptoms, which can be identified faster in a 3D object than in slices. This article presents techniques developed to retrieve 3D cardiac models using global and local descriptors within a content-based image retrieval system. These techniques were applied in pre-classified 3D models with and without the CHF disease and they were evaluated by using Precision vs. Recall metric. We observed that local descriptors achieved better results than a global descriptor, reaching 85% of accuracy. The results confirmed the potential of using 3D models retrieval in the medical context to aid in the diagnosis. PMID:26958280
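The Precision vs. Recall metric used in the evaluation above reduces, for a single result set, to two ratios over the retrieved and relevant sets; the model identifiers below are invented for illustration:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of one retrieval result set.

    precision = fraction of retrieved items that are relevant;
    recall    = fraction of relevant items that were retrieved.
    """
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

p, r = precision_recall(retrieved=["m1", "m2", "m3", "m4"],
                        relevant=["m1", "m3", "m7"])
```

A precision-recall curve is obtained by sweeping the number of retrieved items and plotting these two values at each cutoff.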

  20. Endowing a Content-Based Medical Image Retrieval System with Perceptual Similarity Using Ensemble Strategy.

    Science.gov (United States)

    Bedo, Marcos Vinicius Naves; Pereira Dos Santos, Davi; Ponciano-Silva, Marcelo; de Azevedo-Marques, Paulo Mazzoncini; Ferreira de Carvalho, André Ponce de León; Traina, Caetano

    2016-02-01

    Content-based medical image retrieval (CBMIR) is a powerful resource to improve differential computer-aided diagnosis. The major problem with CBMIR applications is the semantic gap, a situation in which the system does not follow the users' sense of similarity. This gap can be bridged by the adequate modeling of similarity queries, which ultimately depends on the combination of feature extractor methods and distance functions. In this study, such combinations are referred to as perceptual parameters, as they impact on how images are compared. In a CBMIR, the perceptual parameters must be manually set by the users, which imposes a heavy burden on the specialists; otherwise, the system will follow a predefined sense of similarity. This paper presents a novel approach to endow a CBMIR with a proper sense of similarity, in which the system defines the perceptual parameter depending on the query element. The method employs ensemble strategy, where an extreme learning machine acts as a meta-learner and identifies the most suitable perceptual parameter according to a given query image. This parameter defines the search space for the similarity query that retrieves the most similar images. An instance-based learning classifier labels the query image following the query result set. As the concept implementation, we integrated the approach into a mammogram CBMIR. For each query image, the resulting tool provided a complete second opinion, including lesion class, system certainty degree, and set of most similar images. Extensive experiments on a large mammogram dataset showed that our proposal achieved a hit ratio up to 10% higher than the traditional CBMIR approach without requiring external parameters from the users. Our database-driven solution was also up to 25% faster than content retrieval traditional approaches. PMID:26259520

  1. Content-based image retrieval of digitized histopathology in boosted spectrally embedded spaces

    Directory of Open Access Journals (Sweden)

    Akshay Sridhar

    2015-01-01

    Full Text Available Context: Content-based image retrieval (CBIR) systems allow for the retrieval of images from within a database that are similar in visual content to a query image. This is useful for digital pathology, where text-based descriptors alone might be inadequate to accurately describe image content. By representing images via a set of quantitative image descriptors, the similarity between a query image and archived, annotated images in a database can be computed and the most similar images retrieved. Recently, non-linear dimensionality reduction methods have become popular for embedding high-dimensional data into a reduced-dimensional space while preserving local object adjacencies, thereby allowing object similarity to be determined more accurately in the reduced-dimensional space. However, most dimensionality reduction methods implicitly assume, in computing the reduced-dimensional representation, that all features are equally important. Aims: In this paper we present boosted spectral embedding (BoSE), which utilizes a boosted distance metric to selectively weight individual features (based on training data) and subsequently map the data into a reduced-dimensional space. Settings and Design: BoSE is evaluated against spectral embedding (SE), which employs equal feature weighting, in the context of CBIR of digitized prostate and breast cancer histopathology images. Materials and Methods: The following datasets, comprising a total of 154 hematoxylin- and eosin-stained histopathology images, were used: (1) prostate cancer histopathology (benign vs. malignant), (2) estrogen receptor positive (ER+) breast cancer histopathology (low vs. high grade), and (3) HER2+ breast cancer histopathology (low vs. high levels of lymphocytic infiltration). Statistical Analysis Used: We plotted and calculated the area under precision-recall curves (AUPRC) and calculated classification accuracy using the Random Forest classifier.
Results : BoSE outperformed SE both

  2. Color Histogram and DBC Co-Occurrence Matrix for Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    K. Prasanthi Jasmine

    2014-12-01

    Full Text Available This paper presents the integration of a color histogram and a DBC co-occurrence matrix for content-based image retrieval. The existing DBC collects the directional edges, which are calculated by applying first-order derivatives in the 0º, 45º, 90º and 135º directions. The feature vector length of DBC for a particular direction is 512, which is too long for image retrieval. To avoid this problem, we collect the directional edges by excluding the center pixel and further apply the rotation-invariance property. We then calculate the co-occurrence matrix to form the feature vector. Finally, the HSV color histogram and the DBC co-occurrence matrix are integrated to form the feature database. The retrieval results of the proposed method have been tested in three experiments on the Brodatz and MIT VisTex texture databases and the Corel-1000 natural image database. The results show a significant improvement in terms of the evaluation measures as compared to LBP, DBC and other transform-domain features.

  3. Computer-aided diagnostics of screening mammography using content-based image retrieval

    Science.gov (United States)

    Deserno, Thomas M.; Soiron, Michael; de Oliveira, Júlia E. E.; de A. Araújo, Arnaldo

    2012-03-01

    Breast cancer is one of the main causes of death among women in occidental countries. In the last years, screening mammography has been established worldwide for early detection of breast cancer, and computer-aided diagnostics (CAD) is being developed to assist physicians reading mammograms. A promising method for CAD is content-based image retrieval (CBIR). Recently, we have developed a classification scheme of suspicious tissue pattern based on the support vector machine (SVM). In this paper, we continue moving towards automatic CAD of screening mammography. The experiments are based on in total 10,509 radiographs that have been collected from different sources. From this, 3,375 images are provided with one and 430 radiographs with more than one chain code annotation of cancerous regions. In different experiments, this data is divided into 12 and 20 classes, distinguishing between four categories of tissue density, three categories of pathology and in the 20 class problem two categories of different types of lesions. Balancing the number of images in each class yields 233 and 45 images remaining in each of the 12 and 20 classes, respectively. Using a two-dimensional principal component analysis, features are extracted from small patches of 128 x 128 pixels and classified by means of a SVM. Overall, the accuracy of the raw classification was 61.6 % and 52.1 % for the 12 and the 20 class problem, respectively. The confusion matrices are assessed for detailed analysis. Furthermore, an implementation of a SVM-based CBIR system for CADx in screening mammography is presented. In conclusion, with a smarter patch extraction, the CBIR approach might reach precision rates that are helpful for the physicians. This, however, needs more comprehensive evaluation on clinical data.
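The overall accuracies quoted above (61.6% and 52.1%) are the kind of figure read off a confusion matrix: correct classifications on the diagonal divided by the total. A minimal sketch (the 2x2 example matrix is invented, not the paper's 12- or 20-class data):

```python
def accuracy_from_confusion(cm):
    """Overall accuracy from a square confusion matrix
    (rows = true class, columns = predicted class)."""
    correct = sum(cm[i][i] for i in range(len(cm)))  # diagonal = correct
    total = sum(sum(row) for row in cm)
    return correct / total
```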

  4. Indexing of Content-Based Image Retrieval System with Image Understanding Approach

    Institute of Scientific and Technical Information of China (English)

    李学龙; 刘政凯; 俞能海

    2003-01-01

    This paper presents a novel, efficient semantic image classification algorithm for high-level feature indexing of high-dimensional image databases. Experiments show that the algorithm performs well. The sizes of the training set and the test set are 7,537 and 5,000, respectively. Based on this theory, another ground truth is built with 12,000 images, which are divided into three classes: city, landscape and person; the overall classification accuracy is 88.92%. Meanwhile, some preliminary results are presented for image understanding based on semantic image classification and low-level features. The ground truth for the experiments is built with images from the Corel database, photos and some well-known face databases.

  5. AN EFFICIENT CONTENT BASED IMAGE RETRIEVAL USING COLOR AND TEXTURE OF IMAGE SUBBLOCKS

    Directory of Open Access Journals (Sweden)

    CH.KAVITHA,

    2011-02-01

    Full Text Available Image retrieval is an active research area in image processing, pattern recognition, and computer vision. To retrieve more similar images from digital image databases effectively, this paper uses local HSV color and gray-level co-occurrence matrix (GLCM) texture features. The image is divided into sub-blocks of equal size, and the color and texture features of each sub-block are computed. The color of each sub-block is extracted by quantizing the HSV color space into non-equal intervals, and the color feature is represented by a cumulative color histogram. The texture of each sub-block is obtained using the gray-level co-occurrence matrix. An integrated matching scheme based on the Most Similar Highest Priority (MSHP) principle is used to compare the query and target images. The adjacency matrix of a bipartite graph is formed using the sub-blocks of the query and target images, and this matrix is used for matching the images. The Euclidean distance measure is used in retrieving similar images. As the experimental results indicate, the proposed technique outperforms other retrieval schemes in terms of average precision.
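The gray-level co-occurrence matrix used for the texture features above counts how often pairs of gray levels occur at a fixed pixel offset. A minimal sketch for one offset (the tiny striped image and the level count are illustrative; texture statistics such as contrast or energy would then be computed from the matrix):

```python
def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for a single pixel offset (dx, dy)."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # Count the (level here, level at offset) pair.
                m[img[y][x]][img[ny][nx]] += 1
    return m

stripes = [[0, 1], [0, 1]]  # vertical stripes of gray levels 0 and 1
```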

  6. Regional content-based image retrieval for solar images: Traditional versus modern methods

    Science.gov (United States)

    Banda, J. M.; Angryk, R. A.

    2015-11-01

    This work presents an extensive evaluation of conventional (distance-based) versus modern (search-engine) information retrieval techniques in the context of finding similar solar image regions within the Solar Dynamics Observatory (SDO) mission image repository. We compare pre-computed image descriptors (image features) extracted from the SDO mission images in two very different ways: (1) similarity retrieval using multiple distance-based metrics and (2) retrieval using Lucene, a general-purpose scalable retrieval engine. By transforming image descriptors into histogram-like signatures and into Lucene-compatible text strings, we are able to effectively evaluate the retrieval capabilities of both methodologies. Using the image descriptors alongside a labeled image dataset, we present an extensive evaluation under the criteria of performance, scalability, and retrieval precision of experimental retrieval systems, in order to determine which implementation would be ideal for a production-level system. In our analysis we performed key transformations on our sample datasets to properly evaluate rotation invariance and scalability. After an extensive experimental evaluation, we conclude which technique is the most robust and would yield the best-performing system; we also point out the strengths and weaknesses of each approach and theorize on potential improvements.

  7. Content-based image retrieval from a database of fracture images

    Science.gov (United States)

    Müller, Henning; Do Hoang, Phuong Anh; Depeursinge, Adrien; Hoffmeyer, Pierre; Stern, Richard; Lovis, Christian; Geissbuhler, Antoine

    2007-03-01

    This article describes the use of a medical image retrieval system on a database of 16'000 fractures, selected from surgical routine over several years. Image retrieval has been a very active domain of research for several years. It has frequently been proposed for the medical domain, but only a few running systems were ever tested in clinical routine. For the planning of surgical interventions after fractures, x-ray images play an important role. Fractures are classified according to the exact fracture location and to whether, and to what degree, the fracture damages articulations, which indicates how complicated a repair will be. Several classification systems for fractures exist, and the classification plus the experience of the surgeon lead in the end to the choice of surgical technique (screw, metal plate, ...). This choice is strongly influenced by the experience and knowledge of the surgeons with respect to a certain technique. The goal of this article is to describe a prototype that supplies cases similar to an example in order to help treatment planning and find the most appropriate technique for a surgical intervention. Our database contains over 16'000 fracture images taken before and after a surgical intervention. We use an image retrieval system (GNU Image Finding Tool, GIFT) to find cases/images similar to an example case currently under observation. Problems encountered are varying illumination of images as well as strong anatomic differences between patients. Regions of interest are usually small, and the retrieval system needs to focus on this region. Results show that GIFT is capable of supplying similar cases on such a large database, particularly when using relevance feedback. Usual image retrieval is based on a single image as search target, but for this application we have to select images by case, as similar cases need to be found and not images. A few false-positive cases often remain in the results, but they can be sorted out quickly by the surgeons. Image retrieval can ...

  8. Content-based image retrieval utilizing explicit shape descriptors: applications to breast MRI and prostate histopathology

    Science.gov (United States)

    Sparks, Rachel; Madabhushi, Anant

    2011-03-01

    Content-based image retrieval (CBIR) systems, in the context of medical image analysis, allow for a user to compare a query image to previously archived database images in terms of diagnostic and/or prognostic similarity. CBIR systems can therefore serve as a powerful computerized decision support tool for clinical diagnostics and also serve as a useful learning tool for medical students, residents, and fellows. An accurate CBIR system relies on two components, (1) image descriptors which are related to a previously defined notion of image similarity and (2) quantification of image descriptors in order to accurately characterize and capture the a priori defined image similarity measure. In many medical applications, the morphology of an object of interest (e.g. breast lesions on DCE-MRI or glands on prostate histopathology) may provide important diagnostic and prognostic information regarding the disease being investigated. Morphological attributes can be broadly categorized as being (a) model-based (MBD) or (b) non-model based (NMBD). Most computerized decision support tools leverage morphological descriptors (e.g. area, contour variation, and compactness) which belong to the latter category in that they do not explicitly model morphology for the object of interest. Conversely, descriptors such as Fourier descriptors (FDs) explicitly model the object of interest. In this paper, we present a CBIR system that leverages a novel set of MBD called Explicit Shape Descriptors (ESDs) which accurately describe the similarity between the morphology of objects of interest. ESDs are computed by: (a) fitting shape models to objects of interest, (b) pairwise comparison between shape models, and (c) a nonlinear dimensionality reduction scheme to extract a concise set of morphological descriptors in a reduced dimensional embedding space. 
We utilized our ESDs in the context of CBIR in three datasets: (1) the synthetic MPEG-7 Set B containing 1400 silhouette images, (2) DCE-MRI of

  9. Optimization of reference library used in content-based medical image retrieval scheme

    International Nuclear Information System (INIS)

    Building an optimal image reference library is a critical step in developing the interactive computer-aided detection and diagnosis (I-CAD) systems of medical images using content-based image retrieval (CBIR) schemes. In this study, the authors conducted two experiments to investigate (1) the relationship between I-CAD performance and size of reference library and (2) a new reference selection strategy to optimize the library and improve I-CAD performance. The authors assembled a reference library that includes 3153 regions of interest (ROI) depicting either malignant masses (1592) or CAD-cued false-positive regions (1561) and an independent testing data set including 200 masses and 200 false-positive regions. A CBIR scheme using a distance-weighted K-nearest neighbor algorithm is applied to retrieve references that are considered similar to the testing sample from the library. The area under receiver operating characteristic curve (Az) is used as an index to evaluate the I-CAD performance. In the first experiment, the authors systematically increased reference library size and tested I-CAD performance. The result indicates that scheme performance improves initially from Az=0.715 to 0.874 and then plateaus when the library size reaches approximately half of its maximum capacity. In the second experiment, based on the hypothesis that a ROI should be removed if it performs poorly compared to a group of similar ROIs in a large and diverse reference library, the authors applied a new strategy to identify 'poorly effective' references. By removing 174 identified ROIs from the reference library, I-CAD performance significantly increases to Az=0.914 (p<0.01). The study demonstrates that increasing reference library size and removing poorly effective references can significantly improve I-CAD performance
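    The distance-weighted K-nearest-neighbor retrieval step described above can be sketched as follows. Retrieved reference ROIs vote for "malignant mass" versus "CAD-cued false positive", each vote weighted by inverse distance; the feature space, the value of K, and the exact weighting are assumptions for illustration, not the authors' configuration.

    ```python
    import numpy as np

    def dwknn_score(query, refs, labels, k=15):
        """Return a malignancy likelihood in [0, 1] from the k nearest references.

        refs:   (n, d) array of reference ROI feature vectors
        labels: (n,) array, 1 = malignant mass, 0 = false-positive region
        """
        d = np.linalg.norm(refs - query, axis=1)     # distances to all references
        nearest = np.argsort(d)[:k]                  # indices of the k most similar
        w = 1.0 / (d[nearest] + 1e-9)                # inverse-distance weights
        return float(np.sum(w * labels[nearest]) / np.sum(w))
    ```

    The library-optimization experiment in the study amounts to measuring how this score's ROC performance (Az) changes as references are added to or pruned from `refs`.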

  10. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    Science.gov (United States)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2016-06-01

    Different global and local color histogram methods for content-based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global: it misses spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research: local histograms that capture spatial correspondence, invariant color histograms that add deformation and viewpoint invariance, and fuzzy-linking methods that reduce the size of the histogram. The color space and the distance metric used are vital in obtaining the color histogram. This paper surveys the performance of CBIR based on different global and local color histograms in three color spaces, namely RGB, HSV, and L*a*b*, and with three distance measures, namely Euclidean, quadratic, and histogram intersection, in order to choose an appropriate method for future research.
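    The three distance measures surveyed above can be sketched for L1-normalized histograms as follows. The quadratic-form measure needs a bin cross-similarity matrix A; the Gaussian-of-bin-index-distance used here is an illustrative choice, not the survey's.

    ```python
    import numpy as np

    def euclidean(h1, h2):
        # treats bins independently; a mass shift to a neighboring bin costs as
        # much as a shift to a distant bin
        return float(np.linalg.norm(h1 - h2))

    def quadratic_form(h1, h2, sigma=1.0):
        # d(h1, h2) = sqrt((h1-h2)^T A (h1-h2)); A encodes cross-bin similarity,
        # so shifts between similar (nearby) bins are penalized less
        idx = np.arange(h1.size)
        A = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * sigma ** 2))
        d = h1 - h2
        return float(np.sqrt(max(d @ A @ d, 0.0)))

    def hist_intersection(h1, h2):
        # similarity in [0, 1] for L1-normalized histograms; 1 - value is a distance
        return float(np.minimum(h1, h2).sum())
    ```

    The quadratic form's advantage shows when histogram mass moves between adjacent bins: Euclidean distance cannot tell a one-bin shift from a far shift, while the quadratic form can.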

  11. Feature-Based Adaptive Tolerance Tree (FATT): An Efficient Indexing Technique for Content-Based Image Retrieval Using Wavelet Transform

    OpenAIRE

    AnandhaKumar, Dr. P.; V. Balamurugan

    2010-01-01

    This paper introduces a novel indexing and access method, called the Feature-Based Adaptive Tolerance Tree (FATT), which uses the wavelet transform to organize large image data sets efficiently and to support popular image access mechanisms such as content-based image retrieval (CBIR). Conventional database systems are designed for managing textual and numerical data, and retrieving such data is often based on simple comparisons of text or numerical values. However, this method is no longer adequa...

  12. Automated Orientation of Aerial Images

    DEFF Research Database (Denmark)

    Høhle, Joachim

    2002-01-01

    Methods for automated orientation of aerial images are presented. They are based on the use of templates, which are derived from existing databases, and area-based matching. The characteristics of available database information and the accuracy requirements for map compilation and orthoimage...... production are discussed on the example of Denmark. Details on the developed methods for interior and exterior orientation are described. Practical examples like the measurement of réseau images, updating of topographic databases and renewal of orthoimages are used to prove the feasibility of the developed...

  13. Towards case-based medical learning in radiological decision making using content-based image retrieval

    Directory of Open Access Journals (Sweden)

    Günther Rolf W

    2011-10-01

    Full Text Available Abstract Background Radiologists' training is based on intensive practice and can be improved with the use of diagnostic training systems. However, existing systems typically require laboriously prepared training cases and lack integration into the clinical environment with a proper learning scenario. Consequently, diagnostic training systems advancing decision-making skills are not well established in radiological education. Methods We investigated didactic concepts and appraised methods appropriate to the radiology domain, as follows: (i) adult learning theories stress the importance of work-related practice gained in a team of problem-solvers; (ii) case-based reasoning (CBR) parallels the human problem-solving process; (iii) content-based image retrieval (CBIR) can be useful for computer-aided diagnosis (CAD). To overcome the known drawbacks of existing learning systems, we developed the concept of image-based case retrieval for radiological education (IBCR-RE). The IBCR-RE diagnostic training is embedded into a didactic framework based on the Seven Jump approach, which is well established in problem-based learning (PBL). In order to provide a learning environment that is as similar as possible to radiological practice, we have analysed the radiological workflow and environment. Results We mapped the IBCR-RE diagnostic training approach into the Image Retrieval in Medical Applications (IRMA) framework, resulting in the proposed concept of the IRMAdiag training application. IRMAdiag makes use of the modular structure of IRMA and comprises (i) the IRMA core, i.e., the IRMA CBIR engine; and (ii) the IRMAcon viewer. We propose embedding IRMAdiag into hospital information technology (IT) infrastructure using the standard protocols Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7). Furthermore, we present a case description and a scheme of planned evaluations to comprehensively assess the system. 
Conclusions The IBCR-RE paradigm

  14. Feature-Based Adaptive Tolerance Tree (FATT): An Efficient Indexing Technique for Content-Based Image Retrieval Using Wavelet Transform

    CERN Document Server

    AnandhaKumar, Dr P

    2010-01-01

    This paper introduces a novel indexing and access method, called the Feature-Based Adaptive Tolerance Tree (FATT), which uses the wavelet transform to organize large image data sets efficiently and to support popular image access mechanisms such as content-based image retrieval (CBIR). Conventional database systems are designed for managing textual and numerical data, and retrieving such data is often based on simple comparisons of text or numerical values. However, this method is no longer adequate for images, since the digital representation of images does not convey their reality. Retrieval of images becomes difficult when the database is very large. This paper addresses such problems and presents a novel indexing technique, the Feature-Based Adaptive Tolerance Tree (FATT), designed to bring an effective solution, especially for indexing large databases. The proposed indexing scheme is then used along with query by image content, in order to achieve the ultimate goal from the user's point of view that ...

  15. Project SEMACODE : a scale-invariant object recognition system for content-based queries in image databases

    OpenAIRE

    Brause, Rüdiger W.; Arlt, Björn; Tratar, Erwin

    1999-01-01

    For the efficient management of large image databases, the automated characterization of images and the use of that characterization for searching and ordering tasks are highly desirable. The purpose of the SEMACODE project is to combine the still unsolved problem of content-oriented characterization of images with scale-invariant object recognition and model-based compression methods. To achieve this goal, existing techniques as well as new concepts related to pattern matching, image encodin...

  16. The concept of content-based visual image retrieval system in the experimental medical database

    OpenAIRE

    Jiri Polivka

    2009-01-01

    Content-based visual information retrieval (CBVIR) has been one of the fastest-growing research areas over the last few years. The reason is the steadily increasing amount of multimedia, especially visual, data in a wide range of today's professional activities. CBVIR techniques offer intelligent similarity-based data search, which can be used when comparisons among many cases are needed. In medical applications, recent diagnostic and therapy procedures usually involve work with the latest technica...

  17. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Directory of Open Access Journals (Sweden)

    Meiyan Huang

    Full Text Available This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases by more than 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.

  18. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Science.gov (United States)

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Gao, Yang; Chen, Yang; Feng, Qianjin; Chen, Wufan; Lu, Zhentai

    2014-01-01

    This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases by more than 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images. PMID:25028970
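    The bag-of-visual-words representation underlying the system above can be sketched as follows: each local descriptor extracted from an image is assigned to its nearest visual word in a codebook, and the counts form a normalized histogram. The codebook here is a given array of centers (in practice it would be learned, e.g. by k-means); the partition learning and REML metric are beyond this sketch.

    ```python
    import numpy as np

    def bovw_histogram(descriptors, codebook):
        """Quantize local descriptors against a codebook of visual words.

        descriptors: (n, d) array of local feature vectors from one image
        codebook:    (k, d) array of visual-word centers
        returns an L1-normalized k-bin word histogram representing the image
        """
        # pairwise distances (n, k) via broadcasting, then nearest-word assignment
        dist = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
        words = dist.argmin(axis=1)
        h = np.bincount(words, minlength=len(codebook)).astype(float)
        return h / h.sum()
    ```

    Retrieval then compares these histograms under a (possibly learned) distance metric.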

  19. Progressive content-based retrieval of image and video with adaptive and iterative refinement

    Science.gov (United States)

    Li, Chung-Sheng (Inventor); Turek, John Joseph Edward (Inventor); Castelli, Vittorio (Inventor); Chen, Ming-Syan (Inventor)

    1998-01-01

    A method and apparatus for minimizing the time required to obtain results for a content-based query in a database. More specifically, with this invention, the database is partitioned into a plurality of groups. Then a schedule, or sequence of groups, is assigned to each of the operations of the query, where the schedule represents the order in which an operation of the query will be applied to the groups. Each schedule is arranged so that each application of the operation operates on the group which will yield intermediate results that are closest to the final results.
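    A toy sketch of the progressive idea this patent describes: the database is split into groups, a query operation is applied group by group, and a running top-k result is yielded after each group, converging toward the final answer. The distance operation and the order in which groups are visited are assumptions for illustration.

    ```python
    import heapq
    import numpy as np

    def progressive_topk(query, groups, k=3):
        """Yield the running top-k (smallest distance first) after each group.

        groups: iterable of groups, each a list of (item_id, feature_vector)
        """
        best = []  # max-heap of the current top-k, stored as (-distance, item_id)
        for group in groups:
            for item_id, vec in group:
                d = float(np.linalg.norm(vec - query))
                if len(best) < k:
                    heapq.heappush(best, (-d, item_id))
                elif -best[0][0] > d:          # closer than the worst kept item
                    heapq.heapreplace(best, (-d, item_id))
            # intermediate result: (distance, item_id) pairs, nearest first
            yield sorted((-nd, i) for nd, i in best)
    ```

    A caller can display each intermediate result immediately, which is the patent's point: useful answers appear before the whole database has been scanned.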

  20. Content-based histopathology image retrieval using a kernel-based semantic annotation framework.

    Science.gov (United States)

    Caicedo, Juan C; González, Fabio A; Romero, Eduardo

    2011-08-01

    Large amounts of histology images are captured and archived in pathology departments due to the ever-expanding use of digital microscopy. The ability to manage and access these collections of digital images is regarded as a key component of next-generation medical imaging systems. This paper addresses the problem of retrieving histopathology images from a large collection using an example image as the query. The proposed approach automatically annotates the images in the collection, as well as the query images, with high-level semantic concepts. This semantic representation delivers improved retrieval performance, providing more meaningful results. We model the problem of automatic image annotation using kernel methods, resulting in a unified framework that includes: (1) multiple features for image representation, (2) a feature integration and selection mechanism, and (3) an automatic semantic image annotation strategy. An extensive experimental evaluation demonstrated the effectiveness of the proposed framework in building meaningful image representations for learning and useful semantic annotations for image retrieval. PMID:21296682

  1. A Review of Content Based Image Classification using Machine Learning Approach

    OpenAIRE

    Sandeep Kumar; Zeeshan Khan; Anurag jain

    2012-01-01

    Image classification is a vital field of research in computer vision. The growing volume of multimedia data, remote sensing imagery, and web photo galleries requires categorization of images for effective user retrieval. Various researchers apply different approaches to image classification, such as segmentation, clustering, and machine learning methods. Image content such as color, texture, shape, and size plays an important role in semantic image classification....

  2. Content-Based Medical Image Retrieval: A Survey of Applications to Multidimensional and Multimodality Data

    OpenAIRE

    Kumar, Ashnil; Kim, Jinman; Cai, Weidong; Fulham, Michael; Feng, Dagan

    2013-01-01

    Medical imaging is fundamental to modern healthcare, and its widespread use has resulted in the creation of image databases, as well as picture archiving and communication systems. These repositories now contain images from a diverse range of modalities, multidimensional (three-dimensional or time-varying) images, as well as co-aligned multimodality images. These image collections offer the opportunity for evidence-based diagnosis, teaching, and research; for these applications, there is a re...

  3. Experiments with a novel content-based image retrieval software: can we eliminate classification systems in adolescent idiopathic scoliosis?

    Science.gov (United States)

    Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma

    2014-02-01

    Study Design Preliminary evaluation of new tool. Objective To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases of different curve patterns were used as query images. The images were fed into the CBIR database that retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images that were matching with the query. No matching case was left out in the series. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be made from the postoperative images of the existing cases without adhering to any classification scheme. PMID:24494177

  4. A Hybrid Probabilistic Model for Unified Collaborative and Content-Based Image Tagging.

    Science.gov (United States)

    Zhou, Ning; Cheung, William K; Qiu, Guoping; Xue, Xiangyang

    2011-07-01

    The increasing availability of large quantities of user contributed images with labels has provided opportunities to develop automatic tools to tag images to facilitate image search and retrieval. In this paper, we present a novel hybrid probabilistic model (HPM) which integrates low-level image features and high-level user provided tags to automatically tag images. For images without any tags, HPM predicts new tags based solely on the low-level image features. For images with user provided tags, HPM jointly exploits both the image features and the tags in a unified probabilistic framework to recommend additional tags to label the images. The HPM framework makes use of the tag-image association matrix (TIAM). However, since the number of images is usually very large and user-provided tags are diverse, TIAM is very sparse, thus making it difficult to reliably estimate tag-to-tag co-occurrence probabilities. We developed a collaborative filtering method based on nonnegative matrix factorization (NMF) for tackling this data sparsity issue. Also, an L1 norm kernel method is used to estimate the correlations between image features and semantic concepts. The effectiveness of the proposed approach has been evaluated using three databases containing 5,000 images with 371 tags, 31,695 images with 5,587 tags, and 269,648 images with 5,018 tags, respectively. PMID:21079279
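    The NMF-based smoothing step described above can be sketched with the classic multiplicative updates (Lee and Seung's algorithm for the Frobenius objective): factoring the tag-image association matrix T as W H fills in reliable co-occurrence estimates despite sparsity. The rank and iteration count are illustrative, and a small dense matrix stands in for the real sparse TIAM.

    ```python
    import numpy as np

    def nmf(T, rank=2, iters=200, eps=1e-9):
        """Factor a nonnegative matrix T ~= W @ H via multiplicative updates."""
        rng = np.random.default_rng(0)
        m, n = T.shape
        W = rng.random((m, rank))
        H = rng.random((rank, n))
        for _ in range(iters):
            # Lee-Seung multiplicative updates; eps guards against division by zero
            H *= (W.T @ T) / (W.T @ W @ H + eps)
            W *= (T @ H.T) / (W @ H @ H.T + eps)
        return W, H
    ```

    The reconstruction W @ H is dense and nonnegative, so zero entries of a sparse association matrix receive smoothed scores usable as co-occurrence estimates.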

  5. Adaptive image content-based exposure control for scanning applications in radiography

    NARCIS (Netherlands)

    H. Schulerud; J. Thielemann; T. Kirkhus; K. Kaspersen; J.M. Østby; M.G. Metaxas; G.J. Royle; J. Griffiths; E. Cook; C. Esbrand; S. Pani; C. Venanzi; P.F. van der Stelt; G. Li; R. Turchetta; A. Fant; S. Theodoridis; H. Georgiou; G. Hall; M. Noy; J. Jones; J. Leaver; F. Triantis; A. Asimidis; N. Manthos; R. Longo; A. Bergamaschi; R.D. Speller

    2007-01-01

    I-ImaS (Intelligent Imaging Sensors) is a European project which has designed and developed a new adaptive X-ray imaging system using on-line exposure control, to create locally optimized images. The I-ImaS system allows for real-time image analysis during acquisition, thus enabling real-time exposu

  6. A Visual Analytics Approach Using the Exploration of Multidimensional Feature Spaces for Content-Based Medical Image Retrieval.

    Science.gov (United States)

    Kumar, Ashnil; Nette, Falk; Klein, Karsten; Fulham, Michael; Kim, Jinman

    2015-09-01

    Content-based image retrieval (CBIR) is a search technique based on the similarity of visual features and has demonstrated potential benefits for medical diagnosis, education, and research. However, clinical adoption of CBIR is partially hindered by the difference between the computed image similarity and the user's search intent, the semantic gap, with the end result that relevant images with outlier features may not be retrieved. Furthermore, most CBIR algorithms do not provide intuitive explanations as to why the retrieved images were considered similar to the query (e.g., which subset of features were similar), hence, it is difficult for users to verify if relevant images, with a small subset of outlier features, were missed. Users, therefore, resort to examining irrelevant images and there are limited opportunities to discover these "missed" images. In this paper, we propose a new approach to medical CBIR by enabling a guided visual exploration of the search space through a tool, called visual analytics for medical image retrieval (VAMIR). The visual analytics approach facilitates interactive exploration of the entire dataset using the query image as a point-of-reference. We conducted a user study and several case studies to demonstrate the capabilities of VAMIR in the retrieval of computed tomography images and multimodality positron emission tomography and computed tomography images. PMID:25296409

  7. Content-based image retrieval using a signature graph and a self-organizing map

    Directory of Open Access Journals (Sweden)

    Van Thanh The

    2016-06-01

    Full Text Available In order to effectively retrieve images from a large database, a content-based image retrieval (CBIR) system is built on a binary index that describes the features of an image's object of interest. This index, called the binary signature, forms the input data for the problem of matching similar images. To extract the object of interest, we propose an image segmentation method based on low-level visual features, namely the color and texture of the image. These features are extracted at each block of the image by the discrete wavelet frame transform in an appropriate color space. On the basis of the segmented image, we create a binary signature describing the location, color, and shape of the objects of interest. To match similar images, we define a similarity measure between images based on their binary signatures. We then present a CBIR model that combines a signature graph and a self-organizing map to cluster and store similar images. To illustrate the proposed method, experiments on image databases are reported, including COREL, Wang, and MSRDI.

  8. Integrating Color and Spatial Feature for Content-Based Image Retrieval

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this paper, we present a novel and efficient scheme for extracting, indexing, and retrieving color images. Our motivation was to reduce the space overhead of partition-based approaches, taking advantage of the fact that only a relatively small number of distinct values of a particular visual feature is present in most images. To extract color features and build indices into our image database, we take into consideration factors such as human color perception and perceptual range, and the image is partitioned into a set of regions using a simple classifying scheme. The compact color feature vector and the spatial color histogram, which are extracted from the segmented image regions, are used to represent the color and spatial information in the image. We have also developed region-based distance measures to compare the similarity of two images. Extensive tests on a large image collection were conducted to demonstrate the effectiveness of the proposed approach.

  9. Application of the fuzzy logic in content-based image retrieval

    OpenAIRE

    Xiaoling, Wang; Kanglin, Xie

    2005-01-01

    This paper introduces fuzzy logic into image retrieval to deal with the vagueness and ambiguity of human judgments of image similarity. Our retrieval system has the following properties: first, it adopts fuzzy language variables to describe the similarity degree of image features, not the features themselves; second, it makes use of fuzzy inference to guide the weight assignment among various image features; third, it expresses the subjectivity of human perception through fuzzy rules im...

  10. An automated imaging system for radiation biodosimetry.

    Science.gov (United States)

    Garty, Guy; Bigelow, Alan W; Repin, Mikhail; Turner, Helen C; Bian, Dakai; Balajee, Adayabalam S; Lyulko, Oleksandra V; Taveras, Maria; Yao, Y Lawrence; Brenner, David J

    2015-07-01

    We describe here an automated imaging system developed at the Center for High Throughput Minimally Invasive Radiation Biodosimetry. The imaging system is built around a fast, sensitive sCMOS camera and rapid switchable LED light source. It features complete automation of all the steps of the imaging process and contains built-in feedback loops to ensure proper operation. The imaging system is intended as a back end to the RABiT-a robotic platform for radiation biodosimetry. It is intended to automate image acquisition and analysis for four biodosimetry assays for which we have developed automated protocols: The Cytokinesis Blocked Micronucleus assay, the γ-H2AX assay, the Dicentric assay (using PNA or FISH probes) and the RABiT-BAND assay. PMID:25939519

  11. Latent Semantic Analysis as a Method of Content-Based Image Retrieval in Medical Applications

    Science.gov (United States)

    Makovoz, Gennadiy

    2010-01-01

    The research investigated whether a Latent Semantic Analysis (LSA)-based approach to image retrieval can map pixel intensity into a smaller concept space with good accuracy and reasonable computational cost. From a large set of M computed tomography (CT) images, a retrieval query found all images for a particular patient based on semantic…

  12. An Interactive Content Based Image Retrieval Technique and Evaluation of its Performance in High Dimensional and Low Dimensional Space

    Directory of Open Access Journals (Sweden)

    Nirmalya Chowdhury

    2010-09-01

Full Text Available In this paper we develop an Interactive Content Based Image Retrieval System which aims at selecting the most informative images with respect to the query image by ranking the retrieved images. The system uses relevance feedback to iteratively train a Histogram Intersection Kernel based Support Vector Machine classifier. At the end of the training phase of the classifier, the relevant set of images given by the final iteration of the relevance feedback is collected. In the retrieval phase, the images in this relevant set are ranked on the basis of their Histogram Intersection based similarity to the query image. We improve the method further by reducing the dimensionality of the feature vectors of the images using Principal Component Analysis, rejecting the zero components caused by sparseness of the pixels in the color bins of the histograms. The experiments were performed on a six-category database whose sample images are given in this paper. The dimensionality of the feature vectors was initially 72; after the feature reduction process it becomes 59. The dimensionality reduction makes the system more robust and computationally efficient, and the experimental results agree with this fact.
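The histogram-intersection similarity used for the final ranking can be sketched in a few lines. The query and database histograms below are toy 4-bin values, not the 72-dimensional feature vectors of the paper.

```python
def histogram_intersection(h1, h2):
    """Sum of bin-wise minima: 1.0 for identical normalized histograms,
    0.0 for histograms with no overlapping mass."""
    return sum(min(a, b) for a, b in zip(h1, h2))

q = [0.5, 0.3, 0.2, 0.0]  # query histogram (toy values)
db = {"img_a": [0.5, 0.3, 0.2, 0.0],
      "img_b": [0.0, 0.1, 0.2, 0.7]}

# Rank database images by decreasing similarity to the query.
ranked = sorted(db, key=lambda k: histogram_intersection(q, db[k]), reverse=True)
print(ranked)  # img_a (identical to the query) ranks first
```

The same quantity, used inside a kernel function, gives the Histogram Intersection Kernel that the paper's SVM classifier is trained on.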

  13. High Frequency Content based Stimulus for Perceptual Sharpness Assessment in Natural Images

    OpenAIRE

    Saha, Ashirbani; Wu, Q. M. Jonathan

    2014-01-01

A blind approach to evaluating the perceptual sharpness present in a natural image is proposed. Though the literature demonstrates a variety of visual cues to detect or evaluate the absence or presence of sharpness, we emphasize in the current work that high frequency content and local standard deviation can form strong features for computing perceived sharpness in any natural image, and can be considered a capable alternative to the existing cues. Unsharp areas in a natural image happen to ...

  14. Natural Language Processing Versus Content-Based Image Analysis for Medical Document Retrieval

    OpenAIRE

    Névéol, Aurélie; Deserno, Thomas M.; Darmoni, Stéfan J.; Güld, Mark Oliver; Aronson, Alan R.

    2008-01-01

    One of the most significant recent advances in health information systems has been the shift from paper to electronic documents. While research on automatic text and image processing has taken separate paths, there is a growing need for joint efforts, particularly for electronic health records and biomedical literature databases. This work aims at comparing text-based versus image-based access to multimodal medical documents using state-of-the-art methods of processing text and image componen...

  15. New approach using Bayesian Network to improve content based image classification systems

    CERN Document Server

    jayech, Khlifia

    2012-01-01

This paper proposes a new approach based on augmented naive Bayes for image classification. Initially, each image is divided into a set of blocks. For each block, we compute a vector of descriptors. Then, we classify the vectors of descriptors to build a vector of labels for each image. Finally, we propose three variants of Bayesian networks, namely the Naive Bayesian Network (NB), Tree Augmented Naive Bayes (TAN) and Forest Augmented Naive Bayes (FAN), to classify the image using the vector of labels. The results showed a marked improvement of FAN over NB and TAN.

  16. Phase-unwrapping algorithm for images with high noise content based on a local histogram

    Science.gov (United States)

    Meneses, Jaime; Gharbi, Tijani; Humbert, Philippe

    2005-03-01

We present a robust phase-unwrapping algorithm designed for use on phase images with high noise content. The algorithm first identifies regions with continuous phase values lying between fringe boundaries in an image, and then phase shifts the regions with respect to one another by multiples of 2π to unwrap the phase. Image pixels are segmented into interfringe and fringe-boundary areas using a local histogram of the wrapped phase. The algorithm has been used successfully to unwrap phase images generated in three-dimensional shape measurement for noninvasive quantification of human skin structure in dermatology, cosmetology, and plastic surgery.
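The core phase-shifting idea, adding the multiple of 2π that keeps neighbouring samples consistent, can be illustrated in one dimension; the paper's region-based 2-D algorithm with local histograms is considerably more involved. This sketch assumes the true phase changes by less than π between samples.

```python
import math

def unwrap_1d(phases):
    """Unwrap a 1-D sequence of wrapped phases (radians) by adding the
    multiple of 2*pi that keeps successive differences within (-pi, pi]."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # fold into (-pi, pi]
        out.append(out[-1] + d)
    return out

# True phase is 0.9*i; the measurement wraps it into [0, 2*pi).
wrapped = [(0.9 * i) % (2 * math.pi) for i in range(10)]
unwrapped = unwrap_1d(wrapped)
print(unwrapped[-1])  # recovers the original ramp
```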

  17. Result Analysis on Content Base Image Retrieval using Combination of Color, Shape and Texture Features

    Directory of Open Access Journals (Sweden)

    Neha Jain, Sumit Sharma, Ravi Mohan Sairam

    2012-12-01

Full Text Available Image retrieval based on color, texture and shape is an emerging and broad area of research. In this paper we present a novel framework that combines all three, i.e. color, texture and shape information, and achieves higher retrieval efficiency using dominant color features. The image and its complement are partitioned into non-overlapping tiles of equal size. The features drawn from conditional co-occurrence histograms between the image tiles and corresponding complement tiles, in RGB color space, serve as local descriptors of color, shape and texture. We integrate this combination and then cluster based on like properties; based on five dominant colors, we retrieve similar images. We also create a histogram of edges: image information is captured in terms of edge images computed using Gradient Vector Flow fields, and invariant moments are then used to record the shape features. The combination of the color, shape and texture features between an image and its complement, in conjunction with the shape features, provides a robust feature set for image retrieval. The experimental results demonstrate the efficacy of the method.

  18. Content-Based Management of Image Databases in the Internet Age

    Science.gov (United States)

    Kleban, James Theodore

    2010-01-01

    The Internet Age has seen the emergence of richly annotated image data collections numbering in the billions of items. This work makes contributions in three primary areas which aid the management of this data: image representation, efficient retrieval, and annotation based on content and metadata. The contributions are as follows. First,…

  19. A Content Based Image Retrieval Approach Based On Principal Regions Detection

    Directory of Open Access Journals (Sweden)

    Mohamed A. Helala

    2012-07-01

Full Text Available This paper proposes a new region-based image retrieval technique called Principal Regions Image Retrieval (PRIR). The technique starts by segmenting an image into its most general principal regions and applies a fuzzy feature histogram to describe the color and texture properties of each segmented region. It then generates a nearest-neighbor graph for the segmented regions and applies a greedy graph matching algorithm with a modified scoring function to determine the image rank. The proposed segmentation approach provides a significant speedup over state-of-the-art techniques while maintaining accurate precision. Moreover, the approach combines local and global descriptions to improve the retrieval results. Standard image databases are used to evaluate the performance of the proposed system. Results show that the approach enhances retrieval accuracy compared to other approaches reported in the literature.

  20. Techniques and applications of content-based image retrieval on the Internet

    Science.gov (United States)

    Xia, Dingyuan; Liu, Shuyu

    2005-01-01

To address the limitations of existing keyword-based image search engines on the Internet, this paper presents a solution based on a visual-feature image search engine. First, following the universal system design norms provided in MPEG-7, methods for image feature description, extraction and indexing, together with efficient algorithms for image feature similarity measurement and fast retrieval, are investigated in depth, and a new representation combining wavelets with relative moments is given. Then, the advantages of artificial intelligence, data mining and optimal information search strategies on the Internet are exploited to construct a prototype visual-feature image search engine. The experimental results show that the solution is reasonable and feasible.

  1. Automated image enhancement using power law transformations

    Indian Academy of Sciences (India)

    S P Vimal; P K Thiruvikraman

    2012-12-01

We propose a scheme for automating the power law transformations used for image enhancement. The scheme does not require the user to choose the exponent in the power law transformation. The method works well for images having poor contrast, especially for those in which the peaks corresponding to the background and the foreground are not widely separated.
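The underlying power-law (gamma) transform, s = c·r^γ on normalized intensities, is the operation the proposed scheme automates by choosing γ itself. The tiny sample image and the fixed γ = 0.4 below are arbitrary illustrations, not the paper's automatic choice.

```python
def power_law(image, gamma, c=1.0):
    """Apply s = c * r**gamma to intensities normalized to [0, 1].
    gamma < 1 brightens dark regions; gamma > 1 darkens bright ones."""
    return [[min(255, int(round(255 * c * (px / 255.0) ** gamma)))
             for px in row] for row in image]

dark = [[10, 40], [90, 200]]          # a low-contrast, mostly dark image
enhanced = power_law(dark, 0.4)       # gamma < 1 stretches the dark end
print(enhanced)
```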

  2. Content-Based Image Retrieval Method using the Relative Location of Multiple ROIs

    Directory of Open Access Journals (Sweden)

    LEE, J.

    2011-08-01

Full Text Available Recently, image retrieval methods that specify multiple regions of interest (ROIs) have been suggested. However, these methods measure the similarity of images without proper consideration of the spatial layout of the ROIs, and thus fail to accurately reflect the intent of the user. In this paper, we propose a new similarity measurement using the relative layout of the ROIs. The proposed method divides images into blocks of a certain size and extracts MPEG-7 dominant colors from the blocks overlapping the user-designated ROIs to measure their similarity with the target images. The similarity is weighted when the relative location of the ROIs in the query image and the target image is the same; the relative location is calculated along four directions (up, down, left and right) from the basis ROI. An experiment using MPEG-7 XM shows that the proposed method outperforms both global image retrieval and retrieval that does not consider the relative locations of ROIs.

  3. Content-Based Digital Image Retrieval based on Multi-Feature Amalgamation

    Directory of Open Access Journals (Sweden)

    Linhao Li

    2013-12-01

Full Text Available In practice, digital image retrieval faces many problems, and difficulties remain in both measures and methods. Currently there is no unambiguous algorithm that directly captures the salient features of image content while simultaneously satisfying color, scale and rotation invariance. We therefore analyze the technology of content-based image retrieval, focusing on global features such as the seven Hu invariant moments, the edge direction histogram and eccentricity; a method for blocked images is also discussed. During image matching, the extracted image features are treated as points in a vector space, the similarity of two images is measured by the closeness between the corresponding points, and the similarity is calculated with the Euclidean distance and the histogram intersection distance. We then propose a novel method based on multi-feature amalgamation to address the problems of retrieval with global and local features. It extracts the eccentricity, the seven Hu invariant moments and the edge direction histogram, computes the similarity distance for each feature of the images, and normalizes the distances; within the global features, a weighted feature distance forms the similarity measurement function for retrieval. The features of blocked images are extracted with a partitioning method based on polar coordinates. Finally, following the idea of hierarchical retrieval over global and local features, the results obtained with global features such as the invariant moments are taken as the input to a second-layer local-feature match, which effectively improves the accuracy of retrieval.

  4. Efficient Use of Semantic Annotation in Content Based Image Retrieval (CBIR)

    Directory of Open Access Journals (Sweden)

    V. Khanaa

    2012-03-01

Full Text Available Finding good image descriptors that can accurately describe the visual aspect of many different classes of images is a challenging task. Such descriptors are easier to compute for specialized databases, where specific prior knowledge can be used to devise a more dedicated description of the image content. On one side there is the rather subjective problem of interpreting visual content, and on the other the very practical need to find a good technical/mathematical description of that same content. Since there is no perfect description of visual content (even humans disagree when interpreting images), most methods try to find a good compromise in balancing the different aspects of image content. While image descriptors that concentrate on a single aspect of the visual content (color, shape or texture) are widely employed, we believe that image descriptors which integrate contributions from several aspects perform better, both in retrieval performance and in the relevance of the returned results to the expectations of the user. In this paper, we introduce color weighted histograms that intimately integrate color with texture or shape, and we validate their quality on multiple ground-truth databases. We also introduce a new shape histogram based on the Hough transform that performs better than the classical edge orientation histogram; this added value can considerably improve the quality of the overall results when used in combination with the weighted color histograms. We present the image descriptors (signatures) used in our NWCBIR system and emphasize the important connection between the image descriptors and the quality of the results returned by the CBIR system.

  5. Content-based image retrieval system for solid waste bin level detection and performance evaluation.

    Science.gov (United States)

    Hannan, M A; Arebey, M; Begum, R A; Basri, Hassan; Al Mamun, Md Abdulla

    2016-04-01

This paper presents a CBIR system that investigates the use of image retrieval with texture extracted from the image of a bin to detect the bin level. Various similarity distances, such as Euclidean, Bhattacharyya, Chi-squared, Cosine, and EMD, are used with the CBIR system to calculate and compare the distance between a query image and the images in a database to obtain the highest performance. In this study, the performance metric is based on two quantitative evaluation criteria: the first is the average retrieval rate based on the precision-recall graph, and the second is the F1 measure, the weighted harmonic mean of precision and recall. For feature extraction, texture is used as the image feature for the bin level detection system. Various experiments are conducted with different feature extraction techniques, including the Gabor wavelet filter, the gray level co-occurrence matrix (GLCM), and the gray level aura matrix (GLAM), to identify the level of the bin and its surrounding area. Intensive tests are conducted on 250 bin images to assess the accuracy of the proposed feature extraction techniques. The average retrieval rate is used to evaluate the performance of the retrieval system. The results show that the EMD distance achieves high accuracy and provides better performance than the other distances. PMID:26868844
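Two of the histogram distances mentioned, Chi-squared and Bhattacharyya, are simple to sketch on toy normalized histograms; EMD requires solving a transportation problem and is omitted here. The sample histograms are illustrative only.

```python
import math

def chi_squared(h1, h2, eps=1e-10):
    """Chi-squared distance between two normalized histograms;
    eps avoids division by zero on empty bin pairs."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def bhattacharyya(h1, h2):
    """Bhattacharyya distance: 0 for identical normalized histograms,
    growing as the overlap (the Bhattacharyya coefficient) shrinks."""
    bc = sum(math.sqrt(a * b) for a, b in zip(h1, h2))
    return -math.log(max(bc, 1e-10))

q = [0.25, 0.25, 0.25, 0.25]
same, other = [0.25, 0.25, 0.25, 0.25], [0.7, 0.1, 0.1, 0.1]
print(chi_squared(q, same), chi_squared(q, other))
print(bhattacharyya(q, same), bhattacharyya(q, other))
```

Either distance can be dropped into a nearest-neighbour ranking loop exactly as the Euclidean distance would be.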

  6. Multichannel Decoded Local Binary Patterns for Content-Based Image Retrieval.

    Science.gov (United States)

    Dubey, Shiv Ram; Singh, Satish Kumar; Singh, Rajat Kumar

    2016-09-01

Local binary patterns (LBPs) are widely adopted for image feature description owing to their efficiency and simplicity. To describe color images, the LBPs from each channel of the image must be combined. The traditional way is to simply concatenate the LBPs from each channel, but this increases the dimensionality of the pattern. To cope with this problem, this paper proposes a novel method for image description with multichannel decoded LBPs. We introduce two schemes, adder-based and decoder-based, for combining the LBPs from more than one channel. Image retrieval experiments are performed to observe the effectiveness of the proposed approaches, which are compared with existing multichannel techniques. The experiments are performed over 12 benchmark natural scene and color texture image databases, such as Corel-1k, MIT-VisTex, USPTex, and Colored Brodatz. The introduced multichannel adder- and decoder-based LBPs significantly improve the retrieval performance over each database and outperform the other multichannel-based approaches in terms of average retrieval precision and average retrieval rate. PMID:27295674
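The single-channel LBP code, the building block that the adder/decoder schemes combine across channels, can be computed as below. The 3×3 sample and the bit ordering are illustrative; the paper's multichannel combination logic is not reproduced here.

```python
def lbp_code(img, y, x):
    """8-neighbour LBP code for pixel (y, x): each neighbour whose value
    is >= the centre contributes one bit, clockwise from the top-left."""
    c = img[y][x]
    nb = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
          img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    return sum(1 << i for i, v in enumerate(nb) if v >= c)

channel = [[10, 20, 30],
           [40, 50, 60],
           [70, 80, 90]]   # one channel of a toy color image
print(lbp_code(channel, 1, 1))  # -> 120: only the four neighbours >= 50 set bits
```

Naive concatenation, the baseline the paper improves on, would simply compute this code per channel and join the resulting histograms, tripling the descriptor length for an RGB image.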

  7. Rotation and Scale Invariant Wavelet Feature for Content-Based Texture Image Retrieval.

    Science.gov (United States)

    Lee, Moon-Chuen; Pun, Chi-Man

    2003-01-01

    Introduces a rotation and scale invariant log-polar wavelet texture feature for image retrieval. The underlying feature extraction process involves a log-polar transform followed by an adaptive row shift invariant wavelet packet transform. Experimental results show that this rotation and scale invariant wavelet feature is quite effective for image…

  8. Developing a comprehensive system for content-based retrieval of image and text data from a national survey

    Science.gov (United States)

    Antani, Sameer K.; Natarajan, Mukil; Long, Jonathan L.; Long, L. Rodney; Thoma, George R.

    2005-04-01

The article describes the status of our ongoing R&D at the U.S. National Library of Medicine (NLM) towards the development of an advanced multimedia database biomedical information system that supports content-based image retrieval (CBIR). NLM maintains a collection of 17,000 digitized spinal X-rays along with text survey data from the Second National Health and Nutrition Examination Survey (NHANES II). These data serve as a rich source for epidemiologists and researchers of osteoarthritis and musculoskeletal diseases. It is currently possible to access them through text keyword queries using our Web-based Medical Information Retrieval System (WebMIRS). CBIR methods developed specifically for biomedical images could offer direct visual searching of these images by means of an example image or user sketch. We are building a system that supports hybrid queries with text and image-content components. R&D goals include developing algorithms for robust image segmentation for localizing and identifying relevant anatomy, labeling the segmented anatomy based on its pathology, developing suitable indexing and similarity matching methods for images and image features, and associating the survey text information with the image data for query and retrieval. Some highlights of the system, developed in MATLAB and Java, are: use of a networked or local centralized database for text and image data; flexibility to incorporate new research work; a means to control access to system components under development; and use of XML for structured reporting. The article details the design, features, and algorithms in this third revision of the prototype system, CBIR3.

  9. Close Clustering Based Automated Color Image Annotation

    CERN Document Server

    Garg, Ankit; Asawa, Krishna

    2010-01-01

Most image-search approaches today are based on text tags associated with images, which are mostly human-generated and subject to various kinds of errors. The results of a query to the image database can thus often be misleading and may not satisfy the requirements of the user. In this work we propose an approach to automate this tagging process, where generated image results can be finely filtered based on a probabilistic tagging mechanism. We implement a tool which helps to automate the tagging process by maintaining a training database, wherein the system is trained to identify a certain set of input images, and the results generated from these are used to create the probabilistic tagging mechanism. Given a certain set of segments in an image, it calculates the probability of the presence of particular keywords. This probability table is then used to generate candidate tags for input images.
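The probability-table idea, estimating how likely a keyword is given a recognized segment type, can be sketched by simple counting over training pairs. The segment labels, keywords and 0.5 threshold below are invented for illustration and do not come from the paper.

```python
from collections import defaultdict

def train(pairs):
    """Build P(keyword | segment label) from (segment_label, keyword) pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for seg, kw in pairs:
        counts[seg][kw] += 1
    return {seg: {kw: n / sum(kws.values()) for kw, n in kws.items()}
            for seg, kws in counts.items()}

# Hypothetical training data: segment label paired with a human keyword.
training = [("sky_blue", "sky"), ("sky_blue", "sky"), ("sky_blue", "sea"),
            ("green_tex", "grass")]
model = train(training)

# Emit candidate tags whose conditional probability clears a threshold.
tags = [kw for kw, p in model["sky_blue"].items() if p >= 0.5]
print(tags)  # ['sky']
```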

  10. Radiographic examination takes on an automated image

    International Nuclear Information System (INIS)

Automation can be effectively applied to nondestructive testing (NDT). Until recently, film radiography used in NDT was largely a manual process, involving the shooting of a series of x-rays that were manually positioned and manually processed. In other words, much radiographic work is done the way it was over 50 years ago. Significant advances in automation have changed the face of manufacturing, and industry has shared in the benefits brought by such progress. The handling of parts, once responsible for a large measure of labor costs, is now assigned to robotic equipment. In nondestructive testing, some progress has been achieved in automation, for example in real-time imaging systems. However, only recently have truly automated NDT systems begun to emerge. There are two major reasons to introduce automation into NDT: reliability and productivity. Any process or technique that can improve the reliability of parts testing could easily justify the capital investment required.

  11. Automated Image Retrieval of Chest CT Images Based on Local Grey Scale Invariant Features.

    Science.gov (United States)

    Arrais Porto, Marcelo; Cordeiro d'Ornellas, Marcos

    2015-01-01

Text-based tools are regularly employed to retrieve medical images for reading and interpretation using current Picture Archiving and Communication Systems (PACS), but these pose some drawbacks. All-purpose content-based image retrieval (CBIR) systems are limited when dealing with medical images and do not fit well into PACS workflow and clinical practice. This paper presents an automated image retrieval approach for chest CT images based on local grey scale invariant features from a local database. Performance was measured in terms of precision and recall, average retrieval precision (ARP), and average retrieval rate (ARR). Preliminary results have shown the effectiveness of the proposed approach. The prototype is also a useful tool for radiology research and education, providing valuable information to the medical and broader healthcare community. PMID:26262345

  12. Stochastic Optimized Relevance Feedback Particle Swarm Optimization for Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

Full Text Available One of the major challenges for CBIR is to bridge the gap between low-level features and high-level semantics according to the needs of the user. To overcome this gap, relevance feedback (RF) coupled with a support vector machine (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM-based RF is often poor. To improve the performance of RF, this paper proposes a new technique, PSO-SVM-RF, which combines SVM-based RF with particle swarm optimization (PSO). The aims of this technique are to enhance the performance of SVM-based RF and to minimize user interaction with the system by reducing the number of RF iterations. PSO-SVM-RF was tested on the Corel photo gallery containing 10908 images. The experimental results showed that PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that the PSO-SVM-RF technique achieves a high accuracy rate with a small number of iterations.

  13. Computer-Aided Diagnosis in Mammography Using Content-Based Image Retrieval Approaches: Current Status and Future Perspectives

    Directory of Open Access Journals (Sweden)

    Bin Zheng

    2009-06-01

Full Text Available With the rapid advance of digital imaging technologies, content-based image retrieval (CBIR) has become one of the most active research areas in computer vision. In the last several years, developing computer-aided detection and/or diagnosis (CAD) schemes that use CBIR to search for clinically relevant and visually similar medical images (or regions) depicting suspicious lesions has also been attracting research interest. CBIR-based CAD schemes have the potential to provide radiologists with a "visual aid" and increase their confidence in accepting CAD-cued results in decision making. CAD performance and reliability depend on a number of factors, including the optimization of lesion segmentation, feature selection, reference database size, computational efficiency, and the relationship between the clinical relevance and visual similarity of the CAD results. By presenting and comparing a number of approaches commonly used in previous studies, this article identifies and discusses optimal approaches for developing CBIR-based CAD schemes and assessing their performance. Although preliminary studies have suggested that CBIR-based CAD schemes might improve radiologists' performance and/or increase their confidence in decision making, this technology is still at an early stage of development, and much research is needed before CBIR-based CAD schemes can be accepted in clinical practice.

  14. Close Clustering Based Automated Color Image Annotation

    OpenAIRE

    Garg, Ankit; Dwivedi, Rahul; Asawa, Krishna

    2010-01-01

Most image-search approaches today are based on text tags associated with images, which are mostly human-generated and subject to various kinds of errors. The results of a query to the image database can thus often be misleading and may not satisfy the requirements of the user. In this work we propose an approach to automate this tagging process, where generated image results can be finely filtered based on a probabilistic tagging mechanism. We implement a tool which...

  15. Multimedia human brain database system for surgical candidacy determination in temporal lobe epilepsy with content-based image retrieval

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost

    2003-01-01

This paper presents the development of a human brain multimedia database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction, correlation and indexing. The input data include T1- and T2-weighted MRI, FLAIR MRI, and ictal and interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between attribute X of entity Y and the outcome of temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functional feature of the anatomical structure Y calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y, such as volume or average curvature. The outcome of the surgery can be any surgical assessment, such as memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high average signal intensity on FLAIR images within the hippocampus. This indication is largely consistent with the surgeons' expectations and observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may uncover partially invisible correlations between the contents of different modalities of data and the outcome of the surgery.

  16. Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics

    Directory of Open Access Journals (Sweden)

    Sharma Harshita

    2012-10-01

Full Text Available Background: Computer-based analysis of digitalized histological images has been gaining increasing attention, due to its extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered a powerful and versatile representation formalism and have obtained growing consideration, especially in the image processing and computer vision community. Methods: The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher-order (region-based) graph representation of breast biopsy images has been attained, and a tree-search based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases. Results: The results obtained and the evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The graph matching complexity has been reduced compared to state-of-the-art optimal inexact matching methods by applying a prerequisite criterion for the matching of nodes and a sophisticated design of the estimation function, especially the prognosis function. Conclusion: The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for use in Content Based Image Retrieval (CBIR)-requiring applications in the areas of medical diagnostics and research, and can also be generalized for retrieval of different types of complex images. Virtual Slides: The virtual slide(s) for this article can be found here: http

  17. Content Based Video Retrieval

    Directory of Open Access Journals (Sweden)

    B.V.Patel

    2012-11-01

    Full Text Available Content based video retrieval is an approach for facilitating the searching and browsing of large video collections over the World Wide Web. In this approach, video analysis is conducted on low-level visual properties extracted from video frames. We believed that in order to create an effective video retrieval system, visual perception must be taken into account. We conjectured that a technique which employs multiple features for indexing and retrieval would be more effective in the discrimination and search tasks of videos. In order to validate this claim, content based indexing and retrieval systems were implemented using color histograms, various texture features and other approaches. Videos were stored in an Oracle 9i Database and a user study measured correctness of response.
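The color-histogram indexing mentioned above can be sketched as follows: each frame is reduced to a normalized 3-D color histogram and frames are ranked by histogram intersection. The bin count and the intersection measure are common illustrative choices, not taken from this paper.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """frame: HxWx3 uint8 array -> flattened, L1-normalized 3-D RGB histogram."""
    hist, _ = np.histogramdd(frame.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical histograms, 0.0 for disjoint."""
    return float(np.minimum(h1, h2).sum())
```

In a retrieval system, every indexed frame's histogram is precomputed and the query frame is compared against the index with `intersection`, returning frames in decreasing similarity order.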

  19. Improving performance of content-based image retrieval schemes in searching for similar breast mass regions: an assessment

    International Nuclear Information System (INIS)

    This study aims to assess three methods commonly used in content-based image retrieval (CBIR) schemes and investigate approaches to improving scheme performance. A reference database involving 3000 regions of interest (ROIs) was established. Among them, 400 ROIs were randomly selected to form a testing dataset. Three methods, namely mutual information, Pearson's correlation and a multi-feature-based k-nearest neighbor (KNN) algorithm, were applied to search for the 15 most similar reference ROIs to each testing ROI. The clinical relevance and visual similarity of the search results were evaluated using the areas under receiver operating characteristic (ROC) curves (AZ) and the average mean square difference (MSD) of the mass boundary spiculation level ratings between testing and selected ROIs, respectively. The results showed that the AZ values were 0.893 ± 0.009, 0.606 ± 0.021 and 0.699 ± 0.026 for the use of KNN, mutual information and Pearson's correlation, respectively. The AZ values increased to 0.724 ± 0.017 and 0.787 ± 0.016 for mutual information and Pearson's correlation when using ROIs with the size adaptively adjusted based on actual mass size. The corresponding MSD values were 2.107 ± 0.718, 2.301 ± 0.733 and 2.298 ± 0.743. The study demonstrates that, due to the diversity of medical images, CBIR schemes using multiple image features and mass-size-based ROIs can achieve significantly improved performance.
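The multi-feature KNN retrieval step described above reduces, at its core, to a nearest-neighbor search over feature vectors. A minimal sketch (feature composition and k are illustrative; the study used k = 15 and its own mass features):

```python
import numpy as np

def knn_retrieve(query, references, k=15):
    """query: (d,) feature vector for a testing ROI; references: (n, d) array
    of reference-ROI features. Returns indices of the k most similar
    (smallest Euclidean distance) reference ROIs, nearest first."""
    d = np.linalg.norm(references - query, axis=1)
    return np.argsort(d)[:k]
```

The returned reference ROIs would then be scored for clinical relevance (ROC analysis) and visual similarity (spiculation-rating MSD) as in the study.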

  20. Content-based Image Retrieval Using Constrained Independent Component Analysis: Facial Image Retrieval Based on Compound Queries

    OpenAIRE

    Kim, Tae-Seong; Ahmed, Bilal

    2008-01-01

    In this work, we have proposed a new technique of facial image retrieval based on constrained ICA. Our technique requires no offline learning, pre-processing, or feature extraction. The system has been designed so that none of the user-provided information is lost, and in turn more semantically accurate images can be retrieved. As future work, we would like to test the system in other domains such as the retrieval of chest X-rays and CT images.

  1. Out-of-Sample Extrapolation utilizing Semi-Supervised Manifold Learning (OSE-SSL): Content Based Image Retrieval for Histopathology Images.

    Science.gov (United States)

    Sparks, Rachel; Madabhushi, Anant

    2016-01-01

    Content-based image retrieval (CBIR) retrieves database images most similar to the query image by (1) extracting quantitative image descriptors and (2) calculating similarity between database and query image descriptors. Recently, manifold learning (ML) has been used to perform CBIR in a low dimensional representation of the high dimensional image descriptor space to avoid the curse of dimensionality. ML schemes are computationally expensive, requiring an eigenvalue decomposition (EVD) for every new query image to learn its low dimensional representation. We present out-of-sample extrapolation utilizing semi-supervised ML (OSE-SSL) to learn the low dimensional representation without recomputing the EVD for each query image. OSE-SSL incorporates semantic information, in the form of partial class labels, into a ML scheme such that the low dimensional representation co-localizes semantically similar images. In the context of prostate histopathology, gland morphology is an integral component of the Gleason score, which enables discrimination between levels of prostate cancer aggressiveness. Images are represented by shape features extracted from the prostate gland. CBIR with OSE-SSL for prostate histology obtained from 58 patient studies yielded an area under the precision-recall curve (AUPRC) of 0.53 ± 0.03; in comparison, CBIR with Principal Component Analysis (PCA) to learn a low dimensional space yielded an AUPRC of 0.44 ± 0.01. PMID:27264985
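The core trick of out-of-sample extension, embedding a new query without redoing the eigendecomposition, can be sketched Nyström-style. This is a hedged illustration of the general idea, not the authors' OSE-SSL implementation: the kernel choice (RBF), `gamma`, and `dims` are assumptions.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF kernel values between broadcastable arrays of points."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def train_embedding(X, gamma=1.0, dims=2):
    """One-time EVD of the training kernel matrix; keep the top eigenpairs."""
    K = rbf(X[:, None, :], X[None, :, :], gamma)   # (n, n) Gram matrix
    vals, vecs = np.linalg.eigh(K)
    idx = np.argsort(vals)[::-1][:dims]            # largest eigenvalues first
    return vals[idx], vecs[:, idx]

def extend(x_new, X, vals, vecs, gamma=1.0):
    """Embed a new query from its kernel values against the training set only;
    no fresh eigendecomposition is needed (Nyström extension formula)."""
    k = rbf(x_new[None, :], X, gamma)              # (n,)
    return (k @ vecs) / vals
```

Applying `extend` to a training point reproduces its row of the eigenvector embedding, which is the consistency property that makes per-query EVDs unnecessary.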

  2. Automated Functional Analysis in Dynamic Medical Imaging

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej

    Praha : Katedra matematiky, FSv ČVUT v Praze, 2012, s. 19-20. [Aplikovaná matematika – Rektorysova soutěž. Praha (CZ), 07.12.2012] Institutional support: RVO:67985556 Keywords : Factor Analysis * Dynamic Sequence * Scintigraphy Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2012/AS/tichy-automated functional analysis in dynamic medical imaging.pdf

  3. Automated image analysis of uterine cervical images

    Science.gov (United States)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

    Cervical cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated on data from 40 human subjects and demonstrates high correlation with experts' annotations.

  4. ImageMiner: a software system for comparative analysis of tissue microarrays using content-based image retrieval, high-performance computing, and grid technology

    Science.gov (United States)

    Foran, David J; Yang, Lin; Hu, Jun; Goodell, Lauri A; Reiss, Michael; Wang, Fusheng; Kurc, Tahsin; Pan, Tony; Sharma, Ashish; Saltz, Joel H

    2011-01-01

    Objective and design: The design and implementation of ImageMiner, a software platform for performing comparative analysis of expression patterns in imaged microscopy specimens such as tissue microarrays (TMAs), is described. ImageMiner is a federated system of services that provides a reliable set of analytical and data management capabilities for investigative research applications in pathology. It provides a library of image processing methods, including automated registration, segmentation, feature extraction, and classification, all of which have been tailored, in these studies, to support TMA analysis. The system is designed to leverage high-performance computing machines so that investigators can rapidly analyze large ensembles of imaged TMA specimens. To support deployment in collaborative, multi-institutional projects, ImageMiner features grid-enabled, service-based components so that multiple instances of ImageMiner can be accessed remotely and federated. Results: The experimental evaluation shows that: (1) ImageMiner is able to support reliable detection and feature extraction of tumor regions within imaged tissues; (2) images and analysis results managed in ImageMiner can be searched for and retrieved on the basis of image-based features, classification information, and any correlated clinical data, including any metadata that have been generated to describe the specified tissue and TMA; and (3) the system is able to reduce computation time of analyses by exploiting computing clusters, which facilitates analysis of larger sets of tissue samples. PMID:21606133

  5. Automated image segmentation using information theory

    International Nuclear Information System (INIS)

    Full text: Our development of automated contouring of CT images for RT planning is based on maximum a posteriori (MAP) analyses of region textures, edges, and prior shapes, and assumes stationary Gaussian distributions for voxel textures and contour shapes. Since models may not accurately represent image data, it would be advantageous to compute inferences without relying on models. The relative entropy (RE) from information theory can generate inferences based solely on the similarity of probability distributions. The entropy of a distribution of a random variable X is defined as -Σx p(x)log2p(x) for all the values x which X may assume. The RE (Kullback-Leibler divergence) of two distributions p(X), q(X) over X is Σx p(x)log2{p(x)/q(x)}. The RE is a kind of 'distance' between p and q, equaling zero when p=q and increasing as p and q become more different. Minimum-error MAP and likelihood-ratio decision rules have RE equivalents: minimum-error decisions are obtained from functions of the differences between the REs of the compared distributions. One applied result is that the contour ideally separating two regions is the one that maximizes the relative entropy of the two regions' intensities. A program was developed that automatically contours the outlines of patients in stereotactic headframes, a situation that most often requires manual drawing. The relative entropy of intensities inside the contour (patient) versus outside (background) was maximized by conjugate gradient descent over the space of parameters of a deformable contour, yielding the computed segmentation of a patient from headframe backgrounds. This program is particularly useful for preparing images for multimodal image fusion. Relative entropy and allied measures of distribution similarity provide automated contouring criteria that do not depend on statistical models of image data. This approach should have wide utility in medical image segmentation applications. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine
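The relative entropy quoted above can be computed directly from its definition, RE(p||q) = Σx p(x) log2(p(x)/q(x)): it is zero when the two intensity distributions coincide and grows as they diverge, which is exactly the contouring criterion the abstract describes.

```python
import math

def relative_entropy(p, q):
    """Kullback-Leibler divergence in bits between two discrete distributions
    given as equal-length sequences of probabilities (q must be > 0 wherever
    p is > 0, or the divergence is infinite)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

In the segmentation setting, `p` and `q` would be the normalized intensity histograms inside and outside a candidate contour, and the contour parameters are adjusted to maximize this value.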

  6. Automated Quality Assurance Applied to Mammographic Imaging

    Directory of Open Access Journals (Sweden)

    Anne Davis

    2002-07-01

    Full Text Available Quality control in mammography is based upon subjective interpretation of the image quality of a test phantom. In order to suppress subjectivity due to the human observer, automated computer analysis of the Leeds TOR(MAM) test phantom is investigated. Texture analysis via grey-level co-occurrence matrices is used to detect structures in the test object. Scoring of the substructures in the phantom is based on grey-level differences between regions and information from grey-level co-occurrence matrices. The results from scoring groups of particles within the phantom are presented.

  7. Content-based image retrieval for brain MRI: An image-searching engine and population-based analysis to utilize past clinical data for future diagnosis

    Directory of Open Access Journals (Sweden)

    Andreia V. Faria

    2015-01-01

    Full Text Available Radiological diagnosis is based on subjective judgment by radiologists. The reasoning behind this process is difficult to document and share, which is a major obstacle in adopting evidence-based medicine in radiology. We report our attempt to use a comprehensive brain parcellation tool to systematically capture image features and use them to record, search, and evaluate anatomical phenotypes. Anatomical images (T1-weighted MRI) were converted to a standardized index by using a high-dimensional image transformation method followed by atlas-based parcellation of the entire brain. We investigated how the indexed anatomical data captured the anatomical features of healthy controls and a population with Primary Progressive Aphasia (PPA). PPA was chosen because patients have apparent atrophy at different degrees and locations, thus the automated quantitative results can be compared with trained clinicians' qualitative evaluations. We explored and tested the power of individual classifications and of performing a search for images with similar anatomical features in a database using partial least squares-discriminant analysis (PLS-DA) and principal component analysis (PCA). The agreement between the automated z-score and the averaged visual scores for atrophy (r = 0.8) was virtually the same as the inter-evaluator agreement. The PCA plot distribution correlated with the anatomical phenotypes and the PLS-DA resulted in a model with an accuracy of 88% for distinguishing PPA variants. The quantitative indices captured the main anatomical features. The indexing of image data has a potential to be an effective, comprehensive, and easily translatable tool for clinical practice, providing new opportunities to mine clinical databases for medical decision support.

  8. Content-based image retrieval for brain MRI: an image-searching engine and population-based analysis to utilize past clinical data for future diagnosis.

    Science.gov (United States)

    Faria, Andreia V; Oishi, Kenichi; Yoshida, Shoko; Hillis, Argye; Miller, Michael I; Mori, Susumu

    2015-01-01

    Radiological diagnosis is based on subjective judgment by radiologists. The reasoning behind this process is difficult to document and share, which is a major obstacle in adopting evidence-based medicine in radiology. We report our attempt to use a comprehensive brain parcellation tool to systematically capture image features and use them to record, search, and evaluate anatomical phenotypes. Anatomical images (T1-weighted MRI) were converted to a standardized index by using a high-dimensional image transformation method followed by atlas-based parcellation of the entire brain. We investigated how the indexed anatomical data captured the anatomical features of healthy controls and a population with Primary Progressive Aphasia (PPA). PPA was chosen because patients have apparent atrophy at different degrees and locations, thus the automated quantitative results can be compared with trained clinicians' qualitative evaluations. We explored and tested the power of individual classifications and of performing a search for images with similar anatomical features in a database using partial least squares-discriminant analysis (PLS-DA) and principal component analysis (PCA). The agreement between the automated z-score and the averaged visual scores for atrophy (r = 0.8) was virtually the same as the inter-evaluator agreement. The PCA plot distribution correlated with the anatomical phenotypes and the PLS-DA resulted in a model with an accuracy of 88% for distinguishing PPA variants. The quantitative indices captured the main anatomical features. The indexing of image data has a potential to be an effective, comprehensive, and easily translatable tool for clinical practice, providing new opportunities to mine clinical databases for medical decision support. PMID:25685706
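The automated atrophy z-score mentioned above can be sketched minimally: each brain is reduced to a vector of atlas-parcel volumes, and a patient's regional values are scored against the control-cohort distribution. This is an illustration of the general z-scoring idea under invented parcel values, not the authors' pipeline.

```python
import numpy as np

def atrophy_z_scores(patient_vols, control_vols):
    """patient_vols: (p,) parcel volumes for one subject;
    control_vols: (n, p) volumes for a control cohort.
    Returns the z-score of each parcel (negative = smaller than controls)."""
    mu = control_vols.mean(axis=0)
    sd = control_vols.std(axis=0, ddof=1)  # sample standard deviation
    return (patient_vols - mu) / sd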

  9. Computerized Station For Semi-Automated Testing Image Intensifier Tubes

    OpenAIRE

    Chrzanowski Krzysztof

    2015-01-01

    Testing of image intensifier tubes is still done using mostly manual methods due to a series of both technical and legal problems with test automation. Computerized stations for semi-automated testing of IITs are considered a novelty and are under continuous improvement. This paper presents a novel test station that enables semi-automated measurement of image intensifier tubes. Wide test capabilities and advanced design solutions raise the developed test station significantly above the current level of night vision metrology.

  10. Automated quantitative image analysis of nanoparticle assembly

    Science.gov (United States)

    Murthy, Chaitanya R.; Gao, Bo; Tao, Andrea R.; Arya, Gaurav

    2015-05-01

    The ability to characterize higher-order structures formed by nanoparticle (NP) assembly is critical for predicting and engineering the properties of advanced nanocomposite materials. Here we develop a quantitative image analysis software to characterize key structural properties of NP clusters from experimental images of nanocomposites. This analysis can be carried out on images captured at intermittent times during assembly to monitor the time evolution of NP clusters in a highly automated manner. The software outputs averages and distributions in the size, radius of gyration, fractal dimension, backbone length, end-to-end distance, anisotropic ratio, and aspect ratio of NP clusters as a function of time along with bootstrapped error bounds for all calculated properties. The polydispersity in the NP building blocks and biases in the sampling of NP clusters are accounted for through the use of probabilistic weights. This software, named Particle Image Characterization Tool (PICT), has been made publicly available and could be an invaluable resource for researchers studying NP assembly. To demonstrate its practical utility, we used PICT to analyze scanning electron microscopy images taken during the assembly of surface-functionalized metal NPs of differing shapes and sizes within a polymer matrix. PICT is used to characterize and analyze the morphology of NP clusters, providing quantitative information that can be used to elucidate the physical mechanisms governing NP assembly.
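Two of the cluster descriptors listed in this abstract, radius of gyration and end-to-end distance, can be computed directly from particle-centroid coordinates. This is a minimal sketch of those definitions, not code taken from PICT.

```python
import numpy as np

def radius_of_gyration(coords):
    """coords: (n, 2) particle centroids -> RMS distance from the cluster
    center of mass (equal particle weights assumed)."""
    center = coords.mean(axis=0)
    return float(np.sqrt(((coords - center) ** 2).sum(axis=1).mean()))

def end_to_end_distance(coords):
    """Largest pairwise centroid separation in the cluster."""
    diffs = coords[:, None, :] - coords[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).max())
```

PICT additionally weights particles probabilistically to correct for polydispersity and sampling bias; the equal-weight version above is the simplest form of these descriptors.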

  11. Automated object detection for astronomical images

    Science.gov (United States)

    Orellana, Sonny; Zhao, Lei; Boussalis, Helen; Liu, Charles; Rad, Khosrow; Dong, Jane

    2005-10-01

    Sponsored by the National Aeronautics and Space Administration (NASA), the Synergetic Education and Research in Enabling NASA-centered Academic Development of Engineers and Space Scientists (SERENADES) Laboratory was established at California State University, Los Angeles (CSULA). An important ongoing research activity in this lab is to develop easy-to-use image analysis software with the capability of automated object detection to facilitate astronomical research. This paper presents a fast object detection algorithm based on the characteristics of astronomical images. The algorithm consists of three steps. First, the foreground and background are separated using a histogram-based approach. Second, connectivity analysis is conducted to extract individual objects. The final step is post-processing, which refines the detection results. To improve detection accuracy when some objects are obscured by clouds, a top-hat transform is employed to split the sky into cloudy and non-cloudy regions. A multi-level thresholding algorithm is developed to select the optimal threshold for different regions. Experimental results show that our proposed approach can successfully detect objects obscured by clouds.
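The first two steps of the pipeline above, foreground/background separation followed by connectivity analysis, can be sketched as follows. The threshold rule used here (mean + k·std of the image) is a simple stand-in for the paper's histogram-based and multi-level thresholding methods.

```python
import numpy as np

def detect_objects(image, k=2.0):
    """image: 2-D float array. Returns a list of objects, each a list of
    (row, col) pixels, found by 4-connected flood fill over a binary mask."""
    mask = image > image.mean() + k * image.std()   # foreground separation
    seen = np.zeros_like(mask, dtype=bool)
    objects = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                stack, blob = [(r, c)], []          # start a new object
                seen[r, c] = True
                while stack:                        # iterative flood fill
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                objects.append(blob)
    return objects
```

Each returned blob corresponds to one detected object; a post-processing step would then filter blobs by size or shape, as the abstract describes.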

  12. A Survey of Content-Based Image Retrieval Technology

    Institute of Scientific and Technical Information of China (English)

    韦立梅; 苏兵

    2012-01-01

    With the rapid development of computer networks and multimedia technology, traditional text-based information retrieval no longer meets people's needs; therefore, content-based image retrieval has become increasingly favored and a hot research topic. This paper first introduces content-based image retrieval and surveys the main related techniques, including retrieval based on color, texture, shape and semantics. Finally, it gives a brief description of image retrieval systems and their current applications.

  13. Content Based Medical Image Retrieval with Texture Content Using Gray Level Co-occurrence Matrix and K-Means Clustering Algorithms

    Directory of Open Access Journals (Sweden)

    K. R. Chandran

    2012-01-01

    Full Text Available Problem statement: Recently, there has been huge progress in the collection of varied image databases in digital form. Most users find it difficult to search and retrieve the required images in large collections. In order to provide an effective and efficient search engine tool, the system has been implemented. In an image retrieval system, no methodology retrieves images from the databases directly; instead, various visual features are considered indirectly to retrieve the images. In this system, one of the visual features, texture, is considered to extract the feature of the image. Only these featured images are considered in the retrieval process, in order to retrieve exactly the desired images from the databases. Approach: The aim of this study is to construct an efficient image retrieval tool, namely "Content Based Medical Image Retrieval with Texture Content using Gray Level Co-occurrence Matrix (GLCM) and k-Means Clustering algorithms". This image retrieval tool is capable of retrieving images based on the texture feature of the image, and it takes into account the pre-processing, feature extraction, classification and retrieval steps in order to construct an efficient retrieval tool. The main feature of this tool is the use of a GLCM for extracting the texture pattern of the image and the k-means clustering algorithm for image classification in order to improve retrieval efficiency. The proposed image retrieval system consists of three stages, i.e., segmentation, texture feature extraction and clustering. In the segmentation process, a preprocessing step to segment the image into blocks is carried out. A reduction in the image region to be processed is carried out in the texture feature extraction process and, finally, the extracted image is clustered using the k-means algorithm. The proposed system is employed for domain
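The GLCM texture-extraction step named above can be sketched minimally: a co-occurrence matrix counts how often pairs of quantized gray levels appear at a fixed pixel offset, and texture statistics such as contrast and energy are derived from it. The level count and offset below are illustrative parameters, not those of the paper.

```python
import numpy as np

def glcm(image, levels=4, offset=(0, 1)):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset."""
    q = (image.astype(float) / image.max() * (levels - 1)).astype(int)
    dy, dx = offset
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1   # count co-occurring levels
    m += m.T                                      # make symmetric
    return m / m.sum()

def contrast(p):
    """Weighted intensity difference: 0 for perfectly uniform texture."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def energy(p):
    """Sum of squared matrix entries: 1 for a single dominant co-occurrence."""
    return float((p ** 2).sum())
```

Feature vectors built from such statistics (per image block) would then be grouped with k-means to form the clusters used at retrieval time.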

  14. An automated digital imaging system for environmental monitoring applications

    Science.gov (United States)

    Bogle, Rian; Velasco, Miguel; Vogel, John

    2013-01-01

    Recent improvements in the affordability and availability of high-resolution digital cameras, data loggers, embedded computers, and radio/cellular modems have advanced the development of sophisticated automated systems for remote imaging. Researchers have successfully placed and operated automated digital cameras in remote locations and in extremes of temperature and humidity, ranging from the islands of the South Pacific to the Mojave Desert and the Grand Canyon. With the integration of environmental sensors, these automated systems are able to respond to local conditions and modify their imaging regimes as needed. In this report we describe in detail the design of one type of automated imaging system developed by our group. It is easily replicated, low-cost, highly robust, and is a stand-alone automated camera designed to be placed in remote locations, without wireless connectivity.

  15. Facilitating medical information search using Google Glass connected to a content-based medical image retrieval system.

    Science.gov (United States)

    Widmer, Antoine; Schaer, Roger; Markonis, Dimitrios; Muller, Henning

    2014-01-01

    Wearable computing devices are starting to change the way users interact with computers and the Internet. Among them, Google Glass includes a small screen located in front of the right eye, a camera filming in front of the user and a small computing unit. Google Glass has the advantage of providing online services while allowing the user to perform tasks with his/her hands. These augmented glasses enable many useful applications, also in the medical domain. For example, Google Glass can easily provide a video conference between medical doctors to discuss a live case. Using these glasses can also facilitate medical information search by allowing access to a large amount of annotated medical cases during a consultation in a non-disruptive fashion for medical staff. In this paper, we developed a Google Glass application able to take a photo and send it to a medical image retrieval system along with keywords in order to retrieve similar cases. As a preliminary assessment of the usability of the application, we tested it under three conditions (images of the skin; printed CT scans and MRI images; and CT and MRI images acquired directly from an LCD screen) to explore whether using Google Glass affects the accuracy of the results returned by the medical image retrieval system. The preliminary results show that despite minor problems due to the relative instability of the Google Glass, images can be sent to and processed by the medical image retrieval system and similar images are returned to the user, potentially helping in the decision-making process. PMID:25570993

  16. Image analysis and platform development for automated phenotyping in cytomics

    NARCIS (Netherlands)

    Yan, Kuan

    2013-01-01

    This thesis is dedicated to the empirical study of image analysis in HT/HC screening. Often an HT/HC screen produces extensive amounts of image data that cannot be manually analyzed. Thus, an automated image analysis solution is a prerequisite for an objective understanding of the raw image data. Compared to general a

  17. Computerized Station For Semi-Automated Testing Image Intensifier Tubes

    Directory of Open Access Journals (Sweden)

    Chrzanowski Krzysztof

    2015-09-01

    Full Text Available Testing of image intensifier tubes is still done using mostly manual methods due to a series of both technical and legal problems with test automation. Computerized stations for semi-automated testing of IITs are considered a novelty and are under continuous improvement. This paper presents a novel test station that enables semi-automated measurement of image intensifier tubes. Wide test capabilities and advanced design solutions raise the developed test station significantly above the current level of night vision metrology.

  18. Automated feature extraction and classification from image sources

    Science.gov (United States)

    U.S. Geological Survey

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  19. Image segmentation for automated dental identification

    Science.gov (United States)

    Haj Said, Eyad; Nassar, Diaa Eldin M.; Ammar, Hany H.

    2006-02-01

    Dental features are among the few biometric identifiers that qualify for postmortem identification; therefore, creation of an Automated Dental Identification System (ADIS) with goals and objectives similar to the Automated Fingerprint Identification System (AFIS) has received increased attention. As a part of ADIS, teeth segmentation from dental radiograph films is an essential step in the identification process. In this paper, we introduce a fully automated approach for teeth segmentation with the goal of extracting at least one tooth from the dental radiograph film. We evaluate our approach on theoretical and empirical bases, and we compare its performance with that of other approaches introduced in the literature. The results show that our approach exhibits the lowest failure rate and the highest optimality among all fully automated approaches introduced in the literature.

  20. Estimation of lunar titanium content: Based on absorption features of Chang’E-1 interference imaging spectrometer (IIM)

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Two linear regression models based on absorption features extracted from CE-1 IIM image data are presented to discuss the relationship between absorption features and titanium content. We computed five absorption parameters (Full Width at Half Maximum (FWHM), absorption position, absorption area, absorption depth and absorption asymmetry) of the spectra collected at the Apollo 17 landing site to build two regression models, one with FWHM and the other without FWHM, due to the low correlation coefficient between FWHM and Ti content. Finally, Ti content measured from Apollo 17 samples and Apollo 16 samples was used to test the accuracy. The results show that the predicted values of the model with FWHM have many singular values, and the result of the model without FWHM is more stable. The two models are relatively accurate for high-Ti districts, while they appear inexact and unreliable for low-Ti districts.
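The regression setup described above, titanium content modeled as a linear function of absorption parameters, can be sketched with ordinary least squares. The feature layout and sample values below are synthetic illustrations, not Apollo measurements or the paper's fitted coefficients.

```python
import numpy as np

def fit_ti_model(features, ti):
    """features: (n, p) absorption parameters (e.g. depth, area, position,
    asymmetry); ti: (n,) measured Ti content.
    Returns (p+1,) least-squares coefficients, intercept first."""
    A = np.column_stack([np.ones(len(ti)), features])
    coef, *_ = np.linalg.lstsq(A, ti, rcond=None)
    return coef

def predict_ti(coef, features):
    """Apply the fitted linear model to new absorption-parameter vectors."""
    A = np.column_stack([np.ones(len(features)), features])
    return A @ coef
```

In the study's workflow, the model would be fit on spectra at sampled landing sites with laboratory-measured Ti content, then applied map-wide to IIM pixels.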

  1. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation.

    Science.gov (United States)

    Beijbom, Oscar; Edmunds, Peter J; Roelfsema, Chris; Smith, Jennifer; Kline, David I; Neal, Benjamin P; Dunlap, Matthew J; Moriarty, Vincent; Fan, Tung-Yung; Tan, Chih-Jui; Chan, Stephen; Treibitz, Tali; Gamst, Anthony; Mitchell, B Greg; Kriegman, David

    2015-01-01

    Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi- and fully-automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys. PMID:26154157
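
    The cover estimates on which human and automated annotators are compared reduce to category fractions over the annotated points. A toy sketch (the category labels and counts below are hypothetical, not from the study's data):

```python
from collections import Counter

# Percent-cover estimate: fraction of annotated points assigned to each
# benthic category, whether labels come from an expert or a classifier.
def cover_estimate(point_labels):
    counts = Counter(point_labels)
    total = len(point_labels)
    return {cat: n / total for cat, n in counts.items()}

# 50 hypothetical point labels from one survey image.
labels = ["Porites"] * 10 + ["turf"] * 25 + ["CCA"] * 15
print(cover_estimate(labels))  # {'Porites': 0.2, 'turf': 0.5, 'CCA': 0.3}
```

Comparing such per-image estimates between annotators (or against an automated classifier) is what yields the inter-annotator error figures reported above.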

  2. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation.

    Directory of Open Access Journals (Sweden)

    Oscar Beijbom

    Full Text Available Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi- and fully-automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys.

  3. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation

    OpenAIRE

    Oscar Beijbom; Edmunds, Peter J.; Chris Roelfsema; Jennifer Smith; Kline, David I.; Neal, Benjamin P.; Matthew J Dunlap; Vincent Moriarty; Tung-Yung Fan; Chih-Jui Tan; Stephen Chan; Tali Treibitz; Anthony Gamst; B. Greg Mitchell; David Kriegman

    2015-01-01

    Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey-images capture...

  4. Automated identification of animal species in camera trap images

    NARCIS (Netherlands)

    Yu, X.; Wang, J.; Kays, R.; Jansen, P.A.; Wang, T.; Huang, T.

    2013-01-01

    Image sensors are increasingly being used in biodiversity monitoring, with each study generating many thousands or millions of pictures. Efficiently identifying the species captured by each image is a critical challenge for the advancement of this field. Here, we present an automated species identification...

  5. Automated diabetic retinopathy imaging in Indian eyes: A pilot study

    Directory of Open Access Journals (Sweden)

    Rupak Roy

    2014-01-01

    Full Text Available Aim: To evaluate the efficacy of an automated retinal image grading system in diabetic retinopathy (DR) screening. Materials and Methods: Color fundus images of patients from a DR screening project were analyzed for the purpose of the study. For each eye, two sets of images were acquired, one centered on the disc and the other centered on the macula. All images were processed by automated DR screening software (Retmarker). The results were compared to ophthalmologist grading of the same set of photographs. Results: 5780 images of 1445 patients were analyzed. Patients were screened into two categories: DR or no DR. Image quality was high, medium, and low in 71 (4.91%), 1117 (77.30%), and 257 (17.78%) patients, respectively. Specificity and sensitivity for detecting DR in the high, medium, and low groups were (0.59, 0.91), (0.11, 0.95), and (0.93, 0.14), respectively. Conclusion: The automated retinal image screening system for DR had high sensitivity in high- and medium-quality images. Automated DR grading software holds promise for future screening programs.
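
    The (specificity, sensitivity) pairs reported above follow directly from a 2×2 confusion matrix of automated output against ophthalmologist grading. A sketch with hypothetical counts chosen to reproduce the high-quality-group pair (the study's actual per-group counts are not given):

```python
# Specificity = true-negative rate (no-DR patients correctly cleared);
# sensitivity = true-positive rate (DR patients correctly flagged).
def spec_sens(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return specificity, sensitivity

# Hypothetical counts: 100 DR patients, 100 no-DR patients.
spec, sens = spec_sens(tp=91, fn=9, tn=59, fp=41)
print(spec, sens)  # 0.59 0.91 (the high-quality group's reported pair)
```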

  6. Automating proliferation rate estimation from Ki-67 histology images

    Science.gov (United States)

    Al-Lahham, Heba Z.; Alomari, Raja S.; Hiary, Hazem; Chaudhary, Vipin

    2012-03-01

    Breast cancer is the second leading cause of cancer death among women and the most frequently diagnosed female cancer in the US. Proliferation rate estimation (PRE) is one of the prognostic indicators that guide treatment protocols, and it is clinically performed from Ki-67 histopathology images. Automating PRE substantially increases the efficiency of pathologists. Moreover, producing a deterministic and reproducible proliferation rate value is crucial to reduce inter-observer variability. To that end, we propose a fully automated CAD system for PRE from Ki-67 histopathology images. The CAD system consists of three steps: image pre-processing, image clustering, and nuclei segmentation and counting, followed by PRE. The first step is based on customized color modification and color-space transformation. Then, image pixels are clustered by K-Means using features extracted from the images produced by the first step. Finally, nuclei are segmented and counted using global thresholding, mathematical morphology, and connected-component analysis. Our experimental results on fifty Ki-67-stained histopathology images show significant agreement between our CAD's automated PRE and the gold standard, where the latter is an average of two observers' estimates. The paired t-test for the automated and manual estimates shows ρ = 0.86, 0.45, and 0.8 for the brown nuclei count, blue nuclei count, and proliferation rate, respectively. Thus, our proposed CAD system is as reliable as a pathologist estimating the proliferation rate, and its estimate is reproducible.
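
    The final counting stage described above (global thresholding, morphology, connected-component analysis) can be sketched with SciPy on synthetic masks. The color pre-processing and clustering steps are omitted, and the mask contents and blue-nuclei count here are placeholders, not the paper's data:

```python
import numpy as np
from scipy import ndimage

# Count nuclei in a binary stain mask: small-object removal via
# morphological opening, then connected-component labeling.
def count_nuclei(mask):
    opened = ndimage.binary_opening(mask, structure=np.ones((2, 2)))
    _, n = ndimage.label(opened)
    return n

# Synthetic "brown" (Ki-67 positive) mask with two nuclei.
img = np.zeros((40, 40), dtype=bool)
img[5:10, 5:10] = True
img[20:26, 20:26] = True
brown = count_nuclei(img)

blue = 8  # hypothetical count from the "blue" (negative) mask
proliferation_rate = brown / (brown + blue)
print(brown, round(proliferation_rate, 2))  # 2 0.2
```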

  7. Automation of Cassini Support Imaging Uplink Command Development

    Science.gov (United States)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  8. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Full Text Available Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  9. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    Full Text Available With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  10. Fuzzy emotional semantic analysis and automated annotation of scene images.

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance. PMID:25838818

  11. Benchmarking, Research, Development, and Support for ORNL Automated Image and Signature Retrieval (AIR/ASR) Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, K.W.

    2004-06-01

    This report describes the results of a Cooperative Research and Development Agreement (CRADA) with Applied Materials, Inc. (AMAT) of Santa Clara, California. This project encompassed the continued development and integration of the ORNL Automated Image Retrieval (AIR) technology, and an extension of the technology denoted Automated Signature Retrieval (ASR), and other related technologies with the Defect Source Identification (DSI) software system that was under development by AMAT at the time this work was performed. In the semiconductor manufacturing environment, defect imagery is used to diagnose problems in the manufacturing line, train yield management engineers, and examine historical data for trends. Image management in semiconductor data systems is a growing cause of concern in the industry as fabricators are now collecting up to 20,000 images each week. In response to this concern, researchers at the Oak Ridge National Laboratory (ORNL) developed a semiconductor-specific content-based image retrieval method and system, also known as AIR. The system uses an image-based query-by-example method to locate and retrieve similar imagery from a database of digital imagery using visual image characteristics. The query method is based on a unique architecture that takes advantage of the statistical, morphological, and structural characteristics of image data, generated by inspection equipment in industrial applications. The system improves the manufacturing process by allowing rapid access to historical records of similar events so that errant process equipment can be isolated and corrective actions can be quickly taken to improve yield. The combined ORNL and AMAT technology is referred to hereafter as DSI-AIR and DSI-ASR.

  12. Automated Localization of Optic Disc in Retinal Images

    Directory of Open Access Journals (Sweden)

    Deepali A. Godse

    2013-03-01

    Full Text Available Efficient detection of the optic disc (OD) in colour retinal images is a significant task in an automated retinal image analysis system. Most of the algorithms developed for OD detection are applicable mainly to normal, healthy retinal images. It is a challenging task to detect the OD in all types of retinal images: normal, healthy images as well as abnormal images affected by disease. This paper presents an automated system to locate the OD and its centre in all types of retinal images. An ensemble of steps based on different criteria produces more accurate results. The proposed algorithm gives excellent results and avoids false OD detection. The technique was developed and tested on standard databases available to researchers on the internet, Diaretdb0 (130 images), Diaretdb1 (89 images), Drive (40 images), and a local database (194 images). The local database images were collected from ophthalmic clinics. The system is able to locate the OD and its centre in 98.45% of all tested cases. The results achieved by different algorithms can be compared when the algorithms are applied to the same standard databases. This comparison is also discussed in this paper and shows that the proposed algorithm is more efficient.

  13. Automated image capture and defects detection by cavity inspection camera

    International Nuclear Information System (INIS)

    Defects such as pits and scars enhance the electric/magnetic field and cause field emission and quenching in superconducting cavities. We used an inspection camera to find these defects, but the current system, which is operated by a human, is prone to file-naming mistakes and requires a long acquisition time. This study aims to solve these problems by automating the cavity drive and the defect inspection. We used RS-232C serial communication to drive the motor and camera for the automation of the inspection camera, and we used defect-inspection software with defect reference images and pattern-matching software based on the OpenCV library. With this automation, we cut the acquisition time from 8 hours to 2 hours; the defect-inspection software, however, is still under preparation, as it currently struggles with the complexity of the image background. (author)

  14. Automated Segmentation of Cardiac Magnetic Resonance Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Nilsson, Jens Chr.; Grønning, Bjørn A.

    2001-01-01

    Magnetic resonance imaging (MRI) has been shown to be an accurate and precise technique to assess cardiac volumes and function in a non-invasive manner and is generally considered to be the current gold-standard for cardiac imaging [1]. Measurement of ventricular volumes, muscle mass and function...

  15. Automated morphometry of transgenic mouse brains in MR images

    NARCIS (Netherlands)

    Scheenstra, Alize Elske Hiltje

    2011-01-01

    Quantitative and local morphometry of mouse brain MRI is a relatively new field of research, where automated methods can be exploited to rapidly provide accurate and repeatable results. In this thesis we reviewed several existing methods and applications of quantitative morphometry to brain MR image

  16. Automated radiopharmaceutical production systems for positron imaging

    International Nuclear Information System (INIS)

    This study provides information that will lead towards the widespread availability of systems for routine production of positron emitting isotopes and radiopharmaceuticals in a medical setting. The first part describes the collection, evaluation, and preparation in convenient form of the pertinent physical, engineering, and chemical data related to reaction yields and isotope production. The emphasis is on the production of the four short-lived isotopes C-11, N-13, O-15 and F-18. The second part is an assessment of radiation sources including cyclotrons, linear accelerators, and other more exotic devices. Various aspects of instrumentation including ease of installation, cost, and shielding are included. The third part of the study reviews the preparation of precursors and radiopharmaceuticals by automated chemical systems. 182 refs., 3 figs., 15 tabs

  17. An automated and simple method for brain MR image extraction

    OpenAIRE

    Zhu Zixin; Liu Jiafeng; Zhang Haiyan; Li Haiyun

    2011-01-01

    Abstract Background The extraction of brain tissue from magnetic resonance head images, is an important image processing step for the analyses of neuroimage data. The authors have developed an automated and simple brain extraction method using an improved geometric active contour model. Methods The method uses an improved geometric active contour model which can not only solve the boundary leakage problem but also is less sensitive to intensity inhomogeneity. The method defines the initial fu...

  18. An automated vessel segmentation of retinal images using multiscale vesselness

    International Nuclear Information System (INIS)

    The ocular fundus image can provide information on pathological changes caused by local ocular diseases and early signs of certain systemic diseases, such as diabetes and hypertension. Automated analysis and interpretation of fundus images has become a necessary and important diagnostic procedure in ophthalmology. The extraction of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. In this paper, we introduce an implementation of anisotropic diffusion that reduces noise while better preserving small structures, such as vessels, in 2D images. A vessel detection filter, based on a multi-scale vesselness function, is then applied to enhance vascular structures.
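
    Vesselness functions of this kind are typically built from the eigenvalues of the image Hessian at a Gaussian scale, as in Frangi's filter. A single-scale sketch follows; the paper's exact multi-scale formulation is not specified, so the parameters σ, β, and c below are assumptions:

```python
import numpy as np
from scipy import ndimage

# Frangi-style 2D vesselness at one Gaussian scale: tubular structures have
# one small and one large-magnitude (negative, for bright vessels) eigenvalue.
def vesselness(img, sigma=2.0, beta=0.5, c=15.0):
    # Hessian components via Gaussian derivative filters.
    Hxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    Hyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    Hxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 symmetric Hessian, sorted so |l1| <= |l2|.
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy ** 2)
    l1 = 0.5 * (Hxx + Hyy + tmp)
    l2 = 0.5 * (Hxx + Hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-10)   # blob-vs-line ratio
    s = np.sqrt(l1 ** 2 + l2 ** 2)           # second-order structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0  # bright-on-dark vessels require a negative l2
    return v

# A bright horizontal ridge should score higher than flat background.
img = np.zeros((32, 32))
img[15:17, :] = 100.0
v = vesselness(img)
print(v[16, 16] > v[4, 4])  # True
```

A multi-scale version would take the pixel-wise maximum of this response over a range of σ values matched to expected vessel widths.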

  19. Automated Archiving of Archaeological Aerial Images

    Directory of Open Access Journals (Sweden)

    Michael Doneus

    2016-03-01

    Full Text Available The main purpose of any aerial photo archive is to allow quick access to images based on content and location. Therefore, next to a description of technical parameters and depicted content, georeferencing of every image is of vital importance. This can be done either by identifying the main photographed object (georeferencing of the image content) or by mapping the center point and/or the outline of the image footprint. The paper proposes a new image archiving workflow. The new pipeline is based on the parameters that are logged by a commercial but cost-effective GNSS/IMU solution and processed with in-house-developed software. Together, these components allow one to automatically geolocate and rectify the (oblique) aerial images (by a simple planar rectification using the exterior orientation parameters) and to retrieve their footprints with reasonable accuracy, which are automatically stored as a vector file. The data of three test flights were used to determine the accuracy of the device, which turned out to be better than 1° for roll and pitch (mean between 0.0 and 0.21, with a standard deviation of 0.17–0.46) and better than 2.5° for yaw angles (mean between 0.0 and −0.14, with a standard deviation of 0.58–0.94). This turned out to be sufficient to enable fast and almost automatic GIS-based archiving of all of the imagery.

  20. Automated planning of breast radiotherapy using cone beam CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Amit, Guy [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G2M9 (Canada); Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Techna Institute, University Health Network, University of Toronto, Toronto, Ontario M5G 1P5 (Canada)

    2015-02-15

    Purpose: Develop and clinically validate a methodology for using cone beam computed tomography (CBCT) imaging in an automated treatment planning framework for breast IMRT. Methods: A technique for intensity correction of CBCT images was developed and evaluated. The technique is based on histogram matching of CBCT image sets, using information from “similar” planning CT image sets from a database of paired CBCT and CT image sets (n = 38). Automated treatment plans were generated for a testing subset (n = 15) on the planning CT and the corrected CBCT. The plans generated on the corrected CBCT were compared to the CT-based plans in terms of beam parameters, dosimetric indices, and dose distributions. Results: The corrected CBCT images showed considerable similarity to their corresponding planning CTs (average mutual information 1.0±0.1, average sum of absolute differences 185±38). The automated CBCT-based plans were clinically acceptable, as well as equivalent to the CT-based plans, with an average gantry angle difference of 0.99°±1.1° and a target volume overlap index (Dice) of 0.89±0.04, although with slightly higher maximum target doses (4482±90 vs 4560±84, P < 0.05). Gamma index analysis (3%, 3 mm) showed that the CBCT-based plans had the same dose distribution as plans calculated with the same beams on the registered planning CTs (average gamma index 0.12±0.04, gamma <1 in 99.4%±0.3%). Conclusions: The proposed method demonstrates the potential for a clinically feasible and efficient online adaptive breast IMRT planning method based on CBCT imaging, integrating automation.
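
    The histogram-matching step can be sketched in NumPy as a quantile mapping from CBCT intensities onto those of a "similar" planning CT. The image data and intensity distributions below are synthetic stand-ins, not clinical values:

```python
import numpy as np

# Map each source (CBCT) intensity to the reference (CT) intensity at the
# same empirical quantile, so the corrected image shares the CT's histogram.
def match_histograms(source, reference):
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_quantiles = np.cumsum(s_counts) / source.size
    r_quantiles = np.cumsum(r_counts) / reference.size
    matched_vals = np.interp(s_quantiles, r_quantiles, r_vals)
    return matched_vals[s_idx].reshape(source.shape)

rng = np.random.default_rng(1)
cbct = rng.normal(0, 30, (64, 64))   # hypothetical CBCT slice (offset scale)
ct = rng.normal(40, 10, (64, 64))    # hypothetical planning-CT slice
corrected = match_histograms(cbct, ct)
print(abs(corrected.mean() - ct.mean()) < 1.0)  # True: now on the CT's scale
```

In the paper's pipeline, the reference would be chosen from the paired CBCT/CT database rather than from the same patient's planning CT.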

  1. Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics

    OpenAIRE

    Sharma Harshita; Alekseychuk Alexander; Leskovsky Peter; Hellwich Olaf; Anand RS; Zerbe Norman; Hufnagl Peter

    2012-01-01

    Abstract Background Computer-based analysis of digitalized histological images has been gaining increasing attention, due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered as a powerful and versatile representation formalism and have obtained a growing consideration especially by the image proce...

  2. An Automated Image Processing System for Concrete Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Baumgart, C.W.; Cave, S.P.; Linder, K.E.

    1998-11-23

    AlliedSignal Federal Manufacturing & Technologies (FM&T) was asked to perform a proof-of-concept study for the Missouri Highway and Transportation Department (MHTD), Research Division, in June 1997. The goal of this proof-of-concept study was to ascertain if automated scanning and imaging techniques might be applied effectively to the problem of concrete evaluation. In the current evaluation process, a concrete sample core is manually scanned under a microscope. Voids (or air spaces) within the concrete are then detected visually by a human operator by incrementing the sample under the cross-hairs of a microscope and by counting the number of "pixels" which fall within a void. Automation of the scanning and image analysis processes is desired to improve the speed of the scanning process, to improve evaluation consistency, and to reduce operator fatigue. An initial, proof-of-concept image analysis approach was successfully developed and demonstrated using acquired black and white imagery of concrete samples. In this paper, the automated scanning and image capture system currently under development will be described and the image processing approach developed for the proof-of-concept study will be demonstrated. A development update and plans for future enhancements are also presented.

  3. An Automated, Image Processing System for Concrete Evaluation

    International Nuclear Information System (INIS)

    AlliedSignal Federal Manufacturing & Technologies (FM&T) was asked to perform a proof-of-concept study for the Missouri Highway and Transportation Department (MHTD), Research Division, in June 1997. The goal of this proof-of-concept study was to ascertain if automated scanning and imaging techniques might be applied effectively to the problem of concrete evaluation. In the current evaluation process, a concrete sample core is manually scanned under a microscope. Voids (or air spaces) within the concrete are then detected visually by a human operator by incrementing the sample under the cross-hairs of a microscope and by counting the number of "pixels" which fall within a void. Automation of the scanning and image analysis processes is desired to improve the speed of the scanning process, to improve evaluation consistency, and to reduce operator fatigue. An initial, proof-of-concept image analysis approach was successfully developed and demonstrated using acquired black and white imagery of concrete samples. In this paper, the automated scanning and image capture system currently under development will be described and the image processing approach developed for the proof-of-concept study will be demonstrated. A development update and plans for future enhancements are also presented

  4. Automated vasculature extraction from placenta images

    Science.gov (United States)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.

  5. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    Science.gov (United States)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  6. SAND: Automated VLBI imaging and analyzing pipeline

    Science.gov (United States)

    Zhang, Ming

    2016-05-01

    The Search And Non-Destroy (SAND) is a VLBI data reduction pipeline composed of a set of Python programs based on the AIPS interface provided by ObitTalk. It is designed for the massive data reduction of multi-epoch VLBI monitoring research. It can automatically investigate calibrated visibility data, search for all radio emissions above a given noise floor and do model fitting either on the CLEANed image or directly on the uv data. It then digests the model-fitting results, intelligently identifies the multi-epoch jet component correspondence, and recognizes linear or non-linear proper motion patterns. The outputs include a CLEANed image catalogue with polarization maps, an animation cube, proper motion fits and core light curves. For uncalibrated data, a user can easily add inline modules to do the calibration and self-calibration in a batch for a specific array.
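
    The linear proper-motion patterns mentioned above reduce, at their core, to a least-squares fit of component position against observing epoch. A minimal sketch of that fit (the function name and units are illustrative, not part of SAND):

```python
def linear_proper_motion(epochs, positions):
    """Least-squares fit pos = mu * t + pos0; returns (mu, pos0).

    epochs in years, positions in milliarcseconds (illustrative units).
    """
    n = len(epochs)
    mt = sum(epochs) / n
    mp = sum(positions) / n
    sxx = sum((t - mt) ** 2 for t in epochs)
    sxy = sum((t - mt) * (p - mp) for t, p in zip(epochs, positions))
    mu = sxy / sxx        # proper motion (mas/yr)
    pos0 = mp - mu * mt   # extrapolated position at epoch 0
    return mu, pos0

# A component drifting 0.5 mas/yr over four epochs.
mu, pos0 = linear_proper_motion([2010, 2011, 2012, 2013], [1.0, 1.5, 2.0, 2.5])
print(mu)  # 0.5
```

    Non-linear patterns would require a higher-order or kinematic model; this sketch covers only the linear case.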

  7. Content-based image retrieval for brain MRI: An image-searching engine and population-based analysis to utilize past clinical data for future diagnosis

    OpenAIRE

    Faria, Andreia V.; Kenichi Oishi; Shoko Yoshida; Argye Hillis; Miller, Michael I; Susumu Mori

    2015-01-01

    Radiological diagnosis is based on subjective judgment by radiologists. The reasoning behind this process is difficult to document and share, which is a major obstacle in adopting evidence-based medicine in radiology. We report our attempt to use a comprehensive brain parcellation tool to systematically capture image features and use them to record, search, and evaluate anatomical phenotypes. Anatomical images (T1-weighted MRI) were converted to a standardized index by using a high-dimensiona...

  8. A Content-based Analysis of Shahriar's Azerbaijani Turkish Poem Getmə Tərsa Balası (A Christian Child) in Terms of Religious Images and Interpretations

    Directory of Open Access Journals (Sweden)

    Mohammad Amin Mozaheb

    2016-03-01

    Full Text Available The present study aims to analyze Shahriar's Getmə Tərsa Balası (do not leave me, the Christian child) poem in terms of religious images using content-based analysis. Initially, the poem, published in Azerbaijani Turkish, was analyzed by the researchers to find the main religious themes covering Islam and Christianity. Then, a number of images created by the poet, including Hell and Heaven, mosque versus church, and Mount Sinai, were extracted and discussed in detail using the English translation of the verses. Finally, the results have been presented using the extracted themes. The findings showed that Shahriar started his poem from a worldly image in order to reach divine images. Keywords: Shahriar, Getmə Tərsa Balası (A Christian Child), Content-based analysis, Azerbaijani Turkish

  9. Automated delineation of stroke lesions using brain CT images

    Directory of Open Access Journals (Sweden)

    Céline R. Gillebert

    2014-01-01

    Full Text Available Computed tomographic (CT images are widely used for the identification of abnormal brain tissue following infarct and hemorrhage in stroke. Manual lesion delineation is currently the standard approach, but is both time-consuming and operator-dependent. To address these issues, we present a method that can automatically delineate infarct and hemorrhage in stroke CT images. The key elements of this method are the accurate normalization of CT images from stroke patients into template space and the subsequent voxelwise comparison with a group of control CT images for defining areas with hypo- or hyper-intense signals. Our validation, using simulated and actual lesions, shows that our approach is effective in reconstructing lesions resulting from both infarct and hemorrhage and yields lesion maps spatially consistent with those produced manually by expert operators. A limitation is that, relative to manual delineation, there is reduced sensitivity of the automated method in regions close to the ventricles and the brain contours. However, the automated method presents a number of benefits in terms of offering significant time savings and the elimination of the inter-operator differences inherent to manual tracing approaches. These factors are relevant for the creation of large-scale lesion databases for neuropsychological research. The automated delineation of stroke lesions from CT scans may also enable longitudinal studies to quantify changes in damaged tissue in an objective and reproducible manner.
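
    The voxelwise comparison with a control group can be sketched as a z-score map: a voxel is flagged hypo- or hyper-intense when it deviates from the control mean by more than a threshold. A simplified illustration on flattened intensity lists (the threshold and the labelling scheme are assumptions, not the paper's exact statistical test):

```python
from statistics import mean, stdev

def voxelwise_outlier_map(patient, controls, z_thresh=2.5):
    """Flag voxels whose intensity deviates from a control group.

    patient: list of voxel intensities (flattened, already in template space).
    controls: list of control images, each a list of the same length.
    Returns +1 for hyper-intense (hemorrhage-like), -1 for hypo-intense
    (infarct-like), 0 for normal, per voxel.
    """
    labels = []
    for i, v in enumerate(patient):
        ref = [c[i] for c in controls]
        m, s = mean(ref), stdev(ref)
        z = (v - m) / s if s > 0 else 0.0
        labels.append(1 if z > z_thresh else -1 if z < -z_thresh else 0)
    return labels

controls = [[100, 100, 100], [102, 98, 101], [98, 102, 99]]
patient = [99, 60, 140]  # voxel 1 dark (infarct-like), voxel 2 bright
print(voxelwise_outlier_map(patient, controls))  # [0, -1, 1]
```

    The accurate normalization into template space, which the abstract identifies as the key element, is the step this toy example takes for granted.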

  10. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  11. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions

    Directory of Open Access Journals (Sweden)

    Alfonso Baldi

    2010-03-01

    Full Text Available Dermoscopy (dermatoscopy, epiluminescence microscopy) is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs), allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis). This diagnostic tool permits the recognition of morphologic structures not visible to the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval (CBIR) systems.

  12. Automated techniques for quality assurance of radiological image modalities

    Science.gov (United States)

    Goodenough, David J.; Atkins, Frank B.; Dyer, Stephen M.

    1991-05-01

    This paper attempts to identify many of the important issues for quality assurance (QA) of radiological modalities. QA can, of course, span many aspects of the diagnostic decision-making process, ranging from physical image performance levels through to the diagnostic decision of the radiologist. As a model for automated approaches, we use a program we have developed to work with computed tomography (CT) images. In an attempt to unburden the user, and in an effort to facilitate the performance of QA, we have been studying automated approaches. The ultimate utility of the system is its ability to render, in a safe and efficacious manner, decisions that are accurate, sensitive, specific and possible within the economic constraints of modern health care delivery.

  13. Content-based Multi-media Retrieval Technology

    OpenAIRE

    Wang, Yi

    2012-01-01

    This paper gives a summary of Content-based Image Retrieval and Content-based Audio Retrieval, which are the two parts of Content-based Retrieval. Content-based Retrieval is retrieval based on the features of the content. Generally, it is a way to extract features of the media data and automatically find other data with similar features in the database. Content-based Retrieval not only works on discrete media like texts, but can also be used on continuous media, such as video...

  14. Usefulness of automated biopsy guns in image-guided biopsy

    International Nuclear Information System (INIS)

    To evaluate the usefulness of automated biopsy guns in image-guided biopsy of the lung, liver, pancreas and other organs, 160 biopsies of various anatomic sites were performed with automated biopsy devices: 95 under ultrasonographic (US) guidance and 65 under computed tomographic (CT) guidance. We retrospectively analyzed histologic results and complications. Specimens were adequate for histopathologic diagnosis in 143 of the 160 patients (89.4%): diagnostic tissue was obtained in 130 (81.3%) and suggestive tissue in 13 (8.1%), while non-diagnostic tissue was obtained in 14 (8.7%) and inadequate tissue in only 3 (1.9%). There was no statistically significant difference between US-guided and CT-guided percutaneous biopsy, and no significant complication occurred. We experienced mild complications in only 5 patients: 2 cases of hematuria and 2 of hematochezia in transrectal prostatic biopsy, and 1 minimal pneumothorax in CT-guided percutaneous lung biopsy. All resolved spontaneously. Image-guided biopsy using the automated biopsy gun was a simple, safe and accurate method of obtaining adequate specimens for histopathologic diagnosis.

  15. AUTOMATED DATA ANALYSIS FOR CONSECUTIVE IMAGES FROM DROPLET COMBUSTION EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Christopher Lee Dembia

    2012-09-01

    Full Text Available A simple automated image analysis algorithm has been developed that processes consecutive images from high speed, high resolution digital images of burning fuel droplets. The droplets burn under conditions that promote spherical symmetry. The algorithm performs the tasks of edge detection of the droplet’s boundary using a grayscale intensity threshold, and shape fitting either a circle or ellipse to the droplet’s boundary. The results are compared to manual measurements of droplet diameters done with commercial software. Results show that it is possible to automate data analysis for consecutive droplet burning images even in the presence of a significant amount of noise from soot formation. An adaptive grayscale intensity threshold provides the ability to extract droplet diameters for the wide range of noise encountered. In instances where soot blocks portions of the droplet, the algorithm manages to provide accurate measurements if a circle fit is used instead of an ellipse fit, as an ellipse can be too accommodating to the disturbance.
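
    The edge-detection-by-threshold step can be illustrated with a toy version: segment pixels below an adaptive grayscale threshold and fit a circle to the segmented droplet. For brevity this sketch uses an area-equivalent circle rather than the boundary fit described in the abstract; the threshold rule is likewise an assumption:

```python
import math

def droplet_diameter(image, background):
    """Estimate droplet diameter from a grayscale image (2D list, 0-255).

    Pixels darker than an adaptive threshold (midway between the background
    level and the darkest pixel) are taken as droplet; the circle has its
    centre at the pixel centroid and radius sqrt(area / pi).
    """
    darkest = min(min(row) for row in image)
    thresh = (background + darkest) / 2.0  # adaptive grayscale threshold
    pixels = [(r, c) for r, row in enumerate(image)
              for c, v in enumerate(row) if v < thresh]
    area = len(pixels)
    cy = sum(r for r, _ in pixels) / area
    cx = sum(c for _, c in pixels) / area
    radius = math.sqrt(area / math.pi)
    return 2 * radius, (cy, cx)

# 9x9 bright frame with a dark 3x3 "droplet" in the centre.
img = [[200] * 9 for _ in range(9)]
for r in range(3, 6):
    for c in range(3, 6):
        img[r][c] = 20
d, centre = droplet_diameter(img, background=200)
print(centre)  # (4.0, 4.0)
```

    An area-equivalent fit shares the robustness property noted in the abstract: a small sooty bite out of the boundary perturbs the area, and hence the diameter, far less than it perturbs a locally fitted ellipse.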

  16. Automated retinal image analysis for diabetic retinopathy in telemedicine.

    Science.gov (United States)

    Sim, Dawn A; Keane, Pearse A; Tufail, Adnan; Egan, Catherine A; Aiello, Lloyd Paul; Silva, Paolo S

    2015-03-01

    There will be an estimated 552 million persons with diabetes globally by the year 2030. Over half of these individuals will develop diabetic retinopathy, representing a nearly insurmountable burden for providing diabetes eye care. Telemedicine programmes have the capability to distribute quality eye care to virtually any location and address the lack of access to ophthalmic services. In most programmes, there is currently a heavy reliance on specially trained retinal image graders, a resource in short supply worldwide. These factors necessitate an image grading automation process to increase the speed of retinal image evaluation while maintaining accuracy and cost effectiveness. Several automatic retinal image analysis systems designed for use in telemedicine have recently become commercially available. Such systems have the potential to substantially improve the manner by which diabetes eye care is delivered by providing automated real-time evaluation to expedite diagnosis and referral if required. Furthermore, integration with electronic medical records may allow a more accurate prognostication for individual patients and may provide predictive modelling of medical risk factors based on broad population data. PMID:25697773

  17. Automated curved planar reformation of 3D spine images

    International Nuclear Information System (INIS)

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks
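
    The polynomial spine model implies that each reformation plane is anchored at a point on the curve (x(z), y(z), z) and oriented by the curve's tangent at that point. A sketch of that geometry (the coefficients are hypothetical, and the paper's optimization framework for estimating them is not shown):

```python
import math

def polyval(coeffs, z):
    """Evaluate a polynomial, coefficients ordered highest degree first (Horner)."""
    result = 0.0
    for c in coeffs:
        result = result * z + c
    return result

def spine_frame(x_coeffs, y_coeffs, z, dz=1e-3):
    """Point on the spine curve and its unit tangent at axial position z.

    The tangent defines the normal of the oblique reformation plane at z.
    The derivative is taken by central differences for simplicity.
    """
    p = (polyval(x_coeffs, z), polyval(y_coeffs, z), z)
    dx = (polyval(x_coeffs, z + dz) - polyval(x_coeffs, z - dz)) / (2 * dz)
    dy = (polyval(y_coeffs, z + dz) - polyval(y_coeffs, z - dz)) / (2 * dz)
    norm = math.sqrt(dx * dx + dy * dy + 1.0)
    return p, (dx / norm, dy / norm, 1.0 / norm)

# Hypothetical quadratic spine-curve coefficients, evaluated mid-volume.
point, tangent = spine_frame([0.002, -0.1, 5.0], [0.001, 0.0, 2.0], z=50.0)
```

    Resampling the volume on planes perpendicular to this tangent, stacked along z, is what straightens the curved column into the reformatted image.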

  18. Automated Dsm Extraction from Uav Images and Performance Analysis

    Science.gov (United States)

    Rhee, S.; Kim, T.

    2015-08-01

    As technology evolves, unmanned aerial vehicle (UAV) imagery is being used in applications ranging from simple image acquisition to complicated 3D spatial information extraction. Spatial information is usually provided in the form of a DSM or point cloud, so it is important to generate very dense tie points automatically from stereo images. In this paper, we apply a stereo image matching technique developed for satellite/aerial images to UAV images, propose processing steps for automated DSM generation, and analyse the feasibility of DSM generation. For DSM generation from UAV images, firstly, exterior orientation parameters (EOPs) for each dataset were adjusted. Secondly, optimum matching pairs were determined. Thirdly, stereo image matching was performed for each pair. The matching algorithm is based on grey-level correlation of pixels along epipolar lines. Finally, the extracted match results were merged into one result and the final DSM was made. The generated DSM was compared with a reference DSM from Lidar. Overall accuracy was 1.5 m in NMAD. However, several problems have to be solved in the future, including obtaining precise EOPs and handling occlusion and image blurring. More effective interpolation techniques also need to be developed.
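
    Grey-level correlation along epipolar lines, as used in the matching step, can be sketched as a 1D normalized cross-correlation search over a rectified scanline (window size and data below are illustrative):

```python
from statistics import mean, pstdev

def ncc(a, b):
    """Normalized cross-correlation of two equal-length grey-level windows."""
    ma, mb = mean(a), mean(b)
    sa, sb = pstdev(a), pstdev(b)
    if sa == 0 or sb == 0:
        return 0.0
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) * sa * sb)

def match_along_epipolar(template, scanline):
    """Slide the template along a (rectified) epipolar scanline and return
    the offset with the highest correlation score."""
    best, best_score = 0, -2.0
    w = len(template)
    for off in range(len(scanline) - w + 1):
        score = ncc(template, scanline[off:off + w])
        if score > best_score:
            best, best_score = off, score
    return best, best_score

tmpl = [10, 50, 90, 50, 10]
line = [12, 11, 10, 9, 10, 52, 91, 49, 12, 10]
off, score = match_along_epipolar(tmpl, line)
print(off)  # 4
```

    Restricting the search to the epipolar line is what keeps dense matching tractable; the disparity (here, the offset) then converts to height via the adjusted EOPs.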

  19. Automating the Photogrammetric Bridging Based on MMS Image Sequence Processing

    Science.gov (United States)

    Silva, J. F. C.; Lemes Neto, M. C.; Blasechi, V.

    2014-11-01

    The photogrammetric bridging or traverse is a special bundle block adjustment (BBA) for connecting a sequence of stereo-pairs and determining the exterior orientation parameters (EOPs). An object point must be imaged in more than one stereo-pair. In each stereo-pair the distance ratio between an object and its corresponding image point varies significantly. We propose to automate the photogrammetric bridging based on a fully automatic extraction of homologous points in stereo-pairs and on an arbitrary Cartesian datum to which the EOPs and tie points are referred. The technique uses the SIFT algorithm, and keypoints are matched by the smallest distance between their similarity descriptors. All the matched points are used as tie points. The technique was applied initially to two pairs. The block formed by four images was treated by BBA. The process continues to the end of the sequence, and it is semiautomatic because each block is processed independently and the transition from one block to the next depends on the operator. Besides four-image blocks (two pairs), we experimented with other arrangements with block sizes of six, eight, and up to twenty images (respectively, three, four, five and up to ten bases). After the whole sequence of image pairs had been adjusted sequentially in each experiment, a simultaneous BBA was run to estimate the EOP set of each image. The results for classical ("normal case") pairs were analyzed based on standard statistics regularly applied to phototriangulation, and the figures validate the process.
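
    Keypoint matching by smallest descriptor distance can be sketched as a nearest-neighbour search; the ratio test added here is a common safeguard against ambiguous matches and is an assumption beyond the abstract:

```python
import math

def match_keypoints(desc_a, desc_b, ratio=0.8):
    """Match descriptors by smallest Euclidean distance.

    A match is kept only if the nearest descriptor is clearly closer than
    the second nearest (Lowe-style ratio test).
    """
    def dist(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Tiny 2D "descriptors" for illustration; real SIFT descriptors are 128-D.
a = [[0.0, 1.0], [5.0, 5.0]]
b = [[5.1, 4.9], [0.1, 1.1], [9.0, 9.0]]
print(match_keypoints(a, b))  # [(0, 1), (1, 0)]
```

    Every accepted match becomes a tie point feeding the bundle block adjustment described above.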

  20. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Fiehn, Anne-Marie Kanstrup; Kristensson, Martin; Engel, Ulla;

    2016-01-01

    PURPOSE: The aim of this study was to develop an automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...... agreement between the four pathologists and the VG app was κ=0.71. CONCLUSION: In conclusion, the Visiopharm VG app is able to measure the thickness of a sub-epithelial collagenous band in colon biopsies with an accuracy comparable to the performance of a pathologist and thereby provides a promising...

  1. An automated 3D reconstruction method of UAV images

    Science.gov (United States)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither previous camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, especially for rapid response and precise modelling in disaster emergencies.

  2. Granulometric profiling of aeolian dust deposits by automated image analysis

    Science.gov (United States)

    Varga, György; Újvári, Gábor; Kovács, János; Jakab, Gergely; Kiss, Klaudia; Szalai, Zoltán

    2016-04-01

    Determination of granulometric parameters is of growing interest in the Earth sciences. Particle size data of sedimentary deposits provide insights into the physicochemical environment of transport, accumulation and post-depositional alterations of sedimentary particles, and are important proxies applied in paleoclimatic reconstructions. It is especially true for aeolian dust deposits with a fairly narrow grain size range as a consequence of the extremely selective nature of wind sediment transport. Therefore, various aspects of aeolian sedimentation (wind strength, distance to source(s), possible secondary source regions and modes of sedimentation and transport) can be reconstructed only from precise grain size data. As terrestrial wind-blown deposits are among the most important archives of past environmental changes, proper explanation of the proxy data is a mandatory issue. Automated imaging provides a unique technique to gather direct information on granulometric characteristics of sedimentary particles. Granulometric data obtained from automatic image analysis of Malvern Morphologi G3-ID is a rarely applied new technique for particle size and shape analyses in sedimentary geology. Size and shape data of several hundred thousand (or even million) individual particles were automatically recorded in this study from 15 loess and paleosoil samples from the captured high-resolution images. Several size (e.g. circle-equivalent diameter, major axis, length, width, area) and shape parameters (e.g. elongation, circularity, convexity) were calculated by the instrument software. At the same time, the mean light intensity after transmission through each particle is automatically collected by the system as a proxy of optical properties of the material. Intensity values are dependent on chemical composition and/or thickness of the particles. 
    The results of the automated imaging were compared to particle size data determined by three different laser diffraction instruments.
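
    The size and shape parameters listed above follow standard morphometric formulas, e.g. circle-equivalent diameter 2*sqrt(A/pi) and circularity 4*pi*A/P^2. A sketch of those formulas (the instrument's exact definitions may differ slightly):

```python
import math

def shape_parameters(area, perimeter, length, width):
    """Size and shape descriptors of a single particle (units: pixels)."""
    ced = 2.0 * math.sqrt(area / math.pi)                # circle-equivalent diameter
    circularity = 4.0 * math.pi * area / perimeter ** 2  # 1.0 for a perfect circle
    elongation = 1.0 - width / length                    # 0.0 for a circle or square
    return {"ced": ced, "circularity": circularity, "elongation": elongation}

# Sanity check on a perfect circle of radius 10: area = pi r^2, perimeter = 2 pi r.
r = 10.0
p = shape_parameters(math.pi * r * r, 2 * math.pi * r, 2 * r, 2 * r)
print(round(p["ced"], 3), round(p["circularity"], 3), p["elongation"])
```

    Computed per particle over hundreds of thousands of grains, such descriptors are what the instrument software aggregates into the granulometric distributions discussed above.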

  3. Automated angiogenesis quantification through advanced image processing techniques.

    Science.gov (United States)

    Doukas, Charlampos N; Maglogiannis, Ilias; Chatziioannou, Aristotle; Papapetropoulos, Andreas

    2006-01-01

    Angiogenesis, the formation of blood vessels in tumors, is an interactive process between tumor, endothelial and stromal cells that creates a network for the oxygen and nutrient supply necessary for tumor growth. Angiogenic activity is accordingly considered a suitable marker for detecting both tumor growth and inhibition. The angiogenic potential is usually estimated by counting the number of blood vessels in particular sections. One of the most popular assay tissues for studying the angiogenesis phenomenon is the developing chick embryo and its chorioallantoic membrane (CAM), a highly vascular structure lining the inner surface of the egg shell. The aim of this study was to develop and validate an automated image analysis method that gives an unbiased quantification of micro-vessel density and growth in angiogenic CAM images. The presented method has been validated by comparing automated results to manual counts over a series of digital chick embryo photos. The results indicate the high accuracy of the tool, which has thus been used extensively for tumor growth detection at different stages of embryonic development. PMID:17946107
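
    Counting micro-vessels in a segmented image ultimately reduces to counting connected structures. A minimal 4-connected component counter on a binary image (the vessel segmentation itself, the hard part of such a pipeline, is assumed already done):

```python
from collections import deque

def count_components(binary):
    """Count 4-connected foreground components in a binary image (2D list)."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                count += 1
                # Flood-fill this component so it is counted only once.
                q = deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1],
       [1, 0, 0, 0]]
print(count_components(img))  # 3
```

    Real vessel quantification would also measure component length and branching rather than bare counts, but the connectivity analysis is the common core.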

  4. Pancreas++ : Automated Quantification of Pancreatic Islet Cells in Microscopy Images

    Directory of Open Access Journals (Sweden)

    Stuart Maudsley

    2013-01-01

    Full Text Available The microscopic image analysis of pancreatic Islet of Langerhans morphology is crucial for the investigation of diabetes and metabolic diseases. Besides the general size of the islet, the percentage and relative position of glucagon-containing alpha- and insulin-containing beta-cells are also important for pathophysiological analyses, especially in rodents. Hence, the ability to identify, quantify and spatially locate peripheral and ‘involuted’ alpha-cells in the islet core is an important analytical goal. There is a dearth of software available for the automated and sophisticated positional quantification of multiple cell types in the islet core. Manual analytical methods for these analyses, while relatively accurate, can suffer from a slow throughput rate as well as user-based biases. Here we describe a newly developed pancreatic islet analytical software program, Pancreas++, which facilitates the fully automated, non-biased, and highly reproducible investigation of islet area and alpha- and beta-cell quantity as well as position within the islet for either single or large batches of fluorescent images. We demonstrate the utility and accuracy of Pancreas++ by comparing its performance to other pancreatic islet size and cell type (alpha, beta) quantification methods. Our Pancreas++ analysis was significantly faster than other methods, while still retaining low error rates and a high degree of result correlation with the manually generated reference standard.

  5. Automated Image Processing for the Analysis of DNA Repair Dynamics

    CERN Document Server

    Riess, Thorsten; Tomas, Martin; Ferrando-May, Elisa; Merhof, Dorit

    2011-01-01

    The efficient repair of cellular DNA is essential for the maintenance and inheritance of genomic information. In order to cope with the high frequency of spontaneous and induced DNA damage, a multitude of repair mechanisms have evolved. These are enabled by a wide range of protein factors specifically recognizing different types of lesions and finally restoring the normal DNA sequence. This work focuses on the repair factor XPC (xeroderma pigmentosum complementation group C), which identifies bulky DNA lesions and initiates their removal via the nucleotide excision repair pathway. The binding of XPC to damaged DNA can be visualized in living cells by following the accumulation of a fluorescent XPC fusion at lesions induced by laser microirradiation in a fluorescence microscope. In this work, an automated image processing pipeline is presented which allows to identify and quantify the accumulation reaction without any user interaction. The image processing pipeline comprises a preprocessing stage where the ima...

  6. An automated deformable image registration evaluation of confidence tool

    Science.gov (United States)

    Kirby, Neil; Chen, Josephine; Kim, Hojin; Morin, Olivier; Nie, Ke; Pouliot, Jean

    2016-04-01

    Deformable image registration (DIR) is a powerful tool for radiation oncology, but it can produce errors. Beyond this, DIR accuracy is not a fixed quantity and varies on a case-by-case basis. The purpose of this study is to explore the possibility of an automated program to create a patient- and voxel-specific evaluation of DIR accuracy. AUTODIRECT is a software tool that was developed to perform this evaluation for the application of a clinical DIR algorithm to a set of patient images. In brief, AUTODIRECT uses algorithms to generate deformations and applies them to these images (along with processing) to generate sets of test images, with known deformations that are similar to the actual ones and with realistic noise properties. The clinical DIR algorithm is applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student’s t distribution. In this study, four commercially available DIR algorithms were used to deform a dose distribution associated with a virtual pelvic phantom image set, and AUTODIRECT was used to generate dose uncertainty estimates for each deformation. The virtual phantom image set has a known ground-truth deformation, so the true dose-warping errors of the DIR algorithms were also known. AUTODIRECT predicted error patterns that closely matched the actual error spatial distribution. On average AUTODIRECT overestimated the magnitude of the dose errors, but tuning the AUTODIRECT algorithms should improve agreement. This proof-of-principle test demonstrates the potential for the AUTODIRECT algorithm as an empirical method to predict DIR errors.
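
    An uncertainty estimate based on a Student's t distribution over a handful of test registrations can be sketched per voxel as the mean observed error plus a t-scaled confidence half-width. The critical value for n=4 tests and the exact way AUTODIRECT combines the tests are assumptions here:

```python
import math
from statistics import mean, stdev

def dose_uncertainty(test_errors, t_crit=3.182):
    """Per-voxel dose-error estimate from a few test registrations.

    test_errors: dose-warping errors observed at one voxel across the test
    image sets (4 in the paper). t_crit defaults to the two-sided 95%
    Student's t critical value for 3 degrees of freedom (n=4).
    Returns (mean error, confidence half-width).
    """
    n = len(test_errors)
    return mean(test_errors), t_crit * stdev(test_errors) / math.sqrt(n)

bias, half_width = dose_uncertainty([0.8, 1.2, 0.9, 1.1])  # errors in Gy
print(round(bias, 2))  # 1.0
```

    With only a few samples the t critical value is large, which is consistent with the paper's observation that the tool tends to overestimate error magnitudes until its algorithms are tuned.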

  7. Scanning probe image wizard: A toolbox for automated scanning probe microscopy data analysis

    Science.gov (United States)

    Stirling, Julian; Woolley, Richard A. J.; Moriarty, Philip

    2013-11-01

    We describe SPIW (scanning probe image wizard), a new image processing toolbox for SPM (scanning probe microscope) images. SPIW can be used to automate many aspects of SPM data analysis, even for images with surface contamination and step edges present. Specialised routines are available for images with atomic or molecular resolution to improve image visualisation and generate statistical data on surface structure.
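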

  8. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Full Text Available Abstract Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface (and the algorithm parameters can be readily changed) to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy of extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links

  9. Image Processing for Automated Analysis of the Fluorescence In-Situ Hybridization (FISH) Microscopic Images

    Czech Academy of Sciences Publication Activity Database

    Schier, Jan; Kovář, Bohumil; Kočárek, E.; Kuneš, Michal

    Berlin Heidelberg: Springer-Verlag, 2011, s. 622-633. (Lecture Notes in Computer Science ). ISBN 978-3-642-24081-2. [5th International Conference, ICHIT 2011. Daejeon (KR), 22.09.2011-24.09.2011] R&D Projects: GA TA ČR TA01010931 Institutional research plan: CEZ:AV0Z10750506 Keywords : fluorescence in-situ hybridization * image processing * image segmentation Subject RIV: IN - Informatics, Computer Science http://library.utia.cas.cz/separaty/2011/ZS/shier-image processing for automated analysis of the fluorescence in-situ hybridization (fish) microscopic images.pdf

  10. Image-based path planning for automated virtual colonoscopy navigation

    Science.gov (United States)

    Hong, Wei

    2008-03-01

    Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening, by reconstructing three-dimensional models of the colon using computerized tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract the colon centerline, some time-consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation without the requirement of any pre-processing. Our algorithm only needs the physician to provide a seed point as the starting camera position using 2D axial CT images. A wide-angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions, are extracted from the depth images. Camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and increase the user comfort during the fly-through navigation. Moreover, because of the efficiency of our path planning algorithm and rendering algorithm, our VC fly-through navigation system can still guarantee 30 FPS.
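The depth-image steering step described in this abstract can be illustrated with a small numpy sketch. The equiangular fisheye mapping, thresholds, and function names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def next_view_direction(depth, safe_thresh):
    """Pick a camera steering direction from a fisheye depth image."""
    # Safe regions: view directions with enough clearance ahead.
    safe = depth > safe_thresh
    if not safe.any():
        return None
    # Target region: greedily steer toward the deepest safe pixel.
    masked = np.where(safe, depth, -np.inf)
    iy, ix = np.unravel_index(np.argmax(masked), depth.shape)
    # Map the pixel offset from the image centre to yaw/pitch angles,
    # assuming a simple equiangular fisheye projection.
    h, w = depth.shape
    fov = np.pi  # assumed 180-degree field of view
    yaw = (ix - w / 2) / w * fov
    pitch = (iy - h / 2) / h * fov
    return yaw, pitch
```

A real system would also smooth the path between frames; this sketch only shows how safe and target regions reduce to a steering decision.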

  11. Texture-Based Automated Lithological Classification Using Aeromagnetic Anomaly Images

    Science.gov (United States)

    Shankar, Vivek

    2009-01-01

    This report consists of a thesis submitted to the faculty of the Department of Electrical and Computer Engineering, in partial fulfillment of the requirements for the degree of Master of Science, Graduate College, The University of Arizona, 2004. Aeromagnetic anomaly images are geophysical prospecting tools frequently used in the exploration of metalliferous minerals and hydrocarbons. The amplitude and texture content of these images provide a wealth of information to geophysicists who attempt to delineate the nature of the Earth's upper crust. These images prove to be extremely useful in remote areas and locations where the minerals of interest are concealed by basin fill. Typically, geophysicists compile a suite of aeromagnetic anomaly images, derived from amplitude and texture measurement operations, in order to obtain a qualitative interpretation of the lithological (rock) structure. Texture measures have proven to be especially capable of capturing the magnetic anomaly signature of unique lithological units. We performed a quantitative study to explore the possibility of using texture measures as input to a machine vision system in order to achieve automated classification of lithological units. This work demonstrated a significant improvement in classification accuracy over random guessing based on a priori probabilities. Additionally, a quantitative comparison between the performances of five classes of texture measures in their ability to discriminate lithological units was achieved.
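Texture measures of the kind this thesis compares are commonly derived from grey-level co-occurrence matrices (GLCMs). A minimal numpy sketch of GLCM-based features (illustrative only; the thesis's actual feature classes are not specified here):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for a single pixel offset."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    s = g.sum()
    return g / s if s else g

def texture_features(img, levels=8):
    """Contrast, energy, and homogeneity from a horizontal-offset GLCM."""
    p = glcm(img, levels)
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity
```

Feature vectors like these, computed over sliding windows, are what a classifier would consume for per-pixel lithological labels.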

  12. Automated target recognition technique for image segmentation and scene analysis

    Science.gov (United States)

    Baumgart, Chris W.; Ciarcia, Christopher A.

    1994-03-01

    Automated target recognition (ATR) software has been designed to perform image segmentation and scene analysis. Specifically, this software was developed as a package for the Army's Minefield and Reconnaissance and Detector (MIRADOR) program. MIRADOR is an on/off road, remote control, multisensor system designed to detect buried and surface-emplaced metallic and nonmetallic antitank mines. The basic requirements for this ATR software were the following: (1) an ability to separate target objects from the background in low signal-to-noise conditions; (2) an ability to handle a relatively high dynamic range in imaging light levels; (3) the ability to compensate for or remove light source effects such as shadows; and (4) the ability to identify target objects as mines. The image segmentation and target evaluation were performed using an integrated and parallel processing approach. Three basic techniques (texture analysis, edge enhancement, and contrast enhancement) were used collectively to extract all potential mine target shapes from the basic image. Target evaluation was then performed using a combination of size, geometrical, and fractal characteristics, which resulted in a calculated probability for each target shape. Overall results with this algorithm were quite good, though there is a tradeoff between detection confidence and the number of false alarms. This technology also has applications in the areas of hazardous waste site remediation, archaeology, and law enforcement.

  13. Content Based Video Retrieval Systems

    Directory of Open Access Journals (Sweden)

    B V Patel

    2012-05-01

    Full Text Available With the development of multimedia data types and available bandwidth there is a huge demand for video retrieval systems, as users shift from text-based retrieval systems to content-based retrieval systems. Selection of extracted features plays an important role in content-based video retrieval, regardless of the video attributes under consideration. These features are intended for selecting, indexing and ranking according to their potential interest to the user. Good feature selection also allows the time and space costs of the retrieval process to be reduced. This survey reviews the interesting features that can be extracted from video data for indexing and retrieval, along with similarity measurement methods. We also identify present research issues in the area of content-based video retrieval systems.
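For the similarity measurement methods the survey refers to, typical choices on normalised feature histograms include the L1 distance, histogram intersection, and the chi-square statistic. A generic sketch (not taken from the survey itself):

```python
import numpy as np

def l1_distance(h1, h2):
    """Sum of absolute bin differences (0 means identical)."""
    return np.abs(h1 - h2).sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1] for histograms that each sum to 1."""
    return np.minimum(h1, h2).sum()

def chi_square(h1, h2, eps=1e-12):
    """Chi-square distance; eps guards against empty bins."""
    return 0.5 * ((h1 - h2) ** 2 / (h1 + h2 + eps)).sum()
```

Ranking query results is then a matter of sorting the database by any one of these measures against the query's histogram.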

  14. Automated Recognition of 3D Features in GPIR Images

    Science.gov (United States)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  15. Automated detection of open magnetic field regions in EUV images

    Science.gov (United States)

    Krista, Larisza Diana; Reinard, Alysha

    2016-05-01

    Open magnetic regions on the Sun are either long-lived (coronal holes) or transient (dimmings) in nature, but both appear as dark regions in EUV images. For this reason their detection can be done in a similar way. As coronal holes are often large and long-lived in comparison to dimmings, their detection is more straightforward. The Coronal Hole Automated Recognition and Monitoring (CHARM) algorithm detects coronal holes using EUV images and a magnetogram. The EUV images are used to identify dark regions, and the magnetogram allows us to determine if the dark region is unipolar – a characteristic of coronal holes. There is no temporal sensitivity in this process, since coronal hole lifetimes span days to months. Dimming regions, however, emerge and disappear within hours. Hence, the time and location of a dimming emergence need to be known to successfully identify them and distinguish them from regular coronal holes. Currently, the Coronal Dimming Tracker (CoDiT) algorithm is semi-automated – it requires the dimming emergence time and location as an input. With those inputs we can identify the dimming and track it through its lifetime. CoDiT has also been developed to allow the tracking of dimmings that split or merge – a typical feature of dimmings. The advantage of these particular algorithms is their ability to adapt to detecting different types of open field regions. For coronal hole detection, each full-disk solar image is processed individually to determine a threshold for the image; hence, we are not limited to a single pre-determined threshold. For dimming regions we also allow individual thresholds for each dimming, as they can differ substantially. This flexibility is necessary for a subjective analysis of the studied regions. These algorithms were developed with the goal of allowing us to better understand the processes that give rise to eruptive and non-eruptive open field regions. We aim to study how these regions evolve over time and what environmental
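The dark-region-plus-unipolarity test described for CHARM can be sketched in a few lines of numpy. The flux-imbalance measure and threshold below are illustrative assumptions, not CHARM's actual criteria:

```python
import numpy as np

def is_coronal_hole(euv, magnetogram, dark_thresh, skew_limit=0.6):
    """Flag a dark EUV region as a coronal-hole candidate if it is unipolar."""
    # Candidate open-field region: dark pixels in the EUV image.
    dark = euv < dark_thresh
    if not dark.any():
        return False
    # Coronal holes are predominantly unipolar: measure the signed
    # flux imbalance of the underlying magnetogram pixels (1 = fully
    # unipolar, 0 = perfectly balanced).
    b = magnetogram[dark]
    skew = abs(b.sum()) / (np.abs(b).sum() + 1e-12)
    return skew > skew_limit
```

A production detector would first segment connected dark regions and test each separately; this sketch applies the test to the dark mask as a whole.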

  16. Automated microaneurysm detection algorithms applied to diabetic retinopathy retinal images

    Directory of Open Access Journals (Sweden)

    Akara Sopharak

    2013-07-01

    Full Text Available Diabetic retinopathy is the commonest cause of blindness in working-age people. It is characterised and graded by the development of retinal microaneurysms, haemorrhages and exudates. The damage caused by diabetic retinopathy can be prevented if it is treated in its early stages. Therefore, automated early detection can limit the severity of the disease, improve the follow-up management of diabetic patients and assist ophthalmologists in investigating and treating the disease more efficiently. This review focuses on microaneurysm detection as the earliest clinically localised characteristic of diabetic retinopathy, a frequently observed complication in both Type 1 and Type 2 diabetes. Algorithms used for microaneurysm detection from retinal images are reviewed. A number of features used to extract microaneurysms are summarised. Furthermore, a comparative analysis of reported methods used to automatically detect microaneurysms is presented and discussed. The performance of methods and their complexity are also discussed.
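Many of the detectors this review covers begin with a morphological top-hat to flag small dark candidate spots. A self-contained numpy sketch of that first stage (the structuring-element size and threshold are illustrative; `np.roll` wraps at the border, which a real implementation would pad instead):

```python
import numpy as np

def erode(img, r):
    """Greyscale erosion with a (2r+1)x(2r+1) square structuring element."""
    out = img.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = np.minimum(out, shifted)
    return out

def dilate(img, r):
    """Greyscale dilation with a (2r+1)x(2r+1) square structuring element."""
    out = img.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = np.maximum(out, shifted)
    return out

def dark_spot_candidates(green_channel, r=2, thresh=0.1):
    """Black top-hat (closing minus image) highlights small dark spots."""
    closed = erode(dilate(green_channel, r), r)
    tophat = closed - green_channel
    return tophat > thresh
```

Candidate spots would then be filtered by the shape, size, and intensity features the review summarises.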

  17. Automated Imaging and Analysis of the Hemagglutination Inhibition Assay.

    Science.gov (United States)

    Nguyen, Michael; Fries, Katherine; Khoury, Rawia; Zheng, Lingyi; Hu, Branda; Hildreth, Stephen W; Parkhill, Robert; Warren, William

    2016-04-01

    The hemagglutination inhibition (HAI) assay quantifies the level of strain-specific influenza virus antibody present in serum and is the standard by which influenza vaccine immunogenicity is measured. The HAI assay endpoint requires real-time monitoring of rapidly evolving red blood cell (RBC) patterns for signs of agglutination at a rate of potentially thousands of patterns per day to meet the throughput needs for clinical testing. This analysis is typically performed manually through visual inspection by highly trained individuals. However, concordant HAI results across different labs are challenging to demonstrate due to analyst bias and variability in analysis methods. To address these issues, we have developed a bench-top, standalone, high-throughput imaging solution that automatically determines the agglutination states of up to 9600 HAI assay wells per hour and assigns HAI titers to 400 samples in a single unattended 30-min run. Images of the tilted plates are acquired as a function of time and analyzed using algorithms that were developed through comprehensive examination of manual classifications. Concordance testing of the imaging system with eight different influenza antigens demonstrates 100% agreement between automated and manual titer determination with a percent difference of ≤3.4% for all cases. PMID:26464422
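Once each well's agglutination state has been classified by the imaging system, titer assignment reduces to finding the last inhibited well in the serial dilution. A schematic sketch (the starting dilution and dilution factor are illustrative assumptions, not the paper's protocol):

```python
def hai_titer(inhibited, start_dilution=10, factor=2):
    """Assign an HAI titer from wells ordered by increasing dilution.

    `inhibited[i]` is True if hemagglutination was inhibited in well i.
    The titer is the reciprocal of the highest dilution that still
    inhibits agglutination; 0 means below the detection limit.
    """
    titer = 0
    dilution = start_dilution
    for well in inhibited:
        if not well:
            break
        titer = dilution
        dilution *= factor
    return titer
```

Real assays also handle non-monotonic well series (e.g. a skipped well), which this greedy sketch treats as the endpoint.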

  18. Automated transient detection in the STEREO Heliospheric Imagers.

    Science.gov (United States)

    Barnard, Luke; Scott, Chris; Owens, Mat; Lockwood, Mike; Tucker-Hood, Kim; Davies, Jackie

    2014-05-01

    Since the launch of the twin STEREO satellites, the heliospheric imagers (HI) have been used, with good results, in tracking transients of solar origin, such as Coronal Mass Ejections (CMEs), out far into the heliosphere. A frequently used approach is to build a "J-map", in which multiple elongation profiles along a constant position angle are stacked in time, building an image in which radially propagating transients form curved tracks in the J-map. From this the time-elongation profile of a solar transient can be manually identified. This is a time-consuming and laborious process, and the results are subjective, depending on the skill and expertise of the investigator. Therefore, it is desirable to develop an automated algorithm for the detection and tracking of the transient features observed in HI data. This is to some extent previously covered ground, as similar problems have been encountered in the analysis of coronagraph data and have led to the development of products such as CACTus. We present the results of our investigation into the automated detection of solar transients observed in J-maps formed from HI data. We use edge and line detection methods to identify transients in the J-maps, and then use kinematic models of solar transient propagation (such as the fixed-phi and harmonic mean geometric models) to estimate the solar transient's properties, such as speed and propagation direction, from the time-elongation profile. The effectiveness of this process is assessed by comparison of our results with a set of manually identified CMEs, extracted and analysed by the Solar Storm Watch project. Solar Storm Watch is a citizen science project in which solar transients are identified in J-maps formed from HI data and tracked multiple times by different users. This allows the calculation of a consensus time-elongation profile for each event, and therefore does not suffer from the potential subjectivity of an individual researcher tracking an
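The fixed-phi geometric model mentioned above treats the transient as a point propagating radially at constant speed along a fixed angle phi from the observer-Sun line; the predicted elongation then follows from plane triangle geometry. A sketch of the forward model (constants and units are my own choices):

```python
import numpy as np

AU_KM = 1.496e8  # astronomical unit in km

def fixed_phi_elongation(t_sec, v_kms, phi_rad, d_km=AU_KM):
    """Predicted elongation angle (rad) under the fixed-phi model.

    A point transient launched from the Sun travels radially at
    constant speed v along a direction at angle phi from the
    observer-Sun line; the observer sits at distance d from the Sun.
    """
    r = v_kms * np.asarray(t_sec, dtype=float)
    return np.arctan2(r * np.sin(phi_rad), d_km - r * np.cos(phi_rad))
```

Fitting this curve to an extracted time-elongation track (e.g. by least squares over v and phi) yields the transient's speed and propagation direction.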

  19. Content-Based Tile Retrieval System

    Czech Academy of Sciences Publication Activity Database

    Vácha, Pavel; Haindl, Michal

    Berlin / Heidelberg : Springer Berlin / Heidelberg, 2010 - (Hancock, Edwin and Wilson, Richard and Windeatt, Terry and Ulusoy, Ilkay and Escolano, Francisco), s. 434-443 ISBN 978-3-642-14979-5. ISSN 0302-9743. - (LNCS. 6218). [Structural, Syntactic, and Statistical Pattern Recognition. Cesme, Izmir (TR), 18.08.2010-20.08.2010] R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 Grant ostatní: GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : content based image retrieval * textural features * colour * tile classification Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2010/RO/vacha-content-based tile retrieval system.pdf

  20. Automated CT marker segmentation for image registration in radionuclide therapy

    International Nuclear Information System (INIS)

    In this paper a novel, automated CT marker segmentation technique for image registration is described. The technique, which is based on analysing each CT slice contour individually, treats the cross sections of the external markers as protrusions of the slice contour. Knowledge-based criteria, using the shape and dimensions of the markers, are defined to enable marker identification and segmentation. Following segmentation, the three-dimensional (3D) markers' centroids are localized using an intensity-weighted algorithm. Finally, image registration is performed using a least-squares fit algorithm. The technique was applied to both simulated and patient studies. The patients were undergoing 131I-mIBG radionuclide therapy with each study comprising several 99mTc single photon emission computed tomography (SPECT) scans and one CT marker scan. The mean residual 3D registration errors (±1 SD) computed for the simulated and patient studies were 1.8±0.3 mm and 4.3±0.5 mm respectively. (author)
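The final least-squares registration step described above is commonly solved in closed form with the Kabsch/Procrustes algorithm on the matched marker centroids. A generic sketch (not the paper's implementation):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid-body fit: find R, t minimising ||R src + t - dst||^2.

    src, dst: (n_points, dim) arrays of matched centroids.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    # Guard against a reflection solution.
    d = np.sign(np.linalg.det(vt.T @ u.T))
    corr = np.diag([1.0] * (src.shape[1] - 1) + [d])
    rot = vt.T @ corr @ u.T
    t = dst.mean(axis=0) - rot @ src.mean(axis=0)
    return rot, t
```

The residual registration error the paper reports corresponds to the RMS of `rot @ src + t - dst` over the markers.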

  1. Automated interpretation of PET/CT images in patients with lung cancer

    DEFF Research Database (Denmark)

    Gutte, Henrik; Jakobsson, David; Olofsson, Fredrik; Ohlsson, Mattias; Valind, Sven; Loft, Annika; Edenbrandt, Lars; Kjaer, Andreas

    2007-01-01

    PURPOSE: To develop a completely automated method based on image processing techniques and artificial neural networks for the interpretation of combined [(18)F]fluorodeoxyglucose (FDG) positron emission tomography (PET) and computed tomography (CT) images for the diagnosis and staging of lung... standard' image interpretation. The training group was used in the development of the automated method. The image processing techniques included algorithms for segmentation of the lungs based on the CT images and detection of lesions in the PET images. Lung boundaries from the CT images were used for... localization of lesions in the PET images in the feature extraction process. Eight features from each examination were used as inputs to artificial neural networks trained to classify the images. Thereafter, the performance of the network was evaluated in the test set. RESULTS: The performance of the automated...

  2. Automated movement correction for dynamic PET/CT images: Evaluation with phantom and patient data

    OpenAIRE

    Ye, H.; Wong, KP; Wardak, M; Dahlbom, M.; Kepe, V; Barrio, JR; Nelson, LD; Small, GW; Huang, SC

    2014-01-01

    Head movement during dynamic brain PET/CT imaging results in mismatch between CT and dynamic PET images. It can cause artifacts in CT-based attenuation corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registered the CT image to each dynamic PET frame, then re-reconstructed th...

  3. Automated Micro-Object Detection for Mobile Diagnostics Using Lens-Free Imaging Technology

    OpenAIRE

    Mohendra Roy; Dongmin Seo; Sangwoo Oh; Yeonghun Chae; Myung-Hyun Nam; Sungkyu Seo

    2016-01-01

    Lens-free imaging technology has been extensively used recently for microparticle and biological cell analysis because of its high throughput, low cost, and simple and compact arrangement. However, this technology still lacks a dedicated and automated detection system. In this paper, we describe a custom-developed automated micro-object detection method for a lens-free imaging system. In our previous work (Roy et al.), we developed a lens-free imaging system using low-cost components. This sy...

  4. Grades computacionais na recuperação de imagens médicas baseada em conteúdo Grid computing in the optimization of content-based medical images retrieval

    Directory of Open Access Journals (Sweden)

    Marcelo Costa Oliveira

    2007-08-01

    OBJECTIVE: To utilize the grid computing technology to enable the utilization of a similarity measurement algorithm for content-based medical image retrieval. MATERIALS AND METHODS: The content-based images retrieval technique is comprised of two sequential steps: texture analysis and similarity measurement algorithm. These steps have been adopted for head and knee images for evaluation of accuracy in the retrieval of images of a single plane and acquisition sequence in a databank with 2,400 medical images. Initially, texture analysis was utilized as a preselection resource to obtain a set of the 1,000 most similar images as compared with a reference image selected by a clinician. Then, these 1,000 images were processed utilizing a similarity measurement algorithm on a computational grid. RESULTS: The texture analysis has demonstrated low accuracy for sagittal knee images (0.54) and axial head images (0.40). Nevertheless, this technique has shown effectiveness as a filter, pre-selecting images to be evaluated by the similarity measurement algorithm. Content-based images retrieval with the similarity measurement algorithm applied on these pre-selected images has demonstrated satisfactory accuracy - 0.95 for sagittal knee images, and 0.92 for axial head images. The high computational cost of the similarity measurement algorithm was balanced by the utilization of grid computing. CONCLUSION: The approach combining texture analysis and similarity measurement algorithm for content-based images retrieval resulted in an accuracy of > 90%. Grid computing has shown to be essential for the utilization of the similarity measurement algorithm in the content-based images retrieval that otherwise would be limited to supercomputers.

  5. Automated Nanofiber Diameter Measurement in SEM Images Using a Robust Image Analysis Method

    Directory of Open Access Journals (Sweden)

    Ertan Öznergiz

    2014-01-01

    Full Text Available Due to the high surface area, porosity, and rigidity, applications of nanofibers and nanosurfaces have developed in recent years. Nanofibers and nanosurfaces are typically produced by the electrospinning method. In the production process, determination of average fiber diameter is crucial for quality assessment. Average fiber diameter is determined by manually measuring the diameters of randomly selected fibers on scanning electron microscopy (SEM) images. However, as the number of images increases, manual fiber diameter determination becomes a tedious and time-consuming task as well as being sensitive to human errors. Therefore, an automated fiber diameter measurement system is desired. In the literature, this task is achieved by using image analysis algorithms. Typically, these methods first isolate each fiber in the image and measure the diameter of each isolated fiber. Fiber isolation is an error-prone process. In this study, automated calculation of nanofiber diameter is achieved without fiber isolation using image processing and analysis algorithms. Performance of the proposed method was tested on real data. The effectiveness of the proposed method is shown by comparing automatically and manually measured nanofiber diameter values.
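One common way to measure fiber diameters without isolating individual fibers is via the Euclidean distance transform of the binary fiber mask: along the fiber centreline, the distance to the background is the local half-width. The sketch below illustrates that idea (this is an assumed approach, not necessarily the authors' algorithm; the ridge test and half-pixel correction are simplifications):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mean_fiber_diameter(mask, nm_per_px=1.0):
    """Estimate mean fiber diameter from a binary mask, no fiber isolation."""
    # Each fibre pixel gets the distance to the nearest background pixel.
    edt = distance_transform_edt(mask)
    # Centreline = ridge of the distance map (local maxima in both axes).
    # np.roll wraps at the border; a real implementation would pad.
    up, down = np.roll(edt, 1, 0), np.roll(edt, -1, 0)
    left, right = np.roll(edt, 1, 1), np.roll(edt, -1, 1)
    ridge = (edt >= up) & (edt >= down) & (edt >= left) & (edt >= right)
    radii = edt[mask.astype(bool) & ridge]
    # Subtract half a pixel per side: centre-to-centre distances
    # overestimate the edge-to-edge width by one pixel.
    return (2.0 * radii.mean() - 1.0) * nm_per_px
```

On crossing fibers the ridge still passes through each fiber's core, which is what lets the method skip the error-prone isolation step.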

  6. A Framework for Content-based Retrieval of EEG with Applications to Neuroscience and Beyond*

    OpenAIRE

    Su, Kyungmin; Robbins, Kay A.

    2013-01-01

    This paper introduces a prototype framework for content-based EEG retrieval (CBER). Like content-based image retrieval, the proposed framework retrieves EEG segments similar to the query EEG segment in a large database.

  7. Automated Detection of Firearms and Knives in a CCTV Image

    Directory of Open Access Journals (Sweden)

    Michał Grega

    2016-01-01

    Full Text Available Closed circuit television systems (CCTV are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  8. Automated Detection of Firearms and Knives in a CCTV Image.

    Science.gov (United States)

    Grega, Michał; Matiolański, Andrzej; Guzik, Piotr; Leszczuk, Mikołaj

    2016-01-01

    Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims. PMID:26729128

  9. Exploiting image registration for automated resonance assignment in NMR

    International Nuclear Information System (INIS)

    Analysis of protein NMR data involves the assignment of resonance peaks in a number of multidimensional data sets. To establish resonance assignment a three-dimensional search is used to match a pair of common variables, such as chemical shifts of the same spin system, in different NMR spectra. We show that by displaying the variables to be compared in two-dimensional plots the process can be simplified. Moreover, by utilizing a fast Fourier transform cross-correlation algorithm, more common to the field of image registration or pattern matching, we can automate this process. Here, we use sequential NMR backbone assignment as an example to show that the combination of correlation plots and segmented pattern matching establishes fast backbone assignment in fifteen proteins of varying sizes. For example, the 265-residue RalBP1 protein was 95.4 % correctly assigned in 10 s. The same concept can be applied to any multidimensional NMR data set where analysis comprises the comparison of two variables. This modular and robust approach offers high efficiency with excellent computational scalability and could be easily incorporated into existing assignment software
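The FFT cross-correlation at the heart of this approach follows from the correlation theorem: correlating two 2D plots is an element-wise product in the frequency domain. A generic numpy sketch for recovering the integer offset between two patterns (not the paper's segmented matching):

```python
import numpy as np

def xcorr_shift(a, b):
    """Integer (dy, dx) shift of pattern b relative to a, via FFT."""
    # Correlation theorem: corr = IFFT(FFT(a) * conj(FFT(b))).
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # The correlation is circular: map wrapped indices to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

In the assignment setting, each axis of the 2D plot is one of the compared chemical-shift variables, and the correlation peak locates the best pattern match.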

  10. Automated image analysis of atomic force microscopy images of rotavirus particles

    International Nuclear Information System (INIS)

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM
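Automated extraction of per-particle dimensions from an AFM height map typically reduces to thresholding, connected-component labelling, and per-region measurement. A minimal sketch of that pipeline (illustrative assumptions; the published routine's actual measurements are not specified here):

```python
import numpy as np
from scipy.ndimage import label, find_objects

def particle_dimensions(height_map, thresh):
    """Per-particle bounding-box size and peak height from an AFM image."""
    # Particles are regions standing above the substrate threshold.
    mask = height_map > thresh
    labels, n = label(mask)
    stats = []
    for sl in find_objects(labels):
        region = height_map[sl]
        stats.append({
            'rows': sl[0].stop - sl[0].start,   # bounding-box height (px)
            'cols': sl[1].stop - sl[1].start,   # bounding-box width (px)
            'peak_height': float(region.max()),
        })
    return stats
```

Collecting these per-particle records over many images is what enables the statistical analysis of dimensional characteristics described in the abstract.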

  11. Automated image analysis of atomic force microscopy images of rotavirus particles

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, S. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Allison, D.P. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Biochemistry, Cellular, and Molecular Biology, University of Tennessee, Knoxville, TN 37996 (United States); Molecular Imaging Inc. Tempe, AZ, 85282 (United States); Qi, H. [Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Morrell-Falvey, J.L. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Kallewaard, N.L. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Crowe, J.E. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Doktycz, M.J. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]. E-mail: doktyczmj@ornl.gov

    2006-06-15

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM.

  12. Research and Development of the Content-Based Image Retrieval Technology

    Institute of Scientific and Technical Information of China (English)

    王文惠; 周良柱; 万建伟

    2001-01-01

    With the development of multimedia technology and digital libraries, content-based image retrieval has become a key problem in image processing and computer vision. Image database indexing technology makes it possible to retrieve images automatically and intelligently. This paper reviews the state of the art of the research and its applications in detail, and concludes by discussing the technology's prospects.

  13. Performance and Analysis of the Automated Semantic Object and Spatial Relationships Extraction in Traffic Images

    OpenAIRE

    Wang Hui Hui

    2013-01-01

    Extraction and representation of the semantics of spatial relations among objects are important, as they convey information about the image and increase confidence in image understanding, which in turn supports richer querying and retrieval facilities. This paper discusses the performance of the proposed automated extraction of object spatial-relationship semantics. Experiments have been conducted to demonstrate that the proposed automated object spatial relations...

  14. Use of automated image registration to generate mean brain SPECT image of Alzheimer's patients

    International Nuclear Information System (INIS)

    The purpose of this study was to compute and compare the group mean HMPAO brain SPECT images of patients with senile dementia of Alzheimer's type (SDAT) and age matched control subjects after transformation of the individual images to a standard size and shape. Ten patients with Alzheimer's disease (age 71.6±5.0 yr) and ten age matched normal subjects (age 71.0±6.1 yr) participated in this study. Tc-99m HMPAO brain SPECT and X-ray CT scans were acquired for each subject. SPECT images were normalized to an average activity of 100 counts/pixel. Individual brain images were transformed to a standard size and shape with the help of Automated Image Registration (AIR). Realigned brain SPECT images of both groups were used to generate mean and standard deviation images by arithmetic operations on voxel based numerical values. Mean images of both groups were compared by applying the unpaired t-test on a voxel by voxel basis to generate three dimensional T-maps. X-ray CT images of individual subjects were evaluated by means of a computer program for brain atrophy. A significant decrease in relative radioisotope (RI) uptake was present in the bilateral superior and inferior parietal lobules (p<0.05), bilateral inferior temporal gyri, and the bilateral superior and middle frontal gyri (p<0.001). The mean brain atrophy indices for patients and normal subjects were 0.853±0.042 and 0.933±0.017 respectively, the difference being statistically significant (p<0.001). The use of a brain image standardization procedure increases the accuracy of voxel based group comparisons. Thus, intersubject averaging enhances the capacity for detection of abnormalities in functional brain images by minimizing the influence of individual variation. (author)
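
The voxel-by-voxel unpaired t-test used to generate the three-dimensional T-maps can be written compactly in NumPy. This is a hedged sketch (it assumes Welch-style per-group variances; `voxelwise_t_map` is a hypothetical name, not the study's code):

```python
import numpy as np

def voxelwise_t_map(group_a, group_b):
    """Unpaired two-sample t-statistic at every voxel (Welch form).

    group_a, group_b: arrays of shape (n_subjects, *volume_shape),
    e.g. spatially normalized, count-normalized SPECT volumes.
    """
    na, nb = len(group_a), len(group_b)
    ma, mb = group_a.mean(axis=0), group_b.mean(axis=0)
    va, vb = group_a.var(axis=0, ddof=1), group_b.var(axis=0, ddof=1)
    se = np.sqrt(va / na + vb / nb)  # standard error of the mean difference
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(se > 0, (ma - mb) / se, 0.0)
    return t
```

Thresholding the resulting map at the t-value for the chosen p-level (e.g. p < 0.05) yields the significance maps described above.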

  15. Imaging Automation and Volume Tomographic Visualization at Texas Neutron Imaging Facility

    International Nuclear Information System (INIS)

    A thermal neutron imaging facility for real-time neutron radiography and computed tomography has been developed at the University of Texas reactor. The facility has produced good-quality radiographs and two-dimensional tomograms. Further developments have recently been accomplished. Computer software has been developed to automate and expedite the data acquisition and reconstruction processes. Volume tomographic visualization using Interactive Data Language (IDL) software has been demonstrated and will be further developed. Volume tomography provides the additional flexibility of producing slices of the object in software and thus avoids redoing the measurements.

  16. Imaging automation and volume tomographic visualization at Texas Neutron Imaging Facility

    International Nuclear Information System (INIS)

    A thermal neutron imaging facility for real-time neutron radiography and computed tomography has been developed at the University of Texas reactor. The facility has produced good-quality radiographs and two-dimensional tomograms. Further developments have recently been accomplished. Computer software has been developed to automate and expedite the data acquisition and reconstruction processes. Volume tomographic visualization using Interactive Data Language (IDL) software has been demonstrated and will be further developed. Volume tomography provides the additional flexibility of producing slices of the object in software and thus avoids redoing the measurements.

  17. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    Science.gov (United States)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering and non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
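
One of the better-performing methods, Ridler's clustering (isodata) threshold, is simple to implement. The following NumPy sketch is illustrative only and not the study's code; the tolerance and iteration cap are assumptions:

```python
import numpy as np

def ridler_threshold(image, tol=0.5, max_iter=100):
    """Ridler-Calvard (isodata) clustering threshold.

    Iterates t <- midpoint of (mean below t, mean above t) until the
    threshold stops moving, then returns the final threshold.
    """
    t = float(image.mean())  # start from the global mean intensity
    for _ in range(max_iter):
        below = image[image <= t]
        above = image[image > t]
        if below.size == 0 or above.size == 0:
            break
        t_new = 0.5 * (below.mean() + above.mean())
        if abs(t_new - t) < tol:
            return float(t_new)
        t = float(t_new)
    return t
```

Applying `image > ridler_threshold(image)` then yields the segmented object without any a priori calibration, which is the property the abstract highlights.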

  18. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    International Nuclear Information System (INIS)

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering and non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools. (paper)

  19. Automated Algorithm for Carotid Lumen Segmentation and 3D Reconstruction in B-mode images

    OpenAIRE

    Jorge M. S. Pereira; João Manuel R. S. Tavares

    2011-01-01

    The B-mode imaging system is one of the most widely used in medicine; however, it complicates the image segmentation process because of low contrast and noise. Despite these difficulties, this imaging mode is often used in the study and diagnosis of carotid artery diseases. In this paper, we describe a novel automated algorithm for carotid lumen segmentation and 3-D reconstruction in B-mode images.

  20. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images.

    Science.gov (United States)

    Kim, Kwang-Min; Son, Kilho; Palmore, G Tayhas R

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337
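
The Laplacian-of-Gaussian filtering step that NIA uses to pick out blob-like neuronal structures can be sketched as below. This is an illustrative implementation: the kernel radius and sigma are assumptions, and the direct convolution is written for clarity rather than speed.

```python
import numpy as np

def log_kernel(sigma, radius):
    """Discrete Laplacian-of-Gaussian kernel, adjusted to zero mean."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x ** 2 + y ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # zero response on flat regions

def convolve2d(image, kernel):
    """Direct 2D convolution with zero padding (same-size output)."""
    r = kernel.shape[0] // 2
    padded = np.pad(image, r)
    out = np.zeros_like(image, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += kernel[dy + r, dx + r] * padded[
                r + dy:r + dy + image.shape[0],
                r + dx:r + dx + image.shape[1]]
    return out
```

Bright blob-like structures (somata) appear as strong negative extrema in the filtered response, so they can be detected even at low SNR before the graphical-model stage.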

  1. AMIsurvey, chimenea and other tools: Automated imaging for transient surveys with existing radio-observatories

    CERN Document Server

    Staley, Tim D

    2015-01-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, making use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. These packages...

  2. Automated defect recognition method based on neighbor layer slice images of ICT

    International Nuclear Information System (INIS)

    The current automated defect recognition of industrial computerized tomography (ICT) slice images is mostly carried out on individual images. False detections occur because some isolated noise cannot be rejected without considering the information in neighbor layer images. To solve this problem, a new automated defect recognition method is presented based on a two-step analysis of consecutive slice images. First, all potential defects are segmented using a classic method in each image. Second, real defects and false defects are distinguished by matching all potential defects across neighbor layer images, exploiting the continuity of real defect characteristics and the discontinuity of false defects between neighboring images. The method is verified by experiments, and the results prove that real defects can be detected with high probability while false detections are reduced effectively. (authors)
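
The continuity test at the heart of the second step, keeping a candidate defect only if a nearby candidate exists in an adjacent slice, might look like this (a minimal sketch; the matching radius and the centroid-list data layout are assumptions):

```python
import math

def filter_defects(slices, radius=3.0):
    """Keep only candidate defects that have a match within `radius`
    in an adjacent slice (continuity of real defects across slices).

    slices: list of lists of (x, y) centroids, one list per CT slice.
    Returns the filtered per-slice lists.
    """
    real = []
    for i, defects in enumerate(slices):
        kept = []
        for c in defects:
            neighbors = []
            if i > 0:
                neighbors += slices[i - 1]
            if i + 1 < len(slices):
                neighbors += slices[i + 1]
            # isolated noise has no counterpart in adjacent slices
            if any(math.dist(c, d) <= radius for d in neighbors):
                kept.append(c)
        real.append(kept)
    return real
```

A real pore or crack spans several slices and survives the filter, while single-slice noise is discarded.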

  3. Knowledge Acquisition, Validation, and Maintenance in a Planning System for Automated Image Processing

    Science.gov (United States)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.

  4. Automated recognition of cell phenotypes in histology images based on membrane- and nuclei-targeting biomarkers

    International Nuclear Information System (INIS)

    Three-dimensional in vitro cultures of cancer cells are used to predict the effects of prospective anti-cancer drugs in vivo. In this study, we present an automated image analysis protocol for detailed morphological protein marker profiling of tumoroid cross section images. Histologic cross sections of breast tumoroids developed in co-culture suspensions of breast cancer cell lines, stained for E-cadherin and progesterone receptor, were digitized, and pixels in these images were classified into five categories using k-means clustering. Automated segmentation was used to identify image regions composed of cells expressing a given biomarker. Synthesized images were created to check the accuracy of the image processing system. Accuracy of automated segmentation was over 95% in identifying regions of interest in synthesized images. Image analysis of adjacent histology slides stained, respectively, for Ecad and PR accurately predicted regions of different cell phenotypes. Image analysis of tumoroid cross sections from different tumoroids obtained under the same co-culture conditions indicated the variation of cellular composition from one tumoroid to another. Variations in the compositions of cross sections obtained from the same tumoroid were established by parallel analysis of Ecad- and PR-stained cross section images. The proposed image analysis methods offer standardized high throughput profiling of the molecular anatomy of tumoroids based on both membrane and nuclei markers, suitable for rapid large-scale investigations of anti-cancer compounds for drug development.
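
The k-means pixel-classification step can be sketched in NumPy as follows. This is illustrative only: the quantile-based initialization is an assumption made for determinism, and the paper's five categories correspond to k=5.

```python
import numpy as np

def kmeans_pixels(image, k=5, n_iter=50):
    """Cluster pixels into k categories with Lloyd's k-means.

    Works on a 2D intensity image or an HxWx3 colour image. Centres are
    initialized on intensity quantiles. Returns the per-pixel label map
    and the cluster centres.
    """
    if image.ndim == 3:
        pixels = image.reshape(-1, image.shape[-1]).astype(float)
    else:
        pixels = image.reshape(-1, 1).astype(float)
    centres = np.quantile(pixels, np.linspace(0.0, 1.0, k), axis=0)
    for _ in range(n_iter):
        # squared distance of every pixel to every centre
        dists = ((pixels[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(image.shape[:2]), centres
```

Each label then corresponds to one staining category, and connected regions of a label give the biomarker-expressing regions mentioned above.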

  5. Automated striatal uptake analysis of 18F-FDOPA PET images applied to Parkinson's disease patients

    International Nuclear Information System (INIS)

    6-[18F]Fluoro-L-DOPA (FDOPA) is a radiopharmaceutical valuable for assessing presynaptic dopaminergic function when used with positron emission tomography (PET). More specifically, the striatal-to-occipital ratio (SOR) of FDOPA uptake images has been extensively used as a quantitative parameter in these PET studies. Our aim was to develop an easy, automated method capable of performing objective analysis of SOR in FDOPA PET images of Parkinson's disease (PD) patients. Brain images from FDOPA PET studies of 21 patients with PD and 6 healthy subjects were included in our automated striatal analyses. Images of each individual were spatially normalized into an FDOPA template. Subsequently, the image slice with the highest level of basal ganglia activity was chosen among the series of normalized images, along with the immediately preceding and following slices. Finally, the summation of these three images was used to quantify and calculate the SOR values. The results obtained by automated analysis were compared with manual analysis by a trained and experienced image processing technologist. The SOR values obtained from the automated analysis showed good agreement and high correlation with manual analysis. The differences in caudate, putamen, and striatum were -0.023, -0.029, and -0.025, respectively; the correlation coefficients were 0.961, 0.957, and 0.972, respectively. We have successfully developed a method for automated striatal uptake analysis of FDOPA PET images. There was no significant difference between the SOR values obtained from this method and those from manual analysis. Moreover, it is an unbiased, time-saving, and cost-effective program that is easy to implement on a personal computer. (author)
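
The SOR computation itself, summing the peak basal-ganglia slice with its two neighbours and taking the ratio of mean counts over region masks, reduces to a few lines. This is a sketch under assumed inputs; the mask definitions and the function name are hypothetical.

```python
import numpy as np

def striatal_occipital_ratio(volume, peak_slice, striatum_mask, occipital_mask):
    """SOR from the summed image of the slice with the highest
    basal-ganglia activity plus its two neighbouring slices.

    volume: 3D array of spatially normalized FDOPA counts (slice, y, x).
    striatum_mask, occipital_mask: 2D boolean region-of-interest masks.
    """
    summed = volume[peak_slice - 1:peak_slice + 2].sum(axis=0)
    striatal = summed[striatum_mask].mean()
    occipital = summed[occipital_mask].mean()
    return striatal / occipital
```

With template-space masks for caudate, putamen, and whole striatum, the same ratio gives the three regional SOR values reported above.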

  6. Automated Micro-Object Detection for Mobile Diagnostics Using Lens-Free Imaging Technology.

    Science.gov (United States)

    Roy, Mohendra; Seo, Dongmin; Oh, Sangwoo; Chae, Yeonghun; Nam, Myung-Hyun; Seo, Sungkyu

    2016-01-01

    Lens-free imaging technology has been extensively used recently for microparticle and biological cell analysis because of its high throughput, low cost, and simple and compact arrangement. However, this technology still lacks a dedicated and automated detection system. In this paper, we describe a custom-developed automated micro-object detection method for a lens-free imaging system. In our previous work (Roy et al.), we developed a lens-free imaging system using low-cost components. This system was used to generate and capture the diffraction patterns of micro-objects and a global threshold was used to locate the diffraction patterns. In this work we used the same setup to develop an improved automated detection and analysis algorithm based on adaptive threshold and clustering of signals. For this purpose images from the lens-free system were then used to understand the features and characteristics of the diffraction patterns of several types of samples. On the basis of this information, we custom-developed an automated algorithm for the lens-free imaging system. Next, all the lens-free images were processed using this custom-developed automated algorithm. The performance of this approach was evaluated by comparing the counting results with standard optical microscope results. We evaluated the counting results for polystyrene microbeads, red blood cells, and HepG2, HeLa, and MCF7 cells. The comparison shows good agreement between the systems, with a correlation coefficient of 0.91 and linearity slope of 0.877. We also evaluated the automated size profiles of the microparticle samples. This Wi-Fi-enabled lens-free imaging system, along with the dedicated software, possesses great potential for telemedicine applications in resource-limited settings. PMID:27164146
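
A local-mean adaptive threshold of the kind the authors contrast with their earlier global threshold can be computed efficiently with an integral image. This is an illustrative sketch, not the paper's algorithm; the window size and offset are assumptions.

```python
import numpy as np

def adaptive_threshold(image, window=15, offset=0.0):
    """Per-pixel threshold = local mean over a window x window
    neighbourhood (computed with an integral image) minus `offset`.
    Returns a boolean map of pixels strictly above their local threshold."""
    img = image.astype(float)
    pad = window // 2
    padded = np.pad(img, pad + 1, mode="edge")
    ii = padded.cumsum(axis=0).cumsum(axis=1)  # integral image
    h, w = img.shape
    # sums of window x window boxes via 4 integral-image taps
    s = (ii[window:window + h, window:window + w]
         - ii[:h, window:window + w]
         - ii[window:window + h, :w]
         + ii[:h, :w])
    local_mean = s / (window * window)
    return img > local_mean - offset
```

Because the threshold follows the local background, diffraction patterns are located reliably even under the uneven illumination typical of low-cost lens-free setups; the detected pixels would then be clustered into per-object signals.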

  7. A SYSTEM FOR ACCESSING A COLLECTION OF HISTOLOGY IMAGES USING CONTENT-BASED STRATEGIES Sistema para acceder una colección de imágenes histológicas mediante estrategias basadas en el contenido

    Directory of Open Access Journals (Sweden)

    F GONZÁLEZ

    Full Text Available Histology images are an important resource for research, education and medical practice. The availability of image collections for reference purposes is limited to printed formats such as books and specialized journals. When histology image sets are published in digital formats, they comprise a few tens of images that do not represent the wide diversity of biological structures found in fundamental tissues. Making a complete histology image collection available to the general public would have a great impact on research and education in areas such as medicine, biology and the natural sciences. This work presents the acquisition process of a histology image collection of 20,000 samples in digital format, from tissue processing to digital image capture. The main purpose of collecting these images is to make them available as reference material to the academic community. In addition, this paper presents the design and architecture of a system to query and explore the image collection, using content-based image retrieval tools and text-based search over the annotations provided by experts. The system also offers novel image visualization methods that allow easy identification of interesting images among hundreds of candidates. The system has been developed using a service-oriented architecture and allows web-based access at http://www.informed.unal.edu.co

  8. A novel automated image analysis method for accurate adipocyte quantification

    OpenAIRE

    Osman, Osman S.; Selway, Joanne L; Kępczyńska, Małgorzata A; Stocker, Claire J.; O’Dowd, Jacqueline F; Cawthorne, Michael A.; Arch, Jonathan RS; Jassim, Sabah; Langlands, Kenneth

    2013-01-01

    Increased adipocyte size and number are associated with many of the adverse effects observed in metabolic disease states. While methods to quantify such changes in the adipocyte are of scientific and clinical interest, manual methods to determine adipocyte size are both laborious and intractable to large scale investigations. Moreover, existing computational methods are not fully automated. We, therefore, developed a novel automatic method to provide accurate measurements of the cross-section...

  9. Automated synthesis of image processing procedures using AI planning techniques

    Science.gov (United States)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

  10. Automated examination notification of Emergency Department images in a picture archiving and communication system

    OpenAIRE

    Andriole, Katherine P.; Avrin, David E.; Weber, Ellen; Luth, David M.; Bazzill, Todd M.

    2001-01-01

    This study compares the timeliness of radiology interpretation of Emergency Department (ED) imaging examinations in a picture archiving and communication system (PACS) before and after implementation of an automated paging system for notification of image availability. An alphanumeric pager for each radiology subspecialty (chest, pediatrics, bone, neuroradiology, and body) was used to alert the responsible radiologist that an ED imaging examination is available to be viewed on the PACS. The p...

  11. Automated Analysis of Fluorescence Microscopy Images to Identify Protein-Protein Interactions

    OpenAIRE

    Morrell-Falvey, J. L.; Qi, H.; Doktycz, M. J.; Venkatraman, S.

    2006-01-01

    The identification of protein interactions is important for elucidating biological networks. One obstacle in comprehensive interaction studies is the analyses of large datasets, particularly those containing images. Development of an automated system to analyze an image-based protein interaction dataset is needed. Such an analysis system is described here, to automatically extract features from fluorescence microscopy images obtained from a bacterial protein interaction assay. These features ...

  12. NeuronMetrics: Software for Semi-Automated Processing of Cultured-Neuron Images

    OpenAIRE

    Narro, Martha L.; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L.

    2007-01-01

    Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics™ for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based ...

  13. Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA

    Science.gov (United States)

    Staley, T. D.; Anderson, G. E.

    2015-11-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope-agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, of the sort typically acquired for transient surveys or follow-up. The algorithm aims to improve upon standard imaging pipelines by utilizing iterative RMS-estimation and automated source-detection to avoid so called 'Clean-bias', and makes use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. While targeted at automated imaging, the drive-casa interface can also be used to automate interaction with any of the CASA subroutines from a generic Python process. Additionally, these packages may be of wider technical interest beyond radio-astronomy, since they demonstrate use of the Python library pexpect to emulate terminal interaction with an external process. This approach allows for rapid development of a Python interface to any legacy or externally-maintained pipeline which accepts command-line input, without requiring alterations to the original code.
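
The iterative RMS estimation that chimenea uses to avoid clean bias can be sketched as sigma-clipping; the parameters and convergence rules below are assumptions, not chimenea's actual implementation.

```python
import numpy as np

def clipped_rms(image, n_sigma=3.0, max_iter=10):
    """Iterative sigma-clipped RMS: repeatedly discard pixels further
    than n_sigma * current RMS from the median, then re-estimate.
    A noise-floor estimate like this can set automated CLEAN thresholds
    without being biased by bright sources in the map."""
    kept = np.asarray(image, dtype=float).ravel()
    for _ in range(max_iter):
        centre = np.median(kept)
        rms = kept.std()
        new = kept[np.abs(kept - centre) < n_sigma * rms]
        if new.size in (0, kept.size):
            break  # converged (or clipped everything)
        kept = new
    return kept.std()
```

Cleaning down to a multiple of this clipped RMS, rather than to a fixed depth, is the kind of per-epoch adaptation the pipeline automates.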

  14. A method for fast automated microscope image stitching.

    Science.gov (United States)

    Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong

    2013-05-01

    Image stitching is an important technology to produce a panorama or larger image by combining several images with overlapped areas. In many biomedical researches, image stitching is highly desirable to acquire a panoramic image which represents large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal light microscope image stitching algorithm based on feature extraction. At first, an algorithm of scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched with a short time and higher repeatability. Then, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast for extracting more features. Thirdly, the rough overlapping zones of the images preprocessed were calculated by phase correlation, and the improved SURF was used to extract the image features in the rough overlapping areas. Fourthly, the features were corresponded by matching algorithm and the transformation parameters were estimated, then the images were blended seamlessly. Finally, this procedure was applied to stitch normal light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images and our method is able to stitch microscope images automatically with high precision and high speed. Also, the method proposed in this paper is applicable to registration and stitching of common images as well as stitching the microscope images in the field of virtual microscope for the purpose of observing, exchanging, saving, and establishing a database of microscope images. PMID:23465523
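
The phase-correlation step used to find the rough overlapping zones can be sketched with NumPy FFTs. This is a minimal sketch for exact circular shifts; real stitching must also handle non-periodic borders and sub-pixel peaks.

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the circular shift (dy, dx), with moved = roll(ref, (dy, dx)),
    from the peak of the inverse FFT of the normalized cross-power spectrum."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(moved)
    cross = np.conj(F) * G
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    return np.unravel_index(corr.argmax(), corr.shape)
```

The recovered translation bounds the overlap region, inside which the (SURF-based) feature matching is then restricted, which is what makes the method fast.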

  15. Automative Multi Classifier Framework for Medical Image Analysis

    Directory of Open Access Journals (Sweden)

    R. Edbert Rajan

    2015-04-01

    Full Text Available Medical image processing is the technique used to create images of the human body for medical purposes. Medical image processing now plays a major role in, and offers challenging solutions for, critical stages of medical care. Considerable research has been done in this area to enhance medical image processing techniques. However, owing to the shortcomings of some advanced technologies, many aspects still need further development. An existing study evaluates the efficacy of medical image analysis using level-set shape together with fractal texture and intensity features to discriminate posterior fossa (PF) tumor from other tissues in brain images. To advance medical image analysis and disease diagnosis, we devise an automotive subjective optimality model for segmentation of images based on different sets of features selected from the unsupervised learning model of extracted features. After segmentation, the images are classified. The classification adapts the multiple-classifier framework of our previous work, based on the mutual information coefficient of the features selected for the image segmentation procedures. In this study, to enhance the classification strategy, we plan to implement an enhanced multi-classifier framework for the analysis of medical images and disease diagnosis. The performance parameters used for the analysis of the proposed enhanced multi-classifier framework for medical image analysis are multiple-class intensity, image quality, and time consumption.

  16. Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.

    Science.gov (United States)

    Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2016-04-01

    Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle aged adults; where 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost. PMID:26894596
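
A simplified example of deriving quality features from the segmented vessel map, of the kind that would feed a support vector machine classifier: the QUARTZ 3-dimensional feature set is different, and this grid-density feature vector is a hypothetical stand-in for illustration.

```python
import numpy as np

def vessel_map_features(vessel_map, grid=4):
    """Simple quality features from a binary segmented-vessel map:
    overall vessel density plus per-cell densities on a grid x grid
    tiling. Inadequate images (blurred, poorly illuminated) yield
    sparse or badly distributed vessel pixels, so vectors like these
    separate usable from unusable retinal images."""
    vm = vessel_map.astype(float)
    h, w = vm.shape
    feats = [vm.mean()]  # global vessel density
    for i in range(grid):
        for j in range(grid):
            cell = vm[i * h // grid:(i + 1) * h // grid,
                      j * w // grid:(j + 1) * w // grid]
            feats.append(cell.mean())
    return np.array(feats)
```

Training any standard classifier (an SVM in the paper) on such vectors, labelled adequate/inadequate, completes the quality-assessment module.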

  17. Automated image analysis of lateral lumbar X-rays by a form model

    International Nuclear Information System (INIS)

    Development of software for fully automated image analysis of lateral lumbar spine X-rays. Material and method: Using the concept of active shape models, we developed software that produces a form model of the lumbar spine from lateral lumbar spine radiographs and performs automated image segmentation. After filtering of the digitized X-ray images, this model detects lumbar vertebrae automatically. The model was trained with 20 lateral lumbar spine radiographs with no pathological findings before we evaluated the software on 30 further X-ray images, which were sorted by image quality from one (best) to three (worst), with 10 images in each quality group. Results: Image recognition depended strongly on image quality. In group one, 52 and in group two, 51 out of 60 vertebral bodies, including the sacrum, were recognized, but in group three only 18 vertebral bodies were properly identified. Conclusion: Fully automated and reliable recognition of vertebral bodies from lateral spine radiographs using the concept of active shape models is possible. The precision of this technique is limited by the superposition of different structures; further improvements, including standardized image quality and an enlarged training data set, are required. (orig.)

  18. Evaluation of an improved technique for automated center lumen line definition in cardiovascular image data

    International Nuclear Information System (INIS)

    The aim of the study was to evaluate a new method for automated definition of a center lumen line in vessels in cardiovascular image data. This method, called VAMPIRE, is based on improved detection of vessel-like structures. A multiobserver evaluation study was conducted involving 40 tracings in clinical CTA data of carotid arteries to compare VAMPIRE with an established technique. This comparison showed that VAMPIRE yields considerably more successful tracings and improved handling of stenosis, calcifications, multiple vessels, and nearby bone structures. We conclude that VAMPIRE is highly suitable for automated definition of center lumen lines in vessels in cardiovascular image data. (orig.)

  19. Evaluation of an improved technique for automated center lumen line definition in cardiovascular image data

    Energy Technology Data Exchange (ETDEWEB)

    Gratama van Andel, Hugo A.F. [Erasmus MC-University Medical Center Rotterdam, Department of Medical Informatics, Rotterdam (Netherlands); Erasmus MC-University Medical Center Rotterdam, Department of Radiology, Rotterdam (Netherlands); Academic Medical Centre-University of Amsterdam, Department of Medical Physics, Amsterdam (Netherlands); Meijering, Erik; Vrooman, Henri A.; Stokking, Rik [Erasmus MC-University Medical Center Rotterdam, Department of Medical Informatics, Rotterdam (Netherlands); Erasmus MC-University Medical Center Rotterdam, Department of Radiology, Rotterdam (Netherlands); Lugt, Aad van der; Monye, Cecile de [Erasmus MC-University Medical Center Rotterdam, Department of Radiology, Rotterdam (Netherlands)

    2006-02-01

    The aim of the study was to evaluate a new method for automated definition of a center lumen line in vessels in cardiovascular image data. This method, called VAMPIRE, is based on improved detection of vessel-like structures. A multiobserver evaluation study was conducted involving 40 tracings in clinical CTA data of carotid arteries to compare VAMPIRE with an established technique. This comparison showed that VAMPIRE yields considerably more successful tracings and improved handling of stenosis, calcifications, multiple vessels, and nearby bone structures. We conclude that VAMPIRE is highly suitable for automated definition of center lumen lines in vessels in cardiovascular image data. (orig.)

  20. Histogram analysis with automated extraction of brain-tissue region from whole-brain CT images

    OpenAIRE

    Kondo, Masatoshi; Yamashita, Koji; Yoshiura, Takashi; Hiwatash, Akio; Shirasaka, Takashi; Arimura, Hisao; Nakamura, Yasuhiko; Honda, Hiroshi

    2015-01-01

    We studied whether automated extraction of the brain-tissue region from CT images is useful for histogram analysis of that region, using the CT images of 11 patients. We developed an automatic brain-tissue extraction algorithm and evaluated its similarity index relative to manual extraction, and we compared the mean CT number of all extracted pixels and the kurtosis and skewness of the distribution of CT numbers of all ext...

  1. Automated analysis of protein subcellular location in time series images

    OpenAIRE

    Hu, Yanhua; Osuna-Highley, Elvira; Hua, Juchang; Nowicki, Theodore Scott; Stolz, Robert; McKayle, Camille; Murphy, Robert F.

    2010-01-01

    Motivation: Image analysis, machine learning and statistical modeling have become well established for the automatic recognition and comparison of the subcellular locations of proteins in microscope images. By using a comprehensive set of features describing static images, major subcellular patterns can be distinguished with near perfect accuracy. We now extend this work to time series images, which contain both spatial and temporal information. The goal is to use temporal features to improve...

  2. Automated quadrilateral mesh generation for digital image structures

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    With the development of advanced imaging technology, digital images are widely used. This paper proposes an automatic quadrilateral mesh generation algorithm for multi-colour imaged structures. It takes an arbitrary digital image as input for automatic quadrilateral mesh generation; this includes removing noise, extracting and smoothing the boundary geometries between different colours, and generating an all-quad mesh with those boundaries as constraints. An application example is...

  3. Improved automated synthesis and preliminary animal PET/CT imaging of 11C-acetate

    International Nuclear Information System (INIS)

    To develop a simple and rapid automated synthesis of 11C-acetate (11C-AC), automated synthesis was performed by carboxylation of MeMgBr/tetrahydrofuran (THF) on a polyethylene loop with 11C-CO2, followed by hydrolysis and purification on solid-phase extraction cartridges, using a 11C-Choline/Methionine synthesizer made in China. A high and reproducible radiochemical yield of above 40% (decay corrected) was obtained within a total synthesis time of about 8 min from 11C-CO2. The radiochemical purity of 11C-AC was over 95%. The novel, simple and rapid on-column hydrolysis-purification procedure should be adaptable to fully automated synthesis of 11C-AC on several commercial synthesis modules. 11C-AC injection produced by the automated procedure is safe and effective, and can be used for PET imaging of animals and humans. (authors)

  4. Automated method and system for the alignment and correlation of images from two different modalities

    Science.gov (United States)

    Giger, Maryellen L.; Chen, Chin-Tu; Armato, Samuel; Doi, Kunio

    1999-10-26

    A method and system for the computerized registration of radionuclide images with radiographic images, including generating image data from radiographic and radionuclide images of the thorax. Techniques include contouring the lung regions in each type of chest image, scaling and registration of the contours based on the location of the lung apices, and superimposition after appropriate shifting of the images. Specific applications are given for the automated registration of radionuclide lung scans with chest radiographs. In the example given, the method yields a system that spatially registers and correlates digitized chest radiographs with V/Q scans in order to correlate V/Q functional information with the greater structural detail of chest radiographs. The final output could be the computer-determined contours from each type of image superimposed on any of the original images, or superimposition of the radionuclide image data, which contains high activity, onto the radiographic chest image.

  5. Content-based management service for medical videos.

    Science.gov (United States)

    Mendi, Engin; Bayrak, Coskun; Cecen, Songul; Ermisoglu, Emre

    2013-01-01

    Development of health information technology has had a dramatic impact on improving the efficiency and quality of medical care. Developing interoperable health information systems for healthcare providers has the potential to improve the quality and equitability of patient-centered healthcare. In this article, we describe an automated content-based medical video analysis and management service that provides convenient access to relevant medical video content without sequential scanning. The system performs temporal video segmentation and content-based visual information retrieval that enable a more reliable understanding of medical video content. It is implemented as a Web- and mobile-based service and has the potential to offer a knowledge-sharing platform for efficient medical video content access. PMID:23270313
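A minimal sketch of the temporal segmentation step such a service relies on is histogram-difference shot detection between consecutive frames. This is a generic technique, not the paper's actual algorithm, and the bin count and threshold are assumed values:

```python
import numpy as np

def shot_boundaries(frames, bins=16, threshold=0.5):
    """Flag candidate temporal segment boundaries in a grayscale video.

    frames: sequence of 2-D grayscale arrays (values in [0, 256)).
    A boundary is flagged at frame i where the L1 distance between the
    normalized gray-level histograms of frames i-1 and i exceeds
    `threshold`.
    """
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / max(h.sum(), 1))
    bounds = []
    for i in range(1, len(hists)):
        if np.abs(hists[i] - hists[i - 1]).sum() > threshold:
            bounds.append(i)
    return bounds
```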

  6. Comparison of semi-automated image analysis and manual methods for tissue quantification in pancreatic carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Sims, A.J. [Regional Medical Physics Department, Freeman Hospital, Newcastle upon Tyne (United Kingdom)]. E-mail: a.j.sims@newcastle.ac.uk; Murray, A. [Regional Medical Physics Department, Freeman Hospital, Newcastle upon Tyne (United Kingdom); Bennett, M.K. [Department of Histopathology, Newcastle upon Tyne Hospitals NHS Trust, Newcastle upon Tyne (United Kingdom)

    2002-04-21

    Objective measurements of tissue area during histological examination of carcinoma can yield valuable prognostic information. However, such measurements are not made routinely because the current manual approach is time consuming and subject to large statistical sampling error. In this paper, a semi-automated image analysis method for measuring tissue area in histological samples is applied to the measurement of stromal tissue, cell cytoplasm and lumen in samples of pancreatic carcinoma and compared with the standard manual point counting method. Histological samples from 26 cases of pancreatic carcinoma were stained using the sirius red, light-green method. Images from each sample were captured using two magnifications. Image segmentation based on colour cluster analysis was used to subdivide each image into representative colours which were classified manually into one of three tissue components. Area measurements made using this technique were compared to corresponding manual measurements and used to establish the comparative accuracy of the semi-automated image analysis technique, with a quality assurance study to measure the repeatability of the new technique. For both magnifications and for each tissue component, the quality assurance study showed that the semi-automated image analysis algorithm had better repeatability than its manual equivalent. No significant bias was detected between the measurement techniques for any of the comparisons made using the 26 cases of pancreatic carcinoma. The ratio of manual to semi-automatic repeatability errors varied from 2.0 to 3.6. Point counting would need to be increased to be between 400 and 1400 points to achieve the same repeatability as for the semi-automated technique. The results demonstrate that semi-automated image analysis is suitable for measuring tissue fractions in histological samples prepared with coloured stains and is a practical alternative to manual point counting. (author)
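Once each pixel carries a tissue class, the area-measurement step described above reduces to counting label frequencies. A minimal sketch (the class codes are assumed for illustration):

```python
import numpy as np

def tissue_fractions(labels, classes=(0, 1, 2)):
    """Area fraction of each tissue class in a per-pixel label image.

    `labels` holds one class code per pixel, e.g. 0=stroma, 1=cytoplasm,
    2=lumen after the manual classification of colour clusters.
    """
    labels = np.asarray(labels)
    n = labels.size
    return {c: float((labels == c).sum()) / n for c in classes}
```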

  7. Comparison of semi-automated image analysis and manual methods for tissue quantification in pancreatic carcinoma

    International Nuclear Information System (INIS)

    Objective measurements of tissue area during histological examination of carcinoma can yield valuable prognostic information. However, such measurements are not made routinely because the current manual approach is time consuming and subject to large statistical sampling error. In this paper, a semi-automated image analysis method for measuring tissue area in histological samples is applied to the measurement of stromal tissue, cell cytoplasm and lumen in samples of pancreatic carcinoma and compared with the standard manual point counting method. Histological samples from 26 cases of pancreatic carcinoma were stained using the sirius red, light-green method. Images from each sample were captured using two magnifications. Image segmentation based on colour cluster analysis was used to subdivide each image into representative colours which were classified manually into one of three tissue components. Area measurements made using this technique were compared to corresponding manual measurements and used to establish the comparative accuracy of the semi-automated image analysis technique, with a quality assurance study to measure the repeatability of the new technique. For both magnifications and for each tissue component, the quality assurance study showed that the semi-automated image analysis algorithm had better repeatability than its manual equivalent. No significant bias was detected between the measurement techniques for any of the comparisons made using the 26 cases of pancreatic carcinoma. The ratio of manual to semi-automatic repeatability errors varied from 2.0 to 3.6. Point counting would need to be increased to be between 400 and 1400 points to achieve the same repeatability as for the semi-automated technique. The results demonstrate that semi-automated image analysis is suitable for measuring tissue fractions in histological samples prepared with coloured stains and is a practical alternative to manual point counting. (author)

  8. A feasibility assessment of automated FISH image and signal analysis to assist cervical cancer detection

    Science.gov (United States)

    Wang, Xingwei; Li, Yuhua; Liu, Hong; Li, Shibo; Zhang, Roy R.; Zheng, Bin

    2012-02-01

    Fluorescence in situ hybridization (FISH) technology provides a promising molecular imaging tool to detect cervical cancer. Since manual FISH analysis is difficult, time-consuming, and inconsistent, the automated FISH image scanning systems have been developed. Due to limited focal depth of scanned microscopic image, a FISH-probed specimen needs to be scanned in multiple layers that generate huge image data. To improve diagnostic efficiency of using automated FISH image analysis, we developed a computer-aided detection (CAD) scheme. In this experiment, four pap-smear specimen slides were scanned by a dual-detector fluorescence image scanning system that acquired two spectrum images simultaneously, which represent images of interphase cells and FISH-probed chromosome X. During image scanning, once detecting a cell signal, system captured nine image slides by automatically adjusting optical focus. Based on the sharpness index and maximum intensity measurement, cells and FISH signals distributed in 3-D space were projected into a 2-D con-focal image. CAD scheme was applied to each con-focal image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm and detect FISH-probed signals using a top-hat transform. The ratio of abnormal cells was calculated to detect positive cases. In four scanned specimen slides, CAD generated 1676 con-focal images that depicted analyzable cells. FISH-probed signals were independently detected by our CAD algorithm and an observer. The Kappa coefficients for agreement between CAD and observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots. The study demonstrated the feasibility of applying automated FISH image and signal analysis to assist cyto-geneticists in detecting cervical cancers.
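The top-hat transform used above for FISH-signal detection can be sketched with a naive numpy-only implementation (real pipelines would use an optimized morphology library; the 3x3 structuring element is an assumption):

```python
import numpy as np

def white_tophat(img, size=3):
    """Naive grayscale white top-hat: image minus its morphological opening.

    Bright features smaller than the `size` x `size` square structuring
    element survive; larger background structure is suppressed.
    """
    img = np.asarray(img, dtype=float)
    pad = size // 2
    h, w = img.shape
    # opening = erosion (local minimum) followed by dilation (local maximum)
    p = np.pad(img, pad, mode='edge')
    eroded = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            eroded[i, j] = p[i:i + size, j:j + size].min()
    q = np.pad(eroded, pad, mode='edge')
    opened = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            opened[i, j] = q[i:i + size, j:j + size].max()
    return img - opened
```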

  9. Automated, non-linear registration between 3-dimensional brain map and medical head image

    International Nuclear Information System (INIS)

    In this paper, we propose an automated, non-linear registration method between 3-dimensional medical head image and brain map in order to efficiently extract the regions of interest. In our method, input 3-dimensional image is registered into a reference image extracted from a brain map. The problems to be solved are automated, non-linear image matching procedure, and cost function which represents the similarity between two images. Non-linear matching is carried out by dividing the input image into connected partial regions, transforming the partial regions preserving connectivity among the adjacent images, evaluating the image similarity between the transformed regions of the input image and the correspondent regions of the reference image, and iteratively searching the optimal transformation of the partial regions. In order to measure the voxelwise similarity of multi-modal images, a cost function is introduced, which is based on the mutual information. Some experiments using MR images presented the effectiveness of the proposed method. (author)
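The mutual-information cost function mentioned above has a standard histogram-based estimate; the sketch below is a generic implementation, not the paper's exact one:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images.

    MI = sum_xy p(x,y) * log( p(x,y) / (p(x) * p(y)) ), estimated from
    the joint gray-level histogram; higher values indicate better
    alignment of multi-modal images.
    """
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

In a registration loop, the optimizer searches the transformation of the partial regions that maximizes this similarity against the reference image.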

  10. Automated detection of cardiac phase from intracoronary ultrasound image sequences.

    Science.gov (United States)

    Sun, Zheng; Dong, Yi; Li, Mengchan

    2015-01-01

    Intracoronary ultrasound (ICUS) is a widely used interventional imaging modality in the clinical diagnosis and treatment of cardiac vessel diseases. Due to cyclic cardiac motion and pulsatile blood flow within the lumen, the coronary arterial dimensions change and the imaging catheter moves relative to the lumen during continuous pullback of the catheter. These effects cause cyclic changes in the image intensity of the acquired sequence, so information on cardiac phases is implicit in a non-gated ICUS image sequence. A 1-D phase signal reflecting cardiac cycles was extracted from cyclical changes in local gray-levels in ICUS images. The local extrema of the signal were then detected to retrieve cardiac phases and to retrospectively gate the image sequence. Results on clinically acquired in vivo image data showed that an average inter-frame dissimilarity lower than 0.1 was achievable with our technique. In terms of computational efficiency and complexity, the proposed method was competitive with current methods, with an average frame processing time below 30 ms. We effectively reduced the effect of image noise, useless textures, and non-vessel regions on the phase signal detection by discarding signal components caused by non-cardiac factors. PMID:26406038
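The gating idea (extract a 1-D phase signal from the sequence, then take its local extrema) can be sketched as follows. Using the per-frame mean gray level as the signal is a simplification of the paper's local gray-level analysis, and the minimum peak spacing is an assumed parameter:

```python
import numpy as np

def cardiac_phase_peaks(frames, min_distance=5):
    """Extract a 1-D phase signal and its local maxima from an image run.

    frames: sequence of 2-D grayscale arrays from a continuous pullback.
    The signal is the mean gray level per frame; peaks (candidate frames
    of the same cardiac phase) are local maxima separated by at least
    `min_distance` frames.
    """
    signal = np.array([np.mean(f) for f in frames])
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            if not peaks or i - peaks[-1] >= min_distance:
                peaks.append(i)
    return signal, peaks
```

Retrospective gating then keeps only the frames indexed by `peaks`, one per cardiac cycle.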

  11. SU-E-I-94: Automated Image Quality Assessment of Radiographic Systems Using An Anthropomorphic Phantom

    International Nuclear Information System (INIS)

    Purpose: In a large, academic medical center, consistent radiographic imaging performance is difficult to routinely monitor and maintain, especially for a fleet consisting of multiple vendors, models, software versions, and numerous imaging protocols. Thus, an automated image quality control methodology has been implemented using routine image quality assessment with a physical, stylized anthropomorphic chest phantom. Methods: The “Duke” Phantom (Digital Phantom 07-646, Supertech, Elkhart, IN) was imaged twice on each of 13 radiographic units from a variety of vendors at 13 primary care clinics. The first acquisition used the clinical PA chest protocol to acquire the post-processed “FOR PRESENTATION” image. The second image was acquired without an antiscatter grid followed by collection of the “FOR PROCESSING” image. Manual CNR measurements were made from the largest and thickest contrast-detail inserts in the lung, heart, and abdominal regions of the phantom in each image. An automated image registration algorithm was used to estimate the CNR of the same insert using similar ROIs. Automated measurements were then compared to the manual measurements. Results: Automatic and manual CNR measurements obtained from “FOR PRESENTATION” images had average percent differences of 0.42%±5.18%, −3.44%±4.85%, and 1.04%±3.15% in the lung, heart, and abdominal regions, respectively; measurements obtained from “FOR PROCESSING” images had average percent differences of -0.63%±6.66%, −0.97%±3.92%, and −0.53%±4.18%, respectively. The maximum absolute difference in CNR was 15.78%, 10.89%, and 8.73% in the respective regions. In addition to CNR assessment of the largest and thickest contrast-detail inserts, the automated method also provided CNR estimates for all 75 contrast-detail inserts in each phantom image. Conclusion: Automated analysis of a radiographic phantom has been shown to be a fast, robust, and objective means for assessing radiographic

  12. SU-E-I-94: Automated Image Quality Assessment of Radiographic Systems Using An Anthropomorphic Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Wells, J; Wilson, J; Zhang, Y; Samei, E; Ravin, Carl E. [Advanced Imaging Laboratories, Duke Clinical Imaging Physics Group, Department of Radiology, Duke University Medical Center, Durham, NC (United States)

    2014-06-01

    Purpose: In a large, academic medical center, consistent radiographic imaging performance is difficult to routinely monitor and maintain, especially for a fleet consisting of multiple vendors, models, software versions, and numerous imaging protocols. Thus, an automated image quality control methodology has been implemented using routine image quality assessment with a physical, stylized anthropomorphic chest phantom. Methods: The “Duke” Phantom (Digital Phantom 07-646, Supertech, Elkhart, IN) was imaged twice on each of 13 radiographic units from a variety of vendors at 13 primary care clinics. The first acquisition used the clinical PA chest protocol to acquire the post-processed “FOR PRESENTATION” image. The second image was acquired without an antiscatter grid followed by collection of the “FOR PROCESSING” image. Manual CNR measurements were made from the largest and thickest contrast-detail inserts in the lung, heart, and abdominal regions of the phantom in each image. An automated image registration algorithm was used to estimate the CNR of the same insert using similar ROIs. Automated measurements were then compared to the manual measurements. Results: Automatic and manual CNR measurements obtained from “FOR PRESENTATION” images had average percent differences of 0.42%±5.18%, −3.44%±4.85%, and 1.04%±3.15% in the lung, heart, and abdominal regions, respectively; measurements obtained from “FOR PROCESSING” images had average percent differences of -0.63%±6.66%, −0.97%±3.92%, and −0.53%±4.18%, respectively. The maximum absolute difference in CNR was 15.78%, 10.89%, and 8.73% in the respective regions. In addition to CNR assessment of the largest and thickest contrast-detail inserts, the automated method also provided CNR estimates for all 75 contrast-detail inserts in each phantom image. Conclusion: Automated analysis of a radiographic phantom has been shown to be a fast, robust, and objective means for assessing radiographic
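The abstract does not spell out its CNR formula; a common definition, usable as a sketch of the ROI measurement above, is abs(mean(signal) - mean(background)) / std(background):

```python
import numpy as np

def cnr(image, signal_roi, background_roi):
    """Contrast-to-noise ratio between two rectangular ROIs.

    ROIs are (row_slice, col_slice) tuples, e.g. over a contrast-detail
    insert and an adjacent uniform phantom region.
    """
    sig = image[signal_roi]
    bg = image[background_roi]
    return abs(sig.mean() - bg.mean()) / bg.std()
```

With automated registration placing the same ROIs on every acquisition, this measurement can be repeated consistently across units and over time.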

  13. Semi-automated discrimination of retinal pigmented epithelial cells in two-photon fluorescence images of mouse retinas

    OpenAIRE

    Nathan S. Alexander; Palczewska, Grazyna; Palczewski, Krzysztof

    2015-01-01

    Automated image segmentation is a critical step toward achieving a quantitative evaluation of disease states with imaging techniques. Two-photon fluorescence microscopy (TPM) has been employed to visualize the retinal pigmented epithelium (RPE) and provide images indicating the health of the retina. However, segmentation of RPE cells within TPM images is difficult due to small differences in fluorescence intensity between cell borders and cell bodies. Here we present a semi-automated method f...

  14. Infrared thermal imaging for automated detection of diabetic foot complications

    NARCIS (Netherlands)

    Netten, van Jaap J.; Baal, van Jeff G.; Liu, Chanjuan; Heijden, van der Ferdi; Bus, Sicco A.

    2013-01-01

    Background: Although thermal imaging can be a valuable technology in the prevention and management of diabetic foot disease, it is not yet widely used in clinical practice. Technological advancement in infrared imaging increases its application range. The aim was to explore the first steps in the ap

  15. Automated Selection of Uniform Regions for CT Image Quality Detection

    CERN Document Server

    Naeemi, Maitham D; Roychodhury, Sohini

    2016-01-01

    CT images are widely used in pathology detection and follow-up treatment procedures. Accurate identification of pathological features requires diagnostic quality CT images with minimal noise and artifact variation. In this work, a novel Fourier-transform based metric for image quality (IQ) estimation is presented that correlates to additive CT image noise. In the proposed method, two windowed CT image subset regions are analyzed together to identify the extent of variation in the corresponding Fourier-domain spectrum. The two square windows are chosen such that their center pixels coincide and one window is a subset of the other. The Fourier-domain spectral difference between these two sub-sampled windows is then used to isolate spatial regions-of-interest (ROI) with low signal variation (ROI-LV) and high signal variation (ROI-HV), respectively. Finally, the spatial variance ($var$), standard deviation ($std$), coefficient of variance ($cov$) and the fraction of abdominal ROI pixels in ROI-LV ($\
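The concentric-window spectral comparison can be sketched as follows; the window sizes and the specific spectral statistic (high-frequency energy fraction) are assumptions, since the abstract is truncated before those details:

```python
import numpy as np

def hf_energy_fraction(win):
    """Fraction of 2-D Fourier magnitude energy outside the DC component."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(win)))
    total = spec.sum()
    if total == 0:
        return 0.0
    cr, cc = win.shape[0] // 2, win.shape[1] // 2  # DC sits here after fftshift
    return float((total - spec[cr, cc]) / total)

def spectral_difference(img, center, outer=16, inner=8):
    """Fourier-domain difference between two concentric square windows.

    Both windows share the same center pixel, the inner being a subset of
    the outer. A small difference suggests locally uniform signal (a
    candidate ROI-LV pixel); a large one suggests high signal variation
    (ROI-HV).
    """
    r, c = center
    ow = img[r - outer // 2:r + outer // 2, c - outer // 2:c + outer // 2]
    iw = img[r - inner // 2:r + inner // 2, c - inner // 2:c + inner // 2]
    return abs(hf_energy_fraction(ow) - hf_energy_fraction(iw))
```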

  16. Automated and unbiased image analyses as tools in phenotypic classification of small-spored Alternaria species

    DEFF Research Database (Denmark)

    Andersen, Birgitte; Hansen, Michael Edberg; Smedsgaard, Jørn

    2005-01-01

    often has been broadly applied to various morphologically and chemically distinct groups of isolates from different hosts. The purpose of this study was to develop and evaluate automated and unbiased image analysis systems that will analyze different phenotypic characters and facilitate testing...

  17. Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    OpenAIRE

    D. Joshua Liao; Yusheng Huang; Xiaofen Xing; Hua Wang; Jian Liu; Hui Xiao; Zhuocai Wang; Xiaojun Ding; Xiangmin Xu

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently little studied. We studied the extraction of prostatic lumina and automated recognition of calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of the prostatic calculi. The SVM cla...
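The PCA stage of the PCA-SVM pipeline above can be sketched with numpy alone (the SVM classifier that would consume the projected features is omitted to keep the example self-contained):

```python
import numpy as np

def pca_project(X, k=2):
    """Project feature vectors onto the top-k principal components.

    X: (n_samples, n_features) array of texture features. The data are
    centered and decomposed by SVD; rows of Vt are the principal
    directions. The projected coordinates would then be fed to an SVM.
    """
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```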

  18. Automated registration of multispectral MR vessel wall images of the carotid artery

    Energy Technology Data Exchange (ETDEWEB)

    Klooster, R. van ' t; Staring, M.; Reiber, J. H. C.; Lelieveldt, B. P. F.; Geest, R. J. van der, E-mail: rvdgeest@lumc.nl [Department of Radiology, Division of Image Processing, Leiden University Medical Center, 2300 RC Leiden (Netherlands); Klein, S. [Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 GE (Netherlands); Kwee, R. M.; Kooi, M. E. [Department of Radiology, Cardiovascular Research Institute Maastricht, Maastricht University Medical Center, Maastricht 6202 AZ (Netherlands)

    2013-12-15

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an automated accurate 3D registration method, and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying throughplane and inplane translations to the images. To find the optimal automatic image registration method, different masks, choice of the fixed image, different types of the mutual information image similarity metric, and transformation models including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and

  19. Automated registration of multispectral MR vessel wall images of the carotid artery

    International Nuclear Information System (INIS)

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an automated accurate 3D registration method, and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying throughplane and inplane translations to the images. To find the optimal automatic image registration method, different masks, choice of the fixed image, different types of the mutual information image similarity metric, and transformation models including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and

  20. An image-processing program for automated counting

    Science.gov (United States)

    Cunningham, D.J.; Anderson, W.H.; Anthony, R.M.

    1996-01-01

    An image-processing program developed by the National Institute of Health, IMAGE, was modified in a cooperative project between remote sensing specialists at the Ohio State University Center for Mapping and scientists at the Alaska Science Center to facilitate estimating numbers of black brant (Branta bernicla nigricans) in flocks at Izembek National Wildlife Refuge. The modified program, DUCK HUNT, runs on Apple computers. Modifications provide users with a pull-down menu that optimizes image quality; identifies objects of interest (e.g., brant) by spectral, morphometric, and spatial parameters defined interactively by users; counts and labels objects of interest; and produces summary tables. Images from digitized photography, videography, and high-resolution digital photography have been used with this program to count various species of waterfowl.

  1. Progress in the robust automated segmentation of real cell images

    Science.gov (United States)

    Bamford, P.; Jackway, P.; Lovell, Brian

    1999-07-01

    We propose a collection of robust algorithms for the segmentation of cell images from Papanicolaou stained cervical smears (`Pap' smears). This problem is deceptively difficult and often results on laboratory datasets do not carry over to real world data. Our approach is in 3 parts. First, we segment the cytoplasm from the background using a novel method based on the Wilson and Spann multi-resolution framework. Second, we segment the nucleus from the cytoplasm using an active contour method, where the best contour is found by a global minimization method. Third, we implement a method to determine a confidence measure for the segmentation of each object. This uses a stability criterion over the regularization parameter (lambda) in the active contour. We present the results of thorough testing of the algorithms on large numbers of cell images. A database of 20,120 images is used for the segmentation tests and 18,718 images for the robustness tests.

  2. Automated Drusen Segmentation and Quantification in SD-OCT Images

    OpenAIRE

    Chen, Qiang; Leng, Theodore; Zheng, Luoluo; Kutzscher, Lauren; Ma, Jeffrey; de Sisternes, Luis; Rubin, Daniel L.

    2013-01-01

    Spectral domain optical coherence tomography (SD-OCT) is a useful tool for the visualization of drusen, a retinal abnormality seen in patients with age-related macular degeneration (AMD); however, objective assessment of drusen is thwarted by the lack of a method to robustly quantify these lesions on serial OCT images. Here, we describe an automatic drusen segmentation method for SD-OCT retinal images, which leverages a priori knowledge of normal retinal morphology and anatomical features. Th...

  3. Automated detection of BB pixel clusters in digital fluoroscopic images

    International Nuclear Information System (INIS)

    Small ball bearings (BBs) are often used to characterize and correct for geometric distortion of x-ray image intensifiers. For quantitative applications the number of BBs required for accurate distortion correction is prohibitively large for manual detection. A method to automatically determine the BB coordinates is described. The technique consists of image segmentation, pixel coalescing and centroid calculation. The dependence of calculated BB coordinates on segmentation threshold was also evaluated and found to be within the uncertainty of measurement. (author)

  4. Validation of Supervised Automated Algorithm for Fast Quantitative Evaluation of Organ Motion on Magnetic Resonance Imaging

    International Nuclear Information System (INIS)

    Purpose: To validate a correlation coefficient template-matching algorithm applied to the supervised automated quantification of abdominal-pelvic organ motion captured on time-resolved magnetic resonance imaging. Methods and Materials: Magnetic resonance images of 21 patients across four anatomic sites were analyzed. Representative anatomic points of interest were chosen as surrogates for organ motion. The point of interest displacements across each image frame relative to baseline were quantified manually and through the use of a template-matching software tool, termed 'Motiontrack.' Automated and manually acquired displacement measures, as well as the standard deviation of intrafraction motion, were compared for each image frame and for each patient. Results: Discrepancies between the automated and manual displacements of ≥2 mm were uncommon, ranging in frequency from 0% to 9.7% (liver and prostate, respectively). The standard deviations of intrafraction motion measured with each method correlated highly (r = 0.99). Considerable interpatient variability in organ motion was demonstrated by a wide range of standard deviations in the liver (1.4-7.5 mm), uterus (1.1-8.4 mm), and prostate gland (0.8-2.7 mm). The automated algorithm performed successfully in all but 1 patient and substantially improved efficiency compared with manual quantification techniques (5 min vs. 60-90 min). Conclusion: Supervised automated quantification of organ motion captured on magnetic resonance imaging using a correlation coefficient template-matching algorithm was efficient, accurate, and may play an important role in off-line adaptive approaches to intrafraction motion management.
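
    Correlation-coefficient template matching of the kind a tool like 'Motiontrack' performs can be sketched as a brute-force normalized cross-correlation search. The function and test data below are invented for illustration and are not the authors' code:

```python
import numpy as np

def best_match(frame, template):
    """Return the (row, col) of the window in `frame` that maximizes the
    normalized correlation coefficient with `template`, plus the score.
    Brute force O(H*W*h*w); real trackers use FFT-based correlation."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, pos = -np.inf, (0, 0)
    for i in range(frame.shape[0] - th + 1):
        for j in range(frame.shape[1] - tw + 1):
            w = frame[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = tnorm * np.sqrt((wz ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, pos = score, (i, j)
    return pos, best
```

    Tracking a marker across frames amounts to repeating this search per frame and reporting the displacement of the matched position relative to baseline.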

  5. The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images

    International Nuclear Information System (INIS)

    Two-dimensional projection radiographs have been traditionally considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration and to evaluate its accuracy. The software was designed using the MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel similarity-based methods for image registration. A total of 8 CBCT images were selected as our reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as the test images for evaluating the method. Three experts twice located 14 landmarks in all 28 CBCT images during two examinations set 6 weeks apart. The differences in the distances of coordinates of each landmark on each image between manual and automated detection methods were calculated and reported as mean errors. The combined intraclass correlation coefficient for intraobserver reliability was 0.89 and for interobserver reliability 0.87 (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (gold standard method). The accuracy of our approach for automated localization of craniofacial landmarks, which was based on combining feature-based and voxel similarity-based methods for image registration, was acceptable. Nevertheless, we recommend repetition of this study using other techniques, such as intensity-based methods

  6. Multispectral Image Road Extraction Based Upon Automated Map Conflation

    Science.gov (United States)

    Chen, Bin

    Road network extraction from remotely sensed imagery enables many important and diverse applications such as vehicle tracking, drone navigation, and intelligent transportation studies. There are, however, a number of challenges to road detection from an image. Road pavement material, width, direction, and topology vary across a scene. Complete or partial occlusions caused by nearby buildings, trees, and the shadows cast by them, make maintaining road connectivity difficult. The problems posed by occlusions are exacerbated with the increasing use of oblique imagery from aerial and satellite platforms. Further, common objects such as rooftops and parking lots are made of materials similar or identical to road pavements. This problem of common materials is a classic case of a single land cover material existing for different land use scenarios. This work addresses these problems in road extraction from geo-referenced imagery by leveraging the OpenStreetMap digital road map to guide image-based road extraction. The crowd-sourced cartography has the advantage of worldwide coverage that is constantly updated. The derived road vectors follow only roads and so can serve to guide image-based road extraction with minimal confusion from occlusions and changes in road material. On the other hand, the vector road map has no information on road widths, and misalignments between the vector map and the geo-referenced image are small but nonsystematic. Properly correcting misalignment between two geospatial datasets, also known as map conflation, is an essential step. A generic framework requiring minimal human intervention is described for multispectral image road extraction and automatic road map conflation. The approach relies on the road feature generation of a binary mask and a corresponding curvilinear image. A method for generating the binary road mask from the image by applying a spectral measure is presented. The spectral measure, called anisotropy-tunable distance (ATD

  7. Automated analysis of image mammogram for breast cancer diagnosis

    Science.gov (United States)

    Nurhasanah; Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases inside the body without surgery. A mammogram is a medical image of the inner breast. Breast cancer must be diagnosed in detail and as early as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to determine the density of normal and abnormal tissue, and applies a segmentation technique based on the K-Means clustering algorithm to abnormal images to determine the boundary of the organ and calculate the area of the segmented regions. The results show that the fractal method based on 2D Fourier analysis can be used to distinguish between normal and abnormal breasts, and that segmentation with the K-Means clustering algorithm is able to generate the boundaries of normal and abnormal tissue, so the area of the abnormal tissue can be determined.
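
    The K-Means clustering step described above can be sketched in one dimension on pixel intensities. This is a generic NumPy sketch (initialization by quantiles, names invented), not the authors' implementation:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Plain K-means on a 1-D array of pixel intensities.
    Returns the sorted cluster centers after `iters` update rounds."""
    # quantile initialization spreads the initial centers over the range
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # assign each value to its nearest center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its assigned values
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return np.sort(centers)
```

    Thresholding an image halfway between the two returned centers would then separate the darker and brighter tissue classes.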

  8. An Automated Platform for High-Resolution Tissue Imaging Using Nanospray Desorption Electrospray Ionization Mass Spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Lanekoff, Ingela T.; Heath, Brandi S.; Liyu, Andrey V.; Thomas, Mathew; Carson, James P.; Laskin, Julia

    2012-10-02

    An automated platform has been developed for acquisition and visualization of mass spectrometry imaging (MSI) data using nanospray desorption electrospray ionization (nano-DESI). The new system enables robust operation of the nano-DESI imaging source over many hours. This is achieved by controlling the distance between the sample and the probe by mounting the sample holder onto an automated XYZ stage and defining the tilt of the sample plane. This approach is useful for imaging of relatively flat samples such as thin tissue sections. Custom software called MSI QuickView was developed for visualization of large data sets generated in imaging experiments. MSI QuickView enables fast visualization of the imaging data during data acquisition and detailed processing after the entire image is acquired. The performance of the system is demonstrated by imaging rat brain tissue sections. High resolution mass analysis combined with MS/MS experiments enabled identification of lipids and metabolites in the tissue section. In addition, high dynamic range and sensitivity of the technique allowed us to generate ion images of low-abundance isobaric lipids. High-spatial resolution image acquired over a small region of the tissue section revealed the spatial distribution of an abundant brain metabolite, creatine, in the white and gray matter that is consistent with the literature data obtained using magnetic resonance spectroscopy.

  9. Automated Classification of Glaucoma Images by Wavelet Energy Features

    Directory of Open Access Journals (Sweden)

    N.Annu

    2013-04-01

    Glaucoma is the second leading cause of blindness worldwide. As glaucoma progresses, more optic nerve tissue is lost and the optic cup grows, which leads to vision loss. This paper describes a system that could be used by non-experts to filter out cases of patients not affected by the disease. This work proposes glaucomatous image classification using texture features within images and efficient glaucoma classification based on a Probabilistic Neural Network (PNN). Energy distribution over wavelet sub-bands is applied to compute these texture features. Wavelet features were obtained from the Daubechies (db3), Symlets (sym3), and biorthogonal (bior3.3, bior3.5, and bior3.7) wavelet filters. The technique extracts energy signatures obtained using the 2-D discrete wavelet transform, and the energy obtained from the detail coefficients can be used to distinguish between normal and glaucomatous images. We observed an accuracy of around 95%, which demonstrates the effectiveness of these methods.
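
    The wavelet energy features described above can be illustrated with a single-level 2-D Haar decomposition. The paper uses db3/sym3/biorthogonal filters from a DWT library; Haar is substituted here only to keep the sketch self-contained in NumPy:

```python
import numpy as np

def haar_subband_energies(img):
    """One 2-D Haar analysis step (image dimensions assumed even),
    returning the mean energy of each sub-band. The detail-band
    energies (LH/HL/HH) are the texture features of interest."""
    a = (img[0::2] + img[1::2]) / 2.0   # row-pair averages
    d = (img[0::2] - img[1::2]) / 2.0   # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return {k: float((v ** 2).mean())
            for k, v in {"LL": ll, "LH": lh, "HL": hl, "HH": hh}.items()}
```

    A classifier such as a PNN would be fed the vector of detail-band energies computed per image (typically over several decomposition levels and filters).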

  10. System and method for automated object detection in an image

    Science.gov (United States)

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated between in response to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.

  11. Automated detection of meteors in observed image sequence

    Science.gov (United States)

    Šimberová, Stanislava; Suk, Tomáš

    2015-12-01

    We propose a new detection technique based on statistical characteristics of images in the video sequence. These characteristics, tracked over time, make it possible to catch any bright track during the whole sequence. We applied our method to image datacubes created from camera pictures of the night sky. A meteor flying through the Earth's atmosphere leaves a light trail on the sky background lasting a few seconds. We developed a special technique to recognize this event automatically in the complete observed video sequence. For further analysis leading to precise recognition of the object, we suggest applying Fourier and Hough transformations.
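
    The suggested Hough transformation step can be sketched with a minimal accumulator that finds the dominant straight line (e.g., a meteor trail) through a set of bright pixels. A hedged illustration, not the authors' pipeline:

```python
import numpy as np

def hough_peak(points, shape, n_theta=180):
    """Return (rho, theta) of the strongest line through `points`
    (a list of (y, x) pixel coordinates) using the normal-form
    parameterization x*cos(theta) + y*sin(theta) = rho."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in points:
        # vote for every (rho, theta) pair consistent with this pixel
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, thetas[t]
```

    In practice the input points would be the pixels flagged as bright outliers by the statistical detection stage.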

  12. Automation of the gamma method for comparison of dosimetry images

    International Nuclear Information System (INIS)

    The objective of this work was the development of the JJGAMMA analysis software, which performs this task systematically, minimizing specialist intervention and therefore the variability due to the observer. Both benefits allow image comparison to be done in practice with the required frequency and objectivity. (Author)
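
    The gamma comparison that such software automates can be sketched for 1-D dose profiles. This is a brute-force global-gamma illustration (3%/3 mm criteria as an example), not the JJGAMMA implementation:

```python
import numpy as np

def gamma_index(ref, evl, dd=0.03, dta=3.0, spacing=1.0):
    """Global gamma index of an evaluated dose profile `evl` against a
    reference `ref` (same grid, `spacing` in mm). dd is the dose
    criterion as a fraction of the reference maximum; dta is the
    distance-to-agreement criterion in mm. gamma <= 1 means 'pass'."""
    n = ref.size
    x = np.arange(n) * spacing
    dmax = ref.max()
    gam = np.empty(n)
    for i in range(n):
        dist2 = ((x - x[i]) / dta) ** 2                 # distance term
        dose2 = ((evl - ref[i]) / (dd * dmax)) ** 2     # dose term
        gam[i] = np.sqrt(np.min(dist2 + dose2))         # best over all j
    return gam
```

    The per-pixel pass rate (fraction of points with gamma ≤ 1) is the figure of merit usually reported when comparing measured and planned dose images.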

  13. Semi-automated recognition of protozoa by image analysis

    OpenAIRE

    A.L. Amaral; Baptiste, C; Pons, M. N.; Nicolau, Ana; Lima, Nelson; Ferreira, E. C.; Mota, M.; H. Vivier

    1999-01-01

    A programme was created to semi-automatically analyse digitised protozoal images. The Principal Component Analysis technique was used for species identification. After data collection and mathematical treatment, a three-dimensional representation was generated and several protozoa species (Opercularia, Colpidium, Tetrahymena, Prorodon, Glaucoma and Trachelophyllum) could be positively identified.
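
    The PCA projection behind such a three-dimensional representation can be sketched directly from the covariance eigendecomposition. A generic NumPy sketch (function name invented), not the original programme:

```python
import numpy as np

def pca_scores(features, n_components=3):
    """Project rows of `features` (samples x measurements) onto their
    first `n_components` principal components; the returned scores are
    what gets plotted as the 3-D species representation."""
    x = features - features.mean(axis=0)        # center the data
    cov = np.cov(x, rowvar=False)               # covariance of measurements
    vals, vecs = np.linalg.eigh(cov)            # eigendecomposition
    order = np.argsort(vals)[::-1][:n_components]  # largest variance first
    return x @ vecs[:, order]
```

    Each protozoan image contributes one row of morphological measurements; clusters in the score space correspond to species.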

  14. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    Science.gov (United States)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

    The results of a master's thesis project on a study of computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of the associated magnetic field lines. The project addresses pattern recognition problems in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parametric space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information for the identification process. Results from both synthesized and solar images will be presented.

  15. Automated Detection of Contaminated Radar Image Pixels in Mountain Areas

    Institute of Scientific and Technical Information of China (English)

    LIU Liping; Qin XU; Pengfei ZHANG; Shun LIU

    2008-01-01

    In mountain areas, radar observations are often contaminated (1) by echoes from high-speed moving vehicles and (2) by point-wise ground clutter under either normal propagation (NP) or anomalous propagation (AP) conditions. Level II data are collected from the KMTX (Salt Lake City, Utah) radar to analyze these two types of contamination in the mountain area around the Great Salt Lake. Human experts provide the "ground truth" for possible contamination of either type on each individual pixel. Common features are then extracted for contaminated pixels of each type. For example, pixels contaminated by echoes from high-speed moving vehicles are characterized by large radial velocity and spectrum width. Echoes from a moving train tend to have larger velocity and reflectivity but smaller spectrum width than those from moving vehicles on highways. These contaminated pixels are only seen in areas of large terrain gradient (in the radial direction along the radar beam). The same is true for the second type of contamination: point-wise ground clutter. Six quality control (QC) parameters are selected to quantify the extracted features. Histograms are computed for each QC parameter and grouped for contaminated pixels of each type and also for non-contaminated pixels. Based on the computed histograms, a fuzzy logic algorithm is developed for automated detection of contaminated pixels. The algorithm is tested with KMTX radar data under different (clear and rainy) weather conditions.
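
    A fuzzy-logic combination of QC parameters of this kind can be sketched with simple piecewise-linear memberships. The breakpoints, the two parameters shown, and the equal weights are invented for illustration; the paper derives its memberships from histograms of six QC parameters:

```python
import numpy as np

def ramp(x, lo, hi):
    """Piecewise-linear membership: 0 below `lo`, 1 above `hi`."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def clutter_score(velocity, spectrum_width):
    """Aggregate two hypothetical QC parameters into one [0, 1]
    contamination score; a pixel is flagged when the score exceeds
    a decision threshold (e.g., 0.5)."""
    m_v = ramp(abs(velocity), 8.0, 15.0)   # large radial velocity (m/s)
    m_w = ramp(spectrum_width, 4.0, 7.0)   # large spectrum width (m/s)
    return 0.5 * m_v + 0.5 * m_w
```

    The weighted-mean aggregation is one common fuzzy-logic choice; min/max aggregation is another, and the histogram analysis is what justifies the chosen breakpoints.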

  16. Automated and Accurate Detection of Soma Location and Surface Morphology in Large-Scale 3D Neuron Images

    OpenAIRE

    Cheng Yan; Anan Li; Bin Zhang; Wenxiang Ding; Qingming Luo; Hui Gong

    2013-01-01

    Automated and accurate localization and morphometry of somas in 3D neuron images is essential for quantitative studies of neural networks in the brain. However, previous methods are limited in obtaining the location and surface morphology of somas with variable size and uneven staining in large-scale 3D neuron images. In this work, we proposed a method for automated soma locating in large-scale 3D neuron images that contain relatively sparse soma distributions. This method involves three step...

  17. Automated marker tracking using noisy X-ray images degraded by the treatment beam

    Energy Technology Data Exchange (ETDEWEB)

    Wisotzky, E. [Fraunhofer Institute for Production Systems and Design Technology (IPK), Berlin (Germany); German Cancer Research Center (DKFZ), Heidelberg (Germany); Fast, M.F.; Nill, S. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; Oelfke, U. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; German Cancer Research Center (DKFZ), Heidelberg (Germany)

    2015-09-01

    This study demonstrates the feasibility of automated marker tracking for the real-time detection of intrafractional target motion using noisy kilovoltage (kV) X-ray images degraded by the megavoltage (MV) treatment beam. The authors previously introduced the in-line imaging geometry, in which the flat-panel detector (FPD) is mounted directly underneath the treatment head of the linear accelerator. They found that the 121 kVp image quality was severely compromised by the 6 MV beam passing through the FPD at the same time. Specific MV-induced artefacts present a considerable challenge for automated marker detection algorithms. For this study, the authors developed a new imaging geometry by re-positioning the FPD and the X-ray tube. This improved the contrast-to-noise ratio between 40% and 72% at the 1.2 mAs/image exposure setting. The increase in image quality clearly facilitates the quick and stable detection of motion with the aid of a template matching algorithm. The setup was tested with an anthropomorphic lung phantom (including an artificial lung tumour). In the tumour one or three Calypso® beacons were embedded to achieve better contrast during MV radiation. For a single beacon, image acquisition and automated marker detection typically took around 76±6 ms. The success rate was found to be highly dependent on imaging dose and gantry angle. To eliminate possible false detections, the authors implemented a training phase prior to treatment beam irradiation and also introduced speed limits for motion between subsequent images.
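
    The contrast-to-noise ratio whose 40-72% improvement is reported above can be computed from a marker region and a background region. Several CNR definitions exist; the sketch below uses the background standard deviation as the noise term, which is an assumption rather than the paper's exact formula:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between a marker ROI and a background
    ROI: |mean difference| divided by background standard deviation."""
    s = np.asarray(signal_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return abs(s.mean() - b.mean()) / b.std()
```

    Comparing this value for the same marker before and after a geometry change quantifies the image-quality gain independently of absolute detector counts.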

  18. Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer

    Science.gov (United States)

    Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2016-04-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.

  19. Semi-automated segmentation of carotid artery total plaque volume from three dimensional ultrasound carotid imaging

    Science.gov (United States)

    Buchanan, D.; Gyacskov, I.; Ukwatta, E.; Lindenmaier, T.; Fenster, A.; Parraga, G.

    2012-03-01

    Carotid artery total plaque volume (TPV) is a three-dimensional (3D) ultrasound (US) imaging measurement of carotid atherosclerosis, providing a direct non-invasive and regional estimation of atherosclerotic plaque volume - the direct determinant of carotid stenosis and ischemic stroke. While 3DUS measurements of TPV provide the potential to monitor plaque in individual patients and in populations enrolled in clinical trials, until now, such measurements have been performed manually, which is laborious, time-consuming and prone to intra-observer and inter-observer variability. To address this critical translational limitation, here we describe the development and application of a semi-automated 3DUS plaque volume measurement. This semi-automated TPV measurement incorporates three user-selected boundaries in two views of the 3DUS volume to generate a geometric approximation of TPV for each plaque measured. We compared semi-automated repeated measurements to manual segmentation of 22 individual plaques ranging in volume from 2 mm3 to 151 mm3. Mean plaque volume was 43 ± 40 mm3 for semi-automated and 48 ± 46 mm3 for manual measurements and these were not significantly different (p=0.60). Mean coefficient of variation (CV) was 12.0 ± 5.1% for the semi-automated measurements.
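
    The coefficient of variation reported for the repeated measurements is simply the relative standard deviation. A minimal sketch (the sample standard deviation, ddof=1, is an assumption about the paper's convention):

```python
import numpy as np

def coefficient_of_variation(repeats):
    """CV (%) of a set of repeated volume measurements of one plaque."""
    r = np.asarray(repeats, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()
```

    The study's 12.0 ± 5.1% figure is the mean and spread of this quantity across the 22 plaques.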

  20. Automated segmentation of pigmented skin lesions in multispectral imaging

    International Nuclear Information System (INIS)

    The aim of this study was to develop an algorithm for the automatic segmentation of multispectral images of pigmented skin lesions. The study involved 1700 patients with 1856 cutaneous pigmented lesions, which were analysed in vivo by a novel spectrophotometric system, before excision. The system is able to acquire a set of 15 different multispectral images at equally spaced wavelengths between 483 and 951 nm. An original segmentation algorithm was developed and applied to the whole set of lesions and was able to automatically contour them all. The obtained lesion boundaries were shown to two expert clinicians, who, independently, rejected 54 of them. The 97.1% contour accuracy indicates that the developed algorithm could be a helpful and effective instrument for the automatic segmentation of skin pigmented lesions. (note)

  1. Automated Structure Detection in HRTEM Images: An Example with Graphene

    DEFF Research Database (Denmark)

    Kling, Jens; Vestergaard, Jacob Schack; Dahl, Anders Bjorholm; Hansen, Thomas Willum; Larsen, Rasmus; Wagner, Jakob Birkedal

    structure in the image. The centers of the C-hexagons are displayed as nodes. To segment the image into “pure” and “impure” regions, like areas with residual amorphous contamination or defects e.g. holes, a sliding window approach is used. The magnitude of the Fourier transformation within a window is...... tensor B-splines is employed, which is deformed by matching model grid points with the C-hexagon centers. Dependent on the Cs and defocus-settings during microscopy these centers appear either dark or bright. One ends up with a deformed hexagonal tessellation, which can easily be transformed into a...... length scale, and at the same time lattice deformations can be visualized. The method will be refined to facilitate the detection of larger defects like holes and the determination of the edge terminations....

  2. Extending and applying active appearance models for automated, high precision segmentation in different image modalities

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Fisker, Rune; Ersbøll, Bjarne Kjær

    initialization scheme is designed thus making the usage of AAMs fully automated. Using these extensions it is demonstrated that AAMs can segment bone structures in radiographs, pork chops in perspective images and the left ventricle in cardiovascular magnetic resonance images in a robust, fast and accurate...... object class description, which can be employed to rapidly search images for new object instances. The proposed extensions concern enhanced shape representation, handling of homogeneous and heterogeneous textures, refinement optimization using Simulated Annealing and robust statistics. Finally, an...

  3. Pancreas++: Automated Quantification of Pancreatic Islet Cells in Microscopy Images

    OpenAIRE

    Stuart Maudsley; Bronwen Martin; Jennifer L. Fiori

    2013-01-01

    The microscopic image analysis of pancreatic Islet of Langerhans morphology is crucial for the investigation of diabetes and metabolic diseases. Besides the general size of the islet, the percentage and relative position of glucagon-containing alpha- and insulin-containing beta-cells is also important for pathophysiological analyses, especially in rodents. Hence, the ability to identify, quantify and spatially locate peripheral and “involuted” alpha-cells in the islet core is an important a...

  4. Automation of disbond detection in aircraft fuselage through thermal image processing

    Science.gov (United States)

    Prabhu, D. R.; Winfree, W. P.

    1992-01-01

    A procedure for interpreting thermal images obtained during the nondestructive evaluation of aircraft bonded joints is presented. The procedure operates on time-derivative thermal images and produces an output image with disbonds highlighted. The size of the 'black clusters' in the output disbond image is a quantitative measure of disbond size. The procedure is illustrated using simulation data as well as data obtained through experimental testing of fabricated samples and aircraft panels. Good results are obtained, and, except in pathological cases, 'false calls' in the cases studied appeared only as noise in the output disbond image which was easily filtered out. The thermal detection technique coupled with an automated image interpretation capability will be a very fast and effective method for inspecting bonded joints in an aircraft structure.

  5. Imaging System for the Automated Determination of Microscopical Properties in Hardened Portland Concrete

    Energy Technology Data Exchange (ETDEWEB)

    Baumgart, C.W.; Cave, S.P.; Linder, K.E.

    2000-03-08

    During this CRADA, Honeywell FM&T and MoDOT personnel designed a unique scanning system (including both hardware and software) that can be used to perform an automated scan and evaluation of a concrete sample. The specific goals of the CRADA were: (1) Develop a combined system integration, image acquisition, and image analysis approach to mimic the manual scanning and evaluation process. Produce a prototype system which can: (a) automate the scanning process to improve its speed and efficiency; (b) reduce operator fatigue; and (c) improve the consistency of the evaluation process. (2) Capture and preserve the baseline knowledge used by the MoDOT experts in performing the evaluation process. At the present time, the evaluation expertise resides in two MoDOT personnel. Automation of the evaluation process will allow that knowledge to be captured, preserved, and used for training purposes. (3) Develop an approach for the image analysis which is flexible and extensible in order to accommodate the inevitable pathologies that arise in the evaluation process. Such pathologies include features such as cracks and fissures, voids filled with paste or debris, and multiple, overlapping voids. FM&T personnel used image processing, pattern recognition, and system integration skills developed for other Department of Energy applications to develop and test a prototype of an automated scanning system for concrete evaluation. MoDOT personnel provided all the basic hardware (microscope, camera, computer-controlled stage, etc.) for the prototype, supported FM&T in the acquisition of image data for software development, and provided their critical expert knowledge of the process of concrete evaluation. This combination of expertise was vital to the successful development of the prototype system.

  6. Advanced automated gain adjustments for in-vivo ultrasound imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo;

    2015-01-01

each containing 50 frames. The scans are acquired using a recently commercialized BK3000 ultrasound scanner (BK Ultrasound, Denmark). Matching pairs of in-vivo sequences, unprocessed and processed with the proposed method, were visualized side by side and evaluated by 4 radiologists for image quality....... Wilcoxon signed-rank test was then applied to the ratings provided by radiologists. The average VAS score was highly positive, 12.16 (p-value: 2.09 × 10−23), favoring the gain-adjusted scans with the proposed algorithm....

  7. Automated Hierarchical Time Gain Compensation for In Vivo Ultrasound Imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo;

    2015-01-01

    tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC were visualized side by side and evaluated by two radiologists...... in terms of image quality. Wilcoxon signed-rank test was used to evaluate whether radiologists preferred the processed sequences or the unprocessed data. The results indicate that the average visual analogue scale (VAS) is positive ( p-value: 2.34 × 10−13) and estimated to be 1.01 (95% CI: 0.85; 1...
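Both of the gain-adjustment studies above score paired radiologist ratings with a Wilcoxon signed-rank test. As a rough illustration of that statistical step only (not the authors' code; in practice a library routine such as `scipy.stats.wilcoxon` would be used), here is a plain-Python version using the normal approximation, applied to hypothetical paired ratings:

```python
import math

def wilcoxon_signed_rank(ratings_a, ratings_b):
    """Paired Wilcoxon signed-rank test (normal approximation, no tie
    correction). Returns (W+, z, two-sided p). Zero differences are dropped."""
    diffs = [a - b for a, b in zip(ratings_a, ratings_b) if a != b]
    n = len(diffs)
    # Rank the absolute differences, giving tied values their average rank.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2.0 + 1.0          # average 1-based rank of the tie run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return w_plus, z, p

# Hypothetical paired VAS-style ratings: processed vs. unprocessed frames.
processed   = [7, 8, 6, 9, 7, 8, 9, 6, 7, 8]
unprocessed = [5, 6, 6, 7, 5, 6, 7, 5, 6, 6]
w, z, p = wilcoxon_signed_rank(processed, unprocessed)
print(f"W+={w}, z={z:.2f}, p={p:.4f}")
```

A positive z with small p, as in the two abstracts, indicates the radiologists systematically preferred the processed sequences.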

  8. Automation of image data processing. (Polish Title: Automatyzacja procesu przetwarzania danych obrazowych)

    Science.gov (United States)

    Preuss, R.

    2014-12-01

This article discusses the current capabilities of automated processing of image data, using Agisoft PhotoScan software as an example. At present, image data obtained by various registration systems (metric and non-metric cameras) mounted on airplanes, satellites, or, increasingly, on UAVs are used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos) are usually captured in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. As a result, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, provide image georeferencing in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, a DSM or DTM, and a photorealistic solid model of an object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software, which divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential execution of the processing steps with predetermined control parameters. The paper presents the practical results of fully automatic generation of an orthomosaic for both images obtained by a metric Vexcel camera and a block of images acquired by a non-metric UAV system.

  9. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    International Nuclear Information System (INIS)

Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum-resolution confocal panorama images of several gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes

  10. Automated Peripheral Neuropathy Assessment Using Optical Imaging and Foot Anthropometry.

    Science.gov (United States)

    Siddiqui, Hafeez-U R; Spruce, Michelle; Alty, Stephen R; Dudley, Sandra

    2015-08-01

A large proportion of individuals who live with type-2 diabetes suffer from plantar sensory neuropathy. Regular testing and assessment for the condition is required to avoid ulceration or other damage to patients' feet. Currently accepted practice involves a trained clinician testing a patient's feet manually with a hand-held nylon monofilament probe. The procedure is time consuming, labor intensive, requires special training, is prone to error, and repeatability is difficult. With the vast increase in type-2 diabetes, the number of plantar sensory neuropathy sufferers has already grown to such an extent as to make a traditional manual test problematic. This paper presents the first investigation of a novel approach to automatically identify the pressure points on a given patient's foot for the examination of sensory neuropathy via optical image processing incorporating plantar anthropometry. The method automatically selects suitable test points on the plantar surface that correspond to those repeatedly chosen by a trained podiatrist. The proposed system automatically identifies the specific pressure points at different locations, namely the toe (hallux), metatarsal heads and heel (calcaneum) areas. The approach is generic and has shown 100% reliability on the available database used. The database consists of Chinese, Asian, African, and Caucasian foot images. PMID:26186748

  11. Automated detection of diabetic retinopathy in retinal images

    Directory of Open Access Journals (Sweden)

    Carmen Valverde

    2016-01-01

Full Text Available Diabetic retinopathy (DR is a disease with an increasing prevalence and the main cause of blindness among the working-age population. The risk of severe vision loss can be significantly reduced by timely diagnosis and treatment. Systematic screening for DR has been identified as a cost-effective way to save health services resources. Automatic retinal image analysis is emerging as an important screening tool for early DR detection, which can reduce the workload associated with manual grading as well as save diagnosis costs and time. Many research efforts in recent years have been devoted to developing automatic tools to help in the detection and evaluation of DR lesions. However, there is a large variability in the databases and evaluation criteria used in the literature, which hampers a direct comparison of the different studies. This work is aimed at summarizing the results of the available algorithms for the detection and classification of DR pathology. A detailed literature search was conducted using PubMed. Selected relevant studies from the last 10 years were scrutinized and included in the review. Furthermore, we give an overview of the available commercial software for automatic retinal image analysis.

  12. Automated extraction of metastatic liver cancer regions from abdominal contrast CT images

    International Nuclear Information System (INIS)

In this paper, automated extraction of metastatic liver cancer regions from abdominal contrast X-ray CT images is investigated. Because cases of metastatic liver cancer are increasing even in Japan, due to the recent Europeanization and Americanization of Japanese eating habits, development of a system for computer-aided diagnosis of them is strongly expected. Our automated extraction procedure consists of the following four steps: liver region extraction, density transformation for enhancement of cancer regions, segmentation for obtaining candidate cancer regions, and reduction of false positives by shape features. Parameter values used in each step of the procedure are decided based on density and shape features of typical metastatic liver cancers. In experiments using 20 clinical cases of metastatic liver tumors, it is shown that 56% of true cancers can be detected successfully from CT images by the proposed procedure. (author)
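The density-windowing and shape-based false-positive-reduction steps described in this record can be sketched as follows. The density window, the size and elongation thresholds, and the toy 8×8 "slice" are invented for illustration; they are not the paper's tuned parameter values:

```python
from collections import deque

def candidate_lesions(image, lo, hi, min_size, max_elongation):
    """Toy version of steps 2-4: keep pixels whose density falls in the
    lesion window [lo, hi], group them into 4-connected components, and
    reject candidates that are too small or too elongated (shape feature)."""
    h, w = len(image), len(image[0])
    mask = [[lo <= image[y][x] <= hi for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    kept = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x] or seen[y][x]:
                continue
            comp, queue = [], deque([(y, x)])
            seen[y][x] = True
            while queue:                       # flood fill one component
                cy, cx = queue.popleft()
                comp.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            ys = [c[0] for c in comp]
            xs = [c[1] for c in comp]
            bb_h = max(ys) - min(ys) + 1
            bb_w = max(xs) - min(xs) + 1
            elong = max(bb_h, bb_w) / min(bb_h, bb_w)
            if len(comp) >= min_size and elong <= max_elongation:
                kept.append(comp)
    return kept

# Hypothetical 8x8 "CT slice": one round blob (values ~60) and a thin
# streak (also ~60) that the shape filter should reject.
img = [[30] * 8 for _ in range(8)]
for y, x in [(2, 2), (2, 3), (3, 2), (3, 3), (3, 4), (4, 3)]:
    img[y][x] = 60
for x in range(1, 8):
    img[6][x] = 60
found = candidate_lesions(img, 50, 70, min_size=4, max_elongation=2.0)
print(len(found), "candidate(s) kept")
```

The round blob survives while the elongated streak is discarded, mimicking how a shape feature can prune vessel-like false positives.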

  13. Automated grading of renal cell carcinoma using whole slide imaging

    Directory of Open Access Journals (Sweden)

    Fang-Cheng Yeh

    2014-01-01

Full Text Available Introduction: Recent technology developments have demonstrated the benefit of using whole slide imaging (WSI) in computer-aided diagnosis. In this paper, we explore the feasibility of using automatic WSI analysis to assist grading of clear cell renal cell carcinoma (RCC), which is a manual task traditionally performed by pathologists. Materials and Methods: Automatic WSI analysis was applied to 39 hematoxylin and eosin-stained digitized slides of clear cell RCC with varying grades. Kernel regression was used to estimate the spatial distribution of nuclear size across the entire slides. The analysis results were correlated with Fuhrman nuclear grades determined by pathologists. Results: The spatial distribution of nuclear size provided a panoramic view of the tissue sections. The distribution images facilitated locating regions of interest, such as high-grade regions and areas with necrosis. The statistical analysis showed that the maximum nuclear size was significantly different (P < 0.001) between low-grade (Grades I and II) and high-grade tumors (Grades III and IV). The receiver operating characteristics analysis showed that the maximum nuclear size distinguished high-grade and low-grade tumors with a false positive rate of 0.2 and a true positive rate of 1.0. The area under the curve is 0.97. Conclusion: The automatic WSI analysis allows pathologists to see the spatial distribution of nuclear size inside the tumors. The maximum nuclear size can also be used to differentiate low-grade and high-grade clear cell RCC with good sensitivity and specificity. These data suggest that automatic WSI analysis may facilitate pathologic grading of renal tumors and reduce variability encountered with manual grading.
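The ROC analysis in this record reduces each slide to one scalar feature (maximum nuclear size) and sweeps a decision threshold over it. A minimal sketch of that evaluation, with invented feature values rather than the paper's data:

```python
def roc_points(low_grade, high_grade):
    """ROC for a 'larger value => high grade' rule on a scalar feature
    (here: maximum nuclear size per slide). Returns the (FPR, TPR) pairs
    obtained by sweeping the threshold, plus the trapezoidal AUC."""
    thresholds = sorted(set(low_grade + high_grade), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        tpr = sum(v >= t for v in high_grade) / len(high_grade)
        fpr = sum(v >= t for v in low_grade) / len(low_grade)
        pts.append((fpr, tpr))
    pts.append((1.0, 1.0))
    # Trapezoidal rule over consecutive ROC points.
    auc = sum((x2 - x1) * (y1 + y2) / 2.0
              for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    return pts, auc

# Hypothetical maximum nuclear sizes (arbitrary units), not the paper's data.
low  = [8.1, 9.0, 9.4, 10.2, 10.8]     # grades I-II
high = [10.5, 12.3, 13.1, 14.0, 15.2]  # grades III-IV
pts, auc = roc_points(low, high)
print("AUC =", auc)
```

With a well-separated feature like this, a single threshold can reach a high true positive rate at a low false positive rate, which is the shape of result the paper reports (AUC 0.97).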

  14. Automated detection of regions of interest for tissue microarray experiments: an image texture analysis

    International Nuclear Information System (INIS)

Recent research with tissue microarrays led to a rapid progress toward quantifying the expressions of large sets of biomarkers in normal and diseased tissue. However, standard procedures for sampling tissue for molecular profiling have not yet been established. This study presents a high throughput analysis of texture heterogeneity on breast tissue images for the purpose of identifying regions of interest in the tissue for molecular profiling via tissue microarray technology. Image texture of breast histology slides was described in terms of three parameters: the percentage of area occupied in an image block by chromatin (B), the percentage occupied by stroma-like regions (P), and a statistical heterogeneity index H commonly used in image analysis. Texture parameters were defined and computed for each of the thousands of image blocks in our dataset using both gray-scale and color segmentation. The image blocks were then classified into three categories using the texture feature parameters in a novel statistical learning algorithm. These categories are as follows: image blocks specific to normal breast tissue, blocks specific to cancerous tissue, and those image blocks that are non-specific to normal and disease states. Gray-scale and color segmentation techniques led to identification of the same regions in histology slides as cancer-specific. Moreover, the image blocks identified as cancer-specific belonged to those cell-crowded regions in whole section image slides that were marked by two pathologists as regions of interest for further histological studies. These results indicate the high efficiency of our automated method for identifying pathologic regions of interest on histology slides. Automation of critical region identification will help minimize the inter-rater variability among different raters (pathologists) as hundreds of tumors that are used to develop an array have typically been evaluated (graded) by different pathologists. The region of interest
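Per-block texture features of the kind described above (B, P, and a heterogeneity index H) can be illustrated on a grayscale block. The intensity cut-offs are invented, and Shannon entropy of the intensity histogram is used here as one plausible heterogeneity index; the paper's exact definition of H is not given in this record:

```python
import math

def block_texture(block, chromatin_max=80, stroma_min=170):
    """Toy per-block texture features on a grayscale block (0-255):
    B = % of pixels darker than chromatin_max (chromatin-like),
    P = % of pixels brighter than stroma_min (stroma-like),
    H = Shannon entropy (bits) of a 16-bin intensity histogram, used
        here as an assumed stand-in for the heterogeneity index."""
    pix = [v for row in block for v in row]
    n = len(pix)
    b = 100.0 * sum(v <= chromatin_max for v in pix) / n
    p = 100.0 * sum(v >= stroma_min for v in pix) / n
    hist = [0] * 16
    for v in pix:
        hist[min(v // 16, 15)] += 1
    h = -sum((c / n) * math.log2(c / n) for c in hist if c)
    return b, p, h

uniform = [[200] * 8 for _ in range(8)]  # stroma-like, homogeneous block
mixed = [[40 if (x + y) % 2 else 220 for x in range(8)] for y in range(8)]
print("uniform:", block_texture(uniform))
print("mixed:  ", block_texture(mixed))
```

The homogeneous block scores zero heterogeneity, while the checkerboard of dark and bright pixels scores high on all three parameters, which is the kind of contrast the block classifier exploits.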

  15. Performing content-based retrieval of humans using gait biometrics

    OpenAIRE

    Samangooei, Sina; Nixon, Mark

    2010-01-01

    In order to analyse surveillance video, we need to efficiently explore large datasets containing videos of walking humans. Effective analysis of such data relies on retrieval of video data which has been enriched using semantic annotations. A manual annotation process is time-consuming and prone to error due to subject bias however, at surveillance-image resolution, the human walk (their gait) can be analysed automatically. We explore the content-based retrieval of videos containing walking s...

  16. Simple Tool for Semi-automated Evaluation of Yeast Colony Images

    Czech Academy of Sciences Publication Activity Database

    Schier, Jan; Kovář, Bohumil

    Berlin: Springer, 2013 - (Fred, A.; Filipe, J.; Gamboa, H.), s. 110-125. (Communications in Computer and Information Science. 273). ISBN 978-3-642-29751-9 R&D Projects: GA TA ČR TA01010931 Institutional support: RVO:67985556 Keywords : Colony counting * Petri dish evaluation * software tool Subject RIV: JC - Computer Hardware ; Software http://library.utia.cas.cz/separaty/2012/ZS/schier-simple tool for semi-automated evaluation of yeast colony image s.pdf

  17. Characterization of the microstructure of dairy systems using automated image analysis

    OpenAIRE

    Silva, Juliana V.C.; Legland, David; Cauty, Chantal; Kolotuev, Irina; Floury, Juliane

    2015-01-01

    A sound understanding of the microstructure of dairy products is of great importance in order to predict and control their properties and final quality. The aim of this study was to develop an automated image analysis procedure to characterize the microstructure of different dairy systems. A high pressure freezing coupled with freeze-substitution (HPF-FS) protocol was applied prior to transmission electron microscopy(TEM) in order to minimize any modification of the microstructure of the dair...

  18. The impact of air pollution on the level of micronuclei measured by automated image analysis

    Czech Academy of Sciences Publication Activity Database

    Rössnerová, Andrea; Špátová, Milada; Rossner, P.; Solanský, I.; Šrám, Radim

    2009-01-01

    Roč. 669, 1-2 (2009), s. 42-47. ISSN 0027-5107 R&D Projects: GA AV ČR 1QS500390506; GA MŠk 2B06088; GA MŠk 2B08005 Institutional research plan: CEZ:AV0Z50390512 Keywords : micronuclei * binucleated cells * automated image analysis Subject RIV: DN - Health Impact of the Environment Quality Impact factor: 3.556, year: 2009

  19. Automated semantic indexing of imaging reports to support retrieval of medical images in the multimedia electronic medical record.

    Science.gov (United States)

    Lowe, H J; Antipov, I; Hersh, W; Smith, C A; Mailhot, M

    1999-12-01

    This paper describes preliminary work evaluating automated semantic indexing of radiology imaging reports to represent images stored in the Image Engine multimedia medical record system at the University of Pittsburgh Medical Center. The authors used the SAPHIRE indexing system to automatically identify important biomedical concepts within radiology reports and represent these concepts with terms from the 1998 edition of the U.S. National Library of Medicine's Unified Medical Language System (UMLS) Metathesaurus. This automated UMLS indexing was then compared with manual UMLS indexing of the same reports. Human indexing identified appropriate UMLS Metathesaurus descriptors for 81% of the important biomedical concepts contained in the report set. SAPHIRE automatically identified UMLS Metathesaurus descriptors for 64% of the important biomedical concepts contained in the report set. The overall conclusions of this pilot study were that the UMLS metathesaurus provided adequate coverage of the majority of the important concepts contained within the radiology report test set and that SAPHIRE could automatically identify and translate almost two thirds of these concepts into appropriate UMLS descriptors. Further work is required to improve both the recall and precision of this automated concept extraction process. PMID:10805018
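The coverage figures in this record (81% for manual indexing, 64% for SAPHIRE) are recall-style measures against a gold standard of important concepts. A minimal sketch of scoring automated concept extraction per report, using invented concept identifiers rather than real UMLS CUIs:

```python
def indexing_scores(automated, manual):
    """Recall and precision of automated concept indexing against a
    manual gold standard (both are sets of concept identifiers)."""
    tp = len(automated & manual)
    recall = tp / len(manual) if manual else 0.0
    precision = tp / len(automated) if automated else 0.0
    return recall, precision

# Hypothetical concept IDs for one radiology report (not real CUIs).
manual    = {"C001", "C002", "C003", "C004", "C005"}
automated = {"C001", "C002", "C003", "C999"}
r, p = indexing_scores(automated, manual)
print(f"recall={r:.2f} precision={p:.2f}")
```

Averaging such per-report scores over a report set gives the kind of aggregate recall the pilot study reports, and the spurious "C999" illustrates why precision must be tracked alongside it.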

  20. Development of a methodology for automated assessment of the quality of digitized images in mammography

    International Nuclear Information System (INIS)

The process of evaluating the quality of radiographic images in general, and mammography in particular, can be much more accurate, practical and fast with the help of computer analysis tools. The purpose of this study is to develop a computational methodology to automate the process of assessing the quality of mammography images through digital image processing (DIP) techniques, using an existing image processing environment (ImageJ). With the application of DIP techniques it was possible to extract geometric and radiometric characteristics of the images evaluated. The evaluated parameters include spatial resolution, high-contrast detail, low-contrast threshold, linear detail of low contrast, tumor masses, contrast ratio and background optical density. The results obtained by this method were compared with the results presented in the visual evaluations performed by the Health Surveillance of Minas Gerais. Through this comparison it was possible to demonstrate that the automated methodology is a promising alternative for the reduction or elimination of the subjectivity present in the visual assessment methodology currently in use. (author)

  1. OpenComet: An automated tool for comet assay image analysis

    Directory of Open Access Journals (Sweden)

    Benjamin M. Gyori

    2014-01-01

    Full Text Available Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.
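OpenComet finds comets via geometric shape attributes and segments the head through intensity-profile analysis. As a much-simplified illustration of the profile idea only (this is not OpenComet's actual algorithm), the sketch below takes a 1-D intensity profile, delimits the head around the peak, and reports the standard %-tail-DNA measurement:

```python
def percent_tail_dna(profile):
    """Toy comet measurement on a 1-D intensity profile (head first,
    tail trailing to the right): take the peak as the head centre, extend
    the head while intensity stays above half the peak, and report the
    percentage of total DNA (integrated intensity) in the tail beyond it."""
    peak = max(range(len(profile)), key=profile.__getitem__)
    half = profile[peak] / 2.0
    end = peak
    while end + 1 < len(profile) and profile[end + 1] >= half:
        end += 1
    total = sum(profile)
    tail = sum(profile[end + 1:])
    return 100.0 * tail / total

# Hypothetical comet profile: bright head followed by a decaying tail.
profile = [5, 40, 100, 90, 20, 15, 10, 10, 5, 5]
print(f"{percent_tail_dna(profile):.1f}% tail DNA")
```

A heavily damaged cell shifts intensity from the head into the tail, raising this percentage; automating the head/tail split is what removes the manual tagging the abstract criticizes.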

  2. Automated Formosat Image Processing System for Rapid Response to International Disasters

    Science.gov (United States)

    Cheng, M. C.; Chou, S. C.; Chen, Y. C.; Chen, B.; Liu, C.; Yu, S. J.

    2016-06-01

FORMOSAT-2, Taiwan's first remote sensing satellite, was successfully launched in May of 2004 into a Sun-synchronous orbit at 891 kilometers of altitude. With its daily revisit feature, the 2-m panchromatic, 8-m multi-spectral resolution images captured have been used for research and operations in various societal benefit areas. This paper details the orchestration of the various tasks conducted by different institutions in Taiwan in the efforts responding to international disasters. The institutions involved include Taiwan's space agency, the National Space Organization (NSPO); the Center for Satellite Remote Sensing Research of National Central University; the GIS Center of Feng-Chia University; and the National Center for High-performance Computing. Since each institution has its own mandate, the coordinated tasks ranged from receiving emergency observation requests, scheduling and tasking of satellite operation, and downlink to ground stations, through image processing including data injection and ortho-rectification, to delivery of image products. With the lessons learned from working with international partners, the FORMOSAT Image Processing System has been extensively automated and streamlined with a goal to shorten the time between request and delivery in an efficient manner. The integrated team has developed an Application Interface to its system platform that provides functions of search in the archive catalogue, request of data services, mission planning, inquiry of service status, and image download. This automated system enables timely image acquisition and substantially increases the value of the data product. An example outcome of these efforts, the recent response supporting Sentinel Asia during the Nepal earthquake, is demonstrated herein.

  3. Automating PACS quality control with the Vanderbilt image processing enterprise resource

    Science.gov (United States)

    Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.

    2012-02-01

Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption; for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given the scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.

  4. An Automated System for the Detection of Stratified Squamous Epithelial Cancer Cell Using Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Ram Krishna Kumar

    2013-06-01

Full Text Available Early detection of cancer is a difficult problem, and if the disease is not detected in its early phase it can be fatal. Current medical procedures used to diagnose cancer in body parts are time-consuming and require substantial laboratory work. This work is an endeavor toward the recognition of cancer cells in a body part. The process consists of taking images of the affected area and digitally processing them to obtain a morphological pattern that differentiates normal cells from cancer cells. The technique is different from visual inspection and the biopsy process. Image processing enables the visualization of cellular structure with substantial resolution. The aim of the work is to exploit differences in cellular organization between cancerous and normal tissue using image processing techniques, thus allowing for automated, fast and accurate diagnosis.

  5. Development of Raman microspectroscopy for automated detection and imaging of basal cell carcinoma

    Science.gov (United States)

    Larraona-Puy, Marta; Ghita, Adrian; Zoladek, Alina; Perkins, William; Varma, Sandeep; Leach, Iain H.; Koloydenko, Alexey A.; Williams, Hywel; Notingher, Ioan

    2009-09-01

    We investigate the potential of Raman microspectroscopy (RMS) for automated evaluation of excised skin tissue during Mohs micrographic surgery (MMS). The main aim is to develop an automated method for imaging and diagnosis of basal cell carcinoma (BCC) regions. Selected Raman bands responsible for the largest spectral differences between BCC and normal skin regions and linear discriminant analysis (LDA) are used to build a multivariate supervised classification model. The model is based on 329 Raman spectra measured on skin tissue obtained from 20 patients. BCC is discriminated from healthy tissue with 90+/-9% sensitivity and 85+/-9% specificity in a 70% to 30% split cross-validation algorithm. This multivariate model is then applied on tissue sections from new patients to image tumor regions. The RMS images show excellent correlation with the gold standard of histopathology sections, BCC being detected in all positive sections. We demonstrate the potential of RMS as an automated objective method for tumor evaluation during MMS. The replacement of current histopathology during MMS by a ``generalization'' of the proposed technique may improve the feasibility and efficacy of MMS, leading to a wider use according to clinical need.
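The 90% sensitivity and 85% specificity quoted above come from comparing the classifier's per-spectrum predictions against histopathology labels on held-out data. The evaluation step can be sketched as follows, with hypothetical labels for 10 spectra (not the study's data):

```python
def sensitivity_specificity(truth, predicted):
    """Sensitivity and specificity for binary labels (1 = BCC, 0 = normal)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(truth, predicted))
    fn = sum(t == 1 and p == 0 for t, p in zip(truth, predicted))
    tn = sum(t == 0 and p == 0 for t, p in zip(truth, predicted))
    fp = sum(t == 0 and p == 1 for t, p in zip(truth, predicted))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical held-out labels for 10 Raman spectra.
truth     = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, predicted)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

Repeating this over many random 70%/30% splits and averaging, as the abstract describes, yields the reported figures with their ±9% spread.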

  6. Automated construction of arterial and venous trees in retinal images.

    Science.gov (United States)

    Hu, Qiao; Abràmoff, Michael D; Garvin, Mona K

    2015-10-01

    While many approaches exist to segment retinal vessels in fundus photographs, only a limited number focus on the construction and disambiguation of arterial and venous trees. Previous approaches are local and/or greedy in nature, making them susceptible to errors or limiting their applicability to large vessels. We propose a more global framework to generate arteriovenous trees in retinal images, given a vessel segmentation. In particular, our approach consists of three stages. The first stage is to generate an overconnected vessel network, named the vessel potential connectivity map (VPCM), consisting of vessel segments and the potential connectivity between them. The second stage is to disambiguate the VPCM into multiple anatomical trees, using a graph-based metaheuristic algorithm. The third stage is to classify these trees into arterial or venous (A/V) trees. We evaluated our approach with a ground truth built based on a public database, showing a pixel-wise classification accuracy of 88.15% using a manual vessel segmentation as input, and 86.11% using an automatic vessel segmentation as input. PMID:26636114

  7. Automated Functional Morphology Measurement Using Cardiac SPECT Images

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Seok Yoon; Ko, Seong Jin; Kang, Se Sik; Kim, Chang Soo; Kim, Jung Hoon [Dept. of Radiological Science, College of Health Sciences, Catholic University of Pusan, Pusan (Korea, Republic of)

    2012-06-15

For nuclear medicine examinations, the myocardial scan is a good method to evaluate the hemodynamic importance of coronary heart disease, but automated quantitative measurement is additionally necessary to improve reading efficiency. We suggest the creation of a three-dimensional cardiac model and a model of three-dimensional cardiac wall thickness as a new measurement. For the experiment, cardiac cross sections were obtained from SPECT. Next, pre-processing was performed and image segmentation was carried out by the level-set method. The modeling of left ventricular wall thickness was realized by applying a difference-equation form of the two-dimensional Laplace equation. As a result of the experiment, the internal and external walls were measured successfully, and three-dimensional modeling was realized from their coordinates. Using the Laplace formula, the thickness of the cardiac wall was computed. Through the three-dimensional model, defects were observed easily, and the position of a lesion could be grasped rapidly by rotating the model. The model, developed as a support index for image reading, will provide additional information to physicians, reduce the rate of false diagnosis, and play a great role in diagnosing IHD early.
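Solving the two-dimensional Laplace equation by finite differences, as this record describes for wall-thickness modeling, amounts to relaxing interior grid values toward the average of their neighbours while the segmented boundaries are held fixed. A minimal Jacobi-relaxation sketch on a toy "wall" (the grid and boundary values are invented for illustration):

```python
def jacobi_laplace(grid, fixed, iters=500):
    """Jacobi relaxation for the 2-D Laplace equation: each interior cell
    is repeatedly replaced by the average of its 4 neighbours while cells
    marked in `fixed` (the boundary conditions) stay put."""
    h, w = len(grid), len(grid[0])
    for _ in range(iters):
        nxt = [row[:] for row in grid]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if not fixed[y][x]:
                    nxt[y][x] = (grid[y - 1][x] + grid[y + 1][x] +
                                 grid[y][x - 1] + grid[y][x + 1]) / 4.0
        grid = nxt
    return grid

# Toy "wall" cross-section: potential fixed at 0 on the inner boundary
# (left column) and 1 on the outer boundary (right column), with linear
# values on the top/bottom edges. The converged solution varies smoothly
# across the wall; following its gradient from one boundary to the other
# is one way to define a wall-thickness measure.
h, w = 5, 6
grid = [[x / (w - 1) if y in (0, h - 1) else 0.0 for x in range(w)]
        for y in range(h)]
for y in range(h):
    grid[y][w - 1] = 1.0
fixed = [[y in (0, h - 1) or x in (0, w - 1) for x in range(w)]
         for y in range(h)]
sol = jacobi_laplace(grid, fixed)
print([round(v, 2) for v in sol[2]])  # middle row of the converged field
```

With these boundary values the exact solution is linear across the wall, so the relaxation converges to evenly spaced potentials between 0 and 1.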

  8. Automated 3D-Objectdocumentation on the Base of an Image Set

    Directory of Open Access Journals (Sweden)

    Sebastian Vetter

    2011-12-01

Full Text Available Digital stereo-photogrammetry allows users an automatic evaluation of the spatial dimensions and the surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. With the help of an automated point search algorithm, identical points found across an image set are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as for defining the relations between neighbouring stereo models. By using proper filter strategies, incorrect points are removed and the relative orientation of each stereo model can be computed automatically. With the help of 3D reference points or distances on the object, or a defined camera base distance, the stereo model is oriented absolutely. An adapted expansion and matching algorithm offers the possibility to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on image quality. With the integration of the iterative closest point algorithm (ICP), these partial point clouds are fitted into a total point cloud. In this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. The texturing can be done automatically by using the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full-frame sensor, a high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images.
The

  9. Automation Aspects for the Georeferencing of Photogrammetric Aerial Image Archives in Forested Scenes

    Directory of Open Access Journals (Sweden)

    Kimmo Nurminen

    2015-02-01

    Full Text Available Photogrammetric aerial film image archives are being scanned into digital form in many countries. These data sets offer an interesting source of information for scientists from different disciplines. The objective of this investigation was to contribute to automating the generation of 3D environmental model time series from small-scale airborne image archives, especially in forested scenes. Furthermore, we investigated the usability of dense digital surface models (DSMs) generated from these data sets, as well as the uncertainty propagation of the DSMs. A key element in the automation is georeferencing. For images captured years apart, it is essential to find ground reference locations that have changed as little as possible. We studied a 68-year-long aerial image time series in a Finnish Karelian forestland. The quality of candidate ground locations was evaluated by comparing DSMs created from the images to an airborne laser scanning (ALS)-derived reference DSM. The quality statistics of the DSMs were consistent with expectations; the estimated median root mean squared error for height varied between 0.3 and 2 m, indicating a photogrammetric modelling error of 0.1‰ with respect to flying height for data sets collected since the 1980s, and 0.2‰ for older data sets. The results show that of the studied land cover classes, “peatland without trees” changed the least over time and is one of the most promising candidates to serve as a location for automatic ground control measurement. Our results also highlight some potential challenges in the process, as well as possible solutions. They indicate that, using modern photogrammetric techniques, it is possible to reconstruct 3D environmental model time series from photogrammetric image archives in a highly automated way.
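
As a sketch of the quality statistic used above, the height RMSE of a photogrammetric DSM against a reference DSM (on a common grid) can be computed and expressed per mille of flying height. All values below are synthetic placeholders, not the study's data:

```python
import numpy as np

def dsm_height_rmse(dsm, reference, flying_height):
    """RMSE between a photogrammetric DSM and a reference DSM on the same grid,
    also expressed relative to flying height (per mille)."""
    diff = dsm - reference
    diff = diff[np.isfinite(diff)]                 # ignore no-data cells
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    return rmse, 1000.0 * rmse / flying_height

# Synthetic example: ~0.5 m height noise at 5000 m flying height -> ~0.1 per mille
rng = np.random.default_rng(0)
ref = rng.uniform(100, 200, size=(200, 200))
dsm = ref + rng.normal(0, 0.5, size=ref.shape)
rmse, per_mille = dsm_height_rmse(dsm, ref, flying_height=5000.0)
```

This mirrors how a 0.1‰ modelling error corresponds to sub-metre height accuracy at typical small-scale flying heights.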

  10. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    Science.gov (United States)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

    Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. These images can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including the mediastinum and lungs but excluding the rib cage and spine. The problem is addressed with a model-based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2 mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.

  11. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter

    2006-01-01

    Full Text Available The use of image processing systems is widespread in production processes, and hardware solutions and cameras are available for nearly every application. One important challenge for image processing systems is the development and selection of appropriate algorithms and software in order to realise ambitious quality control for production processes. This article describes the development of innovative software that combines features for automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to execute the classification task according to the combined features. This software is one crucial element in the automation of a manually operated production process.
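
A minimal sketch of the SVM classification step, using scikit-learn's `SVC` on hypothetical combined feature vectors (the feature values and their class separation below are invented for illustration, not the article's data):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical combined feature vectors (e.g. texture + geometry descriptors)
rng = np.random.default_rng(1)
n = 400
defect_free = rng.normal(0.0, 1.0, size=(n, 6))   # class 0: intact surface
defective = rng.normal(2.0, 1.0, size=(n, 6))     # class 1: surface defect
X = np.vstack([defect_free, defective])
y = np.array([0] * n + [1] * n)

# Standardise features, then fit an RBF-kernel SVM
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Combining heterogeneous surface descriptors into one vector and letting the kernel handle the nonlinear decision boundary is the standard pattern for this kind of defect classifier.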

  12. An algorithm for automated analysis of ultrasound images to measure tendon excursion in vivo.

    Science.gov (United States)

    Lee, Sabrina S M; Lewis, Gregory S; Piazza, Stephen J

    2008-02-01

    The accuracy of an algorithm for the automated tracking of tendon excursion from ultrasound images was tested in three experiments. Because the automated method could not be tested against direct measurements of tendon excursion in vivo, an indirect validation procedure was employed. In one experiment, a wire "phantom" was moved a known distance across the ultrasound probe and the automated tracking results were compared with the known distance. The excursion of the musculotendinous junction of the gastrocnemius during frontal and sagittal plane movement of the ankle was assessed in a single cadaver specimen both by manual tracking and with a cable extensometer sutured to the gastrocnemius muscle. A third experiment involved estimation of Achilles tendon excursion in vivo with both manual and automated tracking. Root mean squared (RMS) error was calculated between pairs of measurements after each test. Mean RMS errors of less than 1 mm were observed for the phantom experiments. For the in vitro experiment, mean RMS errors of 8-9% of the total tendon excursion were observed. Mean RMS errors of 6-8% of the total tendon excursion were found in vivo. The results indicate that the proposed algorithm accurately tracks Achilles tendon excursion, but further testing is necessary to determine its general applicability. PMID:18309186
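
The error metric used above, RMS difference between automated and manual traces expressed as a percentage of total excursion, can be sketched as follows (the traces here are synthetic, not the study's measurements):

```python
import numpy as np

def rms_error_percent(auto_track, manual_track):
    """RMS difference between two displacement traces, expressed as a
    percentage of the total excursion seen in the manual trace."""
    auto_track = np.asarray(auto_track, dtype=float)
    manual_track = np.asarray(manual_track, dtype=float)
    rms = np.sqrt(np.mean((auto_track - manual_track) ** 2))
    total_excursion = manual_track.max() - manual_track.min()
    return 100.0 * rms / total_excursion

manual = np.linspace(0.0, 10.0, 50)   # hypothetical 10 mm total excursion
auto = manual + 0.4                   # automated trace with a 0.4 mm offset
err = rms_error_percent(auto, manual)
```

By this definition, the study's 6-8% in vivo errors correspond to sub-millimetre deviations over a typical tendon excursion of around 10 mm.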

  13. Automated collection of medical images for research from heterogeneous systems: trials and tribulations

    Science.gov (United States)

    Patel, M. N.; Looney, P.; Young, K.; Halling-Brown, M. D.

    2014-03-01

    Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. Over the past two decades, both diagnostic and therapeutic imaging have undergone rapid growth, and the ability to harness this large influx of medical images can provide an essential resource for research and training. Traditionally, the systematic collection of medical images for research from heterogeneous sites has not been commonplace within the NHS and is fraught with challenges, including data acquisition, storage, secure transfer and correct anonymisation. Here, we describe a semi-automated system which comprehensively oversees the collection of both unprocessed and processed medical images, from acquisition to a centralised database. The provision of unprocessed images within our repository enables a multitude of potential research possibilities that utilise the images. Furthermore, we have developed systems and software to integrate these data with their associated clinical data and annotations, providing a centralised dataset for research. Currently we regularly collect digital mammography images from two sites and partially collect from a further three, with efforts to expand into other modalities and sites ongoing. At present we have collected 34,014 2D images from 2623 individuals. In this paper we describe our medical image collection system for research and discuss the wide spectrum of challenges faced during the design and implementation of such systems.

  14. Sfm_georef: Automating image measurement of ground control points for SfM-based projects

    Science.gov (United States)

    James, Mike R.

    2016-04-01

    Deriving accurate DEM and orthomosaic image products from UAV surveys generally involves the use of multiple ground control points (GCPs). Here, we demonstrate the automated collection of GCP image measurements for SfM-MVS processed projects, using sfm_georef software (James & Robson, 2012; http://www.lancaster.ac.uk/staff/jamesm/software/sfm_georef.htm). Sfm_georef was originally written to provide geo-referencing procedures for SfM-MVS projects. It has now been upgraded with a 3-D patch-based matching routine suitable for automating GCP image measurement in both aerial and ground-based (oblique) projects, with the aim of reducing the time required for accurate geo-referencing. Sfm_georef is compatible with a range of SfM-MVS software and imports the relevant files that describe the image network, including camera models and tie points. 3-D survey measurements of ground control are then provided, either for natural features or artificial targets distributed over the project area. Automated GCP image measurement is manually initiated through identifying a GCP position in an image by mouse click; the GCP is then represented by a square planar patch in 3-D, textured from the image and oriented parallel to the local topographic surface (as defined by the 3-D positions of nearby tie points). Other images are then automatically examined by projecting the patch into the images (to account for differences in viewing geometry) and carrying out a sub-pixel normalised cross-correlation search in the local area. With two or more observations of a GCP, its 3-D co-ordinates are then derived by ray intersection. With the 3-D positions of three or more GCPs identified, an initial geo-referencing transform can be derived to relate the SfM-MVS co-ordinate system to that of the GCPs. Then, if GCPs are symmetric and identical, image texture from one representative GCP can be used to search automatically for all others throughout the image set. Finally, the GCP observations can be
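
The matching step described above can be illustrated with an integer-pixel sketch of an exhaustive normalised cross-correlation search (omitting the patch projection and sub-pixel refinement that sfm_georef performs; the image and patch here are synthetic):

```python
import numpy as np

def ncc_search(image, template):
    """Exhaustive normalised cross-correlation search; returns the (row, col)
    of the best-matching template position and the NCC score there."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:
                continue            # skip flat windows (NCC undefined)
            score = (wz * t).sum() / denom
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

rng = np.random.default_rng(2)
img = rng.random((60, 60))
patch = img[20:28, 33:41].copy()     # template cut from a known location
loc, score = ncc_search(img, patch)  # recovers (20, 33) with score ~1.0
```

In the real workflow the template is a GCP patch re-projected into each candidate image before the correlation search, so that viewing-geometry differences do not degrade the match.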

  15. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets

    Directory of Open Access Journals (Sweden)

    Charita Bhikha

    2015-01-01

    Full Text Available An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron.

  16. ATOM - an OMERO add-on for automated import of image data

    Directory of Open Access Journals (Sweden)

    Lipp Peter

    2011-10-01

    Full Text Available Abstract Background Modern microscope platforms are able to generate multiple gigabytes of image data in a single experimental session. In a routine research laboratory workflow, these data are initially stored on the local acquisition computer, from which the files need to be transferred to the experimenter's remote image repository (e.g., DVDs, portable hard discs or server-based storage) because of limited local data storage. Although manual solutions for this migration exist, such as OMERO, a client-server software for visualising and managing large amounts of image data, the import process can be a time-consuming and tedious task. Findings We have developed ATOM, a Java-based and thus platform-independent add-on for OMERO enabling automated transfer of image data from a wide variety of acquisition software packages into OMERO. ATOM provides a graphical user interface and allows pre-organisation of experimental data for the transfer. Conclusions ATOM is a convenient extension of the OMERO software system. An automated interface to OMERO will be a useful tool for scientists working with file formats supported by Bio-Formats, a platform-independent library for reading the most common file formats of microscope images.

  17. Automated quantification technology for cerebrospinal fluid dynamics based on magnetic resonance image analysis

    International Nuclear Information System (INIS)

    Time-spatial labeling inversion pulse (Time-SLIP) technology, a non-contrast-enhanced magnetic resonance imaging (MRI) technology for visualizing blood flow and cerebrospinal fluid (CSF) dynamics, is used in the diagnosis of CSF-related neurological diseases, including idiopathic normal-pressure hydrocephalus (iNPH), one of the causes of dementia. However, physicians must subjectively evaluate the velocity of CSF dynamics by observing Time-SLIP images, because no quantification technology exists that can express the values numerically. To address this issue, Toshiba, in cooperation with Toshiba Medical Systems Corporation and Toshiba Rinkan Hospital, has developed an automated quantification technology for CSF dynamics based on MR image analysis. We have confirmed the effectiveness of this technology through verification tests using a water phantom and quantification experiments using images of healthy volunteers. (author)

  18. Can Automated Imaging for Optic Disc and Retinal Nerve Fiber Layer Analysis Aid Glaucoma Detection?

    Science.gov (United States)

    Banister, Katie; Boachie, Charles; Bourne, Rupert; Cook, Jonathan; Burr, Jennifer M.; Ramsay, Craig; Garway-Heath, David; Gray, Joanne; McMeekin, Peter; Hernández, Rodolfo; Azuara-Blanco, Augusto

    2016-01-01

    Purpose To compare the diagnostic performance of automated imaging for glaucoma. Design Prospective, direct comparison study. Participants Adults with suspected glaucoma or ocular hypertension referred to hospital eye services in the United Kingdom. Methods We evaluated 4 automated imaging test algorithms: the Heidelberg Retinal Tomography (HRT; Heidelberg Engineering, Heidelberg, Germany) glaucoma probability score (GPS), the HRT Moorfields regression analysis (MRA), scanning laser polarimetry (GDx enhanced corneal compensation; Glaucoma Diagnostics (GDx), Carl Zeiss Meditec, Dublin, CA) nerve fiber indicator (NFI), and Spectralis optical coherence tomography (OCT; Heidelberg Engineering) retinal nerve fiber layer (RNFL) classification. We defined abnormal tests as an automated classification of outside normal limits for HRT and OCT, or NFI ≥ 56 (GDx). We conducted a sensitivity analysis using borderline abnormal image classifications. The reference standard was clinical diagnosis by a masked glaucoma expert, including standardized clinical assessment and automated perimetry. We analyzed 1 eye per patient (the one with more advanced disease). We also evaluated the performance according to severity and using a combination of 2 technologies. Main Outcome Measures Sensitivity and specificity, likelihood ratios, diagnostic odds ratio, and proportion of indeterminate tests. Results We recruited 955 participants, and 943 were included in the analysis. The average age was 60.5 years (standard deviation, 13.8 years); 51.1% were women. Glaucoma was diagnosed in at least 1 eye in 16.8%; 32% of participants had no glaucoma-related findings. The HRT MRA had the highest sensitivity (87.0%; 95% confidence interval [CI], 80.2%–92.1%), but lowest specificity (63.9%; 95% CI, 60.2%–67.4%); GDx had the lowest sensitivity (35.1%; 95% CI, 27.0%–43.8%), but the highest specificity (97.2%; 95% CI, 95.6%–98.3%). The HRT GPS sensitivity was 81.5% (95% CI, 73.9%–87.6%), and

  19. Automated detection of galaxy-scale gravitational lenses in high resolution imaging data

    CERN Document Server

    Marshall, Philip J; Moustakas, Leonidas A; Fassnacht, Christopher D; Bradac, Marusa; Schrabback, Tim; Blandford, Roger D

    2008-01-01

    Lens modeling is the key to successful and meaningful automated strong galaxy-scale gravitational lens detection. We have implemented a lens-modeling "robot" that treats every bright red galaxy (BRG) in a large imaging survey as a potential gravitational lens system. Using a simple model optimized for "typical" galaxy-scale lenses, we generate four assessments of model quality that are used in an automated classification. The robot infers the lens classification parameter H that a human would have assigned; the inference is performed using a probability distribution generated from a human-classified training set, including realistic simulated lenses and known false positives drawn from the HST/EGS survey. We compute the expected purity, completeness and rejection rate, and find that these can be optimized for a particular application by changing the prior probability distribution for H, equivalent to defining the robot's "character." Adopting a realistic prior based on the known abundance of lenses, we find t...

  20. Automated Line Tracking of lambda-DNA for Single-Molecule Imaging

    CERN Document Server

    Guan, Juan; Granick, Steve

    2011-01-01

    We describe a straightforward, automated line tracking method to visualize, within optical resolution, the contour of linear macromolecules as they rearrange shape over time by Brownian diffusion and under external fields such as electrophoresis. Three sequential stages of analysis underpin this method: first, "feature finding" to discriminate signal from noise; second, "line tracking" to approximate those shapes as lines; third, a "temporal consistency check" to discriminate reasonable from unreasonable fitted conformations in the time domain. The automated nature of the data analysis makes it straightforward to accumulate vast quantities of data while excluding the unreliable parts. We implement the analysis on fluorescence images of lambda-DNA molecules in agarose gel to demonstrate its capability to produce large datasets for subsequent statistical analysis.

  1. Estimation of urinary stone composition by automated processing of CT images

    CERN Document Server

    Chevreau, Grégoire; Conort, Pierre; Renard-Penna, Raphaëlle; Mallet, Alain; Daudon, Michel; Mozer, Pierre; 10.1007/s00240-009-0195-3

    2009-01-01

    The objective of this article was to develop an automated tool for routine clinical practice to estimate urinary stone composition from CT images, based on the density of all constituent voxels. A total of 118 stones for which the composition had been determined by infrared spectroscopy were placed in a helical CT scanner. Standard, low-dose and high-dose acquisitions were performed. All voxels constituting each stone were automatically selected. A dissimilarity index evaluating variations of density around each voxel was created in order to minimize partial volume effects: stone composition was established on the basis of voxel density in homogeneous zones. Stone composition was determined in 52% of cases. Sensitivities for each compound were: uric acid, 65%; struvite, 19%; cystine, 78%; carbapatite, 33.5%; calcium oxalate dihydrate, 57%; calcium oxalate monohydrate, 66.5%; brushite, 75%. Low-dose acquisition did not lower performance (P < 0.05). This entirely automated approach eliminat...

  2. Automated classification of optical coherence tomography images of human atrial tissue.

    Science.gov (United States)

    Gan, Yu; Tsay, David; Amir, Syed B; Marboe, Charles C; Hendon, Christine P

    2016-10-01

    Tissue composition of the atria plays a critical role in the pathology of cardiovascular disease, tissue remodeling, and arrhythmogenic substrates. Optical coherence tomography (OCT) has the ability to capture tissue composition information of the human atria. In this study, we developed a region-based automated method to classify tissue compositions in OCT images of human atrial samples. We segmented regional information without prior knowledge of the tissue architecture and subsequently extracted features within each segmented region. A relevance vector machine model was used to perform automated classification. Segmentation of ex vivo human atrial datasets was correlated with trichrome histology, and our classification algorithm had an average accuracy of 80.41% for identifying adipose, myocardium, fibrotic myocardium, and collagen tissue compositions. PMID:26926869

  3. AUTOMATED DETECTION OF OIL DEPOTS FROM HIGH RESOLUTION IMAGES: A NEW PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    A. O. Ok

    2015-03-01

    Full Text Available This paper presents an original approach to identifying oil depots from single high-resolution aerial/satellite images in an automated manner. The new approach exploits the symmetric nature of circular oil depots and computes radial symmetry in a unique way. An automated thresholding method to focus on circular regions and a new measure to verify circles are proposed. Experiments are performed on six GeoEye-1 test images. In addition, we perform tests on 16 Google Earth images of an industrial test site acquired as a time series (between 1995 and 2012). The results reveal that our approach is capable of detecting circular objects in very different and difficult images. We computed an overall performance of 95.8% for the GeoEye-1 dataset. The time series investigation reveals that our approach is robust enough to locate oil depots in industrial environments under varying illumination and environmental conditions. The overall performance is 89.4% for the Google Earth dataset, which confirms the advantage of our approach over a state-of-the-art approach.

  4. Fully automated muscle quality assessment by Gabor filtering of second harmonic generation images

    Science.gov (United States)

    Paesen, Rik; Smolders, Sophie; Vega, José Manolo de Hoyos; Eijnde, Bert O.; Hansen, Dominique; Ameloot, Marcel

    2016-02-01

    Although structural changes on the sarcomere level of skeletal muscle are known to occur due to various pathologies, rigorous studies of the reduced sarcomere quality remain scarce. This can possibly be explained by the lack of an objective tool for analyzing and comparing sarcomere images across biological conditions. Recent developments in second harmonic generation (SHG) microscopy and increasing insight into the interpretation of sarcomere SHG intensity profiles have made SHG microscopy a valuable tool to study microstructural properties of sarcomeres. Typically, sarcomere integrity is analyzed by fitting a set of manually selected, one-dimensional SHG intensity profiles with a supramolecular SHG model. To circumvent this tedious manual selection step, we developed a fully automated image analysis procedure to map the sarcomere disorder for the entire image at once. The algorithm relies on a single-frequency wavelet-based Gabor approach and includes a newly developed normalization procedure allowing for unambiguous data interpretation. The method was validated by showing the correlation between the sarcomere disorder, quantified by the M-band size obtained from manually selected profiles, and the normalized Gabor value ranging from 0 to 1 for decreasing disorder. Finally, to elucidate the applicability of our newly developed protocol, Gabor analysis was used to study the effect of experimental autoimmune encephalomyelitis on the sarcomere regularity. We believe that the technique developed in this work holds great promise for high-throughput, unbiased, and automated image analysis to study sarcomere integrity by SHG microscopy.
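
The core idea, that a Gabor filter tuned to the sarcomere's spatial frequency responds strongly to regular striation and weakly to disordered striation, can be sketched in one dimension (the period, noise model and parameters below are invented for illustration; the paper's method works on full 2-D SHG images with a normalization step):

```python
import numpy as np

def gabor_kernel(freq, sigma, size):
    """1-D complex Gabor kernel tuned to spatial frequency `freq` (cycles/pixel)."""
    x = np.arange(size) - size // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * x)

# Synthetic SHG-like intensity profiles: regular striation vs. phase-disordered striation
rng = np.random.default_rng(3)
period = 10.0                                   # hypothetical pixels per sarcomere
x = np.arange(300)
regular = 1 + np.cos(2 * np.pi * x / period)
noisy = 1 + np.cos(2 * np.pi * x / period + rng.normal(0, 1.5, x.size))

k = gabor_kernel(1 / period, sigma=15, size=61)
resp_reg = np.abs(np.convolve(regular, k, mode="valid")).mean()
resp_noisy = np.abs(np.convolve(noisy, k, mode="valid")).mean()
# The regular profile yields the stronger single-frequency Gabor response
```

The magnitude of the complex response serves as a local regularity score, which the paper normalizes to a 0-1 disorder map over the whole image.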

  5. MAGNETIC RESONANCE IMAGING COMPATIBLE ROBOTIC SYSTEM FOR FULLY AUTOMATED BRACHYTHERAPY SEED PLACEMENT

    Science.gov (United States)

    Muntener, Michael; Patriciu, Alexandru; Petrisor, Doru; Mazilu, Dumitru; Bagga, Herman; Kavoussi, Louis; Cleary, Kevin; Stoianovici, Dan

    2011-01-01

    Objectives To introduce the development of the first magnetic resonance imaging (MRI)-compatible robotic system capable of automated brachytherapy seed placement. Methods An MRI-compatible robotic system was conceptualized and manufactured. The entire robot was built of nonmagnetic and dielectric materials. The key technology of the system is a unique pneumatic motor that was specifically developed for this application. Various preclinical experiments were performed to test the robot for precision and imager compatibility. Results The robot was fully operational within all closed-bore MRI scanners. Compatibility tests in scanners of up to 7 Tesla field intensity showed no interference of the robot with the imager. Precision tests in tissue mockups yielded a mean seed placement error of 0.72 ± 0.36 mm. Conclusions The robotic system is fully MRI compatible. The new technology allows for automated and highly accurate operation within MRI scanners and does not deteriorate the MRI quality. We believe that this robot may become a useful instrument for image-guided prostate interventions. PMID:17169653

  6. An Automated Images-to-Graphs Framework for High Resolution Connectomics

    Directory of Open Access Journals (Sweden)

    William R Gray Roncal

    2015-08-01

    Full Text Available Reconstructing a map of neuronal connectivity is a critical challenge in contemporary neuroscience. Recent advances in high-throughput serial section electron microscopy (EM) have produced massive 3D image volumes of nanoscale brain tissue for the first time. The resolution of EM allows for individual neurons and their synaptic connections to be directly observed. Recovering neuronal networks by manually tracing each neuronal process at this scale is unmanageable, and therefore researchers are developing automated image processing modules. Thus far, state-of-the-art algorithms focus only on the solution to a particular task (e.g., neuron segmentation or synapse identification). In this manuscript we present the first fully automated images-to-graphs pipeline (i.e., a pipeline that begins with an imaged volume of neural tissue and produces a brain graph without any human interaction). To evaluate overall performance and select the best parameters and methods, we also develop a metric to assess the quality of the output graphs. We evaluate a set of algorithms and parameters, searching possible operating points to identify the best available brain graph for our assessment metric. Finally, we deploy a reference end-to-end version of the pipeline on a large, publicly available data set. This provides a baseline result and framework for community analysis and future algorithm development and testing. All code and data derivatives have been made publicly available toward eventually unlocking new biofidelic computational primitives and understanding of neuropathologies.

  7. Automated Adaptive Brightness in Wireless Capsule Endoscopy Using Image Segmentation and Sigmoid Function.

    Science.gov (United States)

    Shrestha, Ravi; Mohammed, Shahed K; Hasan, Md Mehedi; Zhang, Xuechao; Wahid, Khan A

    2016-08-01

    Wireless capsule endoscopy (WCE) plays an important role in the diagnosis of gastrointestinal (GI) diseases by capturing images of the human small intestine. Accurate diagnosis of endoscopic images depends heavily on the quality of the captured images. Along with frame rate, the brightness of the image is an important parameter that influences image quality, which motivates the design of an efficient illumination system. Such a design involves the choice and placement of a proper light source and its ability to illuminate the GI surface with proper brightness. Light-emitting diodes (LEDs) are normally used as sources, with modulated pulses controlling the LEDs' brightness. In practice, instances of under- and over-illumination are very common in WCE: the former produces dark images, while the latter produces bright images at high power consumption. In this paper, we propose a low-power and efficient illumination system based on an automated brightness algorithm. The scheme is adaptive in nature, i.e., the brightness level is controlled automatically in real time while the images are being captured. Each captured image is segmented into four equal regions and the brightness level of each region is calculated. An adaptive sigmoid function is then used to find the optimized brightness level, and accordingly a new value of the duty cycle of the modulated pulse is generated for capturing future images. The algorithm is fully implemented in a capsule prototype and tested with endoscopic images. Commercial capsules such as Pillcam and Mirocam were also used in the experiment. The results show that the proposed algorithm controls the brightness level well according to the environmental conditions, and as a result good-quality images are captured at an average brightness level of 40%, which saves capsule power consumption. PMID:27333609
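
The control loop described above, quadrant segmentation, brightness measurement, and a sigmoid mapping from brightness error to LED duty cycle, can be sketched as follows (the target level, sigmoid gain and duty-cycle limits are invented placeholders, not the paper's parameters):

```python
import numpy as np

def next_duty_cycle(frame, target=128.0, k=0.05, d_min=0.05, d_max=0.95):
    """Split the frame into four quadrants, average their brightness, and map
    the brightness error through a sigmoid to the LED duty cycle for the next frame."""
    h, w = frame.shape
    quads = [frame[:h//2, :w//2], frame[:h//2, w//2:],
             frame[h//2:, :w//2], frame[h//2:, w//2:]]
    brightness = np.mean([q.mean() for q in quads])
    err = target - brightness                       # >0: too dark -> more light
    duty = d_min + (d_max - d_min) / (1.0 + np.exp(-k * err))
    return brightness, duty

dark = np.full((240, 320), 40.0)      # under-illuminated frame
bright = np.full((240, 320), 220.0)   # over-illuminated frame
_, duty_dark = next_duty_cycle(dark)
_, duty_bright = next_duty_cycle(bright)
# duty_dark approaches d_max (raise illumination); duty_bright approaches d_min
```

The sigmoid keeps the response smooth near the target and saturated at the extremes, so the LED drive never overshoots its allowed duty-cycle range.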

  8. Automated segmentation and classification of multispectral magnetic resonance images of brain using artificial neural networks.

    Science.gov (United States)

    Reddick, W E; Glass, J O; Cook, E N; Elkin, T D; Deaton, R J

    1997-12-01

    We present a fully automated process for segmentation and classification of multispectral magnetic resonance (MR) images. This hybrid neural network method uses a Kohonen self-organizing neural network for segmentation and a multilayer backpropagation neural network for classification. To separate different tissue types, this process uses the standard T1-, T2-, and PD-weighted MR images acquired in clinical examinations. Volumetric measurements of brain structures, relative to intracranial volume, were calculated for an index transverse section in 14 normal subjects (median age 25 years; seven male, seven female). This index slice was at the level of the basal ganglia, included both genu and splenium of the corpus callosum, and generally, showed the putamen and lateral ventricle. An intraclass correlation of this automated segmentation and classification of tissues with the accepted standard of radiologist identification for the index slice in the 14 volunteers demonstrated coefficients (ri) of 0.91, 0.95, and 0.98 for white matter, gray matter, and ventricular cerebrospinal fluid (CSF), respectively. An analysis of variance for estimates of brain parenchyma volumes in five volunteers imaged five times each demonstrated high intrasubject reproducibility with a significance of at least p < 0.05 for white matter, gray matter, and white/gray partial volumes. The population variation, across 14 volunteers, demonstrated little deviation from the averages for gray and white matter, while partial volume classes exhibited a slightly higher degree of variability. This fully automated technique produces reliable and reproducible MR image segmentation and classification while eliminating intra- and interobserver variability. PMID:9533591

  9. Detailed interrogation of trypanosome cell biology via differential organelle staining and automated image analysis

    Directory of Open Access Journals (Sweden)

    Wheeler Richard J

    2012-01-01

    Full Text Available Abstract Background Many trypanosomatid protozoa are important human or animal pathogens. The well-defined morphology and precisely choreographed division of trypanosomatid cells make morphological analysis a powerful tool for analyzing the effects of mutations, chemical insults and changes between lifecycle stages. High-throughput image analysis of micrographs has the potential to accelerate the collection of quantitative morphological data. Trypanosomatid cells have two large DNA-containing organelles, the kinetoplast (mitochondrial DNA) and the nucleus, which provide useful markers for morphometric analysis; however, they need to be accurately identified and often lie in close proximity, which presents a technical challenge. Accurate identification and quantitation of the DNA content of these organelles is a central requirement of any automated analysis method. Results We have developed a technique based on double staining of the DNA with a minor-groove-binding stain (4′,6-diamidino-2-phenylindole, DAPI) and a base-pair-intercalating stain (propidium iodide, PI, or SYBR green), followed by color deconvolution. This allows the kinetoplast and nuclear DNA to be identified in the micrograph based on whether the organelle's DNA has a more A-T- or G-C-rich composition. Following unambiguous identification of the kinetoplasts and nuclei, the resulting images are amenable to quantitative automated analysis of kinetoplast and nucleus number and DNA content. On this foundation we have developed a demonstrative analysis tool capable of automatically measuring kinetoplast and nucleus DNA content, size and position, and cell body shape, length and width. Conclusions Our approach to DNA staining and automated quantitative analysis of trypanosomatid morphology accelerates the analysis of trypanosomatid protozoa. We have validated this approach using Leishmania mexicana, Crithidia fasciculata, and wild-type and mutant Trypanosoma brucei. Automated analysis of T. brucei

  10. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Ani eEloyan

    2012-08-01

    Full Text Available Successful automated diagnoses of attention deficit hyperactive disorder (ADHD using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions, CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.
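
The CUR decomposition highlighted above factors a matrix through a subset of its actual columns and rows, which keeps the factors interpretable: each retained column is a real connectivity feature rather than an abstract singular vector. The paper's exact sampling scheme is not reproduced here; the sketch below uses a common norm-based sampling variant, so the function name and sampling rule are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cur_decompose(A, k, rng=None):
    """Minimal CUR sketch: pick k columns and k rows of A with
    probability proportional to their squared norms, then solve
    for the small linking matrix U so that C @ U @ R approximates A."""
    rng = np.random.default_rng(rng)
    col_p = (A ** 2).sum(axis=0); col_p /= col_p.sum()
    row_p = (A ** 2).sum(axis=1); row_p /= row_p.sum()
    cols = rng.choice(A.shape[1], size=k, replace=False, p=col_p)
    rows = rng.choice(A.shape[0], size=k, replace=False, p=row_p)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # best linking matrix
    return C, U, R

# Exactly rank-3 test matrix: 3 sampled columns/rows suffice to reconstruct it
gen = np.random.default_rng(0)
A = gen.standard_normal((40, 3)) @ gen.standard_normal((3, 30))
C, U, R = cur_decompose(A, k=3, rng=0)
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

Because C spans the column space and R the row space of the rank-3 matrix, the reconstruction is exact up to floating-point error; on real rs-fc data one would take k well below the ambient dimension and accept an approximation.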

  11. Experiences and achievements in automated image sequence orientation for close-range photogrammetric projects

    Science.gov (United States)

    Barazzetti, Luigi; Forlani, Gianfranco; Remondino, Fabio; Roncella, Riccardo; Scaioni, Marco

    2011-07-01

    Automatic image orientation of close-range image blocks is becoming a task of increasing importance in the practice of photogrammetry. Although image orientation procedures based on interactive tie point measurements do not require any preferential block structure, the use of structured sequences can help to accomplish this task in an automated way. Automatic orientation of image sequences has been widely investigated in the Computer Vision community, where the method is generally named "Structure from Motion" (SfM) or "Structure and Motion". These terms refer to the simultaneous estimation of the image orientation parameters and the 3D object points of a scene from a set of image correspondences. Such approaches, which generally disregard camera calibration data, do not ensure an accurate 3D reconstruction, which is a requirement for photogrammetric projects. The major contribution of SfM is therefore viewed in the photogrammetric community as a powerful tool to automatically provide a dense set of tie points as well as initial parameters for a final rigorous bundle adjustment. The paper, after a brief overview of automatic procedures for close-range image sequence orientation, will show some characteristic examples. Although powerful and reliable image orientation solutions are nowadays available at research level, certain questions are still open. Thus the paper will also report some open issues, like the geometric characteristics of the sequences, the scene's texture and shape, ground constraints (control points and/or free-network adjustment), feature matching techniques, outlier rejection and bundle adjustment models.
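
At the core of any SfM pipeline is the recovery of 3D object points from oriented images. The minimal linear (DLT) triangulation of one tie point from two views can be sketched as follows; the projection matrices and point are synthetic, and this is only the initialization that a rigorous bundle adjustment would refine.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4
    projection matrices and its observed image coordinates (x, y)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenize

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Synthetic check: two cameras with a 1-unit baseline observing a known point
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
```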

  12. Effect of image compression and scaling on automated scoring of immunohistochemical stainings and segmentation of tumor epithelium

    Directory of Open Access Journals (Sweden)

    Konsti Juho

    2012-03-01

    Full Text Available Abstract Background Digital whole-slide scanning of tissue specimens produces large images demanding increasing storage capacity. To reduce the need for extensive data storage systems, image files can be compressed and scaled down. The aim of this article is to study the effect of different levels of image compression and scaling on automated image analysis of immunohistochemical (IHC) stainings and automated tumor segmentation. Methods Two tissue microarray (TMA) slides containing 800 samples of breast cancer tissue immunostained against Ki-67 protein and two TMA slides containing 144 samples of colorectal cancer immunostained against EGFR were digitized with a whole-slide scanner. The TMA images were JPEG2000 wavelet compressed with four compression ratios: lossless, and 1:12, 1:25 and 1:50 lossy compression. Each of the compressed breast cancer images was furthermore scaled down to 1:1, 1:2, 1:4, 1:8, 1:16, 1:32, 1:64 or 1:128. Breast cancer images were analyzed using an algorithm that quantitates the extent of staining in Ki-67 immunostained images, and EGFR immunostained colorectal cancer images were analyzed with an automated tumor segmentation algorithm. The automated tools were validated by comparing the results from losslessly compressed and non-scaled images with results from conventional visual assessments. Percentage agreement and kappa statistics were calculated between results from compressed and scaled images and results from lossless and non-scaled images. Results Both of the studied image analysis methods showed good agreement between visual and automated results. In the automated IHC quantification, an agreement of over 98% and a kappa value of over 0.96 were observed between losslessly compressed and non-scaled images and combined compression ratios up to 1:50 and scaling down to 1:8. In automated tumor segmentation, an agreement of over 97% and a kappa value of over 0.93 were observed between losslessly compressed images and
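
The percentage agreement and Cohen's kappa used to compare compressed/scaled results against the lossless baseline can be computed as below; the two rating lists are invented for illustration.

```python
def agreement_and_kappa(a, b):
    """Percentage agreement and Cohen's kappa for two paired categorical
    ratings (e.g. per-sample scores from lossless vs. compressed images).
    Kappa corrects the observed agreement for chance agreement."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    cats = set(a) | set(b)
    p_exp = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return p_obs, (p_obs - p_exp) / (1 - p_exp)

# Hypothetical per-sample calls from two analysis runs of 100 TMA cores
a = ["pos"] * 45 + ["neg"] * 55
b = ["pos"] * 42 + ["neg"] * 3 + ["neg"] * 50 + ["pos"] * 5
p, k = agreement_and_kappa(a, b)   # 92% agreement, kappa ≈ 0.84
```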

  13. Automated system for acquisition and image processing for the control and monitoring boned nopal

    Science.gov (United States)

    Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.

    2013-11-01

    This paper describes the design and fabrication of a system for image acquisition and processing to control the removal of thorns from the nopal vegetable (Opuntia ficus-indica) in an automated machine that uses pulses from an Nd:YAG laser. The areolas, the areas on the bark of the nopal where thorns grow, are located by applying segmentation algorithms to the images obtained by a CCD. Once the position of the areolas is known, coordinates are sent to a motor system that steers the laser to each areola and removes its thorns. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware performs the tasks of acquisition, preprocessing, segmentation, recognition and interpretation of the areolas. The system succeeds in identifying the areolas and generating a table of their coordinates, which is sent to the galvo motor system that controls the laser for thorn removal.
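
The areola-location step reduces to finding connected regions in the segmented CCD image and emitting their centroids as the coordinate table for the motor controller. A minimal stand-in (not the authors' firmware) using BFS connected-component labeling:

```python
import numpy as np
from collections import deque

def blob_centroids(mask):
    """4-connected component labeling by BFS; returns one (row, col)
    centroid per blob -- the coordinate table that would be sent to
    the galvo/motor controller."""
    seen = np.zeros_like(mask, dtype=bool)
    centroids = []
    h, w = mask.shape
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                q, pix = deque([(r, c)]), []
                seen[r, c] = True
                while q:
                    y, x = q.popleft()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys, xs = zip(*pix)
                centroids.append((sum(ys) / len(pix), sum(xs) / len(pix)))
    return centroids

# Toy segmented image with two "areolas"
img = np.zeros((8, 8), dtype=bool)
img[1:3, 1:3] = True   # blob 1, centroid (1.5, 1.5)
img[5:7, 4:7] = True   # blob 2, centroid (5.5, 5.0)
coords = blob_centroids(img)
```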

  14. Newly found pulmonary pathophysiology from automated breath-hold perfusion-SPECT-CT fusion image

    International Nuclear Information System (INIS)

    Pulmonary perfusion single photon emission computed tomography (SPECT)-CT fusion imaging contributes greatly to objective and detailed correlation between lung morphology and perfusion impairment in various lung diseases. However, traditional perfusion SPECT obtained during resting breathing usually shows significant mis-registration on fusion images with conventional CT obtained during the deep-inspiratory phase. Respiratory lung motion also causes other adverse effects, such as blurring or smearing of small perfusion defects. To resolve these disadvantages of traditional perfusion SPECT, an innovative method of deep-inspiratory breath-hold (DIBrH) SPECT scanning was developed at the Nuclear Medicine Institute of Yamaguchi University Hospital. This review article briefly describes the new findings of pulmonary pathophysiology that have been revealed by detailed lung morphologic-perfusion correlation on automated, reliable DIBrH perfusion SPECT-CT fusion images. (author)

  15. PAPNET TM: an automated cytology screener using image processing and neural networks

    Science.gov (United States)

    Luck, Randall L.; Tjon-Fo-Sang, Robert; Mango, Laurie; Recht, Joel R.; Lin, Eunice; Knapp, James

    1992-04-01

    The Pap smear is the universally accepted test used for cervical cancer screening. In the United States alone, about 50 to 70 million of these tests are done annually. Every one of them is done manually by a cytotechnologist looking at cells on a glass slide under a microscope. This paper describes PAPNET, an automated microscope system that combines a high-speed image processor and a neural network processor. The image processor performs an algorithmic primary screen of each image. The neural network performs a non-algorithmic secondary classification of candidate cells. The final output of the system is not a diagnosis; rather, it is a display screen of suspicious cells from which a decision about the status of the case can be made.

  16. Results of Automated Retinal Image Analysis for Detection of Diabetic Retinopathy from the Nakuru Study, Kenya

    DEFF Research Database (Denmark)

    Juul Bøgelund Hansen, Morten; Abramoff, M. D.; Folk, J. C.;

    2015-01-01

    Objective Digital retinal imaging is an established method of screening for diabetic retinopathy (DR). It has been established that currently about 1% of blindness or visual impairment worldwide is due to DR. However, the increasing prevalence of diabetes mellitus and DR is creating an increased...... workload on those with expertise in grading retinal images. Safe and reliable automated analysis of retinal images may support screening services worldwide. This study aimed to compare the Iowa Detection Program's (IDP) ability to detect diabetic eye diseases (DED) to human grading carried out at Moorfields...... gave an AUC of 0.878 (95% CI 0.850-0.905). It showed a negative predictive value of 98%. The IDP missed no vision-threatening retinopathy in any patient, and none of the false negative cases met criteria for treatment. Conclusions In this epidemiological sample, the IDP's grading was comparable to that...
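
The reported AUC of 0.878 is the area under the ROC curve, which equals the probability that a randomly chosen diseased case receives a higher score than a randomly chosen healthy one. A minimal computation via the Mann-Whitney statistic (scores invented for illustration):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs where the positive case
    outscores the negative one, counting ties as half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical grader confidence scores for diseased vs. healthy eyes
a = auc([0.9, 0.8, 0.7], [0.6, 0.8, 0.2])
```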

  17. The use of the Kalman filter in the automated segmentation of EIT lung images

    International Nuclear Information System (INIS)

    In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time images of impedance inside a body at low spatial but high temporal resolution. Recovering impedance itself constitutes a nonlinear ill-posed inverse problem; therefore, the problem is usually linearized, which produces impedance-change images rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide a mathematical reasoning behind the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. Next, we use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Next, we augment the Kalman filter by incorporating an adaptive foreground detection system to provide the boundary contours for the Kalman filter to carry out the tracking of the conductivity changes as the lungs undergo deformation in a respiratory cycle. The proposed method has been validated using performance statistics such as misclassified area and false positive rate, and compared to previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging. (paper)
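
The predict/update cycle that makes the Kalman filter suitable for tracking slowly deforming boundaries can be shown in one dimension. This constant-velocity sketch tracks a single boundary coordinate across frames; the process and measurement noise parameters q and r are illustrative, not from the paper.

```python
import numpy as np

def kalman_track(zs, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter tracking one boundary coordinate
    across frames (a 1-D stand-in for contour tracking)."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: pos += vel
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.array([zs[0], 0.0]), np.eye(2)
    out = []
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q        # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)  # update with measurement
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(1)
true_pos = np.linspace(10, 20, 50)           # boundary drifting over a cycle
noisy = true_pos + rng.normal(0, 0.5, 50)
filtered = kalman_track(noisy)
```

After a short transient the filtered track should sit closer to the true boundary than the raw measurements.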

  18. An integrated and interactive decision support system for automated melanoma recognition of dermoscopic images.

    Science.gov (United States)

    Rahman, M M; Bhattacharya, P

    2010-09-01

    This paper presents an integrated and interactive decision support system for the automated melanoma recognition of the dermoscopic images based on image retrieval by content and multiple expert fusion. In this context, the ultimate aim is to support the decision making by retrieving and displaying the relevant past cases as well as predicting the image categories (e.g., melanoma, benign and dysplastic nevi) by combining outputs from different classifiers. However, the most challenging aspect in this domain is to detect the lesion from the healthy background skin and extract the lesion-specific local image features. A thresholding-based segmentation method is applied on the intensity images generated from two different schemes to detect the lesion. For the fusion-based image retrieval and classification, the lesion-specific local color and texture features are extracted and represented in the form of the mean and variance-covariance of color channels and in a combined feature space. The performance is evaluated by using both the precision-recall and classification accuracies. Experimental results on a dermoscopic image collection demonstrate the effectiveness of the proposed system and show the viability of a real-time clinical application. PMID:19942406
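
The lesion-specific color features described (mean and variance-covariance of the color channels over the segmented lesion) can be sketched directly; the image and mask here are synthetic.

```python
import numpy as np

def lesion_color_features(img, mask):
    """Mean and covariance of the color channels over the segmented
    lesion pixels, concatenated into one feature vector for
    retrieval/classification."""
    pix = img[mask]                       # (n_pixels, 3)
    mu = pix.mean(axis=0)
    cov = np.cov(pix, rowvar=False)
    iu = np.triu_indices(3)
    return np.concatenate([mu, cov[iu]])  # 3 means + 6 unique (co)variances

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (32, 32, 3)).astype(float)
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True                   # toy "lesion" region
feat = lesion_color_features(img, mask)
```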

  19. Automating quality assurance of digital linear accelerators using a radioluminescent phosphor coated phantom and optical imaging

    Science.gov (United States)

    Jenkins, Cesare H.; Naczynski, Dominik J.; Yu, Shu-Jung S.; Yang, Yong; Xing, Lei

    2016-09-01

    Performing mechanical and geometric quality assurance (QA) tests for medical linear accelerators (LINAC) is a predominantly manual process that consumes significant time and resources. In order to alleviate this burden this study proposes a novel strategy to automate the process of performing these tests. The autonomous QA system consists of three parts: (1) a customized phantom coated with radioluminescent material; (2) an optical imaging system capable of visualizing the incidence of the radiation beam, light field or lasers on the phantom; and (3) software to process the captured signals. The radioluminescent phantom, which enables visualization of the radiation beam on the same surface as the light field and lasers, is placed on the couch and imaged while a predefined treatment plan is delivered from the LINAC. The captured images are then processed to self-calibrate the system and perform measurements for evaluating light field/radiation coincidence, jaw position indicators, cross-hair centering, treatment couch position indicators and localizing laser alignment. System accuracy is probed by intentionally introducing errors and by comparing with current clinical methods. The accuracy of self-calibration is evaluated by examining measurement repeatability under fixed and variable phantom setups. The integrated system was able to automatically collect, analyze and report the results for the mechanical alignment tests specified by TG-142. The average difference between introduced and measured errors was 0.13 mm. The system was shown to be consistent with current techniques. Measurement variability increased slightly from 0.1 mm to 0.2 mm when the phantom setup was varied, but no significant difference in the mean measurement value was detected. Total measurement time was less than 10 minutes for all tests as a result of automation. The system’s unique features of a phosphor-coated phantom and fully automated, operator independent self-calibration offer the

  20. Automating quality assurance of digital linear accelerators using a radioluminescent phosphor coated phantom and optical imaging.

    Science.gov (United States)

    Jenkins, Cesare H; Naczynski, Dominik J; Yu, Shu-Jung S; Yang, Yong; Xing, Lei

    2016-09-01

    Performing mechanical and geometric quality assurance (QA) tests for medical linear accelerators (LINAC) is a predominantly manual process that consumes significant time and resources. In order to alleviate this burden this study proposes a novel strategy to automate the process of performing these tests. The autonomous QA system consists of three parts: (1) a customized phantom coated with radioluminescent material; (2) an optical imaging system capable of visualizing the incidence of the radiation beam, light field or lasers on the phantom; and (3) software to process the captured signals. The radioluminescent phantom, which enables visualization of the radiation beam on the same surface as the light field and lasers, is placed on the couch and imaged while a predefined treatment plan is delivered from the LINAC. The captured images are then processed to self-calibrate the system and perform measurements for evaluating light field/radiation coincidence, jaw position indicators, cross-hair centering, treatment couch position indicators and localizing laser alignment. System accuracy is probed by intentionally introducing errors and by comparing with current clinical methods. The accuracy of self-calibration is evaluated by examining measurement repeatability under fixed and variable phantom setups. The integrated system was able to automatically collect, analyze and report the results for the mechanical alignment tests specified by TG-142. The average difference between introduced and measured errors was 0.13 mm. The system was shown to be consistent with current techniques. Measurement variability increased slightly from 0.1 mm to 0.2 mm when the phantom setup was varied, but no significant difference in the mean measurement value was detected. Total measurement time was less than 10 minutes for all tests as a result of automation. The system's unique features of a phosphor-coated phantom and fully automated, operator independent self-calibration offer the

  1. Contention-based forwarding for street scenarios

    OpenAIRE

    Füßler, Holger; Hartenstein, Hannes; Mauve, Martin; Effelsberg, Wolfgang; Widmer, Jörg

    2004-01-01

    In this paper, we propose to apply Contention-Based Forwarding (CBF) to Vehicular Ad Hoc Networks (VANETs). CBF is a greedy position-based forwarding algorithm that does not require proactive transmission of beacon messages. CBF performance is analyzed using realistic movement patterns of vehicles on a highway. We show by means of simulation that CBF as well as traditional position-based routing (PBR) achieve a delivery rate of almost 100% given that connectivity ...

  2. Advances in hardware, software, and automation for 193nm aerial image measurement systems

    Science.gov (United States)

    Zibold, Axel M.; Schmid, R.; Seyfarth, A.; Waechter, M.; Harnisch, W.; Doornmalen, H. v.

    2005-05-01

    A new, second generation AIMS fab 193 system has been developed which is capable of emulating lithographic imaging of any type of reticle, such as binary and phase shift masks (PSM), including resolution enhancement technologies (RET) such as optical proximity correction (OPC) or scatter bars. The system emulates the imaging process by adjustment of the lithography-equivalent illumination and imaging conditions of 193nm wafer steppers, including circular, annular, dipole and quadrupole type illumination modes. The AIMS fab 193 allows a rapid prediction of wafer printability of critical mask features, including dense patterns and contacts, defects or repairs, by acquiring through-focus image stacks by means of a CCD camera followed by quantitative image analysis. Moreover, the technology can be readily applied to directly determine the process window of a given mask under stepper imaging conditions. Since data acquisition is performed electronically, AIMS in many applications replaces the need for costly and time-consuming wafer prints using a wafer stepper/scanner followed by CD SEM resist or wafer analysis. The AIMS fab 193 second generation system is designed for 193nm lithography mask printing predictability down to the 65nm node. In addition to hardware improvements, a new modular AIMS software is introduced, allowing for a fully automated operation mode. Multiple pre-defined points can be visited and through-focus AIMS measurements can be executed automatically in a recipe-based mode. To increase the effectiveness of the automated operation mode, the throughput of the system to locate the area of interest and to acquire the through-focus images is increased by almost a factor of two in comparison with the first-generation AIMS systems. In addition, a new software plug-in concept is realised for the tools. One new feature, "Global CD Map", has been successfully introduced, enabling automated investigation of global mask quality based on the local determination of

  3. Automated measurement of CT noise in patient images with a novel structure coherence feature.

    Science.gov (United States)

    Chun, Minsoo; Choi, Young Hun; Kim, Jong Hyo

    2015-12-01

    While the assessment of CT noise constitutes an important task for the optimization of scan protocols in clinical routine, the majority of noise measurements in practice still rely on manual operation, hence limiting their efficiency and reliability. This study presents an algorithm for the automated measurement of CT noise in patient images with a novel structure coherence feature. The proposed algorithm consists of a four-step procedure including subcutaneous fat tissue selection, the calculation of the structure coherence feature, the determination of homogeneous ROIs, and the estimation of the average noise level. In an evaluation with 94 CT scans (16 517 images) of pediatric and adult patients along with the participation of two radiologists, ROIs were placed on a homogeneous fat region at 99.46% accuracy, and the agreement of the automated noise measurements with the radiologists' reference noise measurements (PCC = 0.86) was substantially higher than the within- and between-rater agreements of noise measurements (PCC_within = 0.75, PCC_between = 0.70). In addition, the absolute noise level measurements matched closely the theoretical noise levels generated by a reduced-dose simulation technique. Our proposed algorithm has the potential to be used for examining the appropriateness of radiation dose and the image quality of CT protocols for research purposes as well as clinical routine. PMID:26561914
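
The idea of measuring noise only inside homogeneous ROIs can be approximated by rejecting patches with strong gradients, a crude stand-in for the paper's structure-coherence feature. The patch size and threshold below are illustrative assumptions:

```python
import numpy as np

def estimate_noise(img, patch=8, coherence_thresh=2.0):
    """Estimate image noise as the median standard deviation of patches
    whose mean absolute gradient is low, i.e. patches likely to be
    homogeneous tissue rather than structure."""
    gy, gx = np.gradient(img)
    grad = np.abs(gy) + np.abs(gx)
    stds = []
    h, w = img.shape
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            if grad[r:r + patch, c:c + patch].mean() < coherence_thresh:
                stds.append(img[r:r + patch, c:c + patch].std())
    return float(np.median(stds)) if stds else None

# Synthetic "homogeneous fat" region: flat 100 HU plus sigma = 5 noise
rng = np.random.default_rng(3)
flat = np.full((64, 64), 100.0) + rng.normal(0, 5.0, (64, 64))
sigma = estimate_noise(flat, coherence_thresh=10.0)
```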

  4. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    Science.gov (United States)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and percent agreement between each two regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.9997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.

  5. Automated segmentation of oral mucosa from wide-field OCT images (Conference Presentation)

    Science.gov (United States)

    Goldan, Ryan N.; Lee, Anthony M. D.; Cahill, Lucas; Liu, Kelly; MacAulay, Calum; Poh, Catherine F.; Lane, Pierre

    2016-03-01

    Optical Coherence Tomography (OCT) can discriminate morphological tissue features important for oral cancer detection such as the presence or absence of basement membrane and epithelial thickness. We previously reported an OCT system employing a rotary-pullback catheter capable of in vivo, rapid, wide-field (up to 90 × 2.5 mm²) imaging in the oral cavity. Due to the size and complexity of these OCT data sets, rapid automated image processing software that immediately displays important tissue features is required to facilitate prompt bedside clinical decisions. We present an automated segmentation algorithm capable of detecting the epithelial surface and basement membrane in 3D OCT images of the oral cavity. The algorithm was trained using volumetric OCT data acquired in vivo from a variety of tissue types and histology-confirmed pathologies spanning normal through cancer (8 sites, 21 patients). The algorithm was validated using a second dataset of similar size and tissue diversity. We demonstrate application of the algorithm to an entire OCT volume to map epithelial thickness, and detection of the basement membrane, over the tissue surface. These maps may be clinically useful for delineating pre-surgical tumor margins, or for biopsy site guidance.
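
Once the epithelial surface and basement membrane are segmented, an epithelial thickness map is just the per-A-scan depth difference times the axial voxel size. A toy sketch on a synthetic volume (threshold-based boundaries stand in for the paper's trained segmentation, and the 4 µm voxel size is an assumption):

```python
import numpy as np

def thickness_map(vol, thresh=0.5, dz_mm=0.004):
    """Per-A-scan thickness: the first supra-threshold voxel along depth
    is taken as the tissue surface, the last one as the lower boundary;
    thickness = (last - first) * axial voxel size."""
    above = vol > thresh                                  # (depth, rows, cols)
    surf = above.argmax(axis=0)                           # first hit along depth
    last = vol.shape[0] - 1 - above[::-1].argmax(axis=0)  # last hit along depth
    return (last - surf) * dz_mm

# Synthetic volume: a bright slab from depth index 10 to 29 everywhere
vol = np.zeros((64, 16, 16))
vol[10:30] = 1.0
tmap = thickness_map(vol)   # uniform 19 * 0.004 = 0.076 mm
```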

  6. Automated measurement of CT noise in patient images with a novel structure coherence feature

    International Nuclear Information System (INIS)

    While the assessment of CT noise constitutes an important task for the optimization of scan protocols in clinical routine, the majority of noise measurements in practice still rely on manual operation, hence limiting their efficiency and reliability. This study presents an algorithm for the automated measurement of CT noise in patient images with a novel structure coherence feature. The proposed algorithm consists of a four-step procedure including subcutaneous fat tissue selection, the calculation of the structure coherence feature, the determination of homogeneous ROIs, and the estimation of the average noise level. In an evaluation with 94 CT scans (16 517 images) of pediatric and adult patients along with the participation of two radiologists, ROIs were placed on a homogeneous fat region at 99.46% accuracy, and the agreement of the automated noise measurements with the radiologists’ reference noise measurements (PCC = 0.86) was substantially higher than the within- and between-rater agreements of noise measurements (PCC_within = 0.75, PCC_between = 0.70). In addition, the absolute noise level measurements matched closely the theoretical noise levels generated by a reduced-dose simulation technique. Our proposed algorithm has the potential to be used for examining the appropriateness of radiation dose and the image quality of CT protocols for research purposes as well as clinical routine. (paper)

  7. Automated measurement of CT noise in patient images with a novel structure coherence feature

    Science.gov (United States)

    Chun, Minsoo; Choi, Young Hun; Hyo Kim, Jong

    2015-12-01

    While the assessment of CT noise constitutes an important task for the optimization of scan protocols in clinical routine, the majority of noise measurements in practice still rely on manual operation, hence limiting their efficiency and reliability. This study presents an algorithm for the automated measurement of CT noise in patient images with a novel structure coherence feature. The proposed algorithm consists of a four-step procedure including subcutaneous fat tissue selection, the calculation of the structure coherence feature, the determination of homogeneous ROIs, and the estimation of the average noise level. In an evaluation with 94 CT scans (16 517 images) of pediatric and adult patients along with the participation of two radiologists, ROIs were placed on a homogeneous fat region at 99.46% accuracy, and the agreement of the automated noise measurements with the radiologists’ reference noise measurements (PCC = 0.86) was substantially higher than the within- and between-rater agreements of noise measurements (PCC_within = 0.75, PCC_between = 0.70). In addition, the absolute noise level measurements matched closely the theoretical noise levels generated by a reduced-dose simulation technique. Our proposed algorithm has the potential to be used for examining the appropriateness of radiation dose and the image quality of CT protocols for research purposes as well as clinical routine.

  8. A molecular scanner to automate proteomic research and to display proteome images.

    Science.gov (United States)

    Binz, P A; Müller, M; Walther, D; Bienvenut, W V; Gras, R; Hoogland, C; Bouchet, G; Gasteiger, E; Fabbretti, R; Gay, S; Palagi, P; Wilkins, M R; Rouge, V; Tonella, L; Paesano, S; Rossellat, G; Karmime, A; Bairoch, A; Sanchez, J C; Appel, R D; Hochstrasser, D F

    1999-11-01

    Identification and characterization of all proteins expressed by a genome in biological samples represent major challenges in proteomics. Today's commonly used high-throughput approaches combine two-dimensional electrophoresis (2-DE) with peptide mass fingerprinting (PMF) analysis. Although automation is often possible, a number of limitations still adversely affect the rate of protein identification and annotation in 2-DE databases: the sequential excision process of pieces of gel containing protein; the enzymatic digestion step; the interpretation of mass spectra (reliability of identifications); and the manual updating of 2-DE databases. We present a highly automated method that generates a fully annotated 2-DE map. Using a parallel process, all proteins of a 2-DE gel are first simultaneously digested proteolytically and electro-transferred onto a poly(vinylidene difluoride) membrane. The membrane is then directly scanned by MALDI-TOF MS. After automated protein identification from the obtained peptide mass fingerprints using PeptIdent software (http://www.expasy.ch/tools/peptident.html), a fully annotated 2-D map is created on-line. It is a multidimensional representation of a proteome that contains interpreted PMF data in addition to protein identification results. This "MS-imaging" method represents a major step toward the development of a clinical molecular scanner. PMID:10565287
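
Peptide mass fingerprinting, the identification step automated by PeptIdent, matches observed peptide masses against an in-silico tryptic digest of candidate sequences. A minimal sketch (residue masses are standard approximate monoisotopic values; the sequence and tolerance are illustrative):

```python
# Approximate monoisotopic residue masses (Da)
RES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
       "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
       "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
       "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
       "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
WATER, PROTON = 18.01056, 1.00728

def tryptic_peptides(seq):
    """Cleave after K or R, but not before P -- the rule trypsin follows."""
    peps, cur = [], ""
    for i, aa in enumerate(seq):
        cur += aa
        nxt = seq[i + 1] if i + 1 < len(seq) else ""
        if aa in "KR" and nxt != "P":
            peps.append(cur)
            cur = ""
    if cur:
        peps.append(cur)
    return peps

def pmf_match(seq, observed_mz, tol=0.5):
    """Count observed [M+H]+ peaks explained by the in-silico digest."""
    masses = [sum(RES[a] for a in p) + WATER + PROTON
              for p in tryptic_peptides(seq)]
    return sum(any(abs(m - o) <= tol for m in masses) for o in observed_mz)

peps = tryptic_peptides("MKWVTFISLLR")   # ["MK", "WVTFISLLR"]
```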

  9. Automated Image Segmentation And Characterization Technique For Effective Isolation And Representation Of Human Face

    Directory of Open Access Journals (Sweden)

    Rajesh Reddy N

    2014-01-01

    Full Text Available In areas such as defense and forensics, it is necessary to identify the faces of criminals from an already available database. An automated face recognition system involves face isolation, feature extraction and classification techniques. The challenge in a face recognition system lies in isolating the face effectively, as it may be affected by illumination, posture and variation in skin color. Hence it is necessary to develop an effective algorithm that isolates the face from the image. In this paper, an advanced face isolation technique and a feature extraction technique have been proposed.

  10. Automated aortic calcification detection in low-dose chest CT images

    Science.gov (United States)

    Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose noncontrast, non-ECG gated, chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered as true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume scores are 98.46% and 98.28% correlated with the reference mass and volume scores, respectively.
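The Agatston scoring step described above can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the function name is mine, it scores a single slice within a given mask, and it omits the per-lesion connected-component analysis a full Agatston computation performs.

```python
import numpy as np

def agatston_score(slice_hu, aorta_mask, pixel_area_mm2, threshold=160):
    """Simplified Agatston score for one axial slice, restricted to a mask.

    Uses the elevated 160 HU threshold described in the record for
    low-dose scans; the standard density-weight bins are unchanged.
    """
    calc = (slice_hu >= threshold) & aorta_mask
    if not calc.any():
        return 0.0
    peak = slice_hu[calc].max()
    # Standard Agatston density weight, chosen from the peak attenuation.
    if peak >= 400:
        w = 4
    elif peak >= 300:
        w = 3
    elif peak >= 200:
        w = 2
    else:
        w = 1
    area_mm2 = calc.sum() * pixel_area_mm2
    return area_mm2 * w
```

The mass and volume scores would use the same candidate mask, summing mean HU times volume and plain voxel volume respectively.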

  11. Using Statistical Moment Invariants and Entropy in Image Retrieval

    OpenAIRE

    Ismail I. Amr; Mohamed Amin; Passent El-Kafrawy; Sauber, Amr M.

    2010-01-01

    Although content-based image retrieval (CBIR) is not a new subject, it keeps attracting more and more attention as the number of images grows tremendously due to the internet, inexpensive hardware and the automation of image acquisition. One of the applications of CBIR is fetching images from a database. This paper presents a new method for automatic image retrieval using moment invariants and image entropy; our technique can be used to find semi or perfect matches in a query-by-example manner,...
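As an illustration of the two feature types named in the abstract, here is a small numpy sketch computing grey-level histogram entropy and the first two Hu moment invariants. These are the standard textbook definitions, not the authors' code; function names are mine.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def hu_first_two(img):
    """First two Hu moment invariants from normalized central moments."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):          # central moment
        return (((x - cx) ** p) * ((y - cy) ** q) * img).sum()
    def eta(p, q):         # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

Both invariants are unchanged under translation, so the same object at different positions yields the same feature vector.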

  12. Automated radial basis function neural network based image classification system for diabetic retinopathy detection in retinal images

    Science.gov (United States)

    Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude

    2010-02-01

    Diabetic retinopathy (DR) is a chronic eye disease for which early detection is essential to avoid severe outcomes. Image processing of retinal images has emerged as a feasible tool for this early diagnosis. Digital image processing techniques include image classification, a significant technique for detecting abnormality in the eye. Various automated classification systems have been developed in recent years, but most lack high classification accuracy. Artificial neural networks are a widely preferred artificial intelligence technique since they yield superior classification accuracy. In this work, a Radial Basis Function (RBF) neural network based bi-level classification system is proposed to differentiate abnormal DR images from normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis against a probabilistic classifier, namely the Bayesian classifier, shows the superior nature of the neural classifier. Experimental results are promising for the neural classifier in terms of the performance measures.
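A minimal sketch of an RBF-network bi-level (two-class) classifier of the kind described, assuming numpy. The center placement, the kernel width, the regularized least-squares fit of the output weights, and all names are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    # Gaussian activation of each sample w.r.t. each hidden-layer center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def rbf_train(X, y, centers, sigma):
    # Output weights via regularized least squares on hidden activations.
    Phi = rbf_design(X, centers, sigma)
    reg = 1e-6 * np.eye(Phi.shape[1])
    return np.linalg.solve(Phi.T @ Phi + reg, Phi.T @ y)

def rbf_predict(X, centers, sigma, W):
    # Continuous output; threshold at 0.5 for a bi-level decision.
    return rbf_design(X, centers, sigma) @ W
```

In practice the centers would come from clustering the training features (e.g. k-means) rather than being fixed by hand.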

  13. Automating the Analysis of Spatial Grids: A Practical Guide to Data Mining Geospatial Images for Human & Environmental Applications

    CERN Document Server

    Lakshmanan, Valliappa

    2012-01-01

    The ability to create automated algorithms to process gridded spatial data is increasingly important as remotely sensed datasets grow in volume and frequency. Whether in business, social science, ecology, meteorology or urban planning, automated applications that analyze and detect patterns in geospatial data are in growing demand. This book provides students with a foundation in digital image processing and data mining as applied to geospatial datasets. The aim is for readers to be able to devise and implement automated techniques to extract information from spatial grids such as radar, satellite or high-resolution survey imagery.

  14. AI (artificial intelligence) in histopathology--from image analysis to automated diagnosis.

    Science.gov (United States)

    Kayser, Klaus; Görtler, Jürgen; Bogovac, Milica; Bogovac, Aleksandar; Goldmann, Torsten; Vollmer, Ekkehard; Kayser, Gian

    2009-01-01

    The technological progress in digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of a microscopic image as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits the joining of several (if not all) fields of view and their simultaneous visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. Introducing digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangement and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides exceeds that of viewing original glass slides by far. The reason lies in a slower and more difficult sampling procedure, i.e., the selection of information-containing fields of view. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in the following steps: 1. The individual image quality has to be measured, and corrected if necessary. 2. A diagnostic algorithm has to be applied. Such an algorithm has to include both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example in virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves, without any labeling. 5. The pathologist will not be replaced by such a system; on the contrary, he or she will manage and supervise it, i.e., work at a "higher level". Virtual slides are already in use for teaching and continuous

  15. Semi-automated discrimination of retinal pigmented epithelial cells in two-photon fluorescence images of mouse retinas.

    Science.gov (United States)

    Alexander, Nathan S; Palczewska, Grazyna; Palczewski, Krzysztof

    2015-08-01

    Automated image segmentation is a critical step toward achieving a quantitative evaluation of disease states with imaging techniques. Two-photon fluorescence microscopy (TPM) has been employed to visualize the retinal pigmented epithelium (RPE) and provide images indicating the health of the retina. However, segmentation of RPE cells within TPM images is difficult due to small differences in fluorescence intensity between cell borders and cell bodies. Here we present a semi-automated method for segmenting RPE cells that relies upon multiple weak features that differentiate cell borders from the remaining image. These features were scored by a search optimization procedure that built up the cell border in segments around a nucleus of interest. With six images used as a test, our method correctly identified cell borders for 69% of nuclei on average. Performance was strongly dependent upon increasing retinosome content in the RPE. TPM image analysis has the potential of providing improved early quantitative assessments of diseases affecting the RPE. PMID:26309765

  16. Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking

    Science.gov (United States)

    Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward

    2011-01-01

    Purpose: To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools, and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures.
This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk

  17. Automated image-based assay for evaluation of HIV neutralization and cell-to-cell fusion inhibition

    OpenAIRE

    Sheik-Khalil, Enas; Bray, Mark-Anthony; Özkaya Sahin, Gülsen; Scarlatti, Gabriella; Jansson, Marianne; Carpenter, Anne E.; Fenyö, Eva Maria

    2014-01-01

    Background: Standardized techniques to detect HIV-neutralizing antibody responses are of great importance in the search for an HIV vaccine. Methods: Here, we present a high-throughput, high-content automated plaque reduction (APR) assay based on automated microscopy and image analysis that allows evaluation of neutralization and inhibition of cell-cell fusion within the same assay. Neutralization of virus particles is measured as a reduction in the number of fluorescent plaques, and inhibition ...

  18. Knee x-ray image analysis method for automated detection of osteoarthritis.

    Science.gov (United States)

    Shamir, Lior; Ling, Shari M; Scott, William W; Bos, Angelo; Orlov, Nikita; Macura, Tomasz J; Eckley, D Mark; Ferrucci, Luigi; Goldberg, Ilya G

    2009-02-01

    We describe a method for automated detection of radiographic osteoarthritis (OA) in knee X-ray images. The detection is based on the Kellgren-Lawrence (KL) classification grades, which correspond to the different stages of OA severity. The classifier was built using manually classified X-rays, representing the first four KL grades (normal, doubtful, minimal, and moderate). Image analysis is performed by first identifying a set of image content descriptors and image transforms that are informative for the detection of OA in the X-rays and assigning weights to these image features using Fisher scores. Then, a simple weighted nearest neighbor rule is used in order to predict the KL grade to which a given test X-ray sample belongs. The dataset used in the experiment contained 350 X-ray images classified manually by their KL grades. Experimental results show that moderate OA (KL grade 3) and minimal OA (KL grade 2) can be differentiated from normal cases with accuracy of 91.5% and 80.4%, respectively. Doubtful OA (KL grade 1) was detected automatically with a much lower accuracy of 57%. The source code developed and used in this study is available for free download at www.openmicroscopy.org. PMID:19342330
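The two pieces named in the abstract, Fisher-score feature weighting followed by a weighted nearest-neighbor rule, can be sketched with standard definitions (illustrative numpy, not the released source code at www.openmicroscopy.org; names are mine):

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class over within-class variance."""
    classes = np.unique(y)
    overall = X.mean(0)
    num = sum((y == c).sum() * (X[y == c].mean(0) - overall) ** 2
              for c in classes)
    den = sum(((X[y == c] - X[y == c].mean(0)) ** 2).sum(0)
              for c in classes)
    return num / (den + 1e-12)

def weighted_nn_predict(x, X_train, y_train, w):
    """Nearest neighbor under a Fisher-score-weighted squared distance."""
    d = ((X_train - x) ** 2 * w).sum(1)
    return y_train[np.argmin(d)]
```

Informative features (here, ones whose class means differ) receive large weights and dominate the distance, which is the effect the paper relies on.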

  19. Automated segmentation of murine lung tumors in x-ray micro-CT images

    Science.gov (United States)

    Swee, Joshua K. Y.; Sheridan, Clare; de Bruin, Elza; Downward, Julian; Lassailly, Francois; Pizarro, Luis

    2014-03-01

    Recent years have seen micro-CT emerge as a means of providing imaging analysis in pre-clinical studies, with in-vivo micro-CT having been shown to be particularly applicable to the examination of murine lung tumors. Despite this, existing studies have involved substantial human intervention during the image analysis process, with the use of fully-automated aids found to be almost non-existent. We present a new approach to automate the segmentation of murine lung tumors designed specifically for in-vivo micro-CT-based pre-clinical lung cancer studies that addresses the specific requirements of such studies, as well as the limitations human-centric segmentation approaches experience when applied to such micro-CT data. Our approach consists of three distinct stages, and begins by utilizing edge enhancing and vessel enhancing non-linear anisotropic diffusion filters to extract anatomy masks (lung/vessel structure) in a pre-processing stage. Initial candidate detection is then performed through ROI reduction utilizing obtained masks and a two-step automated segmentation approach that aims to extract all disconnected objects within the ROI, and consists of Otsu thresholding, mathematical morphology and marker-driven watershed. False positive reduction is finally performed on initial candidates through random-forest-driven classification using the shape, intensity, and spatial features of candidates. We provide validation of our approach using data from an associated lung cancer study, showing favorable results both in terms of detection (sensitivity=86%, specificity=89%) and structural recovery (Dice Similarity=0.88) when compared against manual specialist annotation.
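Of the pipeline's steps, Otsu thresholding is simple enough to sketch directly. This is a from-scratch numpy version of the standard algorithm (maximize between-class variance over histogram cut points), not the study's implementation:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's threshold: maximize between-class variance of the histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 probability up to each cut
    mu = np.cumsum(p * centers)       # class-0 cumulative mean mass
    mu_t = mu[-1]                     # global mean
    w1 = 1 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = ((mu_t * w0[valid] - mu[valid]) ** 2
                      / (w0[valid] * w1[valid]))
    return centers[np.argmax(sigma_b)]
```

On a strongly bimodal intensity distribution the returned threshold cleanly separates the two modes, which is why it serves as the first cut before morphology and watershed refine the candidates.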

  20. Automated bone segmentation from large field of view 3D MR images of the hip joint

    International Nuclear Information System (INIS)

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head–neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely suitable for future clinical applications such as extracting bone–cartilage interfaces for potential cartilage segmentation. (paper)
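The Dice similarity coefficient used throughout this evaluation is easy to state in code (a plain numpy sketch of the standard definition; the function name is mine):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

Identical masks give 1.0, disjoint masks give 0.0, and partial overlap falls in between, matching the (0.9x) values reported above.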

  1. An automated segmentation method for three-dimensional carotid ultrasound images

    Science.gov (United States)

    Zahalka, Abir; Fenster, Aaron

    2001-04-01

    We have developed an automated segmentation method for three-dimensional vascular ultrasound images. The method consists of two steps: an automated initial contour identification, followed by application of a geometrically deformable model (GDM). The formation of the initial contours requires the input of a single seed point by the user, and was shown to be insensitive to the placement of the seed within a structure. The GDM minimizes contour energy, providing a smoothed final result. It requires only three simple parameters, all with easily selectable values. The algorithm is fast, performing segmentation on a 336×352×200 volume in 25 s when running on a 100 MHz 9500 Power Macintosh prototype. The segmentation algorithm was tested on stenosed vessel phantoms with known geometry, and the segmentation of the cross-sectional areas was found to be within 3% of the true area. The algorithm was also applied to two sets of patient carotid images, one acquired with a mechanical scanner and the other with a freehand scanning system, with good results on both.

  2. The objective evaluation of the phase image: a comparison of different automated methods

    International Nuclear Information System (INIS)

    Patients with suspected or proven coronary artery disease were investigated using both X-ray ventriculography and equilibrium gated radionuclide angiography. In order to diagnose regional wall motion abnormalities, the parametric images obtained by Fourier analysis of the radionuclide images were analysed by different automated methods based on the measurement of the homogeneity of the phase values within the LV ROI. The effects of diastolic frame exclusion, smoothing of the original data, weighting of the phase histogram, use of Bacharach's error-corrected phase distribution functions, and use of different descriptors of the spread of the phase histograms or distribution functions were tested. Receiver operating characteristic (ROC) curves were plotted for each method. The results show that the diagnostic value of the automated methods depends mainly on the way the histograms or distribution functions are described and to a lesser extent on the type of histograms or distribution functions used. The best result is obtained after smoothing, diastolic frame exclusion, weighting the phase histogram by the amplitude and describing it by its standard deviation. Nevertheless, this result is not significantly better than the visual method. (author)
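The best-performing descriptor reported, the amplitude-weighted standard deviation of phase values within the LV ROI, can be sketched as follows (illustrative numpy; array names and the masking convention are my assumptions):

```python
import numpy as np

def weighted_phase_std(phase, amplitude, lv_mask):
    """Standard deviation of LV phase values, weighted by the Fourier
    amplitude at each pixel. A homogeneous phase map gives 0."""
    ph = phase[lv_mask]
    w = amplitude[lv_mask]
    mean = np.average(ph, weights=w)
    var = np.average((ph - mean) ** 2, weights=w)
    return np.sqrt(var)
```

A large value indicates inhomogeneous contraction timing within the ventricle, the marker of a regional wall motion abnormality.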

  3. Automated model-based bias field correction of MR images of the brain.

    Science.gov (United States)

    Van Leemput, K; Maes, F; Vandermeulen, D; Suetens, P

    1999-10-01

    We propose a model-based method for fully automated bias field correction of MR brain images. The MR signal is modeled as a realization of a random process with a parametric probability distribution that is corrupted by a smooth polynomial inhomogeneity or bias field. The method we propose applies an iterative expectation-maximization (EM) strategy that interleaves pixel classification with estimation of class distribution and bias field parameters, improving the likelihood of the model parameters at each iteration. The algorithm, which can handle multichannel data and slice-by-slice constant intensity offsets, is initialized with information from a digital brain atlas about the a priori expected location of tissue classes. This allows full automation of the method without need for user interaction, yielding more objective and reproducible results. We have validated the bias correction algorithm on simulated data and we illustrate its performance on various MR images with important field inhomogeneities. We also relate the proposed algorithm to other bias correction algorithms. PMID:10628948
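One ingredient of the EM scheme, the weighted polynomial fit of the bias field given per-pixel class weights, might look like the following. This is a 2-D simplification with assumed names; the paper's method is 3-D, works on log-intensities, and interleaves this step with classification.

```python
import numpy as np

def fit_bias_field(log_img, weights, order=2):
    """Weighted least-squares fit of a 2-D polynomial bias field.

    `weights` would come from the E-step class posteriors; here any
    non-negative per-pixel weight map works.
    """
    h, w = log_img.shape
    yy, xx = np.mgrid[:h, :w]
    xn = xx.ravel() / max(w - 1, 1)   # normalize coordinates to [0, 1]
    yn = yy.ravel() / max(h - 1, 1)
    cols = [xn ** p * yn ** q
            for p in range(order + 1) for q in range(order + 1 - p)]
    A = np.stack(cols, axis=1)
    sw = np.sqrt(weights.ravel())
    coef, *_ = np.linalg.lstsq(A * sw[:, None],
                               log_img.ravel() * sw, rcond=None)
    return (A @ coef).reshape(log_img.shape)
```

Subtracting the fitted field from the log-image (then exponentiating) removes the smooth inhomogeneity while leaving tissue contrast intact.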

  4. Automated static image analysis as a novel tool in describing the physical properties of dietary fiber

    Directory of Open Access Journals (Sweden)

    Marcin Andrzej KUREK

    2015-01-01

    Full Text Available The growing interest in the usage of dietary fiber in food has created the need for precise tools to describe its physical properties. This research examined two dietary fibers, from oats and beets respectively, in variable particle sizes. The application of automated static image analysis for describing the hydration properties and particle size distribution of dietary fiber was analyzed. Conventional tests for water holding capacity (WHC) were conducted. The particles were measured at two points: dry and after water soaking. The most significant water holding capacity (7.00 g water/g solid) was achieved by the smaller-sized oat fiber; conversely, the water holding capacity of beet fiber was highest (4.20 g water/g solid) at the larger particle size. There was evidence of water absorption increasing with a decrease in particle size for the same fiber source. Very strong correlations were found between particle shape parameters, such as fiber length, straightness and width, and hydration properties measured conventionally. The regression analysis provided the opportunity to assess whether automated static image analysis could be an efficient tool for describing the hydration properties of dietary fiber. The application of the method was validated using a mathematical model verified against conventional WHC measurement results.

  5. Quantitative Assessment of Mouse Mammary Gland Morphology Using Automated Digital Image Processing and TEB Detection.

    Science.gov (United States)

    Blacher, Silvia; Gérard, Céline; Gallez, Anne; Foidart, Jean-Michel; Noël, Agnès; Péqueux, Christel

    2016-04-01

    The assessment of rodent mammary gland morphology is largely used to study the molecular mechanisms driving breast development and to analyze the impact of various endocrine disruptors with putative pathological implications. In this work, we propose a methodology relying on fully automated digital image analysis methods, including image processing and quantification of the whole ductal tree as well as of the terminal end buds. It allows accurate and objective measurement of both growth parameters and fine morphological glandular structures. Mammary gland elongation was characterized by 2 parameters: the length and the epithelial area of the ductal tree. Ductal tree fine structures were characterized by: 1) branch end-point density, 2) branching density, and 3) branch length distribution. The proposed methodology was compared with quantification methods classically used in the literature. This procedure can be transposed to several software packages and thus largely used by scientists studying rodent mammary gland morphology. PMID:26910307
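Two of the fine-structure measures listed, branch end-points and branching points, can be counted on a skeletonized ductal tree with simple 8-neighbour counts. This is a numpy illustration of the standard idea (end-points have one skeleton neighbour, branch points three or more), not the published software:

```python
import numpy as np

def branch_points_and_ends(skel):
    """Count end-points (1 neighbour) and branch-points (>=3 neighbours)
    on a binary skeleton image."""
    s = skel.astype(int)
    p = np.pad(s, 1)
    h, w = s.shape
    # Sum of the 8 neighbours at every pixel via shifted views.
    nb = sum(p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if (dy, dx) != (0, 0))
    ends = (s == 1) & (nb == 1)
    branches = (s == 1) & (nb >= 3)
    return int(ends.sum()), int(branches.sum())
```

Dividing these counts by the epithelial area or ductal length yields the densities used as elongation-independent descriptors.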

  6. Automated image analyzer for batch processing of CR-39 foils for fast neutron dosimetry

    International Nuclear Information System (INIS)

    An automated image analysis system has been developed for counting tracks generated in CR-39 detectors after processing by electro-chemical etching (ECE). The tracks are caused by exposure to fast neutrons and are used to measure the neutron dose received by radiation workers. The system is capable of batch processing a group of 20 foils in a single cycle, making the measurement process elegant and efficient. The system thus provides a marked improvement over the earlier one, which could handle only one foil at a time. The image analysis software of this system is empowered with the capability to resolve overlapping tracks, which are commonly found in foils exposed to higher levels of neutron dose. The algorithm employed to resolve the tracks is an enhancement over that used in the earlier system, resulting in improved dosimetry accuracy. (author)

  7. Automated segmentation of wood fibres in micro-CT images of paper.

    Science.gov (United States)

    Sharma, Y; Phillion, A B; Martinez, D M

    2015-12-01

    A novel algorithm has been developed and validated to isolate individual papermaking fibres in micro-computed tomographic images of paper handsheets as a first step to characterize the structure of the paper. The three-step fibre segmentation algorithm segments the papermaking fibres by (i) tracking the hollow inside the fibres via a modified connected component methodology, (ii) extracting the fibre walls using a distance transform and (iii) labelling the fibres through collapsed sections by a final refinement step. Furthermore, postprocessing algorithms have been developed to calculate the length and coarseness of the segmented fibres. The fibre segmentation algorithm is the first ever reported method for the automated segmentation of the tortuous three-dimensional morphology of papermaking fibres within microstructural images of paper handsheets. The method is not limited to papermaking fibres, but can be applied to any material consisting of tortuous and hollow fibres. PMID:26301453

  8. Modelling and representation issues in automated feature extraction from aerial and satellite images

    Science.gov (United States)

    Sowmya, Arcot; Trinder, John

    New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are increasing as well for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills for automating these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research for image understanding of aerial and satellite imagery.

  9. Semi-automated characterization of the γ' phase in Ni-based superalloys via high-resolution backscatter imaging

    International Nuclear Information System (INIS)

    The size distribution and volume fraction of the γ' phase in Ni-based superalloys plays a critical role in microstructural evolution and mechanical properties. Automated analysis of images is often desired for rapid quantitative characterization of these microstructures. Backscatter electron imaging of specimens in which the γ' phase has been selectively etched yields images that can be more readily segmented with image processing algorithms than other imaging techniques. Utilization of this combination of sample preparation and imaging technique allows for the rapid collection of microstructural data.

  10. Automated materials discrimination using 3D dual-energy X-ray images

    International Nuclear Information System (INIS)

    The ability of a human observer to identify an explosive device concealed in the complex arrangements of objects routinely encountered in 2D x-ray screening of passenger baggage at airports is often problematic. Standard dual-energy x-ray techniques enable colour encoding of the resultant images in terms of organic, inorganic and metal substances. This transmission imaging technique produces colour information computed from a high-energy x-ray signal and a low-energy x-ray signal, enabling substances of low effective atomic number (Zeff ≤ 13) to be automatically discriminated from many layers of overlapping substances. This is achieved by applying a basis materials subtraction technique to the data provided by a wavelet image segmentation algorithm. This imaging technique relies upon the image data for the masking substances being discriminated independently of the target material. Further work investigated the extraction of depth data from stereoscopic images to estimate the mass density of the target material. A binocular stereoscopic dual-energy x-ray machine previously developed by the Vision Systems Group at The Nottingham Trent University in collaboration with The Home Office Science and Technology Group provided the image data for the empirical investigation. This machine utilises a novel linear castellated dual-energy x-ray detector recently developed by the Vision Systems Group. This detector array employs half the number of scintillator-photodiode sensors in comparison to a conventional linear dual-energy sensor. The castellated sensor required the development of an image enhancement algorithm to remove the spatial interlace effect in the resultant images prior to the calibration of the system for materials discrimination. To automate the basis materials subtraction technique, a wavelet image segmentation and classification algorithm was developed. This enabled overlapping image structures in the x-rayed baggage to be partitioned. A series of experiments was conducted to investigate the
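At its core, basis materials decomposition solves a two-equation system: the measured log-attenuations at the low and high energies are modeled as linear combinations of two basis-material thicknesses. A minimal sketch of that textbook step (the coefficient values below are made up for illustration and are not the system's calibration data):

```python
import numpy as np

def basis_material_thicknesses(log_atten_low, log_atten_high, mu):
    """Solve the 2x2 dual-energy system for basis-material thicknesses.

    mu: 2x2 matrix of linear attenuation coefficients,
        rows = (low, high) energy, columns = the two basis materials.
    log_atten_*: -ln(I/I0) measured at each energy.
    """
    b = np.array([log_atten_low, log_atten_high], dtype=float)
    return np.linalg.solve(np.asarray(mu, dtype=float), b)
```

Given calibrated coefficients, subtracting the contribution of an identified masking material from both measurements ("basis materials subtraction") leaves the attenuation attributable to the layer beneath.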

  11. Semi-automated segmentation of a glioblastoma multiforme on brain MR images for radiotherapy planning

    International Nuclear Information System (INIS)

    We propose a computerized method for semi-automated segmentation of the gross tumor volume (GTV) of a glioblastoma multiforme (GBM) on brain MR images for radiotherapy planning (RTP). Three-dimensional (3D) MR images of 28 cases with a GBM were used in this study. First, a sphere volume of interest (VOI) including the GBM was selected by clicking a part of the GBM region in the 3D image. Then, the sphere VOI was transformed to a two-dimensional (2D) image by use of a spiral-scanning technique. We employed active contour models (ACM) to delineate an optimal outline of the GBM in the transformed 2D image. After inverse transform of the optimal outline to the 3D space, a morphological filter was applied to smooth the shape of the 3D segmented region. For evaluation of our computerized method, we compared the computer output with manually segmented regions, which were obtained by a therapeutic radiologist using a manual tracking method. In evaluating our segmentation method, we employed the Jaccard similarity coefficient (JSC) and the true segmentation coefficient (TSC) in volumes between the computer output and the manually segmented region. The mean and standard deviation of JSC and TSC were 74.2±9.8% and 84.1±7.1%, respectively. Our segmentation method provided a relatively accurate outline for GBM and would be useful for radiotherapy planning. (author)
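The Jaccard similarity coefficient (JSC) used for evaluation here is shown below as a plain numpy sketch; the true segmentation coefficient (TSC) is not defined in this record, so it is omitted rather than guessed at.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity coefficient: |A ∩ B| / |A ∪ B| for binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty masks are considered identical
    return np.logical_and(a, b).sum() / union
```

JSC is stricter than Dice for the same overlap (JSC = Dice / (2 - Dice)), which is consistent with the JSC mean (74.2%) being lower than the TSC mean reported above.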

  12. An automated voxelized dosimetry tool for radionuclide therapy based on serial quantitative SPECT/CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, Price A.; Kron, Tomas [Department of Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne 3002 (Australia); Beauregard, Jean-Mathieu [Department of Radiology, Université Laval, Quebec City G1V 0A6 (Canada); Hofman, Michael S.; Hogg, Annette; Hicks, Rodney J. [Department of Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne 3002 (Australia)

    2013-11-15

    Purpose: To create an accurate map of the distribution of radiation dose deposition in healthy and target tissues during radionuclide therapy. Methods: Serial quantitative SPECT/CT images were acquired at 4, 24, and 72 h for 28 (177)Lu-octreotate peptide receptor radionuclide therapy (PRRT) administrations in 17 patients with advanced neuroendocrine tumors. Deformable image registration was combined with an in-house programming algorithm to interpolate pharmacokinetic uptake and clearance at a voxel level. The resultant cumulated activity image series are comprised of values representing the total number of decays within each voxel's volume. For PRRT, cumulated activity was translated to absorbed dose based on Monte Carlo-determined voxel S-values at a combination of long and short ranges. These dosimetric image sets were compared for mean radiation absorbed dose to at-risk organs using a conventional MIRD protocol (OLINDA 1.1). Results: Absorbed dose values to solid organs (liver, kidneys, and spleen) were within 10% using both techniques. Dose estimates to marrow were greater using the voxelized protocol, attributed to the software incorporating the crossfire effect from nearby tumor volumes. Conclusions: The technique presented offers an efficient, automated tool for PRRT dosimetry based on serial post-therapy imaging. Following retrospective analysis, this method of high-resolution dosimetry may allow physicians to prescribe activity based on required dose to tumor volume or radiation limits to healthy tissue in individual patients.
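Translating a cumulated-activity map into dose with a voxel S-value kernel amounts to a cross-correlation (identical to convolution for a symmetric kernel, as S-value kernels are). A naive numpy sketch, assuming an odd-sized kernel; the kernel values in the test are placeholders, not real 177Lu S-values:

```python
import numpy as np

def voxel_dose(cumulated_activity, s_kernel):
    """Absorbed dose map: cumulated activity (decays per voxel)
    cross-correlated with a voxel S-value kernel (dose per decay).

    Assumes s_kernel has odd dimensions; for a symmetric kernel this
    equals convolution, which is how voxel S-value dosimetry is defined.
    """
    A = cumulated_activity
    k = s_kernel
    pad = [(n // 2, n // 2) for n in k.shape]
    Ap = np.pad(A, pad)
    out = np.zeros_like(A, dtype=float)
    # Accumulate one shifted, weighted copy of A per kernel element.
    for idx in np.ndindex(*k.shape):
        sl = tuple(slice(i, i + n) for i, n in zip(idx, A.shape))
        out += k[idx] * Ap[sl]
    return out
```

In practice the kernel would come from Monte Carlo transport for the radionuclide and voxel size in question, and the crossfire contribution from nearby tumor voxels falls out of the same sum automatically.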

  13. Long-term live cell imaging and automated 4D analysis of Drosophila neuroblast lineages.

    Directory of Open Access Journals (Sweden)

    Catarina C F Homem

    Full Text Available The developing Drosophila brain is a well-studied model system for neurogenesis and stem cell biology. In the Drosophila central brain, around 200 neural stem cells called neuroblasts undergo repeated rounds of asymmetric cell division. These divisions typically generate a larger self-renewing neuroblast and a smaller ganglion mother cell that undergoes one terminal division to create two differentiating neurons. Although single mitotic divisions of neuroblasts can easily be imaged in real time, the lack of long-term imaging procedures has limited the use of neuroblast live imaging for lineage analysis. Here we describe a method that allows live imaging of cultured Drosophila neuroblasts over multiple cell cycles for up to 24 hours. We describe a 4D image analysis protocol that can be used to extract cell cycle times and growth rates from the resulting movies in an automated manner. We use it to perform lineage analysis in type II neuroblasts, where clonal analysis has indicated the presence of a transit-amplifying population that amplifies the number of neurons. Indeed, our experiments verify type II lineages and provide quantitative parameters for all cell types in those lineages. As defects in type II neuroblast lineages can result in brain tumor formation, our lineage analysis method will allow more detailed and quantitative analysis of tumorigenesis and asymmetric cell division in the Drosophila brain.

  14. Automated diagnosis of diabetic retinopathy and glaucoma using fundus and OCT images

    Directory of Open Access Journals (Sweden)

    Pachiyappan Arulmozhivarman

    2012-06-01

    Full Text Available We describe a system for the automated diagnosis of diabetic retinopathy and glaucoma using fundus and optical coherence tomography (OCT) images. Automatic screening will help doctors quickly and more accurately identify the patient's condition. The macular abnormalities caused by diabetic retinopathy can be detected by applying morphological operations, filters, and thresholds to the fundus images of the patient. Early detection of glaucoma is done by estimating the Retinal Nerve Fiber Layer (RNFL) thickness from the OCT images of the patient. The RNFL thickness estimation involves the use of an active-contour-based deformable snake algorithm for segmentation of the anterior and posterior boundaries of the retinal nerve fiber layer. The algorithm was tested on a set of 89 fundus images, of which 85 were found to have at least mild retinopathy, and OCT images of 31 patients, of which 13 were found to be glaucomatous. The accuracy of optic disk detection was found to be 97.75%. The proposed system is therefore accurate, reliable, and robust, and can be practically realized.
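
The thresholding-plus-morphology step described above can be sketched as follows. This is a toy version of the general idea, not the authors' pipeline; the threshold and minimum-size parameters are illustrative:

```python
import numpy as np
from scipy import ndimage

def detect_bright_lesions(img, thresh=0.7, min_size=5):
    """Threshold a grayscale fundus-like image and keep connected
    bright regions above a minimum pixel count, discarding small
    speckle. Returns a boolean mask of retained regions."""
    mask = img > thresh
    labels, n = ndimage.label(mask)                      # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))   # pixels per component
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
    return keep
```

In practice the threshold would be chosen per-image (e.g., from the intensity histogram) rather than fixed.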

  15. Prospective Image Registration for Automated Scan Prescription of Follow-up Knee Images in Quantitative Studies

    OpenAIRE

    Goldenstein, Janet; Schooler, Joseph; Crane, Jason C; Ozhinsky, Eugene; Pialat, Jean-Baptiste; Carballido-Gamio, Julio; Majumdar, Sharmila

    2011-01-01

    Consistent scan prescription for MRI of the knee is very important for accurate comparison of images in a longitudinal study. However, consistent scan region selection is difficult due to the complexity of the knee joint. We propose a novel method for registering knee images using a mutual information registration algorithm to align images in a baseline and follow-up exam. The output of the registration algorithm, three translations and three Euler angles, is then used to redefine the region to ...
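
The similarity metric behind mutual information registration can be estimated from a joint intensity histogram of the two images; a minimal sketch (the bin count is an illustrative choice):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate mutual information between two images from their
    joint intensity histogram: MI = sum p(x,y) log(p(x,y)/(p(x)p(y)))."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                      # joint probability
    px = p.sum(axis=1, keepdims=True)    # marginal over columns
    py = p.sum(axis=0, keepdims=True)    # marginal over rows
    nz = p > 0                           # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

A registration loop would maximize this quantity over the six rigid-body parameters (three translations, three Euler angles) mentioned in the abstract.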

  16. An objective method to optimize the MR sequence set for plaque classification in carotid vessel wall images using automated image segmentation.

    Science.gov (United States)

    van 't Klooster, Ronald; Patterson, Andrew J; Young, Victoria E; Gillard, Jonathan H; Reiber, Johan H C; van der Geest, Rob J

    2013-01-01

    A typical MR imaging protocol to study the status of atherosclerosis in the carotid artery consists of the application of multiple MR sequences. Since scanner time is limited, a balance has to be reached between the duration of the applied MR protocol and the quantity and quality of the resulting images which are needed to assess the disease. In this study an objective method to optimize the MR sequence set for classification of soft plaque in vessel wall images of the carotid artery using automated image segmentation was developed. The automated method employs statistical pattern recognition techniques and was developed based on an extensive set of MR contrast weightings and corresponding manual segmentations of the vessel wall and soft plaque components, which were validated by histological sections. Evaluation of the results from nine contrast weightings showed the tradeoff between scan duration and automated image segmentation performance. For our dataset the best segmentation performance was achieved by selecting five contrast weightings. Similar performance was achieved with a set of three contrast weightings, which resulted in a reduction of scan time by more than 60%. The presented approach can help others to optimize MR imaging protocols by investigating the tradeoff between scan duration and automated image segmentation performance possibly leading to shorter scanning times and better image interpretation. This approach can potentially also be applied to other research fields focusing on different diseases and anatomical regions. PMID:24194941

  17. RootAnalyzer: A Cross-Section Image Analysis Tool for Automated Characterization of Root Cells and Tissues

    OpenAIRE

    Joshua Chopin; Hamid Laga; Chun Yuan Huang; Sigrid Heuer; Stanley J. Miklavcic

    2015-01-01

    The morphology of plant root anatomical features is a key factor in effective water and nutrient uptake. Existing techniques for phenotyping root anatomical traits are often based on manual or semi-automatic segmentation and annotation of microscopic images of root cross sections. In this article, we propose a fully automated tool, hereinafter referred to as RootAnalyzer, for efficiently extracting and analyzing anatomical traits from root-cross section images. Using a range of image processi...

  18. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Chen, Ken Chung [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403 (China); Shen, Steve G. F.; Yan, Jin [Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Lee, Philip K. M.; Chow, Ben [Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077 (China); Liu, Nancy X. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050 (China); Xia, James J. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, 136701 (Korea, Republic of)

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT
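
The patch-based label propagation idea can be illustrated with a deliberately simplified, non-sparse variant: weight each aligned atlas patch by its Gaussian similarity to the target patch and take a weighted vote over labels. The paper uses sparse coding to compute these weights; the bandwidth `h` here is purely illustrative:

```python
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, h=0.5):
    """Simplified patch-based label propagation: weight each atlas
    patch by similarity to the target patch (Gaussian in squared
    distance, standing in for sparse codes) and take a weighted
    vote over the atlas labels."""
    t = target_patch.ravel()
    dists = np.array([np.sum((t - p.ravel()) ** 2) for p in atlas_patches])
    w = np.exp(-dists / (h ** 2))        # similarity weights
    labels = np.asarray(atlas_labels)
    scores = {lab: w[labels == lab].sum() for lab in np.unique(labels)}
    return max(scores, key=scores.get)   # highest-weighted label wins
```

Sparse coding replaces the Gaussian weights with coefficients from an L1-regularized reconstruction of the target patch, which typically selects only a few atlas patches.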

  19. Automated Analysis of {sup 123}I-beta-CIT SPECT Images with Statistical Probabilistic Anatomical Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Eo, Jae Seon; Lee, Hoyoung; Lee, Jae Sung; Kim, Yu Kyung; Jeon, Bumseok; Lee, Dong Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2014-03-15

    Population-based statistical probabilistic anatomical maps have been used to generate probabilistic volumes of interest for analyzing perfusion and metabolic brain imaging. We investigated the feasibility of automated analysis for dopamine transporter images using this technique and evaluated striatal binding potentials in Parkinson's disease and Wilson's disease. We analyzed 2β-Carbomethoxy-3β-(4-{sup 123}I-iodophenyl)tropane ({sup 123}I-beta-CIT) SPECT images acquired from 26 people with Parkinson's disease (M:F=11:15, mean age=49±12 years), 9 people with Wilson's disease (M:F=6:3, mean age=26±11 years) and 17 normal controls (M:F=5:12, mean age=39±16 years). A SPECT template was created using striatal statistical probabilistic map images. All images were spatially normalized onto the template, and probability-weighted regional counts in striatal structures were estimated. The binding potential was calculated using the ratio of specific and nonspecific binding activities at equilibrium. Voxel-based comparisons between groups were also performed using statistical parametric mapping. Qualitative assessment showed that spatial normalizations of the SPECT images were successful for all images. The striatal binding potentials of participants with Parkinson's disease and Wilson's disease were significantly lower than those of normal controls. Statistical parametric mapping analysis found statistically significant differences only in striatal regions in both disease groups compared to controls. We successfully evaluated the regional {sup 123}I-beta-CIT distribution using the SPECT template and probabilistic map data automatically. This procedure allows an objective and quantitative comparison of the binding potential, which in this case showed a significantly decreased binding potential in the striata of patients with Parkinson's disease or Wilson's disease.
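
The binding potential quoted above is the standard specific-to-nonspecific ratio at equilibrium; a minimal sketch (using a reference region such as the occipital cortex to approximate nonspecific binding is a common convention, assumed here rather than taken from the record):

```python
import numpy as np

def binding_potential(striatal_counts, reference_counts):
    """Specific-to-nonspecific binding ratio at equilibrium:
    BP = (striatal - reference) / reference, where the reference
    region approximates nonspecific binding."""
    s = np.asarray(striatal_counts, dtype=float)
    r = np.asarray(reference_counts, dtype=float)
    return (s - r) / r
```

Applied voxel- or region-wise to the probability-weighted counts, this yields the striatal BP values compared between patient groups and controls.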

  20. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    International Nuclear Information System (INIS)

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  1. Semi-automated scar detection in delayed enhanced cardiac magnetic resonance images

    Science.gov (United States)

    Morisi, Rita; Donini, Bruno; Lanconelli, Nico; Rosengarden, James; Morgan, John; Harden, Stephen; Curzen, Nick

    2015-06-01

    Late enhancement cardiac magnetic resonance imaging (MRI) can precisely delineate myocardial scars. We present a semi-automated method for detecting scars in cardiac MRI. This method has the potential to improve routine clinical practice, since quantification is not currently offered due to time constraints. A first segmentation step was developed for extracting the target regions for potential scar and determining pre-candidate objects. Pattern recognition methods are then applied to the segmented images in order to detect the position of the myocardial scar. The database of late gadolinium enhancement (LE) cardiac MR images consists of 111 blocks of images acquired from 63 patients at the University Hospital Southampton NHS Foundation Trust (UK). At least one scar was present for each patient, and all scars were manually annotated by an expert. A group of images (around one third of the entire set) was used for training the system, which was subsequently tested on all the remaining images. Four different classifiers were trained (Support Vector Machine (SVM), k-nearest neighbor (KNN), Bayesian, and feed-forward neural network) and their performance was evaluated using Free-response Receiver Operating Characteristic (FROC) analysis. Feature selection was implemented to analyze the importance of the various features. The segmentation method proposed allowed the region affected by the scar to be extracted correctly in 96% of the blocks of images. The SVM was shown to be the best classifier for our task, and our system reached an overall sensitivity of 80% with fewer than 7 false positives per patient. The method we present provides an effective tool for detection of scars on cardiac MRI. This may be of value in clinical practice by permitting routine reporting of scar quantification.
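
An FROC operating point, of the kind used to report "80% sensitivity at fewer than 7 false positives per patient," pairs lesion-level sensitivity with false positives per patient at a given score threshold. A minimal sketch of that bookkeeping (function and variable names are illustrative):

```python
import numpy as np

def froc_point(scores_tp, scores_fp, n_patients, threshold):
    """One FROC operating point: fraction of true lesions detected
    and mean false-positive detections per patient at a given
    detection-score threshold."""
    sens = float(np.mean(np.asarray(scores_tp) >= threshold))
    fp_per_patient = float(np.sum(np.asarray(scores_fp) >= threshold)) / n_patients
    return sens, fp_per_patient
```

Sweeping the threshold over all observed scores traces out the full FROC curve used to compare the four classifiers.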

  2. Automated Analysis of 123I-beta-CIT SPECT Images with Statistical Probabilistic Anatomical Mapping

    International Nuclear Information System (INIS)

    Population-based statistical probabilistic anatomical maps have been used to generate probabilistic volumes of interest for analyzing perfusion and metabolic brain imaging. We investigated the feasibility of automated analysis for dopamine transporter images using this technique and evaluated striatal binding potentials in Parkinson's disease and Wilson's disease. We analyzed 2β-Carbomethoxy-3β-(4-123I-iodophenyl)tropane (123I-beta-CIT) SPECT images acquired from 26 people with Parkinson's disease (M:F=11:15,mean age=49±12 years), 9 people with Wilson's disease (M: F=6:3, mean age=26±11 years) and 17 normal controls (M:F=5:12, mean age=39±16 years). A SPECT template was created using striatal statistical probabilistic map images. All images were spatially normalized onto the template, and probability-weighted regional counts in striatal structures were estimated. The binding potential was calculated using the ratio of specific and nonspecific binding activities at equilibrium. Voxel-based comparisons between groups were also performed using statistical parametric mapping. Qualitative assessment showed that spatial normalizations of the SPECT images were successful for all images. The striatal binding potentials of participants with Parkinson's disease and Wilson's disease were significantly lower than those of normal controls. Statistical parametric mapping analysis found statistically significant differences only in striatal regions in both disease groups compared to controls. We successfully evaluated the regional 123I-beta-CIT distribution using the SPECT template and probabilistic map data automatically. This procedure allows an objective and quantitative comparison of the binding potential, which in this case showed a significantly decreased binding potential in the striata of patients with Parkinson's disease or Wilson's disease

  3. Computerized detection of breast cancer on automated breast ultrasound imaging of women with dense breasts

    Energy Technology Data Exchange (ETDEWEB)

    Drukker, Karen, E-mail: kdrukker@uchicago.edu; Sennett, Charlene A.; Giger, Maryellen L. [Department of Radiology, MC2026, The University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2014-01-15

    Purpose: Develop a computer-aided detection method and investigate its feasibility for detection of breast cancer in automated 3D ultrasound images of women with dense breasts. Methods: The HIPAA compliant study involved a dataset of volumetric ultrasound image data, “views,” acquired with an automated U-Systems Somo•V{sup ®} ABUS system for 185 asymptomatic women with dense breasts (BI-RADS Composition/Density 3 or 4). For each patient, three whole-breast views (3D image volumes) per breast were acquired. A total of 52 patients had breast cancer (61 cancers), diagnosed through any follow-up at most 365 days after the original screening mammogram. Thirty-one of these patients (32 cancers) had a screening-mammogram with a clinically assigned BI-RADS Assessment Category 1 or 2, i.e., were mammographically negative. All software used for analysis was developed in-house and involved 3 steps: (1) detection of initial tumor candidates, (2) characterization of candidates, and (3) elimination of false-positive candidates. Performance was assessed by calculating the cancer detection sensitivity as a function of the number of “marks” (detections) per view. Results: At a single mark per view, i.e., six marks per patient, the median detection sensitivity by cancer was 50.0% (16/32) ± 6% for patients with a screening mammogram-assigned BI-RADS category 1 or 2—similar to radiologists’ performance sensitivity (49.9%) for this dataset from a prior reader study—and 45.9% (28/61) ± 4% for all patients. Conclusions: Promising detection sensitivity was obtained for the computer on a 3D ultrasound dataset of women with dense breasts at a rate of false-positive detections that may be acceptable for clinical implementation.

  4. A portable fluorescence spectroscopy imaging system for automated root phenotyping in soil cores in the field.

    Science.gov (United States)

    Wasson, Anton; Bischof, Leanne; Zwart, Alec; Watt, Michelle

    2016-02-01

    Root architecture traits are a target for pre-breeders. Incorporation of root architecture traits into new cultivars requires phenotyping. It is attractive to rapidly and directly phenotype root architecture in the field, avoiding laboratory studies that may not translate to the field. A combination of soil coring with a hydraulic push press and manual core-break counting can directly phenotype root architecture traits of depth and distribution in the field through to grain development, but large teams of people are required and labour costs are high with this method. We developed a portable fluorescence imaging system (BlueBox) to automate root counting in soil cores with image analysis software directly in the field. The lighting system was optimized to produce high-contrast images of roots emerging from soil cores. The correlation of the measurements with the root length density of the soil cores exceeded the correlation achieved by human operator measurements (R² = 0.68 versus 0.57, respectively). A BlueBox-equipped team processed 4.3 cores/hour/person, compared with 3.7 cores/hour/person for the manual method. The portable, automated in-field root architecture phenotyping system was 16% more labour efficient, 19% more accurate, and 12% cheaper than manual conventional coring, and presents an opportunity to directly phenotype root architecture in the field as part of pre-breeding programs. The platform has wide possibilities to capture more information about root health and other root traits in the field. PMID:26826219
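
The R² figures quoted above are coefficients of determination between the automated (or manual) counts and the reference root length density; for a simple linear relationship this is the squared Pearson correlation. A minimal sketch:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a linear relationship,
    computed as the squared Pearson correlation between the
    measurements x and the reference values y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    return r * r
```

Comparing `r_squared(auto_counts, rld)` against `r_squared(manual_counts, rld)` reproduces the 0.68-versus-0.57 style of comparison in the abstract (those arrays are hypothetical names, not the study's data).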

  5. A portable fluorescence spectroscopy imaging system for automated root phenotyping in soil cores in the field

    Science.gov (United States)

    Wasson, Anton; Bischof, Leanne; Zwart, Alec; Watt, Michelle

    2016-01-01

    Root architecture traits are a target for pre-breeders. Incorporation of root architecture traits into new cultivars requires phenotyping. It is attractive to rapidly and directly phenotype root architecture in the field, avoiding laboratory studies that may not translate to the field. A combination of soil coring with a hydraulic push press and manual core-break counting can directly phenotype root architecture traits of depth and distribution in the field through to grain development, but large teams of people are required and labour costs are high with this method. We developed a portable fluorescence imaging system (BlueBox) to automate root counting in soil cores with image analysis software directly in the field. The lighting system was optimized to produce high-contrast images of roots emerging from soil cores. The correlation of the measurements with the root length density of the soil cores exceeded the correlation achieved by human operator measurements (R² = 0.68 versus 0.57, respectively). A BlueBox-equipped team processed 4.3 cores/hour/person, compared with 3.7 cores/hour/person for the manual method. The portable, automated in-field root architecture phenotyping system was 16% more labour efficient, 19% more accurate, and 12% cheaper than manual conventional coring, and presents an opportunity to directly phenotype root architecture in the field as part of pre-breeding programs. The platform has wide possibilities to capture more information about root health and other root traits in the field. PMID:26826219

  6. Development of an automated imaging pipeline for the analysis of the zebrafish larval kidney.

    Directory of Open Access Journals (Sweden)

    Jens H Westhoff

    Full Text Available The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high-content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high-content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom-designed tool, cavities were generated in agarose-coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line, in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration-dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino-based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems.

  7. Automated Detection of Galaxy-Scale Gravitational Lenses in High-Resolution Imaging Data

    Science.gov (United States)

    Marshall, Philip J.; Hogg, David W.; Moustakas, Leonidas A.; Fassnacht, Christopher D.; Bradač, Maruša; Schrabback, Tim; Blandford, Roger D.

    2009-04-01

    We expect direct lens modeling to be the key to successful and meaningful automated strong galaxy-scale gravitational lens detection. We have implemented a lens-modeling "robot" that treats every bright red galaxy (BRG) in a large imaging survey as a potential gravitational lens system. Having optimized a simple model for "typical" galaxy-scale gravitational lenses, we generate four assessments of model quality that are then used in an automated classification. The robot infers from these four data the lens classification parameter H that a human would have assigned; the inference is performed using a probability distribution generated from a human-classified training set of candidates, including realistic simulated lenses and known false positives drawn from the Hubble Space Telescope (HST) Extended Groth Strip (EGS) survey. We compute the expected purity, completeness, and rejection rate, and find that these statistics can be optimized for a particular application by changing the prior probability distribution for H; this is equivalent to defining the robot's "character." Adopting a realistic prior based on expectations for the abundance of lenses, we find that a lens sample may be generated that is ~100% pure, but only ~20% complete. This shortfall is due primarily to the oversimplicity of the model of both the lens light and mass. With a more optimistic robot, ~90% completeness can be achieved while rejecting ~90% of the candidate objects. The remaining candidates must be classified by human inspectors. Displaying the images used and produced by the robot on a custom "one-click" web interface, we are able to inspect and classify lens candidates at a rate of a few seconds per system, suggesting that a future 1000 deg² imaging survey containing 10⁷ BRGs, and some 10⁴ lenses, could be successfully, and reproducibly, searched in a modest amount of time. We have verified our projected survey statistics, albeit at low significance, using the HST EGS data, discovering

  8. High-throughput automated home-cage mesoscopic functional imaging of mouse cortex

    Science.gov (United States)

    Murphy, Timothy H.; Boyd, Jamie D.; Bolaños, Federico; Vanni, Matthieu P.; Silasi, Gergely; Haupt, Dirk; LeDue, Jeff M.

    2016-01-01

    Mouse head-fixed behaviour coupled with functional imaging has become a powerful technique in rodent systems neuroscience. However, training mice can be time consuming and is potentially stressful for animals. Here we report a fully automated, open source, self-initiated head-fixation system for mesoscopic functional imaging in mice. The system supports five mice at a time and requires minimal investigator intervention. Using genetically encoded calcium indicator transgenic mice, we longitudinally monitor cortical functional connectivity up to 24 h per day in >7,000 self-initiated and unsupervised imaging sessions up to 90 days. The procedure provides robust assessment of functional cortical maps on the basis of both spontaneous activity and brief sensory stimuli such as light flashes. The approach is scalable to a number of remotely controlled cages that can be assessed within the controlled conditions of dedicated animal facilities. We anticipate that home-cage brain imaging will permit flexible and chronic assessment of mesoscale cortical function. PMID:27291514

  9. Automated classification of patients with coronary artery disease using grayscale features from left ventricle echocardiographic images.

    Science.gov (United States)

    Acharya, U Rajendra; Sree, S Vinitha; Muthu Rama Krishnan, M; Krishnananda, N; Ranjan, Shetty; Umesh, Pai; Suri, Jasjit S

    2013-12-01

    Coronary Artery Disease (CAD), caused by the buildup of plaque on the inside of the coronary arteries, has a high mortality rate. To detect this condition efficiently from echocardiography images, with less inter-observer variability and fewer visual interpretation errors, computer-based data mining techniques may be exploited. We have developed and presented one such technique in this paper for the classification of normal and CAD-affected cases. A multitude of grayscale features (fractal dimension, entropies based on the higher order spectra, features based on image texture and local binary patterns, and wavelet-based features) were extracted from echocardiography images belonging to a large database of 400 normal cases and 400 CAD patients. Only the features that had good discriminating capability were selected using the t-test. Several combinations of the resultant significant features were used to evaluate many supervised classifiers to find the combination that yields good accuracy. We observed that the Gaussian Mixture Model (GMM) classifier trained with a feature subset made up of nine significant features presented the highest accuracy, sensitivity, specificity, and positive predictive value of 100%. We have also developed a novel, highly discriminative HeartIndex, which is a single number calculated from the combination of the features, in order to objectively classify the images from either of the two classes. Such an index allows for an easier implementation of the technique for automated CAD detection in the computers of hospitals and clinics. PMID:23958645
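
The t-test feature selection step described above can be sketched as follows: keep only the feature columns whose class means differ significantly between the normal and CAD groups. The significance level is an illustrative assumption:

```python
import numpy as np
from scipy import stats

def select_features(X_normal, X_disease, alpha=0.05):
    """Keep features whose class means differ significantly by a
    two-sample t-test. X_normal and X_disease are (samples, features)
    matrices; returns indices of discriminative feature columns."""
    _, p = stats.ttest_ind(X_normal, X_disease, axis=0)
    return np.flatnonzero(p < alpha)
```

The selected columns would then be fed, in various combinations, to the candidate classifiers (GMM, SVM, etc.) as the abstract describes.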

  10. Portable automated imaging in complex ceramics with a microwave interference scanning system

    Science.gov (United States)

    Goitia, Ryan M.; Schmidt, Karl F.; Little, Jack R.; Ellingson, William A.; Green, William; Franks, Lisa P.

    2013-01-01

An improved portable microwave interferometry system has been automated to permit rapid examination of components with minimal operator attendance. Functionalities include stereo and multiplexed scanning, frequency-modulated at multiple frequencies, producing layered volumetric images of complex ceramic structures. The technique has been used to image composite ceramic armor and ceramic matrix composite components, as well as other complex dielectric materials. The system utilizes the Evisive Scan microwave interference scanning technique. Validation tests include artificial and in-service damage of ceramic armor, surrogates, and ceramic matrix composite samples; validation techniques include micro-focus x-ray and computed tomography imaging. The microwave interference scanning technique has demonstrated detection of cracks, interior laminar features, and variations in material properties such as density. The image yields depth information through phase angle manipulation, and shows feature extent and relative dielectric property information. It requires access to only one surface and no coupling medium, and data are not affected by separation of layers of dielectric material, such as an outer over-wrap. Test panels were provided by the US Army Research Laboratory and the US Army Tank Automotive Research, Development and Engineering Center (TARDEC), which, together with the US Air Force Research Laboratory, have supported this work.

  11. An automated target recognition technique for image segmentation and scene analysis

    Energy Technology Data Exchange (ETDEWEB)

    Baumgart, C.W.; Ciarcia, C.A.

    1994-02-01

Automated target recognition (ATR) software has been designed to perform image segmentation and scene analysis. Specifically, this software was developed as a package for the Army's Minefield Reconnaissance and Detector (MIRADOR) program. MIRADOR is an on/off-road, remote-control, multi-sensor system designed to detect buried and surface-emplaced metallic and non-metallic anti-tank mines. The basic requirements for this ATR software were: (1) an ability to separate target objects from the background in low S/N conditions; (2) an ability to handle a relatively high dynamic range in imaging light levels; (3) the ability to compensate for or remove light source effects such as shadows; and (4) the ability to identify target objects as mines. Image segmentation and target evaluation were performed using an integrated, parallel-processing approach. Three basic techniques (texture analysis, edge enhancement, and contrast enhancement) were used collectively to extract all potential mine target shapes from the basic image. Target evaluation was then performed using a combination of size, geometrical, and fractal characteristics, which resulted in a calculated probability for each target shape. Overall results with this algorithm were quite good, though there is a trade-off between detection confidence and the number of false alarms. This technology also has applications in the areas of hazardous waste site remediation, archaeology, and law enforcement.

  12. Vision 20/20: Perspectives on automated image segmentation for radiotherapy

    International Nuclear Information System (INIS)

Due to rapid advances in radiation therapy (RT), especially image guidance and treatment adaptation, fast and accurate segmentation of medical images has become a very important part of treatment. Manual delineation of target volumes and organs at risk is still the standard routine in most clinics, even though it is time consuming and prone to intra- and interobserver variations. Automated segmentation methods seek to reduce the delineation workload and unify organ boundary definition. In this paper, the authors review the current autosegmentation methods particularly relevant for applications in RT. The authors outline the methods' strengths and limitations and propose strategies that could lead to wider acceptance of autosegmentation in routine clinical practice. The authors conclude that autosegmentation technology in RT planning is currently an efficient tool that provides clinicians with a good starting point for review and adjustment. Modern hardware platforms, including GPUs, allow most autosegmentation tasks to be completed within a few minutes. In the near future, improvements in CT-based autosegmentation tools will be achieved through standardization of imaging and contouring protocols. In the longer term, the authors expect wider use of multimodality approaches and better understanding of the correlation of imaging with biology and pathology.

  13. High-throughput automated home-cage mesoscopic functional imaging of mouse cortex.

    Science.gov (United States)

    Murphy, Timothy H; Boyd, Jamie D; Bolaños, Federico; Vanni, Matthieu P; Silasi, Gergely; Haupt, Dirk; LeDue, Jeff M

    2016-01-01

    Mouse head-fixed behaviour coupled with functional imaging has become a powerful technique in rodent systems neuroscience. However, training mice can be time consuming and is potentially stressful for animals. Here we report a fully automated, open source, self-initiated head-fixation system for mesoscopic functional imaging in mice. The system supports five mice at a time and requires minimal investigator intervention. Using genetically encoded calcium indicator transgenic mice, we longitudinally monitor cortical functional connectivity up to 24 h per day in >7,000 self-initiated and unsupervised imaging sessions up to 90 days. The procedure provides robust assessment of functional cortical maps on the basis of both spontaneous activity and brief sensory stimuli such as light flashes. The approach is scalable to a number of remotely controlled cages that can be assessed within the controlled conditions of dedicated animal facilities. We anticipate that home-cage brain imaging will permit flexible and chronic assessment of mesoscale cortical function. PMID:27291514

  14. Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i

    Science.gov (United States)

    Patrick, Matthew R.; Swanson, Don; Orr, Tim

    2016-01-01

Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously acquired thermal images of the lava lake and measures the lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey's Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
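The temperature-thresholding component of such a routine can be sketched as below. The threshold value, synthetic frame, and row-based level proxy are illustrative assumptions, not the observatory's actual parameters:

```python
import numpy as np

def lake_mask(thermal, temp_threshold=200.0):
    """Segment hot lava pixels by temperature thresholding (one of the three
    segmentation approaches; edge detection and short-term change analysis
    would supply complementary masks)."""
    return thermal >= temp_threshold

def lake_level_proxy(thermal, temp_threshold=200.0):
    """Relative lake level: highest image row containing lava pixels.
    (Row index decreases upward, so a smaller value means a higher level.)"""
    rows = np.flatnonzero(lake_mask(thermal, temp_threshold).any(axis=1))
    return int(rows.min()) if rows.size else None

frame = np.full((100, 100), 30.0)   # cool crater wall background
frame[60:90, 20:80] = 450.0         # hot lake surface
level = lake_level_proxy(frame)
print(level)
```

A real deployment would convert such row indices to elevation using the periodic laser rangefinder calibration mentioned above.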

  15. Automated Nanofiber Diameter Measurement in SEM Images Using a Robust Image Analysis Method

    OpenAIRE

    2014-01-01

Due to their high surface area, porosity, and rigidity, applications of nanofibers and nanosurfaces have developed in recent years. Nanofibers and nanosurfaces are typically produced by the electrospinning method. In the production process, determination of the average fiber diameter is crucial for quality assessment. The average fiber diameter is determined by manually measuring the diameters of randomly selected fibers on scanning electron microscopy (SEM) images. However, as the number of the images inc...

  16. Automated parameterisation for multi-scale image segmentation on multiple layers

    Science.gov (United States)

    Drăguţ, L.; Csillik, O.; Eisank, C.; Tiede, D.

    2014-01-01

    We introduce a new automated approach to parameterising multi-scale image segmentation of multiple layers, and we implemented it as a generic tool for the eCognition® software. This approach relies on the potential of the local variance (LV) to detect scale transitions in geospatial data. The tool detects the number of layers added to a project and segments them iteratively with a multiresolution segmentation algorithm in a bottom-up approach, where the scale factor in the segmentation, namely, the scale parameter (SP), increases with a constant increment. The average LV value of the objects in all of the layers is computed and serves as a condition for stopping the iterations: when a scale level records an LV value that is equal to or lower than the previous value, the iteration ends, and the objects segmented in the previous level are retained. Three orders of magnitude of SP lags produce a corresponding number of scale levels. Tests on very high resolution imagery provided satisfactory results for generic applicability. The tool has a significant potential for enabling objectivity and automation of GEOBIA analysis. PMID:24748723
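The stopping rule described above (increase the scale parameter by a constant increment until mean local variance no longer rises, then retain the previous level) can be sketched as a driver loop. The LV curve here is a toy stand-in; a real run would segment the image layers with eCognition's multiresolution algorithm at each SP:

```python
def optimal_scale(lv_at_scale, sp_start=10, sp_step=10, sp_max=500):
    """Increase the scale parameter (SP) by a constant increment and stop
    when the mean local variance (LV) is equal to or lower than the
    previous value; return the SP of the previous level, whose objects
    are retained."""
    prev_sp, prev_lv = None, -float("inf")
    for sp in range(sp_start, sp_max + 1, sp_step):
        lv = lv_at_scale(sp)
        if lv <= prev_lv:
            return prev_sp
        prev_sp, prev_lv = sp, lv
    return prev_sp

# Toy LV curve peaking at SP = 60, standing in for LV measured on
# segmented objects.
toy_lv = lambda sp: -(sp - 60) ** 2
best_sp = optimal_scale(toy_lv)
print(best_sp)
```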

  17. NeuronMetrics: Software for Semi-Automated Processing of Cultured-Neuron Images

    Science.gov (United States)

    Narro, Martha L.; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L.

    2007-01-01

    Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics™ for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based on geometric features called faces to extract a branch-number estimate from complex arbors with numerous neurite-to-neurite contacts, without creating a precise, contact-free representation of the neurite arbor. It estimates total neurite length, branch number, primary neurite number, territory (the area of the convex polygon bounding the skeleton and cell body), and Polarity Index (a measure of neuronal polarity). These parameters provide fundamental information about the size and shape of neurite arbors, which are critical factors for neuronal function. NeuronMetrics streamlines optional manual tasks such as removing noise, isolating the largest primary neurite, and correcting length for self-fasciculating neurites. Numeric data are output in a single text file, readily imported into other applications for further analysis. Written as modules for ImageJ, NeuronMetrics provides practical analysis tools that are easy to use and support batch processing. Depending on the need for manual intervention, processing time for a batch of ~60 2D images is 1.0–2.5 hours, from a folder of images to a table of numeric data. NeuronMetrics’ output accelerates the quantitative detection of mutations and chemical compounds that alter neurite morphology in vitro, and will contribute to the use of cultured neurons for drug discovery. PMID:17270152

  18. Semi-automated procedures for shoreline extraction using single RADARSAT-1 SAR image

    Science.gov (United States)

    Al Fugura, A.'kif; Billa, Lawal; Pradhan, Biswajeet

    2011-12-01

Coastline identification is important for surveying and mapping purposes. The coastline serves as a basic point of reference and is used on nautical charts for navigation. Its delineation has become ever more crucial in the wake of the many recent earthquakes and tsunamis, which have completely changed and redrawn some shorelines. In a tropical country like Malaysia, the presence of cloud cover hinders the application of optical remote sensing data. In this study, a semi-automated technique and procedures are presented for shoreline delineation from a RADARSAT-1 image. A scene of RADARSAT-1 satellite imagery was processed using an enhanced filtering technique to identify and extract the shoreline coast of Kuala Terengganu, Malaysia. The RADARSAT image has many advantages over optical data because of its ability to penetrate cloud cover and its night sensing capabilities. First, speckle was removed from the image using the Lee sigma filter, which reduces random noise, enhances the image, and helps discriminate the boundary between land and water. The results showed an accurate and improved extraction and delineation of the entire coastline of Kuala Terengganu. The study demonstrated the reliability of the image averaging filter in reducing random noise over the sea surface, especially near the shoreline: it enhanced land-water boundary differentiation, enabling better delineation of the shoreline. Overall, the developed techniques showed the potential of radar imagery for accurate shoreline mapping and will be useful for monitoring shoreline changes during high and low tides, as well as shoreline erosion, in a tropical country like Malaysia.
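The sigma-filter idea used for speckle suppression can be sketched as follows. This is a simplified, assumption-laden version of the Lee sigma filter (global sigma estimate, fixed window, toy image), not the exact filter used in the study:

```python
import numpy as np

def sigma_filter(img, win=3, k=2.0, sigma=None):
    """Simplified Lee sigma filter: each pixel becomes the mean of the
    neighbours in a win x win window whose values lie within k*sigma of
    the centre pixel, suppressing speckle while preserving edges."""
    if sigma is None:
        sigma = img.std()             # crude global noise estimate
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            keep = np.abs(w - img[i, j]) <= k * sigma
            out[i, j] = w[keep].mean()
    return out

# Toy land/water edge: dark water (left) against bright land (right),
# with one speckled water pixel.
noisy = np.array([[10., 10., 200.],
                  [10., 12., 200.],
                  [10., 10., 200.]])
filtered = sigma_filter(noisy)
print(filtered.round(1))
```

Note how the bright column survives unsmeared while the water side is averaged, which is the edge-preserving behaviour that makes sigma filtering attractive near shorelines.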

  19. Prospective image registration for automated scan prescription of follow-up knee images in quantitative studies.

    Science.gov (United States)

    Goldenstein, Janet; Schooler, Joseph; Crane, Jason C; Ozhinsky, Eugene; Pialat, Jean-Baptiste; Carballido-Gamio, Julio; Majumdar, Sharmila

    2011-06-01

Consistent scan prescription for MRI of the knee is very important for accurate comparison of images in a longitudinal study. However, consistent scan region selection is difficult due to the complexity of the knee joint. We propose a novel method for registering knee images using a mutual information registration algorithm to align images from a baseline and a follow-up exam. The output of the registration algorithm, three translations and three Euler angles, is then used to redefine the region to be imaged and acquire the same oblique imaging volume in the follow-up exam as in the baseline. The algorithm is robust to articulation of the knee and to anatomical abnormalities due to disease (e.g., osteophytes). The registration is performed only on the distal femur and is not affected by the proximal tibia or soft tissues. We have incorporated this approach into a clinical MR system and have demonstrated its utility in automatically obtaining consistent scan regions between baseline and follow-up examinations, thus improving the precision of quantitative evaluation of cartilage. Results show an improvement with prospective registration in the coefficient of variation for cartilage thickness, cartilage volume, and T2 relaxation measurements. PMID:21546186
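The six registration outputs (three translations, three Euler angles) define a rigid transform that can be assembled as below. The Euler convention shown (Rz·Ry·Rx, angles in radians) is an assumption for illustration; the paper does not specify its convention:

```python
import numpy as np

def rigid_transform(tx, ty, tz, rx, ry, rz):
    """Build a 4x4 homogeneous matrix from three translations and three
    Euler angles, the six parameters used to re-prescribe the follow-up
    imaging volume."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx      # assumed rotation order
    T[:3, 3] = [tx, ty, tz]
    return T

# Rotate a point 90 degrees about z and shift it 5 mm along x.
p = np.array([1.0, 0.0, 0.0, 1.0])
moved = rigid_transform(5, 0, 0, 0, 0, np.pi / 2) @ p
print(moved.round(3))
```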

  20. Analysis of irradiated U-7wt%Mo dispersion fuel microstructures using automated image processing

    Science.gov (United States)

    Collette, R.; King, J.; Buesch, C.; Keiser, D. D.; Williams, W.; Miller, B. D.; Schulthess, J.

    2016-07-01

The High Performance Research Reactor Fuel Development (HPPRFD) program is responsible for developing low enriched uranium (LEU) fuel substitutes for high performance reactors fueled with highly enriched uranium (HEU) that have not yet been converted to LEU. The uranium-molybdenum (U-Mo) fuel system was selected for this effort. In this study, fission gas pore segmentation was performed on U-7wt%Mo dispersion fuel samples at three separate fission densities using an automated image processing interface developed in MATLAB. The resulting pore size distributions showed both expected and unexpected fission gas behavior. In general, it proved challenging to identify dominant trends when comparing fission bubble data across samples from different fuel plates, due to varying compositions and fabrication techniques. The results exhibited fair agreement with the fission density vs. porosity correlation developed by the Russian reactor conversion program.

  1. Feasibility of fully automated detection of fiducial markers implanted into the prostate using electronic portal imaging: A comparison of methods

    International Nuclear Information System (INIS)

Purpose: To investigate the feasibility of fully automated detection of fiducial markers implanted into the prostate using portal images acquired with an electronic portal imaging device. Methods and Materials: We have made a direct comparison of four different methods published in the literature for the automatic detection of fiducial markers (two template-matching methods, a method incorporating attenuation and constellation analyses, and a cross-correlation method). The cross-correlation technique requires a priori information from the portal images, so it is not fully automated for the first treatment fraction. Images of 7 patients implanted with gold fiducial markers (8 mm in length and 1 mm in diameter) were acquired before treatment (set-up images) and during treatment (movie images) using 1 MU and 15 MU per image, respectively. Images included: 75 anterior (AP) and 69 lateral (LAT) set-up images and 51 AP and 83 LAT movie images. Using the different methods described in the literature, marker positions were automatically identified. Results: The method based upon cross-correlation techniques gave the highest detection success rates, 99% (AP) and 83% (LAT), for set-up (1 MU) images. The other methods gave detection success rates of less than 91% (AP) and 42% (LAT) for set-up images. The amount of a priori information used, and how it affects the way the techniques are implemented, is discussed. Conclusions: Fully automated marker detection in set-up images for the first treatment fraction is unachievable using these methods; cross-correlation is the best technique for automatic detection on subsequent radiotherapy treatment fractions.
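The cross-correlation approach that performed best can be sketched as a brute-force normalized cross-correlation search. The synthetic portal image, noise level, and marker template below are illustrative assumptions, not the study's data:

```python
import numpy as np

def ncc_detect(image, template):
    """Slide the template over the image and return the (row, col) of the
    window with the highest normalized cross-correlation score."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, None
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = tnorm * np.sqrt((wz ** 2).sum())
            if denom == 0:
                continue                      # flat window, undefined NCC
            score = (t * wz).sum() / denom
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

# Synthetic portal image: noisy background plus one bright elongated marker.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.05, (40, 40))
template = np.zeros((4, 10))
template[1:3, 1:9] = 1.0                      # 8 x 2 pixel bright seed
img[24:28, 9:19] += template
pos = ncc_detect(img, template)
print(pos)
```

In practice the template would come from the first fraction's images, which is exactly the a priori requirement that keeps this method from being fully automated on fraction one.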

  2. Three-dimensional semi-automated segmentation of carotid atherosclerosis from three-dimensional ultrasound images

    Science.gov (United States)

    Ukwatta, E.; Awad, J.; Buchanan, D.; Parraga, G.; Fenster, A.

    2012-03-01

Three-dimensional ultrasound (3DUS) provides non-invasive and precise measurements of carotid atherosclerosis that directly reflect arterial wall abnormalities thought to be related to stroke risk. Here we describe a three-dimensional segmentation method, based on the sparse field level set method, to automate the segmentation of the media-adventitia boundary (MAB) and lumen-intima boundary (LIB) of the common carotid artery from 3DUS images. To initiate the process, an expert chooses four anchor points on each boundary on a subset of transverse slices that are orthogonal to the axis of the artery. An initial surface is generated using the anchor points as an initial guess for the segmentation. The MAB is segmented first using five energies: length minimization energy, local region-based energy, edge-based energy, anchor point-based energy, and local smoothness energy. Five energies are also used for the LIB segmentation: length minimization energy, local region-based energy, global region-based energy, anchor point-based energy, and boundary separation-based energy. The algorithm was evaluated against manual segmentations on a slice-by-slice basis using 15 3DUS images; an inter-slice distance of 2 mm was used for the initialization. For the MAB and LIB segmentations, our method yielded Dice coefficients of more than 92% and sub-millimeter mean and maximum absolute distance errors, as well as a vessel wall volume error of 7.1% +/- 3.4%. The realization of a semi-automated algorithm will aid the translation of 3DUS measurements to clinical research for the rapid, non-invasive, and economical monitoring of atherosclerotic disease.
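The Dice coefficient used for the evaluation above is a standard overlap measure between a computed and a manual mask; a minimal implementation on toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy algorithm vs manual masks: equal-sized squares shifted by one column.
algo = np.zeros((8, 8), dtype=bool)
algo[2:6, 2:6] = True           # 16 pixels
manual = np.zeros((8, 8), dtype=bool)
manual[2:6, 3:7] = True         # 16 pixels, overlap of 12
score = dice(algo, manual)
print(score)
```

A Dice score of 1.0 means perfect agreement; the >92% reported above indicates close correspondence with the manual boundaries.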

  3. New automated image analysis method for the assessment of Ki-67 labeling index in meningiomas.

    Directory of Open Access Journals (Sweden)

    Wielisław Papierz

    2010-05-01

Full Text Available Many studies have emphasised the importance of the Ki-67 labeling index (LI) as a proliferation marker in meningiomas. Several authors have confirmed that Ki-67 LI has prognostic significance and correlates with the likelihood of tumour recurrence. These observations are widely accepted by pathologists, but until now no standard method for Ki-67 LI assessment has been developed and introduced into diagnostic pathology. In this paper we present a new computerised system for automated Ki-67 LI estimation in meningiomas as an aid for histological grading and a potential standard method of Ki-67 LI assessment. We also discuss the concordance of Ki-67 LI results obtained by the presented computerised system and an expert pathologist, as well as possible pitfalls and mistakes in automated counting of immunopositive or immunonegative cells. For the quantitative evaluation of digital images of meningiomas, the designed software uses an algorithm based on a mathematical description of cell morphology. This solution acts together with a Support Vector Machine (SVM) used in classification mode for the recognition of the immunoreactivity of cells. The applied sequential thresholding simulated well the human process of cell recognition. The same digital images of randomly selected tumour areas were analysed in parallel by the computer and, blindly, by two expert pathologists. Ki-67 labeling indices were estimated and the results compared. The mean relative discrepancy between the Ki-67 LI obtained by our system and by the human experts did not exceed 14% in any of the investigated cases. These preliminary results suggest that the designed software could be a useful tool supporting diagnostic digital pathology. However, more extensive studies are needed to confirm this suggestion.
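Once immunopositive and immunonegative cells have been counted (by software or by eye), the labeling index itself is a simple percentage; the counts below are illustrative:

```python
def ki67_labeling_index(n_positive, n_negative):
    """Ki-67 labeling index: percentage of immunopositive nuclei among
    all counted tumour cells."""
    total = n_positive + n_negative
    if total == 0:
        raise ValueError("no cells counted")
    return 100.0 * n_positive / total

# e.g. 47 immunopositive nuclei among 1000 counted cells
li = ki67_labeling_index(47, 953)
print(li)
```

The reported "mean relative discrepancy" compares such an LI from the automated count against the LI from the pathologists' manual count.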

  4. Improving cervical region of interest by eliminating vaginal walls and cotton-swabs for automated image analysis

    Science.gov (United States)

    Venkataraman, Sankar; Li, Wenjing

    2008-03-01

Image analysis for automated diagnosis of cervical cancer has attained high prominence in the last decade. Automated image analysis at all levels requires a basic segmentation of the region of interest (ROI) within a given image. The precision of the diagnosis is often reflected by the precision in detecting the initial region of interest, especially when features outside the ROI mimic those within it. The work described here discusses algorithms used to improve the cervical region of interest as part of automated cervical image diagnosis. A vital visual aid in diagnosing cervical cancer is the aceto-whitening of the cervix after the application of acetic acid; color and texture are used to segment acetowhite regions within the cervical ROI. Vaginal walls, along with cotton swabs, sometimes mimic these essential features, leading to false positives. The work presented here focuses on detecting in-focus vaginal wall boundaries and extrapolating them to exclude vaginal walls from the cervical ROI. In addition, we discuss a marker-controlled watershed segmentation used to detect cotton swabs in the cervical ROI. A dataset comprising 50 high-resolution images of the cervix, acquired after 60 seconds of acetic acid application, was used to test the algorithm. Of the 50 images, 27 benefited from a new cervical ROI, and significant improvement in overall diagnosis was observed in these images, as false positives caused by features outside the actual ROI mimicking acetowhite regions were eliminated.
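The boundary-extrapolation idea (fit the detected in-focus wall boundary, extend it across the image, mask out the wall side) can be sketched as below. The line model, the "wall lies to the right" convention, and the toy mask are assumptions for illustration:

```python
import numpy as np

def exclude_beyond_boundary(mask, boundary_pts):
    """Fit a line to detected wall-boundary points (row, col), extrapolate
    it over all rows, and drop every pixel on the wall side (here, to the
    right of the line) from the ROI mask."""
    rows = np.array([p[0] for p in boundary_pts], dtype=float)
    cols = np.array([p[1] for p in boundary_pts], dtype=float)
    slope, intercept = np.polyfit(rows, cols, 1)   # col = slope*row + b
    out = mask.copy()
    for r in range(mask.shape[0]):
        cut = int(round(slope * r + intercept))
        out[r, max(cut, 0):] = False
    return out

roi = np.ones((6, 6), dtype=bool)
# A vertical wall boundary detected at column 4 on a few in-focus rows.
trimmed = exclude_beyond_boundary(roi, [(0, 4), (2, 4), (5, 4)])
print(int(trimmed.sum()))
```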

  5. Comparison of manual and semi-automated delineation of regions of interest for radioligand PET imaging analysis

    International Nuclear Information System (INIS)

As imaging centers produce higher-resolution research scans, the number of man-hours required to process regional data has become a major concern. A comparison of automated vs. manual methodology has not been reported for functional imaging. We explored the use of automation to delineate regions of interest on positron emission tomography (PET) scans. The purpose of this study was to ascertain improvements in image processing time and reproducibility of a semi-automated brain region extraction (SABRE) method over manual delineation of regions of interest (ROIs). We compared two sets of partial-volume-corrected serotonin 1a receptor binding potentials (BPs) resulting from manual vs. semi-automated methods. BPs were obtained from subjects meeting consensus criteria for frontotemporal degeneration and from age- and gender-matched healthy controls. Two trained raters provided each set of data to conduct comparisons of inter-rater mean image processing time, rank order of BPs for 9 PET scans, intra- and inter-rater intraclass correlation coefficients (ICC), repeatability coefficients (RC), percentages of the average parameter value (RM%), and effect sizes of either method. SABRE saved approximately 3 hours of processing time per PET subject over manual delineation (p < .001), and the quality of the SABRE BP results was preserved relative to the rank order of subjects by manual methods. Intra- and inter-rater ICC were high (>0.8) for both methods. RC and RM% were lower for the manual method across all ROIs, indicating less intra-rater variance across PET subjects' BPs. SABRE demonstrated significant time savings and no significant difference in reproducibility compared with manual methods, justifying its use in serotonin 1a receptor radioligand PET imaging analysis. This implies that semi-automated ROI delineation is a valid methodology for future PET imaging analysis.

  6. Automated Image Analysis for Determination of Antibody Titers Against Occupational Bacterial Antigens Using Indirect Immunofluorescence.

    Science.gov (United States)

    Brauner, Paul; Jäckel, Udo

    2016-06-01

    Employees who are exposed to high concentrations of microorganisms in bioaerosols frequently suffer from respiratory disorders. However, etiology and in particular potential roles of microorganisms in pathogenesis still need to be elucidated. Thus, determination of employees' antibody titers against specific occupational microbial antigens may lead to identification of potentially harmful species. Since indirect immunofluorescence (IIF) is easy to implement, we used this technique to analyze immunoreactions in human sera. In order to address disadvantageous inter-observer variations as well as the absence of quantifiable fluorescence data in conventional titer determination by eye, we specifically developed a software tool for automated image analysis. The 'Fluorolyzer' software is able to reliably quantify fluorescence intensities of antibody-bound bacterial cells on digital images. Subsequently, fluorescence values of single cells have been used to calculate non-discrete IgG titers. We tested this approach on multiple bacterial workplace isolates and determined titers in sera from 20 volunteers. Furthermore, we compared image-based results with the conventional manual readout and found significant correlation as well as statistically confirmed reproducibility. In conclusion, we successfully employed 'Fluorolyzer' for determination of titers against various bacterial species and demonstrated its applicability as a useful tool for reliable and efficient analysis of immune response toward occupational exposure to bioaerosols. PMID:27026659

  7. Combined multiphoton imaging and automated functional enucleation of porcine oocytes using femtosecond laser pulses

    Science.gov (United States)

    Kuetemeyer, Kai; Lucas-Hahn, Andrea; Petersen, Bjoern; Lemme, Erika; Hassel, Petra; Niemann, Heiner; Heisterkamp, Alexander

    2010-07-01

    Since the birth of ``Dolly'' as the first mammal cloned from a differentiated cell, somatic cell cloning has been successful in several mammalian species, albeit at low success rates. The highly invasive mechanical enucleation step of a cloning protocol requires sophisticated, expensive equipment and considerable micromanipulation skill. We present a novel noninvasive method for combined oocyte imaging and automated functional enucleation using femtosecond (fs) laser pulses. After three-dimensional imaging of Hoechst-labeled porcine oocytes by multiphoton microscopy, our self-developed software automatically identified the metaphase plate. Subsequent irradiation of the metaphase chromosomes with the very same laser at higher pulse energies in the low-density-plasma regime was used for metaphase plate ablation (functional enucleation). We show that fs laser-based functional enucleation of porcine oocytes completely inhibited the parthenogenetic development without affecting the oocyte morphology. In contrast, nonirradiated oocytes were able to develop parthenogenetically to the blastocyst stage without significant differences to controls. Our results indicate that fs laser systems have great potential for oocyte imaging and functional enucleation and may improve the efficiency of somatic cell cloning.

  8. CT and MRI derived source localization error in a custom prostate phantom using automated image coregistration

    International Nuclear Information System (INIS)

Dosimetric evaluation of completed brachytherapy implant procedures is crucial in developing proper technique, and accurate dosimetry may be useful in predicting the success of an implant. Accurate definition of the prostate gland and localization of the implanted radioactive sources are critical to attain meaningful dosimetric data. MRI is recognized as a superior imaging modality for delineating the prostate gland; more importantly, MRI can be used for source localization in postimplant prostates. However, the MRI-derived source localization error bears further investigation. We present a useful tool for determining the source localization error that also permits the fusion, or coregistration, of selected data from multiple imaging modalities. We constructed a custom prostate phantom of hydrocolloid material precisely implanted with I-125 seeds, and obtained CT (the accepted modality) and MRI scans of the phantom. Subsequently, we developed an automated algorithm that employs a sequential translation of data sets to initially maximize coregistration and minimize error between data sets, followed by a noniterative solution for the necessary rotation transformation matrix using the orthogonal Procrustes solution. We applied this algorithm to CT and MRI scans of the custom phantom. CT-derived source locations had source localization errors of 1.59 mm±0.64; MRI-derived source locations produced similar results (1.67 mm±0.76). These errors may be attributed to the image digitization process.
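The orthogonal Procrustes step (solving for the rotation after translations are removed) has a standard SVD-based closed form; a minimal sketch on synthetic seed coordinates, not the phantom's actual data:

```python
import numpy as np

def procrustes_rotation(A, B):
    """Orthogonal Procrustes solution: the rotation R minimizing ||R A - B||
    for centred 3xN point sets, via SVD of the cross-covariance matrix.
    The determinant correction keeps R a proper rotation (no reflection)."""
    U, _, Vt = np.linalg.svd(B @ A.T)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

rng = np.random.default_rng(0)
seeds = rng.normal(size=(3, 10))              # e.g. CT-derived seed positions
seeds -= seeds.mean(axis=1, keepdims=True)    # translation already removed
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
R_est = procrustes_rotation(seeds, R_true @ seeds)
print(np.allclose(R_est, R_true))
```

This noniterative solve is what makes the rotation step fast once the sequential translation has roughly aligned the two data sets.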

  9. New technologies for automated cell counting based on optical image analysis ;The Cellscreen'.

    Science.gov (United States)

    Brinkmann, Marlies; Lütkemeyer, Dirk; Gudermann, Frank; Lehmann, Jürgen

    2002-01-01

A prototype of a newly developed apparatus for measuring the growth characteristics of suspension cells in microtitre plates over time was examined. Fully automated, non-invasive cell counts in small-volume cultivation vessels, e.g., 96-well plates, were performed with the Cellscreen system by Innovatis AG, Germany. The system automatically generates microscopic images of suspension cells that have sedimented on the base of the well plate. The total cell number and cell geometry were analysed without staining or sampling using the Cedex image recognition technology; thus, time-course studies of cell growth with the same culture became possible. Basic parameters were determined, including the measurement range, the minimum number of images required for statistically reliable results, the influence of the measurement itself, and the effect of evaporation in 96-well plates on cell proliferation. A comparison with standard methods, including the influence of the cultured volume per well (25 µl to 200 µl) on cell growth, was performed. Furthermore, the toxic substances ammonia, lactate, and butyrate were used to show that the Cellscreen system is able to detect even the slightest changes in the specific growth rate. PMID:19003093
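The specific growth rate that such time-course counts feed into is the slope of ln(cell number) against time during exponential growth; the counts and sampling times below are illustrative:

```python
import numpy as np

def specific_growth_rate(counts, times_h):
    """Specific growth rate mu (1/h) from exponential-phase cell counts:
    least-squares slope of ln(N) against time."""
    return np.polyfit(times_h, np.log(counts), 1)[0]

# Culture doubling every 24 h, so mu = ln(2)/24 per hour.
counts = [1e5, 2e5, 4e5, 8e5]
mu = specific_growth_rate(counts, [0, 24, 48, 72])
print(round(mu, 4))
```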

  10. Software feature enhancements for automated scanning of multiple surface geometry objects using ultrasonic imaging system

    International Nuclear Information System (INIS)

    Electronics Division, BARC, in association with Metallic Fuels Division, has developed an Ultrasonic Imaging System suitable for automated inspection of metallic objects with multiple surface geometry. The electronics hardware and application software for this system were developed by Electronics Division, while the mechanical scanner was designed and developed by Metallic Fuels Division, BARC. The scanner has been successfully interfaced with the high-resolution ultrasonic imaging system (ULTIMA-200SP). A very significant feature of the ULTIMA-200SP system is its application software, which controls the scanner motors in addition to performing data acquisition, processing, analysis and information display. All these tasks must be carried out in a well-synchronized manner to generate high-resolution B-scan and C-scan images of test objects. In order to meet stringent user requirements, the ULTIMA software has been extensively upgraded with new advanced features, viz. fast (coarse) and slow (fine) scanning for speed optimization, scanning of cuboid and cylindrical objects in a user-defined region of interest, 3D views of the C-scan, and grey-level, dual or multiple colour plots in B-scan, C-scan and 3D views. This paper describes the advanced Windows-based application software package developed at ED, BARC and highlights its salient features along with a brief description of the system hardware and relevant information. (author)
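
    A C-scan of the kind the ULTIMA software renders is conventionally built by time-gating each A-scan and mapping the peak echo amplitude at every raster position; a schematic sketch with synthetic data (this illustrates the general technique, not the ULTIMA code itself):

```python
import numpy as np

def c_scan(ascans, gate_start, gate_end):
    """Build a C-scan image from a raster grid of A-scans: for each (x, y)
    scanner position, take the peak rectified echo amplitude inside a
    user-defined time gate."""
    gated = np.abs(ascans[:, :, gate_start:gate_end])
    return gated.max(axis=2)

# Hypothetical 20x20 raster of 256-sample A-scans with one buried reflector
rng = np.random.default_rng(5)
ascans = rng.normal(0.0, 0.02, size=(20, 20, 256))   # background noise
ascans[8:12, 8:12, 120] += 1.0                       # flaw echo inside the gate
image = c_scan(ascans, 100, 150)
print(image.shape, image[10, 10] > 0.5, image[0, 0] < 0.5)
```

    A B-scan is the analogous cross-section: one row of A-scans plotted as depth versus position.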

  11. Semi-automated porosity identification from thin section images using image analysis and intelligent discriminant classifiers

    Science.gov (United States)

    Ghiasi-Freez, Javad; Soleimanpour, Iman; Kadkhodaie-Ilkhchi, Ali; Ziaii, Mansur; Sedighi, Mahdi; Hatampour, Amir

    2012-08-01

    Identification of different types of porosity within a reservoir rock is a functional parameter for reservoir characterization, since various pore types play different roles in fluid transport and the pore spaces determine the fluid storage capacity of the reservoir. The present paper introduces a model for semi-automatic identification of porosity types within thin section images. To achieve this goal, a pattern recognition algorithm is followed. First, six geometrical shape parameters of the sixteen largest pores of each image are extracted using image analysis techniques. The extracted parameters and their corresponding pore types for 294 pores are used for training two intelligent discriminant classifiers, namely linear and quadratic discriminant analysis. The trained classifiers take the geometrical features of the pores to identify the type and percentage of five types of porosity, including interparticle, intraparticle, oomoldic, biomoldic, and vuggy, in each image. The accuracy of the classifiers is assessed from two standpoints. First, the predicted and measured percentages of each type of porosity are compared with each other; the results indicate reliable performance in predicting the percentage of each porosity type. Second, the precision of the classifiers in categorizing the pore spaces is analyzed; the classifiers also achieved high accuracy when used for individual recognition of pore spaces. The proposed methodology is a promising tool for petroleum geologists, allowing statistical study of pore types in a rapid and accurate way.
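
    The two classifiers named here are standard statistical methods; a minimal sketch using scikit-learn, with synthetic, hypothetical shape features standing in for the paper's six measured parameters and its 294-pore training set:

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

# Hypothetical training data: rows are pores, columns are six geometrical
# shape parameters (e.g. area, perimeter, roundness, elongation, ...)
rng = np.random.default_rng(42)
n_per_class, n_features = 50, 6
classes = ["interparticle", "intraparticle", "oomoldic", "biomoldic", "vuggy"]
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(n_per_class, n_features))
               for i in range(len(classes))])
y = np.repeat(classes, n_per_class)

lda = LinearDiscriminantAnalysis().fit(X, y)
qda = QuadraticDiscriminantAnalysis().fit(X, y)

# Percentage of each porosity type among the 16 largest pores of one "image"
pores = rng.normal(loc=2, scale=0.5, size=(16, n_features))  # oomoldic-like
labels, counts = np.unique(lda.predict(pores), return_counts=True)
for lab, pct in zip(labels, 100 * counts / len(pores)):
    print(lab, f"{pct:.1f}%")
```

    LDA assumes a shared covariance across classes (linear boundaries); QDA fits one covariance per class (quadratic boundaries), which is why the paper evaluates both.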

  12. Automated optical image correlation to constrain dynamics of slow-moving landslides

    Science.gov (United States)

    Mackey, B. H.; Roering, J. J.; Lamb, M. P.

    2011-12-01

    Large, slow-moving landslides can dominate sediment flux from mountainous terrain, yet their long-term spatio-temporal behavior at the landscape scale is not well understood. Movement can be inconspicuous, episodic, persist for decades, and is challenging and time-consuming to quantify using traditional methods such as stereo photogrammetry or field surveying. In the absence of large datasets documenting the movement of slow-moving landslides, we are challenged to isolate the key variables that control their movement and evolution. This knowledge gap hampers our understanding of landslide processes, landslide hazard, sediment budgets, and landscape evolution. Here we document the movement of numerous slow-moving landslides along the Eel River, northern California. These glacier-like landslides (earthflows) move seasonally (typically 1-2 m/yr) with minimal surface deformation, such that scattered shrubs can grow on the landslide surface for decades. Previous work focused on manually tracking the position of individual features (trees, rocks) in photos and LiDAR-derived digital topography to identify the extent of landslide activity. Here, we employ sub-pixel change detection software (COSI-Corr) to generate automated maps of landslide displacement by correlating successive orthorectified photos. By creating a detailed multi-temporal deformation field across the entire landslide surface, COSI-Corr is able to delineate zones of movement, quantify displacement, and identify domains of flow convergence and divergence. The vegetation and fine-scale landslide morphology provide excellent texture for automated comparison between successive orthorectified images, although decorrelation can occur where the translation between images exceeds the specified search window, or where intense ground deformation or vegetation change occurs. We automatically detected movement on dozens of active landslides (with landslide extent and displacement confirmed by
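
    Image correlators such as COSI-Corr measure displacement in the frequency domain; a minimal integer-pixel phase-correlation sketch on synthetic "ground texture" (COSI-Corr itself adds sub-pixel refinement, windowing, and tiling on top of this idea):

```python
import numpy as np

def phase_correlation(ref, moved):
    """Estimate the pixel shift between two image tiles by phase
    correlation: normalise the cross-power spectrum to phase only,
    then locate the resulting correlation peak."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross_power = F2 * np.conj(F1)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, discard amplitude
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the tile into negative displacements
    shape = np.array(corr.shape)
    shifts = np.array(peak, dtype=float)
    mask = shifts > shape // 2
    shifts[mask] -= shape[mask]
    return shifts

# Synthetic test: shift a random textured tile by (3, -5) pixels
rng = np.random.default_rng(1)
tile = rng.normal(size=(64, 64))
moved = np.roll(tile, shift=(3, -5), axis=(0, 1))
print(phase_correlation(tile, moved))   # → [ 3. -5.]
```

    Decorrelation of the kind described above corresponds to the true shift falling outside the tile (search window), so the correlation peak disappears.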

  13. Myocardial Perfusion: Near-automated Evaluation from Contrast-enhanced MR Images Obtained at Rest and during Vasodilator Stress

    OpenAIRE

    Tarroni, Giacomo; Corsi, Cristiana; Antkowiak, Patrick F; Veronesi, Federico; Kramer, Christopher M.; Epstein, Frederick H; Walter, James; Lamberti, Claudio; Lang, Roberto M.; Mor-Avi, Victor; Patel, Amit R

    2012-01-01

    This study demonstrated that, despite the extremely dynamic nature of contrast-enhanced cardiac MR image sequences and respiratory motion, near-automated frame-by-frame detection of myocardial segments and high-quality quantification of myocardial contrast are feasible both at rest and during vasodilator stress.

  14. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Loh, K.B.; Ramli, N.; Tan, L.K.; Roziah, M. [University of Malaya, Department of Biomedical Imaging, University Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Kuala Lumpur (Malaysia); Rahmat, K. [University of Malaya, Department of Biomedical Imaging, University Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Kuala Lumpur (Malaysia); University Malaya, Biomedical Imaging Department, Kuala Lumpur (Malaysia); Ariffin, H. [University of Malaya, Department of Paediatrics, Faculty of Medicine, Kuala Lumpur (Malaysia)

    2012-07-15

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using automated ROIs with an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old) using 1.5-T MRI. An automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly in the first 12 months. The second phase lasted from 12 to 24 months, during which the rate of FA and MD change was reduced. After 24 months, the FA and MD values plateaued. DTI is superior to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. (orig.)
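
    FA and MD are standard scalar summaries of the diffusion tensor; a sketch computing both from tensor eigenvalues, with hypothetical eigenvalues chosen to illustrate the maturation trend reported above (FA rising, MD falling):

```python
import numpy as np

def fa_md(eigvals):
    """Fractional anisotropy (FA) and mean diffusivity (MD) from the three
    eigenvalues of a diffusion tensor."""
    l1, l2, l3 = eigvals
    md = (l1 + l2 + l3) / 3.0
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))
    return fa, md

# Hypothetical tensors (units of 1e-3 mm^2/s): mature white matter is highly
# anisotropic; unmyelinated neonatal white matter is closer to isotropic
fa_adult, md_adult = fa_md((1.7, 0.3, 0.3))
fa_neonate, md_neonate = fa_md((1.3, 1.0, 1.0))
print(f"adult-like: FA={fa_adult:.2f} MD={md_adult:.2f}")       # FA=0.80 MD=0.77
print(f"neonate-like: FA={fa_neonate:.2f} MD={md_neonate:.2f}") # FA=0.16 MD=1.10
```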

  15. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline

    International Nuclear Information System (INIS)

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using automated ROIs with an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old) using 1.5-T MRI. An automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly in the first 12 months. The second phase lasted from 12 to 24 months, during which the rate of FA and MD change was reduced. After 24 months, the FA and MD values plateaued. DTI is superior to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. (orig.)

  16. Automated high-throughput assessment of prostate biopsy tissue using infrared spectroscopic chemical imaging

    Science.gov (United States)

    Bassan, Paul; Sachdeva, Ashwin; Shanks, Jonathan H.; Brown, Mick D.; Clarke, Noel W.; Gardner, Peter

    2014-03-01

    Fourier transform infrared (FT-IR) chemical imaging has been demonstrated as a promising technique to complement histopathological assessment of biomedical tissue samples. Current histopathology practice involves preparing thin tissue sections and staining them with hematoxylin and eosin (H&E), after which a histopathologist manually assesses the tissue architecture under a visible-light microscope. Studies have shown that there is disagreement between operators viewing the same tissue, suggesting that a complementary technique for verification could improve the robustness of the evaluation and improve patient care. FT-IR chemical imaging allows the spatial distribution of chemistry to be rapidly imaged at high (diffraction-limited) spatial resolution, where each pixel represents an area of 5.5 × 5.5 μm² and contains a full infrared spectrum providing a chemical fingerprint, which studies have shown contains the diagnostic potential to discriminate between different cell types, and even between the benign and malignant states of prostatic epithelial cells. We report a label-free (i.e. no chemical de-waxing or staining) method of imaging large pieces of prostate tissue (typically 1 cm × 2 cm) in tens of minutes (at a rate of 0.704 × 0.704 mm² every 14.5 s), yielding images containing millions of spectra. Due to refractive index matching between the sample and surrounding paraffin, minimal signal processing is required to recover spectra with their natural profile, as opposed to harsh baseline correction methods, paving the way for future quantitative analysis of biochemical signatures. The quality of the spectral information is demonstrated by building and testing an automated cell-type classifier based upon spectral features.

  17. Hyper-Cam automated calibration method for continuous hyperspectral imaging measurements

    Science.gov (United States)

    Gagnon, Jean-Philippe; Habte, Zewdu; George, Jacks; Farley, Vincent; Tremblay, Pierre; Chamberland, Martin; Romano, Joao; Rosario, Dalton

    2010-04-01

    The midwave and longwave infrared regions of the electromagnetic spectrum contain rich information which can be captured by hyperspectral sensors, enabling enhanced detection of targets of interest. A continuous hyperspectral imaging measurement capability operated 24/7 over varying seasons and weather conditions permits the evaluation of hyperspectral imaging for detection of different types of targets in real-world environments. Such a measurement site was built at Picatinny Arsenal under the Spectral and Polarimetric Imagery Collection Experiment (SPICE), where two Hyper-Cam hyperspectral imagers are installed at the Precision Armament Laboratory (PAL) and have been operated autonomously since fall 2009. The Hyper-Cam sensors are currently collecting a complete hyperspectral database that contains MWIR and LWIR hyperspectral measurements of several targets under day, night, sunny, cloudy, foggy, rainy and snowy conditions. The Telops Hyper-Cam sensor is an imaging spectrometer that provides spatial and spectral analysis capabilities in a single sensor. It is based on Fourier-transform technology, yielding high spectral resolution and enabling high-accuracy radiometric calibration. It provides datacubes of up to 320x256 pixels at spectral resolutions of up to 0.25 cm⁻¹. The MWIR version covers the 3 to 5 μm spectral range and the LWIR version covers the 8 to 12 μm spectral range. This paper describes the automated operation of the two Hyper-Cam sensors being used in the SPICE data collection. The Reveal Automation Control Software (RACS), developed collaboratively between Telops, ARDEC, and ARL, enables flexible operating parameters and autonomous calibration. Under the RACS software, the Hyper-Cam sensors can autonomously calibrate themselves using their internal blackbody targets, with calibration events initiated at user-defined time intervals or triggered by internal beamsplitter temperature monitoring. The RACS software is the first software developed for
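
    Calibration against two internal blackbodies at known temperatures is the standard two-point radiometric scheme; a simplified single-wavelength sketch (the gain/offset values are hypothetical illustrations, not Hyper-Cam specifications):

```python
import numpy as np

# Physical constants for the Planck spectral radiance law
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance (W / m^2 / sr / m) at one wavelength."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / (np.exp(b) - 1.0)

def two_point_calibration(raw_cold, raw_hot, t_cold, t_hot, wavelength_m):
    """Per-pixel gain and offset from views of two internal blackbodies at
    known temperatures, assuming a linear detector response S = G*L + O."""
    L_cold = planck_radiance(wavelength_m, t_cold)
    L_hot = planck_radiance(wavelength_m, t_hot)
    gain = (raw_hot - raw_cold) / (L_hot - L_cold)
    offset = raw_cold - gain * L_cold
    return gain, offset

# Hypothetical detector: simulate raw counts with a known gain/offset at 10 um
wl = 10e-6
true_gain, true_offset = 4.2e3, 150.0
raw20 = true_gain * planck_radiance(wl, 293.15) + true_offset
raw60 = true_gain * planck_radiance(wl, 333.15) + true_offset
gain, offset = two_point_calibration(raw20, raw60, 293.15, 333.15, wl)
print(np.isclose(gain, true_gain), np.isclose(offset, true_offset, atol=1e-2))
```

    With gain and offset known per pixel, any subsequent raw frame can be inverted to calibrated radiance.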

  18. Automated melanoma detection with a novel multispectral imaging system: results of a prospective study

    International Nuclear Information System (INIS)

    The aim of this research was to evaluate the performance of a new spectroscopic system in the diagnosis of melanoma. This study involves a consecutive series of 1278 patients with 1391 cutaneous pigmented lesions, including 184 melanomas. In an attempt to approach the 'real world' lesion population, a further set of 1022 clinically reassuring lesions that were not excised was also considered for analysis. Each lesion was imaged in vivo by a multispectral imaging system. The system operates at wavelengths between 483 and 950 nm, acquiring 15 images at equally spaced wavelength intervals. From the images, different lesion descriptors were extracted relating to the colour distribution and morphology of the lesions. Data reduction techniques were applied before setting up a neural network classifier designed to perform automated diagnosis. The data set was randomly divided into three sets: train (696 lesions, including 90 melanomas) and verify (348 lesions, including 53 melanomas) for the instruction of a proper neural network, and an independent test set (347 lesions, including 41 melanomas). The neural network was able to discriminate between melanomas and non-melanoma lesions with a sensitivity of 80.4% and a specificity of 75.6% in the data set of 1391 cases with histology. No major variations were found in classification scores when the train, verify and test subsets were evaluated separately. Following receiver operating characteristic (ROC) analysis, the resulting area under the curve was 0.85. No significant differences were found among the areas under the train, verify and test set curves, supporting the network's good ability to generalize to new cases. In addition, specificity and area under the ROC curve increased up to 90% and 0.90, respectively, when the additional set of 1022 lesions without histology was added to the test set. Our data show that performance of an automated system is greatly population dependent, suggesting caution in the comparison with results reported in the
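
    The reported figures (sensitivity, specificity, area under the ROC curve) can all be computed directly from classifier scores; a sketch with synthetic scores standing in for the network outputs (the score distributions are hypothetical):

```python
import numpy as np

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of a score-based classifier at one
    operating point (labels: 1 = melanoma, 0 = benign)."""
    pred = scores >= threshold
    sens = np.mean(pred[labels == 1])
    spec = np.mean(~pred[labels == 0])
    return sens, spec

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive outscores a random negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Hypothetical network outputs for 41 melanomas and 306 benign lesions
rng = np.random.default_rng(7)
labels = np.concatenate([np.ones(41, dtype=int), np.zeros(306, dtype=int)])
scores = np.concatenate([rng.normal(0.75, 0.15, 41),
                         rng.normal(0.35, 0.15, 306)])
sens, spec = sens_spec(scores, labels, 0.5)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"AUC={roc_auc(scores, labels):.2f}")
```

    Sweeping the threshold trades sensitivity against specificity; the AUC summarizes the whole trade-off in one number, which is why the study reports it alongside a single operating point.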

  19. An automated image cytometry system for monitoring DNA ploidy and other cell features of radiotherapy and chemotherapy patients

    International Nuclear Information System (INIS)

    DNA content and distribution in cell nuclei were studied in samples of fine-needle aspiration (FNA) from 27 locally advanced breast and head and neck cancers in two ongoing randomized trials comparing accelerated fractionation to standard fractionation radiation in locally advanced breast cancer and head and neck cancer. Two image cytometry methods were compared: a new, fully automated DNA image cytometry system (AIC) and a conventional image cytometry (CIC) system with manual selection, focusing, and segmentation of cells. The results of both techniques were compared on the basis of DNA histogram parameters, including DNA index (DI), mean DNA values (MDV), and Auer's DNA histogram patterns. An excellent correlation was achieved between the two imaging techniques in terms of DI (r=0.985, p<0.001) and MDV (r=0.951, p<0.001), and the two methods agreed completely on Auer's histogram patterns. It was concluded from these analyses that the two image cytometry methods are equivalent. However, the AIC offered an advantage by scanning samples in a fully automated way, which represented a significant time saving for cytopathologists working with the system, as well as a larger number of cells used in the automated analysis. With the automated image cytometer, 500 relevant cells were collected and analyzed in about 10 minutes, whereas with the interactive (manual) method it typically took an hour to collect and analyze only about 250 cells. Seventeen samples were sufficient for flow analysis. Image cytometry and flow cytometry showed good agreement in DI determination; however, three cases reported as diploid by flow cytometry were found to be aneuploid by image cytometry techniques. (author)
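
    The DNA index (DI) is conventionally obtained from the modal G0/G1 peak of a DNA histogram relative to a diploid reference population; a sketch with synthetic integrated-optical-density (IOD) values (all numbers are hypothetical):

```python
import numpy as np

def g1_peak(iod_values, bins=128):
    """Modal integrated optical density (IOD) of a cell population,
    taken as the G0/G1 peak of its DNA histogram."""
    hist, edges = np.histogram(iod_values, bins=bins)
    i = np.argmax(hist)
    return 0.5 * (edges[i] + edges[i + 1])

# Hypothetical IOD measurements: an aneuploid tumour vs diploid reference cells
rng = np.random.default_rng(3)
reference = rng.normal(100.0, 5.0, 500)   # diploid G0/G1 peak near 100
tumour = rng.normal(160.0, 8.0, 500)      # shifted G0/G1 peak
di = g1_peak(tumour) / g1_peak(reference)
print(f"DNA index = {di:.2f}")            # ~1.6, i.e. aneuploid (DI != 1.0)
```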

  20. Boosting accuracy of automated classification of fluorescence microscope images for location proteomics

    Directory of Open Access Journals (Sweden)

    Huang Kai

    2004-06-01

    accuracy for single 2D images being higher than 90% for the first time. In particular, the classification accuracy for the easily confused endomembrane compartments (endoplasmic reticulum, Golgi, endosomes, lysosomes) was improved by 5–15%. We achieved further improvements when classification was conducted on image sets rather than on individual cell images. Conclusions The availability of accurate, fast, automated classification systems for protein location patterns, in conjunction with high-throughput fluorescence microscope imaging techniques, enables a new subfield of proteomics: location proteomics. The accuracy and sensitivity of this approach represent an important alternative to low-resolution assignments by curation or sequence-based prediction.
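
    Classifying image sets rather than individual images is commonly done by voting over per-image predictions; a minimal plurality-vote sketch (the class names and predictions are illustrative, not the paper's data):

```python
from collections import Counter

def classify_set(single_image_predictions):
    """Plurality vote: assign one location class to a whole image set from
    the per-image classifier outputs. Aggregating over a set averages out
    single-image errors, which is why set-level accuracy is higher."""
    return Counter(single_image_predictions).most_common(1)[0][0]

# Hypothetical per-cell predictions for one image set of the same protein
preds = ["golgi", "golgi", "endosome", "golgi", "lysosome"]
print(classify_set(preds))   # → golgi
```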